mirror of https://github.com/Infisical/infisical.git synced 2025-03-28 15:29:21 +00:00

Compare commits


132 Commits

Author SHA1 Message Date
53605c3880 improvement: address feedback 2025-03-03 15:11:48 -08:00
4382825162 fix: address ui preventing from selecting non-default workspace 2025-03-03 10:16:15 -08:00
7ff75cdfab Merge pull request from thomas-infisical/remove-service-token-deprecation
docs: remove service token deprecation warning
2025-02-28 13:43:57 +09:00
d5aa13b277 Merge pull request from Infisical/increase-secret-reminder-note-max-length
Improvement: Increase Secret v2 Reminder Note Max Length
2025-02-28 13:12:55 +09:00
f1facf1f2c improvement: increase secret v2 reminder note max length 2025-02-28 12:26:30 +09:00
30f0f174d1 Merge pull request from akhilmhdh/feat/connector
feat: removed pool config from knex and better closing in cli
2025-02-27 10:28:26 +09:00
3e7110f334 feat: removed pool config from knex and better closing in cli 2025-02-27 01:32:03 +05:30
e6af7a6fb9 Merge pull request from Infisical/doc/add-ab-initio-docs
doc: add ab-initio docs
2025-02-27 02:59:50 +09:00
de420fd02c update verifyHostInputValidity 2025-02-27 02:56:28 +09:00
41a3ca149d remove on prem fet for gateway 2025-02-27 01:50:42 +09:00
da38d1a261 Merge pull request from akhilmhdh/feat/connector
Feat/connector
2025-02-27 01:20:09 +09:00
b0d8c8fb23 fix: resolved integration test failing 2025-02-26 21:33:49 +05:30
d84bac5fba feat: changes based on reviews 2025-02-26 20:45:40 +05:30
44f74e4d12 feat: small change 2025-02-26 20:45:40 +05:30
c16a4e00d8 feat: fixed feedback 2025-02-26 20:45:40 +05:30
11f2719842 feat: updated 2025-02-26 20:45:39 +05:30
f8153dd896 small typos 2025-02-26 20:45:39 +05:30
b104f8c07d update example to use universal auth 2025-02-26 20:45:39 +05:30
746687e5b5 update heading 2025-02-26 20:45:39 +05:30
080b1e1550 fix wording 2025-02-26 20:45:39 +05:30
38a6fd140c redo gateway docs 2025-02-26 20:45:39 +05:30
19d66abc38 feat: changed order 2025-02-26 20:45:39 +05:30
e61c0be6db fix: resolved failing test 2025-02-26 20:45:39 +05:30
917573931f feat: switched to projecet gateway, updated icon, cli 2025-02-26 20:45:38 +05:30
929a41065c feat: first doc for gateway 2025-02-26 20:45:38 +05:30
9b44972e77 feat: added better message on heartbeat 2025-02-26 20:45:38 +05:30
17e576511b feat: updated backend 2025-02-26 20:45:38 +05:30
afd444cad6 feat: permission changes 2025-02-26 20:45:38 +05:30
55b1fbdf52 feat: added validation for relay ip 2025-02-26 20:45:38 +05:30
46ca5c8efa feat: resolved type error from rebase 2025-02-26 20:45:38 +05:30
f7406ea8f8 feat: review feedback, and more changes in cli 2025-02-26 20:45:37 +05:30
f34370cb9d feat: completed frontend 2025-02-26 20:45:37 +05:30
78718cd299 feat: completed cli for gateway 2025-02-26 20:45:37 +05:30
1307fa49d4 feat: completed backend for gateway 2025-02-26 20:45:37 +05:30
a7ca242f5d feat: frontend changes for connector management 2025-02-26 20:45:37 +05:30
c6b3b24312 feat: gateway in cli first version\ 2025-02-26 20:45:37 +05:30
8520029958 feat: updated backend to new data structure for org 2025-02-26 20:45:37 +05:30
7905017121 feat: added gateway page in ui for org 2025-02-26 20:45:36 +05:30
4bbe80c083 feat: resolved permission in backend 2025-02-26 20:45:36 +05:30
d65ae2c61b feat: added instance gateway setup ui 2025-02-26 20:45:36 +05:30
84c534ef70 feat: added api for admin and org gateway management 2025-02-26 20:45:36 +05:30
617aa2f533 Merge pull request from akhilmhdh/fix/vite-error
Added vite error handler to resolve post build error
2025-02-26 18:21:21 +09:00
e9dd3340bf feat: added vite error handler to resolve post build error 2025-02-26 14:33:31 +05:30
1c2b4e91ba docs: remove service token deprecation warning 2025-02-26 13:38:36 +09:00
fb030401ab doc: add ab-initio docs 2025-02-26 13:37:19 +09:00
f4bd48fd1d Merge pull request from Infisical/sidebar-update
improve sidebars
2025-02-25 13:20:53 +04:00
177ccf6c9e Update SecretDetailSidebar.tsx 2025-02-25 18:15:27 +09:00
9200137d6c Merge pull request from Infisical/revert-3130-snyk-fix-9bc3e8652a6384afdd415f17c0d6ac68
Revert "[Snyk] Fix for 4 vulnerabilities"
2025-02-25 18:12:32 +09:00
a196028064 Revert "[Snyk] Fix for 4 vulnerabilities" 2025-02-25 18:12:12 +09:00
0c0e20f00e Merge pull request from Infisical/revert-3129-snyk-fix-e021ef688dc4b4af03b9ad04389eee3f
Revert "[Snyk] Security upgrade @octokit/rest from 21.0.2 to 21.1.1"
2025-02-25 18:11:56 +09:00
710429c805 Revert "[Snyk] Security upgrade @octokit/rest from 21.0.2 to 21.1.1" 2025-02-25 18:10:30 +09:00
c121bd930b fix nav 2025-02-25 18:03:13 +09:00
87d383a9c4 Update SecretDetailSidebar.tsx 2025-02-25 17:44:55 +09:00
6e590a78a0 fix lint issues 2025-02-25 17:30:15 +09:00
ab4b6c17b3 fix lint issues 2025-02-25 17:23:05 +09:00
27cd40c8ce fix lint issues 2025-02-25 17:20:52 +09:00
5f089e0b9d improve sidebars 2025-02-25 17:07:53 +09:00
19940522aa Merge pull request from Infisical/daniel/go-sdk-batch-create-docs
docs: go sdk bulk create secrets
2025-02-23 15:14:36 +09:00
28b18c1cb1 Merge pull request from Infisical/snyk-fix-e021ef688dc4b4af03b9ad04389eee3f
[Snyk] Security upgrade @octokit/rest from 21.0.2 to 21.1.1
2025-02-23 15:13:25 +09:00
7ae2cc2db8 Merge pull request from Infisical/snyk-fix-9bc3e8652a6384afdd415f17c0d6ac68
[Snyk] Fix for 4 vulnerabilities
2025-02-23 15:12:59 +09:00
97c069bc0f fix typo of bulk to batch 2025-02-23 15:09:40 +09:00
4a51b4d619 Merge pull request from akhilmhdh/fix/shared-link-min-check
feat: added min check for secret sharing
2025-02-23 15:07:40 +09:00
478e0c5ff5 Merge pull request from Infisical/aws-secrets-manager-additional-features
Feature: AWS Syncs - Additional Features
2025-02-21 08:23:06 -08:00
5c08136fca improvement: address feedback 2025-02-21 07:51:15 -08:00
cb8528adc4 merge main 2025-02-21 07:39:24 -08:00
d7935d30ce feat: made the function shared one 2025-02-21 14:47:04 +05:30
ac3bab3074 feat: added min check for secret sharing 2025-02-21 14:38:34 +05:30
4a44b7857e docs: go sdk bulk create secrets 2025-02-21 04:50:20 +04:00
63b8301065 Merge pull request from Infisical/flyio-integration-propagate-errors
Fix: Propagate Set Secrets Errors for Flyio Integration
2025-02-20 16:26:35 -08:00
babe70e00f fix: propagate set secrets error for flyio integration 2025-02-20 16:06:58 -08:00
2ba834b036 Merge pull request from Infisical/aws-secrets-manager-many-to-one-update
Fix: AWS Secrets Manager Remove Deletion of other Secrets from Many-to-One Mapping
2025-02-20 12:29:01 -08:00
db7a6f3530 fix: remove deletion of other secrets from many-to-one mapping 2025-02-20 12:21:52 -08:00
f23ea0991c improvement: address feedback 2025-02-20 11:48:47 -08:00
d80a104f7b Merge pull request from Infisical/feat/kmip-client-management
feat: kmip
2025-02-20 17:53:15 +08:00
f8ab2bcdfd feature: kms key, tags, and sync secret metadata support for aws secrets manager 2025-02-19 20:38:18 -08:00
d980d471e8 Merge pull request from Infisical/doc/add-caching-reference-to-go-sdk
doc: add caching reference for go sdk
2025-02-19 22:00:27 -05:00
9cdb4dcde9 improvement: address feedback 2025-02-19 16:52:48 -08:00
3583a09ab5 Merge pull request from Infisical/fix-org-select-none
Fix failing redirect to create new organization page on no organizations
2025-02-19 13:53:33 -08:00
86b6d23f34 Fix lint issue 2025-02-19 10:27:55 -08:00
2c31ac0569 misc: finalized KMIP icon 2025-02-20 02:11:04 +08:00
d6c1b8e30c misc: addressed comments 2025-02-20 01:46:50 +08:00
0d4d73b61d misc: update default usage to be numerical 2025-02-19 20:16:45 +08:00
198b607e2e doc: add caching reference for go sdk 2025-02-19 20:11:14 +08:00
f0e6bcef9b fix: addressed rabbit findings 2025-02-19 17:16:02 +08:00
69fb87bbfc reduce max height for resource tags 2025-02-18 20:32:57 -08:00
b0cd5bd10d feature: add support for kms key, tags, and syncing secret metadata to aws parameter store sync 2025-02-18 20:28:27 -08:00
15119ffda9 fix: backend/package.json & backend/package-lock.json to reduce vulnerabilities
The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-JS-OCTOKITENDPOINT-8730856
- https://snyk.io/vuln/SNYK-JS-OCTOKITPLUGINPAGINATEREST-8730855
- https://snyk.io/vuln/SNYK-JS-OCTOKITREQUEST-8730853
- https://snyk.io/vuln/SNYK-JS-OCTOKITREQUESTERROR-8730854
2025-02-19 04:10:31 +00:00
4df409e627 fix: frontend/package.json & frontend/package-lock.json to reduce vulnerabilities
The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-JS-OCTOKITENDPOINT-8730856
- https://snyk.io/vuln/SNYK-JS-OCTOKITPLUGINPAGINATEREST-8730855
- https://snyk.io/vuln/SNYK-JS-OCTOKITREQUEST-8730853
- https://snyk.io/vuln/SNYK-JS-OCTOKITREQUESTERROR-8730854
2025-02-19 03:23:48 +00:00
3a5a88467d Merge remote-tracking branch 'origin' into fix-org-select-none 2025-02-18 18:55:08 -08:00
012f265363 Fix lint issues 2025-02-18 18:42:39 -08:00
823bf134dd Fix case where user can get stuck in a deleted organization if they were prev logged into it 2025-02-18 18:35:22 -08:00
1f5a73047d Merge pull request from Infisical/doc/added-docs-for-egress-ips
doc: added section for egress ips
2025-02-18 19:04:03 -05:00
0366df6e19 Update terraform-cloud.mdx 2025-02-18 17:48:56 -05:00
c77e0c0666 Merge pull request from Infisical/oidc-guide-tf-cloud
OIDC Guide for Terraform cloud <> Infisical
2025-02-18 17:27:15 -05:00
8e70731c4c add terraform cloud oidc docs 2025-02-18 17:23:03 -05:00
1db8c9ea29 doc: finish up KMIP 2025-02-19 02:59:04 +08:00
21c6700db2 Merge pull request from thomas-infisical/terraform-provider
docs: add ephemeral resources documentation to Terraform guide
2025-02-18 09:55:42 -08:00
619062033b docs: enhance Terraform integration guide with ephemeral resources and best practices 2025-02-18 17:40:24 +01:00
92b3b9157a feat: added kmip server to CLI 2025-02-18 20:59:14 +08:00
f6cd78e078 misc: added exception for kmip 2025-02-18 17:41:34 +08:00
36973b1b5c Merge pull request from akhilmhdh/feat/batch-upsert
Batch upsert operation
2025-02-18 09:32:01 +05:30
1ca578ee03 feat: updated based on review feedback 2025-02-18 00:45:53 +05:30
8a7f7ac9fd feat: added test cases for bulk update 2025-02-18 00:23:10 +05:30
049fd8e769 feat: added upsert and ignore to bulk update 2025-02-18 00:23:10 +05:30
8e2cce865a Merge remote-tracking branch 'origin/main' into feat/kmip-client-management 2025-02-18 02:50:12 +08:00
943c2b0e69 misc: finalize KMIP management 2025-02-18 02:22:59 +08:00
8518fed726 Merge remote-tracking branch 'origin' into fix-org-select-none 2025-02-17 09:20:29 -08:00
5148435f5f Move user to create org screen on no org 2025-02-17 09:20:19 -08:00
2c825616a6 Merge pull request from Infisical/update-hardware
Update hardware for infisical
2025-02-17 09:55:35 -05:00
febbd4ade5 update hardware for infisical 2025-02-17 09:51:30 -05:00
603b740bbe misc: migrated KMIP PKI to be scoped at the org level 2025-02-17 15:54:02 +08:00
874dc01692 Update upgrading-infisical.mdx 2025-02-14 23:43:15 -05:00
b44b8bf647 Merge pull request from Infisical/upgrade-infisical-dcos
Docs for new upgrade process
2025-02-14 20:41:41 -05:00
258e561b84 add upgrade docs 2025-02-14 20:41:05 -05:00
5802638fc4 Merge pull request from Infisical/is-pending-fixes
Improvement: Use isPending Over isLoading
2025-02-14 09:10:05 -08:00
e2e7004583 improvement: additional isPending fix 2025-02-14 08:59:12 -08:00
7826324435 fix: use isPending over isLoading 2025-02-14 08:54:37 -08:00
af652f7e52 feat: kmip poc done 2025-02-14 23:55:57 +08:00
2bfc1caec5 Merge pull request from Infisical/fix-org-options-alignment
Improvement: Align Organization Options in Sidebar
2025-02-13 16:30:02 -08:00
b8a07979c3 misc: audit logs 2025-02-12 01:34:29 +08:00
292c9051bd feat: kmip create and get 2025-02-11 23:32:11 +08:00
77fac45df1 misc: reordered migration 2025-02-07 01:39:48 +08:00
0ab90383c2 Merge remote-tracking branch 'origin/main' into feat/kmip-client-management 2025-02-07 01:26:04 +08:00
a3acfa65a2 feat: finished up client cert generation 2025-02-07 01:24:32 +08:00
0269f57768 feat: completed kmip server cert config 2025-02-06 22:36:31 +08:00
9f9ded5102 misc: initial instance KMIP PKI setup 2025-02-06 02:57:40 +08:00
8b315c946c feat: added kmip to project roles section 2025-02-04 19:29:34 +08:00
dd9a7755bc feat: completed KMIP client overview 2025-02-04 18:49:48 +08:00
64c2fba350 feat: added list support and overview page 2025-02-04 02:44:59 +08:00
c7f80f7d9e feat: kmip client backend setup 2025-02-04 01:34:30 +08:00
ecf2cb6e51 misc: made improvements to wording 2025-01-30 13:09:29 +08:00
1e5a9a6020 doc: added section for egress ips 2025-01-30 03:36:46 +08:00
231 changed files with 12641 additions and 1643 deletions
backend
e2e-test/routes/v3
src
@types
db
ee
keystore
lib
server/routes
services
cli
docker-compose.prod.yml
docs
frontend
public/lotties
src
components
context
OrgPermissionContext
ProjectPermissionContext
index.tsx
helpers
hooks/api
layouts
OrganizationLayout
OrganizationLayout.tsx
components
MenuIconButton
MinimizedOrgSidebar
ProjectLayout
main.tsx
pages
admin/OverviewPage
auth
LoginPage
SignUpInvitePage
kms/KmipPage
middlewares
organization
project/RoleDetailsBySlugPage/components
public/ShareSecretPage/components
secret-manager
IntegrationsListPage/components/SecretSyncsTab
SecretDashboardPage/components
ActionBar/CreateDynamicSecretForm
DynamicSecretListView/EditDynamicSecretForm
PitDrawer
SecretListView
SecretSyncDetailsByIDPage
integrations/BitbucketConfigurePage
routeTree.gen.ts
routes.ts
sink

@@ -535,6 +535,107 @@ describe.each([{ auth: AuthMode.JWT }, { auth: AuthMode.IDENTITY_ACCESS_TOKEN }]
);
});
test.each(secretTestCases)("Bulk upsert secrets in path $path", async ({ secret, path }) => {
const updateSharedSecRes = await testServer.inject({
method: "PATCH",
url: `/api/v3/secrets/batch/raw`,
headers: {
authorization: `Bearer ${authToken}`
},
body: {
workspaceId: seedData1.projectV3.id,
environment: seedData1.environment.slug,
secretPath: path,
mode: "upsert",
secrets: Array.from(Array(5)).map((_e, i) => ({
secretKey: `BULK-${secret.key}-${i + 1}`,
secretValue: "update-value",
secretComment: secret.comment
}))
}
});
expect(updateSharedSecRes.statusCode).toBe(200);
const updateSharedSecPayload = JSON.parse(updateSharedSecRes.payload);
expect(updateSharedSecPayload).toHaveProperty("secrets");
// bulk ones should exist
const secrets = await getSecrets(seedData1.environment.slug, path);
expect(secrets).toEqual(
expect.arrayContaining(
Array.from(Array(5)).map((_e, i) =>
expect.objectContaining({
secretKey: `BULK-${secret.key}-${i + 1}`,
secretValue: "update-value",
type: SecretType.Shared
})
)
)
);
await Promise.all(
Array.from(Array(5)).map((_e, i) => deleteSecret({ path, key: `BULK-${secret.key}-${i + 1}` }))
);
});
test("Bulk upsert secrets in path multiple paths", async () => {
const firstBatchSecrets = Array.from(Array(5)).map((_e, i) => ({
secretKey: `BULK-KEY-${secretTestCases[0].secret.key}-${i + 1}`,
secretValue: "update-value",
secretComment: "comment",
secretPath: secretTestCases[0].path
}));
const secondBatchSecrets = Array.from(Array(5)).map((_e, i) => ({
secretKey: `BULK-KEY-${secretTestCases[1].secret.key}-${i + 1}`,
secretValue: "update-value",
secretComment: "comment",
secretPath: secretTestCases[1].path
}));
const testSecrets = [...firstBatchSecrets, ...secondBatchSecrets];
const updateSharedSecRes = await testServer.inject({
method: "PATCH",
url: `/api/v3/secrets/batch/raw`,
headers: {
authorization: `Bearer ${authToken}`
},
body: {
workspaceId: seedData1.projectV3.id,
environment: seedData1.environment.slug,
mode: "upsert",
secrets: testSecrets
}
});
expect(updateSharedSecRes.statusCode).toBe(200);
const updateSharedSecPayload = JSON.parse(updateSharedSecRes.payload);
expect(updateSharedSecPayload).toHaveProperty("secrets");
// bulk ones should exist
const firstBatchSecretsOnInfisical = await getSecrets(seedData1.environment.slug, secretTestCases[0].path);
expect(firstBatchSecretsOnInfisical).toEqual(
expect.arrayContaining(
firstBatchSecrets.map((el) =>
expect.objectContaining({
secretKey: el.secretKey,
secretValue: "update-value",
type: SecretType.Shared
})
)
)
);
const secondBatchSecretsOnInfisical = await getSecrets(seedData1.environment.slug, secretTestCases[1].path);
expect(secondBatchSecretsOnInfisical).toEqual(
expect.arrayContaining(
secondBatchSecrets.map((el) =>
expect.objectContaining({
secretKey: el.secretKey,
secretValue: "update-value",
type: SecretType.Shared
})
)
)
);
await Promise.all(testSecrets.map((el) => deleteSecret({ path: el.secretPath, key: el.secretKey })));
});
test.each(secretTestCases)("Bulk delete secrets in path $path", async ({ secret, path }) => {
await Promise.all(
Array.from(Array(5)).map((_e, i) => createSecret({ ...secret, key: `BULK-${secret.key}-${i + 1}`, path }))

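The hunk above adds e2e coverage for the new PATCH /api/v3/secrets/batch/raw route in upsert mode. For orientation, a hypothetical client-side sketch of the same call follows; the base URL, token, and IDs are placeholders, and only the request/response shape is taken from the tests above.

// Hypothetical sketch of the batch upsert call exercised by the tests above.
// Only the payload shape comes from this diff; everything else is assumed.
type BatchSecretInput = {
  secretKey: string;
  secretValue: string;
  secretComment?: string;
  secretPath?: string; // per-secret path, used when no top-level secretPath is given
};

export async function bulkUpsertSecrets(
  baseUrl: string,
  token: string,
  body: {
    workspaceId: string;
    environment: string;
    secretPath?: string;
    mode: "upsert";
    secrets: BatchSecretInput[];
  }
): Promise<{ secrets: unknown[] }> {
  const res = await fetch(`${baseUrl}/api/v3/secrets/batch/raw`, {
    method: "PATCH",
    headers: {
      authorization: `Bearer ${token}`,
      "content-type": "application/json"
    },
    body: JSON.stringify(body)
  });
  if (!res.ok) throw new Error(`Batch upsert failed with status ${res.status}`);
  return (await res.json()) as { secrets: unknown[] };
}
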
@@ -13,9 +13,13 @@ import { TCertificateEstServiceFactory } from "@app/ee/services/certificate-est/
import { TDynamicSecretServiceFactory } from "@app/ee/services/dynamic-secret/dynamic-secret-service";
import { TDynamicSecretLeaseServiceFactory } from "@app/ee/services/dynamic-secret-lease/dynamic-secret-lease-service";
import { TExternalKmsServiceFactory } from "@app/ee/services/external-kms/external-kms-service";
import { TGatewayServiceFactory } from "@app/ee/services/gateway/gateway-service";
import { TGroupServiceFactory } from "@app/ee/services/group/group-service";
import { TIdentityProjectAdditionalPrivilegeServiceFactory } from "@app/ee/services/identity-project-additional-privilege/identity-project-additional-privilege-service";
import { TIdentityProjectAdditionalPrivilegeV2ServiceFactory } from "@app/ee/services/identity-project-additional-privilege-v2/identity-project-additional-privilege-v2-service";
import { TKmipClientDALFactory } from "@app/ee/services/kmip/kmip-client-dal";
import { TKmipOperationServiceFactory } from "@app/ee/services/kmip/kmip-operation-service";
import { TKmipServiceFactory } from "@app/ee/services/kmip/kmip-service";
import { TLdapConfigServiceFactory } from "@app/ee/services/ldap-config/ldap-config-service";
import { TLicenseServiceFactory } from "@app/ee/services/license/license-service";
import { TOidcConfigServiceFactory } from "@app/ee/services/oidc/oidc-config-service";
@@ -126,6 +130,11 @@ declare module "fastify" {
isUserCompleted: string;
providerAuthToken: string;
};
kmipUser: {
projectId: string;
clientId: string;
name: string;
};
auditLogInfo: Pick<TCreateAuditLogDTO, "userAgent" | "userAgentType" | "ipAddress" | "actor">;
ssoConfig: Awaited<ReturnType<TSamlConfigServiceFactory["getSaml"]>>;
ldapConfig: Awaited<ReturnType<TLdapConfigServiceFactory["getLdapCfg"]>>;
@@ -218,11 +227,15 @@ declare module "fastify" {
totp: TTotpServiceFactory;
appConnection: TAppConnectionServiceFactory;
secretSync: TSecretSyncServiceFactory;
kmip: TKmipServiceFactory;
kmipOperation: TKmipOperationServiceFactory;
gateway: TGatewayServiceFactory;
};
// this is exclusive use for middlewares in which we need to inject data
// everywhere else access using service layer
store: {
user: Pick<TUserDALFactory, "findById">;
kmipClient: Pick<TKmipClientDALFactory, "findByProjectAndClientId">;
};
}
}

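The fastify.d.ts hunks above register a kmipUser decoration on the request, expose the new kmip, kmipOperation, and gateway services, and add a kmipClient store reserved for middleware use. A minimal sketch, assuming those augmented types, of a hook that resolves the store entry and populates req.kmipUser; the header names mirror the KMIP spec router further down in this diff.

import { NotFoundError } from "@app/lib/errors";

// Illustrative hook only; the real implementation ships in the KMIP spec router below.
export const registerKmipUserHook = (server: FastifyZodProvider) => {
  server.addHook("onRequest", async (req) => {
    const clientId = req.headers["x-kmip-client-id"] as string;
    const projectId = req.headers["x-kmip-project-id"] as string;
    // store access is meant for middlewares; services are used everywhere else
    const kmipClient = await server.store.kmipClient.findByProjectAndClientId(projectId, clientId);
    if (!kmipClient) throw new NotFoundError({ message: "KMIP client cannot be found." });
    req.kmipUser = { projectId, clientId, name: kmipClient.name };
  });
};
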
@@ -68,6 +68,9 @@ import {
TExternalKms,
TExternalKmsInsert,
TExternalKmsUpdate,
TGateways,
TGatewaysInsert,
TGatewaysUpdate,
TGitAppInstallSessions,
TGitAppInstallSessionsInsert,
TGitAppInstallSessionsUpdate,
@@ -143,6 +146,18 @@ import {
TInternalKms,
TInternalKmsInsert,
TInternalKmsUpdate,
TKmipClientCertificates,
TKmipClientCertificatesInsert,
TKmipClientCertificatesUpdate,
TKmipClients,
TKmipClientsInsert,
TKmipClientsUpdate,
TKmipOrgConfigs,
TKmipOrgConfigsInsert,
TKmipOrgConfigsUpdate,
TKmipOrgServerCertificates,
TKmipOrgServerCertificatesInsert,
TKmipOrgServerCertificatesUpdate,
TKmsKeys,
TKmsKeysInsert,
TKmsKeysUpdate,
@@ -167,6 +182,9 @@ import {
TOrgBots,
TOrgBotsInsert,
TOrgBotsUpdate,
TOrgGatewayConfig,
TOrgGatewayConfigInsert,
TOrgGatewayConfigUpdate,
TOrgMemberships,
TOrgMembershipsInsert,
TOrgMembershipsUpdate,
@@ -188,6 +206,9 @@ import {
TProjectEnvironments,
TProjectEnvironmentsInsert,
TProjectEnvironmentsUpdate,
TProjectGateways,
TProjectGatewaysInsert,
TProjectGatewaysUpdate,
TProjectKeys,
TProjectKeysInsert,
TProjectKeysUpdate,
@@ -902,5 +923,32 @@ declare module "knex/types/tables" {
TAppConnectionsUpdate
>;
[TableName.SecretSync]: KnexOriginal.CompositeTableType<TSecretSyncs, TSecretSyncsInsert, TSecretSyncsUpdate>;
[TableName.KmipClient]: KnexOriginal.CompositeTableType<TKmipClients, TKmipClientsInsert, TKmipClientsUpdate>;
[TableName.KmipOrgConfig]: KnexOriginal.CompositeTableType<
TKmipOrgConfigs,
TKmipOrgConfigsInsert,
TKmipOrgConfigsUpdate
>;
[TableName.KmipOrgServerCertificates]: KnexOriginal.CompositeTableType<
TKmipOrgServerCertificates,
TKmipOrgServerCertificatesInsert,
TKmipOrgServerCertificatesUpdate
>;
[TableName.KmipClientCertificates]: KnexOriginal.CompositeTableType<
TKmipClientCertificates,
TKmipClientCertificatesInsert,
TKmipClientCertificatesUpdate
>;
[TableName.Gateway]: KnexOriginal.CompositeTableType<TGateways, TGatewaysInsert, TGatewaysUpdate>;
[TableName.ProjectGateway]: KnexOriginal.CompositeTableType<
TProjectGateways,
TProjectGatewaysInsert,
TProjectGatewaysUpdate
>;
[TableName.OrgGatewayConfig]: KnexOriginal.CompositeTableType<
TOrgGatewayConfig,
TOrgGatewayConfigInsert,
TOrgGatewayConfigUpdate
>;
}
}

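With the CompositeTableType entries registered above, Knex queries against the new gateway and KMIP tables become statically typed. A small illustrative sketch follows; the DAL helper name and db handle are hypothetical.

import { Knex } from "knex";

import { TableName } from "@app/db/schemas";

// Typed thanks to the knex/types/tables declarations added in this hunk:
// rows of TableName.Gateway resolve to the TGateways schema type.
export const findGatewaysByRootCa = (db: Knex, orgGatewayRootCaId: string) =>
  db(TableName.Gateway).where({ orgGatewayRootCaId }).select("*");
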
@@ -0,0 +1,108 @@
import { Knex } from "knex";
import { TableName } from "../schemas";
import { createOnUpdateTrigger, dropOnUpdateTrigger } from "../utils";
export async function up(knex: Knex): Promise<void> {
const hasKmipClientTable = await knex.schema.hasTable(TableName.KmipClient);
if (!hasKmipClientTable) {
await knex.schema.createTable(TableName.KmipClient, (t) => {
t.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
t.string("name").notNullable();
t.specificType("permissions", "text[]");
t.string("description");
t.string("projectId").notNullable();
t.foreign("projectId").references("id").inTable(TableName.Project).onDelete("CASCADE");
});
}
const hasKmipOrgPkiConfig = await knex.schema.hasTable(TableName.KmipOrgConfig);
if (!hasKmipOrgPkiConfig) {
await knex.schema.createTable(TableName.KmipOrgConfig, (t) => {
t.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
t.uuid("orgId").notNullable();
t.foreign("orgId").references("id").inTable(TableName.Organization).onDelete("CASCADE");
t.unique("orgId");
t.string("caKeyAlgorithm").notNullable();
t.datetime("rootCaIssuedAt").notNullable();
t.datetime("rootCaExpiration").notNullable();
t.string("rootCaSerialNumber").notNullable();
t.binary("encryptedRootCaCertificate").notNullable();
t.binary("encryptedRootCaPrivateKey").notNullable();
t.datetime("serverIntermediateCaIssuedAt").notNullable();
t.datetime("serverIntermediateCaExpiration").notNullable();
t.string("serverIntermediateCaSerialNumber");
t.binary("encryptedServerIntermediateCaCertificate").notNullable();
t.binary("encryptedServerIntermediateCaChain").notNullable();
t.binary("encryptedServerIntermediateCaPrivateKey").notNullable();
t.datetime("clientIntermediateCaIssuedAt").notNullable();
t.datetime("clientIntermediateCaExpiration").notNullable();
t.string("clientIntermediateCaSerialNumber").notNullable();
t.binary("encryptedClientIntermediateCaCertificate").notNullable();
t.binary("encryptedClientIntermediateCaChain").notNullable();
t.binary("encryptedClientIntermediateCaPrivateKey").notNullable();
t.timestamps(true, true, true);
});
await createOnUpdateTrigger(knex, TableName.KmipOrgConfig);
}
const hasKmipOrgServerCertTable = await knex.schema.hasTable(TableName.KmipOrgServerCertificates);
if (!hasKmipOrgServerCertTable) {
await knex.schema.createTable(TableName.KmipOrgServerCertificates, (t) => {
t.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
t.uuid("orgId").notNullable();
t.foreign("orgId").references("id").inTable(TableName.Organization).onDelete("CASCADE");
t.string("commonName").notNullable();
t.string("altNames").notNullable();
t.string("serialNumber").notNullable();
t.string("keyAlgorithm").notNullable();
t.datetime("issuedAt").notNullable();
t.datetime("expiration").notNullable();
t.binary("encryptedCertificate").notNullable();
t.binary("encryptedChain").notNullable();
});
}
const hasKmipClientCertTable = await knex.schema.hasTable(TableName.KmipClientCertificates);
if (!hasKmipClientCertTable) {
await knex.schema.createTable(TableName.KmipClientCertificates, (t) => {
t.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
t.uuid("kmipClientId").notNullable();
t.foreign("kmipClientId").references("id").inTable(TableName.KmipClient).onDelete("CASCADE");
t.string("serialNumber").notNullable();
t.string("keyAlgorithm").notNullable();
t.datetime("issuedAt").notNullable();
t.datetime("expiration").notNullable();
});
}
}
export async function down(knex: Knex): Promise<void> {
const hasKmipOrgPkiConfig = await knex.schema.hasTable(TableName.KmipOrgConfig);
if (hasKmipOrgPkiConfig) {
await knex.schema.dropTable(TableName.KmipOrgConfig);
await dropOnUpdateTrigger(knex, TableName.KmipOrgConfig);
}
const hasKmipOrgServerCertTable = await knex.schema.hasTable(TableName.KmipOrgServerCertificates);
if (hasKmipOrgServerCertTable) {
await knex.schema.dropTable(TableName.KmipOrgServerCertificates);
}
const hasKmipClientCertTable = await knex.schema.hasTable(TableName.KmipClientCertificates);
if (hasKmipClientCertTable) {
await knex.schema.dropTable(TableName.KmipClientCertificates);
}
const hasKmipClientTable = await knex.schema.hasTable(TableName.KmipClient);
if (hasKmipClientTable) {
await knex.schema.dropTable(TableName.KmipClient);
}
}

@@ -0,0 +1,115 @@
import { Knex } from "knex";
import { TableName } from "../schemas";
import { createOnUpdateTrigger, dropOnUpdateTrigger } from "../utils";
export async function up(knex: Knex): Promise<void> {
if (!(await knex.schema.hasTable(TableName.OrgGatewayConfig))) {
await knex.schema.createTable(TableName.OrgGatewayConfig, (t) => {
t.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
t.string("rootCaKeyAlgorithm").notNullable();
t.datetime("rootCaIssuedAt").notNullable();
t.datetime("rootCaExpiration").notNullable();
t.string("rootCaSerialNumber").notNullable();
t.binary("encryptedRootCaCertificate").notNullable();
t.binary("encryptedRootCaPrivateKey").notNullable();
t.datetime("clientCaIssuedAt").notNullable();
t.datetime("clientCaExpiration").notNullable();
t.string("clientCaSerialNumber");
t.binary("encryptedClientCaCertificate").notNullable();
t.binary("encryptedClientCaPrivateKey").notNullable();
t.string("clientCertSerialNumber").notNullable();
t.string("clientCertKeyAlgorithm").notNullable();
t.datetime("clientCertIssuedAt").notNullable();
t.datetime("clientCertExpiration").notNullable();
t.binary("encryptedClientCertificate").notNullable();
t.binary("encryptedClientPrivateKey").notNullable();
t.datetime("gatewayCaIssuedAt").notNullable();
t.datetime("gatewayCaExpiration").notNullable();
t.string("gatewayCaSerialNumber").notNullable();
t.binary("encryptedGatewayCaCertificate").notNullable();
t.binary("encryptedGatewayCaPrivateKey").notNullable();
t.uuid("orgId").notNullable();
t.foreign("orgId").references("id").inTable(TableName.Organization).onDelete("CASCADE");
t.unique("orgId");
t.timestamps(true, true, true);
});
await createOnUpdateTrigger(knex, TableName.OrgGatewayConfig);
}
if (!(await knex.schema.hasTable(TableName.Gateway))) {
await knex.schema.createTable(TableName.Gateway, (t) => {
t.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
t.string("name").notNullable();
t.string("serialNumber").notNullable();
t.string("keyAlgorithm").notNullable();
t.datetime("issuedAt").notNullable();
t.datetime("expiration").notNullable();
t.datetime("heartbeat");
t.binary("relayAddress").notNullable();
t.uuid("orgGatewayRootCaId").notNullable();
t.foreign("orgGatewayRootCaId").references("id").inTable(TableName.OrgGatewayConfig).onDelete("CASCADE");
t.uuid("identityId").notNullable();
t.foreign("identityId").references("id").inTable(TableName.Identity).onDelete("CASCADE");
t.timestamps(true, true, true);
});
await createOnUpdateTrigger(knex, TableName.Gateway);
}
if (!(await knex.schema.hasTable(TableName.ProjectGateway))) {
await knex.schema.createTable(TableName.ProjectGateway, (t) => {
t.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
t.string("projectId").notNullable();
t.foreign("projectId").references("id").inTable(TableName.Project).onDelete("CASCADE");
t.uuid("gatewayId").notNullable();
t.foreign("gatewayId").references("id").inTable(TableName.Gateway).onDelete("CASCADE");
t.timestamps(true, true, true);
});
await createOnUpdateTrigger(knex, TableName.ProjectGateway);
}
if (await knex.schema.hasTable(TableName.DynamicSecret)) {
const doesGatewayColExist = await knex.schema.hasColumn(TableName.DynamicSecret, "gatewayId");
await knex.schema.alterTable(TableName.DynamicSecret, (t) => {
// not setting a foreign constraint so that cascade effects are not triggered
if (!doesGatewayColExist) {
t.uuid("projectGatewayId");
t.foreign("projectGatewayId").references("id").inTable(TableName.ProjectGateway);
}
});
}
}
export async function down(knex: Knex): Promise<void> {
if (await knex.schema.hasTable(TableName.DynamicSecret)) {
const doesGatewayColExist = await knex.schema.hasColumn(TableName.DynamicSecret, "projectGatewayId");
await knex.schema.alterTable(TableName.DynamicSecret, (t) => {
if (doesGatewayColExist) t.dropColumn("projectGatewayId");
});
}
await knex.schema.dropTableIfExists(TableName.ProjectGateway);
await dropOnUpdateTrigger(knex, TableName.ProjectGateway);
await knex.schema.dropTableIfExists(TableName.Gateway);
await dropOnUpdateTrigger(knex, TableName.Gateway);
await knex.schema.dropTableIfExists(TableName.OrgGatewayConfig);
await dropOnUpdateTrigger(knex, TableName.OrgGatewayConfig);
}

@@ -0,0 +1,35 @@
import { Knex } from "knex";
import { TableName } from "@app/db/schemas";
export async function up(knex: Knex): Promise<void> {
for await (const tableName of [
TableName.SecretV2,
TableName.SecretVersionV2,
TableName.SecretApprovalRequestSecretV2
]) {
const hasReminderNoteCol = await knex.schema.hasColumn(tableName, "reminderNote");
if (hasReminderNoteCol) {
await knex.schema.alterTable(tableName, (t) => {
t.string("reminderNote", 1024).alter();
});
}
}
}
export async function down(knex: Knex): Promise<void> {
for await (const tableName of [
TableName.SecretV2,
TableName.SecretVersionV2,
TableName.SecretApprovalRequestSecretV2
]) {
const hasReminderNoteCol = await knex.schema.hasColumn(tableName, "reminderNote");
if (hasReminderNoteCol) {
await knex.schema.alterTable(tableName, (t) => {
t.string("reminderNote").alter();
});
}
}
}

@@ -26,7 +26,8 @@ export const DynamicSecretsSchema = z.object({
statusDetails: z.string().nullable().optional(),
createdAt: z.date(),
updatedAt: z.date(),
encryptedInput: zodBuffer
encryptedInput: zodBuffer,
projectGatewayId: z.string().uuid().nullable().optional()
});
export type TDynamicSecrets = z.infer<typeof DynamicSecretsSchema>;

@@ -0,0 +1,29 @@
// Code generated by automation script, DO NOT EDIT.
// Automated by pulling database and generating zod schema
// To update. Just run npm run generate:schema
// Written by akhilmhdh.
import { z } from "zod";
import { zodBuffer } from "@app/lib/zod";
import { TImmutableDBKeys } from "./models";
export const GatewaysSchema = z.object({
id: z.string().uuid(),
name: z.string(),
serialNumber: z.string(),
keyAlgorithm: z.string(),
issuedAt: z.date(),
expiration: z.date(),
heartbeat: z.date().nullable().optional(),
relayAddress: zodBuffer,
orgGatewayRootCaId: z.string().uuid(),
identityId: z.string().uuid(),
createdAt: z.date(),
updatedAt: z.date()
});
export type TGateways = z.infer<typeof GatewaysSchema>;
export type TGatewaysInsert = Omit<z.input<typeof GatewaysSchema>, TImmutableDBKeys>;
export type TGatewaysUpdate = Partial<Omit<z.input<typeof GatewaysSchema>, TImmutableDBKeys>>;

@@ -20,6 +20,7 @@ export * from "./certificates";
export * from "./dynamic-secret-leases";
export * from "./dynamic-secrets";
export * from "./external-kms";
export * from "./gateways";
export * from "./git-app-install-sessions";
export * from "./git-app-org";
export * from "./group-project-membership-roles";
@@ -45,6 +46,10 @@ export * from "./incident-contacts";
export * from "./integration-auths";
export * from "./integrations";
export * from "./internal-kms";
export * from "./kmip-client-certificates";
export * from "./kmip-clients";
export * from "./kmip-org-configs";
export * from "./kmip-org-server-certificates";
export * from "./kms-key-versions";
export * from "./kms-keys";
export * from "./kms-root-config";
@@ -53,6 +58,7 @@ export * from "./ldap-group-maps";
export * from "./models";
export * from "./oidc-configs";
export * from "./org-bots";
export * from "./org-gateway-config";
export * from "./org-memberships";
export * from "./org-roles";
export * from "./organizations";
@@ -61,6 +67,7 @@ export * from "./pki-collection-items";
export * from "./pki-collections";
export * from "./project-bots";
export * from "./project-environments";
export * from "./project-gateways";
export * from "./project-keys";
export * from "./project-memberships";
export * from "./project-roles";

@@ -0,0 +1,23 @@
// Code generated by automation script, DO NOT EDIT.
// Automated by pulling database and generating zod schema
// To update. Just run npm run generate:schema
// Written by akhilmhdh.
import { z } from "zod";
import { TImmutableDBKeys } from "./models";
export const KmipClientCertificatesSchema = z.object({
id: z.string().uuid(),
kmipClientId: z.string().uuid(),
serialNumber: z.string(),
keyAlgorithm: z.string(),
issuedAt: z.date(),
expiration: z.date()
});
export type TKmipClientCertificates = z.infer<typeof KmipClientCertificatesSchema>;
export type TKmipClientCertificatesInsert = Omit<z.input<typeof KmipClientCertificatesSchema>, TImmutableDBKeys>;
export type TKmipClientCertificatesUpdate = Partial<
Omit<z.input<typeof KmipClientCertificatesSchema>, TImmutableDBKeys>
>;

@@ -0,0 +1,20 @@
// Code generated by automation script, DO NOT EDIT.
// Automated by pulling database and generating zod schema
// To update. Just run npm run generate:schema
// Written by akhilmhdh.
import { z } from "zod";
import { TImmutableDBKeys } from "./models";
export const KmipClientsSchema = z.object({
id: z.string().uuid(),
name: z.string(),
permissions: z.string().array().nullable().optional(),
description: z.string().nullable().optional(),
projectId: z.string()
});
export type TKmipClients = z.infer<typeof KmipClientsSchema>;
export type TKmipClientsInsert = Omit<z.input<typeof KmipClientsSchema>, TImmutableDBKeys>;
export type TKmipClientsUpdate = Partial<Omit<z.input<typeof KmipClientsSchema>, TImmutableDBKeys>>;

@@ -0,0 +1,39 @@
// Code generated by automation script, DO NOT EDIT.
// Automated by pulling database and generating zod schema
// To update. Just run npm run generate:schema
// Written by akhilmhdh.
import { z } from "zod";
import { zodBuffer } from "@app/lib/zod";
import { TImmutableDBKeys } from "./models";
export const KmipOrgConfigsSchema = z.object({
id: z.string().uuid(),
orgId: z.string().uuid(),
caKeyAlgorithm: z.string(),
rootCaIssuedAt: z.date(),
rootCaExpiration: z.date(),
rootCaSerialNumber: z.string(),
encryptedRootCaCertificate: zodBuffer,
encryptedRootCaPrivateKey: zodBuffer,
serverIntermediateCaIssuedAt: z.date(),
serverIntermediateCaExpiration: z.date(),
serverIntermediateCaSerialNumber: z.string().nullable().optional(),
encryptedServerIntermediateCaCertificate: zodBuffer,
encryptedServerIntermediateCaChain: zodBuffer,
encryptedServerIntermediateCaPrivateKey: zodBuffer,
clientIntermediateCaIssuedAt: z.date(),
clientIntermediateCaExpiration: z.date(),
clientIntermediateCaSerialNumber: z.string(),
encryptedClientIntermediateCaCertificate: zodBuffer,
encryptedClientIntermediateCaChain: zodBuffer,
encryptedClientIntermediateCaPrivateKey: zodBuffer,
createdAt: z.date(),
updatedAt: z.date()
});
export type TKmipOrgConfigs = z.infer<typeof KmipOrgConfigsSchema>;
export type TKmipOrgConfigsInsert = Omit<z.input<typeof KmipOrgConfigsSchema>, TImmutableDBKeys>;
export type TKmipOrgConfigsUpdate = Partial<Omit<z.input<typeof KmipOrgConfigsSchema>, TImmutableDBKeys>>;

@@ -0,0 +1,29 @@
// Code generated by automation script, DO NOT EDIT.
// Automated by pulling database and generating zod schema
// To update. Just run npm run generate:schema
// Written by akhilmhdh.
import { z } from "zod";
import { zodBuffer } from "@app/lib/zod";
import { TImmutableDBKeys } from "./models";
export const KmipOrgServerCertificatesSchema = z.object({
id: z.string().uuid(),
orgId: z.string().uuid(),
commonName: z.string(),
altNames: z.string(),
serialNumber: z.string(),
keyAlgorithm: z.string(),
issuedAt: z.date(),
expiration: z.date(),
encryptedCertificate: zodBuffer,
encryptedChain: zodBuffer
});
export type TKmipOrgServerCertificates = z.infer<typeof KmipOrgServerCertificatesSchema>;
export type TKmipOrgServerCertificatesInsert = Omit<z.input<typeof KmipOrgServerCertificatesSchema>, TImmutableDBKeys>;
export type TKmipOrgServerCertificatesUpdate = Partial<
Omit<z.input<typeof KmipOrgServerCertificatesSchema>, TImmutableDBKeys>
>;

@@ -113,6 +113,10 @@ export enum TableName {
SecretApprovalRequestSecretTagV2 = "secret_approval_request_secret_tags_v2",
SnapshotSecretV2 = "secret_snapshot_secrets_v2",
ProjectSplitBackfillIds = "project_split_backfill_ids",
// Gateway
OrgGatewayConfig = "org_gateway_config",
Gateway = "gateways",
ProjectGateway = "project_gateways",
// junction tables with tags
SecretV2JnTag = "secret_v2_tag_junction",
JnSecretTag = "secret_tag_junction",
@@ -132,7 +136,11 @@
SlackIntegrations = "slack_integrations",
ProjectSlackConfigs = "project_slack_configs",
AppConnection = "app_connections",
SecretSync = "secret_syncs"
SecretSync = "secret_syncs",
KmipClient = "kmip_clients",
KmipOrgConfig = "kmip_org_configs",
KmipOrgServerCertificates = "kmip_org_server_certificates",
KmipClientCertificates = "kmip_client_certificates"
}
export type TImmutableDBKeys = "id" | "createdAt" | "updatedAt";

@@ -0,0 +1,43 @@
// Code generated by automation script, DO NOT EDIT.
// Automated by pulling database and generating zod schema
// To update. Just run npm run generate:schema
// Written by akhilmhdh.
import { z } from "zod";
import { zodBuffer } from "@app/lib/zod";
import { TImmutableDBKeys } from "./models";
export const OrgGatewayConfigSchema = z.object({
id: z.string().uuid(),
rootCaKeyAlgorithm: z.string(),
rootCaIssuedAt: z.date(),
rootCaExpiration: z.date(),
rootCaSerialNumber: z.string(),
encryptedRootCaCertificate: zodBuffer,
encryptedRootCaPrivateKey: zodBuffer,
clientCaIssuedAt: z.date(),
clientCaExpiration: z.date(),
clientCaSerialNumber: z.string().nullable().optional(),
encryptedClientCaCertificate: zodBuffer,
encryptedClientCaPrivateKey: zodBuffer,
clientCertSerialNumber: z.string(),
clientCertKeyAlgorithm: z.string(),
clientCertIssuedAt: z.date(),
clientCertExpiration: z.date(),
encryptedClientCertificate: zodBuffer,
encryptedClientPrivateKey: zodBuffer,
gatewayCaIssuedAt: z.date(),
gatewayCaExpiration: z.date(),
gatewayCaSerialNumber: z.string(),
encryptedGatewayCaCertificate: zodBuffer,
encryptedGatewayCaPrivateKey: zodBuffer,
orgId: z.string().uuid(),
createdAt: z.date(),
updatedAt: z.date()
});
export type TOrgGatewayConfig = z.infer<typeof OrgGatewayConfigSchema>;
export type TOrgGatewayConfigInsert = Omit<z.input<typeof OrgGatewayConfigSchema>, TImmutableDBKeys>;
export type TOrgGatewayConfigUpdate = Partial<Omit<z.input<typeof OrgGatewayConfigSchema>, TImmutableDBKeys>>;

@@ -0,0 +1,20 @@
// Code generated by automation script, DO NOT EDIT.
// Automated by pulling database and generating zod schema
// To update. Just run npm run generate:schema
// Written by akhilmhdh.
import { z } from "zod";
import { TImmutableDBKeys } from "./models";
export const ProjectGatewaysSchema = z.object({
id: z.string().uuid(),
projectId: z.string(),
gatewayId: z.string().uuid(),
createdAt: z.date(),
updatedAt: z.date()
});
export type TProjectGateways = z.infer<typeof ProjectGatewaysSchema>;
export type TProjectGatewaysInsert = Omit<z.input<typeof ProjectGatewaysSchema>, TImmutableDBKeys>;
export type TProjectGatewaysUpdate = Partial<Omit<z.input<typeof ProjectGatewaysSchema>, TImmutableDBKeys>>;

@@ -0,0 +1,265 @@
import { z } from "zod";
import { GatewaysSchema } from "@app/db/schemas";
import { isValidIp } from "@app/lib/ip";
import { readLimit, writeLimit } from "@app/server/config/rateLimiter";
import { slugSchema } from "@app/server/lib/schemas";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AuthMode } from "@app/services/auth/auth-type";
const SanitizedGatewaySchema = GatewaysSchema.pick({
id: true,
identityId: true,
name: true,
createdAt: true,
updatedAt: true,
issuedAt: true,
serialNumber: true,
heartbeat: true
});
const isValidRelayAddress = (relayAddress: string) => {
const [ip, port] = relayAddress.split(":");
return isValidIp(ip) && Number(port) <= 65535 && Number(port) >= 40000;
};
export const registerGatewayRouter = async (server: FastifyZodProvider) => {
server.route({
method: "POST",
url: "/register-identity",
config: {
rateLimit: writeLimit
},
schema: {
response: {
200: z.object({
turnServerUsername: z.string(),
turnServerPassword: z.string(),
turnServerRealm: z.string(),
turnServerAddress: z.string(),
infisicalStaticIp: z.string().optional()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const relayDetails = await server.services.gateway.getGatewayRelayDetails(
req.permission.id,
req.permission.orgId,
req.permission.authMethod
);
return relayDetails;
}
});
server.route({
method: "POST",
url: "/exchange-cert",
config: {
rateLimit: writeLimit
},
schema: {
body: z.object({
relayAddress: z.string().refine(isValidRelayAddress, { message: "Invalid relay address" })
}),
response: {
200: z.object({
serialNumber: z.string(),
privateKey: z.string(),
certificate: z.string(),
certificateChain: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const gatewayCertificates = await server.services.gateway.exchangeAllocatedRelayAddress({
identityOrg: req.permission.orgId,
identityId: req.permission.id,
relayAddress: req.body.relayAddress,
identityOrgAuthMethod: req.permission.authMethod
});
return gatewayCertificates;
}
});
server.route({
method: "POST",
url: "/heartbeat",
config: {
rateLimit: writeLimit
},
schema: {
response: {
200: z.object({
message: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
await server.services.gateway.heartbeat({
orgPermission: req.permission
});
return { message: "Successfully registered heartbeat" };
}
});
server.route({
method: "GET",
url: "/",
config: {
rateLimit: readLimit
},
schema: {
querystring: z.object({
projectId: z.string().optional()
}),
response: {
200: z.object({
gateways: SanitizedGatewaySchema.extend({
identity: z.object({
name: z.string(),
id: z.string()
}),
projects: z
.object({
name: z.string(),
id: z.string(),
slug: z.string()
})
.array()
}).array()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN, AuthMode.JWT]),
handler: async (req) => {
const gateways = await server.services.gateway.listGateways({
orgPermission: req.permission
});
return { gateways };
}
});
server.route({
method: "GET",
url: "/projects/:projectId",
config: {
rateLimit: readLimit
},
schema: {
params: z.object({
projectId: z.string()
}),
response: {
200: z.object({
gateways: SanitizedGatewaySchema.extend({
identity: z.object({
name: z.string(),
id: z.string()
}),
projectGatewayId: z.string()
}).array()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN, AuthMode.JWT]),
handler: async (req) => {
const gateways = await server.services.gateway.getProjectGateways({
projectId: req.params.projectId,
projectPermission: req.permission
});
return { gateways };
}
});
server.route({
method: "GET",
url: "/:id",
config: {
rateLimit: readLimit
},
schema: {
params: z.object({
id: z.string()
}),
response: {
200: z.object({
gateway: SanitizedGatewaySchema.extend({
identity: z.object({
name: z.string(),
id: z.string()
})
})
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN, AuthMode.JWT]),
handler: async (req) => {
const gateway = await server.services.gateway.getGatewayById({
orgPermission: req.permission,
id: req.params.id
});
return { gateway };
}
});
server.route({
method: "PATCH",
url: "/:id",
config: {
rateLimit: writeLimit
},
schema: {
params: z.object({
id: z.string()
}),
body: z.object({
name: slugSchema({ field: "name" }).optional(),
projectIds: z.string().array().optional()
}),
response: {
200: z.object({
gateway: SanitizedGatewaySchema
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN, AuthMode.JWT]),
handler: async (req) => {
const gateway = await server.services.gateway.updateGatewayById({
orgPermission: req.permission,
id: req.params.id,
name: req.body.name,
projectIds: req.body.projectIds
});
return { gateway };
}
});
server.route({
method: "DELETE",
url: "/:id",
config: {
rateLimit: writeLimit
},
schema: {
params: z.object({
id: z.string()
}),
response: {
200: z.object({
gateway: SanitizedGatewaySchema
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN, AuthMode.JWT]),
handler: async (req) => {
const gateway = await server.services.gateway.deleteGatewayById({
orgPermission: req.permission,
id: req.params.id
});
return { gateway };
}
});
};

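The gateway router above is mounted under the /gateways prefix in the EE route registration that follows. A hypothetical sketch of the heartbeat call a gateway process might issue; the /api/v1 base path and token handling are assumptions not shown in this diff.

// Illustrative only: posts to the heartbeat route registered above using a
// machine identity access token.
export async function sendGatewayHeartbeat(baseUrl: string, identityAccessToken: string) {
  const res = await fetch(`${baseUrl}/api/v1/gateways/heartbeat`, {
    method: "POST",
    headers: { authorization: `Bearer ${identityAccessToken}` }
  });
  if (!res.ok) throw new Error(`Gateway heartbeat failed with status ${res.status}`);
  const data = (await res.json()) as { message: string };
  return data.message; // expected: "Successfully registered heartbeat"
}
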
@@ -7,8 +7,11 @@ import { registerCaCrlRouter } from "./certificate-authority-crl-router";
import { registerDynamicSecretLeaseRouter } from "./dynamic-secret-lease-router";
import { registerDynamicSecretRouter } from "./dynamic-secret-router";
import { registerExternalKmsRouter } from "./external-kms-router";
import { registerGatewayRouter } from "./gateway-router";
import { registerGroupRouter } from "./group-router";
import { registerIdentityProjectAdditionalPrivilegeRouter } from "./identity-project-additional-privilege-router";
import { registerKmipRouter } from "./kmip-router";
import { registerKmipSpecRouter } from "./kmip-spec-router";
import { registerLdapRouter } from "./ldap-router";
import { registerLicenseRouter } from "./license-router";
import { registerOidcRouter } from "./oidc-router";
@@ -65,6 +68,8 @@ export const registerV1EERoutes = async (server: FastifyZodProvider) => {
{ prefix: "/dynamic-secrets" }
);
await server.register(registerGatewayRouter, { prefix: "/gateways" });
await server.register(
async (pkiRouter) => {
await pkiRouter.register(registerCaCrlRouter, { prefix: "/crl" });
@@ -110,4 +115,12 @@ export const registerV1EERoutes = async (server: FastifyZodProvider) => {
});
await server.register(registerProjectTemplateRouter, { prefix: "/project-templates" });
await server.register(
async (kmipRouter) => {
await kmipRouter.register(registerKmipRouter);
await kmipRouter.register(registerKmipSpecRouter, { prefix: "/spec" });
},
{ prefix: "/kmip" }
);
};

@@ -0,0 +1,428 @@
import ms from "ms";
import { z } from "zod";
import { KmipClientsSchema } from "@app/db/schemas";
import { EventType } from "@app/ee/services/audit-log/audit-log-types";
import { KmipPermission } from "@app/ee/services/kmip/kmip-enum";
import { KmipClientOrderBy } from "@app/ee/services/kmip/kmip-types";
import { OrderByDirection } from "@app/lib/types";
import { readLimit, writeLimit } from "@app/server/config/rateLimiter";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AuthMode } from "@app/services/auth/auth-type";
import { CertKeyAlgorithm } from "@app/services/certificate/certificate-types";
import { validateAltNamesField } from "@app/services/certificate-authority/certificate-authority-validators";
const KmipClientResponseSchema = KmipClientsSchema.pick({
projectId: true,
name: true,
id: true,
description: true,
permissions: true
});
export const registerKmipRouter = async (server: FastifyZodProvider) => {
server.route({
method: "POST",
url: "/clients",
config: {
rateLimit: writeLimit
},
schema: {
body: z.object({
projectId: z.string(),
name: z.string().trim().min(1),
description: z.string().optional(),
permissions: z.nativeEnum(KmipPermission).array()
}),
response: {
200: KmipClientResponseSchema
}
},
onRequest: verifyAuth([AuthMode.JWT, AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const kmipClient = await server.services.kmip.createKmipClient({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
...req.body
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
orgId: req.permission.orgId,
projectId: kmipClient.projectId,
event: {
type: EventType.CREATE_KMIP_CLIENT,
metadata: {
id: kmipClient.id,
name: kmipClient.name,
permissions: (kmipClient.permissions ?? []) as KmipPermission[]
}
}
});
return kmipClient;
}
});
server.route({
method: "PATCH",
url: "/clients/:id",
config: {
rateLimit: writeLimit
},
schema: {
params: z.object({
id: z.string()
}),
body: z.object({
name: z.string().trim().min(1),
description: z.string().optional(),
permissions: z.nativeEnum(KmipPermission).array()
}),
response: {
200: KmipClientResponseSchema
}
},
onRequest: verifyAuth([AuthMode.JWT, AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const kmipClient = await server.services.kmip.updateKmipClient({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
...req.params,
...req.body
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
orgId: req.permission.orgId,
projectId: kmipClient.projectId,
event: {
type: EventType.UPDATE_KMIP_CLIENT,
metadata: {
id: kmipClient.id,
name: kmipClient.name,
permissions: (kmipClient.permissions ?? []) as KmipPermission[]
}
}
});
return kmipClient;
}
});
server.route({
method: "DELETE",
url: "/clients/:id",
config: {
rateLimit: writeLimit
},
schema: {
params: z.object({
id: z.string()
}),
response: {
200: KmipClientResponseSchema
}
},
onRequest: verifyAuth([AuthMode.JWT, AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const kmipClient = await server.services.kmip.deleteKmipClient({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
...req.params
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
orgId: req.permission.orgId,
projectId: kmipClient.projectId,
event: {
type: EventType.DELETE_KMIP_CLIENT,
metadata: {
id: kmipClient.id
}
}
});
return kmipClient;
}
});
server.route({
method: "GET",
url: "/clients/:id",
config: {
rateLimit: readLimit
},
schema: {
params: z.object({
id: z.string()
}),
response: {
200: KmipClientResponseSchema
}
},
onRequest: verifyAuth([AuthMode.JWT, AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const kmipClient = await server.services.kmip.getKmipClient({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
...req.params
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
orgId: req.permission.orgId,
projectId: kmipClient.projectId,
event: {
type: EventType.GET_KMIP_CLIENT,
metadata: {
id: kmipClient.id
}
}
});
return kmipClient;
}
});
server.route({
method: "GET",
url: "/clients",
config: {
rateLimit: readLimit
},
schema: {
description: "List KMIP clients",
querystring: z.object({
projectId: z.string(),
offset: z.coerce.number().min(0).optional().default(0),
limit: z.coerce.number().min(1).max(100).optional().default(100),
orderBy: z.nativeEnum(KmipClientOrderBy).optional().default(KmipClientOrderBy.Name),
orderDirection: z.nativeEnum(OrderByDirection).optional().default(OrderByDirection.ASC),
search: z.string().trim().optional()
}),
response: {
200: z.object({
kmipClients: KmipClientResponseSchema.array(),
totalCount: z.number()
})
}
},
onRequest: verifyAuth([AuthMode.JWT, AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const { kmipClients, totalCount } = await server.services.kmip.listKmipClientsByProjectId({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
...req.query
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
projectId: req.query.projectId,
event: {
type: EventType.GET_KMIP_CLIENTS,
metadata: {
ids: kmipClients.map((key) => key.id)
}
}
});
return { kmipClients, totalCount };
}
});
server.route({
method: "POST",
url: "/clients/:id/certificates",
config: {
rateLimit: writeLimit
},
schema: {
params: z.object({
id: z.string()
}),
body: z.object({
keyAlgorithm: z.nativeEnum(CertKeyAlgorithm),
ttl: z.string().refine((val) => ms(val) > 0, "TTL must be a positive number")
}),
response: {
200: z.object({
serialNumber: z.string(),
certificateChain: z.string(),
certificate: z.string(),
privateKey: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.JWT, AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const certificate = await server.services.kmip.createKmipClientCertificate({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
clientId: req.params.id,
...req.body
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
orgId: req.permission.orgId,
projectId: certificate.projectId,
event: {
type: EventType.CREATE_KMIP_CLIENT_CERTIFICATE,
metadata: {
clientId: req.params.id,
serialNumber: certificate.serialNumber,
ttl: req.body.ttl,
keyAlgorithm: req.body.keyAlgorithm
}
}
});
return certificate;
}
});
server.route({
method: "POST",
url: "/",
config: {
rateLimit: writeLimit
},
schema: {
body: z.object({
caKeyAlgorithm: z.nativeEnum(CertKeyAlgorithm)
}),
response: {
200: z.object({
serverCertificateChain: z.string(),
clientCertificateChain: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.JWT, AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const chains = await server.services.kmip.setupOrgKmip({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
...req.body
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
orgId: req.permission.orgId,
event: {
type: EventType.SETUP_KMIP,
metadata: {
keyAlgorithm: req.body.caKeyAlgorithm
}
}
});
return chains;
}
});
server.route({
method: "GET",
url: "/",
config: {
rateLimit: readLimit
},
schema: {
response: {
200: z.object({
serverCertificateChain: z.string(),
clientCertificateChain: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.JWT, AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const kmip = await server.services.kmip.getOrgKmip({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
orgId: req.permission.orgId,
event: {
type: EventType.GET_KMIP,
metadata: {
id: kmip.id
}
}
});
return kmip;
}
});
server.route({
method: "POST",
url: "/server-registration",
config: {
rateLimit: writeLimit
},
schema: {
body: z.object({
hostnamesOrIps: validateAltNamesField,
commonName: z.string().trim().min(1).optional(),
keyAlgorithm: z.nativeEnum(CertKeyAlgorithm).optional().default(CertKeyAlgorithm.RSA_2048),
ttl: z.string().refine((val) => ms(val) > 0, "TTL must be a positive number")
}),
response: {
200: z.object({
clientCertificateChain: z.string(),
certificateChain: z.string(),
certificate: z.string(),
privateKey: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const configs = await server.services.kmip.registerServer({
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
...req.body
});
await server.services.auditLog.createAuditLog({
...req.auditLogInfo,
orgId: req.permission.orgId,
event: {
type: EventType.REGISTER_KMIP_SERVER,
metadata: {
serverCertificateSerialNumber: configs.serverCertificateSerialNumber,
hostnamesOrIps: req.body.hostnamesOrIps,
commonName: req.body.commonName ?? "kmip-server",
keyAlgorithm: req.body.keyAlgorithm,
ttl: req.body.ttl
}
}
});
return configs;
}
});
};

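The KMIP management router above is registered under the /kmip prefix. A hypothetical sketch of creating a KMIP client through it; the /api/v1 base path is an assumption, and the permission strings must come from the KmipPermission enum, which is not shown in this diff.

// Illustrative only: request body shape follows the POST /clients schema above.
export async function createKmipClient(
  baseUrl: string,
  token: string,
  input: { projectId: string; name: string; description?: string; permissions: string[] }
) {
  const res = await fetch(`${baseUrl}/api/v1/kmip/clients`, {
    method: "POST",
    headers: {
      authorization: `Bearer ${token}`,
      "content-type": "application/json"
    },
    body: JSON.stringify(input)
  });
  if (!res.ok) throw new Error(`Failed to create KMIP client: ${res.status}`);
  return (await res.json()) as { id: string; name: string; projectId: string; permissions?: string[] };
}
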
@@ -0,0 +1,477 @@
import z from "zod";
import { KmsKeysSchema } from "@app/db/schemas";
import { EventType } from "@app/ee/services/audit-log/audit-log-types";
import { SymmetricEncryption } from "@app/lib/crypto/cipher";
import { ForbiddenRequestError, NotFoundError } from "@app/lib/errors";
import { writeLimit } from "@app/server/config/rateLimiter";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { ActorType, AuthMode } from "@app/services/auth/auth-type";
export const registerKmipSpecRouter = async (server: FastifyZodProvider) => {
server.decorateRequest("kmipUser", null);
server.addHook("onRequest", async (req) => {
const clientId = req.headers["x-kmip-client-id"] as string;
const projectId = req.headers["x-kmip-project-id"] as string;
const clientCertSerialNumber = req.headers["x-kmip-client-certificate-serial-number"] as string;
const serverCertSerialNumber = req.headers["x-kmip-server-certificate-serial-number"] as string;
if (!serverCertSerialNumber) {
throw new ForbiddenRequestError({
message: "Missing server certificate serial number from request"
});
}
if (!clientCertSerialNumber) {
throw new ForbiddenRequestError({
message: "Missing client certificate serial number from request"
});
}
if (!clientId) {
throw new ForbiddenRequestError({
message: "Missing client ID from request"
});
}
if (!projectId) {
throw new ForbiddenRequestError({
message: "Missing project ID from request"
});
}
// TODO: assert that server certificate used is not revoked
// TODO: assert that client certificate used is not revoked
const kmipClient = await server.store.kmipClient.findByProjectAndClientId(projectId, clientId);
if (!kmipClient) {
throw new NotFoundError({
message: "KMIP client cannot be found."
});
}
if (kmipClient.orgId !== req.permission.orgId) {
throw new ForbiddenRequestError({
message: "Client specified in the request does not belong in the organization"
});
}
req.kmipUser = {
projectId,
clientId,
name: kmipClient.name
};
});
server.route({
method: "POST",
url: "/create",
config: {
rateLimit: writeLimit
},
schema: {
description: "KMIP endpoint for creating managed objects",
body: z.object({
algorithm: z.nativeEnum(SymmetricEncryption)
}),
response: {
200: KmsKeysSchema
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const object = await server.services.kmipOperation.create({
...req.kmipUser,
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
algorithm: req.body.algorithm
});
await server.services.auditLog.createAuditLog({
projectId: req.kmipUser.projectId,
actor: {
type: ActorType.KMIP_CLIENT,
metadata: {
clientId: req.kmipUser.clientId,
name: req.kmipUser.name
}
},
event: {
type: EventType.KMIP_OPERATION_CREATE,
metadata: {
id: object.id,
algorithm: req.body.algorithm
}
}
});
return object;
}
});
server.route({
method: "POST",
url: "/get",
config: {
rateLimit: writeLimit
},
schema: {
description: "KMIP endpoint for getting managed objects",
body: z.object({
id: z.string()
}),
response: {
200: z.object({
id: z.string(),
value: z.string(),
algorithm: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const object = await server.services.kmipOperation.get({
...req.kmipUser,
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
id: req.body.id
});
await server.services.auditLog.createAuditLog({
projectId: req.kmipUser.projectId,
actor: {
type: ActorType.KMIP_CLIENT,
metadata: {
clientId: req.kmipUser.clientId,
name: req.kmipUser.name
}
},
event: {
type: EventType.KMIP_OPERATION_GET,
metadata: {
id: object.id
}
}
});
return object;
}
});
server.route({
method: "POST",
url: "/get-attributes",
config: {
rateLimit: writeLimit
},
schema: {
description: "KMIP endpoint for getting attributes of managed object",
body: z.object({
id: z.string()
}),
response: {
200: z.object({
id: z.string(),
algorithm: z.string(),
isActive: z.boolean(),
createdAt: z.date(),
updatedAt: z.date()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const object = await server.services.kmipOperation.getAttributes({
...req.kmipUser,
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
id: req.body.id
});
await server.services.auditLog.createAuditLog({
projectId: req.kmipUser.projectId,
actor: {
type: ActorType.KMIP_CLIENT,
metadata: {
clientId: req.kmipUser.clientId,
name: req.kmipUser.name
}
},
event: {
type: EventType.KMIP_OPERATION_GET_ATTRIBUTES,
metadata: {
id: object.id
}
}
});
return object;
}
});
server.route({
method: "POST",
url: "/destroy",
config: {
rateLimit: writeLimit
},
schema: {
description: "KMIP endpoint for destroying managed objects",
body: z.object({
id: z.string()
}),
response: {
200: z.object({
id: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const object = await server.services.kmipOperation.destroy({
...req.kmipUser,
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
id: req.body.id
});
await server.services.auditLog.createAuditLog({
projectId: req.kmipUser.projectId,
actor: {
type: ActorType.KMIP_CLIENT,
metadata: {
clientId: req.kmipUser.clientId,
name: req.kmipUser.name
}
},
event: {
type: EventType.KMIP_OPERATION_DESTROY,
metadata: {
id: object.id
}
}
});
return object;
}
});
server.route({
method: "POST",
url: "/activate",
config: {
rateLimit: writeLimit
},
schema: {
description: "KMIP endpoint for activating managed object",
body: z.object({
id: z.string()
}),
response: {
200: z.object({
id: z.string(),
isActive: z.boolean()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const object = await server.services.kmipOperation.activate({
...req.kmipUser,
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
id: req.body.id
});
await server.services.auditLog.createAuditLog({
projectId: req.kmipUser.projectId,
actor: {
type: ActorType.KMIP_CLIENT,
metadata: {
clientId: req.kmipUser.clientId,
name: req.kmipUser.name
}
},
event: {
type: EventType.KMIP_OPERATION_ACTIVATE,
metadata: {
id: object.id
}
}
});
return object;
}
});
server.route({
method: "POST",
url: "/revoke",
config: {
rateLimit: writeLimit
},
schema: {
description: "KMIP endpoint for revoking managed object",
body: z.object({
id: z.string()
}),
response: {
200: z.object({
id: z.string(),
updatedAt: z.date()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const object = await server.services.kmipOperation.revoke({
...req.kmipUser,
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId,
id: req.body.id
});
await server.services.auditLog.createAuditLog({
projectId: req.kmipUser.projectId,
actor: {
type: ActorType.KMIP_CLIENT,
metadata: {
clientId: req.kmipUser.clientId,
name: req.kmipUser.name
}
},
event: {
type: EventType.KMIP_OPERATION_REVOKE,
metadata: {
id: object.id
}
}
});
return object;
}
});
server.route({
method: "POST",
url: "/locate",
config: {
rateLimit: writeLimit
},
schema: {
description: "KMIP endpoint for locating managed objects",
response: {
200: z.object({
objects: z
.object({
id: z.string(),
name: z.string(),
isActive: z.boolean(),
algorithm: z.string(),
createdAt: z.date(),
updatedAt: z.date()
})
.array()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const objects = await server.services.kmipOperation.locate({
...req.kmipUser,
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId
});
await server.services.auditLog.createAuditLog({
projectId: req.kmipUser.projectId,
actor: {
type: ActorType.KMIP_CLIENT,
metadata: {
clientId: req.kmipUser.clientId,
name: req.kmipUser.name
}
},
event: {
type: EventType.KMIP_OPERATION_LOCATE,
metadata: {
ids: objects.map((obj) => obj.id)
}
}
});
return {
objects
};
}
});
server.route({
method: "POST",
url: "/register",
config: {
rateLimit: writeLimit
},
schema: {
description: "KMIP endpoint for registering managed object",
body: z.object({
key: z.string(),
name: z.string(),
algorithm: z.nativeEnum(SymmetricEncryption)
}),
response: {
200: z.object({
id: z.string()
})
}
},
onRequest: verifyAuth([AuthMode.IDENTITY_ACCESS_TOKEN]),
handler: async (req) => {
const object = await server.services.kmipOperation.register({
...req.kmipUser,
...req.body,
actor: req.permission.type,
actorId: req.permission.id,
actorAuthMethod: req.permission.authMethod,
actorOrgId: req.permission.orgId
});
await server.services.auditLog.createAuditLog({
projectId: req.kmipUser.projectId,
actor: {
type: ActorType.KMIP_CLIENT,
metadata: {
clientId: req.kmipUser.clientId,
name: req.kmipUser.name
}
},
event: {
type: EventType.KMIP_OPERATION_REGISTER,
metadata: {
id: object.id,
algorithm: req.body.algorithm,
name: object.name
}
}
});
return object;
}
});
};
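
// Hedged illustration (not part of the diff): the onRequest hook above requires four
// x-kmip-* headers on every spec request. The base URL, route prefix, token env var,
// and identifier values below are placeholder assumptions; the header names and body
// shape come from the code.
const kmipCreate = async () => {
  const res = await fetch("https://app.infisical.com/api/v1/kmip-spec/create", {
    method: "POST",
    headers: {
      authorization: `Bearer ${process.env.INFISICAL_TOKEN}`, // identity access token (assumed env var)
      "content-type": "application/json",
      // omitting any of these four headers results in a ForbiddenRequestError
      "x-kmip-client-id": "<kmip-client-id>",
      "x-kmip-project-id": "<project-id>",
      "x-kmip-client-certificate-serial-number": "<client-cert-serial>",
      "x-kmip-server-certificate-serial-number": "<server-cert-serial>"
    },
    body: JSON.stringify({ algorithm: "aes-256-gcm" }) // assumed SymmetricEncryption value
  });
  return res.json(); // KmsKeysSchema-shaped object on success
};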

@@ -21,6 +21,8 @@ import {
TUpdateSecretSyncDTO
} from "@app/services/secret-sync/secret-sync-types";
import { KmipPermission } from "../kmip/kmip-enum";
export type TListProjectAuditLogDTO = {
filter: {
userAgentType?: UserAgentType;
@@ -39,7 +41,14 @@ export type TListProjectAuditLogDTO = {
export type TCreateAuditLogDTO = {
event: Event;
actor: UserActor | IdentityActor | ServiceActor | ScimClientActor | PlatformActor | UnknownUserActor;
actor:
| UserActor
| IdentityActor
| ServiceActor
| ScimClientActor
| PlatformActor
| UnknownUserActor
| KmipClientActor;
orgId?: string;
projectId?: string;
} & BaseAuthData;
@@ -252,7 +261,26 @@ export enum EventType {
SECRET_SYNC_IMPORT_SECRETS = "secret-sync-import-secrets",
SECRET_SYNC_REMOVE_SECRETS = "secret-sync-remove-secrets",
OIDC_GROUP_MEMBERSHIP_MAPPING_ASSIGN_USER = "oidc-group-membership-mapping-assign-user",
OIDC_GROUP_MEMBERSHIP_MAPPING_REMOVE_USER = "oidc-group-membership-mapping-remove-user"
OIDC_GROUP_MEMBERSHIP_MAPPING_REMOVE_USER = "oidc-group-membership-mapping-remove-user",
CREATE_KMIP_CLIENT = "create-kmip-client",
UPDATE_KMIP_CLIENT = "update-kmip-client",
DELETE_KMIP_CLIENT = "delete-kmip-client",
GET_KMIP_CLIENT = "get-kmip-client",
GET_KMIP_CLIENTS = "get-kmip-clients",
CREATE_KMIP_CLIENT_CERTIFICATE = "create-kmip-client-certificate",
SETUP_KMIP = "setup-kmip",
GET_KMIP = "get-kmip",
REGISTER_KMIP_SERVER = "register-kmip-server",
KMIP_OPERATION_CREATE = "kmip-operation-create",
KMIP_OPERATION_GET = "kmip-operation-get",
KMIP_OPERATION_DESTROY = "kmip-operation-destroy",
KMIP_OPERATION_GET_ATTRIBUTES = "kmip-operation-get-attributes",
KMIP_OPERATION_ACTIVATE = "kmip-operation-activate",
KMIP_OPERATION_REVOKE = "kmip-operation-revoke",
KMIP_OPERATION_LOCATE = "kmip-operation-locate",
KMIP_OPERATION_REGISTER = "kmip-operation-register"
}
interface UserActorMetadata {
@@ -275,6 +303,11 @@ interface ScimClientActorMetadata {}
interface PlatformActorMetadata {}
interface KmipClientActorMetadata {
clientId: string;
name: string;
}
interface UnknownUserActorMetadata {}
export interface UserActor {
@@ -292,6 +325,11 @@ export interface PlatformActor {
metadata: PlatformActorMetadata;
}
export interface KmipClientActor {
type: ActorType.KMIP_CLIENT;
metadata: KmipClientActorMetadata;
}
export interface UnknownUserActor {
type: ActorType.UNKNOWN_USER;
metadata: UnknownUserActorMetadata;
@@ -307,7 +345,7 @@ export interface ScimClientActor {
metadata: ScimClientActorMetadata;
}
export type Actor = UserActor | ServiceActor | IdentityActor | ScimClientActor | PlatformActor;
export type Actor = UserActor | ServiceActor | IdentityActor | ScimClientActor | PlatformActor | KmipClientActor;
interface GetSecretsEvent {
type: EventType.GET_SECRETS;
@@ -352,6 +390,7 @@ interface CreateSecretBatchEvent {
secrets: Array<{
secretId: string;
secretKey: string;
secretPath?: string;
secretVersion: number;
secretMetadata?: TSecretMetadata;
}>;
@@ -374,8 +413,14 @@ interface UpdateSecretBatchEvent {
type: EventType.UPDATE_SECRETS;
metadata: {
environment: string;
secretPath: string;
secrets: Array<{ secretId: string; secretKey: string; secretVersion: number; secretMetadata?: TSecretMetadata }>;
secretPath?: string;
secrets: Array<{
secretId: string;
secretKey: string;
secretVersion: number;
secretMetadata?: TSecretMetadata;
secretPath?: string;
}>;
};
}
@@ -2084,6 +2129,139 @@ interface OidcGroupMembershipMappingRemoveUserEvent {
};
}
interface CreateKmipClientEvent {
type: EventType.CREATE_KMIP_CLIENT;
metadata: {
name: string;
id: string;
permissions: KmipPermission[];
};
}
interface UpdateKmipClientEvent {
type: EventType.UPDATE_KMIP_CLIENT;
metadata: {
name: string;
id: string;
permissions: KmipPermission[];
};
}
interface DeleteKmipClientEvent {
type: EventType.DELETE_KMIP_CLIENT;
metadata: {
id: string;
};
}
interface GetKmipClientEvent {
type: EventType.GET_KMIP_CLIENT;
metadata: {
id: string;
};
}
interface GetKmipClientsEvent {
type: EventType.GET_KMIP_CLIENTS;
metadata: {
ids: string[];
};
}
interface CreateKmipClientCertificateEvent {
type: EventType.CREATE_KMIP_CLIENT_CERTIFICATE;
metadata: {
clientId: string;
ttl: string;
keyAlgorithm: string;
serialNumber: string;
};
}
interface KmipOperationGetEvent {
type: EventType.KMIP_OPERATION_GET;
metadata: {
id: string;
};
}
interface KmipOperationDestroyEvent {
type: EventType.KMIP_OPERATION_DESTROY;
metadata: {
id: string;
};
}
interface KmipOperationCreateEvent {
type: EventType.KMIP_OPERATION_CREATE;
metadata: {
id: string;
algorithm: string;
};
}
interface KmipOperationGetAttributesEvent {
type: EventType.KMIP_OPERATION_GET_ATTRIBUTES;
metadata: {
id: string;
};
}
interface KmipOperationActivateEvent {
type: EventType.KMIP_OPERATION_ACTIVATE;
metadata: {
id: string;
};
}
interface KmipOperationRevokeEvent {
type: EventType.KMIP_OPERATION_REVOKE;
metadata: {
id: string;
};
}
interface KmipOperationLocateEvent {
type: EventType.KMIP_OPERATION_LOCATE;
metadata: {
ids: string[];
};
}
interface KmipOperationRegisterEvent {
type: EventType.KMIP_OPERATION_REGISTER;
metadata: {
id: string;
algorithm: string;
name: string;
};
}
interface SetupKmipEvent {
type: EventType.SETUP_KMIP;
metadata: {
keyAlgorithm: CertKeyAlgorithm;
};
}
interface GetKmipEvent {
type: EventType.GET_KMIP;
metadata: {
id: string;
};
}
interface RegisterKmipServerEvent {
type: EventType.REGISTER_KMIP_SERVER;
metadata: {
serverCertificateSerialNumber: string;
hostnamesOrIps: string;
commonName: string;
keyAlgorithm: CertKeyAlgorithm;
ttl: string;
};
}
export type Event =
| GetSecretsEvent
| GetSecretEvent
@@ -2275,4 +2453,21 @@ export type Event =
| SecretSyncImportSecretsEvent
| SecretSyncRemoveSecretsEvent
| OidcGroupMembershipMappingAssignUserEvent
| OidcGroupMembershipMappingRemoveUserEvent;
| OidcGroupMembershipMappingRemoveUserEvent
| CreateKmipClientEvent
| UpdateKmipClientEvent
| DeleteKmipClientEvent
| GetKmipClientEvent
| GetKmipClientsEvent
| CreateKmipClientCertificateEvent
| SetupKmipEvent
| GetKmipEvent
| RegisterKmipServerEvent
| KmipOperationGetEvent
| KmipOperationDestroyEvent
| KmipOperationCreateEvent
| KmipOperationGetAttributesEvent
| KmipOperationActivateEvent
| KmipOperationRevokeEvent
| KmipOperationLocateEvent
| KmipOperationRegisterEvent;

@@ -1,20 +1,31 @@
import crypto from "node:crypto";
import { getConfig } from "@app/lib/config/env";
import { BadRequestError } from "@app/lib/errors";
import { getDbConnectionHost } from "@app/lib/knex";
export const verifyHostInputValidity = (host: string) => {
export const verifyHostInputValidity = (host: string, isGateway = false) => {
const appCfg = getConfig();
const dbHost = appCfg.DB_HOST || getDbConnectionHost(appCfg.DB_CONNECTION_URI);
// no need for validation when it's dev
if (appCfg.NODE_ENV === "development") return;
if (host === "host.docker.internal") throw new BadRequestError({ message: "Invalid db host" });
if (
appCfg.isCloud &&
!isGateway &&
// localhost
// internal ips
(host === "host.docker.internal" || host.match(/^10\.\d+\.\d+\.\d+/) || host.match(/^192\.168\.\d+\.\d+/))
(host.match(/^10\.\d+\.\d+\.\d+/) || host.match(/^192\.168\.\d+\.\d+/))
)
throw new BadRequestError({ message: "Invalid db host" });
if (host === "localhost" || host === "127.0.0.1" || dbHost === host) {
if (
host === "localhost" ||
host === "127.0.0.1" ||
(dbHost?.length === host.length && crypto.timingSafeEqual(Buffer.from(dbHost || ""), Buffer.from(host)))
) {
throw new BadRequestError({ message: "Invalid db host" });
}
};
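
// Illustrative only (not part of the diff): expected behaviour of the updated validator on a
// cloud deployment, assuming appCfg.isCloud is true and NODE_ENV is not "development".
const demonstrateHostValidation = () => {
  const attempt = (host: string, isGateway?: boolean) => {
    try {
      verifyHostInputValidity(host, isGateway);
      return "allowed";
    } catch {
      return "rejected";
    }
  };
  console.log(attempt("10.0.0.5")); // "rejected": internal IPs are blocked without a gateway
  console.log(attempt("10.0.0.5", true)); // "allowed": gateway-backed targets may use internal IPs
  console.log(attempt("localhost")); // "rejected" regardless of the gateway flag
  console.log(attempt("db.example.com")); // "allowed", unless it matches the instance's own DB host
};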

@@ -16,6 +16,7 @@ import { TSecretFolderDALFactory } from "@app/services/secret-folder/secret-fold
import { TDynamicSecretLeaseDALFactory } from "../dynamic-secret-lease/dynamic-secret-lease-dal";
import { TDynamicSecretLeaseQueueServiceFactory } from "../dynamic-secret-lease/dynamic-secret-lease-queue";
import { TProjectGatewayDALFactory } from "../gateway/project-gateway-dal";
import { TDynamicSecretDALFactory } from "./dynamic-secret-dal";
import {
DynamicSecretStatus,
@@ -44,6 +45,7 @@ type TDynamicSecretServiceFactoryDep = {
projectDAL: Pick<TProjectDALFactory, "findProjectBySlug">;
permissionService: Pick<TPermissionServiceFactory, "getProjectPermission">;
kmsService: Pick<TKmsServiceFactory, "createCipherPairWithDataKey">;
projectGatewayDAL: Pick<TProjectGatewayDALFactory, "findOne">;
};
export type TDynamicSecretServiceFactory = ReturnType<typeof dynamicSecretServiceFactory>;
@@ -57,7 +59,8 @@ export const dynamicSecretServiceFactory = ({
permissionService,
dynamicSecretQueueService,
projectDAL,
kmsService
kmsService,
projectGatewayDAL
}: TDynamicSecretServiceFactoryDep) => {
const create = async ({
path,
@@ -108,6 +111,18 @@ export const dynamicSecretServiceFactory = ({
const selectedProvider = dynamicSecretProviders[provider.type];
const inputs = await selectedProvider.validateProviderInputs(provider.inputs);
let selectedGatewayId: string | null = null;
if (inputs && typeof inputs === "object" && "projectGatewayId" in inputs && inputs.projectGatewayId) {
const projectGatewayId = inputs.projectGatewayId as string;
const projectGateway = await projectGatewayDAL.findOne({ id: projectGatewayId, projectId });
if (!projectGateway)
throw new NotFoundError({
message: `Project gateway with ID ${projectGatewayId} not found`
});
selectedGatewayId = projectGateway.id;
}
const isConnected = await selectedProvider.validateConnection(provider.inputs);
if (!isConnected) throw new BadRequestError({ message: "Provider connection failed" });
@@ -123,7 +138,8 @@ export const dynamicSecretServiceFactory = ({
maxTTL,
defaultTTL,
folderId: folder.id,
name
name,
projectGatewayId: selectedGatewayId
});
return dynamicSecretCfg;
};
@@ -195,6 +211,23 @@ export const dynamicSecretServiceFactory = ({
const newInput = { ...decryptedStoredInput, ...(inputs || {}) };
const updatedInput = await selectedProvider.validateProviderInputs(newInput);
let selectedGatewayId: string | null = null;
if (
updatedInput &&
typeof updatedInput === "object" &&
"projectGatewayId" in updatedInput &&
updatedInput?.projectGatewayId
) {
const projectGatewayId = updatedInput.projectGatewayId as string;
const projectGateway = await projectGatewayDAL.findOne({ id: projectGatewayId, projectId });
if (!projectGateway)
throw new NotFoundError({
message: `Project gateway with ID ${projectGatewayId} not found`
});
selectedGatewayId = projectGateway.id;
}
const isConnected = await selectedProvider.validateConnection(newInput);
if (!isConnected) throw new BadRequestError({ message: "Provider connection failed" });
@@ -204,7 +237,8 @@ export const dynamicSecretServiceFactory = ({
defaultTTL,
name: newName ?? name,
status: null,
statusDetails: null
statusDetails: null,
projectGatewayId: selectedGatewayId
});
return updatedDynamicCfg;

@@ -1,5 +1,6 @@
import { SnowflakeProvider } from "@app/ee/services/dynamic-secret/providers/snowflake";
import { TGatewayServiceFactory } from "../../gateway/gateway-service";
import { AwsElastiCacheDatabaseProvider } from "./aws-elasticache";
import { AwsIamProvider } from "./aws-iam";
import { AzureEntraIDProvider } from "./azure-entra-id";
@@ -16,8 +17,14 @@ import { SapHanaProvider } from "./sap-hana";
import { SqlDatabaseProvider } from "./sql-database";
import { TotpProvider } from "./totp";
export const buildDynamicSecretProviders = (): Record<DynamicSecretProviders, TDynamicProviderFns> => ({
[DynamicSecretProviders.SqlDatabase]: SqlDatabaseProvider(),
type TBuildDynamicSecretProviderDTO = {
gatewayService: Pick<TGatewayServiceFactory, "fnGetGatewayClientTls">;
};
export const buildDynamicSecretProviders = ({
gatewayService
}: TBuildDynamicSecretProviderDTO): Record<DynamicSecretProviders, TDynamicProviderFns> => ({
[DynamicSecretProviders.SqlDatabase]: SqlDatabaseProvider({ gatewayService }),
[DynamicSecretProviders.Cassandra]: CassandraProvider(),
[DynamicSecretProviders.AwsIam]: AwsIamProvider(),
[DynamicSecretProviders.Redis]: RedisDatabaseProvider(),

@@ -103,7 +103,8 @@ export const DynamicSecretSqlDBSchema = z.object({
creationStatement: z.string().trim(),
revocationStatement: z.string().trim(),
renewStatement: z.string().trim().optional(),
ca: z.string().optional()
ca: z.string().optional(),
projectGatewayId: z.string().nullable().optional()
});
export const DynamicSecretCassandraSchema = z.object({

@@ -3,8 +3,10 @@ import knex from "knex";
import { customAlphabet } from "nanoid";
import { z } from "zod";
import { withGatewayProxy } from "@app/lib/gateway";
import { alphaNumericNanoId } from "@app/lib/nanoid";
import { TGatewayServiceFactory } from "../../gateway/gateway-service";
import { verifyHostInputValidity } from "../dynamic-secret-fns";
import { DynamicSecretSqlDBSchema, SqlProviders, TDynamicProviderFns } from "./models";
@@ -25,10 +27,14 @@ const generateUsername = (provider: SqlProviders) => {
return alphaNumericNanoId(32);
};
export const SqlDatabaseProvider = (): TDynamicProviderFns => {
type TSqlDatabaseProviderDTO = {
gatewayService: Pick<TGatewayServiceFactory, "fnGetGatewayClientTls">;
};
export const SqlDatabaseProvider = ({ gatewayService }: TSqlDatabaseProviderDTO): TDynamicProviderFns => {
const validateProviderInputs = async (inputs: unknown) => {
const providerInputs = await DynamicSecretSqlDBSchema.parseAsync(inputs);
verifyHostInputValidity(providerInputs.host);
verifyHostInputValidity(providerInputs.host, Boolean(providerInputs.projectGatewayId));
return providerInputs;
};
@@ -45,7 +51,6 @@ export const SqlDatabaseProvider = (): TDynamicProviderFns => {
user: providerInputs.username,
password: providerInputs.password,
ssl,
pool: { min: 0, max: 1 },
// @ts-expect-error this is because of knexjs type signature issue. This is directly passed to driver
// https://github.com/knex/knex/blob/b6507a7129d2b9fafebf5f831494431e64c6a8a0/lib/dialects/mssql/index.js#L66
// https://github.com/tediousjs/tedious/blob/ebb023ed90969a7ec0e4b036533ad52739d921f7/test/config.ci.ts#L19
@@ -61,61 +66,112 @@ export const SqlDatabaseProvider = (): TDynamicProviderFns => {
return db;
};
const gatewayProxyWrapper = async (
providerInputs: z.infer<typeof DynamicSecretSqlDBSchema>,
gatewayCallback: (host: string, port: number) => Promise<void>
) => {
const relayDetails = await gatewayService.fnGetGatewayClientTls(providerInputs.projectGatewayId as string);
const [relayHost, relayPort] = relayDetails.relayAddress.split(":");
await withGatewayProxy(
async (port) => {
await gatewayCallback("localhost", port);
},
{
targetHost: providerInputs.host,
targetPort: providerInputs.port,
relayHost,
relayPort: Number(relayPort),
identityId: relayDetails.identityId,
orgId: relayDetails.orgId,
tlsOptions: {
ca: relayDetails.certChain,
cert: relayDetails.certificate,
key: relayDetails.privateKey
}
}
);
};
const validateConnection = async (inputs: unknown) => {
const providerInputs = await validateProviderInputs(inputs);
const db = await $getClient(providerInputs);
// oracle needs from keyword
const testStatement = providerInputs.client === SqlProviders.Oracle ? "SELECT 1 FROM DUAL" : "SELECT 1";
let isConnected = false;
const gatewayCallback = async (host = providerInputs.host, port = providerInputs.port) => {
const db = await $getClient({ ...providerInputs, port, host });
// oracle needs from keyword
const testStatement = providerInputs.client === SqlProviders.Oracle ? "SELECT 1 FROM DUAL" : "SELECT 1";
const isConnected = await db.raw(testStatement).then(() => true);
await db.destroy();
isConnected = await db.raw(testStatement).then(() => true);
await db.destroy();
};
if (providerInputs.projectGatewayId) {
await gatewayProxyWrapper(providerInputs, gatewayCallback);
} else {
await gatewayCallback();
}
return isConnected;
};
const create = async (inputs: unknown, expireAt: number) => {
const providerInputs = await validateProviderInputs(inputs);
const db = await $getClient(providerInputs);
const username = generateUsername(providerInputs.client);
const password = generatePassword(providerInputs.client);
const { database } = providerInputs;
const expiration = new Date(expireAt).toISOString();
const gatewayCallback = async (host = providerInputs.host, port = providerInputs.port) => {
const db = await $getClient({ ...providerInputs, port, host });
try {
const { database } = providerInputs;
const expiration = new Date(expireAt).toISOString();
const creationStatement = handlebars.compile(providerInputs.creationStatement, { noEscape: true })({
username,
password,
expiration,
database
});
const creationStatement = handlebars.compile(providerInputs.creationStatement, { noEscape: true })({
username,
password,
expiration,
database
});
const queries = creationStatement.toString().split(";").filter(Boolean);
await db.transaction(async (tx) => {
for (const query of queries) {
// eslint-disable-next-line
await tx.raw(query);
const queries = creationStatement.toString().split(";").filter(Boolean);
await db.transaction(async (tx) => {
for (const query of queries) {
// eslint-disable-next-line
await tx.raw(query);
}
});
} finally {
await db.destroy();
}
});
await db.destroy();
};
if (providerInputs.projectGatewayId) {
await gatewayProxyWrapper(providerInputs, gatewayCallback);
} else {
await gatewayCallback();
}
return { entityId: username, data: { DB_USERNAME: username, DB_PASSWORD: password } };
};
const revoke = async (inputs: unknown, entityId: string) => {
const providerInputs = await validateProviderInputs(inputs);
const db = await $getClient(providerInputs);
const username = entityId;
const { database } = providerInputs;
const revokeStatement = handlebars.compile(providerInputs.revocationStatement)({ username, database });
const queries = revokeStatement.toString().split(";").filter(Boolean);
await db.transaction(async (tx) => {
for (const query of queries) {
// eslint-disable-next-line
await tx.raw(query);
const gatewayCallback = async (host = providerInputs.host, port = providerInputs.port) => {
const db = await $getClient({ ...providerInputs, port, host });
try {
const revokeStatement = handlebars.compile(providerInputs.revocationStatement)({ username, database });
const queries = revokeStatement.toString().split(";").filter(Boolean);
await db.transaction(async (tx) => {
for (const query of queries) {
// eslint-disable-next-line
await tx.raw(query);
}
});
} finally {
await db.destroy();
}
});
await db.destroy();
};
if (providerInputs.projectGatewayId) {
await gatewayProxyWrapper(providerInputs, gatewayCallback);
} else {
await gatewayCallback();
}
return { entityId: username };
};
@@ -123,28 +179,35 @@ export const SqlDatabaseProvider = (): TDynamicProviderFns => {
const providerInputs = await validateProviderInputs(inputs);
if (!providerInputs.renewStatement) return { entityId };
const db = await $getClient(providerInputs);
const gatewayCallback = async (host = providerInputs.host, port = providerInputs.port) => {
const db = await $getClient({ ...providerInputs, port, host });
const expiration = new Date(expireAt).toISOString();
const { database } = providerInputs;
const expiration = new Date(expireAt).toISOString();
const { database } = providerInputs;
const renewStatement = handlebars.compile(providerInputs.renewStatement)({
username: entityId,
expiration,
database
});
if (renewStatement) {
const queries = renewStatement.toString().split(";").filter(Boolean);
await db.transaction(async (tx) => {
for (const query of queries) {
// eslint-disable-next-line
await tx.raw(query);
}
const renewStatement = handlebars.compile(providerInputs.renewStatement)({
username: entityId,
expiration,
database
});
try {
if (renewStatement) {
const queries = renewStatement.toString().split(";").filter(Boolean);
await db.transaction(async (tx) => {
for (const query of queries) {
// eslint-disable-next-line
await tx.raw(query);
}
});
}
} finally {
await db.destroy();
}
};
if (providerInputs.projectGatewayId) {
await gatewayProxyWrapper(providerInputs, gatewayCallback);
} else {
await gatewayCallback();
}
await db.destroy();
return { entityId };
};
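
// Hedged usage sketch (not part of the diff): exercising the provider with a project gateway so
// connections are tunnelled through gatewayProxyWrapper instead of opened directly. All field
// values are placeholders and gatewayService is assumed to be in scope (see
// buildDynamicSecretProviders above); the input shape follows DynamicSecretSqlDBSchema.
const sqlProvider = SqlDatabaseProvider({ gatewayService });
const canConnect = await sqlProvider.validateConnection({
  client: "postgres", // assumed SqlProviders value
  host: "10.0.12.4", // internal host is acceptable here because projectGatewayId is set
  port: 5432,
  database: "app",
  username: "infisical",
  password: "<db-password>",
  creationStatement: "CREATE USER \"{{username}}\" WITH PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
  revocationStatement: "DROP USER \"{{username}}\";",
  projectGatewayId: "<project-gateway-id>"
});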

@@ -0,0 +1,86 @@
import { Knex } from "knex";
import { TDbClient } from "@app/db";
import { GatewaysSchema, TableName, TGateways } from "@app/db/schemas";
import { DatabaseError } from "@app/lib/errors";
import {
buildFindFilter,
ormify,
selectAllTableCols,
sqlNestRelationships,
TFindFilter,
TFindOpt
} from "@app/lib/knex";
export type TGatewayDALFactory = ReturnType<typeof gatewayDALFactory>;
export const gatewayDALFactory = (db: TDbClient) => {
const orm = ormify(db, TableName.Gateway);
const find = async (filter: TFindFilter<TGateways>, { offset, limit, sort, tx }: TFindOpt<TGateways> = {}) => {
try {
const query = (tx || db)(TableName.Gateway)
// eslint-disable-next-line @typescript-eslint/no-misused-promises
.where(buildFindFilter(filter))
.join(TableName.Identity, `${TableName.Identity}.id`, `${TableName.Gateway}.identityId`)
.leftJoin(TableName.ProjectGateway, `${TableName.ProjectGateway}.gatewayId`, `${TableName.Gateway}.id`)
.leftJoin(TableName.Project, `${TableName.Project}.id`, `${TableName.ProjectGateway}.projectId`)
.select(selectAllTableCols(TableName.Gateway))
.select(
db.ref("name").withSchema(TableName.Identity).as("identityName"),
db.ref("name").withSchema(TableName.Project).as("projectName"),
db.ref("slug").withSchema(TableName.Project).as("projectSlug"),
db.ref("id").withSchema(TableName.Project).as("projectId")
);
if (limit) void query.limit(limit);
if (offset) void query.offset(offset);
if (sort) {
void query.orderBy(sort.map(([column, order, nulls]) => ({ column: column as string, order, nulls })));
}
const docs = await query;
return sqlNestRelationships({
data: docs,
key: "id",
parentMapper: (data) => ({
...GatewaysSchema.parse(data),
identity: { id: data.identityId, name: data.identityName }
}),
childrenMapper: [
{
key: "projectId",
label: "projects" as const,
mapper: ({ projectId, projectName, projectSlug }) => ({
id: projectId,
name: projectName,
slug: projectSlug
})
}
]
});
} catch (error) {
throw new DatabaseError({ error, name: `${TableName.Gateway}: Find` });
}
};
const findByProjectId = async (projectId: string, tx?: Knex) => {
try {
const query = (tx || db)(TableName.Gateway)
.join(TableName.Identity, `${TableName.Identity}.id`, `${TableName.Gateway}.identityId`)
.join(TableName.ProjectGateway, `${TableName.ProjectGateway}.gatewayId`, `${TableName.Gateway}.id`)
.select(selectAllTableCols(TableName.Gateway))
.select(
db.ref("name").withSchema(TableName.Identity).as("identityName"),
db.ref("id").withSchema(TableName.ProjectGateway).as("projectGatewayId")
)
.where({ [`${TableName.ProjectGateway}.projectId` as "projectId"]: projectId });
const docs = await query;
return docs.map((el) => ({ ...el, identity: { id: el.identityId, name: el.identityName } }));
} catch (error) {
throw new DatabaseError({ error, name: `${TableName.Gateway}: Find by project id` });
}
};
return { ...orm, find, findByProjectId };
};

@@ -0,0 +1,652 @@
import crypto from "node:crypto";
import { ForbiddenError } from "@casl/ability";
import * as x509 from "@peculiar/x509";
import { z } from "zod";
import { ActionProjectType } from "@app/db/schemas";
import { KeyStorePrefixes, PgSqlLock, TKeyStoreFactory } from "@app/keystore/keystore";
import { getConfig } from "@app/lib/config/env";
import { BadRequestError, NotFoundError } from "@app/lib/errors";
import { pingGatewayAndVerify } from "@app/lib/gateway";
import { alphaNumericNanoId } from "@app/lib/nanoid";
import { getTurnCredentials } from "@app/lib/turn/credentials";
import { ActorAuthMethod, ActorType } from "@app/services/auth/auth-type";
import { CertExtendedKeyUsage, CertKeyAlgorithm, CertKeyUsage } from "@app/services/certificate/certificate-types";
import {
createSerialNumber,
keyAlgorithmToAlgCfg
} from "@app/services/certificate-authority/certificate-authority-fns";
import { TKmsServiceFactory } from "@app/services/kms/kms-service";
import { KmsDataKey } from "@app/services/kms/kms-types";
import { TLicenseServiceFactory } from "../license/license-service";
import { OrgPermissionGatewayActions, OrgPermissionSubjects } from "../permission/org-permission";
import { TPermissionServiceFactory } from "../permission/permission-service";
import { TGatewayDALFactory } from "./gateway-dal";
import {
TExchangeAllocatedRelayAddressDTO,
TGetGatewayByIdDTO,
TGetProjectGatewayByIdDTO,
THeartBeatDTO,
TListGatewaysDTO,
TUpdateGatewayByIdDTO
} from "./gateway-types";
import { TOrgGatewayConfigDALFactory } from "./org-gateway-config-dal";
import { TProjectGatewayDALFactory } from "./project-gateway-dal";
type TGatewayServiceFactoryDep = {
gatewayDAL: TGatewayDALFactory;
projectGatewayDAL: TProjectGatewayDALFactory;
orgGatewayConfigDAL: Pick<TOrgGatewayConfigDALFactory, "findOne" | "create" | "transaction" | "findById">;
licenseService: Pick<TLicenseServiceFactory, "onPremFeatures" | "getPlan">;
kmsService: Pick<TKmsServiceFactory, "createCipherPairWithDataKey" | "decryptWithRootKey">;
permissionService: Pick<TPermissionServiceFactory, "getOrgPermission" | "getProjectPermission">;
keyStore: Pick<TKeyStoreFactory, "getItem" | "setItemWithExpiry">;
};
export type TGatewayServiceFactory = ReturnType<typeof gatewayServiceFactory>;
const TURN_SERVER_CREDENTIALS_SCHEMA = z.object({
username: z.string(),
password: z.string()
});
export const gatewayServiceFactory = ({
gatewayDAL,
licenseService,
kmsService,
permissionService,
orgGatewayConfigDAL,
keyStore,
projectGatewayDAL
}: TGatewayServiceFactoryDep) => {
const $validateOrgAccessToGateway = async (orgId: string, actorId: string, actorAuthMethod: ActorAuthMethod) => {
// if (!licenseService.onPremFeatures.gateway) {
// throw new BadRequestError({
// message:
// "Gateway handshake failed due to instance plan restrictions. Please upgrade your instance to Infisical's Enterprise plan."
// });
// }
const orgLicensePlan = await licenseService.getPlan(orgId);
if (!orgLicensePlan.gateway) {
throw new BadRequestError({
message:
"Gateway handshake failed due to organization plan restrictions. Please upgrade your instance to Infisical's Enterprise plan."
});
}
const { permission } = await permissionService.getOrgPermission(
ActorType.IDENTITY,
actorId,
orgId,
actorAuthMethod,
orgId
);
ForbiddenError.from(permission).throwUnlessCan(
OrgPermissionGatewayActions.CreateGateways,
OrgPermissionSubjects.Gateway
);
};
const getGatewayRelayDetails = async (actorId: string, actorOrgId: string, actorAuthMethod: ActorAuthMethod) => {
const TURN_CRED_EXPIRY = 10 * 60; // 10 minutes
const envCfg = getConfig();
await $validateOrgAccessToGateway(actorOrgId, actorId, actorAuthMethod);
const { encryptor, decryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId: actorOrgId
});
if (!envCfg.GATEWAY_RELAY_AUTH_SECRET || !envCfg.GATEWAY_RELAY_ADDRESS || !envCfg.GATEWAY_RELAY_REALM) {
throw new BadRequestError({
message: "Gateway handshake failed due to missing instance configuration."
});
}
let turnServerUsername = "";
let turnServerPassword = "";
// cache in redis for 10 minutes (TURN_CRED_EXPIRY) to avoid regenerating TURN credentials on every request
const previousCredential = await keyStore.getItem(KeyStorePrefixes.GatewayIdentityCredential(actorId));
if (previousCredential) {
const el = await TURN_SERVER_CREDENTIALS_SCHEMA.parseAsync(
JSON.parse(decryptor({ cipherTextBlob: Buffer.from(previousCredential, "hex") }).toString())
);
turnServerUsername = el.username;
turnServerPassword = el.password;
} else {
const el = getTurnCredentials(actorId, envCfg.GATEWAY_RELAY_AUTH_SECRET);
await keyStore.setItemWithExpiry(
KeyStorePrefixes.GatewayIdentityCredential(actorId),
TURN_CRED_EXPIRY,
encryptor({
plainText: Buffer.from(JSON.stringify({ username: el.username, password: el.password }))
}).cipherTextBlob.toString("hex")
);
turnServerUsername = el.username;
turnServerPassword = el.password;
}
return {
turnServerUsername,
turnServerPassword,
turnServerRealm: envCfg.GATEWAY_RELAY_REALM,
turnServerAddress: envCfg.GATEWAY_RELAY_ADDRESS,
infisicalStaticIp: envCfg.GATEWAY_INFISICAL_STATIC_IP_ADDRESS
};
};
const exchangeAllocatedRelayAddress = async ({
identityId,
identityOrg,
relayAddress,
identityOrgAuthMethod
}: TExchangeAllocatedRelayAddressDTO) => {
await $validateOrgAccessToGateway(identityOrg, identityId, identityOrgAuthMethod);
const { encryptor: orgKmsEncryptor, decryptor: orgKmsDecryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId: identityOrg
});
const orgGatewayConfig = await orgGatewayConfigDAL.transaction(async (tx) => {
await tx.raw("SELECT pg_advisory_xact_lock(?)", [PgSqlLock.OrgGatewayRootCaInit(identityOrg)]);
const existingGatewayConfig = await orgGatewayConfigDAL.findOne({ orgId: identityOrg });
if (existingGatewayConfig) return existingGatewayConfig;
const alg = keyAlgorithmToAlgCfg(CertKeyAlgorithm.RSA_2048);
// generate root CA
const rootCaKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const rootCaSerialNumber = createSerialNumber();
const rootCaSkObj = crypto.KeyObject.from(rootCaKeys.privateKey);
const rootCaIssuedAt = new Date();
const rootCaKeyAlgorithm = CertKeyAlgorithm.RSA_2048;
const rootCaExpiration = new Date(new Date().setFullYear(2045));
const rootCaCert = await x509.X509CertificateGenerator.createSelfSigned({
name: `O=${identityOrg},CN=Infisical Gateway Root CA`,
serialNumber: rootCaSerialNumber,
notBefore: rootCaIssuedAt,
notAfter: rootCaExpiration,
signingAlgorithm: alg,
keys: rootCaKeys,
extensions: [
// eslint-disable-next-line no-bitwise
new x509.KeyUsagesExtension(x509.KeyUsageFlags.keyCertSign | x509.KeyUsageFlags.cRLSign, true),
await x509.SubjectKeyIdentifierExtension.create(rootCaKeys.publicKey)
]
});
// generate client ca
const clientCaSerialNumber = createSerialNumber();
const clientCaIssuedAt = new Date();
const clientCaExpiration = new Date(new Date().setFullYear(2045));
const clientCaKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const clientCaSkObj = crypto.KeyObject.from(clientCaKeys.privateKey);
const clientCaCert = await x509.X509CertificateGenerator.create({
serialNumber: clientCaSerialNumber,
subject: `O=${identityOrg},CN=Client Intermediate CA`,
issuer: rootCaCert.subject,
notBefore: clientCaIssuedAt,
notAfter: clientCaExpiration,
signingKey: rootCaKeys.privateKey,
publicKey: clientCaKeys.publicKey,
signingAlgorithm: alg,
extensions: [
new x509.KeyUsagesExtension(
// eslint-disable-next-line no-bitwise
x509.KeyUsageFlags.keyCertSign |
x509.KeyUsageFlags.cRLSign |
x509.KeyUsageFlags.digitalSignature |
x509.KeyUsageFlags.keyEncipherment,
true
),
new x509.BasicConstraintsExtension(true, 0, true),
await x509.AuthorityKeyIdentifierExtension.create(rootCaCert, false),
await x509.SubjectKeyIdentifierExtension.create(clientCaKeys.publicKey)
]
});
const clientKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const clientCertSerialNumber = createSerialNumber();
const clientCert = await x509.X509CertificateGenerator.create({
serialNumber: clientCertSerialNumber,
subject: `O=${identityOrg},OU=gateway-client,CN=cloud`,
issuer: clientCaCert.subject,
notAfter: clientCaExpiration,
notBefore: clientCaIssuedAt,
signingKey: clientCaKeys.privateKey,
publicKey: clientKeys.publicKey,
signingAlgorithm: alg,
extensions: [
new x509.BasicConstraintsExtension(false),
await x509.AuthorityKeyIdentifierExtension.create(clientCaCert, false),
await x509.SubjectKeyIdentifierExtension.create(clientKeys.publicKey),
new x509.CertificatePolicyExtension(["2.5.29.32.0"]), // anyPolicy
new x509.KeyUsagesExtension(
// eslint-disable-next-line no-bitwise
x509.KeyUsageFlags[CertKeyUsage.DIGITAL_SIGNATURE] |
x509.KeyUsageFlags[CertKeyUsage.KEY_ENCIPHERMENT] |
x509.KeyUsageFlags[CertKeyUsage.KEY_AGREEMENT],
true
),
new x509.ExtendedKeyUsageExtension([x509.ExtendedKeyUsage[CertExtendedKeyUsage.CLIENT_AUTH]], true)
]
});
const clientSkObj = crypto.KeyObject.from(clientKeys.privateKey);
// generate gateway ca
const gatewayCaSerialNumber = createSerialNumber();
const gatewayCaIssuedAt = new Date();
const gatewayCaExpiration = new Date(new Date().setFullYear(2045));
const gatewayCaKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const gatewayCaSkObj = crypto.KeyObject.from(gatewayCaKeys.privateKey);
const gatewayCaCert = await x509.X509CertificateGenerator.create({
serialNumber: gatewayCaSerialNumber,
subject: `O=${identityOrg},CN=Gateway CA`,
issuer: rootCaCert.subject,
notBefore: gatewayCaIssuedAt,
notAfter: gatewayCaExpiration,
signingKey: rootCaKeys.privateKey,
publicKey: gatewayCaKeys.publicKey,
signingAlgorithm: alg,
extensions: [
new x509.KeyUsagesExtension(
// eslint-disable-next-line no-bitwise
x509.KeyUsageFlags.keyCertSign |
x509.KeyUsageFlags.cRLSign |
x509.KeyUsageFlags.digitalSignature |
x509.KeyUsageFlags.keyEncipherment,
true
),
new x509.BasicConstraintsExtension(true, 0, true),
await x509.AuthorityKeyIdentifierExtension.create(rootCaCert, false),
await x509.SubjectKeyIdentifierExtension.create(gatewayCaKeys.publicKey)
]
});
return orgGatewayConfigDAL.create({
orgId: identityOrg,
rootCaIssuedAt,
rootCaExpiration,
rootCaSerialNumber,
rootCaKeyAlgorithm,
encryptedRootCaPrivateKey: orgKmsEncryptor({
plainText: rootCaSkObj.export({
type: "pkcs8",
format: "der"
})
}).cipherTextBlob,
encryptedRootCaCertificate: orgKmsEncryptor({ plainText: Buffer.from(rootCaCert.rawData) }).cipherTextBlob,
clientCaIssuedAt,
clientCaExpiration,
clientCaSerialNumber,
encryptedClientCaPrivateKey: orgKmsEncryptor({
plainText: clientCaSkObj.export({
type: "pkcs8",
format: "der"
})
}).cipherTextBlob,
encryptedClientCaCertificate: orgKmsEncryptor({
plainText: Buffer.from(clientCaCert.rawData)
}).cipherTextBlob,
clientCertIssuedAt: clientCaIssuedAt,
clientCertExpiration: clientCaExpiration,
clientCertKeyAlgorithm: CertKeyAlgorithm.RSA_2048,
clientCertSerialNumber,
encryptedClientPrivateKey: orgKmsEncryptor({
plainText: clientSkObj.export({
type: "pkcs8",
format: "der"
})
}).cipherTextBlob,
encryptedClientCertificate: orgKmsEncryptor({
plainText: Buffer.from(clientCert.rawData)
}).cipherTextBlob,
gatewayCaIssuedAt,
gatewayCaExpiration,
gatewayCaSerialNumber,
encryptedGatewayCaPrivateKey: orgKmsEncryptor({
plainText: gatewayCaSkObj.export({
type: "pkcs8",
format: "der"
})
}).cipherTextBlob,
encryptedGatewayCaCertificate: orgKmsEncryptor({
plainText: Buffer.from(gatewayCaCert.rawData)
}).cipherTextBlob
});
});
const rootCaCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedRootCaCertificate
})
);
const clientCaCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedClientCaCertificate
})
);
const gatewayCaAlg = keyAlgorithmToAlgCfg(orgGatewayConfig.rootCaKeyAlgorithm as CertKeyAlgorithm);
const gatewayCaSkObj = crypto.createPrivateKey({
key: orgKmsDecryptor({ cipherTextBlob: orgGatewayConfig.encryptedGatewayCaPrivateKey }),
format: "der",
type: "pkcs8"
});
const gatewayCaCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedGatewayCaCertificate
})
);
const gatewayCaPrivateKey = await crypto.subtle.importKey(
"pkcs8",
gatewayCaSkObj.export({ format: "der", type: "pkcs8" }),
gatewayCaAlg,
true,
["sign"]
);
const alg = keyAlgorithmToAlgCfg(CertKeyAlgorithm.RSA_2048);
const gatewayKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const certIssuedAt = new Date();
// gateway certificates are issued for one month, so the gateway must periodically repeat this exchange to renew them
const certExpireAt = new Date(new Date().setMonth(new Date().getMonth() + 1));
const extensions: x509.Extension[] = [
new x509.BasicConstraintsExtension(false),
await x509.AuthorityKeyIdentifierExtension.create(gatewayCaCert, false),
await x509.SubjectKeyIdentifierExtension.create(gatewayKeys.publicKey),
new x509.CertificatePolicyExtension(["2.5.29.32.0"]), // anyPolicy
new x509.KeyUsagesExtension(
// eslint-disable-next-line no-bitwise
x509.KeyUsageFlags[CertKeyUsage.DIGITAL_SIGNATURE] | x509.KeyUsageFlags[CertKeyUsage.KEY_ENCIPHERMENT],
true
),
new x509.ExtendedKeyUsageExtension([x509.ExtendedKeyUsage[CertExtendedKeyUsage.SERVER_AUTH]], true),
// san
new x509.SubjectAlternativeNameExtension([{ type: "ip", value: relayAddress.split(":")[0] }], false)
];
const serialNumber = createSerialNumber();
const privateKey = crypto.KeyObject.from(gatewayKeys.privateKey);
const gatewayCertificate = await x509.X509CertificateGenerator.create({
serialNumber,
subject: `CN=${identityId},O=${identityOrg},OU=Gateway`,
issuer: gatewayCaCert.subject,
notBefore: certIssuedAt,
notAfter: certExpireAt,
signingKey: gatewayCaPrivateKey,
publicKey: gatewayKeys.publicKey,
signingAlgorithm: alg,
extensions
});
const appCfg = getConfig();
// just for local development
const formatedRelayAddress =
appCfg.NODE_ENV === "development" ? relayAddress.replace("127.0.0.1", "host.docker.internal") : relayAddress;
await gatewayDAL.transaction(async (tx) => {
await tx.raw("SELECT pg_advisory_xact_lock(?)", [PgSqlLock.OrgGatewayCertExchange(identityOrg)]);
const existingGateway = await gatewayDAL.findOne({ identityId, orgGatewayRootCaId: orgGatewayConfig.id });
if (existingGateway) {
return gatewayDAL.updateById(existingGateway.id, {
keyAlgorithm: CertKeyAlgorithm.RSA_2048,
issuedAt: certIssuedAt,
expiration: certExpireAt,
serialNumber,
relayAddress: orgKmsEncryptor({
plainText: Buffer.from(formatedRelayAddress)
}).cipherTextBlob
});
}
return gatewayDAL.create({
keyAlgorithm: CertKeyAlgorithm.RSA_2048,
issuedAt: certIssuedAt,
expiration: certExpireAt,
serialNumber,
relayAddress: orgKmsEncryptor({
plainText: Buffer.from(formatedRelayAddress)
}).cipherTextBlob,
identityId,
orgGatewayRootCaId: orgGatewayConfig.id,
name: `gateway-${alphaNumericNanoId(6).toLowerCase()}`
});
});
const gatewayCertificateChain = `${clientCaCert.toString("pem")}\n${rootCaCert.toString("pem")}`.trim();
return {
serialNumber,
privateKey: privateKey.export({ format: "pem", type: "pkcs8" }) as string,
certificate: gatewayCertificate.toString("pem"),
certificateChain: gatewayCertificateChain
};
};
const heartbeat = async ({ orgPermission }: THeartBeatDTO) => {
await $validateOrgAccessToGateway(orgPermission.orgId, orgPermission.id, orgPermission.authMethod);
const orgGatewayConfig = await orgGatewayConfigDAL.findOne({ orgId: orgPermission.orgId });
if (!orgGatewayConfig) throw new NotFoundError({ message: `Gateway configuration for organization with ID ${orgPermission.orgId} not found.` });
const [gateway] = await gatewayDAL.find({ identityId: orgPermission.id, orgGatewayRootCaId: orgGatewayConfig.id });
if (!gateway) throw new NotFoundError({ message: `Gateway for identity with ID ${orgPermission.id} not found.` });
const { decryptor: orgKmsDecryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId: orgGatewayConfig.orgId
});
const rootCaCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedRootCaCertificate
})
);
const gatewayCaCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedGatewayCaCertificate
})
);
const clientCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedClientCertificate
})
);
const privateKey = crypto
.createPrivateKey({
key: orgKmsDecryptor({ cipherTextBlob: orgGatewayConfig.encryptedClientPrivateKey }),
format: "der",
type: "pkcs8"
})
.export({ type: "pkcs8", format: "pem" });
const relayAddress = orgKmsDecryptor({ cipherTextBlob: gateway.relayAddress }).toString();
const [relayHost, relayPort] = relayAddress.split(":");
await pingGatewayAndVerify({
relayHost,
relayPort: Number(relayPort),
tlsOptions: {
key: privateKey,
ca: `${gatewayCaCert.toString("pem")}\n${rootCaCert.toString("pem")}`.trim(),
cert: clientCert.toString("pem")
},
identityId: orgPermission.id,
orgId: orgPermission.orgId
});
await gatewayDAL.updateById(gateway.id, { heartbeat: new Date() });
};
const listGateways = async ({ orgPermission }: TListGatewaysDTO) => {
const { permission } = await permissionService.getOrgPermission(
orgPermission.type,
orgPermission.id,
orgPermission.orgId,
orgPermission.authMethod,
orgPermission.orgId
);
ForbiddenError.from(permission).throwUnlessCan(
OrgPermissionGatewayActions.ListGateways,
OrgPermissionSubjects.Gateway
);
const orgGatewayConfig = await orgGatewayConfigDAL.findOne({ orgId: orgPermission.orgId });
if (!orgGatewayConfig) return [];
const gateways = await gatewayDAL.find({
orgGatewayRootCaId: orgGatewayConfig.id
});
return gateways;
};
const getGatewayById = async ({ orgPermission, id }: TGetGatewayByIdDTO) => {
const { permission } = await permissionService.getOrgPermission(
orgPermission.type,
orgPermission.id,
orgPermission.orgId,
orgPermission.authMethod,
orgPermission.orgId
);
ForbiddenError.from(permission).throwUnlessCan(
OrgPermissionGatewayActions.ListGateways,
OrgPermissionSubjects.Gateway
);
const orgGatewayConfig = await orgGatewayConfigDAL.findOne({ orgId: orgPermission.orgId });
if (!orgGatewayConfig) throw new NotFoundError({ message: `Gateway with ID ${id} not found.` });
const [gateway] = await gatewayDAL.find({ id, orgGatewayRootCaId: orgGatewayConfig.id });
if (!gateway) throw new NotFoundError({ message: `Gateway with ID ${id} not found.` });
return gateway;
};
const updateGatewayById = async ({ orgPermission, id, name, projectIds }: TUpdateGatewayByIdDTO) => {
const { permission } = await permissionService.getOrgPermission(
orgPermission.type,
orgPermission.id,
orgPermission.orgId,
orgPermission.authMethod,
orgPermission.orgId
);
ForbiddenError.from(permission).throwUnlessCan(
OrgPermissionGatewayActions.EditGateways,
OrgPermissionSubjects.Gateway
);
const orgGatewayConfig = await orgGatewayConfigDAL.findOne({ orgId: orgPermission.orgId });
if (!orgGatewayConfig) throw new NotFoundError({ message: `Gateway with ID ${id} not found.` });
const [gateway] = await gatewayDAL.update({ id, orgGatewayRootCaId: orgGatewayConfig.id }, { name });
if (!gateway) throw new NotFoundError({ message: `Gateway with ID ${id} not found.` });
if (projectIds) {
await projectGatewayDAL.transaction(async (tx) => {
await projectGatewayDAL.delete({ gatewayId: gateway.id }, tx);
await projectGatewayDAL.insertMany(
projectIds.map((el) => ({ gatewayId: gateway.id, projectId: el })),
tx
);
});
}
return gateway;
};
const deleteGatewayById = async ({ orgPermission, id }: TGetGatewayByIdDTO) => {
const { permission } = await permissionService.getOrgPermission(
orgPermission.type,
orgPermission.id,
orgPermission.orgId,
orgPermission.authMethod,
orgPermission.orgId
);
ForbiddenError.from(permission).throwUnlessCan(
OrgPermissionGatewayActions.DeleteGateways,
OrgPermissionSubjects.Gateway
);
const orgGatewayConfig = await orgGatewayConfigDAL.findOne({ orgId: orgPermission.orgId });
if (!orgGatewayConfig) throw new NotFoundError({ message: `Gateway with ID ${id} not found.` });
const [gateway] = await gatewayDAL.delete({ id, orgGatewayRootCaId: orgGatewayConfig.id });
if (!gateway) throw new NotFoundError({ message: `Gateway with ID ${id} not found.` });
return gateway;
};
const getProjectGateways = async ({ projectId, projectPermission }: TGetProjectGatewayByIdDTO) => {
await permissionService.getProjectPermission({
projectId,
actor: projectPermission.type,
actorId: projectPermission.id,
actorOrgId: projectPermission.orgId,
actorAuthMethod: projectPermission.authMethod,
actionProjectType: ActionProjectType.Any
});
const gateways = await gatewayDAL.findByProjectId(projectId);
return gateways;
};
// this has no permission check and is used directly by dynamic secrets,
// which are assumed to have already performed their own permission check
const fnGetGatewayClientTls = async (projectGatewayId: string) => {
const projectGateway = await projectGatewayDAL.findById(projectGatewayId);
if (!projectGateway) throw new NotFoundError({ message: `Project gateway with ID ${projectGatewayId} not found.` });
const { gatewayId } = projectGateway;
const gateway = await gatewayDAL.findById(gatewayId);
if (!gateway) throw new NotFoundError({ message: `Gateway with ID ${gatewayId} not found.` });
const orgGatewayConfig = await orgGatewayConfigDAL.findById(gateway.orgGatewayRootCaId);
const { decryptor: orgKmsDecryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId: orgGatewayConfig.orgId
});
const rootCaCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedRootCaCertificate
})
);
const gatewayCaCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedGatewayCaCertificate
})
);
const clientCert = new x509.X509Certificate(
orgKmsDecryptor({
cipherTextBlob: orgGatewayConfig.encryptedClientCertificate
})
);
const clientSkObj = crypto.createPrivateKey({
key: orgKmsDecryptor({ cipherTextBlob: orgGatewayConfig.encryptedClientPrivateKey }),
format: "der",
type: "pkcs8"
});
return {
relayAddress: orgKmsDecryptor({ cipherTextBlob: gateway.relayAddress }).toString(),
privateKey: clientSkObj.export({ type: "pkcs8", format: "pem" }),
certificate: clientCert.toString("pem"),
certChain: `${gatewayCaCert.toString("pem")}\n${rootCaCert.toString("pem")}`.trim(),
identityId: gateway.identityId,
orgId: orgGatewayConfig.orgId
};
};
return {
getGatewayRelayDetails,
exchangeAllocatedRelayAddress,
listGateways,
getGatewayById,
updateGatewayById,
deleteGatewayById,
getProjectGateways,
fnGetGatewayClientTls,
heartbeat
};
};
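
// Hedged sketch (not part of the diff): a caller that has already performed its own permission
// check can tunnel a TCP target through a gateway using the TLS material returned by
// fnGetGatewayClientTls, mirroring gatewayProxyWrapper in the SQL provider hunk above.
// gatewayService is assumed to be an instance of the factory above, withGatewayProxy comes
// from "@app/lib/gateway", and the target host/port are placeholders.
const dialThroughGateway = async (projectGatewayId: string) => {
  const relay = await gatewayService.fnGetGatewayClientTls(projectGatewayId);
  const [relayHost, relayPort] = relay.relayAddress.split(":");
  await withGatewayProxy(
    async (localPort) => {
      // connect to localhost:<localPort>; traffic is relayed to the real target behind the gateway
    },
    {
      targetHost: "10.0.0.20",
      targetPort: 5432,
      relayHost,
      relayPort: Number(relayPort),
      identityId: relay.identityId,
      orgId: relay.orgId,
      tlsOptions: {
        ca: relay.certChain,
        cert: relay.certificate,
        key: relay.privateKey
      }
    }
  );
};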

@@ -0,0 +1,39 @@
import { OrgServiceActor } from "@app/lib/types";
import { ActorAuthMethod } from "@app/services/auth/auth-type";
export type TExchangeAllocatedRelayAddressDTO = {
identityId: string;
identityOrg: string;
identityOrgAuthMethod: ActorAuthMethod;
relayAddress: string;
};
export type TListGatewaysDTO = {
orgPermission: OrgServiceActor;
};
export type TGetGatewayByIdDTO = {
id: string;
orgPermission: OrgServiceActor;
};
export type TUpdateGatewayByIdDTO = {
id: string;
name?: string;
projectIds?: string[];
orgPermission: OrgServiceActor;
};
export type TDeleteGatewayByIdDTO = {
id: string;
orgPermission: OrgServiceActor;
};
export type TGetProjectGatewayByIdDTO = {
projectId: string;
projectPermission: OrgServiceActor;
};
export type THeartBeatDTO = {
orgPermission: OrgServiceActor;
};

@@ -0,0 +1,10 @@
import { TDbClient } from "@app/db";
import { TableName } from "@app/db/schemas";
import { ormify } from "@app/lib/knex";
export type TOrgGatewayConfigDALFactory = ReturnType<typeof orgGatewayConfigDALFactory>;
export const orgGatewayConfigDALFactory = (db: TDbClient) => {
const orm = ormify(db, TableName.OrgGatewayConfig);
return orm;
};

@@ -0,0 +1,10 @@
import { TDbClient } from "@app/db";
import { TableName } from "@app/db/schemas";
import { ormify } from "@app/lib/knex";
export type TProjectGatewayDALFactory = ReturnType<typeof projectGatewayDALFactory>;
export const projectGatewayDALFactory = (db: TDbClient) => {
const orm = ormify(db, TableName.ProjectGateway);
return orm;
};

@@ -0,0 +1,11 @@
import { TDbClient } from "@app/db";
import { TableName } from "@app/db/schemas";
import { ormify } from "@app/lib/knex";
export type TKmipClientCertificateDALFactory = ReturnType<typeof kmipClientCertificateDALFactory>;
export const kmipClientCertificateDALFactory = (db: TDbClient) => {
const kmipClientCertOrm = ormify(db, TableName.KmipClientCertificates);
return kmipClientCertOrm;
};

@@ -0,0 +1,86 @@
import { Knex } from "knex";
import { TDbClient } from "@app/db";
import { TableName, TKmipClients } from "@app/db/schemas";
import { DatabaseError } from "@app/lib/errors";
import { ormify, selectAllTableCols } from "@app/lib/knex";
import { OrderByDirection } from "@app/lib/types";
import { KmipClientOrderBy } from "./kmip-types";
export type TKmipClientDALFactory = ReturnType<typeof kmipClientDALFactory>;
export const kmipClientDALFactory = (db: TDbClient) => {
const kmipClientOrm = ormify(db, TableName.KmipClient);
const findByProjectAndClientId = async (projectId: string, clientId: string) => {
try {
const client = await db
.replicaNode()(TableName.KmipClient)
.join(TableName.Project, `${TableName.Project}.id`, `${TableName.KmipClient}.projectId`)
.join(TableName.Organization, `${TableName.Organization}.id`, `${TableName.Project}.orgId`)
.where({
[`${TableName.KmipClient}.projectId` as "projectId"]: projectId,
[`${TableName.KmipClient}.id` as "id"]: clientId
})
.select(selectAllTableCols(TableName.KmipClient))
.select(db.ref("id").withSchema(TableName.Organization).as("orgId"))
.first();
return client;
} catch (error) {
throw new DatabaseError({ error, name: "Find by project and client ID" });
}
};
const findByProjectId = async (
{
projectId,
offset = 0,
limit,
orderBy = KmipClientOrderBy.Name,
orderDirection = OrderByDirection.ASC,
search
}: {
projectId: string;
offset?: number;
limit?: number;
orderBy?: KmipClientOrderBy;
orderDirection?: OrderByDirection;
search?: string;
},
tx?: Knex
) => {
try {
const query = (tx || db.replicaNode())(TableName.KmipClient)
.where("projectId", projectId)
.where((qb) => {
if (search) {
void qb.whereILike("name", `%${search}%`);
}
})
.select<
(TKmipClients & {
total_count: number;
})[]
>(selectAllTableCols(TableName.KmipClient), db.raw(`count(*) OVER() as total_count`))
.orderBy(orderBy, orderDirection);
if (limit) {
void query.limit(limit).offset(offset);
}
const data = await query;
return { kmipClients: data, totalCount: Number(data?.[0]?.total_count ?? 0) };
} catch (error) {
throw new DatabaseError({ error, name: "Find KMIP clients by project id" });
}
};
return {
...kmipClientOrm,
findByProjectId,
findByProjectAndClientId
};
};
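
For reference, a hedged sketch of calling the paginated query above from a service; the parameter names mirror the function signature, and totalCount is derived from the count(*) OVER() window column, so it reflects all matching rows rather than just the returned page:

// hypothetical call site
const { kmipClients, totalCount } = await kmipClientDAL.findByProjectId({
  projectId,
  offset: 0,
  limit: 20,
  orderBy: KmipClientOrderBy.Name,
  orderDirection: OrderByDirection.ASC,
  search: "prod" // optional ILIKE filter on name
});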

@ -0,0 +1,11 @@
export enum KmipPermission {
Create = "create",
Locate = "locate",
Check = "check",
Get = "get",
GetAttributes = "get-attributes",
Activate = "activate",
Revoke = "revoke",
Destroy = "destroy",
Register = "register"
}

@ -0,0 +1,422 @@
import { ForbiddenError } from "@casl/ability";
import { BadRequestError, ForbiddenRequestError, NotFoundError } from "@app/lib/errors";
import { TKmsKeyDALFactory } from "@app/services/kms/kms-key-dal";
import { TKmsServiceFactory } from "@app/services/kms/kms-service";
import { TProjectDALFactory } from "@app/services/project/project-dal";
import { OrgPermissionKmipActions, OrgPermissionSubjects } from "../permission/org-permission";
import { TPermissionServiceFactory } from "../permission/permission-service";
import { TKmipClientDALFactory } from "./kmip-client-dal";
import { KmipPermission } from "./kmip-enum";
import {
TKmipCreateDTO,
TKmipDestroyDTO,
TKmipGetAttributesDTO,
TKmipGetDTO,
TKmipLocateDTO,
TKmipRegisterDTO,
TKmipRevokeDTO
} from "./kmip-types";
type TKmipOperationServiceFactoryDep = {
kmsService: TKmsServiceFactory;
kmsDAL: TKmsKeyDALFactory;
kmipClientDAL: TKmipClientDALFactory;
projectDAL: Pick<TProjectDALFactory, "getProjectFromSplitId" | "findById">;
permissionService: Pick<TPermissionServiceFactory, "getOrgPermission">;
};
export type TKmipOperationServiceFactory = ReturnType<typeof kmipOperationServiceFactory>;
export const kmipOperationServiceFactory = ({
kmsService,
kmsDAL,
projectDAL,
kmipClientDAL,
permissionService
}: TKmipOperationServiceFactoryDep) => {
const create = async ({
projectId,
clientId,
algorithm,
actor,
actorId,
actorAuthMethod,
actorOrgId
}: TKmipCreateDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipClient = await kmipClientDAL.findOne({
id: clientId,
projectId
});
if (!kmipClient.permissions?.includes(KmipPermission.Create)) {
throw new ForbiddenRequestError({
message: "Client does not have sufficient permission to perform KMIP create"
});
}
const kmsKey = await kmsService.generateKmsKey({
encryptionAlgorithm: algorithm,
orgId: actorOrgId,
projectId,
isReserved: false
});
return kmsKey;
};
const destroy = async ({ projectId, id, clientId, actor, actorId, actorOrgId, actorAuthMethod }: TKmipDestroyDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipClient = await kmipClientDAL.findOne({
id: clientId,
projectId
});
if (!kmipClient.permissions?.includes(KmipPermission.Destroy)) {
throw new ForbiddenRequestError({
message: "Client does not have sufficient permission to perform KMIP destroy"
});
}
const key = await kmsDAL.findOne({
id,
projectId
});
if (!key) {
throw new NotFoundError({ message: `Key with ID ${id} not found` });
}
if (key.isReserved) {
throw new BadRequestError({ message: "Cannot destroy reserved keys" });
}
const completeKeyDetails = await kmsDAL.findByIdWithAssociatedKms(id);
if (!completeKeyDetails.internalKms) {
throw new BadRequestError({
message: "Cannot destroy external keys"
});
}
if (!completeKeyDetails.isDisabled) {
throw new BadRequestError({
message: "Cannot destroy active keys"
});
}
const kms = await kmsDAL.deleteById(id);

return kms;
};
const get = async ({ projectId, id, clientId, actor, actorId, actorAuthMethod, actorOrgId }: TKmipGetDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipClient = await kmipClientDAL.findOne({
id: clientId,
projectId
});
if (!kmipClient.permissions?.includes(KmipPermission.Get)) {
throw new ForbiddenRequestError({
message: "Client does not have sufficient permission to perform KMIP get"
});
}
const key = await kmsDAL.findOne({
id,
projectId
});
if (!key) {
throw new NotFoundError({ message: `Key with ID ${id} not found` });
}
if (key.isReserved) {
throw new BadRequestError({ message: "Cannot get reserved keys" });
}
const completeKeyDetails = await kmsDAL.findByIdWithAssociatedKms(id);
if (!completeKeyDetails.internalKms) {
throw new BadRequestError({
message: "Cannot get external keys"
});
}
const kmsKey = await kmsService.getKeyMaterial({
kmsId: key.id
});
return {
id: key.id,
value: kmsKey.toString("base64"),
algorithm: completeKeyDetails.internalKms.encryptionAlgorithm,
isActive: !key.isDisabled,
createdAt: key.createdAt,
updatedAt: key.updatedAt
};
};
const activate = async ({ projectId, id, clientId, actor, actorId, actorAuthMethod, actorOrgId }: TKmipGetDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipClient = await kmipClientDAL.findOne({
id: clientId,
projectId
});
if (!kmipClient.permissions?.includes(KmipPermission.Activate)) {
throw new ForbiddenRequestError({
message: "Client does not have sufficient permission to perform KMIP activate"
});
}
const key = await kmsDAL.findOne({
id,
projectId
});
if (!key) {
throw new NotFoundError({ message: `Key with ID ${id} not found` });
}
return {
id: key.id,
isActive: !key.isDisabled
};
};
const revoke = async ({ projectId, id, clientId, actor, actorId, actorAuthMethod, actorOrgId }: TKmipRevokeDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipClient = await kmipClientDAL.findOne({
id: clientId,
projectId
});
if (!kmipClient.permissions?.includes(KmipPermission.Revoke)) {
throw new ForbiddenRequestError({
message: "Client does not have sufficient permission to perform KMIP revoke"
});
}
const key = await kmsDAL.findOne({
id,
projectId
});
if (!key) {
throw new NotFoundError({ message: `Key with ID ${id} not found` });
}
if (key.isReserved) {
throw new BadRequestError({ message: "Cannot revoke reserved keys" });
}
const completeKeyDetails = await kmsDAL.findByIdWithAssociatedKms(id);
if (!completeKeyDetails.internalKms) {
throw new BadRequestError({
message: "Cannot revoke external keys"
});
}
const revokedKey = await kmsDAL.updateById(key.id, {
isDisabled: true
});
return {
id: key.id,
updatedAt: revokedKey.updatedAt
};
};
const getAttributes = async ({
projectId,
id,
clientId,
actor,
actorId,
actorAuthMethod,
actorOrgId
}: TKmipGetAttributesDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipClient = await kmipClientDAL.findOne({
id: clientId,
projectId
});
if (!kmipClient.permissions?.includes(KmipPermission.GetAttributes)) {
throw new ForbiddenRequestError({
message: "Client does not have sufficient permission to perform KMIP get attributes"
});
}
const key = await kmsDAL.findOne({
id,
projectId
});
if (!key) {
throw new NotFoundError({ message: `Key with ID ${id} not found` });
}
if (key.isReserved) {
throw new BadRequestError({ message: "Cannot get reserved keys" });
}
const completeKeyDetails = await kmsDAL.findByIdWithAssociatedKms(id);
if (!completeKeyDetails.internalKms) {
throw new BadRequestError({
message: "Cannot get external keys"
});
}
return {
id: key.id,
algorithm: completeKeyDetails.internalKms.encryptionAlgorithm,
isActive: !key.isDisabled,
createdAt: key.createdAt,
updatedAt: key.updatedAt
};
};
const locate = async ({ projectId, clientId, actor, actorId, actorAuthMethod, actorOrgId }: TKmipLocateDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipClient = await kmipClientDAL.findOne({
id: clientId,
projectId
});
if (!kmipClient.permissions?.includes(KmipPermission.Locate)) {
throw new ForbiddenRequestError({
message: "Client does not have sufficient permission to perform KMIP locate"
});
}
const keys = await kmsDAL.findProjectCmeks(projectId);
return keys;
};
const register = async ({
projectId,
clientId,
key,
algorithm,
name,
actor,
actorId,
actorAuthMethod,
actorOrgId
}: TKmipRegisterDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipClient = await kmipClientDAL.findOne({
id: clientId,
projectId
});
if (!kmipClient.permissions?.includes(KmipPermission.Register)) {
throw new ForbiddenRequestError({
message: "Client does not have sufficient permission to perform KMIP register"
});
}
const project = await projectDAL.findById(projectId);
const kmsKey = await kmsService.importKeyMaterial({
name,
key: Buffer.from(key, "base64"),
algorithm,
isReserved: false,
projectId,
orgId: project.orgId
});
return kmsKey;
};
return {
create,
get,
activate,
getAttributes,
destroy,
revoke,
locate,
register
};
};
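
Every operation above follows the same guard sequence: org-level KMIP proxy permission, then the per-client KMIP permission, then the key lookup. A hedged sketch of invoking one of them (actor, actorId, actorAuthMethod and actorOrgId are assumed to come from the authenticated caller; the key id is a placeholder):

// hypothetical call site inside a KMIP proxy handler
const keyMaterial = await kmipOperationService.get({
  projectId: kmipClient.projectId,
  clientId: kmipClient.id,
  id: requestedKeyId, // placeholder for the KMIP unique identifier
  actor,
  actorId,
  actorAuthMethod,
  actorOrgId
});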

@ -0,0 +1,11 @@
import { TDbClient } from "@app/db";
import { TableName } from "@app/db/schemas";
import { ormify } from "@app/lib/knex";
export type TKmipOrgConfigDALFactory = ReturnType<typeof kmipOrgConfigDALFactory>;
export const kmipOrgConfigDALFactory = (db: TDbClient) => {
const kmipOrgConfigOrm = ormify(db, TableName.KmipOrgConfig);
return kmipOrgConfigOrm;
};

@ -0,0 +1,11 @@
import { TDbClient } from "@app/db";
import { TableName } from "@app/db/schemas";
import { ormify } from "@app/lib/knex";
export type TKmipOrgServerCertificateDALFactory = ReturnType<typeof kmipOrgServerCertificateDALFactory>;
export const kmipOrgServerCertificateDALFactory = (db: TDbClient) => {
const kmipOrgServerCertificateOrm = ormify(db, TableName.KmipOrgServerCertificates);
return kmipOrgServerCertificateOrm;
};

@ -0,0 +1,817 @@
import { ForbiddenError } from "@casl/ability";
import * as x509 from "@peculiar/x509";
import crypto, { KeyObject } from "crypto";
import ms from "ms";
import { ActionProjectType } from "@app/db/schemas";
import { BadRequestError, InternalServerError, NotFoundError } from "@app/lib/errors";
import { isValidHostname, isValidIp } from "@app/lib/ip";
import { constructPemChainFromCerts } from "@app/services/certificate/certificate-fns";
import { CertExtendedKeyUsage, CertKeyAlgorithm, CertKeyUsage } from "@app/services/certificate/certificate-types";
import {
createSerialNumber,
keyAlgorithmToAlgCfg
} from "@app/services/certificate-authority/certificate-authority-fns";
import { TKmsServiceFactory } from "@app/services/kms/kms-service";
import { KmsDataKey } from "@app/services/kms/kms-types";
import { TLicenseServiceFactory } from "../license/license-service";
import { OrgPermissionKmipActions, OrgPermissionSubjects } from "../permission/org-permission";
import { TPermissionServiceFactory } from "../permission/permission-service";
import { ProjectPermissionKmipActions, ProjectPermissionSub } from "../permission/project-permission";
import { TKmipClientCertificateDALFactory } from "./kmip-client-certificate-dal";
import { TKmipClientDALFactory } from "./kmip-client-dal";
import { TKmipOrgConfigDALFactory } from "./kmip-org-config-dal";
import { TKmipOrgServerCertificateDALFactory } from "./kmip-org-server-certificate-dal";
import {
TCreateKmipClientCertificateDTO,
TCreateKmipClientDTO,
TDeleteKmipClientDTO,
TGenerateOrgKmipServerCertificateDTO,
TGetKmipClientDTO,
TGetOrgKmipDTO,
TListKmipClientsByProjectIdDTO,
TRegisterServerDTO,
TSetupOrgKmipDTO,
TUpdateKmipClientDTO
} from "./kmip-types";
type TKmipServiceFactoryDep = {
kmipClientDAL: TKmipClientDALFactory;
kmipClientCertificateDAL: TKmipClientCertificateDALFactory;
kmipOrgServerCertificateDAL: TKmipOrgServerCertificateDALFactory;
permissionService: Pick<TPermissionServiceFactory, "getProjectPermission" | "getOrgPermission">;
kmsService: Pick<TKmsServiceFactory, "createCipherPairWithDataKey">;
kmipOrgConfigDAL: TKmipOrgConfigDALFactory;
licenseService: Pick<TLicenseServiceFactory, "getPlan">;
};
export type TKmipServiceFactory = ReturnType<typeof kmipServiceFactory>;
export const kmipServiceFactory = ({
kmipClientDAL,
permissionService,
kmipClientCertificateDAL,
kmipOrgConfigDAL,
kmsService,
kmipOrgServerCertificateDAL,
licenseService
}: TKmipServiceFactoryDep) => {
const createKmipClient = async ({
actor,
actorId,
actorOrgId,
actorAuthMethod,
projectId,
name,
description,
permissions
}: TCreateKmipClientDTO) => {
const { permission } = await permissionService.getProjectPermission({
actor,
actorId,
projectId,
actorAuthMethod,
actorOrgId,
actionProjectType: ActionProjectType.KMS
});
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionKmipActions.CreateClients,
ProjectPermissionSub.Kmip
);
const plan = await licenseService.getPlan(actorOrgId);
if (!plan.kmip)
throw new BadRequestError({
message: "Failed to create KMIP client. Upgrade your plan to enterprise."
});
const kmipClient = await kmipClientDAL.create({
projectId,
name,
description,
permissions
});
return kmipClient;
};
const updateKmipClient = async ({
actor,
actorId,
actorOrgId,
actorAuthMethod,
name,
description,
permissions,
id
}: TUpdateKmipClientDTO) => {
const kmipClient = await kmipClientDAL.findById(id);
if (!kmipClient) {
throw new NotFoundError({
message: `KMIP client with ID ${id} does not exist`
});
}
const plan = await licenseService.getPlan(actorOrgId);
if (!plan.kmip)
throw new BadRequestError({
message: "Failed to update KMIP client. Upgrade your plan to enterprise."
});
const { permission } = await permissionService.getProjectPermission({
actor,
actorId,
projectId: kmipClient.projectId,
actorAuthMethod,
actorOrgId,
actionProjectType: ActionProjectType.KMS
});
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionKmipActions.UpdateClients,
ProjectPermissionSub.Kmip
);
const updatedKmipClient = await kmipClientDAL.updateById(id, {
name,
description,
permissions
});
return updatedKmipClient;
};
const deleteKmipClient = async ({ actor, actorId, actorOrgId, actorAuthMethod, id }: TDeleteKmipClientDTO) => {
const kmipClient = await kmipClientDAL.findById(id);
if (!kmipClient) {
throw new NotFoundError({
message: `KMIP client with ID ${id} does not exist`
});
}
const { permission } = await permissionService.getProjectPermission({
actor,
actorId,
projectId: kmipClient.projectId,
actorAuthMethod,
actorOrgId,
actionProjectType: ActionProjectType.KMS
});
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionKmipActions.DeleteClients,
ProjectPermissionSub.Kmip
);
const plan = await licenseService.getPlan(actorOrgId);
if (!plan.kmip)
throw new BadRequestError({
message: "Failed to delete KMIP client. Upgrade your plan to enterprise."
});
const deletedKmipClient = await kmipClientDAL.deleteById(id);
return deletedKmipClient;
};
const getKmipClient = async ({ actor, actorId, actorOrgId, actorAuthMethod, id }: TGetKmipClientDTO) => {
const kmipClient = await kmipClientDAL.findById(id);
if (!kmipClient) {
throw new NotFoundError({
message: `KMIP client with ID ${id} does not exist`
});
}
const { permission } = await permissionService.getProjectPermission({
actor,
actorId,
projectId: kmipClient.projectId,
actorAuthMethod,
actorOrgId,
actionProjectType: ActionProjectType.KMS
});
ForbiddenError.from(permission).throwUnlessCan(ProjectPermissionKmipActions.ReadClients, ProjectPermissionSub.Kmip);
return kmipClient;
};
const listKmipClientsByProjectId = async ({
actor,
actorId,
actorOrgId,
actorAuthMethod,
projectId,
...rest
}: TListKmipClientsByProjectIdDTO) => {
const { permission } = await permissionService.getProjectPermission({
actor,
actorId,
projectId,
actorAuthMethod,
actorOrgId,
actionProjectType: ActionProjectType.KMS
});
ForbiddenError.from(permission).throwUnlessCan(ProjectPermissionKmipActions.ReadClients, ProjectPermissionSub.Kmip);
return kmipClientDAL.findByProjectId({ projectId, ...rest });
};
const createKmipClientCertificate = async ({
actor,
actorId,
actorOrgId,
actorAuthMethod,
ttl,
keyAlgorithm,
clientId
}: TCreateKmipClientCertificateDTO) => {
const kmipClient = await kmipClientDAL.findById(clientId);
if (!kmipClient) {
throw new NotFoundError({
message: `KMIP client with ID ${clientId} does not exist`
});
}
const plan = await licenseService.getPlan(actorOrgId);
if (!plan.kmip)
throw new BadRequestError({
message: "Failed to create KMIP client. Upgrade your plan to enterprise."
});
const { permission } = await permissionService.getProjectPermission({
actor,
actorId,
projectId: kmipClient.projectId,
actorAuthMethod,
actorOrgId,
actionProjectType: ActionProjectType.KMS
});
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionKmipActions.GenerateClientCertificates,
ProjectPermissionSub.Kmip
);
const kmipConfig = await kmipOrgConfigDAL.findOne({
orgId: actorOrgId
});
if (!kmipConfig) {
throw new InternalServerError({
message: "KMIP has not been configured for the organization"
});
}
const { decryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId: actorOrgId
});
const caCertObj = new x509.X509Certificate(
decryptor({ cipherTextBlob: kmipConfig.encryptedClientIntermediateCaCertificate })
);
const notBeforeDate = new Date();
const notAfterDate = new Date(new Date().getTime() + ms(ttl));
const caCertNotBeforeDate = new Date(caCertObj.notBefore);
const caCertNotAfterDate = new Date(caCertObj.notAfter);
// check not before constraint
if (notBeforeDate < caCertNotBeforeDate) {
throw new BadRequestError({ message: "notBefore date is before CA certificate's notBefore date" });
}
if (notBeforeDate > notAfterDate) throw new BadRequestError({ message: "notBefore date is after notAfter date" });
// check not after constraint
if (notAfterDate > caCertNotAfterDate) {
throw new BadRequestError({ message: "notAfter date is after CA certificate's notAfter date" });
}
const alg = keyAlgorithmToAlgCfg(keyAlgorithm);
const leafKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const extensions: x509.Extension[] = [
new x509.BasicConstraintsExtension(false),
await x509.AuthorityKeyIdentifierExtension.create(caCertObj, false),
await x509.SubjectKeyIdentifierExtension.create(leafKeys.publicKey),
new x509.CertificatePolicyExtension(["2.5.29.32.0"]), // anyPolicy
new x509.KeyUsagesExtension(
// eslint-disable-next-line no-bitwise
x509.KeyUsageFlags[CertKeyUsage.DIGITAL_SIGNATURE] |
x509.KeyUsageFlags[CertKeyUsage.KEY_ENCIPHERMENT] |
x509.KeyUsageFlags[CertKeyUsage.KEY_AGREEMENT],
true
),
new x509.ExtendedKeyUsageExtension([x509.ExtendedKeyUsage[CertExtendedKeyUsage.CLIENT_AUTH]], true)
];
const caAlg = keyAlgorithmToAlgCfg(kmipConfig.caKeyAlgorithm as CertKeyAlgorithm);
const caSkObj = crypto.createPrivateKey({
key: decryptor({ cipherTextBlob: kmipConfig.encryptedClientIntermediateCaPrivateKey }),
format: "der",
type: "pkcs8"
});
const caPrivateKey = await crypto.subtle.importKey(
"pkcs8",
caSkObj.export({ format: "der", type: "pkcs8" }),
caAlg,
true,
["sign"]
);
const serialNumber = createSerialNumber();
const leafCert = await x509.X509CertificateGenerator.create({
serialNumber,
subject: `OU=${kmipClient.projectId},CN=${clientId}`,
issuer: caCertObj.subject,
notBefore: notBeforeDate,
notAfter: notAfterDate,
signingKey: caPrivateKey,
publicKey: leafKeys.publicKey,
signingAlgorithm: alg,
extensions
});
const skLeafObj = KeyObject.from(leafKeys.privateKey);
const rootCaCert = new x509.X509Certificate(decryptor({ cipherTextBlob: kmipConfig.encryptedRootCaCertificate }));
const serverIntermediateCaCert = new x509.X509Certificate(
decryptor({ cipherTextBlob: kmipConfig.encryptedServerIntermediateCaCertificate })
);
await kmipClientCertificateDAL.create({
kmipClientId: clientId,
keyAlgorithm,
issuedAt: notBeforeDate,
expiration: notAfterDate,
serialNumber
});
return {
serialNumber,
privateKey: skLeafObj.export({ format: "pem", type: "pkcs8" }) as string,
certificate: leafCert.toString("pem"),
certificateChain: constructPemChainFromCerts([serverIntermediateCaCert, rootCaCert]),
projectId: kmipClient.projectId
};
};
const getServerCertificateBySerialNumber = async (orgId: string, serialNumber: string) => {
const serverCert = await kmipOrgServerCertificateDAL.findOne({
serialNumber,
orgId
});
if (!serverCert) {
throw new NotFoundError({
message: "Server certificate not found"
});
}
const { decryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId
});
const parsedCertificate = new x509.X509Certificate(decryptor({ cipherTextBlob: serverCert.encryptedCertificate }));
return {
publicKey: parsedCertificate.publicKey.toString("pem"),
keyAlgorithm: serverCert.keyAlgorithm as CertKeyAlgorithm
};
};
const setupOrgKmip = async ({ caKeyAlgorithm, actorOrgId, actor, actorId, actorAuthMethod }: TSetupOrgKmipDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Setup, OrgPermissionSubjects.Kmip);
const kmipConfig = await kmipOrgConfigDAL.findOne({
orgId: actorOrgId
});
if (kmipConfig) {
throw new BadRequestError({
message: "KMIP has already been configured for the organization"
});
}
const plan = await licenseService.getPlan(actorOrgId);
if (!plan.kmip)
throw new BadRequestError({
message: "Failed to setup KMIP. Upgrade your plan to enterprise."
});
const alg = keyAlgorithmToAlgCfg(caKeyAlgorithm);
// generate root CA
const rootCaSerialNumber = createSerialNumber();
const rootCaKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const rootCaSkObj = KeyObject.from(rootCaKeys.privateKey);
const rootCaIssuedAt = new Date();
const rootCaExpiration = new Date(new Date().setFullYear(new Date().getFullYear() + 20));
const rootCaCert = await x509.X509CertificateGenerator.createSelfSigned({
name: `CN=KMIP Root CA,OU=${actorOrgId}`,
serialNumber: rootCaSerialNumber,
notBefore: rootCaIssuedAt,
notAfter: rootCaExpiration,
signingAlgorithm: alg,
keys: rootCaKeys,
extensions: [
// eslint-disable-next-line no-bitwise
new x509.KeyUsagesExtension(x509.KeyUsageFlags.keyCertSign | x509.KeyUsageFlags.cRLSign, true),
await x509.SubjectKeyIdentifierExtension.create(rootCaKeys.publicKey)
]
});
// generate intermediate server CA
const serverIntermediateCaSerialNumber = createSerialNumber();
const serverIntermediateCaIssuedAt = new Date();
const serverIntermediateCaExpiration = new Date(new Date().setFullYear(new Date().getFullYear() + 10));
const serverIntermediateCaKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const serverIntermediateCaSkObj = KeyObject.from(serverIntermediateCaKeys.privateKey);
const serverIntermediateCaCert = await x509.X509CertificateGenerator.create({
serialNumber: serverIntermediateCaSerialNumber,
subject: `CN=KMIP Server Intermediate CA,OU=${actorOrgId}`,
issuer: rootCaCert.subject,
notBefore: serverIntermediateCaIssuedAt,
notAfter: serverIntermediateCaExpiration,
signingKey: rootCaKeys.privateKey,
publicKey: serverIntermediateCaKeys.publicKey,
signingAlgorithm: alg,
extensions: [
new x509.KeyUsagesExtension(
// eslint-disable-next-line no-bitwise
x509.KeyUsageFlags.keyCertSign |
x509.KeyUsageFlags.cRLSign |
x509.KeyUsageFlags.digitalSignature |
x509.KeyUsageFlags.keyEncipherment,
true
),
new x509.BasicConstraintsExtension(true, 0, true),
await x509.AuthorityKeyIdentifierExtension.create(rootCaCert, false),
await x509.SubjectKeyIdentifierExtension.create(serverIntermediateCaKeys.publicKey)
]
});
// generate intermediate client CA
const clientIntermediateCaSerialNumber = createSerialNumber();
const clientIntermediateCaIssuedAt = new Date();
const clientIntermediateCaExpiration = new Date(new Date().setFullYear(new Date().getFullYear() + 10));
const clientIntermediateCaKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const clientIntermediateCaSkObj = KeyObject.from(clientIntermediateCaKeys.privateKey);
const clientIntermediateCaCert = await x509.X509CertificateGenerator.create({
serialNumber: clientIntermediateCaSerialNumber,
subject: `CN=KMIP Client Intermediate CA,OU=${actorOrgId}`,
issuer: rootCaCert.subject,
notBefore: clientIntermediateCaIssuedAt,
notAfter: clientIntermediateCaExpiration,
signingKey: rootCaKeys.privateKey,
publicKey: clientIntermediateCaKeys.publicKey,
signingAlgorithm: alg,
extensions: [
new x509.KeyUsagesExtension(
// eslint-disable-next-line no-bitwise
x509.KeyUsageFlags.keyCertSign |
x509.KeyUsageFlags.cRLSign |
x509.KeyUsageFlags.digitalSignature |
x509.KeyUsageFlags.keyEncipherment,
true
),
new x509.BasicConstraintsExtension(true, 0, true),
await x509.AuthorityKeyIdentifierExtension.create(rootCaCert, false),
await x509.SubjectKeyIdentifierExtension.create(clientIntermediateCaKeys.publicKey)
]
});
const { encryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId: actorOrgId
});
await kmipOrgConfigDAL.create({
orgId: actorOrgId,
caKeyAlgorithm,
rootCaIssuedAt,
rootCaExpiration,
rootCaSerialNumber,
encryptedRootCaCertificate: encryptor({ plainText: Buffer.from(rootCaCert.rawData) }).cipherTextBlob,
encryptedRootCaPrivateKey: encryptor({
plainText: rootCaSkObj.export({
type: "pkcs8",
format: "der"
})
}).cipherTextBlob,
serverIntermediateCaIssuedAt,
serverIntermediateCaExpiration,
serverIntermediateCaSerialNumber,
encryptedServerIntermediateCaCertificate: encryptor({
plainText: Buffer.from(new Uint8Array(serverIntermediateCaCert.rawData))
}).cipherTextBlob,
encryptedServerIntermediateCaChain: encryptor({ plainText: Buffer.from(rootCaCert.toString("pem")) })
.cipherTextBlob,
encryptedServerIntermediateCaPrivateKey: encryptor({
plainText: serverIntermediateCaSkObj.export({
type: "pkcs8",
format: "der"
})
}).cipherTextBlob,
clientIntermediateCaIssuedAt,
clientIntermediateCaExpiration,
clientIntermediateCaSerialNumber,
encryptedClientIntermediateCaCertificate: encryptor({
plainText: Buffer.from(new Uint8Array(clientIntermediateCaCert.rawData))
}).cipherTextBlob,
encryptedClientIntermediateCaChain: encryptor({ plainText: Buffer.from(rootCaCert.toString("pem")) })
.cipherTextBlob,
encryptedClientIntermediateCaPrivateKey: encryptor({
plainText: clientIntermediateCaSkObj.export({
type: "pkcs8",
format: "der"
})
}).cipherTextBlob
});
return {
serverCertificateChain: constructPemChainFromCerts([serverIntermediateCaCert, rootCaCert]),
clientCertificateChain: constructPemChainFromCerts([clientIntermediateCaCert, rootCaCert])
};
};
const getOrgKmip = async ({ actorOrgId, actor, actorId, actorAuthMethod }: TGetOrgKmipDTO) => {
await permissionService.getOrgPermission(actor, actorId, actorOrgId, actorAuthMethod, actorOrgId);
const kmipConfig = await kmipOrgConfigDAL.findOne({
orgId: actorOrgId
});
if (!kmipConfig) {
throw new BadRequestError({
message: "KMIP has not been configured for the organization"
});
}
const { decryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId: actorOrgId
});
const rootCaCert = new x509.X509Certificate(decryptor({ cipherTextBlob: kmipConfig.encryptedRootCaCertificate }));
const serverIntermediateCaCert = new x509.X509Certificate(
decryptor({ cipherTextBlob: kmipConfig.encryptedServerIntermediateCaCertificate })
);
const clientIntermediateCaCert = new x509.X509Certificate(
decryptor({ cipherTextBlob: kmipConfig.encryptedClientIntermediateCaCertificate })
);
return {
id: kmipConfig.id,
serverCertificateChain: constructPemChainFromCerts([serverIntermediateCaCert, rootCaCert]),
clientCertificateChain: constructPemChainFromCerts([clientIntermediateCaCert, rootCaCert])
};
};
const generateOrgKmipServerCertificate = async ({
orgId,
ttl,
commonName,
altNames,
keyAlgorithm
}: TGenerateOrgKmipServerCertificateDTO) => {
const kmipOrgConfig = await kmipOrgConfigDAL.findOne({
orgId
});
if (!kmipOrgConfig) {
throw new BadRequestError({
message: "KMIP has not been configured for the organization"
});
}
const plan = await licenseService.getPlan(orgId);
if (!plan.kmip)
throw new BadRequestError({
message: "Failed to generate KMIP server certificate. Upgrade your plan to enterprise."
});
const { decryptor, encryptor } = await kmsService.createCipherPairWithDataKey({
type: KmsDataKey.Organization,
orgId
});
const caCertObj = new x509.X509Certificate(
decryptor({ cipherTextBlob: kmipOrgConfig.encryptedServerIntermediateCaCertificate })
);
const notBeforeDate = new Date();
const notAfterDate = new Date(new Date().getTime() + ms(ttl));
const caCertNotBeforeDate = new Date(caCertObj.notBefore);
const caCertNotAfterDate = new Date(caCertObj.notAfter);
// check not before constraint
if (notBeforeDate < caCertNotBeforeDate) {
throw new BadRequestError({ message: "notBefore date is before CA certificate's notBefore date" });
}
if (notBeforeDate > notAfterDate) throw new BadRequestError({ message: "notBefore date is after notAfter date" });
// check not after constraint
if (notAfterDate > caCertNotAfterDate) {
throw new BadRequestError({ message: "notAfter date is after CA certificate's notAfter date" });
}
const alg = keyAlgorithmToAlgCfg(keyAlgorithm);
const leafKeys = await crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
const extensions: x509.Extension[] = [
new x509.BasicConstraintsExtension(false),
await x509.AuthorityKeyIdentifierExtension.create(caCertObj, false),
await x509.SubjectKeyIdentifierExtension.create(leafKeys.publicKey),
new x509.CertificatePolicyExtension(["2.5.29.32.0"]), // anyPolicy
new x509.KeyUsagesExtension(
// eslint-disable-next-line no-bitwise
x509.KeyUsageFlags[CertKeyUsage.DIGITAL_SIGNATURE] | x509.KeyUsageFlags[CertKeyUsage.KEY_ENCIPHERMENT],
true
),
new x509.ExtendedKeyUsageExtension([x509.ExtendedKeyUsage[CertExtendedKeyUsage.SERVER_AUTH]], true)
];
const altNamesArray: {
type: "email" | "dns" | "ip";
value: string;
}[] = altNames
.split(",")
.map((name) => name.trim())
.map((altName) => {
if (isValidHostname(altName)) {
return {
type: "dns",
value: altName
};
}
if (isValidIp(altName)) {
return {
type: "ip",
value: altName
};
}
throw new Error(`Invalid altName: ${altName}`);
});
const altNamesExtension = new x509.SubjectAlternativeNameExtension(altNamesArray, false);
extensions.push(altNamesExtension);
const caAlg = keyAlgorithmToAlgCfg(kmipOrgConfig.caKeyAlgorithm as CertKeyAlgorithm);
const decryptedCaCertChain = decryptor({
cipherTextBlob: kmipOrgConfig.encryptedServerIntermediateCaChain
}).toString("utf-8");
const caSkObj = crypto.createPrivateKey({
key: decryptor({ cipherTextBlob: kmipOrgConfig.encryptedServerIntermediateCaPrivateKey }),
format: "der",
type: "pkcs8"
});
const caPrivateKey = await crypto.subtle.importKey(
"pkcs8",
caSkObj.export({ format: "der", type: "pkcs8" }),
caAlg,
true,
["sign"]
);
const serialNumber = createSerialNumber();
const leafCert = await x509.X509CertificateGenerator.create({
serialNumber,
subject: `CN=${commonName}`,
issuer: caCertObj.subject,
notBefore: notBeforeDate,
notAfter: notAfterDate,
signingKey: caPrivateKey,
publicKey: leafKeys.publicKey,
signingAlgorithm: alg,
extensions
});
const skLeafObj = KeyObject.from(leafKeys.privateKey);
const certificateChain = `${caCertObj.toString("pem")}\n${decryptedCaCertChain}`.trim();
await kmipOrgServerCertificateDAL.create({
orgId,
keyAlgorithm,
issuedAt: notBeforeDate,
expiration: notAfterDate,
serialNumber,
commonName,
altNames,
encryptedCertificate: encryptor({ plainText: Buffer.from(new Uint8Array(leafCert.rawData)) }).cipherTextBlob,
encryptedChain: encryptor({ plainText: Buffer.from(certificateChain) }).cipherTextBlob
});
return {
serialNumber,
privateKey: skLeafObj.export({ format: "pem", type: "pkcs8" }) as string,
certificate: leafCert.toString("pem"),
certificateChain
};
};
const registerServer = async ({
actorOrgId,
actor,
actorId,
actorAuthMethod,
ttl,
commonName,
keyAlgorithm,
hostnamesOrIps
}: TRegisterServerDTO) => {
const { permission } = await permissionService.getOrgPermission(
actor,
actorId,
actorOrgId,
actorAuthMethod,
actorOrgId
);
ForbiddenError.from(permission).throwUnlessCan(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
const kmipConfig = await kmipOrgConfigDAL.findOne({
orgId: actorOrgId
});
if (!kmipConfig) {
throw new BadRequestError({
message: "KMIP has not been configured for the organization"
});
}
const plan = await licenseService.getPlan(actorOrgId);
if (!plan.kmip)
throw new BadRequestError({
message: "Failed to register KMIP server. Upgrade your plan to enterprise."
});
const { privateKey, certificate, certificateChain, serialNumber } = await generateOrgKmipServerCertificate({
orgId: actorOrgId,
commonName: commonName ?? "kmip-server",
altNames: hostnamesOrIps,
keyAlgorithm: keyAlgorithm ?? (kmipConfig.caKeyAlgorithm as CertKeyAlgorithm),
ttl
});
const { clientCertificateChain } = await getOrgKmip({
actor,
actorAuthMethod,
actorId,
actorOrgId
});
return {
serverCertificateSerialNumber: serialNumber,
clientCertificateChain,
privateKey,
certificate,
certificateChain
};
};
return {
createKmipClient,
updateKmipClient,
deleteKmipClient,
getKmipClient,
listKmipClientsByProjectId,
createKmipClientCertificate,
setupOrgKmip,
generateOrgKmipServerCertificate,
getOrgKmip,
getServerCertificateBySerialNumber,
registerServer
};
};

@ -0,0 +1,102 @@
import { SymmetricEncryption } from "@app/lib/crypto/cipher";
import { OrderByDirection, TOrgPermission, TProjectPermission } from "@app/lib/types";
import { CertKeyAlgorithm } from "@app/services/certificate/certificate-types";
import { KmipPermission } from "./kmip-enum";
export type TCreateKmipClientCertificateDTO = {
clientId: string;
keyAlgorithm: CertKeyAlgorithm;
ttl: string;
} & Omit<TProjectPermission, "projectId">;
export type TCreateKmipClientDTO = {
name: string;
description?: string;
permissions: KmipPermission[];
} & TProjectPermission;
export type TUpdateKmipClientDTO = {
id: string;
name?: string;
description?: string;
permissions?: KmipPermission[];
} & Omit<TProjectPermission, "projectId">;
export type TDeleteKmipClientDTO = {
id: string;
} & Omit<TProjectPermission, "projectId">;
export type TGetKmipClientDTO = {
id: string;
} & Omit<TProjectPermission, "projectId">;
export enum KmipClientOrderBy {
Name = "name"
}
export type TListKmipClientsByProjectIdDTO = {
offset?: number;
limit?: number;
orderBy?: KmipClientOrderBy;
orderDirection?: OrderByDirection;
search?: string;
} & TProjectPermission;
type KmipOperationBaseDTO = {
clientId: string;
projectId: string;
} & Omit<TOrgPermission, "orgId">;
export type TKmipCreateDTO = {
algorithm: SymmetricEncryption;
} & KmipOperationBaseDTO;
export type TKmipGetDTO = {
id: string;
} & KmipOperationBaseDTO;
export type TKmipGetAttributesDTO = {
id: string;
} & KmipOperationBaseDTO;
export type TKmipDestroyDTO = {
id: string;
} & KmipOperationBaseDTO;
export type TKmipActivateDTO = {
id: string;
} & KmipOperationBaseDTO;
export type TKmipRevokeDTO = {
id: string;
} & KmipOperationBaseDTO;
export type TKmipLocateDTO = KmipOperationBaseDTO;
export type TKmipRegisterDTO = {
name: string;
key: string;
algorithm: SymmetricEncryption;
} & KmipOperationBaseDTO;
export type TSetupOrgKmipDTO = {
caKeyAlgorithm: CertKeyAlgorithm;
} & Omit<TOrgPermission, "orgId">;
export type TGetOrgKmipDTO = Omit<TOrgPermission, "orgId">;
export type TGenerateOrgKmipServerCertificateDTO = {
commonName: string;
altNames: string;
keyAlgorithm: CertKeyAlgorithm;
ttl: string;
orgId: string;
};
export type TRegisterServerDTO = {
hostnamesOrIps: string;
commonName?: string;
keyAlgorithm?: CertKeyAlgorithm;
ttl: string;
} & Omit<TOrgPermission, "orgId">;

@ -50,7 +50,9 @@ export const getDefaultOnPremFeatures = (): TFeatureSet => ({
},
pkiEst: false,
enforceMfa: false,
projectTemplates: false
projectTemplates: false,
kmip: false,
gateway: false
});
export const setupLicenseRequestWithStore = (baseURL: string, refreshUrl: string, licenseKey: string) => {

@ -68,6 +68,8 @@ export type TFeatureSet = {
pkiEst: boolean;
enforceMfa: boolean;
projectTemplates: false;
kmip: false;
gateway: false;
};
export type TOrgPlansTableDTO = {

@ -23,10 +23,23 @@ export enum OrgPermissionAppConnectionActions {
Connect = "connect"
}
export enum OrgPermissionKmipActions {
Proxy = "proxy",
Setup = "setup"
}
export enum OrgPermissionAdminConsoleAction {
AccessAllProjects = "access-all-projects"
}
export enum OrgPermissionGatewayActions {
// is there a better word for this? This means an identity can act as a gateway
CreateGateways = "create-gateways",
ListGateways = "list-gateways",
EditGateways = "edit-gateways",
DeleteGateways = "delete-gateways"
}
export enum OrgPermissionSubjects {
Workspace = "workspace",
Role = "role",
@ -44,7 +57,9 @@ export enum OrgPermissionSubjects {
AdminConsole = "organization-admin-console",
AuditLogs = "audit-logs",
ProjectTemplates = "project-templates",
AppConnections = "app-connections"
AppConnections = "app-connections",
Kmip = "kmip",
Gateway = "gateway"
}
export type AppConnectionSubjectFields = {
@ -67,6 +82,7 @@ export type OrgPermissionSet =
| [OrgPermissionActions, OrgPermissionSubjects.Kms]
| [OrgPermissionActions, OrgPermissionSubjects.AuditLogs]
| [OrgPermissionActions, OrgPermissionSubjects.ProjectTemplates]
| [OrgPermissionGatewayActions, OrgPermissionSubjects.Gateway]
| [
OrgPermissionAppConnectionActions,
(
@ -74,7 +90,8 @@ export type OrgPermissionSet =
| (ForcedSubject<OrgPermissionSubjects.AppConnections> & AppConnectionSubjectFields)
)
]
| [OrgPermissionAdminConsoleAction, OrgPermissionSubjects.AdminConsole];
| [OrgPermissionAdminConsoleAction, OrgPermissionSubjects.AdminConsole]
| [OrgPermissionKmipActions, OrgPermissionSubjects.Kmip];
const AppConnectionConditionSchema = z
.object({
@ -167,6 +184,18 @@ export const OrgPermissionSchema = z.discriminatedUnion("subject", [
action: CASL_ACTION_SCHEMA_NATIVE_ENUM(OrgPermissionAdminConsoleAction).describe(
"Describe what action an entity can take."
)
}),
z.object({
subject: z.literal(OrgPermissionSubjects.Kmip).describe("The entity this permission pertains to."),
action: CASL_ACTION_SCHEMA_NATIVE_ENUM(OrgPermissionKmipActions).describe(
"Describe what action an entity can take."
)
}),
z.object({
subject: z.literal(OrgPermissionSubjects.Gateway).describe("The entity this permission pertains to."),
action: CASL_ACTION_SCHEMA_NATIVE_ENUM(OrgPermissionGatewayActions).describe(
"Describe what action an entity can take."
)
})
]);
@ -251,8 +280,18 @@ const buildAdminPermission = () => {
can(OrgPermissionAppConnectionActions.Delete, OrgPermissionSubjects.AppConnections);
can(OrgPermissionAppConnectionActions.Connect, OrgPermissionSubjects.AppConnections);
can(OrgPermissionGatewayActions.ListGateways, OrgPermissionSubjects.Gateway);
can(OrgPermissionGatewayActions.CreateGateways, OrgPermissionSubjects.Gateway);
can(OrgPermissionGatewayActions.EditGateways, OrgPermissionSubjects.Gateway);
can(OrgPermissionGatewayActions.DeleteGateways, OrgPermissionSubjects.Gateway);
can(OrgPermissionAdminConsoleAction.AccessAllProjects, OrgPermissionSubjects.AdminConsole);
can(OrgPermissionKmipActions.Setup, OrgPermissionSubjects.Kmip);
// the proxy assignment is temporary in order to prevent a "more privilege" error during role assignment to MI
can(OrgPermissionKmipActions.Proxy, OrgPermissionSubjects.Kmip);
return rules;
};
@ -282,6 +321,8 @@ const buildMemberPermission = () => {
can(OrgPermissionActions.Read, OrgPermissionSubjects.AuditLogs);
can(OrgPermissionAppConnectionActions.Connect, OrgPermissionSubjects.AppConnections);
can(OrgPermissionGatewayActions.ListGateways, OrgPermissionSubjects.Gateway);
can(OrgPermissionGatewayActions.CreateGateways, OrgPermissionSubjects.Gateway);
return rules;
};
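
A hedged sketch of how these gateway grants would be enforced at a call site, mirroring the throwUnlessCan pattern the KMIP services in this diff use (obtaining `permission` via permissionService.getOrgPermission is assumed):

// assumption: `permission` was resolved for the acting identity's organization
ForbiddenError.from(permission).throwUnlessCan(
  OrgPermissionGatewayActions.CreateGateways,
  OrgPermissionSubjects.Gateway
);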

@ -44,6 +44,14 @@ export enum ProjectPermissionSecretSyncActions {
RemoveSecrets = "remove-secrets"
}
export enum ProjectPermissionKmipActions {
CreateClients = "create-clients",
UpdateClients = "update-clients",
DeleteClients = "delete-clients",
ReadClients = "read-clients",
GenerateClientCertificates = "generate-client-certificates"
}
export enum ProjectPermissionSub {
Role = "role",
Member = "member",
@ -75,7 +83,8 @@ export enum ProjectPermissionSub {
PkiCollections = "pki-collections",
Kms = "kms",
Cmek = "cmek",
SecretSyncs = "secret-syncs"
SecretSyncs = "secret-syncs",
Kmip = "kmip"
}
export type SecretSubjectFields = {
@ -156,6 +165,7 @@ export type ProjectPermissionSet =
| [ProjectPermissionActions, ProjectPermissionSub.PkiAlerts]
| [ProjectPermissionActions, ProjectPermissionSub.PkiCollections]
| [ProjectPermissionSecretSyncActions, ProjectPermissionSub.SecretSyncs]
| [ProjectPermissionKmipActions, ProjectPermissionSub.Kmip]
| [ProjectPermissionCmekActions, ProjectPermissionSub.Cmek]
| [ProjectPermissionActions.Delete, ProjectPermissionSub.Project]
| [ProjectPermissionActions.Edit, ProjectPermissionSub.Project]
@ -410,6 +420,12 @@ const GeneralPermissionSchema = [
action: CASL_ACTION_SCHEMA_NATIVE_ENUM(ProjectPermissionSecretSyncActions).describe(
"Describe what action an entity can take."
)
}),
z.object({
subject: z.literal(ProjectPermissionSub.Kmip).describe("The entity this permission pertains to."),
action: CASL_ACTION_SCHEMA_NATIVE_ENUM(ProjectPermissionKmipActions).describe(
"Describe what action an entity can take."
)
})
];
@ -575,6 +591,18 @@ const buildAdminPermissionRules = () => {
],
ProjectPermissionSub.SecretSyncs
);
can(
[
ProjectPermissionKmipActions.CreateClients,
ProjectPermissionKmipActions.UpdateClients,
ProjectPermissionKmipActions.DeleteClients,
ProjectPermissionKmipActions.ReadClients,
ProjectPermissionKmipActions.GenerateClientCertificates
],
ProjectPermissionSub.Kmip
);
return rules;
};

@ -1,12 +1,15 @@
import { Redis } from "ioredis";
import { pgAdvisoryLockHashText } from "@app/lib/crypto/hashtext";
import { Redlock, Settings } from "@app/lib/red-lock";
export enum PgSqlLock {
BootUpMigration = 2023,
SuperAdminInit = 2024,
KmsRootKeyInit = 2025
}
export const PgSqlLock = {
BootUpMigration: 2023,
SuperAdminInit: 2024,
KmsRootKeyInit: 2025,
OrgGatewayRootCaInit: (orgId: string) => pgAdvisoryLockHashText(`org-gateway-root-ca:${orgId}`),
OrgGatewayCertExchange: (orgId: string) => pgAdvisoryLockHashText(`org-gateway-cert-exchange:${orgId}`)
} as const;
export type TKeyStoreFactory = ReturnType<typeof keyStoreFactory>;
@ -33,7 +36,8 @@ export const KeyStorePrefixes = {
SecretSyncLastRunTimestamp: (syncId: string) => `secret-sync-last-run-${syncId}` as const,
IdentityAccessTokenStatusUpdate: (identityAccessTokenId: string) =>
`identity-access-token-status:${identityAccessTokenId}`,
ServiceTokenStatusUpdate: (serviceTokenId: string) => `service-token-status:${serviceTokenId}`
ServiceTokenStatusUpdate: (serviceTokenId: string) => `service-token-status:${serviceTokenId}`,
GatewayIdentityCredential: (identityId: string) => `gateway-credentials:${identityId}`
};
export const KeyStoreTtls = {

@ -721,7 +721,8 @@ export const RAW_SECRETS = {
secretName: "The name of the secret to update.",
secretComment: "Update comment to the secret.",
environment: "The slug of the environment where the secret is located.",
secretPath: "The path of the secret to update.",
mode: "Defines how the system should handle missing secrets during an update.",
secretPath: "The default path for secrets to update or upsert, if not provided in the secret details.",
secretValue: "The new value of the secret.",
skipMultilineEncoding: "Skip multiline encoding for the secret value.",
type: "The type of the secret to update.",
@ -1721,6 +1722,18 @@ export const SecretSyncs = {
initialSyncBehavior: `Specify how Infisical should resolve the initial sync to the ${destinationName} destination.`
};
},
ADDITIONAL_SYNC_OPTIONS: {
AWS_PARAMETER_STORE: {
keyId: "The AWS KMS key ID or alias to use when encrypting parameters synced by Infisical.",
tags: "Optional resource tags to add to parameters synced by Infisical.",
syncSecretMetadataAsTags: `Whether Infisical secret metadata should be added as resource tags to parameters synced by Infisical.`
},
AWS_SECRETS_MANAGER: {
keyId: "The AWS KMS key ID or alias to use when encrypting parameters synced by Infisical.",
tags: "Optional tags to add to secrets synced by Infisical.",
syncSecretMetadataAsTags: `Whether Infisical secret metadata should be added as tags to secrets synced by Infisical.`
}
},
DESTINATION_CONFIG: {
AWS_PARAMETER_STORE: {
region: "The AWS region to sync secrets to.",

@ -184,6 +184,14 @@ const envSchema = z
USE_PG_QUEUE: zodStrBool.default("false"),
SHOULD_INIT_PG_QUEUE: zodStrBool.default("false"),
/* Gateway----------------------------------------------------------------------------- */
GATEWAY_INFISICAL_STATIC_IP_ADDRESS: zpStr(z.string().optional()),
GATEWAY_RELAY_ADDRESS: zpStr(z.string().optional()),
GATEWAY_RELAY_REALM: zpStr(z.string().optional()),
GATEWAY_RELAY_AUTH_SECRET: zpStr(z.string().optional()),
/* ----------------------------------------------------------------------------- */
/* App Connections ----------------------------------------------------------------------------- */
// aws

@ -0,0 +1,29 @@
// used for postgres advisory locks
// this mirrors what postgres does under the hood:
// convert any string to a deterministic 32-bit number
export const hashtext = (text: string) => {
// Convert text to UTF8 bytes array for consistent behavior with PostgreSQL
const encoder = new TextEncoder();
const bytes = encoder.encode(text);
// Implementation of hash_any
let result = 0;
for (let i = 0; i < bytes.length; i += 1) {
// eslint-disable-next-line no-bitwise
result = ((result << 5) + result) ^ bytes[i];
// Keep within 32-bit integer range
// eslint-disable-next-line no-bitwise
result >>>= 0;
}
// Convert to signed 32-bit integer like PostgreSQL
// eslint-disable-next-line no-bitwise
return result | 0;
};
export const pgAdvisoryLockHashText = (text: string) => {
const hash = hashtext(text);
// Ensure positive value within PostgreSQL integer range
return Math.abs(hash) % 2 ** 31;
};
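
A minimal sketch of how this hash feeds a Postgres advisory lock through knex, using the PgSqlLock helpers added earlier in this diff (the transaction body is a placeholder):

// serialize org-scoped gateway root CA initialization across instances
await db.transaction(async (tx) => {
  // pg_advisory_xact_lock takes an integer key; pgAdvisoryLockHashText keeps the hash within int4 range
  await tx.raw("SELECT pg_advisory_xact_lock(?)", [PgSqlLock.OrgGatewayRootCaInit(orgId)]);
  // ...perform the one-time initialization; the lock is released when the transaction ends...
});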

@ -0,0 +1,264 @@
/* eslint-disable no-await-in-loop */
import net from "node:net";
import tls from "node:tls";
import { BadRequestError } from "../errors";
import { logger } from "../logger";
const DEFAULT_MAX_RETRIES = 3;
const DEFAULT_RETRY_DELAY = 1000; // 1 second
const createTLSConnection = (relayHost: string, relayPort: number, tlsOptions: tls.TlsOptions = {}) => {
return new Promise<tls.TLSSocket>((resolve, reject) => {
// @ts-expect-error this is resolved in next connect
const socket = new tls.TLSSocket(null, {
rejectUnauthorized: true,
...tlsOptions
});
const cleanup = () => {
socket.removeAllListeners();
socket.end();
};
socket.once("error", (err) => {
cleanup();
reject(err);
});
socket.connect(relayPort, relayHost, () => {
resolve(socket);
});
});
};
type TPingGatewayAndVerifyDTO = {
relayHost: string;
relayPort: number;
tlsOptions: tls.TlsOptions;
maxRetries?: number;
identityId: string;
orgId: string;
};
export const pingGatewayAndVerify = async ({
relayHost,
relayPort,
tlsOptions = {},
maxRetries = DEFAULT_MAX_RETRIES,
identityId,
orgId
}: TPingGatewayAndVerifyDTO) => {
let lastError: Error | null = null;
for (let attempt = 1; attempt <= maxRetries; attempt += 1) {
try {
const socket = await createTLSConnection(relayHost, relayPort, tlsOptions);
socket.setTimeout(2000);
const pingResult = await new Promise((resolve, reject) => {
socket.once("timeout", () => {
socket.destroy();
reject(new Error("Timeout"));
});
socket.once("close", () => {
socket.destroy();
});
socket.once("end", () => {
socket.destroy();
});
socket.once("error", (err) => {
reject(err);
});
socket.write(Buffer.from("PING\n"), () => {
socket.once("data", (data) => {
const response = (data as Buffer).toString();
const certificate = socket.getPeerCertificate();
if (certificate.subject.CN !== identityId || certificate.subject.O !== orgId) {
// reject (rather than throw) so the surrounding promise settles instead of raising an uncaught exception
reject(
new BadRequestError({
message: `Invalid gateway. Certificate not found for ${identityId} in organization ${orgId}`
})
);
return;
}
if (response === "PONG") {
resolve(true);
} else {
reject(new Error(`Unexpected response: ${response}`));
}
});
});
});
socket.end();
return pingResult;
} catch (err) {
lastError = err as Error;
if (attempt < maxRetries) {
await new Promise((resolve) => {
setTimeout(resolve, DEFAULT_RETRY_DELAY);
});
}
}
}
logger.error(lastError);
throw new BadRequestError({
message: `Failed to ping gateway after ${maxRetries} attempts. Last error: ${lastError?.message}`
});
};
interface TProxyServer {
server: net.Server;
port: number;
cleanup: () => void;
}
const setupProxyServer = ({
targetPort,
targetHost,
tlsOptions = {},
relayHost,
relayPort
}: {
targetHost: string;
targetPort: number;
relayPort: number;
relayHost: string;
tlsOptions: tls.TlsOptions;
}): Promise<TProxyServer> => {
return new Promise((resolve, reject) => {
const server = net.createServer();
// eslint-disable-next-line @typescript-eslint/no-misused-promises
server.on("connection", async (clientSocket) => {
try {
const targetSocket = await createTLSConnection(relayHost, relayPort, tlsOptions);
targetSocket.write(Buffer.from(`FORWARD-TCP ${targetHost}:${targetPort}\n`), () => {
clientSocket.on("data", (data) => {
const flushed = targetSocket.write(data);
if (!flushed) {
clientSocket.pause();
targetSocket.once("drain", () => {
clientSocket.resume();
});
}
});
targetSocket.on("data", (data) => {
const flushed = clientSocket.write(data as Buffer);
if (!flushed) {
targetSocket.pause();
clientSocket.once("drain", () => {
targetSocket.resume();
});
}
});
});
const cleanup = () => {
clientSocket?.unpipe();
clientSocket?.end();
targetSocket?.unpipe();
targetSocket?.end();
};
clientSocket.on("error", (err) => {
logger.error(err, "Client socket error");
cleanup();
reject(err);
});
targetSocket.on("error", (err) => {
logger.error(err, "Target socket error");
cleanup();
reject(err);
});
clientSocket.on("end", cleanup);
targetSocket.on("end", cleanup);
} catch (err) {
logger.error(err, "Failed to establish target connection:");
clientSocket.end();
reject(err);
}
});
server.on("error", (err) => {
reject(err);
});
server.listen(0, () => {
const address = server.address();
if (!address || typeof address === "string") {
server.close();
reject(new Error("Failed to get server port"));
return;
}
logger.info("Gateway proxy started");
resolve({
server,
port: address.port,
cleanup: () => {
server.close();
}
});
});
});
};
interface ProxyOptions {
targetHost: string;
targetPort: number;
relayHost: string;
relayPort: number;
tlsOptions?: tls.TlsOptions;
maxRetries?: number;
identityId: string;
orgId: string;
}
export const withGatewayProxy = async (
callback: (port: number) => Promise<void>,
options: ProxyOptions
): Promise<void> => {
const {
relayHost,
relayPort,
targetHost,
targetPort,
tlsOptions = {},
maxRetries = DEFAULT_MAX_RETRIES,
identityId,
orgId
} = options;
// First, try to ping the gateway
await pingGatewayAndVerify({
relayHost,
relayPort,
tlsOptions,
maxRetries,
identityId,
orgId
});
// Setup the proxy server
const { port, cleanup } = await setupProxyServer({ targetHost, targetPort, relayPort, relayHost, tlsOptions });
try {
// Execute the callback with the allocated port
await callback(port);
} catch (err) {
logger.error(err, "Failed to proxy");
throw new BadRequestError({ message: (err as Error)?.message });
} finally {
// Ensure cleanup happens regardless of success or failure
cleanup();
}
};
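
A hedged usage sketch of withGatewayProxy: the callback receives a locally allocated port that tunnels to the private target through the relay (hosts, ports, helper names and TLS material below are placeholders):

await withGatewayProxy(
  async (localPort) => {
    // talk to the target as if it were local, e.g. point a Postgres client at 127.0.0.1:localPort
    await runConnectionCheck("127.0.0.1", localPort); // hypothetical helper
  },
  {
    targetHost: "10.0.0.12", // host reachable only from the gateway's network
    targetPort: 5432,
    relayHost: "relay.example.com",
    relayPort: 8443,
    identityId,
    orgId,
    tlsOptions: { ca: certChain, cert: certificate, key: privateKey }
  }
);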

@ -103,6 +103,16 @@ export const isValidIpOrCidr = (ip: string): boolean => {
return false;
};
export const isValidIp = (ip: string) => {
return net.isIPv4(ip) || net.isIPv6(ip);
};
export const isValidHostname = (name: string) => {
const hostnameRegex = /^(?!:\/\/)(\*\.)?([a-zA-Z0-9-_]{1,63}\.?)+(?!:\/\/)([a-zA-Z]{2,63})$/;
return hostnameRegex.test(name);
};
export type TIp = {
ipAddress: string;
type: IPType;

@ -0,0 +1,16 @@
import crypto from "node:crypto";
const TURN_TOKEN_TTL = 60 * 60 * 1000; // 1 hour in milliseconds
export const getTurnCredentials = (id: string, authSecret: string, ttl = TURN_TOKEN_TTL) => {
const timestamp = Math.floor((Date.now() + ttl) / 1000);
const username = `${timestamp}:${id}`;
const hmac = crypto.createHmac("sha1", authSecret);
hmac.update(username);
const password = hmac.digest("base64");
return {
username,
password
};
};
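
This is the time-limited credential scheme used by TURN servers such as coturn with use-auth-secret: the username embeds an expiry timestamp and the password is an HMAC-SHA1 of that username under the shared secret. A hedged sketch of the matching server-side check (the helper name is hypothetical):

import crypto from "node:crypto";

export const isTurnCredentialValid = (username: string, password: string, authSecret: string) => {
  const [timestamp] = username.split(":");
  if (Number(timestamp) * 1000 < Date.now()) return false; // credential has expired
  const expected = crypto.createHmac("sha1", authSecret).update(username).digest("base64");
  if (expected.length !== password.length) return false;
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(password));
};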

@ -27,6 +27,10 @@ import { dynamicSecretLeaseQueueServiceFactory } from "@app/ee/services/dynamic-
import { dynamicSecretLeaseServiceFactory } from "@app/ee/services/dynamic-secret-lease/dynamic-secret-lease-service";
import { externalKmsDALFactory } from "@app/ee/services/external-kms/external-kms-dal";
import { externalKmsServiceFactory } from "@app/ee/services/external-kms/external-kms-service";
import { gatewayDALFactory } from "@app/ee/services/gateway/gateway-dal";
import { gatewayServiceFactory } from "@app/ee/services/gateway/gateway-service";
import { orgGatewayConfigDALFactory } from "@app/ee/services/gateway/org-gateway-config-dal";
import { projectGatewayDALFactory } from "@app/ee/services/gateway/project-gateway-dal";
import { groupDALFactory } from "@app/ee/services/group/group-dal";
import { groupServiceFactory } from "@app/ee/services/group/group-service";
import { userGroupMembershipDALFactory } from "@app/ee/services/group/user-group-membership-dal";
@ -35,6 +39,12 @@ import { HsmModule } from "@app/ee/services/hsm/hsm-types";
import { identityProjectAdditionalPrivilegeDALFactory } from "@app/ee/services/identity-project-additional-privilege/identity-project-additional-privilege-dal";
import { identityProjectAdditionalPrivilegeServiceFactory } from "@app/ee/services/identity-project-additional-privilege/identity-project-additional-privilege-service";
import { identityProjectAdditionalPrivilegeV2ServiceFactory } from "@app/ee/services/identity-project-additional-privilege-v2/identity-project-additional-privilege-v2-service";
import { kmipClientCertificateDALFactory } from "@app/ee/services/kmip/kmip-client-certificate-dal";
import { kmipClientDALFactory } from "@app/ee/services/kmip/kmip-client-dal";
import { kmipOperationServiceFactory } from "@app/ee/services/kmip/kmip-operation-service";
import { kmipOrgConfigDALFactory } from "@app/ee/services/kmip/kmip-org-config-dal";
import { kmipOrgServerCertificateDALFactory } from "@app/ee/services/kmip/kmip-org-server-certificate-dal";
import { kmipServiceFactory } from "@app/ee/services/kmip/kmip-service";
import { ldapConfigDALFactory } from "@app/ee/services/ldap-config/ldap-config-dal";
import { ldapConfigServiceFactory } from "@app/ee/services/ldap-config/ldap-config-service";
import { ldapGroupMapDALFactory } from "@app/ee/services/ldap-config/ldap-group-map-dal";
@ -382,6 +392,14 @@ export const registerRoutes = async (
const projectTemplateDAL = projectTemplateDALFactory(db);
const resourceMetadataDAL = resourceMetadataDALFactory(db);
const kmipClientDAL = kmipClientDALFactory(db);
const kmipClientCertificateDAL = kmipClientCertificateDALFactory(db);
const kmipOrgConfigDAL = kmipOrgConfigDALFactory(db);
const kmipOrgServerCertificateDAL = kmipOrgServerCertificateDALFactory(db);
const orgGatewayConfigDAL = orgGatewayConfigDALFactory(db);
const gatewayDAL = gatewayDALFactory(db);
const projectGatewayDAL = projectGatewayDALFactory(db);
const permissionService = permissionServiceFactory({
permissionDAL,
@ -1290,7 +1308,19 @@ export const registerRoutes = async (
kmsService
});
const dynamicSecretProviders = buildDynamicSecretProviders();
const gatewayService = gatewayServiceFactory({
permissionService,
gatewayDAL,
kmsService,
licenseService,
orgGatewayConfigDAL,
keyStore,
projectGatewayDAL
});
const dynamicSecretProviders = buildDynamicSecretProviders({
gatewayService
});
const dynamicSecretQueueService = dynamicSecretLeaseQueueServiceFactory({
queueService,
dynamicSecretLeaseDAL,
@ -1308,8 +1338,10 @@ export const registerRoutes = async (
folderDAL,
permissionService,
licenseService,
kmsService
kmsService,
projectGatewayDAL
});
const dynamicSecretLeaseService = dynamicSecretLeaseServiceFactory({
projectDAL,
permissionService,
@ -1429,6 +1461,24 @@ export const registerRoutes = async (
keyStore
});
const kmipService = kmipServiceFactory({
kmipClientDAL,
permissionService,
kmipClientCertificateDAL,
kmipOrgConfigDAL,
kmsService,
kmipOrgServerCertificateDAL,
licenseService
});
const kmipOperationService = kmipOperationServiceFactory({
kmsService,
kmsDAL,
projectDAL,
kmipClientDAL,
permissionService
});
await superAdminService.initServerCfg();
// setup the communication with license key server
@ -1527,7 +1577,10 @@ export const registerRoutes = async (
projectTemplate: projectTemplateService,
totp: totpService,
appConnection: appConnectionService,
secretSync: secretSyncService
secretSync: secretSyncService,
kmip: kmipService,
kmipOperation: kmipOperationService,
gateway: gatewayService
});
const cronJobs: CronJob[] = [];
@ -1539,7 +1592,8 @@ export const registerRoutes = async (
}
server.decorate<FastifyZodProvider["store"]>("store", {
user: userDAL
user: userDAL,
kmipClient: kmipClientDAL
});
await server.register(injectIdentity, { userDAL, serviceTokenDAL });

@ -1,13 +1,19 @@
import { AppConnection } from "@app/services/app-connection/app-connection-enums";
import { z } from "zod";
import { readLimit } from "@app/server/config/rateLimiter";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AppConnection, AWSRegion } from "@app/services/app-connection/app-connection-enums";
import {
CreateAwsConnectionSchema,
SanitizedAwsConnectionSchema,
UpdateAwsConnectionSchema
} from "@app/services/app-connection/aws";
import { AuthMode } from "@app/services/auth/auth-type";
import { SecretSync } from "@app/services/secret-sync/secret-sync-enums";
import { registerAppConnectionEndpoints } from "./app-connection-endpoints";
export const registerAwsConnectionRouter = async (server: FastifyZodProvider) =>
export const registerAwsConnectionRouter = async (server: FastifyZodProvider) => {
registerAppConnectionEndpoints({
app: AppConnection.AWS,
server,
@ -15,3 +21,42 @@ export const registerAwsConnectionRouter = async (server: FastifyZodProvider) =>
createSchema: CreateAwsConnectionSchema,
updateSchema: UpdateAwsConnectionSchema
});
// The endpoints below are not exposed publicly and are intended for Infisical App use only
server.route({
method: "GET",
url: `/:connectionId/kms-keys`,
config: {
rateLimit: readLimit
},
schema: {
params: z.object({
connectionId: z.string().uuid()
}),
querystring: z.object({
region: z.nativeEnum(AWSRegion),
destination: z.enum([SecretSync.AWSParameterStore, SecretSync.AWSSecretsManager])
}),
response: {
200: z.object({
kmsKeys: z.object({ alias: z.string(), id: z.string() }).array()
})
}
},
onRequest: verifyAuth([AuthMode.JWT]),
handler: async (req) => {
const { connectionId } = req.params;
const kmsKeys = await server.services.appConnection.aws.listKmsKeys(
{
connectionId,
...req.query
},
req.permission
);
return { kmsKeys };
}
});
};
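A hedged sketch of how a frontend client might call this internal endpoint. The /api/v1/app-connections/aws base path, the bearer-token handling, and the destination string values are assumptions; only the relative :connectionId/kms-keys route is defined above.

// Assumed mount point for registerAwsConnectionRouter; adjust to the actual API prefix.
const listAwsConnectionKmsKeys = async (
  connectionId: string,
  region: string,
  destination: "aws-parameter-store" | "aws-secrets-manager", // assumed SecretSync enum string values
  accessToken: string
) => {
  const query = new URLSearchParams({ region, destination });
  const res = await fetch(`/api/v1/app-connections/aws/${connectionId}/kms-keys?${query.toString()}`, {
    headers: { Authorization: `Bearer ${accessToken}` } // endpoint accepts JWT auth only, per verifyAuth([AuthMode.JWT])
  });
  if (!res.ok) throw new Error(`Failed to list KMS keys: ${res.status}`);
  return (await res.json()) as { kmsKeys: { alias: string; id: string }[] };
};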

@ -2,10 +2,9 @@ import jwt from "jsonwebtoken";
import { z } from "zod";
import { getConfig } from "@app/lib/config/env";
import { NotFoundError, UnauthorizedError } from "@app/lib/errors";
import { authRateLimit, writeLimit } from "@app/server/config/rateLimiter";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AuthMode, AuthModeRefreshJwtTokenPayload, AuthTokenType } from "@app/services/auth/auth-type";
import { AuthMode, AuthTokenType } from "@app/services/auth/auth-type";
export const registerAuthRoutes = async (server: FastifyZodProvider) => {
server.route({
@ -21,18 +20,19 @@ export const registerAuthRoutes = async (server: FastifyZodProvider) => {
})
}
},
onRequest: verifyAuth([AuthMode.JWT], { requireOrg: false }),
handler: async (req, res) => {
const { decodedToken } = await server.services.authToken.validateRefreshToken(req.cookies.jid);
const appCfg = getConfig();
if (req.auth.authMode === AuthMode.JWT) {
await server.services.login.logout(req.permission.id, req.auth.tokenVersionId);
}
await server.services.login.logout(decodedToken.userId, decodedToken.tokenVersionId);
void res.cookie("jid", "", {
httpOnly: true,
path: "/",
sameSite: "strict",
secure: appCfg.HTTPS_ENABLED
});
return { message: "Successfully logged out" };
}
});
@ -69,37 +69,8 @@ export const registerAuthRoutes = async (server: FastifyZodProvider) => {
}
},
handler: async (req) => {
const refreshToken = req.cookies.jid;
const { decodedToken, tokenVersion } = await server.services.authToken.validateRefreshToken(req.cookies.jid);
const appCfg = getConfig();
if (!refreshToken)
throw new NotFoundError({
name: "AuthTokenNotFound",
message: "Failed to find refresh token"
});
const decodedToken = jwt.verify(refreshToken, appCfg.AUTH_SECRET) as AuthModeRefreshJwtTokenPayload;
if (decodedToken.authTokenType !== AuthTokenType.REFRESH_TOKEN)
throw new UnauthorizedError({
message: "The token provided is not a refresh token",
name: "InvalidToken"
});
const tokenVersion = await server.services.authToken.getUserTokenSessionById(
decodedToken.tokenVersionId,
decodedToken.userId
);
if (!tokenVersion)
throw new UnauthorizedError({
message: "Valid token version not found",
name: "InvalidToken"
});
if (decodedToken.refreshVersion !== tokenVersion.refreshVersion) {
throw new UnauthorizedError({
message: "Token version mismatch",
name: "InvalidToken"
});
}
const token = jwt.sign(
{

@ -20,6 +20,7 @@ import { ActorType, AuthMode } from "@app/services/auth/auth-type";
import { ProjectFilterType } from "@app/services/project/project-types";
import { ResourceMetadataSchema } from "@app/services/resource-metadata/resource-metadata-schema";
import { SecretOperations, SecretProtectionType } from "@app/services/secret/secret-types";
import { SecretUpdateMode } from "@app/services/secret-v2-bridge/secret-v2-bridge-types";
import { PostHogEventTypes } from "@app/services/telemetry/telemetry-types";
import { secretRawSchema } from "../sanitizedSchemas";
@ -536,7 +537,12 @@ export const registerSecretRouter = async (server: FastifyZodProvider) => {
.optional()
.nullable()
.describe(RAW_SECRETS.CREATE.secretReminderRepeatDays),
secretReminderNote: z.string().optional().nullable().describe(RAW_SECRETS.CREATE.secretReminderNote)
secretReminderNote: z
.string()
.max(1024, "Secret reminder note cannot exceed 1024 characters")
.optional()
.nullable()
.describe(RAW_SECRETS.CREATE.secretReminderNote)
}),
response: {
200: z.union([
@ -639,7 +645,12 @@ export const registerSecretRouter = async (server: FastifyZodProvider) => {
tagIds: z.string().array().optional().describe(RAW_SECRETS.UPDATE.tagIds),
metadata: z.record(z.string()).optional(),
secretMetadata: ResourceMetadataSchema.optional(),
secretReminderNote: z.string().optional().nullable().describe(RAW_SECRETS.UPDATE.secretReminderNote),
secretReminderNote: z
.string()
.max(1024, "Secret reminder note cannot exceed 1024 characters")
.optional()
.nullable()
.describe(RAW_SECRETS.UPDATE.secretReminderNote),
secretReminderRepeatDays: z
.number()
.optional()
@ -2030,6 +2041,11 @@ export const registerSecretRouter = async (server: FastifyZodProvider) => {
.default("/")
.transform(removeTrailingSlash)
.describe(RAW_SECRETS.UPDATE.secretPath),
mode: z
.nativeEnum(SecretUpdateMode)
.optional()
.default(SecretUpdateMode.FailOnNotFound)
.describe(RAW_SECRETS.UPDATE.mode),
secrets: z
.object({
secretKey: SecretNameSchema.describe(RAW_SECRETS.UPDATE.secretName),
@ -2037,11 +2053,22 @@ export const registerSecretRouter = async (server: FastifyZodProvider) => {
.string()
.transform((val) => (val.at(-1) === "\n" ? `${val.trim()}\n` : val.trim()))
.describe(RAW_SECRETS.UPDATE.secretValue),
secretPath: z
.string()
.trim()
.transform(removeTrailingSlash)
.optional()
.describe(RAW_SECRETS.UPDATE.secretPath),
secretComment: z.string().trim().optional().describe(RAW_SECRETS.UPDATE.secretComment),
skipMultilineEncoding: z.boolean().optional().describe(RAW_SECRETS.UPDATE.skipMultilineEncoding),
newSecretName: SecretNameSchema.optional().describe(RAW_SECRETS.UPDATE.newSecretName),
tagIds: z.string().array().optional().describe(RAW_SECRETS.UPDATE.tagIds),
secretReminderNote: z.string().optional().nullable().describe(RAW_SECRETS.UPDATE.secretReminderNote),
secretReminderNote: z
.string()
.max(1024, "Secret reminder note cannot exceed 1024 characters")
.optional()
.nullable()
.describe(RAW_SECRETS.UPDATE.secretReminderNote),
secretMetadata: ResourceMetadataSchema.optional(),
secretReminderRepeatDays: z
.number()
@ -2073,7 +2100,8 @@ export const registerSecretRouter = async (server: FastifyZodProvider) => {
environment,
projectSlug,
projectId: req.body.workspaceId,
secrets: inputSecrets
secrets: inputSecrets,
mode: req.body.mode
});
if (secretOperation.type === SecretProtectionType.Approval) {
return { approval: secretOperation.approval };
@ -2092,15 +2120,39 @@ export const registerSecretRouter = async (server: FastifyZodProvider) => {
metadata: {
environment: req.body.environment,
secretPath: req.body.secretPath,
secrets: secrets.map((secret) => ({
secretId: secret.id,
secretKey: secret.secretKey,
secretVersion: secret.version,
secretMetadata: secretMetadataMap.get(secret.secretKey)
}))
secrets: secrets
.filter((el) => el.version > 1)
.map((secret) => ({
secretId: secret.id,
secretPath: secret.secretPath,
secretKey: secret.secretKey,
secretVersion: secret.version,
secretMetadata: secretMetadataMap.get(secret.secretKey)
}))
}
}
});
const createdSecrets = secrets.filter((el) => el.version === 1);
if (createdSecrets.length) {
await server.services.auditLog.createAuditLog({
projectId: secrets[0].workspace,
...req.auditLogInfo,
event: {
type: EventType.CREATE_SECRETS,
metadata: {
environment: req.body.environment,
secretPath: req.body.secretPath,
secrets: createdSecrets.map((secret) => ({
secretId: secret.id,
secretPath: secret.secretPath,
secretKey: secret.secretKey,
secretVersion: secret.version,
secretMetadata: secretMetadataMap.get(secret.secretKey)
}))
}
}
});
}
await server.services.telemetry.sendPostHogEvents({
event: PostHogEventTypes.SecretUpdated,

@ -22,18 +22,19 @@ import {
TUpdateAppConnectionDTO,
TValidateAppConnectionCredentials
} from "@app/services/app-connection/app-connection-types";
import { ValidateAwsConnectionCredentialsSchema } from "@app/services/app-connection/aws";
import { ValidateDatabricksConnectionCredentialsSchema } from "@app/services/app-connection/databricks";
import { databricksConnectionService } from "@app/services/app-connection/databricks/databricks-connection-service";
import { ValidateGitHubConnectionCredentialsSchema } from "@app/services/app-connection/github";
import { githubConnectionService } from "@app/services/app-connection/github/github-connection-service";
import { TKmsServiceFactory } from "@app/services/kms/kms-service";
import { TAppConnectionDALFactory } from "./app-connection-dal";
import { ValidateAwsConnectionCredentialsSchema } from "./aws";
import { awsConnectionService } from "./aws/aws-connection-service";
import { ValidateAzureAppConfigurationConnectionCredentialsSchema } from "./azure-app-configuration";
import { ValidateAzureKeyVaultConnectionCredentialsSchema } from "./azure-key-vault";
import { ValidateDatabricksConnectionCredentialsSchema } from "./databricks";
import { databricksConnectionService } from "./databricks/databricks-connection-service";
import { ValidateGcpConnectionCredentialsSchema } from "./gcp";
import { gcpConnectionService } from "./gcp/gcp-connection-service";
import { ValidateGitHubConnectionCredentialsSchema } from "./github";
import { githubConnectionService } from "./github/github-connection-service";
export type TAppConnectionServiceFactoryDep = {
appConnectionDAL: TAppConnectionDALFactory;
@ -369,6 +370,7 @@ export const appConnectionServiceFactory = ({
listAvailableAppConnectionsForUser,
github: githubConnectionService(connectAppConnectionById),
gcp: gcpConnectionService(connectAppConnectionById),
databricks: databricksConnectionService(connectAppConnectionById, appConnectionDAL, kmsService)
databricks: databricksConnectionService(connectAppConnectionById, appConnectionDAL, kmsService),
aws: awsConnectionService(connectAppConnectionById)
};
};

@ -1,3 +1,4 @@
import { AWSRegion } from "@app/services/app-connection/app-connection-enums";
import {
TAwsConnection,
TAwsConnectionConfig,
@ -16,6 +17,7 @@ import {
TGitHubConnectionInput,
TValidateGitHubConnectionCredentials
} from "@app/services/app-connection/github";
import { SecretSync } from "@app/services/secret-sync/secret-sync-enums";
import {
TAzureAppConfigurationConnection,
@ -73,3 +75,9 @@ export type TValidateAppConnectionCredentials =
| TValidateAzureKeyVaultConnectionCredentials
| TValidateAzureAppConfigurationConnectionCredentials
| TValidateDatabricksConnectionCredentials;
export type TListAwsConnectionKmsKeys = {
connectionId: string;
region: AWSRegion;
destination: SecretSync.AWSParameterStore | SecretSync.AWSSecretsManager;
};

@ -0,0 +1,88 @@
import AWS from "aws-sdk";
import { OrgServiceActor } from "@app/lib/types";
import { AppConnection } from "@app/services/app-connection/app-connection-enums";
import { TListAwsConnectionKmsKeys } from "@app/services/app-connection/app-connection-types";
import { getAwsConnectionConfig } from "@app/services/app-connection/aws/aws-connection-fns";
import { TAwsConnection } from "@app/services/app-connection/aws/aws-connection-types";
import { SecretSync } from "@app/services/secret-sync/secret-sync-enums";
type TGetAppConnectionFunc = (
app: AppConnection,
connectionId: string,
actor: OrgServiceActor
) => Promise<TAwsConnection>;
const listAwsKmsKeys = async (
appConnection: TAwsConnection,
{ region, destination }: Pick<TListAwsConnectionKmsKeys, "region" | "destination">
) => {
const { credentials } = await getAwsConnectionConfig(appConnection, region);
const awsKms = new AWS.KMS({
credentials,
region
});
const aliasEntries: AWS.KMS.AliasList = [];
let aliasMarker: string | undefined;
do {
// eslint-disable-next-line no-await-in-loop
const response = await awsKms.listAliases({ Limit: 100, Marker: aliasMarker }).promise();
aliasEntries.push(...(response.Aliases || []));
aliasMarker = response.NextMarker;
} while (aliasMarker);
const keyMetadataRecord: Record<string, AWS.KMS.KeyMetadata | undefined> = {};
for await (const aliasEntry of aliasEntries) {
if (aliasEntry.TargetKeyId) {
const keyDescription = await awsKms.describeKey({ KeyId: aliasEntry.TargetKeyId }).promise();
keyMetadataRecord[aliasEntry.TargetKeyId] = keyDescription.KeyMetadata;
}
}
const validAliasEntries = aliasEntries.filter((aliasEntry) => {
if (!aliasEntry.TargetKeyId) return false;
if (destination === SecretSync.AWSParameterStore && aliasEntry.AliasName === "alias/aws/ssm") return true;
if (destination === SecretSync.AWSSecretsManager && aliasEntry.AliasName === "alias/aws/secretsmanager")
return true;
if (aliasEntry.AliasName?.includes("alias/aws/")) return false;
const keyMetadata = keyMetadataRecord[aliasEntry.TargetKeyId];
if (!keyMetadata || keyMetadata.KeyUsage !== "ENCRYPT_DECRYPT" || keyMetadata.KeySpec !== "SYMMETRIC_DEFAULT")
return false;
return true;
});
const kmsKeys = validAliasEntries.map((aliasEntry) => {
return {
id: aliasEntry.TargetKeyId!,
alias: aliasEntry.AliasName!
};
});
return kmsKeys;
};
export const awsConnectionService = (getAppConnection: TGetAppConnectionFunc) => {
const listKmsKeys = async (
{ connectionId, region, destination }: TListAwsConnectionKmsKeys,
actor: OrgServiceActor
) => {
const appConnection = await getAppConnection(AppConnection.AWS, connectionId, actor);
const kmsKeys = await listAwsKmsKeys(appConnection, { region, destination });
return kmsKeys;
};
return {
listKmsKeys
};
};

@ -1,6 +1,7 @@
import crypto from "node:crypto";
import bcrypt from "bcrypt";
import jwt from "jsonwebtoken";
import { Knex } from "knex";
import { TAuthTokens, TAuthTokenSessions } from "@app/db/schemas";
@ -8,7 +9,7 @@ import { getConfig } from "@app/lib/config/env";
import { ForbiddenRequestError, NotFoundError, UnauthorizedError } from "@app/lib/errors";
import { TOrgMembershipDALFactory } from "@app/services/org-membership/org-membership-dal";
import { AuthModeJwtTokenPayload } from "../auth/auth-type";
import { AuthModeJwtTokenPayload, AuthModeRefreshJwtTokenPayload, AuthTokenType } from "../auth/auth-type";
import { TUserDALFactory } from "../user/user-dal";
import { TTokenDALFactory } from "./auth-token-dal";
import { TCreateTokenForUserDTO, TIssueAuthTokenDTO, TokenType, TValidateTokenForUserDTO } from "./auth-token-types";
@ -150,6 +151,40 @@ export const tokenServiceFactory = ({ tokenDAL, userDAL, orgMembershipDAL }: TAu
const revokeAllMySessions = async (userId: string) => tokenDAL.deleteTokenSession({ userId });
const validateRefreshToken = async (refreshToken?: string) => {
const appCfg = getConfig();
if (!refreshToken)
throw new NotFoundError({
name: "AuthTokenNotFound",
message: "Failed to find refresh token"
});
const decodedToken = jwt.verify(refreshToken, appCfg.AUTH_SECRET) as AuthModeRefreshJwtTokenPayload;
if (decodedToken.authTokenType !== AuthTokenType.REFRESH_TOKEN)
throw new UnauthorizedError({
message: "The token provided is not a refresh token",
name: "InvalidToken"
});
const tokenVersion = await getUserTokenSessionById(decodedToken.tokenVersionId, decodedToken.userId);
if (!tokenVersion)
throw new UnauthorizedError({
message: "Valid token version not found",
name: "InvalidToken"
});
if (decodedToken.refreshVersion !== tokenVersion.refreshVersion) {
throw new UnauthorizedError({
message: "Token version mismatch",
name: "InvalidToken"
});
}
return { decodedToken, tokenVersion };
};
// to parse jwt identity in inject identity plugin
const fnValidateJwtIdentity = async (token: AuthModeJwtTokenPayload) => {
const session = await tokenDAL.findOneTokenSession({
@ -188,6 +223,7 @@ export const tokenServiceFactory = ({ tokenDAL, userDAL, orgMembershipDAL }: TAu
clearTokenSessionById,
getTokenSessionByUser,
revokeAllMySessions,
validateRefreshToken,
fnValidateJwtIdentity,
getUserTokenSessionById
};

@ -35,6 +35,7 @@ export enum AuthMode {
export enum ActorType { // would extend to AWS, Azure, ...
PLATFORM = "platform", // Useful for when we want to perform logging on automated actions such as integration syncs.
KMIP_CLIENT = "kmipClient",
USER = "user", // userIdentity
SERVICE = "service",
IDENTITY = "identity",

@ -1,5 +1,7 @@
import { z } from "zod";
import { isValidIp } from "@app/lib/ip";
const isValidDate = (dateString: string) => {
const date = new Date(dateString);
return !Number.isNaN(date.getTime());
@ -25,7 +27,7 @@ export const validateAltNamesField = z
if (data === "") return true;
// Split and validate each alt name
return data.split(", ").every((name) => {
return hostnameRegex.test(name) || z.string().email().safeParse(name).success;
return hostnameRegex.test(name) || z.string().email().safeParse(name).success || isValidIp(name);
});
},
{

@ -40,3 +40,9 @@ export const isCertChainValid = async (certificates: x509.X509Certificate[]) =>
// chain.build() implicitly verifies the chain
return chainItems.length === certificates.length;
};
export const constructPemChainFromCerts = (certificates: x509.X509Certificate[]) =>
certificates
.map((cert) => cert.toString("pem"))
.join("\n")
.trim();
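A small usage sketch for the new helper, assuming the certificates come from the same x509 namespace used in this module (which appears to be @peculiar/x509); the PEM inputs are placeholders:

import * as x509 from "@peculiar/x509";

// Placeholder PEM strings stand in for real certificates.
declare const leafPem: string;
declare const intermediatePem: string;

const chain = [new x509.X509Certificate(leafPem), new x509.X509Certificate(intermediatePem)];
// The helper joins each cert.toString("pem") with newlines and trims the result,
// producing a single PEM bundle ready to be stored or served.
const pemChain = constructPemChainFromCerts(chain);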

@ -2257,7 +2257,9 @@ const syncSecretsFlyio = async ({
}
`;
await request.post(
type TFlyioErrors = { message: string }[];
const setSecretsResp = await request.post<{ errors?: TFlyioErrors }>(
IntegrationUrls.FLYIO_API_URL,
{
query: SetSecrets,
@ -2279,6 +2281,10 @@ const syncSecretsFlyio = async ({
}
);
if (setSecretsResp.data.errors?.length) {
throw new Error(JSON.stringify(setSecretsResp.data.errors));
}
// get secrets
interface FlyioSecret {
name: string;

@ -93,6 +93,32 @@ export const kmskeyDALFactory = (db: TDbClient) => {
}
};
const findProjectCmeks = async (projectId: string, tx?: Knex) => {
try {
const result = await (tx || db.replicaNode())(TableName.KmsKey)
.where({
[`${TableName.KmsKey}.projectId` as "projectId"]: projectId,
[`${TableName.KmsKey}.isReserved` as "isReserved"]: false
})
.join(TableName.Organization, `${TableName.KmsKey}.orgId`, `${TableName.Organization}.id`)
.join(TableName.InternalKms, `${TableName.KmsKey}.id`, `${TableName.InternalKms}.kmsKeyId`)
.select(selectAllTableCols(TableName.KmsKey))
.select(
db.ref("encryptionAlgorithm").withSchema(TableName.InternalKms).as("internalKmsEncryptionAlgorithm"),
db.ref("version").withSchema(TableName.InternalKms).as("internalKmsVersion")
);
return result.map((entry) => ({
...KmsKeysSchema.parse(entry),
isActive: !entry.isDisabled,
algorithm: entry.internalKmsEncryptionAlgorithm,
version: entry.internalKmsVersion
}));
} catch (error) {
throw new DatabaseError({ error, name: "Find project cmeks" });
}
};
const listCmeksByProjectId = async (
{
projectId,
@ -167,5 +193,5 @@ export const kmskeyDALFactory = (db: TDbClient) => {
}
};
return { ...kmsOrm, findByIdWithAssociatedKms, listCmeksByProjectId, findCmekById, findCmekByName };
return { ...kmsOrm, findByIdWithAssociatedKms, listCmeksByProjectId, findCmekById, findCmekByName, findProjectCmeks };
};

@ -1,8 +1,9 @@
import { Knex } from "knex";
import { TDbClient } from "@app/db";
import { TableName } from "@app/db/schemas";
import { DatabaseError } from "@app/lib/errors";
import { ormify } from "@app/lib/knex";
import { Knex } from "knex";
export type TKmsRootConfigDALFactory = ReturnType<typeof kmsRootConfigDALFactory>;

@ -37,6 +37,8 @@ import {
TEncryptWithKmsDataKeyDTO,
TEncryptWithKmsDTO,
TGenerateKMSDTO,
TGetKeyMaterialDTO,
TImportKeyMaterialDTO,
TUpdateProjectSecretManagerKmsKeyDTO
} from "./kms-types";
@ -325,6 +327,72 @@ export const kmsServiceFactory = ({
};
};
const getKeyMaterial = async ({ kmsId }: TGetKeyMaterialDTO) => {
const kmsDoc = await kmsDAL.findByIdWithAssociatedKms(kmsId);
if (!kmsDoc) {
throw new NotFoundError({ message: `KMS with ID '${kmsId}' not found` });
}
if (kmsDoc.isReserved) {
throw new BadRequestError({
message: "Cannot get key material for reserved key"
});
}
if (kmsDoc.externalKms) {
throw new BadRequestError({
message: "Cannot get key material for external key"
});
}
const keyCipher = symmetricCipherService(SymmetricEncryption.AES_GCM_256);
const kmsKey = keyCipher.decrypt(kmsDoc.internalKms?.encryptedKey as Buffer, ROOT_ENCRYPTION_KEY);
return kmsKey;
};
const importKeyMaterial = async (
{ key, algorithm, name, isReserved, projectId, orgId }: TImportKeyMaterialDTO,
tx?: Knex
) => {
const cipher = symmetricCipherService(SymmetricEncryption.AES_GCM_256);
const expectedByteLength = getByteLengthForAlgorithm(algorithm);
if (key.byteLength !== expectedByteLength) {
throw new BadRequestError({
message: `Invalid key length for ${algorithm}. Expected ${expectedByteLength} bytes but got ${key.byteLength} bytes`
});
}
const encryptedKeyMaterial = cipher.encrypt(key, ROOT_ENCRYPTION_KEY);
const sanitizedName = name ? slugify(name) : slugify(alphaNumericNanoId(8).toLowerCase());
const dbQuery = async (db: Knex) => {
const kmsDoc = await kmsDAL.create(
{
name: sanitizedName,
orgId,
isReserved,
projectId
},
db
);
await internalKmsDAL.create(
{
version: 1,
encryptedKey: encryptedKeyMaterial,
encryptionAlgorithm: algorithm,
kmsKeyId: kmsDoc.id
},
db
);
return kmsDoc;
};
if (tx) return dbQuery(tx);
const doc = await kmsDAL.transaction(async (tx2) => dbQuery(tx2));
return doc;
};
const encryptWithKmsKey = async ({ kmsId }: Omit<TEncryptWithKmsDTO, "plainText">, tx?: Knex) => {
const kmsDoc = await kmsDAL.findByIdWithAssociatedKms(kmsId, tx);
if (!kmsDoc) {
@ -944,6 +1012,8 @@ export const kmsServiceFactory = ({
getProjectKeyBackup,
loadProjectKeyBackup,
getKmsById,
createCipherPairWithDataKey
createCipherPairWithDataKey,
getKeyMaterial,
importKeyMaterial
};
};
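A hedged sketch of round-tripping key material through the two new helpers; the caller, IDs, and key name are invented for illustration:

import crypto from "node:crypto";

// Illustrative only: import a random 256-bit key, then read it back.
const rawKey = crypto.randomBytes(32); // AES-256-GCM key material is 32 bytes

const importedKey = await kmsService.importKeyMaterial({
  key: rawKey,
  algorithm: SymmetricEncryption.AES_GCM_256,
  name: "imported-example-key", // optional; slugified when provided
  isReserved: false,
  projectId: exampleProjectId, // hypothetical project and org IDs
  orgId: exampleOrgId
});

const fetched = await kmsService.getKeyMaterial({ kmsId: importedKey.id });
// fetched.equals(rawKey) === true for internal, non-reserved keys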

@ -61,3 +61,15 @@ export enum RootKeyEncryptionStrategy {
Software = "SOFTWARE",
HSM = "HSM"
}
export type TGetKeyMaterialDTO = {
kmsId: string;
};
export type TImportKeyMaterialDTO = {
key: Buffer;
algorithm: SymmetricEncryption;
name?: string;
isReserved: boolean;
projectId: string;
orgId: string;
};

@ -34,6 +34,25 @@ export const secretSharingServiceFactory = ({
orgDAL,
kmsService
}: TSecretSharingServiceFactoryDep) => {
const $validateSharedSecretExpiry = (expiresAt: string) => {
if (new Date(expiresAt) < new Date()) {
throw new BadRequestError({ message: "Expiration date cannot be in the past" });
}
// Limit Expiry Time to 1 month
const expiryTime = new Date(expiresAt).getTime();
const currentTime = new Date().getTime();
const thirtyDays = 30 * 24 * 60 * 60 * 1000;
if (expiryTime - currentTime > thirtyDays) {
throw new BadRequestError({ message: "Expiration date cannot be more than 30 days" });
}
const fiveMins = 5 * 60 * 1000;
if (expiryTime - currentTime < fiveMins) {
throw new BadRequestError({ message: "Expiration time cannot be less than 5 mins" });
}
};
const createSharedSecret = async ({
actor,
actorId,
@ -49,18 +68,7 @@ export const secretSharingServiceFactory = ({
}: TCreateSharedSecretDTO) => {
const { permission } = await permissionService.getOrgPermission(actor, actorId, orgId, actorAuthMethod, actorOrgId);
if (!permission) throw new ForbiddenRequestError({ name: "User is not a part of the specified organization" });
if (new Date(expiresAt) < new Date()) {
throw new BadRequestError({ message: "Expiration date cannot be in the past" });
}
// Limit Expiry Time to 1 month
const expiryTime = new Date(expiresAt).getTime();
const currentTime = new Date().getTime();
const thirtyDays = 30 * 24 * 60 * 60 * 1000;
if (expiryTime - currentTime > thirtyDays) {
throw new BadRequestError({ message: "Expiration date cannot be more than 30 days" });
}
$validateSharedSecretExpiry(expiresAt);
if (secretValue.length > 10_000) {
throw new BadRequestError({ message: "Shared secret value too long" });
@ -100,17 +108,7 @@ export const secretSharingServiceFactory = ({
expiresAfterViews,
accessType
}: TCreatePublicSharedSecretDTO) => {
if (new Date(expiresAt) < new Date()) {
throw new BadRequestError({ message: "Expiration date cannot be in the past" });
}
// Limit Expiry Time to 1 month
const expiryTime = new Date(expiresAt).getTime();
const currentTime = new Date().getTime();
const thirtyDays = 30 * 24 * 60 * 60 * 1000;
if (expiryTime - currentTime > thirtyDays) {
throw new BadRequestError({ message: "Expiration date cannot exceed more than 30 days" });
}
$validateSharedSecretExpiry(expiresAt);
const encryptWithRoot = kmsService.encryptWithRootKey();
const encryptedSecret = encryptWithRoot(Buffer.from(secretValue));

@ -7,6 +7,8 @@ import { TSecretMap } from "@app/services/secret-sync/secret-sync-types";
import { TAwsParameterStoreSyncWithCredentials } from "./aws-parameter-store-sync-types";
type TAWSParameterStoreRecord = Record<string, AWS.SSM.Parameter>;
type TAWSParameterStoreMetadataRecord = Record<string, AWS.SSM.ParameterMetadata>;
type TAWSParameterStoreTagsRecord = Record<string, Record<string, string>>;
const MAX_RETRIES = 5;
const BATCH_SIZE = 10;
@ -80,6 +82,129 @@ const getParametersByPath = async (ssm: AWS.SSM, path: string): Promise<TAWSPara
return awsParameterStoreSecretsRecord;
};
const getParameterMetadataByPath = async (ssm: AWS.SSM, path: string): Promise<TAWSParameterStoreMetadataRecord> => {
const awsParameterStoreMetadataRecord: TAWSParameterStoreMetadataRecord = {};
let hasNext = true;
let nextToken: string | undefined;
let attempt = 0;
while (hasNext) {
try {
// eslint-disable-next-line no-await-in-loop
const parameters = await ssm
.describeParameters({
MaxResults: 10,
NextToken: nextToken,
ParameterFilters: [
{
Key: "Path",
Option: "OneLevel",
Values: [path]
}
]
})
.promise();
attempt = 0;
if (parameters.Parameters) {
parameters.Parameters.forEach((parameter) => {
if (parameter.Name) {
// no leading slash if path is '/'
const secKey = path.length > 1 ? parameter.Name.substring(path.length) : parameter.Name;
awsParameterStoreMetadataRecord[secKey] = parameter;
}
});
}
hasNext = Boolean(parameters.NextToken);
nextToken = parameters.NextToken;
} catch (e) {
if ((e as AWSError).code === "ThrottlingException" && attempt < MAX_RETRIES) {
attempt += 1;
// eslint-disable-next-line no-await-in-loop
await sleep();
// eslint-disable-next-line no-continue
continue;
}
throw e;
}
}
return awsParameterStoreMetadataRecord;
};
const getParameterStoreTagsRecord = async (
ssm: AWS.SSM,
awsParameterStoreSecretsRecord: TAWSParameterStoreRecord,
needsTagsPermissions: boolean
): Promise<{ shouldManageTags: boolean; awsParameterStoreTagsRecord: TAWSParameterStoreTagsRecord }> => {
const awsParameterStoreTagsRecord: TAWSParameterStoreTagsRecord = {};
for await (const entry of Object.entries(awsParameterStoreSecretsRecord)) {
const [key, parameter] = entry;
if (!parameter.Name) {
// eslint-disable-next-line no-continue
continue;
}
try {
const tags = await ssm
.listTagsForResource({
ResourceType: "Parameter",
ResourceId: parameter.Name
})
.promise();
awsParameterStoreTagsRecord[key] = Object.fromEntries(tags.TagList?.map((tag) => [tag.Key, tag.Value]) ?? []);
} catch (e) {
// users aren't required to grant tag permissions to use sync, so if access is denied
// and they aren't trying to configure tags, handle it gracefully
if ((e as AWSError).code === "AccessDeniedException") {
if (!needsTagsPermissions) {
return { shouldManageTags: false, awsParameterStoreTagsRecord: {} };
}
throw new SecretSyncError({
message:
"IAM role has inadequate permissions to manage resource tags. Ensure the following polices are present: ssm:ListTagsForResource, ssm:AddTagsToResource, and ssm:RemoveTagsFromResource",
shouldRetry: false
});
}
throw e;
}
}
return { shouldManageTags: true, awsParameterStoreTagsRecord };
};
const processParameterTags = ({
syncTagsRecord,
awsTagsRecord
}: {
syncTagsRecord: Record<string, string>;
awsTagsRecord: Record<string, string>;
}) => {
const tagsToAdd: AWS.SSM.TagList = [];
const tagKeysToRemove: string[] = [];
for (const syncEntry of Object.entries(syncTagsRecord)) {
const [syncKey, syncValue] = syncEntry;
if (!(syncKey in awsTagsRecord) || syncValue !== awsTagsRecord[syncKey])
tagsToAdd.push({ Key: syncKey, Value: syncValue });
}
for (const awsKey of Object.keys(awsTagsRecord)) {
if (!(awsKey in syncTagsRecord)) tagKeysToRemove.push(awsKey);
}
return { tagsToAdd, tagKeysToRemove };
};
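To illustrate the diffing rule above with made-up values: keys from the sync config are added or updated, and AWS-side keys absent from the config are scheduled for removal.

// Illustrative input only; values are not taken from the change itself.
const { tagsToAdd, tagKeysToRemove } = processParameterTags({
  syncTagsRecord: { env: "prod", team: "platform" },
  awsTagsRecord: { env: "staging", owner: "legacy" }
});
// tagsToAdd       -> [{ Key: "env", Value: "prod" }, { Key: "team", Value: "platform" }]
// tagKeysToRemove -> ["owner"]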
const putParameter = async (
ssm: AWS.SSM,
params: AWS.SSM.PutParameterRequest,
@ -98,6 +223,42 @@ const putParameter = async (
}
};
const addTagsToParameter = async (
ssm: AWS.SSM,
params: Omit<AWS.SSM.AddTagsToResourceRequest, "ResourceType">,
attempt = 0
): Promise<AWS.SSM.AddTagsToResourceResult> => {
try {
return await ssm.addTagsToResource({ ...params, ResourceType: "Parameter" }).promise();
} catch (error) {
if ((error as AWSError).code === "ThrottlingException" && attempt < MAX_RETRIES) {
await sleep();
// retry
return addTagsToParameter(ssm, params, attempt + 1);
}
throw error;
}
};
const removeTagsFromParameter = async (
ssm: AWS.SSM,
params: Omit<AWS.SSM.RemoveTagsFromResourceRequest, "ResourceType">,
attempt = 0
): Promise<AWS.SSM.RemoveTagsFromResourceResult> => {
try {
return await ssm.removeTagsFromResource({ ...params, ResourceType: "Parameter" }).promise();
} catch (error) {
if ((error as AWSError).code === "ThrottlingException" && attempt < MAX_RETRIES) {
await sleep();
// retry
return removeTagsFromParameter(ssm, params, attempt + 1);
}
throw error;
}
};
const deleteParametersBatch = async (
ssm: AWS.SSM,
parameters: AWS.SSM.Parameter[],
@ -132,35 +293,92 @@ const deleteParametersBatch = async (
export const AwsParameterStoreSyncFns = {
syncSecrets: async (secretSync: TAwsParameterStoreSyncWithCredentials, secretMap: TSecretMap) => {
const { destinationConfig } = secretSync;
const { destinationConfig, syncOptions } = secretSync;
const ssm = await getSSM(secretSync);
// TODO(scott): KMS Key ID, Tags
const awsParameterStoreSecretsRecord = await getParametersByPath(ssm, destinationConfig.path);
for await (const entry of Object.entries(secretMap)) {
const [key, { value }] = entry;
const awsParameterStoreMetadataRecord = await getParameterMetadataByPath(ssm, destinationConfig.path);
// skip empty values (not allowed by AWS) or secrets that haven't changed
if (!value || (key in awsParameterStoreSecretsRecord && awsParameterStoreSecretsRecord[key].Value === value)) {
const { shouldManageTags, awsParameterStoreTagsRecord } = await getParameterStoreTagsRecord(
ssm,
awsParameterStoreSecretsRecord,
Boolean(syncOptions.tags?.length || syncOptions.syncSecretMetadataAsTags)
);
const syncTagsRecord = Object.fromEntries(syncOptions.tags?.map((tag) => [tag.key, tag.value]) ?? []);
for await (const entry of Object.entries(secretMap)) {
const [key, { value, secretMetadata }] = entry;
// skip empty values (not allowed by AWS)
if (!value) {
// eslint-disable-next-line no-continue
continue;
}
try {
await putParameter(ssm, {
Name: `${destinationConfig.path}${key}`,
Type: "SecureString",
Value: value,
Overwrite: true
});
} catch (error) {
throw new SecretSyncError({
error,
secretKey: key
const keyId = syncOptions.keyId ?? "alias/aws/ssm";
// create parameter or update if changed
if (
!(key in awsParameterStoreSecretsRecord) ||
value !== awsParameterStoreSecretsRecord[key].Value ||
keyId !== awsParameterStoreMetadataRecord[key]?.KeyId
) {
try {
await putParameter(ssm, {
Name: `${destinationConfig.path}${key}`,
Type: "SecureString",
Value: value,
Overwrite: true,
KeyId: keyId
});
} catch (error) {
throw new SecretSyncError({
error,
secretKey: key
});
}
}
if (shouldManageTags) {
const { tagsToAdd, tagKeysToRemove } = processParameterTags({
syncTagsRecord: {
// configured sync tags take precedence over secret metadata
...(syncOptions.syncSecretMetadataAsTags &&
Object.fromEntries(secretMetadata?.map((tag) => [tag.key, tag.value]) ?? [])),
...syncTagsRecord
},
awsTagsRecord: awsParameterStoreTagsRecord[key] ?? {}
});
if (tagsToAdd.length) {
try {
await addTagsToParameter(ssm, {
ResourceId: `${destinationConfig.path}${key}`,
Tags: tagsToAdd
});
} catch (error) {
throw new SecretSyncError({
error,
secretKey: key
});
}
}
if (tagKeysToRemove.length) {
try {
await removeTagsFromParameter(ssm, {
ResourceId: `${destinationConfig.path}${key}`,
TagKeys: tagKeysToRemove
});
} catch (error) {
throw new SecretSyncError({
error,
secretKey: key
});
}
}
}
}

@ -8,6 +8,7 @@ import {
GenericCreateSecretSyncFieldsSchema,
GenericUpdateSecretSyncFieldsSchema
} from "@app/services/secret-sync/secret-sync-schemas";
import { TSyncOptionsConfig } from "@app/services/secret-sync/secret-sync-types";
const AwsParameterStoreSyncDestinationConfigSchema = z.object({
region: z.nativeEnum(AWSRegion).describe(SecretSyncs.DESTINATION_CONFIG.AWS_PARAMETER_STORE.region),
@ -20,19 +21,68 @@ const AwsParameterStoreSyncDestinationConfigSchema = z.object({
.describe(SecretSyncs.DESTINATION_CONFIG.AWS_PARAMETER_STORE.path)
});
export const AwsParameterStoreSyncSchema = BaseSecretSyncSchema(SecretSync.AWSParameterStore).extend({
const AwsParameterStoreSyncOptionsSchema = z.object({
keyId: z
.string()
.regex(/^([a-zA-Z0-9:/_-]+)$/, "Invalid KMS Key ID")
.min(1, "Invalid KMS Key ID")
.max(256, "Invalid KMS Key ID")
.optional()
.describe(SecretSyncs.ADDITIONAL_SYNC_OPTIONS.AWS_PARAMETER_STORE.keyId),
tags: z
.object({
key: z
.string()
.regex(
/^([\p{L}\p{Z}\p{N}_.:/=+\-@]*)$/u,
"Invalid resource tag key: keys can only contain Unicode letters, digits, white space and any of the following: _.:/=+@-"
)
.min(1, "Resource tag key required")
.max(128, "Resource tag key cannot exceed 128 characters"),
value: z
.string()
.regex(
/^([\p{L}\p{Z}\p{N}_.:/=+\-@]*)$/u,
"Invalid resource tag value: tag values can only contain Unicode letters, digits, white space and any of the following: _.:/=+@-"
)
.max(256, "Resource tag value cannot exceed 256 characters")
})
.array()
.max(50)
.refine((items) => new Set(items.map((item) => item.key)).size === items.length, {
message: "Resource tag keys must be unique"
})
.optional()
.describe(SecretSyncs.ADDITIONAL_SYNC_OPTIONS.AWS_PARAMETER_STORE.tags),
syncSecretMetadataAsTags: z
.boolean()
.optional()
.describe(SecretSyncs.ADDITIONAL_SYNC_OPTIONS.AWS_PARAMETER_STORE.syncSecretMetadataAsTags)
});
const AwsParameterStoreSyncOptionsConfig: TSyncOptionsConfig = { canImportSecrets: true };
export const AwsParameterStoreSyncSchema = BaseSecretSyncSchema(
SecretSync.AWSParameterStore,
AwsParameterStoreSyncOptionsConfig,
AwsParameterStoreSyncOptionsSchema
).extend({
destination: z.literal(SecretSync.AWSParameterStore),
destinationConfig: AwsParameterStoreSyncDestinationConfigSchema
});
export const CreateAwsParameterStoreSyncSchema = GenericCreateSecretSyncFieldsSchema(
SecretSync.AWSParameterStore
SecretSync.AWSParameterStore,
AwsParameterStoreSyncOptionsConfig,
AwsParameterStoreSyncOptionsSchema
).extend({
destinationConfig: AwsParameterStoreSyncDestinationConfigSchema
});
export const UpdateAwsParameterStoreSyncSchema = GenericUpdateSecretSyncFieldsSchema(
SecretSync.AWSParameterStore
SecretSync.AWSParameterStore,
AwsParameterStoreSyncOptionsConfig,
AwsParameterStoreSyncOptionsSchema
).extend({
destinationConfig: AwsParameterStoreSyncDestinationConfigSchema.optional()
});
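As a concrete, made-up illustration of what the new AWS-specific sync options accept:

// Illustrative payload only; every value here is invented.
const parsedOptions = AwsParameterStoreSyncOptionsSchema.parse({
  keyId: "alias/my-custom-kms-key", // optional; the sync falls back to "alias/aws/ssm" when omitted
  tags: [{ key: "managed-by", value: "infisical" }],
  syncSecretMetadataAsTags: true
});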

@ -1,16 +1,28 @@
import { UntagResourceCommandOutput } from "@aws-sdk/client-kms";
import {
BatchGetSecretValueCommand,
CreateSecretCommand,
CreateSecretCommandInput,
DeleteSecretCommand,
DeleteSecretResponse,
DescribeSecretCommand,
DescribeSecretCommandInput,
ListSecretsCommand,
SecretsManagerClient,
TagResourceCommand,
TagResourceCommandOutput,
UntagResourceCommand,
UpdateSecretCommand,
UpdateSecretCommandInput
} from "@aws-sdk/client-secrets-manager";
import { AWSError } from "aws-sdk";
import { CreateSecretResponse, SecretListEntry, SecretValueEntry } from "aws-sdk/clients/secretsmanager";
import {
CreateSecretResponse,
DescribeSecretResponse,
SecretListEntry,
SecretValueEntry,
Tag
} from "aws-sdk/clients/secretsmanager";
import { getAwsConnectionConfig } from "@app/services/app-connection/aws/aws-connection-fns";
import { AwsSecretsManagerSyncMappingBehavior } from "@app/services/secret-sync/aws-secrets-manager/aws-secrets-manager-sync-enums";
@ -21,6 +33,7 @@ import { TAwsSecretsManagerSyncWithCredentials } from "./aws-secrets-manager-syn
type TAwsSecretsRecord = Record<string, SecretListEntry>;
type TAwsSecretValuesRecord = Record<string, SecretValueEntry>;
type TAwsSecretDescriptionsRecord = Record<string, DescribeSecretResponse>;
const MAX_RETRIES = 5;
const BATCH_SIZE = 20;
@ -135,6 +148,46 @@ const getSecretValuesRecord = async (
return awsSecretValuesRecord;
};
const describeSecret = async (
client: SecretsManagerClient,
input: DescribeSecretCommandInput,
attempt = 0
): Promise<DescribeSecretResponse> => {
try {
return await client.send(new DescribeSecretCommand(input));
} catch (error) {
if ((error as AWSError).code === "ThrottlingException" && attempt < MAX_RETRIES) {
await sleep();
// retry
return describeSecret(client, input, attempt + 1);
}
throw error;
}
};
const getSecretDescriptionsRecord = async (
client: SecretsManagerClient,
awsSecretsRecord: TAwsSecretsRecord
): Promise<TAwsSecretDescriptionsRecord> => {
const awsSecretDescriptionsRecord: TAwsSecretDescriptionsRecord = {};
for await (const secretKey of Object.keys(awsSecretsRecord)) {
try {
awsSecretDescriptionsRecord[secretKey] = await describeSecret(client, {
SecretId: secretKey
});
} catch (error) {
throw new SecretSyncError({
secretKey,
error
});
}
}
return awsSecretDescriptionsRecord;
};
const createSecret = async (
client: SecretsManagerClient,
input: CreateSecretCommandInput,
@ -189,9 +242,71 @@ const deleteSecret = async (
}
};
const addTags = async (
client: SecretsManagerClient,
secretKey: string,
tags: Tag[],
attempt = 0
): Promise<TagResourceCommandOutput> => {
try {
return await client.send(new TagResourceCommand({ SecretId: secretKey, Tags: tags }));
} catch (error) {
if ((error as AWSError).code === "ThrottlingException" && attempt < MAX_RETRIES) {
await sleep();
// retry
return addTags(client, secretKey, tags, attempt + 1);
}
throw error;
}
};
const removeTags = async (
client: SecretsManagerClient,
secretKey: string,
tagKeys: string[],
attempt = 0
): Promise<UntagResourceCommandOutput> => {
try {
return await client.send(new UntagResourceCommand({ SecretId: secretKey, TagKeys: tagKeys }));
} catch (error) {
if ((error as AWSError).code === "ThrottlingException" && attempt < MAX_RETRIES) {
await sleep();
// retry
return removeTags(client, secretKey, tagKeys, attempt + 1);
}
throw error;
}
};
const processTags = ({
syncTagsRecord,
awsTagsRecord
}: {
syncTagsRecord: Record<string, string>;
awsTagsRecord: Record<string, string>;
}) => {
const tagsToAdd: Tag[] = [];
const tagKeysToRemove: string[] = [];
for (const syncEntry of Object.entries(syncTagsRecord)) {
const [syncKey, syncValue] = syncEntry;
if (!(syncKey in awsTagsRecord) || syncValue !== awsTagsRecord[syncKey])
tagsToAdd.push({ Key: syncKey, Value: syncValue });
}
for (const awsKey of Object.keys(awsTagsRecord)) {
if (!(awsKey in syncTagsRecord)) tagKeysToRemove.push(awsKey);
}
return { tagsToAdd, tagKeysToRemove };
};
export const AwsSecretsManagerSyncFns = {
syncSecrets: async (secretSync: TAwsSecretsManagerSyncWithCredentials, secretMap: TSecretMap) => {
const { destinationConfig } = secretSync;
const { destinationConfig, syncOptions } = secretSync;
const client = await getSecretsManagerClient(secretSync);
@ -199,9 +314,15 @@ export const AwsSecretsManagerSyncFns = {
const awsValuesRecord = await getSecretValuesRecord(client, awsSecretsRecord);
const awsDescriptionsRecord = await getSecretDescriptionsRecord(client, awsSecretsRecord);
const syncTagsRecord = Object.fromEntries(syncOptions.tags?.map((tag) => [tag.key, tag.value]) ?? []);
const keyId = syncOptions.keyId ?? "alias/aws/secretsmanager";
if (destinationConfig.mappingBehavior === AwsSecretsManagerSyncMappingBehavior.OneToOne) {
for await (const entry of Object.entries(secretMap)) {
const [key, { value }] = entry;
const [key, { value, secretMetadata }] = entry;
// skip secrets that don't have a value set
if (!value) {
@ -211,15 +332,26 @@ export const AwsSecretsManagerSyncFns = {
if (awsSecretsRecord[key]) {
// skip secrets that haven't changed
if (awsValuesRecord[key]?.SecretString === value) {
// eslint-disable-next-line no-continue
continue;
if (awsValuesRecord[key]?.SecretString !== value || keyId !== awsDescriptionsRecord[key]?.KmsKeyId) {
try {
await updateSecret(client, {
SecretId: key,
SecretString: value,
KmsKeyId: keyId
});
} catch (error) {
throw new SecretSyncError({
error,
secretKey: key
});
}
}
} else {
try {
await updateSecret(client, {
SecretId: key,
SecretString: value
await createSecret(client, {
Name: key,
SecretString: value,
KmsKeyId: keyId
});
} catch (error) {
throw new SecretSyncError({
@ -227,12 +359,34 @@ export const AwsSecretsManagerSyncFns = {
secretKey: key
});
}
} else {
}
const { tagsToAdd, tagKeysToRemove } = processTags({
syncTagsRecord: {
// configured sync tags take precedence over secret metadata
...(syncOptions.syncSecretMetadataAsTags &&
Object.fromEntries(secretMetadata?.map((tag) => [tag.key, tag.value]) ?? [])),
...syncTagsRecord
},
awsTagsRecord: Object.fromEntries(
awsDescriptionsRecord[key]?.Tags?.map((tag) => [tag.Key!, tag.Value!]) ?? []
)
});
if (tagsToAdd.length) {
try {
await createSecret(client, {
Name: key,
SecretString: value
await addTags(client, key, tagsToAdd);
} catch (error) {
throw new SecretSyncError({
error,
secretKey: key
});
}
}
if (tagKeysToRemove.length) {
try {
await removeTags(client, key, tagKeysToRemove);
} catch (error) {
throw new SecretSyncError({
error,
@ -261,30 +415,45 @@ export const AwsSecretsManagerSyncFns = {
Object.fromEntries(Object.entries(secretMap).map(([key, secretData]) => [key, secretData.value]))
);
if (awsValuesRecord[destinationConfig.secretName]) {
if (awsSecretsRecord[destinationConfig.secretName]) {
await updateSecret(client, {
SecretId: destinationConfig.secretName,
SecretString: secretValue
SecretString: secretValue,
KmsKeyId: keyId
});
} else {
await createSecret(client, {
Name: destinationConfig.secretName,
SecretString: secretValue
SecretString: secretValue,
KmsKeyId: keyId
});
}
for await (const secretKey of Object.keys(awsSecretsRecord)) {
if (secretKey === destinationConfig.secretName) {
// eslint-disable-next-line no-continue
continue;
}
const { tagsToAdd, tagKeysToRemove } = processTags({
syncTagsRecord,
awsTagsRecord: Object.fromEntries(
awsDescriptionsRecord[destinationConfig.secretName]?.Tags?.map((tag) => [tag.Key!, tag.Value!]) ?? []
)
});
if (tagsToAdd.length) {
try {
await deleteSecret(client, secretKey);
await addTags(client, destinationConfig.secretName, tagsToAdd);
} catch (error) {
throw new SecretSyncError({
error,
secretKey
secretKey: destinationConfig.secretName
});
}
}
if (tagKeysToRemove.length) {
try {
await removeTags(client, destinationConfig.secretName, tagKeysToRemove);
} catch (error) {
throw new SecretSyncError({
error,
secretKey: destinationConfig.secretName
});
}
}

@ -9,6 +9,7 @@ import {
GenericCreateSecretSyncFieldsSchema,
GenericUpdateSecretSyncFieldsSchema
} from "@app/services/secret-sync/secret-sync-schemas";
import { TSyncOptionsConfig } from "@app/services/secret-sync/secret-sync-types";
const AwsSecretsManagerSyncDestinationConfigSchema = z
.discriminatedUnion("mappingBehavior", [
@ -38,22 +39,95 @@ const AwsSecretsManagerSyncDestinationConfigSchema = z
})
);
export const AwsSecretsManagerSyncSchema = BaseSecretSyncSchema(SecretSync.AWSSecretsManager).extend({
const AwsSecretsManagerSyncOptionsSchema = z.object({
keyId: z
.string()
.regex(/^([a-zA-Z0-9:/_-]+)$/, "Invalid KMS Key ID")
.min(1, "Invalid KMS Key ID")
.max(256, "Invalid KMS Key ID")
.optional()
.describe(SecretSyncs.ADDITIONAL_SYNC_OPTIONS.AWS_SECRETS_MANAGER.keyId),
tags: z
.object({
key: z
.string()
.regex(
/^([\p{L}\p{Z}\p{N}_.:/=+\-@]*)$/u,
"Invalid tag key: keys can only contain Unicode letters, digits, white space and any of the following: _.:/=+@-"
)
.min(1, "Tag key required")
.max(128, "Tag key cannot exceed 128 characters"),
value: z
.string()
.regex(
/^([\p{L}\p{Z}\p{N}_.:/=+\-@]*)$/u,
"Invalid tag value: tag values can only contain Unicode letters, digits, white space and any of the following: _.:/=+@-"
)
.max(256, "Tag value cannot exceed 256 characters")
})
.array()
.max(50)
.refine((items) => new Set(items.map((item) => item.key)).size === items.length, {
message: "Tag keys must be unique"
})
.optional()
.describe(SecretSyncs.ADDITIONAL_SYNC_OPTIONS.AWS_SECRETS_MANAGER.tags),
syncSecretMetadataAsTags: z
.boolean()
.optional()
.describe(SecretSyncs.ADDITIONAL_SYNC_OPTIONS.AWS_SECRETS_MANAGER.syncSecretMetadataAsTags)
});
const AwsSecretsManagerSyncOptionsConfig: TSyncOptionsConfig = { canImportSecrets: true };
export const AwsSecretsManagerSyncSchema = BaseSecretSyncSchema(
SecretSync.AWSSecretsManager,
AwsSecretsManagerSyncOptionsConfig,
AwsSecretsManagerSyncOptionsSchema
).extend({
destination: z.literal(SecretSync.AWSSecretsManager),
destinationConfig: AwsSecretsManagerSyncDestinationConfigSchema
});
export const CreateAwsSecretsManagerSyncSchema = GenericCreateSecretSyncFieldsSchema(
SecretSync.AWSSecretsManager
).extend({
destinationConfig: AwsSecretsManagerSyncDestinationConfigSchema
});
SecretSync.AWSSecretsManager,
AwsSecretsManagerSyncOptionsConfig,
AwsSecretsManagerSyncOptionsSchema
)
.extend({
destinationConfig: AwsSecretsManagerSyncDestinationConfigSchema
})
.superRefine((sync, ctx) => {
if (
sync.destinationConfig.mappingBehavior === AwsSecretsManagerSyncMappingBehavior.ManyToOne &&
sync.syncOptions.syncSecretMetadataAsTags
) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: 'Syncing secret metadata is not supported with "Many-to-One" mapping behavior.'
});
}
});
export const UpdateAwsSecretsManagerSyncSchema = GenericUpdateSecretSyncFieldsSchema(
SecretSync.AWSSecretsManager
).extend({
destinationConfig: AwsSecretsManagerSyncDestinationConfigSchema.optional()
});
SecretSync.AWSSecretsManager,
AwsSecretsManagerSyncOptionsConfig,
AwsSecretsManagerSyncOptionsSchema
)
.extend({
destinationConfig: AwsSecretsManagerSyncDestinationConfigSchema.optional()
})
.superRefine((sync, ctx) => {
if (
sync.destinationConfig?.mappingBehavior === AwsSecretsManagerSyncMappingBehavior.ManyToOne &&
sync.syncOptions.syncSecretMetadataAsTags
) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: 'Syncing secret metadata is not supported with "Many-to-One" mapping behavior.'
});
}
});
export const AwsSecretsManagerSyncListItemSchema = z.object({
name: z.literal("AWS Secrets Manager"),

@ -233,6 +233,7 @@ export const secretSyncQueueFactory = ({
}
secretMap[secretKey].skipMultilineEncoding = Boolean(secret.skipMultilineEncoding);
secretMap[secretKey].secretMetadata = secret.secretMetadata;
})
);
@ -258,7 +259,8 @@ export const secretSyncQueueFactory = ({
secretMap[importedSecret.key] = {
skipMultilineEncoding: importedSecret.skipMultilineEncoding,
comment: importedSecret.secretComment,
value: importedSecret.secretValue || ""
value: importedSecret.secretValue || "",
secretMetadata: importedSecret.secretMetadata
};
}
}

@ -1,4 +1,4 @@
import { z } from "zod";
import { AnyZodObject, z } from "zod";
import { SecretSyncsSchema } from "@app/db/schemas/secret-syncs";
import { SecretSyncs } from "@app/lib/api-docs";
@ -8,34 +8,45 @@ import { SecretSync, SecretSyncInitialSyncBehavior } from "@app/services/secret-
import { SECRET_SYNC_CONNECTION_MAP } from "@app/services/secret-sync/secret-sync-maps";
import { TSyncOptionsConfig } from "@app/services/secret-sync/secret-sync-types";
const SyncOptionsSchema = (secretSync: SecretSync, options: TSyncOptionsConfig = { canImportSecrets: true }) =>
z.object({
initialSyncBehavior: (options.canImportSecrets
const BaseSyncOptionsSchema = <T extends AnyZodObject | undefined = undefined>({
destination,
syncOptionsConfig: { canImportSecrets },
merge,
isUpdateSchema
}: {
destination: SecretSync;
syncOptionsConfig: TSyncOptionsConfig;
merge?: T;
isUpdateSchema?: boolean;
}) => {
const baseSchema = z.object({
initialSyncBehavior: (canImportSecrets
? z.nativeEnum(SecretSyncInitialSyncBehavior)
: z.literal(SecretSyncInitialSyncBehavior.OverwriteDestination)
).describe(SecretSyncs.SYNC_OPTIONS(secretSync).initialSyncBehavior)
// prependPrefix: z
// .string()
// .trim()
// .transform((str) => str.toUpperCase())
// .optional()
// .describe(SecretSyncs.SYNC_OPTIONS(secretSync).PREPEND_PREFIX),
// appendSuffix: z
// .string()
// .trim()
// .transform((str) => str.toUpperCase())
// .optional()
// .describe(SecretSyncs.SYNC_OPTIONS(secretSync).APPEND_SUFFIX)
).describe(SecretSyncs.SYNC_OPTIONS(destination).initialSyncBehavior)
});
export const BaseSecretSyncSchema = (destination: SecretSync, syncOptionsConfig?: TSyncOptionsConfig) =>
const schema = merge ? baseSchema.merge(merge) : baseSchema;
return (
isUpdateSchema
? schema.describe(SecretSyncs.UPDATE(destination).syncOptions).optional()
: schema.describe(SecretSyncs.CREATE(destination).syncOptions)
) as T extends AnyZodObject ? z.ZodObject<z.objectUtil.MergeShapes<typeof schema.shape, T["shape"]>> : typeof schema;
};
export const BaseSecretSyncSchema = <T extends AnyZodObject | undefined = undefined>(
destination: SecretSync,
syncOptionsConfig: TSyncOptionsConfig,
merge?: T
) =>
SecretSyncsSchema.omit({
destination: true,
destinationConfig: true,
syncOptions: true
}).extend({
// destination needs to be on the extended object for type differentiation
syncOptions: SyncOptionsSchema(destination, syncOptionsConfig),
syncOptions: BaseSyncOptionsSchema({ destination, syncOptionsConfig, merge }),
// join properties
projectId: z.string(),
connection: z.object({
@ -47,7 +58,11 @@ export const BaseSecretSyncSchema = (destination: SecretSync, syncOptionsConfig?
folder: z.object({ id: z.string(), path: z.string() }).nullable()
});
export const GenericCreateSecretSyncFieldsSchema = (destination: SecretSync, syncOptionsConfig?: TSyncOptionsConfig) =>
export const GenericCreateSecretSyncFieldsSchema = <T extends AnyZodObject | undefined = undefined>(
destination: SecretSync,
syncOptionsConfig: TSyncOptionsConfig,
merge?: T
) =>
z.object({
name: slugSchema({ field: "name" }).describe(SecretSyncs.CREATE(destination).name),
projectId: z.string().trim().min(1, "Project ID required").describe(SecretSyncs.CREATE(destination).projectId),
@ -66,10 +81,14 @@ export const GenericCreateSecretSyncFieldsSchema = (destination: SecretSync, syn
.transform(removeTrailingSlash)
.describe(SecretSyncs.CREATE(destination).secretPath),
isAutoSyncEnabled: z.boolean().default(true).describe(SecretSyncs.CREATE(destination).isAutoSyncEnabled),
syncOptions: SyncOptionsSchema(destination, syncOptionsConfig).describe(SecretSyncs.CREATE(destination).syncOptions)
syncOptions: BaseSyncOptionsSchema({ destination, syncOptionsConfig, merge })
});
export const GenericUpdateSecretSyncFieldsSchema = (destination: SecretSync, syncOptionsConfig?: TSyncOptionsConfig) =>
export const GenericUpdateSecretSyncFieldsSchema = <T extends AnyZodObject | undefined = undefined>(
destination: SecretSync,
syncOptionsConfig: TSyncOptionsConfig,
merge?: T
) =>
z.object({
name: slugSchema({ field: "name" }).describe(SecretSyncs.UPDATE(destination).name).optional(),
connectionId: z.string().uuid().describe(SecretSyncs.UPDATE(destination).connectionId).optional(),
@ -90,7 +109,5 @@ export const GenericUpdateSecretSyncFieldsSchema = (destination: SecretSync, syn
.optional()
.describe(SecretSyncs.UPDATE(destination).secretPath),
isAutoSyncEnabled: z.boolean().optional().describe(SecretSyncs.UPDATE(destination).isAutoSyncEnabled),
syncOptions: SyncOptionsSchema(destination, syncOptionsConfig)
.optional()
.describe(SecretSyncs.UPDATE(destination).syncOptions)
syncOptions: BaseSyncOptionsSchema({ destination, syncOptionsConfig, merge, isUpdateSchema: true })
});
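A hedged sketch of how a destination-specific options schema plugs into this merge pattern; ExampleSyncOptionsSchema and its config are invented, and the AWS Parameter Store member is only a stand-in for whatever destination a real integration would use:

// Invented destination-specific options, merged into the base sync options.
const ExampleSyncOptionsSchema = z.object({
  exampleFlag: z.boolean().optional()
});

// The merged schema accepts both the shared initialSyncBehavior field and the
// destination-specific field added via `merge`.
const MergedOptions = BaseSyncOptionsSchema({
  destination: SecretSync.AWSParameterStore, // stand-in destination for illustration
  syncOptionsConfig: { canImportSecrets: true },
  merge: ExampleSyncOptionsSchema
});

MergedOptions.parse({
  initialSyncBehavior: SecretSyncInitialSyncBehavior.OverwriteDestination,
  exampleFlag: true
});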

@ -2,6 +2,7 @@ import { Job } from "bullmq";
import { TCreateAuditLogDTO } from "@app/ee/services/audit-log/audit-log-types";
import { QueueJobs } from "@app/queue";
import { ResourceMetadataDTO } from "@app/services/resource-metadata/resource-metadata-schema";
import {
TAwsSecretsManagerSync,
TAwsSecretsManagerSyncInput,
@ -197,5 +198,10 @@ export type TSendSecretSyncFailedNotificationsJobDTO = Job<
export type TSecretMap = Record<
string,
{ value: string; comment?: string; skipMultilineEncoding?: boolean | null | undefined }
{
value: string;
comment?: string;
skipMultilineEncoding?: boolean | null | undefined;
secretMetadata?: ResourceMetadataDTO;
}
>;

@ -1,7 +1,15 @@
import { ForbiddenError, PureAbility, subject } from "@casl/ability";
import { Knex } from "knex";
import { z } from "zod";
import { ActionProjectType, ProjectMembershipRole, SecretsV2Schema, SecretType, TableName } from "@app/db/schemas";
import {
ActionProjectType,
ProjectMembershipRole,
SecretsV2Schema,
SecretType,
TableName,
TSecretsV2
} from "@app/db/schemas";
import { TPermissionServiceFactory } from "@app/ee/services/permission/permission-service";
import { ProjectPermissionActions, ProjectPermissionSub } from "@app/ee/services/permission/project-permission";
import { TSecretApprovalPolicyServiceFactory } from "@app/ee/services/secret-approval-policy/secret-approval-policy-service";
@ -36,6 +44,7 @@ import {
} from "./secret-v2-bridge-fns";
import {
SecretOperations,
SecretUpdateMode,
TBackFillSecretReferencesDTO,
TCreateManySecretDTO,
TCreateSecretDTO,
@ -103,12 +112,13 @@ export const secretV2BridgeServiceFactory = ({
const $validateSecretReferences = async (
projectId: string,
permission: PureAbility,
references: ReturnType<typeof getAllSecretReferences>["nestedReferences"]
references: ReturnType<typeof getAllSecretReferences>["nestedReferences"],
tx?: Knex
) => {
if (!references.length) return;
const uniqueReferenceEnvironmentSlugs = Array.from(new Set(references.map((el) => el.environment)));
const referencesEnvironments = await projectEnvDAL.findBySlugs(projectId, uniqueReferenceEnvironmentSlugs);
const referencesEnvironments = await projectEnvDAL.findBySlugs(projectId, uniqueReferenceEnvironmentSlugs, tx);
if (referencesEnvironments.length !== uniqueReferenceEnvironmentSlugs.length)
throw new BadRequestError({
message: `Referenced environment not found. Missing ${diff(
@ -122,36 +132,41 @@ export const secretV2BridgeServiceFactory = ({
references.map((el) => ({
secretPath: el.secretPath,
envId: referencesEnvironmentGroupBySlug[el.environment][0].id
}))
})),
tx
);
const referencesFolderGroupByPath = groupBy(referredFolders.filter(Boolean), (i) => `${i?.envId}-${i?.path}`);
const referredSecrets = await secretDAL.find({
$complex: {
operator: "or",
value: references.map((el) => {
const folderId =
referencesFolderGroupByPath[`${referencesEnvironmentGroupBySlug[el.environment][0].id}-${el.secretPath}`][0]
?.id;
if (!folderId) throw new BadRequestError({ message: `Referenced path ${el.secretPath} doesn't exist` });
const referredSecrets = await secretDAL.find(
{
$complex: {
operator: "or",
value: references.map((el) => {
const folderId =
referencesFolderGroupByPath[
`${referencesEnvironmentGroupBySlug[el.environment][0].id}-${el.secretPath}`
][0]?.id;
if (!folderId) throw new BadRequestError({ message: `Referenced path ${el.secretPath} doesn't exist` });
return {
operator: "and",
value: [
{
operator: "eq",
field: "folderId",
value: folderId
},
{
operator: "eq",
field: `${TableName.SecretV2}.key` as "key",
value: el.secretKey
}
]
};
})
}
});
return {
operator: "and",
value: [
{
operator: "eq",
field: "folderId",
value: folderId
},
{
operator: "eq",
field: `${TableName.SecretV2}.key` as "key",
value: el.secretKey
}
]
};
})
}
},
{ tx }
);
if (
referredSecrets.length !==
@ -1245,8 +1260,9 @@ export const secretV2BridgeServiceFactory = ({
actorAuthMethod,
environment,
projectId,
secretPath,
secrets: inputSecrets
secretPath: defaultSecretPath = "/",
secrets: inputSecrets,
mode: updateMode
}: TUpdateManySecretDTO) => {
const { permission } = await permissionService.getProjectPermission({
actor,
@ -1257,196 +1273,280 @@ export const secretV2BridgeServiceFactory = ({
actionProjectType: ActionProjectType.SecretManager
});
const folder = await folderDAL.findBySecretPath(projectId, environment, secretPath);
if (!folder)
const secretsToUpdateGroupByPath = groupBy(inputSecrets, (el) => el.secretPath || defaultSecretPath);
const projectEnvironment = await projectEnvDAL.findOne({ projectId, slug: environment });
if (!projectEnvironment) {
throw new NotFoundError({
message: `Folder with path '${secretPath}' in environment with slug '${environment}' not found`,
message: `Environment with slug '${environment}' in project with ID '${projectId}' not found`
});
}
const folders = await folderDAL.findByManySecretPath(
Object.keys(secretsToUpdateGroupByPath).map((el) => ({ envId: projectEnvironment.id, secretPath: el }))
);
if (folders.length !== Object.keys(secretsToUpdateGroupByPath).length)
throw new NotFoundError({
message: `One or more folders matching the requested secret paths were not found in environment with slug '${environment}'`,
name: "UpdateManySecret"
});
const folderId = folder.id;
const secretsToUpdate = await secretDAL.find({
folderId,
$complex: {
operator: "and",
value: [
{
operator: "or",
value: inputSecrets.map((el) => ({
operator: "and",
value: [
{
operator: "eq",
field: `${TableName.SecretV2}.key` as "key",
value: el.secretKey
},
{
operator: "eq",
field: "type",
value: SecretType.Shared
}
]
}))
}
]
}
});
if (secretsToUpdate.length !== inputSecrets.length) {
const secretsToUpdateNames = secretsToUpdate.map((secret) => secret.key);
const invalidSecrets = inputSecrets.filter((secret) => !secretsToUpdateNames.includes(secret.secretKey));
throw new NotFoundError({
message: `Secret does not exist: ${invalidSecrets.map((el) => el.secretKey).join(",")}`
});
}
const secretsToUpdateInDBGroupedByKey = groupBy(secretsToUpdate, (i) => i.key);
secretsToUpdate.forEach((el) => {
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionActions.Edit,
subject(ProjectPermissionSub.Secrets, {
environment,
secretPath,
secretName: el.key,
secretTags: el.tags.map((i) => i.slug)
})
);
});
// get all tags
const sanitizedTagIds = inputSecrets.flatMap(({ tagIds = [] }) => tagIds);
const tags = sanitizedTagIds.length ? await secretTagDAL.findManyTagsById(projectId, sanitizedTagIds) : [];
if (tags.length !== sanitizedTagIds.length) throw new NotFoundError({ message: "Tag not found" });
const tagsGroupByID = groupBy(tags, (i) => i.id);
// check again to avoid non authorized tags are removed
inputSecrets.forEach((el) => {
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionActions.Edit,
subject(ProjectPermissionSub.Secrets, {
environment,
secretPath,
secretName: el.secretKey,
secretTags: (el.tagIds || []).map((i) => tagsGroupByID[i][0].slug)
})
);
});
// now find any secret that needs to update its name
// same process as above
const secretsWithNewName = inputSecrets.filter(({ newSecretName }) => Boolean(newSecretName));
if (secretsWithNewName.length) {
const secrets = await secretDAL.find({
folderId,
$complex: {
operator: "and",
value: [
{
operator: "or",
value: secretsWithNewName.map((el) => ({
operator: "and",
value: [
{
operator: "eq",
field: `${TableName.SecretV2}.key` as "key",
value: el.secretKey
},
{
operator: "eq",
field: "type",
value: SecretType.Shared
}
]
}))
}
]
}
});
if (secrets.length)
throw new BadRequestError({
message: `Secret with new name already exists: ${secretsWithNewName.map((el) => el.newSecretName).join(",")}`
});
secretsWithNewName.forEach((el) => {
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionActions.Create,
subject(ProjectPermissionSub.Secrets, {
environment,
secretPath,
secretName: el.newSecretName as string,
secretTags: (el.tagIds || []).map((i) => tagsGroupByID[i][0].slug)
})
);
});
}
// now get all secret references made and validate the permission
const secretReferencesGroupByInputSecretKey: Record<string, ReturnType<typeof getAllSecretReferences>> = {};
const secretReferences: TSecretReference[] = [];
inputSecrets.forEach((el) => {
if (el.secretValue) {
const references = getAllSecretReferences(el.secretValue);
secretReferencesGroupByInputSecretKey[el.secretKey] = references;
secretReferences.push(...references.nestedReferences);
references.localReferences.forEach((localRefKey) => {
secretReferences.push({ secretKey: localRefKey, secretPath, environment });
});
}
});
await $validateSecretReferences(projectId, permission, secretReferences);
const { encryptor: secretManagerEncryptor, decryptor: secretManagerDecryptor } =
await kmsService.createCipherPairWithDataKey({ type: KmsDataKey.SecretManager, projectId });
const secrets = await secretDAL.transaction(async (tx) =>
fnSecretBulkUpdate({
folderId,
orgId: actorOrgId,
tx,
inputSecrets: inputSecrets.map((el) => {
const originalSecret = secretsToUpdateInDBGroupedByKey[el.secretKey][0];
const encryptedValue =
typeof el.secretValue !== "undefined"
? {
encryptedValue: secretManagerEncryptor({ plainText: Buffer.from(el.secretValue) }).cipherTextBlob,
references: secretReferencesGroupByInputSecretKey[el.secretKey]?.nestedReferences
}
: {};
const updatedSecrets: Array<TSecretsV2 & { secretPath: string }> = [];
await secretDAL.transaction(async (tx) => {
for await (const folder of folders) {
if (!folder) throw new NotFoundError({ message: "Folder not found" });
return {
filter: { id: originalSecret.id, type: SecretType.Shared },
data: {
reminderRepeatDays: el.secretReminderRepeatDays,
encryptedComment: setKnexStringValue(
el.secretComment,
(value) => secretManagerEncryptor({ plainText: Buffer.from(value) }).cipherTextBlob
),
reminderNote: el.secretReminderNote,
skipMultilineEncoding: el.skipMultilineEncoding,
key: el.newSecretName || el.secretKey,
tags: el.tagIds,
secretMetadata: el.secretMetadata,
...encryptedValue
const folderId = folder.id;
const secretPath = folder.path;
let secretsToUpdate = secretsToUpdateGroupByPath[secretPath];
const secretsToUpdateInDB = await secretDAL.find(
{
folderId,
$complex: {
operator: "and",
value: [
{
operator: "or",
value: secretsToUpdate.map((el) => ({
operator: "and",
value: [
{
operator: "eq",
field: `${TableName.SecretV2}.key` as "key",
value: el.secretKey
},
{
operator: "eq",
field: "type",
value: SecretType.Shared
}
]
}))
}
]
}
};
}),
secretDAL,
secretVersionDAL,
secretTagDAL,
secretVersionTagDAL,
resourceMetadataDAL
})
);
await snapshotService.performSnapshot(folderId);
await secretQueueService.syncSecrets({
actor,
actorId,
secretPath,
projectId,
orgId: actorOrgId,
environmentSlug: folder.environment.slug
},
{ tx }
);
if (secretsToUpdateInDB.length !== secretsToUpdate.length && updateMode === SecretUpdateMode.FailOnNotFound)
throw new NotFoundError({
message: `Secret does not exist: ${diff(
secretsToUpdate.map((el) => el.secretKey),
secretsToUpdateInDB.map((el) => el.key)
).join(", ")} in path ${folder.path}`
});
const secretsToUpdateInDBGroupedByKey = groupBy(secretsToUpdateInDB, (i) => i.key);
const secretsToCreate = secretsToUpdate.filter((el) => !secretsToUpdateInDBGroupedByKey?.[el.secretKey]);
secretsToUpdate = secretsToUpdate.filter((el) => secretsToUpdateInDBGroupedByKey?.[el.secretKey]);
secretsToUpdateInDB.forEach((el) => {
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionActions.Edit,
subject(ProjectPermissionSub.Secrets, {
environment,
secretPath,
secretName: el.key,
secretTags: el.tags.map((i) => i.slug)
})
);
});
// get all tags
const sanitizedTagIds = secretsToUpdate.flatMap(({ tagIds = [] }) => tagIds);
const tags = sanitizedTagIds.length ? await secretTagDAL.findManyTagsById(projectId, sanitizedTagIds, tx) : [];
if (tags.length !== sanitizedTagIds.length) throw new NotFoundError({ message: "Tag not found" });
const tagsGroupByID = groupBy(tags, (i) => i.id);
// check that the actor has create permission when running in upsert mode
if (updateMode === SecretUpdateMode.Upsert) {
secretsToCreate.forEach((el) => {
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionActions.Create,
subject(ProjectPermissionSub.Secrets, {
environment,
secretPath,
secretName: el.secretKey,
secretTags: (el.tagIds || []).map((i) => tagsGroupByID[i][0].slug)
})
);
});
}
// check again to ensure the actor is authorized for all tags being applied
secretsToUpdate.forEach((el) => {
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionActions.Edit,
subject(ProjectPermissionSub.Secrets, {
environment,
secretPath,
secretName: el.secretKey,
secretTags: (el.tagIds || []).map((i) => tagsGroupByID[i][0].slug)
})
);
});
// now find any secret that needs to update its name
// same process as above
const secretsWithNewName = secretsToUpdate.filter(({ newSecretName }) => Boolean(newSecretName));
if (secretsWithNewName.length) {
const secrets = await secretDAL.find(
{
folderId,
$complex: {
operator: "and",
value: [
{
operator: "or",
value: secretsWithNewName.map((el) => ({
operator: "and",
value: [
{
operator: "eq",
field: `${TableName.SecretV2}.key` as "key",
value: el.secretKey
},
{
operator: "eq",
field: "type",
value: SecretType.Shared
}
]
}))
}
]
}
},
{ tx }
);
if (secrets.length)
throw new BadRequestError({
message: `Secret with new name already exists: ${secretsWithNewName
.map((el) => el.newSecretName)
.join(", ")}`
});
secretsWithNewName.forEach((el) => {
ForbiddenError.from(permission).throwUnlessCan(
ProjectPermissionActions.Create,
subject(ProjectPermissionSub.Secrets, {
environment,
secretPath,
secretName: el.newSecretName as string,
secretTags: (el.tagIds || []).map((i) => tagsGroupByID[i][0].slug)
})
);
});
}
// now get all secret references made and validate the permission
const secretReferencesGroupByInputSecretKey: Record<string, ReturnType<typeof getAllSecretReferences>> = {};
const secretReferences: TSecretReference[] = [];
secretsToUpdate.concat(SecretUpdateMode.Upsert === updateMode ? secretsToCreate : []).forEach((el) => {
if (el.secretValue) {
const references = getAllSecretReferences(el.secretValue);
secretReferencesGroupByInputSecretKey[el.secretKey] = references;
secretReferences.push(...references.nestedReferences);
references.localReferences.forEach((localRefKey) => {
secretReferences.push({ secretKey: localRefKey, secretPath, environment });
});
}
});
await $validateSecretReferences(projectId, permission, secretReferences, tx);
const bulkUpdatedSecrets = await fnSecretBulkUpdate({
folderId,
orgId: actorOrgId,
tx,
inputSecrets: secretsToUpdate.map((el) => {
const originalSecret = secretsToUpdateInDBGroupedByKey[el.secretKey][0];
const encryptedValue =
typeof el.secretValue !== "undefined"
? {
encryptedValue: secretManagerEncryptor({ plainText: Buffer.from(el.secretValue) }).cipherTextBlob,
references: secretReferencesGroupByInputSecretKey[el.secretKey]?.nestedReferences
}
: {};
return {
filter: { id: originalSecret.id, type: SecretType.Shared },
data: {
reminderRepeatDays: el.secretReminderRepeatDays,
encryptedComment: setKnexStringValue(
el.secretComment,
(value) => secretManagerEncryptor({ plainText: Buffer.from(value) }).cipherTextBlob
),
reminderNote: el.secretReminderNote,
skipMultilineEncoding: el.skipMultilineEncoding,
key: el.newSecretName || el.secretKey,
tags: el.tagIds,
secretMetadata: el.secretMetadata,
...encryptedValue
}
};
}),
secretDAL,
secretVersionDAL,
secretTagDAL,
secretVersionTagDAL,
resourceMetadataDAL
});
updatedSecrets.push(...bulkUpdatedSecrets.map((el) => ({ ...el, secretPath: folder.path })));
if (updateMode === SecretUpdateMode.Upsert) {
const bulkInsertedSecrets = await fnSecretBulkInsert({
inputSecrets: secretsToCreate.map((el) => {
const references = secretReferencesGroupByInputSecretKey[el.secretKey]?.nestedReferences;
return {
version: 1,
encryptedComment: setKnexStringValue(
el.secretComment,
(value) => secretManagerEncryptor({ plainText: Buffer.from(value) }).cipherTextBlob
),
encryptedValue: el.secretValue
? secretManagerEncryptor({ plainText: Buffer.from(el.secretValue) }).cipherTextBlob
: undefined,
skipMultilineEncoding: el.skipMultilineEncoding,
key: el.secretKey,
tagIds: el.tagIds,
references,
secretMetadata: el.secretMetadata,
type: SecretType.Shared
};
}),
folderId,
orgId: actorOrgId,
secretDAL,
resourceMetadataDAL,
secretVersionDAL,
secretTagDAL,
secretVersionTagDAL,
tx
});
updatedSecrets.push(...bulkInsertedSecrets.map((el) => ({ ...el, secretPath: folder.path })));
}
}
});
return secrets.map((el) =>
reshapeBridgeSecret(projectId, environment, secretPath, {
await Promise.allSettled(folders.map((el) => (el?.id ? snapshotService.performSnapshot(el.id) : undefined)));
await Promise.allSettled(
folders.map((el) =>
el
? secretQueueService.syncSecrets({
actor,
actorId,
secretPath: el.path,
projectId,
orgId: actorOrgId,
environmentSlug: environment
})
: undefined
)
);
return updatedSecrets.map((el) =>
reshapeBridgeSecret(projectId, environment, el.secretPath, {
...el,
value: el.encryptedValue ? secretManagerDecryptor({ cipherTextBlob: el.encryptedValue }).toString() : "",
comment: el.encryptedComment ? secretManagerDecryptor({ cipherTextBlob: el.encryptedComment }).toString() : ""

@ -23,6 +23,12 @@ export type TSecretReferenceDTO = {
secretKey: string;
};
export enum SecretUpdateMode {
Ignore = "ignore",
Upsert = "upsert",
FailOnNotFound = "failOnNotFound"
}
export type TGetSecretsDTO = {
expandSecretReferences?: boolean;
path: string;
@ -113,6 +119,7 @@ export type TUpdateManySecretDTO = Omit<TProjectPermission, "projectId"> & {
secretPath: string;
projectId: string;
environment: string;
mode: SecretUpdateMode;
secrets: {
secretKey: string;
newSecretName?: string;
@ -123,6 +130,7 @@ export type TUpdateManySecretDTO = Omit<TProjectPermission, "projectId"> & {
secretReminderRepeatDays?: number | null;
secretReminderNote?: string | null;
secretMetadata?: ResourceMetadataDTO;
secretPath?: string;
}[];
};

@ -30,7 +30,10 @@ import { groupBy, pick } from "@app/lib/fn";
import { logger } from "@app/lib/logger";
import { alphaNumericNanoId } from "@app/lib/nanoid";
import { OrgServiceActor } from "@app/lib/types";
import { TGetSecretsRawByFolderMappingsDTO } from "@app/services/secret-v2-bridge/secret-v2-bridge-types";
import {
SecretUpdateMode,
TGetSecretsRawByFolderMappingsDTO
} from "@app/services/secret-v2-bridge/secret-v2-bridge-types";
import { ActorType } from "../auth/auth-type";
import { TProjectDALFactory } from "../project/project-dal";
@ -2012,6 +2015,7 @@ export const secretServiceFactory = ({
actorOrgId,
actorAuthMethod,
secretPath,
mode = SecretUpdateMode.FailOnNotFound,
secrets: inputSecrets = []
}: TUpdateManySecretRawDTO) => {
if (!projectSlug && !optionalProjectId)
@ -2076,7 +2080,8 @@ export const secretServiceFactory = ({
actorOrgId,
actor,
actorId,
secrets: inputSecrets
secrets: inputSecrets,
mode
});
return { type: SecretProtectionType.Direct as const, secrets };
}

@ -17,6 +17,7 @@ import { TKmsServiceFactory } from "../kms/kms-service";
import { TResourceMetadataDALFactory } from "../resource-metadata/resource-metadata-dal";
import { ResourceMetadataDTO } from "../resource-metadata/resource-metadata-schema";
import { TSecretV2BridgeDALFactory } from "../secret-v2-bridge/secret-v2-bridge-dal";
import { SecretUpdateMode } from "../secret-v2-bridge/secret-v2-bridge-types";
import { TSecretVersionV2DALFactory } from "../secret-v2-bridge/secret-version-dal";
import { TSecretVersionV2TagDALFactory } from "../secret-v2-bridge/secret-version-tag-dal";
@ -274,6 +275,7 @@ export type TUpdateManySecretRawDTO = Omit<TProjectPermission, "projectId"> & {
projectId?: string;
projectSlug?: string;
environment: string;
mode: SecretUpdateMode;
secrets: {
secretKey: string;
newSecretName?: string;

@ -10,21 +10,24 @@ require (
github.com/fatih/semgroup v1.2.0
github.com/gitleaks/go-gitdiff v0.8.0
github.com/h2non/filetype v1.1.3
github.com/infisical/go-sdk v0.4.7
github.com/infisical/go-sdk v0.4.8
github.com/infisical/infisical-kmip v0.3.5
github.com/mattn/go-isatty v0.0.20
github.com/muesli/ansi v0.0.0-20221106050444-61f0cd9a192a
github.com/muesli/mango-cobra v1.2.0
github.com/muesli/reflow v0.3.0
github.com/muesli/roff v0.1.0
github.com/petar-dambovaliev/aho-corasick v0.0.0-20211021192214-5ab2d9280aa9
github.com/pion/logging v0.2.3
github.com/pion/turn/v4 v4.0.0
github.com/posthog/posthog-go v0.0.0-20221221115252-24dfed35d71a
github.com/rs/cors v1.11.0
github.com/rs/zerolog v1.26.1
github.com/spf13/cobra v1.6.1
github.com/spf13/viper v1.8.1
github.com/stretchr/testify v1.9.0
golang.org/x/crypto v0.31.0
golang.org/x/term v0.27.0
golang.org/x/crypto v0.33.0
golang.org/x/term v0.29.0
gopkg.in/yaml.v2 v2.4.0
)
@ -65,6 +68,9 @@ require (
github.com/google/s2a-go v0.1.7 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect
github.com/googleapis/gax-go/v2 v2.12.5 // indirect
github.com/gosimple/slug v1.15.0 // indirect
github.com/gosimple/unidecode v1.0.1 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/magiconair/properties v1.8.5 // indirect
@ -77,12 +83,18 @@ require (
github.com/muesli/termenv v0.15.2 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/pelletier/go-toml v1.9.3 // indirect
github.com/pion/dtls/v3 v3.0.4 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/pion/stun/v3 v3.0.0 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/spf13/afero v1.6.0 // indirect
github.com/spf13/cast v1.3.1 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/wlynxg/anet v0.0.5 // indirect
github.com/xtgo/uuid v0.0.0-20140804021211-a0b114877d4c // indirect
go.mongodb.org/mongo-driver v1.10.0 // indirect
go.opencensus.io v0.24.0 // indirect
@ -91,12 +103,12 @@ require (
go.opentelemetry.io/otel v1.24.0 // indirect
go.opentelemetry.io/otel/metric v1.24.0 // indirect
go.opentelemetry.io/otel/trace v1.24.0 // indirect
golang.org/x/net v0.27.0 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/oauth2 v0.21.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/sync v0.11.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.22.0 // indirect
golang.org/x/time v0.6.0 // indirect
google.golang.org/api v0.188.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240708141625-4ad9e859172b // indirect
@ -108,7 +120,7 @@ require (
require (
github.com/fatih/color v1.17.0
github.com/go-resty/resty/v2 v2.13.1
github.com/go-resty/resty/v2 v2.16.5
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/jedib0t/go-pretty v4.3.0+incompatible
github.com/manifoldco/promptui v0.9.0

@ -152,8 +152,8 @@ github.com/go-openapi/errors v0.20.2 h1:dxy7PGTqEh94zj2E3h1cUmQQWiM1+aeCROfAr02E
github.com/go-openapi/errors v0.20.2/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
github.com/go-openapi/strfmt v0.21.3 h1:xwhj5X6CjXEZZHMWy1zKJxvW9AfHC9pkyUjLvHtKG7o=
github.com/go-openapi/strfmt v0.21.3/go.mod h1:k+RzNO0Da+k3FrrynSNN8F7n/peCmQQqbbXjtDfvmGg=
github.com/go-resty/resty/v2 v2.13.1 h1:x+LHXBI2nMB1vqndymf26quycC4aggYJ7DECYbiz03g=
github.com/go-resty/resty/v2 v2.13.1/go.mod h1:GznXlLxkq6Nh4sU59rPmUw3VtgpO3aS96ORAI6Q7d+0=
github.com/go-resty/resty/v2 v2.16.5 h1:hBKqmWrr7uRc3euHVqmh1HTHcKn99Smr7o5spptdhTM=
github.com/go-resty/resty/v2 v2.16.5/go.mod h1:hkJtXbA2iKHzJheXYvQ8snQES5ZLGKMwQ07xAwp/fiA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
@ -237,6 +237,10 @@ github.com/googleapis/gax-go/v2 v2.12.5 h1:8gw9KZK8TiVKB6q3zHY3SBzLnrGp6HQjyfYBY
github.com/googleapis/gax-go/v2 v2.12.5/go.mod h1:BUDKcWo+RaKq5SC9vVYL0wLADa3VcfswbOMMRmB9H3E=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gosimple/slug v1.15.0 h1:wRZHsRrRcs6b0XnxMUBM6WK1U1Vg5B0R7VkIf1Xzobo=
github.com/gosimple/slug v1.15.0/go.mod h1:UiRaFH+GEilHstLUmcBgWcI42viBN7mAb818JrYOeFQ=
github.com/gosimple/unidecode v1.0.1 h1:hZzFTMMqSswvf0LBJZCZgThIZrpDHFXux9KeGmn6T/o=
github.com/gosimple/unidecode v1.0.1/go.mod h1:CP0Cr1Y1kogOtx0bJblKzsVWrqYaqfNOnHzpgWw4Awc=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/h2non/filetype v1.1.3 h1:FKkx9QbD7HR/zjK1Ia5XiBsq9zdLi5Kf3zGyFTAFkGg=
github.com/h2non/filetype v1.1.3/go.mod h1:319b3zT68BvV+WRj7cwy856M2ehB3HqNOt6sy1HndBY=
@ -255,6 +259,8 @@ github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/b
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
@ -265,8 +271,10 @@ github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/inconshreveable/mousetrap v1.0.1 h1:U3uMjPSQEBMNp1lFxmllqCPM6P5u/Xq7Pgzkat/bFNc=
github.com/inconshreveable/mousetrap v1.0.1/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/infisical/go-sdk v0.4.7 h1:+cxIdDfciMh0Syxbxbqjhvz9/ShnN1equ2zqlVQYGtw=
github.com/infisical/go-sdk v0.4.7/go.mod h1:6fWzAwTPIoKU49mQ2Oxu+aFnJu9n7k2JcNrZjzhHM2M=
github.com/infisical/go-sdk v0.4.8 h1:aphRnaauC5//PkP1ZbY9RSK2RiT1LjPS5o4CbX0x5OQ=
github.com/infisical/go-sdk v0.4.8/go.mod h1:bMO9xSaBeXkDBhTIM4FkkREAfw2V8mv5Bm7lvo4+uDk=
github.com/infisical/infisical-kmip v0.3.5 h1:QM3s0e18B+mYv3a9HQNjNAlbwZJBzXq5BAJM2scIeiE=
github.com/infisical/infisical-kmip v0.3.5/go.mod h1:bO1M4YtKyutNg1bREPmlyZspC5duSR7hyQ3lPmLzrIs=
github.com/jedib0t/go-pretty v4.3.0+incompatible h1:CGs8AVhEKg/n9YbUenWmNStRW2PHJzaeDodcfvRAbIo=
github.com/jedib0t/go-pretty v4.3.0+incompatible/go.mod h1:XemHduiw8R651AF9Pt4FwCTKeG3oo7hrHJAoznj9nag=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@ -339,7 +347,20 @@ github.com/pelletier/go-toml v1.9.3 h1:zeC5b1GviRUyKYd6OJPvBU/mcVDVoL1OhT17FCt5d
github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/petar-dambovaliev/aho-corasick v0.0.0-20211021192214-5ab2d9280aa9 h1:lL+y4Xv20pVlCGyLzNHRC0I0rIHhIL1lTvHizoS/dU8=
github.com/petar-dambovaliev/aho-corasick v0.0.0-20211021192214-5ab2d9280aa9/go.mod h1:EHPiTAKtiFmrMldLUNswFwfZ2eJIYBHktdaUTZxYWRw=
github.com/pion/dtls/v3 v3.0.4 h1:44CZekewMzfrn9pmGrj5BNnTMDCFwr+6sLH+cCuLM7U=
github.com/pion/dtls/v3 v3.0.4/go.mod h1:R373CsjxWqNPf6MEkfdy3aSe9niZvL/JaKlGeFphtMg=
github.com/pion/logging v0.2.3 h1:gHuf0zpoh1GW67Nr6Gj4cv5Z9ZscU7g/EaoC/Ke/igI=
github.com/pion/logging v0.2.3/go.mod h1:z8YfknkquMe1csOrxK5kc+5/ZPAzMxbKLX5aXpbpC90=
github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
github.com/pion/stun/v3 v3.0.0 h1:4h1gwhWLWuZWOJIJR9s2ferRO+W3zA/b6ijOI6mKzUw=
github.com/pion/stun/v3 v3.0.0/go.mod h1:HvCN8txt8mwi4FBvS3EmDghW6aQJ24T+y+1TKjB5jyU=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pion/turn/v4 v4.0.0 h1:qxplo3Rxa9Yg1xXDxxH8xaqcyGUtbHYw4QSCvmFWvhM=
github.com/pion/turn/v4 v4.0.0/go.mod h1:MuPDkm15nYSklKpN8vWJ9W2M0PlyQZqYt1McGuxG7mA=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@ -401,6 +422,8 @@ github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69
github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/urfave/cli v1.22.5/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g=
github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8=
@ -413,7 +436,6 @@ github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ=
@ -448,13 +470,10 @@ golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215165025-cf75a172585e/go.mod h1:P+XmwS30IXTQdn5tA2iutPOUgjI07+tq3H3K9MVA1s8=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.33.0 h1:IOBPskki6Lysi0lo9qQvbxiQ+FvsCC/YWOecCHAixus=
golang.org/x/crypto v0.33.0/go.mod h1:bVdXmD7IV/4GdElGPozy6U7lWdRXA4qyRVGJV57uQ5M=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@ -490,8 +509,6 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@ -530,13 +547,8 @@ golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLd
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.27.0 h1:5K3Njcw06/l2y9vpGCSdcxWOYHOUk3dVNGDXN+FvAys=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -562,10 +574,8 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@ -612,24 +622,13 @@ golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.29.0 h1:L6pJp37ocefwRRtYPKSWOWzOtWSxVajvz2ldH/xi3iU=
golang.org/x/term v0.29.0/go.mod h1:6bl4lRlvVuDgSf3179VpIxBF0o10JUpXWOnI7nErv7s=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -639,17 +638,13 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@ -702,8 +697,6 @@ golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

@ -544,3 +544,59 @@ func CallUpdateRawSecretsV3(httpClient *resty.Client, request UpdateRawSecretByN
return nil
}
func CallRegisterGatewayIdentityV1(httpClient *resty.Client) (*GetRelayCredentialsResponseV1, error) {
var resBody GetRelayCredentialsResponseV1
response, err := httpClient.
R().
SetResult(&resBody).
SetHeader("User-Agent", USER_AGENT).
Post(fmt.Sprintf("%v/v1/gateways/register-identity", config.INFISICAL_URL))
if err != nil {
return nil, fmt.Errorf("CallRegisterGatewayIdentityV1: Unable to complete api request [err=%w]", err)
}
if response.IsError() {
return nil, fmt.Errorf("CallRegisterGatewayIdentityV1: Unsuccessful response [%v %v] [status-code=%v] [response=%v]", response.Request.Method, response.Request.URL, response.StatusCode(), response.String())
}
return &resBody, nil
}
func CallExchangeRelayCertV1(httpClient *resty.Client, request ExchangeRelayCertRequestV1) (*ExchangeRelayCertResponseV1, error) {
var resBody ExchangeRelayCertResponseV1
response, err := httpClient.
R().
SetResult(&resBody).
SetBody(request).
SetHeader("User-Agent", USER_AGENT).
Post(fmt.Sprintf("%v/v1/gateways/exchange-cert", config.INFISICAL_URL))
if err != nil {
return nil, fmt.Errorf("CallExchangeRelayCertV1: Unable to complete api request [err=%w]", err)
}
if response.IsError() {
return nil, fmt.Errorf("CallExchangeRelayCertV1: Unsuccessful response [%v %v] [status-code=%v] [response=%v]", response.Request.Method, response.Request.URL, response.StatusCode(), response.String())
}
return &resBody, nil
}
func CallGatewayHeartBeatV1(httpClient *resty.Client) error {
response, err := httpClient.
R().
SetHeader("User-Agent", USER_AGENT).
Post(fmt.Sprintf("%v/v1/gateways/heartbeat", config.INFISICAL_URL))
if err != nil {
return fmt.Errorf("CallGatewayHeartBeatV1: Unable to complete api request [err=%w]", err)
}
if response.IsError() {
return fmt.Errorf("CallGatewayHeartBeatV1: Unsuccessful response [%v %v] [status-code=%v] [response=%v]", response.Request.Method, response.Request.URL, response.StatusCode(), response.String())
}
return nil
}

@ -629,3 +629,22 @@ type GetRawSecretV3ByNameResponse struct {
} `json:"secret"`
ETag string
}
type GetRelayCredentialsResponseV1 struct {
TurnServerUsername string `json:"turnServerUsername"`
TurnServerPassword string `json:"turnServerPassword"`
TurnServerRealm string `json:"turnServerRealm"`
TurnServerAddress string `json:"turnServerAddress"`
InfisicalStaticIp string `json:"infisicalStaticIp"`
}
type ExchangeRelayCertRequestV1 struct {
RelayAddress string `json:"relayAddress"`
}
type ExchangeRelayCertResponseV1 struct {
SerialNumber string `json:"serialNumber"`
PrivateKey string `json:"privateKey"`
Certificate string `json:"certificate"`
CertificateChain string `json:"certificateChain"`
}
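For orientation, the sketch below (not part of this PR) chains the three new gateway API helpers in the order the CLI uses them: register the machine identity to obtain TURN relay credentials, exchange the allocated relay address for a gateway certificate, then report a heartbeat. The relay address is a hypothetical placeholder, since in the real flow it comes from the TURN allocation performed by the gateway, and the snippet assumes config.INFISICAL_URL is configured the same way the CLI configures it.

package main

import (
	"github.com/Infisical/infisical-merge/packages/api"
	"github.com/go-resty/resty/v2"
	"github.com/rs/zerolog/log"
)

// gatewayBootstrapSketch is an illustrative helper, not part of the CLI.
func gatewayBootstrapSketch(accessToken string) error {
	client := resty.New().SetAuthToken(accessToken)

	// 1. Register the machine identity and receive TURN relay credentials.
	relay, err := api.CallRegisterGatewayIdentityV1(client)
	if err != nil {
		return err
	}
	log.Info().Msgf("relay at %s (realm %s)", relay.TurnServerAddress, relay.TurnServerRealm)

	// 2. Exchange the allocated relay address for a gateway certificate.
	// "203.0.113.10:41641" is a hypothetical placeholder for the TURN-allocated address.
	cert, err := api.CallExchangeRelayCertV1(client, api.ExchangeRelayCertRequestV1{
		RelayAddress: "203.0.113.10:41641",
	})
	if err != nil {
		return err
	}
	log.Info().Msgf("issued gateway certificate %s", cert.SerialNumber)

	// 3. Report liveness; the gateway repeats this on an hourly ticker.
	return api.CallGatewayHeartBeatV1(client)
}

func main() {
	if err := gatewayBootstrapSketch("<machine-identity-access-token>"); err != nil {
		log.Fatal().Err(err).Msg("gateway bootstrap sketch failed")
	}
}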

cli/packages/cmd/gateway.go

@ -0,0 +1,120 @@
package cmd
import (
// "fmt"
// "github.com/Infisical/infisical-merge/packages/api"
// "github.com/Infisical/infisical-merge/packages/models"
"context"
"fmt"
"os"
"os/signal"
"syscall"
"time"
"github.com/Infisical/infisical-merge/packages/gateway"
"github.com/Infisical/infisical-merge/packages/util"
"github.com/rs/zerolog/log"
// "github.com/Infisical/infisical-merge/packages/visualize"
// "github.com/rs/zerolog/log"
// "github.com/go-resty/resty/v2"
"github.com/posthog/posthog-go"
"github.com/spf13/cobra"
)
var gatewayCmd = &cobra.Command{
Example: `infisical gateway`,
Short: "Run the Infisical gateway",
Use: "gateway",
DisableFlagsInUseLine: true,
Args: cobra.NoArgs,
Run: func(cmd *cobra.Command, args []string) {
token, err := util.GetInfisicalToken(cmd)
if err != nil {
util.HandleError(err, "Unable to parse flag")
}
if token == nil {
util.HandleError(fmt.Errorf("Token not found"))
}
Telemetry.CaptureEvent("cli-command:gateway", posthog.NewProperties().Set("version", util.CLI_VERSION))
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
sigStopCh := make(chan bool, 1)
ctx, cancel := context.WithCancel(cmd.Context())
defer cancel()
go func() {
<-sigCh
close(sigStopCh)
cancel()
// If we get a second signal, force exit
<-sigCh
log.Warn().Msgf("Force exit triggered")
os.Exit(1)
}()
// Main gateway retry loop with proper context handling
retryTicker := time.NewTicker(5 * time.Second)
defer retryTicker.Stop()
for {
if ctx.Err() != nil {
log.Info().Msg("Shutting down gateway")
return
}
gatewayInstance, err := gateway.NewGateway(token.Token)
if err != nil {
util.HandleError(err)
}
if err = gatewayInstance.ConnectWithRelay(); err != nil {
if ctx.Err() != nil {
log.Info().Msg("Shutting down gateway")
return
}
log.Error().Msgf("Gateway connection error with relay: %s", err)
log.Info().Msg("Retrying connection in 5 seconds...")
select {
case <-retryTicker.C:
continue
case <-ctx.Done():
log.Info().Msg("Shutting down gateway")
return
}
}
err = gatewayInstance.Listen(ctx)
if ctx.Err() != nil {
log.Info().Msg("Gateway shutdown complete")
return
}
log.Error().Msgf("Gateway listen error: %s", err)
log.Info().Msg("Retrying connection in 5 seconds...")
select {
case <-retryTicker.C:
continue
case <-ctx.Done():
log.Info().Msg("Shutting down gateway")
return
}
}
},
}
func init() {
gatewayCmd.SetHelpFunc(func(command *cobra.Command, strings []string) {
command.Flags().MarkHidden("domain")
command.Parent().HelpFunc()(command, strings)
})
gatewayCmd.Flags().String("token", "", "Connect with Infisical using machine identity access token")
rootCmd.AddCommand(gatewayCmd)
}

cli/packages/cmd/kmip.go

@ -0,0 +1,103 @@
/*
Copyright (c) 2023 Infisical Inc.
*/
package cmd
import (
"fmt"
"github.com/Infisical/infisical-merge/packages/config"
"github.com/Infisical/infisical-merge/packages/util"
kmip "github.com/infisical/infisical-kmip"
"github.com/spf13/cobra"
)
var kmipCmd = &cobra.Command{
Example: `infisical kmip`,
Short: "Used to manage KMIP servers",
Use: "kmip",
DisableFlagsInUseLine: true,
Args: cobra.NoArgs,
}
var kmipStartCmd = &cobra.Command{
Example: `infisical kmip start`,
Short: "Used to start a KMIP server",
Use: "start",
DisableFlagsInUseLine: true,
Args: cobra.NoArgs,
Run: startKmipServer,
}
func startKmipServer(cmd *cobra.Command, args []string) {
listenAddr, err := cmd.Flags().GetString("listen-address")
if err != nil {
util.HandleError(err, "Unable to parse flag")
}
identityAuthMethod, err := cmd.Flags().GetString("identity-auth-method")
if err != nil {
util.HandleError(err, "Unable to parse flag")
}
authMethodValid, strategy := util.IsAuthMethodValid(identityAuthMethod, false)
if !authMethodValid {
util.PrintErrorMessageAndExit(fmt.Sprintf("Invalid login method: %s", identityAuthMethod))
}
var identityClientId string
var identityClientSecret string
if strategy == util.AuthStrategy.UNIVERSAL_AUTH {
identityClientId, err = util.GetCmdFlagOrEnv(cmd, "identity-client-id", util.INFISICAL_UNIVERSAL_AUTH_CLIENT_ID_NAME)
if err != nil {
util.HandleError(err, "Unable to parse identity client ID")
}
identityClientSecret, err = util.GetCmdFlagOrEnv(cmd, "identity-client-secret", util.INFISICAL_UNIVERSAL_AUTH_CLIENT_SECRET_NAME)
if err != nil {
util.HandleError(err, "Unable to parse identity client secret")
}
} else {
util.PrintErrorMessageAndExit(fmt.Sprintf("Unsupported login method: %s", identityAuthMethod))
}
serverName, err := cmd.Flags().GetString("server-name")
if err != nil {
util.HandleError(err, "Unable to parse flag")
}
certificateTTL, err := cmd.Flags().GetString("certificate-ttl")
if err != nil {
util.HandleError(err, "Unable to parse flag")
}
hostnamesOrIps, err := cmd.Flags().GetString("hostnames-or-ips")
if err != nil {
util.HandleError(err, "Unable to parse flag")
}
kmip.StartServer(kmip.ServerConfig{
Addr: listenAddr,
InfisicalBaseAPIURL: config.INFISICAL_URL,
IdentityClientId: identityClientId,
IdentityClientSecret: identityClientSecret,
ServerName: serverName,
CertificateTTL: certificateTTL,
HostnamesOrIps: hostnamesOrIps,
})
}
func init() {
kmipStartCmd.Flags().String("listen-address", "localhost:5696", "The address for the KMIP server to listen on. Defaults to localhost:5696")
kmipStartCmd.Flags().String("identity-auth-method", string(util.AuthStrategy.UNIVERSAL_AUTH), "The auth method to use for authenticating the machine identity. Defaults to universal-auth.")
kmipStartCmd.Flags().String("identity-client-id", "", "Universal auth client ID of machine identity")
kmipStartCmd.Flags().String("identity-client-secret", "", "Universal auth client secret of machine identity")
kmipStartCmd.Flags().String("server-name", "kmip-server", "The name of the KMIP server")
kmipStartCmd.Flags().String("certificate-ttl", "1y", "The TTL duration for the server certificate")
kmipStartCmd.Flags().String("hostnames-or-ips", "", "Comma-separated list of hostnames or IPs")
kmipCmd.AddCommand(kmipStartCmd)
rootCmd.AddCommand(kmipCmd)
}

@ -0,0 +1,107 @@
package gateway
import (
"bufio"
"bytes"
"errors"
"io"
"net"
"sync"
"github.com/rs/zerolog/log"
)
func handleConnection(conn net.Conn) {
defer conn.Close()
log.Info().Msgf("New connection from: %s", conn.RemoteAddr().String())
// Use buffered reader for better handling of fragmented data
reader := bufio.NewReader(conn)
for {
msg, err := reader.ReadBytes('\n')
if err != nil {
if errors.Is(err, io.EOF) {
return
}
log.Error().Msgf("Error reading command: %s", err)
return
}
cmd := bytes.ToUpper(bytes.TrimSpace(bytes.Split(msg, []byte(" "))[0]))
args := bytes.TrimSpace(bytes.TrimPrefix(msg, cmd))
switch string(cmd) {
case "FORWARD-TCP":
proxyAddress := string(bytes.Split(args, []byte(" "))[0])
destTarget, err := net.Dial("tcp", proxyAddress)
if err != nil {
log.Error().Msgf("Failed to connect to target: %v", err)
return
}
defer destTarget.Close()
// Handle buffered data
buffered := reader.Buffered()
if buffered > 0 {
bufferedData := make([]byte, buffered)
_, err := reader.Read(bufferedData)
if err != nil {
log.Error().Msgf("Error reading buffered data: %v", err)
return
}
if _, err = destTarget.Write(bufferedData); err != nil {
log.Error().Msgf("Error writing buffered data: %v", err)
return
}
}
CopyData(conn, destTarget)
return
case "PING":
if _, err := conn.Write([]byte("PONG")); err != nil {
log.Error().Msgf("Error writing PONG response: %v", err)
}
return
default:
log.Error().Msgf("Unknown command: %s", string(cmd))
return
}
}
}
type CloseWrite interface {
CloseWrite() error
}
func CopyData(src, dst net.Conn) {
var wg sync.WaitGroup
wg.Add(2)
copyAndClose := func(dst, src net.Conn, done chan<- bool) {
defer wg.Done()
_, err := io.Copy(dst, src)
if err != nil && !errors.Is(err, io.EOF) {
log.Error().Msgf("Copy error: %v", err)
}
// Signal we're done writing
done <- true
// Half close the connection if possible
if c, ok := dst.(CloseWrite); ok {
c.CloseWrite()
}
}
done1 := make(chan bool, 1)
done2 := make(chan bool, 1)
go copyAndClose(dst, src, done1)
go copyAndClose(src, dst, done2)
// Wait for both copies to complete
<-done1
<-done2
wg.Wait()
}
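To make the command protocol above concrete, here is a minimal client sketch that is not part of this PR. It assumes the handler is reachable over plain TCP at a hypothetical address; in the actual deployment a client reaches handleConnection through the TURN relay over mutual TLS. The handler accepts FORWARD-TCP <host:port>, after which all further bytes are piped to the target, and PING, which answers with PONG.

package main

import (
	"bufio"
	"fmt"
	"net"
	"time"
)

func main() {
	// Hypothetical addresses for illustration only.
	gatewayAddr := "127.0.0.1:8443" // where handleConnection is listening
	targetAddr := "10.0.0.5:5432"   // private resource behind the gateway

	conn, err := net.Dial("tcp", gatewayAddr)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Ask the gateway to open a TCP tunnel to the private target.
	// Everything written after this line is relayed to targetAddr.
	fmt.Fprintf(conn, "FORWARD-TCP %s\n", targetAddr)

	conn.SetDeadline(time.Now().Add(5 * time.Second))
	if _, err := conn.Write([]byte("hello through the tunnel\n")); err != nil {
		panic(err)
	}
	reply, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		panic(err)
	}
	fmt.Printf("reply from target: %q\n", reply)
}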

@ -0,0 +1,350 @@
package gateway
import (
"context"
"crypto/tls"
"crypto/x509"
"fmt"
"net"
"strings"
"sync"
"time"
"github.com/Infisical/infisical-merge/packages/api"
"github.com/go-resty/resty/v2"
"github.com/pion/logging"
"github.com/pion/turn/v4"
"github.com/rs/zerolog/log"
)
type GatewayConfig struct {
TurnServerUsername string
TurnServerPassword string
TurnServerAddress string
InfisicalStaticIp string
SerialNumber string
PrivateKey string
Certificate string
CertificateChain string
}
type Gateway struct {
httpClient *resty.Client
config *GatewayConfig
client *turn.Client
}
func NewGateway(identityToken string) (Gateway, error) {
httpClient := resty.New()
httpClient.SetAuthToken(identityToken)
return Gateway{
httpClient: httpClient,
config: &GatewayConfig{},
}, nil
}
func (g *Gateway) ConnectWithRelay() error {
relayDetails, err := api.CallRegisterGatewayIdentityV1(g.httpClient)
if err != nil {
return err
}
relayAddress, relayPort := strings.Split(relayDetails.TurnServerAddress, ":")[0], strings.Split(relayDetails.TurnServerAddress, ":")[1]
var conn net.Conn
// Dial TURN Server
if relayPort == "5349" {
log.Info().Msgf("Provided relay port %s. Using TLS", relayPort)
conn, err = tls.Dial("tcp", relayDetails.TurnServerAddress, &tls.Config{
InsecureSkipVerify: false,
ServerName: relayAddress,
})
} else {
log.Info().Msgf("Provided relay port %s. Using non TLS connection.", relayPort)
// declare peerAddr separately so the outer err is not shadowed and the dial error below is surfaced
var peerAddr *net.TCPAddr
peerAddr, err = net.ResolveTCPAddr("tcp", relayDetails.TurnServerAddress)
if err != nil {
return fmt.Errorf("Failed to parse turn server address: %w", err)
}
conn, err = net.DialTCP("tcp", nil, peerAddr)
}
if err != nil {
return fmt.Errorf("Failed to connect with relay server: %w", err)
}
// Start a new TURN Client and wrap our net.Conn in a STUNConn
// This allows us to simulate datagram based communication over a net.Conn
cfg := &turn.ClientConfig{
STUNServerAddr: relayDetails.TurnServerAddress,
TURNServerAddr: relayDetails.TurnServerAddress,
Conn: turn.NewSTUNConn(conn),
Username: relayDetails.TurnServerUsername,
Password: relayDetails.TurnServerPassword,
Realm: relayDetails.TurnServerRealm,
LoggerFactory: logging.NewDefaultLoggerFactory(),
}
client, err := turn.NewClient(cfg)
if err != nil {
return fmt.Errorf("Failed to create relay client: %w", err)
}
g.config = &GatewayConfig{
TurnServerUsername: relayDetails.TurnServerUsername,
TurnServerPassword: relayDetails.TurnServerPassword,
TurnServerAddress: relayDetails.TurnServerAddress,
InfisicalStaticIp: relayDetails.InfisicalStaticIp,
}
// if no port is specified, allow all ports
if relayDetails.InfisicalStaticIp != "" && !strings.Contains(relayDetails.InfisicalStaticIp, ":") {
g.config.InfisicalStaticIp = g.config.InfisicalStaticIp + ":0"
}
g.client = client
return nil
}
func (g *Gateway) Listen(ctx context.Context) error {
defer g.client.Close()
err := g.client.Listen()
if err != nil {
return fmt.Errorf("Failed to listen to relay server: %w", err)
}
log.Info().Msg("Connected with relay")
// Allocate a relay socket on the TURN server. On success, it
// will return a net.PacketConn which represents the remote
// socket.
relayNonTlsConn, err := g.client.AllocateTCP()
if err != nil {
return fmt.Errorf("Failed to allocate relay connection: %w", err)
}
log.Info().Msg(relayNonTlsConn.Addr().String())
defer func() {
if closeErr := relayNonTlsConn.Close(); closeErr != nil {
log.Error().Msgf("Failed to close connection: %s", closeErr)
}
}()
gatewayCert, err := api.CallExchangeRelayCertV1(g.httpClient, api.ExchangeRelayCertRequestV1{
RelayAddress: relayNonTlsConn.Addr().String(),
})
if err != nil {
return err
}
g.config.SerialNumber = gatewayCert.SerialNumber
g.config.PrivateKey = gatewayCert.PrivateKey
g.config.Certificate = gatewayCert.Certificate
g.config.CertificateChain = gatewayCert.CertificateChain
shutdownCh := make(chan bool, 1)
if g.config.InfisicalStaticIp != "" {
log.Info().Msgf("Found static ip from Infisical: %s. Creating permission IP lifecycle", g.config.InfisicalStaticIp)
peerAddr, err := net.ResolveTCPAddr("tcp", g.config.InfisicalStaticIp)
if err != nil {
return fmt.Errorf("Failed to parse infisical static ip: %w", err)
}
g.registerPermissionLifecycle(func() error {
err := relayNonTlsConn.CreatePermissions(peerAddr)
return err
}, shutdownCh)
}
cert, err := tls.X509KeyPair([]byte(gatewayCert.Certificate), []byte(gatewayCert.PrivateKey))
if err != nil {
return fmt.Errorf("failed to parse cert: %s", err)
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM([]byte(gatewayCert.CertificateChain))
relayConn := tls.NewListener(relayNonTlsConn, &tls.Config{
Certificates: []tls.Certificate{cert},
MinVersion: tls.VersionTLS12,
ClientCAs: caCertPool,
ClientAuth: tls.RequireAndVerifyClientCert,
})
errCh := make(chan error, 1)
log.Info().Msg("Gateway started successfully")
g.registerHeartBeat(errCh, shutdownCh)
g.registerRelayIsActive(relayNonTlsConn.Addr().String(), errCh, shutdownCh)
// Create a WaitGroup to track active connections
var wg sync.WaitGroup
go func() {
for {
if relayDeadlineConn, ok := relayConn.(*net.TCPListener); ok {
relayDeadlineConn.SetDeadline(time.Now().Add(1 * time.Second))
}
select {
case <-ctx.Done():
return
case <-shutdownCh:
return
default:
// Accept new relay connection
conn, err := relayConn.Accept()
if err != nil {
// Check if it's a timeout error (which we expect due to our deadline)
if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
continue
}
if !strings.Contains(err.Error(), "data contains incomplete STUN or TURN frame") {
log.Error().Msgf("Failed to accept connection: %v", err)
}
continue
}
tlsConn, ok := conn.(*tls.Conn)
if !ok {
log.Error().Msg("Failed to convert to TLS connection")
conn.Close()
continue
}
// Set a deadline for the handshake to prevent hanging
tlsConn.SetDeadline(time.Now().Add(10 * time.Second))
err = tlsConn.Handshake()
// Clear the deadline after handshake
tlsConn.SetDeadline(time.Time{})
if err != nil {
log.Error().Msgf("TLS handshake failed: %v", err)
conn.Close()
continue
}
// Get connection state which contains certificate information
state := tlsConn.ConnectionState()
if len(state.PeerCertificates) > 0 {
organizationUnit := state.PeerCertificates[0].Subject.OrganizationalUnit
commonName := state.PeerCertificates[0].Subject.CommonName
if len(organizationUnit) == 0 || organizationUnit[0] != "gateway-client" || commonName != "cloud" {
log.Error().Msgf("Client certificate verification failed. Received %s, %s", organizationUnit, commonName)
conn.Close()
continue
}
}
// Handle the connection in a goroutine
wg.Add(1)
go func(c net.Conn) {
defer wg.Done()
defer c.Close()
// Monitor parent context to close this connection when needed
go func() {
select {
case <-ctx.Done():
c.Close() // Force close connection when context is canceled
case <-shutdownCh:
c.Close() // Force close connection when accepting loop is done
}
}()
handleConnection(c)
}(conn)
}
}
}()
select {
case <-ctx.Done():
log.Info().Msg("Shutting down gateway...")
case err = <-errCh:
}
// Signal the accept loop to stop
close(shutdownCh)
// Set a timeout for waiting on connections to close
waitCh := make(chan struct{})
go func() {
wg.Wait()
close(waitCh)
}()
select {
case <-waitCh:
// All connections closed normally
case <-time.After(5 * time.Second):
log.Warn().Msg("Timeout waiting for connections to close gracefully")
}
return err
}
func (g *Gateway) registerHeartBeat(errCh chan error, done chan bool) {
ticker := time.NewTicker(1 * time.Hour)
go func() {
time.Sleep(10 * time.Second)
log.Info().Msg("Registering first heart beat")
err := api.CallGatewayHeartBeatV1(g.httpClient)
if err != nil {
log.Error().Msgf("Failed to register heartbeat: %s", err)
}
for {
select {
case <-done:
ticker.Stop()
return
case <-ticker.C:
log.Info().Msg("Registering heart beat")
err := api.CallGatewayHeartBeatV1(g.httpClient)
errCh <- err
}
}
}()
}
func (g *Gateway) registerPermissionLifecycle(permissionFn func() error, done chan bool) {
ticker := time.NewTicker(3 * time.Minute)
go func() {
// create the initial permission immediately, then refresh it on each ticker interval
permissionFn()
log.Printf("Created permission for incoming connections")
for {
select {
case <-done:
ticker.Stop()
return
case <-ticker.C:
permissionFn()
}
}
}()
}
func (g *Gateway) registerRelayIsActive(serverAddr string, errCh chan error, done chan bool) {
ticker := time.NewTicker(10 * time.Second)
go func() {
time.Sleep(5 * time.Second)
for {
select {
case <-done:
ticker.Stop()
return
case <-ticker.C:
conn, err := net.Dial("tcp", serverAddr)
if err != nil {
errCh <- err
return
}
if conn != nil {
conn.Close()
}
}
}
}()
}

@@ -56,4 +56,3 @@ volumes:
networks:
infisical:

@@ -11,13 +11,6 @@ To interact with the Infisical API, you will need to obtain an access token. Fol
**FAQ**
<AccordionGroup>
<Accordion title="What happened to the Service Token and API Key authentication modes?">
The Service Token and API Key authentication modes are being deprecated in favor of [Identities](/documentation/platform/identity).
We expect to make a deprecation notice in the coming months alongside a larger deprecation initiative planned for Q1/Q2 2024.
With identities, we're addressing the shortcomings of Service Tokens and API Keys. Among many differences, identities provide broader access to the Infisical API, utilize the same role-based
permission system used by users, and come with many more configurable security measures.
</Accordion>
<Accordion title="Why can I not create, read, update, or delete an identity?">
There are a few reasons for why this might happen:

@@ -4,9 +4,6 @@ title: "Backend development guide"
Suppose you're interested in implementing a new feature in Infisical's backend; let's call it "feature-x." Here are the general steps you should follow.
## Database schema migration
In order to run [schema migrations](https://en.wikipedia.org/wiki/Schema_migration#:~:text=A%20schema%20migration%20is%20performed,some%20newer%20or%20older%20version) you need to expose your database connection string. Create a `.env.migration` file to set the database connection URI for migration scripts, or alternatively, export the `DB_CONNECTION_URI` environment variable.
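For instance, a minimal local setup could look like the following sketch, assuming the migration file uses the same `DB_CONNECTION_URI` variable name and substituting your own connection string:
```bash
# Option 1: dedicated env file read by the migration scripts
echo 'DB_CONNECTION_URI=postgres://user:password@localhost:5432/infisical' > .env.migration

# Option 2: export the variable for the current shell session instead
export DB_CONNECTION_URI='postgres://user:password@localhost:5432/infisical'
```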
## Creating new database model
If your feature involves a change in the database, you need to first address this by generating the necessary database schemas.

@@ -0,0 +1,81 @@
---
title: "Gateway"
sidebarTitle: "Overview"
description: "How to access private network resources from Infisical"
---
The Infisical Gateway provides secure access to private resources within your network without needing direct inbound connections to your environment.
This method keeps your resources fully protected from external access while enabling Infisical to securely interact with resources like databases.
Common use cases include generating dynamic credentials or rotating credentials for private databases.
<Info>
**Note:** Gateway is a paid feature.
- **Infisical Cloud users:** Gateway is available under the **Enterprise Tier**.
- **Self-Hosted Infisical:** Please contact [sales@infisical.com](mailto:sales@infisical.com) to purchase an enterprise license.
</Info>
## How It Works
The Gateway serves as a secure intermediary that facilitates direct communication between the Infisical server and your private network.
It's a lightweight daemon packaged within the Infisical CLI, making it easy to deploy and manage. Once set up, the Gateway establishes a connection with a relay server, ensuring that all communication between Infisical and your Gateway is fully end-to-end encrypted.
This setup guarantees that only the platform and your Gateway can decrypt the transmitted information, keeping communication with your resources secure, private and isolated.
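Because the Gateway only dials out (to Infisical and the relay) and does not require any inbound ports to be opened, a quick sanity check from the host that will run it is to confirm outbound HTTPS reachability. The URL below assumes Infisical Cloud; adjust it for self-hosted instances:
```bash
# Expect an HTTP status code back if outbound traffic on port 443 is permitted
curl -sS -o /dev/null -w "%{http_code}\n" https://app.infisical.com
```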
## Deployment
The Infisical Gateway is seamlessly integrated into the Infisical CLI under the `gateway` command, making it simple to deploy and manage.
You can install the Gateway in all the same ways you install the Infisical CLI—whether via npm, Docker, or a binary.
For detailed installation instructions, refer to the Infisical [CLI Installation instructions](/cli/overview).
To function, the Gateway must authenticate with Infisical. This requires a machine identity configured with the appropriate permissions to create and manage a Gateway.
Once authenticated, the Gateway establishes a secure connection with Infisical to allow your private resources to be reachable.
### Deployment process
<Steps>
<Step title="Create a Gateway Identity">
1. Navigate to **Organization Access Control** in your Infisical dashboard.
2. Create a dedicated machine identity for your Gateway.
3. **Best Practice:** Assign a unique identity to each Gateway for better security and management.
![Create Gateway Identity](../../../images/platform/gateways/create-identity-for-gateway.png)
</Step>
<Step title="Configure Authentication Method">
You'll need to choose an authentication method to initiate communication with Infisical. View the available machine identity authentication methods [here](/documentation/platform/identities/machine-identities).
</Step>
<Step title="Deploy the Gateway">
Use the Infisical CLI to deploy the Gateway. You can log in with your machine identity and start the Gateway in one command. The example below demonstrates how to deploy the Gateway using the Universal Auth method:
```bash
infisical gateway --token $(infisical login --method=universal-auth --client-id=<> --client-secret=<> --plain)
```
Alternatively, if you already have the token, use it directly with the `--token` flag:
```bash
infisical gateway --token <your-machine-identity-token>
```
Or set it as an environment variable:
```bash
export INFISICAL_TOKEN=<your-machine-identity-token>
infisical gateway
```
<Note>
Ensure the deployed Gateway has network access to the private resources you intend to connect with Infisical.
</Note>
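For example, you can confirm that a private database is reachable from the Gateway host with a simple TCP check (requires netcat; the hostname and port below are placeholders for your own resource):
```bash
# Hypothetical internal Postgres host; replace with your own resource
nc -zv postgres.internal.example.com 5432
```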
</Step>
<Step title="Verify Gateway Deployment">
To confirm your Gateway is working, check the deployment status by looking for the message **"Gateway started successfully"** in the Gateway logs. This indicates the Gateway is running properly. Next, verify its registration by opening your Infisical dashboard, navigating to **Organization Access Control**, and selecting the **Gateways** tab. Your newly deployed Gateway should appear in the list.
![Gateway List](../../../images/platform/gateways/gateway-list.png)
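If you capture the Gateway's output to a file, for instance, the startup message can be checked directly (the log path is a placeholder for however you run the process):
```bash
grep "Gateway started successfully" /var/log/infisical-gateway.log
```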
</Step>
<Step title="Link Gateway to Projects">
To enable Infisical features like dynamic secrets or secret rotation to access private resources through the Gateway, you need to link the Gateway to the relevant projects.
Start by accessing the **Gateway settings**, then locate the Gateway in the list, click the options menu (**:**), and select **Edit Details**.
![Edit Gateway Option](../../../images/platform/gateways/edit-gateway.png)
In the edit modal that appears, choose the projects you want the Gateway to access and click **Save** to confirm your selections.
![Project Assignment Modal](../../../images/platform/gateways/assign-project.png)
Once added to a project, the Gateway becomes available for use by any feature that supports Gateways within that project.
</Step>
</Steps>

@@ -0,0 +1,87 @@
---
title: "Terraform Cloud"
description: "How to authenticate with Infisical from Terraform Cloud using OIDC."
---
This guide will walk you through setting up Terraform Cloud to inject a [workload identity token](https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/workload-identity-tokens) and use it for OIDC-based authentication with the Infisical Terraform provider. You'll start by creating a machine identity in Infisical, then configure Terraform Cloud to pass the injected token into your Terraform runs.
<Steps>
<Step title="Create a Machine Identity in Infisical">
Follow the instructions [in this documentation](/documentation/platform/identities/oidc-auth/general) to create a machine identity with OIDC auth. Infisical OIDC configuration values for Terraform Cloud:
1. Set the OIDC Discovery URL to https://app.terraform.io.
2. Set the Issuer to https://app.terraform.io.
3. Configure the Audience to match the value you will use for **TFC_WORKLOAD_IDENTITY_AUDIENCE** in Terraform Cloud for the next step.
To view all possible claims available from Terraform Cloud, visit [HashiCorp's documentation](https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/workload-identity-tokens#token-structure).
</Step>
<Step title="Enable Workload Identity Token Injection in Terraform Cloud">
<Tabs>
<Tab title="Generate single token">
1. **Navigate to your workspace** in Terraform Cloud.
2. **Add a workspace variable** named `TFC_WORKLOAD_IDENTITY_AUDIENCE`:
- **Key**: `TFC_WORKLOAD_IDENTITY_AUDIENCE`
- **Value**: For example, `my-infisical-audience`
- **Category**: Environment
> **Important**:
> - The presence of `TFC_WORKLOAD_IDENTITY_AUDIENCE` is required for Terraform Cloud to inject a token.
> - If you are self-hosting HCP Terraform agents, ensure they are **v1.7.0 or above**.
Once set, Terraform Cloud will inject a workload identity token into the run environment as `TFC_WORKLOAD_IDENTITY_TOKEN`.
</Tab>
<Tab title="(Optional) Generate Multiple Tokens">
If you need multiple tokens (each with a different audience), create additional variables:
```
TFC_WORKLOAD_IDENTITY_AUDIENCE_[YOUR_TAG_HERE]
```
For example:
- `TFC_WORKLOAD_IDENTITY_AUDIENCE_INFISICAL`
- `TFC_WORKLOAD_IDENTITY_AUDIENCE_OTHER_SERVICE`
Terraform Cloud will then inject:
- `TFC_WORKLOAD_IDENTITY_TOKEN_INFISICAL`
- `TFC_WORKLOAD_IDENTITY_TOKEN_OTHER_SERVICE`
> **Note**:
> - The `[YOUR_TAG_HERE]` can only contain letters, numbers, and underscores.
> - You **cannot** use the reserved keyword `TYPE`.
> - Generating multiple tokens requires **v1.12.0 or later** if you are self-hosting agents.
</Tab>
</Tabs>
<Warning>
If you are running on self-hosted HCP Terraform agents, you must use v1.7.0 or later to enable token injection. If you need to generate multiple tokens, you must use v1.12.0 or later.
</Warning>
</Step>
<Step title="Configure the Infisical Provider">
In your Terraform configuration, reference the injected token by name. For example:
```hcl
provider "infisical" {
host = "https://app.infisical.com"
auth = {
oidc = {
identity_id = "<identity-id>"
# This must match the environment variable Terraform injects:
token_environment_variable_name = "TFC_WORKLOAD_IDENTITY_TOKEN"
}
}
}
```
- **`host`**: Defaults to `https://app.infisical.com`. Override if using a self-hosted Infisical instance.
- **`identity_id`**: The OIDC identity ID from Infisical.
- **`token_environment_variable_name`**: Must match the injected variable name from Terraform Cloud. If using a single token, use `TFC_WORKLOAD_IDENTITY_TOKEN`. If using multiple tokens, choose the one you want to use (e.g., `TFC_WORKLOAD_IDENTITY_TOKEN_INFISICAL`).
</Step>
<Step title="Validate Your Setup">
1. Run a plan and apply in Terraform Cloud.
2. Verify the Infisical provider authenticates successfully without issues. If you run into authentication errors, double-check the Infisical identity has the correct roles/permissions in Infisical.
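If your workspace uses the CLI-driven workflow with remote execution, one way to trigger such a run from your terminal is sketched below (assumes Terraform is installed locally and authenticated against Terraform Cloud; the run itself executes remotely, where the token is injected):
```bash
terraform init
terraform plan   # executes in Terraform Cloud, where TFC_WORKLOAD_IDENTITY_TOKEN is available
```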
</Step>
</Steps>

@@ -0,0 +1,142 @@
---
title: "KMIP Integration"
description: "Learn more about integrating with Infisical KMS using KMIP (Key Management Interoperability Protocol)."
---
<Note>
KMIP integration is an Enterprise-only feature. Please reach out to
sales@infisical.com if you have any questions.
</Note>
## Overview
Infisical KMS provides **Key Management Interoperability Protocol (KMIP)** support, enabling seamless integration with KMIP-compatible clients. This allows for enhanced key management across various applications that support the **KMIP 1.4 protocol**.
## Supported Operations
The Infisical KMIP server supports the following operations for **symmetric keys**:
- **Create** - Generate symmetric keys.
- **Register** - Register externally created keys.
- **Locate** - Find keys using attributes.
- **Get** - Retrieve keys securely.
- **Activate** - Enable keys for usage.
- **Revoke** - Revoke existing keys.
- **Destroy** - Permanently remove keys.
- **Get Attributes** - Retrieve metadata associated with keys.
- **Query** - Query server capabilities and supported operations.
## Benefits of KMIP Integration
Integrating Infisical KMS with KMIP-compatible clients provides the following benefits:
- **Standardized Key Management**: Allows interoperability with security and cryptographic applications that support KMIP.
- **Enterprise-Grade Security**: Utilizes Infisical's encryption mechanisms to securely store and manage keys.
- **Centralized Key Management**: Enables a unified approach for managing cryptographic keys across multiple environments.
## Compatibility
Infisical KMIP supports **KMIP versions 1.0 to 1.4**, ensuring compatibility with a wide range of clients and security tools.
## Secure Communication & Authorization
KMIP client-server communication is secured using **mutual TLS (mTLS)**, ensuring strong identity verification and encrypted data exchange via **PKI certificates**. Each KMIP entity must possess valid certificates signed by a trusted Root CA to establish trust.
For strong isolation, each Infisical organization has its own KMIP PKI (Public Key Infrastructure), ensuring that cryptographic operations and certificate authorities remain separate across organizations.
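As an illustration of this trust model, a client certificate issued under your organization's KMIP PKI should validate against that organization's certificate chain (file names below are placeholders):
```bash
# Verify a KMIP client certificate against the organization's KMIP CA chain
openssl verify -CAfile kmip-ca-chain.pem kmip-client-cert.pem
```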
Infisical KMS enforces a **two-layer authorization model** for KMIP operations:
1. **KMIP Server Authorization**: The KMIP server, acting as a proxy, must have the `proxy KMIP` permission to forward client requests to Infisical KMS. This is done using a **machine identity** attached to the KMIP server.
2. **KMIP Client Authorization**: Clients must have the necessary KMIP-level permissions to perform specific key management operations.
By combining **mTLS for secure communication** and **machine identity-based proxying**, Infisical KMS ensures **strong authentication, controlled access, and centralized key management** for KMIP operations.
## Setup Instructions
### Setup KMIP for your organization
<Steps>
<Step title="Navigate to the organization settings > KMIP">
From there, press Setup KMIP.
![KMIP org navigate](/images/platform/kms/kmip/kmip-org-setup-navigation.png)
</Step>
<Step title="Configure KMIP PKI for the organization">
In the modal, select the desired key algorithm to use for the KMIP PKI of your organization. Press continue.
![KMIP org PKI setup](/images/platform/kms/kmip/kmip-org-setup-modal.png)
This generates the KMIP PKI for your organization. After this, you can proceed to setting up your KMIP server.
</Step>
</Steps>
### Deploying and Configuring the KMIP Server
Follow these steps to configure and deploy a KMIP server.
<Steps>
<Step title="Setup Machine Identity">
Configure a [machine identity](https://infisical.com/docs/documentation/platform/identities/machine-identities#machine-identities) for the KMIP server to use.
![KMIP create machine identity](/images/platform/kms/kmip/kmip-create-mi.png)
Create a custom organization role and give it the **Proxy KMIP** permission.
![KMIP create custom role](/images/platform/kms/kmip/kmip-create-custom-role.png)
![KMIP assign proxy to role](/images/platform/kms/kmip/kmip-assign-custom-role-proxy.png)
Assign the machine identity to the custom organization role. This allows the machine identity to serve KMIP client requests and forward them from your KMIP server to Infisical.
![KMIP assign role to machine identity](/images/platform/kms/kmip/kmip-assign-mi-to-role.png)
</Step>
<Step title="Start up the KMIP server">
To deploy the KMIP server, use the Infisical CLI's `kmip start` command.
Before proceeding, make sure you have the [Infisical CLI installed](https://infisical.com/docs/cli/overview).
Once installed, launch the KMIP server with the following command:
```bash
# The client ID and secret can instead be supplied via the
# INFISICAL_UNIVERSAL_AUTH_CLIENT_ID and INFISICAL_UNIVERSAL_AUTH_CLIENT_SECRET environment variables.
infisical kmip start \
  --identity-client-id=<machine-identity-client-id> \
  --identity-client-secret=<machine-identity-client-secret> \
  --domain=https://app.infisical.com \
  --hostnames-or-ips="my-kmip-server.com"
```
The following flags are available for the `infisical kmip start` command:
- **listen-address** (default: localhost:5696): The address the KMIP server listens on.
- **identity-auth-method** (default: universal-auth): The authentication method for the machine identity.
- **identity-client-id**: The client ID of the machine identity. This can be set by defining the `INFISICAL_UNIVERSAL_AUTH_CLIENT_ID` ENV variable.
- **identity-client-secret**: The client secret of the machine identity. This can be set by defining the `INFISICAL_UNIVERSAL_AUTH_CLIENT_SECRET` ENV variable.
- **server-name** (default: "kmip-server"): The name of the KMIP server.
- **certificate-ttl** (default: "1y"): The duration for which the server certificate is valid.
- **hostnames-or-ips:** A comma-separated list of hostnames or IPs the KMIP server will use (required).
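As noted in the example above, the client credentials can also come from the documented environment variables rather than flags; a minimal sketch (all values are placeholders):
```bash
export INFISICAL_UNIVERSAL_AUTH_CLIENT_ID=<machine-identity-client-id>
export INFISICAL_UNIVERSAL_AUTH_CLIENT_SECRET=<machine-identity-client-secret>

infisical kmip start \
  --domain=https://app.infisical.com \
  --hostnames-or-ips="my-kmip-server.com"
```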
</Step>
</Steps>
### Add and Configure KMIP Clients
<Steps>
<Step title="Navigate to the desired KMS project and select KMIP">
From there, press Add KMIP Client.
![KMIP client overview](/images/platform/kms/kmip/kmip-client-overview.png)
</Step>
<Step title="Configure KMIP client">
In the modal, provide the details of your client. The selected permissions determine what KMIP operations can be performed in your KMS project.
![KMIP client modal](/images/platform/kms/kmip/kmip-client-modal.png)
</Step>
<Step title="Generate client certificate">
Once the KMIP client is created, you will have to generate a client certificate.
Press Generate Certificate.
![KMIP generate client cert](/images/platform/kms/kmip/kmip-client-generate-cert.png)
Provide the desired TTL and key algorithm to use and press Generate Client Certificate.
![KMIP client cert config](/images/platform/kms/kmip/kmip-client-cert-config-modal.png)
Configure your KMIP clients to use the generated client certificate, certificate chain and private key.
![KMIP client cert modal](/images/platform/kms/kmip/kmip-client-certificate-modal.png)
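To sanity-check the mutual TLS handshake from a client machine, you can point `openssl s_client` at the KMIP server using the generated materials (file names are placeholders; the host matches the earlier example and the port assumes the default listen address):
```bash
openssl s_client -connect my-kmip-server.com:5696 \
  -cert kmip-client-cert.pem \
  -key kmip-client-key.pem \
  -CAfile kmip-certificate-chain.pem
```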
</Step>
</Steps>
## Additional Resources
- [KMIP 1.4 Specification](http://docs.oasis-open.org/kmip/spec/v1.4/os/kmip-spec-v1.4-os.html)

@@ -0,0 +1,36 @@
---
title: "Networking"
sidebarTitle: "Networking"
description: "Network configuration details for Infisical Cloud"
---
## Overview
When integrating your infrastructure with Infisical Cloud, you may need to configure network access controls. This page provides the IP addresses that Infisical uses to communicate with your services.
## Egress IP Addresses
Infisical Cloud operates from two regions: US and EU. If your infrastructure has strict network policies, you may need to allow traffic from Infisical by adding the following IP addresses to your ingress rules. These are the egress IPs Infisical uses when making outbound requests to your services.
### US Region
To allow connections from Infisical US, add these IP addresses to your ingress rules:
- `3.213.63.16`
- `54.164.68.7`
### EU Region
To allow connections from Infisical EU, add these IP addresses to your ingress rules:
- `3.77.89.19`
- `3.125.209.189`
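For example, on a Linux host protected by `ufw`, the Infisical egress IPs could be admitted to a hypothetical service port like this (shown for the US region; substitute the EU addresses above if your project runs in the EU region, and adjust the port to your service):
```bash
sudo ufw allow from 3.213.63.16 to any port 443 proto tcp
sudo ufw allow from 54.164.68.7 to any port 443 proto tcp
```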
## Common Use Cases
You may need to allow Infisical's egress IPs if your services require inbound connections for:
- Secret rotation - When Infisical needs to send requests to your systems to automatically rotate credentials
- Dynamic secrets - When Infisical generates and manages temporary credentials for your cloud services
- Secret integrations - When syncing secrets with third-party services like Azure Key Vault
- Native authentication with machine identities - When using methods like Kubernetes authentication
