Compare commits

...

87 Commits

Author SHA1 Message Date
Sheen Capadngan
98371f99e7 misc: removed webhook url from audit logs table 2024-07-04 23:16:02 +08:00
Maidul Islam
ddfc645cdd Merge pull request #2068 from akhilmhdh/feat/audit-log-batching
Changed audit log deletion to batched process
2024-07-04 10:54:54 -04:00
f4d9c61404 feat: added a pause in between as breather for db delete 2024-07-04 13:59:15 +05:30
5342c85696 feat: changed audit log deletion to batched process 2024-07-04 13:26:11 +05:30
Sheen Capadngan
b05f3e0f1f Merge pull request #2050 from Infisical/feat/native-slack-webhook
feat: added native slack webhook type
2024-07-04 14:50:58 +08:00
Akhil Mohan
9a2645b511 Merge pull request #2065 from akhilmhdh/fix/provider-not-found
Fix provider not found error for secret rotation
2024-07-04 12:08:55 +05:30
Sheen Capadngan
cb664bb042 misc: addressed review comments 2024-07-04 13:33:32 +08:00
BlackMagiq
07db1d826b Merge pull request #2067 from Infisical/fix-license-seats-invite-propagation
Fix license seat count upon complete account invite with tx
2024-07-03 13:43:00 -07:00
Tuan Dang
74db1b75b4 Add tx support for seat count in license invitation update 2024-07-03 13:33:40 -07:00
d7023881e5 fix: resolving provider not found error for secret rotation 2024-07-03 20:39:02 +05:30
Maidul Islam
b74595cf35 Merge pull request #2060 from Infisical/fix/addressed-main-page-ui-ux-reports
fix: addressed main page ui/ux concerns
2024-07-03 08:40:40 -04:00
Sheen Capadngan
a45453629c misc: addressed main page ui/ux concerns 2024-07-03 18:32:21 +08:00
Sheen Capadngan
f7626d03bf misc: documentation 2024-07-03 12:26:42 +08:00
Maidul Islam
bc14153bb3 Merge pull request #2049 from akhilmhdh/dynamic-secret/mssql
Dynamic secret MS SQL
2024-07-02 21:22:34 -04:00
Maidul Islam
935a3cb036 Merge pull request #2026 from Infisical/feat/allow-toggling-login-options-as-admin
feat: allowed toggling login options as admin
2024-07-02 14:03:11 -04:00
Sheen Capadngan
148a29db19 Merge branch 'feat/allow-toggling-login-options-as-admin' of https://github.com/Infisical/infisical into feat/allow-toggling-login-options-as-admin 2024-07-03 01:58:04 +08:00
Sheen Capadngan
b12de3e4f5 misc: removed usecallback 2024-07-03 01:57:24 +08:00
Akhil Mohan
661e5ec462 Merge pull request #2052 from Infisical/maidul-2132
Main
2024-07-02 20:29:43 +05:30
Maidul Islam
5cca51d711 access prod db in ci 2024-07-02 10:57:05 -04:00
Maidul Islam
9e9b9a7b94 update self lock out msg 2024-07-02 10:53:36 -04:00
Maidul Islam
df1ffcf934 Merge pull request #2051 from Infisical/misc/add-config-to-redacted-keys
misc: add config to redacted keys
2024-07-02 10:47:20 -04:00
Sheen Capadngan
0ef7eacd0e misc: add config to redacted keys 2024-07-02 22:34:40 +08:00
Sheen Capadngan
776822d7d5 misc: updated secret path component 2024-07-02 20:54:27 +08:00
Sheen Capadngan
fe9af20d8c fix: addressed type issue 2024-07-02 20:28:03 +08:00
Sheen Capadngan
398a8f363d misc: cleanup of form display structure 2024-07-02 20:20:25 +08:00
Sheen Capadngan
ce5dbca6e2 misc: added placeholder for incoming webhook url 2024-07-02 20:04:55 +08:00
Sheen Capadngan
ed5a7d72ab feat: added native slack webhook type 2024-07-02 19:57:58 +08:00
Sheen Capadngan
3ac6b7be65 Merge pull request #2046 from Infisical/misc/add-check-for-ldap-group
misc: added backend check for ldap group config
2024-07-02 12:59:03 +08:00
Maidul Islam
10601b5afd Merge pull request #2039 from akhilmhdh/feat/migration-file-checks
feat: added slugify migration file creator name and additional check to ensure migration files are not edited in PR
2024-07-01 21:01:47 -04:00
Maidul Islam
8eec08356b update error message 2024-07-01 20:59:56 -04:00
0b4d4c008a docs: dynamic secret mssql 2024-07-02 00:18:56 +05:30
ae953add3d feat: dynamic secret for mssql completed 2024-07-02 00:12:38 +05:30
Sheen Capadngan
5960a899ba Merge pull request #2048 from Infisical/create-pull-request/patch-1719844740
GH Action: rename new migration file timestamp
2024-07-02 01:25:54 +08:00
github-actions
ea98a0096d chore: renamed new migration files to latest timestamp (gh-action) 2024-07-01 14:38:59 +00:00
Sheen Capadngan
b8f65fc91a Merge pull request #2040 from Infisical/feat/mark-projects-as-favourite
feat: allow org members to mark projects as favorites
2024-07-01 22:38:36 +08:00
Sheen Capadngan
06a4e68ac1 misc: more improvements 2024-07-01 22:33:01 +08:00
Sheen Capadngan
9cbf9a675a misc: simplified update project favorites logic 2024-07-01 22:22:44 +08:00
Akhil Mohan
178ddf1fb9 Merge pull request #2032 from akhilmhdh/fix/role-bug
Resolved identity roleId not setting null for predefined role selection
2024-07-01 19:42:17 +05:30
Sheen Capadngan
030d4fe152 misc: added handling of empty groups and default value 2024-07-01 21:10:27 +08:00
Sheen Capadngan
46abda9041 misc: add org scoping to mutation 2024-07-01 20:22:59 +08:00
Sheen Capadngan
c976a5ccba misc: add scoping to org-level 2024-07-01 20:20:15 +08:00
Sheen Capadngan
1eb9ea9c74 misc: implemented more review comments 2024-07-01 20:10:41 +08:00
Sheen Capadngan
7d7612aaf4 misc: removed use memo 2024-07-01 18:29:56 +08:00
Sheen Capadngan
f570b3b2ee misc: combined into one list 2024-07-01 18:23:38 +08:00
Sheen Capadngan
0b8f6878fe misc: added check for ldap group 2024-07-01 18:12:16 +08:00
Sheen Capadngan
758a9211ab misc: addressed pr comments 2024-07-01 13:11:47 +08:00
Vladyslav Matsiiako
0bb2b2887b updated handbook 2024-06-30 10:02:51 -07:00
Vladyslav Matsiiako
eeb0111bbe updated handbook style 2024-06-30 02:03:59 -07:00
Vladyslav Matsiiako
d12c538511 updated handbook 2024-06-30 02:01:40 -07:00
Maidul Islam
6f67346b2a Merge pull request #2042 from Infisical/daniel/fix-k8-managed-secret-crash
fix(k8-operator): crash on predefined managed secret
2024-06-28 17:01:37 -04:00
Daniel Hougaard
a93db44bbd Helm 2024-06-28 21:34:59 +02:00
Daniel Hougaard
1ddacfda62 Fix: Annotations map nil sometimes nil when pre-created by the user 2024-06-28 21:29:33 +02:00
Sheen Capadngan
5a1e43be44 misc: only display recover when email login is enabled 2024-06-29 02:12:09 +08:00
Sheen Capadngan
04f54479cd misc: implemented review comments 2024-06-29 01:58:27 +08:00
Sheen Capadngan
351d0d0662 Merge pull request #2033 from Infisical/misc/added-secret-name-trim
misc: added secret name trimming
2024-06-29 01:14:23 +08:00
Sheen Capadngan
5a01edae7a misc: added favorites to app layout selection 2024-06-29 01:02:28 +08:00
506e86d666 feat: added slugify migration file creator name and additional check to ensure migration files are not edited in PR 2024-06-28 20:33:56 +05:30
Sheen Capadngan
11d9166684 misc: initial project favorite in grid view 2024-06-28 17:40:34 +08:00
Maidul Islam
1859557f90 Merge pull request #2027 from akhilmhdh/feat/secret-manager-integration-auth
AWS Secret Manager assume role based integration
2024-06-27 23:32:25 -04:00
Maidul Islam
59fc34412d small nits for admin login toggle pr 2024-06-27 20:35:15 -04:00
Maidul Islam
1b2a1f2339 Merge pull request #2019 from akhilmhdh/feat/read-replica
Postgres read replica support
2024-06-27 19:36:44 -04:00
BlackMagiq
15b4c397ab Merge pull request #2024 from Infisical/revert-2023-revert-1995-identity-based-pricing
Add support for Identity-Based Pricing"
2024-06-27 15:56:44 -07:00
Akhil Mohan
fc27ad4575 Merge pull request #2037 from Infisical/create-pull-request/patch-1719509560
GH Action: rename new migration file timestamp
2024-06-27 23:07:16 +05:30
github-actions
b7467a83ab chore: renamed new migration files to latest timestamp (gh-action) 2024-06-27 17:32:39 +00:00
Akhil Mohan
3baf434230 Merge pull request #2034 from Infisical/misc/add-on-update-trigger-oidc
misc: add onUpdate trigger to oidc config
2024-06-27 23:02:14 +05:30
Sheen Capadngan
b2d6563994 misc: added secret name trimming 2024-06-27 19:41:00 +08:00
cfba8f53e3 fix: resolved identity roleId not setting null for predefined role switch 2024-06-27 15:06:00 +05:30
3537a5eb9b feat: switch to tabs instead of separate pages for aws secret manager assume and access key 2024-06-27 13:06:04 +05:30
d5b17a8f24 feat: removed explicit check for aws access key credential allowing to pick it automatically 2024-06-27 13:05:30 +05:30
Sheen Capadngan
d6881e2e68 misc: added signup option filtering 2024-06-27 13:53:12 +08:00
Sheen Capadngan
92a663a17d misc: design change to finalize scim section in org settings 2024-06-27 13:24:26 +08:00
Sheen Capadngan
b3463e0d0f misc: added explicit comment of intent 2024-06-27 12:55:39 +08:00
Sheen Capadngan
c460f22665 misc: added backend disable checks 2024-06-27 12:40:56 +08:00
Sheen Capadngan
db39d03713 misc: added check to backend 2024-06-27 01:59:02 +08:00
Sheen Capadngan
9daa5badec misc: made reusable helper for login page 2024-06-27 01:15:50 +08:00
Sheen Capadngan
e1ed37c713 misc: adjusted OrgSettingsPage and PersonalSettingsPage to include toggle 2024-06-27 01:07:28 +08:00
8eea82a1a0 docs: updated docs on usage of aws sm integration with assume role 2024-06-26 22:35:49 +05:30
694d0e3ed3 feat: updated ui for aws sm assume role integration 2024-06-26 22:35:12 +05:30
58f6c6b409 feat: updated integration api and queue to support aws secret manager assume role feature 2024-06-26 22:33:49 +05:30
Sheen Capadngan
98a15a901e feat: allowed toggling login options as admin 2024-06-26 22:45:14 +08:00
Maidul Islam
1c2698f533 Revert "Revert "Add support for Identity-Based Pricing"" 2024-06-25 18:04:26 -04:00
5d59fe8810 fix: resolved rebase issue with knex.d.ts 2024-06-25 22:54:45 +05:30
90eed8d39b docs: updated replica information to the docs 2024-06-25 22:51:38 +05:30
f5974ce9ad feat: resolved some queries giving any[] on db instance modification for replication 2024-06-25 22:51:38 +05:30
c6b51af4b1 feat: removed knex-tables.d.ts 2024-06-25 22:51:37 +05:30
c13c37fc77 feat: switched read db operations to replica nodes 2024-06-25 22:50:17 +05:30
259c01c110 feat: added read replica option in config and extended knex to choose 2024-06-25 22:49:28 +05:30
150 changed files with 4672 additions and 942 deletions

View File

@@ -105,6 +105,13 @@ jobs:
environment:
name: Production
steps:
- uses: twingate/github-action@v1
with:
# The Twingate Service Key used to connect Twingate to the proper service
# Learn more about [Twingate Services](https://docs.twingate.com/docs/services)
#
# Required
service-key: ${{ secrets.TWINGATE_SERVICE_KEY }}
- name: Checkout code
uses: actions/checkout@v2
- name: Setup Node.js environment

View File

@@ -0,0 +1,25 @@
name: Check migration file edited
on:
pull_request:
types: [opened, synchronize]
paths:
- 'backend/src/db/migrations/**'
jobs:
rename:
runs-on: ubuntu-latest
steps:
- name: Check out repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check if any migration files are modified, renamed or duplicated
run: |
# group the diff/grep so "|| true" applies before piping into cut
{ git diff --name-status HEAD^ HEAD backend/src/db/migrations | grep '^M\|^R\|^C' || true; } | cut -f2 | xargs -r -n1 basename > edited_files.txt
if [ -s edited_files.txt ]; then
echo "Exiting: migration files cannot be modified."
cat edited_files.txt
exit 1
fi

View File

@@ -19,18 +19,16 @@ jobs:
- name: Get list of newly added files in migration folder
run: |
git diff --name-status HEAD^ HEAD backend/src/db/migrations | grep '^A' | cut -f2 | xargs -n1 basename > added_files.txt
{ git diff --name-status HEAD^ HEAD backend/src/db/migrations | grep '^A' || true; } | cut -f2 | xargs -r -n1 basename > added_files.txt
if [ ! -s added_files.txt ]; then
echo "No new files added. Skipping"
echo "SKIP_RENAME=true" >> $GITHUB_ENV
exit 0
fi
- name: Script to rename migrations
if: env.SKIP_RENAME != 'true'
run: python .github/resources/rename_migration_files.py
- name: Commit and push changes
if: env.SKIP_RENAME != 'true'
run: |
git config user.name github-actions
git config user.email github-actions@github.com

View File

@@ -5,3 +5,4 @@ frontend/src/views/Project/MembersPage/components/MemberListTab/MemberRoleForm/M
frontend/src/views/Project/MembersPage/components/MemberListTab/MemberRoleForm/SpecificPrivilegeSection.tsx:generic-api-key:292
docs/self-hosting/configuration/envars.mdx:generic-api-key:106
frontend/src/views/Project/MembersPage/components/MemberListTab/MemberRoleForm/SpecificPrivilegeSection.tsx:generic-api-key:451
docs/mint.json:generic-api-key:651

View File

@@ -3,7 +3,6 @@ import "ts-node/register";
import dotenv from "dotenv";
import jwt from "jsonwebtoken";
import knex from "knex";
import path from "path";
import { seedData1 } from "@app/db/seed-data";
@@ -15,6 +14,7 @@ import { AuthMethod, AuthTokenType } from "@app/services/auth/auth-type";
import { mockQueue } from "./mocks/queue";
import { mockSmtpServer } from "./mocks/smtp";
import { mockKeyStore } from "./mocks/keystore";
import { initDbConnection } from "@app/db";
dotenv.config({ path: path.join(__dirname, "../../.env.test"), debug: true });
export default {
@@ -23,23 +23,21 @@ export default {
async setup() {
const logger = await initLogger();
const cfg = initEnvConfig(logger);
const db = knex({
client: "pg",
connection: cfg.DB_CONNECTION_URI,
migrations: {
directory: path.join(__dirname, "../src/db/migrations"),
extension: "ts",
tableName: "infisical_migrations"
},
seeds: {
directory: path.join(__dirname, "../src/db/seeds"),
extension: "ts"
}
const db = initDbConnection({
dbConnectionUri: cfg.DB_CONNECTION_URI,
dbRootCert: cfg.DB_ROOT_CERT
});
try {
await db.migrate.latest();
await db.seed.run();
await db.migrate.latest({
directory: path.join(__dirname, "../src/db/migrations"),
extension: "ts",
tableName: "infisical_migrations"
});
await db.seed.run({
directory: path.join(__dirname, "../src/db/seeds"),
extension: "ts"
});
const smtp = mockSmtpServer();
const queue = mockQueue();
const keyStore = mockKeyStore();
@@ -74,7 +72,14 @@ export default {
// @ts-expect-error type
delete globalThis.jwtToken;
// called after all tests with this env have been run
await db.migrate.rollback({}, true);
await db.migrate.rollback(
{
directory: path.join(__dirname, "../src/db/migrations"),
extension: "ts",
tableName: "infisical_migrations"
},
true
);
await db.destroy();
}
};

backend/package-lock.json (generated): 1571 lines changed

File diff suppressed because it is too large

View File

@@ -72,6 +72,7 @@
"dependencies": {
"@aws-sdk/client-iam": "^3.525.0",
"@aws-sdk/client-secrets-manager": "^3.504.0",
"@aws-sdk/client-sts": "^3.600.0",
"@casl/ability": "^6.5.0",
"@fastify/cookie": "^9.3.1",
"@fastify/cors": "^8.5.0",
@@ -133,6 +134,7 @@
"posthog-node": "^3.6.2",
"probot": "^13.0.0",
"smee-client": "^2.0.0",
"tedious": "^18.2.1",
"tweetnacl": "^1.0.3",
"tweetnacl-util": "^0.15.1",
"uuid": "^9.0.1",

View File

@@ -2,13 +2,14 @@
import { execSync } from "child_process";
import path from "path";
import promptSync from "prompt-sync";
import slugify from "@sindresorhus/slugify"
const prompt = promptSync({ sigint: true });
const migrationName = prompt("Enter name for migration: ");
// Remove spaces from migration name and replace with hyphens
const formattedMigrationName = migrationName.replace(/\s+/g, "-");
const formattedMigrationName = slugify(migrationName);
execSync(
`npx knex migrate:make --knexfile ${path.join(__dirname, "../src/db/knexfile.ts")} -x ts ${formattedMigrationName}`,
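
For reference, a quick sketch of what the slugify swap changes; the input string below is made up:

import slugify from "@sindresorhus/slugify";

// Old behavior: only whitespace became hyphens, so casing and symbols leaked into filenames.
const oldName = "Add Webhook (v2)".replace(/\s+/g, "-"); // "Add-Webhook-(v2)"

// New behavior: slugify also lowercases and strips unsafe characters.
const newName = slugify("Add Webhook (v2)"); // "add-webhook-v2"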

View File

@@ -1,4 +1,4 @@
import { Knex } from "knex";
import { Knex as KnexOriginal } from "knex";
import {
TableName,
@@ -280,318 +280,371 @@ import {
TWebhooksUpdate
} from "@app/db/schemas";
declare module "knex" {
namespace Knex {
interface QueryInterface {
primaryNode(): KnexOriginal;
replicaNode(): KnexOriginal;
}
}
}
declare module "knex/types/tables" {
interface Tables {
[TableName.Users]: Knex.CompositeTableType<TUsers, TUsersInsert, TUsersUpdate>;
[TableName.Groups]: Knex.CompositeTableType<TGroups, TGroupsInsert, TGroupsUpdate>;
[TableName.CertificateAuthority]: Knex.CompositeTableType<
[TableName.Users]: KnexOriginal.CompositeTableType<TUsers, TUsersInsert, TUsersUpdate>;
[TableName.Groups]: KnexOriginal.CompositeTableType<TGroups, TGroupsInsert, TGroupsUpdate>;
[TableName.CertificateAuthority]: KnexOriginal.CompositeTableType<
TCertificateAuthorities,
TCertificateAuthoritiesInsert,
TCertificateAuthoritiesUpdate
>;
[TableName.CertificateAuthorityCert]: Knex.CompositeTableType<
[TableName.CertificateAuthorityCert]: KnexOriginal.CompositeTableType<
TCertificateAuthorityCerts,
TCertificateAuthorityCertsInsert,
TCertificateAuthorityCertsUpdate
>;
[TableName.CertificateAuthoritySecret]: Knex.CompositeTableType<
[TableName.CertificateAuthoritySecret]: KnexOriginal.CompositeTableType<
TCertificateAuthoritySecret,
TCertificateAuthoritySecretInsert,
TCertificateAuthoritySecretUpdate
>;
[TableName.CertificateAuthorityCrl]: Knex.CompositeTableType<
[TableName.CertificateAuthorityCrl]: KnexOriginal.CompositeTableType<
TCertificateAuthorityCrl,
TCertificateAuthorityCrlInsert,
TCertificateAuthorityCrlUpdate
>;
[TableName.Certificate]: Knex.CompositeTableType<TCertificates, TCertificatesInsert, TCertificatesUpdate>;
[TableName.CertificateBody]: Knex.CompositeTableType<
[TableName.Certificate]: KnexOriginal.CompositeTableType<TCertificates, TCertificatesInsert, TCertificatesUpdate>;
[TableName.CertificateBody]: KnexOriginal.CompositeTableType<
TCertificateBodies,
TCertificateBodiesInsert,
TCertificateBodiesUpdate
>;
[TableName.CertificateSecret]: Knex.CompositeTableType<
[TableName.CertificateSecret]: KnexOriginal.CompositeTableType<
TCertificateSecrets,
TCertificateSecretsInsert,
TCertificateSecretsUpdate
>;
[TableName.UserGroupMembership]: Knex.CompositeTableType<
[TableName.UserGroupMembership]: KnexOriginal.CompositeTableType<
TUserGroupMembership,
TUserGroupMembershipInsert,
TUserGroupMembershipUpdate
>;
[TableName.GroupProjectMembership]: Knex.CompositeTableType<
[TableName.GroupProjectMembership]: KnexOriginal.CompositeTableType<
TGroupProjectMemberships,
TGroupProjectMembershipsInsert,
TGroupProjectMembershipsUpdate
>;
[TableName.GroupProjectMembershipRole]: Knex.CompositeTableType<
[TableName.GroupProjectMembershipRole]: KnexOriginal.CompositeTableType<
TGroupProjectMembershipRoles,
TGroupProjectMembershipRolesInsert,
TGroupProjectMembershipRolesUpdate
>;
[TableName.UserAliases]: Knex.CompositeTableType<TUserAliases, TUserAliasesInsert, TUserAliasesUpdate>;
[TableName.UserEncryptionKey]: Knex.CompositeTableType<
[TableName.UserAliases]: KnexOriginal.CompositeTableType<TUserAliases, TUserAliasesInsert, TUserAliasesUpdate>;
[TableName.UserEncryptionKey]: KnexOriginal.CompositeTableType<
TUserEncryptionKeys,
TUserEncryptionKeysInsert,
TUserEncryptionKeysUpdate
>;
[TableName.AuthTokens]: Knex.CompositeTableType<TAuthTokens, TAuthTokensInsert, TAuthTokensUpdate>;
[TableName.AuthTokenSession]: Knex.CompositeTableType<
[TableName.AuthTokens]: KnexOriginal.CompositeTableType<TAuthTokens, TAuthTokensInsert, TAuthTokensUpdate>;
[TableName.AuthTokenSession]: KnexOriginal.CompositeTableType<
TAuthTokenSessions,
TAuthTokenSessionsInsert,
TAuthTokenSessionsUpdate
>;
[TableName.BackupPrivateKey]: Knex.CompositeTableType<
[TableName.BackupPrivateKey]: KnexOriginal.CompositeTableType<
TBackupPrivateKey,
TBackupPrivateKeyInsert,
TBackupPrivateKeyUpdate
>;
[TableName.Organization]: Knex.CompositeTableType<TOrganizations, TOrganizationsInsert, TOrganizationsUpdate>;
[TableName.OrgMembership]: Knex.CompositeTableType<TOrgMemberships, TOrgMembershipsInsert, TOrgMembershipsUpdate>;
[TableName.OrgRoles]: Knex.CompositeTableType<TOrgRoles, TOrgRolesInsert, TOrgRolesUpdate>;
[TableName.IncidentContact]: Knex.CompositeTableType<
[TableName.Organization]: KnexOriginal.CompositeTableType<
TOrganizations,
TOrganizationsInsert,
TOrganizationsUpdate
>;
[TableName.OrgMembership]: KnexOriginal.CompositeTableType<
TOrgMemberships,
TOrgMembershipsInsert,
TOrgMembershipsUpdate
>;
[TableName.OrgRoles]: KnexOriginal.CompositeTableType<TOrgRoles, TOrgRolesInsert, TOrgRolesUpdate>;
[TableName.IncidentContact]: KnexOriginal.CompositeTableType<
TIncidentContacts,
TIncidentContactsInsert,
TIncidentContactsUpdate
>;
[TableName.UserAction]: Knex.CompositeTableType<TUserActions, TUserActionsInsert, TUserActionsUpdate>;
[TableName.SuperAdmin]: Knex.CompositeTableType<TSuperAdmin, TSuperAdminInsert, TSuperAdminUpdate>;
[TableName.ApiKey]: Knex.CompositeTableType<TApiKeys, TApiKeysInsert, TApiKeysUpdate>;
[TableName.Project]: Knex.CompositeTableType<TProjects, TProjectsInsert, TProjectsUpdate>;
[TableName.ProjectMembership]: Knex.CompositeTableType<
[TableName.UserAction]: KnexOriginal.CompositeTableType<TUserActions, TUserActionsInsert, TUserActionsUpdate>;
[TableName.SuperAdmin]: KnexOriginal.CompositeTableType<TSuperAdmin, TSuperAdminInsert, TSuperAdminUpdate>;
[TableName.ApiKey]: KnexOriginal.CompositeTableType<TApiKeys, TApiKeysInsert, TApiKeysUpdate>;
[TableName.Project]: KnexOriginal.CompositeTableType<TProjects, TProjectsInsert, TProjectsUpdate>;
[TableName.ProjectMembership]: KnexOriginal.CompositeTableType<
TProjectMemberships,
TProjectMembershipsInsert,
TProjectMembershipsUpdate
>;
[TableName.Environment]: Knex.CompositeTableType<
[TableName.Environment]: KnexOriginal.CompositeTableType<
TProjectEnvironments,
TProjectEnvironmentsInsert,
TProjectEnvironmentsUpdate
>;
[TableName.ProjectBot]: Knex.CompositeTableType<TProjectBots, TProjectBotsInsert, TProjectBotsUpdate>;
[TableName.ProjectUserMembershipRole]: Knex.CompositeTableType<
[TableName.ProjectBot]: KnexOriginal.CompositeTableType<TProjectBots, TProjectBotsInsert, TProjectBotsUpdate>;
[TableName.ProjectUserMembershipRole]: KnexOriginal.CompositeTableType<
TProjectUserMembershipRoles,
TProjectUserMembershipRolesInsert,
TProjectUserMembershipRolesUpdate
>;
[TableName.ProjectRoles]: Knex.CompositeTableType<TProjectRoles, TProjectRolesInsert, TProjectRolesUpdate>;
[TableName.ProjectUserAdditionalPrivilege]: Knex.CompositeTableType<
[TableName.ProjectRoles]: KnexOriginal.CompositeTableType<TProjectRoles, TProjectRolesInsert, TProjectRolesUpdate>;
[TableName.ProjectUserAdditionalPrivilege]: KnexOriginal.CompositeTableType<
TProjectUserAdditionalPrivilege,
TProjectUserAdditionalPrivilegeInsert,
TProjectUserAdditionalPrivilegeUpdate
>;
[TableName.ProjectKeys]: Knex.CompositeTableType<TProjectKeys, TProjectKeysInsert, TProjectKeysUpdate>;
[TableName.Secret]: Knex.CompositeTableType<TSecrets, TSecretsInsert, TSecretsUpdate>;
[TableName.SecretReference]: Knex.CompositeTableType<
[TableName.ProjectKeys]: KnexOriginal.CompositeTableType<TProjectKeys, TProjectKeysInsert, TProjectKeysUpdate>;
[TableName.Secret]: KnexOriginal.CompositeTableType<TSecrets, TSecretsInsert, TSecretsUpdate>;
[TableName.SecretReference]: KnexOriginal.CompositeTableType<
TSecretReferences,
TSecretReferencesInsert,
TSecretReferencesUpdate
>;
[TableName.SecretBlindIndex]: Knex.CompositeTableType<
[TableName.SecretBlindIndex]: KnexOriginal.CompositeTableType<
TSecretBlindIndexes,
TSecretBlindIndexesInsert,
TSecretBlindIndexesUpdate
>;
[TableName.SecretVersion]: Knex.CompositeTableType<TSecretVersions, TSecretVersionsInsert, TSecretVersionsUpdate>;
[TableName.SecretFolder]: Knex.CompositeTableType<TSecretFolders, TSecretFoldersInsert, TSecretFoldersUpdate>;
[TableName.SecretFolderVersion]: Knex.CompositeTableType<
[TableName.SecretVersion]: KnexOriginal.CompositeTableType<
TSecretVersions,
TSecretVersionsInsert,
TSecretVersionsUpdate
>;
[TableName.SecretFolder]: KnexOriginal.CompositeTableType<
TSecretFolders,
TSecretFoldersInsert,
TSecretFoldersUpdate
>;
[TableName.SecretFolderVersion]: KnexOriginal.CompositeTableType<
TSecretFolderVersions,
TSecretFolderVersionsInsert,
TSecretFolderVersionsUpdate
>;
[TableName.SecretSharing]: Knex.CompositeTableType<TSecretSharing, TSecretSharingInsert, TSecretSharingUpdate>;
[TableName.RateLimit]: Knex.CompositeTableType<TRateLimit, TRateLimitInsert, TRateLimitUpdate>;
[TableName.SecretTag]: Knex.CompositeTableType<TSecretTags, TSecretTagsInsert, TSecretTagsUpdate>;
[TableName.SecretImport]: Knex.CompositeTableType<TSecretImports, TSecretImportsInsert, TSecretImportsUpdate>;
[TableName.Integration]: Knex.CompositeTableType<TIntegrations, TIntegrationsInsert, TIntegrationsUpdate>;
[TableName.Webhook]: Knex.CompositeTableType<TWebhooks, TWebhooksInsert, TWebhooksUpdate>;
[TableName.ServiceToken]: Knex.CompositeTableType<TServiceTokens, TServiceTokensInsert, TServiceTokensUpdate>;
[TableName.IntegrationAuth]: Knex.CompositeTableType<
[TableName.SecretSharing]: KnexOriginal.CompositeTableType<
TSecretSharing,
TSecretSharingInsert,
TSecretSharingUpdate
>;
[TableName.RateLimit]: KnexOriginal.CompositeTableType<TRateLimit, TRateLimitInsert, TRateLimitUpdate>;
[TableName.SecretTag]: KnexOriginal.CompositeTableType<TSecretTags, TSecretTagsInsert, TSecretTagsUpdate>;
[TableName.SecretImport]: KnexOriginal.CompositeTableType<
TSecretImports,
TSecretImportsInsert,
TSecretImportsUpdate
>;
[TableName.Integration]: KnexOriginal.CompositeTableType<TIntegrations, TIntegrationsInsert, TIntegrationsUpdate>;
[TableName.Webhook]: KnexOriginal.CompositeTableType<TWebhooks, TWebhooksInsert, TWebhooksUpdate>;
[TableName.ServiceToken]: KnexOriginal.CompositeTableType<
TServiceTokens,
TServiceTokensInsert,
TServiceTokensUpdate
>;
[TableName.IntegrationAuth]: KnexOriginal.CompositeTableType<
TIntegrationAuths,
TIntegrationAuthsInsert,
TIntegrationAuthsUpdate
>;
[TableName.Identity]: Knex.CompositeTableType<TIdentities, TIdentitiesInsert, TIdentitiesUpdate>;
[TableName.IdentityUniversalAuth]: Knex.CompositeTableType<
[TableName.Identity]: KnexOriginal.CompositeTableType<TIdentities, TIdentitiesInsert, TIdentitiesUpdate>;
[TableName.IdentityUniversalAuth]: KnexOriginal.CompositeTableType<
TIdentityUniversalAuths,
TIdentityUniversalAuthsInsert,
TIdentityUniversalAuthsUpdate
>;
[TableName.IdentityKubernetesAuth]: Knex.CompositeTableType<
[TableName.IdentityKubernetesAuth]: KnexOriginal.CompositeTableType<
TIdentityKubernetesAuths,
TIdentityKubernetesAuthsInsert,
TIdentityKubernetesAuthsUpdate
>;
[TableName.IdentityGcpAuth]: Knex.CompositeTableType<
[TableName.IdentityGcpAuth]: KnexOriginal.CompositeTableType<
TIdentityGcpAuths,
TIdentityGcpAuthsInsert,
TIdentityGcpAuthsUpdate
>;
[TableName.IdentityAwsAuth]: Knex.CompositeTableType<
[TableName.IdentityAwsAuth]: KnexOriginal.CompositeTableType<
TIdentityAwsAuths,
TIdentityAwsAuthsInsert,
TIdentityAwsAuthsUpdate
>;
[TableName.IdentityAzureAuth]: Knex.CompositeTableType<
[TableName.IdentityAzureAuth]: KnexOriginal.CompositeTableType<
TIdentityAzureAuths,
TIdentityAzureAuthsInsert,
TIdentityAzureAuthsUpdate
>;
[TableName.IdentityUaClientSecret]: Knex.CompositeTableType<
[TableName.IdentityUaClientSecret]: KnexOriginal.CompositeTableType<
TIdentityUaClientSecrets,
TIdentityUaClientSecretsInsert,
TIdentityUaClientSecretsUpdate
>;
[TableName.IdentityAccessToken]: Knex.CompositeTableType<
[TableName.IdentityAccessToken]: KnexOriginal.CompositeTableType<
TIdentityAccessTokens,
TIdentityAccessTokensInsert,
TIdentityAccessTokensUpdate
>;
[TableName.IdentityOrgMembership]: Knex.CompositeTableType<
[TableName.IdentityOrgMembership]: KnexOriginal.CompositeTableType<
TIdentityOrgMemberships,
TIdentityOrgMembershipsInsert,
TIdentityOrgMembershipsUpdate
>;
[TableName.IdentityProjectMembership]: Knex.CompositeTableType<
[TableName.IdentityProjectMembership]: KnexOriginal.CompositeTableType<
TIdentityProjectMemberships,
TIdentityProjectMembershipsInsert,
TIdentityProjectMembershipsUpdate
>;
[TableName.IdentityProjectMembershipRole]: Knex.CompositeTableType<
[TableName.IdentityProjectMembershipRole]: KnexOriginal.CompositeTableType<
TIdentityProjectMembershipRole,
TIdentityProjectMembershipRoleInsert,
TIdentityProjectMembershipRoleUpdate
>;
[TableName.IdentityProjectAdditionalPrivilege]: Knex.CompositeTableType<
[TableName.IdentityProjectAdditionalPrivilege]: KnexOriginal.CompositeTableType<
TIdentityProjectAdditionalPrivilege,
TIdentityProjectAdditionalPrivilegeInsert,
TIdentityProjectAdditionalPrivilegeUpdate
>;
[TableName.AccessApprovalPolicy]: Knex.CompositeTableType<
[TableName.AccessApprovalPolicy]: KnexOriginal.CompositeTableType<
TAccessApprovalPolicies,
TAccessApprovalPoliciesInsert,
TAccessApprovalPoliciesUpdate
>;
[TableName.AccessApprovalPolicyApprover]: Knex.CompositeTableType<
[TableName.AccessApprovalPolicyApprover]: KnexOriginal.CompositeTableType<
TAccessApprovalPoliciesApprovers,
TAccessApprovalPoliciesApproversInsert,
TAccessApprovalPoliciesApproversUpdate
>;
[TableName.AccessApprovalRequest]: Knex.CompositeTableType<
[TableName.AccessApprovalRequest]: KnexOriginal.CompositeTableType<
TAccessApprovalRequests,
TAccessApprovalRequestsInsert,
TAccessApprovalRequestsUpdate
>;
[TableName.AccessApprovalRequestReviewer]: Knex.CompositeTableType<
[TableName.AccessApprovalRequestReviewer]: KnexOriginal.CompositeTableType<
TAccessApprovalRequestsReviewers,
TAccessApprovalRequestsReviewersInsert,
TAccessApprovalRequestsReviewersUpdate
>;
[TableName.ScimToken]: Knex.CompositeTableType<TScimTokens, TScimTokensInsert, TScimTokensUpdate>;
[TableName.SecretApprovalPolicy]: Knex.CompositeTableType<
[TableName.ScimToken]: KnexOriginal.CompositeTableType<TScimTokens, TScimTokensInsert, TScimTokensUpdate>;
[TableName.SecretApprovalPolicy]: KnexOriginal.CompositeTableType<
TSecretApprovalPolicies,
TSecretApprovalPoliciesInsert,
TSecretApprovalPoliciesUpdate
>;
[TableName.SecretApprovalPolicyApprover]: Knex.CompositeTableType<
[TableName.SecretApprovalPolicyApprover]: KnexOriginal.CompositeTableType<
TSecretApprovalPoliciesApprovers,
TSecretApprovalPoliciesApproversInsert,
TSecretApprovalPoliciesApproversUpdate
>;
[TableName.SecretApprovalRequest]: Knex.CompositeTableType<
[TableName.SecretApprovalRequest]: KnexOriginal.CompositeTableType<
TSecretApprovalRequests,
TSecretApprovalRequestsInsert,
TSecretApprovalRequestsUpdate
>;
[TableName.SecretApprovalRequestReviewer]: Knex.CompositeTableType<
[TableName.SecretApprovalRequestReviewer]: KnexOriginal.CompositeTableType<
TSecretApprovalRequestsReviewers,
TSecretApprovalRequestsReviewersInsert,
TSecretApprovalRequestsReviewersUpdate
>;
[TableName.SecretApprovalRequestSecret]: Knex.CompositeTableType<
[TableName.SecretApprovalRequestSecret]: KnexOriginal.CompositeTableType<
TSecretApprovalRequestsSecrets,
TSecretApprovalRequestsSecretsInsert,
TSecretApprovalRequestsSecretsUpdate
>;
[TableName.SecretApprovalRequestSecretTag]: Knex.CompositeTableType<
[TableName.SecretApprovalRequestSecretTag]: KnexOriginal.CompositeTableType<
TSecretApprovalRequestSecretTags,
TSecretApprovalRequestSecretTagsInsert,
TSecretApprovalRequestSecretTagsUpdate
>;
[TableName.SecretRotation]: Knex.CompositeTableType<
[TableName.SecretRotation]: KnexOriginal.CompositeTableType<
TSecretRotations,
TSecretRotationsInsert,
TSecretRotationsUpdate
>;
[TableName.SecretRotationOutput]: Knex.CompositeTableType<
[TableName.SecretRotationOutput]: KnexOriginal.CompositeTableType<
TSecretRotationOutputs,
TSecretRotationOutputsInsert,
TSecretRotationOutputsUpdate
>;
[TableName.Snapshot]: Knex.CompositeTableType<TSecretSnapshots, TSecretSnapshotsInsert, TSecretSnapshotsUpdate>;
[TableName.SnapshotSecret]: Knex.CompositeTableType<
[TableName.Snapshot]: KnexOriginal.CompositeTableType<
TSecretSnapshots,
TSecretSnapshotsInsert,
TSecretSnapshotsUpdate
>;
[TableName.SnapshotSecret]: KnexOriginal.CompositeTableType<
TSecretSnapshotSecrets,
TSecretSnapshotSecretsInsert,
TSecretSnapshotSecretsUpdate
>;
[TableName.SnapshotFolder]: Knex.CompositeTableType<
[TableName.SnapshotFolder]: KnexOriginal.CompositeTableType<
TSecretSnapshotFolders,
TSecretSnapshotFoldersInsert,
TSecretSnapshotFoldersUpdate
>;
[TableName.DynamicSecret]: Knex.CompositeTableType<TDynamicSecrets, TDynamicSecretsInsert, TDynamicSecretsUpdate>;
[TableName.DynamicSecretLease]: Knex.CompositeTableType<
[TableName.DynamicSecret]: KnexOriginal.CompositeTableType<
TDynamicSecrets,
TDynamicSecretsInsert,
TDynamicSecretsUpdate
>;
[TableName.DynamicSecretLease]: KnexOriginal.CompositeTableType<
TDynamicSecretLeases,
TDynamicSecretLeasesInsert,
TDynamicSecretLeasesUpdate
>;
[TableName.SamlConfig]: Knex.CompositeTableType<TSamlConfigs, TSamlConfigsInsert, TSamlConfigsUpdate>;
[TableName.OidcConfig]: Knex.CompositeTableType<TOidcConfigs, TOidcConfigsInsert, TOidcConfigsUpdate>;
[TableName.LdapConfig]: Knex.CompositeTableType<TLdapConfigs, TLdapConfigsInsert, TLdapConfigsUpdate>;
[TableName.LdapGroupMap]: Knex.CompositeTableType<TLdapGroupMaps, TLdapGroupMapsInsert, TLdapGroupMapsUpdate>;
[TableName.OrgBot]: Knex.CompositeTableType<TOrgBots, TOrgBotsInsert, TOrgBotsUpdate>;
[TableName.AuditLog]: Knex.CompositeTableType<TAuditLogs, TAuditLogsInsert, TAuditLogsUpdate>;
[TableName.AuditLogStream]: Knex.CompositeTableType<
[TableName.SamlConfig]: KnexOriginal.CompositeTableType<TSamlConfigs, TSamlConfigsInsert, TSamlConfigsUpdate>;
[TableName.OidcConfig]: KnexOriginal.CompositeTableType<TOidcConfigs, TOidcConfigsInsert, TOidcConfigsUpdate>;
[TableName.LdapConfig]: KnexOriginal.CompositeTableType<TLdapConfigs, TLdapConfigsInsert, TLdapConfigsUpdate>;
[TableName.LdapGroupMap]: KnexOriginal.CompositeTableType<
TLdapGroupMaps,
TLdapGroupMapsInsert,
TLdapGroupMapsUpdate
>;
[TableName.OrgBot]: KnexOriginal.CompositeTableType<TOrgBots, TOrgBotsInsert, TOrgBotsUpdate>;
[TableName.AuditLog]: KnexOriginal.CompositeTableType<TAuditLogs, TAuditLogsInsert, TAuditLogsUpdate>;
[TableName.AuditLogStream]: KnexOriginal.CompositeTableType<
TAuditLogStreams,
TAuditLogStreamsInsert,
TAuditLogStreamsUpdate
>;
[TableName.GitAppInstallSession]: Knex.CompositeTableType<
[TableName.GitAppInstallSession]: KnexOriginal.CompositeTableType<
TGitAppInstallSessions,
TGitAppInstallSessionsInsert,
TGitAppInstallSessionsUpdate
>;
[TableName.GitAppOrg]: Knex.CompositeTableType<TGitAppOrg, TGitAppOrgInsert, TGitAppOrgUpdate>;
[TableName.SecretScanningGitRisk]: Knex.CompositeTableType<
[TableName.GitAppOrg]: KnexOriginal.CompositeTableType<TGitAppOrg, TGitAppOrgInsert, TGitAppOrgUpdate>;
[TableName.SecretScanningGitRisk]: KnexOriginal.CompositeTableType<
TSecretScanningGitRisks,
TSecretScanningGitRisksInsert,
TSecretScanningGitRisksUpdate
>;
[TableName.TrustedIps]: Knex.CompositeTableType<TTrustedIps, TTrustedIpsInsert, TTrustedIpsUpdate>;
[TableName.TrustedIps]: KnexOriginal.CompositeTableType<TTrustedIps, TTrustedIpsInsert, TTrustedIpsUpdate>;
// Junction tables
[TableName.JnSecretTag]: Knex.CompositeTableType<
[TableName.JnSecretTag]: KnexOriginal.CompositeTableType<
TSecretTagJunction,
TSecretTagJunctionInsert,
TSecretTagJunctionUpdate
>;
[TableName.SecretVersionTag]: Knex.CompositeTableType<
[TableName.SecretVersionTag]: KnexOriginal.CompositeTableType<
TSecretVersionTagJunction,
TSecretVersionTagJunctionInsert,
TSecretVersionTagJunctionUpdate
>;
// KMS service
[TableName.KmsServerRootConfig]: Knex.CompositeTableType<
[TableName.KmsServerRootConfig]: KnexOriginal.CompositeTableType<
TKmsRootConfig,
TKmsRootConfigInsert,
TKmsRootConfigUpdate
>;
[TableName.KmsKey]: Knex.CompositeTableType<TKmsKeys, TKmsKeysInsert, TKmsKeysUpdate>;
[TableName.KmsKeyVersion]: Knex.CompositeTableType<TKmsKeyVersions, TKmsKeyVersionsInsert, TKmsKeyVersionsUpdate>;
[TableName.KmsKey]: KnexOriginal.CompositeTableType<TKmsKeys, TKmsKeysInsert, TKmsKeysUpdate>;
[TableName.KmsKeyVersion]: KnexOriginal.CompositeTableType<
TKmsKeyVersions,
TKmsKeyVersionsInsert,
TKmsKeyVersionsUpdate
>;
}
}
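
A note on the wholesale `Knex` to `KnexOriginal` rename above: the file now augments knex's own `Knex` namespace, so the import is aliased to keep the original type reachable inside the augmentation without colliding with the namespace name. The same declaration-merging pattern, reduced to one method (a sketch, not the full file):

import { Knex as KnexOriginal } from "knex";

declare module "knex" {
  namespace Knex {
    interface QueryInterface {
      // Typed here; the runtime implementation is supplied later via
      // knex.QueryBuilder.extend("replicaNode", ...) during connection setup.
      replicaNode(): KnexOriginal;
    }
  }
}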

View File

@@ -1,8 +1,38 @@
import knex from "knex";
import knex, { Knex } from "knex";
export type TDbClient = ReturnType<typeof initDbConnection>;
export const initDbConnection = ({ dbConnectionUri, dbRootCert }: { dbConnectionUri: string; dbRootCert?: string }) => {
const db = knex({
export const initDbConnection = ({
dbConnectionUri,
dbRootCert,
readReplicas = []
}: {
dbConnectionUri: string;
dbRootCert?: string;
readReplicas?: {
dbConnectionUri: string;
dbRootCert?: string;
}[];
}) => {
// akhilmhdh: the default Knex type is knex.Knex<any, any[]>, but when assigned with knex({<config>}) the value becomes knex.Knex<any, unknown[]>.
// This was causing issues in files like `snapshot-dal` `findRecursivelySnapshots`, so the any and unknown[] are written out explicitly here.
// eslint-disable-next-line
let db: Knex<any, unknown[]>;
// eslint-disable-next-line
let readReplicaDbs: Knex<any, unknown[]>[];
// @ts-expect-error the querybuilder type is expected but our intention is to return a knex instance
knex.QueryBuilder.extend("primaryNode", () => {
return db;
});
// @ts-expect-error the querybuilder type is expected but our intention is to return a knex instance
knex.QueryBuilder.extend("replicaNode", () => {
if (!readReplicaDbs.length) return db;
const selectedReplica = readReplicaDbs[Math.floor(Math.random() * readReplicaDbs.length)];
return selectedReplica;
});
db = knex({
client: "pg",
connection: {
connectionString: dbConnectionUri,
@@ -22,5 +52,21 @@ export const initDbConnection = ({ dbConnectionUri, dbRootCert }: { dbConnection
}
});
readReplicaDbs = readReplicas.map((el) => {
const replicaDbCertificate = el.dbRootCert || dbRootCert;
return knex({
client: "pg",
connection: {
connectionString: el.dbConnectionUri,
ssl: replicaDbCertificate
? {
rejectUnauthorized: true,
ca: Buffer.from(replicaDbCertificate, "base64").toString("ascii")
}
: false
}
});
});
return db;
};
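
A minimal usage sketch of the resulting client; the environment variable names and the `id` argument are assumptions, not part of this PR:

import { initDbConnection } from "@app/db";
import { TableName } from "@app/db/schemas";

async function example(id: string) {
  const db = initDbConnection({
    dbConnectionUri: process.env.DB_CONNECTION_URI as string,
    readReplicas: [{ dbConnectionUri: process.env.DB_READ_REPLICA_URI as string }]
  });

  // Reads can be routed to a randomly selected replica...
  const users = await db.replicaNode()(TableName.Users).select("*");

  // ...while writes keep flowing through the primary connection.
  await db(TableName.Users).where({ id }).update({ isGhost: false });

  return users;
}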

View File

@@ -0,0 +1,35 @@
import { Knex } from "knex";
import { TableName } from "../schemas";
export async function up(knex: Knex): Promise<void> {
const hasAwsAssumeRoleCipherText = await knex.schema.hasColumn(
TableName.IntegrationAuth,
"awsAssumeIamRoleArnCipherText"
);
const hasAwsAssumeRoleIV = await knex.schema.hasColumn(TableName.IntegrationAuth, "awsAssumeIamRoleArnIV");
const hasAwsAssumeRoleTag = await knex.schema.hasColumn(TableName.IntegrationAuth, "awsAssumeIamRoleArnTag");
if (await knex.schema.hasTable(TableName.IntegrationAuth)) {
await knex.schema.alterTable(TableName.IntegrationAuth, (t) => {
if (!hasAwsAssumeRoleCipherText) t.text("awsAssumeIamRoleArnCipherText");
if (!hasAwsAssumeRoleIV) t.text("awsAssumeIamRoleArnIV");
if (!hasAwsAssumeRoleTag) t.text("awsAssumeIamRoleArnTag");
});
}
}
export async function down(knex: Knex): Promise<void> {
const hasAwsAssumeRoleCipherText = await knex.schema.hasColumn(
TableName.IntegrationAuth,
"awsAssumeIamRoleArnCipherText"
);
const hasAwsAssumeRoleIV = await knex.schema.hasColumn(TableName.IntegrationAuth, "awsAssumeIamRoleArnIV");
const hasAwsAssumeRoleTag = await knex.schema.hasColumn(TableName.IntegrationAuth, "awsAssumeIamRoleArnTag");
if (await knex.schema.hasTable(TableName.IntegrationAuth)) {
await knex.schema.alterTable(TableName.IntegrationAuth, (t) => {
if (hasAwsAssumeRoleCipherText) t.dropColumn("awsAssumeIamRoleArnCipherText");
if (hasAwsAssumeRoleIV) t.dropColumn("awsAssumeIamRoleArnIV");
if (hasAwsAssumeRoleTag) t.dropColumn("awsAssumeIamRoleArnTag");
});
}
}
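
The migrations in this PR all follow the same guard pattern: probe with `hasColumn` before adding or dropping, so a partially applied or re-run migration stays idempotent. A generic sketch with hypothetical table and column names:

import { Knex } from "knex";

// Add a text column only if it is not already present.
export async function addTextColumnIfMissing(knex: Knex, table: string, column: string): Promise<void> {
  if (!(await knex.schema.hasColumn(table, column))) {
    await knex.schema.alterTable(table, (t) => {
      t.text(column);
    });
  }
}

// Drop the column only if it exists, mirroring the down() direction.
export async function dropColumnIfPresent(knex: Knex, table: string, column: string): Promise<void> {
  if (await knex.schema.hasColumn(table, column)) {
    await knex.schema.alterTable(table, (t) => {
      t.dropColumn(column);
    });
  }
}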

View File

@@ -0,0 +1,19 @@
import { Knex } from "knex";
import { TableName } from "../schemas";
export async function up(knex: Knex): Promise<void> {
if (!(await knex.schema.hasColumn(TableName.SuperAdmin, "enabledLoginMethods"))) {
await knex.schema.alterTable(TableName.SuperAdmin, (tb) => {
tb.specificType("enabledLoginMethods", "text[]");
});
}
}
export async function down(knex: Knex): Promise<void> {
if (await knex.schema.hasColumn(TableName.SuperAdmin, "enabledLoginMethods")) {
await knex.schema.alterTable(TableName.SuperAdmin, (t) => {
t.dropColumn("enabledLoginMethods");
});
}
}

View File

@@ -0,0 +1,19 @@
import { Knex } from "knex";
import { TableName } from "../schemas";
export async function up(knex: Knex): Promise<void> {
if (!(await knex.schema.hasColumn(TableName.OrgMembership, "projectFavorites"))) {
await knex.schema.alterTable(TableName.OrgMembership, (tb) => {
tb.specificType("projectFavorites", "text[]");
});
}
}
export async function down(knex: Knex): Promise<void> {
if (await knex.schema.hasColumn(TableName.OrgMembership, "projectFavorites")) {
await knex.schema.alterTable(TableName.OrgMembership, (t) => {
t.dropColumn("projectFavorites");
});
}
}

View File

@@ -0,0 +1,53 @@
import { Knex } from "knex";
import { WebhookType } from "@app/services/webhook/webhook-types";
import { TableName } from "../schemas";
export async function up(knex: Knex): Promise<void> {
const hasUrlCipherText = await knex.schema.hasColumn(TableName.Webhook, "urlCipherText");
const hasUrlIV = await knex.schema.hasColumn(TableName.Webhook, "urlIV");
const hasUrlTag = await knex.schema.hasColumn(TableName.Webhook, "urlTag");
const hasType = await knex.schema.hasColumn(TableName.Webhook, "type");
if (await knex.schema.hasTable(TableName.Webhook)) {
await knex.schema.alterTable(TableName.Webhook, (tb) => {
if (!hasUrlCipherText) {
tb.text("urlCipherText");
}
if (!hasUrlIV) {
tb.string("urlIV");
}
if (!hasUrlTag) {
tb.string("urlTag");
}
if (!hasType) {
tb.string("type").defaultTo(WebhookType.GENERAL);
}
});
}
}
export async function down(knex: Knex): Promise<void> {
const hasUrlCipherText = await knex.schema.hasColumn(TableName.Webhook, "urlCipherText");
const hasUrlIV = await knex.schema.hasColumn(TableName.Webhook, "urlIV");
const hasUrlTag = await knex.schema.hasColumn(TableName.Webhook, "urlTag");
const hasType = await knex.schema.hasColumn(TableName.Webhook, "type");
if (await knex.schema.hasTable(TableName.Webhook)) {
await knex.schema.alterTable(TableName.Webhook, (t) => {
if (hasUrlCipherText) {
t.dropColumn("urlCipherText");
}
if (hasUrlIV) {
t.dropColumn("urlIV");
}
if (hasUrlTag) {
t.dropColumn("urlTag");
}
if (hasType) {
t.dropColumn("type");
}
});
}
}

View File

@@ -29,7 +29,10 @@ export const IntegrationAuthsSchema = z.object({
keyEncoding: z.string(),
projectId: z.string(),
createdAt: z.date(),
updatedAt: z.date()
updatedAt: z.date(),
awsAssumeIamRoleArnCipherText: z.string().nullable().optional(),
awsAssumeIamRoleArnIV: z.string().nullable().optional(),
awsAssumeIamRoleArnTag: z.string().nullable().optional()
});
export type TIntegrationAuths = z.infer<typeof IntegrationAuthsSchema>;

View File

@@ -16,7 +16,8 @@ export const OrgMembershipsSchema = z.object({
updatedAt: z.date(),
userId: z.string().uuid().nullable().optional(),
orgId: z.string().uuid(),
roleId: z.string().uuid().nullable().optional()
roleId: z.string().uuid().nullable().optional(),
projectFavorites: z.string().array().nullable().optional()
});
export type TOrgMemberships = z.infer<typeof OrgMembershipsSchema>;

View File

@@ -18,7 +18,8 @@ export const SuperAdminSchema = z.object({
trustSamlEmails: z.boolean().default(false).nullable().optional(),
trustLdapEmails: z.boolean().default(false).nullable().optional(),
trustOidcEmails: z.boolean().default(false).nullable().optional(),
defaultAuthOrgId: z.string().uuid().nullable().optional()
defaultAuthOrgId: z.string().uuid().nullable().optional(),
enabledLoginMethods: z.string().array().nullable().optional()
});
export type TSuperAdmin = z.infer<typeof SuperAdminSchema>;

View File

@@ -21,7 +21,11 @@ export const WebhooksSchema = z.object({
keyEncoding: z.string().nullable().optional(),
createdAt: z.date(),
updatedAt: z.date(),
envId: z.string().uuid()
envId: z.string().uuid(),
urlCipherText: z.string().nullable().optional(),
urlIV: z.string().nullable().optional(),
urlTag: z.string().nullable().optional(),
type: z.string().default("general").nullable().optional()
});
export type TWebhooks = z.infer<typeof WebhooksSchema>;

View File

@@ -32,7 +32,7 @@ export const accessApprovalPolicyDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const doc = await accessApprovalPolicyFindQuery(tx || db, {
const doc = await accessApprovalPolicyFindQuery(tx || db.replicaNode(), {
[`${TableName.AccessApprovalPolicy}.id` as "id"]: id
});
const formatedDoc = mergeOneToManyRelation(
@@ -54,7 +54,7 @@ export const accessApprovalPolicyDALFactory = (db: TDbClient) => {
const find = async (filter: TFindFilter<TAccessApprovalPolicies & { projectId: string }>, tx?: Knex) => {
try {
const docs = await accessApprovalPolicyFindQuery(tx || db, filter);
const docs = await accessApprovalPolicyFindQuery(tx || db.replicaNode(), filter);
const formatedDoc = mergeOneToManyRelation(
docs,
"id",

View File

@@ -14,7 +14,8 @@ export const accessApprovalRequestDALFactory = (db: TDbClient) => {
const findRequestsWithPrivilegeByPolicyIds = async (policyIds: string[]) => {
try {
const docs = await db(TableName.AccessApprovalRequest)
const docs = await db
.replicaNode()(TableName.AccessApprovalRequest)
.whereIn(`${TableName.AccessApprovalRequest}.policyId`, policyIds)
.leftJoin(
@@ -170,7 +171,7 @@ export const accessApprovalRequestDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const sql = findQuery({ [`${TableName.AccessApprovalRequest}.id` as "id"]: id }, tx || db);
const sql = findQuery({ [`${TableName.AccessApprovalRequest}.id` as "id"]: id }, tx || db.replicaNode());
const docs = await sql;
const formatedDoc = sqlNestRelationships({
data: docs,
@@ -207,7 +208,8 @@ export const accessApprovalRequestDALFactory = (db: TDbClient) => {
const getCount = async ({ projectId }: { projectId: string }) => {
try {
const accessRequests = await db(TableName.AccessApprovalRequest)
const accessRequests = await db
.replicaNode()(TableName.AccessApprovalRequest)
.leftJoin(
TableName.AccessApprovalPolicy,
`${TableName.AccessApprovalRequest}.policyId`,

View File

@@ -4,6 +4,7 @@ import { TDbClient } from "@app/db";
import { TableName } from "@app/db/schemas";
import { DatabaseError } from "@app/lib/errors";
import { ormify, stripUndefinedInWhere } from "@app/lib/knex";
import { logger } from "@app/lib/logger";
export type TAuditLogDALFactory = ReturnType<typeof auditLogDALFactory>;
@@ -27,7 +28,7 @@ export const auditLogDALFactory = (db: TDbClient) => {
tx?: Knex
) => {
try {
const sqlQuery = (tx || db)(TableName.AuditLog)
const sqlQuery = (tx || db.replicaNode())(TableName.AuditLog)
.where(
stripUndefinedInWhere({
projectId,
@@ -55,13 +56,34 @@ export const auditLogDALFactory = (db: TDbClient) => {
// delete all audit log that have expired
const pruneAuditLog = async (tx?: Knex) => {
try {
const today = new Date();
const docs = await (tx || db)(TableName.AuditLog).where("expiresAt", "<", today).del();
return docs;
} catch (error) {
throw new DatabaseError({ error, name: "PruneAuditLog" });
}
const AUDIT_LOG_PRUNE_BATCH_SIZE = 10000;
const MAX_RETRY_ON_FAILURE = 3;
const today = new Date();
let deletedAuditLogIds: { id: string }[] = [];
let numberOfRetryOnFailure = 0;
do {
try {
const findExpiredLogSubQuery = (tx || db)(TableName.AuditLog)
.where("expiresAt", "<", today)
.select("id")
.limit(AUDIT_LOG_PRUNE_BATCH_SIZE);
// eslint-disable-next-line no-await-in-loop
deletedAuditLogIds = await (tx || db)(TableName.AuditLog)
.whereIn("id", findExpiredLogSubQuery)
.del()
.returning("id");
numberOfRetryOnFailure = 0; // reset
// eslint-disable-next-line no-await-in-loop
await new Promise((resolve) => {
setTimeout(resolve, 100); // time to breathe for db
});
} catch (error) {
numberOfRetryOnFailure += 1;
logger.error(error, "Failed to delete audit log on pruning");
}
} while (deletedAuditLogIds.length > 0 && numberOfRetryOnFailure < MAX_RETRY_ON_FAILURE);
};
return { ...auditLogOrm, pruneAuditLog, find };
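
The batching technique above, distilled into one standalone function: delete the ids picked by a LIMITed subquery, pause between batches, and stop when a batch comes back empty or retries run out. Table and column names below are placeholders:

import { Knex } from "knex";

async function deleteExpiredInBatches(db: Knex, batchSize = 10_000, maxRetries = 3): Promise<void> {
  const now = new Date();
  let deleted: { id: string }[] = [];
  let retries = 0;
  do {
    try {
      // SELECT id FROM audit_logs WHERE "expiresAt" < now LIMIT batchSize
      const expiredIds = db("audit_logs").where("expiresAt", "<", now).select("id").limit(batchSize);
      // DELETE ... WHERE id IN (<subquery>) RETURNING id
      deleted = await db("audit_logs").whereIn("id", expiredIds).del().returning("id");
      retries = 0; // reset after a successful batch
      await new Promise((resolve) => {
        setTimeout(resolve, 100); // breather for the db, as in the commit message
      });
    } catch (error) {
      retries += 1;
    }
  } while (deleted.length > 0 && retries < maxRetries);
}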

View File

@@ -771,7 +771,6 @@ interface CreateWebhookEvent {
webhookId: string;
environment: string;
secretPath: string;
webhookUrl: string;
isDisabled: boolean;
};
}
@@ -782,7 +781,6 @@ interface UpdateWebhookStatusEvent {
webhookId: string;
environment: string;
secretPath: string;
webhookUrl: string;
isDisabled: boolean;
};
}
@@ -793,7 +791,6 @@ interface DeleteWebhookEvent {
webhookId: string;
environment: string;
secretPath: string;
webhookUrl: string;
isDisabled: boolean;
};
}

View File

@@ -12,7 +12,10 @@ export const dynamicSecretLeaseDALFactory = (db: TDbClient) => {
const countLeasesForDynamicSecret = async (dynamicSecretId: string, tx?: Knex) => {
try {
const doc = await (tx || db)(TableName.DynamicSecretLease).count("*").where({ dynamicSecretId }).first();
const doc = await (tx || db.replicaNode())(TableName.DynamicSecretLease)
.count("*")
.where({ dynamicSecretId })
.first();
return parseInt((doc?.count as string) || "0", 10);
} catch (error) {
throw new DatabaseError({ error, name: "DynamicSecretCountLeases" });
@@ -21,7 +24,7 @@ export const dynamicSecretLeaseDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const doc = await (tx || db)(TableName.DynamicSecretLease)
const doc = await (tx || db.replicaNode())(TableName.DynamicSecretLease)
.where({ [`${TableName.DynamicSecretLease}.id` as "id"]: id })
.first()
.join(

View File

@@ -3,7 +3,8 @@ import { z } from "zod";
export enum SqlProviders {
Postgres = "postgres",
MySQL = "mysql2",
Oracle = "oracledb"
Oracle = "oracledb",
MsSQL = "mssql"
}
export const DynamicSecretSqlDBSchema = z.object({

View File

@@ -12,7 +12,7 @@ export const groupDALFactory = (db: TDbClient) => {
const findGroups = async (filter: TFindFilter<TGroups>, { offset, limit, sort, tx }: TFindOpt<TGroups> = {}) => {
try {
const query = (tx || db)(TableName.Groups)
const query = (tx || db.replicaNode())(TableName.Groups)
// eslint-disable-next-line
.where(buildFindFilter(filter))
.select(selectAllTableCols(TableName.Groups));
@@ -32,7 +32,7 @@ export const groupDALFactory = (db: TDbClient) => {
const findByOrgId = async (orgId: string, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.Groups)
const docs = await (tx || db.replicaNode())(TableName.Groups)
.where(`${TableName.Groups}.orgId`, orgId)
.leftJoin(TableName.OrgRoles, `${TableName.Groups}.roleId`, `${TableName.OrgRoles}.id`)
.select(selectAllTableCols(TableName.Groups))
@@ -74,11 +74,12 @@ export const groupDALFactory = (db: TDbClient) => {
username?: string;
}) => {
try {
let query = db(TableName.OrgMembership)
let query = db
.replicaNode()(TableName.OrgMembership)
.where(`${TableName.OrgMembership}.orgId`, orgId)
.join(TableName.Users, `${TableName.OrgMembership}.userId`, `${TableName.Users}.id`)
.leftJoin(TableName.UserGroupMembership, function () {
this.on(`${TableName.UserGroupMembership}.userId`, "=", `${TableName.Users}.id`).andOn(
.leftJoin(TableName.UserGroupMembership, (bd) => {
bd.on(`${TableName.UserGroupMembership}.userId`, "=", `${TableName.Users}.id`).andOn(
`${TableName.UserGroupMembership}.groupId`,
"=",
db.raw("?", [groupId])

View File

@@ -18,7 +18,7 @@ export const userGroupMembershipDALFactory = (db: TDbClient) => {
*/
const filterProjectsByUserMembership = async (userId: string, groupId: string, projectIds: string[], tx?: Knex) => {
try {
const userProjectMemberships: string[] = await (tx || db)(TableName.ProjectMembership)
const userProjectMemberships: string[] = await (tx || db.replicaNode())(TableName.ProjectMembership)
.where(`${TableName.ProjectMembership}.userId`, userId)
.whereIn(`${TableName.ProjectMembership}.projectId`, projectIds)
.pluck(`${TableName.ProjectMembership}.projectId`);
@@ -43,7 +43,8 @@ export const userGroupMembershipDALFactory = (db: TDbClient) => {
// special query
const findUserGroupMembershipsInProject = async (usernames: string[], projectId: string) => {
try {
const usernameDocs: string[] = await db(TableName.UserGroupMembership)
const usernameDocs: string[] = await db
.replicaNode()(TableName.UserGroupMembership)
.join(
TableName.GroupProjectMembership,
`${TableName.UserGroupMembership}.groupId`,
@@ -73,7 +74,7 @@ export const userGroupMembershipDALFactory = (db: TDbClient) => {
try {
// get list of groups in the project with id [projectId]
// that are not the group with id [groupId]
const groups: string[] = await (tx || db)(TableName.GroupProjectMembership)
const groups: string[] = await (tx || db.replicaNode())(TableName.GroupProjectMembership)
.where(`${TableName.GroupProjectMembership}.projectId`, projectId)
.whereNot(`${TableName.GroupProjectMembership}.groupId`, groupId)
.pluck(`${TableName.GroupProjectMembership}.groupId`);
@@ -83,8 +84,8 @@ export const userGroupMembershipDALFactory = (db: TDbClient) => {
.where(`${TableName.UserGroupMembership}.groupId`, groupId)
.where(`${TableName.UserGroupMembership}.isPending`, false)
.join(TableName.Users, `${TableName.UserGroupMembership}.userId`, `${TableName.Users}.id`)
.leftJoin(TableName.ProjectMembership, function () {
this.on(`${TableName.Users}.id`, "=", `${TableName.ProjectMembership}.userId`).andOn(
.leftJoin(TableName.ProjectMembership, (bd) => {
bd.on(`${TableName.Users}.id`, "=", `${TableName.ProjectMembership}.userId`).andOn(
`${TableName.ProjectMembership}.projectId`,
"=",
db.raw("?", [projectId])
@@ -107,9 +108,9 @@ export const userGroupMembershipDALFactory = (db: TDbClient) => {
db.ref("publicKey").withSchema(TableName.UserEncryptionKey)
)
.where({ isGhost: false }) // MAKE SURE USER IS NOT A GHOST USER
.whereNotIn(`${TableName.UserGroupMembership}.userId`, function () {
.whereNotIn(`${TableName.UserGroupMembership}.userId`, (bd) => {
// eslint-disable-next-line @typescript-eslint/no-floating-promises
this.select(`${TableName.UserGroupMembership}.userId`)
bd.select(`${TableName.UserGroupMembership}.userId`)
.from(TableName.UserGroupMembership)
.whereIn(`${TableName.UserGroupMembership}.groupId`, groups);
});

View File

@@ -34,6 +34,7 @@ import { TProjectBotDALFactory } from "@app/services/project-bot/project-bot-dal
import { TProjectKeyDALFactory } from "@app/services/project-key/project-key-dal";
import { SmtpTemplates, TSmtpService } from "@app/services/smtp/smtp-service";
import { getServerCfg } from "@app/services/super-admin/super-admin-service";
import { LoginMethod } from "@app/services/super-admin/super-admin-types";
import { TUserDALFactory } from "@app/services/user/user-dal";
import { normalizeUsername } from "@app/services/user/user-fns";
import { TUserAliasDALFactory } from "@app/services/user-alias/user-alias-dal";
@@ -53,7 +54,7 @@ import {
TTestLdapConnectionDTO,
TUpdateLdapCfgDTO
} from "./ldap-config-types";
import { testLDAPConfig } from "./ldap-fns";
import { searchGroups, testLDAPConfig } from "./ldap-fns";
import { TLdapGroupMapDALFactory } from "./ldap-group-map-dal";
type TLdapConfigServiceFactoryDep = {
@@ -286,7 +287,7 @@ export const ldapConfigServiceFactory = ({
return ldapConfig;
};
const getLdapCfg = async (filter: { orgId: string; isActive?: boolean }) => {
const getLdapCfg = async (filter: { orgId: string; isActive?: boolean; id?: string }) => {
const ldapConfig = await ldapConfigDAL.findOne(filter);
if (!ldapConfig) throw new BadRequestError({ message: "Failed to find organization LDAP data" });
@@ -417,6 +418,13 @@ export const ldapConfigServiceFactory = ({
}: TLdapLoginDTO) => {
const appCfg = getConfig();
const serverCfg = await getServerCfg();
if (serverCfg.enabledLoginMethods && !serverCfg.enabledLoginMethods.includes(LoginMethod.LDAP)) {
throw new BadRequestError({
message: "Login with LDAP is disabled by administrator."
});
}
let userAlias = await userAliasDAL.findOne({
externalId,
orgId,
@@ -456,6 +464,21 @@ export const ldapConfigServiceFactory = ({
}
});
} else {
const plan = await licenseService.getPlan(orgId);
if (plan?.memberLimit && plan.membersUsed >= plan.memberLimit) {
// member limit reached: the number of members used meets or exceeds the number allowed by the plan
throw new BadRequestError({
message: "Failed to create new member via LDAP due to member limit reached. Upgrade plan to add more members."
});
}
if (plan?.identityLimit && plan.identitiesUsed >= plan.identityLimit) {
// identity limit reached: the number of identities used meets or exceeds the number allowed by the plan
throw new BadRequestError({
message: "Failed to create new member via LDAP due to member limit reached. Upgrade plan to add more members."
});
}
userAlias = await userDAL.transaction(async (tx) => {
let newUser: TUsers | undefined;
if (serverCfg.trustSamlEmails) {
@@ -701,11 +724,25 @@ export const ldapConfigServiceFactory = ({
message: "Failed to create LDAP group map due to plan restriction. Upgrade plan to create LDAP group map."
});
const ldapConfig = await ldapConfigDAL.findOne({
id: ldapConfigId,
orgId
const ldapConfig = await getLdapCfg({
orgId,
id: ldapConfigId
});
if (!ldapConfig) throw new BadRequestError({ message: "Failed to find organization LDAP data" });
if (!ldapConfig.groupSearchBase) {
throw new BadRequestError({
message: "Configure a group search base in your LDAP configuration in order to proceed."
});
}
const groupSearchFilter = `(cn=${ldapGroupCN})`;
const groups = await searchGroups(ldapConfig, groupSearchFilter, ldapConfig.groupSearchBase);
if (!groups.some((g) => g.cn === ldapGroupCN)) {
throw new BadRequestError({
message: "Failed to find LDAP Group CN"
});
}
const group = await groupDAL.findOne({ slug: groupSlug, orgId });
if (!group) throw new BadRequestError({ message: "Failed to find group" });

View File

@@ -10,7 +10,8 @@ export const ldapGroupMapDALFactory = (db: TDbClient) => {
const findLdapGroupMapsByLdapConfigId = async (ldapConfigId: string) => {
try {
const docs = await db(TableName.LdapGroupMap)
const docs = await db
.replicaNode()(TableName.LdapGroupMap)
.where(`${TableName.LdapGroupMap}.ldapConfigId`, ldapConfigId)
.join(TableName.Groups, `${TableName.LdapGroupMap}.groupId`, `${TableName.Groups}.id`)
.select(selectAllTableCols(TableName.LdapGroupMap))

View File

@@ -7,6 +7,8 @@ export const getDefaultOnPremFeatures = () => {
workspacesUsed: 0,
memberLimit: null,
membersUsed: 0,
identityLimit: null,
identitiesUsed: 0,
environmentLimit: null,
environmentsUsed: 0,
secretVersioning: true,

View File

@@ -15,6 +15,8 @@ export const getDefaultOnPremFeatures = (): TFeatureSet => ({
membersUsed: 0,
environmentLimit: null,
environmentsUsed: 0,
identityLimit: null,
identitiesUsed: 0,
dynamicSecret: false,
secretVersioning: true,
pitRecovery: false,

View File

@@ -9,7 +9,7 @@ export type TLicenseDALFactory = ReturnType<typeof licenseDALFactory>;
export const licenseDALFactory = (db: TDbClient) => {
const countOfOrgMembers = async (orgId: string | null, tx?: Knex) => {
try {
const doc = await (tx || db)(TableName.OrgMembership)
const doc = await (tx || db.replicaNode())(TableName.OrgMembership)
.where({ status: OrgMembershipStatus.Accepted })
.andWhere((bd) => {
if (orgId) {
@@ -19,11 +19,44 @@ export const licenseDALFactory = (db: TDbClient) => {
.join(TableName.Users, `${TableName.OrgMembership}.userId`, `${TableName.Users}.id`)
.where(`${TableName.Users}.isGhost`, false)
.count();
return doc?.[0].count;
return Number(doc?.[0].count);
} catch (error) {
throw new DatabaseError({ error, name: "Count of Org Members" });
}
};
return { countOfOrgMembers };
const countOrgUsersAndIdentities = async (orgId: string | null, tx?: Knex) => {
try {
// count org users
const userDoc = await (tx || db)(TableName.OrgMembership)
.where({ status: OrgMembershipStatus.Accepted })
.andWhere((bd) => {
if (orgId) {
void bd.where({ orgId });
}
})
.join(TableName.Users, `${TableName.OrgMembership}.userId`, `${TableName.Users}.id`)
.where(`${TableName.Users}.isGhost`, false)
.count();
const userCount = Number(userDoc?.[0].count);
// count org identities
const identityDoc = await (tx || db)(TableName.IdentityOrgMembership)
.where((bd) => {
if (orgId) {
void bd.where({ orgId });
}
})
.count();
const identityCount = Number(identityDoc?.[0].count);
return userCount + identityCount;
} catch (error) {
throw new DatabaseError({ error, name: "Count of Org Users + Identities" });
}
};
return { countOfOrgMembers, countOrgUsersAndIdentities };
};
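A note on the Number(...) wrapping above: with the pg driver, COUNT(*) comes back as a string because Postgres bigint can exceed JavaScript's safe-integer range. A self-contained sketch of the coercion and the combined seat count:

type CountRow = { count: string | number };

// Coerce knex's .count() result (a string under pg) into a number
const toCount = (rows?: CountRow[]): number => Number(rows?.[0]?.count ?? 0);

const users: CountRow[] = [{ count: "12" }]; // org members
const identities: CountRow[] = [{ count: "5" }]; // machine identities
const seatsUsed = toCount(users) + toCount(identities); // 17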

View File

@@ -5,6 +5,7 @@
// TODO(akhilmhdh): work out the API structure with Tony and fill it in here
import { ForbiddenError } from "@casl/ability";
import { Knex } from "knex";
import { TKeyStoreFactory } from "@app/keystore/keystore";
import { getConfig } from "@app/lib/config/env";
@@ -155,6 +156,7 @@ export const licenseServiceFactory = ({
LICENSE_SERVER_CLOUD_PLAN_TTL,
JSON.stringify(currentPlan)
);
return currentPlan;
}
} catch (error) {
@@ -199,21 +201,27 @@ export const licenseServiceFactory = ({
await licenseServerCloudApi.request.delete(`/api/license-server/v1/customers/${customerId}`);
};
const updateSubscriptionOrgMemberCount = async (orgId: string) => {
const updateSubscriptionOrgMemberCount = async (orgId: string, tx?: Knex) => {
if (instanceType === InstanceType.Cloud) {
const org = await orgDAL.findOrgById(orgId);
if (!org) throw new BadRequestError({ message: "Org not found" });
const count = await licenseDAL.countOfOrgMembers(orgId);
const quantity = await licenseDAL.countOfOrgMembers(orgId, tx);
const quantityIdentities = await licenseDAL.countOrgUsersAndIdentities(orgId, tx);
if (org?.customerId) {
await licenseServerCloudApi.request.patch(`/api/license-server/v1/customers/${org.customerId}/cloud-plan`, {
quantity: count
quantity,
quantityIdentities
});
}
await keyStore.deleteItem(FEATURE_CACHE_KEY(orgId));
} else if (instanceType === InstanceType.EnterpriseOnPrem) {
const usedSeats = await licenseDAL.countOfOrgMembers(null);
await licenseServerOnPremApi.request.patch(`/api/license/v1/license`, { usedSeats });
const usedSeats = await licenseDAL.countOfOrgMembers(null, tx);
const usedIdentitySeats = await licenseDAL.countOrgUsersAndIdentities(null, tx);
await licenseServerOnPremApi.request.patch(`/api/license/v1/license`, {
usedSeats,
usedIdentitySeats
});
}
await refreshPlan(orgId);
};
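Threading tx into the count queries matters for read-your-writes: a row inserted inside an open transaction is only visible to queries issued through that same transaction. A sketch of the pitfall this avoids, under assumed table names (not from this PR):

import { Knex } from "knex";

async function inviteAndSyncSeats(db: Knex, orgId: string, userId: string) {
  await db.transaction(async (tx) => {
    await tx("org_memberships").insert({ orgId, userId, status: "accepted" });
    // Counting through tx sees the uncommitted insert above; counting
    // through db would still report the old seat count.
    const [{ count }] = await tx("org_memberships").where({ orgId }).count();
    console.log("seats after invite:", Number(count));
  });
}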

View File

@@ -31,6 +31,8 @@ export type TFeatureSet = {
dynamicSecret: false;
memberLimit: null;
membersUsed: 0;
identityLimit: null;
identitiesUsed: 0;
environmentLimit: null;
environmentsUsed: 0;
secretVersioning: true;

View File

@@ -26,6 +26,7 @@ import { TOrgDALFactory } from "@app/services/org/org-dal";
import { TOrgMembershipDALFactory } from "@app/services/org-membership/org-membership-dal";
import { SmtpTemplates, TSmtpService } from "@app/services/smtp/smtp-service";
import { getServerCfg } from "@app/services/super-admin/super-admin-service";
import { LoginMethod } from "@app/services/super-admin/super-admin-types";
import { TUserDALFactory } from "@app/services/user/user-dal";
import { normalizeUsername } from "@app/services/user/user-fns";
import { TUserAliasDALFactory } from "@app/services/user-alias/user-alias-dal";
@@ -157,6 +158,13 @@ export const oidcConfigServiceFactory = ({
const oidcLogin = async ({ externalId, email, firstName, lastName, orgId, callbackPort }: TOidcLoginDTO) => {
const serverCfg = await getServerCfg();
if (serverCfg.enabledLoginMethods && !serverCfg.enabledLoginMethods.includes(LoginMethod.OIDC)) {
throw new BadRequestError({
message: "Login with OIDC is disabled by administrator."
});
}
const appCfg = getConfig();
const userAlias = await userAliasDAL.findOne({
externalId,

View File

@@ -10,7 +10,8 @@ export type TPermissionDALFactory = ReturnType<typeof permissionDALFactory>;
export const permissionDALFactory = (db: TDbClient) => {
const getOrgPermission = async (userId: string, orgId: string) => {
try {
const membership = await db(TableName.OrgMembership)
const membership = await db
.replicaNode()(TableName.OrgMembership)
.leftJoin(TableName.OrgRoles, `${TableName.OrgMembership}.roleId`, `${TableName.OrgRoles}.id`)
.join(TableName.Organization, `${TableName.OrgMembership}.orgId`, `${TableName.Organization}.id`)
.where("userId", userId)
@@ -28,7 +29,8 @@ export const permissionDALFactory = (db: TDbClient) => {
const getOrgIdentityPermission = async (identityId: string, orgId: string) => {
try {
const membership = await db(TableName.IdentityOrgMembership)
const membership = await db
.replicaNode()(TableName.IdentityOrgMembership)
.leftJoin(TableName.OrgRoles, `${TableName.IdentityOrgMembership}.roleId`, `${TableName.OrgRoles}.id`)
.join(TableName.Organization, `${TableName.IdentityOrgMembership}.orgId`, `${TableName.Organization}.id`)
.where("identityId", identityId)
@@ -45,11 +47,13 @@ export const permissionDALFactory = (db: TDbClient) => {
const getProjectPermission = async (userId: string, projectId: string) => {
try {
const groups: string[] = await db(TableName.GroupProjectMembership)
const groups: string[] = await db
.replicaNode()(TableName.GroupProjectMembership)
.where(`${TableName.GroupProjectMembership}.projectId`, projectId)
.pluck(`${TableName.GroupProjectMembership}.groupId`);
const groupDocs = await db(TableName.UserGroupMembership)
const groupDocs = await db
.replicaNode()(TableName.UserGroupMembership)
.where(`${TableName.UserGroupMembership}.userId`, userId)
.whereIn(`${TableName.UserGroupMembership}.groupId`, groups)
.join(
@@ -231,7 +235,8 @@ export const permissionDALFactory = (db: TDbClient) => {
const getProjectIdentityPermission = async (identityId: string, projectId: string) => {
try {
const docs = await db(TableName.IdentityProjectMembership)
const docs = await db
.replicaNode()(TableName.IdentityProjectMembership)
.join(
TableName.IdentityProjectMembershipRole,
`${TableName.IdentityProjectMembershipRole}.projectMembershipId`,

View File

@@ -10,7 +10,8 @@ export const samlConfigDALFactory = (db: TDbClient) => {
const findEnforceableSamlCfg = async (orgId: string) => {
try {
const samlCfg = await db(TableName.SamlConfig)
const samlCfg = await db
.replicaNode()(TableName.SamlConfig)
.where({
orgId,
isActive: true

View File

@@ -28,6 +28,7 @@ import { TOrgDALFactory } from "@app/services/org/org-dal";
import { TOrgMembershipDALFactory } from "@app/services/org-membership/org-membership-dal";
import { SmtpTemplates, TSmtpService } from "@app/services/smtp/smtp-service";
import { getServerCfg } from "@app/services/super-admin/super-admin-service";
import { LoginMethod } from "@app/services/super-admin/super-admin-types";
import { TUserDALFactory } from "@app/services/user/user-dal";
import { normalizeUsername } from "@app/services/user/user-fns";
import { TUserAliasDALFactory } from "@app/services/user-alias/user-alias-dal";
@@ -335,6 +336,13 @@ export const samlConfigServiceFactory = ({
}: TSamlLoginDTO) => {
const appCfg = getConfig();
const serverCfg = await getServerCfg();
if (serverCfg.enabledLoginMethods && !serverCfg.enabledLoginMethods.includes(LoginMethod.SAML)) {
throw new BadRequestError({
message: "Login with SAML is disabled by administrator."
});
}
const userAlias = await userAliasDAL.findOne({
externalId,
orgId,
@@ -380,6 +388,21 @@ export const samlConfigServiceFactory = ({
return foundUser;
});
} else {
const plan = await licenseService.getPlan(orgId);
if (plan?.memberLimit && plan.membersUsed >= plan.memberLimit) {
// the plan imposes a member limit and the org has already reached it
throw new BadRequestError({
message: "Failed to create new member via SAML due to member limit reached. Upgrade plan to add more members."
});
}
if (plan?.identityLimit && plan.identitiesUsed >= plan.identityLimit) {
// the plan imposes an identity limit and the org has already reached it
throw new BadRequestError({
message: "Failed to create new member via SAML due to member limit reached. Upgrade plan to add more members."
});
}
user = await userDAL.transaction(async (tx) => {
let newUser: TUsers | undefined;
if (serverCfg.trustSamlEmails) {

View File

@@ -30,7 +30,7 @@ export const secretApprovalPolicyDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const doc = await sapFindQuery(tx || db, {
const doc = await sapFindQuery(tx || db.replicaNode(), {
[`${TableName.SecretApprovalPolicy}.id` as "id"]: id
});
const formatedDoc = mergeOneToManyRelation(
@@ -52,7 +52,7 @@ export const secretApprovalPolicyDALFactory = (db: TDbClient) => {
const find = async (filter: TFindFilter<TSecretApprovalPolicies & { projectId: string }>, tx?: Knex) => {
try {
const docs = await sapFindQuery(tx || db, filter);
const docs = await sapFindQuery(tx || db.replicaNode(), filter);
const formatedDoc = mergeOneToManyRelation(
docs,
"id",

View File

@@ -62,7 +62,7 @@ export const secretApprovalRequestDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const sql = findQuery({ [`${TableName.SecretApprovalRequest}.id` as "id"]: id }, tx || db);
const sql = findQuery({ [`${TableName.SecretApprovalRequest}.id` as "id"]: id }, tx || db.replicaNode());
const docs = await sql;
const formatedDoc = sqlNestRelationships({
data: docs,
@@ -102,7 +102,7 @@ export const secretApprovalRequestDALFactory = (db: TDbClient) => {
const docs = await (tx || db)
.with(
"temp",
(tx || db)(TableName.SecretApprovalRequest)
(tx || db.replicaNode())(TableName.SecretApprovalRequest)
.join(TableName.SecretFolder, `${TableName.SecretApprovalRequest}.folderId`, `${TableName.SecretFolder}.id`)
.join(TableName.Environment, `${TableName.SecretFolder}.envId`, `${TableName.Environment}.id`)
.join(
@@ -148,7 +148,7 @@ export const secretApprovalRequestDALFactory = (db: TDbClient) => {
try {
// akhilmhdh: if you ever need a one-to-many relationship combined with pagination,
// this is the place to look.
const query = (tx || db)(TableName.SecretApprovalRequest)
const query = (tx || db.replicaNode())(TableName.SecretApprovalRequest)
.join(TableName.SecretFolder, `${TableName.SecretApprovalRequest}.folderId`, `${TableName.SecretFolder}.id`)
.join(TableName.Environment, `${TableName.SecretFolder}.envId`, `${TableName.Environment}.id`)
.join(

View File

@@ -47,7 +47,7 @@ export const secretApprovalRequestSecretDALFactory = (db: TDbClient) => {
const findByRequestId = async (requestId: string, tx?: Knex) => {
try {
const doc = await (tx || db)({
const doc = await (tx || db.replicaNode())({
secVerTag: TableName.SecretTag
})
.from(TableName.SecretApprovalRequestSecret)

View File

@@ -41,7 +41,7 @@ export const secretRotationDALFactory = (db: TDbClient) => {
const find = async (filter: TFindFilter<TSecretRotations & { projectId: string }>, tx?: Knex) => {
try {
const data = await findQuery(filter, tx || db);
const data = await findQuery(filter, tx || db.replicaNode());
return sqlNestRelationships({
data,
key: "id",
@@ -93,7 +93,7 @@ export const secretRotationDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const doc = await (tx || db)(TableName.SecretRotation)
const doc = await (tx || db.replicaNode())(TableName.SecretRotation)
.join(TableName.Environment, `${TableName.SecretRotation}.envId`, `${TableName.Environment}.id`)
.where({ [`${TableName.SecretRotation}.id` as "id"]: id })
.select(selectAllTableCols(TableName.SecretRotation))

View File

@@ -331,7 +331,7 @@ export const secretRotationQueueFactory = ({
logger.info("Finished rotating: rotation id: ", rotationId);
} catch (error) {
logger.error(error);
logger.error(error, "Failed to execute secret rotation");
if (error instanceof DisableRotationErrors) {
if (job.id) {
await queue.stopRepeatableJobByJobId(QueueName.SecretRotation, job.id);

View File

@@ -133,7 +133,7 @@ export const secretRotationServiceFactory = ({
creds: []
};
const encData = infisicalSymmetricEncypt(JSON.stringify(unencryptedData));
const secretRotation = secretRotationDAL.transaction(async (tx) => {
const secretRotation = await secretRotationDAL.transaction(async (tx) => {
const doc = await secretRotationDAL.create(
{
provider,
@@ -148,13 +148,13 @@ export const secretRotationServiceFactory = ({
},
tx
);
await secretRotationQueue.addToQueue(doc.id, doc.interval);
const outputSecretMapping = await secretRotationDAL.secretOutputInsertMany(
Object.entries(outputs).map(([key, secretId]) => ({ key, secretId, rotationId: doc.id })),
tx
);
return { ...doc, outputs: outputSecretMapping, environment: folder.environment };
});
await secretRotationQueue.addToQueue(secretRotation.id, secretRotation.interval);
return secretRotation;
};
@@ -212,9 +212,9 @@ export const secretRotationServiceFactory = ({
);
const deletedDoc = await secretRotationDAL.transaction(async (tx) => {
const strat = await secretRotationDAL.deleteById(rotationId, tx);
await secretRotationQueue.removeFromQueue(strat.id, strat.interval);
return strat;
});
await secretRotationQueue.removeFromQueue(deletedDoc.id, deletedDoc.interval);
return { ...doc, ...deletedDoc };
};
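Both changes above move queue operations out of the transaction body: the job is enqueued only after the transaction has committed, so a fast worker can never dequeue a rotation whose row is not yet visible to other connections. A sketch of the pattern, with the table name and queue shape assumed:

import { Knex } from "knex";

async function createRotation(db: Knex, queue: { add: (id: string) => Promise<void> }) {
  const rotation = await db.transaction(async (tx) => {
    // assumed table name, for illustration only
    const [doc] = await tx("secret_rotations").insert({ provider: "mssql" }).returning("*");
    return doc; // the transaction commits when this callback resolves
  });
  await queue.add(rotation.id); // safe: the row is committed and visible
  return rotation;
}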

View File

@@ -21,7 +21,7 @@ export const snapshotDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const data = await (tx || db)(TableName.Snapshot)
const data = await (tx || db.replicaNode())(TableName.Snapshot)
.where(`${TableName.Snapshot}.id`, id)
.join(TableName.Environment, `${TableName.Snapshot}.envId`, `${TableName.Environment}.id`)
.select(selectAllTableCols(TableName.Snapshot))
@@ -43,7 +43,7 @@ export const snapshotDALFactory = (db: TDbClient) => {
const countOfSnapshotsByFolderId = async (folderId: string, tx?: Knex) => {
try {
const doc = await (tx || db)(TableName.Snapshot)
const doc = await (tx || db.replicaNode())(TableName.Snapshot)
.where({ folderId })
.groupBy(["folderId"])
.count("folderId")
@@ -56,7 +56,7 @@ export const snapshotDALFactory = (db: TDbClient) => {
const findSecretSnapshotDataById = async (snapshotId: string, tx?: Knex) => {
try {
const data = await (tx || db)(TableName.Snapshot)
const data = await (tx || db.replicaNode())(TableName.Snapshot)
.where(`${TableName.Snapshot}.id`, snapshotId)
.join(TableName.Environment, `${TableName.Snapshot}.envId`, `${TableName.Environment}.id`)
.leftJoin(TableName.SnapshotSecret, `${TableName.Snapshot}.id`, `${TableName.SnapshotSecret}.snapshotId`)
@@ -309,7 +309,7 @@ export const snapshotDALFactory = (db: TDbClient) => {
// when we need to rollback we will pull from these snapshots
const findLatestSnapshotByFolderId = async (folderId: string, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.Snapshot)
const docs = await (tx || db.replicaNode())(TableName.Snapshot)
.where(`${TableName.Snapshot}.folderId`, folderId)
.join<TSecretSnapshots>(
(tx || db)(TableName.Snapshot).groupBy("folderId").max("createdAt").select("folderId").as("latestVersion"),

View File

@@ -692,6 +692,7 @@ export const INTEGRATION_AUTH = {
integration: "The slug of integration for the auth object.",
accessId: "The unique authorized access id of the external integration provider.",
accessToken: "The unique authorized access token of the external integration provider.",
awsAssumeIamRoleArn: "The AWS IAM Role to be assumed by Infisical",
url: "",
namespace: "",
refreshToken: "The refresh token for integration authorization."

View File

@@ -10,6 +10,14 @@ const zodStrBool = z
.optional()
.transform((val) => val === "true");
const databaseReadReplicaSchema = z
.object({
DB_CONNECTION_URI: z.string().describe("Postgres read replica database connection string"),
DB_ROOT_CERT: zpStr(z.string().optional().describe("Postgres read replica database certificate string"))
})
.array()
.optional();
const envSchema = z
.object({
PORT: z.coerce.number().default(4000),
@@ -29,6 +37,7 @@ const envSchema = z
DB_USER: zpStr(z.string().describe("Postgres database username").optional()),
DB_PASSWORD: zpStr(z.string().describe("Postgres database password").optional()),
DB_NAME: zpStr(z.string().describe("Postgres database name").optional()),
DB_READ_REPLICAS: zpStr(z.string().describe("Postgres read replicas").optional()),
BCRYPT_SALT_ROUND: z.number().default(12),
NODE_ENV: z.enum(["development", "test", "production"]).default("production"),
SALT_ROUNDS: z.coerce.number().default(10),
@@ -101,6 +110,9 @@ const envSchema = z
// azure
CLIENT_ID_AZURE: zpStr(z.string().optional()),
CLIENT_SECRET_AZURE: zpStr(z.string().optional()),
// aws
CLIENT_ID_AWS_INTEGRATION: zpStr(z.string().optional()),
CLIENT_SECRET_AWS_INTEGRATION: zpStr(z.string().optional()),
// gitlab
CLIENT_ID_GITLAB: zpStr(z.string().optional()),
CLIENT_SECRET_GITLAB: zpStr(z.string().optional()),
@@ -127,6 +139,9 @@ const envSchema = z
})
.transform((data) => ({
...data,
DB_READ_REPLICAS: data.DB_READ_REPLICAS
? databaseReadReplicaSchema.parse(JSON.parse(data.DB_READ_REPLICAS))
: undefined,
isCloud: Boolean(data.LICENSE_SERVER_KEY),
isSmtpConfigured: Boolean(data.SMTP_HOST),
isRedisConfigured: Boolean(data.REDIS_URL),
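DB_READ_REPLICAS arrives as a JSON string and is re-validated against the replica schema inside the transform. A runnable sketch of the same two-step validation:

import { z } from "zod";

const replicaSchema = z
  .object({ DB_CONNECTION_URI: z.string(), DB_ROOT_CERT: z.string().optional() })
  .array();

const raw = '[{"DB_CONNECTION_URI":"postgres://replica-1:5432/app"}]';
const replicas = replicaSchema.parse(JSON.parse(raw)); // throws on malformed input
console.log(replicas[0].DB_CONNECTION_URI);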

View File

@@ -50,7 +50,7 @@ export const ormify = <DbOps extends object, Tname extends keyof Tables>(db: Kne
}),
findById: async (id: string, tx?: Knex) => {
try {
const result = await (tx || db)(tableName)
const result = await (tx || db.replicaNode())(tableName)
.where({ id } as never)
.first("*");
return result;
@@ -60,7 +60,7 @@ export const ormify = <DbOps extends object, Tname extends keyof Tables>(db: Kne
},
findOne: async (filter: Partial<Tables[Tname]["base"]>, tx?: Knex) => {
try {
const res = await (tx || db)(tableName).where(filter).first("*");
const res = await (tx || db.replicaNode())(tableName).where(filter).first("*");
return res;
} catch (error) {
throw new DatabaseError({ error, name: "Find one" });
@@ -71,7 +71,7 @@ export const ormify = <DbOps extends object, Tname extends keyof Tables>(db: Kne
{ offset, limit, sort, tx }: TFindOpt<Tables[Tname]["base"]> = {}
) => {
try {
const query = (tx || db)(tableName).where(buildFindFilter(filter));
const query = (tx || db.replicaNode())(tableName).where(buildFindFilter(filter));
if (limit) void query.limit(limit);
if (offset) void query.offset(offset);
if (sort) {
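The tx || db.replicaNode() expression repeated across these DALs encodes a single routing rule: reads go to a read replica unless a transaction pins them to the primary. A minimal sketch of that rule, with the replicaNode shape assumed:

import { Knex } from "knex";

type ReplicaAwareDb = Knex & { replicaNode: () => Knex };

// Writes and transactional reads stay on the primary; plain reads may
// be served by a replica.
const readConnection = (db: ReplicaAwareDb, tx?: Knex): Knex => tx || db.replicaNode();

// e.g. const row = await readConnection(db, tx)("users").where({ id }).first();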

View File

@@ -58,7 +58,8 @@ const redactedKeys = [
"decryptedSecret",
"secrets",
"key",
"password"
"password",
"config"
];
export const initLogger = async () => {
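A sketch of how a redacted-keys list like the one above typically feeds pino's censor; the exact wiring inside initLogger is not shown in this diff:

import pino from "pino";

const logger = pino({
  redact: { paths: ["config", "password", "*.config", "*.password"], censor: "[REDACTED]" }
});

logger.info({ config: { dbUri: "postgres://..." } }, "boot"); // config => [REDACTED]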

View File

@@ -15,7 +15,11 @@ const run = async () => {
const appCfg = initEnvConfig(logger);
const db = initDbConnection({
dbConnectionUri: appCfg.DB_CONNECTION_URI,
dbRootCert: appCfg.DB_ROOT_CERT
dbRootCert: appCfg.DB_ROOT_CERT,
readReplicas: appCfg.DB_READ_REPLICAS?.map((el) => ({
dbRootCert: el.DB_ROOT_CERT,
dbConnectionUri: el.DB_CONNECTION_URI
}))
});
const smtp = smtpServiceFactory(formatSmtpConfig());
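A hedged sketch of what a replica-aware initDbConnection might look like (the real implementation is not shown in this diff): one primary plus N read-only replicas selected round-robin; TLS/dbRootCert wiring is omitted:

import knex, { Knex } from "knex";

function initDbConnection(opts: {
  dbConnectionUri: string;
  readReplicas?: { dbConnectionUri: string; dbRootCert?: string }[];
}) {
  const primary = knex({ client: "pg", connection: opts.dbConnectionUri });
  const replicas = (opts.readReplicas ?? []).map((r) =>
    knex({ client: "pg", connection: r.dbConnectionUri })
  );
  let i = 0;
  // Fall back to the primary when no replicas are configured
  const replicaNode = () => (replicas.length ? replicas[(i += 1) % replicas.length] : primary);
  return Object.assign(primary, { replicaNode });
}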

View File

@@ -415,8 +415,10 @@ export const registerRoutes = async (
userAliasDAL,
orgMembershipDAL,
tokenService,
smtpService
smtpService,
projectMembershipDAL
});
const loginService = authLoginServiceFactory({ userDAL, smtpService, tokenService, orgDAL, tokenDAL: authTokenDAL });
const passwordService = authPaswordServiceFactory({
tokenService,
@@ -806,7 +808,8 @@ export const registerRoutes = async (
const identityService = identityServiceFactory({
permissionService,
identityDAL,
identityOrgMembershipDAL
identityOrgMembershipDAL,
licenseService
});
const identityAccessTokenService = identityAccessTokenServiceFactory({
identityAccessTokenDAL,

View File

@@ -8,6 +8,7 @@ import { verifySuperAdmin } from "@app/server/plugins/auth/superAdmin";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AuthMode } from "@app/services/auth/auth-type";
import { getServerCfg } from "@app/services/super-admin/super-admin-service";
import { LoginMethod } from "@app/services/super-admin/super-admin-types";
import { PostHogEventTypes } from "@app/services/telemetry/telemetry-types";
export const registerAdminRouter = async (server: FastifyZodProvider) => {
@@ -54,7 +55,14 @@ export const registerAdminRouter = async (server: FastifyZodProvider) => {
trustSamlEmails: z.boolean().optional(),
trustLdapEmails: z.boolean().optional(),
trustOidcEmails: z.boolean().optional(),
defaultAuthOrgId: z.string().optional().nullable()
defaultAuthOrgId: z.string().optional().nullable(),
enabledLoginMethods: z
.nativeEnum(LoginMethod)
.array()
.optional()
.refine((methods) => !methods || methods.length > 0, {
message: "At least one login method should be enabled."
})
}),
response: {
200: z.object({
@@ -70,7 +78,7 @@ export const registerAdminRouter = async (server: FastifyZodProvider) => {
});
},
handler: async (req) => {
const config = await server.services.superAdmin.updateServerCfg(req.body);
const config = await server.services.superAdmin.updateServerCfg(req.body, req.permission.id);
return { config };
}
});
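One plausible reason for passing req.permission.id into updateServerCfg is a self-lockout guard: the admin should not be able to disable the login method they are currently signed in with. A sketch of such a check, with both inputs assumed (the real service body is not shown here):

function assertNotSelfLockout(enabledLoginMethods: string[], adminAuthMethods: string[]) {
  const stillUsable = adminAuthMethods.some((m) => enabledLoginMethods.includes(m));
  if (!stillUsable) {
    throw new Error("You cannot disable the login method you are currently using.");
  }
}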

View File

@@ -240,6 +240,12 @@ export const registerIntegrationAuthRouter = async (server: FastifyZodProvider)
integration: z.string().trim().describe(INTEGRATION_AUTH.CREATE_ACCESS_TOKEN.integration),
accessId: z.string().trim().optional().describe(INTEGRATION_AUTH.CREATE_ACCESS_TOKEN.accessId),
accessToken: z.string().trim().optional().describe(INTEGRATION_AUTH.CREATE_ACCESS_TOKEN.accessToken),
awsAssumeIamRoleArn: z
.string()
.url()
.trim()
.optional()
.describe(INTEGRATION_AUTH.CREATE_ACCESS_TOKEN.awsAssumeIamRoleArn),
url: z.string().url().trim().optional().describe(INTEGRATION_AUTH.CREATE_ACCESS_TOKEN.url),
namespace: z.string().trim().optional().describe(INTEGRATION_AUTH.CREATE_ACCESS_TOKEN.namespace),
refreshToken: z.string().trim().optional().describe(INTEGRATION_AUTH.CREATE_ACCESS_TOKEN.refreshToken)

View File

@@ -3,7 +3,7 @@ import { z } from "zod";
import { UserEncryptionKeysSchema, UsersSchema } from "@app/db/schemas";
import { getConfig } from "@app/lib/config/env";
import { logger } from "@app/lib/logger";
import { authRateLimit, readLimit } from "@app/server/config/rateLimiter";
import { authRateLimit, readLimit, writeLimit } from "@app/server/config/rateLimiter";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AuthMode } from "@app/services/auth/auth-type";
@@ -90,4 +90,48 @@ export const registerUserRouter = async (server: FastifyZodProvider) => {
return res.redirect(`${appCfg.SITE_URL}/login`);
}
});
server.route({
method: "GET",
url: "/me/project-favorites",
config: {
rateLimit: readLimit
},
schema: {
querystring: z.object({
orgId: z.string().trim()
}),
response: {
200: z.object({
projectFavorites: z.string().array()
})
}
},
onRequest: verifyAuth([AuthMode.JWT]),
handler: async (req) => {
return server.services.user.getUserProjectFavorites(req.permission.id, req.query.orgId);
}
});
server.route({
method: "PUT",
url: "/me/project-favorites",
config: {
rateLimit: writeLimit
},
schema: {
body: z.object({
orgId: z.string().trim(),
projectFavorites: z.string().array()
})
},
onRequest: verifyAuth([AuthMode.JWT]),
handler: async (req) => {
return server.services.user.updateUserProjectFavorites(
req.permission.id,
req.body.orgId,
req.body.projectFavorites
);
}
});
};
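A usage sketch for the two new endpoints (JWT auth assumed; the base URL and ids below are placeholders):

async function demoProjectFavorites(base: string, jwt: string, orgId: string) {
  const headers = { Authorization: `Bearer ${jwt}`, "Content-Type": "application/json" };

  // Read favorites for an org
  const res = await fetch(`${base}/me/project-favorites?orgId=${orgId}`, { headers });
  const { projectFavorites } = await res.json();

  // Replace the favorites list
  await fetch(`${base}/me/project-favorites`, {
    method: "PUT",
    headers,
    body: JSON.stringify({ orgId, projectFavorites })
  });
}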

View File

@@ -6,13 +6,17 @@ import { removeTrailingSlash } from "@app/lib/fn";
import { readLimit, writeLimit } from "@app/server/config/rateLimiter";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AuthMode } from "@app/services/auth/auth-type";
import { WebhookType } from "@app/services/webhook/webhook-types";
export const sanitizedWebhookSchema = WebhooksSchema.omit({
encryptedSecretKey: true,
iv: true,
tag: true,
algorithm: true,
keyEncoding: true
keyEncoding: true,
urlCipherText: true,
urlIV: true,
urlTag: true
}).merge(
z.object({
projectId: z.string(),
@@ -33,13 +37,24 @@ export const registerWebhookRouter = async (server: FastifyZodProvider) => {
},
onRequest: verifyAuth([AuthMode.JWT]),
schema: {
body: z.object({
workspaceId: z.string().trim(),
environment: z.string().trim(),
webhookUrl: z.string().url().trim(),
webhookSecretKey: z.string().trim().optional(),
secretPath: z.string().trim().default("/").transform(removeTrailingSlash)
}),
body: z
.object({
type: z.nativeEnum(WebhookType).default(WebhookType.GENERAL),
workspaceId: z.string().trim(),
environment: z.string().trim(),
webhookUrl: z.string().url().trim(),
webhookSecretKey: z.string().trim().optional(),
secretPath: z.string().trim().default("/").transform(removeTrailingSlash)
})
.superRefine((data, ctx) => {
if (data.type === WebhookType.SLACK && !data.webhookUrl.includes("hooks.slack.com")) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: "Incoming Webhook URL is invalid.",
path: ["webhookUrl"]
});
}
}),
response: {
200: z.object({
message: z.string(),
@@ -66,8 +81,7 @@ export const registerWebhookRouter = async (server: FastifyZodProvider) => {
environment: webhook.environment.slug,
webhookId: webhook.id,
isDisabled: webhook.isDisabled,
secretPath: webhook.secretPath,
webhookUrl: webhook.url
secretPath: webhook.secretPath
}
}
});
@@ -116,8 +130,7 @@ export const registerWebhookRouter = async (server: FastifyZodProvider) => {
environment: webhook.environment.slug,
webhookId: webhook.id,
isDisabled: webhook.isDisabled,
secretPath: webhook.secretPath,
webhookUrl: webhook.url
secretPath: webhook.secretPath
}
}
});
@@ -156,8 +169,7 @@ export const registerWebhookRouter = async (server: FastifyZodProvider) => {
environment: webhook.environment.slug,
webhookId: webhook.id,
isDisabled: webhook.isDisabled,
secretPath: webhook.secretPath,
webhookUrl: webhook.url
secretPath: webhook.secretPath
}
}
});
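The hooks.slack.com check above matches Slack's incoming-webhook URLs. For reference, an incoming webhook accepts a JSON POST with a text field; a sketch with a placeholder URL (the payload this feature actually sends is decided server-side):

async function notifySlack(webhookUrl: string, text: string) {
  // webhookUrl must be a Slack incoming webhook, e.g. https://hooks.slack.com/services/...
  await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text })
  });
}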

View File

@@ -14,7 +14,7 @@ export const tokenDALFactory = (db: TDbClient) => {
const findOneTokenSession = async (filter: Partial<TAuthTokenSessions>): Promise<TAuthTokenSessions | undefined> => {
try {
const doc = await db(TableName.AuthTokenSession).where(filter).first();
const doc = await db.replicaNode()(TableName.AuthTokenSession).where(filter).first();
return doc;
} catch (error) {
throw new DatabaseError({ error, name: "FindOneTokenSession" });
@@ -44,7 +44,7 @@ export const tokenDALFactory = (db: TDbClient) => {
const findTokenSessions = async (filter: Partial<TAuthTokenSessions>, tx?: Knex) => {
try {
const sessions = await (tx || db)(TableName.AuthTokenSession).where(filter);
const sessions = await (tx || db.replicaNode())(TableName.AuthTokenSession).where(filter);
return sessions;
} catch (error) {
throw new DatabaseError({ name: "Find all token session", error });

View File

@@ -17,6 +17,7 @@ import { TAuthTokenServiceFactory } from "../auth-token/auth-token-service";
import { TokenType } from "../auth-token/auth-token-types";
import { TOrgDALFactory } from "../org/org-dal";
import { SmtpTemplates, TSmtpService } from "../smtp/smtp-service";
import { LoginMethod } from "../super-admin/super-admin-types";
import { TUserDALFactory } from "../user/user-dal";
import { enforceUserLockStatus, validateProviderAuthToken } from "./auth-fns";
import {
@@ -158,9 +159,22 @@ export const authLoginServiceFactory = ({
const userEnc = await userDAL.findUserEncKeyByUsername({
username: email
});
const serverCfg = await getServerCfg();
if (
serverCfg.enabledLoginMethods &&
!serverCfg.enabledLoginMethods.includes(LoginMethod.EMAIL) &&
!providerAuthToken
) {
throw new BadRequestError({
message: "Login with email is disabled by administrator."
});
}
if (!userEnc || (userEnc && !userEnc.isAccepted)) {
throw new Error("Failed to find user");
}
if (!userEnc.authMethods?.includes(AuthMethod.EMAIL)) {
validateProviderAuthToken(providerAuthToken as string, email);
}
@@ -507,6 +521,40 @@ export const authLoginServiceFactory = ({
let user = await userDAL.findUserByUsername(email);
const serverCfg = await getServerCfg();
if (serverCfg.enabledLoginMethods) {
switch (authMethod) {
case AuthMethod.GITHUB: {
if (!serverCfg.enabledLoginMethods.includes(LoginMethod.GITHUB)) {
throw new BadRequestError({
message: "Login with Github is disabled by administrator.",
name: "Oauth 2 login"
});
}
break;
}
case AuthMethod.GOOGLE: {
if (!serverCfg.enabledLoginMethods.includes(LoginMethod.GOOGLE)) {
throw new BadRequestError({
message: "Login with Google is disabled by administrator.",
name: "Oauth 2 login"
});
}
break;
}
case AuthMethod.GITLAB: {
if (!serverCfg.enabledLoginMethods.includes(LoginMethod.GITLAB)) {
throw new BadRequestError({
message: "Login with Gitlab is disabled by administrator.",
name: "Oauth 2 login"
});
}
break;
}
default:
break;
}
}
const appCfg = getConfig();
if (!user) {
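The per-provider switch above repeats one guard three times. A table-driven sketch of the same check (plain string values assumed in place of the enums from the diff):

const OAUTH_LOGIN_METHOD: Record<string, string | undefined> = {
  github: "github",
  google: "google",
  gitlab: "gitlab"
};

function assertOauthMethodEnabled(authMethod: string, enabled?: string[]) {
  const loginMethod = OAUTH_LOGIN_METHOD[authMethod];
  if (enabled && loginMethod && !enabled.includes(loginMethod)) {
    throw new Error(`Login with ${loginMethod} is disabled by administrator.`);
  }
}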

View File

@@ -364,7 +364,7 @@ export const authSignupServiceFactory = ({
tx
);
const uniqueOrgId = [...new Set(updatedMembersips.map(({ orgId }) => orgId))];
await Promise.allSettled(uniqueOrgId.map((orgId) => licenseService.updateSubscriptionOrgMemberCount(orgId)));
await Promise.allSettled(uniqueOrgId.map((orgId) => licenseService.updateSubscriptionOrgMemberCount(orgId, tx)));
await convertPendingGroupAdditionsToGroupMemberships({
userIds: [user.id],

View File

@@ -16,6 +16,7 @@ export const certificateAuthorityDALFactory = (db: TDbClient) => {
parentCaId?: string;
encryptedCertificate: Buffer;
}[] = await db
.replicaNode()
.withRecursive("cte", (cte) => {
void cte
.select("ca.id as caId", "ca.parentCaId", "cert.encryptedCertificate")

View File

@@ -14,7 +14,8 @@ export const certificateDALFactory = (db: TDbClient) => {
count: string;
}
const count = await db(TableName.Certificate)
const count = await db
.replicaNode()(TableName.Certificate)
.join(TableName.CertificateAuthority, `${TableName.Certificate}.caId`, `${TableName.CertificateAuthority}.id`)
.join(TableName.Project, `${TableName.CertificateAuthority}.projectId`, `${TableName.Project}.id`)
.where(`${TableName.Project}.id`, projectId)

View File

@@ -12,7 +12,7 @@ export const groupProjectDALFactory = (db: TDbClient) => {
const findByProjectId = async (projectId: string, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.GroupProjectMembership)
const docs = await (tx || db.replicaNode())(TableName.GroupProjectMembership)
.where(`${TableName.GroupProjectMembership}.projectId`, projectId)
.join(TableName.Groups, `${TableName.GroupProjectMembership}.groupId`, `${TableName.Groups}.id`)
.join(

View File

@@ -12,7 +12,7 @@ export const identityAccessTokenDALFactory = (db: TDbClient) => {
const findOne = async (filter: Partial<TIdentityAccessTokens>, tx?: Knex) => {
try {
const doc = await (tx || db)(TableName.IdentityAccessToken)
const doc = await (tx || db.replicaNode())(TableName.IdentityAccessToken)
.where(filter)
.join(TableName.Identity, `${TableName.Identity}.id`, `${TableName.IdentityAccessToken}.identityId`)
.leftJoin(TableName.IdentityUaClientSecret, (qb) => {

View File

@@ -12,7 +12,7 @@ export const identityProjectDALFactory = (db: TDbClient) => {
const findByProjectId = async (projectId: string, filter: { identityId?: string } = {}, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.IdentityProjectMembership)
const docs = await (tx || db.replicaNode())(TableName.IdentityProjectMembership)
.where(`${TableName.IdentityProjectMembership}.projectId`, projectId)
.join(TableName.Identity, `${TableName.IdentityProjectMembership}.identityId`, `${TableName.Identity}.id`)
.where((qb) => {

View File

@@ -12,7 +12,7 @@ export const identityOrgDALFactory = (db: TDbClient) => {
const findOne = async (filter: Partial<TIdentityOrgMemberships>, tx?: Knex) => {
try {
const [data] = await (tx || db)(TableName.IdentityOrgMembership)
const [data] = await (tx || db.replicaNode())(TableName.IdentityOrgMembership)
.where(filter)
.join(TableName.Identity, `${TableName.IdentityOrgMembership}.identityId`, `${TableName.Identity}.id`)
.select(selectAllTableCols(TableName.IdentityOrgMembership))
@@ -29,7 +29,7 @@ export const identityOrgDALFactory = (db: TDbClient) => {
const find = async (filter: Partial<TIdentityOrgMemberships>, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.IdentityOrgMembership)
const docs = await (tx || db.replicaNode())(TableName.IdentityOrgMembership)
.where(filter)
.join(TableName.Identity, `${TableName.IdentityOrgMembership}.identityId`, `${TableName.Identity}.id`)
.leftJoin(TableName.OrgRoles, `${TableName.IdentityOrgMembership}.roleId`, `${TableName.OrgRoles}.id`)

View File

@@ -1,6 +1,7 @@
import { ForbiddenError } from "@casl/ability";
import { OrgMembershipRole, TableName, TOrgRoles } from "@app/db/schemas";
import { TLicenseServiceFactory } from "@app/ee/services/license/license-service";
import { OrgPermissionActions, OrgPermissionSubjects } from "@app/ee/services/permission/org-permission";
import { TPermissionServiceFactory } from "@app/ee/services/permission/permission-service";
import { isAtLeastAsPrivileged } from "@app/lib/casl";
@@ -16,6 +17,7 @@ type TIdentityServiceFactoryDep = {
identityDAL: TIdentityDALFactory;
identityOrgMembershipDAL: TIdentityOrgDALFactory;
permissionService: Pick<TPermissionServiceFactory, "getOrgPermission" | "getOrgPermissionByRole">;
licenseService: Pick<TLicenseServiceFactory, "getPlan" | "updateSubscriptionOrgMemberCount">;
};
export type TIdentityServiceFactory = ReturnType<typeof identityServiceFactory>;
@@ -23,7 +25,8 @@ export type TIdentityServiceFactory = ReturnType<typeof identityServiceFactory>;
export const identityServiceFactory = ({
identityDAL,
identityOrgMembershipDAL,
permissionService
permissionService,
licenseService
}: TIdentityServiceFactoryDep) => {
const createIdentity = async ({
name,
@@ -45,6 +48,14 @@ export const identityServiceFactory = ({
const hasRequiredPriviledges = isAtLeastAsPrivileged(permission, rolePermission);
if (!hasRequiredPriviledges) throw new BadRequestError({ message: "Failed to create a more privileged identity" });
const plan = await licenseService.getPlan(orgId);
if (plan?.identityLimit && plan.identitiesUsed >= plan.identityLimit) {
// the plan imposes an identity limit and the org has already reached it
throw new BadRequestError({
message: "Failed to create identity due to identity limit reached. Upgrade plan to create more identities."
});
}
const identity = await identityDAL.transaction(async (tx) => {
const newIdentity = await identityDAL.create({ name }, tx);
await identityOrgMembershipDAL.create(
@@ -58,6 +69,7 @@ export const identityServiceFactory = ({
);
return newIdentity;
});
await licenseService.updateSubscriptionOrgMemberCount(orgId);
return identity;
};
@@ -115,7 +127,7 @@ export const identityServiceFactory = ({
{ identityId: id },
{
role: customRole ? OrgMembershipRole.Custom : role,
roleId: customRole?.id
roleId: customRole?.id || null
},
tx
);
@@ -168,6 +180,9 @@ export const identityServiceFactory = ({
throw new ForbiddenRequestError({ message: "Failed to delete more privileged identity" });
const deletedIdentity = await identityDAL.deleteById(id);
await licenseService.updateSubscriptionOrgMemberCount(identityOrgMembership.orgId);
return { ...deletedIdentity, orgId: identityOrgMembership.orgId };
};
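The identity-limit guard in createIdentity generalizes to a small helper; a sketch using only the plan shape visible in this diff:

type PlanSeats = { identityLimit: number | null; identitiesUsed: number };

function assertIdentitySeatAvailable(plan?: PlanSeats) {
  if (plan?.identityLimit && plan.identitiesUsed >= plan.identityLimit) {
    throw new Error("Identity limit reached. Upgrade plan to create more identities.");
  }
}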

View File

@@ -178,7 +178,8 @@ export const integrationAuthServiceFactory = ({
actorAuthMethod,
accessId,
namespace,
accessToken
accessToken,
awsAssumeIamRoleArn
}: TSaveIntegrationAccessTokenDTO) => {
if (!Object.values(Integrations).includes(integration as Integrations))
throw new BadRequestError({ message: "Invalid integration" });
@@ -230,7 +231,7 @@ export const integrationAuthServiceFactory = ({
updateDoc.accessExpiresAt = tokenDetails.accessExpiresAt;
}
if (!refreshToken && (accessId || accessToken)) {
if (!refreshToken && (accessId || accessToken || awsAssumeIamRoleArn)) {
if (accessToken) {
const accessEncToken = encryptSymmetric128BitHexKeyUTF8(accessToken, key);
updateDoc.accessIV = accessEncToken.iv;
@@ -243,6 +244,12 @@ export const integrationAuthServiceFactory = ({
updateDoc.accessIdTag = accessEncToken.tag;
updateDoc.accessIdCiphertext = accessEncToken.ciphertext;
}
if (awsAssumeIamRoleArn) {
const awsAssumeIamRoleArnEnc = encryptSymmetric128BitHexKeyUTF8(awsAssumeIamRoleArn, key);
updateDoc.awsAssumeIamRoleArnCipherText = awsAssumeIamRoleArnEnc.ciphertext;
updateDoc.awsAssumeIamRoleArnIV = awsAssumeIamRoleArnEnc.iv;
updateDoc.awsAssumeIamRoleArnTag = awsAssumeIamRoleArnEnc.tag;
}
}
return integrationAuthDAL.create(updateDoc);
};
@@ -251,6 +258,14 @@ export const integrationAuthServiceFactory = ({
const getIntegrationAccessToken = async (integrationAuth: TIntegrationAuths, botKey: string) => {
let accessToken: string | undefined;
let accessId: string | undefined;
// this means it's not access-token based
if (
integrationAuth.integration === Integrations.AWS_SECRET_MANAGER &&
integrationAuth.awsAssumeIamRoleArnCipherText
) {
return { accessToken: "", accessId: "" };
}
if (integrationAuth.accessTag && integrationAuth.accessIV && integrationAuth.accessCiphertext) {
accessToken = decryptSymmetric128BitHexKeyUTF8({
ciphertext: integrationAuth.accessCiphertext,
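The ARN is stored as a {ciphertext, iv, tag} triple, which suggests AES-GCM. A hedged sketch of such a scheme, assuming a 128-bit hex key and UTF-8 plaintext (the real encryptSymmetric128BitHexKeyUTF8 is not shown in this diff):

import crypto from "node:crypto";

function encryptSymmetric(plaintext: string, hexKey: string) {
  const iv = crypto.randomBytes(12); // 96-bit IV, standard for GCM
  const cipher = crypto.createCipheriv("aes-128-gcm", Buffer.from(hexKey, "hex"), iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    ciphertext: ciphertext.toString("base64"),
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64")
  };
}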

View File

@@ -17,6 +17,7 @@ export type TSaveIntegrationAccessTokenDTO = {
url?: string;
namespace?: string;
refreshToken?: string;
awsAssumeIamRoleArn?: string;
} & TProjectPermission;
export type TDeleteIntegrationAuthsDTO = TProjectPermission & {

View File

@@ -17,14 +17,17 @@ import {
UntagResourceCommand,
UpdateSecretCommand
} from "@aws-sdk/client-secrets-manager";
import { AssumeRoleCommand, STSClient } from "@aws-sdk/client-sts";
import { Octokit } from "@octokit/rest";
import AWS, { AWSError } from "aws-sdk";
import { AxiosError } from "axios";
import { randomUUID } from "crypto";
import sodium from "libsodium-wrappers";
import isEqual from "lodash.isequal";
import { z } from "zod";
import { SecretType, TIntegrationAuths, TIntegrations, TSecrets } from "@app/db/schemas";
import { getConfig } from "@app/lib/config/env";
import { request } from "@app/lib/config/request";
import { BadRequestError } from "@app/lib/errors";
import { logger } from "@app/lib/logger";
@@ -695,24 +698,61 @@ const syncSecretsAWSSecretManager = async ({
integration,
secrets,
accessId,
accessToken
accessToken,
awsAssumeRoleArn,
projectId
}: {
integration: TIntegrations;
secrets: Record<string, { value: string; comment?: string }>;
accessId: string | null;
accessToken: string;
awsAssumeRoleArn: string | null;
projectId?: string;
}) => {
const appCfg = getConfig();
const metadata = z.record(z.any()).parse(integration.metadata || {});
if (!accessId) {
throw new Error("AWS access ID is required");
if (!accessId && !awsAssumeRoleArn) {
throw new Error("AWS access ID/AWS Assume Role is required");
}
let accessKeyId = "";
let secretAccessKey = "";
let sessionToken;
if (awsAssumeRoleArn) {
const client = new STSClient({
region: integration.region as string,
credentials:
appCfg.CLIENT_ID_AWS_INTEGRATION && appCfg.CLIENT_SECRET_AWS_INTEGRATION
? {
accessKeyId: appCfg.CLIENT_ID_AWS_INTEGRATION,
secretAccessKey: appCfg.CLIENT_SECRET_AWS_INTEGRATION
}
: undefined
});
const command = new AssumeRoleCommand({
RoleArn: awsAssumeRoleArn,
RoleSessionName: `infisical-sm-${randomUUID()}`,
DurationSeconds: 900, // 15mins
ExternalId: projectId
});
const response = await client.send(command);
if (!response.Credentials?.AccessKeyId || !response.Credentials?.SecretAccessKey)
throw new Error("Failed to assume role");
accessKeyId = response.Credentials?.AccessKeyId;
secretAccessKey = response.Credentials?.SecretAccessKey;
sessionToken = response.Credentials?.SessionToken;
} else {
accessKeyId = accessId as string;
secretAccessKey = accessToken;
}
const secretsManager = new SecretsManagerClient({
region: integration.region as string,
credentials: {
accessKeyId: accessId,
secretAccessKey: accessToken
accessKeyId,
secretAccessKey,
sessionToken
}
});
@@ -3568,7 +3608,9 @@ export const syncIntegrationSecrets = async ({
secrets,
accessId,
accessToken,
appendices
awsAssumeRoleArn,
appendices,
projectId
}: {
createManySecretsRawFn: (params: TCreateManySecretsRawFn) => Promise<Array<TSecrets & { _id: string }>>;
updateManySecretsRawFn: (params: TUpdateManySecretsRawFn) => Promise<Array<TSecrets & { _id: string }>>;
@@ -3585,8 +3627,10 @@ export const syncIntegrationSecrets = async ({
integrationAuth: TIntegrationAuths;
secrets: Record<string, { value: string; comment?: string }>;
accessId: string | null;
awsAssumeRoleArn: string | null;
accessToken: string;
appendices?: { prefix: string; suffix: string };
projectId?: string;
}) => {
let response: { isSynced: boolean; syncMessage: string } | null = null;
@@ -3620,7 +3664,9 @@ export const syncIntegrationSecrets = async ({
integration,
secrets,
accessId,
accessToken
accessToken,
awsAssumeRoleArn,
projectId
});
break;
case Integrations.HEROKU:
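Passing the project id as ExternalId above is the standard confused-deputy mitigation: the customer's role can require it in its trust policy. A hedged sketch of that trust policy expressed as a constant (the account id and condition are assumptions, not from this PR):

const assumedRoleTrustPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::<integration-account-id>:root" },
      Action: "sts:AssumeRole",
      Condition: { StringEquals: { "sts:ExternalId": "<projectId>" } }
    }
  ]
};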

View File

@@ -22,7 +22,7 @@ export const integrationDALFactory = (db: TDbClient) => {
const find = async (filter: Partial<TIntegrations>, tx?: Knex) => {
try {
const docs = await integrationFindQuery(tx || db, filter);
const docs = await integrationFindQuery(tx || db.replicaNode(), filter);
return docs.map(({ envId, envSlug, envName, ...el }) => ({
...el,
environment: {
@@ -38,7 +38,7 @@ export const integrationDALFactory = (db: TDbClient) => {
const findOne = async (filter: Partial<TIntegrations>, tx?: Knex) => {
try {
const doc = await integrationFindQuery(tx || db, filter).first();
const doc = await integrationFindQuery(tx || db.replicaNode(), filter).first();
if (!doc) return;
const { envName: name, envSlug: slug, envId: id, ...el } = doc;
@@ -50,7 +50,7 @@ export const integrationDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const doc = await integrationFindQuery(tx || db, {
const doc = await integrationFindQuery(tx || db.replicaNode(), {
[`${TableName.Integration}.id` as "id"]: id
}).first();
if (!doc) return;
@@ -64,7 +64,7 @@ export const integrationDALFactory = (db: TDbClient) => {
const findByProjectId = async (projectId: string, tx?: Knex) => {
try {
const integrations = await (tx || db)(TableName.Integration)
const integrations = await (tx || db.replicaNode())(TableName.Integration)
.where(`${TableName.Environment}.projectId`, projectId)
.join(TableName.Environment, `${TableName.Integration}.envId`, `${TableName.Environment}.id`)
.select(db.ref("name").withSchema(TableName.Environment).as("envName"))
@@ -90,7 +90,7 @@ export const integrationDALFactory = (db: TDbClient) => {
// used for syncing secrets
// this will populate integration auth also
const findByProjectIdV2 = async (projectId: string, environment: string, tx?: Knex) => {
const docs = await (tx || db)(TableName.Integration)
const docs = await (tx || db.replicaNode())(TableName.Integration)
.where(`${TableName.Environment}.projectId`, projectId)
.where("isActive", true)
.where(`${TableName.Environment}.slug`, environment)
@@ -120,7 +120,10 @@ export const integrationDALFactory = (db: TDbClient) => {
db.ref("accessExpiresAt").withSchema(TableName.IntegrationAuth).as("accessExpiresAtAu"),
db.ref("metadata").withSchema(TableName.IntegrationAuth).as("metadataAu"),
db.ref("algorithm").withSchema(TableName.IntegrationAuth).as("algorithmAu"),
db.ref("keyEncoding").withSchema(TableName.IntegrationAuth).as("keyEncodingAu")
db.ref("keyEncoding").withSchema(TableName.IntegrationAuth).as("keyEncodingAu"),
db.ref("awsAssumeIamRoleArnCipherText").withSchema(TableName.IntegrationAuth),
db.ref("awsAssumeIamRoleArnIV").withSchema(TableName.IntegrationAuth),
db.ref("awsAssumeIamRoleArnTag").withSchema(TableName.IntegrationAuth)
);
return docs.map(
({
@@ -146,6 +149,9 @@ export const integrationDALFactory = (db: TDbClient) => {
algorithmAu: algorithm,
keyEncodingAu: keyEncoding,
accessExpiresAtAu: accessExpiresAt,
awsAssumeIamRoleArnIV,
awsAssumeIamRoleArnCipherText,
awsAssumeIamRoleArnTag,
...el
}) => ({
...el,
@@ -174,7 +180,10 @@ export const integrationDALFactory = (db: TDbClient) => {
metadata,
algorithm,
keyEncoding,
accessExpiresAt
accessExpiresAt,
awsAssumeIamRoleArnIV,
awsAssumeIamRoleArnCipherText,
awsAssumeIamRoleArnTag
}
})
);

View File

@@ -16,7 +16,7 @@ export const incidentContactDALFactory = (db: TDbClient) => {
const findByOrgId = async (orgId: string) => {
try {
const incidentContacts = await db(TableName.IncidentContact).where({ orgId });
const incidentContacts = await db.replicaNode()(TableName.IncidentContact).where({ orgId });
return incidentContacts;
} catch (error) {
throw new DatabaseError({ name: "Incident contact list", error });
@@ -25,7 +25,8 @@ export const incidentContactDALFactory = (db: TDbClient) => {
const findOne = async (orgId: string, data: Partial<TIncidentContacts>) => {
try {
const incidentContacts = await db(TableName.IncidentContact)
const incidentContacts = await db
.replicaNode()(TableName.IncidentContact)
.where({ orgId, ...data })
.first();
return incidentContacts;

View File

@@ -20,7 +20,7 @@ export const orgDALFactory = (db: TDbClient) => {
const findOrgById = async (orgId: string) => {
try {
const org = await db(TableName.Organization).where({ id: orgId }).first();
const org = await db.replicaNode()(TableName.Organization).where({ id: orgId }).first();
return org;
} catch (error) {
throw new DatabaseError({ error, name: "Find org by id" });
@@ -30,7 +30,8 @@ export const orgDALFactory = (db: TDbClient) => {
// special query
const findAllOrgsByUserId = async (userId: string): Promise<TOrganizations[]> => {
try {
const org = await db(TableName.OrgMembership)
const org = await db
.replicaNode()(TableName.OrgMembership)
.where({ userId })
.join(TableName.Organization, `${TableName.OrgMembership}.orgId`, `${TableName.Organization}.id`)
.select(selectAllTableCols(TableName.Organization));
@@ -42,7 +43,8 @@ export const orgDALFactory = (db: TDbClient) => {
const findOrgByProjectId = async (projectId: string): Promise<TOrganizations> => {
try {
const [org] = await db(TableName.Project)
const [org] = await db
.replicaNode()(TableName.Project)
.where({ [`${TableName.Project}.id` as "id"]: projectId })
.join(TableName.Organization, `${TableName.Project}.orgId`, `${TableName.Organization}.id`)
.select(selectAllTableCols(TableName.Organization));
@@ -56,7 +58,8 @@ export const orgDALFactory = (db: TDbClient) => {
// special query
const findAllOrgMembers = async (orgId: string) => {
try {
const members = await db(TableName.OrgMembership)
const members = await db
.replicaNode()(TableName.OrgMembership)
.where(`${TableName.OrgMembership}.orgId`, orgId)
.join(TableName.Users, `${TableName.OrgMembership}.userId`, `${TableName.Users}.id`)
.leftJoin<TUserEncryptionKeys>(
@@ -95,7 +98,8 @@ export const orgDALFactory = (db: TDbClient) => {
count: string;
}
const count = await db(TableName.OrgMembership)
const count = await db
.replicaNode()(TableName.OrgMembership)
.where(`${TableName.OrgMembership}.orgId`, orgId)
.count("*")
.join(TableName.Users, `${TableName.OrgMembership}.userId`, `${TableName.Users}.id`)
@@ -110,7 +114,8 @@ export const orgDALFactory = (db: TDbClient) => {
const findOrgMembersByUsername = async (orgId: string, usernames: string[]) => {
try {
const members = await db(TableName.OrgMembership)
const members = await db
.replicaNode()(TableName.OrgMembership)
.where(`${TableName.OrgMembership}.orgId`, orgId)
.join(TableName.Users, `${TableName.OrgMembership}.userId`, `${TableName.Users}.id`)
.leftJoin<TUserEncryptionKeys>(
@@ -145,7 +150,8 @@ export const orgDALFactory = (db: TDbClient) => {
const findOrgGhostUser = async (orgId: string) => {
try {
const member = await db(TableName.OrgMembership)
const member = await db
.replicaNode()(TableName.OrgMembership)
.where({ orgId })
.join(TableName.Users, `${TableName.OrgMembership}.userId`, `${TableName.Users}.id`)
.leftJoin(TableName.UserEncryptionKey, `${TableName.UserEncryptionKey}.userId`, `${TableName.Users}.id`)
@@ -169,7 +175,8 @@ export const orgDALFactory = (db: TDbClient) => {
const ghostUserExists = async (orgId: string) => {
try {
const member = await db(TableName.OrgMembership)
const member = await db
.replicaNode()(TableName.OrgMembership)
.where({ orgId })
.join(TableName.Users, `${TableName.OrgMembership}.userId`, `${TableName.Users}.id`)
.leftJoin(TableName.UserEncryptionKey, `${TableName.UserEncryptionKey}.userId`, `${TableName.Users}.id`)
@@ -257,7 +264,7 @@ export const orgDALFactory = (db: TDbClient) => {
{ offset, limit, sort, tx }: TFindOpt<TOrgMemberships> = {}
) => {
try {
const query = (tx || db)(TableName.OrgMembership)
const query = (tx || db.replicaNode())(TableName.OrgMembership)
// eslint-disable-next-line
.where(buildFindFilter(filter))
.join(TableName.Users, `${TableName.Users}.id`, `${TableName.OrgMembership}.userId`)

View File

@@ -420,13 +420,20 @@ export const orgServiceFactory = ({
}
const plan = await licenseService.getPlan(orgId);
if (plan.memberLimit !== null && plan.membersUsed >= plan.memberLimit) {
// case: limit imposed on number of members allowed
// case: number of members used exceeds the number of members allowed
if (plan?.memberLimit && plan.membersUsed >= plan.memberLimit) {
// the plan imposes a member limit and the org has already reached it
throw new BadRequestError({
message: "Failed to invite member due to member limit reached. Upgrade plan to invite more members."
});
}
if (plan?.identityLimit && plan.identitiesUsed >= plan.identityLimit) {
// the plan imposes an identity limit and the org has already reached it
throw new BadRequestError({
message: "Failed to invite member due to member limit reached. Upgrade plan to invite more members."
});
}
const invitee = await orgDAL.transaction(async (tx) => {
const inviteeUser = await userDAL.findUserByUsername(inviteeEmail, tx);
if (inviteeUser) {

View File

@@ -12,7 +12,7 @@ export const projectBotDALFactory = (db: TDbClient) => {
const findOne = async (filter: Partial<TProjectBots>, tx?: Knex) => {
try {
const bot = await (tx || db)(TableName.ProjectBot)
const bot = await (tx || db.replicaNode())(TableName.ProjectBot)
.where(filter)
.leftJoin(TableName.Users, `${TableName.ProjectBot}.senderId`, `${TableName.Users}.id`)
.leftJoin(TableName.UserEncryptionKey, `${TableName.UserEncryptionKey}.userId`, `${TableName.Users}.id`)

View File

@@ -12,7 +12,9 @@ export const projectEnvDALFactory = (db: TDbClient) => {
const findBySlugs = async (projectId: string, env: string[], tx?: Knex) => {
try {
const envs = await (tx || db)(TableName.Environment).where("projectId", projectId).whereIn("slug", env);
const envs = await (tx || db.replicaNode())(TableName.Environment)
.where("projectId", projectId)
.whereIn("slug", env);
return envs;
} catch (error) {
throw new DatabaseError({ error, name: "Find by slugs" });

View File

@@ -16,7 +16,7 @@ export const projectKeyDALFactory = (db: TDbClient) => {
tx?: Knex
): Promise<(TProjectKeys & { sender: { publicKey: string } }) | undefined> => {
try {
const projectKey = await (tx || db)(TableName.ProjectKeys)
const projectKey = await (tx || db.replicaNode())(TableName.ProjectKeys)
.join(TableName.Users, `${TableName.ProjectKeys}.senderId`, `${TableName.Users}.id`)
.join(TableName.UserEncryptionKey, `${TableName.UserEncryptionKey}.userId`, `${TableName.Users}.id`)
.where({ projectId, receiverId: userId })
@@ -34,7 +34,7 @@ export const projectKeyDALFactory = (db: TDbClient) => {
const findAllProjectUserPubKeys = async (projectId: string, tx?: Knex) => {
try {
const pubKeys = await (tx || db)(TableName.ProjectMembership)
const pubKeys = await (tx || db.replicaNode())(TableName.ProjectMembership)
.where({ projectId })
.join(TableName.Users, `${TableName.ProjectMembership}.userId`, `${TableName.Users}.id`)
.join(TableName.UserEncryptionKey, `${TableName.Users}.id`, `${TableName.UserEncryptionKey}.userId`)

View File

@@ -13,7 +13,8 @@ export const projectMembershipDALFactory = (db: TDbClient) => {
// special query
const findAllProjectMembers = async (projectId: string, filter: { usernames?: string[]; username?: string } = {}) => {
try {
const docs = await db(TableName.ProjectMembership)
const docs = await db
.replicaNode()(TableName.ProjectMembership)
.where({ [`${TableName.ProjectMembership}.projectId` as "projectId"]: projectId })
.join(TableName.Users, `${TableName.ProjectMembership}.userId`, `${TableName.Users}.id`)
.where((qb) => {
@@ -108,7 +109,7 @@ export const projectMembershipDALFactory = (db: TDbClient) => {
const findProjectGhostUser = async (projectId: string, tx?: Knex) => {
try {
const ghostUser = await (tx || db)(TableName.ProjectMembership)
const ghostUser = await (tx || db.replicaNode())(TableName.ProjectMembership)
.where({ projectId })
.join(TableName.Users, `${TableName.ProjectMembership}.userId`, `${TableName.Users}.id`)
.select(selectAllTableCols(TableName.Users))
@@ -123,7 +124,8 @@ export const projectMembershipDALFactory = (db: TDbClient) => {
const findMembershipsByUsername = async (projectId: string, usernames: string[]) => {
try {
const members = await db(TableName.ProjectMembership)
const members = await db
.replicaNode()(TableName.ProjectMembership)
.where({ projectId })
.join(TableName.Users, `${TableName.ProjectMembership}.userId`, `${TableName.Users}.id`)
.join<TUserEncryptionKeys>(
@@ -149,7 +151,8 @@ export const projectMembershipDALFactory = (db: TDbClient) => {
const findProjectMembershipsByUserId = async (orgId: string, userId: string) => {
try {
const memberships = await db(TableName.ProjectMembership)
const memberships = await db
.replicaNode()(TableName.ProjectMembership)
.where({ userId })
.join(TableName.Project, `${TableName.ProjectMembership}.projectId`, `${TableName.Project}.id`)
.where({ [`${TableName.Project}.orgId` as "orgId"]: orgId })

View File

@@ -14,7 +14,8 @@ export const projectDALFactory = (db: TDbClient) => {
const findAllProjects = async (userId: string) => {
try {
const workspaces = await db(TableName.ProjectMembership)
const workspaces = await db
.replicaNode()(TableName.ProjectMembership)
.where({ userId })
.join(TableName.Project, `${TableName.ProjectMembership}.projectId`, `${TableName.Project}.id`)
.leftJoin(TableName.Environment, `${TableName.Environment}.projectId`, `${TableName.Project}.id`)
@@ -83,7 +84,7 @@ export const projectDALFactory = (db: TDbClient) => {
const findProjectGhostUser = async (projectId: string, tx?: Knex) => {
try {
const ghostUser = await (tx || db)(TableName.ProjectMembership)
const ghostUser = await (tx || db.replicaNode())(TableName.ProjectMembership)
.where({ projectId })
.join(TableName.Users, `${TableName.ProjectMembership}.userId`, `${TableName.Users}.id`)
.select(selectAllTableCols(TableName.Users))
@@ -109,7 +110,8 @@ export const projectDALFactory = (db: TDbClient) => {
const findAllProjectsByIdentity = async (identityId: string) => {
try {
const workspaces = await db(TableName.IdentityProjectMembership)
const workspaces = await db
.replicaNode()(TableName.IdentityProjectMembership)
.where({ identityId })
.join(TableName.Project, `${TableName.IdentityProjectMembership}.projectId`, `${TableName.Project}.id`)
.leftJoin(TableName.Environment, `${TableName.Environment}.projectId`, `${TableName.Project}.id`)
@@ -151,7 +153,8 @@ export const projectDALFactory = (db: TDbClient) => {
const findProjectById = async (id: string) => {
try {
const workspaces = await db(TableName.Project)
const workspaces = await db
.replicaNode()(TableName.Project)
.where(`${TableName.Project}.id`, id)
.leftJoin(TableName.Environment, `${TableName.Environment}.projectId`, `${TableName.Project}.id`)
.select(
@@ -198,7 +201,8 @@ export const projectDALFactory = (db: TDbClient) => {
throw new BadRequestError({ message: "Organization ID is required when querying with slugs" });
}
const projects = await db(TableName.Project)
const projects = await db
.replicaNode()(TableName.Project)
.where(`${TableName.Project}.slug`, slug)
.where(`${TableName.Project}.orgId`, orgId)
.leftJoin(TableName.Environment, `${TableName.Environment}.projectId`, `${TableName.Project}.id`)

View File

@@ -12,7 +12,7 @@ export const secretBlindIndexDALFactory = (db: TDbClient) => {
const countOfSecretsWithNullSecretBlindIndex = async (projectId: string, tx?: Knex) => {
try {
const doc = await (tx || db)(TableName.Secret)
const doc = await (tx || db.replicaNode())(TableName.Secret)
.leftJoin(TableName.SecretFolder, `${TableName.SecretFolder}.id`, `${TableName.Secret}.folderId`)
.leftJoin(TableName.Environment, `${TableName.Environment}.id`, `${TableName.SecretFolder}.envId`)
.where({ projectId })
@@ -26,7 +26,7 @@ export const secretBlindIndexDALFactory = (db: TDbClient) => {
const findAllSecretsByProjectId = async (projectId: string, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.Secret)
const docs = await (tx || db.replicaNode())(TableName.Secret)
.leftJoin(TableName.SecretFolder, `${TableName.SecretFolder}.id`, `${TableName.Secret}.folderId`)
.leftJoin(TableName.Environment, `${TableName.Environment}.id`, `${TableName.SecretFolder}.envId`)
.where({ projectId })
@@ -43,7 +43,7 @@ export const secretBlindIndexDALFactory = (db: TDbClient) => {
const findSecretsByProjectId = async (projectId: string, secretIds: string[], tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.Secret)
const docs = await (tx || db.replicaNode())(TableName.Secret)
.leftJoin(TableName.SecretFolder, `${TableName.SecretFolder}.id`, `${TableName.Secret}.folderId`)
.leftJoin(TableName.Environment, `${TableName.Environment}.id`, `${TableName.SecretFolder}.envId`)
.where({ projectId })

View File

@@ -211,7 +211,12 @@ export const secretFolderDALFactory = (db: TDbClient) => {
const findBySecretPath = async (projectId: string, environment: string, path: string, tx?: Knex) => {
try {
const folder = await sqlFindFolderByPathQuery(tx || db, projectId, environment, removeTrailingSlash(path))
const folder = await sqlFindFolderByPathQuery(
tx || db.replicaNode(),
projectId,
environment,
removeTrailingSlash(path)
)
.orderBy("depth", "desc")
.first();
if (folder && folder.path !== removeTrailingSlash(path)) {
@@ -230,7 +235,12 @@ export const secretFolderDALFactory = (db: TDbClient) => {
// it will stop automatically at /path2
const findClosestFolder = async (projectId: string, environment: string, path: string, tx?: Knex) => {
try {
const folder = await sqlFindFolderByPathQuery(tx || db, projectId, environment, removeTrailingSlash(path))
const folder = await sqlFindFolderByPathQuery(
tx || db.replicaNode(),
projectId,
environment,
removeTrailingSlash(path)
)
.orderBy("depth", "desc")
.first();
if (!folder) return;
@@ -247,7 +257,7 @@ export const secretFolderDALFactory = (db: TDbClient) => {
envId,
secretPath: removeTrailingSlash(secretPath)
}));
const folders = await sqlFindMultipleFolderByEnvPathQuery(tx || db, formatedQuery);
const folders = await sqlFindMultipleFolderByEnvPathQuery(tx || db.replicaNode(), formatedQuery);
return formatedQuery.map(({ envId, secretPath }) =>
folders.find(({ path: targetPath, envId: targetEnvId }) => targetPath === secretPath && targetEnvId === envId)
);
@@ -260,7 +270,7 @@ export const secretFolderDALFactory = (db: TDbClient) => {
// that is instances in which for a given folderid find the secret path
const findSecretPathByFolderIds = async (projectId: string, folderIds: string[], tx?: Knex) => {
try {
const folders = await sqlFindSecretPathByFolderId(tx || db, projectId, folderIds);
const folders = await sqlFindSecretPathByFolderId(tx || db.replicaNode(), projectId, folderIds);
// travelling all the way from leaf node to root contains real path
const rootFolders = groupBy(
@@ -299,7 +309,7 @@ export const secretFolderDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const folder = await (tx || db)(TableName.SecretFolder)
const folder = await (tx || db.replicaNode())(TableName.SecretFolder)
.where({ [`${TableName.SecretFolder}.id` as "id"]: id })
.join(TableName.Environment, `${TableName.SecretFolder}.envId`, `${TableName.Environment}.id`)
.select(selectAllTableCols(TableName.SecretFolder))
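The `findClosestFolder` change above keeps the same trick as `findBySecretPath`: the path query returns every ancestor folder that exists, and `orderBy("depth", "desc").first()` picks the deepest match, so a lookup for `/path1/path2/path3` that only finds `/path1/path2` stops there. A toy sketch of that selection step, assuming rows shaped like `{ path, depth }` (hypothetical data, not the real `sqlFindFolderByPathQuery` output):

```typescript
type FolderRow = { path: string; depth: number };

// Rows an ancestor query might return when only /path1/path2 exists.
const rows: FolderRow[] = [
  { path: "/", depth: 1 },
  { path: "/path1", depth: 2 },
  { path: "/path1/path2", depth: 3 }
];

// Equivalent of .orderBy("depth", "desc").first(): deepest existing ancestor.
const closest = [...rows].sort((a, b) => b.depth - a.depth)[0];
console.log(closest?.path); // "/path1/path2"
```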

View File

@@ -13,7 +13,7 @@ export const secretFolderVersionDALFactory = (db: TDbClient) => {
// This will fetch all latest secret versions from a folder
const findLatestVersionByFolderId = async (folderId: string, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.SecretFolderVersion)
const docs = await (tx || db.replicaNode())(TableName.SecretFolderVersion)
.join(TableName.SecretFolder, `${TableName.SecretFolderVersion}.folderId`, `${TableName.SecretFolder}.id`)
.where({ parentId: folderId, isReserved: false })
.join<TSecretFolderVersions>(
@@ -38,7 +38,9 @@ export const secretFolderVersionDALFactory = (db: TDbClient) => {
const findLatestFolderVersions = async (folderIds: string[], tx?: Knex) => {
try {
const docs: Array<TSecretFolderVersions & { max: number }> = await (tx || db)(TableName.SecretFolderVersion)
const docs: Array<TSecretFolderVersions & { max: number }> = await (tx || db.replicaNode())(
TableName.SecretFolderVersion
)
.whereIn("folderId", folderIds)
.join(
(tx || db)(TableName.SecretFolderVersion)

View File

@@ -51,7 +51,7 @@ export const secretImportDALFactory = (db: TDbClient) => {
const find = async (filter: Partial<TSecretImports & { projectId: string }>, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.SecretImport)
const docs = await (tx || db.replicaNode())(TableName.SecretImport)
.where(filter)
.join(TableName.Environment, `${TableName.SecretImport}.importEnv`, `${TableName.Environment}.id`)
.select(
@@ -72,7 +72,7 @@ export const secretImportDALFactory = (db: TDbClient) => {
const findByFolderIds = async (folderIds: string[], tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.SecretImport)
const docs = await (tx || db.replicaNode())(TableName.SecretImport)
.whereIn("folderId", folderIds)
.where("isReplication", false)
.join(TableName.Environment, `${TableName.SecretImport}.importEnv`, `${TableName.Environment}.id`)

View File

@@ -13,7 +13,7 @@ export const secretTagDALFactory = (db: TDbClient) => {
const findManyTagsById = async (projectId: string, ids: string[], tx?: Knex) => {
try {
const tags = await (tx || db)(TableName.SecretTag).where({ projectId }).whereIn("id", ids);
const tags = await (tx || db.replicaNode())(TableName.SecretTag).where({ projectId }).whereIn("id", ids);
return tags;
} catch (error) {
throw new DatabaseError({ error, name: "Find all by ids" });

View File

@@ -114,7 +114,7 @@ export const secretDALFactory = (db: TDbClient) => {
userId = undefined;
}
const secs = await (tx || db)(TableName.Secret)
const secs = await (tx || db.replicaNode())(TableName.Secret)
.where({ folderId })
.where((bd) => {
void bd.whereNull("userId").orWhere({ userId: userId || null });
@@ -152,7 +152,7 @@ export const secretDALFactory = (db: TDbClient) => {
const getSecretTags = async (secretId: string, tx?: Knex) => {
try {
const tags = await (tx || db)(TableName.JnSecretTag)
const tags = await (tx || db.replicaNode())(TableName.JnSecretTag)
.join(TableName.SecretTag, `${TableName.JnSecretTag}.${TableName.SecretTag}Id`, `${TableName.SecretTag}.id`)
.where({ [`${TableName.Secret}Id` as const]: secretId })
.select(db.ref("id").withSchema(TableName.SecretTag).as("tagId"))
@@ -179,7 +179,7 @@ export const secretDALFactory = (db: TDbClient) => {
userId = undefined;
}
const secs = await (tx || db)(TableName.Secret)
const secs = await (tx || db.replicaNode())(TableName.Secret)
.whereIn("folderId", folderIds)
.where((bd) => {
void bd.whereNull("userId").orWhere({ userId: userId || null });
@@ -223,7 +223,7 @@ export const secretDALFactory = (db: TDbClient) => {
) => {
if (!blindIndexes.length) return [];
try {
const secrets = await (tx || db)(TableName.Secret)
const secrets = await (tx || db.replicaNode())(TableName.Secret)
.where({ folderId })
.where((bd) => {
blindIndexes.forEach((el) => {
@@ -278,7 +278,7 @@ export const secretDALFactory = (db: TDbClient) => {
const findReferencedSecretReferences = async (projectId: string, envSlug: string, secretPath: string, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.SecretReference)
const docs = await (tx || db.replicaNode())(TableName.SecretReference)
.where({
secretPath,
environment: envSlug
@@ -298,7 +298,7 @@ export const secretDALFactory = (db: TDbClient) => {
// special query to backfill secret value
const findAllProjectSecretValues = async (projectId: string, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.Secret)
const docs = await (tx || db.replicaNode())(TableName.Secret)
.join(TableName.SecretFolder, `${TableName.Secret}.folderId`, `${TableName.SecretFolder}.id`)
.join(TableName.Environment, `${TableName.SecretFolder}.envId`, `${TableName.Environment}.id`)
.where("projectId", projectId)
@@ -313,7 +313,7 @@ export const secretDALFactory = (db: TDbClient) => {
const findOneWithTags = async (filter: Partial<TSecrets>, tx?: Knex) => {
try {
const rawDocs = await (tx || db)(TableName.Secret)
const rawDocs = await (tx || db.replicaNode())(TableName.Secret)
.where(filter)
.leftJoin(TableName.JnSecretTag, `${TableName.Secret}.id`, `${TableName.JnSecretTag}.${TableName.Secret}Id`)
.leftJoin(TableName.SecretTag, `${TableName.JnSecretTag}.${TableName.SecretTag}Id`, `${TableName.SecretTag}.id`)

View File

@@ -525,6 +525,18 @@ export const secretQueueFactory = ({
const botKey = await projectBotService.getBotKey(projectId);
const { accessToken, accessId } = await integrationAuthService.getIntegrationAccessToken(integrationAuth, botKey);
const awsAssumeRoleArn =
integrationAuth.awsAssumeIamRoleArnTag &&
integrationAuth.awsAssumeIamRoleArnIV &&
integrationAuth.awsAssumeIamRoleArnCipherText
? decryptSymmetric128BitHexKeyUTF8({
ciphertext: integrationAuth.awsAssumeIamRoleArnCipherText,
iv: integrationAuth.awsAssumeIamRoleArnIV,
tag: integrationAuth.awsAssumeIamRoleArnTag,
key: botKey
})
: null;
const secrets = await getIntegrationSecrets({
environment,
projectId,
@@ -544,6 +556,8 @@ export const secretQueueFactory = ({
}
try {
// akhilmhdh: this needs to be changed later to be easier to use
// at present this is not extensible; adding a new parameter for just one integration requires modifying multiple places
const response = await syncIntegrationSecrets({
createManySecretsRawFn,
updateManySecretsRawFn,
@@ -552,7 +566,9 @@ export const secretQueueFactory = ({
integrationAuth,
secrets: Object.keys(suffixedSecrets).length !== 0 ? suffixedSecrets : secrets,
accessId: accessId as string,
awsAssumeRoleArn,
accessToken,
projectId,
appendices: {
prefix: metadata?.secretPrefix || "",
suffix: metadata?.secretSuffix || ""
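The `awsAssumeRoleArn` addition above only attempts decryption when all three parts of the AEAD payload (ciphertext, IV, and auth tag) are present, and otherwise passes `null` through to the sync call. A small sketch of the same guard as a reusable helper, assuming a `decryptSymmetric128BitHexKeyUTF8`-style function (the helper name `decryptOptionalField` is made up for illustration):

```typescript
type TDecryptFn = (arg: { ciphertext: string; iv: string; tag: string; key: string }) => string;

// Returns the plaintext only when every AEAD component is stored;
// a partially populated row decrypts to null instead of throwing.
const decryptOptionalField = (
  decrypt: TDecryptFn,
  key: string,
  field: { ciphertext?: string | null; iv?: string | null; tag?: string | null }
): string | null =>
  field.ciphertext && field.iv && field.tag
    ? decrypt({ ciphertext: field.ciphertext, iv: field.iv, tag: field.tag, key })
    : null;
```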

View File

@@ -13,7 +13,7 @@ export const secretVersionDALFactory = (db: TDbClient) => {
// This will fetch all latest secret versions from a folder
const findLatestVersionByFolderId = async (folderId: string, tx?: Knex) => {
try {
const docs = await (tx || db)(TableName.SecretVersion)
const docs = await (tx || db.replicaNode())(TableName.SecretVersion)
.where(`${TableName.SecretVersion}.folderId`, folderId)
.join(TableName.Secret, `${TableName.Secret}.id`, `${TableName.SecretVersion}.secretId`)
.join<TSecretVersions, TSecretVersions & { secretId: string; max: number }>(
@@ -90,7 +90,7 @@ export const secretVersionDALFactory = (db: TDbClient) => {
const findLatestVersionMany = async (folderId: string, secretIds: string[], tx?: Knex) => {
try {
if (!secretIds.length) return {};
const docs: Array<TSecretVersions & { max: number }> = await (tx || db)(TableName.SecretVersion)
const docs: Array<TSecretVersions & { max: number }> = await (tx || db.replicaNode())(TableName.SecretVersion)
.where("folderId", folderId)
.whereIn(`${TableName.SecretVersion}.secretId`, secretIds)
.join(

View File

@@ -12,7 +12,7 @@ export const serviceTokenDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const doc = await (tx || db)(TableName.ServiceToken)
const doc = await (tx || db.replicaNode())(TableName.ServiceToken)
.leftJoin<TUsers>(
TableName.Users,
`${TableName.Users}.id`,

View File

@@ -12,7 +12,7 @@ import { AuthMethod } from "../auth/auth-type";
import { TOrgServiceFactory } from "../org/org-service";
import { TUserDALFactory } from "../user/user-dal";
import { TSuperAdminDALFactory } from "./super-admin-dal";
import { TAdminSignUpDTO } from "./super-admin-types";
import { LoginMethod, TAdminSignUpDTO } from "./super-admin-types";
type TSuperAdminServiceFactoryDep = {
serverCfgDAL: TSuperAdminDALFactory;
@@ -79,7 +79,37 @@ export const superAdminServiceFactory = ({
return newCfg;
};
const updateServerCfg = async (data: TSuperAdminUpdate) => {
const updateServerCfg = async (data: TSuperAdminUpdate, userId: string) => {
if (data.enabledLoginMethods) {
const superAdminUser = await userDAL.findById(userId);
const loginMethodToAuthMethod = {
[LoginMethod.EMAIL]: [AuthMethod.EMAIL],
[LoginMethod.GOOGLE]: [AuthMethod.GOOGLE],
[LoginMethod.GITLAB]: [AuthMethod.GITLAB],
[LoginMethod.GITHUB]: [AuthMethod.GITHUB],
[LoginMethod.LDAP]: [AuthMethod.LDAP],
[LoginMethod.OIDC]: [AuthMethod.OIDC],
[LoginMethod.SAML]: [
AuthMethod.AZURE_SAML,
AuthMethod.GOOGLE_SAML,
AuthMethod.JUMPCLOUD_SAML,
AuthMethod.KEYCLOAK_SAML,
AuthMethod.OKTA_SAML
]
};
if (
!data.enabledLoginMethods.some((loginMethod) =>
loginMethodToAuthMethod[loginMethod as LoginMethod].some(
(authMethod) => superAdminUser.authMethods?.includes(authMethod)
)
)
) {
throw new BadRequestError({
message: "You must configure at least one auth method to prevent account lockout"
});
}
}
const updatedServerCfg = await serverCfgDAL.updateById(ADMIN_CONFIG_DB_UUID, data);
await keyStore.setItemWithExpiry(ADMIN_CONFIG_KEY, ADMIN_CONFIG_KEY_EXP, JSON.stringify(updatedServerCfg));
@@ -167,7 +197,7 @@ export const superAdminServiceFactory = ({
orgName: initialOrganizationName
});
await updateServerCfg({ initialized: true });
await updateServerCfg({ initialized: true }, userInfo.user.id);
const token = await authService.generateUserTokens({
user: userInfo.user,
authMethod: AuthMethod.EMAIL,
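The guard in `updateServerCfg` maps each toggleable `LoginMethod` to the concrete `AuthMethod`s that satisfy it (SAML fans out to every provider-specific variant) and rejects the update unless the admin keeps at least one method they can actually log in with. A condensed, self-contained restatement of the check with a trimmed-down method map (enum values abbreviated from the diff; they are illustrative, not exhaustive):

```typescript
enum AuthMethod {
  EMAIL = "email",
  GOOGLE = "google",
  AZURE_SAML = "azure-saml" // value illustrative
}
enum LoginMethod {
  EMAIL = "email",
  GOOGLE = "google",
  SAML = "saml"
}

const loginMethodToAuthMethod: Record<LoginMethod, AuthMethod[]> = {
  [LoginMethod.EMAIL]: [AuthMethod.EMAIL],
  [LoginMethod.GOOGLE]: [AuthMethod.GOOGLE],
  [LoginMethod.SAML]: [AuthMethod.AZURE_SAML]
};

// True when at least one enabled login method overlaps with a method
// the admin already authenticates with -- the lockout guard's condition.
const keepsAdminLoggedIn = (enabled: LoginMethod[], adminAuthMethods: AuthMethod[]) =>
  enabled.some((m) => loginMethodToAuthMethod[m].some((a) => adminAuthMethods.includes(a)));

console.log(keepsAdminLoggedIn([LoginMethod.GOOGLE], [AuthMethod.EMAIL])); // false -> update rejected
console.log(keepsAdminLoggedIn([LoginMethod.EMAIL, LoginMethod.GOOGLE], [AuthMethod.EMAIL])); // true
```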

View File

@@ -15,3 +15,13 @@ export type TAdminSignUpDTO = {
ip: string;
userAgent: string;
};
export enum LoginMethod {
EMAIL = "email",
GOOGLE = "google",
GITHUB = "github",
GITLAB = "gitlab",
SAML = "saml",
LDAP = "ldap",
OIDC = "oidc"
}

View File

@@ -22,7 +22,8 @@ export const userDALFactory = (db: TDbClient) => {
// -------------------------
const findUserEncKeyByUsername = async ({ username }: { username: string }) => {
try {
return await db(TableName.Users)
return await db
.replicaNode()(TableName.Users)
.where({
username,
isGhost: false
@@ -36,7 +37,7 @@ export const userDALFactory = (db: TDbClient) => {
const findUserEncKeyByUserIdsBatch = async ({ userIds }: { userIds: string[] }, tx?: Knex) => {
try {
return await (tx || db)(TableName.Users)
return await (tx || db.replicaNode())(TableName.Users)
.where({
isGhost: false
})
@@ -49,7 +50,8 @@ export const userDALFactory = (db: TDbClient) => {
const findUserEncKeyByUserId = async (userId: string) => {
try {
const user = await db(TableName.Users)
const user = await db
.replicaNode()(TableName.Users)
.where(`${TableName.Users}.id`, userId)
.join(TableName.UserEncryptionKey, `${TableName.Users}.id`, `${TableName.UserEncryptionKey}.userId`)
.first();
@@ -65,7 +67,8 @@ export const userDALFactory = (db: TDbClient) => {
const findUserByProjectMembershipId = async (projectMembershipId: string) => {
try {
return await db(TableName.ProjectMembership)
return await db
.replicaNode()(TableName.ProjectMembership)
.where({ [`${TableName.ProjectMembership}.id` as "id"]: projectMembershipId })
.join(TableName.Users, `${TableName.ProjectMembership}.userId`, `${TableName.Users}.id`)
.first();
@@ -76,7 +79,8 @@ export const userDALFactory = (db: TDbClient) => {
const findUsersByProjectMembershipIds = async (projectMembershipIds: string[]) => {
try {
return await db(TableName.ProjectMembership)
return await db
.replicaNode()(TableName.ProjectMembership)
.whereIn(`${TableName.ProjectMembership}.id`, projectMembershipIds)
.join(TableName.Users, `${TableName.ProjectMembership}.userId`, `${TableName.Users}.id`)
.select("*");
@@ -128,7 +132,7 @@ export const userDALFactory = (db: TDbClient) => {
// ---------------------
const findOneUserAction = (filter: TUserActionsUpdate, tx?: Knex) => {
try {
return (tx || db)(TableName.UserAction).where(filter).first("*");
return (tx || db.replicaNode())(TableName.UserAction).where(filter).first("*");
} catch (error) {
throw new DatabaseError({ error, name: "Find one user action" });
}

View File

@@ -8,6 +8,7 @@ import { SmtpTemplates, TSmtpService } from "@app/services/smtp/smtp-service";
import { TUserAliasDALFactory } from "@app/services/user-alias/user-alias-dal";
import { AuthMethod } from "../auth/auth-type";
import { TProjectMembershipDALFactory } from "../project-membership/project-membership-dal";
import { TUserDALFactory } from "./user-dal";
type TUserServiceFactoryDep = {
@@ -26,8 +27,9 @@ type TUserServiceFactoryDep = {
| "delete"
>;
userAliasDAL: Pick<TUserAliasDALFactory, "find" | "insertMany">;
orgMembershipDAL: Pick<TOrgMembershipDALFactory, "find" | "insertMany">;
orgMembershipDAL: Pick<TOrgMembershipDALFactory, "find" | "insertMany" | "findOne" | "updateById">;
tokenService: Pick<TAuthTokenServiceFactory, "createTokenForUser" | "validateTokenForUser">;
projectMembershipDAL: Pick<TProjectMembershipDALFactory, "find">;
smtpService: Pick<TSmtpService, "sendMail">;
};
@@ -37,6 +39,7 @@ export const userServiceFactory = ({
userDAL,
userAliasDAL,
orgMembershipDAL,
projectMembershipDAL,
tokenService,
smtpService
}: TUserServiceFactoryDep) => {
@@ -247,6 +250,51 @@ export const userServiceFactory = ({
return privateKey;
};
const getUserProjectFavorites = async (userId: string, orgId: string) => {
const orgMembership = await orgMembershipDAL.findOne({
userId,
orgId
});
if (!orgMembership) {
throw new BadRequestError({
message: "User does not belong in the organization."
});
}
return { projectFavorites: orgMembership.projectFavorites || [] };
};
const updateUserProjectFavorites = async (userId: string, orgId: string, projectIds: string[]) => {
const orgMembership = await orgMembershipDAL.findOne({
userId,
orgId
});
if (!orgMembership) {
throw new BadRequestError({
message: "User does not belong in the organization."
});
}
const matchingUserProjectMemberships = await projectMembershipDAL.find({
userId,
$in: {
projectId: projectIds
}
});
const memberProjectFavorites = matchingUserProjectMemberships.map(
(projectMembership) => projectMembership.projectId
);
const updatedOrgMembership = await orgMembershipDAL.updateById(orgMembership.id, {
projectFavorites: memberProjectFavorites
});
return updatedOrgMembership.projectFavorites;
};
return {
sendEmailVerificationCode,
verifyEmailVerificationCode,
@@ -258,6 +306,8 @@ export const userServiceFactory = ({
createUserAction,
getUserAction,
unlockUser,
getUserPrivateKey
getUserPrivateKey,
getUserProjectFavorites,
updateUserProjectFavorites
};
};
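`updateUserProjectFavorites` deliberately ignores any project ID the user is not actually a member of: the requested IDs are intersected with the user's project memberships before the org membership row is updated. The filtering step in isolation (toy data, not the DAL's return shape):

```typescript
type TProjectMembership = { projectId: string };

// Memberships the DAL lookup would return for the requesting user.
const memberships: TProjectMembership[] = [{ projectId: "proj-a" }, { projectId: "proj-c" }];

// Requested favorites; "proj-b" is silently dropped since the user
// has no membership in that project.
const requested = ["proj-a", "proj-b"];
const favorites = memberships
  .map((m) => m.projectId)
  .filter((id) => requested.includes(id));

console.log(favorites); // ["proj-a"]
```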

View File

@@ -22,7 +22,7 @@ export const webhookDALFactory = (db: TDbClient) => {
const find = async (filter: Partial<TWebhooks>, tx?: Knex) => {
try {
const docs = await webhookFindQuery(tx || db, filter);
const docs = await webhookFindQuery(tx || db.replicaNode(), filter);
return docs.map(({ envId, envSlug, envName, ...el }) => ({
...el,
envId,
@@ -39,7 +39,7 @@ export const webhookDALFactory = (db: TDbClient) => {
const findOne = async (filter: Partial<TWebhooks>, tx?: Knex) => {
try {
const doc = await webhookFindQuery(tx || db, filter).first();
const doc = await webhookFindQuery(tx || db.replicaNode(), filter).first();
if (!doc) return;
const { envName: name, envSlug: slug, envId: id, ...el } = doc;
@@ -51,7 +51,7 @@ export const webhookDALFactory = (db: TDbClient) => {
const findById = async (id: string, tx?: Knex) => {
try {
const doc = await webhookFindQuery(tx || db, {
const doc = await webhookFindQuery(tx || db.replicaNode(), {
[`${TableName.Webhook}.id` as "id"]: id
}).first();
if (!doc) return;
@@ -65,7 +65,7 @@ export const webhookDALFactory = (db: TDbClient) => {
const findAllWebhooks = async (projectId: string, environment?: string, secretPath?: string, tx?: Knex) => {
try {
const webhooks = await (tx || db)(TableName.Webhook)
const webhooks = await (tx || db.replicaNode())(TableName.Webhook)
.where(`${TableName.Environment}.projectId`, projectId)
.where((qb) => {
if (environment) {

View File

@@ -4,55 +4,63 @@ import { AxiosError } from "axios";
import picomatch from "picomatch";
import { SecretKeyEncoding, TWebhooks } from "@app/db/schemas";
import { getConfig } from "@app/lib/config/env";
import { request } from "@app/lib/config/request";
import { decryptSymmetric, decryptSymmetric128BitHexKeyUTF8 } from "@app/lib/crypto";
import { infisicalSymmetricDecrypt } from "@app/lib/crypto/encryption";
import { BadRequestError } from "@app/lib/errors";
import { logger } from "@app/lib/logger";
import { TProjectEnvDALFactory } from "../project-env/project-env-dal";
import { TWebhookDALFactory } from "./webhook-dal";
import { WebhookType } from "./webhook-types";
const WEBHOOK_TRIGGER_TIMEOUT = 15 * 1000;
export const triggerWebhookRequest = async (
{ url, encryptedSecretKey, iv, tag, keyEncoding }: TWebhooks,
data: Record<string, unknown>
) => {
const headers: Record<string, string> = {};
const payload = { ...data, timestamp: Date.now() };
const appCfg = getConfig();
export const decryptWebhookDetails = (webhook: TWebhooks) => {
const { keyEncoding, iv, encryptedSecretKey, tag, urlCipherText, urlIV, urlTag, url } = webhook;
let decryptedSecretKey = "";
let decryptedUrl = url;
if (encryptedSecretKey) {
const encryptionKey = appCfg.ENCRYPTION_KEY;
const rootEncryptionKey = appCfg.ROOT_ENCRYPTION_KEY;
let secretKey;
if (rootEncryptionKey && keyEncoding === SecretKeyEncoding.BASE64) {
// case: encoding scheme is base64
secretKey = decryptSymmetric({
ciphertext: encryptedSecretKey,
iv: iv as string,
tag: tag as string,
key: rootEncryptionKey
});
} else if (encryptionKey && keyEncoding === SecretKeyEncoding.UTF8) {
// case: encoding scheme is utf8
secretKey = decryptSymmetric128BitHexKeyUTF8({
ciphertext: encryptedSecretKey,
iv: iv as string,
tag: tag as string,
key: encryptionKey
});
}
if (secretKey) {
const webhookSign = crypto.createHmac("sha256", secretKey).update(JSON.stringify(payload)).digest("hex");
headers["x-infisical-signature"] = `t=${payload.timestamp};${webhookSign}`;
}
decryptedSecretKey = infisicalSymmetricDecrypt({
keyEncoding: keyEncoding as SecretKeyEncoding,
ciphertext: encryptedSecretKey,
iv: iv as string,
tag: tag as string
});
}
if (urlCipherText) {
decryptedUrl = infisicalSymmetricDecrypt({
keyEncoding: keyEncoding as SecretKeyEncoding,
ciphertext: urlCipherText,
iv: urlIV as string,
tag: urlTag as string
});
}
return {
secretKey: decryptedSecretKey,
url: decryptedUrl
};
};
export const triggerWebhookRequest = async (webhook: TWebhooks, data: Record<string, unknown>) => {
const headers: Record<string, string> = {};
const payload = { ...data, timestamp: Date.now() };
const { secretKey, url } = decryptWebhookDetails(webhook);
if (secretKey) {
const webhookSign = crypto.createHmac("sha256", secretKey).update(JSON.stringify(payload)).digest("hex");
headers["x-infisical-signature"] = `t=${payload.timestamp};${webhookSign}`;
}
const req = await request.post(url, payload, {
headers,
timeout: WEBHOOK_TRIGGER_TIMEOUT,
signal: AbortSignal.timeout(WEBHOOK_TRIGGER_TIMEOUT)
});
return req;
};
@@ -60,15 +68,48 @@ export const getWebhookPayload = (
eventName: string,
workspaceId: string,
environment: string,
secretPath?: string
) => ({
event: eventName,
project: {
workspaceId,
environment,
secretPath
secretPath?: string,
type?: string | null
) => {
switch (type) {
case WebhookType.SLACK:
return {
text: "A secret value has been added or modified.",
attachments: [
{
color: "#E7F256",
fields: [
{
title: "Workspace ID",
value: workspaceId,
short: false
},
{
title: "Environment",
value: environment,
short: false
},
{
title: "Secret Path",
value: secretPath,
short: false
}
]
}
]
};
case WebhookType.GENERAL:
default:
return {
event: eventName,
project: {
workspaceId,
environment,
secretPath
}
};
}
});
};
export type TFnTriggerWebhookDTO = {
projectId: string;
@@ -95,9 +136,10 @@ export const fnTriggerWebhook = async ({
logger.info("Secret webhook job started", { environment, secretPath, projectId });
const webhooksTriggered = await Promise.allSettled(
toBeTriggeredHooks.map((hook) =>
triggerWebhookRequest(hook, getWebhookPayload("secrets.modified", projectId, environment, secretPath))
triggerWebhookRequest(hook, getWebhookPayload("secrets.modified", projectId, environment, secretPath, hook.type))
)
);
// filter hooks by status
const successWebhooks = webhooksTriggered
.filter(({ status }) => status === "fulfilled")
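Since `triggerWebhookRequest` signs the JSON body with HMAC-SHA256 and sends `x-infisical-signature: t=<timestamp>;<hex digest>`, a receiving endpoint can verify authenticity by recomputing the digest over the raw body with the shared secret. A minimal receiver-side sketch under those assumptions (standard Node `crypto` only; constant-time comparison to avoid timing leaks):

```typescript
import crypto from "crypto";

// Verifies the `x-infisical-signature` header produced above.
// `rawBody` must be the exact JSON string that was signed.
const verifyWebhookSignature = (rawBody: string, header: string, secretKey: string): boolean => {
  const [tPart, receivedSig] = header.split(";");
  if (!tPart?.startsWith("t=") || !receivedSig) return false;

  // Recompute the digest over the exact bytes that were signed.
  const expectedSig = crypto.createHmac("sha256", secretKey).update(rawBody).digest("hex");
  const a = Buffer.from(receivedSig, "hex");
  const b = Buffer.from(expectedSig, "hex");
  // Constant-time comparison; optionally also check t= freshness to limit replay.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
};
```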

View File

@@ -1,15 +1,14 @@
import { ForbiddenError } from "@casl/ability";
import { SecretEncryptionAlgo, SecretKeyEncoding, TWebhooksInsert } from "@app/db/schemas";
import { TWebhooksInsert } from "@app/db/schemas";
import { TPermissionServiceFactory } from "@app/ee/services/permission/permission-service";
import { ProjectPermissionActions, ProjectPermissionSub } from "@app/ee/services/permission/project-permission";
import { getConfig } from "@app/lib/config/env";
import { encryptSymmetric, encryptSymmetric128BitHexKeyUTF8 } from "@app/lib/crypto";
import { infisicalSymmetricEncypt } from "@app/lib/crypto/encryption";
import { BadRequestError } from "@app/lib/errors";
import { TProjectEnvDALFactory } from "../project-env/project-env-dal";
import { TWebhookDALFactory } from "./webhook-dal";
import { getWebhookPayload, triggerWebhookRequest } from "./webhook-fns";
import { decryptWebhookDetails, getWebhookPayload, triggerWebhookRequest } from "./webhook-fns";
import {
TCreateWebhookDTO,
TDeleteWebhookDTO,
@@ -36,7 +35,8 @@ export const webhookServiceFactory = ({ webhookDAL, projectEnvDAL, permissionSer
webhookUrl,
environment,
secretPath,
webhookSecretKey
webhookSecretKey,
type
}: TCreateWebhookDTO) => {
const { permission } = await permissionService.getProjectPermission(
actor,
@@ -50,30 +50,29 @@ export const webhookServiceFactory = ({ webhookDAL, projectEnvDAL, permissionSer
if (!env) throw new BadRequestError({ message: "Env not found" });
const insertDoc: TWebhooksInsert = {
url: webhookUrl,
url: "", // deprecated - we are moving away from plaintext URLs
envId: env.id,
isDisabled: false,
secretPath: secretPath || "/"
secretPath: secretPath || "/",
type
};
if (webhookSecretKey) {
const appCfg = getConfig();
const encryptionKey = appCfg.ENCRYPTION_KEY;
const rootEncryptionKey = appCfg.ROOT_ENCRYPTION_KEY;
if (rootEncryptionKey) {
const { ciphertext, iv, tag } = encryptSymmetric(webhookSecretKey, rootEncryptionKey);
insertDoc.encryptedSecretKey = ciphertext;
insertDoc.iv = iv;
insertDoc.tag = tag;
insertDoc.algorithm = SecretEncryptionAlgo.AES_256_GCM;
insertDoc.keyEncoding = SecretKeyEncoding.BASE64;
} else if (encryptionKey) {
const { ciphertext, iv, tag } = encryptSymmetric128BitHexKeyUTF8(webhookSecretKey, encryptionKey);
insertDoc.encryptedSecretKey = ciphertext;
insertDoc.iv = iv;
insertDoc.tag = tag;
insertDoc.algorithm = SecretEncryptionAlgo.AES_256_GCM;
insertDoc.keyEncoding = SecretKeyEncoding.UTF8;
}
const { ciphertext, iv, tag, algorithm, encoding } = infisicalSymmetricEncypt(webhookSecretKey);
insertDoc.encryptedSecretKey = ciphertext;
insertDoc.iv = iv;
insertDoc.tag = tag;
insertDoc.algorithm = algorithm;
insertDoc.keyEncoding = encoding;
}
if (webhookUrl) {
const { ciphertext, iv, tag, algorithm, encoding } = infisicalSymmetricEncypt(webhookUrl);
insertDoc.urlCipherText = ciphertext;
insertDoc.urlIV = iv;
insertDoc.urlTag = tag;
insertDoc.algorithm = algorithm;
insertDoc.keyEncoding = encoding;
}
const webhook = await webhookDAL.create(insertDoc);
@@ -131,7 +130,7 @@ export const webhookServiceFactory = ({ webhookDAL, projectEnvDAL, permissionSer
try {
await triggerWebhookRequest(
webhook,
getWebhookPayload("test", webhook.projectId, webhook.environment.slug, webhook.secretPath)
getWebhookPayload("test", webhook.projectId, webhook.environment.slug, webhook.secretPath, webhook.type)
);
} catch (err) {
webhookError = (err as Error).message;
@@ -162,7 +161,14 @@ export const webhookServiceFactory = ({ webhookDAL, projectEnvDAL, permissionSer
);
ForbiddenError.from(permission).throwUnlessCan(ProjectPermissionActions.Read, ProjectPermissionSub.Webhooks);
return webhookDAL.findAllWebhooks(projectId, environment, secretPath);
const webhooks = await webhookDAL.findAllWebhooks(projectId, environment, secretPath);
return webhooks.map((w) => {
const { url } = decryptWebhookDetails(w);
return {
...w,
url
};
});
};
return {

View File

@@ -5,6 +5,7 @@ export type TCreateWebhookDTO = {
secretPath?: string;
webhookUrl: string;
webhookSecretKey?: string;
type: string;
} & TProjectPermission;
export type TUpdateWebhookDTO = {
@@ -24,3 +25,8 @@ export type TListWebhookDTO = {
environment?: string;
secretPath?: string;
} & TProjectPermission;
export enum WebhookType {
GENERAL = "general",
SLACK = "slack"
}

View File

@@ -885,7 +885,7 @@ func SetEncryptedSecrets(secretArgs []string, secretType string, environmentName
}
// Key and value from argument
key := splitKeyValueFromArg[0]
key := strings.TrimSpace(splitKeyValueFromArg[0])
value := splitKeyValueFromArg[1]
hashedKey := fmt.Sprintf("%x", sha256.Sum256([]byte(key)))

View File

@@ -10,4 +10,8 @@ To request time off, just submit a request in Rippling and let Maidul know at le
## National holidays
Since Infisical's team is globally distributed, it is hard for us to keep track of all the various national holidays across many different countries. Whether you'd like to celebrate Christmas or National Brisket Day (which, by the way, is on May 28th), you are welcome to take PTO on those days; just let Maidul know at least a week ahead so that we can adjust our planning.
## Winter Break
Every year, the Infisical team goes on a company-wide vacation during the winter holidays. This year, the winter break period starts on December 21st, 2024 and ends on January 5th, 2025. You should expect to do no scheduled work during this period, but we will have a rotation process for [high and urgent service disruptions](https://infisical.com/sla).

View File

@@ -64,5 +64,10 @@
],
"integrations": {
"intercom": "hsg644ru"
},
"analytics": {
"koala": {
"publicApiKey": "pk_b50d7184e0e39ddd5cdb43cf6abeadd9b97d"
}
}
}

View File

@@ -10,7 +10,6 @@
#sidebar {
left: 0;
padding-left: 48px;
padding-right: 30px;
border-right: 1px;
border-color: #cdd64b;
@@ -18,6 +17,10 @@
border-right: 1px solid #ebebeb;
}
#sidebar-content {
padding-left: 2rem;
}
#sidebar .relative .sticky {
opacity: 0;
}

View File

@@ -0,0 +1,191 @@
version: "3.9"
services:
nginx:
container_name: infisical-dev-nginx
image: nginx
restart: always
ports:
- 8080:80
volumes:
- ./nginx/default.dev.conf:/etc/nginx/conf.d/default.conf:ro
depends_on:
- backend
- frontend
db:
image: bitnami/postgresql:14
ports:
- "5432:5432"
volumes:
- postgres-data:/var/lib/postgresql/data
environment:
POSTGRESQL_PASSWORD: infisical
POSTGRESQL_USERNAME: infisical
POSTGRESQL_DATABASE: infisical
POSTGRESQL_REPLICATION_MODE: master
POSTGRESQL_REPLICATION_USER: repl_user
POSTGRESQL_REPLICATION_PASSWORD: repl_password
POSTGRESQL_SYNCHRONOUS_COMMIT_MODE: "on"
POSTGRESQL_NUM_SYNCHRONOUS_REPLICAS: 1
db-slave:
image: bitnami/postgresql:14
ports:
- "5433:5432"
volumes:
- postgres-slave-data:/var/lib/postgresql/data
environment:
POSTGRESQL_PASSWORD: infisical
POSTGRESQL_USERNAME: infisical
POSTGRESQL_DATABASE: infisical
POSTGRESQL_REPLICATION_MODE: slave
POSTGRESQL_REPLICATION_USER: repl_user
POSTGRESQL_REPLICATION_PASSWORD: repl_password
POSTGRESQL_MASTER_HOST: db
POSTGRESQL_MASTER_PORT_NUMBER: 5432
redis:
image: redis
container_name: infisical-dev-redis
environment:
- ALLOW_EMPTY_PASSWORD=yes
ports:
- 6379:6379
volumes:
- redis_data:/data
redis-commander:
container_name: infisical-dev-redis-commander
image: rediscommander/redis-commander
restart: always
depends_on:
- redis
environment:
- REDIS_HOSTS=local:redis:6379
ports:
- "8085:8081"
db-test:
profiles: ["test"]
image: postgres:14-alpine
ports:
- "5430:5432"
environment:
POSTGRES_PASSWORD: infisical
POSTGRES_USER: infisical
POSTGRES_DB: infisical-test
db-migration:
container_name: infisical-db-migration
depends_on:
- db
build:
context: ./backend
dockerfile: Dockerfile.dev
env_file: .env
environment:
- DB_CONNECTION_URI=postgres://infisical:infisical@db/infisical?sslmode=disable
command: npm run migration:latest
volumes:
- ./backend/src:/app/src
backend:
container_name: infisical-dev-api
build:
context: ./backend
dockerfile: Dockerfile.dev
depends_on:
db:
condition: service_started
redis:
condition: service_started
db-migration:
condition: service_completed_successfully
env_file:
- .env
ports:
- 4000:4000
environment:
- NODE_ENV=development
- DB_CONNECTION_URI=postgres://infisical:infisical@db/infisical?sslmode=disable
- TELEMETRY_ENABLED=false
volumes:
- ./backend/src:/app/src
extra_hosts:
- "host.docker.internal:host-gateway"
frontend:
container_name: infisical-dev-frontend
restart: unless-stopped
depends_on:
- backend
build:
context: ./frontend
dockerfile: Dockerfile.dev
volumes:
- ./frontend/src:/app/src/ # mount the whole src directory so new files still trigger reloads
- ./frontend/public:/app/public
env_file: .env
environment:
- NEXT_PUBLIC_ENV=development
- INFISICAL_TELEMETRY_ENABLED=false
pgadmin:
image: dpage/pgadmin4
restart: always
environment:
PGADMIN_DEFAULT_EMAIL: admin@example.com
PGADMIN_DEFAULT_PASSWORD: pass
ports:
- 5050:80
depends_on:
- db
smtp-server:
container_name: infisical-dev-smtp-server
image: lytrax/mailhog:latest # https://github.com/mailhog/MailHog/issues/353#issuecomment-821137362
restart: always
logging:
driver: "none" # disable saving logs
ports:
- 1025:1025 # SMTP server
- 8025:8025 # Web UI
openldap: # note: more advanced configuration is available
image: osixia/openldap:1.5.0
restart: always
environment:
LDAP_ORGANISATION: Acme
LDAP_DOMAIN: acme.com
LDAP_ADMIN_PASSWORD: admin
ports:
- 389:389
- 636:636
volumes:
- ldap_data:/var/lib/ldap
- ldap_config:/etc/ldap/slapd.d
profiles: [ldap]
phpldapadmin: # username: cn=admin,dc=acme,dc=com, pass is admin
image: osixia/phpldapadmin:latest
restart: always
environment:
- PHPLDAPADMIN_LDAP_HOSTS=openldap
- PHPLDAPADMIN_HTTPS=false
ports:
- 6433:80
depends_on:
- openldap
profiles: [ldap]
volumes:
postgres-data:
driver: local
postgres-slave-data:
driver: local
redis_data:
driver: local
ldap_data:
ldap_config:
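With the master on 5432 and the Bitnami replica on 5433, this dev stack gives the `replicaNode()` read-routing changes something real to hit locally. A rough sketch, under the assumption of a Knex-based wrapper, of wiring a primary plus replica pair the way the DAL diffs consume it (this is illustrative glue, not Infisical's actual bootstrap code):

```typescript
import knex from "knex";

const primary = knex({
  client: "pg",
  connection: "postgres://infisical:infisical@localhost:5432/infisical"
});
const replica = knex({
  client: "pg",
  connection: "postgres://infisical:infisical@localhost:5433/infisical"
});

// db(...) hits the primary; db.replicaNode()(...) reads from the
// streaming replica started by the db-slave service above.
const db = Object.assign(primary, { replicaNode: () => replica });
```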

View File

@@ -0,0 +1,118 @@
---
title: "MS SQL"
description: "How to dynamically generate MS SQL database users."
---
The Infisical MS SQL dynamic secret allows you to generate Microsoft SQL Server database credentials on demand, based on a configured role.
## Prerequisite
Create a user with the required permissions in your SQL Server instance. This user will be used to create new accounts on demand.
## Set up Dynamic Secrets with MS SQL
<Steps>
<Step title="Open Secret Overview Dashboard">
Open the Secret Overview dashboard and select the environment in which you would like to add a dynamic secret.
</Step>
<Step title="Click on the 'Add Dynamic Secret' button">
![Add Dynamic Secret Button](../../../images/platform/dynamic-secrets/add-dynamic-secret-button.png)
</Step>
<Step title="Select `SQL Database`">
![Dynamic Secret Modal](../../../images/platform/dynamic-secrets/dynamic-secret-modal.png)
</Step>
<Step title="Provide the inputs for dynamic secret parameters">
<ParamField path="Secret Name" type="string" required>
Name by which you want the secret to be referenced
</ParamField>
<ParamField path="Default TTL" type="string" required>
Default time-to-live for a generated secret (it is possible to modify this value when a secret is generated)
</ParamField>
<ParamField path="Max TTL" type="string" required>
Maximum time-to-live for a generated secret
</ParamField>
<ParamField path="Service" type="string" required>
Choose the service you want to generate dynamic secrets for. This must be selected as **MS SQL**.
</ParamField>
<ParamField path="Host" type="string" required>
Database host
</ParamField>
<ParamField path="Port" type="number" required>
Database port
</ParamField>
<ParamField path="User" type="string" required>
Username that will be used to create dynamic secrets
</ParamField>
<ParamField path="Password" type="string" required>
Password that will be used to create dynamic secrets
</ParamField>
<ParamField path="Database Name" type="string" required>
Name of the database for which you want to create dynamic secrets
</ParamField>
<ParamField path="CA(SSL)" type="string">
A CA certificate may be required if your database requires it for incoming connections. AWS RDS instances with default settings require a CA certificate, which can be downloaded [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.CertificatesAllRegions).
</ParamField>
![Dynamic Secret Setup Modal](../../../images/platform/dynamic-secrets/dynamic-secret-setup-modal-mssql.png)
</Step>
<Step title="(Optional) Modify SQL Statements">
If you want to provide specific privileges for the generated dynamic credentials, you can modify the SQL statements to your needs. This is useful if you want to grant access only to specific tables; a hedged example of such statements is sketched after these steps.
![Modify SQL Statements Modal](../../../images/platform/dynamic-secrets/modify-sql-statements-mssql.png)
</Step>
<Step title="Click 'Submit'">
After submitting the form, you will see a dynamic secret created in the dashboard.
<Note>
If this step fails, you may have to add the CA certificate.
</Note>
![Dynamic Secret](../../../images/platform/dynamic-secrets/dynamic-secret.png)
</Step>
<Step title="Generate dynamic secrets">
Once you've successfully configured the dynamic secret, you're ready to generate on-demand credentials.
To do this, simply click on the 'Generate' button which appears when hovering over the dynamic secret item.
Alternatively, you can initiate the creation of a new lease by selecting 'New Lease' from the dynamic secret lease list section.
![Dynamic Secret](/images/platform/dynamic-secrets/dynamic-secret-generate.png)
![Dynamic Secret](/images/platform/dynamic-secrets/dynamic-secret-lease-empty.png)
When generating these secrets, it's important to specify a Time-to-Live (TTL) duration. This will dictate how long the credentials are valid for.
![Provision Lease](/images/platform/dynamic-secrets/provision-lease.png)
<Tip>
Ensure that the TTL for the lease falls within the maximum TTL defined when configuring the dynamic secret.
</Tip>
Once you click the `Submit` button, a new secret lease will be generated and the credentials for it will be shown to you.
![Provision Lease](/images/platform/dynamic-secrets/lease-values.png)
</Step>
</Steps>
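Below is a hedged example of what customized statements for MS SQL might look like, wrapped as template strings. The `{{username}}` and `{{password}}` placeholders are the template variables Infisical substitutes at lease time; the specific grants are illustrative only, not a recommended policy.

```typescript
// Hedged example only: statement shapes for an MS SQL dynamic secret.
// {{username}} and {{password}} are template variables substituted by
// Infisical when a lease is generated; adjust the grants to your needs.
const creationStatement = `
  CREATE LOGIN [{{username}}] WITH PASSWORD = '{{password}}';
  CREATE USER [{{username}}] FOR LOGIN [{{username}}];
  GRANT SELECT ON SCHEMA::dbo TO [{{username}}];
`;

const revocationStatement = `
  DROP USER [{{username}}];
  DROP LOGIN [{{username}}];
`;
```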
## Audit or Revoke Leases
Once you have created one or more leases, you will be able to access them by clicking on the respective dynamic secret item on the dashboard.
This will allow you to see the expiration time of each lease and to delete a lease before its set time to live.
![Provision Lease](/images/platform/dynamic-secrets/lease-data.png)
## Renew Leases
To extend the life of a generated dynamic secret lease past its initial time to live, simply click on **Renew** as illustrated below.
![Provision Lease](/images/platform/dynamic-secrets/dynamic-secret-lease-renew.png)
<Warning>
Lease renewals cannot exceed the maximum TTL set when configuring the dynamic secret.
</Warning>

Some files were not shown because too many files have changed in this diff.