Compare commits

...

128 Commits

Author SHA1 Message Date
Daniel Hougaard
6685f8aa0a fix(identity): remove access tokens when auth method is removed 2024-11-19 22:24:17 +04:00
Maidul Islam
54f3f94185 Merge pull request #2741 from phamleduy04/sort-repo-github-intergration-app
Add sort to GitHub integration dropdown box
2024-11-19 11:46:43 -05:00
Scott Wilson
907537f7c0 Merge pull request #2755 from Infisical/empty-secret-value-fixes
Fix: Handle Empty Secret Values in Update, Bulk Create and Bulk Update Secret(s)
2024-11-19 08:45:38 -08:00
Scott Wilson
61263b9384 fix: unhandled empty value in bulk create/insert secrets 2024-11-19 08:30:58 -08:00
Scott Wilson
b6d8be2105 fix: handle empty string to allow clearing secret on update 2024-11-19 08:16:30 -08:00
Maidul Islam
61d516ef35 Merge pull request #2754 from Infisical/daniel/azure-auth-better-error 2024-11-19 09:00:23 -05:00
Daniel Hougaard
31fc64fb4c Update identity-azure-auth-service.ts 2024-11-19 17:54:31 +04:00
Maidul Islam
8bf7e4c4d1 Merge pull request #2743 from akhilmhdh/fix/auth-method-migration
fix: migration in loop due to corner case
2024-11-18 16:01:04 -05:00
=
2027d4b44e feat: moved auth method deletion to top 2024-11-19 02:17:25 +05:30
Maidul Islam
d401c9074e Merge pull request #2715 from Infisical/misc/finalize-org-migration-script
misc: finalize org migration script
2024-11-18 14:15:20 -05:00
Sheen
afe35dbbb5 Merge pull request #2747 from Infisical/misc/finalized-design-of-totp-registration
misc: finalized design of totp registration
2024-11-19 02:13:54 +08:00
Maidul Islam
6ff1602fd5 Merge pull request #2708 from Infisical/misc/oidc-setup-extra-handling
misc: added OIDC error and edge-case handling
2024-11-18 10:56:09 -05:00
Maidul Islam
6603364749 Merge pull request #2750 from Infisical/daniel/migrate-unlock-command
fix: add migration unlock command
2024-11-18 10:28:43 -05:00
Daniel Hougaard
53bea22b85 fix: added unlock command 2024-11-18 19:22:43 +04:00
Sheen
d521ee7b7e Merge pull request #2748 from Infisical/misc/address-role-slugs-issue-invite-user-endpoint
misc: address role slug issue in invite user endpoint
2024-11-18 21:58:31 +08:00
Sheen Capadngan
827931e416 misc: addressed comment 2024-11-18 21:52:36 +08:00
Sheen Capadngan
faa83344a7 misc: address role slug issue in invite user endpoint 2024-11-18 21:43:06 +08:00
Sheen Capadngan
089a7e880b misc: added message for bypass 2024-11-18 17:29:01 +08:00
Sheen Capadngan
64ec741f1a misc: updated documentation totp ui 2024-11-18 17:24:03 +08:00
Sheen Capadngan
c98233ddaf misc: finalized design of totp registration 2024-11-18 17:14:21 +08:00
Maidul Islam
ae17981c41 Merge pull request #2746 from Infisical/vmatsiiako-changelog-patch-1
added handbook updates
2024-11-17 23:44:49 -05:00
Vladyslav Matsiiako
6c49c7da3c added handbook updates 2024-11-17 23:43:57 -05:00
Vlad Matsiiako
2de04b6fe5 Merge pull request #2745 from Infisical/vmatsiiako-docs-patch-1-1
Fix typo in docs
2024-11-17 23:01:15 -05:00
Vlad Matsiiako
5c9ec1e4be Fix typo in docs 2024-11-17 09:55:32 -05:00
Sheen
ba89491d4c Merge pull request #2731 from Infisical/feat/totp-authenticator
feat: TOTP authenticator
2024-11-16 11:58:39 +08:00
Maidul Islam
483e596a7a Merge pull request #2744 from Infisical/daniel/npm-cli-windows-fix
fix: NPM-based CLI Windows symlink
2024-11-15 15:37:32 -07:00
Daniel Hougaard
65f122bd41 Update index.cjs 2024-11-16 01:37:43 +04:00
Sheen Capadngan
682b552fdc misc: addressed remaining comments 2024-11-16 03:15:39 +08:00
=
d4cfd0b6ed fix: migration in loop due to corner case 2024-11-16 00:37:57 +05:30
Duy Pham Le
e8f09d2c7b fix(ui): add sort to github integration dropdown box 2024-11-15 10:26:38 -06:00
Sheen Capadngan
774371a218 misc: added mention of authenticator in the docs 2024-11-16 00:10:56 +08:00
Sheen Capadngan
c4b54de303 misc: migrated to switch component 2024-11-15 23:49:20 +08:00
Sheen Capadngan
433971a72d misc: addressed comments 1 2024-11-15 23:25:32 +08:00
Maidul Islam
4acf9413f0 Merge pull request #2737 from Infisical/backfill-identity-metadata
Fix: Handle Missing User/Identity Metadata Keys in Permissions Check
2024-11-15 01:34:45 -07:00
Maidul Islam
f0549cab98 Merge pull request #2739 from Infisical/fix-ca-alert-migrations
only create triggers when creating a new table
2024-11-15 00:56:39 -07:00
Maidul Islam
d75e49dce5 update trigger to only create if it doesn't exist 2024-11-15 00:52:08 -07:00
Maidul Islam
8819abd710 only create triggers when creating a new table 2024-11-15 00:42:30 -07:00
Maidul Islam
796f76da46 Merge pull request #2738 from Infisical/fix-cert-migration
Fix ca version migration
2024-11-14 23:20:09 -07:00
Maidul Islam
d6e1ed4d1e revert docker compose changes 2024-11-14 23:10:54 -07:00
Maidul Islam
1295b68d80 Fix ca version migration
We didn't check whether the column already exists. Because of this, we get this error during migrations:

```
| migration file "20240802181855_ca-cert-version.ts" failed
infisical-db-migration  | migration failed with error: alter table "certificates" add column "caCertId" uuid null - column "caCertId" of relation "certificates" already exists
```
2024-11-14 23:07:30 -07:00
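The corresponding fix (visible in the migration diff further below) guards the column addition so the migration is idempotent. A minimal sketch of that guard pattern in Knex, using the table and column names from the error above; the foreign-key target table name is an assumption, since the real migration references the repo's TableName enum:

```ts
import { Knex } from "knex";

// Idempotent migration sketch: only add "caCertId" when it is missing, so a
// re-run does not fail with `column "caCertId" ... already exists`.
export async function up(knex: Knex): Promise<void> {
  const hasCaCertId = await knex.schema.hasColumn("certificates", "caCertId");
  if (!hasCaCertId) {
    await knex.schema.alterTable("certificates", (t) => {
      t.uuid("caCertId").nullable();
      // FK target name is illustrative; the real migration uses TableName.CertificateAuthorityCert
      t.foreign("caCertId").references("id").inTable("certificate_authority_certs");
    });
  }
}
```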
Scott Wilson
c79f84c064 fix: use proxy on metadata permissions check to handle missing keys 2024-11-14 11:36:07 -08:00
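A minimal sketch of the Proxy pattern this commit's subject describes, under the assumption that permission rules read `metadata.<key>` and a missing key previously surfaced as `undefined`; the names here are hypothetical:

```ts
type Metadata = Record<string, string>;

// Wrap a metadata object so lookups of absent keys resolve to "" instead of
// undefined, keeping permission-rule interpolation from breaking.
const withMetadataDefaults = (metadata: Metadata): Metadata =>
  new Proxy(metadata, {
    get: (target, prop) => (typeof prop === "string" && prop in target ? target[prop] : "")
  });

const metadata = withMetadataDefaults({ team: "platform" });
console.log(metadata.team);   // "platform"
console.log(metadata.region); // "" rather than undefined
```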
BlackMagiq
d0c50960ef Merge pull request #2735 from Infisical/doc/add-gitlab-oidc-auth-documentation
doc: add docs for gitlab oidc auth
2024-11-14 10:44:01 -07:00
Sheen
85089a08e1 Merge pull request #2736 from Infisical/misc/update-login-self-hosting-label
misc: updated login self-hosting label to include dedicated
2024-11-15 01:41:45 +08:00
Sheen Capadngan
bf97294dad misc: added idp label 2024-11-15 01:41:20 +08:00
Sheen Capadngan
4053078d95 misc: updated login self-hosting label for dedicated 2024-11-15 01:36:33 +08:00
Sheen Capadngan
4ba3899861 doc: add docs for gitlab oidc auth 2024-11-15 01:07:36 +08:00
Sheen Capadngan
6bae3628c0 misc: readded saml email error 2024-11-14 19:37:13 +08:00
Sheen Capadngan
4cb935dae7 misc: addressed signupinvite issue 2024-11-14 19:10:21 +08:00
Maidul Islam
ccad684ab2 Merge pull request #2734 from Infisical/docs-for-linux-ha
Linux HA reference architecture
2024-11-14 02:04:13 -07:00
Maidul Islam
fd77708cad add docs for linux ha 2024-11-14 02:02:23 -07:00
Maidul Islam
9aebd712d1 Merge pull request #2732 from Infisical/daniel/npm-cli-fixes
fix: cli npm release windows and symlink bugs
2024-11-13 20:58:22 -07:00
Daniel Hougaard
05f07b25ac fix: cli npm release windows and symlink bugs 2024-11-14 06:13:14 +04:00
Sheen Capadngan
5b0dbf04b2 misc: minor ui 2024-11-14 03:22:02 +08:00
Sheen Capadngan
b050db84ab feat: added totp support for cli 2024-11-14 02:27:33 +08:00
Sheen Capadngan
8fef6911f1 misc: addressed lint 2024-11-14 01:25:23 +08:00
Sheen Capadngan
44ba31a743 misc: added org mfa settings update and other fixes 2024-11-14 01:16:15 +08:00
Sheen Capadngan
6bdbac4750 feat: initial implementation for totp authenticator 2024-11-14 00:07:35 +08:00
Scott Wilson
60fb195706 Merge pull request #2726 from Infisical/scott/paste-secrets
Feat: Paste Secrets for Upload
2024-11-12 17:57:13 -08:00
Scott Wilson
c8109b4e84 improvement: add example paste value formats 2024-11-12 16:46:35 -08:00
Scott Wilson
1f2b0443cc improvement: address requested changes 2024-11-12 16:11:47 -08:00
Daniel Hougaard
dd1cabf9f6 Merge pull request #2727 from Infisical/daniel/fix-npm-cli-symlink
fix: npm cli symlink
2024-11-12 22:47:01 +04:00
Daniel Hougaard
8b781b925a fix: npm cli symlink 2024-11-12 22:45:37 +04:00
Scott Wilson
ddcf5b576b improvement: improve field error message 2024-11-12 10:25:23 -08:00
Scott Wilson
7138b392f2 Feature: add ability to paste .env, .yml or .json secrets for upload and also fix upload when keys conflict but are not on current page 2024-11-12 10:21:07 -08:00
Daniel Hougaard
bfce1021fb Merge pull request #1076 from G3root/infisical-npm
feat: infisical cli for npm
2024-11-12 21:48:47 +04:00
Daniel Hougaard
93c0313b28 docs: added NPM install option 2024-11-12 21:48:04 +04:00
Daniel Hougaard
8cfc217519 Update README.md 2024-11-12 21:38:34 +04:00
Akhil Mohan
d272c6217a Merge pull request #2722 from Infisical/scott/secret-refrence-fixes
Fix: Secret Reference Multiple References and Special Character Stripping
2024-11-12 22:49:18 +05:30
Daniel Hougaard
2fe2ddd9fc Update package.json 2024-11-12 21:17:53 +04:00
Daniel Hougaard
e330ddd5ee fix: remove dry run 2024-11-12 20:56:18 +04:00
Daniel Hougaard
7aba9c1a50 Update index.cjs 2024-11-12 20:54:55 +04:00
Daniel Hougaard
4cd8e0fa67 fix: workflow fixes 2024-11-12 20:47:10 +04:00
Daniel Hougaard
ea3d164ead Update release_build_infisical_cli.yml 2024-11-12 20:40:45 +04:00
Daniel Hougaard
df468e4865 Update release_build_infisical_cli.yml 2024-11-12 20:39:16 +04:00
Daniel Hougaard
66e96018c4 Update release_build_infisical_cli.yml 2024-11-12 20:37:28 +04:00
Daniel Hougaard
3b02eedca6 feat: npm CLI 2024-11-12 20:36:09 +04:00
nafees nazik
a55fe2b788 chore: add git ignore 2024-11-12 17:40:46 +04:00
nafees nazik
5d7a267f1d chore: add package.json 2024-11-12 17:40:37 +04:00
nafees nazik
b16ab6f763 feat: add script 2024-11-12 17:40:37 +04:00
BlackMagiq
2d2ad0724f Merge pull request #2681 from Infisical/dependabot/npm_and_yarn/frontend/multi-6b7e5c81f3
chore(deps): bump body-parser and express in /frontend
2024-11-11 17:36:37 -07:00
Maidul Islam
e90efb7fc8 Merge pull request #2684 from Infisical/daniel/hsm-docs
docs: hardware security module
2024-11-11 16:21:56 -07:00
Maidul Islam
17d5e4bdab Merge pull request #2671 from Infisical/daniel/hsm
feat: hardware security module's support
2024-11-11 15:38:02 -07:00
Daniel Hougaard
f22a5580a6 requested changes 2024-11-12 02:27:38 +04:00
Scott Wilson
334a728259 chore: remove console log 2024-11-11 14:06:12 -08:00
Scott Wilson
4a3143e689 fix: correct unique secret check to account for env and path 2024-11-11 14:04:36 -08:00
Scott Wilson
14810de054 fix: correct secret reference value replacement to support special characters 2024-11-11 13:46:39 -08:00
Scott Wilson
8cfcbaa12c fix: correct secret reference validation check to permit referencing the same secret multiple times and improve error message 2024-11-11 13:17:25 -08:00
Daniel Hougaard
148f522c58 updated migrations 2024-11-11 21:52:35 +04:00
Daniel Hougaard
603fcd8ab5 Update hsm-service.ts 2024-11-11 21:47:07 +04:00
Daniel Hougaard
a1474145ae Update hsm-service.ts 2024-11-11 21:47:07 +04:00
Daniel Hougaard
7c055f71f7 Update hsm-service.ts 2024-11-11 21:47:07 +04:00
Daniel Hougaard
14884cd6b0 Update Dockerfile.standalone-infisical 2024-11-11 21:47:07 +04:00
Daniel Hougaard
98fd146e85 cleanup 2024-11-11 21:47:07 +04:00
Daniel Hougaard
1d3dca11e7 Revert "temp: team debugging"
This reverts commit 6533d731f829d79f41bf2f7209e3a636553792b1.
2024-11-11 21:47:00 +04:00
Daniel Hougaard
22f8a3daa7 temp: team debugging 2024-11-11 21:46:53 +04:00
Daniel Hougaard
395b3d9e05 requested changes
requested changes

temp: team debugging

Revert "temp: team debugging"

This reverts commit 6533d731f829d79f41bf2f7209e3a636553792b1.

feat: hsm support

Update hsm-service.ts

feat: hsm support
2024-11-11 21:45:06 +04:00
Daniel Hougaard
1041e136fb added keystore 2024-11-11 21:45:06 +04:00
Daniel Hougaard
21024b0d72 requested changes 2024-11-11 21:45:06 +04:00
Daniel Hougaard
00e68dc0bf Update hsm-fns.ts 2024-11-11 21:45:06 +04:00
Daniel Hougaard
5e068cd8a0 feat: wait for session wrapper 2024-11-11 21:45:06 +04:00
Daniel Hougaard
abdf8f46a3 Update super-admin-service.ts 2024-11-11 21:45:06 +04:00
Daniel Hougaard
1cf046f6b3 Update super-admin-service.ts 2024-11-11 21:45:06 +04:00
Daniel Hougaard
0fda6d6f4d requested changes 2024-11-11 21:45:06 +04:00
Daniel Hougaard
8d4115925c requested changes 2024-11-11 21:45:06 +04:00
Daniel Hougaard
d0b3c6b66a Create docker-compose.hsm.prod.yml 2024-11-11 21:45:06 +04:00
Daniel Hougaard
a1685af119 feat: hsm cryptographic tests 2024-11-11 21:45:06 +04:00
Daniel Hougaard
8d4a06e9e4 modified: src/lib/config/env.ts 2024-11-11 21:45:06 +04:00
Daniel Hougaard
6dbe3c8793 fix: removed exported field 2024-11-11 21:45:06 +04:00
Daniel Hougaard
a3ec1a27de fix: removed recovery 2024-11-11 21:45:06 +04:00
Daniel Hougaard
472f02e8b1 feat: added key wrapping 2024-11-11 21:45:06 +04:00
Daniel Hougaard
3989646b80 fix: dockerfile 2024-11-11 21:45:06 +04:00
Daniel Hougaard
472f5eb8b4 Update env.ts 2024-11-11 21:45:05 +04:00
Daniel Hougaard
f5b039f939 Update vitest-environment-knex.ts 2024-11-11 21:45:05 +04:00
Daniel Hougaard
b7b3d07e9f cleanup 2024-11-11 21:45:05 +04:00
Daniel Hougaard
891a1ea2b9 feat: HSM support 2024-11-11 21:45:05 +04:00
Daniel Hougaard
a807f0cf6c feat: added option for choosing encryption method 2024-11-11 21:45:05 +04:00
Daniel Hougaard
cfc0b2fb8d fix: renamed migration 2024-11-11 21:45:05 +04:00
Daniel Hougaard
f096a567de feat: Hardware security modules 2024-11-11 21:45:05 +04:00
Sheen Capadngan
ada63b9e7d misc: finalize org migration script 2024-11-10 11:49:25 +08:00
Daniel Hougaard
8ef078872e Update hsm-integration.mdx 2024-11-09 05:16:52 +04:00
Daniel Hougaard
5f93016d22 Update hsm-integration.mdx 2024-11-09 00:41:39 +04:00
Daniel Hougaard
f220246eb4 feat: hsm docs 2024-11-09 00:33:54 +04:00
Sheen Capadngan
3f6a0c77f1 misc: finalized user messages 2024-11-09 01:51:11 +08:00
Sheen Capadngan
9e4b66e215 misc: made users automatically verified 2024-11-09 00:38:45 +08:00
Sheen Capadngan
8a14914bc3 misc: added more error handling 2024-11-08 21:43:25 +08:00
Scott Wilson
6956d14e2e docs: suggested changes 2024-11-04 19:46:25 -08:00
Daniel Hougaard
bae7c6c3d7 docs: hardware security module 2024-11-04 03:22:32 +04:00
dependabot[bot]
e8b33f27fc chore(deps): bump body-parser and express in /frontend
Bumps [body-parser](https://github.com/expressjs/body-parser) and [express](https://github.com/expressjs/express). These dependencies needed to be updated together.

Updates `body-parser` from 1.20.2 to 1.20.3
- [Release notes](https://github.com/expressjs/body-parser/releases)
- [Changelog](https://github.com/expressjs/body-parser/blob/master/HISTORY.md)
- [Commits](https://github.com/expressjs/body-parser/compare/1.20.2...1.20.3)

Updates `express` from 4.19.2 to 4.21.1
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/4.21.1/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.19.2...4.21.1)

---
updated-dependencies:
- dependency-name: body-parser
  dependency-type: indirect
- dependency-name: express
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-02 22:36:32 +00:00
144 changed files with 5214 additions and 1119 deletions


@@ -1,62 +1,115 @@
name: Release standalone docker image
on:
push:
tags:
- "infisical/v*.*.*-postgres"
push:
tags:
- "infisical/v*.*.*-postgres"
jobs:
infisical-tests:
name: Run tests before deployment
# https://docs.github.com/en/actions/using-workflows/reusing-workflows#overview
uses: ./.github/workflows/run-backend-tests.yml
infisical-standalone:
name: Build infisical standalone image postgres
runs-on: ubuntu-latest
needs: [infisical-tests]
steps:
- name: Extract version from tag
id: extract_version
run: echo "::set-output name=version::${GITHUB_REF_NAME#infisical/}"
- name: ☁️ Checkout source
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: 📦 Install dependencies to test all dependencies
run: npm ci --only-production
working-directory: backend
- name: version output
run: |
echo "Output Value: ${{ steps.version.outputs.major }}"
echo "Output Value: ${{ steps.version.outputs.minor }}"
echo "Output Value: ${{ steps.version.outputs.patch }}"
echo "Output Value: ${{ steps.version.outputs.version }}"
echo "Output Value: ${{ steps.version.outputs.version_type }}"
echo "Output Value: ${{ steps.version.outputs.increment }}"
- name: Save commit hashes for tag
id: commit
uses: pr-mpt/actions-commit-hash@v2
- name: 🔧 Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: 🐋 Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Depot CLI
uses: depot/setup-action@v1
- name: 📦 Build backend and export to Docker
uses: depot/build-push-action@v1
with:
project: 64mmf0n610
token: ${{ secrets.DEPOT_PROJECT_TOKEN }}
push: true
context: .
tags: |
infisical/infisical:latest-postgres
infisical/infisical:${{ steps.commit.outputs.short }}
infisical/infisical:${{ steps.extract_version.outputs.version }}
platforms: linux/amd64,linux/arm64
file: Dockerfile.standalone-infisical
build-args: |
POSTHOG_API_KEY=${{ secrets.PUBLIC_POSTHOG_API_KEY }}
INFISICAL_PLATFORM_VERSION=${{ steps.extract_version.outputs.version }}
infisical-tests:
name: Run tests before deployment
# https://docs.github.com/en/actions/using-workflows/reusing-workflows#overview
uses: ./.github/workflows/run-backend-tests.yml
infisical-standalone:
name: Build infisical standalone image postgres
runs-on: ubuntu-latest
needs: [infisical-tests]
steps:
- name: Extract version from tag
id: extract_version
run: echo "::set-output name=version::${GITHUB_REF_NAME#infisical/}"
- name: ☁️ Checkout source
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: 📦 Install dependencies to test all dependencies
run: npm ci --only-production
working-directory: backend
- name: version output
run: |
echo "Output Value: ${{ steps.version.outputs.major }}"
echo "Output Value: ${{ steps.version.outputs.minor }}"
echo "Output Value: ${{ steps.version.outputs.patch }}"
echo "Output Value: ${{ steps.version.outputs.version }}"
echo "Output Value: ${{ steps.version.outputs.version_type }}"
echo "Output Value: ${{ steps.version.outputs.increment }}"
- name: Save commit hashes for tag
id: commit
uses: pr-mpt/actions-commit-hash@v2
- name: 🔧 Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: 🐋 Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Depot CLI
uses: depot/setup-action@v1
- name: 📦 Build backend and export to Docker
uses: depot/build-push-action@v1
with:
project: 64mmf0n610
token: ${{ secrets.DEPOT_PROJECT_TOKEN }}
push: true
context: .
tags: |
infisical/infisical:latest-postgres
infisical/infisical:${{ steps.commit.outputs.short }}
infisical/infisical:${{ steps.extract_version.outputs.version }}
platforms: linux/amd64,linux/arm64
file: Dockerfile.standalone-infisical
build-args: |
POSTHOG_API_KEY=${{ secrets.PUBLIC_POSTHOG_API_KEY }}
INFISICAL_PLATFORM_VERSION=${{ steps.extract_version.outputs.version }}
infisical-fips-standalone:
name: Build infisical standalone image postgres
runs-on: ubuntu-latest
needs: [infisical-tests]
steps:
- name: Extract version from tag
id: extract_version
run: echo "::set-output name=version::${GITHUB_REF_NAME#infisical/}"
- name: ☁️ Checkout source
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: 📦 Install dependencies to test all dependencies
run: npm ci --only-production
working-directory: backend
- name: version output
run: |
echo "Output Value: ${{ steps.version.outputs.major }}"
echo "Output Value: ${{ steps.version.outputs.minor }}"
echo "Output Value: ${{ steps.version.outputs.patch }}"
echo "Output Value: ${{ steps.version.outputs.version }}"
echo "Output Value: ${{ steps.version.outputs.version_type }}"
echo "Output Value: ${{ steps.version.outputs.increment }}"
- name: Save commit hashes for tag
id: commit
uses: pr-mpt/actions-commit-hash@v2
- name: 🔧 Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: 🐋 Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Depot CLI
uses: depot/setup-action@v1
- name: 📦 Build backend and export to Docker
uses: depot/build-push-action@v1
with:
project: 64mmf0n610
token: ${{ secrets.DEPOT_PROJECT_TOKEN }}
push: true
context: .
tags: |
infisical/infisical-fips:latest-postgres
infisical/infisical-fips:${{ steps.commit.outputs.short }}
infisical/infisical-fips:${{ steps.extract_version.outputs.version }}
platforms: linux/amd64,linux/arm64
file: Dockerfile.fips.standalone-infisical
build-args: |
POSTHOG_API_KEY=${{ secrets.PUBLIC_POSTHOG_API_KEY }}
INFISICAL_PLATFORM_VERSION=${{ steps.extract_version.outputs.version }}


@@ -10,8 +10,7 @@ on:
permissions:
contents: write
# packages: write
# issues: write
jobs:
cli-integration-tests:
name: Run tests before deployment
@@ -26,6 +25,63 @@ jobs:
CLI_TESTS_USER_PASSWORD: ${{ secrets.CLI_TESTS_USER_PASSWORD }}
CLI_TESTS_INFISICAL_VAULT_FILE_PASSPHRASE: ${{ secrets.CLI_TESTS_INFISICAL_VAULT_FILE_PASSPHRASE }}
npm-release:
runs-on: ubuntu-20.04
env:
working-directory: ./npm
needs:
- cli-integration-tests
- goreleaser
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Extract version
run: |
VERSION=$(echo ${{ github.ref_name }} | sed 's/infisical-cli\/v//')
echo "Version extracted: $VERSION"
echo "CLI_VERSION=$VERSION" >> $GITHUB_ENV
- name: Print version
run: echo ${{ env.CLI_VERSION }}
- name: Setup Node
uses: actions/setup-node@8f152de45cc393bb48ce5d89d36b731f54556e65 # v4.0.0
with:
node-version: 20
cache: "npm"
cache-dependency-path: ./npm/package-lock.json
- name: Install dependencies
working-directory: ${{ env.working-directory }}
run: npm install --ignore-scripts
- name: Set NPM version
working-directory: ${{ env.working-directory }}
run: npm version ${{ env.CLI_VERSION }} --allow-same-version --no-git-tag-version
- name: Setup NPM
working-directory: ${{ env.working-directory }}
run: |
echo 'registry="https://registry.npmjs.org/"' > ./.npmrc
echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" >> ./.npmrc
echo 'registry="https://registry.npmjs.org/"' > ~/.npmrc
echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" >> ~/.npmrc
env:
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
- name: Pack NPM
working-directory: ${{ env.working-directory }}
run: npm pack
- name: Publish NPM
working-directory: ${{ env.working-directory }}
run: npm publish --tarball=./infisical-sdk-${{github.ref_name}} --access public --registry=https://registry.npmjs.org/
env:
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
goreleaser:
runs-on: ubuntu-20.04
needs: [cli-integration-tests]

.gitignore

@@ -71,3 +71,5 @@ frontend-build
cli/infisical-merge
cli/test/infisical-merge
/backend/binary
/npm/bin


@@ -6,3 +6,4 @@ frontend/src/views/Project/MembersPage/components/MemberListTab/MemberRoleForm/S
docs/self-hosting/configuration/envars.mdx:generic-api-key:106
frontend/src/views/Project/MembersPage/components/MemberListTab/MemberRoleForm/SpecificPrivilegeSection.tsx:generic-api-key:451
docs/mint.json:generic-api-key:651
backend/src/ee/services/hsm/hsm-service.ts:generic-api-key:134


@@ -0,0 +1,167 @@
ARG POSTHOG_HOST=https://app.posthog.com
ARG POSTHOG_API_KEY=posthog-api-key
ARG INTERCOM_ID=intercom-id
ARG CAPTCHA_SITE_KEY=captcha-site-key
FROM node:20-slim AS base
FROM base AS frontend-dependencies
WORKDIR /app
COPY frontend/package.json frontend/package-lock.json frontend/next.config.js ./
# Install dependencies
RUN npm ci --only-production --ignore-scripts
# Rebuild the source code only when needed
FROM base AS frontend-builder
WORKDIR /app
# Copy dependencies
COPY --from=frontend-dependencies /app/node_modules ./node_modules
# Copy all files
COPY /frontend .
ENV NODE_ENV production
ENV NEXT_PUBLIC_ENV production
ARG POSTHOG_HOST
ENV NEXT_PUBLIC_POSTHOG_HOST $POSTHOG_HOST
ARG POSTHOG_API_KEY
ENV NEXT_PUBLIC_POSTHOG_API_KEY $POSTHOG_API_KEY
ARG INTERCOM_ID
ENV NEXT_PUBLIC_INTERCOM_ID $INTERCOM_ID
ARG INFISICAL_PLATFORM_VERSION
ENV NEXT_PUBLIC_INFISICAL_PLATFORM_VERSION $INFISICAL_PLATFORM_VERSION
ARG CAPTCHA_SITE_KEY
ENV NEXT_PUBLIC_CAPTCHA_SITE_KEY $CAPTCHA_SITE_KEY
# Build
RUN npm run build
# Production image
FROM base AS frontend-runner
WORKDIR /app
RUN groupadd -r -g 1001 nodejs && useradd -r -u 1001 -g nodejs non-root-user
RUN mkdir -p /app/.next/cache/images && chown non-root-user:nodejs /app/.next/cache/images
VOLUME /app/.next/cache/images
COPY --chown=non-root-user:nodejs --chmod=555 frontend/scripts ./scripts
COPY --from=frontend-builder /app/public ./public
RUN chown non-root-user:nodejs ./public/data
COPY --from=frontend-builder --chown=non-root-user:nodejs /app/.next/standalone ./
COPY --from=frontend-builder --chown=non-root-user:nodejs /app/.next/static ./.next/static
USER non-root-user
ENV NEXT_TELEMETRY_DISABLED 1
##
## BACKEND
##
FROM base AS backend-build
ENV ChrystokiConfigurationPath=/usr/safenet/lunaclient/
RUN groupadd -r -g 1001 nodejs && useradd -r -u 1001 -g nodejs non-root-user
WORKDIR /app
# Required for pkcs11js
RUN apt-get update && apt-get install -y \
python3 \
make \
g++ \
&& rm -rf /var/lib/apt/lists/*
COPY backend/package*.json ./
RUN npm ci --only-production
COPY /backend .
COPY --chown=non-root-user:nodejs standalone-entrypoint.sh standalone-entrypoint.sh
RUN npm i -D tsconfig-paths
RUN npm run build
# Production stage
FROM base AS backend-runner
ENV ChrystokiConfigurationPath=/usr/safenet/lunaclient/
WORKDIR /app
# Required for pkcs11js
RUN apt-get update && apt-get install -y \
python3 \
make \
g++ \
&& rm -rf /var/lib/apt/lists/*
COPY backend/package*.json ./
RUN npm ci --only-production
COPY --from=backend-build /app .
RUN mkdir frontend-build
# Production stage
FROM base AS production
# Install necessary packages
RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
git \
&& rm -rf /var/lib/apt/lists/*
# Install Infisical CLI
RUN curl -1sLf 'https://dl.cloudsmith.io/public/infisical/infisical-cli/setup.deb.sh' | bash \
&& apt-get update && apt-get install -y infisical=0.31.1 \
&& rm -rf /var/lib/apt/lists/*
RUN groupadd -r -g 1001 nodejs && useradd -r -u 1001 -g nodejs non-root-user
# Give non-root-user permission to update SSL certs
RUN chown -R non-root-user /etc/ssl/certs
RUN chown non-root-user /etc/ssl/certs/ca-certificates.crt
RUN chmod -R u+rwx /etc/ssl/certs
RUN chmod u+rw /etc/ssl/certs/ca-certificates.crt
RUN chown non-root-user /usr/sbin/update-ca-certificates
RUN chmod u+rx /usr/sbin/update-ca-certificates
## set pre baked keys
ARG POSTHOG_API_KEY
ENV NEXT_PUBLIC_POSTHOG_API_KEY=$POSTHOG_API_KEY \
BAKED_NEXT_PUBLIC_POSTHOG_API_KEY=$POSTHOG_API_KEY
ARG INTERCOM_ID=intercom-id
ENV NEXT_PUBLIC_INTERCOM_ID=$INTERCOM_ID \
BAKED_NEXT_PUBLIC_INTERCOM_ID=$INTERCOM_ID
ARG CAPTCHA_SITE_KEY
ENV NEXT_PUBLIC_CAPTCHA_SITE_KEY=$CAPTCHA_SITE_KEY \
BAKED_NEXT_PUBLIC_CAPTCHA_SITE_KEY=$CAPTCHA_SITE_KEY
WORKDIR /
COPY --from=backend-runner /app /backend
COPY --from=frontend-runner /app ./backend/frontend-build
ENV PORT 8080
ENV HOST=0.0.0.0
ENV HTTPS_ENABLED false
ENV NODE_ENV production
ENV STANDALONE_BUILD true
ENV STANDALONE_MODE true
ENV ChrystokiConfigurationPath=/usr/safenet/lunaclient/
WORKDIR /backend
ENV TELEMETRY_ENABLED true
EXPOSE 8080
EXPOSE 443
USER non-root-user
CMD ["./standalone-entrypoint.sh"]


@@ -72,6 +72,9 @@ RUN addgroup --system --gid 1001 nodejs \
WORKDIR /app
# Required for pkcs11js
RUN apk add --no-cache python3 make g++
COPY backend/package*.json ./
RUN npm ci --only-production
@@ -85,6 +88,9 @@ FROM base AS backend-runner
WORKDIR /app
# Required for pkcs11js
RUN apk add --no-cache python3 make g++
COPY backend/package*.json ./
RUN npm ci --only-production


@@ -3,6 +3,12 @@ FROM node:20-alpine AS build
WORKDIR /app
# Required for pkcs11js
RUN apk --update add \
python3 \
make \
g++
COPY package*.json ./
RUN npm ci --only-production
@@ -11,12 +17,17 @@ RUN npm run build
# Production stage
FROM node:20-alpine
WORKDIR /app
ENV npm_config_cache /home/node/.npm
COPY package*.json ./
RUN apk --update add \
python3 \
make \
g++
RUN npm ci --only-production && npm cache clean --force
COPY --from=build /app .


@@ -1,5 +1,44 @@
FROM node:20-alpine
# ? Setup a test SoftHSM module. In production a real HSM is used.
ARG SOFTHSM2_VERSION=2.5.0
ENV SOFTHSM2_VERSION=${SOFTHSM2_VERSION} \
SOFTHSM2_SOURCES=/tmp/softhsm2
# install build dependencies including python3
RUN apk --update add \
alpine-sdk \
autoconf \
automake \
git \
libtool \
openssl-dev \
python3 \
make \
g++
# build and install SoftHSM2
RUN git clone https://github.com/opendnssec/SoftHSMv2.git ${SOFTHSM2_SOURCES}
WORKDIR ${SOFTHSM2_SOURCES}
RUN git checkout ${SOFTHSM2_VERSION} -b ${SOFTHSM2_VERSION} \
&& sh autogen.sh \
&& ./configure --prefix=/usr/local --disable-gost \
&& make \
&& make install
WORKDIR /root
RUN rm -fr ${SOFTHSM2_SOURCES}
# install pkcs11-tool
RUN apk --update add opensc
RUN softhsm2-util --init-token --slot 0 --label "auth-app" --pin 1234 --so-pin 0000
# ? App setup
RUN apk add --no-cache bash curl && curl -1sLf \
'https://dl.cloudsmith.io/public/infisical/infisical-cli/setup.alpine.sh' | bash \
&& apk add infisical=0.8.1 && apk add --no-cache git


@@ -5,6 +5,9 @@ export const mockSmtpServer = (): TSmtpService => {
return {
sendMail: async (data) => {
storage.push(data);
},
verify: async () => {
return true;
}
};
};


@@ -16,6 +16,7 @@ import { initDbConnection } from "@app/db";
import { queueServiceFactory } from "@app/queue";
import { keyStoreFactory } from "@app/keystore/keystore";
import { Redis } from "ioredis";
import { initializeHsmModule } from "@app/ee/services/hsm/hsm-fns";
dotenv.config({ path: path.join(__dirname, "../../.env.test"), debug: true });
export default {
@@ -54,7 +55,12 @@ export default {
const smtp = mockSmtpServer();
const queue = queueServiceFactory(cfg.REDIS_URL);
const keyStore = keyStoreFactory(cfg.REDIS_URL);
const server = await main({ db, smtp, logger, queue, keyStore });
const hsmModule = initializeHsmModule();
hsmModule.initialize();
const server = await main({ db, smtp, logger, queue, keyStore, hsmModule: hsmModule.getModule() });
// @ts-expect-error type
globalThis.testServer = server;
// @ts-expect-error type


@@ -75,6 +75,7 @@
"openid-client": "^5.6.5",
"ora": "^7.0.1",
"oracledb": "^6.4.0",
"otplib": "^12.0.1",
"passport-github": "^1.1.0",
"passport-gitlab2": "^5.0.0",
"passport-google-oauth20": "^2.0.0",
@@ -83,6 +84,7 @@
"pg-query-stream": "^4.5.3",
"picomatch": "^3.0.1",
"pino": "^8.16.2",
"pkcs11js": "^2.1.6",
"pkijs": "^3.2.4",
"posthog-node": "^3.6.2",
"probot": "^13.3.8",
@@ -120,6 +122,7 @@
"@types/passport-google-oauth20": "^2.0.14",
"@types/pg": "^8.10.9",
"@types/picomatch": "^2.3.3",
"@types/pkcs11js": "^1.0.4",
"@types/prompt-sync": "^4.2.3",
"@types/resolve": "^1.20.6",
"@types/safe-regex": "^1.1.6",
@@ -6813,6 +6816,48 @@
"node": ">=8.0.0"
}
},
"node_modules/@otplib/core": {
"version": "12.0.1",
"resolved": "https://registry.npmjs.org/@otplib/core/-/core-12.0.1.tgz",
"integrity": "sha512-4sGntwbA/AC+SbPhbsziRiD+jNDdIzsZ3JUyfZwjtKyc/wufl1pnSIaG4Uqx8ymPagujub0o92kgBnB89cuAMA=="
},
"node_modules/@otplib/plugin-crypto": {
"version": "12.0.1",
"resolved": "https://registry.npmjs.org/@otplib/plugin-crypto/-/plugin-crypto-12.0.1.tgz",
"integrity": "sha512-qPuhN3QrT7ZZLcLCyKOSNhuijUi9G5guMRVrxq63r9YNOxxQjPm59gVxLM+7xGnHnM6cimY57tuKsjK7y9LM1g==",
"dependencies": {
"@otplib/core": "^12.0.1"
}
},
"node_modules/@otplib/plugin-thirty-two": {
"version": "12.0.1",
"resolved": "https://registry.npmjs.org/@otplib/plugin-thirty-two/-/plugin-thirty-two-12.0.1.tgz",
"integrity": "sha512-MtT+uqRso909UkbrrYpJ6XFjj9D+x2Py7KjTO9JDPhL0bJUYVu5kFP4TFZW4NFAywrAtFRxOVY261u0qwb93gA==",
"dependencies": {
"@otplib/core": "^12.0.1",
"thirty-two": "^1.0.2"
}
},
"node_modules/@otplib/preset-default": {
"version": "12.0.1",
"resolved": "https://registry.npmjs.org/@otplib/preset-default/-/preset-default-12.0.1.tgz",
"integrity": "sha512-xf1v9oOJRyXfluBhMdpOkr+bsE+Irt+0D5uHtvg6x1eosfmHCsCC6ej/m7FXiWqdo0+ZUI6xSKDhJwc8yfiOPQ==",
"dependencies": {
"@otplib/core": "^12.0.1",
"@otplib/plugin-crypto": "^12.0.1",
"@otplib/plugin-thirty-two": "^12.0.1"
}
},
"node_modules/@otplib/preset-v11": {
"version": "12.0.1",
"resolved": "https://registry.npmjs.org/@otplib/preset-v11/-/preset-v11-12.0.1.tgz",
"integrity": "sha512-9hSetMI7ECqbFiKICrNa4w70deTUfArtwXykPUvSHWOdzOlfa9ajglu7mNCntlvxycTiOAXkQGwjQCzzDEMRMg==",
"dependencies": {
"@otplib/core": "^12.0.1",
"@otplib/plugin-crypto": "^12.0.1",
"@otplib/plugin-thirty-two": "^12.0.1"
}
},
"node_modules/@peculiar/asn1-cms": {
"version": "2.3.8",
"resolved": "https://registry.npmjs.org/@peculiar/asn1-cms/-/asn1-cms-2.3.8.tgz",
@@ -8830,6 +8875,17 @@
"integrity": "sha512-Yll76ZHikRFCyz/pffKGjrCwe/le2CDwOP5F210KQo27kpRE46U2rDnzikNlVn6/ezH3Mhn46bJMTfeVTtcYMg==",
"dev": true
},
"node_modules/@types/pkcs11js": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/@types/pkcs11js/-/pkcs11js-1.0.4.tgz",
"integrity": "sha512-Pkq8VbwZZv7o/6ODFOhxw0s0M8J4ucg4/I4V1dSCn8tUwWgIKIYzuV4Pp2fYuir81DgQXAF5TpGyhBMjJ3FjFw==",
"deprecated": "This is a stub types definition for pkcs11js (https://github.com/PeculiarVentures/pkcs11js). pkcs11js provides its own type definitions, so you don't need @types/pkcs11js installed!",
"dev": true,
"license": "MIT",
"dependencies": {
"pkcs11js": "*"
}
},
"node_modules/@types/prompt-sync": {
"version": "4.2.3",
"resolved": "https://registry.npmjs.org/@types/prompt-sync/-/prompt-sync-4.2.3.tgz",
@@ -16440,6 +16496,16 @@
"node": ">=14.6"
}
},
"node_modules/otplib": {
"version": "12.0.1",
"resolved": "https://registry.npmjs.org/otplib/-/otplib-12.0.1.tgz",
"integrity": "sha512-xDGvUOQjop7RDgxTQ+o4pOol0/3xSZzawTiPKRrHnQWAy0WjhNs/5HdIDJCrqC4MBynmjXgULc6YfioaxZeFgg==",
"dependencies": {
"@otplib/core": "^12.0.1",
"@otplib/preset-default": "^12.0.1",
"@otplib/preset-v11": "^12.0.1"
}
},
"node_modules/p-finally": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/p-finally/-/p-finally-1.0.0.tgz",
@@ -17066,6 +17132,20 @@
"node": ">= 6"
}
},
"node_modules/pkcs11js": {
"version": "2.1.6",
"resolved": "https://registry.npmjs.org/pkcs11js/-/pkcs11js-2.1.6.tgz",
"integrity": "sha512-+t5jxzB749q8GaEd1yNx3l98xYuaVK6WW/Vjg1Mk1Iy5bMu/A5W4O/9wZGrpOknWF6lFQSb12FXX+eSNxdriwA==",
"hasInstallScript": true,
"license": "MIT",
"engines": {
"node": ">=18.0.0"
},
"funding": {
"type": "github",
"url": "https://github.com/sponsors/PeculiarVentures"
}
},
"node_modules/pkg-conf": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/pkg-conf/-/pkg-conf-3.1.0.tgz",
@@ -19526,6 +19606,14 @@
"node": ">=0.8"
}
},
"node_modules/thirty-two": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/thirty-two/-/thirty-two-1.0.2.tgz",
"integrity": "sha512-OEI0IWCe+Dw46019YLl6V10Us5bi574EvlJEOcAkB29IzQ/mYD1A6RyNHLjZPiHCmuodxvgF6U+vZO1L15lxVA==",
"engines": {
"node": ">=0.2.6"
}
},
"node_modules/thread-stream": {
"version": "2.4.1",
"resolved": "https://registry.npmjs.org/thread-stream/-/thread-stream-2.4.1.tgz",
@@ -19761,9 +19849,10 @@
}
},
"node_modules/tslib": {
"version": "2.6.3",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.6.3.tgz",
"integrity": "sha512-xNvxJEOUiWPGhUuUdQgAJPKOOJfGnIyKySOc09XkKsgdUV/3E2zvwZYdejjmRgPCgcym1juLH3226yA7sEFJKQ=="
"version": "2.8.0",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.0.tgz",
"integrity": "sha512-jWVzBLplnCmoaTr13V9dYbiQ99wvZRd0vNWaDRg+aVYRcjDF3nDksxFDE/+fkXnKhpnUUkmx5pK/v8mCtLVqZA==",
"license": "0BSD"
},
"node_modules/tsup": {
"version": "8.0.1",


@@ -50,6 +50,7 @@
"auditlog-migration:down": "knex --knexfile ./src/db/auditlog-knexfile.ts --client pg migrate:down",
"auditlog-migration:list": "knex --knexfile ./src/db/auditlog-knexfile.ts --client pg migrate:list",
"auditlog-migration:status": "knex --knexfile ./src/db/auditlog-knexfile.ts --client pg migrate:status",
"auditlog-migration:unlock": "knex --knexfile ./src/db/auditlog-knexfile.ts migrate:unlock",
"auditlog-migration:rollback": "knex --knexfile ./src/db/auditlog-knexfile.ts migrate:rollback",
"migration:new": "tsx ./scripts/create-migration.ts",
"migration:up": "npm run auditlog-migration:up && knex --knexfile ./src/db/knexfile.ts --client pg migrate:up",
@@ -58,6 +59,7 @@
"migration:latest": "npm run auditlog-migration:latest && knex --knexfile ./src/db/knexfile.ts --client pg migrate:latest",
"migration:status": "npm run auditlog-migration:status && knex --knexfile ./src/db/knexfile.ts --client pg migrate:status",
"migration:rollback": "npm run auditlog-migration:rollback && knex --knexfile ./src/db/knexfile.ts migrate:rollback",
"migration:unlock": "npm run auditlog-migration:unlock && knex --knexfile ./src/db/knexfile.ts migrate:unlock",
"migrate:org": "tsx ./scripts/migrate-organization.ts",
"seed:new": "tsx ./scripts/create-seed-file.ts",
"seed": "knex --knexfile ./src/db/knexfile.ts --client pg seed:run",
@@ -84,6 +86,7 @@
"@types/passport-google-oauth20": "^2.0.14",
"@types/pg": "^8.10.9",
"@types/picomatch": "^2.3.3",
"@types/pkcs11js": "^1.0.4",
"@types/prompt-sync": "^4.2.3",
"@types/resolve": "^1.20.6",
"@types/safe-regex": "^1.1.6",
@@ -180,6 +183,7 @@
"openid-client": "^5.6.5",
"ora": "^7.0.1",
"oracledb": "^6.4.0",
"otplib": "^12.0.1",
"passport-github": "^1.1.0",
"passport-gitlab2": "^5.0.0",
"passport-google-oauth20": "^2.0.0",
@@ -188,6 +192,7 @@
"pg-query-stream": "^4.5.3",
"picomatch": "^3.0.1",
"pino": "^8.16.2",
"pkcs11js": "^2.1.6",
"pkijs": "^3.2.4",
"posthog-node": "^3.6.2",
"probot": "^13.3.8",


@@ -8,61 +8,80 @@ const prompt = promptSync({
sigint: true
});
const sanitizeInputParam = (value: string) => {
// Escape double quotes and wrap the entire value in double quotes
if (value) {
return `"${value.replace(/"/g, '\\"')}"`;
}
return '""';
};
const exportDb = () => {
const exportHost = prompt("Enter your Postgres Host to migrate from: ");
const exportPort = prompt("Enter your Postgres Port to migrate from [Default = 5432]: ") ?? "5432";
const exportUser = prompt("Enter your Postgres User to migrate from: [Default = infisical]: ") ?? "infisical";
const exportPassword = prompt("Enter your Postgres Password to migrate from: ");
const exportDatabase = prompt("Enter your Postgres Database to migrate from [Default = infisical]: ") ?? "infisical";
const exportHost = sanitizeInputParam(prompt("Enter your Postgres Host to migrate from: "));
const exportPort = sanitizeInputParam(
prompt("Enter your Postgres Port to migrate from [Default = 5432]: ") ?? "5432"
);
const exportUser = sanitizeInputParam(
prompt("Enter your Postgres User to migrate from: [Default = infisical]: ") ?? "infisical"
);
const exportPassword = sanitizeInputParam(prompt("Enter your Postgres Password to migrate from: "));
const exportDatabase = sanitizeInputParam(
prompt("Enter your Postgres Database to migrate from [Default = infisical]: ") ?? "infisical"
);
// we do not include the audit_log and secret_sharing entries
execSync(
`PGDATABASE="${exportDatabase}" PGPASSWORD="${exportPassword}" PGHOST="${exportHost}" PGPORT=${exportPort} PGUSER=${exportUser} pg_dump infisical --exclude-table-data="secret_sharing" --exclude-table-data="audit_log*" > ${path.join(
`PGDATABASE=${exportDatabase} PGPASSWORD=${exportPassword} PGHOST=${exportHost} PGPORT=${exportPort} PGUSER=${exportUser} pg_dump -Fc infisical --exclude-table-data="secret_sharing" --exclude-table-data="audit_log*" > ${path.join(
__dirname,
"../src/db/dump.sql"
"../src/db/backup.dump"
)}`,
{ stdio: "inherit" }
);
};
const importDbForOrg = () => {
const importHost = prompt("Enter your Postgres Host to migrate to: ");
const importPort = prompt("Enter your Postgres Port to migrate to [Default = 5432]: ") ?? "5432";
const importUser = prompt("Enter your Postgres User to migrate to: [Default = infisical]: ") ?? "infisical";
const importPassword = prompt("Enter your Postgres Password to migrate to: ");
const importDatabase = prompt("Enter your Postgres Database to migrate to [Default = infisical]: ") ?? "infisical";
const orgId = prompt("Enter the organization ID to migrate: ");
const importHost = sanitizeInputParam(prompt("Enter your Postgres Host to migrate to: "));
const importPort = sanitizeInputParam(prompt("Enter your Postgres Port to migrate to [Default = 5432]: ") ?? "5432");
const importUser = sanitizeInputParam(
prompt("Enter your Postgres User to migrate to: [Default = infisical]: ") ?? "infisical"
);
const importPassword = sanitizeInputParam(prompt("Enter your Postgres Password to migrate to: "));
const importDatabase = sanitizeInputParam(
prompt("Enter your Postgres Database to migrate to [Default = infisical]: ") ?? "infisical"
);
const orgId = sanitizeInputParam(prompt("Enter the organization ID to migrate: "));
if (!existsSync(path.join(__dirname, "../src/db/dump.sql"))) {
if (!existsSync(path.join(__dirname, "../src/db/backup.dump"))) {
console.log("File not found, please export the database first.");
return;
}
execSync(
`PGDATABASE="${importDatabase}" PGPASSWORD="${importPassword}" PGHOST="${importHost}" PGPORT=${importPort} PGUSER=${importUser} psql -f ${path.join(
`PGDATABASE=${importDatabase} PGPASSWORD=${importPassword} PGHOST=${importHost} PGPORT=${importPort} PGUSER=${importUser} pg_restore -d ${importDatabase} --verbose ${path.join(
__dirname,
"../src/db/dump.sql"
)}`
"../src/db/backup.dump"
)}`,
{ maxBuffer: 1024 * 1024 * 4096 }
);
execSync(
`PGDATABASE="${importDatabase}" PGPASSWORD="${importPassword}" PGHOST="${importHost}" PGPORT=${importPort} PGUSER=${importUser} psql -c "DELETE FROM public.organizations WHERE id != '${orgId}'"`
`PGDATABASE=${importDatabase} PGPASSWORD=${importPassword} PGHOST=${importHost} PGPORT=${importPort} PGUSER=${importUser} psql -c "DELETE FROM public.organizations WHERE id != '${orgId}'"`
);
// delete global/instance-level resources not relevant to the organization to migrate
// users
execSync(
`PGDATABASE="${importDatabase}" PGPASSWORD="${importPassword}" PGHOST="${importHost}" PGPORT=${importPort} PGUSER=${importUser} psql -c 'DELETE FROM users WHERE users.id NOT IN (SELECT org_memberships."userId" FROM org_memberships)'`
`PGDATABASE=${importDatabase} PGPASSWORD=${importPassword} PGHOST=${importHost} PGPORT=${importPort} PGUSER=${importUser} psql -c 'DELETE FROM users WHERE users.id NOT IN (SELECT org_memberships."userId" FROM org_memberships)'`
);
// identities
execSync(
`PGDATABASE="${importDatabase}" PGPASSWORD="${importPassword}" PGHOST="${importHost}" PGPORT=${importPort} PGUSER=${importUser} psql -c 'DELETE FROM identities WHERE id NOT IN (SELECT "identityId" FROM identity_org_memberships)'`
`PGDATABASE=${importDatabase} PGPASSWORD=${importPassword} PGHOST=${importHost} PGPORT=${importPort} PGUSER=${importUser} psql -c 'DELETE FROM identities WHERE id NOT IN (SELECT "identityId" FROM identity_org_memberships)'`
);
// reset slack configuration in superAdmin
execSync(
`PGDATABASE="${importDatabase}" PGPASSWORD="${importPassword}" PGHOST="${importHost}" PGPORT=${importPort} PGUSER=${importUser} psql -c 'UPDATE super_admin SET "encryptedSlackClientId" = null, "encryptedSlackClientSecret" = null'`
`PGDATABASE=${importDatabase} PGPASSWORD=${importPassword} PGHOST=${importHost} PGPORT=${importPort} PGUSER=${importUser} psql -c 'UPDATE super_admin SET "encryptedSlackClientId" = null, "encryptedSlackClientSecret" = null'`
);
console.log("Organization migrated successfully.");


@@ -44,6 +44,7 @@ import { TCmekServiceFactory } from "@app/services/cmek/cmek-service";
import { TExternalGroupOrgRoleMappingServiceFactory } from "@app/services/external-group-org-role-mapping/external-group-org-role-mapping-service";
import { TExternalMigrationServiceFactory } from "@app/services/external-migration/external-migration-service";
import { TGroupProjectServiceFactory } from "@app/services/group-project/group-project-service";
import { THsmServiceFactory } from "@app/services/hsm/hsm-service";
import { TIdentityServiceFactory } from "@app/services/identity/identity-service";
import { TIdentityAccessTokenServiceFactory } from "@app/services/identity-access-token/identity-access-token-service";
import { TIdentityAwsAuthServiceFactory } from "@app/services/identity-aws-auth/identity-aws-auth-service";
@@ -78,6 +79,7 @@ import { TServiceTokenServiceFactory } from "@app/services/service-token/service
import { TSlackServiceFactory } from "@app/services/slack/slack-service";
import { TSuperAdminServiceFactory } from "@app/services/super-admin/super-admin-service";
import { TTelemetryServiceFactory } from "@app/services/telemetry/telemetry-service";
import { TTotpServiceFactory } from "@app/services/totp/totp-service";
import { TUserDALFactory } from "@app/services/user/user-dal";
import { TUserServiceFactory } from "@app/services/user/user-service";
import { TUserEngagementServiceFactory } from "@app/services/user-engagement/user-engagement-service";
@@ -184,6 +186,7 @@ declare module "fastify" {
rateLimit: TRateLimitServiceFactory;
userEngagement: TUserEngagementServiceFactory;
externalKms: TExternalKmsServiceFactory;
hsm: THsmServiceFactory;
orgAdmin: TOrgAdminServiceFactory;
slack: TSlackServiceFactory;
workflowIntegration: TWorkflowIntegrationServiceFactory;
@@ -191,6 +194,7 @@ declare module "fastify" {
migration: TExternalMigrationServiceFactory;
externalGroupOrgRoleMapping: TExternalGroupOrgRoleMappingServiceFactory;
projectTemplate: TProjectTemplateServiceFactory;
totp: TTotpServiceFactory;
};
// this is exclusive use for middlewares in which we need to inject data
// everywhere else access using service layer


@@ -314,6 +314,9 @@ import {
TSuperAdmin,
TSuperAdminInsert,
TSuperAdminUpdate,
TTotpConfigs,
TTotpConfigsInsert,
TTotpConfigsUpdate,
TTrustedIps,
TTrustedIpsInsert,
TTrustedIpsUpdate,
@@ -826,5 +829,6 @@ declare module "knex/types/tables" {
TProjectTemplatesInsert,
TProjectTemplatesUpdate
>;
[TableName.TotpConfig]: KnexOriginal.CompositeTableType<TTotpConfigs, TTotpConfigsInsert, TTotpConfigsUpdate>;
}
}


@@ -64,23 +64,25 @@ export async function up(knex: Knex): Promise<void> {
}
if (await knex.schema.hasTable(TableName.Certificate)) {
await knex.schema.alterTable(TableName.Certificate, (t) => {
t.uuid("caCertId").nullable();
t.foreign("caCertId").references("id").inTable(TableName.CertificateAuthorityCert);
});
const hasCaCertIdColumn = await knex.schema.hasColumn(TableName.Certificate, "caCertId");
if (!hasCaCertIdColumn) {
await knex.schema.alterTable(TableName.Certificate, (t) => {
t.uuid("caCertId").nullable();
t.foreign("caCertId").references("id").inTable(TableName.CertificateAuthorityCert);
});
await knex.raw(`
await knex.raw(`
UPDATE "${TableName.Certificate}" cert
SET "caCertId" = (
SELECT caCert.id
FROM "${TableName.CertificateAuthorityCert}" caCert
WHERE caCert."caId" = cert."caId"
)
`);
)`);
await knex.schema.alterTable(TableName.Certificate, (t) => {
t.uuid("caCertId").notNullable().alter();
});
await knex.schema.alterTable(TableName.Certificate, (t) => {
t.uuid("caCertId").notNullable().alter();
});
}
}
}


@@ -2,7 +2,7 @@ import { Knex } from "knex";
import { TableName } from "../schemas";
const BATCH_SIZE = 30_000;
const BATCH_SIZE = 10_000;
export async function up(knex: Knex): Promise<void> {
const hasAuthMethodColumnAccessToken = await knex.schema.hasColumn(TableName.IdentityAccessToken, "authMethod");
@@ -12,7 +12,18 @@ export async function up(knex: Knex): Promise<void> {
t.string("authMethod").nullable();
});
let nullableAccessTokens = await knex(TableName.IdentityAccessToken).whereNull("authMethod").limit(BATCH_SIZE);
// first we remove identities without auth method that is unused
// ! We delete all access tokens where the identity has no auth method set!
// ! Which means un-configured identities that for some reason have access tokens, will have their access tokens deleted.
await knex(TableName.IdentityAccessToken)
.leftJoin(TableName.Identity, `${TableName.Identity}.id`, `${TableName.IdentityAccessToken}.identityId`)
.whereNull(`${TableName.Identity}.authMethod`)
.delete();
let nullableAccessTokens = await knex(TableName.IdentityAccessToken)
.whereNull("authMethod")
.limit(BATCH_SIZE)
.select("id");
let totalUpdated = 0;
do {
@@ -33,24 +44,15 @@ export async function up(knex: Knex): Promise<void> {
});
// eslint-disable-next-line no-await-in-loop
nullableAccessTokens = await knex(TableName.IdentityAccessToken).whereNull("authMethod").limit(BATCH_SIZE);
nullableAccessTokens = await knex(TableName.IdentityAccessToken)
.whereNull("authMethod")
.limit(BATCH_SIZE)
.select("id");
totalUpdated += batchIds.length;
console.log(`Updated ${batchIds.length} access tokens in batch <> Total updated: ${totalUpdated}`);
} while (nullableAccessTokens.length > 0);
// ! We delete all access tokens where the identity has no auth method set!
// ! Which means un-configured identities that for some reason have access tokens, will have their access tokens deleted.
await knex(TableName.IdentityAccessToken)
.whereNotExists((queryBuilder) => {
void queryBuilder
.select("id")
.from(TableName.Identity)
.whereRaw(`${TableName.IdentityAccessToken}."identityId" = ${TableName.Identity}.id`)
.whereNotNull("authMethod");
})
.delete();
// Finally we set the authMethod to notNullable after populating the column.
// This will fail if the data is not populated correctly, so it's safe.
await knex.schema.alterTable(TableName.IdentityAccessToken, (t) => {

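For context, a self-contained sketch of the batched backfill pattern used in the migration above; table and column names are illustrative, since the real code references the repo's TableName enum. Deleting tokens whose identity has no auth method up front is what resolves the corner case: those rows could never be populated, so the batch loop kept re-selecting them and never terminated.

```ts
import { Knex } from "knex";

const BATCH_SIZE = 10_000;

export async function backfillAuthMethod(knex: Knex): Promise<void> {
  // Remove rows that can never be backfilled; otherwise the loop below
  // would re-select the same NULL rows on every pass and spin forever.
  await knex("identity_access_token")
    .leftJoin("identities", "identities.id", "identity_access_token.identityId")
    .whereNull("identities.authMethod")
    .delete();

  let batch = await knex("identity_access_token").whereNull("authMethod").limit(BATCH_SIZE).select("id");

  while (batch.length > 0) {
    const ids = batch.map((row: { id: string }) => row.id);
    await knex("identity_access_token")
      .whereIn("id", ids)
      .update({
        // copy the owning identity's auth method onto its tokens
        authMethod: knex.raw(
          `(SELECT "authMethod" FROM identities WHERE identities.id = identity_access_token."identityId")`
        )
      });
    // eslint-disable-next-line no-await-in-loop
    batch = await knex("identity_access_token").whereNull("authMethod").limit(BATCH_SIZE).select("id");
  }
}
```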

@@ -0,0 +1,21 @@
import { Knex } from "knex";
import { TableName } from "../schemas";
export async function up(knex: Knex): Promise<void> {
if (await knex.schema.hasColumn(TableName.OidcConfig, "orgId")) {
await knex.schema.alterTable(TableName.OidcConfig, (t) => {
t.dropForeign("orgId");
t.foreign("orgId").references("id").inTable(TableName.Organization).onDelete("CASCADE");
});
}
}
export async function down(knex: Knex): Promise<void> {
if (await knex.schema.hasColumn(TableName.OidcConfig, "orgId")) {
await knex.schema.alterTable(TableName.OidcConfig, (t) => {
t.dropForeign("orgId");
t.foreign("orgId").references("id").inTable(TableName.Organization);
});
}
}


@@ -0,0 +1,23 @@
import { Knex } from "knex";
import { TableName } from "../schemas";
export async function up(knex: Knex): Promise<void> {
const hasEncryptionStrategy = await knex.schema.hasColumn(TableName.KmsServerRootConfig, "encryptionStrategy");
const hasTimestampsCol = await knex.schema.hasColumn(TableName.KmsServerRootConfig, "createdAt");
await knex.schema.alterTable(TableName.KmsServerRootConfig, (t) => {
if (!hasEncryptionStrategy) t.string("encryptionStrategy").defaultTo("SOFTWARE");
if (!hasTimestampsCol) t.timestamps(true, true, true);
});
}
export async function down(knex: Knex): Promise<void> {
const hasEncryptionStrategy = await knex.schema.hasColumn(TableName.KmsServerRootConfig, "encryptionStrategy");
const hasTimestampsCol = await knex.schema.hasColumn(TableName.KmsServerRootConfig, "createdAt");
await knex.schema.alterTable(TableName.KmsServerRootConfig, (t) => {
if (hasEncryptionStrategy) t.dropColumn("encryptionStrategy");
if (hasTimestampsCol) t.dropTimestamps(true);
});
}


@@ -0,0 +1,54 @@
import { Knex } from "knex";
import { TableName } from "../schemas";
import { createOnUpdateTrigger, dropOnUpdateTrigger } from "../utils";
export async function up(knex: Knex): Promise<void> {
if (!(await knex.schema.hasTable(TableName.TotpConfig))) {
await knex.schema.createTable(TableName.TotpConfig, (t) => {
t.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
t.uuid("userId").notNullable();
t.foreign("userId").references("id").inTable(TableName.Users).onDelete("CASCADE");
t.boolean("isVerified").defaultTo(false).notNullable();
t.binary("encryptedRecoveryCodes").notNullable();
t.binary("encryptedSecret").notNullable();
t.timestamps(true, true, true);
t.unique("userId");
});
await createOnUpdateTrigger(knex, TableName.TotpConfig);
}
const doesOrgMfaMethodColExist = await knex.schema.hasColumn(TableName.Organization, "selectedMfaMethod");
await knex.schema.alterTable(TableName.Organization, (t) => {
if (!doesOrgMfaMethodColExist) {
t.string("selectedMfaMethod");
}
});
const doesUserSelectedMfaMethodColExist = await knex.schema.hasColumn(TableName.Users, "selectedMfaMethod");
await knex.schema.alterTable(TableName.Users, (t) => {
if (!doesUserSelectedMfaMethodColExist) {
t.string("selectedMfaMethod");
}
});
}
export async function down(knex: Knex): Promise<void> {
await dropOnUpdateTrigger(knex, TableName.TotpConfig);
await knex.schema.dropTableIfExists(TableName.TotpConfig);
const doesOrgMfaMethodColExist = await knex.schema.hasColumn(TableName.Organization, "selectedMfaMethod");
await knex.schema.alterTable(TableName.Organization, (t) => {
if (doesOrgMfaMethodColExist) {
t.dropColumn("selectedMfaMethod");
}
});
const doesUserSelectedMfaMethodColExist = await knex.schema.hasColumn(TableName.Users, "selectedMfaMethod");
await knex.schema.alterTable(TableName.Users, (t) => {
if (doesUserSelectedMfaMethodColExist) {
t.dropColumn("selectedMfaMethod");
}
});
}


@@ -106,6 +106,7 @@ export * from "./secrets-v2";
export * from "./service-tokens";
export * from "./slack-integrations";
export * from "./super-admin";
export * from "./totp-configs";
export * from "./trusted-ips";
export * from "./user-actions";
export * from "./user-aliases";


@@ -11,7 +11,10 @@ import { TImmutableDBKeys } from "./models";
export const KmsRootConfigSchema = z.object({
id: z.string().uuid(),
encryptedRootKey: zodBuffer
encryptedRootKey: zodBuffer,
encryptionStrategy: z.string(),
createdAt: z.date(),
updatedAt: z.date()
});
export type TKmsRootConfig = z.infer<typeof KmsRootConfigSchema>;


@@ -117,6 +117,7 @@ export enum TableName {
ExternalKms = "external_kms",
InternalKms = "internal_kms",
InternalKmsKeyVersion = "internal_kms_key_version",
TotpConfig = "totp_configs",
// @depreciated
KmsKeyVersion = "kms_key_versions",
WorkflowIntegrations = "workflow_integrations",


@@ -21,7 +21,8 @@ export const OrganizationsSchema = z.object({
kmsDefaultKeyId: z.string().uuid().nullable().optional(),
kmsEncryptedDataKey: zodBuffer.nullable().optional(),
defaultMembershipRole: z.string().default("member"),
enforceMfa: z.boolean().default(false)
enforceMfa: z.boolean().default(false),
selectedMfaMethod: z.string().nullable().optional()
});
export type TOrganizations = z.infer<typeof OrganizationsSchema>;


@@ -0,0 +1,24 @@
// Code generated by automation script, DO NOT EDIT.
// Automated by pulling database and generating zod schema
// To update. Just run npm run generate:schema
// Written by akhilmhdh.
import { z } from "zod";
import { zodBuffer } from "@app/lib/zod";
import { TImmutableDBKeys } from "./models";
export const TotpConfigsSchema = z.object({
id: z.string().uuid(),
userId: z.string().uuid(),
isVerified: z.boolean().default(false),
encryptedRecoveryCodes: zodBuffer,
encryptedSecret: zodBuffer,
createdAt: z.date(),
updatedAt: z.date()
});
export type TTotpConfigs = z.infer<typeof TotpConfigsSchema>;
export type TTotpConfigsInsert = Omit<z.input<typeof TotpConfigsSchema>, TImmutableDBKeys>;
export type TTotpConfigsUpdate = Partial<Omit<z.input<typeof TotpConfigsSchema>, TImmutableDBKeys>>;


@@ -26,7 +26,8 @@ export const UsersSchema = z.object({
consecutiveFailedMfaAttempts: z.number().default(0).nullable().optional(),
isLocked: z.boolean().default(false).nullable().optional(),
temporaryLockDateEnd: z.date().nullable().optional(),
consecutiveFailedPasswordAttempts: z.number().default(0).nullable().optional()
consecutiveFailedPasswordAttempts: z.number().default(0).nullable().optional(),
selectedMfaMethod: z.string().nullable().optional()
});
export type TUsers = z.infer<typeof UsersSchema>;


@@ -2,6 +2,9 @@ import { Knex } from "knex";
import { TableName } from "./schemas";
interface PgTriggerResult {
rows: Array<{ exists: boolean }>;
}
export const createJunctionTable = (knex: Knex, tableName: TableName, table1Name: TableName, table2Name: TableName) =>
knex.schema.createTable(tableName, (table) => {
table.uuid("id", { primaryKey: true }).defaultTo(knex.fn.uuid());
@@ -28,13 +31,26 @@ DROP FUNCTION IF EXISTS on_update_timestamp() CASCADE;
// we would be using this to apply updatedAt wherever we want
// remember to set `timestamps(true,true,true)` before this on schema
export const createOnUpdateTrigger = (knex: Knex, tableName: string) =>
knex.raw(`
CREATE TRIGGER "${tableName}_updatedAt"
BEFORE UPDATE ON ${tableName}
FOR EACH ROW
EXECUTE PROCEDURE on_update_timestamp();
`);
export const createOnUpdateTrigger = async (knex: Knex, tableName: string) => {
const triggerExists = await knex.raw<PgTriggerResult>(`
SELECT EXISTS (
SELECT 1
FROM pg_trigger
WHERE tgname = '${tableName}_updatedAt'
);
`);
if (!triggerExists?.rows?.[0]?.exists) {
return knex.raw(`
CREATE TRIGGER "${tableName}_updatedAt"
BEFORE UPDATE ON ${tableName}
FOR EACH ROW
EXECUTE PROCEDURE on_update_timestamp();
`);
}
return null;
};
export const dropOnUpdateTrigger = (knex: Knex, tableName: string) =>
knex.raw(`DROP TRIGGER IF EXISTS "${tableName}_updatedAt" ON ${tableName}`);


@@ -122,6 +122,8 @@ export const registerSamlRouter = async (server: FastifyZodProvider) => {
},
`email: ${email} firstName: ${profile.firstName as string}`
);
throw new Error("Invalid saml request. Missing email or first name");
}
const userMetadata = Object.keys(profile.attributes || {})

View File

@@ -0,0 +1,58 @@
import * as pkcs11js from "pkcs11js";
import { getConfig } from "@app/lib/config/env";
import { logger } from "@app/lib/logger";
import { HsmModule } from "./hsm-types";
export const initializeHsmModule = () => {
const appCfg = getConfig();
// Create a new instance of PKCS11 module
const pkcs11 = new pkcs11js.PKCS11();
let isInitialized = false;
const initialize = () => {
if (!appCfg.isHsmConfigured) {
return;
}
try {
// Load the PKCS#11 module
pkcs11.load(appCfg.HSM_LIB_PATH!);
// Initialize the module
pkcs11.C_Initialize();
isInitialized = true;
logger.info("PKCS#11 module initialized");
} catch (err) {
logger.error("Failed to initialize PKCS#11 module:", err);
throw err;
}
};
const finalize = () => {
if (isInitialized) {
try {
pkcs11.C_Finalize();
isInitialized = false;
logger.info("PKCS#11 module finalized");
} catch (err) {
logger.error("Failed to finalize PKCS#11 module:", err);
throw err;
}
}
};
const getModule = (): HsmModule => ({
pkcs11,
isInitialized
});
return {
initialize,
finalize,
getModule
};
};
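A sketch of the intended lifecycle, mirroring how main.ts wires this up further down in the diff:

const hsmModule = initializeHsmModule();

// Loads the PKCS#11 library and calls C_Initialize (a no-op if no HSM is configured)
hsmModule.initialize();

// Hand the module to consumers, e.g. the HSM service factory
const { pkcs11, isInitialized } = hsmModule.getModule();

// On shutdown, finalize so the HSM driver can release its resources
process.on("SIGTERM", () => hsmModule.finalize());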

View File

@@ -0,0 +1,470 @@
import pkcs11js from "pkcs11js";
import { getConfig } from "@app/lib/config/env";
import { logger } from "@app/lib/logger";
import { HsmKeyType, HsmModule } from "./hsm-types";
type THsmServiceFactoryDep = {
hsmModule: HsmModule;
};
export type THsmServiceFactory = ReturnType<typeof hsmServiceFactory>;
type SyncOrAsync<T> = T | Promise<T>;
type SessionCallback<T> = (session: pkcs11js.Handle) => SyncOrAsync<T>;
// eslint-disable-next-line no-empty-pattern
export const hsmServiceFactory = ({ hsmModule: { isInitialized, pkcs11 } }: THsmServiceFactoryDep) => {
const appCfg = getConfig();
// Constants for buffer structures
const IV_LENGTH = 16; // Luna HSM typically expects a 16-byte IV for CBC
const BLOCK_SIZE = 16;
const HMAC_SIZE = 32;
const AES_KEY_SIZE = 256;
const HMAC_KEY_SIZE = 256;
const $withSession = async <T>(callbackWithSession: SessionCallback<T>): Promise<T> => {
const RETRY_INTERVAL = 200; // 200ms between attempts
const MAX_TIMEOUT = 90_000; // 90 seconds maximum total time
let sessionHandle: pkcs11js.Handle | null = null;
const removeSession = () => {
if (sessionHandle !== null) {
try {
pkcs11.C_Logout(sessionHandle);
pkcs11.C_CloseSession(sessionHandle);
logger.info("HSM: Terminated session successfully");
} catch (error) {
logger.error(error, "HSM: Failed to terminate session");
} finally {
sessionHandle = null;
}
}
};
try {
if (!pkcs11 || !isInitialized) {
throw new Error("PKCS#11 module is not initialized");
}
// Get slot list
let slots: pkcs11js.Handle[];
try {
slots = pkcs11.C_GetSlotList(false); // false to get all slots
} catch (error) {
throw new Error(`Failed to get slot list: ${(error as Error)?.message}`);
}
if (slots.length === 0) {
throw new Error("No slots available");
}
if (appCfg.HSM_SLOT >= slots.length) {
throw new Error(`HSM slot ${appCfg.HSM_SLOT} not found or not initialized`);
}
const slotId = slots[appCfg.HSM_SLOT];
const startTime = Date.now();
while (Date.now() - startTime < MAX_TIMEOUT) {
try {
// Open session
// eslint-disable-next-line no-bitwise
sessionHandle = pkcs11.C_OpenSession(slotId, pkcs11js.CKF_SERIAL_SESSION | pkcs11js.CKF_RW_SESSION);
// Login
try {
pkcs11.C_Login(sessionHandle, pkcs11js.CKU_USER, appCfg.HSM_PIN);
logger.info("HSM: Successfully authenticated");
break;
} catch (error) {
// Handle specific error cases
if (error instanceof pkcs11js.Pkcs11Error) {
if (error.code === pkcs11js.CKR_PIN_INCORRECT) {
// We throw instantly here to prevent further attempts, because if too many attempts are made, the HSM will potentially wipe all key material
logger.error(error, `HSM: Incorrect PIN detected for HSM slot ${appCfg.HSM_SLOT}`);
throw new Error("HSM: Incorrect HSM Pin detected. Please check the HSM configuration.");
}
if (error.code === pkcs11js.CKR_USER_ALREADY_LOGGED_IN) {
logger.warn("HSM: Session already logged in");
}
}
throw error; // Re-throw other errors
}
} catch (error) {
logger.warn(`HSM: Session creation failed. Retrying... Error: ${(error as Error)?.message}`);
if (sessionHandle !== null) {
try {
pkcs11.C_CloseSession(sessionHandle);
} catch (closeError) {
logger.error(closeError, "HSM: Failed to close session");
}
sessionHandle = null;
}
// Wait before retrying
// eslint-disable-next-line no-await-in-loop
await new Promise((resolve) => {
setTimeout(resolve, RETRY_INTERVAL);
});
}
}
if (sessionHandle === null) {
throw new Error("HSM: Failed to open session after maximum retries");
}
// Execute callback with session handle
const result = await callbackWithSession(sessionHandle);
removeSession();
return result;
} catch (error) {
logger.error(error, "HSM: Failed to open session");
throw error;
} finally {
// Ensure cleanup
removeSession();
}
};
const $findKey = (sessionHandle: pkcs11js.Handle, type: HsmKeyType) => {
const label = type === HsmKeyType.HMAC ? `${appCfg.HSM_KEY_LABEL}_HMAC` : appCfg.HSM_KEY_LABEL;
const keyType = type === HsmKeyType.HMAC ? pkcs11js.CKK_GENERIC_SECRET : pkcs11js.CKK_AES;
const template = [
{ type: pkcs11js.CKA_CLASS, value: pkcs11js.CKO_SECRET_KEY },
{ type: pkcs11js.CKA_KEY_TYPE, value: keyType },
{ type: pkcs11js.CKA_LABEL, value: label }
];
try {
// Initialize search
pkcs11.C_FindObjectsInit(sessionHandle, template);
try {
// Find first matching object
const handles = pkcs11.C_FindObjects(sessionHandle, 1);
if (handles.length === 0) {
throw new Error("Failed to find master key");
}
return handles[0]; // Return the key handle
} finally {
// Always finalize the search operation
pkcs11.C_FindObjectsFinal(sessionHandle);
}
} catch (error) {
return null;
}
};
const $keyExists = (session: pkcs11js.Handle, type: HsmKeyType): boolean => {
try {
const key = $findKey(session, type);
// $findKey returns null when no matching key is found
// Return true only if we got back a valid key handle
return !!key && key.length > 0;
} catch (error) {
// If the lookup throws, treat it as "no key found"
// eslint-disable-next-line @typescript-eslint/no-unsafe-member-access, @typescript-eslint/no-explicit-any, @typescript-eslint/no-unsafe-call
logger.error(error, "HSM: Failed while checking for HSM key presence");
if (error instanceof pkcs11js.Pkcs11Error) {
if (error.code === pkcs11js.CKR_OBJECT_HANDLE_INVALID) {
return false;
}
}
return false;
}
};
const encrypt: {
(data: Buffer, providedSession: pkcs11js.Handle): Promise<Buffer>;
(data: Buffer): Promise<Buffer>;
} = async (data: Buffer, providedSession?: pkcs11js.Handle) => {
if (!pkcs11 || !isInitialized) {
throw new Error("PKCS#11 module is not initialized");
}
const $performEncryption = (sessionHandle: pkcs11js.Handle) => {
try {
const aesKey = $findKey(sessionHandle, HsmKeyType.AES);
if (!aesKey) {
throw new Error("HSM: Encryption failed, AES key not found");
}
const hmacKey = $findKey(sessionHandle, HsmKeyType.HMAC);
if (!hmacKey) {
throw new Error("HSM: Encryption failed, HMAC key not found");
}
const iv = Buffer.alloc(IV_LENGTH);
pkcs11.C_GenerateRandom(sessionHandle, iv);
const encryptMechanism = {
mechanism: pkcs11js.CKM_AES_CBC_PAD,
parameter: iv
};
pkcs11.C_EncryptInit(sessionHandle, encryptMechanism, aesKey);
// Calculate max buffer size (input length + potential full block of padding)
const maxEncryptedLength = Math.ceil(data.length / BLOCK_SIZE) * BLOCK_SIZE + BLOCK_SIZE;
// Encrypt the data - this returns the encrypted data directly
const encryptedData = pkcs11.C_Encrypt(sessionHandle, data, Buffer.alloc(maxEncryptedLength));
// Initialize HMAC
const hmacMechanism = {
mechanism: pkcs11js.CKM_SHA256_HMAC
};
pkcs11.C_SignInit(sessionHandle, hmacMechanism, hmacKey);
// Sign the IV and encrypted data
pkcs11.C_SignUpdate(sessionHandle, iv);
pkcs11.C_SignUpdate(sessionHandle, encryptedData);
// Get the HMAC
const hmac = Buffer.alloc(HMAC_SIZE);
pkcs11.C_SignFinal(sessionHandle, hmac);
// Combine encrypted data and HMAC [Encrypted Data | HMAC]
const finalBuffer = Buffer.alloc(encryptedData.length + hmac.length);
encryptedData.copy(finalBuffer);
hmac.copy(finalBuffer, encryptedData.length);
return Buffer.concat([iv, finalBuffer]);
} catch (error) {
logger.error(error, "HSM: Failed to perform encryption");
throw new Error(`HSM: Encryption failed: ${(error as Error)?.message}`);
}
};
if (providedSession) {
return $performEncryption(providedSession);
}
const result = await $withSession($performEncryption);
return result;
};
const decrypt: {
(encryptedBlob: Buffer, providedSession: pkcs11js.Handle): Promise<Buffer>;
(encryptedBlob: Buffer): Promise<Buffer>;
} = async (encryptedBlob: Buffer, providedSession?: pkcs11js.Handle) => {
if (!pkcs11 || !isInitialized) {
throw new Error("PKCS#11 module is not initialized");
}
const $performDecryption = (sessionHandle: pkcs11js.Handle) => {
try {
// structure is: [IV (16 bytes) | Encrypted Data (N bytes) | HMAC (32 bytes)]
const iv = encryptedBlob.subarray(0, IV_LENGTH);
const encryptedDataWithHmac = encryptedBlob.subarray(IV_LENGTH);
// Split encrypted data and HMAC
const hmac = encryptedDataWithHmac.subarray(-HMAC_SIZE); // Last 32 bytes are HMAC
const encryptedData = encryptedDataWithHmac.subarray(0, -HMAC_SIZE); // Everything except last 32 bytes
// Find the keys
const aesKey = $findKey(sessionHandle, HsmKeyType.AES);
if (!aesKey) {
throw new Error("HSM: Decryption failed, AES key not found");
}
const hmacKey = $findKey(sessionHandle, HsmKeyType.HMAC);
if (!hmacKey) {
throw new Error("HSM: Decryption failed, HMAC key not found");
}
// Verify HMAC first
const hmacMechanism = {
mechanism: pkcs11js.CKM_SHA256_HMAC
};
pkcs11.C_VerifyInit(sessionHandle, hmacMechanism, hmacKey);
pkcs11.C_VerifyUpdate(sessionHandle, iv);
pkcs11.C_VerifyUpdate(sessionHandle, encryptedData);
try {
pkcs11.C_VerifyFinal(sessionHandle, hmac);
} catch (error) {
logger.error(error, "HSM: HMAC verification failed");
throw new Error("HSM: Decryption failed"); // Generic error for failed verification
}
// Only decrypt if verification passed
const decryptMechanism = {
mechanism: pkcs11js.CKM_AES_CBC_PAD,
parameter: iv
};
pkcs11.C_DecryptInit(sessionHandle, decryptMechanism, aesKey);
const tempBuffer = Buffer.alloc(encryptedData.length);
const decryptedData = pkcs11.C_Decrypt(sessionHandle, encryptedData, tempBuffer);
// Create a new buffer from the decrypted data
return Buffer.from(decryptedData);
} catch (error) {
logger.error(error, "HSM: Failed to perform decryption");
throw new Error("HSM: Decryption failed"); // Generic error for failed decryption, to avoid leaking details about why it failed (such as padding related errors)
}
};
if (providedSession) {
return $performDecryption(providedSession);
}
const result = await $withSession($performDecryption);
return result;
};
// We test the core functionality of the PKCS#11 module that we are using throughout Infisical. This is to ensure that the user doesn't configure a faulty or unsupported HSM device.
const $testPkcs11Module = async (session: pkcs11js.Handle) => {
try {
if (!pkcs11 || !isInitialized) {
throw new Error("PKCS#11 module is not initialized");
}
if (!session) {
throw new Error("HSM: Attempted to run test without a valid session");
}
const randomData = pkcs11.C_GenerateRandom(session, Buffer.alloc(500));
const encryptedData = await encrypt(randomData, session);
const decryptedData = await decrypt(encryptedData, session);
const randomDataHex = randomData.toString("hex");
const decryptedDataHex = decryptedData.toString("hex");
if (randomDataHex !== decryptedDataHex && Buffer.compare(randomData, decryptedData)) {
throw new Error("HSM: Startup test failed. Decrypted data does not match original data");
}
return true;
} catch (error) {
logger.error(error, "HSM: Error testing PKCS#11 module");
return false;
}
};
const isActive = async () => {
if (!isInitialized || !appCfg.isHsmConfigured) {
return false;
}
let pkcs11TestPassed = false;
try {
pkcs11TestPassed = await $withSession($testPkcs11Module);
} catch (err) {
logger.error(err, "HSM: Error testing PKCS#11 module");
}
return appCfg.isHsmConfigured && isInitialized && pkcs11TestPassed;
};
const startService = async () => {
if (!appCfg.isHsmConfigured || !pkcs11 || !isInitialized) return;
try {
await $withSession(async (sessionHandle) => {
// Check if master key exists, create if not
const genericAttributes = [
{ type: pkcs11js.CKA_TOKEN, value: true }, // Persistent storage
{ type: pkcs11js.CKA_EXTRACTABLE, value: false }, // Cannot be extracted
{ type: pkcs11js.CKA_SENSITIVE, value: true }, // Sensitive value
{ type: pkcs11js.CKA_PRIVATE, value: true } // Requires authentication
];
if (!$keyExists(sessionHandle, HsmKeyType.AES)) {
// Template for generating 256-bit AES master key
const keyTemplate = [
{ type: pkcs11js.CKA_CLASS, value: pkcs11js.CKO_SECRET_KEY },
{ type: pkcs11js.CKA_KEY_TYPE, value: pkcs11js.CKK_AES },
{ type: pkcs11js.CKA_VALUE_LEN, value: AES_KEY_SIZE / 8 },
{ type: pkcs11js.CKA_LABEL, value: appCfg.HSM_KEY_LABEL! },
{ type: pkcs11js.CKA_ENCRYPT, value: true }, // Allow encryption
{ type: pkcs11js.CKA_DECRYPT, value: true }, // Allow decryption
...genericAttributes
];
// Generate the key
pkcs11.C_GenerateKey(
sessionHandle,
{
mechanism: pkcs11js.CKM_AES_KEY_GEN
},
keyTemplate
);
logger.info(`HSM: Master key created successfully with label: ${appCfg.HSM_KEY_LABEL}`);
}
// Check if HMAC key exists, create if not
if (!$keyExists(sessionHandle, HsmKeyType.HMAC)) {
const hmacKeyTemplate = [
{ type: pkcs11js.CKA_CLASS, value: pkcs11js.CKO_SECRET_KEY },
{ type: pkcs11js.CKA_KEY_TYPE, value: pkcs11js.CKK_GENERIC_SECRET },
{ type: pkcs11js.CKA_VALUE_LEN, value: HMAC_KEY_SIZE / 8 }, // 256-bit key
{ type: pkcs11js.CKA_LABEL, value: `${appCfg.HSM_KEY_LABEL!}_HMAC` },
{ type: pkcs11js.CKA_SIGN, value: true }, // Allow signing
{ type: pkcs11js.CKA_VERIFY, value: true }, // Allow verification
...genericAttributes
];
// Generate the HMAC key
pkcs11.C_GenerateKey(
sessionHandle,
{
mechanism: pkcs11js.CKM_GENERIC_SECRET_KEY_GEN
},
hmacKeyTemplate
);
logger.info(`HSM: HMAC key created successfully with label: ${appCfg.HSM_KEY_LABEL}_HMAC`);
}
// Get slot info to check supported mechanisms
const slotId = pkcs11.C_GetSessionInfo(sessionHandle).slotID;
const mechanisms = pkcs11.C_GetMechanismList(slotId);
// Check for AES CBC PAD support
const hasAesCbc = mechanisms.includes(pkcs11js.CKM_AES_CBC_PAD);
if (!hasAesCbc) {
throw new Error(`Required mechanism CKM_AES_CBC_PAD not supported by HSM`);
}
// Run test encryption/decryption
const testPassed = await $testPkcs11Module(sessionHandle);
if (!testPassed) {
throw new Error("PKCS#11 module test failed. Please ensure that the HSM is correctly configured.");
}
});
} catch (error) {
logger.error(error, "HSM: Error initializing HSM service:");
throw error;
}
};
return {
encrypt,
startService,
isActive,
decrypt
};
};
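To make the wire format explicit: encrypt() emits [IV (16 bytes) | ciphertext | HMAC (32 bytes)], and decrypt() verifies the HMAC over IV + ciphertext before the AES key is ever used. A standalone sketch of the slicing, with the constants repeated for illustration:

const IV_LENGTH = 16;
const HMAC_SIZE = 32;

// Split an encrypted blob into its three parts without decrypting anything
const splitHsmBlob = (blob: Buffer) => {
  if (blob.length < IV_LENGTH + HMAC_SIZE) {
    throw new Error("Blob too short to contain an IV and HMAC");
  }
  return {
    iv: blob.subarray(0, IV_LENGTH),
    ciphertext: blob.subarray(IV_LENGTH, blob.length - HMAC_SIZE),
    hmac: blob.subarray(blob.length - HMAC_SIZE)
  };
};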

View File

@@ -0,0 +1,11 @@
import pkcs11js from "pkcs11js";
export type HsmModule = {
pkcs11: pkcs11js.PKCS11;
isInitialized: boolean;
};
export enum HsmKeyType {
AES = "AES",
HMAC = "hmac"
}

View File

@@ -29,6 +29,7 @@ export const getDefaultOnPremFeatures = (): TFeatureSet => ({
auditLogStreams: false,
auditLogStreamLimit: 3,
samlSSO: false,
hsm: false,
oidcSSO: false,
scim: false,
ldap: false,

View File

@@ -46,6 +46,7 @@ export type TFeatureSet = {
auditLogStreams: false;
auditLogStreamLimit: 3;
samlSSO: false;
hsm: false;
oidcSSO: false;
scim: false;
ldap: false;

View File

@@ -17,7 +17,7 @@ import {
infisicalSymmetricDecrypt,
infisicalSymmetricEncypt
} from "@app/lib/crypto/encryption";
import { BadRequestError, ForbiddenRequestError, NotFoundError } from "@app/lib/errors";
import { BadRequestError, ForbiddenRequestError, NotFoundError, OidcAuthError } from "@app/lib/errors";
import { AuthMethod, AuthTokenType } from "@app/services/auth/auth-type";
import { TAuthTokenServiceFactory } from "@app/services/auth-token/auth-token-service";
import { TokenType } from "@app/services/auth-token/auth-token-types";
@@ -56,7 +56,7 @@ type TOidcConfigServiceFactoryDep = {
orgBotDAL: Pick<TOrgBotDALFactory, "findOne" | "create" | "transaction">;
licenseService: Pick<TLicenseServiceFactory, "getPlan" | "updateSubscriptionOrgMemberCount">;
tokenService: Pick<TAuthTokenServiceFactory, "createTokenForUser">;
smtpService: Pick<TSmtpService, "sendMail">;
smtpService: Pick<TSmtpService, "sendMail" | "verify">;
permissionService: Pick<TPermissionServiceFactory, "getOrgPermission">;
oidcConfigDAL: Pick<TOidcConfigDALFactory, "findOne" | "update" | "create">;
};
@@ -223,6 +223,7 @@ export const oidcConfigServiceFactory = ({
let newUser: TUsers | undefined;
if (serverCfg.trustOidcEmails) {
// we prioritize getting the most complete user to create the new alias under
newUser = await userDAL.findOne(
{
email,
@@ -230,6 +231,23 @@ export const oidcConfigServiceFactory = ({
},
tx
);
if (!newUser) {
// this fetches user entries created via invites
newUser = await userDAL.findOne(
{
username: email
},
tx
);
if (newUser && !newUser.isEmailVerified) {
// we automatically mark it as email-verified because we've configured trust for OIDC emails
newUser = await userDAL.updateById(newUser.id, {
isEmailVerified: true
});
}
}
}
if (!newUser) {
@@ -332,14 +350,20 @@ export const oidcConfigServiceFactory = ({
userId: user.id
});
await smtpService.sendMail({
template: SmtpTemplates.EmailVerification,
subjectLine: "Infisical confirmation code",
recipients: [user.email],
substitutions: {
code: token
}
});
await smtpService
.sendMail({
template: SmtpTemplates.EmailVerification,
subjectLine: "Infisical confirmation code",
recipients: [user.email],
substitutions: {
code: token
}
})
.catch((err: Error) => {
throw new OidcAuthError({
message: `Error sending email confirmation code for user registration - contact the Infisical instance admin. ${err.message}`
});
});
}
return { isUserCompleted, providerAuthToken };
@@ -395,6 +419,18 @@ export const oidcConfigServiceFactory = ({
message: `Organization bot for organization with ID '${org.id}' not found`,
name: "OrgBotNotFound"
});
const serverCfg = await getServerCfg();
if (isActive && !serverCfg.trustOidcEmails) {
const isSmtpConnected = await smtpService.verify();
if (!isSmtpConnected) {
throw new BadRequestError({
message:
"Cannot enable OIDC when there are issues with the instance's SMTP configuration. Bypass this by turning on trust for OIDC emails in the server admin console."
});
}
}
const key = infisicalSymmetricDecrypt({
ciphertext: orgBot.encryptedSymmetricKey,
iv: orgBot.symmetricKeyIV,

View File

@@ -29,4 +29,18 @@ function validateOrgSSO(actorAuthMethod: ActorAuthMethod, isOrgSsoEnforced: TOrg
}
}
export { isAuthMethodSaml, validateOrgSSO };
const escapeHandlebarsMissingMetadata = (obj: Record<string, string>) => {
const handler = {
get(target: Record<string, string>, prop: string) {
if (!(prop in target)) {
// eslint-disable-next-line no-param-reassign
target[prop] = `{{identity.metadata.${prop}}}`; // Add missing key as an "own" property
}
return target[prop];
}
};
return new Proxy(obj, handler);
};
export { escapeHandlebarsMissingMetadata, isAuthMethodSaml, validateOrgSSO };
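The Proxy makes any missing metadata key render back as its own placeholder instead of failing template compilation; a small sketch of the effect:

import handlebars from "handlebars";

const metadata = escapeHandlebarsMissingMetadata({ team: "platform" });

const template = handlebars.compile(
  JSON.stringify({ team: "{{identity.metadata.team}}", region: "{{identity.metadata.region}}" }),
  { data: false }
);

// "team" resolves normally; the absent "region" key is echoed back verbatim,
// e.g. {"team":"platform","region":"{{identity.metadata.region}}"}
console.log(template({ identity: { metadata } }));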

View File

@@ -21,7 +21,7 @@ import { TServiceTokenDALFactory } from "@app/services/service-token/service-tok
import { orgAdminPermissions, orgMemberPermissions, orgNoAccessPermissions, OrgPermissionSet } from "./org-permission";
import { TPermissionDALFactory } from "./permission-dal";
import { validateOrgSSO } from "./permission-fns";
import { escapeHandlebarsMissingMetadata, validateOrgSSO } from "./permission-fns";
import { TBuildOrgPermissionDTO, TBuildProjectPermissionDTO } from "./permission-service-types";
import {
buildServiceTokenProjectPermission,
@@ -227,11 +227,13 @@ export const permissionServiceFactory = ({
})) || [];
const rules = buildProjectPermissionRules(rolePermissions.concat(additionalPrivileges));
const templatedRules = handlebars.compile(JSON.stringify(rules), { data: false, strict: true });
const metadataKeyValuePair = objectify(
userProjectPermission.metadata,
(i) => i.key,
(i) => i.value
const templatedRules = handlebars.compile(JSON.stringify(rules), { data: false });
const metadataKeyValuePair = escapeHandlebarsMissingMetadata(
objectify(
userProjectPermission.metadata,
(i) => i.key,
(i) => i.value
)
);
const interpolateRules = templatedRules(
{
@@ -292,12 +294,15 @@ export const permissionServiceFactory = ({
})) || [];
const rules = buildProjectPermissionRules(rolePermissions.concat(additionalPrivileges));
const templatedRules = handlebars.compile(JSON.stringify(rules), { data: false, strict: true });
const metadataKeyValuePair = objectify(
identityProjectPermission.metadata,
(i) => i.key,
(i) => i.value
const templatedRules = handlebars.compile(JSON.stringify(rules), { data: false });
const metadataKeyValuePair = escapeHandlebarsMissingMetadata(
objectify(
identityProjectPermission.metadata,
(i) => i.key,
(i) => i.value
)
);
const interpolateRules = templatedRules(
{
identity: {

View File

@@ -163,10 +163,22 @@ const envSchema = z
SSL_CLIENT_CERTIFICATE_HEADER_KEY: zpStr(z.string().optional()).default("x-ssl-client-cert"),
WORKFLOW_SLACK_CLIENT_ID: zpStr(z.string().optional()),
WORKFLOW_SLACK_CLIENT_SECRET: zpStr(z.string().optional()),
ENABLE_MSSQL_SECRET_ROTATION_ENCRYPT: zodStrBool.default("true")
ENABLE_MSSQL_SECRET_ROTATION_ENCRYPT: zodStrBool.default("true"),
// HSM
HSM_LIB_PATH: zpStr(z.string().optional()),
HSM_PIN: zpStr(z.string().optional()),
HSM_KEY_LABEL: zpStr(z.string().optional()),
HSM_SLOT: z.coerce.number().optional().default(0)
})
// To ensure that basic encryption is always possible.
.refine(
(data) => Boolean(data.ENCRYPTION_KEY) || Boolean(data.ROOT_ENCRYPTION_KEY),
"Either ENCRYPTION_KEY or ROOT_ENCRYPTION_KEY must be defined."
)
.transform((data) => ({
...data,
DB_READ_REPLICAS: data.DB_READ_REPLICAS
? databaseReadReplicaSchema.parse(JSON.parse(data.DB_READ_REPLICAS))
: undefined,
@@ -175,10 +187,14 @@ const envSchema = z
isRedisConfigured: Boolean(data.REDIS_URL),
isDevelopmentMode: data.NODE_ENV === "development",
isProductionMode: data.NODE_ENV === "production" || IS_PACKAGED,
isSecretScanningConfigured:
Boolean(data.SECRET_SCANNING_GIT_APP_ID) &&
Boolean(data.SECRET_SCANNING_PRIVATE_KEY) &&
Boolean(data.SECRET_SCANNING_WEBHOOK_SECRET),
isHsmConfigured:
Boolean(data.HSM_LIB_PATH) && Boolean(data.HSM_PIN) && Boolean(data.HSM_KEY_LABEL) && data.HSM_SLOT !== undefined,
samlDefaultOrgSlug: data.DEFAULT_SAML_ORG_SLUG,
SECRET_SCANNING_ORG_WHITELIST: data.SECRET_SCANNING_ORG_WHITELIST?.split(",")
}));
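Spelled out, the new isHsmConfigured flag is true only when the three string variables are present (HSM_SLOT always carries its default of 0); a sketch of the gate in isolation:

// HSM_LIB_PATH, HSM_PIN and HSM_KEY_LABEL must all be set;
// HSM_SLOT defaults to 0 via z.coerce.number().default(0)
const isHsmConfigured = (env: NodeJS.ProcessEnv): boolean =>
  Boolean(env.HSM_LIB_PATH) && Boolean(env.HSM_PIN) && Boolean(env.HSM_KEY_LABEL);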

View File

@@ -133,3 +133,15 @@ export class ScimRequestError extends Error {
this.status = status;
}
}
export class OidcAuthError extends Error {
name: string;
error: unknown;
constructor({ name, error, message }: { message?: string; name?: string; error?: unknown }) {
super(message || "Something went wrong");
this.name = name || "OidcAuthError";
this.error = error;
}
}

View File

@@ -1,6 +1,8 @@
import dotenv from "dotenv";
import path from "path";
import { initializeHsmModule } from "@app/ee/services/hsm/hsm-fns";
import { initAuditLogDbConnection, initDbConnection } from "./db";
import { keyStoreFactory } from "./keystore/keystore";
import { formatSmtpConfig, initEnvConfig, IS_PACKAGED } from "./lib/config/env";
@@ -53,13 +55,17 @@ const run = async () => {
const queue = queueServiceFactory(appCfg.REDIS_URL);
const keyStore = keyStoreFactory(appCfg.REDIS_URL);
const server = await main({ db, auditLogDb, smtp, logger, queue, keyStore });
const hsmModule = initializeHsmModule();
hsmModule.initialize();
const server = await main({ db, auditLogDb, hsmModule: hsmModule.getModule(), smtp, logger, queue, keyStore });
const bootstrap = await bootstrapCheck({ db });
// eslint-disable-next-line
process.on("SIGINT", async () => {
await server.close();
await db.destroy();
hsmModule.finalize();
process.exit(0);
});
@@ -67,6 +73,7 @@ const run = async () => {
process.on("SIGTERM", async () => {
await server.close();
await db.destroy();
hsmModule.finalize();
process.exit(0);
});

View File

@@ -14,6 +14,7 @@ import fastify from "fastify";
import { Knex } from "knex";
import { Logger } from "pino";
import { HsmModule } from "@app/ee/services/hsm/hsm-types";
import { TKeyStoreFactory } from "@app/keystore/keystore";
import { getConfig, IS_PACKAGED } from "@app/lib/config/env";
import { TQueueServiceFactory } from "@app/queue";
@@ -36,16 +37,19 @@ type TMain = {
logger?: Logger;
queue: TQueueServiceFactory;
keyStore: TKeyStoreFactory;
hsmModule: HsmModule;
};
// Run the server!
export const main = async ({ db, auditLogDb, smtp, logger, queue, keyStore }: TMain) => {
export const main = async ({ db, hsmModule, auditLogDb, smtp, logger, queue, keyStore }: TMain) => {
const appCfg = getConfig();
const server = fastify({
logger: appCfg.NODE_ENV === "test" ? false : logger,
trustProxy: true,
connectionTimeout: 30 * 1000,
ignoreTrailingSlash: true
connectionTimeout: appCfg.isHsmConfigured ? 90_000 : 30_000,
ignoreTrailingSlash: true,
pluginTimeout: 40_000
}).withTypeProvider<ZodTypeProvider>();
server.setValidatorCompiler(validatorCompiler);
@@ -95,7 +99,7 @@ export const main = async ({ db, auditLogDb, smtp, logger, queue, keyStore }: TM
await server.register(maintenanceMode);
await server.register(registerRoutes, { smtp, queue, db, auditLogDb, keyStore });
await server.register(registerRoutes, { smtp, queue, db, auditLogDb, keyStore, hsmModule });
if (appCfg.isProductionMode) {
await server.register(registerExternalNextjs, {

View File

@@ -46,10 +46,10 @@ export const bootstrapCheck = async ({ db }: BootstrapOpt) => {
await createTransport(smtpCfg)
.verify()
.then(async () => {
console.info("SMTP successfully connected");
console.info(`SMTP - Verified connection to ${appCfg.SMTP_HOST}:${appCfg.SMTP_PORT}`);
})
.catch((err) => {
console.error(`SMTP - Failed to connect to ${appCfg.SMTP_HOST}:${appCfg.SMTP_PORT}`);
.catch((err: Error) => {
console.error(`SMTP - Failed to connect to ${appCfg.SMTP_HOST}:${appCfg.SMTP_PORT} - ${err.message}`);
logger.error(err);
});

View File

@@ -10,6 +10,7 @@ import {
GatewayTimeoutError,
InternalServerError,
NotFoundError,
OidcAuthError,
RateLimitError,
ScimRequestError,
UnauthorizedError
@@ -83,7 +84,10 @@ export const fastifyErrHandler = fastifyPlugin(async (server: FastifyZodProvider
status: error.status,
detail: error.detail
});
// Handle JWT errors and make them more human-readable for the end-user.
} else if (error instanceof OidcAuthError) {
void res
.status(HttpStatusCodes.InternalServerError)
.send({ statusCode: HttpStatusCodes.InternalServerError, message: error.message, error: error.name });
} else if (error instanceof jwt.JsonWebTokenError) {
const message = (() => {
if (error.message === JWTErrors.JwtExpired) {

View File

@@ -1,5 +1,4 @@
import { CronJob } from "cron";
// import { Redis } from "ioredis";
import { Knex } from "knex";
import { z } from "zod";
@@ -31,6 +30,8 @@ import { externalKmsServiceFactory } from "@app/ee/services/external-kms/externa
import { groupDALFactory } from "@app/ee/services/group/group-dal";
import { groupServiceFactory } from "@app/ee/services/group/group-service";
import { userGroupMembershipDALFactory } from "@app/ee/services/group/user-group-membership-dal";
import { hsmServiceFactory } from "@app/ee/services/hsm/hsm-service";
import { HsmModule } from "@app/ee/services/hsm/hsm-types";
import { identityProjectAdditionalPrivilegeDALFactory } from "@app/ee/services/identity-project-additional-privilege/identity-project-additional-privilege-dal";
import { identityProjectAdditionalPrivilegeServiceFactory } from "@app/ee/services/identity-project-additional-privilege/identity-project-additional-privilege-service";
import { identityProjectAdditionalPrivilegeV2ServiceFactory } from "@app/ee/services/identity-project-additional-privilege-v2/identity-project-additional-privilege-v2-service";
@@ -200,6 +201,8 @@ import { getServerCfg, superAdminServiceFactory } from "@app/services/super-admi
import { telemetryDALFactory } from "@app/services/telemetry/telemetry-dal";
import { telemetryQueueServiceFactory } from "@app/services/telemetry/telemetry-queue";
import { telemetryServiceFactory } from "@app/services/telemetry/telemetry-service";
import { totpConfigDALFactory } from "@app/services/totp/totp-config-dal";
import { totpServiceFactory } from "@app/services/totp/totp-service";
import { userDALFactory } from "@app/services/user/user-dal";
import { userServiceFactory } from "@app/services/user/user-service";
import { userAliasDALFactory } from "@app/services/user-alias/user-alias-dal";
@@ -223,10 +226,18 @@ export const registerRoutes = async (
{
auditLogDb,
db,
hsmModule,
smtp: smtpService,
queue: queueService,
keyStore
}: { auditLogDb?: Knex; db: Knex; smtp: TSmtpService; queue: TQueueServiceFactory; keyStore: TKeyStoreFactory }
}: {
auditLogDb?: Knex;
db: Knex;
hsmModule: HsmModule;
smtp: TSmtpService;
queue: TQueueServiceFactory;
keyStore: TKeyStoreFactory;
}
) => {
const appCfg = getConfig();
await server.register(registerSecretScannerGhApp, { prefix: "/ss-webhook" });
@@ -339,6 +350,7 @@ export const registerRoutes = async (
const slackIntegrationDAL = slackIntegrationDALFactory(db);
const projectSlackConfigDAL = projectSlackConfigDALFactory(db);
const workflowIntegrationDAL = workflowIntegrationDALFactory(db);
const totpConfigDAL = totpConfigDALFactory(db);
const externalGroupOrgRoleMappingDAL = externalGroupOrgRoleMappingDALFactory(db);
@@ -352,14 +364,21 @@ export const registerRoutes = async (
projectDAL
});
const licenseService = licenseServiceFactory({ permissionService, orgDAL, licenseDAL, keyStore });
const hsmService = hsmServiceFactory({
hsmModule
});
const kmsService = kmsServiceFactory({
kmsRootConfigDAL,
keyStore,
kmsDAL,
internalKmsDAL,
orgDAL,
projectDAL
projectDAL,
hsmService
});
const externalKmsService = externalKmsServiceFactory({
kmsDAL,
kmsService,
@@ -495,12 +514,19 @@ export const registerRoutes = async (
projectMembershipDAL
});
const loginService = authLoginServiceFactory({ userDAL, smtpService, tokenService, orgDAL });
const totpService = totpServiceFactory({
totpConfigDAL,
userDAL,
kmsService
});
const loginService = authLoginServiceFactory({ userDAL, smtpService, tokenService, orgDAL, totpService });
const passwordService = authPaswordServiceFactory({
tokenService,
smtpService,
authDAL,
userDAL
userDAL,
totpConfigDAL
});
const projectBotService = projectBotServiceFactory({ permissionService, projectBotDAL, projectDAL });
@@ -556,6 +582,7 @@ export const registerRoutes = async (
userDAL,
authService: loginService,
serverCfgDAL: superAdminDAL,
kmsRootConfigDAL,
orgService,
keyStore,
licenseService,
@@ -1261,10 +1288,13 @@ export const registerRoutes = async (
});
await superAdminService.initServerCfg();
// setup the communication with the license key server
await licenseService.init();
// Start HSM service if it's configured/enabled.
await hsmService.startService();
await telemetryQueue.startTelemetryCheck();
await dailyResourceCleanUp.startCleanUp();
await dailyExpiringPkiItemAlert.startSendingAlerts();
@@ -1342,13 +1372,15 @@ export const registerRoutes = async (
secretSharing: secretSharingService,
userEngagement: userEngagementService,
externalKms: externalKmsService,
hsm: hsmService,
cmek: cmekService,
orgAdmin: orgAdminService,
slack: slackService,
workflowIntegration: workflowIntegrationService,
migration: migrationService,
externalGroupOrgRoleMapping: externalGroupOrgRoleMappingService,
projectTemplate: projectTemplateService
projectTemplate: projectTemplateService,
totp: totpService
});
const cronJobs: CronJob[] = [];

View File

@@ -7,6 +7,7 @@ import { readLimit, writeLimit } from "@app/server/config/rateLimiter";
import { verifySuperAdmin } from "@app/server/plugins/auth/superAdmin";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AuthMode } from "@app/services/auth/auth-type";
import { RootKeyEncryptionStrategy } from "@app/services/kms/kms-types";
import { getServerCfg } from "@app/services/super-admin/super-admin-service";
import { LoginMethod } from "@app/services/super-admin/super-admin-types";
import { PostHogEventTypes } from "@app/services/telemetry/telemetry-types";
@@ -195,6 +196,57 @@ export const registerAdminRouter = async (server: FastifyZodProvider) => {
}
});
server.route({
method: "GET",
url: "/encryption-strategies",
config: {
rateLimit: readLimit
},
schema: {
response: {
200: z.object({
strategies: z
.object({
strategy: z.nativeEnum(RootKeyEncryptionStrategy),
enabled: z.boolean()
})
.array()
})
}
},
onRequest: (req, res, done) => {
verifyAuth([AuthMode.JWT])(req, res, () => {
verifySuperAdmin(req, res, done);
});
},
handler: async () => {
const encryptionDetails = await server.services.superAdmin.getConfiguredEncryptionStrategies();
return encryptionDetails;
}
});
server.route({
method: "PATCH",
url: "/encryption-strategies",
config: {
rateLimit: writeLimit
},
schema: {
body: z.object({
strategy: z.nativeEnum(RootKeyEncryptionStrategy)
})
},
onRequest: (req, res, done) => {
verifyAuth([AuthMode.JWT])(req, res, () => {
verifySuperAdmin(req, res, done);
});
},
handler: async (req) => {
await server.services.superAdmin.updateRootEncryptionStrategy(req.body.strategy);
}
});
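A hedged sketch of driving the two admin endpoints from a client; the /api/v1/admin prefix, host, and the "HSM" enum value are assumptions, not confirmed by this diff:

// List which root-key encryption strategies the instance can use
const res = await fetch("https://infisical.example.com/api/v1/admin/encryption-strategies", {
  headers: { Authorization: `Bearer ${adminJwt}` }
});
const { strategies } = await res.json();

// Re-wrap the root KMS key under a different strategy
await fetch("https://infisical.example.com/api/v1/admin/encryption-strategies", {
  method: "PATCH",
  headers: { Authorization: `Bearer ${adminJwt}`, "Content-Type": "application/json" },
  body: JSON.stringify({ strategy: "HSM" })
});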
server.route({
method: "POST",
url: "/signup",

View File

@@ -108,7 +108,8 @@ export const registerAuthRoutes = async (server: FastifyZodProvider) => {
tokenVersionId: tokenVersion.id,
accessVersion: tokenVersion.accessVersion,
organizationId: decodedToken.organizationId,
isMfaVerified: decodedToken.isMfaVerified
isMfaVerified: decodedToken.isMfaVerified,
mfaMethod: decodedToken.mfaMethod
},
appCfg.AUTH_SECRET,
{ expiresIn: appCfg.JWT_AUTH_LIFETIME }

View File

@@ -840,4 +840,91 @@ export const registerDashboardRouter = async (server: FastifyZodProvider) => {
};
}
});
server.route({
method: "GET",
url: "/secrets-by-keys",
config: {
rateLimit: secretsLimit
},
schema: {
security: [
{
bearerAuth: []
}
],
querystring: z.object({
projectId: z.string().trim(),
environment: z.string().trim(),
secretPath: z.string().trim().default("/").transform(removeTrailingSlash),
keys: z.string().trim().transform(decodeURIComponent)
}),
response: {
200: z.object({
secrets: secretRawSchema
.extend({
secretPath: z.string().optional(),
tags: SecretTagsSchema.pick({
id: true,
slug: true,
color: true
})
.extend({ name: z.string() })
.array()
.optional()
})
.array()
.optional()
})
}
},
onRequest: verifyAuth([AuthMode.JWT]),
handler: async (req) => {
const { secretPath, projectId, environment } = req.query;
const keys = req.query.keys?.split(",").filter((key) => Boolean(key.trim())) ?? [];
if (!keys.length) throw new BadRequestError({ message: "One or more keys required" });
const { secrets } = await server.services.secret.getSecretsRaw({
actorId: req.permission.id,
actor: req.permission.type,
actorOrgId: req.permission.orgId,
environment,
actorAuthMethod: req.permission.authMethod,
projectId,
path: secretPath,
keys
});
await server.services.auditLog.createAuditLog({
projectId,
...req.auditLogInfo,
event: {
type: EventType.GET_SECRETS,
metadata: {
environment,
secretPath,
numberOfSecrets: secrets.length
}
}
});
if (getUserAgentType(req.headers["user-agent"]) !== UserAgentType.K8_OPERATOR) {
await server.services.telemetry.sendPostHogEvents({
event: PostHogEventTypes.SecretPulled,
distinctId: getTelemetryDistinctId(req),
properties: {
numberOfSecrets: secrets.length,
workspaceId: projectId,
environment,
secretPath,
channel: getUserAgentType(req.headers["user-agent"]),
...req.auditLogInfo
}
});
}
return { secrets };
}
});
};
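A sketch of calling the new endpoint; note that keys travels as one comma-separated, URI-encoded query value, matching the transform(decodeURIComponent) above. The route prefix and host are assumptions:

const params = new URLSearchParams({
  projectId: "<project-id>",
  environment: "dev",
  secretPath: "/",
  keys: encodeURIComponent("DB_HOST,DB_PASSWORD")
});

const res = await fetch(
  `https://infisical.example.com/api/v1/dashboard/secrets-by-keys?${params.toString()}`,
  { headers: { Authorization: `Bearer ${jwt}` } }
);
const { secrets } = await res.json();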

View File

@@ -15,7 +15,7 @@ import { AUDIT_LOGS, ORGANIZATIONS } from "@app/lib/api-docs";
import { getLastMidnightDateISO } from "@app/lib/fn";
import { readLimit, writeLimit } from "@app/server/config/rateLimiter";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { ActorType, AuthMode } from "@app/services/auth/auth-type";
import { ActorType, AuthMode, MfaMethod } from "@app/services/auth/auth-type";
import { integrationAuthPubSchema } from "../sanitizedSchemas";
@@ -259,7 +259,8 @@ export const registerOrgRouter = async (server: FastifyZodProvider) => {
message: "Membership role must be a valid slug"
})
.optional(),
enforceMfa: z.boolean().optional()
enforceMfa: z.boolean().optional(),
selectedMfaMethod: z.nativeEnum(MfaMethod).optional()
}),
response: {
200: z.object({

View File

@@ -169,4 +169,103 @@ export const registerUserRouter = async (server: FastifyZodProvider) => {
return groupMemberships;
}
});
server.route({
method: "GET",
url: "/me/totp",
config: {
rateLimit: readLimit
},
schema: {
response: {
200: z.object({
isVerified: z.boolean(),
recoveryCodes: z.string().array()
})
}
},
onRequest: verifyAuth([AuthMode.JWT]),
handler: async (req) => {
return server.services.totp.getUserTotpConfig({
userId: req.permission.id
});
}
});
server.route({
method: "DELETE",
url: "/me/totp",
config: {
rateLimit: writeLimit
},
onRequest: verifyAuth([AuthMode.JWT]),
handler: async (req) => {
return server.services.totp.deleteUserTotpConfig({
userId: req.permission.id
});
}
});
server.route({
method: "POST",
url: "/me/totp/register",
config: {
rateLimit: writeLimit
},
schema: {
response: {
200: z.object({
otpUrl: z.string(),
recoveryCodes: z.string().array()
})
}
},
onRequest: verifyAuth([AuthMode.JWT], {
requireOrg: false
}),
handler: async (req) => {
return server.services.totp.registerUserTotp({
userId: req.permission.id
});
}
});
server.route({
method: "POST",
url: "/me/totp/verify",
config: {
rateLimit: writeLimit
},
schema: {
body: z.object({
totp: z.string()
}),
response: {
200: z.object({})
}
},
onRequest: verifyAuth([AuthMode.JWT], {
requireOrg: false
}),
handler: async (req) => {
return server.services.totp.verifyUserTotpConfig({
userId: req.permission.id,
totp: req.body.totp
});
}
});
server.route({
method: "POST",
url: "/me/totp/recovery-codes",
config: {
rateLimit: writeLimit
},
onRequest: verifyAuth([AuthMode.JWT]),
handler: async (req) => {
return server.services.totp.createUserTotpRecoveryCodes({
userId: req.permission.id
});
}
});
};
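Read together, the routes imply an enrollment flow of register → scan otpUrl in an authenticator app → verify; a hedged client-side sketch where api() is a hypothetical helper and the /api/v1/user prefix is assumed:

// 1. Register: returns an otpauth:// URL plus one-time recovery codes
const { otpUrl, recoveryCodes } = await api("POST", "/api/v1/user/me/totp/register");

// 2. Verify: submit the current 6-digit code to activate TOTP
await api("POST", "/api/v1/user/me/totp/verify", { totp: "123456" });

// 3. Later: inspect the config or rotate recovery codes
const config = await api("GET", "/api/v1/user/me/totp");
await api("POST", "/api/v1/user/me/totp/recovery-codes");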

View File

@@ -2,8 +2,9 @@ import jwt from "jsonwebtoken";
import { z } from "zod";
import { getConfig } from "@app/lib/config/env";
import { BadRequestError, NotFoundError } from "@app/lib/errors";
import { mfaRateLimit } from "@app/server/config/rateLimiter";
import { AuthModeMfaJwtTokenPayload, AuthTokenType } from "@app/services/auth/auth-type";
import { AuthModeMfaJwtTokenPayload, AuthTokenType, MfaMethod } from "@app/services/auth/auth-type";
export const registerMfaRouter = async (server: FastifyZodProvider) => {
const cfg = getConfig();
@@ -49,6 +50,38 @@ export const registerMfaRouter = async (server: FastifyZodProvider) => {
}
});
server.route({
method: "GET",
url: "/mfa/check/totp",
config: {
rateLimit: mfaRateLimit
},
schema: {
response: {
200: z.object({
isVerified: z.boolean()
})
}
},
handler: async (req) => {
try {
const totpConfig = await server.services.totp.getUserTotpConfig({
userId: req.mfa.userId
});
return {
isVerified: Boolean(totpConfig)
};
} catch (error) {
if (error instanceof NotFoundError || error instanceof BadRequestError) {
return { isVerified: false };
}
throw error;
}
}
});
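During login, a frontend can use this probe to decide which MFA prompt to render, then call /mfa/verify with the matching method; a sketch where api() and the route prefix are hypothetical:

const { isVerified } = await api("GET", "/api/v2/auth/mfa/check/totp", { token: mfaJwt });

// Fall back to email codes when the user has no verified TOTP config
const mfaMethod = isVerified ? "totp" : "email";
const { token } = await api("POST", "/api/v2/auth/mfa/verify", {
  token: mfaJwt,
  body: { mfaToken: userInput, mfaMethod }
});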
server.route({
url: "/mfa/verify",
method: "POST",
@@ -57,7 +90,8 @@ export const registerMfaRouter = async (server: FastifyZodProvider) => {
},
schema: {
body: z.object({
mfaToken: z.string().trim()
mfaToken: z.string().trim(),
mfaMethod: z.nativeEnum(MfaMethod).optional().default(MfaMethod.EMAIL)
}),
response: {
200: z.object({
@@ -86,7 +120,8 @@ export const registerMfaRouter = async (server: FastifyZodProvider) => {
ip: req.realIp,
userId: req.mfa.userId,
orgId: req.mfa.orgId,
mfaToken: req.body.mfaToken
mfaToken: req.body.mfaToken,
mfaMethod: req.body.mfaMethod
});
void res.setCookie("jid", token.refresh, {

View File

@@ -27,7 +27,7 @@ export const registerProjectMembershipRouter = async (server: FastifyZodProvider
body: z.object({
emails: z.string().email().array().default([]).describe(PROJECT_USERS.INVITE_MEMBER.emails),
usernames: z.string().array().default([]).describe(PROJECT_USERS.INVITE_MEMBER.usernames),
roleSlugs: z.string().array().optional().describe(PROJECT_USERS.INVITE_MEMBER.roleSlugs)
roleSlugs: z.string().array().min(1).optional().describe(PROJECT_USERS.INVITE_MEMBER.roleSlugs)
}),
response: {
200: z.object({
@@ -49,7 +49,7 @@ export const registerProjectMembershipRouter = async (server: FastifyZodProvider
projects: [
{
id: req.params.projectId,
projectRoleSlug: [ProjectMembershipRole.Member]
projectRoleSlug: req.body.roleSlugs || [ProjectMembershipRole.Member]
}
]
});

View File

@@ -4,7 +4,7 @@ import { AuthTokenSessionsSchema, OrganizationsSchema, UserEncryptionKeysSchema,
import { ApiKeysSchema } from "@app/db/schemas/api-keys";
import { authRateLimit, readLimit, writeLimit } from "@app/server/config/rateLimiter";
import { verifyAuth } from "@app/server/plugins/auth/verify-auth";
import { AuthMethod, AuthMode } from "@app/services/auth/auth-type";
import { AuthMethod, AuthMode, MfaMethod } from "@app/services/auth/auth-type";
export const registerUserRouter = async (server: FastifyZodProvider) => {
server.route({
@@ -56,7 +56,8 @@ export const registerUserRouter = async (server: FastifyZodProvider) => {
},
schema: {
body: z.object({
isMfaEnabled: z.boolean()
isMfaEnabled: z.boolean().optional(),
selectedMfaMethod: z.nativeEnum(MfaMethod).optional()
}),
response: {
200: z.object({
@@ -66,7 +67,12 @@ export const registerUserRouter = async (server: FastifyZodProvider) => {
},
preHandler: verifyAuth([AuthMode.JWT, AuthMode.API_KEY]),
handler: async (req) => {
const user = await server.services.user.toggleUserMfa(req.permission.id, req.body.isMfaEnabled);
const user = await server.services.user.updateUserMfa({
userId: req.permission.id,
isMfaEnabled: req.body.isMfaEnabled,
selectedMfaMethod: req.body.selectedMfaMethod
});
return { user };
}
});

View File

@@ -48,7 +48,8 @@ export const registerLoginRouter = async (server: FastifyZodProvider) => {
response: {
200: z.object({
token: z.string(),
isMfaEnabled: z.boolean()
isMfaEnabled: z.boolean(),
mfaMethod: z.string().optional()
})
}
},
@@ -64,7 +65,8 @@ export const registerLoginRouter = async (server: FastifyZodProvider) => {
if (tokens.isMfaEnabled) {
return {
token: tokens.mfa as string,
isMfaEnabled: true
isMfaEnabled: true,
mfaMethod: tokens.mfaMethod
};
}

View File

@@ -17,6 +17,7 @@ import { TokenType } from "../auth-token/auth-token-types";
import { TOrgDALFactory } from "../org/org-dal";
import { SmtpTemplates, TSmtpService } from "../smtp/smtp-service";
import { LoginMethod } from "../super-admin/super-admin-types";
import { TTotpServiceFactory } from "../totp/totp-service";
import { TUserDALFactory } from "../user/user-dal";
import { enforceUserLockStatus, validateProviderAuthToken } from "./auth-fns";
import {
@@ -26,13 +27,14 @@ import {
TOauthTokenExchangeDTO,
TVerifyMfaTokenDTO
} from "./auth-login-type";
import { AuthMethod, AuthModeJwtTokenPayload, AuthModeMfaJwtTokenPayload, AuthTokenType } from "./auth-type";
import { AuthMethod, AuthModeJwtTokenPayload, AuthModeMfaJwtTokenPayload, AuthTokenType, MfaMethod } from "./auth-type";
type TAuthLoginServiceFactoryDep = {
userDAL: TUserDALFactory;
orgDAL: TOrgDALFactory;
tokenService: TAuthTokenServiceFactory;
smtpService: TSmtpService;
totpService: Pick<TTotpServiceFactory, "verifyUserTotp" | "verifyWithUserRecoveryCode">;
};
export type TAuthLoginFactory = ReturnType<typeof authLoginServiceFactory>;
@@ -40,7 +42,8 @@ export const authLoginServiceFactory = ({
userDAL,
tokenService,
smtpService,
orgDAL
orgDAL,
totpService
}: TAuthLoginServiceFactoryDep) => {
/*
* Private
@@ -100,7 +103,8 @@ export const authLoginServiceFactory = ({
userAgent,
organizationId,
authMethod,
isMfaVerified
isMfaVerified,
mfaMethod
}: {
user: TUsers;
ip: string;
@@ -108,6 +112,7 @@ export const authLoginServiceFactory = ({
organizationId?: string;
authMethod: AuthMethod;
isMfaVerified?: boolean;
mfaMethod?: MfaMethod;
}) => {
const cfg = getConfig();
await updateUserDeviceSession(user, ip, userAgent);
@@ -126,7 +131,8 @@ export const authLoginServiceFactory = ({
tokenVersionId: tokenSession.id,
accessVersion: tokenSession.accessVersion,
organizationId,
isMfaVerified
isMfaVerified,
mfaMethod
},
cfg.AUTH_SECRET,
{ expiresIn: cfg.JWT_AUTH_LIFETIME }
@@ -140,7 +146,8 @@ export const authLoginServiceFactory = ({
tokenVersionId: tokenSession.id,
refreshVersion: tokenSession.refreshVersion,
organizationId,
isMfaVerified
isMfaVerified,
mfaMethod
},
cfg.AUTH_SECRET,
{ expiresIn: cfg.JWT_REFRESH_LIFETIME }
@@ -353,8 +360,12 @@ export const authLoginServiceFactory = ({
});
}
// send the multi-factor auth token if MFA is enforced by the org or enabled by the user
if ((selectedOrg.enforceMfa || user.isMfaEnabled) && user.email && !decodedToken.isMfaVerified) {
const shouldCheckMfa = selectedOrg.enforceMfa || user.isMfaEnabled;
const orgMfaMethod = selectedOrg.enforceMfa ? selectedOrg.selectedMfaMethod ?? MfaMethod.EMAIL : undefined;
const userMfaMethod = user.isMfaEnabled ? user.selectedMfaMethod ?? MfaMethod.EMAIL : undefined;
const mfaMethod = orgMfaMethod ?? userMfaMethod;
if (shouldCheckMfa && (!decodedToken.isMfaVerified || decodedToken.mfaMethod !== mfaMethod)) {
enforceUserLockStatus(Boolean(user.isLocked), user.temporaryLockDateEnd);
const mfaToken = jwt.sign(
@@ -369,12 +380,14 @@ export const authLoginServiceFactory = ({
}
);
await sendUserMfaCode({
userId: user.id,
email: user.email
});
if (mfaMethod === MfaMethod.EMAIL && user.email) {
await sendUserMfaCode({
userId: user.id,
email: user.email
});
}
return { isMfaEnabled: true, mfa: mfaToken } as const;
return { isMfaEnabled: true, mfa: mfaToken, mfaMethod } as const;
}
const tokens = await generateUserTokens({
@@ -383,7 +396,8 @@ export const authLoginServiceFactory = ({
userAgent,
ip: ipAddress,
organizationId,
isMfaVerified: decodedToken.isMfaVerified
isMfaVerified: decodedToken.isMfaVerified,
mfaMethod: decodedToken.mfaMethod
});
return {
@@ -458,17 +472,39 @@ export const authLoginServiceFactory = ({
* Multi factor authentication verification of code
* Third step of login in which user completes with mfa
* */
const verifyMfaToken = async ({ userId, mfaToken, mfaJwtToken, ip, userAgent, orgId }: TVerifyMfaTokenDTO) => {
const verifyMfaToken = async ({
userId,
mfaToken,
mfaMethod,
mfaJwtToken,
ip,
userAgent,
orgId
}: TVerifyMfaTokenDTO) => {
const appCfg = getConfig();
const user = await userDAL.findById(userId);
enforceUserLockStatus(Boolean(user.isLocked), user.temporaryLockDateEnd);
try {
await tokenService.validateTokenForUser({
type: TokenType.TOKEN_EMAIL_MFA,
userId,
code: mfaToken
});
if (mfaMethod === MfaMethod.EMAIL) {
await tokenService.validateTokenForUser({
type: TokenType.TOKEN_EMAIL_MFA,
userId,
code: mfaToken
});
} else if (mfaMethod === MfaMethod.TOTP) {
if (mfaToken.length === 6) {
await totpService.verifyUserTotp({
userId,
totp: mfaToken
});
} else {
await totpService.verifyWithUserRecoveryCode({
userId,
recoveryCode: mfaToken
});
}
}
} catch (err) {
const updatedUser = await processFailedMfaAttempt(userId);
if (updatedUser.isLocked) {
@@ -513,7 +549,8 @@ export const authLoginServiceFactory = ({
userAgent,
organizationId: orgId,
authMethod: decodedToken.authMethod,
isMfaVerified: true
isMfaVerified: true,
mfaMethod
});
return { token, user: userEnc };
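Note the dispatch rule in the TOTP branch above: a 6-character token is treated as an authenticator code, anything else as a recovery code. The same convention, isolated:

// Assumed convention from the branch above: authenticator codes are
// exactly 6 digits; recovery codes are longer strings
const classifyTotpInput = (input: string): "totp" | "recovery" =>
  input.length === 6 ? "totp" : "recovery";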

View File

@@ -1,4 +1,4 @@
import { AuthMethod } from "./auth-type";
import { AuthMethod, MfaMethod } from "./auth-type";
export type TLoginGenServerPublicKeyDTO = {
email: string;
@@ -19,6 +19,7 @@ export type TLoginClientProofDTO = {
export type TVerifyMfaTokenDTO = {
userId: string;
mfaToken: string;
mfaMethod: MfaMethod;
mfaJwtToken: string;
ip: string;
userAgent: string;

View File

@@ -8,6 +8,7 @@ import { generateSrpServerKey, srpCheckClientProof } from "@app/lib/crypto";
import { TAuthTokenServiceFactory } from "../auth-token/auth-token-service";
import { TokenType } from "../auth-token/auth-token-types";
import { SmtpTemplates, TSmtpService } from "../smtp/smtp-service";
import { TTotpConfigDALFactory } from "../totp/totp-config-dal";
import { TUserDALFactory } from "../user/user-dal";
import { TAuthDALFactory } from "./auth-dal";
import { TChangePasswordDTO, TCreateBackupPrivateKeyDTO, TResetPasswordViaBackupKeyDTO } from "./auth-password-type";
@@ -18,6 +19,7 @@ type TAuthPasswordServiceFactoryDep = {
userDAL: TUserDALFactory;
tokenService: TAuthTokenServiceFactory;
smtpService: TSmtpService;
totpConfigDAL: Pick<TTotpConfigDALFactory, "delete">;
};
export type TAuthPasswordFactory = ReturnType<typeof authPaswordServiceFactory>;
@@ -25,7 +27,8 @@ export const authPaswordServiceFactory = ({
authDAL,
userDAL,
tokenService,
smtpService
smtpService,
totpConfigDAL
}: TAuthPasswordServiceFactoryDep) => {
/*
* Pre setup for pass change with srp protocol
@@ -185,6 +188,12 @@ export const authPaswordServiceFactory = ({
temporaryLockDateEnd: null,
consecutiveFailedMfaAttempts: 0
});
/* we reset the user's mobile authenticator configs here
because a password reset should act as a recovery path from account lockout */
await totpConfigDAL.delete({
userId
});
};
/*

View File

@@ -53,6 +53,7 @@ export type AuthModeJwtTokenPayload = {
accessVersion: number;
organizationId?: string;
isMfaVerified?: boolean;
mfaMethod?: MfaMethod;
};
export type AuthModeMfaJwtTokenPayload = {
@@ -71,6 +72,7 @@ export type AuthModeRefreshJwtTokenPayload = {
refreshVersion: number;
organizationId?: string;
isMfaVerified?: boolean;
mfaMethod?: MfaMethod;
};
export type AuthModeProviderJwtTokenPayload = {
@@ -85,3 +87,8 @@ export type AuthModeProviderSignUpTokenPayload = {
authTokenType: AuthTokenType.SIGNUP_TOKEN;
userId: string;
};
export enum MfaMethod {
EMAIL = "email",
TOTP = "totp"
}

View File

@@ -29,7 +29,7 @@ import {
} from "./identity-aws-auth-types";
type TIdentityAwsAuthServiceFactoryDep = {
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create">;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create" | "delete">;
identityAwsAuthDAL: Pick<TIdentityAwsAuthDALFactory, "findOne" | "transaction" | "create" | "updateById" | "delete">;
identityOrgMembershipDAL: Pick<TIdentityOrgDALFactory, "findOne">;
licenseService: Pick<TLicenseServiceFactory, "getPlan">;
@@ -346,6 +346,8 @@ export const identityAwsAuthServiceFactory = ({
const revokedIdentityAwsAuth = await identityAwsAuthDAL.transaction(async (tx) => {
const deletedAwsAuth = await identityAwsAuthDAL.delete({ identityId }, tx);
await identityAccessTokenDAL.delete({ identityId, authMethod: IdentityAuthMethod.AWS_AUTH }, tx);
return { ...deletedAwsAuth?.[0], orgId: identityMembershipOrg.orgId };
});
return revokedIdentityAwsAuth;

View File

@@ -30,7 +30,7 @@ type TIdentityAzureAuthServiceFactoryDep = {
"findOne" | "transaction" | "create" | "updateById" | "delete"
>;
identityOrgMembershipDAL: Pick<TIdentityOrgDALFactory, "findOne">;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create">;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create" | "delete">;
permissionService: Pick<TPermissionServiceFactory, "getOrgPermission">;
licenseService: Pick<TLicenseServiceFactory, "getPlan">;
};
@@ -70,7 +70,9 @@ export const identityAzureAuthServiceFactory = ({
.map((servicePrincipalId) => servicePrincipalId.trim())
.some((servicePrincipalId) => servicePrincipalId === azureIdentity.oid);
if (!isServicePrincipalAllowed) throw new UnauthorizedError({ message: "Service principal not allowed" });
if (!isServicePrincipalAllowed) {
throw new UnauthorizedError({ message: `Service principal '${azureIdentity.oid}' not allowed` });
}
}
const identityAccessToken = await identityAzureAuthDAL.transaction(async (tx) => {
@@ -317,6 +319,8 @@ export const identityAzureAuthServiceFactory = ({
const revokedIdentityAzureAuth = await identityAzureAuthDAL.transaction(async (tx) => {
const deletedAzureAuth = await identityAzureAuthDAL.delete({ identityId }, tx);
await identityAccessTokenDAL.delete({ identityId, authMethod: IdentityAuthMethod.AZURE_AUTH }, tx);
return { ...deletedAzureAuth?.[0], orgId: identityMembershipOrg.orgId };
});
return revokedIdentityAzureAuth;

View File

@@ -28,7 +28,7 @@ import {
type TIdentityGcpAuthServiceFactoryDep = {
identityGcpAuthDAL: Pick<TIdentityGcpAuthDALFactory, "findOne" | "transaction" | "create" | "updateById" | "delete">;
identityOrgMembershipDAL: Pick<TIdentityOrgDALFactory, "findOne">;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create">;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create" | "delete">;
permissionService: Pick<TPermissionServiceFactory, "getOrgPermission">;
licenseService: Pick<TLicenseServiceFactory, "getPlan">;
};
@@ -365,6 +365,8 @@ export const identityGcpAuthServiceFactory = ({
const revokedIdentityGcpAuth = await identityGcpAuthDAL.transaction(async (tx) => {
const deletedGcpAuth = await identityGcpAuthDAL.delete({ identityId }, tx);
await identityAccessTokenDAL.delete({ identityId, authMethod: IdentityAuthMethod.GCP_AUTH }, tx);
return { ...deletedGcpAuth?.[0], orgId: identityMembershipOrg.orgId };
});
return revokedIdentityGcpAuth;

View File

@@ -41,7 +41,7 @@ type TIdentityKubernetesAuthServiceFactoryDep = {
TIdentityKubernetesAuthDALFactory,
"create" | "findOne" | "transaction" | "updateById" | "delete"
>;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create">;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create" | "delete">;
identityOrgMembershipDAL: Pick<TIdentityOrgDALFactory, "findOne" | "findById">;
orgBotDAL: Pick<TOrgBotDALFactory, "findOne" | "transaction" | "create">;
permissionService: Pick<TPermissionServiceFactory, "getOrgPermission">;
@@ -622,6 +622,7 @@ export const identityKubernetesAuthServiceFactory = ({
const revokedIdentityKubernetesAuth = await identityKubernetesAuthDAL.transaction(async (tx) => {
const deletedKubernetesAuth = await identityKubernetesAuthDAL.delete({ identityId }, tx);
await identityAccessTokenDAL.delete({ identityId, authMethod: IdentityAuthMethod.KUBERNETES_AUTH }, tx);
return { ...deletedKubernetesAuth?.[0], orgId: identityMembershipOrg.orgId };
});
return revokedIdentityKubernetesAuth;

View File

@@ -39,7 +39,7 @@ import {
type TIdentityOidcAuthServiceFactoryDep = {
identityOidcAuthDAL: TIdentityOidcAuthDALFactory;
identityOrgMembershipDAL: Pick<TIdentityOrgDALFactory, "findOne">;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create">;
identityAccessTokenDAL: Pick<TIdentityAccessTokenDALFactory, "create" | "delete">;
permissionService: Pick<TPermissionServiceFactory, "getOrgPermission">;
licenseService: Pick<TLicenseServiceFactory, "getPlan">;
orgBotDAL: Pick<TOrgBotDALFactory, "findOne" | "transaction" | "create">;
@@ -539,6 +539,8 @@ export const identityOidcAuthServiceFactory = ({
const revokedIdentityOidcAuth = await identityOidcAuthDAL.transaction(async (tx) => {
const deletedOidcAuth = await identityOidcAuthDAL.delete({ identityId }, tx);
await identityAccessTokenDAL.delete({ identityId, authMethod: IdentityAuthMethod.OIDC_AUTH }, tx);
return { ...deletedOidcAuth?.[0], orgId: identityMembershipOrg.orgId };
});

View File

@@ -1,5 +1,7 @@
import { SymmetricEncryption } from "@app/lib/crypto/cipher";
export const KMS_ROOT_CONFIG_UUID = "00000000-0000-0000-0000-000000000000";
export const getByteLengthForAlgorithm = (encryptionAlgorithm: SymmetricEncryption) => {
switch (encryptionAlgorithm) {
case SymmetricEncryption.AES_GCM_128:

View File

@@ -2,13 +2,14 @@ import slugify from "@sindresorhus/slugify";
import { Knex } from "knex";
import { z } from "zod";
import { KmsKeysSchema } from "@app/db/schemas";
import { KmsKeysSchema, TKmsRootConfig } from "@app/db/schemas";
import { AwsKmsProviderFactory } from "@app/ee/services/external-kms/providers/aws-kms";
import {
ExternalKmsAwsSchema,
KmsProviders,
TExternalKmsProviderFns
} from "@app/ee/services/external-kms/providers/model";
import { THsmServiceFactory } from "@app/ee/services/hsm/hsm-service";
import { KeyStorePrefixes, TKeyStoreFactory } from "@app/keystore/keystore";
import { getConfig } from "@app/lib/config/env";
import { randomSecureBytes } from "@app/lib/crypto";
@@ -17,7 +18,7 @@ import { generateHash } from "@app/lib/crypto/encryption";
import { BadRequestError, ForbiddenRequestError, NotFoundError } from "@app/lib/errors";
import { logger } from "@app/lib/logger";
import { alphaNumericNanoId } from "@app/lib/nanoid";
import { getByteLengthForAlgorithm } from "@app/services/kms/kms-fns";
import { getByteLengthForAlgorithm, KMS_ROOT_CONFIG_UUID } from "@app/services/kms/kms-fns";
import { TOrgDALFactory } from "../org/org-dal";
import { TProjectDALFactory } from "../project/project-dal";
@@ -27,6 +28,7 @@ import { TKmsRootConfigDALFactory } from "./kms-root-config-dal";
import {
KmsDataKey,
KmsType,
RootKeyEncryptionStrategy,
TDecryptWithKeyDTO,
TDecryptWithKmsDTO,
TEncryptionWithKeyDTO,
@@ -40,15 +42,14 @@ type TKmsServiceFactoryDep = {
kmsDAL: TKmsKeyDALFactory;
projectDAL: Pick<TProjectDALFactory, "findById" | "updateById" | "transaction">;
orgDAL: Pick<TOrgDALFactory, "findById" | "updateById" | "transaction">;
kmsRootConfigDAL: Pick<TKmsRootConfigDALFactory, "findById" | "create">;
kmsRootConfigDAL: Pick<TKmsRootConfigDALFactory, "findById" | "create" | "updateById">;
keyStore: Pick<TKeyStoreFactory, "acquireLock" | "waitTillReady" | "setItemWithExpiry">;
internalKmsDAL: Pick<TInternalKmsDALFactory, "create">;
hsmService: THsmServiceFactory;
};
export type TKmsServiceFactory = ReturnType<typeof kmsServiceFactory>;
const KMS_ROOT_CONFIG_UUID = "00000000-0000-0000-0000-000000000000";
const KMS_ROOT_CREATION_WAIT_KEY = "wait_till_ready_kms_root_key";
const KMS_ROOT_CREATION_WAIT_TIME = 10;
@@ -63,7 +64,8 @@ export const kmsServiceFactory = ({
keyStore,
internalKmsDAL,
orgDAL,
projectDAL
projectDAL,
hsmService
}: TKmsServiceFactoryDep) => {
let ROOT_ENCRYPTION_KEY = Buffer.alloc(0);
@@ -610,6 +612,65 @@ export const kmsServiceFactory = ({
}
};
const $getBasicEncryptionKey = () => {
const appCfg = getConfig();
const encryptionKey = appCfg.ENCRYPTION_KEY || appCfg.ROOT_ENCRYPTION_KEY;
const isBase64 = !appCfg.ENCRYPTION_KEY;
if (!encryptionKey)
throw new Error(
"Root encryption key not found for KMS service. Did you set the ENCRYPTION_KEY or ROOT_ENCRYPTION_KEY environment variables?"
);
const encryptionKeyBuffer = Buffer.from(encryptionKey, isBase64 ? "base64" : "utf8");
return encryptionKeyBuffer;
};
const $decryptRootKey = async (kmsRootConfig: TKmsRootConfig) => {
// case 1: root key is encrypted with HSM
if (kmsRootConfig.encryptionStrategy === RootKeyEncryptionStrategy.HSM) {
const hsmIsActive = await hsmService.isActive();
if (!hsmIsActive) {
throw new Error("Unable to decrypt root KMS key. HSM service is inactive. Did you configure the HSM?");
}
const decryptedKey = await hsmService.decrypt(kmsRootConfig.encryptedRootKey);
return decryptedKey;
}
// case 2: root key is encrypted with software encryption
if (kmsRootConfig.encryptionStrategy === RootKeyEncryptionStrategy.Software) {
const cipher = symmetricCipherService(SymmetricEncryption.AES_GCM_256);
const encryptionKeyBuffer = $getBasicEncryptionKey();
return cipher.decrypt(kmsRootConfig.encryptedRootKey, encryptionKeyBuffer);
}
throw new Error(`Invalid root key encryption strategy: ${kmsRootConfig.encryptionStrategy}`);
};
const $encryptRootKey = async (plainKeyBuffer: Buffer, strategy: RootKeyEncryptionStrategy) => {
if (strategy === RootKeyEncryptionStrategy.HSM) {
const hsmIsActive = await hsmService.isActive();
if (!hsmIsActive) {
throw new Error("Unable to encrypt root KMS key. HSM service is inactive. Did you configure the HSM?");
}
const encrypted = await hsmService.encrypt(plainKeyBuffer);
return encrypted;
}
if (strategy === RootKeyEncryptionStrategy.Software) {
const cipher = symmetricCipherService(SymmetricEncryption.AES_GCM_256);
const encryptionKeyBuffer = $getBasicEncryptionKey();
return cipher.encrypt(plainKeyBuffer, encryptionKeyBuffer);
}
// eslint-disable-next-line @typescript-eslint/restrict-template-expressions
throw new Error(`Invalid root key encryption strategy: ${strategy}`);
};
// by keeping the decrypted data key in inner scope
// none of the entities outside can interact directly or expose the data key
// NOTICE: If changing here update migrations/utils/kms
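A minimal standalone sketch of the closure pattern this comment describes (illustrative only, assuming a 32-byte AES-256-GCM data key; not Infisical's code):

```ts
import crypto from "node:crypto";

// The data key lives only in the factory's scope: callers receive cipher
// functions but no accessor for the key itself.
const makeDataKeyCipher = (dataKey: Buffer) => {
  const encrypt = (plainText: Buffer) => {
    const iv = crypto.randomBytes(12); // 96-bit nonce, standard for GCM
    const cipher = crypto.createCipheriv("aes-256-gcm", dataKey, iv);
    const cipherText = Buffer.concat([cipher.update(plainText), cipher.final()]);
    return Buffer.concat([iv, cipher.getAuthTag(), cipherText]);
  };
  return { encrypt }; // dataKey is never exposed outside this scope
};
```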
@@ -771,7 +832,6 @@ export const kmsServiceFactory = ({
},
tx
);
return kmsDAL.findByIdWithAssociatedKms(key.id, tx);
});
@@ -794,14 +854,6 @@ export const kmsServiceFactory = ({
// akhilmhdh: a copy of this is made in migrations/utils/kms
const startService = async () => {
const appCfg = getConfig();
// This will switch to a seal process and HMS flow in future
const encryptionKey = appCfg.ENCRYPTION_KEY || appCfg.ROOT_ENCRYPTION_KEY;
// if root key its base64 encoded
const isBase64 = !appCfg.ENCRYPTION_KEY;
if (!encryptionKey) throw new Error("Root encryption key not found for KMS service.");
const encryptionKeyBuffer = Buffer.from(encryptionKey, isBase64 ? "base64" : "utf8");
const lock = await keyStore.acquireLock([`KMS_ROOT_CFG_LOCK`], 3000, { retryCount: 3 }).catch(() => null);
if (!lock) {
await keyStore.waitTillReady({
@@ -813,31 +865,69 @@ export const kmsServiceFactory = ({
// check if KMS root key was already generated and saved in DB
const kmsRootConfig = await kmsRootConfigDAL.findById(KMS_ROOT_CONFIG_UUID);
const cipher = symmetricCipherService(SymmetricEncryption.AES_GCM_256);
// case 1: a root key already exists in the DB
if (kmsRootConfig) {
if (lock) await lock.release();
logger.info("KMS: Encrypted ROOT Key found from DB. Decrypting.");
const decryptedRootKey = cipher.decrypt(kmsRootConfig.encryptedRootKey, encryptionKeyBuffer);
// set the flag so that other instancen nodes can start
logger.info(`KMS: Encrypted ROOT Key found from DB. Decrypting. [strategy=${kmsRootConfig.encryptionStrategy}]`);
const decryptedRootKey = await $decryptRootKey(kmsRootConfig);
// set the flag so that other instance nodes can start
await keyStore.setItemWithExpiry(KMS_ROOT_CREATION_WAIT_KEY, KMS_ROOT_CREATION_WAIT_TIME, "true");
logger.info("KMS: Loading ROOT Key into Memory.");
ROOT_ENCRYPTION_KEY = decryptedRootKey;
return;
}
logger.info("KMS: Generating ROOT Key");
// case 2: no config is found, so we create a new root key with basic encryption
logger.info("KMS: Generating new ROOT Key");
const newRootKey = randomSecureBytes(32);
const encryptedRootKey = cipher.encrypt(newRootKey, encryptionKeyBuffer);
// @ts-expect-error id is kept as fixed for idempotence and to avoid race condition
await kmsRootConfigDAL.create({ encryptedRootKey, id: KMS_ROOT_CONFIG_UUID });
const encryptedRootKey = await $encryptRootKey(newRootKey, RootKeyEncryptionStrategy.Software).catch((err) => {
logger.error({ hsmEnabled: hsmService.isActive() }, "KMS: Failed to encrypt ROOT Key");
throw err;
});
// set the flag so that other instancen nodes can start
await kmsRootConfigDAL.create({
// @ts-expect-error id is kept as fixed for idempotence and to avoid race condition
id: KMS_ROOT_CONFIG_UUID,
encryptedRootKey,
encryptionStrategy: RootKeyEncryptionStrategy.Software
});
// set the flag so that other instance nodes can start
await keyStore.setItemWithExpiry(KMS_ROOT_CREATION_WAIT_KEY, KMS_ROOT_CREATION_WAIT_TIME, "true");
logger.info("KMS: Saved and loaded ROOT Key into memory");
if (lock) await lock.release();
ROOT_ENCRYPTION_KEY = newRootKey;
};
const updateEncryptionStrategy = async (strategy: RootKeyEncryptionStrategy) => {
const kmsRootConfig = await kmsRootConfigDAL.findById(KMS_ROOT_CONFIG_UUID);
if (!kmsRootConfig) {
throw new NotFoundError({ message: "KMS root config not found" });
}
if (kmsRootConfig.encryptionStrategy === strategy) {
return;
}
const decryptedRootKey = await $decryptRootKey(kmsRootConfig);
const encryptedRootKey = await $encryptRootKey(decryptedRootKey, strategy);
if (!encryptedRootKey) {
logger.error("KMS: Failed to re-encrypt ROOT Key with selected strategy");
throw new Error("Failed to re-encrypt ROOT Key with selected strategy");
}
await kmsRootConfigDAL.updateById(KMS_ROOT_CONFIG_UUID, {
encryptedRootKey,
encryptionStrategy: strategy
});
ROOT_ENCRYPTION_KEY = decryptedRootKey;
};
return {
startService,
generateKmsKey,
@@ -849,6 +939,7 @@ export const kmsServiceFactory = ({
encryptWithRootKey,
decryptWithRootKey,
getOrgKmsKeyId,
updateEncryptionStrategy,
getProjectSecretManagerKmsKeyId,
updateProjectSecretManagerKmsKey,
getProjectKeyBackup,
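Since `$getBasicEncryptionKey` above decodes `ROOT_ENCRYPTION_KEY` from base64 and feeds it to an AES-256-GCM cipher, a suitable value is presumably a base64-encoded 32-byte key; a generation sketch (an assumption, not an official provisioning step):

```ts
import crypto from "node:crypto";

// Prints a base64 string usable as ROOT_ENCRYPTION_KEY (32 bytes = AES-256 key size).
console.log(crypto.randomBytes(32).toString("base64"));
```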

View File

@@ -56,3 +56,8 @@ export type TUpdateProjectSecretManagerKmsKeyDTO = {
projectId: string;
kms: { type: KmsType.Internal } | { type: KmsType.External; kmsId: string };
};
export enum RootKeyEncryptionStrategy {
Software = "SOFTWARE",
HSM = "HSM"
}

View File

@@ -268,7 +268,7 @@ export const orgServiceFactory = ({
actorOrgId,
actorAuthMethod,
orgId,
data: { name, slug, authEnforced, scimEnabled, defaultMembershipRoleSlug, enforceMfa }
data: { name, slug, authEnforced, scimEnabled, defaultMembershipRoleSlug, enforceMfa, selectedMfaMethod }
}: TUpdateOrgDTO) => {
const appCfg = getConfig();
const { permission } = await permissionService.getOrgPermission(actor, actorId, orgId, actorAuthMethod, actorOrgId);
@@ -333,7 +333,8 @@ export const orgServiceFactory = ({
authEnforced,
scimEnabled,
defaultMembershipRole,
enforceMfa
enforceMfa,
selectedMfaMethod
});
if (!org) throw new NotFoundError({ message: `Organization with ID '${orgId}' not found` });
return org;

View File

@@ -1,6 +1,6 @@
import { TOrgPermission } from "@app/lib/types";
import { ActorAuthMethod, ActorType } from "../auth/auth-type";
import { ActorAuthMethod, ActorType, MfaMethod } from "../auth/auth-type";
export type TUpdateOrgMembershipDTO = {
userId: string;
@@ -65,6 +65,7 @@ export type TUpdateOrgDTO = {
scimEnabled: boolean;
defaultMembershipRoleSlug: string;
enforceMfa: boolean;
selectedMfaMethod: MfaMethod;
}>;
} & TOrgPermission;

View File

@@ -361,6 +361,10 @@ export const secretV2BridgeDALFactory = (db: TDbClient) => {
void bd.whereILike(`${TableName.SecretV2}.key`, `%${filters?.search}%`);
}
}
if (filters?.keys) {
void bd.whereIn(`${TableName.SecretV2}.key`, filters.keys);
}
})
.where((bd) => {
void bd.whereNull(`${TableName.SecretV2}.userId`).orWhere({ userId: userId || null });

View File

@@ -518,7 +518,10 @@ export const expandSecretReferencesFactory = ({
}
if (referencedSecretValue) {
expandedValue = expandedValue.replaceAll(interpolationSyntax, referencedSecretValue);
expandedValue = expandedValue.replaceAll(
interpolationSyntax,
() => referencedSecretValue // prevents special characters from triggering replacement patterns
);
}
}
}
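The switch to a replacer function matters because string replacements interpret `$` sequences (`$$`, `$&`, etc.) as substitution patterns, so a secret value containing them would be mangled; a function's return value is inserted verbatim:

```ts
// "$$" collapses to "$" in the plain string form...
console.log("x={{ref}}".replaceAll("{{ref}}", "$$100"));       // x=$100 (corrupted)
// ...but survives untouched when returned from a replacer function.
console.log("x={{ref}}".replaceAll("{{ref}}", () => "$$100")); // x=$$100
```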

View File

@@ -150,9 +150,13 @@ export const secretV2BridgeServiceFactory = ({
}
});
if (referredSecrets.length !== references.length)
if (
referredSecrets.length !==
new Set(references.map(({ secretKey, secretPath, environment }) => `${secretKey}.${secretPath}.${environment}`))
.size // only count unique references
)
throw new BadRequestError({
message: `Referenced secret not found. Found only ${diff(
message: `Referenced secret(s) not found: ${diff(
references.map((el) => el.secretKey),
referredSecrets.map((el) => el.key)
).join(",")}`
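The `Set` guards against a secret that references the same key more than once: previously the raw `references.length` could exceed the number of distinct secrets that can ever be found, failing valid inputs. For example:

```ts
const references = [
  { secretKey: "DB_URL", secretPath: "/", environment: "dev" },
  { secretKey: "DB_URL", secretPath: "/", environment: "dev" } // same reference twice
];
const uniqueCount = new Set(
  references.map(({ secretKey, secretPath, environment }) => `${secretKey}.${secretPath}.${environment}`)
).size;
console.log(references.length, uniqueCount); // prints: 2 1 (only the deduplicated count should be compared)
```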
@@ -410,12 +414,13 @@ export const secretV2BridgeServiceFactory = ({
type: KmsDataKey.SecretManager,
projectId
});
const encryptedValue = secretValue
? {
encryptedValue: secretManagerEncryptor({ plainText: Buffer.from(secretValue) }).cipherTextBlob,
references: getAllSecretReferences(secretValue).nestedReferences
}
: {};
const encryptedValue =
typeof secretValue === "string"
? {
encryptedValue: secretManagerEncryptor({ plainText: Buffer.from(secretValue) }).cipherTextBlob,
references: getAllSecretReferences(secretValue).nestedReferences
}
: {};
if (secretValue) {
const { nestedReferences, localReferences } = getAllSecretReferences(secretValue);
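The `typeof` check is what allows clearing a secret: an empty string is falsy, so the old truthiness test silently skipped encrypting it. For example:

```ts
const secretValue = ""; // user clears the secret's value on update
console.log(secretValue ? "encrypt" : "skip");                     // "skip"    (old behavior)
console.log(typeof secretValue === "string" ? "encrypt" : "skip"); // "encrypt" (new behavior)
```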
@@ -1161,7 +1166,7 @@ export const secretV2BridgeServiceFactory = ({
const newSecrets = await secretDAL.transaction(async (tx) =>
fnSecretBulkInsert({
inputSecrets: inputSecrets.map((el) => {
const references = secretReferencesGroupByInputSecretKey[el.secretKey].nestedReferences;
const references = secretReferencesGroupByInputSecretKey[el.secretKey]?.nestedReferences;
return {
version: 1,
@@ -1368,7 +1373,7 @@ export const secretV2BridgeServiceFactory = ({
typeof el.secretValue !== "undefined"
? {
encryptedValue: secretManagerEncryptor({ plainText: Buffer.from(el.secretValue) }).cipherTextBlob,
references: secretReferencesGroupByInputSecretKey[el.secretKey].nestedReferences
references: secretReferencesGroupByInputSecretKey[el.secretKey]?.nestedReferences
}
: {};

View File

@@ -33,6 +33,7 @@ export type TGetSecretsDTO = {
offset?: number;
limit?: number;
search?: string;
keys?: string[];
} & TProjectPermission;
export type TGetASecretDTO = {
@@ -294,6 +295,7 @@ export type TFindSecretsByFolderIdsFilter = {
search?: string;
tagSlugs?: string[];
includeTagsInSearch?: boolean;
keys?: string[];
};
export type TGetSecretsRawByFolderMappingsDTO = {

View File

@@ -185,6 +185,7 @@ export type TGetSecretsRawDTO = {
offset?: number;
limit?: number;
search?: string;
keys?: string[];
} & TProjectPermission;
export type TGetASecretRawDTO = {

View File

@@ -77,5 +77,21 @@ export const smtpServiceFactory = (cfg: TSmtpConfig) => {
}
};
return { sendMail };
const verify = async () => {
const isConnected = smtp
.verify()
.then(async () => {
logger.info("SMTP connected");
return true;
})
.catch((err: Error) => {
logger.error("SMTP error");
logger.error(err);
return false;
});
return isConnected;
};
return { sendMail, verify };
};
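A hypothetical boot-time use of the new `verify` method (the actual call site is not shown in this diff):

```ts
// Sketch: surface SMTP misconfiguration early instead of on the first email.
const smtpService = smtpServiceFactory(cfg); // cfg: TSmtpConfig, as in the factory above
if (!(await smtpService.verify())) {
  logger.warn("SMTP verification failed; outgoing email may not be delivered");
}
```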

View File

@@ -10,7 +10,10 @@ import { BadRequestError, NotFoundError } from "@app/lib/errors";
import { TAuthLoginFactory } from "../auth/auth-login-service";
import { AuthMethod } from "../auth/auth-type";
import { KMS_ROOT_CONFIG_UUID } from "../kms/kms-fns";
import { TKmsRootConfigDALFactory } from "../kms/kms-root-config-dal";
import { TKmsServiceFactory } from "../kms/kms-service";
import { RootKeyEncryptionStrategy } from "../kms/kms-types";
import { TOrgServiceFactory } from "../org/org-service";
import { TUserDALFactory } from "../user/user-dal";
import { TSuperAdminDALFactory } from "./super-admin-dal";
@@ -20,7 +23,8 @@ type TSuperAdminServiceFactoryDep = {
serverCfgDAL: TSuperAdminDALFactory;
userDAL: TUserDALFactory;
authService: Pick<TAuthLoginFactory, "generateUserTokens">;
kmsService: Pick<TKmsServiceFactory, "encryptWithRootKey" | "decryptWithRootKey">;
kmsService: Pick<TKmsServiceFactory, "encryptWithRootKey" | "decryptWithRootKey" | "updateEncryptionStrategy">;
kmsRootConfigDAL: TKmsRootConfigDALFactory;
orgService: Pick<TOrgServiceFactory, "createOrganization">;
keyStore: Pick<TKeyStoreFactory, "getItem" | "setItemWithExpiry" | "deleteItem">;
licenseService: Pick<TLicenseServiceFactory, "onPremFeatures">;
@@ -47,6 +51,7 @@ export const superAdminServiceFactory = ({
authService,
orgService,
keyStore,
kmsRootConfigDAL,
kmsService,
licenseService
}: TSuperAdminServiceFactoryDep) => {
@@ -288,12 +293,70 @@ export const superAdminServiceFactory = ({
};
};
const getConfiguredEncryptionStrategies = async () => {
const appCfg = getConfig();
const kmsRootCfg = await kmsRootConfigDAL.findById(KMS_ROOT_CONFIG_UUID);
if (!kmsRootCfg) {
throw new NotFoundError({ name: "KmsRootConfig", message: "KMS root configuration not found" });
}
const selectedStrategy = kmsRootCfg.encryptionStrategy;
const enabledStrategies: { enabled: boolean; strategy: RootKeyEncryptionStrategy }[] = [];
if (appCfg.ROOT_ENCRYPTION_KEY || appCfg.ENCRYPTION_KEY) {
const basicStrategy = RootKeyEncryptionStrategy.Software;
enabledStrategies.push({
enabled: selectedStrategy === basicStrategy,
strategy: basicStrategy
});
}
if (appCfg.isHsmConfigured) {
const hsmStrategy = RootKeyEncryptionStrategy.HSM;
enabledStrategies.push({
enabled: selectedStrategy === hsmStrategy,
strategy: hsmStrategy
});
}
return {
strategies: enabledStrategies
};
};
const updateRootEncryptionStrategy = async (strategy: RootKeyEncryptionStrategy) => {
if (!licenseService.onPremFeatures.hsm) {
throw new BadRequestError({
message: "Failed to update encryption strategy due to plan restriction. Upgrade to Infisical's Enterprise plan."
});
}
const configuredStrategies = await getConfiguredEncryptionStrategies();
const foundStrategy = configuredStrategies.strategies.find((s) => s.strategy === strategy);
if (!foundStrategy) {
throw new BadRequestError({ message: "Invalid encryption strategy" });
}
if (foundStrategy.enabled) {
throw new BadRequestError({ message: "The selected encryption strategy is already enabled" });
}
await kmsService.updateEncryptionStrategy(strategy);
};
return {
initServerCfg,
updateServerCfg,
adminSignUp,
getUsers,
deleteUser,
getAdminSlackConfig
getAdminSlackConfig,
updateRootEncryptionStrategy,
getConfiguredEncryptionStrategies
};
};

View File

@@ -0,0 +1,11 @@
import { TDbClient } from "@app/db";
import { TableName } from "@app/db/schemas";
import { ormify } from "@app/lib/knex";
export type TTotpConfigDALFactory = ReturnType<typeof totpConfigDALFactory>;
export const totpConfigDALFactory = (db: TDbClient) => {
const totpConfigDal = ormify(db, TableName.TotpConfig);
return totpConfigDal;
};

View File

@@ -0,0 +1,3 @@
import crypto from "node:crypto";
export const generateRecoveryCode = () => String(crypto.randomInt(10 ** 7, 10 ** 8 - 1));

View File

@@ -0,0 +1,270 @@
import { authenticator } from "otplib";
import { BadRequestError, ForbiddenRequestError, NotFoundError } from "@app/lib/errors";
import { TKmsServiceFactory } from "../kms/kms-service";
import { TUserDALFactory } from "../user/user-dal";
import { TTotpConfigDALFactory } from "./totp-config-dal";
import { generateRecoveryCode } from "./totp-fns";
import {
TCreateUserTotpRecoveryCodesDTO,
TDeleteUserTotpConfigDTO,
TGetUserTotpConfigDTO,
TRegisterUserTotpDTO,
TVerifyUserTotpConfigDTO,
TVerifyUserTotpDTO,
TVerifyWithUserRecoveryCodeDTO
} from "./totp-types";
type TTotpServiceFactoryDep = {
userDAL: TUserDALFactory;
totpConfigDAL: TTotpConfigDALFactory;
kmsService: TKmsServiceFactory;
};
export type TTotpServiceFactory = ReturnType<typeof totpServiceFactory>;
const MAX_RECOVERY_CODE_LIMIT = 10;
export const totpServiceFactory = ({ totpConfigDAL, kmsService, userDAL }: TTotpServiceFactoryDep) => {
const getUserTotpConfig = async ({ userId }: TGetUserTotpConfigDTO) => {
const totpConfig = await totpConfigDAL.findOne({
userId
});
if (!totpConfig) {
throw new NotFoundError({
message: "TOTP configuration not found"
});
}
if (!totpConfig.isVerified) {
throw new BadRequestError({
message: "TOTP configuration has not been verified"
});
}
const decryptWithRoot = kmsService.decryptWithRootKey();
const recoveryCodes = decryptWithRoot(totpConfig.encryptedRecoveryCodes).toString().split(",");
return {
isVerified: totpConfig.isVerified,
recoveryCodes
};
};
const registerUserTotp = async ({ userId }: TRegisterUserTotpDTO) => {
const totpConfig = await totpConfigDAL.transaction(async (tx) => {
const verifiedTotpConfig = await totpConfigDAL.findOne(
{
userId,
isVerified: true
},
tx
);
if (verifiedTotpConfig) {
throw new BadRequestError({
message: "TOTP configuration for user already exists"
});
}
const unverifiedTotpConfig = await totpConfigDAL.findOne({
userId,
isVerified: false
});
if (unverifiedTotpConfig) {
return unverifiedTotpConfig;
}
const encryptWithRoot = kmsService.encryptWithRootKey();
// create new TOTP configuration
const secret = authenticator.generateSecret();
const encryptedSecret = encryptWithRoot(Buffer.from(secret));
const recoveryCodes = Array.from({ length: MAX_RECOVERY_CODE_LIMIT }).map(generateRecoveryCode);
const encryptedRecoveryCodes = encryptWithRoot(Buffer.from(recoveryCodes.join(",")));
const newTotpConfig = await totpConfigDAL.create({
userId,
encryptedRecoveryCodes,
encryptedSecret
});
return newTotpConfig;
});
const user = await userDAL.findById(userId);
const decryptWithRoot = kmsService.decryptWithRootKey();
const secret = decryptWithRoot(totpConfig.encryptedSecret).toString();
const recoveryCodes = decryptWithRoot(totpConfig.encryptedRecoveryCodes).toString().split(",");
const otpUrl = authenticator.keyuri(user.username, "Infisical", secret);
return {
otpUrl,
recoveryCodes
};
};
const verifyUserTotpConfig = async ({ userId, totp }: TVerifyUserTotpConfigDTO) => {
const totpConfig = await totpConfigDAL.findOne({
userId
});
if (!totpConfig) {
throw new NotFoundError({
message: "TOTP configuration not found"
});
}
if (totpConfig.isVerified) {
throw new BadRequestError({
message: "TOTP configuration has already been verified"
});
}
const decryptWithRoot = kmsService.decryptWithRootKey();
const secret = decryptWithRoot(totpConfig.encryptedSecret).toString();
const isValid = authenticator.verify({
token: totp,
secret
});
if (isValid) {
await totpConfigDAL.updateById(totpConfig.id, {
isVerified: true
});
} else {
throw new BadRequestError({
message: "Invalid TOTP token"
});
}
};
const verifyUserTotp = async ({ userId, totp }: TVerifyUserTotpDTO) => {
const totpConfig = await totpConfigDAL.findOne({
userId
});
if (!totpConfig) {
throw new NotFoundError({
message: "TOTP configuration not found"
});
}
if (!totpConfig.isVerified) {
throw new BadRequestError({
message: "TOTP configuration has not been verified"
});
}
const decryptWithRoot = kmsService.decryptWithRootKey();
const secret = decryptWithRoot(totpConfig.encryptedSecret).toString();
const isValid = authenticator.verify({
token: totp,
secret
});
if (!isValid) {
throw new ForbiddenRequestError({
message: "Invalid TOTP"
});
}
};
const verifyWithUserRecoveryCode = async ({ userId, recoveryCode }: TVerifyWithUserRecoveryCodeDTO) => {
const totpConfig = await totpConfigDAL.findOne({
userId
});
if (!totpConfig) {
throw new NotFoundError({
message: "TOTP configuration not found"
});
}
if (!totpConfig.isVerified) {
throw new BadRequestError({
message: "TOTP configuration has not been verified"
});
}
const decryptWithRoot = kmsService.decryptWithRootKey();
const encryptWithRoot = kmsService.encryptWithRootKey();
const recoveryCodes = decryptWithRoot(totpConfig.encryptedRecoveryCodes).toString().split(",");
const matchingCode = recoveryCodes.find((code) => recoveryCode === code);
if (!matchingCode) {
throw new ForbiddenRequestError({
message: "Invalid TOTP recovery code"
});
}
const updatedRecoveryCodes = recoveryCodes.filter((code) => code !== matchingCode);
const encryptedRecoveryCodes = encryptWithRoot(Buffer.from(updatedRecoveryCodes.join(",")));
await totpConfigDAL.updateById(totpConfig.id, {
encryptedRecoveryCodes
});
};
const deleteUserTotpConfig = async ({ userId }: TDeleteUserTotpConfigDTO) => {
const totpConfig = await totpConfigDAL.findOne({
userId
});
if (!totpConfig) {
throw new NotFoundError({
message: "TOTP configuration not found"
});
}
await totpConfigDAL.deleteById(totpConfig.id);
};
const createUserTotpRecoveryCodes = async ({ userId }: TCreateUserTotpRecoveryCodesDTO) => {
const decryptWithRoot = kmsService.decryptWithRootKey();
const encryptWithRoot = kmsService.encryptWithRootKey();
return totpConfigDAL.transaction(async (tx) => {
const totpConfig = await totpConfigDAL.findOne(
{
userId,
isVerified: true
},
tx
);
if (!totpConfig) {
throw new NotFoundError({
message: "Valid TOTP configuration not found"
});
}
const recoveryCodes = decryptWithRoot(totpConfig.encryptedRecoveryCodes).toString().split(",");
if (recoveryCodes.length >= MAX_RECOVERY_CODE_LIMIT) {
throw new BadRequestError({
message: `Cannot have more than ${MAX_RECOVERY_CODE_LIMIT} recovery codes at a time`
});
}
const toGenerateCount = MAX_RECOVERY_CODE_LIMIT - recoveryCodes.length;
const newRecoveryCodes = Array.from({ length: toGenerateCount }).map(generateRecoveryCode);
const encryptedRecoveryCodes = encryptWithRoot(Buffer.from([...recoveryCodes, ...newRecoveryCodes].join(",")));
await totpConfigDAL.updateById(totpConfig.id, {
encryptedRecoveryCodes
});
});
};
return {
registerUserTotp,
verifyUserTotpConfig,
getUserTotpConfig,
verifyUserTotp,
verifyWithUserRecoveryCode,
deleteUserTotpConfig,
createUserTotpRecoveryCodes
};
};
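For reference, the otplib calls the service builds on compose into a simple round-trip; a standalone sketch (account name illustrative):

```ts
import { authenticator } from "otplib";

const secret = authenticator.generateSecret();
// The URI encoded into the QR code the user scans with their authenticator app.
const otpUrl = authenticator.keyuri("jane@example.com", "Infisical", secret);
// What the app would display for the current time window.
const token = authenticator.generate(secret);
console.log(otpUrl);
console.log(authenticator.verify({ token, secret })); // true within the window
```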

View File

@@ -0,0 +1,30 @@
export type TRegisterUserTotpDTO = {
userId: string;
};
export type TVerifyUserTotpConfigDTO = {
userId: string;
totp: string;
};
export type TGetUserTotpConfigDTO = {
userId: string;
};
export type TVerifyUserTotpDTO = {
userId: string;
totp: string;
};
export type TVerifyWithUserRecoveryCodeDTO = {
userId: string;
recoveryCode: string;
};
export type TDeleteUserTotpConfigDTO = {
userId: string;
};
export type TCreateUserTotpRecoveryCodesDTO = {
userId: string;
};

View File

@@ -15,7 +15,7 @@ import { AuthMethod } from "../auth/auth-type";
import { TGroupProjectDALFactory } from "../group-project/group-project-dal";
import { TProjectMembershipDALFactory } from "../project-membership/project-membership-dal";
import { TUserDALFactory } from "./user-dal";
import { TListUserGroupsDTO } from "./user-types";
import { TListUserGroupsDTO, TUpdateUserMfaDTO } from "./user-types";
type TUserServiceFactoryDep = {
userDAL: Pick<
@@ -171,15 +171,24 @@ export const userServiceFactory = ({
});
};
const toggleUserMfa = async (userId: string, isMfaEnabled: boolean) => {
const updateUserMfa = async ({ userId, isMfaEnabled, selectedMfaMethod }: TUpdateUserMfaDTO) => {
const user = await userDAL.findById(userId);
if (!user || !user.email) throw new BadRequestError({ name: "Failed to toggle MFA" });
let mfaMethods;
if (isMfaEnabled === undefined) {
mfaMethods = undefined;
} else {
mfaMethods = isMfaEnabled ? ["email"] : [];
}
const updatedUser = await userDAL.updateById(userId, {
isMfaEnabled,
mfaMethods: isMfaEnabled ? ["email"] : []
mfaMethods,
selectedMfaMethod
});
return updatedUser;
};
@@ -327,7 +336,7 @@ export const userServiceFactory = ({
return {
sendEmailVerificationCode,
verifyEmailVerificationCode,
toggleUserMfa,
updateUserMfa,
updateUserName,
updateAuthMethods,
deleteUser,

View File

@@ -1,5 +1,7 @@
import { TOrgPermission } from "@app/lib/types";
import { MfaMethod } from "../auth/auth-type";
export type TListUserGroupsDTO = {
username: string;
} & Omit<TOrgPermission, "orgId">;
@@ -8,3 +10,9 @@ export enum UserEncryption {
V1 = 1,
V2 = 2
}
export type TUpdateUserMfaDTO = {
userId: string;
isMfaEnabled?: boolean;
selectedMfaMethod?: MfaMethod;
};

View File

@@ -136,8 +136,9 @@ type GetOrganizationsResponse struct {
}
type SelectOrganizationResponse struct {
Token string `json:"token"`
MfaEnabled bool `json:"isMfaEnabled"`
Token string `json:"token"`
MfaEnabled bool `json:"isMfaEnabled"`
MfaMethod string `json:"mfaMethod"`
}
type SelectOrganizationRequest struct {
@@ -260,8 +261,9 @@ type GetLoginTwoV2Response struct {
}
type VerifyMfaTokenRequest struct {
Email string `json:"email"`
MFAToken string `json:"mfaToken"`
Email string `json:"email"`
MFAToken string `json:"mfaToken"`
MFAMethod string `json:"mfaMethod"`
}
type VerifyMfaTokenResponse struct {

View File

@@ -79,13 +79,14 @@ var initCmd = &cobra.Command{
if tokenResponse.MfaEnabled {
i := 1
for i < 6 {
mfaVerifyCode := askForMFACode()
mfaVerifyCode := askForMFACode(tokenResponse.MfaMethod)
httpClient := resty.New()
httpClient.SetAuthToken(tokenResponse.Token)
verifyMFAresponse, mfaErrorResponse, requestError := api.CallVerifyMfaToken(httpClient, api.VerifyMfaTokenRequest{
Email: userCreds.UserCredentials.Email,
MFAToken: mfaVerifyCode,
Email: userCreds.UserCredentials.Email,
MFAToken: mfaVerifyCode,
MFAMethod: tokenResponse.MfaMethod,
})
if requestError != nil {
util.HandleError(err)
@@ -99,7 +100,7 @@ var initCmd = &cobra.Command{
break
}
}
if mfaErrorResponse.Context.Code == "mfa_expired" {
util.PrintErrorMessageAndExit("Your 2FA verification code has expired, please try logging in again")
break

View File

@@ -343,7 +343,7 @@ func cliDefaultLogin(userCredentialsToBeStored *models.UserCredentials) {
if loginTwoResponse.MfaEnabled {
i := 1
for i < 6 {
mfaVerifyCode := askForMFACode()
mfaVerifyCode := askForMFACode("email")
httpClient := resty.New()
httpClient.SetAuthToken(loginTwoResponse.Token)
@@ -532,7 +532,7 @@ func askForDomain() error {
const (
INFISICAL_CLOUD_US = "Infisical Cloud (US Region)"
INFISICAL_CLOUD_EU = "Infisical Cloud (EU Region)"
SELF_HOSTING = "Self-Hosting"
SELF_HOSTING = "Self-Hosting or Dedicated Instance"
ADD_NEW_DOMAIN = "Add a new domain"
)
@@ -756,13 +756,14 @@ func GetJwtTokenWithOrganizationId(oldJwtToken string, email string) string {
if selectedOrgRes.MfaEnabled {
i := 1
for i < 6 {
mfaVerifyCode := askForMFACode()
mfaVerifyCode := askForMFACode(selectedOrgRes.MfaMethod)
httpClient := resty.New()
httpClient.SetAuthToken(selectedOrgRes.Token)
verifyMFAresponse, mfaErrorResponse, requestError := api.CallVerifyMfaToken(httpClient, api.VerifyMfaTokenRequest{
Email: email,
MFAToken: mfaVerifyCode,
Email: email,
MFAToken: mfaVerifyCode,
MFAMethod: selectedOrgRes.MfaMethod,
})
if requestError != nil {
util.HandleError(err)
@@ -817,9 +818,15 @@ func generateFromPassword(password string, salt []byte, p *params) (hash []byte,
return hash, nil
}
func askForMFACode() string {
func askForMFACode(mfaMethod string) string {
var label string
if mfaMethod == "totp" {
label = "Enter the verification code from your mobile authenticator app or use a recovery code"
} else {
label = "Enter the 2FA verification code sent to your email"
}
mfaCodePromptUI := promptui.Prompt{
Label: "Enter the 2FA verification code sent to your email",
Label: label,
}
mfaVerifyCode, err := mfaCodePromptUI.Run()

View File

@@ -0,0 +1,28 @@
---
title: "Compensation"
sidebarTitle: "Compensation"
description: "This guide explains how various compensation processes work at Infisical."
---
## Probation period
We are fully committed to ensuring that you are set up for success, but we also understand that it may take some time to determine whether there is a long-term fit between you and Infisical.
The first 3 months of your employment with Infisical are a probation period. During this time, you can choose to end your contract with 1 week's notice. If we choose to end your contract, Infisical will pay you 4 weeks' pay, but will usually ask you to finish on the same day.
People in sales roles, such as Account Executives, have a 6-month probation period - this accounts for the fact that, given sales cycles, it can be difficult to establish within someone's first 3 months whether they are able to close contracts.
Your manager is responsible for monitoring and specifically reviewing your performance throughout this initial period. If under-performance is a concern, or if there is any hesitation about your future at Infisical, your manager should discuss this with you immediately.
## Severance
At Infisical, average performance gets a generous severance.
If Infisical decides to end your contract after the first 3 months of employment have been completed, we will give you 10 weeks' pay. It is likely we will ask you to stop working immediately.
If the decision to leave is yours, then we just require 1 month of notice.
We have structured notice in this way as we believe it is in neither Infisical's nor your interest to lock you into a role that is no longer right for you due to financial considerations. This extended notice period only applies in the case of under-performance or a change in business needs - if your contract is terminated due to gross misconduct then you may be dismissed without notice. If this policy conflicts with the requirements of your local jurisdiction, then those local laws will take priority.

View File

@@ -58,6 +58,7 @@
"pages": [
"handbook/onboarding",
"handbook/spending-money",
"handbook/compensation",
"handbook/time-off",
"handbook/hiring",
"handbook/meetings",

View File

@@ -69,4 +69,4 @@ volumes:
driver: local
networks:
infisical:
infisical:

View File

@@ -9,7 +9,7 @@ You can use it across various environments, whether it's local development, CI/C
## Installation
<Tabs>
<Tab title="MacOS">
<Tab title="MacOS">
Use [brew](https://brew.sh/) package manager
```bash
@@ -21,9 +21,8 @@ You can use it across various environments, whether it's local development, CI/C
```bash
brew update && brew upgrade infisical
```
</Tab>
<Tab title="Windows">
</Tab>
<Tab title="Windows">
Use [Scoop](https://scoop.sh/) package manager
```bash
@@ -40,7 +39,20 @@ You can use it across various environments, whether it's local development, CI/C
scoop update infisical
```
</Tab>
</Tab>
<Tab title="NPM">
Use [NPM](https://www.npmjs.com/) package manager
```bash
npm install -g @infisical/cli
```
### Updates
```bash
npm update -g @infisical/cli
```
</Tab>
<Tab title="Alpine">
Install prerequisite
```bash

View File

@@ -0,0 +1,145 @@
---
title: GitLab
description: "Learn how to authenticate GitLab pipelines with Infisical using OpenID Connect (OIDC)."
---
**OIDC Auth** is a platform-agnostic JWT-based authentication method that can be used to authenticate from any platform or environment using an identity provider with OpenID Connect.
## Diagram
The following sequence diagram illustrates the OIDC Auth workflow for authenticating GitLab pipelines with Infisical.
```mermaid
sequenceDiagram
participant Client as GitLab Pipeline
participant Idp as GitLab Identity Provider
participant Infis as Infisical
Client->>Idp: Step 1: Request identity token
Idp-->>Client: Return JWT with verifiable claims
Note over Client,Infis: Step 2: Login Operation
Client->>Infis: Send signed JWT to /api/v1/auth/oidc-auth/login
Note over Infis,Idp: Step 3: Query verification
Infis->>Idp: Request JWT public key using OIDC Discovery
Idp-->>Infis: Return public key
Note over Infis: Step 4: JWT validation
Infis->>Client: Return short-lived access token
Note over Client,Infis: Step 5: Access Infisical API with Token
Client->>Infis: Make authenticated requests using the short-lived access token
```
## Concept
At a high level, Infisical authenticates a client at the `/api/v1/auth/oidc-auth/login` endpoint by verifying the JWT and checking that it meets specific requirements (e.g. that it is issued by a trusted identity provider). If successful,
Infisical returns a short-lived access token that can be used to make authenticated requests to the Infisical API.
To be more specific:
1. The GitLab pipeline requests an identity token from GitLab's identity provider.
2. The fetched identity token is sent to Infisical at the `/api/v1/auth/oidc-auth/login` endpoint.
3. Infisical fetches the public key that was used to sign the identity token from GitLab's identity provider using OIDC Discovery.
4. Infisical validates the JWT using the public key provided by the identity provider and checks that the subject, audience, and claims of the token match the set criteria.
5. If all is well, Infisical returns a short-lived access token that the GitLab pipeline can use to make authenticated requests to the Infisical API.
<Note>
Infisical needs network-level access to GitLab's identity provider endpoints.
</Note>
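As a rough sketch of steps 2 and 5 (the request and response field names below are inferred from the CLI flags used later in this guide, not taken from the API reference):

```ts
// Exchange the GitLab ID token for a short-lived Infisical access token.
const loginRes = await fetch("https://app.infisical.com/api/v1/auth/oidc-auth/login", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    identityId: "<machine-identity-id>", // assumed field name
    jwt: process.env.GITLAB_ID_TOKEN // assumed field name
  })
});
const { accessToken } = (await loginRes.json()) as { accessToken: string }; // assumed response shape

// Use the short-lived token for subsequent authenticated requests.
await fetch("https://app.infisical.com/api/v3/secrets/raw", {
  headers: { Authorization: `Bearer ${accessToken}` }
});
```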
## Guide
In the following steps, we explore how to create and use identities to access the Infisical API using the OIDC Auth authentication method.
<Steps>
<Step title="Creating an identity">
To create an identity, head to your Organization Settings > Access Control > Machine Identities and press **Create identity**.
![identities organization](/images/platform/identities/identities-org.png)
When creating an identity, you specify an organization level [role](/documentation/platform/role-based-access-controls) for it to assume; you can configure roles in Organization Settings > Access Control > Organization Roles.
![identities organization create](/images/platform/identities/identities-org-create.png)
Now input a few details for your new identity. Here's some guidance for each field:
- Name (required): A friendly name for the identity.
- Role (required): A role from the **Organization Roles** tab for the identity to assume. The organization role assigned will determine what organization level resources this identity can have access to.
Once you've created an identity, you'll be redirected to a page where you can manage the identity.
![identities page](/images/platform/identities/identities-page.png)
Since the identity has been configured with Universal Auth by default, you should re-configure it to use OIDC Auth instead. To do this, press to edit the **Authentication** section,
remove the existing Universal Auth configuration, and add a new OIDC Auth configuration onto the identity.
![identities page remove default auth](/images/platform/identities/identities-page-remove-default-auth.png)
![identities create oidc auth method](/images/platform/identities/identities-org-create-oidc-auth-method.png)
<Warning>Restrict access by configuring the Subject, Audiences, and Claims fields</Warning>
Here's some more guidance on each field:
- OIDC Discovery URL: The URL used to retrieve the OpenID Connect configuration from the identity provider. This will be used to fetch the public key needed for verifying the provided JWT. For GitLab SaaS (GitLab.com), this should be set to `https://gitlab.com`. For self-hosted GitLab instances, use the domain of your GitLab instance.
- Issuer: The unique identifier of the identity provider issuing the JWT. This value is used to verify the iss (issuer) claim in the JWT to ensure the token is issued by a trusted provider. This should also be set to the domain of the GitLab instance.
- CA Certificate: The PEM-encoded CA cert for establishing secure communication with the Identity Provider endpoints. For GitLab.com, this can be left blank.
- Subject: The expected principal that is the subject of the JWT. For GitLab pipelines, this should be set to a string that uniquely identifies the pipeline and its context, in the format `project_path:{group}/{project}:ref_type:{type}:ref:{branch_name}` (e.g., `project_path:example-group/example-project:ref_type:branch:ref:main`).
- Claims: Additional information or attributes that should be present in the JWT for it to be valid. You can refer to GitLab's [documentation](https://docs.gitlab.com/ee/ci/secrets/id_token_authentication.html#token-payload) for the list of supported claims.
- Access Token TTL (default is `2592000`, equivalent to 30 days): The lifetime for an access token in seconds. This value will be referenced at renewal time.
- Access Token Max TTL (default is `2592000`, equivalent to 30 days): The maximum lifetime for an access token in seconds. This value will be referenced at renewal time.
- Access Token Max Number of Uses (default is `0`): The maximum number of times that an access token can be used; a value of `0` implies infinite number of uses.
- Access Token Trusted IPs: The IPs or CIDR ranges that access tokens can be used from. By default, each token is given the `0.0.0.0/0`, allowing usage from any network address.
<Tip>For more details on the appropriate values for the OIDC fields, refer to GitLab's [documentation](https://docs.gitlab.com/ee/ci/secrets/id_token_authentication.html#token-payload). </Tip>
<Info>The `subject`, `audiences`, and `claims` fields support glob pattern matching; however, we highly recommend using hardcoded values whenever possible.</Info>
</Step>
<Step title="Adding an identity to a project">
To enable the identity to access project-level resources such as secrets within a specific project, you should add it to that project.
To do this, head over to the project you want to add the identity to and go to Project Settings > Access Control > Machine Identities and press **Add identity**.
Next, select the identity you want to add to the project and the project level role you want to allow it to assume. The project role assigned will determine what project level resources this identity can have access to.
![identities project](/images/platform/identities/identities-project.png)
![identities project create](/images/platform/identities/identities-project-create.png)
</Step>
<Step title="Accessing the Infisical API with the identity">
As a demonstration, we will use the Infisical CLI to fetch Infisical secrets and use them within a GitLab pipeline.
To access Infisical secrets as the identity, you need an identity token from GitLab that matches the OIDC configuration defined for the machine identity.
This can be done by defining the `id_tokens` property. The resulting token is then used to log in with OIDC, like the following: `infisical login --method=oidc-auth --oidc-jwt=$GITLAB_TOKEN`
Below is a complete example of how a GitLab pipeline can be configured to work with secrets from Infisical using the Infisical CLI with OIDC Auth:
```yaml
image: ubuntu
stages:
- build
build-job:
stage: build
id_tokens:
INFISICAL_ID_TOKEN:
aud: infisical-aud-test
script:
- apt update && apt install -y curl
- curl -1sLf 'https://dl.cloudsmith.io/public/infisical/infisical-cli/setup.deb.sh' | bash
- apt-get update && apt-get install -y infisical
- export INFISICAL_TOKEN=$(infisical login --method=oidc-auth --machine-identity-id=4e807a78-1b1c-4bd6-9609-ef2b0cf4fd54 --oidc-jwt=$INFISICAL_ID_TOKEN --silent --plain)
- infisical run --projectId=1d0443c1-cd43-4b3a-91a3-9d5f81254a89 --env=dev -- npm run build
```
The `id_tokens` keyword is used to request an ID token for the job. In this example, an ID token named `INFISICAL_ID_TOKEN` is requested with the audience (`aud`) claim set to "infisical-aud-test". This ID token will be used to authenticate with Infisical.
<Note>
Each identity access token has a time-to-live (TTL) which you can infer from the response of the login operation; the default TTL is `7200` seconds, which can be adjusted.
If an identity access token expires, it can no longer authenticate with the Infisical API. In this case, a new access token should be obtained by performing another login operation.
</Note>
</Step>
</Steps>

View File

@@ -0,0 +1,262 @@
---
title: "HSM Integration"
description: "Learn more about integrating an HSM with Infisical KMS."
---
<Note>
Changing the encryption strategy for your instance is an Enterprise-only feature.
This section is intended for users who have obtained an Enterprise license and are on-premise.
Please reach out to sales@infisical.com if you have any questions.
</Note>
## Overview
Infisical KMS currently supports two encryption strategies:
1. **Standard Encryption**: This is the default encryption strategy used by Infisical KMS. It uses a software-protected encryption key to encrypt KMS keys within your Infisical instance. The root encryption key is defined by setting the `ENCRYPTION_KEY` environment variable.
2. **Hardware Security Module (HSM)**: This encryption strategy uses a Hardware Security Module (HSM) to create a root encryption key that is stored on a physical device, which is then used to encrypt the KMS keys within your instance.
## Hardware Security Module (HSM)
![HSM Illustration](/images/platform/kms/hsm/hsm-illustration.png)
Using a hardware security module comes with the added benefit of having a secure and tamper-proof device to store your encryption keys. This ensures that your data is protected from unauthorized access.
<Warning>
All encryption keys used for cryptographic operations are stored within the HSM. This means that if the HSM is lost or destroyed, you will no longer be able to decrypt your data stored within Infisical. Most providers offer recovery options for HSM devices, which you should consider when setting up an HSM device.
</Warning>
Enabling HSM encryption has a set of key benefits:
1. **Root Key Wrapping**: The root KMS encryption key that is used to secure your Infisical instance will be encrypted using the HSM device rather than the standard software-protected key.
2. **FIPS 140-2/3 Compliance**: Using an HSM device ensures that your Infisical instance is FIPS 140-2 or FIPS 140-3 compliant. For FIPS 140-3, ensure that your HSM is FIPS 140-3 validated.
#### Caveats
- **Performance**: Using an HSM device can have a performance impact on your Infisical instance due to the additional latency the HSM introduces; however, this is only noticeable when your instance(s) start up or when the encryption strategy is changed.
- **Key Recovery**: If the HSM device is lost or destroyed, you will no longer be able to decrypt your data stored within Infisical. Most HSM providers offer recovery options, which you should consider when setting up an HSM device.
### Requirements
- An Infisical instance with a version number that is equal to or greater than `v0.91.0`.
- If you are using Docker, your instance must be using the `infisical/infisical-fips` image.
- An HSM device from a provider such as [Thales Luna HSM](https://cpl.thalesgroup.com/encryption/data-protection-on-demand/services/luna-cloud-hsm), [AWS CloudHSM](https://aws.amazon.com/cloudhsm/), or others.
### FIPS Compliance
FIPS, the Federal Information Processing Standards, is a set of standards used to accredit cryptographic modules. FIPS 140-2 and FIPS 140-3 are the two most common standards applied to cryptographic modules. If your HSM uses FIPS 140-3 validated hardware, Infisical will automatically be FIPS 140-3 compliant; if it uses FIPS 140-2 validated hardware, Infisical will be FIPS 140-2 compliant.
HSM devices are especially useful for organizations that operate in regulated industries such as healthcare, finance, and government, where data security and compliance are of the utmost importance.
For organizations that work with US government agencies, FIPS compliance is almost always a requirement when dealing with sensitive information. FIPS compliance ensures that the cryptographic modules used by the organization meet the security requirements set by the US government.
## Setup Instructions
<Steps>
<Step title="Setting up an HSM Device">
To set up HSM encryption, you need to configure an HSM provider and HSM key. The HSM provider is used to connect to the HSM device, and the HSM key is used to encrypt Infisical's KMS keys. We recommend using a Cloud HSM provider such as [Thales Luna HSM](https://cpl.thalesgroup.com/encryption/data-protection-on-demand/services/luna-cloud-hsm) or [AWS CloudHSM](https://aws.amazon.com/cloudhsm/).
Follow the instructions provided by your HSM provider to set up the device. Once it is set up, it can be used within Infisical.
After setting up the HSM from your provider, you will have a set of files that you can use to access the HSM. These files need to be present on the machine where Infisical is running.
If you are using containers, you will need to mount the folder where these files are stored as a volume in the container.
The setup process for an HSM device varies depending on the provider. We have created a guide for Thales Luna Cloud HSM, which you can find below.
</Step>
<Step title="Configure HSM on Infisical">
<Warning>
If you are using Docker, please follow the instructions in the [Using HSMs with Docker](#using-hsms-with-docker) section.
</Warning>
Configuring the HSM on Infisical requires setting a set of environment variables:
- `HSM_LIB_PATH`: The path to the PKCS#11 library provided by the HSM provider. This usually comes in the form of a `.so` file for Linux and MacOS, or a `.dll` file for Windows. If you are using Docker, mount the folder containing the library as a volume (further instructions can be found below) and set `HSM_LIB_PATH` to the path where the library is mounted in the container.
- `HSM_PIN`: The PKCS#11 PIN to use for authentication with the HSM device.
- `HSM_SLOT`: The slot number to use for the HSM device. This is typically between `0` and `5` for most HSM devices.
- `HSM_KEY_LABEL`: The label of the key to use for encryption. **Please note that if no key is found with the provided label, the HSM will create a new key with the provided label.**
You can read more about the [default instance configurations](/self-hosting/configuration/envars) here.
</Step>
<Step title="Restart Instance">
After setting up the HSM, you need to restart the Infisical instance for the changes to take effect.
</Step>
<Step title="Navigate to the Server Admin Console">
![Server Admin Console](/images/platform/kms/hsm/server-admin-console.png)
</Step>
<Step title="Update the KMS Encryption Strategy to HSM">
![Set Encryption Strategy](/images/platform/kms/hsm/encryption-strategy.png)
Once you press the 'Save' button, your Infisical instance will immediately switch to the HSM encryption strategy. This will re-encrypt your KMS key with keys from the HSM device.
</Step>
<Step title="Verify Encryption Strategy">
To verify that the HSM was correctly configured, you can try creating a new secret in one of your projects. If the secret is created successfully, the HSM is now being used for encryption.
</Step>
</Steps>
## Using HSMs with Docker
When using Docker, you need to mount the path containing the HSM client files. This section covers how to configure your Infisical instance to use an HSM with Docker.
<Tabs>
<Tab title="Thales Luna Cloud HSM">
<Steps>
<Step title="Create HSM client folder">
When using Docker, you are able to set your HSM library path to any location on your machine. In this example, we are going to be using `/etc/luna-docker`.
```bash
mkdir /etc/luna-docker
```
After [setting up your Luna Cloud HSM client](https://thalesdocs.com/gphsm/luna/7/docs/network/Content/install/client_install/add_dpod.htm), you should have a set of files, referred to as the HSM client. You don't strictly need every file, but for simplicity we recommend copying the entire client folder.
A folder structure of a client folder will often look like this:
```
partition-ca-certificate.pem
partition-certificate.pem
server-certificate.pem
Chrystoki.conf
/plugins
libcloud.plugin
/lock
/libs
/64
libCryptoki2.so
/jsp
LunaProvider.jar
/64
libLunaAPI.so
/etc
openssl.cnf
/bin
/64
ckdemo
lunacm
multitoken
vtl
```
The most important parts of the client folder are the `Chrystoki.conf` file and the `libs`, `plugins`, and `jsp` folders. You need to copy these files to the folder you created in the first step.
```bash
cp -r /<path-to-where-your-luna-client-is-located> /etc/luna-docker
```
</Step>
<Step title="Update Chrystoki.conf">
The `Chrystoki.conf` file is used to configure the HSM client. You need to update the `Chrystoki.conf` file to point to the correct file paths.
In this example, we will be mounting the `/etc/luna-docker` folder to the Docker container under a different path. The path we will use in this example is `/usr/safenet/lunaclient`. This means `/etc/luna-docker` will be mounted to `/usr/safenet/lunaclient` in the Docker container.
An example config file will look like this:
```Chrystoki.conf
Chrystoki2 = {
# This path points to the mounted path, /usr/safenet/lunaclient
LibUNIX64 = /usr/safenet/lunaclient/libs/64/libCryptoki2.so;
}
Luna = {
DefaultTimeOut = 500000;
PEDTimeout1 = 100000;
PEDTimeout2 = 200000;
PEDTimeout3 = 20000;
KeypairGenTimeOut = 2700000;
CloningCommandTimeOut = 300000;
CommandTimeOutPedSet = 720000;
}
CardReader = {
LunaG5Slots = 0;
RemoteCommand = 1;
}
Misc = {
# Update the paths to point to the mounted path if your folder structure is different from the one mentioned in the previous step.
PluginModuleDir = /usr/safenet/lunaclient/plugins;
MutexFolder = /usr/safenet/lunaclient/lock;
PE1746Enabled = 1;
ToolsDir = /usr/bin;
}
Presentation = {
ShowEmptySlots = no;
}
LunaSA Client = {
ReceiveTimeout = 20000;
# Update the paths to point to the mounted path if your folder structure is different from the one mentioned in the previous step.
SSLConfigFile = /usr/safenet/lunaclient/etc/openssl.cnf;
ClientPrivKeyFile = ./etc/ClientNameKey.pem;
ClientCertFile = ./etc/ClientNameCert.pem;
ServerCAFile = ./etc/CAFile.pem;
NetClient = 1;
TCPKeepAlive = 1;
}
REST = {
AppLogLevel = error
ServerName = <REDACTED>;
ServerPort = 443;
AuthTokenConfigURI = <REDACTED>;
AuthTokenClientId = <REDACTED>;
AuthTokenClientSecret = <REDACTED>;
RestClient = 1;
ClientTimeoutSec = 120;
ClientPoolSize = 32;
ClientEofRetryCount = 15;
ClientConnectRetryCount = 900;
ClientConnectIntervalMs = 1000;
}
XTC = {
Enabled = 1;
TimeoutSec = 600;
}
```
Save the file after updating the paths.
</Step>
<Step title="Run Docker">
Running Docker with HSM encryption requires setting the HSM-related environment variables as mentioned previously in the [HSM setup instructions](#setup-instructions). You can set these environment variables in your Docker run command.
We are setting the environment variables for Docker via the command line in this example, but you can also pass in a `.env` file to set these environment variables.
<Warning>
If no key is found with the provided key label, the HSM will create a new key with the provided label.
Infisical depends on an AES and HMAC key to be present in the HSM. If these keys are not present, Infisical will create them. The AES key label will be the value of the `HSM_KEY_LABEL` environment variable, and the HMAC key label will be the value of the `HSM_KEY_LABEL` environment variable with the suffix `_HMAC`.
</Warning>
```bash
docker run -p 80:8080 \
-v /etc/luna-docker:/usr/safenet/lunaclient \
-e HSM_LIB_PATH="/usr/safenet/lunaclient/libs/64/libCryptoki2.so" \
-e HSM_PIN="<your-hsm-device-pin>" \
-e HSM_SLOT=<hsm-device-slot> \
-e HSM_KEY_LABEL="<your-key-label>" \
# The rest are unrelated to HSM setup...
-e ENCRYPTION_KEY="<>" \
-e AUTH_SECRET="<>" \
-e DB_CONNECTION_URI="<>" \
-e REDIS_URL="<>" \
-e SITE_URL="<>" \
infisical/infisical-fips:<version> # Replace <version> with the version you want to use
```
We recommend reading further about [using Infisical with Docker](/self-hosting/deployment-options/standalone-infisical).
</Step>
</Steps>
After following these steps, your Docker setup will be ready to use HSM encryption.
</Tab>
</Tabs>
## Disabling HSM Encryption
To disable HSM encryption, navigate to Infisical's Server Admin Console and set the KMS encryption strategy to `Software-based Encryption`. This will revert the encryption strategy back to the default software-based encryption.
<Note>
In order to disable HSM encryption, the Infisical instance must be able to access the HSM device. If the HSM device is no longer accessible, you will not be able to disable HSM encryption.
</Note>

View File

@@ -8,6 +8,10 @@ description: "Learn how to manage and use cryptographic keys with Infisical."
Infisical can be used as a Key Management System (KMS), referred to as Infisical KMS, to centralize management of keys to be used for cryptographic operations like encryption/decryption.
By default your Infisical data such as projects and the data within them are encrypted at rest using Infisical's own KMS. This ensures that your data is secure and protected from unauthorized access.
If you are on-premise, your KMS root key is generated at random and protected with the `ROOT_ENCRYPTION_KEY` environment variable. You can also use a Hardware Security Module (HSM) to create the root key. Read more about [HSM](/docs/documentation/platform/kms/encryption-strategies).
<Note>
Keys managed in KMS are not extractable from the platform. Additionally, data
is never stored when performing cryptographic operations.

View File

@@ -4,19 +4,18 @@ sidebarTitle: "MFA"
description: "Learn how to secure your Infisical account with MFA."
---
MFA requires users to provide multiple forms of identification to access their account. Currently, this means logging in with your password and a 6-digit code sent to your email.
MFA requires users to provide multiple forms of identification to access their account.
## Email 2FA
Check the box in Personal Settings > Two-factor Authentication to enable email-based 2FA.
If 2-factor authentication is enabled in the Personal settings page, email will be used for MFA by default.
![Email-based MFA](../../images/mfa-email.png)
![Email-based MFA](/images/mfa-email.png)
<Note>
Infisical currently supports email-based 2FA. We're actively working on
building support for other forms of identification via SMS and Authenticator
App.
</Note>
## Mobile Authenticator 2FA
You can use any mobile authenticator app (Authy, Google Authenticator, Duo, etc.) to secure your account. After registration with an authenticator, select **Mobile Authenticator** as your 2FA method.
![Authenticator-based MFA](/images/mfa-authenticator.png)
## Entra ID / Azure AD MFA
@@ -25,32 +24,39 @@ Check the box in Personal Settings > Two-factor Authentication to enable email-b
We also encourage you to have your team download and setup the
[Microsoft Authenticator App](https://www.microsoft.com/en-us/security/mobile-authenticator-app) prior to enabling MFA.
</Note>
<Steps>
<Step title="Open your Infisical Application in the Microsoft Entra Admin Center">
![Entra Infisical app](../../images/platform/mfa/entra/mfa_entra_infisical_app.png)
</Step>
<Step title="Tap on Conditional Access under the Security Tab">
![conditional access](../../images/platform/mfa/entra/mfa_entra_conditional_access.png)
</Step>
<Step title="Tap on Create New Policy from Templates">
![create policy](../../images/platform/mfa/entra/mfa_entra_create_policy.png)
</Step>
<Step title="Select Require MFA for All Users and Tap on Review + Create">
![require MFA and review policy](../../images/platform/mfa/entra/mfa_entra_review_policy.png)
<Note>
By default all users except the configuring admin will be setup to require MFA.
Microsoft encourages keeping at least one admin excluded from MFA to prevent accidental lockout.
</Note>
</Step>
<Step title="Set Policy State to Enabled and Tap on Create">
![enable policy and confirm](../../images/platform/mfa/entra/mfa_entra_confirm_policy.png)
</Step>
<Step title="MFA is now Required When Accessing Infisical">
![mfa login](../../images/platform/mfa/entra/mfa_entra_login.png)
<Note>
If users have not setup MFA for Entra / Azure they will be prompted to do so at this time.
</Note>
</Step>
</Steps>
<Step title="Open your Infisical Application in the Microsoft Entra Admin Center">
![Entra Infisical
app](/images/platform/mfa/entra/mfa_entra_infisical_app.png)
</Step>
<Step title="Tap on Conditional Access under the Security Tab">
![conditional
access](/images/platform/mfa/entra/mfa_entra_conditional_access.png)
</Step>
<Step title="Tap on Create New Policy from Templates">
![create policy](/images/platform/mfa/entra/mfa_entra_create_policy.png)
</Step>
<Step title="Select Require MFA for All Users and Tap on Review + Create">
![require MFA and review
policy](/images/platform/mfa/entra/mfa_entra_review_policy.png)
<Note>
By default, all users except the configuring admin will be set up to require
MFA. Microsoft encourages keeping at least one admin excluded from MFA to
prevent accidental lockout.
</Note>
</Step>
<Step title="Set Policy State to Enabled and Tap on Create">
![enable policy and
confirm](/images/platform/mfa/entra/mfa_entra_confirm_policy.png)
</Step>
<Step title="MFA is now Required When Accessing Infisical">
![mfa login](/images/platform/mfa/entra/mfa_entra_login.png)
<Note>
If users have not set up MFA for Entra / Azure, they will be prompted to do
so at this time.
</Note>
</Step>
</Steps>

Binary image files changed (previews not shown).

View File

@@ -118,6 +118,7 @@
"group": "Key Management (KMS)",
"pages": [
"documentation/platform/kms/overview",
"documentation/platform/kms/hsm-integration",
"documentation/platform/kms/kubernetes-encryption"
]
},
@@ -225,7 +226,8 @@
"pages": [
"documentation/platform/identities/oidc-auth/general",
"documentation/platform/identities/oidc-auth/github",
"documentation/platform/identities/oidc-auth/circleci"
"documentation/platform/identities/oidc-auth/circleci",
"documentation/platform/identities/oidc-auth/gitlab"
]
},
"documentation/platform/mfa",
@@ -291,7 +293,10 @@
},
{
"group": "Reference architectures",
"pages": ["self-hosting/reference-architectures/aws-ecs"]
"pages": [
"self-hosting/reference-architectures/aws-ecs",
"self-hosting/reference-architectures/linux-deployment-ha"
]
},
"self-hosting/ee",
"self-hosting/faq"

View File

@@ -79,7 +79,7 @@ description: "Learn how to use Helm chart to install Infisical on your Kubernete
</Step>
<Step title="Database schema migration ">
Infisical relies a relational database, which means that database schemas need to be migrated before the instance can become operational.
Infisical relies on a relational database, which means that database schemas need to be migrated before the instance can become operational.
To automate this process, the chart includes an option named `infisical.autoDatabaseSchemaMigration`.
When this option is enabled, a deployment/upgrade will only occur _after_ a successful schema migration.
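A minimal sketch of enabling this in your Helm values (assuming the chart's standard `infisical` values block):
```yaml
# values.yaml (sketch): gate deployments/upgrades on a successful schema migration
infisical:
  autoDatabaseSchemaMigration: true
```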

View File

@@ -1,520 +0,0 @@
---
title: "Automatically deploy Infisical with High Availability"
sidebarTitle: "High Availability"
---
# Self-Hosting Infisical with a native High Availability (HA) deployment
This page describes the architecture Infisical uses to provide high availability (HA) and how to deploy Infisical with high availability. The HA deployment is designed to ensure that Infisical services are always available and can handle service failures gracefully, without causing service disruptions.
<Info>
This deployment option is currently only available for Debian-based nodes (e.g., Ubuntu, Debian).
We plan on adding support for other operating systems in the future.
</Info>
## High availability architecture
| Service | Nodes | Configuration | GCP | AWS |
|----------------------------------|----------------|------------------------------|---------------|--------------|
| External load balancer$^1$ | 1 | 4 vCPU, 3.6 GB memory | n1-highcpu-4 | c5n.xlarge |
| Internal load balancer$^2$ | 1 | 4 vCPU, 3.6 GB memory | n1-highcpu-4 | c5n.xlarge |
| Etcd cluster$^3$ | 3 | 4 vCPU, 3.6 GB memory | n1-highcpu-4 | c5n.xlarge |
| PostgreSQL$^4$ | 3 | 2 vCPU, 7.5 GB memory | n1-standard-2 | m5.large |
| Sentinel$^4$ | 3 | 2 vCPU, 7.5 GB memory | n1-standard-2 | m5.large |
| Redis$^4$ | 3 | 2 vCPU, 7.5 GB memory | n1-standard-2 | m5.large |
| Infisical Core | 3 | 8 vCPU, 7.2 GB memory | n1-highcpu-8 | c5.2xlarge |
**Footnotes:**
1. External load balancer: If you wish to have multiple instances of the internal load balancer, you will need to use an external load balancer to distribute incoming traffic across multiple internal load balancers.
Using multiple internal load balancers is recommended for high-traffic environments. In the following guide we will use a single internal load balancer, as external load balancing falls outside the scope of this guide.
2. Internal load balancer: The internal load balancer (an HAProxy instance) is used to distribute incoming traffic across multiple Infisical Core instances, Postgres nodes, and Redis nodes. The internal load balancer exposes a set of ports _(80 for Infisical, 5000 for read/write Postgres, 5001 for read-only Postgres, and 6379 for Redis)_. Where these ports route to is determined by the internal load balancer based on the availability and health of the service nodes.
The internal load balancer is only accessible from within the same network, and is not exposed to the public internet.
3. Etcd cluster: Etcd is a distributed key-value store used to store and distribute data between the PostgreSQL nodes. Etcd is dependent on high disk I/O performance, so it is highly recommended to use performant SSD disks for the Etcd nodes, with _at least_ 80GB of disk space.
4. The Redis and PostgreSQL nodes will automatically be configured for high availability and used in your Infisical Core instances. However, you can optionally choose to bring your own database (BYOD), and skip these nodes. See more on how to [provide your own databases](#provide-your-own-databases).
<Info>
For all services that require multiple nodes, it is recommended to deploy them across multiple availability zones (AZs) to ensure high availability and fault tolerance. This will help prevent service disruptions in the event of an AZ failure.
</Info>
![High availability stack](../../images/self-hosting/deployment-options/native/ha-stack.png)
The image above shows how a high availability deployment of Infisical is structured. In this example, an external load balancer is used to distribute incoming traffic across multiple internal load balancers. The external load balancer isn't required, and setting one up requires additional configuration.
### Fault Tolerance
This setup provides N+1 redundancy, meaning it can tolerate the failure of any single node without service interruption.
## Ansible
### What is Ansible
Ansible is an open-source automation tool that simplifies application deployment, configuration management, and task automation.
At Infisical, we use Ansible to automate the deployment of Infisical services. The Ansible roles are designed to make it easy to deploy Infisical services in a high availability environment.
### Installing Ansible
<Steps>
<Step title="Install using the pipx Python package manager">
```bash
pipx install --include-deps ansible
```
</Step>
<Step title="Verify the installation">
```bash
ansible --version
```
</Step>
</Steps>
### Understanding Ansible Concepts
* Inventory _(inventory.ini)_: A file that lists your target hosts.
* Playbook _(playbook.yml)_: YAML file containing a set of tasks to be executed on hosts.
* Roles: Reusable units of organization for playbooks. Roles are used to group tasks together in a structured and reusable manner.
### Basic Ansible Commands
Running a playbook with an inventory file:
```bash
ansible-playbook -i inventory.ini playbook.yml
```
This is how you would run the playbook containing the roles for setting up Infisical in a high availability environment.
### Installing the Infisical High Availability Deployment Ansible Role
The Infisical Ansible role is available on Ansible Galaxy. You can install the role by running the following command:
```bash
ansible-galaxy collection install infisical.infisical_core_ha_deployment
```
## Set up components
1. External load balancer (optional, and not covered in this guide)
2. [Configure Etcd cluster](#configure-etcd-cluster)
3. [Configure PostgreSQL database](#configure-postgresql-database)
4. [Configure Redis/Sentinel](#configure-redis-and-sentinel)
5. [Configure Infisical Core](#configure-infisical-core)
The servers reside on the same 52.1.0.0/24 private network range and can connect to each other freely on these addresses.
The following list describes each server and its assigned IP:
- 52.1.0.1: External Load Balancer
- 52.1.0.2: Internal Load Balancer
- 52.1.0.3: Etcd 1
- 52.1.0.4: Etcd 2
- 52.1.0.5: Etcd 3
- 52.1.0.6: PostgreSQL 1
- 52.1.0.7: PostgreSQL 2
- 52.1.0.8: PostgreSQL 3
- 52.1.0.9: Redis 1
- 52.1.0.10: Redis 2
- 52.1.0.11: Redis 3
- 52.1.0.12: Sentinel 1
- 52.1.0.13: Sentinel 2
- 52.1.0.14: Sentinel 3
- 52.1.0.15: Infisical Core 1
- 52.1.0.16: Infisical Core 2
- 52.1.0.17: Infisical Core 3
### Configure Etcd cluster
Configuring the Etcd cluster is the first step in setting up a high availability deployment of Infisical.
The Etcd cluster stores and distributes data between the PostgreSQL nodes; Etcd itself is a highly available, fault-tolerant distributed key-value store.
```yaml example.playbook.yml
- hosts: all
gather_facts: true
- name: Set up etcd cluster
hosts: etcd
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: etcd
```
```ini example.inventory.ini
[etcd]
etcd1 ansible_host=52.1.0.3
etcd2 ansible_host=52.1.0.4
etcd3 ansible_host=52.1.0.5
[etcd:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=./ssh-key.pem
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
```
### Configure PostgreSQL database
The Postgres role takes a set of parameters that are used to configure your PostgreSQL database.
Make sure to set the following variables in your playbook.yml file:
- `postgres_super_user_password`: The password for the 'postgres' database user.
- `postgres_db_name`: The name of the database that will be created on the leader node and replicated to the secondary nodes.
- `postgres_user`: The name of the user that will be created on the leader node and replicated to the secondary nodes.
- `postgres_user_password`: The password for the user that will be created on the leader node and replicated to the secondary nodes.
- `etcd_hosts`: The list of etcd hosts that the PostgreSQL nodes will use to communicate with etcd. By default you want to keep this value set to `"{{ groups['etcd'] }}"`
```yaml example.playbook.yml
- hosts: all
gather_facts: true
- name: Set up PostgreSQL with Patroni
hosts: postgres
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: postgres
vars:
postgres_super_user_password: "your-super-user-password"
postgres_user: infisical-user
postgres_user_password: "your-password"
postgres_db_name: infisical-db
etcd_hosts: "{{ groups['etcd'] }}"
```
```ini example.inventory.ini
[postgres]
postgres1 ansible_host=52.1.0.6
postgres2 ansible_host=52.1.0.7
postgres3 ansible_host=52.1.0.8
```
### Configure Redis and Sentinel
The Redis role takes a single variable as input: the Redis password.
The Sentinel and Redis hosts run the same role, so the task targets both groups with `hosts: redis:sentinel`.
- `redis_password`: The password that will be set for the Redis instance.
```yaml example.playbook.yml
- hosts: all
gather_facts: true
- name: Setup Redis and Sentinel
hosts: redis:sentinel
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: redis
vars:
redis_password: "REDIS_PASSWORD"
```
```ini example.inventory.ini
[redis]
redis1 ansible_host=52.1.0.9
redis2 ansible_host=52.1.0.10
redis3 ansible_host=52.1.0.11
[sentinel]
sentinel1 ansible_host=52.1.0.12
sentinel2 ansible_host=52.1.0.13
sentinel3 ansible_host=52.1.0.14
```
### Configure Internal Load Balancer
The internal load balancer used is HAProxy. HAProxy will expose a set of ports as listed below. Each port will route to a different service based on the availability and health of the service nodes.
- Port 80: Infisical Core
- Port 5000: Read/Write PostgreSQL
- Port 5001: Read-only PostgreSQL
- Port 6379: Redis
- Port 7000: HAProxy monitoring
These ports will need to be exposed on your network to become accessible from the outside world.
The HAProxy configuration file is generated by the HAProxy role, and is located at `/etc/haproxy/haproxy.cfg` on your internal load balancer node.
The HAProxy setup comes with a monitoring panel. You must set the username/password combination for the monitoring panel via the `stats_user` and `stats_password` variables in the HAProxy role.
Once the HAProxy role has fully executed, you can monitor your HA setup by navigating to `http://52.1.0.2:7000/haproxy?stats` in your browser.
```ini example.inventory.ini
[haproxy]
internal_lb ansible_host=52.1.0.2
```
```yaml example.playbook.yml
- name: Set up HAProxy
hosts: haproxy
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: haproxy
vars:
stats_user: "stats-username"
stats_password: "stats-password!"
postgres_servers: "{{ groups['postgres'] }}"
infisical_servers: "{{ groups['infisical'] }}"
redis_servers: "{{ groups['redis'] }}"
```
### Configure Infisical Core
The Infisical Core role will set up your actual Infisical instances.
The `env_vars` variable is used to set the environment variables that Infisical will use. The minimum required environment variables are `ENCRYPTION_KEY` and `AUTH_SECRET`. You can find a list of all available environment variables [here](/docs/self-hosting/configuration/envars#general-platform).
The `DB_CONNECTION_URI` and `REDIS_URL` variables will automatically be set if you're running the full playbook. However, you can choose to set them yourself, and skip the Postgres, etcd, redis/sentinel roles entirely.
<Info>
If you later need to add new environment variables to your Infisical deployments, it's important that you add the variables to **all** your Infisical nodes.<br/>
You can find the environment file for Infisical at `/etc/infisical/environment`.<br/>
After editing the environment file, restart the Infisical service by running `systemctl restart infisical`.
</Info>
```yaml example.playbook.yml
- hosts: all
gather_facts: true
- name: Setup Infisical
hosts: infisical
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: infisical
env_vars:
ENCRYPTION_KEY: "YOUR_ENCRYPTION_KEY" # openssl rand -hex 16
AUTH_SECRET: "YOUR_AUTH_SECRET" # openssl rand -base64 32
```
```ini example.inventory.ini
[infisical]
infisical1 ansible_host=52.1.0.15
infisical2 ansible_host=52.1.0.16
infisical3 ansible_host=52.1.0.17
```
## Provide your own databases
The Infisical Core deployment role also supports bringing your own database (BYOD).
By bringing your own database, you're able to skip the Etcd, Postgres, and Redis/Sentinel roles entirely.
To bring your own database, you need to set the `DB_CONNECTION_URI` and `REDIS_URL` environment variables in the Infisical Core role.
```yaml example.playbook.yml
- hosts: all
gather_facts: true
- name: Setup Infisical
hosts: infisical
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: infisical
env_vars:
ENCRYPTION_KEY: "YOUR_ENCRYPTION_KEY" # openssl rand -hex 16
AUTH_SECRET: "YOUR_AUTH_SECRET" # openssl rand -base64 32
DB_CONNECTION_URI: "postgres://user:password@localhost:5432/infisical"
REDIS_URL: "redis://localhost:6379"
```
```ini example.inventory.ini
[infisical]
infisical1 ansible_host=52.1.0.15
infisical2 ansible_host=52.1.0.16
infisical3 ansible_host=52.1.0.17
```
## Full deployment example
To make it easier to get started, we've provided a full deployment example that you can use to deploy Infisical in a high availability environment.
- This deployment does not use an external load balancer.
- You **must** change the environment variables defined in the `playbook.yml` example.
- You must update the IP addresses in the `inventory.ini` file to match your own network configuration.
- You need to set the SSH key and SSH user in the `inventory.ini` file.
<Steps>
<Step title="Install Ansible">
Install Ansible using the pipx Python package manager.
```bash
pipx install --include-deps ansible
```
</Step>
<Step title="Install the Infisical deployment Ansible Role">
Install the Infisical deployment role from Ansible Galaxy.
```bash
ansible-galaxy collection install infisical.infisical_core_ha_deployment
```
</Step>
<Step title="Setup your hosts">
Create an `inventory.ini` file, and define your hosts and their IP addresses. You can use the example below as a template, and update the IP addresses to match your own network configuration.
Make sure to set the SSH key and SSH user in the `inventory.ini` file, as shown in the example below.
```ini example.inventory.ini
[etcd]
etcd1 ansible_host=52.1.0.3
etcd2 ansible_host=52.1.0.4
etcd3 ansible_host=52.1.0.5
[postgres]
postgres1 ansible_host=52.1.0.6
postgres2 ansible_host=52.1.0.7
postgres3 ansible_host=52.1.0.8
[infisical]
infisical1 ansible_host=52.1.0.15
infisical2 ansible_host=52.1.0.16
infisical3 ansible_host=52.1.0.17
[redis]
redis1 ansible_host=52.1.0.9
redis2 ansible_host=52.1.0.10
redis3 ansible_host=52.1.0.11
[sentinel]
sentinel1 ansible_host=52.1.0.12
sentinel2 ansible_host=52.1.0.13
sentinel3 ansible_host=52.1.0.14
[haproxy]
internal_lb ansible_host=52.1.0.2
; This can be defined individually for each host, or globally for all hosts.
; In this case the credentials are the same for all hosts, so we define them globally as seen below ([all:vars]).
[all:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=./your-ssh-key.pem
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
```
</Step>
<Step title="Setup your Ansible playbook">
The Ansible playbook is where you define which roles/tasks to execute on which hosts.
```yaml example.playbook.yml
---
# Important, we must gather facts from all hosts prior to running the roles to ensure we have all the information we need.
- hosts: all
gather_facts: true
- name: Set up etcd cluster
hosts: etcd
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: etcd
- name: Set up PostgreSQL with Patroni
hosts: postgres
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: postgres
vars:
postgres_super_user_password: "<ENTER_SUPERUSER_PASSWORD>" # Password for the 'postgres' database user
# A database with these credentials will be created on the leader node, and replicated to the secondary nodes.
postgres_db_name: <ENTER_DB_NAME>
postgres_user: <ENTER_DB_USER>
postgres_user_password: <ENTER_DB_USER_PASSWORD>
etcd_hosts: "{{ groups['etcd'] }}"
- name: Setup Redis and Sentinel
hosts: redis:sentinel
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: redis
vars:
redis_password: "<ENTER_REDIS_PASSWORD>"
- name: Set up HAProxy
hosts: haproxy
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: haproxy
vars:
stats_user: "<ENTER_HAPROXY_STATS_USERNAME>"
stats_password: "<ENTER_HAPROXY_STATS_PASSWORD>"
postgres_servers: "{{ groups['postgres'] }}"
infisical_servers: "{{ groups['infisical'] }}"
redis_servers: "{{ groups['redis'] }}"
- name: Setup Infisical
hosts: infisical
become: true
collections:
- infisical.infisical_core_ha_deployment
roles:
- role: infisical
env_vars:
ENCRYPTION_KEY: "YOUR_ENCRYPTION_KEY" # openssl rand -hex 16
AUTH_SECRET: "YOUR_AUTH_SECRET" # openssl rand -base64 32
```
</Step>
<Step title="Run the Ansible playbook">
After creating the `playbook.yml` and `inventory.ini` files, you can run the playbook using the following command:
```bash
ansible-playbook -i inventory.ini playbook.yml
```
This step may take upwards of 10 minutes to complete, depending on the number of nodes and the network speed.
Once the playbook has completed, you should have a fully deployed high availability Infisical environment.
To access Infisical, navigate to `http://52.1.0.2` to view your newly deployed Infisical instance.
</Step>
</Steps>
## Post-deployment steps
After deploying Infisical in a high availability environment, you should perform the following post-deployment steps:
- Check your deployment to ensure that all services are running as expected. You can use the HAProxy monitoring panel to check the status of your services (http://52.1.0.2:7000/haproxy?stats).
- Attempt to access the Infisical Core instances to ensure that they are accessible from the internal load balancer (http://52.1.0.2), as sketched below.
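A quick reachability check from a machine on the private network (a sketch; expects an HTTP response through the internal load balancer):
```bash
# Basic reachability check against the Infisical frontend on port 80
curl -i http://52.1.0.2
```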
An HAProxy stats page indicating a healthy deployment will look like the image below:
![HAProxy stats page](../../images/self-hosting/deployment-options/native/haproxy-stats.png)
## Security Considerations
### Network Security
Secure the network that your instances run on. While this falls outside the scope of Infisical deployment, it's crucial for overall security.
AWS-specific recommendations:
- Use Virtual Private Cloud (VPC) to isolate your infrastructure.
- Configure security groups to restrict inbound and outbound traffic.
- Use Network Access Control Lists (NACLs) for additional network-level security.
<Note>
Please take note that the Infisical team cannot provide infrastructure support for **free self-hosted** deployments.<br/>If you need help with infrastructure, we recommend upgrading to a [paid plan](https://infisical.com/pricing) which includes infrastructure support.
You can also join our community [Slack](https://infisical.com/slack) for help and support from the community.
</Note>
### Troubleshooting
<Accordion title="Ansible: Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user">
If you encounter this issue, please update your ansible config (`ansible.cfg`) file with the following configuration:
```ini
[defaults]
allow_world_readable_tmpfiles = true
```
You can read more about the solution [here](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/sh_shell.html#parameter-world_readable_temp).
</Accordion>
<Accordion title="I'm unable to connect to access the Infisical instance on the web">
This issue can be caused by a number of reasons, mostly realted to the network configuration. Here are a few things you can check:
1. Ensure that the firewall is not blocking the connection. You can check this by running `ufw status`. Ensure that port 80 is open.
2. If you're using a cloud provider like AWS or GCP, ensure that the security group allows traffic on port 80.
3. Ensure that the HAProxy service is running. You can check this by running `systemctl status haproxy`.
4. Ensure that the Infisical service is running. You can check this by running `systemctl status infisical`.
</Accordion>

View File

@@ -1,5 +1,5 @@
---
title: "AWS ECS"
title: "AWS ECS (HA)"
description: "Reference architecture for self-hosting Infisical on AWS ECS"
---

View File

@@ -0,0 +1,383 @@
---
title: "Linux (HA)"
description: "Infisical High Availability Deployment architecture for Linux"
---
This guide describes how to achieve a highly available deployment of Infisical on Linux machines without containerization. The architecture provided serves as a foundation for minimum high availability, which you can scale based on your specific requirements.
## Architecture Overview
![High availability stack](/images/self-hosting/deployment-options/native/ha-stack.png)
The deployment consists of the following key components:
| Service | Nodes | Recommended Specs | GCP Instance | AWS Instance |
|---------------------------|-------|---------------------------|-----------------|--------------|
| External Load Balancer | 1 | 4 vCPU, 4 GB memory | n1-highcpu-4 | c5n.xlarge |
| Internal Load Balancer | 1 | 4 vCPU, 4 GB memory | n1-highcpu-4 | c5n.xlarge |
| Etcd Cluster | 3 | 4 vCPU, 4 GB memory | n1-highcpu-4 | c5n.xlarge |
| PostgreSQL Cluster | 3 | 2 vCPU, 8 GB memory | n1-standard-2 | m5.large |
| Redis + Sentinel | 3+3 | 2 vCPU, 8 GB memory | n1-standard-2 | m5.large |
| Infisical Core | 3 | 2 vCPU, 4 GB memory | n1-highcpu-2 | c5.large |
### Network Architecture
All servers operate within the 52.1.0.0/24 private network range with the following IP assignments:
| Service | IP Address |
|----------------------|------------|
| External Load Balancer| 52.1.0.1 |
| Internal Load Balancer| 52.1.0.2 |
| Etcd Node 1 | 52.1.0.3 |
| Etcd Node 2 | 52.1.0.4 |
| Etcd Node 3 | 52.1.0.5 |
| PostgreSQL Node 1 | 52.1.0.6 |
| PostgreSQL Node 2 | 52.1.0.7 |
| PostgreSQL Node 3 | 52.1.0.8 |
| Redis Node 1 | 52.1.0.9 |
| Redis Node 2 | 52.1.0.10 |
| Redis Node 3 | 52.1.0.11 |
| Sentinel Node 1 | 52.1.0.12 |
| Sentinel Node 2 | 52.1.0.13 |
| Sentinel Node 3 | 52.1.0.14 |
| Infisical Core 1 | 52.1.0.15 |
| Infisical Core 2 | 52.1.0.16 |
| Infisical Core 3 | 52.1.0.17 |
## Component Setup Guide
### 1. Configure Etcd Cluster
The Etcd cluster is needed for leader election in the PostgreSQL HA setup. Skip this step if using managed PostgreSQL.
1. Install Etcd on each node:
```bash
sudo apt update
sudo apt install etcd
```
2. Configure each node with unique identifiers and cluster membership. Example configuration for Node 1 (`/etc/etcd/etcd.conf`):
```yaml
name: etcd1
data-dir: /var/lib/etcd
initial-cluster-state: new
initial-cluster-token: etcd-cluster-1
initial-cluster: etcd1=http://52.1.0.3:2380,etcd2=http://52.1.0.4:2380,etcd3=http://52.1.0.5:2380
initial-advertise-peer-urls: http://52.1.0.3:2380
listen-peer-urls: http://52.1.0.3:2380
listen-client-urls: http://52.1.0.3:2379,http://127.0.0.1:2379
advertise-client-urls: http://52.1.0.3:2379
```
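Once etcd is running on all three nodes, you can verify cluster membership from any node (a sketch; `etcdctl` commands and output differ between etcd v2 and v3):
```bash
# List cluster members and check overall health (etcd v2-style commands)
etcdctl member list
etcdctl cluster-health
```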
### 2. Configure PostgreSQL
For production deployments, you have two options for highly available PostgreSQL:
#### Option A: Managed PostgreSQL Service (Recommended for Most Users)
Use cloud provider managed services:
- AWS: Amazon RDS for PostgreSQL with Multi-AZ
- GCP: Cloud SQL for PostgreSQL with HA configuration
- Azure: Azure Database for PostgreSQL with zone redundant HA
These services handle replication, failover, and maintenance automatically.
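If you go this route, you will later point `DB_CONNECTION_URI` (see the Infisical configuration further below) at your managed endpoint. A sketch with a hypothetical RDS hostname:
```ruby
# Hypothetical managed endpoint; substitute your own instance's host and credentials
infisical_core['DB_CONNECTION_URI'] = 'postgres://user:pass@mydb.example.us-east-1.rds.amazonaws.com:5432/infisical'
```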
#### Option B: Self-Managed PostgreSQL Cluster
A full HA installation guide for PostgreSQL is beyond the scope of this document. However, we have provided an overview of resources and code snippets below to guide your deployment.
1. Required Components:
- PostgreSQL 14+ on each node
- Patroni for cluster management
- Etcd for distributed consensus
2. Documentation we recommend you read:
- [Complete Patroni Setup Guide](https://patroni.readthedocs.io/en/latest/README.html)
- [PostgreSQL Replication Documentation](https://www.postgresql.org/docs/current/high-availability.html)
3. Key Steps Overview:
```bash
# 1. Install requirements on each PostgreSQL node
sudo apt update
sudo apt install -y postgresql-14 postgresql-contrib-14 python3-pip
pip3 install 'patroni[etcd]' psycopg2-binary
# 2. Create Patroni config directory
sudo mkdir /etc/patroni
sudo chown postgres:postgres /etc/patroni
# 3. Create Patroni configuration (example for first node)
# /etc/patroni/config.yml - REQUIRES CAREFUL CUSTOMIZATION
```
```yaml
scope: infisical-cluster
namespace: /db/
name: postgresql1
restapi:
listen: 52.1.0.6:8008
connect_address: 52.1.0.6:8008
etcd:
hosts: 52.1.0.3:2379,52.1.0.4:2379,52.1.0.5:2379
bootstrap:
dcs:
ttl: 30
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
postgresql:
use_pg_rewind: true
parameters:
max_connections: 1000
shared_buffers: 2GB
work_mem: 8MB
max_worker_processes: 8
max_parallel_workers_per_gather: 4
max_parallel_workers: 8
wal_level: replica
hot_standby: "on"
max_wal_senders: 10
max_replication_slots: 10
hot_standby_feedback: "on"
```
4. Important considerations:
- Proper disk configuration for WAL and data directories
- Network latency between nodes
- Backup strategy and point-in-time recovery
- Monitoring and alerting setup
- Connection pooling configuration
- Security and network access controls
5. Recommended readings:
- [PostgreSQL Backup and Recovery](https://www.postgresql.org/docs/current/backup.html)
- [PostgreSQL Monitoring](https://www.postgresql.org/docs/current/monitoring.html)
### 3. Configure Redis and Sentinel
Similar to PostgreSQL, a full HA Redis setup guide is beyond the scope of this document. Below are the key resources and considerations for your deployment.
#### Option A: Managed Redis Service (Recommended for Most Users)
Use cloud provider managed Redis services:
- AWS: ElastiCache for Redis with Multi-AZ
- GCP: Memorystore for Redis with HA
- Azure: Azure Cache for Redis with zone redundancy
Follow your cloud provider's documentation:
- [AWS ElastiCache Documentation](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html)
- [GCP Memorystore Documentation](https://cloud.google.com/memorystore/docs/redis)
- [Azure Redis Cache Documentation](https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/)
#### Option B: Self-Managed Redis Cluster
Setting up a production Redis HA cluster requires understanding several components. Refer to these linked resources:
1. Required Reading:
- [Redis Sentinel Documentation](https://redis.io/docs/management/sentinel/)
- [Redis Replication Guide](https://redis.io/topics/replication)
- [Redis Security Guide](https://redis.io/topics/security)
2. Key Steps Overview:
```bash
# 1. Install Redis on all nodes
sudo apt update
sudo apt install redis-server
# 2. Configure master node (52.1.0.9)
# /etc/redis/redis.conf
```
```conf
bind 52.1.0.9
port 6379
dir /var/lib/redis
maxmemory 3gb
maxmemory-policy noeviction
requirepass "your_redis_password"
masterauth "your_redis_password"
```
3. Configure replica nodes (`52.1.0.10`, `52.1.0.11`):
```conf
bind 52.1.0.10 # Change for each replica
port 6379
dir /var/lib/redis
replicaof 52.1.0.9 6379
masterauth "your_redis_password"
requirepass "your_redis_password"
```
4. Configure Sentinel nodes (`52.1.0.12`, `52.1.0.13`, `52.1.0.14`); a verification sketch for steps 2-4 follows this list:
```conf
port 26379
sentinel monitor mymaster 52.1.0.9 6379 2
sentinel auth-pass mymaster "your_redis_password"
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
```
5. Recommended Additional Reading:
- [Redis High Availability Tools](https://redis.io/topics/high-availability)
- [Redis Sentinel Client Implementation](https://redis.io/topics/sentinel-clients)
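To confirm the topology configured in steps 2-4, a quick hedged check (assuming `redis-cli` is available and the password set above):
```bash
# On the master (52.1.0.9): expect role:master with two connected replicas
redis-cli -a your_redis_password info replication

# On any Sentinel node: expect the current master address (52.1.0.9 6379)
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
```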
### 4. Configure HAProxy Load Balancer
Install and configure HAProxy for internal load balancing:
```conf ha-proxy-config
global
maxconn 10000
log stdout format raw local0
defaults
log global
mode tcp
retries 3
timeout client 30m
timeout connect 10s
timeout server 30m
timeout check 5s
listen stats
mode http
bind *:7000
stats enable
stats uri /
resolvers hostdns
nameserver dns 127.0.0.11:53
resolve_retries 3
timeout resolve 1s
timeout retry 1s
hold valid 5s
frontend postgres_master
bind *:5000
default_backend postgres_master_backend
frontend postgres_replicas
bind *:5001
default_backend postgres_replica_backend
backend postgres_master_backend
option httpchk GET /master
http-check expect status 200
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server postgres-1 52.1.0.6:5432 check port 8008
server postgres-2 52.1.0.7:5432 check port 8008
server postgres-3 52.1.0.8:5432 check port 8008
backend postgres_replica_backend
option httpchk GET /replica
http-check expect status 200
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server postgres-1 52.1.0.6:5432 check port 8008
server postgres-2 52.1.0.7:5432 check port 8008
server postgres-3 52.1.0.8:5432 check port 8008
frontend redis_master_frontend
bind *:6379
default_backend redis_master_backend
backend redis_master_backend
option tcp-check
tcp-check send AUTH\ your_redis_password\r\n
tcp-check expect string +OK
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:master
tcp-check send QUIT\r\n
tcp-check expect string +OK
server redis-1 52.1.0.9:6379 check inter 1s
server redis-2 52.1.0.10:6379 check inter 1s
server redis-3 52.1.0.11:6379 check inter 1s
frontend infisical_frontend
bind *:80
default_backend infisical_backend
backend infisical_backend
option httpchk GET /api/status
http-check expect status 200
server infisical-1 52.1.0.15:8080 check inter 1s
server infisical-2 52.1.0.16:8080 check inter 1s
server infisical-3 52.1.0.17:8080 check inter 1s
```
### 5. Deploy Infisical Core
<Tabs>
<Tab title="Debian/Ubuntu">
First, add the Infisical repository:
```bash
curl -1sLf \
'https://dl.cloudsmith.io/public/infisical/infisical-core/setup.deb.sh' \
| sudo -E bash
```
Then install Infisical:
```bash
sudo apt-get update && sudo apt-get install -y infisical-core
```
<Info>
For production environments, we strongly recommend installing a specific version of the package to maintain consistency across reinstalls. View available versions at [Infisical Package Versions](https://cloudsmith.io/~infisical/repos/infisical-core/packages/).
</Info>
</Tab>
<Tab title="RedHat/CentOS/Amazon Linux">
First, add the Infisical repository:
```bash
curl -1sLf \
'https://dl.cloudsmith.io/public/infisical/infisical-core/setup.rpm.sh' \
| sudo -E bash
```
Then install Infisical:
```bash
sudo yum install infisical-core
```
<Info>
For production environments, we strongly recommend installing a specific version of the package to maintain consistency across reinstalls. View available versions at [Infisical Package Versions](https://cloudsmith.io/~infisical/repos/infisical-core/packages/).
</Info>
</Tab>
</Tabs>
Next, create the configuration file `/etc/infisical/infisical.rb` with the following:
```ruby
infisical_core['ENCRYPTION_KEY'] = 'your-secure-encryption-key'
infisical_core['AUTH_SECRET'] = 'your-secure-auth-secret'
infisical_core['DB_CONNECTION_URI'] = 'postgres://user:pass@52.1.0.2:5000/infisical'
infisical_core['REDIS_URL'] = 'redis://52.1.0.2:6379'
infisical_core['PORT'] = 8080
```
To generate `ENCRYPTION_KEY` and `AUTH_SECRET`, see the [configuration documentation](/self-hosting/configuration/envars).
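For example, suitable values can be generated with OpenSSL:
```bash
openssl rand -hex 16     # value for ENCRYPTION_KEY
openssl rand -base64 32  # value for AUTH_SECRET
```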
If you are using managed services for either Postgres or Redis, replace the corresponding connection values accordingly.
Lastly, start and verify each node running infisical-core:
```bash
sudo infisical-ctl reconfigure
sudo infisical-ctl status
```
## Monitoring and Maintenance
1. Monitor HAProxy stats: `http://52.1.0.2:7000/haproxy?stats`
2. Monitor Infisical logs: `sudo infisical-ctl tail`
3. Check cluster health:
- Etcd: `etcdctl cluster-health`
- PostgreSQL: `patronictl list`
- Redis: `redis-cli info replication`

Some files were not shown because too many files have changed in this diff.