mongo to postgres guide

Maidul Islam
2024-02-27 00:35:41 -05:00
parent 4bf7e8bbd1
commit 1f9f15136e
5 changed files with 219 additions and 29 deletions

(Binary image diffs not shown: one existing image reduced from 1.2 MiB to 332 KiB; one new 64 KiB image added.)

@@ -165,7 +165,6 @@
       "pages": [
         "self-hosting/overview",
         "self-hosting/configuration/requirements",
-        "self-hosting/configuration/schema-migrations",
         {
           "group": "Installation methods",
           "pages": [
@@ -175,6 +174,13 @@
           ]
         },
         "self-hosting/configuration/envars",
+        {
+          "group": "Guides",
+          "pages": [
+            "self-hosting/configuration/schema-migrations",
+            "self-hosting/guides/mongo-to-postgres"
+          ]
+        },
         "self-hosting/faq"
       ]
     },


@@ -0,0 +1,184 @@
---
title: "Migrate Mongo to Postgres"
description: "How to migrate from MongoDB to PostgreSQL for Infisical"
---
This guide provides step-by-step instructions for migrating your Infisical instance from MongoDB to the newly released PostgreSQL version of Infisical.
The Postgres version of Infisical is the only version that will receive feature updates and patches going forward.
<Info>
If your deployment is using a Docker image tag that includes `postgres`, then you are already using the Postgres version of Infisical, and you can skip this guide.
</Info>
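To double-check, you can inspect the image tag your deployment is running. A minimal sketch using the Docker CLI (container names vary per deployment):
```
# Print each running container with its image tag; a tag containing "postgres"
# means you are already on the Postgres version of Infisical
docker ps --format '{{.Names}}\t{{.Image}}'
```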
## Prerequisites
Before starting the migration, ensure you have the following command line tools installed:
- [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html)
- [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html)
- [mongodump](https://www.mongodb.com/docs/database-tools/mongodump/)
- [mongorestore](https://www.mongodb.com/docs/database-tools/mongorestore/)
- [Docker](https://docs.docker.com/engine/install/)
## Prepare for migration
<Steps>
<Step title="Backup Production MongoDB Data">
While the migration script will not mutate any MongoDB production data, we recommend taking a backup of your MongoDB instance if possible.
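One way to take that backup is with the `mongodump` tool from the prerequisites (the archive name here is illustrative):
```
# Full gzipped backup of the production database
mongodump --uri=<your_mongo_prod_uri> --gzip --archive=infisical-mongo-backup.gz
```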
</Step>
<Step title="Set Migration Mode">
To prevent new data entries during the migration, set your Infisical instance to migration mode by setting the environment variable `MIGRATION_MODE=true` and redeploying your instance.
This mode blocks all write operations, allowing only GET requests. It also disables user logins and displays a migration page to prevent UI interactions.
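How you set this variable depends on your deployment method. As an illustrative sketch for a Docker Compose deployment (the service name is an assumption; adapt it to your setup):
```yaml
services:
  infisical:
    environment:
      # Blocks writes and user logins for the duration of the migration
      - MIGRATION_MODE=true
```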
![migration mode](/images/self-hosting/guides/mongo-postgres/mongo-migration.png)
</Step>
<Step title="Start local instances of Mongo and Postgres databases">
Start local instances of MongoDB and Postgres. These will be used in later steps to process and transform the data locally.
To start local instances of the two databases, create a file called `docker-compose.yaml` as shown below.
```yaml docker-compose.yaml
version: '3.1'

services:
  mongodb:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db

  postgres:
    image: postgres
    restart: always
    environment:
      # Credentials match the Postgres connection string used later in this guide
      POSTGRES_USER: infisical
      POSTGRES_PASSWORD: infisical
      POSTGRES_DB: infisical
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  mongodb_data:
  postgres_data:
```
Next, run the command below in the same working directory where the `docker-compose.yaml` file resides to start both services.
```
docker-compose up
```
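Before moving on, you can confirm that both containers are up:
```
# Both the mongodb and postgres services should report an "Up" status
docker-compose ps
```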
</Step>
</Steps>
## Dump MongoDB
To speed up the data transformation process, the first step involves transferring the production data from Infisical's MongoDB to a local machine.
This is achieved by creating a dump of the production database and then uploading this dumped data into a local Mongo instance.
By having a running local instance of the production database, we will significantly reduce the time it takes to run the migration script.
<Steps>
<Step title="Dump MongoDB data to your local machine using">
```
mongodump --uri=<your_mongo_prod_uri> --archive="mongodump-db" --db=<db name> --excludeCollection=auditlogs
```
</Step>
<Step title="Restore this data to the local MongoDB instance">
```
mongorestore --uri=mongodb://root:example@localhost:27017/ --archive="mongodump-db"
```
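To verify the restore, and to find the database name you will need in the next section, you can list the databases on the local instance (this assumes `mongosh` is installed; it is not part of the prerequisites above):
```
mongosh "mongodb://root:example@localhost:27017/" --eval "db.adminCommand({ listDatabases: 1 })"
```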
</Step>
</Steps>
## Start the migration
Once started, the migration script will transform MongoDB data into an equivalent PostgreSQL format.
<Steps>
<Step title="Clone Infisical Repository">
Clone the Infisical repository; the migration script lives in its `pg-migrator` folder.
```
git clone https://github.com/Infisical/infisical.git
```
</Step>
<Step title="Change directory">
Once the repository has been cloned, change directory to the script folder.
```
cd infisical/pg-migrator
```
</Step>
<Step title="Install dependencies">
```
npm install
```
</Step>
<Step title="Execute Migration Script">
```
npm run migration
```
When executing the above command, you'll be asked to provide the MongoDB connection string for the database containing your production Infisical data. Since the production Mongo data has been transferred to a local Mongo instance, input the connection string for this local instance.
```
mongodb://root:example@localhost:27017/<db-name>?authSource=admin
```
<Tip>
Remember to replace `<db-name>` with the name of the MongoDB database. If you are not sure of the name, you can use [Compass](https://www.mongodb.com/products/tools/compass) to view the available databases.
</Tip>
Next, you will be asked to enter the Postgres connection string for the database where the transformed data should be stored.
Input the connection string of the local Postgres instance that was set up earlier in the guide.
```
postgres://infisical:infisical@localhost/infisical?sslmode=disable
```
</Step>
<Step title="Store migration metadata">
Once the script has completed, you will notice a new folder named `db` inside the `pg-migrator` folder.
This folder contains metadata for schema mapping and can be helpful when debugging migration-related issues.
We highly recommend making a copy of this folder in case you need assistance from the Infisical team during your migration process.
<Info>
The `db` folder does not contain any sensitive data.
</Info>
</Step>
</Steps>
## Finalizing Migration
At this stage, the data from the Mongo instance of Infisical should have been successfully converted into its Postgres equivalent.
The remaining step involves transferring the local Postgres database, which now contains all the migrated data, to your chosen production Postgres environment.
Rather than transferring the data row-by-row from your local machine to the production Postgres database, we will first create a dump file from the local Postgres instance and then upload this file to your production Postgres instance.
<Steps>
<Step title="Dump from local PostgreSQL">
```
pg_dump -h localhost -U infisical -Fc -b -v -f dumpfilelocation.sql -d infisical
```
</Step>
<Step title="Upload to production PostgreSQL">
```
pg_restore --clean -v -h <host> -U <db-user-name> -d <database-name> -j 2 dumpfilelocation.sql
```
<Tip>
Remember to replace `<host>`, `<db-user-name>`, and `<database-name>` with the corresponding details of your production Postgres database.
</Tip>
</Step>
<Step title="Verify Data Upload">
Use a tool like Beekeeper Studio to confirm that the data has been successfully transferred to your production Postgres DB.
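If you prefer the command line, a quick spot check with `psql` works too. The `users` table below is an assumption about the migrated schema; substitute any table you expect to contain rows:
```
psql -h <host> -U <db-user-name> -d <database-name> -c "SELECT count(*) FROM users;"
```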
</Step>
</Steps>
## Post-Migration Steps
After successfully migrating the data to PostgreSQL, you can proceed to deploy Infisical using your preferred deployment method.
Refer to [Infisical's self-hosting documentation](https://infisical.com/docs/self-hosting/overview) for deployment options.
Remember to use your production PostgreSQL connection string for the new deployment and transfer all [environment variables](/self-hosting/configuration/envars) from the MongoDB version of Infisical to the new version (they are all compatible).
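For example, the new deployment reads its database connection from a Postgres connection string rather than a Mongo URI. A hypothetical value (confirm the variable name `DB_CONNECTION_URI` against the environment variables reference linked above):
```
DB_CONNECTION_URI=postgres://<db-user-name>:<password>@<production-host>:5432/<database-name>
```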


@@ -66,7 +66,7 @@ enum SecretEncryptionAlgo {
   AES_256_GCM = "aes-256-gcm",
 }
 
-const ENV_SLUG_LENGTH = 15;
+const ENV_SLUG_LENGTH = 500;
 
 enum SecretKeyEncoding {
   UTF8 = "utf8",
@@ -210,9 +210,9 @@ export const migrateCollection = async <
     return (await tx
       .batchInsert<Tables[K]["base"]>(postgresTableName, pgDoc as any)
       .returning(returnKeys as any)) as Pick<
-        Tables[K]["base"],
-        R[number]
-      >[];
+      Tables[K]["base"],
+      R[number]
+    >[];
   });
   await postPgProcessing?.(mongooseDoc, newUserIds);
 }
@@ -230,9 +230,9 @@ export const migrateCollection = async <
     return (await tx
       .batchInsert(postgresTableName, pgDoc as any)
       .returning(returnKeys as any)) as Pick<
-        Tables[K]["base"],
-        R[number]
-      >[];
+      Tables[K]["base"],
+      R[number]
+    >[];
   });
   await postPgProcessing?.(mongooseDoc, newUserIds);
 }
@@ -258,9 +258,9 @@ const main = async () => {
   try {
     dotenv.config();
-    process.env.MONGO_DB_URL = "mongodb://root:example@localhost:27017/test?authSource=admin"
+    // process.env.MONGO_DB_URL = "mongodb://root:example@localhost:27017/test?authSource=admin"
-    process.env.POSTGRES_DB_URL = "postgres://infisical:infisical@localhost/infisical?sslmode=disable"
+    // process.env.POSTGRES_DB_URL = "postgres://infisical:infisical@localhost/infisical?sslmode=disable"
     process.env.START_FRESH = "true";
 
     const prompt = promptSync({ sigint: true });
@@ -313,7 +313,7 @@ const main = async () => {
       preProcessing: async (doc) => {
         if (["64058e0ea5c55c6a8203fed7", "64155f5d75c91bf4e176eb85", "6434ff80b82e04f17008aa13"].includes(doc._id.toString())) {
           console.log("Skipping duplicate user")
-          return
+          return
         }
 
         const id = uuidV4();
@@ -843,9 +843,9 @@ const main = async () => {
         await folderKv.put(folder.id, id);
         const parentId = folder?.parentId
           ? await folderKv.get(folder?.parentId).catch((e) => {
-              console.log("parent folder not found==>", folder);
-              throw e;
-            })
+            console.log("parent folder not found==>", folder);
+            throw e;
+          })
           : null;
 
         pgFolder.push({
@@ -1548,8 +1548,8 @@ const main = async () => {
       returnKeys: ["id"],
       preProcessing: async (doc) => {
         // dangling identity
-        if (!await identityKv.get(doc.identity.toString()).catch(() => null)){
-          return
+        if (!await identityKv.get(doc.identity.toString()).catch(() => null)) {
+          return
         }
 
         const id = uuidV4();
@@ -1584,8 +1584,8 @@ const main = async () => {
       returnKeys: ["id"],
       preProcessing: async (doc) => {
         // dangling identity
-        if (!await identityKv.get(doc.identity.toString()).catch(() => null)){
-          return
+        if (!await identityKv.get(doc.identity.toString()).catch(() => null)) {
+          return
         }
 
         const identityUAId = await identityUaKv.get(
@@ -1617,15 +1617,15 @@ const main = async () => {
       returnKeys: ["id"],
       preProcessing: async (doc) => {
         // dangling identity
-        if (!await identityKv.get(doc.identity.toString()).catch(() => null)){
-          return
+        if (!await identityKv.get(doc.identity.toString()).catch(() => null)) {
+          return
         }
         await identityAccessTokenKv.put(doc._id.toString(), doc._id.toString());
         const identityUAClientSecretId = doc?.identityUniversalAuthClientSecret
           ? await identityUaClientSecKv.get(
-              doc.identityUniversalAuthClientSecret.toString(),
-            )
+            doc.identityUniversalAuthClientSecret.toString(),
+          )
           : null;
         const identityId = await identityKv.get(doc.identity.toString());
         return {
@@ -1652,8 +1652,8 @@ const main = async () => {
       returnKeys: ["id"],
       preProcessing: async (doc) => {
         // dangling identity
-        if (!await identityKv.get(doc.identity.toString()).catch(() => null)){
-          return
+        if (!await identityKv.get(doc.identity.toString()).catch(() => null)) {
+          return
         }
 
         const id = uuidV4();
@@ -1687,8 +1687,8 @@ const main = async () => {
       returnKeys: ["id"],
       preProcessing: async (doc) => {
         // dangling identity
-        if (!await identityKv.get(doc.identity.toString()).catch(() => null)){
-          return
+        if (!await identityKv.get(doc.identity.toString()).catch(() => null)) {
+          return
         }
 
         const id = uuidV4();
@@ -2317,8 +2317,8 @@ const main = async () => {
         const statusChangeBy = doc.statusChangeBy
           ? await projectMembKv
-              .get(doc.statusChangeBy.toString())
-              .catch(() => null)
+            .get(doc.statusChangeBy.toString())
+            .catch(() => null)
           : null;
 
         return {
           id,
@@ -2454,7 +2454,7 @@ const main = async () => {
           secretCommentCiphertext:
             commit.newVersion.secretCommentCiphertext ||
             secret.secretCommentCiphertext,
-          secretVersion,
+          secretVersion,
           createdAt: new Date((doc as any).createdAt),
           updatedAt: new Date((doc as any).updatedAt),
         };