
Mongo | Mongo pool upgrade #9195


Open: wants to merge 2 commits into base: master

Conversation


@naveenpaul1 naveenpaul1 commented Aug 21, 2025

Describe the Problem

In the upgrade scenario the system still uses the legacy internal (mongo) pool as its backing store. Create a new backingstore pool during upgrade and repoint each bucket's tier to the new backingstore.

Explain the Changes

  1. Upgrade script
  • Create a new backingstore pool in the upgrade scenario
  • Repoint each bucket's tier to the new backingstore
  • Delete the existing mongo_pool internal pool
  2. Add system id to the backingstore pool name -> backingstore_{id}
  3. Add system_id to the system_full_info API

Issues: Fixed #xxx / Gap #xxx

  1. 4.20 upgrade issues

Testing Instructions:

  1. Deploy 4.19 and run the remove_mongo_pool.js upgrade script
  2. Or run the "remove mongo_pool to version 5.20.0" test in test_upgrade_scripts.js against 4.19 code
  • Doc added/updated
  • Tests added

Summary by CodeRabbit

  • New Features

    • System info now includes a required system_id field.
  • Refactor

    • Default pool names are now system-scoped (suffixed with system ID) for clearer identification.
  • Chores

    • Added an upgrade flow to create a system-scoped default pool, migrate buckets to it, and remove the legacy internal pool.
    • Updated integration tests to use the new default pool naming.
  • Behavior Change

    • Account creation validation now treats the system-scoped default pool as internal for S3-related checks.


coderabbitai bot commented Aug 21, 2025

Walkthrough

Adds system_id to public system schema/response. Introduces system-scoped default pool naming (DEFAULT_POOL_NAME-{system_id}) and selection logic. Updates bucket default-pool migration and upgrade script to create/migrate to the new default pool before removing the internal pool. Minor null-safety in node server and account-server checks; tests updated.
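The selection logic itself boils down to a one-line composite-key lookup (quoted verbatim from pool_server.js further down in this review):

    function get_optimal_default_pool(system) {
        return system.pools_by_name[`${config.DEFAULT_POOL_NAME}-${system._id}`];
    }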

Changes

  • API schema: system_full_info (src/api/system_api.js)
    Added required string field system_id to definitions.system_full_info and its properties.
  • System service: expose system_id and default pool naming (src/server/system_services/system_server.js)
    read_system now returns system_id as a string; default HOSTS pool name creation now uses ${config.DEFAULT_POOL_NAME}-${system._id}.
  • Pool selection logic and exports (src/server/system_services/pool_server.js)
    Introduced system-scoped default pool detection (composite name), added get_optimal_default_pool(system), renamed/updated the internal non-default pool logic to skip the system default, and adjusted exports (removed the old get_optimal_non_mongo_pool_id/get_optimal_non_default_pool_id).
  • Bucket server: default-pool resolver change (src/server/system_services/bucket_server.js)
    Switched the resolver to pool_server.get_optimal_default_pool(system) when updating buckets' default pool.
  • Upgrade script: remove_mongo_pool (src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js)
    Reworked the flow to create DEFAULT_POOL_NAME-{system_id}, update buckets' tier mirrors to the new pool, then remove the internal Mongo pool; added helper functions, validation, logging, and guards to skip no-op cases.
  • Account creation validation (src/server/system_services/account_server.js)
    validate_create_account_params now treats the internal/default pool name as ${config.DEFAULT_POOL_NAME}-${req.system._id} when checking for non-internal resources.
  • Node server: null-safety (src/server/node_services/node_server.js)
    Uses optional chaining (req.system.pools_by_name?.[pool_name]) in _prepare_nodes_query to avoid errors when pools_by_name is missing.
  • Tests: upgrade script expectations (src/test/integration_tests/internal/test_upgrade_scripts.js)
    Tests updated to assert the existence/name of DEFAULT_POOL_NAME-{system_id} after running the upgrade script.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Admin as Upgrade Runner
  participant Script as remove_mongo_pool.js
  participant System as System DB
  participant PoolSrv as pool_server
  participant BucketSrv as bucket_server
  participant Buckets as Buckets Collection

  Admin->>Script: run()
  Script->>System: Read system, pools, buckets
  Script->>Script: Compute defaultName = DEFAULT_POOL_NAME-{system_id}
  Script->>System: Create/insert default pool (if missing)
  Script->>PoolSrv: get_optimal_default_pool(system)
  PoolSrv-->>Script: default_pool

  alt Internal pool exists and differs
    Script->>BucketSrv: update_buckets_default_pool(internal_pool, default_pool)
    BucketSrv->>Buckets: Bulk update tier mirrors -> default_pool
    Buckets-->>BucketSrv: Updated
    Script->>System: Remove internal pool
  else No internal pool or already same
    Script->>Script: Skip updates/removal
  end

  Script-->>Admin: Completed
  note over Script,System: Logs show create/update/remove steps
sequenceDiagram
  autonumber
  participant Client as API Client
  participant SysSrv as system_server
  participant PoolSrv as pool_server
  participant BucketSrv as bucket_server

  Client->>SysSrv: read_system(req)
  SysSrv-->>Client: {..., system_id}

  Client->>BucketSrv: update_all_buckets_default_pool()
  BucketSrv->>PoolSrv: get_optimal_default_pool(system)
  PoolSrv-->>BucketSrv: default_pool
  BucketSrv->>BucketSrv: Validate and skip if same/missing
  BucketSrv->>BucketSrv: Update buckets using internal pool (if needed)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested labels

size/XL, Enable-Auto-Rebase

Suggested reviewers

  • liranmauda
  • romayalon
  • jackyalbo
  • tangledbytes


Signed-off-by: Naveen Paul <[email protected]>
@naveenpaul1 naveenpaul1 marked this pull request as ready for review August 21, 2025 10:50

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/server/system_services/pool_server.js (2)

1312-1325: Constrain search to the current system and skip INTERNAL pools to avoid cross-system/default regressions

Iterating over system_store.data.pools without filtering by system can select a pool from a different system. Also skip INTERNAL (legacy mongo) pools; otherwise we might “fix” default resources to another internal pool. This can corrupt account defaults in multi-system stores.

Apply this diff:

-async function get_optimal_non_default_pool_id() {
-    for (const pool of system_store.data.pools) {
+async function get_optimal_non_default_pool_id(system) {
+    for (const pool of system_store.data.pools) {
+        if (String(pool.system?._id || pool.system) !== String(system._id)) continue;
         // skip backingstore_pool.
-        if (_is_optimal_default_pool(pool)) {
+        if (_is_optimal_default_pool(pool) || pool.resource_type === 'INTERNAL' || pool.mongo_info) {
             continue;
         }
         const aggr_nodes = await nodes_client.instance().aggregate_nodes_by_pool([pool.name], pool.system._id);
         const aggr_hosts = await nodes_client.instance().aggregate_hosts_by_pool([pool.name], pool.system._id);
         const { mode = '' } = get_pool_info(pool, aggr_nodes, aggr_hosts);
         if (mode === 'OPTIMAL') {
             return pool._id;
         }
     }
}

1329-1355: Limit default resource migration to legacy internal pools and update pool selector to scope by system

The current implementation in update_account_default_resource unconditionally overwrites every non-operator account’s default_resource with the first “optimal” pool found across all systems. To avoid unintended cross-system migrations and only target accounts still on the legacy INTERNAL/Mongo pool, please apply the following mandatory refactors:

• Modify get_optimal_non_default_pool_id to accept a system parameter and only scan that system’s pools:

--- a/src/server/system_services/pool_server.js
+++ b/src/server/system_services/pool_server.js
@@
-async function get_optimal_non_default_pool_id() {
-    for (const pool of system_store.data.pools) {
+async function get_optimal_non_default_pool_id(system) {
+    // Only consider pools belonging to this system
+    const pools = system_store.data.pools.filter(p =>
+        String(p.system._id) === String(system._id)
+    );
+    for (const pool of pools) {
         // skip default pool
         if (_is_optimal_default_pool(pool)) {
             continue;

• Update the call site in update_account_default_resource to pass in the system, and narrow the account filter to only those still using a legacy INTERNAL/Mongo pool on that system:

--- a/src/server/system_services/pool_server.js
+++ b/src/server/system_services/pool_server.js
@@ async function update_account_default_resource() {
-                const optimal_pool_id = await get_optimal_non_default_pool_id();
+                const optimal_pool_id = await get_optimal_non_default_pool_id(system);
@@
-                const updates = system_store.data.accounts
-                    .filter(account =>
-                        account.email.unwrap() !== config.OPERATOR_ACCOUNT_EMAIL &&
-                        account.default_resource
-                    )
+                const updates = system_store.data.accounts
+                    .filter(account =>
+                        // skip operator and empty defaults
+                        account.email.unwrap() !== config.OPERATOR_ACCOUNT_EMAIL &&
+                        account.default_resource &&
+                        // only accounts on this system’s legacy INTERNAL/Mongo pool
+                        String(account.default_resource.system?._id || account.default_resource.system) === String(system._id) &&
+                        (account.default_resource.resource_type === 'INTERNAL' || account.default_resource.mongo_info)
+                    )
                     .map(account => ({
                         _id: account._id,
                         $set: {
                             default_resource: optimal_pool_id
                         }
                     }));

• (Optional) Gate this migration behind a one-time upgrade flag so it runs only once per cluster lifecycle.

Files/lines to update:

  • src/server/system_services/pool_server.js:
    • Function definition around lines 1308–1312
    • update_account_default_resource call site around lines 1329–1336
src/server/system_services/bucket_server.js (1)

1322-1331: Critical Refactor Required: Correct Bucket Migration Direction in update_all_buckets_default_pool

The current implementation in src/server/system_services/bucket_server.js incorrectly treats the new default/backingstore pool as the “internal” (legacy) pool, causing buckets to be moved in the opposite direction of the intended upgrade:

• get_optimal_default_pool() returns the new default backingstore, not the legacy INTERNAL/mongo pool
• As a result, buckets using the new default are migrated to pool_name, instead of migrating buckets off the legacy INTERNAL pool into the new default

After running a repository-wide search for other uses of is_using_internal_storage() and INTERNAL pool detection, no additional sites exhibit this inversion—this change is isolated to update_all_buckets_default_pool in bucket_server.js.

Please update the detection logic to identify the legacy INTERNAL/mongo pool via structural markers (resource_type or mongo_info) and then migrate those buckets into the new default:

--- a/src/server/system_services/bucket_server.js
+++ b/src/server/system_services/bucket_server.js
@@ -1322,7 +1322,13 @@ async function update_all_buckets_default_pool(req) {
     const pool = req.system.pools_by_name[pool_name];
     if (!pool) throw new RpcError('INVALID_POOL_NAME');
-    const internal_pool = pool_server.get_optimal_default_pool(pool.system);
+    // Find the legacy INTERNAL (mongo) pool, falling back to name-prefix if needed
+    const internal_pool = pool_server.get_legacy_internal_pool
+        ? pool_server.get_legacy_internal_pool(pool.system)
+        : _.find(
+            Object.values(pool.system.pools_by_name || {}),
+            p => p.resource_type === 'INTERNAL' || p.mongo_info
+          );
     if (!internal_pool || !internal_pool._id) return;
     if (String(pool._id) === String(internal_pool._id)) return;
     const buckets_with_internal_pool = _.filter(

Follow-up:

• Export get_legacy_internal_pool(system) from pool_server.js (using structural checks and a DEFAULT_POOL_NAME prefix fallback)
• Once available, switch this handler to call pool_server.get_legacy_internal_pool(pool.system) directly

This refactor is mandatory to ensure buckets are migrated off the legacy INTERNAL pool into the new default backingstore.

🧹 Nitpick comments (7)
src/server/system_services/pool_server.js (1)

1431-1431: Consider exporting a helper for legacy internal pool detection

Since multiple places need to find the legacy INTERNAL/mongo pool during upgrade/migration, add and export:

function get_legacy_internal_pool(system) {
  return _.find(Object.values(system.pools_by_name || {}), p => p?.resource_type === 'INTERNAL' || p?.mongo_info);
}
exports.get_legacy_internal_pool = get_legacy_internal_pool;

This promotes a single source of truth and aligns with the guidance to use structural markers.

src/test/integration_tests/internal/test_upgrade_scripts.js (2)

137-165: Strengthen assertions and tidy logging

  • If the new pool wasn’t created, exact_added.name will throw before assertion. Assert existence first for a clearer failure.
  • Minor typo in afte_names variable.

Apply this diff:

-        const afte_names = system_store.data.pools.map(e => e.name);
-        dbg.info("End : List all the pools in system: ", afte_names);
+        const after_names = system_store.data.pools.map(e => e.name);
+        dbg.info("End : List all the pools in system: ", after_names);
@@
-        const exact_added = system_store.data.pools.find(pool => pool.name === pool_name);
-        assert.strictEqual(exact_added.name, pool_name);
+        const exact_added = system_store.data.pools.find(pool => pool.name === pool_name);
+        assert.ok(exact_added, `Expected pool ${pool_name} to be created`);
+        assert.strictEqual(exact_added.name, pool_name);

134-141: Test still references potentially removed config INTERNAL name; fallback is fine, but prefer structural detection

Using config.INTERNAL_STORAGE_POOL_NAME is okay here with a fallback, but long-term stability is better if tests seed an INTERNAL resource by resource_type: 'INTERNAL' without relying on legacy names.

Would you like me to propose a version of the test that seeds purely by resource_type: 'INTERNAL' and asserts via structural checks?
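For reference, a minimal sketch of such a test, assuming only the system_store calls already used in this review; the seed shape and the run_upgrade_script() runner are hypothetical:

    // Hypothetical seed: mark the pool by structure, not by legacy name
    const legacy_pool = {
        _id: system_store.new_system_store_id(),
        name: 'any-legacy-name',   // name deliberately irrelevant to detection
        system: system._id,
        resource_type: 'INTERNAL', // structural marker
        mongo_info: {},            // legacy mongo marker
    };
    await system_store.make_changes({ insert: { pools: [legacy_pool] } });
    await run_upgrade_script();    // placeholder for the test's existing runner
    assert.ok(
        !system_store.data.pools.some(p => p.resource_type === 'INTERNAL' || p.mongo_info),
        'expected no legacy internal pools to remain');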

src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js (4)

11-16: Typos and clarity in logs

Minor but user-facing when running upgrades:

  • “Monogo” → “Mongo”
  • “Create new default pool Created...” → “Created new default pool.”

Apply:

-        dbg.log0(`Start: Monogo pool upgrade script...`);
+        dbg.log0(`Start: Mongo pool upgrade script...`);
-        dbg.log0(`End: Create new default pool Created...`);
+        dbg.log0(`End: Created new default pool.`);
-        dbg.log0(`End: Monogo pool upgrade script...`);
+        dbg.log0(`End: Mongo pool upgrade script.`);

13-16: Prefer deriving the target default pool via API rather than by name to future-proof

If naming changes again, deriving by name will break. Consider using a helper like pool_server.get_optimal_default_pool(system) (as referenced elsewhere in this PR) to resolve the pool, and only fall back to name.

Example:

-        const pool_name = `${config.DEFAULT_POOL_NAME}-${system_store.data.systems[0]._id}`;
-        await create_new_default_pool(dbg, pool_name, system_store);
+        const system = system_store.data.systems[0];
+        const pool = pool_server.get_optimal_default_pool
+            ? pool_server.get_optimal_default_pool(system)
+            : null;
+        const pool_name = pool ? pool.name : `${config.DEFAULT_POOL_NAME}-${system._id}`;
+        if (!pool) await create_new_default_pool(dbg, pool_name, system_store);

21-24: Guard for absent internal pools and no-op updates

update_buckets_default_pool is called even when no internal/mongo pool exists. The function currently early-returns on falsy mongo_pool, which is fine. Consider logging that there’s nothing to migrate to make the script’s output clearer.

Apply:

-        await update_buckets_default_pool(dbg, pool_name, mongo_pools[0], system_store);
+        await update_buckets_default_pool(dbg, pool_name, mongo_pools[0], system_store);
+        if (!mongo_pools[0]) dbg.log0('No internal/mongo pool detected; skipping bucket migration');

69-83: Set backingstore flag on hosts pool (if applicable)

Some detection paths rely on hosts_pool_info.backingstore === true. Explicitly set it to avoid name-based fallbacks.

Apply:

-    fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };
+    fs_pool.hosts_pool_info = { is_managed: false, host_count: 0, backingstore: true };
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 7de3270 and 3c2fa20.

📒 Files selected for processing (7)
  • src/api/system_api.js (2 hunks)
  • src/server/node_services/node_server.js (1 hunks)
  • src/server/system_services/bucket_server.js (1 hunks)
  • src/server/system_services/pool_server.js (3 hunks)
  • src/server/system_services/system_server.js (2 hunks)
  • src/test/integration_tests/internal/test_upgrade_scripts.js (2 hunks)
  • src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
src/test/**/*.*

⚙️ CodeRabbit configuration file

src/test/**/*.*: Ensure that the PR includes tests for the changes.

Files:

  • src/test/integration_tests/internal/test_upgrade_scripts.js
🧠 Learnings (7)
📓 Common learnings
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:6-22
Timestamp: 2025-08-11T06:12:12.318Z
Learning: In the noobaa-core upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js, bucket migration from the internal mongo pool to a new default pool is planned to be handled in separate future PRs with comprehensive testing, rather than being included directly in the pool removal script.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.
📚 Learning: 2025-08-08T13:10:57.608Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1308-1311
Timestamp: 2025-08-08T13:10:57.608Z
Learning: In noobaa-core, config.INTERNAL_STORAGE_POOL_NAME has been removed. Use config.DEFAULT_POOL_NAME for internal/backingstore pool identification. Prefer exact match first, with prefix fallback to handle potential system-id suffixes in pool names.

Applied to files:

  • src/test/integration_tests/internal/test_upgrade_scripts.js
  • src/server/system_services/system_server.js
  • src/server/system_services/pool_server.js
📚 Learning: 2025-08-08T13:08:38.361Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/bucket_server.js:1324-1327
Timestamp: 2025-08-08T13:08:38.361Z
Learning: In src/server/system_services/bucket_server.js, the update_all_buckets_default_pool(req) handler expects req.rpc_params.pool_name to be a plain string (not a SensitiveString wrapper), so calling .unwrap() is not needed there.

Applied to files:

  • src/server/system_services/bucket_server.js
  • src/server/node_services/node_server.js
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (3)
src/api/system_api.js (1)

478-502: Schema update validated: system_id is already returned by read_system

  • The read_system implementation in src/server/system_services/system_server.js returns
    system_id: system._id.toString() as part of its response object, satisfying the new schema requirement.
  • The OpenAPI definition in src/api/system_api.js already lists system_id under both required and properties for system_full_info.
  • No references to system_full_info were found in this repo’s UI/client code—ensure any external clients or stubs are updated if they consume this shape.
  • (Optional) Add an integration test for GET /api/system/read_system to assert that system_id is always present in the response; a sketch follows below.
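A minimal sketch of that optional test, assuming an rpc_client wired up the way the existing integration tests do it:

    // Hedged sketch: rpc_client setup is assumed, not shown
    const info = await rpc_client.system.read_system();
    assert.strictEqual(typeof info.system_id, 'string');
    assert.ok(info.system_id.length > 0, 'system_id should always be present');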
src/server/system_services/system_server.js (2)

191-196: Default pool name suffixed with system id — LGTM

Using ${config.DEFAULT_POOL_NAME}-${system._id} makes the default pool unique per system and aligns with the new naming convention referenced in tests and upgrade scripts. No concerns on correctness.


579-582: Confirmed API schema and tests include system_id – ready to merge

  • src/api/system_api.js now lists 'system_id' in the required array for system_full_info and defines it as a string type.
  • Integration tests across the codebase already reference and exercise system_id (e.g. in system store and agent blocks verifier tests), and none fail with the new field.
  • No lingering references to system_full_info outside the API schema were found, so UI/CLI code will ignore the extra property by design.

All downstream consumers should handle the additional system_id field without issue. Merging as-is.

Comment on lines 90 to 94
         query.pools = new Set(_.map(query.pools, pool_name => {
-            const pool = req.system.pools_by_name[pool_name];
+            const pool = req.system.pools_by_name?.[pool_name];
             return String(pool._id);
         }));
     }

⚠️ Potential issue

Optional chaining introduced, but still dereferencing possibly-undefined pool on the next line

req.system.pools_by_name?.[pool_name] can return undefined; String(pool._id) will then throw. Either skip unknown pools or fail fast with a clear error.

Apply this diff to make the mapping null-safe and avoid runtime exceptions:

-        query.pools = new Set(_.map(query.pools, pool_name => {
-            const pool = req.system.pools_by_name?.[pool_name];
-            return String(pool._id);
-        }));
+        const pool_ids = _.compact(_.map(query.pools, pool_name => {
+            const p = req.system.pools_by_name?.[pool_name];
+            if (!p) dbg.log1('list_nodes: skipping unknown pool in filter', pool_name);
+            return p?._id && String(p._id);
+        }));
+        query.pools = new Set(pool_ids);
🤖 Prompt for AI Agents
In src/server/node_services/node_server.js around lines 90 to 94, the code uses
optional chaining to look up pools but then dereferences pool._id without
checking pool; update the mapping to handle undefined by first filtering out
missing pools (or throw a clear error if you prefer fail-fast). Specifically,
transform query.pools by mapping pool names to pool objects, filter out
undefined entries (or throw an Error with the missing pool_name), then map to
String(pool._id) and wrap in a Set; optionally log a warning for skipped unknown
pools.

Comment on lines +1304 to 1306
function _is_optimal_default_pool(pool) {
    return Boolean(pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`);
}

🛠️ Refactor suggestion

Default/backingstore detection should use structural markers, not strict name equality

Rely on hosts_pool_info.backingstore / cloud_pool_info.backingstore with name-prefix fallback, per upgrade guidance, instead of strict equality against ${config.DEFAULT_POOL_NAME}-${system_id}. This avoids brittleness if the name changes or has a suffix.

Apply this diff:

-function _is_optimal_default_pool(pool) {
-    return Boolean(pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`);
+function _is_optimal_default_pool(pool) {
+    // Prefer structural marker (backingstore), fallback to name prefix to handle suffixes.
+    if (pool.hosts_pool_info?.backingstore || pool.cloud_pool_info?.backingstore) return true;
+    const expected_prefix = `${config.DEFAULT_POOL_NAME}-`;
+    return typeof pool.name === 'string' && pool.name.startsWith(expected_prefix);
}
🤖 Prompt for AI Agents
In src/server/system_services/pool_server.js around lines 1304-1306, the
default/backingstore detection currently uses strict equality on the full pool
name; change it to first check structural markers
(pool.hosts_pool_info?.backingstore or pool.cloud_pool_info?.backingstore) and
return true if either marker is truthy, otherwise fall back to a name-prefix
check (pool.name.startsWith(`${config.DEFAULT_POOL_NAME}-${pool.system._id}`))
to allow suffixes; ensure null-safe access and return a boolean.

Comment on lines +1308 to 1310
function get_optimal_default_pool(system) {
    return system.pools_by_name[`${config.DEFAULT_POOL_NAME}-${system._id}`];
}

🛠️ Refactor suggestion

Make get_optimal_default_pool() resilient; support structural fallback

Current code assumes the composite key exists and pools_by_name is always defined. Use optional chaining and structural fallback to improve robustness across upgrades.

Apply this diff:

-function get_optimal_default_pool(system) {
-    return system.pools_by_name[`${config.DEFAULT_POOL_NAME}-${system._id}`];
+function get_optimal_default_pool(system) {
+    const by_name = system.pools_by_name?.[`${config.DEFAULT_POOL_NAME}-${system._id}`];
+    if (by_name) return by_name;
+    // Fallback: find by structural marker or name prefix
+    return _.find(Object.values(system.pools_by_name || {}), p =>
+        p?.hosts_pool_info?.backingstore || p?.cloud_pool_info?.backingstore ||
+        (typeof p?.name === 'string' && p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`)));
}
🤖 Prompt for AI Agents
In src/server/system_services/pool_server.js around lines 1308 to 1310,
get_optimal_default_pool currently assumes system.pools_by_name exists and the
composite key is present; update it to be resilient by using optional chaining
and structural fallbacks: attempt to return
system.pools_by_name?.[`${config.DEFAULT_POOL_NAME}-${system._id}`], then try
system.pools_by_name?.[config.DEFAULT_POOL_NAME], then search
Object.values(system.pools_by_name || {}) for a pool whose name equals or
startsWith config.DEFAULT_POOL_NAME + '-' and return the first match, and
finally fall back to searching system.pools (if present) for the same name
patterns; return null/undefined if no match found.

Comment on lines 9 to +16
 async function run({ dbg, system_store }) {
     try {
-        dbg.log0(`Starting monogo pool delete...`);
+        dbg.log0(`Start: Monogo pool upgrade script...`);
+        dbg.log0(`Start: Create new default pool...`);
+        const pool_name = `${config.DEFAULT_POOL_NAME}-${system_store.data.systems[0]._id}`;
+        await create_new_default_pool(dbg, pool_name, system_store);
+        dbg.log0(`End: Create new default pool Created...`);


@coderabbitai coderabbitai bot Aug 21, 2025


🛠️ Refactor suggestion

Scope: iterate all systems for multi-tenant stores

Script currently operates only on systems[0]. Per prior guidance, handle multi-system stores and remove all matching internal/mongo pools within one run.

High-level approach:

  • Loop over system_store.data.systems.
  • For each, ensure/create the new default pool (idempotent).
  • Detect and migrate buckets for that system.
  • Remove all internal/mongo pools for that system.

I can provide a consolidated patch if you confirm multi-system support is required for your deployments.

naveenpaul1 (Contributor, Author) replied:

multi-system is not supported

coderabbitai bot replied:

@naveenpaul1 Thanks for the clarification! That resolves my concern about the script scope. Since multi-system deployments are not supported, the current approach of operating on system_store.data.systems[0] is correct and appropriate.

The upgrade script's single-system logic is properly implemented for your use case.


✏️ Learnings added
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9195
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-16
Timestamp: 2025-08-21T11:07:05.147Z
Learning: In noobaa-core, multi-system deployments are not supported. Upgrade scripts can safely operate on system_store.data.systems[0] without needing to handle multiple systems.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.

Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:6-22
Timestamp: 2025-08-11T06:12:12.318Z
Learning: In the noobaa-core upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js, bucket migration from the internal mongo pool to a new default pool is planned to be handled in separate future PRs with comprehensive testing, rather than being included directly in the pool removal script.

Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.

Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.

Comment on lines +11 to +31
+        dbg.log0(`Start: Monogo pool upgrade script...`);
+        dbg.log0(`Start: Create new default pool...`);
+        const pool_name = `${config.DEFAULT_POOL_NAME}-${system_store.data.systems[0]._id}`;
+        await create_new_default_pool(dbg, pool_name, system_store);
+        dbg.log0(`End: Create new default pool Created...`);
+
         const internal_mongo_pool = `${config.INTERNAL_STORAGE_POOL_NAME}-${system_store.data.systems[0]._id}`;
         dbg.log0(`Internal mongo pool id is : ${internal_mongo_pool}`);
-        const pool_ids = system_store.data.pools.filter(pool => pool.name === internal_mongo_pool);
-        if (pool_ids.length > 0) {
-            dbg.log0(`Removing default mongo pool: ${pool_ids[0]._id}`);
-            await system_store.make_changes({ remove: { pools: [pool_ids[0]._id] }});
+        const mongo_pools = system_store.data.pools.filter(pool => pool.name === internal_mongo_pool);
+
+        dbg.log0(`Start: Update bucket default bucket pool with new default pool...`);
+        await update_buckets_default_pool(dbg, pool_name, mongo_pools[0], system_store);
+        dbg.log0(`End: Updated bucket default bucket pool with new default pool...`);
+
+        if (mongo_pools.length > 0) {
+            dbg.log0(`Removing default mongo pool: ${mongo_pools[0]._id}`);
+            await system_store.make_changes({ remove: { pools: [mongo_pools[0]._id] }});
+        } else {
+            dbg.log0('Removing mongo pool: Could not find the mongo pool...');
+        }
+        dbg.log0(`End: Monogo pool upgrade script...`);

💡 Verification agent

🧩 Analysis chain

End-to-end safety checks and idempotency

To make the upgrade script safe to re-run:

  • Ensure pool creation is idempotent (skip if exists).
  • Skip bucket updates if the tier already points to the new pool.
  • Remove internal pools only after successful bucket migration.
  • Wrap changes per phase and log counts for visibility.

Run these checks locally:

Also applies to: 38-67, 69-83, 101-112


🏁 Script executed:

#!/bin/bash
# 1) Confirm there are no remaining references to INTERNAL_STORAGE_POOL_NAME
rg -nP 'INTERNAL_STORAGE_POOL_NAME' src | sed -n '1,200p'

# 2) Confirm we’re not writing pools with non-canonical `owner_id`
rg -nP 'owner_id\s*:' src | sed -n '1,120p'

# 3) Check that tier updates use $set and we never call new_system_store_id() from a system object
rg -nP -n -C2 '\btiers\b.*\bmirrors\b|\bnew_system_store_id\(' src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js

# 4) Sanity: potential multi-system references to systems\[0]
rg -nP -n '\bsystems\[0\]' src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js

Length of output: 2475


Enhance idempotency, structural detection & multi-system support in remove_mongo_pool.js

Please refactor the upgrade script at src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js (particularly lines 11–31, 38–67, 69–83, 101–112) to address the following mandatory fixes:

  • Avoid relying solely on config.INTERNAL_STORAGE_POOL_NAME
    • Detect legacy “internal” Mongo pools by checking each pool’s mongo_info field and resource_type === 'INTERNAL' instead of name prefix lookup
    • Iterate over all system_store.data.systems, not just systems[0], to capture multi-system deployments

  • Make default‐pool creation idempotent
    • Before calling create_new_default_pool(dbg, poolName, system_store), verify system_store.data.pools_by_name[poolName] doesn’t already exist

  • Harden bucket‐migration step for re-runs
    • In update_buckets_default_pool, skip any bucket whose tier already references the new pool (i.e. spread_pools includes the new pool’s _id and the default tier matches)
    • Ensure you use system_store.new_system_store_id() (not system_store.data.systems[i].new_system_store_id()) when generating new store IDs

  • Delay pool removal until all bucket migrations succeed
    • After migrating buckets across every system, confirm none remain on the internal pool before calling make_changes({ remove: { pools: […] } })
    • Batch removals per system, commit each phase separately, and log the count of pools removed

  • Structure script into clear, logged phases:

    1. Detection & Creation – identify existing default pools, create missing ones
    2. Bucket Migration – update only buckets needing migration, log number updated
    3. Pool Removal – remove only fully‐migrated internal pools, log number removed

These changes will ensure the script can be safely re-run, handle multi-system stores, and recover gracefully from partial failures.
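A minimal sketch of that phased, idempotent flow, reusing only the helpers and system_store calls already present in this script; a sketch under the stated assumptions, not a committed patch:

    async function run({ dbg, system_store }) {
        const system = system_store.data.systems[0];
        const pool_name = `${config.DEFAULT_POOL_NAME}-${system._id}`;

        // Phase 1: Detection & Creation -- skip if the default pool already exists
        if (!system.pools_by_name[pool_name]) {
            await create_new_default_pool(dbg, pool_name, system_store);
            dbg.log0(`Created default pool ${pool_name}`);
        }

        // Phase 2: Bucket Migration -- structural detection of legacy pools
        const mongo_pools = system_store.data.pools.filter(
            p => p.resource_type === 'INTERNAL' || p.mongo_info);
        if (mongo_pools.length) {
            await update_buckets_default_pool(dbg, pool_name, mongo_pools[0], system_store);
        }

        // Phase 3: Pool Removal -- only after migration completed without throwing
        if (mongo_pools.length) {
            const ids = mongo_pools.map(p => p._id);
            dbg.log0(`Removing ${ids.length} internal/mongo pool(s)`);
            await system_store.make_changes({ remove: { pools: ids } });
        }
    }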

🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 11–31,
38–67, 69–83 and 101–112: the script currently assumes a single system and finds
internal Mongo pools by name and unconditionally creates/removes pools, which
breaks idempotency and multi-system deployments; refactor into three logged
phases (Detection & Creation, Bucket Migration, Pool Removal), iterate over all
system_store.data.systems, detect legacy internal pools by checking
pool.mongo_info and resource_type === 'INTERNAL' (not name prefix), before
calling create_new_default_pool verify system_store.data.pools_by_name[poolName]
does not exist, in update_buckets_default_pool skip buckets whose tiers already
reference the new pool (spread_pools includes new pool _id and default tier
matches) and use system_store.new_system_store_id() to generate new IDs, after
processing all systems ensure no buckets remain on an internal pool before
removing it, batch removals per system and call make_changes separately per
batch, and add logging counts for created default pools, buckets migrated, and
pools removed to make the script re-runnable and safe for multi-system stores.

Comment on lines +25 to +31
        if (mongo_pools.length > 0) {
            dbg.log0(`Removing default mongo pool: ${mongo_pools[0]._id}`);
            await system_store.make_changes({ remove: { pools: [mongo_pools[0]._id] }});
        } else {
            dbg.log0('Removing mongo pool: Could not find the mongo pool...');
        }
        dbg.log0(`End: Monogo pool upgrade script...`);

🛠️ Refactor suggestion

Remove all detected internal/mongo pools, not just the first

If more than one matching pool exists (legacy residues), only the first is removed. Remove all to leave no internal/mongo pools behind.

Apply:

-        if (mongo_pools.length > 0) {
-            dbg.log0(`Removing default mongo pool: ${mongo_pools[0]._id}`);
-            await system_store.make_changes({ remove: { pools: [mongo_pools[0]._id] }});
+        if (mongo_pools.length > 0) {
+            const ids = mongo_pools.map(p => p._id);
+            dbg.log0(`Removing ${ids.length} internal/mongo pool(s): ${ids.join(', ')}`);
+            await system_store.make_changes({ remove: { pools: ids }});
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 25 to
31, the script only removes the first detected mongo pool (mongo_pools[0]) when
multiple matching internal/mongo pools may exist; change it to remove all
detected pools by collecting all mongo_pools ids and passing that array to
system_store.make_changes({ remove: { pools: [...] }}), and update the dbg.log0
calls to report how many/which pools are being removed and handle the zero-pools
case accordingly.

Comment on lines +38 to +67
async function update_buckets_default_pool(dbg, pool_name, mongo_pool, system_store) {
    const pool = system_store.data.systems[0].pools_by_name[pool_name];
    if (!pool) {
        dbg.error('INVALID_POOL_NAME:');
        throw new Error('INVALID_POOL_NAME');
    }
    if (!mongo_pool || !mongo_pool._id) return;
    if (String(pool._id) === String(mongo_pool._id)) return;
    const buckets_with_internal_pool = _.filter(system_store.data.systems[0].buckets_by_name, bucket =>
        is_using_internal_storage(bucket, mongo_pool));
    if (!buckets_with_internal_pool.length) return;

    // The loop pushes one update per bucket
    const updates = _.uniqBy([], '_id');
    for (const bucket of buckets_with_internal_pool) {
        updates.push({
            _id: bucket.tiering.tiers[0].tier._id,
            mirrors: [{
                _id: system_store.data.systems[0].new_system_store_id(),
                spread_pools: [pool._id]
            }]
        });
    }
    dbg.log0(`Updating ${buckets_with_internal_pool.length} buckets to use ${pool_name} as default resource`);
    await system_store.make_changes({
        update: {
            tiers: updates
        }
    });
}

🛠️ Refactor suggestion

⚠️ Potential issue

Tier update construction has two functional bugs (bad id generation and missing $set) and an overly strict matcher

Issues:

  • system_store.data.systems[0].new_system_store_id() is invalid; ID creation is on system_store, not the system object.
  • Updates do not use $set; depending on make_changes semantics, this may overwrite entire tier docs or be ignored.
  • is_using_internal_storage assumes id is under _id. In some stores spread_pools entries can be ObjectIds (no _id), causing false negatives.

Patch:

-    const updates = _.uniqBy([], '_id');
+    const updates = [];
     for (const bucket of buckets_with_internal_pool) {
-        updates.push({
-            _id: bucket.tiering.tiers[0].tier._id,
-            mirrors: [{
-                _id: system_store.data.systems[0].new_system_store_id(),
-                spread_pools: [pool._id]
-            }]
-        });
+        updates.push({
+            _id: bucket.tiering.tiers[0].tier._id,
+            $set: {
+                mirrors: [{
+                    _id: system_store.new_system_store_id(),
+                    spread_pools: [pool._id]
+                }]
+            }
+        });
     }
-    dbg.log0(`Updating ${buckets_with_internal_pool.length} buckets to use ${pool_name} as default resource`);
+    dbg.log0(`Updating ${buckets_with_internal_pool.length} bucket(s) to use ${pool_name} as default resource`);
     await system_store.make_changes({
         update: {
-            tiers: updates
+            tiers: updates
         }
     });

Additionally, consider relaxing is_using_internal_storage as patched below (see dedicated comment).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
async function update_buckets_default_pool(dbg, pool_name, mongo_pool, system_store) {
const pool = system_store.data.systems[0].pools_by_name[pool_name];
if (!pool) {
dbg.error('INVALID_POOL_NAME:');
throw new Error('INVALID_POOL_NAME');
}
if (!mongo_pool || !mongo_pool._id) return;
if (String(pool._id) === String(mongo_pool._id)) return;
const buckets_with_internal_pool = _.filter(system_store.data.systems[0].buckets_by_name, bucket =>
is_using_internal_storage(bucket, mongo_pool));
if (!buckets_with_internal_pool.length) return;
// The loop pushes one update per bucket
const updates = _.uniqBy([], '_id');
for (const bucket of buckets_with_internal_pool) {
updates.push({
_id: bucket.tiering.tiers[0].tier._id,
mirrors: [{
_id: system_store.data.systems[0].new_system_store_id(),
spread_pools: [pool._id]
}]
});
}
dbg.log0(`Updating ${buckets_with_internal_pool.length} buckets to use ${pool_name} as default resource`);
await system_store.make_changes({
update: {
tiers: updates
}
});
}
async function update_buckets_default_pool(dbg, pool_name, mongo_pool, system_store) {
const pool = system_store.data.systems[0].pools_by_name[pool_name];
if (!pool) {
dbg.error('INVALID_POOL_NAME:');
throw new Error('INVALID_POOL_NAME');
}
if (!mongo_pool || !mongo_pool._id) return;
if (String(pool._id) === String(mongo_pool._id)) return;
const buckets_with_internal_pool = _.filter(
system_store.data.systems[0].buckets_by_name,
bucket => is_using_internal_storage(bucket, mongo_pool)
);
if (!buckets_with_internal_pool.length) return;
// The loop pushes one update per bucket
const updates = [];
for (const bucket of buckets_with_internal_pool) {
updates.push({
_id: bucket.tiering.tiers[0].tier._id,
$set: {
mirrors: [{
_id: system_store.new_system_store_id(),
spread_pools: [pool._id]
}]
}
});
}
dbg.log0(`Updating ${buckets_with_internal_pool.length} bucket(s) to use ${pool_name} as default resource`);
await system_store.make_changes({
update: {
tiers: updates
}
});
}
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 38 to
67, fix three issues: replace the incorrect call
system_store.data.systems[0].new_system_store_id() with
system_store.new_system_store_id(); build each tier update as a $set operation
(e.g. { _id: <tierId>, $set: { mirrors: [...] } }) so make_changes performs a
partial update instead of replacing the whole document; and relax the matcher
used to detect internal storage by normalizing spread_pools entries to their id
values (handle entries that are either plain ObjectId/strings or objects with an
_id property) and compare via String(...) to mongo_pool._id or pool._id to avoid
false negatives. Ensure updates are unique by tier _id before calling
system_store.make_changes.
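
For the dedupe step the prompt calls for, a minimal sketch (assuming lodash is already imported as _ in this script, which its existing _.filter/_.uniqBy usage suggests; `updates` is the array built in the loop above):

// Several buckets may share the same tier; collapse duplicates so each
// tier document receives exactly one update in make_changes.
const unique_updates = _.uniqBy(updates, u => String(u._id));
await system_store.make_changes({ update: { tiers: unique_updates } });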

Comment on lines +69 to +83
async function create_new_default_pool(dbg, pool_name, system_store) {
// TODO: UPDATE EMAIL
const account = system_store.get_account_by_email(new SensitiveString('[email protected]'));
if (!account) {
dbg.error('NO_SUCH_ACCOUNT', 'No such account email: [email protected]');
}
const fs_pool = new_pool_defaults(system_store, pool_name, system_store.data.systems[0]._id, 'HOSTS', 'BLOCK_STORE_FS', account._id);
fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };
const default_pool = fs_pool;
await system_store.make_changes({
insert: {
pools: [default_pool]
}
});
}

🛠️ Refactor suggestion

Owner resolution and pool construction are fragile; avoid hard-coded admin email and reuse pool_server.new_pool_defaults

Problems:

  • Hard-coding [email protected] may fail; environments often use config.OPERATOR_ACCOUNT_EMAIL or the system owner.
  • Local new_pool_defaults() risks schema drift (uses owner_id vs canonical owner and may miss defaults). Reuse pool_server.new_pool_defaults(...) for a schema-accurate pool.

Apply:

+const pool_server = require('../../../server/system_services/pool_server');
@@
-    // TODO: UPDATE EMAIL
-    const account = system_store.get_account_by_email(new SensitiveString('[email protected]'));
-    if (!account) {
-        dbg.error('NO_SUCH_ACCOUNT', 'No such account email: [email protected]');
-    }
-    const fs_pool = new_pool_defaults(system_store, pool_name, system_store.data.systems[0]._id, 'HOSTS', 'BLOCK_STORE_FS', account._id);
+    const system = system_store.data.systems[0];
+    // Prefer the system owner; fall back to operator account if needed.
+    const owner_ref = system.owner && (system.owner._id || system.owner.id || system.owner);
+    const owner = system_store.data.get_by_id ? (system_store.data.get_by_id(owner_ref) || {}) : {};
+    const owner_id = owner._id || owner_ref;
+    if (!owner_id) {
+        dbg.error('MISSING_OWNER', 'Could not resolve system owner for pool creation');
+        throw new Error('MISSING_OWNER');
+    }
+    // Idempotency: skip if pool already exists
+    if (system.pools_by_name && system.pools_by_name[pool_name]) {
+        dbg.log0(`Default pool ${pool_name} already exists; skipping creation`);
+        return;
+    }
+    const fs_pool = pool_server.new_pool_defaults(pool_name, system._id, 'HOSTS', 'BLOCK_STORE_FS', owner_id);
     fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +85 to +99
function new_pool_defaults(system_store, name, system_id, resource_type, pool_node_type, owner_id) {
const now = Date.now();
return {
_id: system_store.new_system_store_id(),
system: system_id,
name: name,
owner_id,
resource_type: resource_type,
pool_node_type: pool_node_type,
storage_stats: {
blocks_size: 0,
last_update: now - (2 * config.MD_GRACE_IN_MILLISECONDS)
},
};
}

🛠️ Refactor suggestion

Local new_pool_defaults() duplicates core logic and uses non-canonical field owner_id

This helper risks schema drift and breaks permissions/auditing. Given the refactor above to use pool_server.new_pool_defaults, drop this local function to avoid divergence.

Apply:

-function new_pool_defaults(system_store, name, system_id, resource_type, pool_node_type, owner_id) {
-    const now = Date.now();
-    return {
-        _id: system_store.new_system_store_id(),
-        system: system_id,
-        name: name,
-        owner_id,
-        resource_type: resource_type,
-        pool_node_type: pool_node_type,
-        storage_stats: {
-            blocks_size: 0,
-            last_update: now - (2 * config.MD_GRACE_IN_MILLISECONDS)
-        },
-    };
-}
+// Removed: use pool_server.new_pool_defaults(...) for a canonical pool shape.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
function new_pool_defaults(system_store, name, system_id, resource_type, pool_node_type, owner_id) {
const now = Date.now();
return {
_id: system_store.new_system_store_id(),
system: system_id,
name: name,
owner_id,
resource_type: resource_type,
pool_node_type: pool_node_type,
storage_stats: {
blocks_size: 0,
last_update: now - (2 * config.MD_GRACE_IN_MILLISECONDS)
},
};
}
// Removed: use pool_server.new_pool_defaults(...) for a canonical pool shape.
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 85 to
99, the local helper function new_pool_defaults duplicates core logic and uses a
non-canonical field owner_id; remove this local function and replace its usages
to call the centralized pool_server.new_pool_defaults instead (pass the same
args: system_store, name, system_id, resource_type, pool_node_type, owner_id),
and ensure any created objects match the canonical schema (rename owner_id to
owner if required by pool_server) so there is no schema drift or
permission/audit divergence.

Comment on lines +101 to +112
function is_using_internal_storage(bucket, internal_pool) {
if (!internal_pool || !internal_pool._id) return false;
const tiers = bucket.tiering && bucket.tiering.tiers;
if (!tiers || tiers.length !== 1) return false;
const mirrors = tiers[0].tier.mirrors;
if (mirrors.length !== 1) return false;
const spread_pools = mirrors[0].spread_pools;
if (spread_pools.length !== 1) return false;

return String(spread_pools[0]._id) === String(internal_pool._id);
}


🛠️ Refactor suggestion

Handle both ObjectId and populated refs in is_using_internal_storage

spread_pools[0] might be either an ObjectId or a populated object. The current check misses the former.

Apply:

-    const spread_pools = mirrors[0].spread_pools;
+    const spread_pools = mirrors[0].spread_pools;
     if (spread_pools.length !== 1) return false;
-
-    return String(spread_pools[0]._id) === String(internal_pool._id);
+    const target = spread_pools[0];
+    const pool_id = (target && target._id) || target;
+    return String(pool_id) === String(internal_pool._id);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
function is_using_internal_storage(bucket, internal_pool) {
if (!internal_pool || !internal_pool._id) return false;
const tiers = bucket.tiering && bucket.tiering.tiers;
if (!tiers || tiers.length !== 1) return false;
const mirrors = tiers[0].tier.mirrors;
if (mirrors.length !== 1) return false;
const spread_pools = mirrors[0].spread_pools;
if (spread_pools.length !== 1) return false;
return String(spread_pools[0]._id) === String(internal_pool._id);
}
function is_using_internal_storage(bucket, internal_pool) {
if (!internal_pool || !internal_pool._id) return false;
const tiers = bucket.tiering && bucket.tiering.tiers;
if (!tiers || tiers.length !== 1) return false;
const mirrors = tiers[0].tier.mirrors;
if (mirrors.length !== 1) return false;
const spread_pools = mirrors[0].spread_pools;
if (spread_pools.length !== 1) return false;
const target = spread_pools[0];
const pool_id = (target && target._id) || target;
return String(pool_id) === String(internal_pool._id);
}
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 101 to
112, the function is_using_internal_storage assumes spread_pools[0] is always a
populated object and therefore accesses _id; update the comparison to handle
both populated refs and raw ObjectId refs by normalizing the value before
comparing (e.g., use the spread_pools[0]._id if present, otherwise use
spread_pools[0] directly, then coerce both sides to String or use toString()
safely), and ensure you defensively check existence (optional chaining) to avoid
runtime errors.
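
If the same normalization ends up being needed in both update_buckets_default_pool and is_using_internal_storage, a small shared helper could keep the two call sites consistent. A sketch (pool_ref_to_id is a hypothetical name, not from the PR):

// Accepts either a populated pool object ({ _id, ... }) or a raw
// ObjectId/string reference, and returns the pool id as a string.
function pool_ref_to_id(entry) {
    return String((entry && entry._id) || entry);
}

// Usage inside is_using_internal_storage:
// return pool_ref_to_id(spread_pools[0]) === String(internal_pool._id);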


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
src/server/system_services/account_server.js (5)

12-17: Import pool_server to reuse centralized default-pool logic

Subsequent fixes rely on selecting the optimal default pool. Importing pool_server here keeps behavior consistent with bucket_server and the PR’s new helpers.

 const config = require('../../../config');
 const dbg = require('../../util/debug_module')(__filename);
 const { RpcError } = require('../../rpc');
 const Dispatcher = require('../notifications/dispatcher');
 const SensitiveString = require('../../util/sensitive_string');
+const pool_server = require('../system_services/pool_server');

88-97: Hard-coded 'backingstores' fallback breaks with system-scoped pool names; use get_optimal_default_pool()

When default_resource isn’t provided, the code assumes req.system.pools_by_name.backingstores exists. With the new naming (DEFAULT_POOL_NAME-{system_id} or DEFAULT_POOL_NAME_{system_id}), this key won’t exist, causing a BAD_REQUEST where older behavior worked.

Use the centralized selector.

-            // Default pool resource is backingstores
-            const resource = req.rpc_params.default_resource ?
-            req.system.pools_by_name[req.rpc_params.default_resource] ||
-                (req.system.namespace_resources_by_name && req.system.namespace_resources_by_name[req.rpc_params.default_resource]) :
-                req.system.pools_by_name.backingstores;
+            // Default pool resource: resolve optimally from system-scoped default pool
+            const resource = req.rpc_params.default_resource
+                ? (req.system.pools_by_name[req.rpc_params.default_resource] ||
+                   (req.system.namespace_resources_by_name &&
+                    req.system.namespace_resources_by_name[req.rpc_params.default_resource]))
+                : pool_server.get_optimal_default_pool(req.system);

1398-1402: Bug: comparing an object to 0 always fails; check pool count instead

req.system.pools_by_name === 0 is always false. If the system truly has no pools, we won’t block creation as intended.

-            if (req.system.pools_by_name === 0) {
+            if (!req.system.pools_by_name || Object.keys(req.system.pools_by_name).length === 0) {
                 throw new RpcError('No resources in the system - Can\'t create accounts');
             }

376-384: Potential TypeError: uses req.system after assigning fallback to local ‘system’

When req.system is undefined, you assign system = system_store.data.systems_by_name[...] (Line 366) but still access req.system.pools_by_name here. This will throw.

Also, the logic doesn’t distinguish default vs non-default pools; it blocks if there are any pools at all. Align it with the “non-internal” check used in create-account validation.

-        if (!req.rpc_params.default_resource) {
-            const pools = _.filter(req.system.pools_by_name);
-            if (pools.length) { // has resources which is not internal - must supply resource
-                throw new RpcError('BAD_REQUEST', 'Enabling S3 requires providing default_resource');
-            }
-        }
+        if (!req.rpc_params.default_resource) {
+            const pools = Object.values(system.pools_by_name || {});
+            const has_non_internal = pools.some(p => {
+                const is_default =
+                    (p.hosts_pool_info && p.hosts_pool_info.backingstore) ||
+                    (p.cloud_pool_info && p.cloud_pool_info.backingstore) ||
+                    (typeof p.name === 'string' && (
+                        p.name === config.DEFAULT_POOL_NAME ||
+                        p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`) ||
+                        p.name.startsWith(`${config.DEFAULT_POOL_NAME}_`)
+                    ));
+                return !is_default;
+            });
+            if (has_non_internal) {
+                // has resources which are not the internal/default one - must supply resource
+                throw new RpcError('BAD_REQUEST', 'Enabling S3 requires providing default_resource');
+            }
+        }

390-393: Use optimal default pool instead of arbitrary first pool

Choosing the first entry in pools_by_name is nondeterministic and may select a non-default pool. Use get_optimal_default_pool(system) and fall back only if not found.

-        } else {
-            update.default_resource = (Object.values(req.system.pools_by_name)[0])._id;
-        }
+        } else {
+            const def_pool = pool_server.get_optimal_default_pool(system);
+            update.default_resource = (def_pool && def_pool._id) ||
+                ((Object.values(system.pools_by_name)[0] || {})._id);
+        }
🧹 Nitpick comments (1)
src/server/system_services/account_server.js (1)

88-93: Message: update comment to reflect new behavior

The in-code comment still says “Default pool resource is backingstores”. Update it to reflect system-scoped default pool selection.

-            // Default pool resource is backingstores
+            // Default pool resource is the system-scoped default pool (e.g., `${config.DEFAULT_POOL_NAME}-{system_id}`)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 3c2fa20 and bf87e67.

📒 Files selected for processing (1)
  • src/server/system_services/account_server.js (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:6-22
Timestamp: 2025-08-11T06:12:12.318Z
Learning: In the noobaa-core upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js, bucket migration from the internal mongo pool to a new default pool is planned to be handled in separate future PRs with comprehensive testing, rather than being included directly in the pool removal script.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.
📚 Learning: 2025-08-08T13:10:36.141Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.

Applied to files:

  • src/server/system_services/account_server.js
📚 Learning: 2025-08-08T13:13:42.192Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.

Applied to files:

  • src/server/system_services/account_server.js
📚 Learning: 2025-08-08T13:12:46.728Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.

Applied to files:

  • src/server/system_services/account_server.js
📚 Learning: 2025-08-08T13:10:57.608Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1308-1311
Timestamp: 2025-08-08T13:10:57.608Z
Learning: In noobaa-core, config.INTERNAL_STORAGE_POOL_NAME has been removed. Use config.DEFAULT_POOL_NAME for internal/backingstore pool identification. Prefer exact match first, with prefix fallback to handle potential system-id suffixes in pool names.

Applied to files:

  • src/server/system_services/account_server.js
📚 Learning: 2025-08-08T13:08:38.361Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/bucket_server.js:1324-1327
Timestamp: 2025-08-08T13:08:38.361Z
Learning: In src/server/system_services/bucket_server.js, the update_all_buckets_default_pool(req) handler expects req.rpc_params.pool_name to be a plain string (not a SensitiveString wrapper), so calling .unwrap() is not needed there.

Applied to files:

  • src/server/system_services/account_server.js
🧬 Code graph analysis (1)
src/server/system_services/account_server.js (1)
config.js (1)
  • config (7-7)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-package-lock-validation
🔇 Additional comments (2)
src/server/system_services/account_server.js (2)

1386-1389: Nit: ensure stable stringification of system id

If req.system._id is not a plain string, implicit stringification usually works (ObjectId has toString), but being explicit avoids surprises.

-        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
+        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${String(req.system._id)}`) :

Note: This nit is moot if you apply the earlier structural detection refactor. In that case, skip this change.


1-1528: Ensure PR description matches the canonical pool‐naming format

All existing code and tests use the hyphenated, plural form based on config.DEFAULT_POOL_NAME (which is set to "backingstores"). That yields pool names like:

  • backingstores-<system_id>

Confirmations:
• config.js (line 246): config.DEFAULT_POOL_NAME = 'backingstores'
• remove_mongo_pool.js (5.20.0): const pool_name = `${config.DEFAULT_POOL_NAME}-${system_id}`;
• system_server.js (line 192): const pool_name = `${config.DEFAULT_POOL_NAME}-${system._id}`;
• pool_server.js: detection via pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`
• integration tests reference `${config.DEFAULT_POOL_NAME}-${system._id}`; no underscore-based constructions found

No code paths create or match names like backingstore_<id>. Please update the PR description to reference the hyphenated plural format (backingstores-<system_id>), ensuring alignment between docs and implementation.
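
A minimal sketch of the convention those confirmations describe (the require path is illustrative, and `system`/`pool` stand in for whatever store objects are in scope at each site):

const config = require('../../config'); // illustrative path
// Construction (system_server.js, remove_mongo_pool.js):
const pool_name = `${config.DEFAULT_POOL_NAME}-${system._id}`; // e.g. 'backingstores-68a1f...'
// Detection (pool_server.js):
const is_system_default = pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`;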

Comment on lines 1386 to 1389
     const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
-        Object.values(req.system.pools_by_name).some(p => p.name !== 'backingstores') :
+        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
         false;


🛠️ Refactor suggestion

⚠️ Potential issue

Default-pool detection via exact name is brittle and will misclassify pools

Comparing against a single formatted name `${config.DEFAULT_POOL_NAME}-${req.system._id}` breaks on:

  • Legacy systems where the pool is still named exactly backingstores (no suffix).
  • Systems using underscore vs hyphen (PR text says backingstore_{id}).
  • Any future suffixing pattern changes.

Result: has_non_internal_resources may become true even when only the internal/default pool exists, forcing clients to provide default_resource unnecessarily.

Replace with structural detection and prefix fallback (per past learnings), so only non-default pools are counted as “non-internal”.

-    const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
-        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
-        false;
+    const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
+        Object.values(req.system.pools_by_name).some(p => {
+            const is_default =
+                // Prefer structural markers first
+                (p.hosts_pool_info && p.hosts_pool_info.backingstore) ||
+                (p.cloud_pool_info && p.cloud_pool_info.backingstore) ||
+                // Fallback: name-based prefix match to handle optional suffixes
+                (typeof p.name === 'string' && (
+                    p.name === config.DEFAULT_POOL_NAME ||
+                    p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`) ||
+                    p.name.startsWith(`${config.DEFAULT_POOL_NAME}_`)
+                ));
+            return !is_default;
+        }) :
+        false;

Would you prefer centralizing this by calling a helper (e.g., pool_server.get_optimal_default_pool) instead? I can refactor to that to keep logic in one place.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
false;
const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
Object.values(req.system.pools_by_name).some(p => {
const is_default =
// Prefer structural markers first
(p.hosts_pool_info && p.hosts_pool_info.backingstore) ||
(p.cloud_pool_info && p.cloud_pool_info.backingstore) ||
// Fallback: name-based prefix match to handle optional suffixes
(typeof p.name === 'string' && (
p.name === config.DEFAULT_POOL_NAME ||
p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`) ||
p.name.startsWith(`${config.DEFAULT_POOL_NAME}_`)
));
return !is_default;
}) :
false;
🤖 Prompt for AI Agents
In src/server/system_services/account_server.js around lines 1386 to 1389, the
current exact-name check against `${config.DEFAULT_POOL_NAME}-${req.system._id}`
is brittle and misclassifies default/internal pools; replace it with structural
detection that first checks any explicit internal/default marker on the pool
object (e.g., pool.internal, pool.isDefault, or same owner/system id), then as a
fallback accept names that start with the DEFAULT_POOL_NAME followed by optional
separators and the system id or nothing (e.g., /^backingstore(?:[-_]?<id>)?$/
semantics) so only truly non-default pools are counted as non-internal;
alternatively call the centralized helper (e.g.,
pool_server.get_optimal_default_pool) if available to keep logic in one place.
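
As a sketch of that centralized route (get_optimal_default_pool is added by this PR, but its exact contract at this call site is assumed here):

const pool_server = require('../system_services/pool_server');

// Count the system as having non-internal resources only when some pool is
// not the one the helper resolves as the system-scoped default.
const default_pool = pool_server.get_optimal_default_pool(req.system);
const has_non_internal_resources = Object.values(req.system.pools_by_name || {})
    .some(p => !default_pool || String(p._id) !== String(default_pool._id));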
