4 changes: 4 additions & 0 deletions src/api/system_api.js
@@ -479,6 +479,7 @@ module.exports = {
        type: 'object',
        required: [
            'name',
+           'system_id',
            'roles',
            'tiers',
            'pools',
@@ -495,6 +496,9 @@ module.exports = {
            name: {
                type: 'string',
            },
+           system_id: {
+               type: 'string',
+           },
            roles: {
                type: 'array',
                items: {
2 changes: 1 addition & 1 deletion src/server/node_services/node_server.js
@@ -88,7 +88,7 @@ function _prepare_nodes_query(req) {
    }
    if (query.pools) {
        query.pools = new Set(_.map(query.pools, pool_name => {
-           const pool = req.system.pools_by_name[pool_name];
+           const pool = req.system.pools_by_name?.[pool_name];
            return String(pool._id);
        }));
    }
Comment on lines 90 to 94
⚠️ Potential issue

Optional chaining introduced, but still dereferencing possibly-undefined pool on the next line

req.system.pools_by_name?.[pool_name] can return undefined; String(pool._id) will then throw. Either skip unknown pools or fail fast with a clear error.

Apply this diff to make the mapping null-safe and avoid runtime exceptions:

-        query.pools = new Set(_.map(query.pools, pool_name => {
-            const pool = req.system.pools_by_name?.[pool_name];
-            return String(pool._id);
-        }));
+        const pool_ids = _.compact(_.map(query.pools, pool_name => {
+            const p = req.system.pools_by_name?.[pool_name];
+            if (!p) dbg.log1('list_nodes: skipping unknown pool in filter', pool_name);
+            return p?._id && String(p._id);
+        }));
+        query.pools = new Set(pool_ids);
🤖 Prompt for AI Agents
In src/server/node_services/node_server.js around lines 90 to 94, the code uses
optional chaining to look up pools but then dereferences pool._id without
checking pool; update the mapping to handle undefined by first filtering out
missing pools (or throw a clear error if you prefer fail-fast). Specifically,
transform query.pools by mapping pool names to pool objects, filter out
undefined entries (or throw an Error with the missing pool_name), then map to
String(pool._id) and wrap in a Set; optionally log a warning for skipped unknown
pools.

2 changes: 1 addition & 1 deletion src/server/system_services/account_server.js
@@ -1384,7 +1384,7 @@ function validate_create_account_permissions(req) {
function validate_create_account_params(req) {
    // find non-internal pools
    const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
-       Object.values(req.system.pools_by_name).some(p => p.name !== 'backingstores') :
+       Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
        false;

Comment on lines 1386 to 1389
🛠️ Refactor suggestion

⚠️ Potential issue

Default-pool detection via exact name is brittle and will misclassify pools

Comparing against a single formatted name ${config.DEFAULT_POOL_NAME}-${req.system._id} breaks on:

  • Legacy systems where the pool is still named exactly backingstores (no suffix).
  • Systems using underscore vs hyphen (PR text says backingstore_{id}).
  • Any future suffixing pattern changes.

Result: has_non_internal_resources may become true even when only the internal/default pool exists, forcing clients to provide default_resource unnecessarily.

Replace with structural detection and prefix fallback (per past learnings), so only non-default pools are counted as “non-internal”.

-    const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
-        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
-        false;
+    const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
+        Object.values(req.system.pools_by_name).some(p => {
+            const is_default =
+                // Prefer structural markers first
+                (p.hosts_pool_info && p.hosts_pool_info.backingstore) ||
+                (p.cloud_pool_info && p.cloud_pool_info.backingstore) ||
+                // Fallback: name-based prefix match to handle optional suffixes
+                (typeof p.name === 'string' && (
+                    p.name === config.DEFAULT_POOL_NAME ||
+                    p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`) ||
+                    p.name.startsWith(`${config.DEFAULT_POOL_NAME}_`)
+                ));
+            return !is_default;
+        }) :
+        false;

Would you prefer centralizing this by calling a helper (e.g., pool_server.get_optimal_default_pool) instead? I can refactor to that to keep logic in one place.
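For illustration, a minimal sketch of that centralized variant, assuming pool_server.get_optimal_default_pool is exported (as this PR does) and comparing pools by _id rather than by name; has_non_internal_resources here is a hypothetical extraction of the inline check:

// Hypothetical sketch, not part of the PR: delegate default-pool detection to
// pool_server so the naming/structural logic lives in one place.
// account_server.js and pool_server.js share a directory, hence './pool_server'.
const pool_server = require('./pool_server');

function has_non_internal_resources(system) {
    if (!system || !system.pools_by_name) return false;
    const default_pool = pool_server.get_optimal_default_pool(system);
    // Any pool other than the resolved default counts as non-internal.
    return Object.values(system.pools_by_name)
        .some(p => !default_pool || String(p._id) !== String(default_pool._id));
}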

🤖 Prompt for AI Agents
In src/server/system_services/account_server.js around lines 1386 to 1389, the
current exact-name check against `${config.DEFAULT_POOL_NAME}-${req.system._id}`
is brittle and misclassifies default/internal pools; replace it with structural
detection that first checks any explicit internal/default marker on the pool
object (e.g., pool.internal, pool.isDefault, or same owner/system id), then as a
fallback accept names that start with the DEFAULT_POOL_NAME followed by optional
separators and the system id or nothing (e.g., /^backingstore(?:[-_]?<id>)?$/
semantics) so only truly non-default pools are counted as non-internal;
alternatively call the centralized helper (e.g.,
pool_server.get_optimal_default_pool) if available to keep logic in one place.

if (req.rpc_params.name.unwrap() !== req.rpc_params.name.unwrap().trim()) {
2 changes: 1 addition & 1 deletion src/server/system_services/bucket_server.js
@@ -1323,7 +1323,7 @@ async function update_all_buckets_default_pool(req) {
    const pool_name = req.rpc_params.pool_name;
    const pool = req.system.pools_by_name[pool_name];
    if (!pool) throw new RpcError('INVALID_POOL_NAME');
-   const internal_pool = pool_server.get_optimal_non_default_pool_id(pool.system);
+   const internal_pool = pool_server.get_optimal_default_pool(pool.system);
    if (!internal_pool || !internal_pool._id) return;
    if (String(pool._id) === String(internal_pool._id)) return;
    const buckets_with_internal_pool = _.filter(req.system.buckets_by_name, bucket =>
17 changes: 8 additions & 9 deletions src/server/system_services/pool_server.js
@@ -1301,18 +1301,18 @@ function _is_cloud_pool(pool) {
    return Boolean(pool.cloud_pool_info);
}

-function _is_optimal_non_default_pool_id(pool) {
-    return Boolean(pool.name === config.DEFAULT_POOL_NAME);
+function _is_optimal_default_pool(pool) {
+    return Boolean(pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`);
}
Comment on lines +1304 to 1306
🛠️ Refactor suggestion

Default/backingstore detection should use structural markers, not strict name equality

Rely on hosts_pool_info.backingstore / cloud_pool_info.backingstore with name-prefix fallback, per upgrade guidance, instead of strict equality against ${config.DEFAULT_POOL_NAME}-${system_id}. This avoids brittleness if the name changes or has a suffix.

Apply this diff:

-function _is_optimal_default_pool(pool) {
-    return Boolean(pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`);
+function _is_optimal_default_pool(pool) {
+    // Prefer structural marker (backingstore), fallback to name prefix to handle suffixes.
+    if (pool.hosts_pool_info?.backingstore || pool.cloud_pool_info?.backingstore) return true;
+    const expected_prefix = `${config.DEFAULT_POOL_NAME}-`;
+    return typeof pool.name === 'string' && pool.name.startsWith(expected_prefix);
}
🤖 Prompt for AI Agents
In src/server/system_services/pool_server.js around lines 1304-1306, the
default/backingstore detection currently uses strict equality on the full pool
name; change it to first check structural markers
(pool.hosts_pool_info?.backingstore or pool.cloud_pool_info?.backingstore) and
return true if either marker is truthy, otherwise fall back to a name-prefix
check (pool.name.startsWith(`${config.DEFAULT_POOL_NAME}-${pool.system._id}`))
to allow suffixes; ensure null-safe access and return a boolean.


-function get_optimal_non_default_pool_id(system) {
-    return system.pools_by_name[config.DEFAULT_POOL_NAME];
+function get_optimal_default_pool(system) {
+    return system.pools_by_name[`${config.DEFAULT_POOL_NAME}-${system._id}`];
}
Comment on lines +1308 to 1310
🛠️ Refactor suggestion

Make get_optimal_default_pool() resilient; support structural fallback

Current code assumes the composite key exists and pools_by_name is always defined. Use optional chaining and structural fallback to improve robustness across upgrades.

Apply this diff:

-function get_optimal_default_pool(system) {
-    return system.pools_by_name[`${config.DEFAULT_POOL_NAME}-${system._id}`];
+function get_optimal_default_pool(system) {
+    const by_name = system.pools_by_name?.[`${config.DEFAULT_POOL_NAME}-${system._id}`];
+    if (by_name) return by_name;
+    // Fallback: find by structural marker or name prefix
+    return _.find(Object.values(system.pools_by_name || {}), p =>
+        p?.hosts_pool_info?.backingstore || p?.cloud_pool_info?.backingstore ||
+        (typeof p?.name === 'string' && p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`)));
}
🤖 Prompt for AI Agents
In src/server/system_services/pool_server.js around lines 1308 to 1310,
get_optimal_default_pool currently assumes system.pools_by_name exists and the
composite key is present; update it to be resilient by using optional chaining
and structural fallbacks: attempt to return
system.pools_by_name?.[`${config.DEFAULT_POOL_NAME}-${system._id}`], then try
system.pools_by_name?.[config.DEFAULT_POOL_NAME], then search
Object.values(system.pools_by_name || {}) for a pool whose name equals or
startsWith config.DEFAULT_POOL_NAME + '-' and return the first match, and
finally fall back to searching system.pools (if present) for the same name
patterns; return null/undefined if no match found.


-async function get_optimal_non_mongo_pool_id() {
+async function get_optimal_non_default_pool_id() {
    for (const pool of system_store.data.pools) {
        // skip backingstore_pool.
-       if (_is_optimal_non_default_pool_id(pool)) {
+       if (_is_optimal_default_pool(pool)) {
            continue;
        }
        const aggr_nodes = await nodes_client.instance().aggregate_nodes_by_pool([pool.name], pool.system._id);
@@ -1329,7 +1329,7 @@ async function update_account_default_resource() {
    try {
        const system = system_store.data.systems[0];
        if (system) {
-           const optimal_pool_id = await get_optimal_non_mongo_pool_id();
+           const optimal_pool_id = await get_optimal_non_default_pool_id();

            if (optimal_pool_id) {
                const updates = system_store.data.accounts
@@ -1423,10 +1423,9 @@ exports.assign_pool_to_region = assign_pool_to_region;
exports.scale_hosts_pool = scale_hosts_pool;
exports.update_hosts_pool = update_hosts_pool;
exports.update_cloud_pool = update_cloud_pool;
-exports.get_optimal_non_mongo_pool_id = get_optimal_non_mongo_pool_id;
exports.get_hosts_pool_agent_config = get_hosts_pool_agent_config;
exports.update_issues_report = update_issues_report;
exports.update_last_monitoring = update_last_monitoring;
exports.calc_namespace_resource_mode = calc_namespace_resource_mode;
exports.check_deletion_ownership = check_deletion_ownership;
+exports.get_optimal_non_default_pool_id = get_optimal_non_default_pool_id;
+exports.get_optimal_default_pool = get_optimal_default_pool;
3 changes: 2 additions & 1 deletion src/server/system_services/system_server.js
@@ -189,7 +189,7 @@ function new_system_changes(req, name, owner_account_id) {

    let default_pool;
    if (config.DEFAULT_POOL_TYPE === 'HOSTS') {
-       const pool_name = config.DEFAULT_POOL_NAME;
+       const pool_name = `${config.DEFAULT_POOL_NAME}-${system._id}`;
        const fs_pool = pool_server.new_pool_defaults(pool_name, system._id, 'HOSTS', 'BLOCK_STORE_FS', owner_account_id);
        fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };
        default_pool = fs_pool;
@@ -577,6 +577,7 @@ async function read_system(req) {

    return {
        name: system.name,
+       system_id: system._id.toString(),
        objects: objects_sys.count.toJSNumber(),
        roles: _.map(system.roles_by_account, function(roles, account_id) {
            const account = system_store.data.get_by_id(account_id);
3 changes: 3 additions & 0 deletions src/test/integration_tests/internal/test_upgrade_scripts.js
@@ -134,6 +134,7 @@ mocha.describe('test upgrade scripts', async function() {
        const system = system_store.data.systems[0];
        const base = config.INTERNAL_STORAGE_POOL_NAME || config.DEFAULT_POOL_NAME || 'system-internal-storage-pool';
        const internal_name = `${base}-${system._id}`;
+       const pool_name = `${config.DEFAULT_POOL_NAME}-${system._id}`;

        // Seed an internal mongo pool entry
        await system_store.make_changes({
@@ -159,6 +160,8 @@ mocha.describe('test upgrade scripts', async function() {
        const prefix_exists = system_store.data.pools.find(pool => pool.name.startsWith(base));
        assert.strictEqual(exact_removed, undefined);
        assert.strictEqual(prefix_exists, undefined);
+       const exact_added = system_store.data.pools.find(pool => pool.name === pool_name);
+       assert.strictEqual(exact_added.name, pool_name);
});

mocha.after(async function() {
99 changes: 94 additions & 5 deletions src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js
@@ -1,26 +1,115 @@
/* Copyright (C) 2025 NooBaa */
"use strict";
const _ = require('lodash');

const config = require('../../../../config.js');
const SensitiveString = require('../../../util/sensitive_string');


async function run({ dbg, system_store }) {
    try {
-       dbg.log0(`Starting monogo pool delete...`);
+       dbg.log0(`Start: Mongo pool upgrade script...`);
+       dbg.log0(`Start: Create new default pool...`);
+       const pool_name = `${config.DEFAULT_POOL_NAME}-${system_store.data.systems[0]._id}`;
+       await create_new_default_pool(dbg, pool_name, system_store);
+       dbg.log0(`End: Created new default pool...`);

Comment on lines 9 to +16
@coderabbitai coderabbitai bot Aug 21, 2025
🛠️ Refactor suggestion

Scope: iterate all systems for multi-tenant stores

Script currently operates only on systems[0]. Per prior guidance, handle multi-system stores and remove all matching internal/mongo pools within one run.

High-level approach:

  • Loop over system_store.data.systems.
  • For each, ensure/create the new default pool (idempotent).
  • Detect and migrate buckets for that system.
  • Remove all internal/mongo pools for that system.

I can provide a consolidated patch if you confirm multi-system support is required for your deployments.
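For concreteness, a rough sketch of that loop; ensure_default_pool, migrate_buckets, and remove_internal_pools are hypothetical per-system refactors of the helpers in this script:

// Hypothetical multi-system sketch: run each phase per system so pools and
// buckets are always resolved against the system that owns them.
async function run_all_systems({ dbg, system_store }) {
    for (const system of system_store.data.systems) {
        const pool_name = `${config.DEFAULT_POOL_NAME}-${system._id}`;
        await ensure_default_pool(dbg, pool_name, system, system_store); // idempotent create
        await migrate_buckets(dbg, pool_name, system, system_store); // move buckets off internal pools
        await remove_internal_pools(dbg, system, system_store); // remove all matching pools
    }
}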

naveenpaul1 (Contributor Author)

multi-system is not supported

@naveenpaul1 Thanks for the clarification! That resolves my concern about the script scope. Since multi-system deployments are not supported, the current approach of operating on system_store.data.systems[0] is correct and appropriate.

The upgrade script's single-system logic is properly implemented for your use case.



✏️ Learnings added
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9195
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-16
Timestamp: 2025-08-21T11:07:05.147Z
Learning: In noobaa-core, multi-system deployments are not supported. Upgrade scripts can safely operate on system_store.data.systems[0] without needing to handle multiple systems.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.

Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:6-22
Timestamp: 2025-08-11T06:12:12.318Z
Learning: In the noobaa-core upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js, bucket migration from the internal mongo pool to a new default pool is planned to be handled in separate future PRs with comprehensive testing, rather than being included directly in the pool removal script.

Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.

Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.

        const internal_mongo_pool = `${config.INTERNAL_STORAGE_POOL_NAME}-${system_store.data.systems[0]._id}`;
        dbg.log0(`Internal mongo pool id is : ${internal_mongo_pool}`);
-       const pool_ids = system_store.data.pools.filter(pool => pool.name === internal_mongo_pool);
-       if (pool_ids.length > 0) {
-           dbg.log0(`Removing default mongo pool: ${pool_ids[0]._id}`);
-           await system_store.make_changes({ remove: { pools: [pool_ids[0]._id] }});
+       const mongo_pools = system_store.data.pools.filter(pool => pool.name === internal_mongo_pool);
+
+       dbg.log0(`Start: Update buckets default pool with new default pool...`);
+       await update_buckets_default_pool(dbg, pool_name, mongo_pools[0], system_store);
+       dbg.log0(`End: Updated buckets default pool with new default pool...`);

Comment on lines 17 to +24
🛠️ Refactor suggestion

⚠️ Potential issue

Do not rely on removed config.INTERNAL_STORAGE_POOL_NAME; detect internal/mongo pools structurally

Per recent learnings, config.INTERNAL_STORAGE_POOL_NAME was removed. The current logic will break on newer configs and risks missing legacy internal pools. Detect internal/mongo pools using structural markers (e.g., pool.mongo_info or pool.resource_type === 'INTERNAL') and scope to the current system.

Replace the name-based selection with structural detection and proper scoping:

-        const internal_mongo_pool = `${config.INTERNAL_STORAGE_POOL_NAME}-${system_store.data.systems[0]._id}`;
-        dbg.log0(`Internal mongo pool id is : ${internal_mongo_pool}`);
-        const mongo_pools = system_store.data.pools.filter(pool => pool.name === internal_mongo_pool);
+        const system = system_store.data.systems[0];
+        const mongo_pools = Object.values(system.pools_by_name || {}).filter(pool =>
+            (pool && String(pool.system._id || pool.system) === String(system._id)) &&
+            (pool.mongo_info || pool.resource_type === 'INTERNAL')
+        );
+        dbg.log0(`Detected ${mongo_pools.length} internal/mongo pool(s) for system ${system._id}`);

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 17 to
24, the code currently finds the internal mongo pool by a removed name constant;
replace that name-based selection with structural detection and proper scoping
to the current system: filter system_store.data.pools for pools belonging to
system_store.data.systems[0]._id and where pool.mongo_info is present OR
pool.resource_type === 'INTERNAL', log if none or multiple found, pick the
correct matched pool (e.g., first match) and pass that into
update_buckets_default_pool, and remove any dependency on
config.INTERNAL_STORAGE_POOL_NAME.

        if (mongo_pools.length > 0) {
            dbg.log0(`Removing default mongo pool: ${mongo_pools[0]._id}`);
            await system_store.make_changes({ remove: { pools: [mongo_pools[0]._id] }});
        } else {
            dbg.log0('Removing mongo pool: Could not find the mongo pool...');
        }
        dbg.log0(`End: Mongo pool upgrade script...`);
Comment on lines +11 to +31

💡 Verification agent

🧩 Analysis chain

End-to-end safety checks and idempotency

To make the upgrade script safe to re-run:

  • Ensure pool creation is idempotent (skip if exists).
  • Skip bucket updates if the tier already points to the new pool.
  • Remove internal pools only after successful bucket migration.
  • Wrap changes per phase and log counts for visibility.

Run these checks locally:

Also applies to: 38-67, 69-83, 101-112


🏁 Script executed:

#!/bin/bash
# 1) Confirm there are no remaining references to INTERNAL_STORAGE_POOL_NAME
rg -nP 'INTERNAL_STORAGE_POOL_NAME' src | sed -n '1,200p'

# 2) Confirm we’re not writing pools with non-canonical `owner_id`
rg -nP 'owner_id\s*:' src | sed -n '1,120p'

# 3) Check that tier updates use $set and we never call new_system_store_id() from a system object
rg -nP -n -C2 '\btiers\b.*\bmirrors\b|\bnew_system_store_id\(' src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js

# 4) Sanity: potential multi-system references to systems\[0]
rg -nP -n '\bsystems\[0\]' src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js

Length of output: 2475


Enhance idempotency, structural detection & multi‐system support in remove_mongo_pool.js

Please refactor the upgrade script at src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js (particularly lines 11–31, 38–67, 69–83, 101–112) to address the following mandatory fixes:

  • Avoid relying solely on config.INTERNAL_STORAGE_POOL_NAME
    • Detect legacy “internal” Mongo pools by checking each pool’s mongo_info field and resource_type === 'INTERNAL' instead of a name-prefix lookup
    • Iterate over all system_store.data.systems, not just systems[0], to capture multi-system deployments

  • Make default‐pool creation idempotent
    • Before calling create_new_default_pool(dbg, poolName, system_store), verify system_store.data.pools_by_name[poolName] doesn’t already exist

  • Harden bucket‐migration step for re-runs
    • In update_buckets_default_pool, skip any bucket whose tier already references the new pool (i.e. spread_pools includes the new pool’s _id and the default tier matches)
    • Ensure you use system_store.new_system_store_id() (not system_store.data.systems[i].new_system_store_id()) when generating new store IDs

  • Delay pool removal until all bucket migrations succeed
    • After migrating buckets across every system, confirm none remain on the internal pool before calling make_changes({ remove: { pools: […] } })
    • Batch removals per system, commit each phase separately, and log the count of pools removed

  • Structure script into clear, logged phases:

    1. Detection & Creation – identify existing default pools, create missing ones
    2. Bucket Migration – update only buckets needing migration, log number updated
    3. Pool Removal – remove only fully‐migrated internal pools, log number removed

These changes will ensure the script can be safely re-run, handle multi-system stores, and recover gracefully from partial failures.
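As a minimal sketch of the first two guards, reusing create_new_default_pool from this script and the tier shape it already reads (function names are hypothetical):

// Hypothetical re-run guards for the upgrade script.
async function ensure_default_pool(dbg, pool_name, system_store) {
    // Phase 1 (Detection & Creation): skip creation when the pool already exists.
    const system = system_store.data.systems[0];
    if (system.pools_by_name && system.pools_by_name[pool_name]) {
        dbg.log0(`Default pool ${pool_name} already exists; skipping creation`);
        return;
    }
    await create_new_default_pool(dbg, pool_name, system_store);
}

function bucket_already_migrated(bucket, new_pool) {
    // Phase 2 (Bucket Migration) guard: skip buckets whose single tier already
    // spreads over the new pool; handles both ObjectId and populated refs.
    const tiers = bucket.tiering && bucket.tiering.tiers;
    if (!tiers || tiers.length !== 1) return false;
    const mirrors = tiers[0].tier.mirrors || [];
    return mirrors.some(m => (m.spread_pools || [])
        .some(p => String((p && p._id) || p) === String(new_pool._id)));
}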

🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 11–31,
38–67, 69–83 and 101–112: the script currently assumes a single system and finds
internal Mongo pools by name and unconditionally creates/removes pools, which
breaks idempotency and multi-system deployments; refactor into three logged
phases (Detection & Creation, Bucket Migration, Pool Removal), iterate over all
system_store.data.systems, detect legacy internal pools by checking
pool.mongo_info and resource_type === 'INTERNAL' (not name prefix), before
calling create_new_default_pool verify system_store.data.pools_by_name[poolName]
does not exist, in update_buckets_default_pool skip buckets whose tiers already
reference the new pool (spread_pools includes new pool _id and default tier
matches) and use system_store.new_system_store_id() to generate new IDs, after
processing all systems ensure no buckets remain on an internal pool before
removing it, batch removals per system and call make_changes separately per
batch, and add logging counts for created default pools, buckets migrated, and
pools removed to make the script re-runnable and safe for multi-system stores.

Comment on lines +25 to +31
🛠️ Refactor suggestion

Remove all detected internal/mongo pools, not just the first

If more than one matching pool exists (legacy residues), only the first is removed. Remove all to leave no internal/mongo pools behind.

Apply:

-        if (mongo_pools.length > 0) {
-            dbg.log0(`Removing default mongo pool: ${mongo_pools[0]._id}`);
-            await system_store.make_changes({ remove: { pools: [mongo_pools[0]._id] }});
+        if (mongo_pools.length > 0) {
+            const ids = mongo_pools.map(p => p._id);
+            dbg.log0(`Removing ${ids.length} internal/mongo pool(s): ${ids.join(', ')}`);
+            await system_store.make_changes({ remove: { pools: ids }});
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 25 to
31, the script only removes the first detected mongo pool (mongo_pools[0]) when
multiple matching internal/mongo pools may exist; change it to remove all
detected pools by collecting all mongo_pools ids and passing that array to
system_store.make_changes({ remove: { pools: [...] }}), and update the dbg.log0
calls to report how many/which pools are being removed and handle the zero-pools
case accordingly.

    } catch (err) {
        dbg.error('Got error while removing mongo pool:', err);
        throw err;
    }
}

async function update_buckets_default_pool(dbg, pool_name, mongo_pool, system_store) {
    const pool = system_store.data.systems[0].pools_by_name[pool_name];
    if (!pool) {
        dbg.error('INVALID_POOL_NAME:');
        throw new Error('INVALID_POOL_NAME');
    }
    if (!mongo_pool || !mongo_pool._id) return;
    if (String(pool._id) === String(mongo_pool._id)) return;
    const buckets_with_internal_pool = _.filter(system_store.data.systems[0].buckets_by_name, bucket =>
        is_using_internal_storage(bucket, mongo_pool));
    if (!buckets_with_internal_pool.length) return;

    // The loop pushes one update per bucket
    const updates = _.uniqBy([], '_id');
    for (const bucket of buckets_with_internal_pool) {
        updates.push({
            _id: bucket.tiering.tiers[0].tier._id,
            mirrors: [{
                _id: system_store.data.systems[0].new_system_store_id(),
                spread_pools: [pool._id]
            }]
        });
    }
    dbg.log0(`Updating ${buckets_with_internal_pool.length} buckets to use ${pool_name} as default resource`);
    await system_store.make_changes({
        update: {
            tiers: updates
        }
    });
}
Comment on lines +38 to +67
🛠️ Refactor suggestion

⚠️ Potential issue

Tier update construction has two functional bugs (bad id generation and missing $set) and an overly strict matcher

Issues:

  • system_store.data.systems[0].new_system_store_id() is invalid; ID creation is on system_store, not the system object.
  • Updates do not use $set; depending on make_changes semantics, this may overwrite entire tier docs or be ignored.
  • is_using_internal_storage assumes id is under _id. In some stores spread_pools entries can be ObjectIds (no _id), causing false negatives.

Patch:

-    const updates = _.uniqBy([], '_id');
+    const updates = [];
     for (const bucket of buckets_with_internal_pool) {
-        updates.push({
-            _id: bucket.tiering.tiers[0].tier._id,
-            mirrors: [{
-                _id: system_store.data.systems[0].new_system_store_id(),
-                spread_pools: [pool._id]
-            }]
-        });
+        updates.push({
+            _id: bucket.tiering.tiers[0].tier._id,
+            $set: {
+                mirrors: [{
+                    _id: system_store.new_system_store_id(),
+                    spread_pools: [pool._id]
+                }]
+            }
+        });
     }
-    dbg.log0(`Updating ${buckets_with_internal_pool.length} buckets to use ${pool_name} as default resource`);
+    dbg.log0(`Updating ${buckets_with_internal_pool.length} bucket(s) to use ${pool_name} as default resource`);
     await system_store.make_changes({
         update: {
-            tiers: updates
+            tiers: updates
         }
     });

Additionally, consider relaxing is_using_internal_storage as patched below (see dedicated comment).

🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 38 to
67, fix three issues: replace the incorrect call
system_store.data.systems[0].new_system_store_id() with
system_store.new_system_store_id(); build each tier update as a $set operation
(e.g. { _id: <tierId>, $set: { mirrors: [...] } }) so make_changes performs a
partial update instead of replacing the whole document; and relax the matcher
used to detect internal storage by normalizing spread_pools entries to their id
values (handle entries that are either plain ObjectId/strings or objects with an
_id property) and compare via String(...) to mongo_pool._id or pool._id to avoid
false negatives. Ensure updates are unique by tier _id before calling
system_store.make_changes.


async function create_new_default_pool(dbg, pool_name, system_store) {
    // TODO: UPDATE EMAIL
    const account = system_store.get_account_by_email(new SensitiveString('[email protected]'));
    if (!account) {
        dbg.error('NO_SUCH_ACCOUNT', 'No such account email: [email protected]');
    }
    const fs_pool = new_pool_defaults(system_store, pool_name, system_store.data.systems[0]._id, 'HOSTS', 'BLOCK_STORE_FS', account._id);
    fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };
    const default_pool = fs_pool;
    await system_store.make_changes({
        insert: {
            pools: [default_pool]
        }
    });
}
Comment on lines +69 to +83
🛠️ Refactor suggestion

Owner resolution and pool construction are fragile; avoid hard-coded admin email and reuse pool_server.new_pool_defaults

Problems:

  • Hard-coding [email protected] may fail; environments often use config.OPERATOR_ACCOUNT_EMAIL or the system owner.
  • Local new_pool_defaults() risks schema drift (uses owner_id vs canonical owner and may miss defaults). Reuse pool_server.new_pool_defaults(...) for a schema-accurate pool.

Apply:

+const pool_server = require('../../../server/system_services/pool_server');
@@
-    // TODO: UPDATE EMAIL
-    const account = system_store.get_account_by_email(new SensitiveString('[email protected]'));
-    if (!account) {
-        dbg.error('NO_SUCH_ACCOUNT', 'No such account email: [email protected]');
-    }
-    const fs_pool = new_pool_defaults(system_store, pool_name, system_store.data.systems[0]._id, 'HOSTS', 'BLOCK_STORE_FS', account._id);
+    const system = system_store.data.systems[0];
+    // Prefer the system owner; fall back to operator account if needed.
+    const owner_ref = system.owner && (system.owner._id || system.owner.id || system.owner);
+    const owner = system_store.data.get_by_id ? (system_store.data.get_by_id(owner_ref) || {}) : {};
+    const owner_id = owner._id || owner_ref;
+    if (!owner_id) {
+        dbg.error('MISSING_OWNER', 'Could not resolve system owner for pool creation');
+        throw new Error('MISSING_OWNER');
+    }
+    // Idempotency: skip if pool already exists
+    if (system.pools_by_name && system.pools_by_name[pool_name]) {
+        dbg.log0(`Default pool ${pool_name} already exists; skipping creation`);
+        return;
+    }
+    const fs_pool = pool_server.new_pool_defaults(pool_name, system._id, 'HOSTS', 'BLOCK_STORE_FS', owner_id);
     fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };

Committable suggestion skipped: line range outside the PR's diff.


function new_pool_defaults(system_store, name, system_id, resource_type, pool_node_type, owner_id) {
    const now = Date.now();
    return {
        _id: system_store.new_system_store_id(),
        system: system_id,
        name: name,
        owner_id,
        resource_type: resource_type,
        pool_node_type: pool_node_type,
        storage_stats: {
            blocks_size: 0,
            last_update: now - (2 * config.MD_GRACE_IN_MILLISECONDS)
        },
    };
}
Comment on lines +85 to +99
🛠️ Refactor suggestion

Local new_pool_defaults() duplicates core logic and uses non-canonical field owner_id

This helper risks schema drift and breaks permissions/auditing. Given the refactor above to use pool_server.new_pool_defaults, drop this local function to avoid divergence.

Apply:

-function new_pool_defaults(system_store, name, system_id, resource_type, pool_node_type, owner_id) {
-    const now = Date.now();
-    return {
-        _id: system_store.new_system_store_id(),
-        system: system_id,
-        name: name,
-        owner_id,
-        resource_type: resource_type,
-        pool_node_type: pool_node_type,
-        storage_stats: {
-            blocks_size: 0,
-            last_update: now - (2 * config.MD_GRACE_IN_MILLISECONDS)
-        },
-    };
-}
+// Removed: use pool_server.new_pool_defaults(...) for a canonical pool shape.
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 85 to
99, the local helper function new_pool_defaults duplicates core logic and uses a
non-canonical field owner_id; remove this local function and replace its usages
to call the centralized pool_server.new_pool_defaults instead (pass the same
args: system_store, name, system_id, resource_type, pool_node_type, owner_id),
and ensure any created objects match the canonical schema (rename owner_id to
owner if required by pool_server) so there is no schema drift or
permission/audit divergence.


function is_using_internal_storage(bucket, internal_pool) {
    if (!internal_pool || !internal_pool._id) return false;
    const tiers = bucket.tiering && bucket.tiering.tiers;
    if (!tiers || tiers.length !== 1) return false;
    const mirrors = tiers[0].tier.mirrors;
    if (mirrors.length !== 1) return false;
    const spread_pools = mirrors[0].spread_pools;
    if (spread_pools.length !== 1) return false;

    return String(spread_pools[0]._id) === String(internal_pool._id);
}

Comment on lines +101 to +112
🛠️ Refactor suggestion

Handle both ObjectId and populated refs in is_using_internal_storage

spread_pools[0] might be either an ObjectId or a populated object. The current check misses the former.

Apply:

-    const spread_pools = mirrors[0].spread_pools;
+    const spread_pools = mirrors[0].spread_pools;
     if (spread_pools.length !== 1) return false;
-
-    return String(spread_pools[0]._id) === String(internal_pool._id);
+    const target = spread_pools[0];
+    const pool_id = (target && target._id) || target;
+    return String(pool_id) === String(internal_pool._id);
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 101 to
112, the function is_using_internal_storage assumes spread_pools[0] is always a
populated object and therefore accesses _id; update the comparison to handle
both populated refs and raw ObjectId refs by normalizing the value before
comparing (e.g., use the spread_pools[0]._id if present, otherwise use
spread_pools[0] directly, then coerce both sides to String or use toString()
safely), and ensure you defensively check existence (optional chaining) to avoid
runtime errors.

module.exports = {
    run,
    description: 'NooBaa no longer supports mongo_pool backingstore; remove mongo pool',