Mongo | Mongo pool upgrade #9195
Conversation
Walkthrough

Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor Admin as Upgrade Runner
participant Script as remove_mongo_pool.js
participant System as System DB
participant PoolSrv as pool_server
participant BucketSrv as bucket_server
participant Buckets as Buckets Collection
Admin->>Script: run()
Script->>System: Read system, pools, buckets
Script->>Script: Compute defaultName = DEFAULT_POOL_NAME-{system_id}
Script->>System: Create/insert default pool (if missing)
Script->>PoolSrv: get_optimal_default_pool(system)
PoolSrv-->>Script: default_pool
alt Internal pool exists and differs
Script->>BucketSrv: update_buckets_default_pool(internal_pool, default_pool)
BucketSrv->>Buckets: Bulk update tier mirrors -> default_pool
Buckets-->>BucketSrv: Updated
Script->>System: Remove internal pool
else No internal pool or already same
Script->>Script: Skip updates/removal
end
Script-->>Admin: Completed
note over Script,System: Logs show create/update/remove steps
```

```mermaid
sequenceDiagram
autonumber
participant Client as API Client
participant SysSrv as system_server
participant PoolSrv as pool_server
participant BucketSrv as bucket_server
Client->>SysSrv: read_system(req)
SysSrv-->>Client: {..., system_id}
Client->>BucketSrv: update_all_buckets_default_pool()
BucketSrv->>PoolSrv: get_optimal_default_pool(system)
PoolSrv-->>BucketSrv: default_pool
BucketSrv->>BucketSrv: Validate and skip if same/missing
BucketSrv->>BucketSrv: Update buckets using internal pool (if needed)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Force-pushed from 4fdbdea to 3c2fa20
Signed-off-by: Naveen Paul <[email protected]>
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/server/system_services/pool_server.js (2)
1312-1325: Constrain search to the current system and skip INTERNAL pools to avoid cross-system/default regressions

Iterating over `system_store.data.pools` without filtering by `system` can select a pool from a different system. Also skip `INTERNAL` (legacy mongo) pools; otherwise we might “fix” default resources to another internal pool. This can corrupt account defaults in multi-system stores.

Apply this diff:

```diff
-async function get_optimal_non_default_pool_id() {
-    for (const pool of system_store.data.pools) {
+async function get_optimal_non_default_pool_id(system) {
+    for (const pool of system_store.data.pools) {
+        if (String(pool.system?._id || pool.system) !== String(system._id)) continue;
         // skip backingstore_pool.
-        if (_is_optimal_default_pool(pool)) {
+        if (_is_optimal_default_pool(pool) || pool.resource_type === 'INTERNAL' || pool.mongo_info) {
             continue;
         }
         const aggr_nodes = await nodes_client.instance().aggregate_nodes_by_pool([pool.name], pool.system._id);
         const aggr_hosts = await nodes_client.instance().aggregate_hosts_by_pool([pool.name], pool.system._id);
         const { mode = '' } = get_pool_info(pool, aggr_nodes, aggr_hosts);
         if (mode === 'OPTIMAL') {
             return pool._id;
         }
     }
 }
```
1329-1355: Limit default resource migration to legacy internal pools and update pool selector to scope by system

The current implementation in `update_account_default_resource` unconditionally overwrites every non-operator account's `default_resource` with the first “optimal” pool found across all systems. To avoid unintended cross-system migrations and only target accounts still on the legacy INTERNAL/Mongo pool, please apply the following mandatory refactors:

• Modify `get_optimal_non_default_pool_id` to accept a `system` parameter and only scan that system's pools:

```diff
--- a/src/server/system_services/pool_server.js
+++ b/src/server/system_services/pool_server.js
@@
-async function get_optimal_non_default_pool_id() {
-    for (const pool of system_store.data.pools) {
+async function get_optimal_non_default_pool_id(system) {
+    // Only consider pools belonging to this system
+    const pools = system_store.data.pools.filter(p =>
+        String(p.system._id) === String(system._id)
+    );
+    for (const pool of pools) {
         // skip default pool
         if (_is_optimal_default_pool(pool)) {
             continue;
```

• Update the call site in `update_account_default_resource` to pass in the `system`, and narrow the account filter to only those still using a legacy INTERNAL/Mongo pool on that system:

```diff
--- a/src/server/system_services/pool_server.js
+++ b/src/server/system_services/pool_server.js
@@ async function update_account_default_resource() {
-    const optimal_pool_id = await get_optimal_non_default_pool_id();
+    const optimal_pool_id = await get_optimal_non_default_pool_id(system);
@@
-    const updates = system_store.data.accounts
-        .filter(account =>
-            account.email.unwrap() !== config.OPERATOR_ACCOUNT_EMAIL &&
-            account.default_resource
-        )
+    const updates = system_store.data.accounts
+        .filter(account =>
+            // skip operator and empty defaults
+            account.email.unwrap() !== config.OPERATOR_ACCOUNT_EMAIL &&
+            account.default_resource &&
+            // only accounts on this system's legacy INTERNAL/Mongo pool
+            String(account.default_resource.system?._id || account.default_resource.system) === String(system._id) &&
+            (account.default_resource.resource_type === 'INTERNAL' || account.default_resource.mongo_info)
+        )
         .map(account => ({ _id: account._id, $set: { default_resource: optimal_pool_id } }));
```

• (Optional) Gate this migration behind a one-time upgrade flag so it runs only once per cluster lifecycle.
Files/lines to update:
- src/server/system_services/pool_server.js:
  - Function definition around lines 1308–1312
  - `update_account_default_resource` call site around lines 1329–1336

src/server/system_services/bucket_server.js (1)
1322-1331: Critical Refactor Required: Correct Bucket Migration Direction in update_all_buckets_default_pool

The current implementation in src/server/system_services/bucket_server.js incorrectly treats the new default/backingstore pool as the “internal” (legacy) pool, causing buckets to be moved in the opposite direction of the intended upgrade:
• `get_optimal_default_pool()` returns the new default backingstore, not the legacy INTERNAL/mongo pool
• As a result, buckets using the new default are migrated to `pool_name`, instead of migrating buckets off the legacy INTERNAL pool into the new default

After running a repository-wide search for other uses of `is_using_internal_storage()` and INTERNAL pool detection, no additional sites exhibit this inversion—this change is isolated to update_all_buckets_default_pool in bucket_server.js.

Please update the detection logic to identify the legacy INTERNAL/mongo pool via structural markers (`resource_type` or `mongo_info`) and then migrate those buckets into the new default:

```diff
--- a/src/server/system_services/bucket_server.js
+++ b/src/server/system_services/bucket_server.js
@@ -1322,7 +1322,13 @@ async function update_all_buckets_default_pool(req) {
     const pool = req.system.pools_by_name[pool_name];
     if (!pool) throw new RpcError('INVALID_POOL_NAME');
-    const internal_pool = pool_server.get_optimal_default_pool(pool.system);
+    // Find the legacy INTERNAL (mongo) pool, falling back to name-prefix if needed
+    const internal_pool = pool_server.get_legacy_internal_pool
+        ? pool_server.get_legacy_internal_pool(pool.system)
+        : _.find(
+            Object.values(pool.system.pools_by_name || {}),
+            p => p.resource_type === 'INTERNAL' || p.mongo_info
+        );
     if (!internal_pool || !internal_pool._id) return;
     if (String(pool._id) === String(internal_pool._id)) return;
     const buckets_with_internal_pool = _.filter(
```

Follow-up:
• Export `get_legacy_internal_pool(system)` from pool_server.js (using structural checks and a DEFAULT_POOL_NAME prefix fallback)
• Once available, switch this handler to call `pool_server.get_legacy_internal_pool(pool.system)` directly

This refactor is mandatory to ensure buckets are migrated off the legacy INTERNAL pool into the new default backingstore.
🧹 Nitpick comments (7)
src/server/system_services/pool_server.js (1)
1431-1431: Consider exporting a helper for legacy internal pool detection

Since multiple places need to find the legacy INTERNAL/mongo pool during upgrade/migration, add and export:

```js
function get_legacy_internal_pool(system) {
    return _.find(Object.values(system.pools_by_name || {}),
        p => p?.resource_type === 'INTERNAL' || p?.mongo_info);
}
exports.get_legacy_internal_pool = get_legacy_internal_pool;
```

This promotes a single source of truth and aligns with the guidance to use structural markers.
src/test/integration_tests/internal/test_upgrade_scripts.js (2)
137-165: Strengthen assertions and tidy logging

- If the new pool wasn't created, `exact_added.name` will throw before assertion. Assert existence first for a clearer failure.
- Minor typo in the `afte_names` variable.

Apply this diff:

```diff
-    const afte_names = system_store.data.pools.map(e => e.name);
-    dbg.info("End : List all the pools in system: ", afte_names);
+    const after_names = system_store.data.pools.map(e => e.name);
+    dbg.info("End : List all the pools in system: ", after_names);
@@
-    const exact_added = system_store.data.pools.find(pool => pool.name === pool_name);
-    assert.strictEqual(exact_added.name, pool_name);
+    const exact_added = system_store.data.pools.find(pool => pool.name === pool_name);
+    assert.ok(exact_added, `Expected pool ${pool_name} to be created`);
+    assert.strictEqual(exact_added.name, pool_name);
```
134-141: Test still references potentially removed config INTERNAL name; fallback is fine, but prefer structural detection

Using `config.INTERNAL_STORAGE_POOL_NAME` is okay here with a fallback, but long-term stability is better if tests seed an INTERNAL resource by `resource_type: 'INTERNAL'` without relying on legacy names.

Would you like me to propose a version of the test that seeds purely by `resource_type: 'INTERNAL'` and asserts via structural checks? A minimal sketch of such a seed follows.
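For illustration only, a seed along these lines (the exact pool shape and fixture names are assumptions, not this repo's actual test helpers):

```js
// Hypothetical test seed: create a legacy-style internal pool identified
// only by structural markers, never by a config-derived name.
const internal_pool = {
    _id: system_store.new_system_store_id(),
    system: system._id,
    name: 'some-legacy-pool-name', // deliberately not derived from config
    resource_type: 'INTERNAL',     // structural marker the upgrade script keys on
    mongo_info: {},                // legacy mongo marker
    storage_stats: { blocks_size: 0, last_update: Date.now() },
};
await system_store.make_changes({ insert: { pools: [internal_pool] } });

// Assert detection structurally rather than by name:
const detected = system_store.data.pools.filter(
    p => p.resource_type === 'INTERNAL' || p.mongo_info
);
assert.ok(detected.length > 0, 'Expected structural detection to find the seeded pool');
```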
src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js (4)
11-16: Typos and clarity in logs

Minor but user-facing when running upgrades:
- “Monogo” → “Mongo”
- “Create new default pool Created...” → “Created new default pool.”
Apply:
```diff
-    dbg.log0(`Start: Monogo pool upgrade script...`);
+    dbg.log0(`Start: Mongo pool upgrade script...`);
-    dbg.log0(`End: Create new default pool Created...`);
+    dbg.log0(`End: Created new default pool.`);
-    dbg.log0(`End: Monogo pool upgrade script...`);
+    dbg.log0(`End: Mongo pool upgrade script.`);
```
13-16: Prefer deriving the target default pool via API rather than by name to future-proof

If naming changes again, deriving by name will break. Consider using a helper like `pool_server.get_optimal_default_pool(system)` (as referenced elsewhere in this PR) to resolve the pool, and only fall back to name.

Example:

```diff
-    const pool_name = `${config.DEFAULT_POOL_NAME}-${system_store.data.systems[0]._id}`;
-    await create_new_default_pool(dbg, pool_name, system_store);
+    const system = system_store.data.systems[0];
+    const pool = pool_server.get_optimal_default_pool
+        ? pool_server.get_optimal_default_pool(system)
+        : null;
+    const pool_name = pool ? pool.name : `${config.DEFAULT_POOL_NAME}-${system._id}`;
+    if (!pool) await create_new_default_pool(dbg, pool_name, system_store);
```
21-24: Guard for absent internal pools and no-op updates

`update_buckets_default_pool` is called even when no internal/mongo pool exists. The function currently early-returns on a falsy `mongo_pool`, which is fine. Consider logging that there's nothing to migrate to make the script's output clearer.

Apply:

```diff
-    await update_buckets_default_pool(dbg, pool_name, mongo_pools[0], system_store);
+    await update_buckets_default_pool(dbg, pool_name, mongo_pools[0], system_store);
+    if (!mongo_pools[0]) dbg.log0('No internal/mongo pool detected; skipping bucket migration');
```
69-83: Set `backingstore` flag on hosts pool (if applicable)

Some detection paths rely on `hosts_pool_info.backingstore === true`. Explicitly set it to avoid name-based fallbacks.

Apply:

```diff
-    fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };
+    fs_pool.hosts_pool_info = { is_managed: false, host_count: 0, backingstore: true };
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- src/api/system_api.js (2 hunks)
- src/server/node_services/node_server.js (1 hunks)
- src/server/system_services/bucket_server.js (1 hunks)
- src/server/system_services/pool_server.js (3 hunks)
- src/server/system_services/system_server.js (2 hunks)
- src/test/integration_tests/internal/test_upgrade_scripts.js (2 hunks)
- src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
src/test/**/*.*
⚙️ CodeRabbit configuration file
src/test/**/*.*
: Ensure that the PR includes tests for the changes.
Files:
src/test/integration_tests/internal/test_upgrade_scripts.js
🧠 Learnings (7)
📓 Common learnings
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:6-22
Timestamp: 2025-08-11T06:12:12.318Z
Learning: In the noobaa-core upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js, bucket migration from the internal mongo pool to a new default pool is planned to be handled in separate future PRs with comprehensive testing, rather than being included directly in the pool removal script.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.
📚 Learning: 2025-08-08T13:12:46.728Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.
Applied to files:
src/test/integration_tests/internal/test_upgrade_scripts.js
src/server/system_services/bucket_server.js
src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js
src/server/node_services/node_server.js
src/server/system_services/system_server.js
src/server/system_services/pool_server.js
📚 Learning: 2025-08-11T06:12:12.318Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:6-22
Timestamp: 2025-08-11T06:12:12.318Z
Learning: In the noobaa-core upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js, bucket migration from the internal mongo pool to a new default pool is planned to be handled in separate future PRs with comprehensive testing, rather than being included directly in the pool removal script.
Applied to files:
src/test/integration_tests/internal/test_upgrade_scripts.js
src/server/system_services/bucket_server.js
src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js
src/server/system_services/pool_server.js
📚 Learning: 2025-08-08T13:10:36.141Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.
Applied to files:
src/test/integration_tests/internal/test_upgrade_scripts.js
src/server/system_services/bucket_server.js
src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js
src/server/node_services/node_server.js
src/server/system_services/system_server.js
src/server/system_services/pool_server.js
📚 Learning: 2025-08-08T13:13:42.192Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.
Applied to files:
src/test/integration_tests/internal/test_upgrade_scripts.js
src/server/system_services/bucket_server.js
src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js
src/server/node_services/node_server.js
src/server/system_services/system_server.js
src/server/system_services/pool_server.js
📚 Learning: 2025-08-08T13:10:57.608Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1308-1311
Timestamp: 2025-08-08T13:10:57.608Z
Learning: In noobaa-core, config.INTERNAL_STORAGE_POOL_NAME has been removed. Use config.DEFAULT_POOL_NAME for internal/backingstore pool identification. Prefer exact match first, with prefix fallback to handle potential system-id suffixes in pool names.
Applied to files:
src/test/integration_tests/internal/test_upgrade_scripts.js
src/server/system_services/system_server.js
src/server/system_services/pool_server.js
📚 Learning: 2025-08-08T13:08:38.361Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/bucket_server.js:1324-1327
Timestamp: 2025-08-08T13:08:38.361Z
Learning: In src/server/system_services/bucket_server.js, the update_all_buckets_default_pool(req) handler expects req.rpc_params.pool_name to be a plain string (not a SensitiveString wrapper), so calling .unwrap() is not needed there.
Applied to files:
src/server/system_services/bucket_server.js
src/server/node_services/node_server.js
🧬 Code graph analysis (5)
- src/test/integration_tests/internal/test_upgrade_scripts.js — references definitions in pool_server.js (18), system_server.js (10), and remove_mongo_pool.js (2): `pool_name`, `config`, `system`, `system_store`, `pool`.
- src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js — references definitions in bucket_server.js (27), pool_server.js (26), system_server.js (13), and test_upgrade_scripts.js (2): `pool_name`, `pool`, `bucket`, `updates`, `internal_pool`, `tiers`, `mirrors`, `spread_pools`, `account`, `fs_pool`, `default_pool`.
- src/server/node_services/node_server.js — references definitions in bucket_server.js (6) and pool_server.js (20): `pool`, `req`, `pool_name`.
- src/server/system_services/system_server.js — references definitions in test_upgrade_scripts.js (3) and remove_mongo_pool.js (2): `pool_name`, `config`, `system`.
- src/server/system_services/pool_server.js — references definitions in bucket_server.js (5), remove_mongo_pool.js (2), hosted_agents.js (4), system_server.js (9), and config.js (1): `pool`, `config`, `system`, `system_store`.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Build Noobaa Image
- GitHub Check: run-jest-unit-tests
🔇 Additional comments (3)
src/api/system_api.js (1)
478-502: Schema update validated: `system_id` is already returned by `read_system`

- The `read_system` implementation in `src/server/system_services/system_server.js` returns `system_id: system._id.toString()` as part of its response object, satisfying the new schema requirement.
- The OpenAPI definition in `src/api/system_api.js` already lists `system_id` under both `required` and `properties` for `system_full_info`.
- No references to `system_full_info` were found in this repo's UI/client code—ensure any external clients or stubs are updated if they consume this shape.
- (Optional) Add an integration test for `GET /api/system/read_system` to assert that `system_id` is always present in the response; a sketch follows.
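A minimal sketch of such a test (the `rpc_client` handle and assertion style are assumed from the surrounding test harness, not verified against this repo):

```js
// Hypothetical integration test: read_system must always expose system_id.
const info = await rpc_client.system.read_system();
assert.strictEqual(typeof info.system_id, 'string');
assert.ok(info.system_id.length > 0, 'read_system should return a non-empty system_id');
```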
src/server/system_services/system_server.js (2)
191-196: Default pool name suffixed with system id — LGTM

Using `${config.DEFAULT_POOL_NAME}-${system._id}` makes the default pool unique per system and aligns with the new naming convention referenced in tests and upgrade scripts. No concerns on correctness.
579-582: Confirmed API schema and tests include `system_id` – ready to merge

- `src/api/system_api.js` now lists `'system_id'` in the `required` array for `system_full_info` and defines it as a string type.
- Integration tests across the codebase already reference and exercise `system_id` (e.g. in system store and agent blocks verifier tests), and none fail with the new field.
- No lingering references to `system_full_info` outside the API schema were found, so UI/CLI code will ignore the extra property by design.

All downstream consumers should handle the additional `system_id` field without issue. Merging as-is.
```diff
     query.pools = new Set(_.map(query.pools, pool_name => {
-        const pool = req.system.pools_by_name[pool_name];
+        const pool = req.system.pools_by_name?.[pool_name];
         return String(pool._id);
     }));
 }
```
Optional chaining introduced, but still dereferencing a possibly-undefined `pool` on the next line

`req.system.pools_by_name?.[pool_name]` can return undefined; `String(pool._id)` will then throw. Either skip unknown pools or fail fast with a clear error.

Apply this diff to make the mapping null-safe and avoid runtime exceptions:

```diff
-    query.pools = new Set(_.map(query.pools, pool_name => {
-        const pool = req.system.pools_by_name?.[pool_name];
-        return String(pool._id);
-    }));
+    const pool_ids = _.compact(_.map(query.pools, pool_name => {
+        const p = req.system.pools_by_name?.[pool_name];
+        if (!p) dbg.log1('list_nodes: skipping unknown pool in filter', pool_name);
+        return p?._id && String(p._id);
+    }));
+    query.pools = new Set(pool_ids);
```
🤖 Prompt for AI Agents
In src/server/node_services/node_server.js around lines 90 to 94, the code uses
optional chaining to look up pools but then dereferences pool._id without
checking pool; update the mapping to handle undefined by first filtering out
missing pools (or throw a clear error if you prefer fail-fast). Specifically,
transform query.pools by mapping pool names to pool objects, filter out
undefined entries (or throw an Error with the missing pool_name), then map to
String(pool._id) and wrap in a Set; optionally log a warning for skipped unknown
pools.
```js
function _is_optimal_default_pool(pool) {
    return Boolean(pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`);
}
```
🛠️ Refactor suggestion
Default/backingstore detection should use structural markers, not strict name equality
Rely on `hosts_pool_info.backingstore` / `cloud_pool_info.backingstore` with name-prefix fallback, per upgrade guidance, instead of strict equality against `${config.DEFAULT_POOL_NAME}-${system_id}`. This avoids brittleness if the name changes or has a suffix.

Apply this diff:

```diff
-function _is_optimal_default_pool(pool) {
-    return Boolean(pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`);
+function _is_optimal_default_pool(pool) {
+    // Prefer structural marker (backingstore), fallback to name prefix to handle suffixes.
+    if (pool.hosts_pool_info?.backingstore || pool.cloud_pool_info?.backingstore) return true;
+    const expected_prefix = `${config.DEFAULT_POOL_NAME}-`;
+    return typeof pool.name === 'string' && pool.name.startsWith(expected_prefix);
 }
```
🤖 Prompt for AI Agents
In src/server/system_services/pool_server.js around lines 1304-1306, the
default/backingstore detection currently uses strict equality on the full pool
name; change it to first check structural markers
(pool.hosts_pool_info?.backingstore or pool.cloud_pool_info?.backingstore) and
return true if either marker is truthy, otherwise fall back to a name-prefix
check (pool.name.startsWith(`${config.DEFAULT_POOL_NAME}-${pool.system._id}`))
to allow suffixes; ensure null-safe access and return a boolean.
```js
function get_optimal_default_pool(system) {
    return system.pools_by_name[`${config.DEFAULT_POOL_NAME}-${system._id}`];
}
```
🛠️ Refactor suggestion
Make `get_optimal_default_pool()` resilient; support structural fallback

Current code assumes the composite key exists and `pools_by_name` is always defined. Use optional chaining and a structural fallback to improve robustness across upgrades.

Apply this diff:

```diff
-function get_optimal_default_pool(system) {
-    return system.pools_by_name[`${config.DEFAULT_POOL_NAME}-${system._id}`];
+function get_optimal_default_pool(system) {
+    const by_name = system.pools_by_name?.[`${config.DEFAULT_POOL_NAME}-${system._id}`];
+    if (by_name) return by_name;
+    // Fallback: find by structural marker or name prefix
+    return _.find(Object.values(system.pools_by_name || {}), p =>
+        p?.hosts_pool_info?.backingstore || p?.cloud_pool_info?.backingstore ||
+        (typeof p?.name === 'string' && p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`)));
 }
```
🤖 Prompt for AI Agents
In src/server/system_services/pool_server.js around lines 1308 to 1310,
get_optimal_default_pool currently assumes system.pools_by_name exists and the
composite key is present; update it to be resilient by using optional chaining
and structural fallbacks: attempt to return
system.pools_by_name?.[`${config.DEFAULT_POOL_NAME}-${system._id}`], then try
system.pools_by_name?.[config.DEFAULT_POOL_NAME], then search
Object.values(system.pools_by_name || {}) for a pool whose name equals or
startsWith config.DEFAULT_POOL_NAME + '-' and return the first match, and
finally fall back to searching system.pools (if present) for the same name
patterns; return null/undefined if no match found.
```diff
 async function run({ dbg, system_store }) {
     try {
-        dbg.log0(`Starting monogo pool delete...`);
+        dbg.log0(`Start: Monogo pool upgrade script...`);
+        dbg.log0(`Start: Create new default pool...`);
+        const pool_name = `${config.DEFAULT_POOL_NAME}-${system_store.data.systems[0]._id}`;
+        await create_new_default_pool(dbg, pool_name, system_store);
+        dbg.log0(`End: Create new default pool Created...`);
```
🛠️ Refactor suggestion
Scope: iterate all systems for multi-tenant stores
Script currently operates only on `systems[0]`. Per prior guidance, handle multi-system stores and remove all matching internal/mongo pools within one run.
High-level approach:
- Loop over `system_store.data.systems`.
- For each, ensure/create the new default pool (idempotent).
- Remove all internal/mongo pools for that system.
I can provide a consolidated patch if you confirm multi-system support is required for your deployments.
multi-system is not supported
@naveenpaul1 Thanks for the clarification! That resolves my concern about the script scope. Since multi-system deployments are not supported, the current approach of operating on `system_store.data.systems[0]` is correct and appropriate.
The upgrade script's single-system logic is properly implemented for your use case.
🐰
◕ ◕
( · ) curious head tilt
---
✏️ Learnings added
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9195
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-16
Timestamp: 2025-08-21T11:07:05.147Z
Learning: In noobaa-core, multi-system deployments are not supported. Upgrade scripts can safely operate on system_store.data.systems[0] without needing to handle multiple systems.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:6-22
Timestamp: 2025-08-11T06:12:12.318Z
Learning: In the noobaa-core upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js, bucket migration from the internal mongo pool to a new default pool is planned to be handled in separate future PRs with comprehensive testing, rather than being included directly in the pool removal script.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.
```diff
     dbg.log0(`Start: Monogo pool upgrade script...`);
     dbg.log0(`Start: Create new default pool...`);
     const pool_name = `${config.DEFAULT_POOL_NAME}-${system_store.data.systems[0]._id}`;
     await create_new_default_pool(dbg, pool_name, system_store);
     dbg.log0(`End: Create new default pool Created...`);

     const internal_mongo_pool = `${config.INTERNAL_STORAGE_POOL_NAME}-${system_store.data.systems[0]._id}`;
     dbg.log0(`Internal mongo pool id is : ${internal_mongo_pool}`);
-    const pool_ids = system_store.data.pools.filter(pool => pool.name === internal_mongo_pool);
-    if (pool_ids.length > 0) {
-        dbg.log0(`Removing default mongo pool: ${pool_ids[0]._id}`);
-        await system_store.make_changes({ remove: { pools: [pool_ids[0]._id] }});
+    const mongo_pools = system_store.data.pools.filter(pool => pool.name === internal_mongo_pool);

     dbg.log0(`Start: Update bucket default bucket pool with new default pool...`);
     await update_buckets_default_pool(dbg, pool_name, mongo_pools[0], system_store);
     dbg.log0(`End: Updated bucket default bucket pool with new default pool...`);

     if (mongo_pools.length > 0) {
         dbg.log0(`Removing default mongo pool: ${mongo_pools[0]._id}`);
         await system_store.make_changes({ remove: { pools: [mongo_pools[0]._id] }});
     } else {
         dbg.log0('Removing mongo pool: Could not find the mongo pool...');
     }
     dbg.log0(`End: Monogo pool upgrade script...`);
```
💡 Verification agent
🧩 Analysis chain
End-to-end safety checks and idempotency
To make the upgrade script safe to re-run:
- Ensure pool creation is idempotent (skip if exists).
- Skip bucket updates if the tier already points to the new pool.
- Remove internal pools only after successful bucket migration.
- Wrap changes per phase and log counts for visibility.
Run these checks locally:
Also applies to: 38-67, 69-83, 101-112
🏁 Script executed:
```bash
#!/bin/bash
# 1) Confirm there are no remaining references to INTERNAL_STORAGE_POOL_NAME
rg -nP 'INTERNAL_STORAGE_POOL_NAME' src | sed -n '1,200p'
# 2) Confirm we’re not writing pools with non-canonical `owner_id`
rg -nP 'owner_id\s*:' src | sed -n '1,120p'
# 3) Check that tier updates use $set and we never call new_system_store_id() from a system object
rg -nP -n -C2 '\btiers\b.*\bmirrors\b|\bnew_system_store_id\(' src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js
# 4) Sanity: potential multi-system references to systems\[0]
rg -nP -n '\bsystems\[0\]' src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js
```
Length of output: 2475
Enhance idempotency, structural detection & multi‐system support in remove_mongo_pool.js
Please refactor the upgrade script at src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js
(particularly lines 11–31, 38–67, 69–83, 101–112) to address the following mandatory fixes:
- Avoid relying solely on `config.INTERNAL_STORAGE_POOL_NAME`
  - Detect legacy “internal” Mongo pools by checking each pool's `mongo_info` field and `resource_type === 'INTERNAL'` instead of a name-prefix lookup
  - Iterate over all `system_store.data.systems`, not just `systems[0]`, to capture multi-system deployments
- Make default-pool creation idempotent
  - Before calling `create_new_default_pool(dbg, poolName, system_store)`, verify `system_store.data.pools_by_name[poolName]` doesn't already exist
- Harden the bucket-migration step for re-runs
  - In `update_buckets_default_pool`, skip any bucket whose tier already references the new pool (i.e. `spread_pools` includes the new pool's `_id` and the default tier matches)
  - Ensure you use `system_store.new_system_store_id()` (not `system_store.data.systems[i].new_system_store_id()`) when generating new store IDs
- Delay pool removal until all bucket migrations succeed
  - After migrating buckets across every system, confirm none remain on the internal pool before calling `make_changes({ remove: { pools: […] } })`
  - Batch removals per system, commit each phase separately, and log the count of pools removed
- Structure the script into clear, logged phases (a sketch of the first phase follows this list):
  1. Detection & Creation – identify existing default pools, create missing ones
  2. Bucket Migration – update only buckets needing migration, log number updated
  3. Pool Removal – remove only fully-migrated internal pools, log number removed

These changes will ensure the script can be safely re-run, handle multi-system stores, and recover gracefully from partial failures.
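As a rough illustration of the first phase, a detection-and-creation pass might look like this (a minimal sketch assuming the `system_store` and `create_new_default_pool` APIs shown elsewhere in this script; not the mandated implementation):

```js
// Phase 1: Detection & Creation (idempotent, structural detection, all systems)
async function detect_and_create_defaults(dbg, system_store) {
    let created = 0;
    for (const system of system_store.data.systems) {
        const pool_name = `${config.DEFAULT_POOL_NAME}-${system._id}`;
        // Idempotency: skip creation if the default pool already exists
        if (system.pools_by_name && system.pools_by_name[pool_name]) {
            dbg.log0(`Default pool ${pool_name} already exists; skipping creation`);
            continue;
        }
        await create_new_default_pool(dbg, pool_name, system_store);
        created += 1;
    }
    // Structural detection of legacy internal/mongo pools (no name matching)
    const legacy_pools = system_store.data.pools.filter(
        p => p.resource_type === 'INTERNAL' || p.mongo_info
    );
    dbg.log0(`Created ${created} default pool(s); detected ${legacy_pools.length} legacy internal pool(s)`);
    return legacy_pools;
}
```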
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 11–31,
38–67, 69–83 and 101–112: the script currently assumes a single system and finds
internal Mongo pools by name and unconditionally creates/removes pools, which
breaks idempotency and multi-system deployments; refactor into three logged
phases (Detection & Creation, Bucket Migration, Pool Removal), iterate over all
system_store.data.systems, detect legacy internal pools by checking
pool.mongo_info and resource_type === 'INTERNAL' (not name prefix), before
calling create_new_default_pool verify system_store.data.pools_by_name[poolName]
does not exist, in update_buckets_default_pool skip buckets whose tiers already
reference the new pool (spread_pools includes new pool _id and default tier
matches) and use system_store.new_system_store_id() to generate new IDs, after
processing all systems ensure no buckets remain on an internal pool before
removing it, batch removals per system and call make_changes separately per
batch, and add logging counts for created default pools, buckets migrated, and
pools removed to make the script re-runnable and safe for multi-system stores.
```js
    if (mongo_pools.length > 0) {
        dbg.log0(`Removing default mongo pool: ${mongo_pools[0]._id}`);
        await system_store.make_changes({ remove: { pools: [mongo_pools[0]._id] }});
    } else {
        dbg.log0('Removing mongo pool: Could not find the mongo pool...');
    }
    dbg.log0(`End: Monogo pool upgrade script...`);
```
🛠️ Refactor suggestion
Remove all detected internal/mongo pools, not just the first
If more than one matching pool exists (legacy residues), only the first is removed. Remove all to leave no internal/mongo pools behind.
Apply:
```diff
-    if (mongo_pools.length > 0) {
-        dbg.log0(`Removing default mongo pool: ${mongo_pools[0]._id}`);
-        await system_store.make_changes({ remove: { pools: [mongo_pools[0]._id] }});
+    if (mongo_pools.length > 0) {
+        const ids = mongo_pools.map(p => p._id);
+        dbg.log0(`Removing ${ids.length} internal/mongo pool(s): ${ids.join(', ')}`);
+        await system_store.make_changes({ remove: { pools: ids }});
```
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 25 to
31, the script only removes the first detected mongo pool (mongo_pools[0]) when
multiple matching internal/mongo pools may exist; change it to remove all
detected pools by collecting all mongo_pools ids and passing that array to
system_store.make_changes({ remove: { pools: [...] }}), and update the dbg.log0
calls to report how many/which pools are being removed and handle the zero-pools
case accordingly.
```js
async function update_buckets_default_pool(dbg, pool_name, mongo_pool, system_store) {
    const pool = system_store.data.systems[0].pools_by_name[pool_name];
    if (!pool) {
        dbg.error('INVALID_POOL_NAME:');
        throw new Error('INVALID_POOL_NAME');
    }
    if (!mongo_pool || !mongo_pool._id) return;
    if (String(pool._id) === String(mongo_pool._id)) return;
    const buckets_with_internal_pool = _.filter(system_store.data.systems[0].buckets_by_name, bucket =>
        is_using_internal_storage(bucket, mongo_pool));
    if (!buckets_with_internal_pool.length) return;

    // The loop pushes one update per bucket
    const updates = _.uniqBy([], '_id');
    for (const bucket of buckets_with_internal_pool) {
        updates.push({
            _id: bucket.tiering.tiers[0].tier._id,
            mirrors: [{
                _id: system_store.data.systems[0].new_system_store_id(),
                spread_pools: [pool._id]
            }]
        });
    }
    dbg.log0(`Updating ${buckets_with_internal_pool.length} buckets to use ${pool_name} as default resource`);
    await system_store.make_changes({
        update: {
            tiers: updates
        }
    });
}
```
🛠️ Refactor suggestion
Tier update construction has two functional bugs (bad id generation and missing $set) and an overly strict matcher
Issues:
- `system_store.data.systems[0].new_system_store_id()` is invalid; ID creation is on `system_store`, not the system object.
- Updates do not use `$set`; depending on `make_changes` semantics, this may overwrite entire tier docs or be ignored.
- `is_using_internal_storage` assumes the id is under `_id`. In some stores `spread_pools` entries can be ObjectIds (no `_id`), causing false negatives.
Patch:
```diff
-    const updates = _.uniqBy([], '_id');
+    const updates = [];
     for (const bucket of buckets_with_internal_pool) {
-        updates.push({
-            _id: bucket.tiering.tiers[0].tier._id,
-            mirrors: [{
-                _id: system_store.data.systems[0].new_system_store_id(),
-                spread_pools: [pool._id]
-            }]
-        });
+        updates.push({
+            _id: bucket.tiering.tiers[0].tier._id,
+            $set: {
+                mirrors: [{
+                    _id: system_store.new_system_store_id(),
+                    spread_pools: [pool._id]
+                }]
+            }
+        });
     }
-    dbg.log0(`Updating ${buckets_with_internal_pool.length} buckets to use ${pool_name} as default resource`);
+    dbg.log0(`Updating ${buckets_with_internal_pool.length} bucket(s) to use ${pool_name} as default resource`);
     await system_store.make_changes({
         update: {
             tiers: updates
         }
     });
```
Additionally, consider relaxing `is_using_internal_storage` as patched below (see dedicated comment).
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 38 to
67, fix three issues: replace the incorrect call
system_store.data.systems[0].new_system_store_id() with
system_store.new_system_store_id(); build each tier update as a $set operation
(e.g. { _id: <tierId>, $set: { mirrors: [...] } }) so make_changes performs a
partial update instead of replacing the whole document; and relax the matcher
used to detect internal storage by normalizing spread_pools entries to their id
values (handle entries that are either plain ObjectId/strings or objects with an
_id property) and compare via String(...) to mongo_pool._id or pool._id to avoid
false negatives. Ensure updates are unique by tier _id before calling
system_store.make_changes.
```js
async function create_new_default_pool(dbg, pool_name, system_store) {
    // TODO: UPDATE EMAIL
    const account = system_store.get_account_by_email(new SensitiveString('[email protected]'));
    if (!account) {
        dbg.error('NO_SUCH_ACCOUNT', 'No such account email: [email protected]');
    }
    const fs_pool = new_pool_defaults(system_store, pool_name, system_store.data.systems[0]._id, 'HOSTS', 'BLOCK_STORE_FS', account._id);
    fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };
    const default_pool = fs_pool;
    await system_store.make_changes({
        insert: {
            pools: [default_pool]
        }
    });
}
```
🛠️ Refactor suggestion
Owner resolution and pool construction are fragile; avoid a hard-coded admin email and reuse pool_server.new_pool_defaults

Problems:
- Hard-coding `[email protected]` may fail; environments often use `config.OPERATOR_ACCOUNT_EMAIL` or the system owner.
- Local `new_pool_defaults()` risks schema drift (uses `owner_id` vs canonical `owner` and may miss defaults). Reuse `pool_server.new_pool_defaults(...)` for a schema-accurate pool.
Apply:
```diff
+const pool_server = require('../../../server/system_services/pool_server');
@@
-    // TODO: UPDATE EMAIL
-    const account = system_store.get_account_by_email(new SensitiveString('[email protected]'));
-    if (!account) {
-        dbg.error('NO_SUCH_ACCOUNT', 'No such account email: [email protected]');
-    }
-    const fs_pool = new_pool_defaults(system_store, pool_name, system_store.data.systems[0]._id, 'HOSTS', 'BLOCK_STORE_FS', account._id);
+    const system = system_store.data.systems[0];
+    // Prefer the system owner; fall back to operator account if needed.
+    const owner_ref = system.owner && (system.owner._id || system.owner.id || system.owner);
+    const owner = system_store.data.get_by_id ? (system_store.data.get_by_id(owner_ref) || {}) : {};
+    const owner_id = owner._id || owner_ref;
+    if (!owner_id) {
+        dbg.error('MISSING_OWNER', 'Could not resolve system owner for pool creation');
+        throw new Error('MISSING_OWNER');
+    }
+    // Idempotency: skip if pool already exists
+    if (system.pools_by_name && system.pools_by_name[pool_name]) {
+        dbg.log0(`Default pool ${pool_name} already exists; skipping creation`);
+        return;
+    }
+    const fs_pool = pool_server.new_pool_defaults(pool_name, system._id, 'HOSTS', 'BLOCK_STORE_FS', owner_id);
     fs_pool.hosts_pool_info = { is_managed: false, host_count: 0 };
```
Committable suggestion skipped: line range outside the PR's diff.
```js
function new_pool_defaults(system_store, name, system_id, resource_type, pool_node_type, owner_id) {
    const now = Date.now();
    return {
        _id: system_store.new_system_store_id(),
        system: system_id,
        name: name,
        owner_id,
        resource_type: resource_type,
        pool_node_type: pool_node_type,
        storage_stats: {
            blocks_size: 0,
            last_update: now - (2 * config.MD_GRACE_IN_MILLISECONDS)
        },
    };
}
```
🛠️ Refactor suggestion
Local `new_pool_defaults()` duplicates core logic and uses the non-canonical field `owner_id`

This helper risks schema drift and breaks permissions/auditing. Given the refactor above to use `pool_server.new_pool_defaults`, drop this local function to avoid divergence.
Apply:
```diff
-function new_pool_defaults(system_store, name, system_id, resource_type, pool_node_type, owner_id) {
-    const now = Date.now();
-    return {
-        _id: system_store.new_system_store_id(),
-        system: system_id,
-        name: name,
-        owner_id,
-        resource_type: resource_type,
-        pool_node_type: pool_node_type,
-        storage_stats: {
-            blocks_size: 0,
-            last_update: now - (2 * config.MD_GRACE_IN_MILLISECONDS)
-        },
-    };
-}
+// Removed: use pool_server.new_pool_defaults(...) for a canonical pool shape.
```
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 85 to
99, the local helper function new_pool_defaults duplicates core logic and uses a
non-canonical field owner_id; remove this local function and replace its usages
to call the centralized pool_server.new_pool_defaults instead (pass the same
args: system_store, name, system_id, resource_type, pool_node_type, owner_id),
and ensure any created objects match the canonical schema (rename owner_id to
owner if required by pool_server) so there is no schema drift or
permission/audit divergence.
function is_using_internal_storage(bucket, internal_pool) {
    if (!internal_pool || !internal_pool._id) return false;
    const tiers = bucket.tiering && bucket.tiering.tiers;
    if (!tiers || tiers.length !== 1) return false;
    const mirrors = tiers[0].tier.mirrors;
    if (mirrors.length !== 1) return false;
    const spread_pools = mirrors[0].spread_pools;
    if (spread_pools.length !== 1) return false;

    return String(spread_pools[0]._id) === String(internal_pool._id);
}
|
🛠️ Refactor suggestion
Handle both ObjectId and populated refs in is_using_internal_storage

spread_pools[0] might be either an ObjectId or a populated object. The current check misses the former.
Apply:
- const spread_pools = mirrors[0].spread_pools;
+ const spread_pools = mirrors[0].spread_pools;
if (spread_pools.length !== 1) return false;
-
- return String(spread_pools[0]._id) === String(internal_pool._id);
+ const target = spread_pools[0];
+ const pool_id = (target && target._id) || target;
+ return String(pool_id) === String(internal_pool._id);
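As a standalone illustration of the normalization, a small snippet runnable with plain Node; the hex strings are made-up ids:

// Compare a pool reference that may be either a raw id or a populated
// document against a known pool id by normalizing both sides to strings.
function refers_to_pool(ref, pool_id) {
    const id = (ref && ref._id) || ref; // a populated doc carries _id, a raw ref is the id itself
    return String(id) === String(pool_id);
}

console.log(refers_to_pool({ _id: '64b0c0ffee' }, '64b0c0ffee')); // true (populated)
console.log(refers_to_pool('64b0c0ffee', '64b0c0ffee'));          // true (raw id)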
🤖 Prompt for AI Agents
In src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js around lines 101 to
112, the function is_using_internal_storage assumes spread_pools[0] is always a
populated object and therefore accesses _id; update the comparison to handle
both populated refs and raw ObjectId refs by normalizing the value before
comparing (e.g., use the spread_pools[0]._id if present, otherwise use
spread_pools[0] directly, then coerce both sides to String or use toString()
safely), and ensure you defensively check existence (optional chaining) to avoid
runtime errors.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/server/system_services/account_server.js (5)
12-17
: Import pool_server to reuse centralized default-pool logic

Subsequent fixes rely on selecting the optimal default pool. Importing pool_server here keeps behavior consistent with bucket_server and the PR's new helpers.

 const config = require('../../../config');
 const dbg = require('../../util/debug_module')(__filename);
 const { RpcError } = require('../../rpc');
 const Dispatcher = require('../notifications/dispatcher');
 const SensitiveString = require('../../util/sensitive_string');
+const pool_server = require('../system_services/pool_server');
: Hard-coded 'backingstores' fallback breaks with system-scoped pool names; use get_optimal_default_pool()When
default_resource
isn’t provided, the code assumesreq.system.pools_by_name.backingstores
exists. With the new naming (DEFAULT_POOL_NAME-{system_id}
orDEFAULT_POOL_NAME_{system_id}
), this key won’t exist, causing a BAD_REQUEST where older behavior worked.Use the centralized selector.
- // Default pool resource is backingstores - const resource = req.rpc_params.default_resource ? - req.system.pools_by_name[req.rpc_params.default_resource] || - (req.system.namespace_resources_by_name && req.system.namespace_resources_by_name[req.rpc_params.default_resource]) : - req.system.pools_by_name.backingstores; + // Default pool resource: resolve optimally from system-scoped default pool + const resource = req.rpc_params.default_resource + ? (req.system.pools_by_name[req.rpc_params.default_resource] || + (req.system.namespace_resources_by_name && + req.system.namespace_resources_by_name[req.rpc_params.default_resource])) + : pool_server.get_optimal_default_pool(req.system);
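As a usage illustration of the selector precedence the diff establishes (explicit resource name first, then the system-scoped default), with made-up lookup maps; resolve_default_resource is a hypothetical helper, not part of the codebase:

// Illustration only: resolve the default resource in the suggested order.
function resolve_default_resource(system, requested_name, get_optimal_default_pool) {
    if (requested_name) {
        return system.pools_by_name[requested_name] ||
            (system.namespace_resources_by_name &&
                system.namespace_resources_by_name[requested_name]);
    }
    return get_optimal_default_pool(system); // fall back to the system default pool
}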
1398-1402
: Bug: comparing an object to 0 always fails; check pool count instead

req.system.pools_by_name === 0 is always false. If the system truly has no pools, we won't block creation as intended.

-    if (req.system.pools_by_name === 0) {
+    if (!req.system.pools_by_name || Object.keys(req.system.pools_by_name).length === 0) {
         throw new RpcError('No resources in the system - Can\'t create accounts');
     }
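To see why the original guard never fires, a plain-Node snippet comparing an object map against 0 versus checking its key count:

// An object is never strictly equal to the number 0, so the old guard is dead code.
const pools_by_name = {}; // a system with no pools
console.log(pools_by_name === 0);                     // false, always
console.log(Object.keys(pools_by_name).length === 0); // true, the intended check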
376-384
: Potential TypeError: uses req.system after assigning fallback to local 'system'

When req.system is undefined, you assign system = system_store.data.systems_by_name[...] (Line 366) but still access req.system.pools_by_name here. This will throw. Also, the logic doesn't distinguish default vs non-default pools; it blocks if there are any pools at all. Align it with the "non-internal" check used in create-account validation.

-    if (!req.rpc_params.default_resource) {
-        const pools = _.filter(req.system.pools_by_name);
-        if (pools.length) { // has resources which is not internal - must supply resource
-            throw new RpcError('BAD_REQUEST', 'Enabling S3 requires providing default_resource');
-        }
-    }
+    if (!req.rpc_params.default_resource) {
+        const pools = Object.values(system.pools_by_name || {});
+        const has_non_internal = pools.some(p => {
+            const is_default =
+                (p.hosts_pool_info && p.hosts_pool_info.backingstore) ||
+                (p.cloud_pool_info && p.cloud_pool_info.backingstore) ||
+                (typeof p.name === 'string' && (
+                    p.name === config.DEFAULT_POOL_NAME ||
+                    p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`) ||
+                    p.name.startsWith(`${config.DEFAULT_POOL_NAME}_`)
+                ));
+            return !is_default;
+        });
+        if (has_non_internal) {
+            // has resources which are not the internal/default one - must supply resource
+            throw new RpcError('BAD_REQUEST', 'Enabling S3 requires providing default_resource');
+        }
+    }
390-393
: Use optimal default pool instead of arbitrary first pool

Choosing the first entry in pools_by_name is nondeterministic and may select a non-default pool. Use get_optimal_default_pool(system) and fall back only if not found.

-    } else {
-        update.default_resource = (Object.values(req.system.pools_by_name)[0])._id;
-    }
+    } else {
+        const def_pool = pool_server.get_optimal_default_pool(system);
+        update.default_resource = (def_pool && def_pool._id) ||
+            ((Object.values(system.pools_by_name)[0] || {})._id);
+    }
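A tiny illustration of why "first entry" is fragile; the pool objects below are made-up stand-ins:

// Which pool is "first" depends purely on insertion order, not on any notion
// of a default, so a cloud pool inserted earlier wins over the default pool.
const pools_by_name = {
    'aws-pool': { _id: 'p1', cloud_pool_info: {} },
    'backingstores-sys1': { _id: 'p2', hosts_pool_info: { backingstore: true } },
};
console.log(Object.values(pools_by_name)[0]._id); // 'p1', not the default pool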
🧹 Nitpick comments (1)
src/server/system_services/account_server.js (1)
88-93
: Update the in-code comment to reflect new behavior

The in-code comment still says "Default pool resource is backingstores". Update it to reflect system-scoped default pool selection.

-    // Default pool resource is backingstores
+    // Default pool resource is the system-scoped default pool (e.g., `${config.DEFAULT_POOL_NAME}-{system_id}`)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
src/server/system_services/account_server.js
(1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:6-22
Timestamp: 2025-08-11T06:12:12.318Z
Learning: In the noobaa-core upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js, bucket migration from the internal mongo pool to a new default pool is planned to be handled in separate future PRs with comprehensive testing, rather than being included directly in the pool removal script.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.
📚 Learning: 2025-08-08T13:10:36.141Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.
Applied to files:
src/server/system_services/account_server.js
📚 Learning: 2025-08-08T13:13:42.192Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.192Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.
Applied to files:
src/server/system_services/account_server.js
📚 Learning: 2025-08-08T13:12:46.728Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.728Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.
Applied to files:
src/server/system_services/account_server.js
📚 Learning: 2025-08-08T13:10:57.608Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1308-1311
Timestamp: 2025-08-08T13:10:57.608Z
Learning: In noobaa-core, config.INTERNAL_STORAGE_POOL_NAME has been removed. Use config.DEFAULT_POOL_NAME for internal/backingstore pool identification. Prefer exact match first, with prefix fallback to handle potential system-id suffixes in pool names.
Applied to files:
src/server/system_services/account_server.js
📚 Learning: 2025-08-08T13:08:38.361Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/bucket_server.js:1324-1327
Timestamp: 2025-08-08T13:08:38.361Z
Learning: In src/server/system_services/bucket_server.js, the update_all_buckets_default_pool(req) handler expects req.rpc_params.pool_name to be a plain string (not a SensitiveString wrapper), so calling .unwrap() is not needed there.
Applied to files:
src/server/system_services/account_server.js
🧬 Code graph analysis (1)
src/server/system_services/account_server.js (1)
config.js (1): config (7-7)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Build Noobaa Image
- GitHub Check: run-package-lock-validation
🔇 Additional comments (2)
src/server/system_services/account_server.js (2)
1386-1389
: Nit: ensure stable stringification of system id

If req.system._id is not a plain string, implicit stringification usually works (ObjectId has toString), but being explicit avoids surprises.

-        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
+        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${String(req.system._id)}`) :

Note: This nit is moot if you apply the earlier structural detection refactor. In that case, skip this change.
1-1528
: Ensure PR description matches the canonical pool-naming format

All existing code and tests use the hyphenated, plural form based on config.DEFAULT_POOL_NAME (which is set to "backingstores"). That yields pool names like:

- backingstores-<system_id>

Confirmations:
• config.js (line 246): config.DEFAULT_POOL_NAME = 'backingstores'
• remove_mongo_pool.js (5.20.0): const pool_name = `${config.DEFAULT_POOL_NAME}-${system_id}`;
• system_server.js (line 192): const pool_name = `${config.DEFAULT_POOL_NAME}-${system._id}`;
• pool_server.js: detection via pool.name === `${config.DEFAULT_POOL_NAME}-${pool.system._id}`
• integration tests reference `${config.DEFAULT_POOL_NAME}-${system._id}`; no underscore-based constructions found

No code paths create or match names like backingstore_<id>. Please update the PR description to reference the hyphenated plural format (backingstores-<system_id>), ensuring alignment between docs and implementation.
     const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
-        Object.values(req.system.pools_by_name).some(p => p.name !== 'backingstores') :
+        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
         false;
|
🛠️ Refactor suggestion
Default-pool detection via exact name is brittle and will misclassify pools

Comparing against a single formatted name ${config.DEFAULT_POOL_NAME}-${req.system._id} breaks on:

- Legacy systems where the pool is still named exactly backingstores (no suffix).
- Systems using underscore vs hyphen (PR text says backingstore_{id}).
- Any future suffixing pattern changes.

Result: has_non_internal_resources may become true even when only the internal/default pool exists, forcing clients to provide default_resource unnecessarily.

Replace with structural detection and prefix fallback (per past learnings), so only non-default pools are counted as "non-internal".

-    const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
-        Object.values(req.system.pools_by_name).some(p => p.name !== `${config.DEFAULT_POOL_NAME}-${req.system._id}`) :
-        false;
+    const has_non_internal_resources = (req.system && req.system.pools_by_name) ?
+        Object.values(req.system.pools_by_name).some(p => {
+            const is_default =
+                // Prefer structural markers first
+                (p.hosts_pool_info && p.hosts_pool_info.backingstore) ||
+                (p.cloud_pool_info && p.cloud_pool_info.backingstore) ||
+                // Fallback: name-based prefix match to handle optional suffixes
+                (typeof p.name === 'string' && (
+                    p.name === config.DEFAULT_POOL_NAME ||
+                    p.name.startsWith(`${config.DEFAULT_POOL_NAME}-`) ||
+                    p.name.startsWith(`${config.DEFAULT_POOL_NAME}_`)
+                ));
+            return !is_default;
+        }) :
+        false;

Would you prefer centralizing this by calling a helper (e.g., pool_server.get_optimal_default_pool) instead? I can refactor to that to keep logic in one place.
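A minimal sketch of what such a centralized predicate could look like, using only the structural markers and naming convention discussed above; is_default_pool is hypothetical, not the actual pool_server API:

// Sketch: classify a pool as the internal/default pool by structure first,
// then by name prefix to tolerate optional system-id suffixes.
function is_default_pool(pool, default_pool_name) {
    return Boolean(
        (pool.hosts_pool_info && pool.hosts_pool_info.backingstore) ||
        (pool.cloud_pool_info && pool.cloud_pool_info.backingstore) ||
        (typeof pool.name === 'string' && (
            pool.name === default_pool_name ||
            pool.name.startsWith(`${default_pool_name}-`) ||
            pool.name.startsWith(`${default_pool_name}_`)
        ))
    );
}

// Only the cloud pool counts as non-internal:
const pools = [
    { name: 'backingstores-68a1', hosts_pool_info: { backingstore: true } },
    { name: 'aws-pool', cloud_pool_info: {} },
];
console.log(pools.filter(p => !is_default_pool(p, 'backingstores')).map(p => p.name)); // ['aws-pool']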
🤖 Prompt for AI Agents
In src/server/system_services/account_server.js around lines 1386 to 1389, the
current exact-name check against `${config.DEFAULT_POOL_NAME}-${req.system._id}`
is brittle and misclassifies default/internal pools; replace it with structural
detection that first checks any explicit internal/default marker on the pool
object (e.g., pool.internal, pool.isDefault, or same owner/system id), then as a
fallback accept names that start with the DEFAULT_POOL_NAME followed by optional
separators and the system id or nothing (e.g., /^backingstore(?:[-_]?<id>)?$/
semantics) so only truly non-default pools are counted as non-internal;
alternatively call the centralized helper (e.g.,
pool_server.get_optimal_default_pool) if available to keep logic in one place.
Describe the Problem
Create a default backing store in the upgrade scenario.
Replace bucket tiers that use the internal pool with the new backingstore.
Explain the Changes
Issues: Fixed #xxx / Gap #xxx
Testing Instructions:
- Run the remove_mongo_pool.js upgrade script.
- Test removing mongo_pool on upgrade to version 5.20.0.
- Test in the test_upgrade_scripts.js file with 4.19 code (a sketch follows below).
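For context, a sketch of the shape such a test could take; the paths, suite names, and the run() assertion are assumptions about the local setup, not the actual test file:

// Hypothetical mocha test sketch; run via the mocha CLI so describe/it are globals.
const assert = require('assert');

describe('upgrade scripts 5.20.0', function() {
    it('exposes a runnable remove_mongo_pool script', function() {
        const script = require('../../upgrade/upgrade_scripts/5.20.0/remove_mongo_pool');
        // A fuller test would seed a 4.19-style system_store containing the
        // internal mongo pool, call script.run(), and assert the pool is gone
        // and the system-scoped default pool exists.
        assert.strictEqual(typeof script.run, 'function');
    });
});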
Summary by CodeRabbit

New Features
Refactor
Chores
Behavior Change