Mctp bridge support #71


Merged: 29 commits, Aug 25, 2025

Conversation

faizana-nvidia
Contributor

This PR aims to introduce ALLOCATE_ENDPOINT_ID message support along with MCTP Bridge endpoint into the existing peer structure.

@jk-ozlabs
Member

Thanks for the contribution! I'll get to a proper review shortly.

I have some pending changes that rework a lot of the peer, link and network allocation mechanisms. That shouldn't affect your code too much, but I'll request a rebase once that is merged.

@jk-ozlabs jk-ozlabs self-assigned this Apr 25, 2025
@faizana-nvidia
Contributor Author

> Thanks for the contribution! I'll get to a proper review shortly.
>
> I have some pending changes that rework a lot of the peer, link and network allocation mechanisms. That shouldn't affect your code too much, but I'll request a rebase once that is merged.

Sure no problem

Member

@jk-ozlabs jk-ozlabs left a comment


So the main design point here is how we're handling the pool allocations. It looks like your particular use-case is around static allocations, which I'll focus on here.

As I mentioned in the dbus changes, we cannot add arguments without further version-compatibility changes. After a bit of chatting with the team, I think a better approach would be to add a new dbus call to explicitly allocate a bridge and a predefined pool (which would include the pool size). Perhaps something like:

AllocateBridgeStatic(addr: ay, pool_start: y, pool_size: y)
  • where the Set Endpoint ID response must match the expected pool size.

(we would also want purely-dynamic pools to be allocated from SetupEndpoint and friends, but that would result in a dynamic pool allocation. This dynamic pool would be defined either by a toml config option, or via a new TMBO dbus interface. However, we can handle those later, I think)

Would that work?
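For illustration, the pool-size check the proposed call implies could be modeled like this. This is a Python sketch with hypothetical names; mctpd itself is C, and none of these identifiers come from the codebase:

```python
# Hypothetical model of validating a static bridge pool for the proposed
# AllocateBridgeStatic(addr: ay, pool_start: y, pool_size: y) call.
EID_MIN, EID_MAX = 8, 0xfe  # usable MCTP EIDs: 0-7 reserved, 0xff broadcast

def validate_static_pool(pool_start: int, pool_size: int,
                         reported_size: int) -> bool:
    # the requested pool must fit within the usable EID space
    if not (EID_MIN <= pool_start and pool_start + pool_size - 1 <= EID_MAX):
        return False
    # the Set Endpoint ID response must match the expected pool size
    return reported_size == pool_size
```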

@jk-ozlabs
Member

In general, can you add a bit more of an explanation / rationale as part of your commit messages, instead of just log output? There is some good guidance for commit messages up in the "Patch formatting and changelogs" section of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/5.Posting.rst

@jk-ozlabs
Member

We'll also need to consider the routing setup for bridged endpoints. Ideally we would:

  1. create a route for the bridge itself, plus a neighbour entry with the appropriate physical address data
  2. create a range route for the allocated endpoint pool, using the bridge as a gateway for that range (ie, no neighbour entry)

the issue is that there is no kernel support for (2) at present: we need some kernel changes to implement gateway routes. It is possible to create "somewhat-fake" routes for those endpoints, using a neighbour table entry for each (bridged) peer that uses the bridge phys address, but that's a bit suboptimal. I'd prefer not to encode that hack into mctpd if possible.

I do have a todo for the kernel changes necessary for that, sounds like I should get onto it!

@santoshpuranik

> We'll also need to consider the routing setup for bridged endpoints. Ideally we would:
>
> 1. create a route for the bridge itself, plus a neighbour entry with the appropriate physical address data
>
> 2. create a range route for the allocated endpoint pool, using the bridge as a gateway for that range (ie, no neighbour entry)
>
> the issue is that there is no kernel support for (2) at present: we need some kernel changes to implement gateway routes. It is possible to create "somewhat-fake" routes for those endpoints, using a neighbour table entry for each (bridged) peer that uses the bridge phys address, but that's a bit suboptimal. I'd prefer not to encode that hack into mctpd if possible.
>
> I do have a todo for the kernel changes necessary for that, sounds like I should get onto it!

IIUC, 1 is what we can achieve with the tools we have today, right? For example: add a route to the bridge itself and then `mctp route add <downstream eid> via <bridge net if>`, essentially adding a neighbour table entry? Would this not continue to work, since from the TMBO's point of view all packets go via the bridge route?

When you say sub-optimal, are you referring to the neighbour lookup that happens in net/mctp/route.c? Noob question, how does a gateway impl make that faster?

When is the gateway support in kernel for MCTP nets planned? We can help if you have a design in mind.

@jk-ozlabs
Member

Hi Santosh,

> IIUC, 1 is what we can achieve with the tools we have today, right?

Yes, but it requires a lot of workaround to set up.

> For ex: add route to the bridge itself and then mctp route add <downstream eid> via <bridge net if>, essentially adding a neighbour table entry?

That isn't adding a neighbour table entry though; just a route. USB is a little different in that there are no neighbour table entries required, because there is no physical addressing.

For a bridge, using this scheme would require:

  1. adding the route to the bridge EID
  2. adding the neighbour entry for the bridge EID
  3. adding individual routes for each EID in the EID pool
  4. adding individual fake neighbour table entries for each EID in the EID pool, which would (incorrectly) represent that the EID has a specific physical address (ie., that of the bridge)

(for USB, we don't need (2) or (4), but that's purely a property of the transport type. We would need those to be supported in mctpd to allow other transport types like i2c).

This would work, but it's messy.

> When you say sub-optimal, are you referring to the neighbour lookup that happens in net/mctp/route.c?

No, the neighbour lookups happen in net/mctp/neigh.c.

> Noob question, how does a gateway impl make that faster?

Not so much faster as tidier. With a gateway route, we would require:

  1. adding the route to the bridge EID
  2. adding the neighbour entry for the bridge EID
  3. adding one range route for the entire EID pool, referencing the bridge EID as the gateway

No fake neighbour table entries are required - since the kernel just looks up the gateway physical address from the gateway's neighbour table entry.
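As a toy model (not kernel or mctpd code; all names and values here are invented), the difference is that the gateway scheme needs only the bridge's neighbour entry:

```python
# Toy route/neighbour tables: one direct route plus one gateway range
# route, with a single real neighbour entry for the bridge.
BRIDGE_EID = 0x20
neigh = {BRIDGE_EID: "phys:0x42"}       # only the bridge has a phys address
routes = [
    (range(0x20, 0x21), None),          # direct route to the bridge itself
    (range(0x21, 0x29), BRIDGE_EID),    # pool range, via the bridge gateway
]

def resolve_phys(eid: int) -> str:
    for rng, gateway in routes:
        if eid in rng:
            hop = gateway if gateway is not None else eid
            # the kernel would look up the gateway's neighbour entry here
            return neigh[hop]
    raise KeyError(f"no route to EID {eid:#x}")
```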

> When is the gateway support in kernel for MCTP nets planned?

I have it done - will push a development branch shortly.

@jk-ozlabs
Member

> I have it done - will push a development branch shortly.

https://github.com/CodeConstruct/linux/tree/dev/forwarding

@santoshpuranik

Hi Jeremy,

Thank you for the detailed response.

> Hi Santosh,
>
> > IIUC, 1 is what we can achieve with the tools we have today, right?
>
> Yes, but it requires a lot of workaround to set up.
>
> > For ex: add route to the bridge itself and then mctp route add <downstream eid> via <bridge net if>, essentially adding a neighbour table entry?
>
> That isn't adding a neighbour table entry though; just a route. USB is a little different in that there are no neighbour table entries required, because there is no physical addressing.
>
> For a bridge, using this scheme would require:
>
> 1. adding the route to the bridge EID
> 2. adding the neighbour entry for the bridge EID
> 3. adding individual routes for each EID in the EID pool
> 4. adding individual fake neighbour table entries for each EID in the EID pool, which would (incorrectly) represent that the EID has a specific physical address (ie., that of the bridge)

Ack, I see something like I2C would need a PHY address.

> > When you say sub-optimal, are you referring to the neighbour lookup that happens in net/mctp/route.c?
>
> No, the neighbour lookups happen in net/mctp/neigh.c.

Ack. I should have said the neigh_lookup call that happens in route.c!

> > Noob question, how does a gateway impl make that faster?
>
> Not so much faster as tidier. With a gateway route, we would require:
>
> 1. adding the route to the bridge EID
> 2. adding the neighbour entry for the bridge EID
> 3. adding one range route for the entire EID pool, referencing the bridge EID as the gateway
>
> No fake neighbour table entries are required - since the kernel just looks up the gateway physical address from the gateway's neighbour table entry.

Thank you, that does seem cleaner.

@jk-ozlabs
Member

And for the userspace changes, my dev/gateway branch here:

https://github.com/CodeConstruct/mctp/tree/dev/gateway

@santoshpuranik

@jk-ozlabs : I think we agree that mctpd has to poll all allocated endpoints with a Get Endpoint ID periodically. I think the first thing we'd need to enable in order to do that is to make MCTP requests and responses asynchronous. Do you have a design in mind to make MCTP requests async (like via a request queue per allocated endpoint)?

@jk-ozlabs
Member

jk-ozlabs commented Jun 3, 2025

> I think we agree that mctpd has to poll all allocated endpoints with a Get Endpoint ID periodically

Just as a clarification - not all endpoints, but EIDs within allocated endpoint ranges, which have not yet been enumerated. And this is assuming we expect mctpd to automatically enumerate those bridged devices. I think the latter is reasonable, but we don't have a specific design point around that yet.

With that in mind, yes, we probably want to make that async, as those requests are likely to not have a response, and therefore we're at worst-case waiting time.

In terms of design: we probably don't want a struct peer to be created for those endpoints, as they don't strictly exist as proper peers at that stage. I think a minimal-impact approach may be to keep a set of the allocated (but not-yet-enumerated) ranges, and periodically send the Get Endpoint ID requests.

We don't necessarily need to keep much state for that polling mechanism (ie, between request and response) - receiving a Get Endpoint ID response for anything in that range would trigger the enumeration process.
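The minimal-impact approach described here might be sketched as follows. This is hypothetical Python; mctpd is implemented in C and contains no such class:

```python
# Track allocated-but-not-yet-enumerated EIDs; poll each with a
# Get Endpoint ID, and hand an EID over to enumeration on first response.
class BridgePollSet:
    def __init__(self):
        self.pending = set()

    def add_range(self, start: int, size: int):
        self.pending.update(range(start, start + size))

    def poll_targets(self):
        # EIDs to send Get Endpoint ID requests to on this cycle
        return sorted(self.pending)

    def on_response(self, eid: int) -> bool:
        # True means: stop polling this EID and start enumerating it
        if eid in self.pending:
            self.pending.discard(eid)
            return True
        return False
```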

@santoshpuranik

> > I think we agree that mctpd has to poll all allocated endpoints with a Get Endpoint ID periodically
>
> Just as a clarification - not all endpoints, but EIDs within allocated endpoint ranges, which have not yet been enumerated.

Wouldn't we also want to poll enumerated endpoints under the bridge to determine when they "went away"?

> In terms of design: we probably don't want a struct peer to be created for those endpoints, as they don't strictly exist as proper peers at that stage. I think a minimal-impact approach may be to keep a set of the allocated (but not-yet-enumerated) ranges, and periodically send the Get Endpoint ID requests.
>
> We don't necessarily need to keep much state for that polling mechanism (ie, between request and response) - receiving a Get Endpoint ID response for anything in that range would trigger the enumeration process.

Ack. How periodically do you think we should check? Same as the logic for determining when to set endpoint state as degraded (TReclaim/2)?

@jk-ozlabs
Member

> Wouldn't we also want to poll enumerated endpoints under the bridge to determine when they "went away"?

No, and we don't do that with directly-attached endpoints either. The current philosophy is that we don't care if an endpoint disappears, until some application calls Recover. If there's no application using the endpoint, then no need to monitor for its presence.

[I'm okay with revisiting this, or handling bridged endpoints differently, if there's a compelling argument for doing so]

> How periodically do you think we should check?

Treclaim/2 seems a bit too often to me, but might be fine as a starting point. I suspect that an ideal approach would be to poll more regularly when a bridge pool is initially allocated, then reduce frequency. However, let's not complicate the initial implementation too much here, and just use a configurable constant.
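The poll-fast-then-back-off idea could be as simple as the following sketch (the doubling factor and ceiling are placeholders, not values from mctpd or the spec):

```python
def next_poll_interval(current: float, ceiling: float = 120.0) -> float:
    """Double the polling interval after each unanswered cycle, capped."""
    return min(current * 2, ceiling)
```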

@jk-ozlabs
Member

jk-ozlabs commented Jun 3, 2025

.. and speaking of Recover, we might need to revisit how we handle that for bridged endpoints, as a MCTP-level recovery operation probably isn't applicable as a directly-attached device (in the same manner, at least). CC @amboar.

@santoshpuranik

> No, and we don't do that with directly-attached endpoints either.

So we have a case where we will have to call allocate endpoint ID on the bridge device when not all of its downstream devices are available. In such a case, how do you think we can determine when those downstream EIDs become available unless we poll?

@jk-ozlabs
Member

> how do you think we can determine when those downstream EIDs become available unless we poll

I am suggesting we poll. Just that we then stop polling once we enumerate the endpoint.

@santoshpuranik

> > how do you think we can determine when those downstream EIDs become available unless we poll
>
> I am suggesting we poll. Just that we then stop polling once we enumerate the endpoint.

Ah, ack, then.

@amboar
Contributor

amboar commented Jun 11, 2025

> .. and speaking of Recover, we might need to revisit how we handle that for bridged endpoints, as a MCTP-level recovery operation probably isn't applicable as a directly-attached device (in the same manner, at least). CC @amboar.

It will need some rework as currently it assumes the peer is a neighbour and uses physical addressing for Get Endpoint ID. We still want a mechanism to establish the loss of a non-neighbour peer though. I think Recover is fine for that. We need to use some message for polling, and despite the absurdity I think Get Endpoint ID is also fine for that, just we can't use physical addressing if the peer is behind a bridge. The observed behaviour of Recover would be the same - if the peer is responsive then the D-Bus object remains exposed, or if it's unresponsive then the object is removed. The difference between a peer being unresponsive as opposed to not having yet been assigned an address cannot be determined across the bridge, so in that case we skip the substance of the recovery operation (EID re-assignment). That's a responsibility of the bridge node anyway.

@jk-ozlabs
Member

Thanks for that, Andrew.

There might be some commonality between the peers undergoing (non-local) recovery, and those EIDs that are behind a bridge, but not-yet enumerated. If a Recover of a non-local endpoint fails (ie, the Get Endpoint ID commands involved in the Recover process all timeout), then we should return that EID to the "allocated but not yet enumerated" EID set, which means we will continue to send periodic Get Endpoint ID commands (perhaps on a less frequent basis though).

The same should occur for a Remove too.

@amboar
Contributor

amboar commented Jun 11, 2025

Yep, that sounds sensible.

@faizana-nvidia
Contributor Author

Thank you all for taking the time to look into the PR.

I've addressed the review comments on the previous commits and added a new commit for the MCTP bridge design doc; I still need to push the polling mechanism.

@jk-ozlabs
Member

Thanks for the updates! A couple of comments:

  1. We don't really do design proposal docs as files in the repo; it's great to see your recap of the discussion points from this PR, but there's no need for that format to be long-lived in the repo itself. I would suggest turning this into a user-consumable document describing how things work according to your new implementation. Any dbus API changes belong in the mctpd.md document.

  2. Before implementing this new dbus API, we would need some confirmation that non-contiguous pool allocations are permissible. I have raised an issue with the PMCI WG, (#1540, if you have access), and would like at least some indication that the requirement can be relaxed before we commit to the separate pool ranges.

  3. In order to reduce the upfront work, you may want to skip the endpoint polling for the initial PR; the changes will still be useful in that au.com.codeconstruct.MCTP.Network1.LearnEndpoint can be used to enumerate downstream devices manually (once the pool is allocated, and we can route to those endpoints).

  4. You have a couple of cases where you add something in an initial patch, then re-work it in the follow-up patch. This makes review overly complicated.

  5. Super minor, but the formatting of introduced changes is inconsistent. Given there's still some work to do before this series is ready, I will apply the tree-wide reformat shortly, and add a .clang-format.

@faizana-nvidia faizana-nvidia force-pushed the mctp-bridge-support branch 3 times, most recently from bf8f331 to fa59ed7 Compare June 30, 2025 21:44
@faizana-nvidia
Contributor Author

faizana-nvidia commented Jun 30, 2025

> Thanks for the updates! A couple of comments:
>
> 1. We don't really do design proposal docs as files in the repo; it's great to see your recap of the discussion points from this PR, but there's no need for that format to be long-lived in the repo itself. I would suggest turning this into a user-consumable document describing how things work according to your new implementation. Any dbus API changes belong in the mctpd.md document.
> 2. Before implementing this new dbus API, we would need some confirmation that non-contiguous pool allocations are permissible. I have raised an issue with the PMCI WG, (#1540, if you have access), and would like at least some indication that the requirement can be relaxed before we commit to the separate pool ranges.
> 3. In order to reduce the upfront work, you may want to skip the endpoint polling for the initial PR; the changes will still be useful in that au.com.codeconstruct.MCTP.Network1.LearnEndpoint can be used to enumerate downstream devices manually (once the pool is allocated, and we can route to those endpoints).
> 4. You have a couple of cases where you add something in an initial patch, then re-work it in the follow-up patch. This makes review overly complicated.
> 5. Super minor, but the formatting of introduced changes is inconsistent. Given there's still some work to do before this series is ready, I will apply the tree-wide reformat shortly, and add a .clang-format.

Hello Jeremy,

Thank you for looking over the commits. Based on your comment #1, I have removed the new .md file that captured the MCTP bridge support details from the PR, and updated the existing mctpd.md file with information about the new dbus API, AssignBridgeStatic. Regarding the user-consumable document, I'm not sure what it should contain; if you could let me know, I can create one and update the PR.

I recently got permission for the PMCI WG and have glanced over what was stated in issue #1540. Basically, the idea is to split the bus owner's EID pool: segregate a chunk of EIDs for bridges' downstream pools at the higher end of the pool, while keeping the lower end for non-bridge devices. This would help with dynamic EID assignment of downstream-pool devices in case multiple bridges exist under the same network.

My current implementation finds a contiguous EID chunk of min(requested pool size, bridge's pool size capability) from the available bus owner pool. We begin looking from the requested pool_start (static case), or from the EID after the bridge's own EID (dynamic case), and search until we find a correctly sized chunk, marking that EID as pool_start. I based this on the same line of the spec for which you raised the issue.
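A rough model of that search, with hypothetical names (the real implementation is C and walks mctpd's peer structures rather than a set):

```python
# Find a contiguous chunk of min(requested, capability) free EIDs,
# scanning upward from `start` (the static pool_start, or the EID after
# the bridge's own EID in the dynamic case).
def find_pool_start(used: set, start: int, requested: int,
                    capability: int, eid_max: int = 0xfe):
    size = min(requested, capability)
    eid = start
    while eid + size - 1 <= eid_max:
        if not any(e in used for e in range(eid, eid + size)):
            return eid, size  # pool_start and granted size
        eid += 1
    return None  # no contiguous chunk available
```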

> In order to reduce the upfront work, you may want to skip the endpoint polling for the initial PR; the changes will still be useful in that au.com.codeconstruct.MCTP.Network1.LearnEndpoint can be used to enumerate downstream devices manually (once the pool is allocated, and we can route to those endpoints).

I can create a new PR for endpoint polling, if that's what you mean, and skip it for this PR. Also, for adding a route from the downstream endpoint to the bridge, we would need your implementation to be merged for both the Linux kernel and mctpd. Internally I've tested my polling logic with your pulled changes, but I haven't picked them up for this PR, so discovery of downstream EIDs via LearnEndpoint would probably not be possible with only this PR.

> 1. You have a couple of cases where you add something in an initial patch, then re-work it in the follow-up patch. This makes review overly complicated.
> 2. Super minor, but the formatting of introduced changes is inconsistent. Given there's still some work to do before this series is ready, I will apply the tree-wide reformat shortly, and add a .clang-format.

I've updated the patch set now for easier review; hope it helps. Let me know if I can do anything else to further ease the review. Thanks for your .clang-format; once that is pushed I will apply it to my changes.

faizana-nvidia and others added 8 commits August 25, 2025 10:36
* updated mctpd.md with new mctp bridge support for dynamic eid
assignment from AssignEndpoint d-bus call

Signed-off-by: Faizan Ali <[email protected]>
Add new test for validating AssignEndpoint D-Bus method
to verify bridge endpoint EID allocation being contiguous to its
downstream eids. Add Allocate Endpoint control message support with
new endpoint property for allocated pool size also assign dynamic eid
contiguous to bridge during Allocate Endpoint control message.

Signed-off-by: Faizan Ali <[email protected]>
New endpoint object interface au.com.codeconstruct.MCTP.Bridge1
which will capture details of bridge type endpoint such as
pool start, pool end.

Update test framework with new test methods to validate bridge
pool assignment.

[Minor rebase rework from Jeremy Kerr <[email protected]>; don't
fail the allocation on signal emit failure]

Signed-off-by: Faizan Ali <[email protected]>
Currently, test_assign_dynamic_bridge_eid test both the bridge
assignment, and conflicts against static EIDs.

Instead, split this into two smaller tests, which provide a base for
future bridge-conflict tests.

Signed-off-by: Jeremy Kerr <[email protected]>
In addition to the static assignments, we want to ensure that
LearnEndpoint does not result in EID conflicts.

Signed-off-by: Jeremy Kerr <[email protected]>
… ep assignment

We want to ensure that running out of bridge range space does not cause
a failure to allocate a non-bridge EID. We speculatively allocate before
we determine bridge/non-bridge status, so this may cause issues.

Signed-off-by: Jeremy Kerr <[email protected]>
add_peer() returns -EEXIST if a proposed EID is already allocated to a
peer, but -EADDRNOTAVAIL if it is allocated to a bridge.

Callers are expecting EEXIST for conflicts, so use that instead.

Signed-off-by: Jeremy Kerr <[email protected]>
@jk-ozlabs jk-ozlabs force-pushed the mctp-bridge-support branch from ebe93fc to 5a90891 Compare August 25, 2025 02:37
Spacing fixes, and a simplification for the Bridge1 interface
description. Re-order pool properties to describe in start -> end order.

Signed-off-by: Jeremy Kerr <[email protected]>
The spec currently requires this, but that may change. So, don't bind the
dbus API to requiring contiguous EIDs.

Signed-off-by: Jeremy Kerr <[email protected]>
Some failure paths log internally, others do not. Make this consistent,
and do all logging inside the function.

Signed-off-by: Jeremy Kerr <[email protected]>
Currently, we only allow bridge pool allocation through an
AssignEndpoint call, as that is guaranteed to involve a Set Endpoint ID
command, required to start the pool allocation process.

LearnEndpoint is intended to never modify endpoint state, so we keep that
as-is.

SetupEndpoint has always been a convenience method, intended to do a
LearnEndpoint if possible, or fall back to AssignEndpoint if not.
Because of this, we are not making any assurances about preserving state
with SetupEndpoint.

With the new bridge support, the current distinction between
AssignEndpoint (which will allocate a bridge pool) and SetupEndpoint
(which will not) is a potential point of confusion. Instead, allow
bridge allocations through SetupEndpoint.

We have a new conditional path here: we try the LearnEndpoint (ie, a Get
Endpoint ID, to see if we can use that EID) first, but add a new check
to determine if this is a bridge EID type. If so, we force the fallback
to Set Endpoint ID, which will allow a bridge EID allocation too.

Signed-off-by: Jeremy Kerr <[email protected]>
Currently, if an endpoint needs reassigning due to a bridge conflict,
SetupEndpoint may publish an endpoint, then remove it. This is because
we're calling get_endpoint_peer() for the Get Endpoint ID function
(which adds the peer), but later check for bridge compatibility.

Prevent this by open-coding the parts we need from get_endpoint_peer,
and only performing the add once we have a valid peer. This requires a
bit of re-work that is particular to bridge allocation in the
SetupEndpoint case.

Signed-off-by: Jeremy Kerr <[email protected]>
The if will return in all paths, no need for the `else`.

Signed-off-by: Jeremy Kerr <[email protected]>
This shouldn't be a bitwise test.

Signed-off-by: Jeremy Kerr <[email protected]>
Guarantee that we're not altering ctx/net data.

Signed-off-by: Jeremy Kerr <[email protected]>
`net_learn` only specifies the usage scenario, not the behaviour.

Signed-off-by: Jeremy Kerr <[email protected]>
Currently, we attempt to allocate bridge pool ranges of the size of our
max pool configuration setting, and then trim after we know the
requested pool size. If the max allocation is not available, we do not
provide *any* EID range to the requesting bridge.

However, it's entirely likely that the bridge will request a pool that
is smaller than our maximum. We should not reject that allocation, as
there is space available.

Instead of insisting on allocating the max, just pre-allocate the
largest space up to the max. When we then learn the bridge pool size,
offer the allocation that we made. If this is smaller than the
preallocation, we trim. If it is larger, we just offer what is
allocated.

This changes a failure case, where the tests expect the less-than-max
allocation to fail. No need to preserve this behaviour, as we can
actually offer a workable pool at this point.

Signed-off-by: Jeremy Kerr <[email protected]>
@jk-ozlabs jk-ozlabs force-pushed the mctp-bridge-support branch from 5a90891 to f4870bc Compare August 25, 2025 03:12
@jk-ozlabs
Member

A few minor fixes based on @mkj's in-office review.

@faizana-nvidia
Contributor Author

> > Thank you for rebasing the PR with your branch :)
>
> OK, thanks for taking a look!
>
> My plan is to push to the branch for this PR, and then merge. That way it's your original PR that gets tracked as the merge point for this feature.

Ack, would be better this way.

@jk-ozlabs
Member

No problem - we needed the rebase in order to resolve conflicts with 9711f0b.

Next step is to do a little testing on hardware we have here, then we should be good to merge. Let me know (soon!) if you have any further alterations beforehand.

@jk-ozlabs jk-ozlabs merged commit 2c009be into CodeConstruct:main Aug 25, 2025
3 checks passed
@jk-ozlabs
Member

Merged 🎉 - thank you for the contributions!

@faizana-nvidia
Contributor Author

> Merged 🎉 - thank you for the contributions!

Thank you for your help and guidance/reviews.
