
Conversation


@kun6fup4nd4 kun6fup4nd4 commented Aug 28, 2025

PR Type

Enhancement


Description

  • Introduces FACT v2 algorithm for improved transaction communication.

  • Adds extensive debug logging for FACT algorithm operations.

  • Refactors sender/receiver group calculation for FACT v2.

  • Updates configuration and type definitions for FACT v2 support.


Changes walkthrough 📝

Relevant files

Enhancement

TransactionQueue.ts
FACT v2 algorithm integration and enhanced debugging

src/state-manager/TransactionQueue.ts

  • Implements FACT v2 sender/receiver group calculation and integration.
  • Adds detailed debug logging for FACT operations and stuck
    transactions.
  • Refactors corresponding node selection and validation for FACT v2.
  • Updates logic for final data communication and validation.
  • +418/-294

shardus-types.ts
Update type definitions for FACT v2 support

src/shardus/shardus-types.ts

  • Adds optional factv2 property to the ServerConfiguration interface.
  • Documents FACT v2 as an enhanced version with improved verification.
  • +2/-0

index.ts
Add FACT algorithm logging flag

src/logger/index.ts

  • Adds fact flag to LogFlags for FACT algorithm logging.
  • Updates default log flags to include fact: false.
  • +4/-0

fastAggregatedCorrespondingTell.ts
Update corresponding node logic for FACT v2

src/utils/fastAggregatedCorrespondingTell.ts

  • Updates getCorrespondingNodes to support FACT v2 logic.
  • Adds v2 parameter for FACT v2 compatibility.
  • Adjusts index wrapping and destination calculation for v2.
  • Documents that verifyCorrespondingSender is not used in FACT v2.
  • +20/-16

Configuration changes

server.ts
Add FACT v2 configuration flag

src/config/server.ts

  • Adds factv2 configuration flag to enable the FACT v2 algorithm.
  • +1/-0


    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 5 🔵🔵🔵🔵🔵
    🏅 Score: 82
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Algorithmic Complexity

    The new FACT v2 sender/receiver group calculation and corresponding logic introduce significant complexity. Reviewers should carefully validate the correctness of group membership, index calculations, and edge cases (e.g., group size mismatches, circular index logic) to ensure no subtle bugs are introduced in transaction routing or validation.

    calculateFactSenderGroup(queueEntry: QueueEntry): [(Shardus.NodeWithRank | P2PTypes.NodeListTypes.Node)[], (Shardus.NodeWithRank | P2PTypes.NodeListTypes.Node)[]] {
      if (!queueEntry.transactionGroup || !queueEntry.executionGroup) {
        return [[],[]]
      }    
    
      /* prettier-ignore */ if (logFlags.fact) console.log(`FACT-GROUP-1 txId:${queueEntry.logID} keys:${queueEntry.uniqueKeys.length}`)
    
      // Track which nodes store which accounts
      const nodeToAccountsMap = new Map<string, Set<number>>()
    
      // Build map of which nodes store which accounts
      const allkeys = queueEntry.uniqueKeys.length
    
      for (let i = 0; i < queueEntry.uniqueKeys.length; i++) 
      {
        const key = queueEntry.uniqueKeys[i]
        const homeNode = queueEntry.homeNodes[key]
    
        if (!homeNode) continue
    
        // Skip global accounts if not global modification
        if (queueEntry.globalModification === false && 
            this.stateManager.accountGlobals.isGlobalAccount(key)) {
          continue
        }
    
        const consensusNodeIds = homeNode.consensusNodeForOurNodeFull.map(n => n.id)
        /* prettier-ignore */ if (logFlags.fact) console.log(`FACT-GROUP-2 txId:${queueEntry.logID} key[${i}]:${key.substring(0,8)} consensusNodes:[${consensusNodeIds.join(',')}]`)
    
        // Track all nodes that store this account's partition
        for (const node of homeNode.consensusNodeForOurNodeFull) {
          if (!nodeToAccountsMap.has(node.id)) {
            nodeToAccountsMap.set(node.id, new Set())
          }
          nodeToAccountsMap.get(node.id).add(i)
        }    
      }
      // Get first account's home node to access consensus and edge nodes separately
      const executionGroupHome = queueEntry.homeNodes[queueEntry.uniqueKeys[0]]
      const executionEdgeNodeIds = new Set(executionGroupHome?.edgeNodes?.map(n => n.id) || [])
    
      /* prettier-ignore */ if (logFlags.fact) console.log(`FACT-GROUP-3 txId:${queueEntry.logID} executionEdgeNodeIds:[${Array.from(executionEdgeNodeIds).join(',')}]`)
    
      const senderGroup = queueEntry.transactionGroup.filter(node => {
        const accountsStored = nodeToAccountsMap.get(node.id)
        const accountsStoredArray = accountsStored ? Array.from(accountsStored) : []
        const isExecutionEdge = executionEdgeNodeIds.has(node.id)
    
        /* prettier-ignore */ if (logFlags.fact) console.log(`FACT-GROUP-4 txId:${queueEntry.logID} node:${node.id} accountsStored:[${accountsStoredArray.join(',')}] isExecutionEdge:${isExecutionEdge}`)
    
    if (!accountsStored || accountsStored.size === 0) {
      return false
    }

    // Nodes that already store every account need no corresponding tell
    if (accountsStored.size === allkeys) {
      return false
    }

    return true
      })
    
      const receiverGroup = (queueEntry.executionGroup as (Shardus.NodeWithRank | P2PTypes.NodeListTypes.Node)[])?.filter(node => {
        const accountsStored = nodeToAccountsMap.get(node.id)
    
        if (accountsStored && accountsStored.size == allkeys) {
          return false
        }
    
        return true
      })
    
      // Sort the result by node.id for consistent ordering
      senderGroup.sort(this.stateManager._sortByIdAsc)
      receiverGroup.sort(this.stateManager._sortByIdAsc)
    
      if (logFlags.fact) console.log(`FACT-GROUP-5 txId:${queueEntry.logID} senderGroup:[${senderGroup.map(n => n.id.substring(0,4)).join(',')}] receiverGroup:[${receiverGroup.map(n => n.id.substring(0,4)).join(',')}]`)
    
      return [receiverGroup, senderGroup]
    }  
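
A quick standalone way to sanity-check the filtering rules above, using simplified hypothetical types rather than the real Shardus interfaces:

```typescript
// Hypothetical minimal sketch of the sender/receiver filtering rules.
// NodeInfo and filterGroups are stand-ins, not the real Shardus types.
interface NodeInfo { id: string }

function filterGroups(
  transactionGroup: NodeInfo[],
  executionGroup: NodeInfo[],
  nodeToAccounts: Map<string, Set<number>>,
  totalKeys: number
): [NodeInfo[], NodeInfo[]] {
  // Senders: store at least one account, but not all of them
  const senderGroup = transactionGroup.filter((node) => {
    const stored = nodeToAccounts.get(node.id)
    if (!stored || stored.size === 0) return false
    return stored.size !== totalKeys
  })
  // Receivers: execution-group nodes that do not already store every account
  const receiverGroup = executionGroup.filter((node) => {
    const stored = nodeToAccounts.get(node.id)
    return !(stored && stored.size === totalKeys)
  })
  return [receiverGroup, senderGroup]
}

// Three nodes over two accounts: A stores both, B stores one, C stores none
const accounts = new Map<string, Set<number>>([
  ['A', new Set([0, 1])],
  ['B', new Set([0])],
])
const nodes = [{ id: 'A' }, { id: 'B' }, { id: 'C' }]
const [receivers, senders] = filterGroups(nodes, nodes, accounts, 2)
console.log(senders.map((n) => n.id))   // ['B']: stores some but not all accounts
console.log(receivers.map((n) => n.id)) // ['B', 'C']: each misses at least one account
```

Nodes that already store every account hold all the data and are excluded from both sides; nodes that store nothing have nothing to send.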
    Logging Overhead

    Extensive debug logging (especially under logFlags.fact) is added throughout the FACT v2 logic. Reviewers should ensure that this logging does not unintentionally leak sensitive data, and that it is properly gated to avoid performance degradation in production.
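
One way to see why the gating matters: in JavaScript, the template-literal argument is evaluated before console.log runs, so the flag check must wrap the whole statement, as the PR's `if (logFlags.fact) console.log(...)` pattern does. A minimal illustration (the logFlags object and expensiveSummary here are stand-ins, not the real LogFlags):

```typescript
// Stand-in for the real LogFlags shape; fact defaults to false in the PR
const logFlags = { fact: false }

let evaluations = 0
function expensiveSummary(): string {
  // Counts how many times the log argument is actually built
  evaluations++
  return 'large-serialized-payload'
}

// Properly gated: the template literal is never evaluated when the flag is off
if (logFlags.fact) console.log(`FACT-EXAMPLE ${expensiveSummary()}`)

console.log(evaluations) // 0: no serialization cost when the flag is disabled
```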

      if (logFlags.fact) console.log(`FACT-ADD-DATA-1 nodeId:${Self.id} txId:${queueEntry.logID} accountId:${data.accountId}`)
    
      if (queueEntry.collectedData[data.accountId] != null) {
        if (configContext.stateManager.collectedDataFix) {
          // compare the timestamps and keep the newest
          const existingData = queueEntry.collectedData[data.accountId]
          if (data.timestamp > existingData.timestamp) {
            queueEntry.collectedData[data.accountId] = data
            nestedCountersInstance.countEvent('queueEntryAddData', 'collectedDataFix replace with newer data')
          } else {
            nestedCountersInstance.countEvent('queueEntryAddData', 'already collected 1')
            return
          }
        } else {
          // we have already collected this data
          nestedCountersInstance.countEvent('queueEntryAddData', 'already collected 2')
          return
        }
      }
      profilerInstance.profileSectionStart('queueEntryAddData', true)
      // check the signature of each account data
      if (signatureCheck && (data.sign == null || data.sign.owner == null || data.sign.sig == null)) {
        this.mainLogger.fatal(`queueEntryAddData: data.sign == null ${utils.stringifyReduce(data)}`)
        nestedCountersInstance.countEvent('queueEntryAddData', 'data.sign == null')
        return
      }
    
      if (signatureCheck) {
        const dataSenderPublicKey = data.sign.owner
        const dataSenderNode: Shardus.Node = byPubKey[dataSenderPublicKey]
        if (dataSenderNode == null) {
          nestedCountersInstance.countEvent('queueEntryAddData', 'dataSenderNode == null')
          return
        }
        const consensusNodesForAccount = queueEntry.homeNodes[data.accountId]?.consensusNodeForOurNodeFull
        if (
          consensusNodesForAccount == null ||
          consensusNodesForAccount.map((n) => n.id).includes(dataSenderNode.id) === false
        ) {
          nestedCountersInstance.countEvent(
            'queueEntryAddData',
            'data sender node is not in the consensus group of the account'
          )
          return
        }
    
        const signedData = data as SignedObject

        if (this.crypto.verify(signedData) === false) {
          nestedCountersInstance.countEvent('queueEntryAddData', 'data signature verification failed')
          return
        }
      }
    
      queueEntry.collectedData[data.accountId] = data
      queueEntry.dataCollected = Object.keys(queueEntry.collectedData).length
    
      if (logFlags.fact) console.log(`FACT-ADD-DATA-2 nodeId:${Self.id} txId:${queueEntry.logID} accountId:${data.accountId} collectedData.keys:${Object.keys(queueEntry.collectedData)}`)
    
      //make a deep copy of the data
      queueEntry.originalData[data.accountId] = Utils.safeJsonParse(Utils.safeStringify(data))
      queueEntry.beforeHashes[data.accountId] = data.stateId
    
      if (queueEntry.dataCollected === queueEntry.uniqueKeys.length) {
        //  queueEntry.tx Keys.allKeys.length
        queueEntry.hasAll = true
        // this.gossipCompleteData(queueEntry)
        if (queueEntry.executionGroup && queueEntry.executionGroup.length > 1)
          this.shareCompleteDataToNeighbours(queueEntry)
        if (logFlags.debug || this.stateManager.consensusLog) {
          this.mainLogger.debug(
            `queueEntryAddData hasAll: true for txId ${queueEntry.logID} ${
              queueEntry.acceptedTx.txId
            } at timestamp: ${shardusGetTime()} nodeId: ${Self.id} collected ${
              Object.keys(queueEntry.collectedData).length
            } uniqueKeys ${queueEntry.uniqueKeys.length}`
          )
        }
      }
    
      if (data.localCache) {
        queueEntry.localCachedData[data.accountId] = data.localCache
        delete data.localCache
      }
    
      /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_addData', `${queueEntry.logID}`, `key ${utils.makeShortHash(data.accountId)} hash: ${utils.makeShortHash(data.stateId)} hasAll:${queueEntry.hasAll} collected:${queueEntry.dataCollected}  ${queueEntry.acceptedTx.timestamp}`)
      // close the profiler section opened at the top of this function
      profilerInstance.profileSectionEnd('queueEntryAddData', true)
    }
    
    async shareCompleteDataToNeighbours(queueEntry: QueueEntry): Promise<void> {
      if (configContext.stateManager.shareCompleteData === false) {
        return
      }
      if (queueEntry.hasAll === false || queueEntry.sharedCompleteData) {
        return
      }
      if (queueEntry.isInExecutionHome === false) {
        return
      }
      const dataToShare: WrappedResponses = {}
      const stateList: Shardus.WrappedResponse[] = []
      for (const accountId in queueEntry.collectedData) {
        const data = queueEntry.collectedData[accountId]
        const riCacheResult = await this.app.getCachedRIAccountData([accountId])
        if (riCacheResult != null && riCacheResult.length > 0) {
          nestedCountersInstance.countEvent('shareCompleteDataToNeighbours', 'riCacheResult, skipping')
          continue
        } else {
          dataToShare[accountId] = data
          stateList.push(data)
        }
      }
      const payload = { txid: queueEntry.acceptedTx.txId, stateList }
      const neighboursNodes = utils.selectNeighbors(queueEntry.executionGroup, queueEntry.ourExGroupIndex, 2)
      if (stateList.length > 0) {
        this.broadcastState(neighboursNodes, payload, 'shareCompleteDataToNeighbours')
    
        queueEntry.sharedCompleteData = true
        nestedCountersInstance.countEvent(
          `queueEntryAddData`,
          `sharedCompleteData stateList: ${stateList.length} neighbours: ${neighboursNodes.length}`
        )
        if (logFlags.debug || this.stateManager.consensusLog) {
          this.mainLogger.debug(
            `shareCompleteDataToNeighbours: shared complete data for txId ${
              queueEntry.logID
            } at timestamp: ${shardusGetTime()} nodeId: ${Self.id} to neighbours: ${Utils.safeStringify(
              neighboursNodes.map((node) => node.id)
            )}`
          )
        }
      }
    }
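
The neighbour fan-out above relies on utils.selectNeighbors with a count of 2; for reviewers unfamiliar with it, a plausible ring-style selection is sketched below (hypothetical logic, not the actual utils.selectNeighbors implementation):

```typescript
// Hypothetical ring-neighbor selection: pick `count` nodes on each side
// of our index in the execution group, wrapping around the ends.
function selectRingNeighbors<T>(group: T[], ourIndex: number, count: number): T[] {
  const neighbors: T[] = []
  const n = group.length
  for (let offset = 1; offset <= count; offset++) {
    neighbors.push(group[(ourIndex + offset) % n])      // neighbor ahead of us
    neighbors.push(group[(ourIndex - offset + n) % n])  // neighbor behind us
  }
  return neighbors
}

const group = ['n0', 'n1', 'n2', 'n3', 'n4']
console.log(selectRingNeighbors(group, 0, 2)) // ['n1', 'n4', 'n2', 'n3']: wraps at the edges
```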
    
    async gossipCompleteData(queueEntry: QueueEntry): Promise<void> {
      if (queueEntry.hasAll === false || queueEntry.gossipedCompleteData) {
        return
      }
      if (configContext.stateManager.gossipCompleteData === false) {
        return
      }
      const dataToGossip: WrappedResponses = {}
      const stateList: Shardus.WrappedResponse[] = []
      for (const accountId in queueEntry.collectedData) {
        const data = queueEntry.collectedData[accountId]
        const riCacheResult = await this.app.getCachedRIAccountData([accountId])
        if (riCacheResult != null && riCacheResult.length > 0) {
          nestedCountersInstance.countEvent('gossipCompleteData', 'riCacheResult, skipping')
          continue
        } else {
          dataToGossip[accountId] = data
          stateList.push(data)
        }
      }
      const payload = { txid: queueEntry.acceptedTx.txId, stateList }
      if (stateList.length > 0) {
        Comms.sendGossip(
          'broadcast_state_complete_data', // deprecated
          payload,
          '',
          Self.id,
          queueEntry.executionGroup,
          true,
          6,
          queueEntry.acceptedTx.txId
        )
        queueEntry.gossipedCompleteData = true
        nestedCountersInstance.countEvent('gossipCompleteData', `stateList: ${stateList.length}`)
        if (logFlags.debug || this.stateManager.consensusLog) {
          this.mainLogger.debug(
            `gossipQueueEntryData: gossiped data for txId ${queueEntry.logID} at timestamp: ${shardusGetTime()} nodeId: ${
              Self.id
            }`
          )
        }
      }
    }
    
    /**
     * queueEntryHasAllData
     * Test if the queueEntry has all the data it needs.
     * TODO: could be slightly cheaper if it only recalculated when dirty, but that would add more state and complexity,
     * so wait for this to show up in the profiler before fixing
     * @param queueEntry
     */
    queueEntryHasAllData(queueEntry: QueueEntry): boolean {
      if (queueEntry.hasAll === true) {
        return true
      }
      if (queueEntry.uniqueKeys == null) {
        throw new Error(`queueEntryHasAllData (queueEntry.uniqueKeys == null)`)
      }
      let dataCollected = 0
      for (const key of queueEntry.uniqueKeys) {
        // eslint-disable-next-line security/detect-object-injection
        if (queueEntry.collectedData[key] != null) {
          dataCollected++
        }
      }
      if (dataCollected === queueEntry.uniqueKeys.length) {
        //  queueEntry.tx Keys.allKeys.length uniqueKeys.length
        queueEntry.hasAll = true
        return true
      }
      return false
    }
    
    queueEntryListMissingData(queueEntry: QueueEntry): string[] {
      if (queueEntry.hasAll === true) {
        return []
      }
      if (queueEntry.uniqueKeys == null) {
        throw new Error(`queueEntryListMissingData (queueEntry.uniqueKeys == null)`)
      }
      const missingAccounts = []
      for (const key of queueEntry.uniqueKeys) {
        // eslint-disable-next-line security/detect-object-injection
        if (queueEntry.collectedData[key] == null) {
          missingAccounts.push(key)
        }
      }
    
      return missingAccounts
    }
    
    /**
     * queueEntryRequestMissingData
     * ask other nodes for data that is missing for this TX.
     * Normally other nodes in the network should forward data to us at the correct time.
     * This is only for the case that a TX has waited too long and not received the data it needs.
     * @param queueEntry
     */
    async queueEntryRequestMissingData(queueEntry: QueueEntry): Promise<void> {
      if (this.stateManager.currentCycleShardData == null) {
        return
      }
    
      if (queueEntry.pendingDataRequest === true) {
        return
      }
      queueEntry.pendingDataRequest = true
    
      nestedCountersInstance.countEvent('processing', 'queueEntryRequestMissingData-start')
    
      if (!queueEntry.requests) {
        queueEntry.requests = {}
      }
      if (queueEntry.uniqueKeys == null) {
        throw new Error('queueEntryRequestMissingData queueEntry.uniqueKeys == null')
      }
    
      const allKeys = []
      for (const key of queueEntry.uniqueKeys) {
        // eslint-disable-next-line security/detect-object-injection
        if (queueEntry.collectedData[key] == null) {
          allKeys.push(key)
        }
      }
    
      /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingData_start', `${queueEntry.acceptedTx.txId}`, `qId: ${queueEntry.entryID} AccountsMissing:${utils.stringifyReduce(allKeys)}`)
    
      // consensus group should have all the data.. may need to correct this later
      //let consensusGroup = this.queueEntryGetConsensusGroup(queueEntry)
      //let consensusGroup = this.queueEntryGetTransactionGroup(queueEntry)
    
      for (const key of queueEntry.uniqueKeys) {
        // eslint-disable-next-line security/detect-object-injection
        if (queueEntry.collectedData[key] == null && queueEntry.requests[key] == null) {
          let keepTrying = true
          let triesLeft = 5
          // let triesLeft = Math.min(5, consensusGroup.length )
          // let nodeIndex = 0
          while (keepTrying) {
            if (triesLeft <= 0) {
              keepTrying = false
              break
            }
            triesLeft--
            // eslint-disable-next-line security/detect-object-injection
            const homeNodeShardData = queueEntry.homeNodes[key] // mark outstanding request somehow so we dont rerequest
    
            // let node = consensusGroup[nodeIndex]
            // nodeIndex++
    
            // find a random node to ask that is not us
            let node = null
            let randomIndex: number
            let foundValidNode = false
            let maxTries = 1000
    
            // TODO: make this non-random. It would be better to build a list and work through each node in order;
            // we have other code that does this fine.
            while (foundValidNode == false) {
              maxTries--
              randomIndex = this.stateManager.getRandomInt(homeNodeShardData.consensusNodeForOurNodeFull.length - 1)
              // eslint-disable-next-line security/detect-object-injection
              node = homeNodeShardData.consensusNodeForOurNodeFull[randomIndex]
              if (maxTries < 0) {
                //FAILED
                this.statemanager_fatal(
                  `queueEntryRequestMissingData`,
                  `queueEntryRequestMissingData: unable to find node to ask after 1000 tries tx:${
                    queueEntry.logID
                  } key: ${utils.makeShortHash(key)} ${utils.stringifyReduce(
                    homeNodeShardData.consensusNodeForOurNodeFull.map((x) => (x != null ? x.id : 'null'))
                  )}`
                )
                break
              }
              if (node == null) {
                continue
              }
              if (node.id === this.stateManager.currentCycleShardData.nodeShardData.node.id) {
                continue
              }
              foundValidNode = true
            }
    
            if (node == null) {
              continue
            }
            if (node.status != 'active' || potentiallyRemoved.has(node.id)) {
              continue
            }
            if (node === this.stateManager.currentCycleShardData.ourNode) {
              continue
            }
    
            // Todo: expand this to grab a consensus node from any of the involved consensus nodes.
    
            for (const key2 of allKeys) {
              // eslint-disable-next-line security/detect-object-injection
              queueEntry.requests[key2] = node
            }
    
            const relationString = ShardFunctions.getNodeRelation(
              homeNodeShardData,
              this.stateManager.currentCycleShardData.ourNode.id
            )
            /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingData_ask', `${queueEntry.logID}`, `r:${relationString}   asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} AccountsMissing:${utils.stringifyReduce(allKeys)}`)
    
            // Node Precheck!
            if (
              this.stateManager.isNodeValidForInternalMessage(node.id, 'queueEntryRequestMissingData', true, true) ===
              false
            ) {
              // if(this.tryNextDataSourceNode('queueEntryRequestMissingData') == false){
              //   break
              // }
              continue
            }
    
            const message = {
              keys: allKeys,
              txid: queueEntry.acceptedTx.txId,
              timestamp: queueEntry.acceptedTx.timestamp,
            }
            let result = null
            try {
              // if (this.config.p2p.useBinarySerializedEndpoints && this.config.p2p.requestStateForTxBinary) {
              // GOLD-66 Error handling try/catch happens one layer outside of this function in process transactions
              /* prettier-ignore */ if (logFlags.seqdiagram) this.seqLogger.info(`0x53455101 ${shardusGetTime()} tx:${message.txid} ${NodeList.activeIdToPartition.get(Self.id)}-->>${NodeList.activeIdToPartition.get(node.id)}: ${'request_state_for_tx'}`)
              result = (await this.p2p.askBinary<RequestStateForTxReq, RequestStateForTxRespSerialized>(
                node,
                InternalRouteEnum.binary_request_state_for_tx,
                message,
                serializeRequestStateForTxReq,
                deserializeRequestStateForTxResp,
                {}
              )) as RequestStateForTxRespSerialized
              // } else {
              //   result = (await this.p2p.ask(node, 'request_state_for_tx', message)) as RequestStateForTxResp
              // }
            } catch (error) {
              /* prettier-ignore */ if (logFlags.error) {
                if (error instanceof ResponseError) {
                  this.mainLogger.error(
                    `ASK FAIL request_state_for_tx : exception encountered where the error is ${error}`
                  )
                }
              }
              /* prettier-ignore */ if (logFlags.error) this.mainLogger.error('askBinary request_state_for_tx exception:', error)
    
              /* prettier-ignore */ if (logFlags.error) this.mainLogger.error(`askBinary error: ${InternalRouteEnum.binary_request_state_for_tx} asked to ${node.externalIp}:${node.externalPort}:${node.id}`)
            }
    
            if (result == null) {
              if (logFlags.verbose) {
                if (logFlags.error) this.mainLogger.error('ASK FAIL request_state_for_tx')
              }
              /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingData_askfailretry', `${queueEntry.logID}`, `r:${relationString}   asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} `)
              continue
            }
            if (result.success !== true) {
              if (logFlags.error) this.mainLogger.error('ASK FAIL queueEntryRequestMissingData 9')
              /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingData_askfailretry2', `${queueEntry.logID}`, `r:${relationString}   asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} `)
              continue
            }
    
            let dataCountReturned = 0
            const accountIdsReturned = []
            for (const data of result.stateList) {
              this.queueEntryAddData(queueEntry, data)
              dataCountReturned++
              accountIdsReturned.push(utils.makeShortHash(data.accountId))
            }
    
            if (queueEntry.hasAll === true) {
              queueEntry.logstate = 'got all missing data'
            } else {
              queueEntry.logstate = 'failed to get data:' + queueEntry.hasAll
              //This will time out and go to receipt repair mode if it does not get more data sent to it.
            }
    
            /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingData_result', `${queueEntry.logID}`, `r:${relationString}   result:${queueEntry.logstate} dataCount:${dataCountReturned} asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID}  AccountsMissing:${utils.stringifyReduce(allKeys)} AccountsReturned:${utils.stringifyReduce(accountIdsReturned)}`)
    
            // queueEntry.homeNodes[key] = null
            for (const key2 of allKeys) {
              //consider deleting these instead?
              //TSConversion changed to a delete operation; should double check this
              //queueEntry.requests[key2] = null
              // eslint-disable-next-line security/detect-object-injection
              delete queueEntry.requests[key2]
            }
    
            if (queueEntry.hasAll === true) {
              break
            }
    
            keepTrying = false
          }
        }
      }
    
      if (queueEntry.hasAll === true) {
        nestedCountersInstance.countEvent('processing', 'queueEntryRequestMissingData-success')
      } else {
        nestedCountersInstance.countEvent('processing', 'queueEntryRequestMissingData-failed')
    
        //give up and wait for receipt
        queueEntry.waitForReceiptOnly = true
    
        if (this.config.stateManager.txStateMachineChanges) {
          this.updateTxState(queueEntry, 'await final data', 'missing data')
        } else {
          this.updateTxState(queueEntry, 'consensing')
        }
    
        if (logFlags.debug)
          this.mainLogger.debug(`queueEntryRequestMissingData failed to get all data for: ${queueEntry.logID}`)
      }
    }
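
On the inline TODO about the random retry loop: building a filtered candidate list once and walking it in order would remove the 1000-try cap entirely. A minimal sketch with simplified types (CandidateNode and buildAskList are hypothetical, not existing helpers):

```typescript
// Hypothetical ordered-candidate approach: filter once up front instead
// of rolling random indices and re-checking the same exclusions each try.
interface CandidateNode { id: string; status: string }

function buildAskList(
  consensusNodes: CandidateNode[],
  ourNodeId: string,
  excluded: Set<string>
): CandidateNode[] {
  return consensusNodes.filter(
    (node) =>
      node.id !== ourNodeId &&      // never ask ourselves
      node.status === 'active' &&   // skip inactive nodes
      !excluded.has(node.id)        // skip potentially removed nodes
  )
}

const candidates: CandidateNode[] = [
  { id: 'us', status: 'active' },
  { id: 'a', status: 'active' },
  { id: 'b', status: 'syncing' },
  { id: 'c', status: 'active' },
]
const askList = buildAskList(candidates, 'us', new Set(['c']))
console.log(askList.map((n) => n.id)) // ['a']: us, b (inactive), and c (excluded) removed
```

The caller would then iterate askList sequentially, stopping at the first successful response, instead of retrying random indices.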
    
    /**
     * queueEntryRequestMissingReceipt
     * Ask other nodes for a receipt to go with this TX
     * @param queueEntry
     */
    async queueEntryRequestMissingReceipt(queueEntry: QueueEntry): Promise<void> {
      if (this.stateManager.currentCycleShardData == null) {
        return
      }
    
      if (queueEntry.uniqueKeys == null) {
        throw new Error('queueEntryRequestMissingReceipt queueEntry.uniqueKeys == null')
      }
    
      if (queueEntry.requestingReceipt === true) {
        return
      }
    
      queueEntry.requestingReceipt = true
      queueEntry.receiptEverRequested = true
    
      /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingReceipt_start', `${queueEntry.acceptedTx.txId}`, `qId: ${queueEntry.entryID}`)
    
      const consensusGroup = this.queueEntryGetConsensusGroup(queueEntry)
    
      this.stateManager.debugNodeGroup(
        queueEntry.acceptedTx.txId,
        queueEntry.acceptedTx.timestamp,
        `queueEntryRequestMissingReceipt`,
        consensusGroup
      )
      //let consensusGroup = this.queueEntryGetTransactionGroup(queueEntry)
      //the outer loop here could just use the transaction group of nodes instead. but already had this working in a similar function
      //TODO change it to loop the transaction group until we get a good receipt
    
      //Note: we only need to get one good receipt, the loop on keys is in case we have to try different groups of nodes
      let gotReceipt = false
      for (const key of queueEntry.uniqueKeys) {
        if (gotReceipt === true) {
          break
        }
    
        let keepTrying = true
        let triesLeft = Math.min(5, consensusGroup.length)
        let nodeIndex = 0
        while (keepTrying) {
          if (triesLeft <= 0) {
            keepTrying = false
            break
          }
          triesLeft--
          // eslint-disable-next-line security/detect-object-injection
          const homeNodeShardData = queueEntry.homeNodes[key] // mark outstanding request somehow so we dont rerequest
    
          // eslint-disable-next-line security/detect-object-injection
          const node = consensusGroup[nodeIndex]
          nodeIndex++
    
          if (node == null) {
            continue
          }
          if (node.status != 'active' || potentiallyRemoved.has(node.id)) {
            continue
          }
          if (node === this.stateManager.currentCycleShardData.ourNode) {
            continue
          }
    
          const relationString = ShardFunctions.getNodeRelation(
            homeNodeShardData,
            this.stateManager.currentCycleShardData.ourNode.id
          )
          /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingReceipt_ask', `${queueEntry.logID}`, `r:${relationString}   asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} `)
    
          // Node Precheck!
          if (
            this.stateManager.isNodeValidForInternalMessage(node.id, 'queueEntryRequestMissingReceipt', true, true) ===
            false
          ) {
            // if(this.tryNextDataSourceNode('queueEntryRequestMissingReceipt') == false){
            //   break
            // }
            continue
          }
    
          const message = { txid: queueEntry.acceptedTx.txId, timestamp: queueEntry.acceptedTx.timestamp }
          let result = null
          // GOLD-67 to be safe this function needs a try/catch block to prevent a timeout from causing an unhandled exception
          // if (
          //   this.stateManager.config.p2p.useBinarySerializedEndpoints &&
          //   this.stateManager.config.p2p.requestReceiptForTxBinary
          // ) {
          try {
            /* prettier-ignore */ if (logFlags.seqdiagram) this.seqLogger.info(`0x53455101 ${shardusGetTime()} tx:${message.txid} ${NodeList.activeIdToPartition.get(Self.id)}-->>${NodeList.activeIdToPartition.get(node.id)}: ${'request_receipt_for_tx'}`)
            result = await this.p2p.askBinary<RequestReceiptForTxReqSerialized, RequestReceiptForTxRespSerialized>(
              node,
              InternalRouteEnum.binary_request_receipt_for_tx,
              message,
              serializeRequestReceiptForTxReq,
              deserializeRequestReceiptForTxResp,
              {}
            )
          } catch (e) {
            this.statemanager_fatal(`queueEntryRequestMissingReceipt`, `error: ${e.message}`)
            /* prettier-ignore */ this.mainLogger.error(`askBinary error: ${InternalRouteEnum.binary_request_receipt_for_tx} asked to ${node.externalIp}:${node.externalPort}:${node.id}`)
          }
          // } else {
          //   result = await this.p2p.ask(node, 'request_receipt_for_tx', message) // not sure if we should await this.
          // }
    
          if (result == null) {
            if (logFlags.verbose) {
              /* prettier-ignore */ if (logFlags.error) this.mainLogger.error(`ASK FAIL request_receipt_for_tx ${triesLeft} ${utils.makeShortHash(node.id)}`)
            }
            /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingReceipt_askfailretry', `${queueEntry.logID}`, `r:${relationString}   asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} `)
            continue
          }
          if (result.success !== true) {
            /* prettier-ignore */ if (logFlags.error) this.mainLogger.error(`ASK FAIL queueEntryRequestMissingReceipt 9 ${triesLeft} ${utils.makeShortHash(node.id)}:${utils.makeShortHash(node.internalPort)} note:${result.note} txid:${queueEntry.logID}`)
            continue
          }
    
          /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingReceipt_result', `${queueEntry.logID}`, `r:${relationString}   result:${queueEntry.logstate} asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} result: ${utils.stringifyReduce(result)}`)
    
          if (result.success === true && result.receipt != null) {
            //TODO implement this!!!
            queueEntry.receivedSignedReceipt = result.receipt
            keepTrying = false
            gotReceipt = true
    
            this.mainLogger.debug(
              `queueEntryRequestMissingReceipt got good receipt for: ${queueEntry.logID} from: ${utils.makeShortHash(
                node.id
              )}:${utils.makeShortHash(node.internalPort)}`
            )
          }
        }
    
        // break the outer loop after we are done trying.  todo refactor this.
        if (keepTrying == false) {
          break
        }
      }
      queueEntry.requestingReceipt = false
    
      if (gotReceipt === false) {
        queueEntry.requestingReceiptFailed = true
      }
    }
    
    // async queueEntryRequestMissingReceipt_old(queueEntry: QueueEntry): Promise<void> {
    //   if (this.stateManager.currentCycleShardData == null) {
    //     return
    //   }
    
    //   if (queueEntry.uniqueKeys == null) {
    //     throw new Error('queueEntryRequestMissingReceipt queueEntry.uniqueKeys == null')
    //   }
    
    //   if (queueEntry.requestingReceipt === true) {
    //     return
    //   }
    
    //   queueEntry.requestingReceipt = true
    //   queueEntry.receiptEverRequested = true
    
    //   /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingReceipt_start', `${queueEntry.acceptedTx.txId}`, `qId: ${queueEntry.entryID}`)
    
    //   const consensusGroup = this.queueEntryGetConsensusGroup(queueEntry)
    
    //   this.stateManager.debugNodeGroup(
    //     queueEntry.acceptedTx.txId,
    //     queueEntry.acceptedTx.timestamp,
    //     `queueEntryRequestMissingReceipt`,
    //     consensusGroup
    //   )
    //   //let consensusGroup = this.queueEntryGetTransactionGroup(queueEntry)
    //   //the outer loop here could just use the transaction group of nodes instead. but already had this working in a similar function
    //   //TODO change it to loop the transaction group untill we get a good receipt
    
    //   //Note: we only need to get one good receipt, the loop on keys is in case we have to try different groups of nodes
    //   let gotReceipt = false
    //   for (const key of queueEntry.uniqueKeys) {
    //     if (gotReceipt === true) {
    //       break
    //     }
    
    //     let keepTrying = true
    //     let triesLeft = Math.min(5, consensusGroup.length)
    //     let nodeIndex = 0
    //     while (keepTrying) {
    //       if (triesLeft <= 0) {
    //         keepTrying = false
    //         break
    //       }
    //       triesLeft--
    //       // eslint-disable-next-line security/detect-object-injection
    //       const homeNodeShardData = queueEntry.homeNodes[key] // mark outstanding request somehow so we dont rerequest
    
    //       // eslint-disable-next-line security/detect-object-injection
    //       const node = consensusGroup[nodeIndex]
    //       nodeIndex++
    
    //       if (node == null) {
    //         continue
    //       }
    //       if (node.status != 'active' || potentiallyRemoved.has(node.id)) {
    //         continue
    //       }
    //       if (node === this.stateManager.currentCycleShardData.ourNode) {
    //         continue
    //       }
    
    //       const relationString = ShardFunctions.getNodeRelation(
    //         homeNodeShardData,
    //         this.stateManager.currentCycleShardData.ourNode.id
    //       )
    //       /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingReceipt_ask', `${queueEntry.logID}`, `r:${relationString}   asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} `)
    
    //       // Node Precheck!
    //       if (
    //         this.stateManager.isNodeValidForInternalMessage(
    //           node.id,
    //           'queueEntryRequestMissingReceipt',
    //           true,
    //           true
    //         ) === false
    //       ) {
    //         // if(this.tryNextDataSourceNode('queueEntryRequestMissingReceipt') == false){
    //         //   break
    //         // }
    //         continue
    //       }
    
    //       const message = { txid: queueEntry.acceptedTx.txId, timestamp: queueEntry.acceptedTx.timestamp }
    //       const result: RequestReceiptForTxResp_old = await this.p2p.ask(
    //         node,
    //         'request_receipt_for_tx_old',
    //         message
    //       ) // not sure if we should await this.
    
    //       if (result == null) {
    //         if (logFlags.verbose) {
    //           /* prettier-ignore */ if (logFlags.error) this.mainLogger.error(`ASK FAIL request_receipt_for_tx_old ${triesLeft} ${utils.makeShortHash(node.id)}`)
    //         }
    //         /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingReceipt_askfailretry', `${queueEntry.logID}`, `r:${relationString}   asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} `)
    //         continue
    //       }
    //       if (result.success !== true) {
    //         /* prettier-ignore */ if (logFlags.error) this.mainLogger.error(`ASK FAIL queueEntryRequestMissingReceipt 9 ${triesLeft} ${utils.makeShortHash(node.id)}:${utils.makeShortHash(node.internalPort)} note:${result.note} txid:${queueEntry.logID}`)
    //         continue
    //       }
    
    //       /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_queueEntryRequestMissingReceipt_result', `${queueEntry.logID}`, `r:${relationString}   result:${queueEntry.logstate} asking: ${utils.makeShortHash(node.id)} qId: ${queueEntry.entryID} result: ${utils.stringifyReduce(result)}`)
    
    //       if (result.success === true && result.receipt != null) {
    //         //TODO implement this!!!
    //         queueEntry.recievedAppliedReceipt = result.receipt
    //         keepTrying = false
    //         gotReceipt = true
    
    //         this.mainLogger.debug(
    //           `queueEntryRequestMissingReceipt got good receipt for: ${
    //             queueEntry.logID
    //           } from: ${utils.makeShortHash(node.id)}:${utils.makeShortHash(node.internalPort)}`
    //         )
    //       }
    //     }
    
    //     // break the outer loop after we are done trying.  todo refactor this.
    //     if (keepTrying == false) {
    //       break
    //     }
    //   }
    //   queueEntry.requestingReceipt = false
    
    //   if (gotReceipt === false) {
    //     queueEntry.requestingReceiptFailed = true
    //   }
    // }
    
// compute the rank of a node, where rank = node_id XOR hash(tx_id + tx_ts)
    computeNodeRank(nodeId: string, txId: string, txTimestamp: number): bigint {
      if (nodeId == null || txId == null || txTimestamp == null) return BigInt(0)
      const hash = this.crypto.hash([txId, txTimestamp])
      return BigInt(XOR(nodeId, hash))
    }
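// A minimal sketch (illustrative only; the ids and tx values below are hypothetical)
// of why this ranking needs no coordination: the hash term is identical for every node
// on a given tx, so each node XORs its own id with that shared hash and all nodes
// derive the same ordering independently.
//
//   const txId = 'aaaa0000...' // hypothetical tx id
//   const ts = 1700000000000   // hypothetical tx timestamp
//   const rankA = this.computeNodeRank(nodeA.id, txId, ts)
//   const rankB = this.computeNodeRank(nodeB.id, txId, ts)
//   // rankA and rankB differ only through the node ids, so sorting by rank yields a
//   // deterministic, per-transaction shuffle of the node list.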
    
    // sort the nodeList by rank, in descending order
    orderNodesByRank(nodeList: Shardus.Node[], queueEntry: QueueEntry): Shardus.NodeWithRank[] {
      const nodeListWithRankData: Shardus.NodeWithRank[] = []
    
      for (let i = 0; i < nodeList.length; i++) {
        const node: Shardus.Node = nodeList[i]
        const rank = this.computeNodeRank(node.id, queueEntry.acceptedTx.txId, queueEntry.acceptedTx.timestamp)
        const nodeWithRank: Shardus.NodeWithRank = {
          rank,
          id: node.id,
          status: node.status,
          publicKey: node.publicKey,
          externalIp: node.externalIp,
          externalPort: node.externalPort,
          internalIp: node.internalIp,
          internalPort: node.internalPort,
        }
        nodeListWithRankData.push(nodeWithRank)
      }
      return nodeListWithRankData.sort((a: Shardus.NodeWithRank, b: Shardus.NodeWithRank) => {
        return b.rank > a.rank ? 1 : -1
      })
    }
    
/**
 * queueEntryGetTransactionGroup
 * @param {QueueEntry} queueEntry
 * @param {boolean} tryUpdate if true, recompute the group even if a cached group exists
 * @returns {Node[]}
 */
    queueEntryGetTransactionGroup(queueEntry: QueueEntry, tryUpdate = false): Shardus.Node[] {
      let cycleShardData = this.stateManager.currentCycleShardData
      if (Context.config.stateManager.deterministicTXCycleEnabled) {
        cycleShardData = this.stateManager.shardValuesByCycle.get(queueEntry.txGroupCycle)
      }
      if (cycleShardData == null) {
        throw new Error('queueEntryGetTransactionGroup: currentCycleShardData == null')
      }
      if (queueEntry.uniqueKeys == null) {
        throw new Error('queueEntryGetTransactionGroup: queueEntry.uniqueKeys == null')
      }
      if (queueEntry.transactionGroup != null && tryUpdate != true) {
        return queueEntry.transactionGroup
      }
    
      const txGroup: Shardus.Node[] = []
      const uniqueNodes: StringNodeObjectMap = {}
    
      let hasNonGlobalKeys = false
      for (const key of queueEntry.uniqueKeys) {
        // eslint-disable-next-line security/detect-object-injection
        const homeNode = queueEntry.homeNodes[key]
        // txGroup = Array.concat(txGroup, homeNode.nodeThatStoreOurParitionFull)
    if (homeNode == null) {
      if (logFlags.verbose) this.mainLogger.debug('queueEntryGetTransactionGroup homenode:null')
      throw new Error(`queueEntryGetTransactionGroup: homeNode == null for key ${utils.makeShortHash(key)}`)
    }
        if (homeNode.extendedData === false) {
          ShardFunctions.computeExtendedNodePartitionData(
            cycleShardData.shardGlobals,
            cycleShardData.nodeShardDataMap,
            cycleShardData.parititionShardDataMap,
            homeNode,
            cycleShardData.nodes
          )
        }
    
        //may need to go back and sync this logic with how we decide what partition to save a record in.
    
        // If this is not a global TX then skip tracking of nodes for global accounts used as a reference.
        if (queueEntry.globalModification === false) {
          if (this.stateManager.accountGlobals.isGlobalAccount(key) === true) {
            /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`queueEntryGetTransactionGroup skipping: ${utils.makeShortHash(key)} tx: ${queueEntry.logID}`)
            continue
          } else {
            hasNonGlobalKeys = true
          }
        }
    
        for (const node of homeNode.nodeThatStoreOurParitionFull) {
          // not iterable!
          uniqueNodes[node.id] = node
          if (node.id === Self.id)
            if (logFlags.verbose)
              /* prettier-ignore */ this.mainLogger.debug(`queueEntryGetTransactionGroup tx ${queueEntry.logID} our node coverage key ${key}`)
        }
    
        const scratch1 = {}
        for (const node of homeNode.nodeThatStoreOurParitionFull) {
          // not iterable!
          scratch1[node.id] = true
        }
    // make sure the home node is in there in case we hit an edge case
        uniqueNodes[homeNode.node.id] = homeNode.node
    
        // TODO STATESHARDING4 is this next block even needed:
        // HOMENODEMATHS need to patch in nodes that would cover this partition!
        // TODO PERF make an optimized version of this in ShardFunctions that is smarter about which node range to check and saves off the calculation
        // TODO PERF Update.  this will scale badly with 100s or 1000s of nodes. need a faster solution that can use the list of accounts to
        //                    build a list of nodes.
        // maybe this could go on the partitions.
        const { homePartition } = ShardFunctions.addressToPartition(cycleShardData.shardGlobals, key)
        if (homePartition != homeNode.homePartition) {
          //loop all nodes for now
          for (const nodeID of cycleShardData.nodeShardDataMap.keys()) {
            const nodeShardData: StateManagerTypes.shardFunctionTypes.NodeShardData =
              cycleShardData.nodeShardDataMap.get(nodeID)
            const nodeStoresThisPartition = ShardFunctions.testInRange(homePartition, nodeShardData.storedPartitions)
            /* eslint-disable security/detect-object-injection */
            if (nodeStoresThisPartition === true && uniqueNodes[nodeID] == null) {
              //setting this will cause it to end up in the transactionGroup
              uniqueNodes[nodeID] = nodeShardData.node
              queueEntry.patchedOnNodes.set(nodeID, nodeShardData)
            }
            // build index for patched nodes based on the home node:
            if (nodeStoresThisPartition === true) {
              if (scratch1[nodeID] == null) {
                homeNode.patchedOnNodes.push(nodeShardData.node)
                scratch1[nodeID] = true
              }
            }
            /* eslint-enable security/detect-object-injection */
          }
        }
    
        //todo refactor this to where we insert the tx
        if (queueEntry.globalModification === false && this.executeInOneShard && key === queueEntry.executionShardKey) {
          //queueEntry.executionGroup = homeNode.consensusNodeForOurNodeFull.slice()
          const executionKeys = []
          if (logFlags.verbose) {
            for (const node of queueEntry.executionGroup) {
              executionKeys.push(utils.makeShortHash(node.id) + `:${node.externalPort}`)
            }
          }
          /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`queueEntryGetTransactionGroup executeInOneShard ${queueEntry.logID} isInExecutionHome:${queueEntry.isInExecutionHome} executionGroup:${Utils.safeStringify(executionKeys)}`)
          /* prettier-ignore */ if (logFlags.playback && logFlags.verbose) this.logger.playbackLogNote('queueEntryGetTransactionGroup', `queueEntryGetTransactionGroup executeInOneShard ${queueEntry.logID} isInExecutionHome:${queueEntry.isInExecutionHome} executionGroup:${Utils.safeStringify(executionKeys)}`)
        }
    
        // if(queueEntry.globalModification === false && this.executeInOneShard && key === queueEntry.executionShardKey){
        //   let ourNodeShardData: StateManagerTypes.shardFunctionTypes.NodeShardData = this.stateManager.currentCycleShardData.nodeShardData
        //   let nodeStoresThisPartition = ShardFunctions.testInRange(homePartition, ourNodeShardData.storedPartitions)
        //   if(nodeStoresThisPartition === false){
        //     queueEntry.isInExecutionHome = false
        //     queueEntry.waitForReceiptOnly = true
        //   }
        //   /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`queueEntryGetTransactionGroup ${queueEntry.logID} isInExecutionHome:${queueEntry.isInExecutionHome} waitForReceiptOnly:${queueEntry.waitForReceiptOnly}`)
        //   /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('queueEntryGetTransactionGroup', `queueEntryGetTransactionGroup ${queueEntry.logID} isInExecutionHome:${queueEntry.isInExecutionHome} waitForReceiptOnly:${queueEntry.waitForReceiptOnly}`)
        // }
      }
      queueEntry.ourNodeInTransactionGroup = true
      if (uniqueNodes[cycleShardData.ourNode.id] == null) {
        queueEntry.ourNodeInTransactionGroup = false
        /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`queueEntryGetTransactionGroup not involved: hasNonG:${hasNonGlobalKeys} tx ${queueEntry.logID}`)
      }
      if (queueEntry.ourNodeInTransactionGroup)
        if (logFlags.seqdiagram)
          /* prettier-ignore */ this.seqLogger.info(`0x53455105 ${shardusGetTime()} tx:${queueEntry.acceptedTx.txId} Note over ${NodeList.activeIdToPartition.get(Self.id)}: targetgroup`)
    
      // make sure our node is included: needed for gossip! - although we may not care about the data!
      // This may seem confusing, but to gossip to other nodes, we have to have our node in the list we will gossip to
      // Other logic will use queueEntry.ourNodeInTransactionGroup to know what else to do with the queue entry
      uniqueNodes[cycleShardData.ourNode.id] = cycleShardData.ourNode
    
      const values = Object.values(uniqueNodes)
      for (const v of values) {
        txGroup.push(v)
      }
    
      txGroup.sort(this.stateManager._sortByIdAsc)
      if (queueEntry.ourNodeInTransactionGroup) {
        const ourID = cycleShardData.ourNode.id
        for (let idx = 0; idx < txGroup.length; idx++) {
          // eslint-disable-next-line security/detect-object-injection
          const node = txGroup[idx]
          if (node.id === ourID) {
            queueEntry.ourTXGroupIndex = idx
            break
          }
        }
      }
      if (tryUpdate != true) {
        if (Context.config.stateManager.deterministicTXCycleEnabled === false) {
          queueEntry.txGroupCycle = this.stateManager.currentCycleShardData.cycleNumber
        }
        queueEntry.transactionGroup = txGroup
      } else {
        queueEntry.updatedTxGroupCycle = this.stateManager.currentCycleShardData.cycleNumber
        queueEntry.transactionGroup = txGroup
      }
    
      // let uniqueNodes = {}
      // for (let n of gossipGroup) {
      //   uniqueNodes[n.id] = n
      // }
      // for (let n of updatedGroup) {
      //   uniqueNodes[n.id] = n
      // }
      // let values = Object.values(uniqueNodes)
      // let finalGossipGroup =
      // for (let n of updatedGroup) {
      //   uniqueNodes[n.id] = n
      // }
    
      return txGroup
    }
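// Note (summarizing the logic above, not new behavior): the transaction group is the
// deduplicated union of every node that stores any partition touched by the tx
// (nodeThatStoreOurParitionFull for each home node), plus any "patched on" nodes whose
// stored range covers a key's home partition, plus our own node so gossip can work.
// Sorting by node id means every node derives the same group, and the same
// ourTXGroupIndex, for a given cycle.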
    
/**
 * queueEntryGetConsensusGroup
 * Gets the merged set of consensus nodes for all of the accounts involved in the transaction.
 * Ignores global accounts if globalModification == false and the account is global.
 * @param {QueueEntry} queueEntry
 * @returns {Node[]}
 */
    queueEntryGetConsensusGroup(queueEntry: QueueEntry): Shardus.Node[] {
      let cycleShardData = this.stateManager.currentCycleShardData
      if (Context.config.stateManager.deterministicTXCycleEnabled) {
        cycleShardData = this.stateManager.shardValuesByCycle.get(queueEntry.txGroupCycle)
      }
      if (cycleShardData == null) {
        throw new Error('queueEntryGetConsensusGroup: currentCycleShardData == null')
      }
      if (queueEntry.uniqueKeys == null) {
        throw new Error('queueEntryGetConsensusGroup: queueEntry.uniqueKeys == null')
      }
      if (queueEntry.conensusGroup != null) {
        return queueEntry.conensusGroup
      }
      const txGroup = []
      const uniqueNodes: StringNodeObjectMap = {}
    
      let hasNonGlobalKeys = false
      for (const key of queueEntry.uniqueKeys) {
        // eslint-disable-next-line security/detect-object-injection
        const homeNode = queueEntry.homeNodes[key]
    if (homeNode == null) {
      if (logFlags.verbose) this.mainLogger.debug('queueEntryGetConsensusGroup homenode:null')
      throw new Error(`queueEntryGetConsensusGroup: homeNode == null for key ${utils.makeShortHash(key)}`)
    }
        if (homeNode.extendedData === false) {
          ShardFunctions.computeExtendedNodePartitionData(
            cycleShardData.shardGlobals,
            cycleShardData.nodeShardDataMap,
            cycleShardData.parititionShardDataMap,
            homeNode,
            cycleShardData.nodes
          )
        }
    
        // TODO STATESHARDING4 GLOBALACCOUNTS is this next block of logic needed?
        // If this is not a global TX then skip tracking of nodes for global accounts used as a reference.
        if (queueEntry.globalModification === false) {
          if (this.stateManager.accountGlobals.isGlobalAccount(key) === true) {
            /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`queueEntryGetConsensusGroup skipping: ${utils.makeShortHash(key)} tx: ${queueEntry.logID}`)
            continue
          } else {
            hasNonGlobalKeys = true
          }
        }
    
        for (const node of homeNode.consensusNodeForOurNodeFull) {
          uniqueNodes[node.id] = node
        }
    
    // make sure the home node is in there in case we hit an edge case
        uniqueNodes[homeNode.node.id] = homeNode.node
      }
      queueEntry.ourNodeInConsensusGroup = true
      if (uniqueNodes[cycleShardData.ourNode.id] == null) {
        queueEntry.ourNodeInConsensusGroup = false
        /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`queueEntryGetConsensusGroup not involved: hasNonG:${hasNonGlobalKeys} tx ${queueEntry.logID}`)
      }
    
      // make sure our node is included: needed for gossip! - although we may not care about the data!
      uniqueNodes[cycleShardData.ourNode.id] = cycleShardData.ourNode
    
      const values = Object.values(uniqueNodes)
      for (const v of values) {
        txGroup.push(v)
      }
      queueEntry.conensusGroup = txGroup
      return txGroup
    }
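// Note: the consensus group differs from the transaction group above. The transaction
// group is built from nodeThatStoreOurParitionFull (storage coverage: every node that
// must store data touched by the tx), while the consensus group is built from
// consensusNodeForOurNodeFull (the smaller set that participates in consensus for the
// involved accounts). Both always include our own node so the entry can be gossiped.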
    
/**
 * queueEntryGetConsensusGroupForAccount
 * Gets the merged set of consensus nodes for a specific account involved in the transaction.
 * Ignores global accounts if globalModification == false and the account is global.
 * @param {QueueEntry} queueEntry
 * @returns {Node[]}
 */
    queueEntryGetConsensusGroupForAccount(queueEntry: QueueEntry, accountId: string): Shardus.Node[] {
      let cycleShardData = this.stateManager.currentCycleShardData
      if (Context.config.stateManager.deterministicTXCycleEnabled) {
        cycleShardData = this.stateManager.shardValuesByCycle.get(queueEntry.txGroupCycle)
      }
  if (cycleShardData == null) {
    throw new Error('queueEntryGetConsensusGroupForAccount: currentCycleShardData == null')
  }
  if (queueEntry.uniqueKeys == null) {
    throw new Error('queueEntryGetConsensusGroupForAccount: queueEntry.uniqueKeys == null')
  }
      if (queueEntry.conensusGroup != null) {
        return queueEntry.conensusGroup
      }
  if (queueEntry.uniqueKeys.includes(accountId) === false) {
    throw new Error(`queueEntryGetConsensusGroupForAccount: account ${accountId} is not in the queueEntry.uniqueKeys`)
  }
      const txGroup = []
      const uniqueNodes: StringNodeObjectMap = {}
    
      let hasNonGlobalKeys = false
      const key = accountId
      // eslint-disable-next-line security/detect-object-injection
      const homeNode = queueEntry.homeNodes[key]
  if (homeNode == null) {
    if (logFlags.verbose) this.mainLogger.debug('queueEntryGetConsensusGroupForAccount homenode:null')
    throw new Error(`queueEntryGetConsensusGroupForAccount: homeNode == null for account ${accountId}`)
  }
      if (homeNode.extendedData === false) {
        ShardFunctions.computeExtendedNodePartitionData(
          cycleShardData.shardGlobals,
          cycleShardData.nodeShardDataMap,
          cycleShardData.parititionShardDataMap,
          homeNode,
          cycleShardData.nodes
        )
      }
    
      // TODO STATESHARDING4 GLOBALACCOUNTS is this next block of logic needed?
      // If this is not a global TX then skip tracking of nodes for global accounts used as a reference.
      if (queueEntry.globalModification === false) {
        if (this.stateManager.accountGlobals.isGlobalAccount(key) === true) {
      /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`queueEntryGetConsensusGroupForAccount skipping: ${utils.makeShortHash(key)} tx: ${queueEntry.logID}`)
        } else {
          hasNonGlobalKeys = true
        }
      }
    
      for (const node of homeNode.consensusNodeForOurNodeFull) {
        uniqueNodes[node.id] = node
      }
    
  // make sure the home node is in there in case we hit an edge case
      uniqueNodes[homeNode.node.id] = homeNode.node
      queueEntry.ourNodeInConsensusGroup = true
      if (uniqueNodes[cycleShardData.ourNode.id] == null) {
        queueEntry.ourNodeInConsensusGroup = false
    /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`queueEntryGetConsensusGroupForAccount not involved: hasNonG:${hasNonGlobalKeys} tx ${queueEntry.logID}`)
      }
    
      // make sure our node is included: needed for gossip! - although we may not care about the data!
      uniqueNodes[cycleShardData.ourNode.id] = cycleShardData.ourNode
    
      const values = Object.values(uniqueNodes)
      for (const v of values) {
        txGroup.push(v)
      }
      return txGroup
    }
/**
 * tellCorrespondingNodes
 * @param queueEntry
 * - sends account data to the correct involved nodes
 * - loads locally available data into the queue entry
 */
    // async tellCorrespondingNodesOld(queueEntry: QueueEntry): Promise<unknown> {
    //   if (this.stateManager.currentCycleShardData == null) {
    //     throw new Error('tellCorrespondingNodes: currentCycleShardData == null')
    //   }
    //   if (queueEntry.uniqueKeys == null) {
    //     throw new Error('tellCorrespondingNodes: queueEntry.uniqueKeys == null')
    //   }
    //   // Report data to corresponding nodes
    //   const ourNodeData = this.stateManager.currentCycleShardData.nodeShardData
    //   // let correspondingEdgeNodes = []
    //   let correspondingAccNodes: Shardus.Node[] = []
    //   const dataKeysWeHave = []
    //   const dataValuesWeHave = []
    //   const datas: { [accountID: string]: Shardus.WrappedResponse } = {}
    //   const remoteShardsByKey: { [accountID: string]: StateManagerTypes.shardFunctionTypes.NodeShardData } = {} // shard homenodes that we do not have the data for.
    //   let loggedPartition = false
    //   for (const key of queueEntry.uniqueKeys) {
    //     ///   test here
    //     // let hasKey = ShardFunctions.testAddressInRange(key, ourNodeData.storedPartitions)
    //     // todo : if this works maybe a nicer or faster version could be used
    //     let hasKey = false
    //     // eslint-disable-next-line security/detect-object-injection
    //     const homeNode = queueEntry.homeNodes[key]
    //     if (homeNode.node.id === ourNodeData.node.id) {
    //       hasKey = true
    //     } else {
    //       //perf todo: this seems like a slow calculation, coult improve this
    //       for (const node of homeNode.nodeThatStoreOurParitionFull) {
    //         if (node.id === ourNodeData.node.id) {
    //           hasKey = true
    //           break
    //         }
    //       }
    //     }
    //
    //     // HOMENODEMATHS tellCorrespondingNodes patch the value of hasKey
    //     // did we get patched in
    //     if (queueEntry.patchedOnNodes.has(ourNodeData.node.id)) {
    //       hasKey = true
    //     }
    //
    //     // for(let patchedNodeID of queueEntry.patchedOnNodes.values()){
    //     // }
    //
    //     let isGlobalKey = false
    //     //intercept that we have this data rather than requesting it.
    //     if (this.stateManager.accountGlobals.isGlobalAccount(key)) {
    //       hasKey = true
    //       isGlobalKey = true
    //       /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('globalAccountMap', queueEntry.logID, `tellCorrespondingNodes - has`)
    //     }
    //
    //     if (hasKey === false) {
    //       if (loggedPartition === false) {
    //         loggedPartition = true
    //         /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`tellCorrespondingNodes hasKey=false: ${utils.stringifyReduce(homeNode.nodeThatStoreOurParitionFull.map((v) => v.id))}`)
    //         /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`tellCorrespondingNodes hasKey=false: full: ${utils.stringifyReduce(homeNode.nodeThatStoreOurParitionFull)}`)
    //       }
    //       /* prettier-ignore */ if (logFlags.verbose) this.mainLogger.debug(`tellCorrespondingNodes hasKey=false  key: ${utils.stringifyReduce(key)}`)
    //     }
    //
    //     if (hasKey) {
    //       // TODO PERF is it possible that this query could be used to update our in memory cache? (this would save us from some slow look ups) later on
    //       //    when checking timestamps.. alternatively maybe there is a away we can note the timestamp with what is returned here in the queueEntry data
    //       //    and not have to deal with the cache.
    //       // todo old: Detect if our node covers this paritition..  need our partition data
    //
    //       this.profiler.profileSectionStart('process_dapp.getRelevantData')
    //       this.profiler.scopedProfileSectionStart('process_dapp.getRelevantData')
    //       /* prettier-ignore */ this.setDebugLastAwaitedCallInner('this.stateManager.transactionQueue.app.getRelevantData old')
    //       let data = await this.app.getRelevantData(
    //         key,
    //         queueEntry.acceptedTx.data,
    //         queueEntry.acceptedTx.appData
    //       )
    //       /* prettier-ignore */ this.setDebugLastAwaitedCallInner('this.stateManager.transactionQueue.app.getRelevantData old', DebugComplete.Completed)
    //       this.profiler.scopedProfileSectionEnd('process_dapp.getRelevantData')
    //       this.profiler.profileSectionEnd('process_dapp.getRelevantData')
    //
    //       //only queue this up to share if it is not a global account. global accounts dont need to be shared.
    //
    //       // not sure if it is correct to update timestamp like this.
    //       // if(data.timestamp === 0){
    //       //   data.timestamp = queueEntry.acceptedTx.timestamp
    //       // }
    //
    //       //if this is not freshly created data then we need to make a backup copy of it!!
    //       //This prevents us from changing data before the commiting phase
    //       if (data.accountCreated == false) {
    //         data = utils.deepCopy(data)
    //       }
    //
    //       if (isGlobalKey === false) {
    //         // eslint-disable-next-line security/detect-object-injection
    //         datas[key] = data
    //         dataKeysWeHave.push(key)
    //         dataValuesWeHave.push(data)
    //       }
    //
    //       // eslint-disable-next-line security/detect-object-injection
    //       queueEntry.localKeys[key] = true
    //       // add this data to our own queue entry!!
    //       this.queueEntryAddData(queueEntry, data)
    //     } else {
    //       // eslint-disable-next-line security/detect-object-injection
    //       remoteShardsByKey[key] = queueEntry.homeNodes[key]
    //     }
    //   }
    //   if (queueEntry.globalModification === true) {
    //     /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('tellCorrespondingNodes', queueEntry.logID, `tellCorrespondingNodes - globalModification = true, not telling other nodes`)
    //     return
    //   }
    //
    //   // if we are in the execution shard no need to forward data
    //   // This is because other nodes will not expect pre-apply data anymore (but they will send us their pre apply data)
    //   if (
    //     queueEntry.globalModification === false &&
    //     this.executeInOneShard &&
    //     queueEntry.isInExecutionHome === true
    //   ) {
    //     //will this break things..
    //     /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('tellCorrespondingNodes', queueEntry.logID, `tellCorrespondingNodes - isInExecutionHome = true, not telling other nodes`)
    //     return
    //   }
    //
    //   let message: { stateList: Shardus.WrappedResponse[]; txid: string }
    //   let edgeNodeIds = []
    //   let consensusNodeIds = []
    //
    //   const nodesToSendTo: StringNodeObjectMap = {}
    //   const doOnceNodeAccPair = new Set<string>() //can skip  node+acc if it happens more than once.
    //
    //   for (const key of queueEntry.uniqueKeys) {
    //     // eslint-disable-next-line security/detect-object-injection
    //     if (datas[key] != null) {
    //       for (const key2 of queueEntry.uniqueKeys) {
    //         if (key !== key2) {
    //           // eslint-disable-next-line security/detect-object-injection
    //           const localHomeNode = queueEntry.homeNodes[key]
    //           // eslint-disable-next-line security/detect-object-injection
    //           const remoteHomeNode = queueEntry.homeNodes[key2]
    //
    //           // //can ignore nodes not in the execution group since they will not be running apply
    //           // if(this.executeInOneShard && (queueEntry.executionIdSet.has(remoteHomeNode.node.id) === false)){
    //           //   continue
    //           // }
    //
    //           const ourLocalConsensusIndex = localHomeNode.consensusNodeForOurNodeFull.findIndex(
    //             (a) => a.id === ourNodeData.node.id
    //           )
    //           if (ourLocalConsensusIndex === -1) {
    //             continue
    //           }
    //
    //           edgeNodeIds = []
    //           consensusNodeIds = []
    //           correspondingAccNodes = []
    //
    //           // must add one to each lookup index!
    //           const indicies = ShardFunctions.debugFastStableCorrespondingIndicies(
    //             localHomeNode.consensusNodeForOurNodeFull.length,
    //             remoteHomeNode.consensusNodeForOurNodeFull.length,
    //             ourLocalConsensusIndex + 1
    //           )
    //           const edgeIndicies = ShardFunctions.debugFastStableCorrespondingIndicies(
    //             localHomeNode.consensusNodeForOurNodeFull.length,
    //             remoteHomeNode.edgeNodes.length,
    //             ourLocalConsensusIndex + 1
    //           )
    //
    //           let patchIndicies = []
    //           if (remoteHomeNode.patchedOnNodes.length > 0) {
    //             patchIndicies = ShardFunctions.debugFastStableCorrespondingIndicies(
    //               localHomeNode.consensusNodeForOurNodeFull.length,
    //               remoteHomeNode.patchedOnNodes.length,
    //               ourLocalConsensusIndex + 1
    //             )
    //           }
    //
    //           // HOMENODEMATHS need to work out sending data to our patched range.
    //           // let edgeIndicies = ShardFunctions.debugFastStableCorrespondingIndicies(localHomeNode.consensusNodeForOurNodeFull.length, remoteHomeNode.edgeNodes.length, ourLocalConsensusIndex + 1)
    //
    //           // for each remote node lets save it's id
    //           for (const index of indicies) {
    //             const node = remoteHomeNode.consensusNodeForOurNodeFull[index - 1] // fastStableCorrespondingIndicies is one based so adjust for 0 based array
    //             if (node != null && node.id !== ourNodeData.node.id) {
    //               nodesToSendTo[node.id] = node
    //               consensusNodeIds.push(node.id)
    //             }
    //           }
    //           for (const index of edgeIndicies) {
    //             const node = remoteHomeNode.edgeNodes[index - 1] // fastStableCorrespondingIndicies is one based so adjust for 0 based array
    //             if (node != null && node.id !== ourNodeData.node.id) {
    //               nodesToSendTo[node.id] = node
    //               edgeNodeIds.push(node.id)
    //             }
    //           }
    //
    //           for (const index of patchIndicies) {
    //             const node = remoteHomeNode.patchedOnNodes[index - 1] // fastStableCorrespondingIndicies is one based so adjust for 0 based array; patchIndicies were computed against patchedOnNodes, not edgeNodes
    //             if (node != null && node.id !== ourNodeData.node.id) {
    //               nodesToSendTo[node.id] = node
    //               //edgeNodeIds.push(node.id)
    //             }
    //           }
    //
    //           const dataToSend: Shardus.WrappedResponse[] = []
    //           // eslint-disable-next-line security/detect-object-injection
    //           dataToSend.push(datas[key]) // only sending just this one key at a time
    //
    //           // sign each account data (reassign by index so the signed copy replaces the original)
    //           for (let i = 0; i < dataToSend.length; i++) {
    //             dataToSend[i] = this.crypto.sign(dataToSend[i])
    //           }
    //
    //           message = { stateList: dataToSend, txid: queueEntry.acceptedTx.txId }
    //
    //           //correspondingAccNodes = Object.values(nodesToSendTo)
    //
    //           //build correspondingAccNodes, but filter out nodeid, account key pairs we have seen before
    //           for (const [accountID, node] of Object.entries(nodesToSendTo)) {
    //             const keyPair = accountID + key
    //             if (node != null && doOnceNodeAccPair.has(keyPair) === false) {
    //               doOnceNodeAccPair.add(keyPair)
    //
    //               // consider this optimization later (should make it so we only send to execution set nodes)
    //               // if(queueEntry.executionIdSet.has(remoteHomeNode.node.id) === true){
    //               //   correspondingAccNodes.push(node)
    //               // }
    //               correspondingAccNodes.push(node)
    //             }
    //           }
    //
    //           if (correspondingAccNodes.length > 0) {
    //             const remoteRelation = ShardFunctions.getNodeRelation(
    //               remoteHomeNode,
    //               this.stateManager.currentCycleShardData.ourNode.id
    //             )
    //             const localRelation = ShardFunctions.getNodeRelation(
    //               localHomeNode,
    //               this.stateManager.currentCycleShardData.ourNode.id
    //             )
    //             /* prettier-ignore */ if (logFlags.playback) this.logger.playbackLogNote('shrd_tellCorrespondingNodes', `${queueEntry.acceptedTx.txId}`, `remoteRel: ${remoteRelation} localRel: ${localRelation} qId: ${queueEntry.entryID} AccountBeingShared: ${utils.makeShortHash(key)} EdgeNodes:${utils.stringifyReduce(edgeNodeIds)} ConsensusNodes${utils.stringifyReduce(consensusNodeIds)}`)
    //
    //             // Filter nodes before we send tell()
    //             const filteredNodes = this.stateManager.filterValidNodesForInternalMessage(
    //               correspondingAccNodes,
    //               'tellCorrespondingNodes',
    //               true,
    //               true
    //             )
    //             if (filteredNodes.length === 0) {
    //               /* prettier-ignore */ if (logFlags.error) this.mainLogger.error('tellCorrespondingNodes: filterValidNodesForInternalMessage no valid nodes left to try')
    //               return null
    //             }
    //             const filterdCorrespondingAccNodes = filteredNodes
    //
    //             this.broadcastState(filterdCorrespondingAccNodes, message)
    //           }
    //         }
    //       }
    //     }
    //   }
    // }
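The commented-out legacy path above hinges on `ShardFunctions.debugFastStableCorrespondingIndicies`, which maps our one-based position in the local consensus group to the one-based positions in a remote group that we are responsible for telling. The real implementation is not part of this diff; the sketch below is a hypothetical illustration of such a stable mapping (the function name and the even-spread strategy are assumptions), and it shows why every call site subtracts one before indexing the zero-based node arrays.

```typescript
// Hypothetical sketch: map our 1-based index in a local group of size
// localCount to the 1-based indices it "corresponds" to in a remote
// group of size remoteCount, spreading senders evenly so that the union
// over all local indices covers the whole remote group.
function correspondingIndices(localCount: number, remoteCount: number, ourIndex: number): number[] {
  const result: number[] = []
  if (remoteCount >= localCount) {
    // each local node covers a contiguous slice of the remote group
    const start = Math.floor(((ourIndex - 1) * remoteCount) / localCount)
    const end = Math.floor((ourIndex * remoteCount) / localCount)
    for (let j = start; j < end; j++) result.push(j + 1) // 1-based
  } else {
    // more local than remote nodes: several locals share one remote target
    result.push(Math.floor(((ourIndex - 1) * remoteCount) / localCount) + 1)
  }
  return result
}
```

With 3 local and 6 remote nodes, local index 1 maps to remote indices `[1, 2]`, index 2 to `[3, 4]`, and index 3 to `[5, 6]`; every node looked up as `remoteHomeNode.consensusNodeForOurNodeFull[index - 1]` then lands on a distinct zero-based slot.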
    
    async broadcastState(
      nodes: Shardus.Node[],
      message: { stateList: Shardus.WrappedResponse[]; txid: string },
      context: string
    ): Promise<void> {
      // if (this.config.p2p.useBinarySerializedEndpoints && this.config.p2p.broadcastStateBinary) {
      // convert legacy message to binary supported type
      const request = message as BroadcastStateReq
      if (logFlags.seqdiagram) {
        for (const node of nodes) {
          const label =
            context === 'tellCorrespondingNodes' ? 'broadcast_state_nodes' : 'broadcast_state_neighbour'
          /* prettier-ignore */ this.seqLogger.info(`0x53455102 ${shardusGetTime()} tx:${message.txid} ${NodeList.activeIdToPartition.get(Self.id)}-->>${NodeList.activeIdToPartition.get(node.id)}: ${label}`)
        }
      }
      this.p2p.tellBinary<BroadcastStateReq>(
        nodes,
        InternalRouteEnum.binary_broadcast_state,
        request,
        serializeBroadcastStateReq,
        {
          verification_data: verificationDataCombiner(
            message.txid,
            message.stateList.length.toString(),
            request.stateList[0].accountId
          ),
        }
      )
      // r...
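The `verification_data` option passed to `tellBinary` above combines the txid, the `stateList` length, and the first account id into a single string via `verificationDataCombiner`. That helper's exact format is not shown in this diff; a minimal sketch of the idea (the delimiter, function names, and reversible-join design are all assumptions) might look like:

```typescript
// Hypothetical sketch only: the real verificationDataCombiner's format is
// not part of this diff. The idea is a reversible join so the receiving
// endpoint can cheaply pre-validate the payload before deserializing it.
const VERIFY_DELIM = ':'

function combineVerificationData(...parts: string[]): string {
  return parts.join(VERIFY_DELIM)
}

function splitVerificationData(data: string): string[] {
  return data.split(VERIFY_DELIM)
}

// e.g. combineVerificationData(txid, stateList.length.toString(), firstAccountId)
```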

    @mhanson-github (Contributor) left a comment:

    Been testing today. Pretty good.
