@mssabr01 mssabr01 commented Jun 20, 2025

PR Type

Enhancement, Tests


Description

  • Introduced receipt signature compression/decompression for storage optimization

  • Added new nodes table for mapping public keys to node IDs

  • Implemented in-memory LRU cache for node mappings

  • Added comprehensive tests for receipt signature transformer logic
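
The compression idea above can be sketched in a few lines (type and helper names here are illustrative, not taken from the PR): each signature's long hex public key is swapped for a small integer node ID, with the key-to-ID mapping held in the new nodes table.

```typescript
// Illustrative sketch only: in the real PR the public_key -> node_id mapping
// is backed by the nodes table plus an LRU cache.
type OriginalSignature = { owner: string; sig: string }
type CompressedSignature = { id: number; sig: string }

function compressPack(
  pack: OriginalSignature[],
  nodeIds: Map<string, number> // public_key -> node_id mapping
): CompressedSignature[] {
  // Replace each ~64-char owner key with its compact integer ID
  return pack.map((s) => ({ id: nodeIds.get(s.owner) ?? 0, sig: s.sig }))
}

const nodeIds = new Map([['a'.repeat(64), 1]])
const compressed = compressPack([{ owner: 'a'.repeat(64), sig: 'deadbeef' }], nodeIds)
console.log(JSON.stringify(compressed)) // → [{"id":1,"sig":"deadbeef"}]
```

Storing a small integer instead of a 64-character key per signature is where the storage saving comes from.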


Changes walkthrough 📝

Relevant files

Enhancement

  Config.ts: Add receipt signature optimization config options
  src/Config.ts
  • Added receiptSignatureOptimization config option with sub-options.
  • Set default values for signature optimization in the config object.
  +10/-0

  index.ts: Add nodes table for signature optimization
  src/dbstore/index.ts
  • Created new nodes table in the receipt database for signature optimization.
  • Added an index on public_key in the nodes table.
  +10/-0

  receipts.ts: Integrate signature compression/decompression in receipt storage
  src/dbstore/receipts.ts
  • Integrated signature compression before inserting receipts.
  • Integrated signature decompression when querying receipts.
  • Refactored deserialization to support async decompression.
  • Updated queries to handle the new compressed signature format.
  +40/-28

  receiptTransformer.ts: Implement receipt signature compression and node mapping
  src/middleware/receiptTransformer.ts
  • Implemented signature compression by mapping public keys to node IDs.
  • Implemented decompression by restoring public keys from node IDs.
  • Added batch operations and an in-memory LRU cache for node mappings.
  • Provided cache management and preload utilities.
  +376/-0

  server.ts: Preload node cache on server startup
  src/server.ts
  • Preloaded the node cache at server startup if optimization is enabled.
  • Dynamically imported and invoked the cache preload utility.
  +5/-0

Tests

  receiptTransformer.test.ts: Add tests for receipt signature transformer logic
  src/middleware/tests/receiptTransformer.test.ts
  • Added comprehensive tests for signature compression/decompression.
  • Tested cache behavior and batch operations.
  • Mocked database and logger dependencies for isolation.
  +332/-0

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🏅 Score: 91
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Possible Data Consistency Issues

    The node ID to public key mapping logic in the LRU cache and batch database operations should be carefully reviewed for race conditions or possible mismatches, especially under concurrent receipt insertions or cache evictions.

    class NodeCache {
      private cache: Map<string, number> = new Map()
      private reverseCache: Map<number, string> = new Map()
      private maxSize: number = 10000
    
      constructor(maxSize: number = 10000) {
        this.maxSize = maxSize
      }
    
      set(publicKey: string, nodeId: number): void {
        // Remove oldest entry if cache is full
        if (this.cache.size >= this.maxSize && !this.cache.has(publicKey)) {
          const firstKey = this.cache.keys().next().value
          const firstId = this.cache.get(firstKey)
          this.cache.delete(firstKey)
          if (firstId !== undefined) this.reverseCache.delete(firstId) // a node_id of 0 is falsy but still a valid mapping
        }
    
        this.cache.set(publicKey, nodeId)
        this.reverseCache.set(nodeId, publicKey)
      }
    
      getNodeId(publicKey: string): number | undefined {
        const nodeId = this.cache.get(publicKey)
        if (nodeId !== undefined) {
          // Move to end (LRU behavior)
          this.cache.delete(publicKey)
          this.cache.set(publicKey, nodeId)
        }
        return nodeId
      }
    
      getPublicKey(nodeId: number): string | undefined {
        return this.reverseCache.get(nodeId)
      }
    
      clear(): void {
        this.cache.clear()
        this.reverseCache.clear()
      }
    }
    
    // Global node cache instance
    const nodeCache = new NodeCache()
    
    // Promisified wrappers around sqlite3's callback-style get/run/all methods
    const dbGet = (db: Database, sql: string, params: any[]) => {
      return new Promise<any>((resolve, reject) => {
        db.get(sql, params, (err, row) => {
          if (err) reject(err)
          else resolve(row)
        })
      })
    }
    
    const dbRun = (db: Database, sql: string, params: any[]) => {
      return new Promise<void>((resolve, reject) => {
        db.run(sql, params, function(err) {
          if (err) reject(err)
          else resolve()
        })
      })
    }
    
    const dbAll = (db: Database, sql: string, params: any[]) => {
      return new Promise<any[]>((resolve, reject) => {
        db.all(sql, params, (err, rows) => {
          if (err) reject(err)
          else resolve(rows)
        })
      })
    }
    
    /**
     * Get or create node ID for a public key
     */
    async function getOrCreateNodeId(publicKey: string): Promise<number> {
      // Check cache first
      const cachedId = nodeCache.getNodeId(publicKey)
      if (cachedId !== undefined) {
        return cachedId
      }
    
      try {
        // Check database
        const row = await dbGet(receiptDatabase, 'SELECT node_id FROM nodes WHERE public_key = ?', [publicKey])
    
        if (row) {
          nodeCache.set(publicKey, row.node_id)
          return row.node_id
        }
    
        // Insert new node
        const firstSeen = Date.now()
        await dbRun(receiptDatabase, 'INSERT INTO nodes (public_key, first_seen) VALUES (?, ?)', [publicKey, firstSeen])
    
        // Get the inserted ID
        const newRow = await dbGet(receiptDatabase, 'SELECT node_id FROM nodes WHERE public_key = ?', [publicKey])
        if (newRow) {
          nodeCache.set(publicKey, newRow.node_id)
          return newRow.node_id
        }
    
        throw new Error('Failed to create node mapping')
      } catch (error) {
        Logger.mainLogger.error('Error in getOrCreateNodeId:', error)
        throw error
      }
    }
    
    /**
     * Get public key for a node ID
     */
    async function getPublicKeyForNodeId(nodeId: number): Promise<string | null> {
      // Check cache first
      const cachedKey = nodeCache.getPublicKey(nodeId)
      if (cachedKey !== undefined) {
        return cachedKey
      }
    
      try {
        const row = await dbGet(receiptDatabase, 'SELECT public_key FROM nodes WHERE node_id = ?', [nodeId])
    
        if (row) {
          nodeCache.set(row.public_key, nodeId)
          return row.public_key
        }
    
        return null
      } catch (error) {
        Logger.mainLogger.error('Error in getPublicKeyForNodeId:', error)
        return null
      }
    }
    
    /**
     * Batch get or create node IDs for multiple public keys
     */
    async function batchGetOrCreateNodeIds(publicKeys: string[]): Promise<Map<string, number>> {
      const result = new Map<string, number>()
      const uncachedKeys: string[] = []
    
      // Check cache first
      for (const publicKey of publicKeys) {
        const cachedId = nodeCache.getNodeId(publicKey)
        if (cachedId !== undefined) {
          result.set(publicKey, cachedId)
        } else {
          uncachedKeys.push(publicKey)
        }
      }
    
      if (uncachedKeys.length === 0) {
        return result
      }
    
      try {
        // Batch query the database for the remaining uncached keys
        const placeholders = uncachedKeys.map(() => '?').join(',')
        const rows = await dbAll(
          receiptDatabase, 
          `SELECT node_id, public_key FROM nodes WHERE public_key IN (${placeholders})`,
          uncachedKeys
        )
    
        // Process existing nodes
        const existingKeys = new Set<string>()
        for (const row of rows) {
          result.set(row.public_key, row.node_id)
          nodeCache.set(row.public_key, row.node_id)
          existingKeys.add(row.public_key)
        }
    
        // Insert new nodes
        const newKeys = uncachedKeys.filter(key => !existingKeys.has(key))
        if (newKeys.length > 0) {
          const firstSeen = Date.now()
          const insertPlaceholders = newKeys.map(() => '(?, ?)').join(',')
          const insertParams = newKeys.flatMap(key => [key, firstSeen])
          await dbRun(receiptDatabase, `INSERT INTO nodes (public_key, first_seen) VALUES ${insertPlaceholders}`, insertParams)
    
          // Query newly inserted nodes
          const newPlaceholders = newKeys.map(() => '?').join(',')
          const newRows = await dbAll(
            receiptDatabase,
            `SELECT node_id, public_key FROM nodes WHERE public_key IN (${newPlaceholders})`,
            newKeys
          )
    
          for (const row of newRows) {
            result.set(row.public_key, row.node_id)
            nodeCache.set(row.public_key, row.node_id)
          }
        }
    
        return result
      } catch (error) {
        Logger.mainLogger.error('Error in batchGetOrCreateNodeIds:', error)
        throw error
      }
    }
    
    /**
     * Compress signatures in a receipt by replacing public keys with node IDs
     */
    export async function compressReceiptSignatures(receipt: ArchiverReceipt): Promise<ArchiverReceipt> {
      if (!config.receiptSignatureOptimization?.enabled) {
        return receipt
      }
    
      try {
        const signedReceipt = receipt.signedReceipt as SignedReceipt
    
        // Skip if already compressed or no signature pack
        if (!signedReceipt.signaturePack || signedReceipt.signaturePack.length === 0) {
          return receipt
        }
    
        // Check if already compressed
        if ((signedReceipt as any)._compressed === true) {
          return receipt
        }
    
        // Extract all public keys
        const publicKeys = signedReceipt.signaturePack.map(sig => sig.owner)
    
        // Batch get/create node IDs
        const nodeIdMap = await batchGetOrCreateNodeIds(publicKeys)
    
        // Create compressed signature pack
        const compressedSignatures: CompressedSignature[] = signedReceipt.signaturePack.map(sig => ({
          id: nodeIdMap.get(sig.owner) || 0,
          sig: sig.sig
        }))
    
        // Create compressed receipt
        const compressedReceipt = {
          ...receipt,
          signedReceipt: {
            ...signedReceipt,
            signaturePack: compressedSignatures as any,
            _compressed: true
          }
        }
    
        return compressedReceipt
      } catch (error) {
        Logger.mainLogger.error('Error compressing receipt signatures:', error)
        // Return original receipt on error
        return receipt
      }
    }
    
    /**
     * Decompress signatures in a receipt by replacing node IDs with public keys
     */
    export async function decompressReceiptSignatures(receipt: ArchiverReceipt): Promise<ArchiverReceipt> {
      if (!config.receiptSignatureOptimization?.enabled) {
        return receipt
      }
    
      try {
        const signedReceipt = receipt.signedReceipt as any
    
        // Skip if not compressed or no signature pack
        if (!signedReceipt._compressed || !signedReceipt.signaturePack || signedReceipt.signaturePack.length === 0) {
          return receipt
        }
    
        // Batch get public keys for all node IDs
        const nodeIds = signedReceipt.signaturePack.map((sig: CompressedSignature) => sig.id)
        const uniqueNodeIds = [...new Set(nodeIds)]
    
        // Batch query
        if (uniqueNodeIds.length === 0) {
          return receipt
        }
        const placeholders = uniqueNodeIds.map(() => '?').join(',')
        const rows = await dbAll(
          receiptDatabase,
          `SELECT node_id, public_key FROM nodes WHERE node_id IN (${placeholders})`,
          uniqueNodeIds
        )
    
        // Create lookup map
        const nodeIdToPublicKey = new Map<number, string>()
        for (const row of rows) {
          nodeIdToPublicKey.set(row.node_id, row.public_key)
          nodeCache.set(row.public_key, row.node_id)
        }
    
        // Create decompressed signature pack
        const decompressedSignatures: OriginalSignature[] = signedReceipt.signaturePack.map((sig: CompressedSignature) => ({
          owner: nodeIdToPublicKey.get(sig.id) || '',
          sig: sig.sig
        }))
    
        // Remove _compressed flag and restore original structure
        const { _compressed, ...cleanSignedReceipt } = signedReceipt
    
        const decompressedReceipt = {
          ...receipt,
          signedReceipt: {
            ...cleanSignedReceipt,
            signaturePack: decompressedSignatures
          }
        }
    
        return decompressedReceipt
      } catch (error) {
        Logger.mainLogger.error('Error decompressing receipt signatures:', error)
        // Return original receipt on error
        return receipt
      }
    }
    
    /**
     * Preload active nodes into cache for better performance
     */
    export async function preloadNodeCache(): Promise<void> {
      if (!config.receiptSignatureOptimization?.enabled) {
        return
      }
    
      try {
        // Load most recent nodes (adjust limit based on network size)
        const rows = await dbAll(
          receiptDatabase,
          'SELECT node_id, public_key FROM nodes ORDER BY first_seen DESC LIMIT ?',
          [10000]
        )
    
        nodeCache.clear()
        for (const row of rows) {
          nodeCache.set(row.public_key, row.node_id)
        }
    
        Logger.mainLogger.info(`Preloaded ${rows.length} nodes into cache`)
      } catch (error) {
        Logger.mainLogger.error('Error preloading node cache:', error)
      }
    }
    
    /**
     * Clear the node cache
     */
    export function clearNodeCache(): void {
      nodeCache.clear()
    }
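
    One concrete way the race can surface: getOrCreateNodeId does a SELECT and, on a miss, an INSERT. The simulation below (an in-memory stand-in, not the PR's sqlite layer) shows two concurrent callers both missing the SELECT and both attempting the INSERT, so one of them hits the UNIQUE constraint:

```typescript
// In-memory stand-in for the nodes table (illustration only), demonstrating
// the check-then-insert race in a getOrCreateNodeId-shaped function.
type Row = { node_id: number; public_key: string }

class FakeNodesTable {
  private rows = new Map<string, Row>()
  private nextId = 1
  insertAttempts = 0

  async get(publicKey: string): Promise<Row | undefined> {
    await Promise.resolve() // yield to the event loop, like a real async driver
    return this.rows.get(publicKey)
  }

  async insert(publicKey: string): Promise<void> {
    await Promise.resolve()
    this.insertAttempts++
    if (this.rows.has(publicKey)) throw new Error('UNIQUE constraint failed: nodes.public_key')
    this.rows.set(publicKey, { node_id: this.nextId++, public_key: publicKey })
  }
}

// Mirrors the shape of getOrCreateNodeId: SELECT first, INSERT on a miss.
async function naiveGetOrCreate(db: FakeNodesTable, publicKey: string): Promise<number> {
  const existing = await db.get(publicKey)
  if (existing) return existing.node_id
  await db.insert(publicKey) // throws if a concurrent caller won the race
  return (await db.get(publicKey))!.node_id
}

async function demo(): Promise<{ attempts: number; statuses: string[] }> {
  const db = new FakeNodesTable()
  const results = await Promise.allSettled([
    naiveGetOrCreate(db, 'pk1'),
    naiveGetOrCreate(db, 'pk1'),
  ])
  return { attempts: db.insertAttempts, statuses: results.map((r) => r.status) }
}

demo().then((r) => console.log(r.attempts, r.statuses.join(','))) // both callers attempt the insert; one rejects
```

    Making the INSERT idempotent (e.g. SQLite's `INSERT OR IGNORE`, or `ON CONFLICT DO NOTHING`) and keeping the re-SELECT the PR already performs would let both callers converge on the same node_id instead of one failing.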
    Backward Compatibility

    The new receipt signature compression/decompression logic may affect how receipts are stored and retrieved. Ensure that receipts written before this change are still readable and that the deserialization logic handles both compressed and uncompressed formats gracefully.

    export async function insertReceipt(receipt: Receipt, storeCheckpoints: boolean = true): Promise<void> {
      try {
        if (storeCheckpoints && config.checkpoint.bucketConfig.allowCheckpointUpdates) {
          // Create checkpoint for receipt
          const checkpointData = new ReceiptCheckpointData(receipt)
          const bucketID = calculateBucketID(receipt)
          receiptCheckpointManager.addData(checkpointData, bucketID)
        }
    
        // Compress receipt signatures if optimization is enabled
        const processedReceipt = await compressReceiptSignatures(receipt)
    
        // Define the columns to match the database schema
        const columns = [
          'receiptId',
          'tx',
          'cycle',
          'applyTimestamp',
          'timestamp',
          'signedReceipt',
          'afterStates',
          'beforeStates',
          'appReceiptData',
          'executionShardKey',
          'globalModification',
        ]
    
        // Create placeholders for the values
        const placeholders = `(${columns.map(() => '?').join(', ')})`
        const sql = `INSERT OR REPLACE INTO receipts (${columns.join(', ')}) VALUES ${placeholders}`
    
        // Map the receipt object to match the columns
        const values = columns.map((column) =>
          typeof processedReceipt[column] === 'object'
            ? SerializeToJsonString(processedReceipt[column]) // Serialize objects to JSON strings
            : processedReceipt[column]
        )
    
        // Execute the query directly
        await db.run(receiptDatabase, sql, values)
    
        if (storeCheckpoints && config.checkpoint.bucketConfig.allowCheckpointUpdates) {
          await bulkUpdateCheckpointStatusField(CheckpointStatusType.RECEIPT, true, undefined, undefined, [receipt.cycle])
        }
    
        if (config.VERBOSE) {
          Logger.mainLogger.debug('Successfully inserted Receipt', receipt.receiptId)
        }
      } catch (err) {
        Logger.mainLogger.error(err)
        Logger.mainLogger.error('Unable to insert Receipt or it is already stored in the database', receipt.receiptId)
      }
    }
    
    export async function bulkInsertReceipts(receipts: Receipt[], storeCheckpoints: boolean = true): Promise<void> {
      try {
        if (storeCheckpoints && config.checkpoint.bucketConfig.allowCheckpointUpdates) {
          // Create checkpoints for all receipts
          for (const receipt of receipts) {
            const checkpointData = new ReceiptCheckpointData(receipt)
            const bucketID = calculateBucketID(receipt)
            receiptCheckpointManager.addData(checkpointData, bucketID)
          }
        }
    
        // Compress all receipts if optimization is enabled
        const processedReceipts = await Promise.all(
          receipts.map(receipt => compressReceiptSignatures(receipt))
        )
    
        // Define the table columns based on schema
        const columns = [
          'receiptId',
          'tx',
          'cycle',
          'applyTimestamp',
          'timestamp',
          'signedReceipt',
          'afterStates',
          'beforeStates',
          'appReceiptData',
          'executionShardKey',
          'globalModification',
        ]
    
        // Construct the SQL query with placeholders
        const placeholders = processedReceipts.map(() => `(${columns.map(() => '?').join(', ')})`).join(', ')
        const sql = `INSERT OR REPLACE INTO receipts (${columns.join(', ')}) VALUES ${placeholders}`
    
        // Flatten the `receipts` array into a single list of values
        const values = processedReceipts.flatMap((receipt) =>
          columns.map((column) =>
            typeof receipt[column] === 'object'
              ? SerializeToJsonString(receipt[column]) // Serialize objects to JSON
              : receipt[column]
          )
        )
    
        // Execute the query in a single call
        await db.run(receiptDatabase, sql, values)
    
        if (storeCheckpoints && config.checkpoint.bucketConfig.allowCheckpointUpdates) {
          const receiptsToUpdate = receipts.map((receipt) => receipt.cycle)
          await bulkUpdateCheckpointStatusField(
            CheckpointStatusType.RECEIPT,
            State.isSyncing,
            undefined,
            undefined,
            receiptsToUpdate
          )
        }
    
        if (config.VERBOSE) {
          Logger.mainLogger.debug('Successfully inserted Receipts', receipts.length)
        }
      } catch (err) {
        Logger.mainLogger.error(err)
        Logger.mainLogger.error('Unable to bulk insert Receipts', receipts.length)
      }
    }
    
    export async function queryReceiptByReceiptId(receiptId: string, timestamp = 0): Promise<Receipt> {
      try {
        const sql = `SELECT * FROM receipts WHERE receiptId=?` + (timestamp ? ` AND timestamp=?` : '')
        const value = timestamp ? [receiptId, timestamp] : [receiptId]
        const receipt = (await db.get(receiptDatabase, sql, value)) as DbReceipt
        if (receipt) {
          return await deserializeDbReceipt(receipt)
        }
        return null
      } catch (e) {
        Logger.mainLogger.error(e)
        return null
      }
    }
    
    export async function queryLatestReceipts(count: number): Promise<Receipt[]> {
      if (!Number.isInteger(count)) {
        Logger.mainLogger.error('queryLatestReceipts - Invalid count value')
        return null
      }
      try {
        const sql = `SELECT * FROM receipts ORDER BY cycle DESC, timestamp DESC LIMIT ${count ? count : 100}`
        const receipts = (await db.all(receiptDatabase, sql)) as DbReceipt[]
        if (receipts.length > 0) {
          const deserializedReceipts = await Promise.all(receipts.map((receipt: DbReceipt) => deserializeDbReceipt(receipt)))
          return deserializedReceipts
        }
        return receipts
      } catch (e) {
        Logger.mainLogger.error(e)
        return null
      }
    }
    
    export async function queryReceipts(skip = 0, limit = 10000): Promise<Receipt[]> {
      let receipts: Receipt[] = []
      if (!Number.isInteger(skip) || !Number.isInteger(limit)) {
        Logger.mainLogger.error('queryReceipts - Invalid skip or limit')
        return receipts
      }
      try {
        const sql = `SELECT * FROM receipts ORDER BY cycle ASC, timestamp ASC LIMIT ${limit} OFFSET ${skip}`
        const dbReceipts = (await db.all(receiptDatabase, sql)) as DbReceipt[]
        if (dbReceipts.length > 0) {
          receipts = await Promise.all(dbReceipts.map((receipt: DbReceipt) => deserializeDbReceipt(receipt)))
        }
      } catch (e) {
        Logger.mainLogger.error(e)
      }
      if (config.VERBOSE) {
        Logger.mainLogger.debug('Receipt receipts', receipts ? receipts.length : receipts, 'skip', skip)
      }
      return receipts
    }
    
    export async function queryReceiptCount(): Promise<number> {
      let receipts
      try {
        const sql = `SELECT COUNT(*) FROM receipts`
        receipts = await db.get(receiptDatabase, sql, [])
      } catch (e) {
        Logger.mainLogger.error(e)
      }
      if (config.VERBOSE) {
        Logger.mainLogger.debug('Receipt count', receipts)
      }
      if (receipts) receipts = receipts['COUNT(*)']
      else receipts = 0
      return receipts
    }
    
    export async function queryReceiptCountByCycles(start: number, end: number): Promise<ReceiptCount[]> {
      let receiptsCount: ReceiptCount[]
      let dbReceiptsCount: DbReceiptCount[]
      try {
        const sql = `SELECT cycle, COUNT(*) FROM receipts GROUP BY cycle HAVING cycle BETWEEN ? AND ? ORDER BY cycle ASC`
        dbReceiptsCount = (await db.all(receiptDatabase, sql, [start, end])) as DbReceiptCount[]
      } catch (e) {
        Logger.mainLogger.error(e)
      }
      if (config.VERBOSE) {
        Logger.mainLogger.debug('Receipt count by cycle', dbReceiptsCount)
      }
      if (dbReceiptsCount && dbReceiptsCount.length > 0) {
        receiptsCount = dbReceiptsCount.map((dbReceipt) => {
          return {
            cycle: dbReceipt.cycle,
            receiptCount: dbReceipt['COUNT(*)'],
          }
        })
      }
      return receiptsCount
    }
    
    export async function queryReceiptCountBetweenCycles(
      startCycleNumber: number,
      endCycleNumber: number
    ): Promise<number> {
      let receipts
      try {
        const sql = `SELECT COUNT(*) FROM receipts WHERE cycle BETWEEN ? AND ?`
        receipts = await db.get(receiptDatabase, sql, [startCycleNumber, endCycleNumber])
      } catch (e) {
        console.log(e)
      }
      if (config.VERBOSE) {
        Logger.mainLogger.debug('Receipt count between cycles', receipts)
      }
      if (receipts) receipts = receipts['COUNT(*)']
      else receipts = 0
      return receipts
    }
    
    export async function queryReceiptsBetweenCycles(
      skip = 0,
      limit = 10000,
      startCycleNumber: number,
      endCycleNumber: number
    ): Promise<Receipt[]> {
      let receipts: Receipt[] = []
      if (!Number.isInteger(skip) || !Number.isInteger(limit)) {
        Logger.mainLogger.error('queryReceiptsBetweenCycles - Invalid skip or limit')
        return receipts
      }
      try {
        const sql = `SELECT * FROM receipts WHERE cycle BETWEEN ? AND ? ORDER BY cycle ASC, timestamp ASC LIMIT ${limit} OFFSET ${skip}`
        const dbReceipts = (await db.all(receiptDatabase, sql, [startCycleNumber, endCycleNumber])) as DbReceipt[]
        if (dbReceipts.length > 0) {
          receipts = await Promise.all(dbReceipts.map((receipt: DbReceipt) => deserializeDbReceipt(receipt)))
        }
      } catch (e) {
        console.log(e)
      }
      if (config.VERBOSE) {
        Logger.mainLogger.debug('Receipt receipts between cycles', receipts ? receipts.length : receipts, 'skip', skip)
      }
      return receipts
    }
    
    export async function queryInitNetworkReceiptCountBetweenCycles(
      startCycleNumber: number,
      endCycleNumber: number
    ): Promise<number> {
      let count = 0
      try {
        const sql = `
          SELECT * FROM receipts
          WHERE cycle BETWEEN ? AND ?
        `
        const dbReceipts = (await db.all(receiptDatabase, sql, [startCycleNumber, endCycleNumber])) as DbReceipt[]
    
        // Deserialize all receipts first
        const deserializedReceipts = await Promise.all(dbReceipts.map((receipt: DbReceipt) => deserializeDbReceipt(receipt)))
    
        const filtered = deserializedReceipts.filter((receipt: Receipt) => {
          // Inline type for safely accessing internalTXType
          const tx = receipt.tx as {
            originalTxData?: {
              tx?: {
                internalTXType?: number
              }
            }
          }
    
          return tx?.originalTxData?.tx?.internalTXType === 1
        })
    
        count = filtered.length
      } catch (e) {
        console.error('Error counting initNetwork receipts:', e)
      }
    
      if (config.VERBOSE) {
        Logger.mainLogger.debug('InitNetwork receipt count between cycles:', count)
      }
    
      return count
    }
    
    async function deserializeDbReceipt(receipt: DbReceipt): Promise<Receipt> {
      if (receipt.tx) receipt.tx = DeSerializeFromJsonString(receipt.tx)
      if (receipt.beforeStates) receipt.beforeStates = DeSerializeFromJsonString(receipt.beforeStates)
      if (receipt.afterStates) receipt.afterStates = DeSerializeFromJsonString(receipt.afterStates)
      if (receipt.appReceiptData) receipt.appReceiptData = DeSerializeFromJsonString(receipt.appReceiptData)
      if (receipt.signedReceipt) receipt.signedReceipt = DeSerializeFromJsonString(receipt.signedReceipt)
      // globalModification is stored as 0 or 1 in the database, convert it to boolean
      receipt.globalModification = (receipt.globalModification as unknown as number) === 1
    
      // Decompress receipt signatures if optimization is enabled
      const decompressedReceipt = await decompressReceiptSignatures(receipt as Receipt)
    
      // Ensure we return a proper Receipt object with all required fields
      return {
        ...decompressedReceipt,
        receiptId: receipt.receiptId,
        timestamp: receipt.timestamp,
        applyTimestamp: receipt.applyTimestamp
      } as Receipt
    }
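
    For receipts written before this change, the `_compressed` flag will be absent, so decompression must be a no-op for them; the PR's flag check handles that, but a shape-based check is a cheap extra guard against receipts where the flag and the pack format disagree. A sketch (names hypothetical, not from the PR):

```typescript
// Illustration only: classify a signature pack by its shape rather than
// trusting the _compressed flag alone, so pre-change receipts
// ({ owner, sig } entries, no flag) and compressed ones ({ id, sig })
// are both recognized.
type OriginalSignature = { owner: string; sig: string }
type CompressedSignature = { id: number; sig: string }
type AnySignature = OriginalSignature | CompressedSignature

function isCompressedPack(pack: AnySignature[]): pack is CompressedSignature[] {
  // An empty pack needs no decompression either way
  if (pack.length === 0) return false
  const first = pack[0] as Partial<CompressedSignature & OriginalSignature>
  return typeof first.id === 'number' && first.owner === undefined
}

// Old-format receipt, stored before the optimization shipped
const legacyPack: AnySignature[] = [{ owner: 'publicKey1', sig: 'sig1' }]
// New-format receipt, with public keys replaced by node IDs
const compressedPack: AnySignature[] = [{ id: 1, sig: 'sig1' }]

console.log(isCompressedPack(legacyPack), isCompressedPack(compressedPack)) // → false true
```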
    Mock Cleanup

    The test suite uses extensive mocking of database calls. Ensure that all mocks are properly reset between tests to avoid cross-test contamination, and that the tests accurately reflect real-world usage scenarios.
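
    Besides jest.clearAllMocks(), note that beforeEach mutates the shared config import and never restores it, which can leak into other test files run in the same worker. A minimal save-and-restore sketch (the standalone config object and helper names are illustrative):

```typescript
// Illustrative setup/teardown pair: save the pristine config value before a
// test mutates it, restore it afterwards so other test files are unaffected.
type OptConfig = { enabled: boolean; cacheSize: number; batchSize: number }
const config: { receiptSignatureOptimization?: OptConfig } = {}

let savedConfig: OptConfig | undefined

function beforeEachTest(): void {
  // Save whatever was there before the test overwrites it
  savedConfig = config.receiptSignatureOptimization
  config.receiptSignatureOptimization = { enabled: true, cacheSize: 10000, batchSize: 100 }
}

function afterEachTest(): void {
  // Put the original value back, even if it was undefined
  config.receiptSignatureOptimization = savedConfig
}

beforeEachTest()
console.log(config.receiptSignatureOptimization?.enabled) // → true
afterEachTest()
console.log(config.receiptSignatureOptimization) // → undefined
```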

    import { compressReceiptSignatures, decompressReceiptSignatures, clearNodeCache } from '../receiptTransformer'
    import { ArchiverReceipt, SignedReceipt } from '../../dbstore/receipts'
    import { config } from '../../Config'
    import * as db from '../../dbstore/sqlite3storage'
    import { receiptDatabase } from '../../dbstore'
    
    // Mock dependencies
    jest.mock('../../dbstore/sqlite3storage')
    jest.mock('../../Logger', () => ({
      mainLogger: {
        info: jest.fn(),
        error: jest.fn(),
        debug: jest.fn(),
      },
    }))
    
    // Mock database responses
    const mockDbGet = db.get as jest.MockedFunction<typeof db.get>
    const mockDbRun = db.run as jest.MockedFunction<typeof db.run>
    const mockDbAll = db.all as jest.MockedFunction<typeof db.all>
    
    describe('receiptTransformer', () => {
      beforeEach(() => {
        // Clear all mocks
        jest.clearAllMocks()
        clearNodeCache()
    
        // Enable optimization by default
        config.receiptSignatureOptimization = {
          enabled: true,
          cacheSize: 10000,
          batchSize: 100,
        }
      })
    
      afterEach(() => {
        clearNodeCache()
      })
    
      describe('compressReceiptSignatures', () => {
        it('should return receipt unchanged when optimization is disabled', async () => {
          config.receiptSignatureOptimization.enabled = false
    
          const receipt: ArchiverReceipt = {
            tx: { originalTxData: {}, txId: 'test', timestamp: 123 },
            cycle: 1,
            signedReceipt: {
              proposal: {
                applied: true,
                cant_preApply: false,
                accountIDs: [],
                beforeStateHashes: [],
                afterStateHashes: [],
                appReceiptDataHash: 'hash',
                txid: 'test',
              },
              proposalHash: 'hash',
              signaturePack: [
                { owner: 'publicKey1', sig: 'signature1' },
                { owner: 'publicKey2', sig: 'signature2' },
              ],
              voteOffsets: [7, 7],
            } as SignedReceipt,
            globalModification: false,
            appReceiptData: { data: {} },
          }
    
          const result = await compressReceiptSignatures(receipt)
          expect(result).toEqual(receipt)
        })
    
        it('should compress signatures when optimization is enabled', async () => {
          // Mock database responses
          mockDbGet.mockResolvedValueOnce(null) // No existing node for publicKey1
          mockDbRun.mockResolvedValueOnce(undefined) // Insert successful
          mockDbGet.mockResolvedValueOnce({ node_id: 1 }) // Return new node_id for publicKey1
    
          mockDbGet.mockResolvedValueOnce(null) // No existing node for publicKey2
          mockDbRun.mockResolvedValueOnce(undefined) // Insert successful
          mockDbGet.mockResolvedValueOnce({ node_id: 2 }) // Return new node_id for publicKey2
    
          const receipt: ArchiverReceipt = {
            tx: { originalTxData: {}, txId: 'test', timestamp: 123 },
            cycle: 1,
            signedReceipt: {
              proposal: {
                applied: true,
                cant_preApply: false,
                accountIDs: [],
                beforeStateHashes: [],
                afterStateHashes: [],
                appReceiptDataHash: 'hash',
                txid: 'test',
              },
              proposalHash: 'hash',
              signaturePack: [
                { owner: 'publicKey1', sig: 'signature1' },
                { owner: 'publicKey2', sig: 'signature2' },
              ],
              voteOffsets: [7, 7],
            } as SignedReceipt,
            globalModification: false,
            appReceiptData: { data: {} },
          }
    
          const result = await compressReceiptSignatures(receipt)
    
          expect(result.signedReceipt).toHaveProperty('_compressed', true)
          expect((result.signedReceipt as any).signaturePack).toEqual([
            { id: 1, sig: 'signature1' },
            { id: 2, sig: 'signature2' },
          ])
        })
    
        it('should skip compression if receipt already compressed', async () => {
          const receipt: ArchiverReceipt = {
            tx: { originalTxData: {}, txId: 'test', timestamp: 123 },
            cycle: 1,
            signedReceipt: {
              proposal: {
                applied: true,
                cant_preApply: false,
                accountIDs: [],
                beforeStateHashes: [],
                afterStateHashes: [],
                appReceiptDataHash: 'hash',
                txid: 'test',
              },
              proposalHash: 'hash',
              signaturePack: [
                { id: 1, sig: 'signature1' },
                { id: 2, sig: 'signature2' },
              ] as any,
              voteOffsets: [7, 7],
              _compressed: true,
            } as any,
            globalModification: false,
            appReceiptData: { data: {} },
          }
    
          const result = await compressReceiptSignatures(receipt)
          expect(result).toEqual(receipt)
          expect(mockDbGet).not.toHaveBeenCalled()
        })
    
        it('should handle batch compression efficiently', async () => {
          // Mock batch database response
          mockDbAll.mockResolvedValueOnce([
            { node_id: 1, public_key: 'publicKey1' },
          ])
          mockDbRun.mockResolvedValueOnce(undefined) // Batch insert
          mockDbAll.mockResolvedValueOnce([
            { node_id: 2, public_key: 'publicKey2' },
          ])
    
          const receipt: ArchiverReceipt = {
            tx: { originalTxData: {}, txId: 'test', timestamp: 123 },
            cycle: 1,
            signedReceipt: {
              proposal: {
                applied: true,
                cant_preApply: false,
                accountIDs: [],
                beforeStateHashes: [],
                afterStateHashes: [],
                appReceiptDataHash: 'hash',
                txid: 'test',
              },
              proposalHash: 'hash',
              signaturePack: [
                { owner: 'publicKey1', sig: 'signature1' },
                { owner: 'publicKey2', sig: 'signature2' },
              ],
              voteOffsets: [7, 7],
            } as SignedReceipt,
            globalModification: false,
            appReceiptData: { data: {} },
          }
    
          const result = await compressReceiptSignatures(receipt)
    
          expect(result.signedReceipt).toHaveProperty('_compressed', true)
          expect((result.signedReceipt as any).signaturePack).toHaveLength(2)
        })
      })
    
      describe('decompressReceiptSignatures', () => {
        it('should return receipt unchanged when optimization is disabled', async () => {
          config.receiptSignatureOptimization.enabled = false
    
          const receipt: ArchiverReceipt = {
            tx: { originalTxData: {}, txId: 'test', timestamp: 123 },
            cycle: 1,
            signedReceipt: {
              proposal: {
                applied: true,
                cant_preApply: false,
                accountIDs: [],
                beforeStateHashes: [],
                afterStateHashes: [],
                appReceiptDataHash: 'hash',
                txid: 'test',
              },
              proposalHash: 'hash',
              signaturePack: [
                { id: 1, sig: 'signature1' },
                { id: 2, sig: 'signature2' },
              ] as any,
              voteOffsets: [7, 7],
              _compressed: true,
            } as any,
            globalModification: false,
            appReceiptData: { data: {} },
          }
    
          const result = await decompressReceiptSignatures(receipt)
          expect(result).toEqual(receipt)
        })
    
        it('should decompress signatures when optimization is enabled', async () => {
          // Mock database response
          mockDbAll.mockResolvedValueOnce([
            { node_id: 1, public_key: 'publicKey1' },
            { node_id: 2, public_key: 'publicKey2' },
          ])
    
          const receipt: ArchiverReceipt = {
            tx: { originalTxData: {}, txId: 'test', timestamp: 123 },
            cycle: 1,
            signedReceipt: {
              proposal: {
                applied: true,
                cant_preApply: false,
                accountIDs: [],
                beforeStateHashes: [],
                afterStateHashes: [],
                appReceiptDataHash: 'hash',
                txid: 'test',
              },
              proposalHash: 'hash',
              signaturePack: [
                { id: 1, sig: 'signature1' },
                { id: 2, sig: 'signature2' },
              ] as any,
              voteOffsets: [7, 7],
              _compressed: true,
            } as any,
            globalModification: false,
            appReceiptData: { data: {} },
          }
    
          const result = await decompressReceiptSignatures(receipt)
    
          expect(result.signedReceipt).not.toHaveProperty('_compressed')
          expect((result.signedReceipt as SignedReceipt).signaturePack).toEqual([
            { owner: 'publicKey1', sig: 'signature1' },
            { owner: 'publicKey2', sig: 'signature2' },
          ])
        })
    
        it('should skip decompression if receipt not compressed', async () => {
          const receipt: ArchiverReceipt = {
            tx: { originalTxData: {}, txId: 'test', timestamp: 123 },
            cycle: 1,
            signedReceipt: {
              proposal: {
                applied: true,
                cant_preApply: false,
                accountIDs: [],
                beforeStateHashes: [],
                afterStateHashes: [],
                appReceiptDataHash: 'hash',
                txid: 'test',
              },
              proposalHash: 'hash',
              signaturePack: [
                { owner: 'publicKey1', sig: 'signature1' },
                { owner: 'publicKey2', sig: 'signature2' },
              ],
              voteOffsets: [7, 7],
            } as SignedReceipt,
            globalModification: false,
            appReceiptData: { data: {} },
          }
    
          const result = await decompressReceiptSignatures(receipt)
          expect(result).toEqual(receipt)
          expect(mockDbAll).not.toHaveBeenCalled()
        })
      })
    
      describe('cache behavior', () => {
        it('should use cache for repeated operations', async () => {
          // First compression - should hit database
          mockDbGet.mockResolvedValueOnce({ node_id: 1 })
    
          const receipt: ArchiverReceipt = {
            tx: { originalTxData: {}, txId: 'test', timestamp: 123 },
            cycle: 1,
            signedReceipt: {
              proposal: {
                applied: true,
                cant_preApply: false,
                accountIDs: [],
                beforeStateHashes: [],
                afterStateHashes: [],
                appReceiptDataHash: 'hash',
                txid: 'test',
              },
              proposalHash: 'hash',
              signaturePack: [
                { owner: 'publicKey1', sig: 'signature1' },
              ],
              voteOffsets: [7],
            } as SignedReceipt,
            globalModification: false,
            appReceiptData: { data: {} },
          }
    
          // First call - should query database
          await compressReceiptSignatures(receipt)
          expect(mockDbGet).toHaveBeenCalledTimes(1)
    
          // Clear mock calls
          mockDbGet.mockClear()
    
          // Second call - should use cache
          await compressReceiptSignatures(receipt)
          expect(mockDbGet).not.toHaveBeenCalled()
        })
      })
    })

    Comment on lines 256 to 263
         const sql = `SELECT * FROM receipts ORDER BY cycle DESC, timestamp DESC LIMIT ${count ? count : 100}`
         const receipts = (await db.all(receiptDatabase, sql)) as DbReceipt[]
         if (receipts.length > 0) {
    -      receipts.forEach((receipt: DbReceipt) => {
    -        deserializeDbReceipt(receipt)
    -      })
    -    }
    -    if (config.VERBOSE) {
    -      Logger.mainLogger.debug('Receipt latest', receipts)
    +      const deserializedReceipts = await Promise.all(receipts.map((receipt: DbReceipt) => deserializeDbReceipt(receipt)))
    +      return deserializedReceipts
         }
         return receipts
       } catch (e) {
    Suggestion: Returning the raw receipts array when no results are found may cause type inconsistencies, as it is typed as DbReceipt[] but the function promises Receipt[]. Return an empty array instead to maintain type safety. [possible issue, importance: 7]

    New proposed code:
     export async function queryLatestReceipts(count: number): Promise<Receipt[]> {
       if (!Number.isInteger(count)) {
         Logger.mainLogger.error('queryLatestReceipts - Invalid count value')
         return null
       }
       try {
         const sql = `SELECT * FROM receipts ORDER BY cycle DESC, timestamp DESC LIMIT ${count ? count : 100}`
         const receipts = (await db.all(receiptDatabase, sql)) as DbReceipt[]
         if (receipts.length > 0) {
           const deserializedReceipts = await Promise.all(receipts.map((receipt: DbReceipt) => deserializeDbReceipt(receipt)))
           return deserializedReceipts
         }
    -    return receipts
    +    return []
       } catch (e) {
         Logger.mainLogger.error(e)

    kun6fup4nd4 and others added 3 commits June 20, 2025 20:58
    The promisified database helpers were incorrectly implemented.
    promisify expects the callback as the last argument, but the
    wrappers had the callback in the wrong position. This could cause
    unexpected behavior or runtime errors.
    
    Replaced incorrect promisify usage with explicit Promise wrappers
    that properly handle sqlite3's callback pattern.
    
    🤖 Generated with [Claude Code](https://claude.ai/code)
    
    Co-Authored-By: Claude <[email protected]>
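
    The fix described in this commit can be sketched roughly as follows. This is a minimal illustration, not the PR's actual helpers: `fakeDb` stands in for a sqlite3 `Database`, and `allAsync` is a hypothetical name. The key point is that Node-style APIs invoke their callback, `cb(err, rows)`, in the last argument position, so the explicit wrapper resolves or rejects from inside that callback rather than relying on a misapplied promisify.

    ```typescript
    // Hypothetical sketch: explicit Promise wrapper around a sqlite3-style callback API.
    type Row = { node_id: number; public_key: string }

    const fakeDb = {
      // Mimics sqlite3's Database#all(sql, params, callback) signature
      all(sql: string, params: unknown[], cb: (err: Error | null, rows?: Row[]) => void): void {
        // Pretend query: always succeeds with one row
        process.nextTick(() => cb(null, [{ node_id: 1, public_key: 'publicKey1' }]))
      },
    }

    // Explicit wrapper: the callback sits in the last position and settles the Promise
    function allAsync(sql: string, params: unknown[] = []): Promise<Row[]> {
      return new Promise((resolve, reject) => {
        fakeDb.all(sql, params, (err, rows) => (err ? reject(err) : resolve(rows ?? [])))
      })
    }
    ```

    With this shape, callers simply `await allAsync(sql)`; errors surface as rejections instead of being swallowed by a callback placed in the wrong argument slot.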