iterative caching for terragrunt configs #2051

Merged: 15 commits into develop on Jul 26, 2025

Conversation

motatoes (Contributor)

No description provided.

bismuthdev bot commented Jul 25, 2025

Bug Summary

Total Bugs Found: 2

Bug Descriptions

  1. Ignored Return Value in hydrateDiggerConfigYamlWithTerragrunt - In digger_config.go, the function now returns an Atlantis configuration object that is being discarded with a blank identifier (_). This could lead to unexpected behavior since the returned configuration is not being properly handled.

  2. Potential Nil Pointer in UpsertRepoCache - In cache.go, the function UpsertRepoCache is called with newAtlantisConfig which could be nil as it's returned from LoadDiggerConfig. If the function doesn't properly handle nil values, this could lead to a nil pointer dereference.

The most critical issue appears to be the second bug, as it could potentially cause runtime crashes if a nil pointer is dereferenced.

tgParsingConfig.AwsRoleToAssume = b.AwsRoleToAssume
tgParsingConfig.AwsCognitoOidcConfig = b.AwsCognitoOidcConfig

err := hydrateDiggerConfigYamlWithTerragrunt(config, *tgParsingConfig, terraformDir, b.BlockName)
_, err := hydrateDiggerConfigYamlWithTerragrunt(config, *tgParsingConfig, terraformDir, b.BlockName, nil)

The hydrateDiggerConfigYamlWithTerragrunt function now returns a value (*tac.AtlantisConfig) instead of just an error, but at line 439-440, the call site is ignoring this return value by using the blank identifier _. This could lead to unexpected behavior since the returned Atlantis configuration is not being properly handled. The fix is to capture the return value in a variable (e.g., newConfig) to maintain consistency with the other call sites of this function.

Suggested change
_, err := hydrateDiggerConfigYamlWithTerragrunt(config, *tgParsingConfig, terraformDir, b.BlockName, nil)
newConfig, err := hydrateDiggerConfigYamlWithTerragrunt(config, *tgParsingConfig, terraformDir, b.BlockName, nil)

@@ -83,7 +85,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

The UpsertRepoCache function is called with newAtlantisConfig which could be nil, as it's returned from LoadDiggerConfig. Looking at the migration file backend/migrations/20250725041417.sql, we can see that a new column terragrunt_atlantis_config was added to the repo_caches table, which suggests that the UpsertRepoCache function was updated to handle this new parameter.

However, I cannot see the implementation of UpsertRepoCache to verify if it properly handles nil values for the newAtlantisConfig parameter. If UpsertRepoCache doesn't have proper nil checking for this parameter, it could lead to a nil pointer dereference when trying to access or use the newAtlantisConfig object.

Since I cannot confirm the implementation details of UpsertRepoCache, I'm reporting this as a potential issue that should be verified. The code should ensure that UpsertRepoCache properly handles nil values for the newAtlantisConfig parameter to prevent potential runtime errors.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

motatoes changed the title from "add terragrunt config" to "iterative caching for terragrunt configs" on Jul 25, 2025

bismuthdev bot commented Jul 25, 2025

Bug Summary

Total Bugs Found: 3

Bug Descriptions:

  1. Missing Parameter in LoadDiggerConfigYaml Call - In projects_refesh_main.go, the function is called with only three parameters when it now requires four. The missing parameter is a pointer to an AtlantisConfig object.

  2. Missing Parameter in LoadDiggerConfigYaml Call - Same issue occurs in github.go where the function is called with three parameters instead of the required four.

  3. Potential Nil Pointer Dereference - In cache.go, the UpdateRepoCache function passes newAtlantisConfig to UpsertRepoCache without checking if it's nil, which could cause a runtime panic.

The most critical bug is the potential nil pointer dereference in cache.go as this would cause runtime crashes, while the other two are compilation errors that would be caught before deployment.

@@ -46,7 +46,7 @@ func main() {

slog.Info("refreshing projects from repo", "repoFullName", repoFullName)
err := utils3.CloneGitRepoAndDoAction(cloneUrl, branch, "", token, "", func(dir string) error {
config, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)

The function LoadDiggerConfigYaml now requires four parameters, but is being called with only three. The fourth parameter is a pointer to an AtlantisConfig object, which can be nil. This will cause a compilation error since the function signature has changed.

Suggested change
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

@@ -43,7 +43,7 @@ func LoadProjectsFromGithubRepo(gh utils2.GithubClientProvider, installationId s
}

err = utils3.CloneGitRepoAndDoAction(cloneUrl, branch, "", *token, "", func(dir string) error {
config, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)

The function LoadDiggerConfigYaml now requires four parameters, but is being called with only three. The fourth parameter is a pointer to an AtlantisConfig object, which can be nil. This will cause a compilation error since the function signature has changed.

Suggested change
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

@@ -83,7 +85,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

In the UpdateRepoCache function, there's a potential nil pointer dereference when calling UpsertRepoCache. The newAtlantisConfig variable can be nil if LoadDiggerConfig returns a nil value for the Atlantis configuration, but the code doesn't check for this before passing it to UpsertRepoCache. This could lead to a runtime panic if newAtlantisConfig is nil.

The fix adds a nil check before calling UpsertRepoCache to ensure we handle the case where newAtlantisConfig is nil properly.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
if newAtlantisConfig == nil {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, nil)
} else {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
}

bismuthdev bot commented Jul 25, 2025

Cancelled.

bismuthdev bot commented Jul 25, 2025

Summary of Bugs Found

A total of 7 bugs were identified in the codebase, with the following issues:

Critical Bugs

  1. Parameter Count Mismatch in LoadDiggerConfigYaml - Found in multiple files (projects_refesh_main.go and github.go), where the function is being called with 3 parameters when it now requires 4 parameters. This causes compilation errors.

  2. Function Signature Inconsistency - The LoadDiggerConfigFromString function returns 4 values while the similar LoadDiggerConfig function returns 5 values (missing the *tac.AtlantisConfig return value). This inconsistency can lead to bugs when developers expect the same behavior from both functions.

  3. Missing Error Handling - In cache.go, the UpdateRepoCache function doesn't handle errors from the UpsertRepoCache operation, potentially reporting success to clients even when database operations fail.

  4. Potential Nil Pointer Dereference - In cache.go, the newAtlantisConfig variable could be nil and is passed directly to UpsertRepoCache without any check, which could lead to a nil pointer dereference if the receiving function doesn't handle nil values properly.

These bugs primarily involve parameter mismatches, inconsistent function signatures, and inadequate error handling, which could lead to compilation errors, runtime errors, and unexpected behavior.

@@ -46,7 +46,7 @@ func main() {

slog.Info("refreshing projects from repo", "repoFullName", repoFullName)
err := utils3.CloneGitRepoAndDoAction(cloneUrl, branch, "", token, "", func(dir string) error {
config, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)

The LoadDiggerConfigYaml function now expects 4 parameters, but is being called with only 3 parameters. The fourth parameter is a pointer to a tac.AtlantisConfig which can be nil in this case. This mismatch in parameter count will cause compilation errors.

Suggested change
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

@@ -43,7 +43,7 @@ func LoadProjectsFromGithubRepo(gh utils2.GithubClientProvider, installationId s
}

err = utils3.CloneGitRepoAndDoAction(cloneUrl, branch, "", *token, "", func(dir string) error {
config, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)

The LoadDiggerConfigYaml function now expects 4 parameters, but is being called with only 3 parameters. The fourth parameter is a pointer to a tac.AtlantisConfig which can be nil in this case. This mismatch in parameter count will cause compilation errors.

Suggested change
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

@@ -46,7 +46,7 @@ func main() {

slog.Info("refreshing projects from repo", "repoFullName", repoFullName)
err := utils3.CloneGitRepoAndDoAction(cloneUrl, branch, "", token, "", func(dir string) error {
config, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)

The LoadDiggerConfigYaml function now requires 4 parameters, but is being called with only 3. The 4th parameter is taConfig, which should be set to nil in this case. This mismatch will cause a compilation error.

Suggested change
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

@@ -43,7 +43,7 @@ func LoadProjectsFromGithubRepo(gh utils2.GithubClientProvider, installationId s
}

err = utils3.CloneGitRepoAndDoAction(cloneUrl, branch, "", *token, "", func(dir string) error {
config, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)

The LoadDiggerConfigYaml function now requires 4 parameters, but is being called with only 3. The 4th parameter is taConfig, which should be set to nil in this case. This mismatch will cause a compilation error.

Suggested change
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

@@ -83,7 +85,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

In the UpdateRepoCache function in backend/controllers/cache.go, there's a call to UpsertRepoCache that passes the newAtlantisConfig parameter returned from LoadDiggerConfig, but there's no error handling for the result of this operation. If the database operation fails, the error is silently ignored, and the function continues to report success to the client even though the cache update failed. This could lead to inconsistent state between what the client believes happened and the actual state of the database.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
if err != nil {
slog.Error("Could not update repo cache", "error", err)
return
}

@@ -83,7 +85,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

There's a potential nil pointer dereference bug in the UpdateRepoCache function. When loading the digger configuration, the newAtlantisConfig variable could be nil if there was an error or if no Atlantis configuration was found. However, this nil value is passed directly to UpsertRepoCache without any check.

If the UpsertRepoCache function doesn't properly handle nil values for the Atlantis config parameter, this could lead to a nil pointer dereference. The fix adds a nil check before passing the value to ensure proper handling of the nil case.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
if newAtlantisConfig == nil {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, nil)
} else {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
}

bismuthdev bot commented Jul 25, 2025

Bug Summary

A total of 2 bugs were found in the codebase:

  1. Missing File Existence Check - In backend/controllers/cache.go, the UpdateRepoCache function directly reads the digger.yml file without checking if it exists first, which can cause errors when the file is missing. A helper function already exists that could properly handle this case (a minimal guard sketch follows after this summary).

  2. Potential Nil Pointer Dereference - In the same file, there's a potential issue where newAtlantisConfig is passed to UpsertRepoCache() without checking if it's nil, which could lead to a nil pointer dereference if the function doesn't handle nil values properly.

The most critical bug is the first one, as it will definitely cause errors when the configuration file doesn't exist, while the second one is a potential issue that needs further investigation.
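
The helper function referred to above is not shown in this thread, so the sketch below falls back to a plain os.Stat guard; the digger.yml filename and the surrounding handler are assumptions for illustration only.

package main

import (
	"log/slog"
	"os"
	"path/filepath"
)

// readDiggerYmlIfPresent is a hypothetical helper illustrating the guard:
// confirm digger.yml exists before reading it instead of letting
// os.ReadFile fail on a missing file. The "digger.yml" name is assumed.
func readDiggerYmlIfPresent(dir string) (string, bool, error) {
	p := filepath.Join(dir, "digger.yml")
	if _, err := os.Stat(p); err != nil {
		if os.IsNotExist(err) {
			slog.Warn("digger.yml not found, skipping cache update", "path", p)
			return "", false, nil
		}
		return "", false, err
	}
	b, err := os.ReadFile(p)
	if err != nil {
		return "", false, err
	}
	return string(b), true, nil
}

func main() {
	if yml, ok, err := readDiggerYmlIfPresent("."); err == nil && ok {
		slog.Info("loaded digger.yml", "bytes", len(yml))
	}
}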

greptile-apps bot left a comment

Greptile Summary

This PR implements iterative caching for Terragrunt configurations by transitioning from the 'atlantis' package to a new 'tac' (Terragrunt Atlantis Config) package structure. The key changes include:

Package Restructuring: The entire libs/digger_config/terragrunt/atlantis/ directory has been renamed to libs/digger_config/terragrunt/tac/, with all associated imports and references updated throughout the codebase. This reflects a move to use the Terragrunt Atlantis Config library based on the transcend-io/terragrunt-atlantis-config project.

Enhanced Caching Architecture: The LoadDiggerConfig function signature has been updated to accept a *tac.AtlantisConfig parameter and return an additional *tac.AtlantisConfig value. This enables iterative caching where previously parsed Terragrunt configurations can be reused, significantly improving performance for large Terragrunt repositories.

Database Support: A new migration adds a terragrunt_atlantis_config bytea column to the repo_caches table, allowing the system to persist both Digger configurations and Terragrunt Atlantis configurations. The RepoCache model and related storage functions have been updated to handle this dual caching approach.

Filtering Improvements: The configuration parsing now supports filtering based on changed files through the new FilterPaths (previously FilterPath) field, allowing for more granular incremental updates. When a cached config is provided, the system only processes directories containing changed files rather than regenerating everything from scratch.

Function Signature Updates: All callers of LoadDiggerConfig throughout the codebase have been updated to pass the new taConfig parameter (typically as nil when no cached config is available) and handle the additional return value.
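
To make the new call pattern concrete, here is a minimal, runnable sketch of the iterative flow described above. The types and loadDiggerConfig below are simplified stand-ins, not the real dg_configuration or tac API, whose signatures carry more parameters and return values.

package main

import "fmt"

// Simplified stand-ins for the real digger_config / tac types.
type AtlantisConfig struct{ Projects []string }
type DiggerConfig struct{ Projects []string }

// loadDiggerConfig mimics the shape of the updated API: it accepts a
// previously cached Atlantis config (nil on a cold start) and returns a
// refreshed one alongside the digger config.
func loadDiggerConfig(changedFiles []string, cached *AtlantisConfig) (*DiggerConfig, *AtlantisConfig, error) {
	refreshed := &AtlantisConfig{}
	if cached != nil {
		// Warm start: keep previously parsed projects and only re-parse
		// the directories touched by changedFiles.
		refreshed.Projects = append(refreshed.Projects, cached.Projects...)
	}
	refreshed.Projects = append(refreshed.Projects, changedFiles...)
	return &DiggerConfig{Projects: refreshed.Projects}, refreshed, nil
}

func main() {
	// Cold start: nothing cached yet, everything is parsed from scratch.
	_, cached, _ := loadDiggerConfig([]string{"prod/vpc"}, nil)

	// Next run: feed the cached config back in; only "prod/eks" would be
	// re-processed, the rest is reused.
	_, cached, _ = loadDiggerConfig([]string{"prod/eks"}, cached)
	fmt.Println(cached.Projects) // [prod/vpc prod/eks]
}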

Confidence score: 4/5

• This is a substantial architectural change that should improve performance but carries moderate risk due to its scope
• The systematic nature of the changes and comprehensive test coverage increase confidence
• Several files need attention for potential inconsistencies or missing functionality in test examples

137 files reviewed, no comments

@@ -1758,6 +1759,15 @@ func (db *Database) UpsertRepoCache(orgId uint, repoFullName string, diggerYmlSt
return nil, fmt.Errorf("could not marshal config: %v", err)
}

atlantisConfigMarshalled, err := json.Marshal(newAtlantisConfig)
if err != nil {
slog.Error("could not marshal digger config",

syntax: Error message says 'digger config' but should say 'atlantis config' for accuracy

Suggested change
slog.Error("could not marshal digger config",
slog.Error("could not marshal atlantis config",

@@ -954,7 +955,7 @@ func GetDiggerConfigForBranch(gh utils.GithubClientProvider, installationId int6

slog.Debug("Successfully read digger.yml file", "configLength", len(diggerYmlStr))

config, _, dependencyGraph, err = dg_configuration.LoadDiggerConfig(dir, true, changedFiles)
config, _, dependencyGraph, _, err = dg_configuration.LoadDiggerConfig(dir, true, changedFiles, taConfig)

logic: The LoadDiggerConfig call now passes taConfig as the 4th parameter, but the return values suggest it now returns 5 values instead of 4. Verify this matches the updated function signature.

Comment on lines +1129 to +1137
var taConfig tac.AtlantisConfig
err = json.Unmarshal(repoCache.TerragruntAtlantisConfig, &taConfig)
if err != nil {
slog.Error("Failed to unmarshal config from cache",
"orgId", orgId,
"repoFullName", repoFullName,
"error", err,
)
return "", nil, nil, nil, fmt.Errorf("failed to unmarshal config from cache: %v", err)

style: Consider handling the case where TerragruntAtlantisConfig is nil or empty to avoid unmarshaling errors, similar to how other cached data is validated.
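
A minimal sketch of that guard, using a stand-in AtlantisConfig type rather than the real tac package: a nil or empty cached blob is treated as "no cached config" instead of being handed to json.Unmarshal.

package main

import (
	"encoding/json"
	"fmt"
)

// AtlantisConfig stands in for tac.AtlantisConfig; the real type lives in
// libs/digger_config/terragrunt/tac and has more fields.
type AtlantisConfig struct {
	Projects []string `json:"projects"`
}

// unmarshalCachedAtlantisConfig sketches the suggested guard: treat a
// nil/empty cached blob as "nothing cached yet" instead of letting
// json.Unmarshal fail on empty input.
func unmarshalCachedAtlantisConfig(raw []byte) (*AtlantisConfig, error) {
	if len(raw) == 0 {
		return nil, nil // no cached config; callers fall back to a full parse
	}
	var cfg AtlantisConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return nil, fmt.Errorf("failed to unmarshal cached terragrunt-atlantis config: %w", err)
	}
	return &cfg, nil
}

func main() {
	cfg, err := unmarshalCachedAtlantisConfig(nil)
	fmt.Println(cfg, err) // <nil> <nil> on an empty cache
}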

@@ -83,7 +85,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

logic: dereferencing *config without a nil check could cause a panic if LoadDiggerConfig returns a nil config alongside a nil error (a non-nil error is already caught by the return above)

absoluteDirOfChangedFiles := lo.Map(relativeDirOfChangedFiles, func(dir string, index int) string {
return path.Join(terraformDir, dir)
})
parsingConfig.FilterPaths = GetDirNamesFromPaths(absoluteDirOfChangedFiles)

logic: inconsistent path handling - using GetDirNamesFromPaths(absoluteDirOfChangedFiles) converts absolute paths back to relative, but line 337 uses absolute paths directly

greptile-apps bot left a comment

Greptile Summary

This PR introduces significant enhancements to the terragrunt configuration caching system through architectural improvements and bug fixes. The major changes focus on refining the existing TAC (Terragrunt Atlantis Config) implementation and addressing critical issues identified in the previous review.

Key Updates in This Iteration:

  1. Test Configuration Fixes: Several test files that were previously oversimplified or contained errors have been corrected:

    • Fixed with_parent/terragrunt.hcl which was reduced to just a locals block - now contains proper terragrunt configuration
    • Corrected the local vs locals syntax error in multi-account test configurations
    • Restored missing functionality in dependency test cases
  2. Enhanced Dependency Chain Testing: Improved the chained dependencies test cases to properly demonstrate complex dependency resolution scenarios:

    • Updated chained_dependencies/depender/terragrunt.hcl to include proper dependency blocks
    • Modified depender_on_depender/nested/terragrunt.hcl to create more realistic dependency chains
    • Enhanced terragrunt_dependency/depender/terragrunt.hcl with dynamic dependency references
  3. Refined Filtering Logic: The caching system now supports more sophisticated filtering through:

    • Multiple filter paths (FilterPaths []string) for granular directory-based filtering
    • Better handling of deleted terragrunt files in cached configurations
    • Improved directory path extraction utilities
  4. Database Migration Integrity: Added proper migration tracking with 20250725041417.sql to support the new terragrunt_atlantis_config column, ensuring database schema consistency.

  5. Comprehensive Function Signature Updates: All callers of LoadDiggerConfig across the codebase have been systematically updated to handle the new caching parameters, maintaining compatibility while enabling the enhanced functionality.

The iterative caching system works by preserving previously parsed configurations and only re-processing directories that contain changed files, significantly reducing parsing overhead for large terragrunt repositories. The system intelligently handles project deletion by filtering out cached projects whose source files no longer exist.
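
A minimal sketch of the deletion handling described above, with a stand-in project type; the real implementation lives in the tac package and may differ in detail.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// project is a stand-in for a cached terragrunt-atlantis project entry.
type project struct {
	Dir string
}

// pruneDeletedProjects drops cached projects whose terragrunt.hcl no longer
// exists on disk, so deleted stacks do not linger in the cached config.
func pruneDeletedProjects(root string, cached []project) []project {
	kept := make([]project, 0, len(cached))
	for _, p := range cached {
		if _, err := os.Stat(filepath.Join(root, p.Dir, "terragrunt.hcl")); err == nil {
			kept = append(kept, p)
		}
	}
	return kept
}

func main() {
	cached := []project{{Dir: "prod/vpc"}, {Dir: "prod/removed-stack"}}
	fmt.Println(pruneDeletedProjects(".", cached))
}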

Confidence score: 4/5

  • The caching implementation is well-architected and addresses performance concerns for large terragrunt repositories
  • Test configuration issues from previous review have been addressed
  • The incremental processing logic should provide substantial performance improvements

Files that need more attention:

  • libs/digger_config/terragrunt/tac/generate_test.go - Contains potential bug with empty contents being passed to yaml.Unmarshal on line 62
  • backend/models/storage.go - The UpsertRepoCache function should validate that newAtlantisConfig is not nil before marshaling
  • libs/digger_config/terragrunt/tac/test_examples/multi_accounts_vpc_route53_tgw/network-account/eu-west-1/network/env.hcl - Dramatic simplification may indicate incomplete test case migration

138 files reviewed, 1 comment

Comment on lines +748 to +750
if oldConfig != nil {
atlantisConfig.Projects = oldConfig.Projects
}

logic: Projects from oldConfig are copied by reference, not deep copied - modifications to atlantisConfig.Projects will affect the original oldConfig.Projects slice
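
A small runnable sketch of the difference, using stand-in types and assuming Projects is a slice of value structs (if the elements hold pointers or nested slices, a deeper clone would be needed):

package main

import "fmt"

// AtlantisProject and AtlantisConfig stand in for the tac types; the field
// layout is assumed for illustration only.
type AtlantisProject struct{ Name string }
type AtlantisConfig struct{ Projects []AtlantisProject }

func main() {
	oldConfig := &AtlantisConfig{Projects: []AtlantisProject{{Name: "vpc"}}}

	// Shallow copy (what the reviewed code does): both configs share the
	// same backing array, so element mutations leak into oldConfig.
	shared := &AtlantisConfig{}
	shared.Projects = oldConfig.Projects
	shared.Projects[0].Name = "mutated"
	fmt.Println(oldConfig.Projects[0].Name) // "mutated"

	// Copying the slice gives the new config its own backing array, so
	// later edits no longer affect the cached config.
	oldConfig.Projects[0].Name = "vpc"
	isolated := &AtlantisConfig{
		Projects: append([]AtlantisProject(nil), oldConfig.Projects...),
	}
	isolated.Projects[0].Name = "mutated"
	fmt.Println(oldConfig.Projects[0].Name) // still "vpc"
}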

@@ -83,7 +85,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

In backend/controllers/cache.go, there's a potential issue with passing newAtlantisConfig to UpsertRepoCache() without checking if it's nil.

I've been trying to find the implementation of UpsertRepoCache() to determine if it properly handles nil values for the newAtlantisConfig parameter, but I can't locate the full implementation in the provided files.

If UpsertRepoCache() doesn't handle nil values properly, this could lead to a nil pointer dereference when newAtlantisConfig is nil. The code should either check for nil before passing it or ensure that the UpsertRepoCache() function handles nil values safely.

Since I can't see the implementation of UpsertRepoCache(), I'm unable to determine if this is actually a bug or if the function already handles nil values properly. I'm leaving the code unchanged but flagging this as a potential issue that should be investigated.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

bismuthdev bot commented Jul 26, 2025

Bug Summary

A total of 2 bugs were found in the codebase, both related to potential nil pointer dereference issues in the cache controller.

Bugs Found:

  1. Potential nil pointer dereference in UpdateRepoCache - The function passes newAtlantisConfig to UpsertRepoCache without checking if it's nil, which could cause a crash if the Terragrunt configuration fails to load or isn't present.

  2. Unverified nil handling in UpsertRepoCache call - Similar to the first issue, the code passes newAtlantisConfig to UpsertRepoCache without proper nil checking, potentially causing issues if the configuration loading fails or doesn't include Atlantis configuration.

The most critical issue is the potential for nil pointer dereferences in the cache handling code, which could lead to application crashes during repository cache updates.

@@ -83,7 +85,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

In the UpdateRepoCache function, newAtlantisConfig is passed to UpsertRepoCache without checking if it's nil. This could cause a nil pointer dereference if the Terragrunt configuration fails to load or isn't present. The LoadDiggerConfig function can return a nil AtlantisConfig in certain cases, and this value is not validated before being passed to the database function.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
if newAtlantisConfig == nil {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, nil)
} else {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
}

motatoes (Author)

there is no dereferencing happening in that function

@@ -83,7 +85,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

In backend/controllers/cache.go, line 87 passes newAtlantisConfig to UpsertRepoCache without checking if it's nil. Based on the code in the controller, newAtlantisConfig could be nil if the configuration loading fails or doesn't include Atlantis configuration.

However, after examining the code, I cannot confirm this is a bug because:

  1. The UpsertRepoCache function is not fully visible in the provided code, so I cannot verify if it properly handles nil values for newAtlantisConfig.
  2. The function call is preceded by error checking that would prevent execution if the config loading failed.
  3. The LoadDiggerConfig function might always return a non-nil newAtlantisConfig even if empty.

Without seeing the implementation of UpsertRepoCache, I cannot confirm if this would cause a nil pointer dereference. If UpsertRepoCache does marshal newAtlantisConfig without a nil check, then it would be a bug.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

greptile-apps bot left a comment

Greptile Summary

This iteration contains minor refinements to the terragrunt caching implementation introduced in previous changes. The updates focus on improving error message accuracy and maintaining API compatibility as the caching system evolves.

Error Message Accuracy: Updated the error message in UpsertRepoCache method to correctly reference 'terragrunt-atlantis-config' instead of 'digger config' when marshaling fails. This provides more precise debugging information since the method handles both digger config and terragrunt-atlantis-config marshaling operations.

API Compatibility: Modified the LoadDiggerConfigYaml function call in the background projects refresh service to include the new 4th parameter for Terragrunt Atlantis Config. The service passes nil for this parameter, maintaining existing behavior while supporting the enhanced caching architecture.

Code Cleanup: Removed the entire ee/drift/tasks/github.go file as part of architectural consolidation. The LoadProjectsFromGithubRepo function previously housed in this file has been replaced by equivalent functionality in the background projects refresh service, which now handles project loading directly using environment variables rather than function parameters.

These changes represent the final touches on the iterative caching system, ensuring that error messages are accurate for debugging and that all components properly integrate with the new caching architecture. The removal of the drift-specific project loading logic consolidates project refresh operations into a dedicated background service, improving separation of concerns.

Confidence score: 5/5

• These are minor, low-risk changes that improve code quality and maintainability
• The error message fix enhances debugging without affecting functionality
• API compatibility changes are straightforward with clear intent

No files need additional attention - all changes are well-contained and safe to merge.

3 files reviewed, no comments

bismuthdev bot commented Jul 26, 2025

Cancelled.

bismuthdev bot commented Jul 26, 2025

Cancelled.

bismuthdev bot commented Jul 26, 2025

Bug Summary Report

Total Bugs Found: 5

Critical Issues

  1. Nil Pointer Dereference Risk - In sendProcessCacheRequest, the code attempts to close resp.Body without checking if resp is nil, which could cause a runtime panic.

  2. Silent Failure in Cache Processing - The function sendProcessCacheRequest doesn't properly handle non-200 status codes, returning nil even when the operation fails, leading to silent failures in cache updates.

  3. Function Signature Mismatch - In the projects refresh service, a call to LoadDiggerConfigYaml doesn't account for the updated function signature that now returns four values instead of three, causing a compilation error.

Other Issues

  1. Copy-Paste Error - Error message in sendProcessCacheRequest incorrectly references "Process Drift" instead of "Process Cache".

  2. Missing Environment Variable Validation - The sendProcessCacheRequest function doesn't verify if the diggerHostname environment variable is set before using it to construct a URL.

Comment on lines 134 to 138
req, err := http.NewRequest("POST", driftUrl, bytes.NewBuffer(jsonPayload))
if err != nil {
fmt.Println("Process Drift: Error creating request:", err)
return err
}

There is a copy-paste error in the sendProcessCacheRequest function. The error message incorrectly says "Process Drift: Error creating request" when this function is actually handling cache processing, not drift processing. This is inconsistent with other error messages in the same function that correctly use "Process Cache:" as the prefix. This suggests the code was copied from a drift-related function and not fully updated.

Suggested change
req, err := http.NewRequest("POST", driftUrl, bytes.NewBuffer(jsonPayload))
if err != nil {
fmt.Println("Process Drift: Error creating request:", err)
return err
}
req, err := http.NewRequest("POST", driftUrl, bytes.NewBuffer(jsonPayload))
if err != nil {
fmt.Println("Process Cache: Error creating request:", err)
return err
}

Comment on lines +104 to +105
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")

The sendProcessCacheRequest function doesn't check if diggerHostname is empty before using it to construct a URL. If the HOSTNAME environment variable is not set, this will lead to an invalid URL being constructed when url.JoinPath is called, potentially causing unexpected behavior or errors. The function should validate that the hostname is not empty and return an appropriate error if it's missing.

Suggested change
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
diggerHostname := os.Getenv("HOSTNAME")
if diggerHostname == "" {
slog.Error("HOSTNAME environment variable not set")
return fmt.Errorf("HOSTNAME environment variable not set")
}
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")

Comment on lines +144 to +149
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
defer resp.Body.Close()

In the sendProcessCacheRequest function, there's a potential nil pointer dereference bug. If the HTTP request fails (client.Do returns an error), resp will be nil, but the code still attempts to call defer resp.Body.Close(). This will cause a panic at runtime when the deferred function executes.

The fix adds a nil check in the deferred function to ensure we only attempt to close the response body if both resp and resp.Body are not nil.

Suggested change
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
defer resp.Body.Close()
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
defer func() {
if resp != nil && resp.Body != nil {
resp.Body.Close()
}
}()

@@ -46,7 +46,7 @@ func main() {

slog.Info("refreshing projects from repo", "repoFullName", repoFullName)
err := utils3.CloneGitRepoAndDoAction(cloneUrl, branch, "", token, "", func(dir string) error {
config, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

The PR has changed the signature of LoadDiggerConfigYaml to return an additional value (the Atlantis configuration), but this call site hasn't been updated to match. The function now returns four values instead of three, which will cause a compilation error.

Suggested change
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)
config, _, newAtlantisConfig, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

Co-authored-by: bismuthdev[bot] <177057995+bismuthdev[bot]@users.noreply.github.com>

bismuthdev bot commented Jul 26, 2025

Cancelled.

bismuthdev bot commented Jul 26, 2025

Cancelled.

bismuthdev bot commented Jul 26, 2025

Summary of Bugs Found in Codebase

A total of 6 bugs were identified across the codebase, primarily in the cache controller functionality.

Critical Issues

  1. Improper Error Handling in Response Processing: When non-200 status codes are received, the code attempts to use potentially empty/nil response bodies, leading to misleading error messages.

  2. Missing Environment Variable Validation: The HOSTNAME environment variable is used without validation, potentially creating invalid URLs when the variable is not set.

  3. No Request Timeout: HTTP client requests lack timeout settings, which could lead to hanging requests and resource leaks.

  4. Untracked Asynchronous Errors - The cache update function uses goroutines without any mechanism to track or report failures, causing operations to appear successful even when they fail (a minimal sketch of surfacing these failures follows after this list).

Other Issues

  1. Inconsistent Logging Methods: The codebase mixes structured logging (slog.Error) with basic logging (fmt.Println), potentially causing missing logs in production environments.

  2. Incorrect Log Field Name: A log message uses 'filterPath' as the key when referencing an array variable named 'FilterPaths', creating inconsistency in log field naming.
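
A minimal sketch of surfacing those background failures, with updateCache standing in for the real cache-refresh call; at a minimum the goroutine logs its error instead of silently discarding it.

package main

import (
	"fmt"
	"log/slog"
	"time"
)

// updateCache stands in for the real cache-refresh work; it is assumed to
// return an error on failure.
func updateCache(repoFullName string) error {
	return fmt.Errorf("simulated failure for %s", repoFullName)
}

func main() {
	// Fire-and-forget, but surface failures in the logs instead of losing
	// them inside the goroutine.
	go func() {
		if err := updateCache("org/repo"); err != nil {
			slog.Error("background cache update failed",
				"repoFullName", "org/repo",
				"error", err)
		}
	}()
	time.Sleep(100 * time.Millisecond) // give the goroutine time to run in this demo
}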

Comment on lines +154 to +168
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
}

slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))

return fmt.Errorf("cache update failed with status code %d: %s", statusCode, string(responseBody))

In the sendProcessCacheRequest function, when a non-200 status code is received, the code attempts to read the response body using io.ReadAll(resp.Body). If this read operation fails, the error is logged but the function continues to use string(responseBody) in both the error log and the returned error message.

Since responseBody would be empty or nil when io.ReadAll fails, this leads to misleading error messages that don't accurately reflect the actual problem. The error message would show an empty response body rather than indicating that there was a problem reading the response.

The fix adds an early return with a more accurate error message when the response body cannot be read, ensuring that the error propagated up the call stack correctly reflects what actually happened.

Suggested change
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
}
slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))
return fmt.Errorf("cache update failed with status code %d: %s", statusCode, string(responseBody))
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
return fmt.Errorf("cache update failed with status code %d: error reading response body: %v", statusCode, err)
}
slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))
return fmt.Errorf("cache update failed with status code %d: %s", statusCode, string(responseBody))

Comment on lines +105 to +127
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")

installationLink, err := models.DB.GetGithubInstallationLinkForInstallationId(installationId)
if err != nil {
slog.Error("Error getting installation link", "installationId", installationId, "error", err)
return err
}

orgId := installationLink.OrganisationId

payload := UpdateCacheRequest{
RepoFullName: repoFullName,
Branch: branch,
InstallationId: installationId,
OrgId: orgId,
}

cacheRefreshUrl, err := url.JoinPath(diggerHostname, "_internal/update_repo_cache")
if err != nil {
slog.Error("Error joining URL paths", "error", err)
return err
}

The sendProcessCacheRequest function retrieves the diggerHostname from the HOSTNAME environment variable without validating if it's empty. If this environment variable is not set, url.JoinPath() will be called with an empty string as the first parameter, which could lead to an invalid URL being constructed.

While url.JoinPath() itself handles empty strings gracefully by treating them as empty path segments, the resulting URL would be just "_internal/update_repo_cache" without a hostname, which would be an invalid URL for the subsequent HTTP request. This would cause the http.NewRequest() call to fail with an error about an invalid URL scheme.

The fix adds validation to check if the HOSTNAME environment variable is empty and returns an appropriate error message if it is.

Suggested change
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
installationLink, err := models.DB.GetGithubInstallationLinkForInstallationId(installationId)
if err != nil {
slog.Error("Error getting installation link", "installationId", installationId, "error", err)
return err
}
orgId := installationLink.OrganisationId
payload := UpdateCacheRequest{
RepoFullName: repoFullName,
Branch: branch,
InstallationId: installationId,
OrgId: orgId,
}
cacheRefreshUrl, err := url.JoinPath(diggerHostname, "_internal/update_repo_cache")
if err != nil {
slog.Error("Error joining URL paths", "error", err)
return err
}
diggerHostname := os.Getenv("HOSTNAME")
if diggerHostname == "" {
slog.Error("HOSTNAME environment variable is not set")
return fmt.Errorf("HOSTNAME environment variable is not set")
}
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
installationLink, err := models.DB.GetGithubInstallationLinkForInstallationId(installationId)
if err != nil {
slog.Error("Error getting installation link", "installationId", installationId, "error", err)
return err
}
orgId := installationLink.OrganisationId
payload := UpdateCacheRequest{
RepoFullName: repoFullName,
Branch: branch,
InstallationId: installationId,
OrgId: orgId,
}
cacheRefreshUrl, err := url.JoinPath(diggerHostname, "_internal/update_repo_cache")
if err != nil {
slog.Error("Error joining URL paths", "error", err)
return err
}

Comment on lines +144 to +149
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}

This is the third instance of inconsistent error logging in the sendProcessCacheRequest function. While most of the function uses structured logging with slog.Error, this section uses fmt.Println for error logging. This inconsistency could lead to missing logs in production environments where stdout might be redirected or not captured properly.

The fix standardizes this error logging to use slog.Error with structured fields, completing the standardization of the logging approach in this function.

Suggested change
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
slog.Error("Error sending request", "error", err)
return err
}

Comment on lines +144 to +145
client := &http.Client{}
resp, err := client.Do(req)

The HTTP client in sendProcessCacheRequest doesn't have a timeout set, which could lead to hanging requests if the server doesn't respond. This is a common issue that can cause resource leaks and potentially impact system stability.

Adding a reasonable timeout (30 seconds in this case) ensures that requests don't hang indefinitely. This is especially important for internal service communication where a non-responsive endpoint shouldn't block the entire system.

Note that this change requires adding time to the imports if it's not already there.

Suggested change
client := &http.Client{}
resp, err := client.Do(req)
client := &http.Client{
Timeout: 30 * time.Second,
}
resp, err := client.Do(req)

Co-authored-by: bismuthdev[bot] <177057995+bismuthdev[bot]@users.noreply.github.com>

bismuthdev bot commented Jul 26, 2025

Cancelled.

Co-authored-by: bismuthdev[bot] <177057995+bismuthdev[bot]@users.noreply.github.com>

bismuthdev bot commented Jul 26, 2025

Summary of Bugs Found in Codebase

Total Bugs Found: 5

Bug Summaries

  1. Inconsistent Error Logging - In sendProcessCacheRequest, error logging uses fmt.Println() instead of the structured logger slog.Error() that's used throughout the rest of the codebase.

  2. Missing HTTP Client Timeout - The HTTP client in sendProcessCacheRequest doesn't have a timeout set, which could lead to hanging requests if the server doesn't respond.

  3. Missing Environment Variable Validation - The sendProcessCacheRequest function reads the diggerHostname from the environment variable HOSTNAME without validating that it's not empty, which could lead to runtime errors.

  4. Missing Branch Parameter Validation - The UpdateRepoCache function doesn't validate that the branch parameter is not empty before using it, potentially leading to caching data from unintended branches (a minimal guard sketch follows after this summary).

  5. Improper Error Handling for Response Body - In sendProcessCacheRequest, when reading the response body in the error case, the error from io.ReadAll is logged but then ignored, potentially using invalid data in subsequent operations.

The most critical bugs appear to be the missing HTTP client timeout and the lack of environment variable validation, as these could lead to application hangs and runtime errors respectively.
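
A minimal sketch of the missing branch validation in a gin handler. The route, JSON field tags, and the way the request is bound are assumptions for illustration; only the payload fields mirror the UpdateCacheRequest shown earlier in this thread.

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// UpdateCacheRequest mirrors the payload fields shown earlier; tags are assumed.
type UpdateCacheRequest struct {
	RepoFullName   string `json:"repo_full_name"`
	Branch         string `json:"branch"`
	InstallationId int64  `json:"installation_id"`
	OrgId          uint   `json:"org_id"`
}

// updateRepoCache sketches the missing check: reject requests with an empty
// branch before any cloning or caching work starts.
func updateRepoCache(c *gin.Context) {
	var req UpdateCacheRequest
	if err := c.BindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
		return
	}
	if req.Branch == "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "branch must not be empty"})
		return
	}
	c.JSON(http.StatusOK, gin.H{"status": "accepted"})
}

func main() {
	r := gin.Default()
	r.POST("/_internal/update_repo_cache", updateRepoCache)
	_ = r.Run(":8080")
}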

Comment on lines +145 to +149
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}

In the sendProcessCacheRequest function, there's an inconsistency in error logging. While the rest of the codebase (including other parts of this same function) uses the structured logger slog.Error(), line 146 uses fmt.Println() for error logging. This inconsistency makes debugging more difficult as it breaks the structured logging pattern used throughout the codebase.

Suggested change
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
resp, err := client.Do(req)
if err != nil {
slog.Error("Error sending request", "error", err)
return err
}

Comment on lines +144 to +145
client := &http.Client{}
resp, err := client.Do(req)

The HTTP client in sendProcessCacheRequest doesn't have a timeout set, which could lead to hanging requests if the server doesn't respond. This is a potential issue because without a timeout, the request could hang indefinitely, consuming resources and potentially causing the application to become unresponsive.

Adding a reasonable timeout (30 seconds in this case) ensures that requests don't hang indefinitely and resources are properly released if the server doesn't respond in a timely manner.

Suggested change
client := &http.Client{}
resp, err := client.Do(req)
client := &http.Client{
Timeout: 30 * time.Second,
}
resp, err := client.Do(req)

Comment on lines +105 to +106
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")

The sendProcessCacheRequest function reads the diggerHostname from the environment variable HOSTNAME without validating that it's not empty. This could lead to runtime errors when attempting to use url.JoinPath() with an empty hostname, resulting in invalid URLs and potential HTTP request failures.

The fix adds validation to check if the HOSTNAME environment variable is empty and returns an appropriate error if it is not set.

Suggested change
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
diggerHostname := os.Getenv("HOSTNAME")
if diggerHostname == "" {
return fmt.Errorf("HOSTNAME environment variable is not set")
}
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")

Comment on lines +154 to +166
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
}

slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))

In the sendProcessCacheRequest function in backend/controllers/cache.go, when reading the response body in the error case (non-200 status code), the error from io.ReadAll is logged but then ignored. The function continues with potentially invalid responseBody data which is then used in the error message and log.

If the error reading the response body occurs, we should return early with an appropriate error message rather than continuing with potentially invalid data. This ensures that the error handling is more robust and prevents using potentially corrupted or empty data in subsequent operations.

Suggested change
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
}
slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
return fmt.Errorf("cache update failed with status code %d and could not read response body: %v", statusCode, err)
}
slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))

motatoes merged commit 4a7ae74 into develop on Jul 26, 2025
12 checks passed