iterative caching for terragrunt configs #2051

Merged: 15 commits, Jul 26, 2025

Changes from all commits

92 changes: 84 additions & 8 deletions backend/controllers/cache.go
@@ -1,10 +1,15 @@
package controllers

import (
"bytes"
"encoding/json"
"fmt"
"github.com/diggerhq/digger/libs/digger_config/terragrunt/tac"
"github.com/diggerhq/digger/libs/git_utils"
"io"
"log/slog"
"net/http"
"net/url"
"os"
"path"
"strings"
@@ -15,13 +20,14 @@ import (
"github.com/gin-gonic/gin"
)

type UpdateCacheRequest struct {
RepoFullName string `json:"repo_full_name"`
Branch string `json:"branch"`
OrgId uint `json:"org_id"`
InstallationId int64 `json:"installation_id"`
}

func (d DiggerController) UpdateRepoCache(c *gin.Context) {
type UpdateCacheRequest struct {
RepoFullName string `json:"repo_full_name"`
Branch string `json:"branch"`
OrgId uint `json:"org_id"`
InstallationId int64 `json:"installation_id"`
}

var request UpdateCacheRequest
err := c.BindJSON(&request)
@@ -65,13 +71,14 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {

var diggerYmlStr string
var config *dg_configuration.DiggerConfig
var newAtlantisConfig *tac.AtlantisConfig

// update the cache here, do it async for immediate response
go func() {
err = git_utils.CloneGitRepoAndDoAction(cloneUrl, branch, "", *token, "", func(dir string) error {
diggerYmlBytes, err := os.ReadFile(path.Join(dir, "digger.yml"))
diggerYmlStr = string(diggerYmlBytes)
config, _, _, err = dg_configuration.LoadDiggerConfig(dir, true, nil)
config, _, _, newAtlantisConfig, err = dg_configuration.LoadDiggerConfig(dir, true, nil, nil)
if err != nil {
slog.Error("Error loading digger config", "error", err)
return err
@@ -83,7 +90,7 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {
slog.Error("Could not load digger config", "error", err)
return
}
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
Contributor:

The UpsertRepoCache function is called with newAtlantisConfig which could be nil, as it's returned from LoadDiggerConfig. Looking at the migration file backend/migrations/20250725041417.sql, we can see that a new column terragrunt_atlantis_config was added to the repo_caches table, which suggests that the UpsertRepoCache function was updated to handle this new parameter.

However, I cannot see the implementation of UpsertRepoCache to verify if it properly handles nil values for the newAtlantisConfig parameter. If UpsertRepoCache doesn't have proper nil checking for this parameter, it could lead to a nil pointer dereference when trying to access or use the newAtlantisConfig object.

Since I cannot confirm the implementation details of UpsertRepoCache, I'm reporting this as a potential issue that should be verified. The code should ensure that UpsertRepoCache properly handles nil values for the newAtlantisConfig parameter to prevent potential runtime errors.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

Contributor:

In the UpdateRepoCache function, there's a potential nil pointer dereference when calling UpsertRepoCache. The newAtlantisConfig variable can be nil if LoadDiggerConfig returns a nil value for the Atlantis configuration, but the code doesn't check for this before passing it to UpsertRepoCache. This could lead to a runtime panic if newAtlantisConfig is nil.

The fix adds a nil check before calling UpsertRepoCache to ensure we handle the case where newAtlantisConfig is nil properly.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
if newAtlantisConfig == nil {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, nil)
} else {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
}

Contributor:

In the UpdateRepoCache function in backend/controllers/cache.go, there's a call to UpsertRepoCache that passes the newAtlantisConfig parameter returned from LoadDiggerConfig, but there's no error handling for the result of this operation. If the database operation fails, the error is silently ignored, and the function continues to report success to the client even though the cache update failed. This could lead to inconsistent state between what the client believes happened and the actual state of the database.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
if err != nil {
slog.Error("Could not update repo cache", "error", err)
return
}

Contributor:

There's a potential nil pointer dereference bug in the UpdateRepoCache function. When loading the digger configuration, the newAtlantisConfig variable could be nil if there was an error or if no Atlantis configuration was found. However, this nil value is passed directly to UpsertRepoCache without any check.

If the UpsertRepoCache function doesn't properly handle nil values for the Atlantis config parameter, this could lead to a nil pointer dereference. The fix adds a nil check before passing the value to ensure proper handling of the nil case.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
if newAtlantisConfig == nil {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, nil)
} else {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
}

Contributor:

logic: dereferencing *config without nil check could cause panic if LoadDiggerConfig returns nil config with non-nil error

Contributor:

In backend/controllers/cache.go, there's a potential issue with passing newAtlantisConfig to UpsertRepoCache() without checking if it's nil.

I've been trying to find the implementation of UpsertRepoCache() to determine if it properly handles nil values for the newAtlantisConfig parameter, but I can't locate the full implementation in the provided files.

If UpsertRepoCache() doesn't handle nil values properly, this could lead to a nil pointer dereference when newAtlantisConfig is nil. The code should either check for nil before passing it or ensure that the UpsertRepoCache() function handles nil values safely.

Since I can't see the implementation of UpsertRepoCache(), I'm unable to determine if this is actually a bug or if the function already handles nil values properly. I'm leaving the code unchanged but flagging this as a potential issue that should be investigated.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

Contributor:

In the UpdateRepoCache function, newAtlantisConfig is passed to UpsertRepoCache without checking if it's nil. This could cause a nil pointer dereference if the Terragrunt configuration fails to load or isn't present. The LoadDiggerConfig function can return a nil AtlantisConfig in certain cases, and this value is not validated before being passed to the database function.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
if newAtlantisConfig == nil {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, nil)
} else {
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
}

Contributor (Author):

there is no dereferencing happening in that function
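
For reference, a minimal sketch of why this is safe (the struct below is a stand-in, not the real tac.AtlantisConfig): encoding/json serializes a nil pointer as JSON null rather than dereferencing it, which is what the json.Marshal call inside UpsertRepoCache relies on.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in for tac.AtlantisConfig, used only to illustrate the behaviour.
type AtlantisConfig struct {
	AutoMerge bool `json:"automerge"`
}

func main() {
	var cfg *AtlantisConfig // nil, as newAtlantisConfig can be

	// json.Marshal never dereferences a nil pointer; it emits "null".
	out, err := json.Marshal(cfg)
	fmt.Printf("%s %v\n", out, err) // prints: null <nil>
}
```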

Contributor:

In backend/controllers/cache.go, line 87 passes newAtlantisConfig to UpsertRepoCache without checking if it's nil. Based on the code in the controller, newAtlantisConfig could be nil if the configuration loading fails or doesn't include Atlantis configuration.

However, after examining the code, I cannot confirm this is a bug because:

  1. The UpsertRepoCache function is not fully visible in the provided code, so I cannot verify if it properly handles nil values for newAtlantisConfig.
  2. The function call is preceded by error checking that would prevent execution if the config loading failed.
  3. The LoadDiggerConfig function might always return a non-nil newAtlantisConfig even if empty.

Without seeing the implementation of UpsertRepoCache, I cannot confirm if this would cause a nil pointer dereference. If UpsertRepoCache does marshal newAtlantisConfig without a nil check, then it would be a bug.

Suggested change
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)
_, err = models.DB.UpsertRepoCache(orgId, repoFullName, diggerYmlStr, *config, newAtlantisConfig)

if err != nil {
slog.Error("Could not update repo cache", "error", err)
return
@@ -93,3 +100,72 @@ func (d DiggerController) UpdateRepoCache(c *gin.Context) {

c.String(http.StatusOK, "successfully submitted cache for processing, check backend logs for progress")
}

func sendProcessCacheRequest(repoFullName string, branch string, installationId int64) error {
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
Comment on lines +105 to +106
Contributor:

The sendProcessCacheRequest function in backend/controllers/cache.go retrieves environment variables HOSTNAME and DIGGER_INTERNAL_SECRET but doesn't validate if they are set before using them.

If these environment variables are not set:

  1. diggerHostname will be an empty string, causing url.JoinPath() to create an invalid URL
  2. webhookSecret will be an empty string, resulting in an empty Authorization header
  3. The HTTP request will likely fail or be rejected by the server

This could lead to silent failures or unexpected behavior in the cache update process. The fix adds validation for both environment variables and returns appropriate error messages if they're not set.

Suggested change
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
diggerHostname := os.Getenv("HOSTNAME")
if diggerHostname == "" {
slog.Error("HOSTNAME environment variable not set")
return fmt.Errorf("HOSTNAME environment variable not set")
}
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
if webhookSecret == "" {
slog.Error("DIGGER_INTERNAL_SECRET environment variable not set")
return fmt.Errorf("DIGGER_INTERNAL_SECRET environment variable not set")
}

Comment on lines +105 to +106
Contributor:

The sendProcessCacheRequest function doesn't check if diggerHostname is empty before using it to construct a URL. If the HOSTNAME environment variable is not set, this will lead to an invalid URL being constructed when url.JoinPath is called, potentially causing unexpected behavior or errors. The function should validate that the hostname is not empty and return an appropriate error if it's missing.

Suggested change
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
diggerHostname := os.Getenv("HOSTNAME")
if diggerHostname == "" {
slog.Error("HOSTNAME environment variable not set")
return fmt.Errorf("HOSTNAME environment variable not set")
}
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")

Comment on lines +105 to +106
Contributor:

The sendProcessCacheRequest function reads the diggerHostname from the environment variable HOSTNAME without validating that it's not empty. This could lead to runtime errors when attempting to use url.JoinPath() with an empty hostname, resulting in invalid URLs and potential HTTP request failures.

The fix adds validation to check if the HOSTNAME environment variable is empty and returns an appropriate error if it is not set.

Suggested change
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
diggerHostname := os.Getenv("HOSTNAME")
if diggerHostname == "" {
return fmt.Errorf("HOSTNAME environment variable is not set")
}
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")


installationLink, err := models.DB.GetGithubInstallationLinkForInstallationId(installationId)
if err != nil {
slog.Error("Error getting installation link", "installationId", installationId, "error", err)
return err
}

orgId := installationLink.OrganisationId

payload := UpdateCacheRequest{
RepoFullName: repoFullName,
Branch: branch,
InstallationId: installationId,
OrgId: orgId,
}

cacheRefreshUrl, err := url.JoinPath(diggerHostname, "_internal/update_repo_cache")
if err != nil {
slog.Error("Error joining URL paths", "error", err)
return err
}
Comment on lines +105 to +127
Contributor:

The sendProcessCacheRequest function retrieves the diggerHostname from the HOSTNAME environment variable without validating if it's empty. If this environment variable is not set, url.JoinPath() will be called with an empty string as the first parameter, which could lead to an invalid URL being constructed.

While url.JoinPath() itself handles empty strings gracefully by treating them as empty path segments, the resulting URL would be just "_internal/update_repo_cache" without a hostname. http.NewRequest() accepts such a relative URL, so the failure would only surface when client.Do() executes the request and reports an "unsupported protocol scheme" error.

The fix adds validation to check if the HOSTNAME environment variable is empty and returns an appropriate error message if it is.

Suggested change
diggerHostname := os.Getenv("HOSTNAME")
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
installationLink, err := models.DB.GetGithubInstallationLinkForInstallationId(installationId)
if err != nil {
slog.Error("Error getting installation link", "installationId", installationId, "error", err)
return err
}
orgId := installationLink.OrganisationId
payload := UpdateCacheRequest{
RepoFullName: repoFullName,
Branch: branch,
InstallationId: installationId,
OrgId: orgId,
}
cacheRefreshUrl, err := url.JoinPath(diggerHostname, "_internal/update_repo_cache")
if err != nil {
slog.Error("Error joining URL paths", "error", err)
return err
}
diggerHostname := os.Getenv("HOSTNAME")
if diggerHostname == "" {
slog.Error("HOSTNAME environment variable is not set")
return fmt.Errorf("HOSTNAME environment variable is not set")
}
webhookSecret := os.Getenv("DIGGER_INTERNAL_SECRET")
installationLink, err := models.DB.GetGithubInstallationLinkForInstallationId(installationId)
if err != nil {
slog.Error("Error getting installation link", "installationId", installationId, "error", err)
return err
}
orgId := installationLink.OrganisationId
payload := UpdateCacheRequest{
RepoFullName: repoFullName,
Branch: branch,
InstallationId: installationId,
OrgId: orgId,
}
cacheRefreshUrl, err := url.JoinPath(diggerHostname, "_internal/update_repo_cache")
if err != nil {
slog.Error("Error joining URL paths", "error", err)
return err
}


jsonPayload, err := json.Marshal(payload)
if err != nil {
slog.Error("Process Cache: error marshaling JSON", "error", err)
return err
}

req, err := http.NewRequest("POST", cacheRefreshUrl, bytes.NewBuffer(jsonPayload))
if err != nil {
slog.Error("Process Cache: Error creating request", "error", err)
return err
}

req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", fmt.Sprintf("Bearer %v", webhookSecret))

client := &http.Client{}
resp, err := client.Do(req)
Comment on lines +144 to +145
Contributor:

The HTTP client in sendProcessCacheRequest doesn't have a timeout set, which could lead to hanging requests if the server doesn't respond. This is a common issue that can cause resource leaks and potentially impact system stability.

Adding a reasonable timeout (30 seconds in this case) ensures that requests don't hang indefinitely. This is especially important for internal service communication where a non-responsive endpoint shouldn't block the entire system.

Note that this change requires adding time to the imports if it's not already there.

Suggested change
client := &http.Client{}
resp, err := client.Do(req)
client := &http.Client{
Timeout: 30 * time.Second,
}
resp, err := client.Do(req)

Comment on lines +144 to +145
Contributor:

The HTTP client in sendProcessCacheRequest doesn't have a timeout set, which could lead to hanging requests if the server doesn't respond. This is a potential issue because without a timeout, the request could hang indefinitely, consuming resources and potentially causing the application to become unresponsive.

Adding a reasonable timeout (30 seconds in this case) ensures that requests don't hang indefinitely and resources are properly released if the server doesn't respond in a timely manner.

Suggested change
client := &http.Client{}
resp, err := client.Do(req)
client := &http.Client{
Timeout: 30 * time.Second,
}
resp, err := client.Do(req)
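
A self-contained sketch of what the two comments above suggest; the 30-second value and the request target are placeholders, not existing project conventions.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Client with an explicit timeout so the internal cache-refresh call
	// cannot hang indefinitely if the backend never responds.
	client := &http.Client{Timeout: 30 * time.Second}

	// Placeholder request; the real code POSTs the marshaled
	// UpdateCacheRequest payload to cacheRefreshUrl with auth headers set.
	req, err := http.NewRequest("POST", "https://example.invalid/_internal/update_repo_cache", nil)
	if err != nil {
		fmt.Println("building request failed:", err)
		return
	}

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed or timed out:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}
```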

if err != nil {
fmt.Println("Error sending request:", err)
return err
}
Comment on lines +144 to +149
Contributor:

This is the third instance of inconsistent error logging in the sendProcessCacheRequest function. While most of the function uses structured logging with slog.Error, this section uses fmt.Println for error logging. This inconsistency could lead to missing logs in production environments where stdout might be redirected or not captured properly.

The fix standardizes this error logging to use slog.Error with structured fields, completing the standardization of the logging approach in this function.

Suggested change
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
slog.Error("Error sending request", "error", err)
return err
}

Comment on lines +145 to +149
Contributor:

In the sendProcessCacheRequest function, there's an inconsistency in error logging. While the rest of the codebase (including other parts of this same function) uses the structured logger slog.Error(), line 146 uses fmt.Println() for error logging. This inconsistency makes debugging more difficult as it breaks the structured logging pattern used throughout the codebase.

Suggested change
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
resp, err := client.Do(req)
if err != nil {
slog.Error("Error sending request", "error", err)
return err
}

defer resp.Body.Close()
Comment on lines +145 to +150
Contributor:

In the sendProcessCacheRequest function, there's a potential nil pointer dereference bug. If the HTTP request fails (client.Do returns an error), resp will be nil, but the code still attempts to call defer resp.Body.Close(). This will cause a panic at runtime when the deferred function executes.

The fix adds a nil check in the deferred function to ensure we only attempt to close the response body if both resp and resp.Body are not nil.

Suggested change
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
defer resp.Body.Close()
resp, err := client.Do(req)
if err != nil {
fmt.Println("Error sending request:", err)
return err
}
defer func() {
if resp != nil && resp.Body != nil {
resp.Body.Close()
}
}()


statusCode := resp.StatusCode
if statusCode != 200 {
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
}

slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))
Comment on lines +154 to +166
Contributor:

In the sendProcessCacheRequest function in backend/controllers/cache.go, when reading the response body in the error case (non-200 status code), the error from io.ReadAll is logged but then ignored. The function continues with potentially invalid responseBody data which is then used in the error message and log.

If the error reading the response body occurs, we should return early with an appropriate error message rather than continuing with potentially invalid data. This ensures that the error handling is more robust and prevents using potentially corrupted or empty data in subsequent operations.

Suggested change
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
}
slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
return fmt.Errorf("cache update failed with status code %d and could not read response body: %v", statusCode, err)
}
slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))


return fmt.Errorf("cache update failed with status code %d: %s", statusCode, string(responseBody))
Comment on lines +154 to +168
Contributor:

In the sendProcessCacheRequest function, when a non-200 status code is received, the code attempts to read the response body using io.ReadAll(resp.Body). If this read operation fails, the error is logged but the function continues to use string(responseBody) in both the error log and the returned error message.

Since responseBody would be empty or nil when io.ReadAll fails, this leads to misleading error messages that don't accurately reflect the actual problem. The error message would show an empty response body rather than indicating that there was a problem reading the response.

The fix adds an early return with a more accurate error message when the response body cannot be read, ensuring that the error propagated up the call stack correctly reflects what actually happened.

Suggested change
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
}
slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))
return fmt.Errorf("cache update failed with status code %d: %s", statusCode, string(responseBody))
// Read response body to get error details
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
slog.Error("Failed to read error response body", "error", err)
return fmt.Errorf("cache update failed with status code %d: error reading response body: %v", statusCode, err)
}
slog.Error("got unexpected cache status",
"statusCode", statusCode,
"repoFullName", repoFullName,
"orgId", orgId,
"branch", branch,
"installationId", installationId,
"responseBody", string(responseBody))
return fmt.Errorf("cache update failed with status code %d: %s", statusCode, string(responseBody))

}
return nil
}
44 changes: 32 additions & 12 deletions backend/controllers/github.go
@@ -6,6 +6,7 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/diggerhq/digger/libs/digger_config/terragrunt/tac"
"github.com/diggerhq/digger/libs/git_utils"
"log/slog"
"math/rand"
@@ -416,7 +417,6 @@
loadProjectsOnPush := os.Getenv("DIGGER_LOAD_PROJECTS_ON_PUSH")

if loadProjectsOnPush == "true" {

if strings.HasSuffix(ref, defaultBranch) {
slog.Debug("Loading projects from GitHub repo (push event)", "loadProjectsOnPush", loadProjectsOnPush, "ref", ref, "defaultBranch", defaultBranch)
err := services.LoadProjectsFromGithubRepo(gh, strconv.FormatInt(installationId, 10), repoFullName, repoOwner, repoName, cloneURL, defaultBranch)
Expand All @@ -428,6 +428,15 @@ func handlePushEvent(gh utils.GithubClientProvider, payload *github.PushEvent, a
slog.Debug("Skipping loading projects from GitHub repo", "loadProjectsOnPush", loadProjectsOnPush)
}

repoCacheEnabled := os.Getenv("DIGGER_CONFIG_REPO_CACHE_ENABLED")
if repoCacheEnabled == "1" && strings.HasSuffix(ref, defaultBranch) {
go func() {
if err := sendProcessCacheRequest(repoFullName, defaultBranch, installationId); err != nil {
slog.Error("Failed to process cache request", "error", err, "repoFullName", repoFullName)
}
}()
}

return nil
}

@@ -914,7 +923,7 @@
return nil
}

func GetDiggerConfigForBranch(gh utils.GithubClientProvider, installationId int64, repoFullName string, repoOwner string, repoName string, cloneUrl string, branch string, changedFiles []string) (string, *dg_github.GithubService, *dg_configuration.DiggerConfig, graph.Graph[string, dg_configuration.Project], error) {
func GetDiggerConfigForBranch(gh utils.GithubClientProvider, installationId int64, repoFullName string, repoOwner string, repoName string, cloneUrl string, branch string, changedFiles []string, taConfig *tac.AtlantisConfig) (string, *dg_github.GithubService, *dg_configuration.DiggerConfig, graph.Graph[string, dg_configuration.Project], error) {
slog.Info("Getting Digger config for branch",
slog.Group("repository",
slog.String("fullName", repoFullName),
@@ -954,7 +963,7 @@

slog.Debug("Successfully read digger.yml file", "configLength", len(diggerYmlStr))

config, _, dependencyGraph, err = dg_configuration.LoadDiggerConfig(dir, true, changedFiles)
config, _, dependencyGraph, _, err = dg_configuration.LoadDiggerConfig(dir, true, changedFiles, taConfig)
Contributor:

logic: The LoadDiggerConfig call now passes taConfig as the 4th parameter, but the return values suggest it now returns 5 values instead of 4. Verify this matches the updated function signature.
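
For cross-checking, the shape implied by the call sites in this diff is sketched below; the parameter names and stub types are guesses, not the actual declarations in libs/digger_config.

```go
package sketch

// Stub types standing in for the real ones; only the number and position
// of the parameters and return values are taken from the call sites in this PR.
type DiggerConfig struct{}
type DiggerConfigYaml struct{}
type Project struct{}
type Graph[K comparable, T any] struct{}
type AtlantisConfig struct{}

// Four inputs (the terragrunt-atlantis config added last) and five outputs
// (the derived *AtlantisConfig added as the fourth value).
func LoadDiggerConfig(
	workingDir string,
	generateProjects bool,
	changedFiles []string,
	taConfig *AtlantisConfig,
) (*DiggerConfig, *DiggerConfigYaml, Graph[string, Project], *AtlantisConfig, error) {
	return nil, nil, Graph[string, Project]{}, nil, nil
}
```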

if err != nil {
slog.Error("Error loading and parsing Digger config",
"directory", dir,
@@ -1046,6 +1055,7 @@

// check if items should be loaded from cache
useCache := false
var taConfig *tac.AtlantisConfig = nil
if val, _ := os.LookupEnv("DIGGER_CONFIG_REPO_CACHE_ENABLED"); val == "1" && !slices.Contains(prLabels, "digger:no-cache") {
useCache = true
slog.Info("Attempting to load config from cache",
@@ -1054,7 +1064,7 @@
"prNumber", prNumber,
)

diggerYmlStr, config, dependencyGraph, err := retrieveConfigFromCache(orgId, repoFullName)
_, _, _, taConfigTemp, err := retrieveConfigFromCache(orgId, repoFullName)
if err != nil {
slog.Info("Could not load from cache, falling back to live loading",
"orgId", orgId,
@@ -1065,9 +1075,8 @@
slog.Info("Successfully loaded config from cache",
"orgId", orgId,
"repoFullName", repoFullName,
"projectCount", len(config.Projects),
)
return diggerYmlStr, ghService, config, *dependencyGraph, &prBranch, &prCommitSha, changedFiles, nil
taConfig = taConfigTemp
}
}

@@ -1084,7 +1093,7 @@
"prNumber", prNumber,
)

diggerYmlStr, ghService, config, dependencyGraph, err := GetDiggerConfigForBranch(gh, installationId, repoFullName, repoOwner, repoName, cloneUrl, prBranch, changedFiles)
diggerYmlStr, ghService, config, dependencyGraph, err := GetDiggerConfigForBranch(gh, installationId, repoFullName, repoOwner, repoName, cloneUrl, prBranch, changedFiles, taConfig)
if err != nil {
slog.Error("Error loading Digger config from repository",
"prNumber", prNumber,
@@ -1098,7 +1107,7 @@
return diggerYmlStr, ghService, config, dependencyGraph, &prBranch, &prCommitSha, changedFiles, nil
}

func retrieveConfigFromCache(orgId uint, repoFullName string) (string, *dg_configuration.DiggerConfig, *graph.Graph[string, dg_configuration.Project], error) {
func retrieveConfigFromCache(orgId uint, repoFullName string) (string, *dg_configuration.DiggerConfig, *graph.Graph[string, dg_configuration.Project], *tac.AtlantisConfig, error) {
slog.Debug("Retrieving config from cache",
"orgId", orgId,
"repoFullName", repoFullName,
@@ -1111,7 +1120,7 @@
"repoFullName", repoFullName,
"error", err,
)
return "", nil, nil, fmt.Errorf("failed to load repo cache: %v", err)
return "", nil, nil, nil, fmt.Errorf("failed to load repo cache: %v", err)
}

var config dg_configuration.DiggerConfig
@@ -1122,7 +1131,18 @@
"repoFullName", repoFullName,
"error", err,
)
return "", nil, nil, fmt.Errorf("failed to unmarshal config from cache: %v", err)
return "", nil, nil, nil, fmt.Errorf("failed to unmarshal config from cache: %v", err)
}

var taConfig tac.AtlantisConfig
err = json.Unmarshal(repoCache.TerragruntAtlantisConfig, &taConfig)
if err != nil {
slog.Error("Failed to unmarshal config from cache",
"orgId", orgId,
"repoFullName", repoFullName,
"error", err,
)
return "", nil, nil, nil, fmt.Errorf("failed to unmarshal config from cache: %v", err)
Comment on lines +1137 to +1145
Contributor:

style: Consider handling the case where TerragruntAtlantisConfig is nil or empty to avoid unmarshaling errors, similar to how other cached data is validated.
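
A minimal, self-contained sketch of the guard this comment suggests (the types below are stand-ins for models.RepoCache and tac.AtlantisConfig): skip decoding when the cached column is NULL or empty, since json.Unmarshal fails on empty input.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stand-ins for tac.AtlantisConfig and models.RepoCache.
type AtlantisConfig struct {
	AutoMerge bool `json:"automerge"`
}

type RepoCache struct {
	TerragruntAtlantisConfig []byte
}

// decodeCachedAtlantisConfig returns nil (and no error) when the cached
// bytes are empty, e.g. for rows written before the new column existed.
func decodeCachedAtlantisConfig(rc RepoCache) (*AtlantisConfig, error) {
	if len(rc.TerragruntAtlantisConfig) == 0 {
		return nil, nil
	}
	var cfg AtlantisConfig
	if err := json.Unmarshal(rc.TerragruntAtlantisConfig, &cfg); err != nil {
		return nil, fmt.Errorf("failed to unmarshal terragrunt-atlantis config from cache: %v", err)
	}
	return &cfg, nil
}

func main() {
	empty := RepoCache{}
	populated := RepoCache{TerragruntAtlantisConfig: []byte(`{"automerge":true}`)}

	cfg, err := decodeCachedAtlantisConfig(empty)
	fmt.Println(cfg, err) // <nil> <nil>

	cfg, err = decodeCachedAtlantisConfig(populated)
	fmt.Println(cfg.AutoMerge, err) // true <nil>
}
```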

}

slog.Debug("Creating project dependency graph from cached config",
@@ -1138,7 +1158,7 @@
"repoFullName", repoFullName,
"error", err,
)
return "", nil, nil, fmt.Errorf("error creating dependency graph from cached config: %v", err)
return "", nil, nil, nil, fmt.Errorf("error creating dependency graph from cached config: %v", err)
}

slog.Info("Successfully retrieved config from cache",
@@ -1147,7 +1167,7 @@
"projectCount", len(config.Projects),
)

return repoCache.DiggerYmlStr, &config, &projectsGraph, nil
return repoCache.DiggerYmlStr, &config, &projectsGraph, &taConfig, nil
}

func GetRepoByInstllationId(installationId int64, repoOwner string, repoName string) (*models.Repo, error) {
2 changes: 2 additions & 0 deletions backend/migrations/20250725041417.sql
@@ -0,0 +1,2 @@
-- Modify "repo_caches" table
ALTER TABLE "public"."repo_caches" ADD COLUMN "terragrunt_atlantis_config" bytea NULL;
3 changes: 2 additions & 1 deletion backend/migrations/atlas.sum
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
h1:pk1704W4CHtf2K+qFKIkjTh8pN2R8Lor32ecJPtQgoY=
h1:DO/UG0tYCnZA69yRLMd2ZBKUORlJtZJLkVCiCogKp+k=
20231227132525.sql h1:43xn7XC0GoJsCnXIMczGXWis9d504FAWi4F1gViTIcw=
20240115170600.sql h1:IW8fF/8vc40+eWqP/xDK+R4K9jHJ9QBSGO6rN9LtfSA=
20240116123649.sql h1:R1JlUIgxxF6Cyob9HdtMqiKmx/BfnsctTl5rvOqssQw=
@@ -60,3 +60,4 @@ h1:pk1704W4CHtf2K+qFKIkjTh8pN2R8Lor32ecJPtQgoY=
20250716043004.sql h1:WFz35jhsJ1pswpt5mmewW7L8LP5taRcNMf+pqUi3Yxs=
20250716222603.sql h1:a66teR/6BQwNwhJ/zgNA/NjUVTz5DcoQl7cDZvzVS8c=
20250717032021.sql h1:HaIhNsz3C+c87CDmFjgFlc9zGqoE5BU4m0dofDpeDYk=
20250725041417.sql h1:Dds6fqS415FD1jlsoVEahcEEm1px3EHV5435moL+Vp8=
9 changes: 5 additions & 4 deletions backend/models/cache.go
@@ -7,8 +7,9 @@ import (
// storing repo cache such as digger.yml configuration
type RepoCache struct {
gorm.Model
OrgId uint
RepoFullName string
DiggerYmlStr string
DiggerConfig []byte `gorm:"type:bytea"`
OrgId uint
RepoFullName string
DiggerYmlStr string
DiggerConfig []byte `gorm:"type:bytea"`
TerragruntAtlantisConfig []byte `gorm:"type:bytea"`
}
22 changes: 17 additions & 5 deletions backend/models/storage.go
@@ -4,6 +4,7 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/diggerhq/digger/libs/digger_config/terragrunt/tac"
"log/slog"
"math"
"net/http"
@@ -1746,7 +1747,7 @@ func (db *Database) GetDiggerLock(resource string) (*DiggerLock, error) {
return lock, nil
}

func (db *Database) UpsertRepoCache(orgId uint, repoFullName string, diggerYmlStr string, diggerConfig configuration.DiggerConfig) (*RepoCache, error) {
func (db *Database) UpsertRepoCache(orgId uint, repoFullName string, diggerYmlStr string, diggerConfig configuration.DiggerConfig, newAtlantisConfig *tac.AtlantisConfig) (*RepoCache, error) {
var repoCache RepoCache

configMarshalled, err := json.Marshal(diggerConfig)
@@ -1758,6 +1759,15 @@
return nil, fmt.Errorf("could not marshal config: %v", err)
}

atlantisConfigMarshalled, err := json.Marshal(newAtlantisConfig)
if err != nil {
slog.Error("could not marshal terragrunt-atlantis-config",
"repoFullName", repoFullName,
"orgId", orgId,
"error", err)
return nil, fmt.Errorf("could not marshal config: %v", err)
}

// check if repo exist already, do nothing in this case
result := db.GormDB.Where("org_id = ? AND repo_full_name=?", orgId, repoFullName).Find(&repoCache)
if result.Error != nil {
@@ -1777,6 +1787,7 @@

repoCache.DiggerConfig = configMarshalled
repoCache.DiggerYmlStr = diggerYmlStr
repoCache.TerragruntAtlantisConfig = atlantisConfigMarshalled
result = db.GormDB.Save(&repoCache)
} else {
// create record here
@@ -1785,10 +1796,11 @@
"orgId", orgId)

repoCache = RepoCache{
OrgId: orgId,
RepoFullName: repoFullName,
DiggerYmlStr: diggerYmlStr,
DiggerConfig: configMarshalled,
OrgId: orgId,
RepoFullName: repoFullName,
DiggerYmlStr: diggerYmlStr,
DiggerConfig: configMarshalled,
TerragruntAtlantisConfig: atlantisConfigMarshalled,
}
result = db.GormDB.Save(&repoCache)
if result.Error != nil {
2 changes: 1 addition & 1 deletion backend/utils/bitbucket.go
@@ -107,7 +107,7 @@
}

diggerYmlStr = string(diggerYmlBytes)
config, _, dependencyGraph, err = dg_configuration.LoadDiggerConfig(dir, true, changedFiles)
config, _, dependencyGraph, _, err = dg_configuration.LoadDiggerConfig(dir, true, changedFiles, nil)
if err != nil {
slog.Error("Error loading Digger config",
"repoFullName", repoFullName,
2 changes: 1 addition & 1 deletion backend/utils/gitlab.go
@@ -131,7 +131,7 @@
"configLength", len(diggerYmlStr),
)

config, _, dependencyGraph, err = dg_configuration.LoadDiggerConfig(dir, true, changedFiles)
config, _, dependencyGraph, _, err = dg_configuration.LoadDiggerConfig(dir, true, changedFiles, nil)
if err != nil {
slog.Error("Failed to load Digger config",
"projectId", projectId,
@@ -46,7 +46,7 @@ func main() {

slog.Info("refreshing projects from repo", "repoFullName", repoFullName)
err := utils3.CloneGitRepoAndDoAction(cloneUrl, branch, "", token, "", func(dir string) error {
config, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil)
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)
Contributor:

The PR has changed the signature of LoadDiggerConfigYaml to return an additional value (the Atlantis configuration), but this call site hasn't been updated to match. The function now returns four values instead of three, which will cause a compilation error.

Suggested change
config, _, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)
config, _, newAtlantisConfig, err := dg_configuration.LoadDiggerConfigYaml(dir, true, nil, nil)

if err != nil {
slog.Error("failed to load digger.yml: %v", "error", err)
return fmt.Errorf("error loading digger.yml %v", err)
2 changes: 1 addition & 1 deletion cli/pkg/github/github.go
@@ -107,7 +107,7 @@
usage.ReportErrorAndExit(githubActor, fmt.Sprintf("Failed to get current dir. %s", err), 4)
}

diggerConfig, diggerConfigYaml, dependencyGraph, err := digger_config.LoadDiggerConfig("./", true, nil)
diggerConfig, diggerConfigYaml, dependencyGraph, _, err := digger_config.LoadDiggerConfig("./", true, nil, nil)
if err != nil {
usage.ReportErrorAndExit(githubActor, fmt.Sprintf("Failed to read Digger digger_config. %s", err), 4)
}