Run tekton tasks tests in parallel #432
Conversation
Container logs were printed using fmt.Println, which is test agnostic, leading to outputs occasionally being associated with the wrong test case whenever tests run in parallel.
Now that the tests have successfully run through, we measured that the test duration was cut down from 23m38s to 19m54s (only comparing one test run with another, though).
Looks awesome to me. I am curious what you say to my previous comments.
Could this change get in the way of introducing caching (see proposal PR #412), as there might be concurrent access to the cache, which could potentially cause flaky tests?
return fmt.Errorf("failed to create directory: %w", err)
}

f, err := os.OpenFile(filePath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
I am curious: did you need these file flags and permissions? Did you try different flags/permissions first - perhaps os.Create() - and it did not work?
Good catch, we'll check os.Create() instead; looks like it is a convenience wrapper around os.OpenFile().
fileHandles[event.Test] = f
case "pause", "cont", "skip": // do nothing
case "pass", "fail":
	f, ok := fileHandles[event.Test]
Should there be some error reporting if it is not ok?
case "pass", "fail":
	f, ok := fileHandles[event.Test]
	if ok {
		err := f.Close()
I assume test pass and fail are typically captured in prior output. Perhaps it's worthwhile to nonetheless write this explicitly in the log.
@henrjk you're spot-on, go test actually does write a pass/fail marker with a "bench"/"output" event.Action.
The object for events of type pass and fail does not have any output. The output is captured in the prior object with event type output, like this:

--- PASS: TestTaskODSBuildGradle/ods-build-gradle/task_should_build_gradle_app (70.27s)

So the only additional log that we could provide reacting on pass and fail would be a duplicate of this one, and that's why @kuebler and I decided to do nothing but close the file on those.
return fmt.Errorf("failed to open output file: %w", err)
}
fileHandles[event.Test] = f
case "pause", "cont", "skip": // do nothing
Is skip of interest in the log?
Similarly to #432 (comment), the output for a skipped test would be captured in the prior output, so it would only lead to us printing information twice if we reacted on skip, pause or cont.
Really, really great work! I have not looked into the details, but I have two more general questions:
Good idea; probably the tests could read an ENV variable and based on that decide whether to execute the tests in parallel or not.
Good catch, let's try and put
Careful, this is on purpose: #364 (though we can also change the approach if we find a good alternative). Different topic than parallel testing anyway ;)
- previously, only table tests were run in parallel
- in addition, check whether running the parent tasks themselves in parallel further improves overall test execution time
…com/opendevstack/ods-pipeline into feature/improve-test-execution-time
- looks like it doesn't have a positive effect on test execution time
- makes tests flaky, as they're now running into all kinds of timeouts due to concurrent execution
Closing as a consequence of #722. The tests are way faster now because there is less to test in this repo, hence the need for parallel tests is less.
This PR introduces running the Tekton tasks tests in parallel.
Due to the fact that the test outputs of the different test cases are interleaved, we came up with a solution to log the output of each test case to its respective log file inside test/testdata/test-results. This directory will be uploaded as an artifact after the test steps, so that developers are able to check the logs conveniently.
Ref #401
Closes #14
Tasks:
- docs/design directory updated, or not applicable
- docs directory updated, or not applicable
- tests pass (make test), or not applicable