Integration Tests for the SOM language #128
base: master
Conversation
Another small issue: I have a test that passes but was expected to fail. Reporting this should be minimal and does not need to show the whole output. Currently, it shows this:
But all that's needed is:
Also, it looks like there's something odd with the diff:
Assertions that fail only by being in unspecified/known/unsupported now just show that they passed and are located inside the tags file.
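For context, here is a hedged sketch of how such a tags (TEST_EXCEPTIONS) file might be consulted when reporting these assertions; the category-to-test-list layout and the file name are assumptions for illustration, not the runner's actual format:

import yaml  # PyYAML, already a dependency of this PR

def load_tags(path):
    # Illustrative only: assumes the tags file maps categories such as
    # unspecified/known/unsupported to lists of test paths.
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f) or {}

tags = load_tags("known_failures.yaml")  # placeholder file name
for category in ("unspecified", "known", "unsupported"):
    for test in tags.get(category, []):
        print(f"{test} is tagged as {category}")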
Force-pushed from c830358 to 493648b
IntegrationTests/test_runner.py (outdated)
# ###############################################
# TESTS BELOW THIS ARE TESTING THE RUNNER ITSELF
# ###############################################
It would be good if all of this went into a separate file. It would also be good if there were an easy way to run the integration tests on a SOM VM without getting distracted by the tests of the test runner.
IntegrationTests/test_runner.py (outdated)
tests = sorted(tests)

expected_tests = [
    "core-lib/IntegrationTests/test_runner_tests/soms_for_testing/som_test_1.som",
You're already using __file__ above. I am not really sure what the test here is doing, but the hard-coded paths mean this fails when pytest is run in a directory other than the expected one, which is not great.
Yep, I agree, I hadn't spotted that. This should fix it by using a file path which is generated at runtime:
expected_tests = [
    f"{str(test_runner_tests_location)}/soms_for_testing/som_test_1.som",
    f"{str(test_runner_tests_location)}/soms_for_testing/som_test_2.som",
    f"{str(test_runner_tests_location)}/soms_for_testing/som_test_3.som",
]
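A minimal sketch (not the PR's exact code) of deriving that location from __file__ with pathlib, so the expected paths no longer depend on the directory pytest is started from; the directory and file names are taken from the snippet above:

from pathlib import Path

# Resolve the test_runner_tests directory relative to this file at runtime.
test_runner_tests_location = Path(__file__).parent / "test_runner_tests"
expected_tests = sorted(
    str(p)
    for p in (test_runner_tests_location / "soms_for_testing").glob("som_test_*.som")
)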
IntegrationTests/test_runner.py (outdated)

def assign_environment_variables(classpath, vm, test_exceptions, generate_report):
    """
    Assign the environment variables needed for execution based on the name
    classpath: String being the name of the variable CLASSPATH
    vm: String being the name of the variable VM
    test_exceptions: String being the name of the variable TEST_EXCEPTIONS
    generate_report: String being the name of the variable GENERATE_REPORT
    """
    # Work out settings for the application (they are labelled REQUIRED or OPTIONAL)
    if classpath not in os.environ:  # REQUIRED
        sys.exit("Please set the CLASSPATH environment variable")

    if vm not in os.environ:  # REQUIRED
        sys.exit("Please set the VM environment variable")

    if test_exceptions in os.environ:  # OPTIONAL
        external_vars.TEST_EXCEPTIONS = os.environ[test_exceptions]

    if generate_report in os.environ:  # OPTIONAL
        # Value is the location
        # Its presence in the env variables signifies intent to save
        external_vars.GENERATE_REPORT = os.environ[generate_report]

    external_vars.CLASSPATH = os.environ[classpath]
    external_vars.VM = os.environ[vm]
Hm, I am not sure I see the benefit of making this a function. Also, passing in the constants on each call seems to make it harder to figure out.
My initial idea was to convert this to a method call so the logic could be tested; however, I've not thought of a good way of doing that, so it will probably revert to the old implementation.
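One possible way to keep the logic testable, purely as a suggestion and not code from this PR: pytest's monkeypatch fixture can set or delete environment variables per test, so the REQUIRED/OPTIONAL branches can be exercised without touching the real environment. The import path below is an assumption.

import pytest
from test_runner import assign_environment_variables  # assumed import path

def test_missing_classpath_exits(monkeypatch):
    # With CLASSPATH absent, the function is expected to call sys.exit().
    monkeypatch.delenv("CLASSPATH", raising=False)
    monkeypatch.setenv("VM", "/path/to/som")  # placeholder path
    with pytest.raises(SystemExit):
        assign_environment_variables("CLASSPATH", "VM", "TEST_EXCEPTIONS", "GENERATE_REPORT")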
Hm, there's some problem with the current output. Looks like everything is turned lower case:
Output will be converted to lower case if case_sensitive is not set to true in the test file. If case sensitivity is imperative for a test, then it needs to be specified in the test file. The diff just reflects that, unless I am missing something in the output.
Hm, right. True, it's the diff.
I could show the output in upper case and then show the diff, but I think the diff showing what is actually compared makes more sense; otherwise it would point out case differences.
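A hedged sketch of the comparison described above (the function name and signature are illustrative, not the runner's actual API): expected and actual output are lowercased unless case_sensitive is set, and the diff is then computed on exactly what was compared, which is why it appears in lower case.

import difflib

def compare_output(expected, actual, case_sensitive=False):
    # Lowercase both sides unless the test asked for case sensitivity.
    if not case_sensitive:
        expected, actual = expected.lower(), actual.lower()
    # Unified diff of what was actually compared; an empty string means a match.
    return "".join(difflib.unified_diff(
        expected.splitlines(keepends=True),
        actual.splitlines(keepends=True),
        fromfile="expected", tofile="actual"))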
Force-pushed from 1b2c0f4 to 6d13aa9
This is a pytest-based runner for the integration tests of SOM.
- Test expectations can be set through a tagging system, using a YAML file which can be specified through the TEST_EXCEPTIONS env var.
Force-pushed from 6d13aa9 to c07858e
- The report generated is now in YAML format. It can function as a TEST_EXCEPTIONS file, with failing and passing tests added and removed as needed. It should not be blindly followed, but it makes the generation of a tags file very easy.
- It comes with additional information, like the number passed/skipped and which ones exactly have been changed.
- Test runner is now case insensitive unless instructed not to be through case_sensitive: True in the tests.
- It is also possible now, at the test level, to expect a test to fail. This allows for testing of test_runner features. Not ideal, but it makes sure case sensitivity works correctly.
- Adjusted the reading of file paths to require only one str() call.
- Tests now sorted / name of check_out changed. Comments updated to reflect variable naming.
- Diff is now shown in the error message.
… string. - skip test on Unicode Failure
- Removed leftover print statements.
- Updated error handling to just show the diff unless no diff is available.
- EXECUTABLE changed to be VM.
- Added some basic test_runner tests.
- Make a diff reverted to just be a diff and not have line numbers.
- Be strict on inclusion in stdout/stderr.
- Removed the lang_tests and adjusted the test_runner logic to now print a proper error when the /Tests folder is not found.
- Fix conftest to output a string rather than an object for exitstatus.
- Fixed error with missing or empty tag file; added a test to assert that the YAML files are working correctly.
- Added an additional test to check the robustness of read_test_exceptions.
- Fixed an error where specifying case_sensitive: False would not register as False but True instead.
- Added a new test to check the discovery of lang_tester-valid tests inside a directory.
- Pathname for TEST_EXCEPTIONS now just takes a filename if it is located within IntegrationTests.
- Adjusted test_test_diccovery to use relative path names rather than hard-coded ones.
- There are now two options for running pytest: normally, which executes just the SOM tests, and with the argument -m tester, which runs the test_runner tests located in test_tester.
- Added a deselect message when tests are deselected due to runner choice.
- File paths must now be given from IntegrationTests onwards, so: Tests/test.som.
- Updated to now use relative paths rather than hardcoded ones for TEST_EXCEPTIONS.
Force-pushed from c07858e to 79a727c
…ests() It’s only needed when the tests are prepared. Signed-off-by: Stefan Marr <[email protected]>
@rhys-h-walker I squashed lots of commits to have a better overview. I also rebased on the new master, which contains the merge with the integration tests. Please have a look at the commits I added; there were various bits that I pointed out earlier and that simplify the code a bit. Please also have a look at the test_tester stuff. This passes: But from one level up, it doesn't work:
Yep, that's an easy enough fix; I think one of the testers is missing a relative path. I have also noticed that the GENERATE_REPORT function no longer works correctly. The names it outputs for tests include the whole core-lib/IntegrationTests prefix, so it no longer works directly as a TEST_EXCEPTIONS file. I'll fix that too.
Also, Tests/vector_awfy_capacity.som may need updating; it features a fixed classpath that is passed to the SOM++ interpreter.
It'll need some kind of relative path location added to it.
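A small illustrative sketch of the path normalisation being discussed (the helper name is made up): keeping only the portion of a test path from Tests/ onwards would let report entries line up with what a TEST_EXCEPTIONS file expects.

def normalise_test_path(path):
    # Drop anything before "Tests/" so e.g. "core-lib/IntegrationTests/Tests/test.som"
    # becomes "Tests/test.som"; paths without the marker are returned unchanged.
    marker = "Tests/"
    idx = path.find(marker)
    return path[idx:] if idx != -1 else path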
…o that passing.yaml can be recognised
Hm, yeah, I suppose that's a little different from the load_file case, where it's inside the SOM program. Not sure whether we can find a consistent way of dealing with both.
…remove anything in the path from before Tests/
…variable by that name. Useful for custom classpath loading
This issue is resolved: specify through a @tag an environment variable which can be used in place of a custom_classpath.
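A rough sketch of the idea, with made-up names (the actual @tag syntax is not shown here): the tag names an environment variable, and the runner substitutes that variable's value for the custom classpath.

import os

def resolve_custom_classpath(env_var_name):
    # env_var_name comes from the test's @tag; fail clearly if it is not set.
    if env_var_name not in os.environ:
        raise RuntimeError(f"Environment variable {env_var_name!r} named by the test tag is not set")
    return os.environ[env_var_name]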
…w match what is expected
… Tests/ So all tests will be Tests/test.som
…pdated parse_test_file to feature methods rather than one large method. Allows for more fine-grained testing.
This pull request adds integration tests to the SOM standard library. These integration tests require two Python libraries to be installed: pytest and PyYAML.
Integration tests are based on the lang_tests of yksom.
Tests can be configured using environment variables:
EXECUTABLE= Location of the SOM executable
CLASSPATH= Classpath required for running
TEST_EXCEPTIONS= Exceptions YAML file for tests not expected to pass
DEBUG= Boolean for detailed run information on the go
GENERATE_REPORT= File location for a report to be generated at the end of the run
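For illustration, one hypothetical way to set these variables and start the run from Python; the paths and classpath are placeholders, and note that a later commit in this PR renames EXECUTABLE to VM.

import os
import pytest

os.environ["EXECUTABLE"] = "/path/to/som-vm"           # placeholder path to the SOM executable
os.environ["CLASSPATH"] = "core-lib/Smalltalk"         # placeholder classpath
os.environ["TEST_EXCEPTIONS"] = "known_failures.yaml"  # optional tags file
# Run the SOM integration tests; pass "-m", "tester" instead to run the test_runner tests.
pytest.main(["IntegrationTests"])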