Julia Test Running: Best Practices
Think your tests run too slowly in Julia? Can't figure out Test-Driven Development (TDD)? Here is a guide.
Many programming languages give you one-size-fits-all solutions. The Julia community, in contrast, believes in supplying flexible building blocks from which you can construct whatever solution you want.
The advantage of this approach is that you can easily tailor a solution to your particular needs. The downside is that many Julia beginners are left scratching their heads, wondering which blocks they should combine. Julia's testing framework is an excellent example of both the power and the confusion. It is flexible and you can tailor it to your needs, but that also means it is not always obvious how to carry out the most common workflows.
This article is your missing guide to common workflows in Julia's built-in testing framework. We are going to look at how you do the following tasks:
Run an extensive, slow and thorough test of a package you just downloaded
Regular testing while developing a project
Rapid iteration testing suited for workflows such as TDD
Your needs differ depending on what you are doing. If you are testing every five minutes, then you can assume that the packages you depend on didn't suddenly get a new release. You don't need to verify every dependency on every test run while doing TDD. In contrast, when you pick up an old project you worked on last week or last month, you would want to get the latest changes and make sure everything works with the latest dependencies.
Infrequent and Correct Testing
Let us fetch a project to do some testing. Feel free to use your own package, but the examples I take you through here are based on running tests on my LittleManComputer package. It is a bit of a silly name, but it derives from a thought experiment: a simple model computer used to teach beginners how a computer operates at the most basic level.
We are going to use it to teach Julia testing. Find a suitable directory and clone the Git repository:
❯ git clone git@github.com:ordovician/LittleManComputer.jl.git LMC
Cloning into 'LMC'...
Receiving objects: 100% (144/144), 24.61 KiB | 307.00 KiB/s, done.
❯ cd LMC
Next, we start up Julia and jump into package mode. Don't know how to get into package mode? Just press the closing square bracket ] and the prompt will change from julia> to something ending with pkg>, depending on your Julia version. Once in package mode, we activate the project, which makes it the target for all our package commands. Why do we care about that? Because running a test is a package command.
❯ julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.7.2 (2022-02-06)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |
julia> ]
(@v1.7) pkg> activate .
Activating project at `~/Development/LMC`
(LittleManComputer) pkg> test
Testing LittleManComputer
Status `/private/var/folders/qb/Project.toml`
[c742fd3c] LittleManComputer v0.1.0 `~/Development/LMC`
[8dfed614] Test `@stdlib/Test`
Status `/private/var/folders/qb/Manifest.toml`
[c742fd3c] LittleManComputer v0.1.0 `~/Development/LMC`
[2a0f44e3] Base64 `@stdlib/Base64`
[b77e0a4c] InteractiveUtils `@stdlib/InteractiveUtils`
[56ddb016] Logging `@stdlib/Logging`
[d6f4376e] Markdown `@stdlib/Markdown`
[9a3f8284] Random `@stdlib/Random`
[ea8e919c] SHA `@stdlib/SHA`
[9e88b42a] Serialization `@stdlib/Serialization`
[8dfed614] Test `@stdlib/Test`
Testing Running tests...
Test Summary: | Pass Total
All Tests | 57 57
Testing LittleManComputer tests passed
The output tells us that a total of 57 tests were run and they all passed, and all we had to do to run them was to type test while in package mode. However, this is a very elaborate test; you can see there is a lot of activity before the actual tests run.
The purpose of this form of testing is to do a thorough, occasional test of a whole package, for example right after you have downloaded it. This form of testing is aimed not at rapid iteration but at correctness: it checks all dependent packages and makes sure they are up to date before running the tests. But if you are making numerous small code changes and want to make sure you haven't broken any tests along the way, you would rather not run pkg> test, as that will significantly slow you down.
NOTE: I am telling you a half-truth. Since I wrote the first version of this story, the Julia package and testing systems have seen such dramatic performance improvements that for plenty of projects this kind of testing will be more than fast enough.
Regular Slow Testing
In your daily workflow, you will run tests more frequently. In these cases, the pkg> test approach is suboptimal, as it is pointless to check the state of dependent packages multiple times a day; they are unlikely to change. Here, the sensible way to test is to execute the file containing your test code like a normal Julia program:
julia> ;
shell> julia test/runtests.jl
Test Summary: | Pass Total
All Tests | 57 57
Now you don't have to pay for the overhead of going online and checking the state of dependent packages. That means your tests run much faster.
What if your tests still don't run fast enough? We have another trick up our sleeve: split up your test code and spread it over multiple files. The LittleManComputer package already does this. Look at the test/runtests.jl file:
using LittleManComputer
using Test
@testset "All Tests" begin
    include("assem_tests.jl")
    include("disassem_tests.jl")
    include("simulator_tests.jl")
end
Just comment out the tests you want to skip. If you find this approach too primitive, you can roll a more elegant solution. In the test directory, I have stored an alternative file for running tests, called testrunner.jl:
using LittleManComputer
using Test

tests = ["assem", "disassem", "simulator"]

if !isempty(ARGS)
    tests = ARGS # Use the command-line arguments as the test list
end

@testset "All Tests" begin
    for t in tests
        include("$(t)_tests.jl")
    end
end
Here we are utilizing the fact that ARGS is an array in Julia containing the command-line arguments. The code checks whether any command-line arguments have been provided. By default, all test files are run; if arguments are given, we run only the tests for the files specified. In the following example, we run the disassembly and assembly tests:
shell> julia test/testrunner.jl disassem assem
Test Summary: | Pass Total
All Tests | 47 47
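The selection logic in testrunner.jl is ordinary Julia: when ARGS is empty, a default list is used. Here is a minimal, self-contained sketch of that same pattern (the select_tests helper is my own, not part of the package), which you can try directly in the REPL:

```julia
# Sketch of the test-selection pattern used in testrunner.jl.
# ARGS is replaced by an ordinary parameter so this runs anywhere.
function select_tests(args, default = ["assem", "disassem", "simulator"])
    isempty(args) ? default : args
end

println(select_tests(String[]))              # no arguments: run all test files
println(select_tests(["disassem", "assem"])) # arguments: run only those
```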
However, there is still an overhead in having to spin up the Julia JIT every time you want to perform these tests. Can we do it even faster?
NOTE: When you run tests this way, you are actually using the v1.7 Julia environment and not the one set up for your tests. However, you can use the --project switch to specify the environment to use:
shell> julia --project=test test/testrunner.jl
Rapid Iteration Testing
Julia has a slower startup than languages such as Python and Ruby because it JIT-compiles code. That means the optimal way to work with Julia is to remain inside the Julia REPL as much as possible, which is why the Julia community prefers offering development tools inside the Julia environment rather than as separate shell tools, as is common for languages such as Python and Ruby.
Thus, the trick to get frequent and rapid testing is to avoid restarting Julia every time we perform a test run. We want to stay within our fully initialized REPL environment as long as possible.
julia> include("test/runtests.jl");
Test Summary: | Pass Total
All Tests | 57 57
However, for this approach to work properly, you should load the Revise package before loading your own package. Just restart Julia and load Revise first. Revise is not bundled with Julia, so you have to install it:
(@v1.7) pkg> add Revise
Next, we load Revise into the Julia REPL:
julia> using Revise
Why do you need Revise? What does it do that is so important? Revise monitors changes to source code and automatically loads those changes into the running Julia REPL session. Assume you have changed a function in a package and now want to test that function with the changes. In this case, you want your test runner to test against the new code, not the code that was there when the package was first loaded.
When you run tests through include, the loaded packages do not actually get reloaded; Julia reuses the code already in memory. That is good for performance, but bad if you want to incorporate code changes and test them. That is why you need Revise in this case.
Running Single Tests
When doing TDD, we focus on getting one test to work at a time. That means we would like to run the same test several times in a row. In this case, it is a waste to spend processor cycles running other tests.
The solution is to just evaluate individual tests in the Julia REPL. There are various ways of achieving that:
Copy and paste the test you would like to run into your REPL
Use Julia for VS Code and press Alt-Enter inside the test you want to run
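As an example of the first option, here is a single test you might paste into the REPL. The assemble function is a hypothetical stand-in; a real test would call a function from LittleManComputer:

```julia
using Test

# Hypothetical stand-in for a function from the package under test.
assemble(mnemonic) = mnemonic == "ADD" ? 100 : error("unknown mnemonic: $mnemonic")

# Evaluate just this one test instead of running the whole suite.
@test assemble("ADD") == 100
```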
Note that neither approach will work unless you have loaded the packages used by your tests, such as:
# Setup code
using LittleManComputer
using Test
You can do that manually each time you want to run the tests, but here is the catch: what if there are a lot of packages to load, or other initialization code? A solution is to factor out the initialization code into a separate file. You can call this file anything you like, but I am choosing to call it setup.jl. Our runtests.jl file would then look like this:
include("setup.jl")

@testset "All Tests" begin
    include("assem_tests.jl")
    include("disassem_tests.jl")
    include("simulator_tests.jl")
end
Each time you start a Julia development session, you have to run this line before you can run any of your individual tests:
julia> include("test/setup.jl")
It is important to know that the Alt-Enter trick isn't magic, and it has limitations. It runs all the tests up to the enclosing top-level test set. You may have assumed it would run only the innermost test set, but that is not the case.
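To make this limitation concrete, here is a sketch of a nested test hierarchy with placeholder tests. Pressing Alt-Enter inside either inner set runs the entire "All Tests" set, siblings included:

```julia
using Test

@testset "All Tests" begin        # Alt-Enter inside any inner test
    @testset "assembler" begin    # runs this whole top-level set...
        @test 1 + 1 == 2          # placeholder test
    end
    @testset "simulator" begin    # ...including sibling sets like this
        @test iseven(4)           # placeholder test
    end
end
```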