If we want to do the style of TCR used in his wonderful YouTube videos on Test && Commit || Revert, then I believe we need to be able to put our tests in the same file as our code. This is simple in Rust, where it is the norm; not so easy in Java or Kotlin. But it is not too hard to set up! First, what would that look like? Let's do Hello World. Now, if you are doing TCR but you want your guiding tests (story tests, customer tests, end-to-end tests, acceptance tests) to be committable while failing, while you create unit tests and write code in a TCR style, then you need to be able to separate the two. I suggest putting the guiding tests in the src/test/kotlin
directory structure, as tests usually are, and putting the unit tests in the same file as the class they are testing. This requires a little tweaking of the build.gradle.kts.
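Concretely, the split might look like this (an illustrative layout, using the file names that appear later in this post):

```
src/
├── main/kotlin/com/ronnev/
│   ├── Main.kt        # production entry point
│   └── Greeting.kt    # class plus its unit test in the same file
└── test/kotlin/
    └── MainTest.kt    # guiding test, allowed to be committed failing
```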
So what will this look like? First I write a test for hello world:
import io.kotest.core.spec.style.ExpectSpec
import io.kotest.matchers.shouldBe
import java.io.ByteArrayOutputStream
import java.io.PrintStream
import com.ronnev.main
class MainTest : ExpectSpec({
context("Main") {
expect("should print Hello World!") {
val outputStream = ByteArrayOutputStream()
val printStream = PrintStream(outputStream)
System.setOut(printStream)
main()
outputStream.toString() shouldBe "Hello World!\n"
}
}
})
This of course fails to compile.
Now I need a Main.kt
with a main in order to compile and get a failing test:
fun main(args: Array<String>? = null) {
println()
}
Now my test fails. Note the magic words needed to make a basic main work in Kotlin: the nullable args parameter with a default value lets the test call main() with no arguments. You also need a bit in the build.gradle.kts
if you want ./gradlew run
to work:
plugins {
kotlin("jvm") version "2.0.20"
java
application
}
...
application {
mainClass.set("com.ronnev.MainKt")
}
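To make the "magic words" concrete, here is a minimal standalone sketch (not from the project; greetLine is a hypothetical helper) showing how a nullable, defaulted parameter gives one entry point two call shapes:

```kotlin
// Sketch: the nullable, defaulted parameter lets a test call the entry
// point with no arguments, while the runtime can still pass a real array.
fun greetLine(args: Array<String>? = null): String =
    "got: ${args?.firstOrNull() ?: "no args"}"

fun main() {
    println(greetLine())               // test-style call, no arguments
    println(greetLine(arrayOf("Bob"))) // runtime-style call with a real array
}
```

The same trick is what allows the test above to invoke main() directly while ./gradlew run still passes a real Array<String>.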
Let’s imagine that this is complicated enough that we need to design something, and we think we should build a greeting and then print it. So we need a Greeting.kt,
in which we also put our test (I know, bad juju: a class whose name differs from the file's!):
package com.ronnev
import io.kotest.core.spec.style.ExpectSpec
import io.kotest.matchers.shouldBe
class GreetingTest : ExpectSpec({
context("Greeting") {
expect("should return Hello World!") {
Greeting().greet() shouldBe "Hello World!"
}
}
})
Now this fails to compile for a lot of reasons. First, tools expect that the test code is under src/test/
not src/main
. So we need to add a sourceSet
to our build.gradle.kts
:
sourceSets {
val unitTest by creating {
kotlin {
srcDir("src/main/kotlin")
compileClasspath += sourceSets["main"].output
runtimeClasspath += sourceSets["main"].output
}
}
}
and we need to define a unitTestImplementation
configuration in order to be able to declare test dependencies. This part seems like Gradle magic; I couldn’t understand exactly why it is needed and would love an explanation. After all, to make the compiler happy we ended up putting the test libraries in the main dependencies (presumably because the main source set also compiles everything under src/main/kotlin, unit tests included), which I don’t like.
configurations {
create("unitTestImplementation") {
extendsFrom(configurations.testImplementation.get())
}
}
Now it seems we still have to include these test dependencies in the main implementation, which I don’t like but haven’t figured out how to avoid:
dependencies {
implementation(kotlin("test"))
implementation("io.kotest:kotest-runner-junit5:$kotestVersion")
implementation("io.kotest:kotest-assertions-core:$kotestVersion")
}
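One direction that might keep the test libraries off the runtime classpath is marking them compileOnly in main and declaring them on testImplementation (which unitTestImplementation extends, per the configuration above). This is an untested sketch against this build, not a verified fix:

```kotlin
dependencies {
    // compileOnly: visible when compiling src/main/kotlin (where the unit
    // tests live), but kept off the runtime classpath and out of the jar.
    compileOnly(kotlin("test"))
    compileOnly("io.kotest:kotest-runner-junit5:$kotestVersion")
    compileOnly("io.kotest:kotest-assertions-core:$kotestVersion")

    // The guiding tests compile and run against these; because
    // unitTestImplementation extendsFrom testImplementation, the unitTest
    // task should pick them up too.
    testImplementation(kotlin("test"))
    testImplementation("io.kotest:kotest-runner-junit5:$kotestVersion")
    testImplementation("io.kotest:kotest-assertions-core:$kotestVersion")
}
```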
We also need to filter the test classes out of the jar and define the unitTest
task. We also put in some code to make sure that test
and unitTest
output their results even if everything passes. Finally, we want the unitTest
task to always run before the test task.
tasks {
jar {
exclude("**/*Test.class", "**/*Test\$*.class")
}
val unitTest by creating(Test::class) {
useJUnitPlatform()
testClassesDirs = sourceSets["unitTest"]
.output
.classesDirs
classpath = sourceSets["unitTest"].runtimeClasspath
setupTestLogging()
displayResultsOfTests()
}
test {
useJUnitPlatform()
setupTestLogging()
displayResultsOfTests()
dependsOn("unitTest")
}
}
fun Test.displayResultsOfTests() {
afterSuite(
KotlinClosure2<TestDescriptor, TestResult, Unit>(
{ desc, result ->
if (desc.parent == null) {
print("Summary Report: ")
print("${result.resultType} (${result.testCount} tests,")
print(" ${result.successfulTestCount} passed,")
print(" ${result.failedTestCount} failed,")
println(" ${result.skippedTestCount} skipped)")
}
})
)
}
fun Test.setupTestLogging() {
testLogging {
events("passed", "skipped", "failed")
showStandardStreams = true
exceptionFormat = TestExceptionFormat.FULL
}
}
(See the actual listings at my GitHub: https://github.com/vextorspace/kotlinTcrOnSave)
Now we can finish our Greeting.kt
class and make the unit test pass:
package com.ronnev
import io.kotest.core.spec.style.ExpectSpec
import io.kotest.matchers.shouldBe
class Greeting() {
fun greet(): String {
return "Hello World!"
}
}
class GreetingTest : ExpectSpec({
context("Greeting") {
expect("should return Hello World!") {
Greeting().greet() shouldBe "Hello World!"
}
}
})
This makes the unit test pass, but notice the main test still fails. If we fill in the main function appropriately, then it passes:
fun main(args: Array<String>? = null) {
println(Greeting().greet())
}
Now the guiding test passes too.
So now we just need our on-save script, which will be similar to our old on-commit script but will include the commit. We need to run only the unit tests, with
./gradlew unitTest.
And then we will need to do a commit or revert, depending on the results.
#!/bin/sh
echo "Running gradle tests..."
./gradlew clean unitTest
# Store the test result
TEST_RESULT=$?
# Check if tests failed
if [ $TEST_RESULT -ne 0 ]; then
echo "Tests failed! Removing changes..."
git reset --hard HEAD
else
echo "Tests passed! Committing..."
git add .
git commit -m "working"
fi
exit 0
Cool, that worked!
Now we also want to monitor the filesystem for changes, ignoring certain directories, and run this tcr.sh
script whenever anything does change. Instead of leaning on the IDE for this, I decided to use watchexec
which can be installed by running
curl -sS https://webi.sh/watchexec | sh
and then we use this start.sh script to enable it:
#!/bin/sh
watchexec -i build -i .idea -i .git -i .gradle -i .kotlin -i .vscode -i src/test -- ./tcr.sh
So when we want to edit, we run ./start.sh
in the project directory, then get going in any IDE (or even just raw Emacs), and we can do TCR on save in Kotlin! Yay!
Try it out: download the project and install watchexec
(or use your IDE's on-save extensions) to run tcr.sh
whenever you save. Effectively, I prefer to have it watch only the unit tests and build files.
So now let's do a little actual work with this script. Let's add the ability to greet an individual, starting in src/test/kotlin/MainTest.kt
:
context("Called with Bob as Argument") {
expect("should print Hello Bob!") {
val outputStream = ByteArrayOutputStream()
val printStream = PrintStream(outputStream)
System.setOut(printStream)
main(arrayOf("Bob"))
outputStream.toString() shouldBe "Hello Bob!\n"
}
}
This does not trigger a failed unitTest
run, so it does not revert on us; it does not even trigger the tests. Now we go to our unit test and add the ability to Greeting.kt
:
class Greeting() {
fun greet(toWhom: String = "World"): String {
return "Hello $toWhom!"
}
}
class GreetingTest : ExpectSpec({
context("Greeting") {
expect("should return Hello World!") {
Greeting().greet() shouldBe "Hello World!"
}
}
context("Greeting with Bob") {
expect("should return Hello Bob!") {
Greeting().greet("Bob") shouldBe "Hello Bob!"
}
}
})
Notice we had to add the test and the ability at the same time; it took me three tries before I had no errors. I probably should have refactored the "World" out of the string first. That would have simplified things! Those are the skills you learn from TCR!
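The refactor-first path might have looked like this (an illustrative sketch; the StepN class names are mine, used only to show both versions side by side): first a behavior-preserving step that TCR can commit, then the tiny behavior change.

```kotlin
// Step 1: pure refactor -- extract the hard-coded "World". The existing
// "Hello World!" test stays green, so TCR commits.
class GreetingStep1 {
    private val defaultWhom = "World"
    fun greet(): String = "Hello $defaultWhom!"
}

// Step 2: small change -- promote the extracted value to a default parameter.
class GreetingStep2 {
    fun greet(toWhom: String = "World"): String = "Hello $toWhom!"
}

fun main() {
    println(GreetingStep1().greet())      // Hello World!
    println(GreetingStep2().greet("Bob")) // Hello Bob!
}
```

Each step is small enough that a failed save reverts very little work, which is the whole point of TCR.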
I feel like I may keep my TCR script around in all my projects, and when I need to slow down because I keep making mistakes and I'm afraid, I can just run ./start.sh
and I'll be TCRing, forced to slow down.
The only question is: I don't have a separate set of testImplementation
dependencies, because Gradle can't manage them separately when tests and code are in the same file the way Rust does with its cfg(test) system. So let's see if my jar grows if I add a test library. Currently it is 2402 bytes. Let's add the mockk
library and build the jar: still 2402. Now let's write a test using it and rebuild the jar: 2401. Huh, one byte less. Anyway, with the filtering of test classes (unzip the jar to verify this), there should be no increase in the release jar size. So we could use this in production code!