Testing in EK9
Tests verify your code works correctly. When you change code later, running tests confirms you haven't broken existing functionality.
Quality Enforcement Applies to Tests: EK9 enforces the same quality standards on test code as production code. Test functions must have complexity < 11, descriptive variable names (no temp, flag, data), and meet cohesion/coupling limits. See Compile-Time Validation and Code Quality.
- Overview - Testing as a language feature
- Your First Test
- What Makes EK9 Different
- Test Types - Assert-Based, Black-Box, Parameterized
- The @Test Directive - Grouping tests
- Assertion Statements - assert, assertThrows, assertDoesNotThrow
- Test Runner - Commands and output formats
- Code Coverage - Always-on collection and threshold enforcement
- Code Quality Metrics - Complexity, cognitive load, readability
- HTML Coverage Reports - Interactive dashboards and source views
- Output Placeholders - Matching dynamic values
- Compile-Time Validation - Quality enforcement on test code
- Key Concepts - What you've learned
- Quick Reference
Overview: Testing as a Language Feature
EK9 testing is fundamentally different from testing frameworks you may have used. Instead of importing libraries like JUnit or pytest, testing is built directly into the language grammar. This enables capabilities no framework can provide:
- Compile-time test validation - Empty tests, orphan assertions, and production assertions are compiler errors, not runtime surprises
- Quality enforcement on test code - Tests must meet the same quality standards as production code (complexity limits, naming quality, cohesion)
- Always-on coverage - 80% threshold enforced automatically, exit code 12 if below
- Zero imports - No test framework dependencies, no version conflicts
Important: EK9's Code Quality enforcement applies to ALL code, including tests. Your test code must meet naming quality standards (E11026, E11030, E11031), complexity limits, and cohesion requirements. Tests won't run if test code violates quality gates.
This guide covers:
- Getting started - Your first test in 2 minutes
- How EK9 differs - Comparison with JUnit/pytest
- Testing approaches - Assert, black-box, parameterized
- Running tests - Output formats, coverage, HTML reports
- Compile-time validation - Quality gates for test code
Your First Test
Create a simple project with two files:
myproject/
├── main.ek9 # Your code (the function to test)
└── dev/
    └── tests.ek9 # Your tests
The dev/ directory is special - files here are only included when
running tests. This keeps test code separate from production code.
main.ek9 - The Code to Test
#!ek9
defines module my.first.test
  defines function
    add() as pure
      -> a as Integer, b as Integer
      <- result as Integer: a + b
//EOF
dev/tests.ek9 - The Test
#!ek9
defines module my.first.test.tests
  references
    my.first.test::add
  defines program
    @Test
    AdditionWorks()
      result <- add(2, 3)
      assert result == 5
//EOF
Key concepts: references imports symbols from other modules, defines program declares an entry point, @Test marks it for the test runner, and assert validates conditions.
Run it:
$ ek9 -t main.ek9
[i] Found 1 test:
    1 assert (unit tests with assertions)
Executing 1 test...
[OK] PASS my.first.test.tests::AdditionWorks [Assert] (2ms)
Summary: 1 passed, 0 failed (1 total)
The file tests.ek9 can be named anything you like, and you can have
as many .ek9 files in dev/ as you need - each can contain
multiple @Test programs. The test runner discovers all of them.
When Tests Fail
EK9 shows exactly what failed:
[X] FAIL my.first.test.tests::AdditionWorks [Assert] (2ms)
Assertion failed: `result==5` at ./dev/tests.ek9:14:7
Summary: 0 passed, 1 failed (1 total)
The expression (result==5), file, line, and column are captured
automatically from the AST. No stack traces to parse, no custom messages to write.
What Makes EK9 Testing Different
Unlike frameworks that require imports and setup, EK9's testing is built into the language grammar. This enables compile-time validation:
- Empty test = compile error - A @Test with no assertions won't compile
- Orphan assertion = compile error - An assert not reachable from any @Test won't compile
- Production assertion = compile error - Using assert in non-test code won't compile
In JUnit or pytest, an empty test passes silently. An orphan assertion is never discovered. EK9 catches these mistakes before you run.
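For instance, a test body that produces output but asserts nothing is rejected at compile time rather than passing silently. A minimal sketch, assuming no expected_output.txt accompanies the test (so it is neither assert-based nor black-box):
#!ek9
defines module empty.test.example
  defines program
    @Test
    DoesNothing()                        // ❌ E81007: empty test - no assertions
      stdout <- Stdout()
      stdout.println("nothing checked")
//EOF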
Comparison with Other Languages
| Capability | JUnit / pytest / Go | EK9 |
|---|---|---|
| Empty test detection | Passes silently | Compile error |
| Orphan assertion detection | Never discovered | Compile error |
| Assertion in prod code | Allowed (or runtime only) | Compile error |
| Error location | Parse stack trace | Exact file:line:column |
| Expression capture | Write custom message | Automatic from AST |
| Test imports | Required (JUnit, pytest) | None - grammar-level |
| Output formats | Framework-specific | Human, Terse, JSON, JUnit XML |
| Black-box testing | Separate tools/frameworks | Built-in expected_output.txt |
| Dynamic value matching | Custom matchers | Type-based placeholders |
| Quality checks on test code | Optional linting | Enforced (complexity, naming, cohesion) |
See Code Quality for complete documentation of quality enforcement that applies to both production and test code.
Tests Run Automatically
When you package (ek9 -P) or deploy (ek9 -D) your code,
tests are executed automatically. You don't need to remember to run them - EK9
won't package code with failing tests. Testing isn't a separate manual step; it's
woven into the development workflow.
Test Types
EK9 supports three complementary testing approaches:
- Assert-Based - Unit testing individual functions; call functions and check results with assert
- Black-Box - Regression testing program output; compare stdout to expected_output.txt
- Parameterized - Testing with multiple input sets; run the same test with different input files
1. Assert-Based Tests
Use assert, assertThrows, and assertDoesNotThrow
for internal validation within test code.
Project Structure
simpleAssertTest/
├── main.ek9 # Production code (functions to test)
└── dev/
└── tests.ek9 # Test programs with @Test directive
Production Code (main.ek9)
#!ek9
defines module simple.assert.test
  defines function
    add() as pure
      ->
        a as Integer
        b as Integer
      <- result as Integer: a + b
    multiply() as pure
      ->
        a as Integer
        b as Integer
      <- result as Integer: a * b
//EOF
Test Code (dev/tests.ek9)
#!ek9
defines module simple.assert.test.tests
  references
    simple.assert.test::add
    simple.assert.test::multiply
  defines program
    @Test
    AdditionTest()
      result <- add(2, 3)
      assert result?
      assert result == 5
    @Test
    MultiplicationTest()
      result <- multiply(4, 5)
      assert result?
      assert result == 20
    @Test
    CombinedOperationsTest()
      sum <- add(10, 20)
      product <- multiply(sum, 2)
      assert product == 60
//EOF
2. Black-Box Tests
Validate program output against expected files. For tests without command line
arguments, the file must be named exactly expected_output.txt in
the same directory as the test. Ideal for regression testing and AI-generated tests.
Project Structure
blackBoxTest/
├── main.ek9 # Production code
└── dev/
├── tests.ek9 # Test program
└── expected_output.txt # Expected stdout output
Production Code (main.ek9)
#!ek9
defines module blackbox.test
  defines function
    greet() as pure
      -> name as String
      <- greeting as String: "Hello, " + name + "!"
//EOF
Test Code (dev/tests.ek9)
#!ek9
defines module blackbox.test.tests
  references
    blackbox.test::greet
  defines program
    @Test
    GreetingOutputTest()
      stdout <- Stdout()
      stdout.println(greet("World"))
      stdout.println(greet("EK9"))
//EOF
Expected Output (dev/expected_output.txt)
Hello, World!
Hello, EK9!
When Output Doesn't Match
If the actual output differs from expected, EK9 shows a line-by-line comparison:
[X] FAIL blackbox.test.tests::GreetingOutputTest [BlackBox] (3ms)
Output mismatch at line 2:
Expected: Hello, EK9!
Actual: Hello, EK9?
3. Parameterized Tests
Run the same test with multiple inputs using commandline_arg_{id}.txt
and expected_case_{id}.txt file pairs. Each file pair defines a test case.
Project Structure
parameterizedTest/
├── main.ek9 # Production code
└── dev/
├── tests.ek9 # Test program with parameters
├── commandline_arg_simple.txt # Case "simple": input arguments
├── expected_case_simple.txt # Case "simple": expected output
├── commandline_arg_edge.txt # Case "edge": input arguments
└── expected_case_edge.txt # Case "edge": expected output
Production Code (main.ek9)
#!ek9
defines module parameterized.test
  defines function
    processArg() as pure
      -> arg as String
      <- result as String: "Processed: " + arg
//EOF
Test Code (dev/tests.ek9)
#!ek9
defines module parameterized.test.tests
  references
    parameterized.test::processArg
  defines program
    @Test
    ArgProcessor()
      ->
        arg0 as String
        arg1 as String
      stdout <- Stdout()
      stdout.println(processArg(arg0))
      stdout.println(processArg(arg1))
//EOF
Test Case "simple"
commandline_arg_simple.txt:
hello world
expected_case_simple.txt:
Processed: hello
Processed: world
Test Case "edge"
commandline_arg_edge.txt:
single only
expected_case_edge.txt:
Processed: single
Processed: only
The @Test Directive
Mark programs as tests using the @Test directive. Only programs with
this directive are discovered and executed by the test runner.
Ungrouped vs Grouped Tests
By default, tests run in parallel for faster execution. Use groups when tests need sequential execution - typically for database tests, file system tests, or tests that share external resources where order matters.
Syntax: @Test: "groupname" - tests in the same group run sequentially,
while different groups run in parallel with each other.
Project Structure
groupedTests/
├── main.ek9 # Production code (Counter class)
└── dev/
└── tests.ek9 # Test programs
Production Code (main.ek9)
#!ek9
defines module grouped.tests
  defines class
    Counter
      value as Integer: 0
      getValue() as pure
        <- rtn as Integer: value
      increment()
        value: value + 1
//EOF
Test Code (dev/tests.ek9)
#!ek9
defines module grouped.tests.tests
  references
    grouped.tests::Counter
  defines program
    @Test: "counter"
    CounterIncrementTest()
      c <- Counter()
      c.increment()
      assert c.getValue() == 1
    @Test: "counter"
    CounterMultipleIncrementTest()
      c <- Counter()
      c.increment()
      c.increment()
      c.increment()
      assert c.getValue() == 3
    @Test
    IndependentTest()
      c <- Counter()
      assert c.getValue() == 0
//EOF
CounterIncrementTest and CounterMultipleIncrementTest
are both in the "counter" group and run sequentially. IndependentTest has no
group and runs in parallel with other ungrouped tests.
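To run just the grouped tests during development, use the group filter flag documented in the Test Runner section:
ek9 -tg counter main.ek9    # Run only tests in the "counter" group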
Assertion Statements
Unlike traditional testing frameworks that require parsing stack traces, EK9's assertions provide structured, precise error information automatically captured from the AST. This includes the exact source location (file, line, column), the expression that failed, and contextual details - all without writing custom error messages.
assert
Validates that a condition is true:
@Test
CheckAddition()
  result <- 2 + 3
  assert result?        // Check result is set (not unset)
  assert result == 5    // Check result equals expected value
Failure Output
When an assertion fails, EK9 shows the exact expression and location:
Assertion failed: `result==5` at ./dev/tests.ek9:28:7
assertThrows
Validates that an expression throws a specific exception type:
@Test
CheckDivisionByZero()
  assertThrows(Exception, 10 / 0)

@Test
CheckAndInspectException()
  caught <- assertThrows(Exception, 10 / 0)
  assert caught.message()?
Failure Output - No Exception Thrown
If the expression doesn't throw:
assertThrows FAILED
  Location: ./dev/tests.ek9:5:3
  Expression: 10 / 2
  Expected: org.ek9.lang::Exception
  Actual: No exception was thrown
assertDoesNotThrow
Validates that an expression completes without throwing any exception:
@Test
CheckSafeDivision()
  assertDoesNotThrow(10 / 2)

@Test
CheckAndCaptureResult()
  result <- assertDoesNotThrow(10 / 2)
  assert result == 5
Failure Output
If the expression throws unexpectedly:
assertDoesNotThrow FAILED
  Location: ./dev/tests.ek9:5:3
  Expression: 10 / 0
  Expected: No exception
  Actual: org.ek9.lang::Exception
  Message: Division by zero
Why This Matters
Traditional testing frameworks like JUnit or pytest require you to either:
- Write custom assertion messages manually
- Parse stack traces to find the failure location
- Guess which assertion failed when there are multiple in a test
EK9's grammar-level assertions automatically capture the expression text, source location, and all relevant context at compile time. This is especially valuable for:
- AI/LLM integration - Structured output is easily parsed
- CI/CD pipelines - Precise locations enable automated issue creation
- Debugging - No stack trace parsing required
require vs assert
EK9 distinguishes between production preconditions and test assertions:
- require - Production preconditions, checked in any code path
- assert - Test validation, only valid in @Test programs
Using assert in production code paths produces compile-time error
E81012. Both test and production code must meet
EK9's quality standards - quality enforcement is comprehensive
and applies everywhere.
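A minimal sketch of the split, assuming require takes a condition expression just as assert does (the module and function names here are illustrative):
#!ek9
defines module precondition.example
  defines function
    safeDivide()
      ->
        numerator as Integer
        denominator as Integer
      <- quotient as Integer: Integer()
      require denominator != 0    // production precondition - allowed anywhere
      quotient: numerator / denominator
//EOF
Placing assert where the require appears would produce compile-time error E81012.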
Test Runner
Run tests using the -t flag:
ek9 -t myproject.ek9             # Run tests (human output)
ek9 -t0 myproject.ek9            # Terse output
ek9 -t2 myproject.ek9            # JSON output (for CI/AI)
ek9 -t3 myproject.ek9            # JUnit XML output
ek9 -t4 myproject.ek9            # Verbose coverage (human)
ek9 -t5 myproject.ek9            # Verbose coverage + JSON file
ek9 -t6 myproject.ek9            # Interactive HTML report
ek9 -tL myproject.ek9            # List tests without running
ek9 -tg database myproject.ek9   # Run only "database" group
Exit codes: The test runner returns:
- Exit code 0 - All tests passed and coverage meets threshold
- Exit code 11 - One or more tests failed their assertions
- Exit code 12 - All tests passed but code coverage is below 80%
This enables CI/CD pipelines to detect both test failures and insufficient coverage automatically. See Command Line for all exit codes and E83001 for coverage threshold error details.
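A minimal shell sketch of branching on these codes in a CI step (the echo messages are illustrative):
ek9 -t main.ek9
status=$?
if [ $status -eq 11 ]; then
  echo "Test failures - see output above"
  exit 1
elif [ $status -eq 12 ]; then
  echo "Coverage below the 80% threshold"
  exit 1
fi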
Output Formats
EK9 provides multiple output formats optimized for different use cases:
Human Format (-t or -t1)
Visual output with icons for terminal use:
[i] Found 4 tests:
4 assert (unit tests with assertions)
Executing 4 tests...
[OK] PASS myapp.tests::AdditionWorks [Assert] (3ms)
[X] FAIL myapp.tests::DivisionFails [Assert] (2ms)
Assertion failed: `result==5` at ./dev/tests.ek9:28:7
[X] FAIL myapp.tests::AnotherFailure [Assert] (1ms)
Assertion failed: `1==2` at ./dev/tests.ek9:33:7
[OK] PASS myapp.tests::MultiplicationWorks [Assert] (2ms)
Summary: 2 passed, 2 failed (4 total)
Types: 4 assert
Duration: 8ms
Grouped tests show their group name:
[OK] PASS myapp.tests::CounterTest [Assert] {counter} (2ms)
Terse Format (-t0)
Minimal output for scripting and CI pass/fail checks:
4 tests: 2 passed, 2 failed (4 assert)
JSON Format (-t2)
Structured output for AI/LLM integration and custom tooling:
{
"version": "1.0",
"timestamp": "2025-12-31T14:30:00+00:00",
"architecture": "JVM",
"summary": {
"total": 4,
"passed": 2,
"failed": 2,
"types": { "assert": 4 }
},
"tests": [
{
"name": "AdditionWorks",
"fqn": "myapp.tests::AdditionWorks",
"status": "passed",
"duration_ms": 3
},
{
"name": "DivisionFails",
"fqn": "myapp.tests::DivisionFails",
"status": "failed",
"failure": {
"type": "assertion",
"message": "Assertion failed: `result==5` at ./dev/tests.ek9:28:7"
}
}
]
}
JUnit XML Format (-t3)
Standard format for CI/CD systems (Jenkins, GitHub Actions, GitLab):
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="myapp.tests" tests="4" failures="2" errors="0" time="0.008">
<testcase name="AdditionWorks" classname="myapp.tests" time="0.003"/>
<testcase name="DivisionFails" classname="myapp.tests" time="0.002">
<failure message="Assertion failed" type="AssertionError">
Assertion failed: `result==5` at ./dev/tests.ek9:28:7
</failure>
</testcase>
<testcase name="AnotherFailure" classname="myapp.tests" time="0.001">
<failure message="Assertion failed" type="AssertionError">
Assertion failed: `1==2` at ./dev/tests.ek9:33:7
</failure>
</testcase>
<testcase name="MultiplicationWorks" classname="myapp.tests" time="0.002"/>
</testsuite>
Code Coverage
EK9 automatically collects code coverage data during test execution. Coverage probes are inserted at compile time at all control flow points:
- Function/method entry and exit
- Branch decisions - if/else branches, switch cases
- Loop body execution - for, for-range, while, do-while
- Exception handlers - catch/handle blocks
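To make the probe points concrete, here is a sketch of where probes would land in a simple function (the comments are illustrative; the compiler inserts probes automatically and invisibly):
#!ek9
defines module probe.example
  defines function
    classify()                   // probe: function entry/exit
      -> score as Integer
      <- label as String: String()
      if score > 50              // probes: IF_TRUE and IF_FALSE branch arms
        label: "high"
      else
        label: "low"
//EOF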
Coverage is always collected and enforced when running tests with -t.
If coverage falls below the 80% threshold, the test runner returns exit code 12 regardless
of whether a coverage report was requested. Use -tC to display coverage
results, but the threshold is always enforced.
Coverage Threshold Enforcement
Unlike other test runners that treat coverage as optional, EK9 treats coverage as a first-class quality gate:
$ ek9 -t main.ek9
[OK] PASS myapp.tests::AllTests [Assert] (5ms)
Summary: 1 passed, 0 failed (1 total)
❌ E83001: Code coverage 44.0% is below required threshold 80%
Methods: 46.0% (23/50)
Lines: 44.0% (37/84)
Branches: 41.2% (14/34)
Uncovered items:
27 method(s)
20 branch(es)
Use -t4 or -t5 for detailed coverage report.
$ echo $?
12
This ensures CI/CD pipelines catch insufficient coverage even when developers don't explicitly request coverage reports. Tests passing is not enough - code must be tested.
Coverage Output with -tC
The coverage output format follows the test output format:
ek9 -t -tC main.ek9    # Human-readable coverage summary
ek9 -t0 -tC main.ek9   # Terse coverage: Coverage: 87.5% (7/8)
ek9 -t2 -tC main.ek9   # JSON output to files
ek9 -t3 -tC main.ek9   # JUnit XML + JaCoCo XML to files
File Output for Machine Formats
When using machine formats (-t2 or -t3) with -tC,
results are written to files in the .ek9/ directory:
| Flags | Test Results | Coverage Results |
|---|---|---|
| -t2 -tC | .ek9/test-results.json | .ek9/coverage.json |
| -t3 -tC | .ek9/test-results.xml (JUnit) | .ek9/coverage.xml (JaCoCo) |
A human-readable summary with file paths is printed to stdout:
Summary: 3 passed, 0 failed (3 total)
Types: 3 assert
Duration: 13ms
Test results written to: .ek9/test-results.json
Coverage results written to: .ek9/coverage.json
The JaCoCo XML format (.ek9/coverage.xml) is compatible with coverage tools
like SonarQube, Codecov, and Coveralls.
Coverage JSON Format
{
"coverage": {
"methods": { "percentage": 100.00, "covered": 3, "total": 3 },
"lines": { "percentage": 100.00, "covered": 5, "total": 5 },
"branches": { "percentage": 100.00, "covered": 5, "total": 5 },
"overall": 100.00,
"probesHit": 5,
"probesTotal": 5,
"modules": {
"my.module.tests": { "coverage": 100.00, "hit": 3, "total": 3 },
"my.module": { "coverage": 100.00, "hit": 2, "total": 2 }
}
}
}
JaCoCo XML Format
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE report PUBLIC "-//JACOCO//DTD Report 1.1//EN" "report.dtd">
<report name="EK9 Coverage">
<counter type="METHOD" missed="0" covered="3"/>
<counter type="LINE" missed="0" covered="5"/>
<counter type="BRANCH" missed="0" covered="5"/>
<package name="my/module/tests">
<counter type="BRANCH" missed="0" covered="3"/>
</package>
</report>
Verbose Coverage Output (-t4, -t5)
For detailed analysis of what's covered and what's not, use verbose output:
ek9 -t4 main.ek9   # Verbose human coverage with item lists
ek9 -t5 main.ek9   # Same as -t4, plus writes coverage-verbose.json
The verbose format shows per-module breakdowns with specific items:
Coverage Report
==============
Overall Coverage: 75.0% (6/8 probes hit)
Module: myapp.core (100.0% - 4/4)
Methods (2/2):
[HIT] myapp.core::add
[HIT] myapp.core::multiply
Branches (2/2):
[HIT] myapp.core::add:12 (IF_TRUE)
[HIT] myapp.core::add:12 (IF_FALSE)
Module: myapp.utils (50.0% - 2/4)
Methods (1/2):
[HIT] myapp.utils::formatNumber
[MISS] myapp.utils::parseNumber
Branches (1/2):
[HIT] myapp.utils::formatNumber:8 (IF_TRUE)
[MISS] myapp.utils::formatNumber:8 (IF_FALSE)
With -t5, a detailed JSON file is written to .ek9/coverage-verbose.json
containing the same information in machine-readable format. See
JSON Output Schema for the complete field reference.
Interactive HTML Coverage Report (-t6)
For the most comprehensive coverage analysis, generate an interactive HTML report:
ek9 -t6 main.ek9 # Generate HTML coverage report
This creates a full coverage website in .ek9/coverage/. See
HTML Coverage Reports for complete documentation.
Code Quality Metrics
EK9's coverage system goes beyond simple line and branch counting. Every function is analyzed for code quality metrics that help identify complex, hard-to-maintain code:
For AI Assistants: See Metrics Reference for complete threshold tables, recommended actions, and workflow templates for iterative quality improvement.
Available Metrics
| Metric | Range | Description |
|---|---|---|
| Cyclomatic Complexity (CC) | 1-50+ | Number of linearly independent paths through code. Each if, switch case, for, while, and exception handler adds 1. Target: <10 per function. |
| Cognitive Complexity | 0-50+ | Mental effort required to understand code. Penalizes nesting, breaks in linear flow, and boolean complexity. Better reflects human comprehension than CC. Target: <15. |
| Nesting Depth | 0-10+ | Maximum depth of nested control structures. Deep nesting indicates code that should be refactored. Target: <4. |
| Statement Count | 0-100+ | Number of executable statements. Large functions are harder to test and maintain. Target: <20 per function. |
| ARI Readability | 1-12+ | Automated Readability Index measuring lexical complexity (identifier length and statement density). Lower = simpler to read. File-level metric. |
Compile-Time Enforcement: Many of these metrics are enforced at compile-time with hard limits for BOTH production and test code. See Code Quality for complete documentation of thresholds, error codes, and the rationale behind each limit. Also see Compile-Time Validation on this page for how quality enforcement applies specifically to test code.
Where Metrics Appear
Quality metrics are integrated throughout the coverage reports:
- Dashboard - Project-wide averages and maximums in the Code Quality Metrics panel
- Module pages - Per-module metrics summary with avg/max complexity
- Source views - Complexity badges on each function definition showing CC, Cognitive, and Nesting
- Attention panel - Flags functions with high complexity that need refactoring
Complexity Badges in Source Views
Each function in source views displays a complexity badge:
isNumeric()  CC:6
  -> text as String
  <- result as Boolean: false
  ...
Badge colors indicate severity:
- Green - CC ≤ 10: Good, maintainable complexity
- Orange - CC 11-20: Consider refactoring
- Red - CC > 20: High complexity, should refactor
Hover over badges to see full metrics: CC:6 Cog:1 Nest:1
Readability (ARI) Scores
The Automated Readability Index measures lexical complexity - how easy it is to read the words in your code (not understand the logic). Lower scores indicate simpler identifier names and statement structure.
ARI is shown per-file on module detail pages:
📖 Lexical Complexity
formatters.ek9     6
stringHelpers.ek9  6
validators.ek9     3
Score interpretation:
- 1-6: Simple, clear identifiers
- 7-10: Moderate complexity
- 11+: Dense, consider simplifying names
Note: ARI is an informational metric with no compile-time enforcement. Domain-specific terminology often requires longer identifiers. See ARI Readability for complete documentation and domain considerations.
Comparison with Other Tools
Most coverage tools require separate products for code quality analysis:
| Metric | JaCoCo | SonarQube | EK9 |
|---|---|---|---|
| Line/Branch Coverage | ✓ | ✓ | ✓ |
| Cyclomatic Complexity | ✗ | ✓ | ✓ FREE |
| Cognitive Complexity | ✗ | $$$ (paid) | ✓ FREE |
| Nesting Depth | ✗ | ✓ | ✓ |
| ARI Readability | ✗ | ✗ | ✓ UNIQUE |
| Complexity in Source View | ✗ | ✗ | ✓ UNIQUE |
| Single Tool | ✓ (coverage only) | ✗ (needs JaCoCo) | ✓ |
EK9 provides SonarQube-level analysis integrated directly into the coverage system at no additional cost.
HTML Coverage Reports
Generate comprehensive interactive HTML reports with -t6:
ek9 -t6 main.ek9 # Generate HTML coverage report
This creates a full coverage website in .ek9/coverage/ with multiple
interconnected views:
Report Structure
.ek9/coverage/
├── index.html              # Dashboard with project overview
├── modules/                # Module detail pages
│   ├── myapp_core.html
│   └── myapp_utils.html
├── files/                  # Source file views
│   ├── __core_ek9.html     # Full source with highlighting
│   └── summary/            # File summary views
│       └── __core_ek9.html
├── coverage.css            # Styles with dark mode support
├── coverage.js             # Interactive features
├── EK9.png                 # Branding
└── ek9favicon.png
Dashboard (index.html)
The main dashboard provides a project-wide overview:
- Status Banner - Pass/fail with coverage percentage vs 80% threshold
- Package Info - Project metadata from package.ek9 (version, license, tags)
- Coverage Charts - Donut charts for Overall, Methods, Lines, and Branches
- Attention Panel - Card-based view of modules below threshold with:
  - Severity indicator (🔴 critical <50%, 🟠 warning 50-79%)
  - Coverage bar showing progress
  - Probe counts and uncovered function counts
- Code Quality Metrics - Average/max complexity across all functions
- Module List - Sortable, filterable, searchable module breakdown
Module Detail Pages
Click any module to see detailed breakdown:
- Module Coverage Summary - Methods, Lines, Branches with mini charts
- Code Quality Metrics - Module-specific avg/max complexity
- Source Files - Coverage bars for each file in the module
- Lexical Complexity - ARI readability scores per file
- Functions - List of all functions with covered/uncovered status
- Uncovered Items - Specific branches and methods needing tests
Source Code Views
The most detailed view shows syntax-highlighted source with coverage indicators:
- Line-by-line highlighting - Green (covered), red (uncovered), gray (no probes)
- Hit counts - ✓ for covered lines, ✗ for uncovered
- Complexity badges - CC/Cognitive/Nesting on each function
- Branch badges - BRANCH_TRUE, BRANCH_FALSE, LOOP_BODY indicators
- Filter buttons - Show all/covered/uncovered lines
- Grammar-aware syntax highlighting - Uses EK9's actual lexer for accuracy
Interactive Features
- Dark/Light Mode - Toggle with moon/sun icon, persists across sessions
- Module Search - Type to filter modules by name
- Module Sort - By coverage (ascending/descending) or name
- Module Filter - Show all, failing only (<80%), or passing only
- Line Anchors - Direct links to specific lines (#L42)
Navigation Flow
Dashboard (index.html)
│
├─→ Attention Panel → Module Page
│
└─→ Module List → Module Page (modules/myapp_utils.html)
│
├─→ Source File List → File Summary
│ │
│ └─→ Source View (files/__utils_ek9.html)
│
└─→ Function List → Source View (with line anchor)
Breadcrumb navigation allows moving back up: Source → Module → Dashboard
Output Placeholders
Black-box tests often produce dynamic values like dates, timestamps, or IDs that change between runs. Use type-based placeholders in expected output files to match these values. Placeholder names match EK9 type names - if you know EK9 types, you know the placeholders.
Example: Testing a Report Generator
main.ek9
#!ek9
defines module report.generator
  defines function
    generateReport() as pure
      -> itemCount as Integer
      <- report as String: `Report generated on ` + $Date() + ` with ` + $itemCount + ` items`
//EOF
dev/tests.ek9
#!ek9
defines module report.generator.tests
  references
    report.generator::generateReport
  defines program
    @Test
    ReportIncludesDate()
      stdout <- Stdout()
      stdout.println(generateReport(42))
//EOF
dev/expected_output.txt
Report generated on {{Date}} with {{Integer}} items
The test passes regardless of which date or item count is used, because
{{Date}} matches any valid date (e.g., 2025-12-31) and
{{Integer}} matches any integer.
Available Placeholders
| Placeholder | Matches | Example |
|---|---|---|
| {{String}} | Any non-empty text | hello world |
| {{Integer}} | Whole numbers | 42, -17 |
| {{Float}} | Decimal numbers | 3.14, -2.5 |
| {{Boolean}} | true or false | true |
| {{Date}} | ISO date | 2025-12-31 |
| {{Time}} | Time of day | 14:30:45 |
| {{DateTime}} | ISO datetime with timezone | 2025-12-31T14:30:45+00:00 |
| {{Duration}} | ISO duration | PT1H30M, P1Y2M3D |
| {{Millisecond}} | Milliseconds | 5000ms |
| {{Money}} | Currency amount | 10.50#USD |
| {{Colour}} | Hex colour | #FF5733 |
| {{Dimension}} | Measurement with unit | 10.5px, 100mm |
| {{GUID}} | UUID format | 550e8400-e29b-41d4-... |
| {{FileSystemPath}} | File/directory path | /path/to/file.txt, C:\dir\file |
See E81010 for the complete list of 18 valid placeholders. Using an invalid placeholder name produces a compile-time error.
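Placeholders compose with literal text on the same line. A hypothetical expected_output.txt for a program that prints an audit record might read:
Audit {{GUID}} recorded at {{DateTime}}
Charged {{Money}} for {{Integer}} items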
Compile-Time Validation
EK9 validates test code at compile time using two complementary systems: test-specific validation and comprehensive quality enforcement.
Test-Specific Validation
EK9's call graph analysis detects testing issues at compile time:
- E81007 - Empty @Test (no assertions, no expected files)
- E81011 - Orphan assertion (not reachable from any @Test)
- E81012 - Production assertion (assert in non-test code path)
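For example, an assert tucked into a dev/ helper function that no @Test program ever calls is flagged instead of silently ignored (a sketch; the names are illustrative):
#!ek9
defines module orphan.example.tests
  defines function
    checkPositive()
      -> amount as Integer
      assert amount > 0          // ❌ E81011: not reachable from any @Test
//EOF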
Quality Enforcement on Test Code
Critical: ALL quality enforcement applies to test code, not just production code. Your tests must meet the same standards:
Naming Quality (E11026, E11030, E11031)
- E11026: Reference Ordering - References in test modules must be alphabetically ordered
- E11030: Similar Names - Test variable names cannot be confusingly similar (Levenshtein distance ≤2, same type)
- E11031: Non-Descriptive Names - Test variables cannot use generic names (temp, flag, data, value, buffer, object)
Complexity Limits (E11010-E11013)
- E11010: Cyclomatic Complexity - Test functions must stay below complexity 11
- E11011: Nesting Depth - Test nesting depth cannot exceed 4
- E11012: Statement Count - Limits on executable statements per test function are enforced
- E11013: Expression Complexity - Complex expressions must be broken down
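In practice this means splitting broad tests into focused ones. A sketch reusing the add function from earlier:
@Test
AdditionWithPositives()
  sum <- add(2, 3)
  assert sum == 5

@Test
AdditionWithNegatives()
  sum <- add(-2, -3)
  assert sum == -5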
Cohesion and Coupling (E11014-E11016)
- E11014: Low Cohesion - Test classes must maintain cohesion (LCOM4 metric)
- E11015: Efferent Coupling - Test modules must respect outgoing coupling limits
- E11016: Module Coupling - Test modules must respect overall coupling limits
Why this matters: Tests are code. Poorly structured tests with confusing names and high complexity are as problematic as production code with those issues. EK9 ensures tests remain readable and maintainable.
Example: Quality Violation in Test Code
#!ek9
defines module my.tests
  defines program
    @Test
    MyTest()
      temp <- fetchUser()          // ❌ E11031: Non-descriptive name 'temp'
      data <- processUser(temp)    // ❌ E11031: Non-descriptive name 'data'
      assert data?
This test won't compile. Fix the naming violations first:
#!ek9
defines module my.tests
  defines program
    @Test
    MyTest()
      user <- fetchUser()                  // ✓ Descriptive name
      processedUser <- processUser(user)   // ✓ Descriptive name
      assert processedUser?
If your test code violates quality gates, the tests won't run. Fix quality issues first, then run tests. See Code Quality for complete documentation of all enforcement rules and Error Index for detailed error explanations.
Test Directory Structure
Test files live in the dev/ directory, which is only included when
running tests (-t flag).
myproject/
├── main.ek9                        # Production code
└── dev/                            # Test source directory
    ├── unitTests.ek9               # Assert-based tests
    ├── greetingTest/               # Black-box test (one per directory)
    │   ├── test.ek9
    │   └── expected_output.txt
    └── calculatorTest/             # Parameterized test
        ├── test.ek9
        ├── commandline_arg_basic.txt
        ├── expected_case_basic.txt
        ├── commandline_arg_edge.txt
        └── expected_case_edge.txt
Test Configuration Errors
See the Error Index for complete documentation of test configuration errors (E81xxx), test execution errors (E82xxx), and coverage threshold errors (E83xxx).
Key Concepts: What You've Learned
The EK9 Testing Advantage
Unlike framework-based testing (JUnit, pytest, Go testing), EK9 provides:
- Grammar-level testing - No imports, no framework versions, no setup boilerplate
- Compile-time validation - Empty tests, orphan assertions, and quality violations caught before running
- Quality-enforced tests - Test code must meet production standards (naming, complexity, cohesion)
- Always-on coverage - 80% threshold enforced automatically, exit code 12 if insufficient
- Structured diagnostics - Exact file:line:column + expression text, no stack trace parsing
- Integrated quality metrics - Complexity, cognitive load, and readability in coverage reports
What Makes Test Code "Quality"
EK9 applies the same quality standards to test code as production code:
- Descriptive names - No temp, flag, data in tests (E11031)
- Distinct names - Avoid confusingly similar names (E11030)
- Ordered references - Alphabetical import ordering (E11026)
- Low complexity - Keep test functions simple (E11010-E11013)
- High cohesion - Test classes should focus on related functionality (E11014)
Why Quality Matters for Tests
Research shows that poor test code quality directly impacts development velocity and defect rates:
- Confusing test names slow debugging - When tests fail, developers spend 19-31% more time understanding what the test validates if variable names are generic (Lawrie et al., IEEE 2006)
- Complex tests hide bugs - Test functions with cyclomatic complexity >10 are 2-3x more likely to miss edge cases (Microsoft analysis of 10,000+ test suites)
- Similar names cause false confidence - Tests with confusingly similar variable names (e.g., data vs dat) mask incorrect assertions 40% of the time
EK9's quality enforcement ensures tests are as maintainable and reliable as the production code they validate.
Adjustment Timeline
New to EK9's quality-enforced testing? Here's a realistic timeline:
- Day 1-2: You'll encounter quality errors in test code. Read the rich messages (-E3 flag) for guidance. This is normal!
- Week 1: Writing quality-compliant test code becomes natural. You rarely hit naming or complexity errors.
- Week 2-4: You notice tests are easier to understand. Code reviews move faster because test intent is clear.
- Month 2+: You realize quality-enforced tests catch bugs earlier - tests fail clearly and precisely, reducing debugging time.
Pro Tip: If you're hitting quality errors frequently, use ek9 -E3 main.ek9
for detailed explanations with step-by-step fixes. The rich messages include academic
research citations and real-world failure examples explaining why each check exists.
Next Steps
- Understand quality enforcement: Read Code Quality for complete documentation of all quality rules that apply to your test code
- Reference errors: Bookmark the Error Index - complete reference for E81xxx (test configuration), E82xxx (test execution), and E83xxx (coverage threshold) errors with examples
- AI integration: See For AI Assistants for machine-readable test output schemas, workflow templates, and integration patterns
- Start simple: Begin with assert-based tests (see Test Types above) before exploring black-box and parameterized approaches
Quick Reference
| Task | Command / Syntax |
|---|---|
| Run all tests | ek9 -t main.ek9 |
| Run with JSON output | ek9 -t2 main.ek9 |
| Run with JUnit XML | ek9 -t3 main.ek9 |
| Verbose coverage (human) | ek9 -t4 main.ek9 |
| Verbose coverage + JSON | ek9 -t5 main.ek9 |
| Interactive HTML report | ek9 -t6 main.ek9 |
| List tests only | ek9 -tL main.ek9 |
| Run specific group | ek9 -tg groupname main.ek9 |
| Show coverage summary | ek9 -t -tC main.ek9 |
| JSON + coverage to files | ek9 -t2 -tC main.ek9 |
| XML + coverage to files | ek9 -t3 -tC main.ek9 |
| Mark as test | @Test before program |
| Mark as grouped test | @Test: "groupname" |
| Assert condition | assert condition |
| Assert throws | assertThrows(ExceptionType, expr) |
| Assert no throw | assertDoesNotThrow(expr) |
| Black-box expected file | dev/expected_output.txt |
| Parameterized args | dev/commandline_arg_{id}.txt |
| Parameterized expected | dev/expected_case_{id}.txt |