LydianAI Open-source tooling

Software needs to be good enough to go into a vehicle.

Automotive software systems — braking, steering, OTA updates, connectivity — must meet strict safety and security requirements before they reach the road. Showing that the work was done right means structured evidence: requirements that are traced, tests that are recorded, changes that are controlled.

The target of this tool is any software system developed for an automotive context. That could be a driver assistance function, a gateway ECU, a software update service, or an in-vehicle application. What they share is that every change — no matter how small — needs to be understood in terms of what requirement it satisfies, what hazard it addresses, and what evidence exists that it was tested and approved.

Today, three sets of rules converge on automotive software teams. UNECE R155 requires a documented cybersecurity management system — threat analyses, risk ratings, and evidence that mitigations were implemented. UNECE R156 requires a complete audit trail for every software update deployed to a vehicle. ISO 29119 defines what test evidence looks like. Meeting all three in parallel, under continuous development, is where manual processes break down.

The problem is not that teams don't know what to do. The problem is that compliance evidence is fragile when managed manually. A requirement changes. Three test cases are now stale. A change gets reviewed without anyone noticing the downstream impact. By the time the release gate approaches, reconstructing the traceability chain can take longer than the development itself.


Before

  • Requirement changes in a Word doc; no one knows what's affected
  • "I thought someone else had tested that"
  • Release gate: 3 weeks reconstructing traceability by hand
  • Auditor asks for evidence → folder search, maybe missing files

With Assurance-as-Code

  • Requirement change recorded; downstream items auto-flagged as suspect
  • Every test case linked to the requirement it covers — no gaps
  • Release gate: automated check runs in seconds, not weeks
  • Auditor asks for evidence → export baseline report, complete chain

What a project looks like

Managing a SW System project

A project in the tool represents a SW System being developed — it has a profile (which standards apply, what ASIL level), and it holds all the compliance artifacts for that system. Items like requirements, hazard analyses, test cases, and test results are created and linked inside that project. Each item has an owner, a state, and a revision history.
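The item shape this paragraph describes — an owner, a state, and an append-only revision history — can be sketched as a small data model. All class and field names below are illustrative assumptions, not the tool's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    author: str
    timestamp: datetime
    summary: str

@dataclass
class Item:
    item_id: str        # e.g. "REQ-042" or "TC-017" (ids are illustrative)
    kind: str           # "requirement", "hazard", "test_case", "test_result"
    owner: str
    state: str = "Draft"
    links: list = field(default_factory=list)      # ids of related items
    history: list = field(default_factory=list)    # append-only revision log

    def revise(self, author: str, summary: str) -> None:
        """Record a change; history is only ever appended, never rewritten."""
        self.history.append(Revision(author, datetime.now(timezone.utc), summary))

req = Item("REQ-042", "requirement", owner="alice")
req.revise("alice", "Initial draft")
print(req.state, len(req.history))  # Draft 1
```

The append-only history is what makes the audit log trustworthy: every revision records who changed the item and when, and nothing is overwritten in place.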

Evidence flows from requirement to release

When a requirement is written, it links to the hazard or risk it addresses. When a test case is written, it links to the requirement it covers. When a test is run, the result is recorded and linked to the test case. When a release is due, the system checks that the full chain exists and every item has been reviewed and approved — before it will generate a release baseline.

Draft → Under Review → Reviewed → Approved → Released → Obsolete

What good compliance evidence looks like

HARA — hazard identified, ASIL assigned
↓ drives
Safety requirement defined
↓ covered by
Test case written and linked
↓ produces
Test result: PASS
↓ gates
Release baseline generated ✓

Hazard and risk analysis (HARA)

ISO 26262 requires teams to identify hazardous situations — what happens if this software function fails at speed? Each hazard gets a severity, exposure, and controllability rating, and an ASIL level derived from those. The HARA drives the safety requirements that follow. HARA records are linked to the safety requirements they generate, so when a requirement changes, the hazard analysis that drove it is automatically flagged for review.
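The derivation of an ASIL from severity, exposure, and controllability can be illustrated with the standard risk-graph lookup. The additive shorthand below reproduces the S1–S3 / E1–E4 / C1–C3 table in ISO 26262-3; the function name and interface are illustrative, and real HARAs also involve engineering judgment the table alone does not capture.

```python
def asil(severity: int, exposure: int, controllability: int) -> str:
    """severity in 1..3 (S1-S3), exposure in 1..4 (E1-E4),
    controllability in 1..3 (C1-C3). Returns the ASIL, or QM
    (quality management, no ASIL required)."""
    total = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

print(asil(3, 4, 3))  # ASIL D -- worst case on every axis
print(asil(1, 2, 1))  # QM -- low severity, low exposure, easily controlled
```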

Threat analysis (TARA) for cybersecurity

UNECE R155 requires teams to identify attack surfaces, assess the impact of successful attacks, and document the mitigations implemented. This is called a TARA — Threat Analysis and Risk Assessment. TARA items link to the cybersecurity requirements they drive, and those requirements link to the test cases that verify them. You can follow the chain from threat to evidence in either direction.

Test evidence: plans, cases, and results

ISO 29119 defines what test documentation should look like. A test plan describes what will be tested and how. Test cases define individual scenarios linked to specific requirements. Test results record what happened — pass, fail, environment, date. A release baseline cannot be generated until every in-scope requirement has an approved, passing test result attached. The evidence is the gate, not a checkbox.
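The gate described above can be sketched as a coverage check: a requirement blocks the baseline unless an approved test case covers it and that case has a passing result. The data shapes and field names here are illustrative assumptions, not the tool's API.

```python
def release_gate(requirements, test_cases, results):
    """Return the requirement ids blocking the baseline (empty list = gate open)."""
    blocking = []
    for req in requirements:
        covered = any(
            tc["covers"] == req["id"]
            and tc["state"] == "Approved"
            and any(r["case"] == tc["id"] and r["verdict"] == "PASS" for r in results)
            for tc in test_cases
        )
        if not covered:
            blocking.append(req["id"])
    return blocking

reqs = [{"id": "REQ-1"}, {"id": "REQ-2"}]
cases = [{"id": "TC-1", "covers": "REQ-1", "state": "Approved"}]
runs = [{"case": "TC-1", "verdict": "PASS"}]
print(release_gate(reqs, cases, runs))  # ['REQ-2'] -- REQ-2 blocks the baseline
```

Because the check is a function of the linked artifacts, it can run in seconds on every change rather than once, by hand, at release time.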


What this tool is (and isn't)

It is

  • A place to create and manage compliance artifacts for a SW System
  • A way to link requirements, test cases, and test results so you can see the full chain
  • A controlled lifecycle for every item — draft, reviewed, approved, released
  • An audit log of every change, with who made it and when
  • A release-gate check before a baseline can be generated

It isn't

  • A certified safety tool (you still need qualified engineers to make the decisions)
  • A hosted or managed service
  • A replacement for a full product lifecycle management system

Regulations this tool is designed for

UNECE R156 — Software updates

R156 applies to manufacturers that deploy software updates to vehicles (OTA or otherwise). It requires a Software Update Management System — documented evidence that every update was authorized, tested, and can be traced back to its triggering reason. The tool maintains the artifact chain from change request through requirement, test, and release, and generates a timestamped baseline that serves as the authorization record.

UNECE R155 — Cybersecurity

R155 requires manufacturers to maintain a cybersecurity management system covering the full vehicle lifecycle. In practice, that means documenting threats, assessing their impact, implementing mitigations, and proving through test evidence that mitigations work. The tool manages these artifacts and links them, so when an auditor asks "What did you do about this risk?", the answer is a traceable record, not a folder search.

ISO 29119 — Testing

ISO 29119 defines what test documentation should look like so it can serve as regulatory evidence — not just as a record for the development team. The tool structures test artifacts (plans, cases, results) in a way that satisfies the standard's requirements, and enforces that nothing reaches a release baseline without all required test evidence in place.


Connect your code repository

Compliance evidence shouldn't depend on someone manually asserting "this file implements that requirement." The repository integration closes the loop: developers declare what their code does in a single YAML file, and the tool keeps the compliance record in sync with every push.

One file in the repository

A file called .aac/repo.yml lives in your codebase alongside the code it describes. It maps source paths to the requirements they satisfy. The scanner reads it from the exact commit being scanned — so the evidence reflects what was actually in the repository at that revision, not what someone thought was there later.
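A hypothetical .aac/repo.yml might look like the sketch below. The key names and layout are assumptions for illustration only, not the documented schema; the requirement ids are likewise made up.

```yaml
# .aac/repo.yml -- illustrative sketch; keys and ids are assumptions,
# not the documented format.
mappings:
  - path: src/update/installer/**
    requirements: [REQ-OTA-012, REQ-OTA-013]
  - path: src/gateway/firewall.c
    requirements: [REQ-SEC-004]
```

Because the file is versioned with the code, renaming a source path and updating its mapping happen in the same commit, and the scanner sees both together.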

Scans on every push and PR

A GitHub App webhook triggers a scan on each push and each pull request. Implementation items and source references are created or updated automatically — no manual import step. The compliance record stays in sync with the codebase as the code evolves, not just at release time.
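On the receiving side, the first step for any GitHub App webhook handler is verifying that the payload really came from GitHub, using the shared webhook secret and the X-Hub-Signature-256 header. The sketch below shows just that step with the standard library; framework and endpoint wiring are omitted, and this is not the tool's actual code.

```python
import hashlib
import hmac

def verify_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """signature_header is the X-Hub-Signature-256 value: 'sha256=<hex digest>'."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when checking the MAC
    return hmac.compare_digest(expected, signature_header)

secret = b"webhook-secret"
body = b'{"action": "push"}'
sig = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, sig))  # True
```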

Compliance status in GitHub Checks

Validation and release-gate results are published back to GitHub as check runs on the PR. Developers see whether their change breaks a traceability link or leaves a requirement uncovered before the PR is merged — not three weeks later when someone starts preparing a release baseline.
