LydianAI distributed inference

Federated training across mixed hardware — including legacy GPUs.

The first public LydianAI app is a distributed training proof-of-concept: a FastAPI coordinator runs FedAvg rounds while workers (CPU and GPU) train locally and submit updates. It’s designed for heterogeneous setups (macOS controller + Ubuntu workers) and supports Pascal GPUs like GTX 1080/1080 Ti.

View on GitHub · Run the Quickstart
✅ FedAvg rounds ✅ FastAPI coordinator ✅ Tailscale networking ✅ NEW vs LEGACY GPU install

What you get in this PoC

Coordinator + workers

A CPU-only server coordinates rounds and aggregates updates; workers register, poll for work, train, and submit updates.
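The aggregation step the coordinator performs each round is plain FedAvg: average the updates the workers submit, optionally weighted by how much data each worker trained on. A minimal sketch, assuming updates arrive as name-to-values dicts (the helper name `average_updates` is illustrative, not the repo's actual API):

```python
def average_updates(updates, weights=None):
    """FedAvg over worker updates.

    updates: list of dicts mapping parameter name -> list of floats.
    weights: optional per-worker weights (e.g. local sample counts);
             defaults to a uniform average.
    """
    n = len(updates)
    if weights is None:
        weights = [1.0 / n] * n
    else:
        total = sum(weights)
        weights = [w / total for w in weights]  # normalize to sum to 1
    averaged = {}
    for name in updates[0]:
        size = len(updates[0][name])
        averaged[name] = [
            sum(w * u[name][i] for w, u in zip(weights, updates))
            for i in range(size)
        ]
    return averaged
```

Weighting by sample count keeps a worker with a tiny local shard from dragging the global model as hard as a worker that saw most of the data.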

Real constraints handled

Mixed machines, mixed GPUs, legacy CUDA compatibility, and best-effort GPU telemetry.
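"Best-effort" telemetry means a worker reports GPU stats when it can and quietly reports nothing when it can't. A sketch of that idea, querying `nvidia-smi` and degrading to `None` on CPU-only machines (the helper name and exact fields are our choices, not the repo's):

```python
import subprocess

def gpu_telemetry(cmd=None):
    """Return per-GPU stats via nvidia-smi, or None if unavailable."""
    cmd = cmd or [
        "nvidia-smi",
        "--query-gpu=name,memory.used,utilization.gpu",
        "--format=csv,noheader,nounits",
    ]
    try:
        out = subprocess.run(cmd, capture_output=True, text=True,
                             timeout=5, check=True)
    except (OSError, subprocess.SubprocessError):
        return None  # no NVIDIA driver/tool on this worker: report nothing
    rows = [line.split(", ") for line in out.stdout.strip().splitlines()]
    return [{"name": r[0], "mem_used_mib": int(r[1]), "util_pct": int(r[2])}
            for r in rows]
```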

Reproducible runs

Start training, monitor progress, and pull results from a CLI client. CIFAR-10 included as the baseline dataset.
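The start/monitor/pull flow above maps naturally onto three HTTP calls against the coordinator. A hypothetical sketch: the endpoint paths (`/train/start`, `/train/status`, `/train/result`) are assumptions rather than the repo's documented routes, and `fetch` is injectable so the flow can be exercised without a live server:

```python
import json
from urllib.request import urlopen

def run_job(base_url, rounds, fetch=None):
    """Start a training job, poll until done, then fetch the result."""
    fetch = fetch or (lambda url: json.load(urlopen(url)))
    job = fetch(f"{base_url}/train/start?rounds={rounds}")
    status = fetch(f"{base_url}/train/status?job={job['id']}")
    while status["state"] != "done":
        status = fetch(f"{base_url}/train/status?job={job['id']}")
    return fetch(f"{base_url}/train/result?job={job['id']}")
```

In a real client you would sleep between polls; it is omitted here to keep the sketch short.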


Fast path

1) Bring machines on the same network

Use Tailscale so the controller and workers can talk over stable 100.x addresses.

Tailscale setup

2) Pick NEW or LEGACY GPU setup

Modern GPUs use current PyTorch wheels; Pascal cards (GTX 1080/1080 Ti) need a pinned legacy Torch and Python combination.
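The NEW-vs-LEGACY choice can be made automatically from the card's CUDA compute capability (the tuple `torch.cuda.get_device_capability()` returns; Pascal cards like the GTX 1080/1080 Ti report `(6, 1)`). The cutoff used here is an assumption for this PoC, not a rule from the repo:

```python
def install_track(capability):
    """Map a CUDA compute capability tuple to an install track.

    Assumed cutoff: Pascal (6.x) and older take the pinned LEGACY
    Torch + Python stack; Volta (7.0) and newer use current wheels.
    """
    major, _minor = capability
    return "LEGACY" if major < 7 else "NEW"
```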

GPU setup guide
Run the full Quickstart