Capabilities
Everything you need to collect, shape, route, and govern telemetry and event data at the first mile
LyftData exposes seven core capabilities — Collect, Transform, Route, Compose, Scale, Observe, Visual Editor — all running on the deterministic Server → Workers → Jobs architecture. Each capability builds toward control, scale, and governance.
Works with logs, metrics, traces, security events, SaaS exports, files, and API records.
Here’s what you can do with LyftData today.
Three themes
To keep complexity manageable, every primitive belongs to one of these themes:
CONTROL & COST
- Shape and mask data before it hits metered tools
- Route curated subsets into expensive platforms
- Archive full fidelity cheaply
Every byte is intentional, not accidental.
SCALE & RELIABILITY
- Server → Workers → Jobs keeps behavior deterministic
- Workers scale horizontally without rewrites
- Run & Trace shows exactly what will happen before deployment
Pipelines stay predictable even as volume grows.
GOVERNANCE & AUDITABILITY
- Jobs are signed, version-controlled definitions
- Run & Trace produces auditable evidence
- Server stores lineage and change history
You can prove how data moved — every time.
Collect
Workers read directly from your existing sources with a single Input definition per Job—for telemetry, events, and records.
Files & file shares
S3 / GCS / Azure Blob
Windows Events
HTTP / APIs
Security sources like CrowdStrike
What it solves
Tool-specific agents and scripts
Duplicated ingestion logic
Missed sources due to inconsistent configs
A consistent, governed ingestion layer.
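For a concrete sense of what this looks like, here is a minimal sketch of how a Job's Input might be declared. The structure and field names (kind, bucket, prefix, format) are hypothetical placeholders, not the actual LyftData schema; see the Inputs & Sources reference for real definitions.

```yaml
# Hypothetical sketch of a Job's Input definition.
# Field names are illustrative only, not the real LyftData schema.
job: collect-edr-events
input:
  kind: s3                  # could equally be file, http, windows-events, ...
  bucket: security-exports
  prefix: crowdstrike/fdr/
  format: json
```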
Inputs & Sources →
Transform (Actions)
Chain declarative Actions to filter, parse, enrich, mask, script, and normalize events and records.
Filter — remove noise
Parse — extract fields
Enrich — add context
Redact / Mask — govern sensitive data
Script — run custom logic
Normalize — produce consistent structures
What it solves
Brittle per-tool transforms
Untraceable scripts
Inconsistent masking and enrichment
Cleaner, reviewable data upstream.
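As an illustration of chaining, a Job's Action list might read roughly like the sketch below. Every action name and parameter here is hypothetical; the Actions Reference documents the real vocabulary.

```yaml
# Hypothetical Action chain for one Job; names and parameters are illustrative.
actions:
  - filter:                     # drop noisy records before anything else
      drop_when: 'level == "DEBUG"'
  - parse:                      # extract structured fields from the raw message
      field: message
      as: json
  - enrich:                     # add context useful for routing downstream
      set:
        environment: production
  - mask:                       # govern sensitive fields before data leaves the pipeline
      fields: [user.email, client.ip]
      method: hash
  - normalize:                  # emit one consistent structure for every destination
      schema: common-event-v1
```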
Actions Reference →
Route
Each Job defines the primary Output so you can intentionally deliver to SIEM, observability, analytics, or archives—without vendor lock-in.
SIEM
Observability tools
Data platforms
Object storage
Archival buckets
What it solves
Over-ingesting into expensive tools
Blind forwarding
Duplicated pipelines per destination
Every destination receives only the data it needs.
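A Job's primary Output might be declared along these lines; again, the destination names and fields are placeholders rather than the real schema.

```yaml
# Hypothetical primary Output; destination names and fields are placeholders.
output:
  kind: siem
  destination: splunk-hec      # could be an observability backend, warehouse, or bucket instead
  index: security_curated
```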
Outputs →
Compose (Channels)
Channels clone and fan out governed outputs so one Job can feed many downstream systems—from telemetry tools to data platforms.
Curated logs → SIEM
Structured events → observability
Full-fidelity logs → archive
What it solves
Copy-paste pipelines per tool
Inconsistent policies across branches
Costly re-ingestion when adding destinations
One pipeline, many intentional outputs.
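Conceptually, Channels clone the governed stream after the Actions run, so adding a destination means adding an entry rather than building a new pipeline. A hypothetical sketch, with illustrative names only:

```yaml
# Hypothetical Channels block: one governed Job feeding several destinations.
channels:
  - name: curated-to-siem
    output: splunk-hec           # curated, masked subset
  - name: structured-to-observability
    output: otel-gateway
  - name: full-fidelity-archive
    output: s3-archive           # everything, cheaply, for replay and audit
```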
Channels tutorial →
Scale
Workers are stateless executors you can add anywhere without rewriting Jobs.
What it solves
Per-cluster drift
Scaling tied to agents
Risky deployments under load
Predictable performance and safer parallelism.
Operate & Scale →
Observe
LyftData surfaces logs, metrics, and traces from Workers back to Server.
What it solves
Blind pipelines
Slow failure detection
Unverifiable transformations
Observability by default.
Operate Overview →
Visual Editor
Author Jobs visually, preview transformations with Run & Trace, then export or review in YAML.
What it solves
Hidden UI-only logic
Slow pipeline authoring
Handwritten configs without guardrails
Speed during creation, precision during review.
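The review artifact is the exported Job itself: one file that covers input, actions, output, and channels. A rough, hypothetical sketch of what such an export could look like (the real export format may differ):

```yaml
# Hypothetical shape of a Job exported from the Visual Editor for review.
job: collect-edr-events
input:
  kind: s3
  bucket: security-exports
actions: [filter, parse, mask, normalize]   # the chain authored visually
output:
  kind: siem
  destination: splunk-hec
channels: [full-fidelity-archive]
```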
Visual Editor →
Workflows
Workflows & deployments
Versioned releases, safe rollouts, and clear rollbacks—across one worker or a fleet.
Build
- Draft workflows quickly and iterate safely.
- Publish versions for repeatable deploys and rollbacks.
- Reuse jobs and blueprints instead of starting from scratch.
Deploy
- Plan first: preview what will change and where it will run.
- Roll out safely across workers (with targeted placement and scaling).
- Roll back by deploying a previous published version (sketched below).
Operate
- See what’s running and where (workloads + fleet).
- Track deployments end-to-end (events, progress, warnings).
- Keep changes explicit: preview disruptive moves before applying them.
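To make the release flow concrete, a deployment request might look roughly like this sketch; every field name here is hypothetical, not LyftData's actual workflow format.

```yaml
# Hypothetical release-and-rollout sketch; field names are illustrative only.
workflow: edge-telemetry
version: 12                      # a published, versioned release
deploy:
  plan: true                     # preview what will change and where before applying
  target:
    workers: [emea-edge-01, emea-edge-02]
# Rolling back is just deploying the previously published version, e.g. version: 11.
```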
Why these capabilities matter
Because your downstream tools shouldn't dictate your data structure.
When pipelines live in vendor agents and per-tool configs, every change becomes fragile and expensive.
- Downstream tools no longer dictate your pipeline shape
- You avoid vendor lock-in while keeping the tools you already use
- You prevent cost explosions and surprise ingest bills
- You maintain one chain of custody for governance and audits
LyftData makes data predictable: operationally, economically, and in your compliance posture.
See how teams put these capabilities to work
Portability and stable references
Workflow node identities and blueprint aliases are treated as stable identifiers, so placement and scaling references stay portable across edits and export/import cycles.
Security, observability, and data teams already building on LyftData.
Check compatibility with your stack
Browse supported sources and destinations.
Understand the architecture
Walk through Server → Workers → Jobs in detail.