<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Cloud SPE]]></title><description><![CDATA[Welcome to Cloud SPE, your free-to-use RTMP & AI Gateways powered by the Livepeer Network. ]]></description><link>https://www.livepeer.cloud/</link><image><url>https://www.livepeer.cloud/favicon.png</url><title>The Cloud SPE</title><link>https://www.livepeer.cloud/</link></image><generator>Ghost 5.80</generator><lastBuildDate>Sat, 09 May 2026 03:55:43 GMT</lastBuildDate><atom:link href="https://www.livepeer.cloud/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[NaaP Analytics: Project Completion Report]]></title><description><![CDATA[Livepeer’s AI network is now measurable. Read the Cloud SPE’s final project report on NaaP Analytics, featuring public dashboards, performance signals, and the data foundation for production-grade service guarantees.]]></description><link>https://www.livepeer.cloud/naap-analytics-project-completion-report/</link><guid isPermaLink="false">69cf9039c28d16000117dc71</guid><category><![CDATA[AI]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Infrastructure]]></category><category><![CDATA[Livepeer]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Report]]></category><dc:creator><![CDATA[Admin User]]></dc:creator><pubDate>Fri, 10 Apr 2026 13:30:30 GMT</pubDate><media:content url="https://www.livepeer.cloud/content/images/2026/04/cloudspe-naap-completion-photo_2026-04-03_06-01-37.jpg" medium="image"/><content:encoded><![CDATA[<h1 id="naap-analytics-project-completion-report">NaaP Analytics: Project Completion Report</h1>
<img src="https://www.livepeer.cloud/content/images/2026/04/cloudspe-naap-completion-photo_2026-04-03_06-01-37.jpg" alt="NaaP Analytics: Project Completion Report"><p><em>By the Cloud SPE Team &#x2014; <a href="https://www.livepeer.cloud/">Livepeer.Cloud</a></em></p>
<hr>
<h2 id="executive-summary">Executive Summary</h2>
<p>The Cloud SPE has completed delivery of the <strong>Network-as-a-Product (NaaP) MVP &#x2014; SLA Metrics, Analytics, and Public Infrastructure</strong> project, funded by the Livepeer Treasury. This post provides a detailed accounting of what was proposed, what was built, how the architecture works, and what impact this has on the Livepeer network going forward.</p>
<p>Work began in November 2025 with the original proposal and discovery process. The <a href="https://forum.livepeer.org/t/metrics-and-sla-foundations-for-naap/3189?ref=livepeer.cloud">revised pre-proposal</a> was submitted in January 2026, passed the treasury vote, and execution proceeded through three milestones culminating in April 2026.</p>
<hr>
<h2 id="what-was-promised-vs-what-was-delivered">What Was Promised vs. What Was Delivered</h2>
<h3 id="milestone-1-%E2%80%94-metrics-collection-aggregation-february-2026-%E2%9C%85">Milestone 1 &#x2014; Metrics Collection &amp; Aggregation (February 2026) &#x2705;</h3>
<p><strong>Promised:</strong></p>
<ul>
<li>Define and implement the minimal metrics set</li>
<li>Aggregate existing telemetry into a unified analytics layer</li>
<li>A basic dashboard showing sample data flowing end to end</li>
</ul>
<p><strong>Delivered:</strong></p>
<ul>
<li>A comprehensive <strong>metrics catalog</strong> covering network state, stream activity, performance, payments, reliability, and orchestrator leaderboard scoring</li>
<li>A Kafka-to-ClickHouse ingest pipeline using ClickHouse&apos;s Kafka Engine tables for durable, at-least-once event consumption (duplicates are reconciled downstream by the resolver)</li>
<li>Materialized views routing incoming events into tables with validation rules</li>
<li>Normalized tables capturing event-family facts and rollups</li>
<li>A working Grafana dashboard demonstrating end-to-end data flow from Kafka through to visual output</li>
<li>Documented bootstrap schema for reproducible fresh deployments</li>
</ul>
<h3 id="milestone-2-%E2%80%94-test-signals-derived-analytics-march-2026-%E2%9C%85">Milestone 2 &#x2014; Test Signals &amp; Derived Analytics (March 2026) &#x2705;</h3>
<p><strong>Promised:</strong></p>
<ul>
<li>Deploy reference load-test gateways</li>
<li>Launch a public dashboard with core views</li>
<li>APIs for ecosystem consumption</li>
</ul>
<p><strong>Delivered:</strong></p>
<ul>
<li>Reference load-test gateway operational, generating consistent AI pipeline performance signals</li>
<li><strong>Four production Grafana dashboards:</strong>
<ul>
<li>System health overview</li>
<li>Real-time stream activity</li>
<li>Payments and revenue metrics</li>
<li>FPS, latency, WebRTC performance</li>
</ul>
</li>
<li>Additional <strong>supply inventory dashboard</strong> tracking GPU capacity across the network</li>
<li>A <strong>REST API</strong> with endpoints covering the documented requirement specs:
<ul>
<li>Network state and orchestrator profiles</li>
<li>Stream activity and job performance</li>
<li>Payment and economics data</li>
<li>Reliability and SLA scoring</li>
<li>Orchestrator leaderboard with composite scoring</li>
<li>GPU supply inventory and capacity</li>
</ul>
</li>
<li><strong>OpenAPI specification</strong> embedded in the API service with Swagger UI</li>
<li><strong>Prometheus metrics</strong> endpoint for operational monitoring of the API itself</li>
</ul>
<h3 id="milestone-3-%E2%80%94-stabilization-review-april-2026-%E2%9C%85">Milestone 3 &#x2014; Stabilization &amp; Review (April 2026) &#x2705;</h3>
<p><strong>Promised:</strong></p>
<ul>
<li>Harden infrastructure for reliability and cost efficiency</li>
<li>Document metrics, assumptions, and known gaps</li>
<li>Review outcomes with the community to determine next steps</li>
</ul>
<p><strong>Delivered:</strong></p>
<ul>
<li>Full production deployment across multiple infrastructure nodes with separated concerns:
<ul>
<li>Kafka broker, MirrorMaker2 for replicating events from Confluent Cloud, and a full ClickHouse + API + Grafana stack</li>
</ul>
</li>
<li><strong>Traefik reverse proxy</strong> with automated TLS via Cloudflare</li>
<li><strong>Prometheus monitoring</strong> with 180-day retention policy</li>
<li><strong>Resolver service</strong> &#x2014; a custom service that publishes corrected current and serving state into canonical stores</li>
<li><strong>dbt semantic layer</strong> publishing canonical and API views over normalized tables, with automated tests</li>
<li><strong>31 data-quality validation tests</strong> in a scenario-based test harness</li>
<li>Comprehensive documentation suite (see below)</li>
</ul>
<hr>
<h2 id="architecture-deep-dive">Architecture Deep Dive</h2>
<p>The NaaP Analytics platform follows a layered architecture designed for clarity, auditability, and extensibility:</p>
<pre><code>Kafka Topics
    &#x2193;
ClickHouse Kafka Engine Tables
    &#x2193;
Ingest Materialized Views &#x2192; accepted_raw_events / ignored_raw_events
    &#x2193;
Normalized Tables (event-family facts + rollups)
    &#x2193;
Resolver Service &#x2192; canonical_*_store / api_*_store
    &#x2193;
dbt Semantic Layer &#x2192; canonical_* / api_* views
    &#x2193;
Go REST API + Grafana Dashboards
</code></pre>
<h3 id="key-architectural-decisions">Key Architectural Decisions</h3>
<p><strong>ClickHouse + Kafka Engine</strong> &#x2014; We chose ClickHouse as the analytics store for its columnar storage efficiency and native Kafka engine support. This eliminates the need for a separate ETL service &#x2014; ClickHouse consumes directly from Kafka topics, and materialized views handle routing and validation in a single step.</p>
<p><strong>REST/JSON API Design</strong> &#x2014; The API follows a straightforward REST design with no authentication required for public read endpoints. This maximizes accessibility for ecosystem teams while keeping the door open for org-scoped access in the future.</p>
<p><strong>Tiered Serving Contract</strong> &#x2014; Data flows through defined tiers: raw &#x2192; normalized &#x2192; canonical (resolver) &#x2192; semantic (dbt) &#x2192; API. Each tier has explicit contracts about freshness, correctness, and derivation rules. The resolver publishes &quot;corrected&quot; state &#x2014; reconciling event ordering, handling late-arriving data, and producing a consistent current-state view.</p>
<h3 id="the-resolver">The Resolver</h3>
<p>The resolver deserves specific mention. It&apos;s a separate service that:</p>
<ul>
<li>Reads from normalized tables</li>
<li>Computes corrected current state (handling out-of-order events, deduplication, state transitions)</li>
<li>Publishes to canonical store tables</li>
<li>Supports three run modes: <code>bootstrap</code> (full historical rebuild), <code>tail</code> (real-time processing), and <code>auto</code> (bootstrap then tail)</li>
<li>Includes repair capabilities for specific time windows</li>
</ul>
<p>This is the component that transforms raw event noise into trustworthy, queryable state. Without it, dashboards would show inconsistencies from event ordering, network partitions, or delayed telemetry.</p>
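<p>At a toy scale, the resolver&apos;s correction loop amounts to deduplicating by event ID, reordering by timestamp, and keeping the last state transition. The event fields and state names in this sketch are illustrative assumptions, not the platform&apos;s actual schema:</p>

```python
from dataclasses import dataclass

# Hypothetical event shape for illustration only; the real schema lives
# in the Cloud-SPE/livepeer-naap-analytics repository.
@dataclass(frozen=True)
class Event:
    event_id: str
    ts: float        # event timestamp (may arrive out of order)
    stream_id: str
    state: str       # e.g. "started", "active", "ended"

def resolve_current_state(events):
    """Collapse a noisy event log into one corrected state per stream.

    Mirrors the resolver's job in miniature: deduplicate by event_id,
    repair ordering by timestamp, keep the latest state transition.
    """
    seen, deduped = set(), []
    for e in events:
        if e.event_id not in seen:   # drop duplicate deliveries
            seen.add(e.event_id)
            deduped.append(e)
    current = {}
    for e in sorted(deduped, key=lambda ev: ev.ts):  # fix out-of-order arrival
        current[e.stream_id] = e.state
    return current

events = [
    Event("a", 2.0, "s1", "active"),
    Event("c", 3.0, "s1", "ended"),
    Event("a", 2.0, "s1", "active"),   # duplicate delivery
    Event("b", 1.0, "s1", "started"),  # late-arriving, out of order
]
print(resolve_current_state(events))   # -> {'s1': 'ended'}
```

<p>The production resolver additionally handles bootstrap rebuilds and windowed repairs; this shows only the dedup-and-reorder core.</p>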
<h3 id="enrichment-layer">Enrichment Layer</h3>
<p>The API includes a polling worker that enriches raw orchestrator data with:</p>
<ul>
<li>ENS name resolution</li>
<li>Staking information from the Livepeer protocol</li>
<li>Gateway metadata</li>
<li>GPU inventory details</li>
</ul>
<p>This ensures the API and dashboards present human-readable, contextually rich data rather than raw addresses and IDs.</p>
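<p>Conceptually, enrichment is a merge of lookup results onto each raw record. The field names and lookup stubs below are assumptions for illustration, not the API&apos;s actual schema:</p>

```python
# Illustrative stand-in for an on-chain ENS reverse lookup.
def lookup_ens(address):
    return {"0xabc...": "titan-node.eth"}.get(address)

def enrich_orchestrator(raw, stake_info, gpu_inventory):
    """Merge human-readable context onto a raw orchestrator record."""
    enriched = dict(raw)
    enriched["ens_name"] = lookup_ens(raw["address"]) or raw["address"]
    enriched["stake"] = stake_info.get(raw["address"], 0)
    enriched["gpus"] = gpu_inventory.get(raw["address"], [])
    return enriched

raw = {"address": "0xabc...", "region": "us-east"}
out = enrich_orchestrator(raw, {"0xabc...": 120_000}, {"0xabc...": ["RTX 4090"]})
print(out["ens_name"], out["stake"])  # titan-node.eth 120000
```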
<hr>
<h2 id="documentation-delivered">Documentation Delivered</h2>
<p>One of the project goals was to leave the community with not just working software, but a documented system that others can operate, extend, and contribute to:</p>
<table>
<thead>
<tr>
<th>Document</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>DESIGN.md</code></td>
<td>Architecture overview, layer rules, tier contracts, key decisions</td>
</tr>
<tr>
<td><code>PRODUCT_SENSE.md</code></td>
<td>Product goals, success criteria, non-goals</td>
</tr>
<tr>
<td><code>PLANS.md</code></td>
<td>Phase planning and implementation status</td>
</tr>
<tr>
<td><code>metrics-and-sla-reference.md</code></td>
<td>Community-facing metrics reference: formulas, SLA targets, glossary</td>
</tr>
<tr>
<td><code>architecture.md</code></td>
<td>Layer rules and enforcement model</td>
</tr>
<tr>
<td><code>system-visuals.md</code></td>
<td>Mermaid diagrams: ingest flow, resolver, deployment topology</td>
</tr>
<tr>
<td><code>data-validation-rules.md</code></td>
<td>Behavioral contract for all 17 validation rules (31 tests)</td>
</tr>
<tr>
<td><code>operations-runbook.md</code></td>
<td>Deployment, alerting, troubleshooting, maintenance, backups</td>
</tr>
<tr>
<td><code>devops-environment-guide.md</code></td>
<td>Monitoring, local/production environment setup</td>
</tr>
<tr>
<td><code>data-retention-policy.md</code></td>
<td>Kafka and ClickHouse retention windows, replay strategy</td>
</tr>
<tr>
<td><code>infra-hardening-runbook.md</code></td>
<td>Security posture, Kafka listener architecture</td>
</tr>
<tr>
<td><code>incident-response.md</code></td>
<td>Severity definitions (P0&#x2013;P3), escalation contacts, post-mortem template</td>
</tr>
<tr>
<td><code>run-modes-and-recovery.md</code></td>
<td>Resolver run modes, failure recovery, rebuild procedures</td>
</tr>
<tr>
<td><code>compose-services.md</code></td>
<td>Docker Compose services, profiles, and responsibilities</td>
</tr>
</tbody>
</table>
<p>Every product spec is individually documented with requirement traceability.</p>
<hr>
<h2 id="impact-on-the-livepeer-network">Impact on the Livepeer Network</h2>
<h3 id="immediate-impact">Immediate Impact</h3>
<ol>
<li>
<p><strong>Visibility:</strong> For the first time, anyone can see how the Livepeer AI network is performing &#x2014; not from marketing materials, but from live data sourced from real workloads and standardized test signals.</p>
</li>
<li>
<p><strong>Comparability:</strong> Orchestrators can be compared on a level playing field &#x2014; same metrics, same methodology, same data pipeline. The leaderboard scoring model uses composite metrics that reward both performance and reliability.</p>
</li>
<li>
<p><strong>Ecosystem Integration:</strong> The public APIs and data model are designed for consumption. The NaaP platform itself (<a href="https://github.com/livepeer/naap?ref=livepeer.cloud">livepeer/naap</a>) is an architecture where new views, tools, and integrations can be built independently by any team.</p>
</li>
<li>
<p><strong>Operational Maturity:</strong> The Livepeer network now has documented SLA metrics, a metrics glossary, formulas for reliability and performance scoring, and a reference implementation of how to collect, validate, and serve this data. This is the kind of infrastructure that enterprise evaluators look for.</p>
</li>
</ol>
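<p>A hedged sketch of what such composite scoring can look like. The weights, metric names, and latency target are illustrative assumptions, not the published formula:</p>

```python
# Blend performance and reliability into one ranking score.
# All constants here are illustrative, not the leaderboard's real weights.
def leaderboard_score(success_rate, latency_ms, uptime,
                      target_latency_ms=500.0,
                      w_perf=0.5, w_rel=0.5):
    """Composite score rewarding both performance and reliability."""
    latency_score = min(1.0, target_latency_ms / max(latency_ms, 1.0))
    performance = 0.5 * success_rate + 0.5 * latency_score
    reliability = uptime
    return w_perf * performance + w_rel * reliability

# A fast-but-flaky orchestrator ranks below a slightly slower,
# dependable one.
fast_flaky = leaderboard_score(success_rate=0.80, latency_ms=250, uptime=0.90)
steady     = leaderboard_score(success_rate=0.99, latency_ms=450, uptime=0.999)
print(round(fast_flaky, 3), round(steady, 3))
```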
<h3 id="foundation-for-future-work">Foundation for Future Work</h3>
<p>This project intentionally did <strong>not</strong> attempt to:</p>
<ul>
<li>Enforce SLAs or modify protocol incentives</li>
<li>Introduce new routing logic</li>
<li>Make protocol changes</li>
</ul>
<p>These are all logical next steps, and they all depend on the measurement layer we&apos;ve now established. Specifically, this enables:</p>
<ul>
<li><strong>SLA-aware job routing</strong> &#x2014; gateways can use performance data to route jobs to orchestrators that meet specific reliability or latency thresholds</li>
<li><strong>Network quality scores</strong> &#x2014; aggregate metrics that can be published to the Livepeer Explorer or consumed by third-party evaluation tools</li>
<li><strong>Treasury accountability</strong> &#x2014; future funded projects can point to observable metrics as evidence of impact</li>
<li><strong>GPU market intelligence</strong> &#x2014; the supply inventory data provides a real-time view of network capacity, useful for both gateway operators planning workloads and orchestrators positioning their hardware</li>
</ul>
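<p>The first of these, SLA-aware routing, reduces to filtering candidates against thresholds and ranking the survivors. The metric fields and thresholds below are assumptions for illustration:</p>

```python
# Sketch of SLA-aware routing on top of the analytics data;
# field names and thresholds are illustrative assumptions.
def eligible_orchestrators(candidates, max_latency_ms, min_success_rate):
    """Keep orchestrators meeting the gateway's SLA thresholds,
    fastest first."""
    return sorted(
        (o for o in candidates
         if o["p95_latency_ms"] <= max_latency_ms
         and o["success_rate"] >= min_success_rate),
        key=lambda o: o["p95_latency_ms"],
    )

candidates = [
    {"id": "orch-a", "p95_latency_ms": 320, "success_rate": 0.995},
    {"id": "orch-b", "p95_latency_ms": 180, "success_rate": 0.91},  # too flaky
    {"id": "orch-c", "p95_latency_ms": 240, "success_rate": 0.999},
]
picks = eligible_orchestrators(candidates, max_latency_ms=400,
                               min_success_rate=0.99)
print([o["id"] for o in picks])  # ['orch-c', 'orch-a']
```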
<hr>
<h2 id="lessons-insights-from-the-build">Lessons &amp; Insights from the Build</h2>
<h3 id="community-feedback-made-this-better">Community Feedback Made This Better</h3>
<p>The original pre-proposal (October 2025) was more ambitious &#x2014; it included decentralized data transport via Streamr.Network, a larger budget, and broader scope. The community&apos;s feedback was pointed and constructive: too much scope, too much cost, unnecessary architectural risk. That feedback led to a complete reset. The revised proposal dropped Streamr, cut the budget, simplified the architecture, and focused on a thin but complete MVP. The result is better software shipped faster.</p>
<h3 id="existing-infrastructure-is-an-asset">Existing Infrastructure Is an Asset</h3>
<p>A key design principle was reusing existing Livepeer infrastructure wherever possible. Telemetry data from gateways and orchestrators already existed &#x2014; it just wasn&apos;t aggregated or publicly accessible. By building on top of what was already there (gateway events, orchestrator telemetry, Kafka streams from Confluent Cloud), we avoided building a new event pipeline from scratch.</p>
<h3 id="the-resolver-pattern-pays-off">The Resolver Pattern Pays Off</h3>
<p>Early in development, we faced the classic analytics challenge: raw events are noisy, out of order, and inconsistent. Rather than trying to fix this at the ingest layer (which would add latency and complexity), we built a separate resolver service that computes correct state from normalized events. This separation of concerns kept the ingest pipeline fast and simple while giving the API layer clean, trustworthy data. The resolver&apos;s repair mode &#x2014; the ability to reprocess a specific time window &#x2014; has already proven invaluable during development and will continue to be useful operationally.</p>
<h3 id="documentation-is-a-deliverable">Documentation Is a Deliverable</h3>
<p>We treated documentation as a first-class deliverable, not an afterthought. Every architectural decision is recorded in an ADR. Every operational procedure has a runbook. Every validation rule has a behavioral contract. This matters because this is community infrastructure &#x2014; it needs to be operable by people who didn&apos;t build it.</p>
<h3 id="right-sizing-is-an-art">Right-Sizing Is an Art</h3>
<p>The leaner budget forced discipline. We couldn&apos;t build everything, so we built the right things. The metrics catalog is minimal but sufficient. The API covers well-defined requirement specs. The dashboards address four key operational domains. Every component earned its place. This is a pattern we&apos;d recommend to any SPE: start with the smallest thing that proves value, then let the data make the case for further investment.</p>
<hr>
<h2 id="open-source">Open Source</h2>
<p>All code is available at <a href="https://github.com/Cloud-SPE/livepeer-naap-analytics?ref=livepeer.cloud">github.com/Cloud-SPE/livepeer-naap-analytics</a> under an open source license. The repository includes:</p>
<ul>
<li>Complete Go API source and OpenAPI specs</li>
<li>Resolver service source</li>
<li>dbt warehouse models with tests</li>
<li>ClickHouse schema and migrations</li>
<li>Grafana dashboards (JSON)</li>
<li>Prometheus configuration</li>
<li>Docker Compose stacks for local development</li>
<li>Production deployment configurations for Docker / Portainer</li>
<li>Full documentation suite</li>
<li>Data validation test harness</li>
</ul>
<hr>
<h2 id="acknowledgments">Acknowledgments</h2>
<p>This project exists because of the Livepeer community. The feedback on the original pre-proposal &#x2014; from DeFine, Karolak, vires-in-numeris, j0sh, Authority_Null, rickstaa, dob, and others &#x2014; was direct, constructive, and ultimately made the project significantly better. Mehrdad from the Livepeer Foundation provided ongoing guidance and confirmed alignment with the network observability roadmap. Qiang Han from Livepeer Inc endorsed the proposal and committed to collaboration throughout execution. honestly_rich championed transparency and accountability throughout the process.</p>
<p>This is what community-driven governance looks like when it works.</p>
<hr>
<p><em>The NaaP Analytics platform is live and open source. Review the code at <a href="https://github.com/Cloud-SPE/livepeer-naap-analytics?ref=livepeer.cloud">github.com/Cloud-SPE/livepeer-naap-analytics</a>, explore the <a href="https://forum.livepeer.org/t/metrics-and-sla-foundations-for-naap/3189?ref=livepeer.cloud">proposal thread</a>, and reach out in <a href="https://discord.gg/livepeer?ref=livepeer.cloud">Livepeer Discord</a> with questions or ideas for what to build next.</em></p>
]]></content:encoded></item><item><title><![CDATA[Your GPUs Can Do More Than Transcode Video — Here's How to Put Them to Work on Livepeer]]></title><description><![CDATA[Stop letting your GPUs sit idle. Join the BlueClaw Network as a provider and power the next generation of AI agents on Livepeer. Get the v1.3 onboarding guide for Chat, Embeddings, and Image Gen.]]></description><link>https://www.livepeer.cloud/livepeer-orchestrators-for-blueclaw-openclaw-agents/</link><guid isPermaLink="false">69cf969cc28d16000117dcbe</guid><category><![CDATA[AI]]></category><category><![CDATA[Image Generation]]></category><category><![CDATA[Infrastructure]]></category><category><![CDATA[Livepeer]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Admin User]]></dc:creator><pubDate>Mon, 06 Apr 2026 10:45:19 GMT</pubDate><media:content url="https://www.livepeer.cloud/content/images/2026/04/opeai-byoc-orch-onboarding-photo_2026-04-03_06-03-27.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.livepeer.cloud/content/images/2026/04/opeai-byoc-orch-onboarding-photo_2026-04-03_06-03-27.jpg" alt="Your GPUs Can Do More Than Transcode Video &#x2014; Here&apos;s How to Put Them to Work on Livepeer"><p><em>A guide for Livepeer Orchestrators ready to serve AI inference and unlock a new revenue stream.</em></p>
<hr>
<p>If you&apos;re running a Livepeer orchestrator, you already have the infrastructure most AI companies would kill for: GPUs connected to a decentralized network, Docker expertise, and skin in the game. What if those same GPUs could serve LLM chat completions, text embeddings, and image generation &#x2014; with demand already waiting?</p>
<p>That&apos;s exactly what <a href="https://blueclaw.network/?ref=livepeer.cloud">BlueClaw Network</a> plans to make possible.</p>
<h2 id="what-is-blueclaw">What Is BlueClaw?</h2>
<p>BlueClaw is an <strong>OpenAI-compatible AI inference gateway</strong> built on top of the Livepeer GPU network. It provides:</p>
<ul>
<li><strong>Chat completions</strong> (<code>/v1/chat/completions</code>)</li>
<li><strong>Text embeddings</strong> (<code>/v1/embeddings</code>)</li>
<li><strong>Image generation</strong> (<code>/v1/images/generations</code>)</li>
</ul>
<p>All accessible at <code>https://openai.blueclaw.network/v1</code> &#x2014; the same API shape developers already use with OpenAI, so any application using the OpenAI SDK can switch to BlueClaw by changing a single line: the <code>base_url</code>.</p>
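<p>Stripped of the SDK, the contract is plain HTTP. The sketch below builds the request an OpenAI-compatible client would send; only the base URL comes from BlueClaw, while the model name and API key are placeholders:</p>

```python
import json
import urllib.request

# The one line that changes when switching providers.
BASE_URL = "https://openai.blueclaw.network/v1"

def chat_request(base_url, api_key, model, messages):
    """Build the same POST an OpenAI-compatible SDK would send."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request(BASE_URL, "YOUR_KEY", "qwen3:8b",
                   [{"role": "user", "content": "hello"}])
print(req.full_url)  # https://openai.blueclaw.network/v1/chat/completions
```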
<p>BlueClaw doesn&apos;t run its own GPUs. <strong>Your GPUs power it.</strong> The gateway discovers orchestrators through the on-chain AI Service Registry, routes inference requests to them, and your infrastructure does the work. You set your own pricing. You keep your earnings.</p>
<h2 id="why-should-orchestrators-care">Why Should Orchestrators Care?</h2>
<h3 id="new-workloads-same-hardware">New Workloads, Same Hardware</h3>
<p>If you&apos;re running RTX 3090s, 4090s, or better &#x2014; you already meet the requirements. BlueClaw&apos;s BYOC (Bring Your Own Compute) framework lets you deploy lightweight runner containers alongside your existing orchestrator setup. No new hardware needed.</p>
<h3 id="real-demand-from-day-one">Real Demand from Day One</h3>
<p>BlueClaw isn&apos;t speculative. It&apos;s designed for autonomous AI agent builders who need <strong>unlimited, always-on inference</strong> without rate limits or per-token billing. These workloads are persistent and growing &#x2014; agents don&apos;t sleep, and they don&apos;t stop sending requests at 5 PM.</p>
<h3 id="the-ai-inference-market-is-massive">The AI Inference Market Is Massive</h3>
<p>Video transcoding put Livepeer on the map. AI inference is where it scales. LLM inference, embeddings for RAG pipelines, image generation &#x2014; these are the workloads every company on the planet is trying to provision right now. BlueClaw gives you a seat at that table, powered by infrastructure you already run.</p>
<h3 id="expanding-capabilities-on-the-horizon">Expanding Capabilities on the Horizon</h3>
<p>Beyond the three core capabilities live today, Cloud SPE is actively building <strong>reranking</strong> (Cohere-compatible <code>/v1/rerank</code>) and <strong>video generation</strong> runners. Orchestrators who onboard now will be first in line when these capabilities go live on BlueClaw.</p>
<h2 id="what-youll-need">What You&apos;ll Need</h2>
<p>Here&apos;s the honest picture of what&apos;s required:</p>
<table>
<thead>
<tr>
<th>Requirement</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>GPU</strong></td>
<td>NVIDIA RTX 3090+ (24GB VRAM minimum for chat + embeddings)</td>
</tr>
<tr>
<td><strong>OS</strong></td>
<td>Linux only &#x2014; Windows is not supported, macOS is unverified</td>
</tr>
<tr>
<td><strong>Software</strong></td>
<td>Docker, NVIDIA Container Toolkit, latest NVIDIA drivers</td>
</tr>
<tr>
<td><strong>Orchestrator</strong></td>
<td>Existing Livepeer AI Orchestrator with stake on Arbitrum One</td>
</tr>
<tr>
<td><strong>Registry</strong></td>
<td>Registered on the <a href="https://arbiscan.io/address/0x04C0b249740175999E5BF5c9ac1dA92431EF34C5?ref=livepeer.cloud">AI Service Registry</a></td>
</tr>
<tr>
<td><strong>Networking</strong></td>
<td>A domain you control + Cloudflare Tunnel (free tier) for valid HTTPS</td>
</tr>
<tr>
<td><strong>ETH</strong></td>
<td>Small amount on Arbitrum One for gas fees</td>
</tr>
</tbody>
</table>
<p><strong>GPU requirements by capability:</strong></p>
<table>
<thead>
<tr>
<th>Capability</th>
<th>Minimum GPU</th>
<th>What You&apos;ll Serve</th>
</tr>
</thead>
<tbody>
<tr>
<td>Chat (small models)</td>
<td>RTX 3090</td>
<td>qwen3:8b, gemma-3-4b-it</td>
</tr>
<tr>
<td>Chat (medium/large)</td>
<td>RTX 4090+ / A100</td>
<td>Qwen2.5-14B-AWQ, Llama-3.3-70B-FP8</td>
</tr>
<tr>
<td>Text Embeddings</td>
<td>RTX 3090</td>
<td>nomic-embed-text, SFR-Embedding-2_R</td>
</tr>
<tr>
<td>Image Generation</td>
<td>RTX 4090+</td>
<td>RealVisXL V4.0, FLUX.1-dev</td>
</tr>
</tbody>
</table>
<p>A 3090 operator can serve chat completions and embeddings on day one. Image generation requires a 4090 or better, on a dedicated GPU.</p>
<h2 id="how-it-works-%E2%80%94-the-architecture">How It Works &#x2014; The Architecture</h2>
<p>The flow is clean and modular:</p>
<pre><code>BlueClaw Gateway (discovers you on-chain)
        &#x2502;
        &#x25BC;
  Your AI Orchestrator (go-livepeer)
        &#x2502;
        &#x25BC;  (via Cloudflare Tunnel)
  BYOC Runners (chat / embeddings / image gen)
        &#x2502;
        &#x25BC;
  Inference Backend (Ollama or vLLM)
        &#x2502;
        &#x25BC;
     Your GPU
</code></pre>
<p>You deploy:</p>
<ol>
<li><strong>Your AI Orchestrator</strong> &#x2014; the <code>go-livepeer</code> node you likely already run</li>
<li><strong>A Cloudflare Tunnel</strong> &#x2014; provides valid HTTPS without managing certificates</li>
<li><strong>An inference backend</strong> &#x2014; Ollama (simpler) or vLLM (higher throughput, larger models)</li>
<li><strong>BYOC runner containers</strong> &#x2014; lightweight proxies that register capabilities with your orchestrator and route requests to your backend</li>
</ol>
<p>Each component is a Docker container. The runners are open source under <a href="https://github.com/Cloud-SPE/livepeer-byoc-suite?ref=livepeer.cloud">Cloud-SPE on GitHub</a>. If you want to build a custom runner for a new workload, the framework only requires an HTTP endpoint and a capability registration sidecar.</p>
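<p>In spirit, a custom runner needs just two pieces: a capability announcement and a request proxy. Every shape below is a hypothetical sketch, not the actual livepeer-byoc-suite contract:</p>

```python
# Hypothetical shapes throughout -- the real registration contract is in
# Cloud-SPE/livepeer-byoc-suite. This only shows the two moving parts.
def capability_registration(name, url, price_per_unit):
    """Payload a runner's sidecar might announce to the orchestrator."""
    return {"capability": name, "url": url, "price_per_unit": price_per_unit}

def proxy_request(openai_body, backend_url):
    """Translate an OpenAI-style request into a backend call spec
    (e.g. an Ollama-style chat endpoint)."""
    return {
        "url": f"{backend_url}/api/chat",
        "json": {"model": openai_body["model"],
                 "messages": openai_body["messages"]},
    }

reg = capability_registration("my-new-workload",
                              "https://runner.example.com", 1)
call = proxy_request({"model": "qwen3:8b",
                      "messages": [{"role": "user", "content": "hi"}]},
                     "http://ollama:11434")
print(reg["capability"], call["url"])
```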
<h2 id="the-onboarding-guide">The Onboarding Guide</h2>
<p>Cloud SPE has published a comprehensive <strong>BlueClaw GPU Provider Onboarding Guide</strong> (v1.3) that walks you through every step:</p>
<ul>
<li><strong>Step 1:</strong> Create Docker networks and volumes</li>
<li><strong>Step 2:</strong> Deploy and configure your AI Orchestrator (including ticket redemption wallet setup and AI Service Registry registration)</li>
<li><strong>Step 3:</strong> Set up your Cloudflare Tunnel with published application routes for each capability</li>
<li><strong>Step 4:</strong> Deploy your inference backend &#x2014; complete Docker Compose files for both Ollama and vLLM, with tested model configurations per GPU (3090, 4090, 5090)</li>
<li><strong>Step 5:</strong> Deploy BYOC runners for chat completions, text embeddings, and image generation &#x2014; each with its own compose file and environment variable reference</li>
<li><strong>Step 6:</strong> Start everything in the correct order and verify runner registration</li>
<li><strong>Step 7:</strong> Verify end-to-end with BlueClaw&apos;s playground</li>
</ul>
<p>The guide includes Docker Compose files you can use directly, tested vLLM configurations by GPU type, a full environment variable reference, troubleshooting for common issues (TLS errors, capability registration failures, VRAM management), and a quick-reference section for 3090 operators who want the shortest path to serving jobs.</p>
<h2 id="quick-start-the-3090-operator-path">Quick-Start: The 3090 Operator Path</h2>
<p>If you have a 3090 and want the fastest route:</p>
<ol>
<li>Run <code>tztcloud/go-livepeer:latest</code> registered on the AI Service Registry</li>
<li>Set up one Cloudflare Tunnel with one subdomain</li>
<li>Deploy Ollama, pull <code>qwen3:8b</code> and <code>nomic-embed-text:latest</code></li>
<li>Deploy the chat completions runner and embeddings runner</li>
<li>Sign up at <a href="https://blueclaw.network/?ref=livepeer.cloud">blueclaw.network</a>, test via the playground</li>
</ol>
<p>That&apos;s it. You&apos;re serving AI inference on a decentralized network.</p>
<h2 id="what-models-are-supported">What Models Are Supported?</h2>
<p>BlueClaw currently supports a growing roster across all three capabilities:</p>
<p><strong>Chat:</strong> qwen3:8b, gemma-3-4b-it, Qwen2.5-14B-Instruct-AWQ, Llama-3.3-70B-Instruct-FP8<br>
<strong>Embeddings:</strong> nomic-embed-text, SFR-Embedding-2_R<br>
<strong>Image Generation:</strong> RealVisXL V4.0 Lightning, FLUX.1-dev</p>
<p>The model list will expand as more orchestrators come online and new runners are developed.</p>
<h2 id="open-source-all-the-way-down">Open Source, All the Way Down</h2>
<p>Every BYOC runner is open source. The full suite lives under <a href="https://github.com/Cloud-SPE/livepeer-byoc-suite?ref=livepeer.cloud">github.com/Cloud-SPE</a>:</p>
<ul>
<li><strong>Chat + Embeddings runners</strong> &#x2014; Go</li>
<li><strong>Capability registration</strong> &#x2014; Go</li>
<li><strong>Rerank runner</strong> &#x2014; Python/FastAPI (experimental)</li>
<li><strong>Video generation runner</strong> &#x2014; Python/FastAPI (experimental)</li>
<li><strong>Gateway proxy</strong> &#x2014; Go</li>
</ul>
<p>Want to build a runner for a workload that doesn&apos;t exist yet? The framework is designed for exactly that.</p>
<h2 id="get-started">Get Started</h2>
<p>The full onboarding guide is available now. To get your copy and start the process:</p>
<p><strong>&#x1F449; Reach out to <a href="https://discord.gg/xpKATpA7?ref=livepeer.cloud">@mike_zoop on the Livepeer Discord</a></strong></p>
<p>Mike will share the complete guide, answer your questions, and help you through any setup issues. You can also find him in the <strong>#orchestrating</strong> channel.</p>
<p>Whether you&apos;re running a single 3090 or a rack of 4090s, there&apos;s a path for you. The guide covers it all &#x2014; from the minimal setup to multi-GPU, multi-backend deployments with vLLM and image generation.</p>
<hr>
<h2 id="the-bigger-picture">The Bigger Picture</h2>
<p>Livepeer started as a video transcoding network. With BlueClaw, it becomes something larger: <strong>a decentralized GPU compute layer for AI inference.</strong> The same orchestrators who built the network&apos;s video infrastructure are now positioned to power the next generation of AI applications.</p>
<p>The demand is here. The tooling is ready. The guide is written.</p>
<p>The only question is whether your GPUs are going to sit idle &#x2014; or get to work.</p>
<hr>
<p><em><a href="https://blueclaw.network/?ref=livepeer.cloud">BlueClaw Network</a> &#x2014; Decentralized AI inference on the Livepeer network.</em><br>
<em>Built by <a href="https://www.livepeer.cloud/">Cloud SPE</a> for the Livepeer community.</em><br>
<em>Questions? Find @mike_zoop on <a href="https://discord.gg/xpKATpA7?ref=livepeer.cloud">Livepeer Discord</a>.</em></p>
]]></content:encoded></item><item><title><![CDATA[Building the Measurement Layer Livepeer Needs: Introducing NaaP Analytics]]></title><description><![CDATA[Decentralized AI just got a receipt. Cloud SPE introduces NaaP Analytics: the observability layer for Livepeer’s AI network. Track SLA metrics, GPU performance, and reliability with transparent, real-time data.]]></description><link>https://www.livepeer.cloud/building-the-measurement-layer-for-livepeer/</link><guid isPermaLink="false">69cf8b90c28d16000117dc3f</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Livepeer]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[AI]]></category><category><![CDATA[Infrastructure]]></category><category><![CDATA[Report]]></category><dc:creator><![CDATA[Admin User]]></dc:creator><pubDate>Thu, 15 Jan 2026 09:57:00 GMT</pubDate><media:content url="https://www.livepeer.cloud/content/images/2026/04/cloudspe-naap-announcement-photo_2026-04-03_05-56-59.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.livepeer.cloud/content/images/2026/04/cloudspe-naap-announcement-photo_2026-04-03_05-56-59.jpg" alt="Building the Measurement Layer Livepeer Needs: Introducing NaaP Analytics"><p><em>By the Cloud SPE Team &#x2014; <a href="https://www.livepeer.cloud/">Livepeer.Cloud</a></em></p>
<hr>
<h2 id="the-network-is-growing-now-it-needs-to-be-measurable">The Network Is Growing. Now It Needs to Be Measurable.</h2>
<p>Livepeer has evolved from a decentralized video transcoding protocol into a full-fledged AI compute network &#x2014; handling image generation, video synthesis, LLM inference, text-to-speech, and more. Orchestrators around the world power these workloads on GPUs ranging from GTX 1080s to RTX 5090s, serving gateways and applications that depend on performance and reliability.</p>
<p>But here&apos;s the gap: until now, there hasn&apos;t been a shared, network-wide way to measure how well it&apos;s all working.</p>
<p>Gateway providers couldn&apos;t easily compare orchestrator performance across regions or workloads. Orchestrators had no standardized way to demonstrate reliability. Developers evaluating Livepeer for production use had to trust marketing materials instead of transparent data. And the broader ecosystem lacked the foundation needed to build SLA-aware routing, scaling, or any kind of production-grade service guarantees.</p>
<p>That&apos;s what this project changes.</p>
<h2 id="what-we-built">What We Built</h2>
<p>The <strong>Cloud SPE</strong> has joined forces with the Livepeer Foundation, Livepeer Inc, and the Livepeer Community to design, build, and deploy the <strong>NaaP (Network-as-a-Product) Analytics MVP</strong> &#x2014; a complete metrics, analytics, and observability platform for the Livepeer AI network.</p>
<p>Funded through the <a href="https://forum.livepeer.org/t/metrics-and-sla-foundations-for-naap/3189?ref=livepeer.cloud">Livepeer Treasury</a> and aligned with the <a href="https://roadmap.livepeer.org/p/make-network-data-more-observable?ref=livepeer.cloud">Livepeer Foundation&apos;s roadmap</a> to make network data more observable, this project delivers:</p>
<h3 id="core-sla-metrics">Core SLA Metrics</h3>
<p>A standardized set of performance, reliability, and demand metrics &#x2014; covering everything from job success rates and GPU throughput to latency, FPS stability, and payment economics. These metrics are sourced from gateway telemetry, orchestrator signals, and reference load tests, unified into a single analytics layer.</p>
<h3 id="network-test-verification-signals">Network Test &amp; Verification Signals</h3>
<p>We operate reference load-test gateways that generate consistent, reproducible performance signals across live AI pipelines. Public test scenarios are designed to reflect real workloads, are transparent and community-verifiable, and feed directly into the same analytics layer as organic network traffic.</p>
<h3 id="analytics-aggregation-layer">Analytics &amp; Aggregation Layer</h3>
<p>A purpose-built data pipeline transforms raw events into network-level views. Events flow from Kafka into ClickHouse, are normalized by a resolver service, shaped by dbt semantic models, and served through a Go REST API. The architecture prioritizes efficient querying &#x2014; dashboards never need to scan raw job data.</p>
<h3 id="public-dashboard-apis">Public Dashboard &amp; APIs</h3>
<p>A standalone Grafana-powered dashboard presents live and historical metrics across four key domains: system health, real-time operations, economics and payments, and performance drill-downs. Public, read-only APIs expose aggregate SLA scores, GPU supply inventory, and orchestrator leaderboard data. Gateways and ecosystem teams can consume this data directly or mirror it into their own systems.</p>
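<p>To make this concrete, here is a hedged sketch of how a gateway team might consume leaderboard data in Python. The response shape and field names below are illustrative assumptions, not the documented API schema:</p>
<pre><code class="language-python">import json

# Hypothetical example: the field names below are illustrative assumptions,
# not the documented NaaP API schema.
sample_response = json.loads("""
[
  {"orchestrator": "0xaaa", "region": "us-east", "sla_score": 97.4},
  {"orchestrator": "0xbbb", "region": "eu-west", "sla_score": 88.1}
]
""")

# Rank orchestrators by aggregate SLA score, best first.
ranked = sorted(sample_response, key=lambda o: o["sla_score"], reverse=True)
for entry in ranked:
    print(entry["orchestrator"], entry["region"], entry["sla_score"])
</code></pre>
<p>In practice the JSON would come from the public read-only APIs rather than an inline string.</p>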
<h3 id="operations-stewardship">Operations &amp; Stewardship</h3>
<p>The entire platform is deployed across a production-grade infrastructure with automated monitoring, Prometheus metrics, and documented runbooks. We&apos;re committed to maintaining and operating this infrastructure for the Livepeer community.</p>
<h2 id="why-this-matters">Why This Matters</h2>
<p>This isn&apos;t a dashboard project. It&apos;s the measurement layer that the Livepeer network requires to evolve from &quot;decentralized compute that works&quot; into &quot;production infrastructure you can bet your business on.&quot;</p>
<p><strong>For Gateway Providers:</strong> You can now evaluate orchestrator performance with real data &#x2014; compare reliability, latency, and throughput across regions and workloads before routing a single job.</p>
<p><strong>For Orchestrators:</strong> Your performance is now visible and comparable. Good work gets recognized. The network rewards reliability, not just availability.</p>
<p><strong>For Developers &amp; Partners:</strong> Evaluating Livepeer for production use no longer requires a leap of faith. The data is public, the APIs are open, and the methodology is transparent.</p>
<p><strong>For the Ecosystem:</strong> Future SLA-aware routing, automated scaling, and production service guarantees all depend on trusted measurement. This is that foundation.</p>
<h2 id="the-road-here">The Road Here</h2>
<p>This project didn&apos;t arrive overnight. It started with community conversations about what Livepeer needed to become a true production platform. An initial pre-proposal in October 2025 drew thoughtful feedback &#x2014; the community pushed back on scope, cost, and architectural complexity. They were right.</p>
<p>We listened. We reset. We narrowed the scope, reduced the budget, simplified the architecture, and prioritized time-to-value. The <a href="https://forum.livepeer.org/t/metrics-and-sla-foundations-for-naap/3189?ref=livepeer.cloud">revised proposal</a> earned broad support &#x2014; from the community, from Livepeer Inc, and from the Livepeer Foundation &#x2014; and passed the <a href="https://explorer.livepeer.org/treasury/47675980806842999962173227987422002121354040219792725319563843023665050472833?ref=livepeer.cloud">treasury vote</a> in January 2026.</p>
<p>Work had already begun in November 2025, and we&apos;ve executed across three milestones: metrics collection and aggregation, test signals and derived analytics, and stabilization. The result is a production system that&apos;s live, documented, open source, and ready for the community to build on.</p>
<h2 id="what-comes-next">What Comes Next</h2>
<p>This MVP establishes shared measurement. It does not enforce SLAs, modify protocol incentives, or introduce routing logic. Those are future decisions that the community can now make with data in hand.</p>
<p>What this enables:</p>
<ul>
<li><strong>SLA-aware routing</strong> &#x2014; gateways can route jobs based on demonstrated performance, not just price</li>
<li><strong>Network quality scoring</strong> &#x2014; aggregate reliability metrics that make Livepeer legible to enterprise buyers</li>
<li><strong>Data-driven governance</strong> &#x2014; treasury proposals and network decisions grounded in observable outcomes</li>
</ul>
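<p>As a purely hypothetical illustration of the first item, SLA-aware routing could be as simple as filtering candidates on a score threshold and then choosing on price. Neither the data shape nor the selection rule below is part of the MVP:</p>
<pre><code class="language-python"># Purely illustrative: toy SLA-aware routing over leaderboard-style data.
# Neither the fields nor the selection rule are defined by the NaaP MVP.
orchestrators = [
    {"addr": "0xaaa", "sla_score": 97.4, "price_per_unit": 0.20},
    {"addr": "0xbbb", "sla_score": 88.1, "price_per_unit": 0.12},
    {"addr": "0xccc", "sla_score": 95.0, "price_per_unit": 0.15},
]

def route(pool, min_score=90.0):
    """Filter by demonstrated reliability, then pick the cheapest."""
    eligible = [o for o in pool if o["sla_score"] >= min_score]
    return min(eligible, key=lambda o: o["price_per_unit"])

print(route(orchestrators)["addr"])  # 0xccc: reliable and cheapest
</code></pre>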
<p>We built this as neutral, public infrastructure. The data belongs to the network. The code is <a href="https://github.com/Cloud-SPE/livepeer-naap-analytics?ref=livepeer.cloud">open source</a>. The APIs are public. And we&apos;re here to maintain it, improve it, and support the teams that build on top of it.</p>
<h2 id="about-cloud-spe">About Cloud SPE</h2>
<p><a href="https://www.livepeer.cloud/">Cloud SPE</a> is a Special Purpose Entity within the Livepeer ecosystem, founded by three Livepeer orchestrator node operators: <a href="https://www.speedybird.xyz/?ref=livepeer.cloud">Speedy Bird Technologies</a>, <a href="https://mikezupper.com/?ref=livepeer.cloud">Mike Zupper</a> (Xode App), and Papabear (Solar Farm). We operate free-to-use Livepeer gateways, build open-source tooling for the network, and work to make decentralized video and AI infrastructure accessible to everyone.</p>
<p>This is our third treasury-funded project. Previous work includes the original <a href="https://www.livepeer.cloud/livepeer-cloud-spe-approve/">SPE gateway and demand generation infrastructure</a> and the <a href="https://www.livepeer.cloud/livepeer-treasury-proposal-ai-performance-leaderboard/">AI Performance Leaderboard</a> integrated into the Livepeer Explorer. Each project has built on the last, and NaaP Analytics represents the most ambitious &#x2014; and most important &#x2014; step yet.</p>
<hr>
<p>Join the conversation in <a href="https://discord.gg/livepeer?ref=livepeer.cloud">Livepeer Discord</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Self-Hosting Livepeer’s LLM Pipeline: Deploying an Ollama-Based GPU Runner for AI Orchestrators]]></title><description><![CDATA[Livepeer AI Orchestrators can reuse existing NVIDIA 10/20 Series GPUs to provide LLM AI Inference to the Livepeer AI Network]]></description><link>https://www.livepeer.cloud/self-hosting-livepeers-llm-pipeline-deploying-an-ollama-based-gpu-runner-for-ai-orchestrators/</link><guid isPermaLink="false">69175d3ad5f94d00010a96c7</guid><category><![CDATA[AI]]></category><category><![CDATA[Livepeer]]></category><category><![CDATA[Infrastructure]]></category><dc:creator><![CDATA[Mike Z]]></dc:creator><pubDate>Fri, 14 Nov 2025 17:05:51 GMT</pubDate><content:encoded><![CDATA[<p><strong>Written by <a href="https://mikezupper.com/?ref=livepeer.cloud">Mike Zupper</a></strong><br>
<em>Member of <a href="https://www.livepeer.cloud/">Cloud SPE</a>, <a href="https://docs.livepeer.org/ai/introduction?ref=livepeer.cloud">Livepeer AI</a></em></p>
<hr>
<h2 id="introduction">Introduction</h2>
<p>Livepeer is rapidly evolving into a real-time AI video and machine intelligence network.<br>
Beyond video transcoding, orchestrators can now serve AI workloads such as:</p>
<ul>
<li>Image generation</li>
<li>Image-to-video</li>
<li>Text-to-speech</li>
<li>Audio-to-text</li>
<li><strong>Large Language Model (LLM) inference</strong></li>
<li>and more: read about it in the <a href="https://docs.livepeer.org/ai/pipelines/overview?ref=livepeer.cloud">Livepeer AI Docs</a></li>
</ul>
<p>To support this shift, the Cloud SPE (Special Purpose Entity within Livepeer) has built a custom <strong>Ollama-based AI Runner</strong> optimized for running LLM inference on GPUs with <strong>as little as 8GB of VRAM</strong>.</p>
<p>This post walks you through <em>exactly</em> how to deploy that runner using Docker, configure a Livepeer AI Orchestrator to use it, and verify that everything is working correctly &#x2014; with detailed logs, examples, and explanations.</p>
<p>If you get stuck, join the <strong>Livepeer Discord</strong> and visit the <strong>#orchestrating</strong> channel:<br>
&#x1F449; <a href="https://discord.gg/xpKATpA7?ref=livepeer.cloud">https://discord.gg/xpKATpA7</a><br>
You can always ping <strong>@mike_zoop</strong> for help.</p>
<hr>
<h2 id="why-we-built-an-ollama-based-runner-cloud-spe-motivations">Why We Built an Ollama-Based Runner (Cloud SPE Motivations)</h2>
<p>The official Livepeer docs recommend GPUs with <strong>16GB+ VRAM</strong> for AI inference &#x2014; and for good reason: diffusion models and advanced pipelines often require huge memory footprints.</p>
<p>However, LLMs (especially quantized formats) can run very efficiently on <strong>8GB, 10GB, 12GB</strong> cards.</p>
<p>Cloud SPE created this custom Ollama-based runner because:</p>
<h3 id="%E2%9C%94-many-orchestrators-already-own-gpus-like-gtx-1080-1070-ti-2080-3060">&#x2714; Many orchestrators already own GPUs like GTX 1080, 1070 Ti, 2080, 3060</h3>
<p>These cards may be <strong>idle</strong> from legacy transcoding workloads &#x2014; we want to put them back to work.</p>
<h3 id="%E2%9C%94-llm-jobs-do-not-require-massive-vram">&#x2714; LLM jobs do not require massive VRAM</h3>
<p>Ollama supports quantization and streaming inference, making 8GB GPUs perfectly viable.</p>
<h3 id="%E2%9C%94-lower-barrier-to-entry-%E2%86%92-more-decentralization">&#x2714; Lower barrier to entry &#x2192; more decentralization</h3>
<p>The more GPUs that can join the network, the healthier and more globally distributed Livepeer becomes.</p>
<h3 id="%E2%9C%94-high-vram-gpus-4090-5090-etc-can-run-more-complex-models">&#x2714; High-VRAM GPUs (4090, 5090, etc.) can run more complex models</h3>
<p>Operators with modern cards gain additional earning opportunities for heavy models and emerging video-AI pipelines.</p>
<p><strong>Bottom line:</strong><br>
This runner is designed so <strong>more orchestrators can earn</strong> and <strong>more GPUs can be useful</strong> in Livepeer&#x2019;s AI future.</p>
<hr>
<h2 id="hardware-system-requirements">Hardware &amp; System Requirements</h2>
<p>Your orchestrator node must meet the following minimum standards.</p>
<h3 id="minimum-requirements"><strong>Minimum Requirements</strong></h3>
<ul>
<li><strong>GPU:</strong> NVIDIA GTX 1080 or better (&#x2265; 8GB VRAM)</li>
<li><strong>Driver:</strong> Latest NVIDIA drivers installed</li>
<li><strong>Docker:</strong> Installed + working</li>
<li><strong>NVIDIA Container Toolkit:</strong> Installed (enables CUDA inside Docker containers)</li>
</ul>
<p>Livepeer&#x2019;s official docs (<a href="https://docs.livepeer.org/ai/orchestrators/get-started?ref=livepeer.cloud">https://docs.livepeer.org/ai/orchestrators/get-started</a>) suggest 16GB VRAM, but <strong>Cloud SPE&#x2019;s 8GB-compatible runner removes this requirement</strong>.</p>
<h3 id="recommended-hardware"><strong>Recommended Hardware</strong></h3>
<ul>
<li>NVIDIA GTX 10-series or RTX 20-, 30-, or 40-series GPUs (save the 4090 or other large-VRAM cards for other Livepeer AI jobs)</li>
<li>8GB+ VRAM</li>
<li>Fast NVMe storage</li>
<li>CPU with &#x2265; 8 cores</li>
<li>&#x2265; 32GB RAM</li>
</ul>
<p>Lower latency and faster GPUs &#x2192; <strong>more job wins</strong>.</p>
<hr>
<h2 id="architecture-overview">Architecture Overview</h2>
<p>Here&#x2019;s the local architecture when running an AI Orchestrator and Ollama GPU runner on the same machine:</p>
<pre><code>Livepeer LLM Flow (Simplified)

Client (Gateway)
   |
   v
AI Orchestrator
   |
   v
Ollama AI Runner (llm_runner)
   |
   v
Ollama
   |
   v
GPU
</code></pre>
<h3 id="components">Components</h3>
<table>
<thead>
<tr>
<th>Component</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Ollama AI Runner (<code>llm_runner</code>)</strong></td>
<td>Translates Livepeer LLM pipeline requests &#x2192; Ollama API calls</td>
</tr>
<tr>
<td><strong>Ollama Server</strong></td>
<td>Loads and executes LLMs on your GPU</td>
</tr>
<tr>
<td><strong>GPU</strong></td>
<td>Executes inference kernels</td>
</tr>
<tr>
<td><strong>AI Orchestrator</strong></td>
<td>Receives jobs from the Livepeer network and routes them to the runner</td>
</tr>
</tbody>
</table>
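<p>Conceptually, the runner&#x2019;s role in the table above is a translation step: it maps a Livepeer LLM request onto Ollama&#x2019;s API. The sketch below is Python for readability (the actual runner is written in Rust, per its logs), and the Livepeer-side field names are simplified assumptions:</p>
<pre><code class="language-python"># Simplified sketch of the runner's translation step. The Livepeer-side
# field names are illustrative; the payload targets Ollama's /api/chat shape.

# Livepeer model IDs map to Ollama model tags (same model family).
MODEL_MAP = {"meta-llama/Meta-Llama-3.1-8B-Instruct": "llama3.1:8b"}

def to_ollama_request(livepeer_job):
    """Build an Ollama /api/chat payload from a simplified Livepeer LLM job."""
    return {
        "model": MODEL_MAP[livepeer_job["model_id"]],
        "messages": livepeer_job["messages"],
        "stream": livepeer_job.get("stream", False),
    }

payload = to_ollama_request({
    "model_id": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello"}],
})
print(payload["model"])  # llama3.1:8b
</code></pre>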
<p>For most operators, the orchestrator and runner sit on the <strong>same box</strong>.<br>
Advanced users can separate them using Livepeer Remote Workers &#x2014; but this requires more networking and support (ask in Discord).</p>
<hr>
<h2 id="deploying-the-ollama-based-ai-runner">Deploying the Ollama-Based AI Runner</h2>
<p>This section walks you through the full deployment.</p>
<h3 id="step-1-%E2%80%94-create-the-persistent-ollama-model-volume">Step 1 &#x2014; Create the persistent Ollama model volume</h3>
<p>This ensures your model stays downloaded after container restarts.</p>
<pre><code class="language-bash">docker volume create ollama
</code></pre>
<h3 id="step-2-%E2%80%94-docker-compose-stack">Step 2 &#x2014; Docker Compose Stack</h3>
<p>Create a <code>docker-compose.yml</code> with the following stack:</p>
<pre><code class="language-yaml">services:
  ollama-ai-runner:
    image: tztcloud/livepeer-ollama-runner:0.1.1
    container_name: llm_runner
    restart: unless-stopped
    runtime: nvidia
    # Uncomment this port if you want to verify the service is up
    #ports:
    #  - 8000:8000
    environment:
      - RUST_LOG=info
      - OLLAMA_BASE_URL=http://ollama:11434
       
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    runtime: nvidia
    # Uncomment this port if you want to verify the service is up
    #ports:
    #  - 11434:11434
    volumes:
      - ollama:/root/.ollama
    environment:
      - OLLAMA_GPU_ENABLED=true
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
              driver: nvidia
              count: all

volumes:
  ollama:
    external: true
</code></pre>
<h3 id="step-3-%E2%80%94-start-the-stack">Step 3 &#x2014; Start the stack</h3>
<pre><code class="language-bash">docker compose up -d
</code></pre>
<h3 id="step-4-%E2%80%94-download-the-model-inside-the-ollama-container">Step 4 &#x2014; Download the model inside the Ollama container</h3>
<p>Once the containers are running:</p>
<pre><code class="language-bash">docker exec -it ollama ollama pull llama3.1:8b
</code></pre>
<p>This stores the model inside the <code>ollama</code> Docker volume.</p>
<blockquote>
<p><strong>Note:</strong><br>
The Ollama model name (<code>llama3.1:8b</code>) and the Livepeer model name (<code>meta-llama/Meta-Llama-3.1-8B-Instruct</code>) are different, but they refer to the same model family.<br>
This is expected.</p>
</blockquote>
<hr>
<h2 id="configuring-the-ai-orchestrator">Configuring the AI Orchestrator</h2>
<p>You must update your <code>aiModels.json</code> to tell the orchestrator where your runner is located.</p>
<h3 id="step-5-%E2%80%94-edit-aimodelsjson">Step 5 &#x2014; Edit <code>aiModels.json</code></h3>
<p>Add:</p>
<pre><code class="language-json">[
    {
        &quot;pipeline&quot;: &quot;llm&quot;,
        &quot;model_id&quot;: &quot;meta-llama/Meta-Llama-3.1-8B-Instruct&quot;,
        &quot;warm&quot;: true,
        &quot;price_per_unit&quot;: 0.18,
        &quot;currency&quot;: &quot;USD&quot;,
        &quot;pixels_per_unit&quot;: 1000000,
        &quot;url&quot;: &quot;http://llm_runner:8000&quot;
    }
]
</code></pre>
<p>Important details:</p>
<ul>
<li><code>pipeline: &quot;llm&quot;</code> enables the LLM pipeline</li>
<li><code>model_id</code> must match the model Livepeer expects</li>
<li><code>url</code> uses the Docker container name <code>llm_runner</code> &#x2014; this resolves only when the orchestrator container shares the same Docker network</li>
<li><code>warm: true</code> tells the orchestration layer to preload the model</li>
</ul>
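<p>Before restarting the orchestrator, a quick sanity check can catch typos in the file. The helper below is a convenience sketch, not official Livepeer tooling; the required keys simply mirror the example entry above:</p>
<pre><code class="language-python">import json

# Convenience sketch (not official Livepeer tooling): confirm each entry
# in aiModels.json carries the keys used in the LLM example above.
REQUIRED_KEYS = {"pipeline", "model_id", "warm", "price_per_unit",
                 "currency", "pixels_per_unit", "url"}

def check_ai_models(raw):
    """Return a list of problems found in an aiModels.json document."""
    problems = []
    for i, entry in enumerate(json.loads(raw)):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            problems.append("entry %d is missing %s" % (i, sorted(missing)))
    return problems

EXAMPLE = """[{"pipeline": "llm",
               "model_id": "meta-llama/Meta-Llama-3.1-8B-Instruct",
               "warm": true, "price_per_unit": 0.18, "currency": "USD",
               "pixels_per_unit": 1000000, "url": "http://llm_runner:8000"}]"""
print(check_ai_models(EXAMPLE))  # [] means every required key is present
</code></pre>
<p>Point it at your real file with <code>check_ai_models(open("aiModels.json").read())</code>.</p>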
<h3 id="step-6-%E2%80%94-register-with-the-ai-service-registry">Step 6 &#x2014; Register with the AI Service Registry</h3>
<p>If you skip this step, you <strong>will not receive jobs</strong>.</p>
<p>Consult Livepeer docs or ask in Discord for the exact commands depending on your orchestrator setup.</p>
<hr>
<h1 id="verifying-the-deployment">Verifying the Deployment</h1>
<p>Once everything is launched, you should see specific logs.</p>
<h2 id="runner-startup-logs-llmrunner">Runner Startup Logs (<code>llm_runner</code>)</h2>
<pre><code class="language-text">INFO livepeer_ollama_runner: Starting livepeer-ollama-runner
INFO livepeer_ollama_runner: Ollama base URL: http://ollama:11434
INFO livepeer_ollama_runner: Bind address: 0.0.0.0:8000
INFO livepeer_ollama_runner: Server listening on 0.0.0.0:8000
</code></pre>
<p>This confirms:</p>
<ul>
<li>The runner reached the Ollama container</li>
<li>It&#x2019;s listening on port 8000</li>
<li>No authentication required (local only)</li>
</ul>
<h2 id="ollama-startup-logs">Ollama Startup Logs</h2>
<p>You should see:</p>
<pre><code class="language-text">level=INFO msg=&quot;llama runner started in 1.59 seconds&quot;
level=INFO msg=&quot;loaded runners&quot; count=1
level=INFO msg=&quot;waiting for llama runner to start responding&quot;
</code></pre>
<p>This means:</p>
<ul>
<li>GPU was detected</li>
<li>Ollama successfully registered its internal inference runner</li>
<li>The model is ready to load when requested</li>
</ul>
<hr>
<h2 id="when-you-receive-a-job">When You Receive a Job</h2>
<p>When the Livepeer gateway assigns an LLM job to your Orchestrator, <code>llm_runner</code> will log:</p>
<pre><code class="language-text">INFO llm_handler{model=Some(&quot;meta-llama/Meta-Llama-3.1-8B-Instruct&quot;)}:
livepeer_ollama_runner: Received LLM request

INFO llm_handler{model=Some(&quot;meta-llama/Meta-Llama-3.1-8B-Instruct&quot;)}:
livepeer_ollama_runner: Processing request with model=llama3.1:8b, stream=false
</code></pre>
<p>This verifies:</p>
<ul>
<li>Livepeer job &#x2192; orchestrator &#x2192; runner &#x2192; Ollama pipeline works</li>
<li>The model mapping is correct</li>
<li>You&#x2019;re officially serving LLM inference on the network</li>
</ul>
<hr>
<h2 id="gpu-verification-nvidia-smi">GPU Verification (nvidia-smi)</h2>
<p>Run:</p>
<pre><code class="language-bash">nvidia-smi
</code></pre>
<p>When jobs execute, you should see Ollama consuming VRAM:</p>
<pre><code>0   N/A  N/A          516194      C   /usr/bin/ollama         5364MiB
</code></pre>
<p>This confirms:</p>
<ul>
<li>GPU is exposed to Docker</li>
<li>Ollama is executing kernels on the GPU</li>
<li>The workload is actually running (not CPU fallback)</li>
</ul>
<hr>
<h2 id="verify-ai-capabilities">Verify AI Capabilities</h2>
<p>Visit <a href="https://tools.livepeer.cloud/ai/network-capabilities?ref=livepeer.cloud">https://tools.livepeer.cloud/ai/network-capabilities</a> and you should see the &quot;LLM&quot; pipeline with your orchestrator listed as &quot;Warm&quot;.</p>
<h1 id="optional-using-remote-workers">Optional: Using Remote Workers</h1>
<p>Livepeer supports &#x201C;AI Remote Workers&#x201D; &#x2014; allowing an orchestrator to run on one box and dispatch jobs to multiple remote GPU workers.</p>
<p>This is <strong>advanced</strong> and requires:</p>
<ul>
<li>Secure networking</li>
<li>Correct registration</li>
<li>Gateway reachability</li>
<li>Consistent worker health monitoring</li>
</ul>
<p>If you want to explore this:</p>
<p>&#x1F449; Join the Discord: <a href="https://discord.gg/xpKATpA7?ref=livepeer.cloud">https://discord.gg/xpKATpA7</a><br>
Ask in <strong>#orchestrating</strong>, tag <strong>@mike_zoop</strong></p>
<hr>
<h1 id="faq-initial-version-%E2%80%94-will-grow-over-time">FAQ (Initial Version &#x2014; Will Grow Over Time)</h1>
<h3 id="do-i-need-16gb-vram"><strong>Do I need 16GB VRAM?</strong></h3>
<p>No.<br>
Cloud SPE built this runner to support <strong>8GB GPUs</strong> like the GTX 1080, 1070 Ti, and 2080.<br>
Higher VRAM improves throughput, but 8GB works.</p>
<h3 id="do-i-need-to-run-the-orchestrator-and-runner-on-the-same-machine"><strong>Do I need to run the Orchestrator and Runner on the same machine?</strong></h3>
<p>Not required, but strongly recommended for simplicity.</p>
<h3 id="why-is-my-orchestrator-not-receiving-jobs"><strong>Why is my Orchestrator not receiving jobs?</strong></h3>
<p>Most common reasons:</p>
<ul>
<li>You did <strong>not register with the AI Service Registry</strong></li>
<li><code>aiModels.json</code> misconfigured</li>
<li>GPU too slow &#x2192; not competitive for jobs</li>
<li>Network issues</li>
<li>Runner not reachable by container name <code>llm_runner</code></li>
</ul>
<h3 id="why-use-ollama-instead-of-raw-pytorch-or-tensorrt"><strong>Why use Ollama instead of raw PyTorch or TensorRT?</strong></h3>
<p>Ollama provides:</p>
<ul>
<li>Simple Docker deployment</li>
<li>Fast quantized models</li>
<li>Low VRAM usage</li>
<li>Clean API for the runner</li>
<li>Massive model library</li>
</ul>
<hr>
<h1 id="final-notes">Final Notes</h1>
<p>If you hit issues, join the community and ask questions:</p>
<p>&#x1F449; Livepeer Discord: <a href="https://discord.gg/xpKATpA7?ref=livepeer.cloud">https://discord.gg/xpKATpA7</a><br>
Ask in <strong>#orchestrating</strong> and tag <strong>@mike_zoop</strong></p>
<p>You can also read more about my work at:</p>
<ul>
<li><a href="https://mikezupper.com/?ref=livepeer.cloud">https://mikezupper.com</a></li>
<li><a href="https://www.livepeer.cloud/">https://www.livepeer.cloud</a></li>
</ul>
<p>Cloud SPE is committed to lowering the barrier to entry, increasing GPU participation, and expanding Livepeer into a resilient, decentralized infrastructure layer for open AI.</p>
]]></content:encoded></item><item><title><![CDATA[Exciting New Livepeer Treasury Proposal: AI Performance Leaderboard]]></title><description><![CDATA[New Livepeer Treasury Proposal: AI Performance Leaderboard Livepeer.Cloud SPE Job Tester API Integration  Explorer Orchestrator]]></description><link>https://www.livepeer.cloud/livepeer-treasury-proposal-ai-performance-leaderboard/</link><guid isPermaLink="false">66acf1c0c0d9d80001ce9c58</guid><category><![CDATA[AI]]></category><category><![CDATA[Livepeer]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Video Generation]]></category><dc:creator><![CDATA[Mike Z]]></dc:creator><pubDate>Wed, 31 Jul 2024 15:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1638957360698-9e50b564da73?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDh8fG5leHQlMjBsZXZlbHxlbnwwfHx8fDE3MjI2MTA0MDh8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1638957360698-9e50b564da73?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDh8fG5leHQlMjBsZXZlbHxlbnwwfHx8fDE3MjI2MTA0MDh8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Exciting New Livepeer Treasury Proposal: AI Performance Leaderboard"><p>We are thrilled to announce that Livepeer.Cloud SPE has just created a new Livepeer Treasury Proposal for our second project: <strong>AI Performance Leaderboard</strong> to be integrated into the Livepeer Explorer. This initiative aims to significantly enhance the Livepeer AI Subnet by providing valuable metrics and performance insights.</p><h3 id="key-components-of-the-project">Key Components of the Project</h3><ol><li><strong>AI Job Tester</strong><ol><li>This component will submit AI jobs to each Orchestrator and rigorously test their performance. 
By running these tests, we will gather critical data on how well each Orchestrator handles AI tasks.</li></ol></li><li><strong>AI Leaderboard API</strong><ol><li>The API will be responsible for storing and managing data from the AI Job Tester. It will handle all API calls and ensure that the data is readily available for viewing and analysis.</li></ol></li><li><strong>Integration into Livepeer Explorer Orchestrator Performance Leaderboard</strong><ol><li>The final component involves integrating the AI performance metrics into the existing Livepeer Explorer UI. This will allow Orchestrators and Delegators to easily compare performance and make informed decisions.</li></ol></li></ol><h3 id="project-timeline">Project Timeline</h3><p>We anticipate that this project will take approximately 3 months to complete. Once implemented, the AI Performance Leaderboard will have a substantial impact on the Livepeer AI subnet, providing clearer insights and fostering better performance across the network.</p><h3 id="call-to-action">Call to Action</h3><p>We encourage all Orchestrators and Delegators to support this initiative. 
Your involvement will help ensure the success of this project and its positive impact on the Livepeer ecosystem.</p><p>For more details, you can review the full proposal <a href="https://explorer.livepeer.org/treasury/69112973991711207069799657820129915730234258793790128205157315299386501373337?ref=livepeer.cloud">here</a> and check out the pre-proposal discussion <a href="https://forum.livepeer.org/t/livepeer-cloud-pre-proposal-ai-metrics-and-visibility/2531/5?ref=livepeer.cloud">here</a>.</p><p>Stay tuned for updates, and thank you for your continued support!</p>]]></content:encoded></item><item><title><![CDATA[Financial Report: April 2024]]></title><description><![CDATA[A view into the budget spending alignment with Livepeer.Cloud SPE's Treasury Proposal commitments.]]></description><link>https://www.livepeer.cloud/financial-report-april-2024/</link><guid isPermaLink="false">66213c9d6140b8000181e63d</guid><category><![CDATA[Livepeer]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Financial]]></category><category><![CDATA[Report]]></category><dc:creator><![CDATA[Mike Z]]></dc:creator><pubDate>Tue, 30 Apr 2024 00:00:02 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.livepeer.cloud/content/images/2024/04/markus-spiske-XrIfY_4cK1w-unsplash.jpg" class="kg-image" alt loading="lazy" width="2000" height="1333" srcset="https://www.livepeer.cloud/content/images/size/w600/2024/04/markus-spiske-XrIfY_4cK1w-unsplash.jpg 600w, https://www.livepeer.cloud/content/images/size/w1000/2024/04/markus-spiske-XrIfY_4cK1w-unsplash.jpg 1000w, https://www.livepeer.cloud/content/images/size/w1600/2024/04/markus-spiske-XrIfY_4cK1w-unsplash.jpg 1600w, https://www.livepeer.cloud/content/images/2024/04/markus-spiske-XrIfY_4cK1w-unsplash.jpg 2000w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Photo by </span><a 
href="https://unsplash.com/@markusspiske?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><span style="white-space: pre-wrap;">Markus Spiske</span></a><span style="white-space: pre-wrap;"> on </span><a href="https://unsplash.com/photos/text-XrIfY_4cK1w?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><span style="white-space: pre-wrap;">Unsplash</span></a></figcaption></figure><p>In this financial update for Livepeer.Cloud SPE, we&apos;ve published a detailed breakdown of our budget and show that it has been used in full alignment with the approved SPE Treasury Proposal.</p><h2 id="summary-of-spend">Summary of Spend</h2><ol><li><strong>Total Spend</strong>: Every line item in our proposed budget has been published here as actual spend, ensuring full accountability. As shown, the total amount approved for the SPE has been allocated to its intended purpose. The data provided represents actual spend as of April 30, 2024. </li><li><strong>Livepeer Gateway:</strong> These funds represent the allocations made to set up and run the gateway. This includes the necessary reserve and deposit to pay Orchestrators for work completed.</li><li><strong>Infrastructure:</strong> We&apos;ve met our goals of providing quality and performance at a low monthly cost. </li><li><strong>Operations</strong>: Initial spend in this category is expected to be higher during the launch period and reduce to a steady state after month three of operations. </li><li><strong>Development, Documentation, Testing:</strong> Even though we&apos;ve spent less than what&apos;s common in the industry, we&apos;ve succeeded in developing, documenting, and testing our services, showing our commitment to value for the community. 
</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.livepeer.cloud/content/images/2024/04/image.png" class="kg-image" alt loading="lazy" width="2000" height="953" srcset="https://www.livepeer.cloud/content/images/size/w600/2024/04/image.png 600w, https://www.livepeer.cloud/content/images/size/w1000/2024/04/image.png 1000w, https://www.livepeer.cloud/content/images/size/w1600/2024/04/image.png 1600w, https://www.livepeer.cloud/content/images/2024/04/image.png 2398w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Click the image to enlarge.</span></figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Introducing Dream: AI-Powered Image and Video Generation on Livepeer.Cloud]]></title><description><![CDATA[Dream, a free to use playground, powered by the Livepeer AI Subnet. A web-based image-to-image, text-to-image, and image-to-video generation tool]]></description><link>https://www.livepeer.cloud/livepeer-cloud-gateway-ai-subnet/</link><guid isPermaLink="false">66269d6f6140b8000181e68f</guid><category><![CDATA[Livepeer]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[AI]]></category><category><![CDATA[Image Generation]]></category><category><![CDATA[Video Generation]]></category><category><![CDATA[Infrastructure]]></category><category><![CDATA[Dream]]></category><category><![CDATA[Stable Studio]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Platform]]></category><dc:creator><![CDATA[Mike Z]]></dc:creator><pubDate>Fri, 26 Apr 2024 17:42:12 GMT</pubDate><media:content url="https://www.livepeer.cloud/content/images/2024/04/joshua-sortino-LqKhnDzSF-8-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.livepeer.cloud/content/images/2024/04/joshua-sortino-LqKhnDzSF-8-unsplash.jpg" alt="Introducing Dream: AI-Powered Image and Video Generation on Livepeer.Cloud"><p>We are excited to announce the launch of 
<strong>Livepeer.Cloud Dream</strong>, the newest addition to <a href="https://www.livepeer.cloud/">Livepeer.Cloud</a> that transforms the way you create images and videos. Access Dream now at <a href="https://dream.livepeer.cloud/?ref=livepeer.cloud">https://dream.livepeer.cloud</a>, and dive into the future of AI-driven media generation.</p>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.livepeer.cloud/content/images/2024/05/dream-preview.png" class="kg-image" alt="Introducing Dream: AI-Powered Image and Video Generation on Livepeer.Cloud" loading="lazy" width="1282" height="811" srcset="https://www.livepeer.cloud/content/images/size/w600/2024/05/dream-preview.png 600w, https://www.livepeer.cloud/content/images/size/w1000/2024/05/dream-preview.png 1000w, https://www.livepeer.cloud/content/images/2024/05/dream-preview.png 1282w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">What AI Images will you dream of today?</span></figcaption></figure><p><strong>Alpha Release Available Now</strong><br>
Our user interface is still under development, but we couldn&#x2019;t wait to give you a sneak peek with our alpha release. Dream is designed to empower creators with sophisticated AI tools that make image and video generation both simple and intuitive.</p>
<p><strong>Open Source Innovation</strong><br>
In our commitment to community-driven development, we are thrilled to open source the StableStudio Livepeer AI Plugin. This initiative allows you to design your own interfaces and harness the power of Livepeer&apos;s AI Gateway Nodes for a tailored AI creation experience.</p>
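As an illustration of what building against an AI Gateway Node can look like, here is a minimal client sketch in Python. The endpoint path (<code>/text-to-image</code>), field names (<code>model_id</code>, <code>prompt</code>, <code>width</code>, <code>height</code>), and default model are assumptions for illustration; consult the StableStudio Livepeer AI Plugin source for the actual interface exposed by your gateway.

```python
import json
from urllib import request

# Hypothetical endpoint -- the real path depends on your AI Gateway
# deployment; see the StableStudio Livepeer AI Plugin for the actual API.
GATEWAY_URL = "https://dream.livepeer.cloud/text-to-image"


def build_payload(prompt, model_id="stabilityai/sd-turbo",
                  width=512, height=512):
    """Assemble a text-to-image request body (field names are assumptions)."""
    return {
        "model_id": model_id,
        "prompt": prompt,
        "width": width,
        "height": height,
    }


def text_to_image(prompt, **kwargs):
    """POST the payload to the gateway and return its JSON response."""
    body = json.dumps(build_payload(prompt, **kwargs)).encode("utf-8")
    req = request.Request(
        GATEWAY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

A custom front end would call <code>text_to_image("a sunrise over mountains")</code> and render the returned image URLs, leaving model selection and payment to the gateway node.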
<p><strong>Exciting Features on the Horizon</strong><br>
Our team is working on expanding Dream&apos;s capabilities with image-to-video transformation, the ability to upscale images to higher resolutions, and support for a more diverse range of AI models.</p>
<p><strong>Join the Creative Revolution</strong><br>
Dream is more than just a service; it&apos;s a playground for the imagination. This platform not only supports the imaging pipeline but also showcases the vast possibilities within the image-to-image and video AI ecosystem. We encourage developers, artists, and innovators to explore Dream, experiment with its capabilities, and push the boundaries of digital media creation.</p>
<p>Embark on your creative journey with Dream and start transforming your visions into stunning visual realities today!</p>
]]></content:encoded></item><item><title><![CDATA[Livepeer + Owncast = Self Hosted Streaming ❤️]]></title><description><![CDATA[<p><a href="https://www.livepeer.cloud/" rel="noreferrer">Livepeer Cloud</a> is proud to announce the alpha release of the integration between <a href="https://livepeer.org/?ref=livepeer.cloud" rel="noreferrer">Livepeer</a> and <a href="https://owncast.online/?ref=livepeer.cloud" rel="noreferrer">Owncast</a>.</p><p>This integration enables the feature called &quot;Stream Relay&quot; which enables self-hosted Owncast instances to relay their stream to a remote transcoding service like the Livepeer Cloud gateway, powered by the Livepeer Protocol.</p>]]></description><link>https://www.livepeer.cloud/livepeer-owncast-self-hosted-streaming/</link><guid isPermaLink="false">66019a9ca055440001c6d9a0</guid><category><![CDATA[Livepeer]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[RTMP]]></category><category><![CDATA[Streaming]]></category><category><![CDATA[Transcode]]></category><dc:creator><![CDATA[Mike Z]]></dc:creator><pubDate>Mon, 01 Apr 2024 15:05:10 GMT</pubDate><media:content url="https://www.livepeer.cloud/content/images/2024/04/possessed-photography-7tMrynb3aS0-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.livepeer.cloud/content/images/2024/04/possessed-photography-7tMrynb3aS0-unsplash.jpg" alt="Livepeer + Owncast = Self Hosted Streaming &#x2764;&#xFE0F;"><p><a href="https://www.livepeer.cloud/" rel="noreferrer">Livepeer Cloud</a> is proud to announce the alpha release of the integration between <a href="https://livepeer.org/?ref=livepeer.cloud" rel="noreferrer">Livepeer</a> and <a href="https://owncast.online/?ref=livepeer.cloud" rel="noreferrer">Owncast</a>.</p><p>This integration enables the feature called &quot;Stream Relay&quot; which enables self-hosted Owncast instances to relay their stream to a remote transcoding service like the Livepeer Cloud gateway, powered by the 
Livepeer Protocol.</p><p>This project has been under construction since November 2023 and is now ready for Owncast users to take advantage of. You can download the release on <a href="https://github.com/mikezupper/owncast/releases/tag/v0.1.3-RC1?ref=livepeer.cloud" rel="noreferrer">Github</a> or use the Docker image from <a href="https://hub.docker.com/r/tztcloud/owncast/tags?ref=livepeer.cloud" rel="noreferrer">Docker Hub</a>. For detailed instructions visit our <a href="https://www.livepeer.cloud/get-started" rel="noreferrer">Getting Started</a> guide.</p>]]></content:encoded></item><item><title><![CDATA[Unlocking Live Streaming Freedom: Running a Livepeer Gateway Node]]></title><description><![CDATA[<p>In the ever-evolving landscape of live streaming, content creators are constantly seeking more affordable, flexible, and decentralized solutions. Enter <a href="https://livepeer.org/?ref=livepeer.cloud" rel="noreferrer">Livepeer</a>, a protocol that empowers individuals to take control of their streaming infrastructure through its decentralized video transcoding capabilities. 
In this article, we&apos;ll explore how to run a Livepeer</p>]]></description><link>https://www.livepeer.cloud/how-to-run-a-livepeer-gateway-node/</link><guid isPermaLink="false">66018d76a055440001c6d998</guid><category><![CDATA[Livepeer]]></category><category><![CDATA[RTMP]]></category><category><![CDATA[Streaming]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Mike Z]]></dc:creator><pubDate>Fri, 29 Mar 2024 15:07:00 GMT</pubDate><media:content url="https://www.livepeer.cloud/content/images/2024/03/basil-james-iC4BsZQaREg-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.livepeer.cloud/content/images/2024/03/basil-james-iC4BsZQaREg-unsplash.jpg" alt="Unlocking Live Streaming Freedom: Running a Livepeer Gateway Node"><p>In the ever-evolving landscape of live streaming, content creators are constantly seeking more affordable, flexible, and decentralized solutions. Enter <a href="https://livepeer.org/?ref=livepeer.cloud" rel="noreferrer">Livepeer</a>, a protocol that empowers individuals to take control of their streaming infrastructure through its decentralized video transcoding capabilities. In this article, we&apos;ll explore how to run a Livepeer Gateway Node in Broadcaster Mode, and unlock a new era of streaming freedom.</p><h3 id="embracing-decentralization-and-cost-efficiency">Embracing Decentralization and Cost Efficiency</h3><p>One of the primary advantages of running a Livepeer Gateway Node is the ability to self-host at a fraction of the cost compared to Software as a Service (SaaS) or cloud-based solutions. Traditional streaming services often come with hefty price tags, making it challenging for independent creators and smaller organizations to afford reliable streaming infrastructure. 
By leveraging Livepeer&apos;s decentralized protocol, individuals can tap into a cost-effective alternative that doesn&apos;t compromise on performance or reliability.</p><h3 id="decentralizing-centralized-services">Decentralizing Centralized Services</h3><p>Livepeer&apos;s decentralized approach revolutionizes the streaming industry by challenging the dominance of highly centralized cloud-based services, particularly Real-Time Messaging Protocol (RTMP) and HTTP Live Streaming (HLS). Traditionally, these services are controlled by a handful of major corporations, leading to vendor lock-in and limited options for content creators. However, with Livepeer, users can break free from centralized control and embrace a more diverse and inclusive streaming ecosystem.</p><h3 id="freedom-from-vendor-lock-in">Freedom from Vendor Lock-in</h3><p>Running a Livepeer Gateway Node also offers the advantage of platform-neutral technology, eliminating the risk of vendor lock-in. Many streaming platforms require users to adhere to specific software or hardware requirements, locking them into proprietary systems and limiting their flexibility. In contrast, Livepeer&apos;s open and decentralized protocol ensures that users have the freedom to choose the tools and technologies that best suit their needs, without being tied to a single vendor.</p><h3 id="promoting-broadcaster-diversity">Promoting Broadcaster Diversity</h3><p>Another compelling reason to run a Livepeer Gateway Node is to contribute to the diversity of broadcasters within the Livepeer protocol. In a decentralized ecosystem, diversity is key to ensuring resilience, scalability, and inclusivity. 
By running a node, individuals can actively participate in expanding the network and empowering a broader range of content creators to share their stories and reach audiences worldwide.</p><h3 id="getting-started-with-livepeer-gateway-node">Getting Started with Livepeer Gateway Node </h3><p>So, how can you get started with running a Livepeer Gateway Node? The process is simpler than you might think. Whether you&apos;re a seasoned streaming veteran or a newcomer looking to dip your toes into the world of live streaming, Livepeer Cloud provides the tools and support you need to succeed. Check out the <a href="https://www.livepeer.cloud/get-started" rel="noreferrer">Getting Started</a> guide for all the details needed to leverage the Free-To-Use Livepeer Cloud infrastructure.<br><br>So why wait? Leverage the Livepeer Cloud transcoding services today and revolutionize the way you stream live content.</p>]]></content:encoded></item><item><title><![CDATA[Livepeer Cloud Proposal Earns Treasury Approval Vote]]></title><description><![CDATA[Livepeer Cloud reaches approval from the Livepeer Treasury!]]></description><link>https://www.livepeer.cloud/livepeer-cloud-spe-approve/</link><guid isPermaLink="false">65ea0e65ec059000017c87f6</guid><category><![CDATA[Livepeer]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[papa bear]]></dc:creator><pubDate>Mon, 12 Feb 2024 00:00:00 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.livepeer.cloud/content/images/2024/03/cyrus-crossan-ZqsY740eAOo-unsplash.jpg" class="kg-image" alt loading="lazy" width="2000" height="1126" srcset="https://www.livepeer.cloud/content/images/size/w600/2024/03/cyrus-crossan-ZqsY740eAOo-unsplash.jpg 600w, https://www.livepeer.cloud/content/images/size/w1000/2024/03/cyrus-crossan-ZqsY740eAOo-unsplash.jpg 1000w, 
https://www.livepeer.cloud/content/images/size/w1600/2024/03/cyrus-crossan-ZqsY740eAOo-unsplash.jpg 1600w, https://www.livepeer.cloud/content/images/2024/03/cyrus-crossan-ZqsY740eAOo-unsplash.jpg 2000w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Photo by </span><a href="https://unsplash.com/@cys_escapes?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><span style="white-space: pre-wrap;">Cyrus Crossan</span></a><span style="white-space: pre-wrap;"> on </span><a href="https://unsplash.com/photos/black-and-white-love-print-crew-neck-shirt-ZqsY740eAOo?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><span style="white-space: pre-wrap;">Unsplash</span></a></figcaption></figure><p>In a significant milestone for the <a href="https://livepeer.org/?ref=livepeer.cloud" rel="noreferrer">Livepeer</a> ecosystem, the Livepeer.cloud Special Purpose Entity (SPE) has secured approval from the <a href="https://explorer.livepeer.org/treasury/110409521297538895053642752647313688591695822800862508217133236436856613165807?ref=livepeer.cloud" rel="noreferrer">Livepeer Treasury</a>. The culmination of weeks of meticulous preparation, the <a href="https://forum.livepeer.org/t/livepeer-cloud-spe-proposal-draft/2235/7?ref=livepeer.cloud" rel="noreferrer">proposal</a> was submitted by the Livepeer Cloud team in January, with voting concluding on February 12, 2024.</p><p>The Livepeer.cloud SPE is designed to implement strategic plans for demand generation within the Livepeer network. The <a href="https://forum.livepeer.org/t/livepeer-cloud-spe-proposal-draft/2235/7?ref=livepeer.cloud" rel="noreferrer">proposal</a>, which underwent a thorough review process, has now received the green light following a seven-day voting period.</p><p>The Livepeer community actively participated in the voting process, with stakeholders expressing their support for the proposed strategies. 
The seven-day voting period allowed for a transparent and inclusive decision-making process, aligning with Livepeer&apos;s commitment to community-driven governance.</p><p>As demand for decentralized video infrastructure continues to grow, Livepeer aims to position itself as a leading player in the space. The approval of the Livepeer.cloud SPE signifies a collaborative effort between the development team and the broader Livepeer community. With a clear roadmap for demand generation in place, Livepeer is poised to strengthen its position as a decentralized video infrastructure platform, offering innovative solutions to users and developers alike.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://explorer.livepeer.org/treasury/110409521297538895053642752647313688591695822800862508217133236436856613165807?ref=livepeer.cloud"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Livepeer Explorer - Treasury</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://explorer.livepeer.org/favicon.ico" alt><span class="kg-bookmark-author">Treasury</span></div></div><div class="kg-bookmark-thumbnail"><img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" alt></div></a></figure>]]></content:encoded></item></channel></rss>