
Infernet Protocol

An open-source, peer-to-peer protocol for distributed GPU inference tasks.

Read the whitepaper at INFERNET-PROTOCOL.md.

The architecture is outlined in INFERNET-ARCHITECTURE.md.



Technologies

JavaScript · P2P · WebSockets · REST API · Multi-GPU · Multi-CPU · Svelte · Hono · Electron · React Native · PWA · Docker

Platforms

Android · iOS · Windows · macOS · Linux · Web

Database

PocketBase

Repository Structure

  • desktop/ — Desktop application implementation
    • electron/ — Electron-specific code for desktop app
    • src/ — Desktop application source code
  • mobile/ — Mobile application implementation
    • src/ — Mobile application source code
  • web/ — Progressive Web App (PWA) implementation
    • src/ — Svelte 4 components (shared with desktop when possible)
    • server/ — Hono.js server implementation
  • src/ — Core protocol implementation
    • api/ — API endpoints and handlers
    • db/ — Database models and operations
    • network/ — P2P networking and communication
    • execution/ — Inference execution environment
    • identity/ — Identity and authentication
  • docker/ — Docker configuration for self-hosting
  • docs/ — Documentation, whitepaper, and assets

Getting Started

Prerequisites

Node.js and pnpm (all commands below use pnpm).

Clone the Repository

git clone https://github.com/profullstack/infernet-protocol.git
cd infernet-protocol
pnpm install

Core Protocol

To run the core protocol server:

pnpm start

For development with auto-restart:

pnpm dev

Desktop Application

The desktop application uses Electron with Svelte for the UI.

cd desktop
pnpm install

For development (runs both Vite dev server and Electron):

pnpm electron:dev

To build the desktop application:

pnpm electron:build

Mobile Application

The mobile application uses React Native with Expo.

cd mobile
pnpm install

To start the Expo development server:

pnpm start

To run on Android or iOS:

pnpm android
# or
pnpm ios  # macOS only

PocketBase Integration

All applications (desktop, mobile, and PWA) use PocketBase for data management. Each connects to a remote instance and seeds its node list from https://infernet.tech/nodes.

The API exposes a public /nodes route when running in server mode.
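As a rough illustration of how a client might tie these pieces together, the sketch below (Node.js with the PocketBase JavaScript SDK) fetches the public seed list and stores it in a local PocketBase instance. The nodes collection name, the local URL, and the shape of the /nodes response are assumptions for illustration, not documented values:

// Seed a local node list from the public directory.
// Assumptions: a 'nodes' collection exists locally, PocketBase runs on its
// default port, and https://infernet.tech/nodes returns a JSON array.
import PocketBase from 'pocketbase';

const pb = new PocketBase('http://127.0.0.1:8090');

async function seedNodes() {
  const res = await fetch('https://infernet.tech/nodes');
  const seeds = await res.json();

  for (const node of seeds) {
    await pb.collection('nodes').create(node); // store each seed node locally
  }
  console.log(`Seeded ${seeds.length} nodes`);
}

seedNodes().catch(console.error);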

PWA & Self-Hosting

The Progressive Web App (PWA) is designed for GPU/CPU farm operators who need to manage their infrastructure through a web interface.

PWA Development

The PWA uses Svelte 4 and Hono.js, sharing components with the desktop app where possible:

cd web
pnpm install
pnpm dev  # Start development server

To build the PWA for production:

pnpm build

Self-Hosting Mode

The application can be self-hosted on a server:

cd web
pnpm build
pnpm start:server  # Start production server

Docker Support

For containerized deployment:

# Build the Docker image
docker build -t infernet-protocol -f docker/Dockerfile .

# Run the container
docker run -p 3000:3000 -p 8080:8080 --gpus all infernet-protocol

This is particularly useful for GPU farm operators who need to manage multiple machines.

Visit https://infernet.tech and https://github.com/profullstack/infernet-protocol

Use Cases

The Infernet Protocol enables a decentralized marketplace for AI computation resources, with two primary use cases that leverage Lightning Network (LN) for micropayments:

1. Resource Provider: Donating or Renting GPU/CPU Time

As a resource provider in the Infernet Protocol network, you can:

  • Monetize Idle Computing Resources: Convert underutilized GPU/CPU capacity into a source of passive income by offering it to the network
  • Earn Lightning Network Payments: Receive instant micropayments in Bitcoin via the Lightning Network for every computation task your hardware processes
  • Set Custom Pricing Models: Define your own pricing based on hardware capabilities, availability schedules, and computation types
  • Build Reputation: Establish a reputation score based on reliability, speed, and quality of service, attracting more clients
  • Join Specialized Pools: Participate in specialized computation pools for specific AI model types or industries
  • Flexible Participation: Choose between full-time operation or casual participation during off-hours

The desktop application and PWA provide comprehensive dashboards for monitoring earnings, hardware utilization, and reputation metrics.
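As a purely hypothetical example of the custom pricing models mentioned above (this README does not define a schema, so every field name below is invented):

// Hypothetical provider-side pricing profile; field names are illustrative only.
const pricingProfile = {
  hardware: { gpu: 'RTX 4090', vramGb: 24 },
  availability: { days: ['sat', 'sun'], hours: '00:00-08:00' }, // off-hours participation
  rates: {
    inference: { satsPerGpuSecond: 2 },
    training: { satsPerGpuSecond: 5 },
  },
};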

2. Resource Consumer: Training and Inference on the P2P Network

As a consumer of computing resources, you can:

  • Access Distributed Computing Power: Tap into a global network of GPUs and CPUs without capital investment in hardware
  • Pay-as-You-Go with Lightning: Pay only for the actual computation used via instant Lightning Network micropayments
  • Scale Dynamically: Easily scale your computation needs up or down based on project requirements
  • Select Specialized Hardware: Choose specific hardware profiles optimized for your particular AI models
  • Prioritize Jobs: Adjust pricing to prioritize urgent computation tasks
  • Distribute Workloads: Split large jobs across multiple providers for faster processing
  • Ensure Privacy: Leverage secure computation options for sensitive data processing

The mobile and web applications provide intuitive interfaces for submitting jobs, tracking progress, and managing budgets.

Lightning Network Integration

The Infernet Protocol uses Lightning Network for all transactions, enabling:

  • Instant Micropayments: Process payments in milliseconds with minimal fees
  • Trustless Operation: No credit checks or deposits required; pay only for what you use
  • Global Accessibility: Anyone with internet access and Bitcoin can participate
  • Programmable Payments: Automated payments based on computation metrics
  • Privacy-Preserving: Transactions don't require revealing personal information

This payment infrastructure allows for a truly decentralized AI computation marketplace that operates efficiently at global scale.
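To make "programmable payments based on computation metrics" concrete, here is a minimal sketch; the metric and rate names are illustrative and not part of any documented API:

// Hypothetical metering function: price a job from its computation metrics.
function invoiceAmountSats(metrics, pricing) {
  const gpuCost = metrics.gpuSeconds * pricing.satsPerGpuSecond;
  const transferCost = metrics.megabytesOut * pricing.satsPerMegabyte;
  return Math.ceil(gpuCost + transferCost); // round up to whole satoshis
}

// Example: a 90-second inference job that returned 12 MB of output.
console.log(invoiceAmountSats(
  { gpuSeconds: 90, megabytesOut: 12 },
  { satsPerGpuSecond: 2, satsPerMegabyte: 1 }
)); // 192 sats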

Contact

For technical contributions or questions: protocol@infernet.tech

Discord · Reddit
