An open-source, peer-to-peer protocol for distributed GPU inference tasks.
Read the whitepaper at INFERNET-PROTOCOL.md.
The architecture is outlined in INFERNET-ARCHITECTURE.md.
- Technologies
- Platforms
- Database
- Repository Structure
- Getting Started
- PWA & Self-Hosting
- Use Cases
- Contact
- desktop/ — Desktop application implementation
  - electron/ — Electron-specific code for the desktop app
  - src/ — Desktop application source code
- mobile/ — Mobile application implementation
  - src/ — Mobile application source code
- web/ — Progressive Web App (PWA) implementation
  - src/ — Svelte 4 components (shared with desktop when possible)
  - server/ — Hono.js server implementation
- src/ — Core protocol implementation
  - api/ — API endpoints and handlers
  - db/ — Database models and operations
  - network/ — P2P networking and communication
  - execution/ — Inference execution environment
  - identity/ — Identity and authentication
- docker/ — Docker configuration for self-hosting
- docs/ — Documentation, whitepaper, and assets
- Node.js (v18 or later)
- pnpm (v10 or later)
- Expo CLI (for mobile development)
- Android Studio (for Android development)
- Xcode (for iOS development, macOS only)
```
git clone https://github.com/profullstack/infernet-protocol.git
cd infernet-protocol
pnpm install
```
To run the core protocol server:
```
pnpm start
```
For development with auto-restart:
```
pnpm dev
```
The desktop application uses Electron with Svelte for the UI.
```
cd desktop
pnpm install
```
For development (runs both Vite dev server and Electron):
```
pnpm electron:dev
```
To build the desktop application:
```
pnpm electron:build
```
The mobile application uses React Native with Expo.
```
cd mobile
pnpm install
```
To start the Expo development server:
```
pnpm start
```
To run on Android or iOS:
```
pnpm android
# or
pnpm ios # macOS only
```
All applications (desktop, mobile, and PWA) use PocketBase for data management and connect to a remote P2P instance to fetch seed nodes from https://infernet.tech/nodes. When running in server mode, the API exposes a public /nodes route.
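As an illustration, a client could query the public /nodes route and filter for providers that are currently available. The response shape below (id, gpu, available) is an assumption for the sketch, not the protocol's actual schema:

```javascript
// In practice the list would come from a running node, e.g.:
//   const nodes = await fetch('https://infernet.tech/nodes').then((r) => r.json());
// Here we use a hypothetical sample payload instead.
const sampleResponse = [
  { id: 'node-a', gpu: 'RTX 4090', available: true },
  { id: 'node-b', gpu: 'A100', available: false },
];

// Keep only nodes currently accepting work.
function availableNodes(nodes) {
  return nodes.filter((node) => node.available);
}

console.log(availableNodes(sampleResponse).map((n) => n.id)); // [ 'node-a' ]
```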
The Progressive Web App (PWA) is designed for GPU/CPU farm operators who need to manage their infrastructure through a web interface.
The PWA uses Svelte 4 and Hono.js, sharing components with the desktop app where possible:
```
cd web
pnpm install
pnpm dev # Start development server
```
To build the PWA for production:
```
pnpm build
```
The application can be self-hosted on a server:
```
cd web
pnpm build
pnpm start:server # Start production server
```
For containerized deployment:
```
# Build the Docker image
docker build -t infernet-protocol -f docker/Dockerfile .

# Run the container
docker run -p 3000:3000 -p 8080:8080 --gpus all infernet-protocol
```
This is particularly useful for GPU farm operators who need to manage multiple machines.
Visit https://infernet.tech and https://github.com/profullstack/infernet-protocol
The Infernet Protocol enables a decentralized marketplace for AI computation resources, with two primary use cases that leverage Lightning Network (LN) for micropayments:
As a resource provider in the Infernet Protocol network, you can:
- Monetize Idle Computing Resources: Convert underutilized GPU/CPU capacity into a source of passive income by offering it to the network
- Earn Lightning Network Payments: Receive instant micropayments in Bitcoin via the Lightning Network for every computation task your hardware processes
- Set Custom Pricing Models: Define your own pricing based on hardware capabilities, availability schedules, and computation types
- Build Reputation: Establish a reputation score based on reliability, speed, and quality of service, attracting more clients
- Join Specialized Pools: Participate in specialized computation pools for specific AI model types or industries
- Flexible Participation: Choose between full-time operation or casual participation during off-hours
The desktop application and PWA provide comprehensive dashboards for monitoring earnings, hardware utilization, and reputation metrics.
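A custom pricing model can be as simple as a rate table keyed by hardware class. The classes and per-second rates below are purely illustrative placeholders, not values defined by the protocol:

```javascript
// Illustrative only: per-second rates (in satoshis) by hardware class.
// Neither these classes nor these rates come from the protocol itself.
const RATE_SATS_PER_SECOND = {
  'consumer-gpu': 5,
  'datacenter-gpu': 25,
};

// Quote a job price from the hardware class and an estimated duration.
function quoteSats(hardwareClass, durationSeconds) {
  const rate = RATE_SATS_PER_SECOND[hardwareClass];
  if (rate === undefined) {
    throw new Error(`unknown hardware class: ${hardwareClass}`);
  }
  return rate * durationSeconds;
}

console.log(quoteSats('datacenter-gpu', 120)); // 3000
```

A real provider would extend this with availability schedules and per-model surcharges, but the quote still reduces to rate × duration.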
As a consumer of computing resources, you can:
- Access Distributed Computing Power: Tap into a global network of GPUs and CPUs without capital investment in hardware
- Pay-as-You-Go with Lightning: Pay only for the actual computation used via instant Lightning Network micropayments
- Scale Dynamically: Easily scale your computation needs up or down based on project requirements
- Select Specialized Hardware: Choose specific hardware profiles optimized for your particular AI models
- Prioritize Jobs: Adjust pricing to prioritize urgent computation tasks
- Distribute Workloads: Split large jobs across multiple providers for faster processing
- Ensure Privacy: Leverage secure computation options for sensitive data processing
The mobile and web applications provide intuitive interfaces for submitting jobs, tracking progress, and managing budgets.
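Distributing a workload can be sketched as splitting a batch of tasks across providers round-robin. The task and provider identifiers below are hypothetical; real scheduling would also weigh price, reputation, and hardware profile:

```javascript
// Sketch: assign tasks to providers round-robin.
// Task names and provider IDs are placeholders for illustration.
function distribute(tasks, providers) {
  const assignments = new Map(providers.map((p) => [p, []]));
  tasks.forEach((task, i) => {
    assignments.get(providers[i % providers.length]).push(task);
  });
  return assignments;
}

const plan = distribute(['t1', 't2', 't3', 't4', 't5'], ['node-a', 'node-b']);
console.log(plan.get('node-a')); // [ 't1', 't3', 't5' ]
```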
The Infernet Protocol uses Lightning Network for all transactions, enabling:
- Instant Micropayments: Process payments in milliseconds with minimal fees
- Trustless Operation: No need for credit checks or deposits - pay only for what you use
- Global Accessibility: Anyone with internet access and Bitcoin can participate
- Programmable Payments: Automated payments based on computation metrics
- Privacy-Preserving: Transactions don't require revealing personal information
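Programmable payments based on computation metrics might look like the sketch below: sats owed are derived from measured GPU-seconds at an agreed rate. The field names are assumptions for illustration, and actual settlement would happen through a Lightning invoice, which is outside this sketch:

```javascript
// Sketch of metered billing: sats owed for a completed task, computed
// from measured GPU-seconds at an agreed per-second rate. Rounded up
// so fractional usage is never billed as zero.
function satsOwed(usage, ratePerGpuSecond) {
  return Math.ceil(usage.gpuSeconds * ratePerGpuSecond);
}

console.log(satsOwed({ gpuSeconds: 42.5 }, 10)); // 425
```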
This payment infrastructure allows for a truly decentralized AI computation marketplace that operates efficiently at global scale.
For technical contributions or questions: protocol@infernet.tech