Layer 4: Capability Marketplace

The capability marketplace is the unifying abstraction of Mehr. Every node advertises what it can do. Every node can request capabilities it lacks. The marketplace matches supply and demand through local, bilateral negotiation — no central coordinator.

This is the layer that makes Mehr a distributed computer rather than just a network.

The Unifying Abstraction

In Mehr, there are no fixed node roles. Instead:

  • A node with a LoRa radio and solar panel advertises: "I can relay packets 24/7"
  • A node with a GPU advertises: "I can run Whisper speech-to-text"
  • A node with an SSD advertises: "I can store 100 GB of data"
  • A node with a cellular modem advertises: "I can route to the internet"

Each of these is a capability — discoverable, negotiable, verifiable, and payable.

Capability Advertisement

Every node broadcasts its capabilities to the network:

NodeCapabilities {
  node_id: NodeID,
  timestamp: Timestamp,
  signature: Ed25519Signature,

  // ── CONNECTIVITY ──
  interfaces: [{
    medium: TransportType,
    bandwidth_bps: u64,
    latency_ms: u32,
    reliability: u8,          // 0-255 (avoids FP on constrained devices)
    cost_per_byte: u64,
    internet_gateway: bool,
  }],

  // ── COMPUTE ──
  compute: {
    cpu_class: enum { Micro, Low, Medium, High },
    available_memory_mb: u32,
    mhr_byte: bool,           // can run basic contracts
    wasm: bool,               // can run full WASM
    cost_per_cycle: u64,
    offered_functions: [{
      function_id: Hash,
      description: String,
      cost_structure: CostStructure,
      max_concurrent: u32,
    }],
  },

  // ── STORAGE ──
  storage: {
    available_bytes: u64,
    storage_class: enum { Volatile, Flash, SSD, HDD },
    cost_per_byte_day: u64,
    max_object_size: u32,
    serves_content: bool,
  },

  // ── AVAILABILITY ──
  uptime_pattern: enum {
    AlwaysOn, Solar, Intermittent, Scheduled(schedule),
  },
}
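The fixed-point `reliability: u8` field deserves a note: constrained devices advertise reliability as an integer in 0–255, and richer nodes convert it back to a fraction when scoring peers. A minimal sketch of that convention, with illustrative field values (the `Interface` struct here mirrors the schema above but is not protocol-normative):

```rust
// Sketch of one connectivity interface from an advertisement.
// Field names follow the NodeCapabilities schema; values are invented.
#[derive(Debug, Clone)]
struct Interface {
    bandwidth_bps: u64,
    latency_ms: u32,
    reliability: u8, // 0-255 fixed-point, no floats needed on an MCU
    cost_per_byte: u64,
    internet_gateway: bool,
}

impl Interface {
    /// Reliability as a fraction in [0.0, 1.0], for nodes that do have an FPU.
    fn reliability_fraction(&self) -> f64 {
        self.reliability as f64 / 255.0
    }
}

fn main() {
    let lora = Interface {
        bandwidth_bps: 5_000,
        latency_ms: 800,
        reliability: 230,
        cost_per_byte: 2,
        internet_gateway: false,
    };
    println!("{:.2}", lora.reliability_fraction()); // prints 0.90
}
```

The advertising side never divides; only consumers that want a ratio do the conversion.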

No Special Protocol Primitives

Heavy compute tasks such as ML inference, transcription, translation, and text-to-speech are not protocol primitives. They are compute capabilities offered by nodes that have the hardware to run them.

A node with a GPU advertises offered_functions: [whisper-small, piper-tts]. A consumer requests execution through the standard compute delegation path. The protocol is agnostic to what the function does — it only cares about discovery, negotiation, verification, and payment.
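The consumer side of that matching step can be sketched as follows. This is a hypothetical selection rule (cheapest advertised provider wins), with invented types standing in for the schema's `Hash` and `CostStructure`; the protocol itself only defines the advertisement, not the consumer's policy:

```rust
// Sketch: pick the cheapest node advertising a given function.
// Types and the "cheapest wins" policy are illustrative assumptions.
#[derive(Debug)]
struct OfferedFunction {
    function_id: &'static str, // stand-in for the Hash in the schema
    cost_per_call: u64,
    max_concurrent: u32,
}

#[derive(Debug)]
struct Advertisement {
    node_id: &'static str,
    offered_functions: Vec<OfferedFunction>,
}

/// Return the id of the cheapest node advertising `function_id`.
fn cheapest_provider(ads: &[Advertisement], function_id: &str) -> Option<&'static str> {
    ads.iter()
        .filter_map(|ad| {
            ad.offered_functions
                .iter()
                .find(|f| f.function_id == function_id)
                .map(|f| (ad.node_id, f.cost_per_call))
        })
        .min_by_key(|&(_, cost)| cost)
        .map(|(id, _)| id)
}

fn main() {
    let ads = vec![
        Advertisement {
            node_id: "gpu-workstation",
            offered_functions: vec![OfferedFunction {
                function_id: "whisper-small",
                cost_per_call: 40,
                max_concurrent: 4,
            }],
        },
        Advertisement {
            node_id: "raspberry-pi",
            offered_functions: vec![OfferedFunction {
                function_id: "whisper-small",
                cost_per_call: 90,
                max_concurrent: 1,
            }],
        },
    ];
    println!("{}", cheapest_provider(&ads, "whisper-small").unwrap());
    // prints "gpu-workstation"
}
```

A real consumer would weigh cost against latency, reliability, and uptime pattern, but the shape is the same: filter advertisements, rank locally, then open a bilateral negotiation with the winner.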

Emergent Specialization

Nodes naturally specialize based on hardware and market dynamics:

| Hardware | Natural Specialization | Earns From | Delegates |
| --- | --- | --- | --- |
| ESP32 + LoRa + solar | Packet relay, availability | Routing fees | Everything else |
| Raspberry Pi + LoRa + WiFi | Compute, LoRa/WiFi bridge | Compute delegation, bridging | Bulk storage |
| Mini PC + SSD + Ethernet | Storage, DHT, HTTP proxy | Storage fees, proxy fees | Nothing |
| Phone (intermittent) | Consumer, occasional relay | Relaying while moving | Almost everything |
| GPU workstation | Heavy compute (inference, etc.) | Compute fees | Nothing |

Capability Chains

Delegation cascades naturally:

Node A (LoRa relay)
└── delegates compute to → Node B (Raspberry Pi)
    └── delegates storage to → Node C (Mini PC + SSD)
        └── delegates connectivity to → Node D (Gateway)

Each link is a bilateral agreement with its own payment channel. No central coordination required.
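The "own payment channel per link" property can be sketched as a ledger keyed by (payer, payee) pairs. The `Channels` type and the amounts below are invented for illustration; the point is only that no balance ever spans more than one hop:

```rust
// Sketch: each bilateral link accumulates its own balance.
// No entry spans more than one hop, so links settle independently.
use std::collections::HashMap;

#[derive(Default)]
struct Channels {
    /// (payer, payee) -> amount owed on that one link.
    balances: HashMap<(&'static str, &'static str), u64>,
}

impl Channels {
    fn pay(&mut self, payer: &'static str, payee: &'static str, amount: u64) {
        *self.balances.entry((payer, payee)).or_insert(0) += amount;
    }
}

fn main() {
    let mut ch = Channels::default();
    // A pays B for compute; B pays C for storage; C pays D for connectivity.
    ch.pay("A", "B", 100);
    ch.pay("B", "C", 40);
    ch.pay("C", "D", 10);
    for ((payer, payee), owed) in &ch.balances {
        println!("{payer} -> {payee}: {owed}");
    }
}
```

Because each hop prices its own service, a node in the middle of a chain (like B above) keeps the spread between what it charges upstream and what it pays downstream.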