STAY UP-TO-DATE WITH SPHERON'S
LATEST INNOVATIONS
February 18th, 2024
Fizz Node Improvements
Stricter GPU Node Requirements:
Nodes must now provide valid CUDA / NVIDIA driver version details before connecting.
GPU checks are automatically skipped for CPU-only nodes.
Enhanced Logging & Diagnostics:
Improved logging for GPU-based Fizz nodes for better issue tracking and debugging.
Fixed bugs related to NVIDIA stats retrieval and VRAM parsing.
Stability & Performance Enhancements:
Resolved startup log issues, especially for large logs and loading bars.
Implemented automatic restarts on failure, with graceful exits (exit code 0).
Fixed crashes caused by incorrect memory/storage unit configurations.
Added a fallback mechanism that reads CPU units from the Fizz spec.
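The restart-on-failure behavior described above can be sketched as a small supervisor loop. This is an illustrative assumption of how such a wrapper might look, not the Fizz node's actual implementation; `run_with_restart` and its parameters are invented for this sketch:

```python
import subprocess
import sys
import time

def run_with_restart(cmd, max_restarts=5, delay=2):
    """Run cmd, restarting it on failure.

    An exit code of 0 is treated as a graceful exit and stops the
    loop; any other exit code triggers a restart, up to max_restarts.
    (Hypothetical sketch of the behavior described above.)
    """
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return 0  # graceful exit: do not restart
        restarts += 1
        if restarts > max_restarts:
            return result.returncode  # give up, surface the failure
        time.sleep(delay)  # back off briefly before restarting
```

A supervisor like this keeps a crashed node process running without masking deliberate shutdowns, since exit code 0 is never retried.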
Robust Testing & Monitoring:
Introduced smoke tests for CPU nodes (previously available only for GPU nodes).
Added regular bandwidth checks every 30 minutes; nodes whose 3-hour average falls below 50 Mbps will be closed.
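The threshold logic above (30-minute samples, 3-hour rolling average, 50 Mbps floor) can be sketched roughly as follows. The class name and structure are illustrative assumptions, not Spheron's actual code:

```python
from collections import deque

class BandwidthMonitor:
    """Rolling 3-hour average over 30-minute samples (6 samples).

    Hypothetical helper mirroring the check described above: a node
    is closed only when a full window averages below the threshold.
    """
    WINDOW = 6            # 3 hours / 30-minute checks
    THRESHOLD_MBPS = 50.0

    def __init__(self):
        # deque with maxlen automatically drops the oldest sample
        self.samples = deque(maxlen=self.WINDOW)

    def record(self, mbps):
        self.samples.append(mbps)

    def should_close(self):
        # Only judge once a full 3-hour window has accumulated.
        if len(self.samples) < self.WINDOW:
            return False
        return sum(self.samples) / len(self.samples) < self.THRESHOLD_MBPS
```

Averaging over the whole window, rather than closing on a single bad sample, tolerates brief bandwidth dips while still catching sustained underperformance.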
Slark Node Enhancements
8k+ Sybil Node Removal: Implemented strict checks to identify and remove Sybil Fizz nodes that fake their resources. Over 8,000 flagged nodes will have their points fully slashed.
Node Status Updates: Added two new statuses, "Flagged" and "Paused", for Fizz nodes that repeatedly fail resource, version, or uptime checks.
Challenger API Optimization: Improved the process for selecting reasons when slashing nodes.
Resource Metrics: Now calculating and displaying average resource usage for each era.
Fizz Node App Updates
Phase 3 Stellar NFT: The new Stellar phase has launched on the Fizz network, introducing a registration fee for running a node along with a 1.5x points multiplier for all registered nodes.
Performance Details Table: View your node's comprehensive 5-day performance history, including points earned (or missed) and slashing reasons, to help you improve performance.
Console App Enhancements
Models Marketplace:
Deploy Popular LLMs: You can deploy top models like Llama 3.2, Deepseek, Gemma, and Qwen directly from the marketplace.
Easy Navigation: Access the marketplace via the "Models" tab and filter models by author, precision, and type.
Simple Deployment: Select a model, choose GPU specifications and duration, and deploy in a few clicks. A dedicated model details page provides an OpenAI-compatible API, a chat playground, and real-time logs.
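Because the model details page exposes an OpenAI-compatible API, a deployed model can be queried with a standard chat-completions request. A minimal sketch using only the standard library follows; the base URL, API key, and model id are placeholders you would take from the model details page, not real Spheron values:

```python
import json
import urllib.request

def chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-compatible /v1/chat/completions request.

    base_url, api_key, and model are placeholders assumed to come
    from the model details page after deployment.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```

Passing the returned request to `urllib.request.urlopen` would send it and yield the usual OpenAI-style JSON response; any OpenAI-compatible client library should work the same way against such an endpoint.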
Performance Boost:
Enhanced the performance of the GPU marketplace page by optimizing template fetching (for Jupyter, VSCode & Ollama) and GPU availability retrieval.
Core Updates & Fixes
CLI Update Command: Fixed the deployment update command in the CLI.
Gateway Deployment Change: The gateway Helm chart is no longer deployed by default on providers. Providers needing a gateway must manually deploy and configure it as needed.
Migration to Base Sepolia: We've successfully migrated to Base Sepolia to enhance the overall experience for our GPU compute infrastructure. This transition offers better tooling and more efficient management of GPU resources, making it easier and faster to build and deploy AI agents. Base Sepolia also unlocks greater scalability for infrastructure, allowing AI agents on the network to scale seamlessly as demand grows. With gasless transactions now available, users will benefit from smoother operations, reduced costs, and a significantly improved user experience.
Mitrasish Mukherjee
Cofounder & Product Lead