Executive Summary

The RIOS Pilot AI Core (Tier 3) is a sovereign, mobile supercomputing node designed for off-grid AI model training and heavy scientific simulation. Housed in a modified 20ft High-Cube ISO container, its core mission is to solve the “Data Gravity” problem by bringing high-performance computing (HPC) directly to remote data sources, thus bypassing the bandwidth limitations of cloud uploads. Tagged as “The Sovereign Brain,” it functions as the pinnacle of the DeReticular hardware ecosystem, distinguishing itself from Tier 1 and Tier 2 units which focus on AI inference.
Powered by a 15kW deployable solar array and a 60kWh battery bank, the unit’s “Power-First” design supports a data center-class compute core featuring Dual Intel Xeon Platinum processors and an NVIDIA A100 (80GB) Tensor Core GPU. This enables on-site retraining of Large Language Models (LLMs) and complex simulations for industries like remote mining, disaster response, and field genomics.
Strategically positioned as a low-volume, high-margin “Halo Product,” the AI Core targets national governments, research institutions, and large industrial enterprises. With a Manufacturer’s Suggested Retail Price (MSRP) of $185,000 and an estimated Cost of Goods Sold (COGS) of $92,500, it carries a 50% gross margin. The product is a complex, build-to-order system with a 10-12 week lead time and is subject to strict U.S. export compliance regulations due to its high-performance components.
---
1. Product Overview and Strategic Role
The RIOS Pilot AI Core (Tier 3), with the SKU RIOS-CORE-20FT-A100, is classified as an Expeditionary High-Performance Computing (HPC) unit or Mobile Data Center. Its primary strategic distinction is its capability for AI Training, whereas the lower-tier RIOS units are designed for AI Inference. This positions the Tier 3 unit as the central hub in a distributed sovereign cloud architecture, responsible for creating and updating the AI models used by other units at the tactical edge.
- Core Problem Solved: The unit directly addresses the “Data Gravity” problem, where massive datasets (terabytes) generated in remote locations (e.g., geological scans, drone swarm data) cannot be efficiently uploaded to centralized cloud services like AWS or Azure. The AI Core’s value proposition is that it “brings the cloud to the data.”
- Tagline: “The Sovereign Brain. Train AI Models Anywhere on Earth.”
- Halo Product Status: It is considered the flagship “Halo Product” for DeReticular, designed to validate the power of the entire ecosystem and establish the brand as a serious Defense & Industrial Grade technology provider.
- Market Position: It targets the top 5% of the market, including national governments with data sovereignty mandates (e.g., Uganda, Israel), research institutions, and industrial enterprises.
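The data-gravity arithmetic can be made concrete with a back-of-envelope sketch. The figures below (a 10 TB field dataset, a 100 Mbps satellite uplink) are illustrative assumptions, not specifications from this document:

```python
def upload_days(dataset_tb: float, uplink_mbps: float) -> float:
    """Days to push a dataset over a WAN uplink (ideal link, no protocol overhead)."""
    bits = dataset_tb * 1e12 * 8          # decimal TB -> bits
    seconds = bits / (uplink_mbps * 1e6)  # Mbps -> bits/s
    return seconds / 86_400               # seconds -> days

# Assumed example: a 10 TB geological survey over a 100 Mbps satellite link
print(f"{upload_days(10, 100):.1f} days")  # ≈ 9.3 days just to upload
```

At that rate the upload alone takes more than a week, before any cloud-side processing begins, which is the core of the "bring the cloud to the data" argument.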
2. System Architecture and Technical Specifications
The AI Core is built on a “Power-First” design philosophy, where the physical form factor is dictated by the energy requirements of its supercomputing hardware.
A. The Shell (Chassis & Containment)
The foundation is a robust, physically secure, and thermally managed enclosure.
- Form Factor: New/One-Trip 20ft High-Cube ISO Shipping Container.
- Dimensions: 20′ (L) × 8′ (W) × 9′6″ (H).
- Gross Weight: Approximately 12,000 – 12,500 lbs (5,443 – 5,670 kg) fully loaded.
- Thermal Management: The interior is fabricated with a Hot Aisle / Cold Aisle Containment system to isolate the hot exhaust from the server intakes, maximizing cooling efficiency.
- Physical Security: Features include steel-reinforced doors, biometric access control, and internal motion/vibration sensors.
- RF Shielding: An optional copper-foil lining is available for signal isolation, making the unit compliant for use as a Sensitive Compartmented Information Facility (SCIF).
B. Power Systems (The Reactor)
The unit is engineered for complete energy independence, capable of sustaining peak loads off-grid.
- Solar Generation: 12kW – 15kW peak capacity. This is achieved through a combination of bifacial roof panels and heavy-duty, manual fold-out “Solar Wings” on the East and West flanks that triple the solar capture area.
- Battery Storage: 40kWh – 60kWh industrial battery bank, expandable to 100kWh.
- Chemistry: High-Voltage (400V) LiFePO4 Stack for greater efficiency.
- Autonomy: Capable of sustaining the NVIDIA A100’s peak load through the night or providing 12-18 hours of operation at 50% HPC load without solar input.
- Power Management: Utilizes triple-redundant 15kW inverters (e.g., Victron Quattro, Sol-Ark 15k) or a single 30kW industrial 3-phase inverter to provide pure sine wave power suitable for sensitive HPC equipment.
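The stated 12-18 hour autonomy figure can be sanity-checked with a simple energy-balance sketch. The usable depth of discharge (90%) and the ~7 kW full HPC-plus-cooling draw below are assumptions chosen to illustrate the method; they are not specified in this document:

```python
def autonomy_hours(battery_kwh: float, depth_of_discharge: float,
                   load_kw: float) -> float:
    """Hours of off-grid runtime from the battery bank alone (no solar input)."""
    return battery_kwh * depth_of_discharge / load_kw

# Assumed figures: 60 kWh bank, 90% usable DoD, ~7 kW full HPC + cooling draw
full = autonomy_hours(60, 0.9, 7.0)        # full-load runtime
half = autonomy_hours(60, 0.9, 7.0 * 0.5)  # runtime at 50% HPC load
print(f"full load: {full:.1f} h, 50% load: {half:.1f} h")
```

Under these assumptions the 50%-load runtime lands at roughly 15 hours, inside the 12-18 hour window quoted above.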
C. Supercompute Core (The Brain)
This is the high-performance heart of the unit, designed for intensive computational tasks.
- Server: A high-density 4U GPU server (e.g., Dell PowerEdge XE8545 or Supermicro GPU SuperServer).
- CPU: Dual Intel Xeon Platinum 8300 Series processors, providing a total of 40-80 cores.
- AI Acceleration: A single NVIDIA A100 Tensor Core GPU with 80GB PCIe memory is standard. The system is upgradable to include up to four A100 GPUs. An NVIDIA H100 is also an option.
- Compute Power: ~9.7 TFLOPS FP64 (for scientific simulation) / up to ~600 TFLOPS for mixed-precision AI tensor operations.
- RAM: 512GB DDR4 ECC Registered memory.
- Storage: A 100TB+ NVMe All-Flash Array serves as a local “Data Lake” for high-speed data ingestion, with sustained rates up to 40 GB/s.
- Operating System: RIOS Sovereign Cloud OS (HPC Edition / Kubernetes).
- Connectivity: Dual bonded Starlink High-Performance terminals and fiber uplink ports.
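Taken at face value, the quoted storage figures imply the entire data lake can be filled in well under an hour. A quick sketch (using the 100 TB capacity and 40 GB/s rate from the list above):

```python
def fill_time_minutes(capacity_tb: float, rate_gb_s: float) -> float:
    """Minutes to fill the NVMe array at a sustained ingest rate."""
    return capacity_tb * 1e12 / (rate_gb_s * 1e9) / 60

print(f"{fill_time_minutes(100, 40):.0f} min")  # ≈ 42 min for 100 TB at 40 GB/s
```

In practice, real ingest is bounded by the source (sensors, drives being offloaded), so this is a ceiling, not a typical workflow time.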
D. Industrial Cooling System
To ensure peak performance in extreme environments, the cooling system is robust and redundant.
- System: Dual dedicated CRAC (Computer Room Air Conditioning) units rated at 18,000 to 36,000 BTU/hr each.
- Redundancy: An N+1 (Main + Backup) configuration ensures that the GPU never throttles due to heat, even in ambient temperatures of 50°C (122°F).
- Operating Temperature Range: The unit is rated to operate from -20°F to 120°F (-29°C to 49°C).
- Fire Suppression (Optional): A clean agent gas system (e.g., Novec 1230) can be installed.
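The N+1 claim can be checked with the standard conversion of IT load to heat rejection (1 kW ≈ 3,412 BTU/hr). The ~7 kW IT load below is an assumption for illustration, not a figure from this document:

```python
BTU_HR_PER_KW = 3412  # 1 kW of IT load ≈ 3,412 BTU/hr of heat to reject

def n_plus_1_ok(it_load_kw: float, unit_btu_hr: float) -> bool:
    """True if a single CRAC unit can carry the full heat load alone."""
    return it_load_kw * BTU_HR_PER_KW <= unit_btu_hr

# Assumed ~7 kW IT load against one 36,000 BTU/hr unit
print(n_plus_1_ok(7.0, 36_000))  # True: ~23,900 BTU/hr vs 36,000 capacity
```

Under that assumption, a single 36,000 BTU/hr unit carries the full load with roughly 50% headroom, so losing one unit does not force GPU throttling.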
3. Operational Model: The Sovereign Cloud Hub
The Tier 3 AI Core is designed to be the central “Region” in a distributed, sovereign cloud network.
- Data Aggregation: Tier 1 and Tier 2 units deployed at the edge collect data (e.g., images, sensor readings) and perform initial inference. They forward “hard cases”—data they cannot process or understand—to the Tier 3 unit.
- Local Learning & Retraining: The AI Core ingests this new data and uses its powerful compute core to retrain the master AI model, typically overnight.
- Over-the-Air (OTA) Updates: The updated, more intelligent model is then pushed back out to the Tier 1 and Tier 2 units via a local mesh network.
This creates a self-improving ecosystem that “gets smarter every day, without ever connecting to AWS or Azure,” ensuring complete data sovereignty and reducing the data-to-decision loop from weeks to hours.
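The aggregate → retrain → push cycle described above can be sketched in miniature. Every name here is hypothetical; none of these classes or functions are part of the actual RIOS Sovereign Cloud OS API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Tier 3 hub's nightly cycle (illustrative only).
@dataclass
class EdgeUnit:
    hard_cases: list = field(default_factory=list)  # low-confidence samples
    model_version: int = 0

    def pull_hard_cases(self) -> list:
        """Hand the queued hard cases to the hub and clear the local queue."""
        cases, self.hard_cases = self.hard_cases, []
        return cases

    def push_model(self, version: int) -> None:
        """Receive an OTA model update over the local mesh."""
        self.model_version = version

def nightly_cycle(edge_units: list, model_version: int) -> int:
    """Aggregate hard cases, 'retrain' the master model, push it back out."""
    batch = [c for u in edge_units for c in u.pull_hard_cases()]
    if batch:                    # retrain only when new data arrived
        model_version += 1       # stands in for an overnight A100 training run
    for u in edge_units:
        u.push_model(model_version)
    return model_version
```

The real system would replace the version bump with an actual training job on the A100 and sign the OTA artifact, but the control flow (collect, retrain, redistribute) is the same loop the three bullets describe.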
4. Financial and Commercial Analysis
The AI Core is a high-value asset with a specific pricing and fulfillment strategy.
A. Pricing and Margin
| Metric | Value | Notes |
| --- | --- | --- |
| MSRP | $185,000.00 USD | Based on the “Infrastructure Replacement” value proposition. |
| Wholesale / Partner Price | $155,000.00 USD | For approved partners and resellers. |
| Estimated COGS | $92,500.00 USD | Includes all hardware, fabrication, and labor. |
| Gross Margin | 50% ($92,500 gross profit) | Buffers against silicon price volatility and warranty risk. |
| Payment Terms | 40% Deposit | 30% at milestone (shell complete), 30% pre-shipment. |
B. Cost of Goods Sold (COGS) Breakdown
| Component | Estimated Cost | Details |
| --- | --- | --- |
| Shell & Fabrication | $14,500 | 20ft container, hot/cold aisle, RF shielding, solar wing bracing. |
| Power Systems | $28,000 | 14kW solar array, 60kWh battery bank, inverters. |
| Supercompute Core | $38,000 | Server, Dual Xeon CPUs, 512GB RAM, 100TB NVMe, Single A100. |
| Industrial Cooling | $4,500 | Dual 18,000 BTU mini-splits and fans. |
| Labor & Integration | $7,500 | 80 hrs fabrication, 50 hrs systems engineering @ $55/hr. |
| Total Estimated COGS | $92,500.00 | |
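The component figures reconcile with the headline numbers. A quick sanity script, using only values stated in the tables above:

```python
# COGS line items as stated in the breakdown table
cogs = {
    "Shell & Fabrication": 14_500,
    "Power Systems": 28_000,
    "Supercompute Core": 38_000,
    "Industrial Cooling": 4_500,
    "Labor & Integration": 7_500,
}
msrp = 185_000

total = sum(cogs.values())          # should match the stated $92,500 total
margin = (msrp - total) / msrp      # should match the stated 50% gross margin
print(total, f"{margin:.0%}")       # 92500 50%
```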
C. Recurring Revenue and Upgrades
- RIOS Core Support Subscription: A mandatory or highly recommended service priced at **$2,500/month** ($30,000 annually). This includes priority “Red Phone” support, remote thermal monitoring, and HPC software stack updates.
- Hardware Upgrades:
- Add 2nd A100 GPU: +$25,000
- Quad-GPU Upgrade: +$75,000 (est.)
- Satellite Uplink: +$3,000 – $5,000 (Dual Starlink High-Perf Kit)
5. Fulfillment and Export Compliance
The AI Core is a complex, build-to-order (BTO) asset with a strict fulfillment process and significant regulatory requirements.
- Lead Time: 10-12 weeks from deposit.
- Fulfillment Process:
- Weeks 1-3 (Compliance & Procurement): An Export Check is a critical first step to verify the end-user is not on the BIS Entity List. Silicon components (GPU, server) are sourced immediately.
- Weeks 4-7 (Heavy Fabrication): Involves cutting vents, installing partitions, and welding the articulating solar wing arms.
- Weeks 8-10 (HPC Integration): Server rack installation, thermal tuning under load, and software installation (RIOS OS, Kubernetes).
- Weeks 11-12 (Logistics): Requires a heavy-duty crane for transport. Commissioning often requires a DeReticular Field Engineer on-site.
- Export Control:
- Classification: The NVIDIA A100 GPU and high-performance server fall under ECCN 4A003.b and 5A002.
- Requirement: The product is RESTRICTED and subject to US Export Administration Regulations (EAR). All international sales require a completed End-User Statement and may require a BIS License Review.
6. Competitive Analysis
The RIOS Pilot AI Core offers a unique combination of mobility, power autonomy, and sovereign AI training capabilities that differentiate it from established competitors.
| Feature | RIOS Pilot AI Core (Tier 3) | AWS Outposts | Standard Modular Data Center |
| --- | --- | --- | --- |
| Price | $185,000 (Capex) | High Monthly Opex | ~$250,000+ (Capex) |
| Power | Included (15kW Solar) | Requires Grid | Requires Grid |
| Mobility | ISO Container | Stationary Rack | ISO Container (less integrated) |
| Data Sovereignty | 100% Local Storage | AWS Control Plane | Varies |
| AI Capability | Training (A100) | Primarily inference | Varies |
