
Bigger inputs.
Smarter models.
Smaller footprints.

Embedded software that breaks through CPU/GPU processing and memory constraints: training high data-intensity models on memory-limited GPUs (where hundreds of gigabytes would otherwise be required) and running inference on edge devices in single-digit megabytes (instead of hundreds of megabytes).


01

Edge AI in Constrained Devices

Multiple CNNs running in parallel at the edge with minimal CPU/GPU and memory usage.

02

ML Training and Inference for Extremely High Data Intensity

Up to 4K-pixel arrays run on standard object-detection models; more data means better learning and higher accuracy.

Real World Applications

Utilities:
Grid-Edge Intelligence

Real-time anomaly detection and classification for predictive grid maintenance. 

Industrial: Electrical Panel and Motor Current Analysis 

Condition-based monitoring of on-premise electrical infrastructure and powered equipment. 

Medtech:
Diagnostic Imaging

Edge AI for resource-constrained devices; enhanced ML and inference for extremely high data-intensity applications. 

Upcoming Event: Itron Inspire 2025

Gordian Co-founder Shekar Mantha will present a technical briefing at Itron Inspire 2025 on Monday, October 27th at 11:45 AM in Orlando, Florida.

The talk will cover Gordian's edge anomaly detection and classification applications for next-generation grid-edge intelligence.

Edge ML inference

with minimal CPU/GPU usage, memory footprint, and power draw.

Why Gordian?

Shared memory across multiple models

for extremely high data-intensity model training. 

Workflow-friendly

drop-in embedded software layer; no model customization required.
