Closing the gap between AI and the physical world.

Griffin Labs trains frontier AI models and builds robotic systems for facilities environments — where traditional automation has always fallen short.

The Problem

Conventional robotics cannot operate here.

Facilities management is one of the largest industries in the world — and one of the least automated. The reason is not lack of demand. It is that the environments are fundamentally too variable for conventional robotics to operate in.

Labour-intensive at scale

Facilities management employs tens of millions of workers globally. The tasks are physical, repetitive, and exhausting — yet the industry has remained almost entirely manual, not because automation is unwanted, but because no conventional system has been able to handle the work.

Structured automation cannot operate here

Warehouse robots require fixed layouts, barcodes, and controlled conditions. Facilities are the opposite: variable layouts, shifting obstacles, unpredictable surfaces, and constantly changing conditions. Every attempt to apply industrial automation to facilities has failed at the same boundary.

The gap is an AI problem, not a hardware problem

The reason facilities have resisted automation is not mechanical — it is perceptual and adaptive. The environment changes faster than any pre-programmed system can respond. Solving this requires embodied AI: models that perceive, reason, and act in real time, trained on real-world data from real facilities.

Approach

From model to machine, we own every layer.

We train our own models on real sensor data, build our own robotic hardware, and deploy into live commercial facilities — hotels, airports, and office complexes across Southeast Asia. We own every layer because no single layer works without the others.

Embodied AI Models

We train end-to-end models that connect perception and language to physical action — grounded in real sensor data, operating in conditions that lab environments cannot reproduce.

Hardware for Unstructured Environments

We develop the physical robotic platforms, autonomy stack, and manipulation capabilities needed to execute useful work in messy, high-variation spaces where traditional automation struggles.

Facilities Deployment

We operate robots in live hotels, airports, and commercial buildings — not controlled labs. Our teleoperation, evaluation, and simulation pipelines are built for the variability of real environments.

Learning in the Field

Every deployment generates training data. Every dataset improves the model. Our systems become more capable with each run — through the feedback loop of real-world operation, not manual tuning.

Research

Research built for the physical world.

Our research program spans model architecture, learning algorithms, and data infrastructure — all designed around the constraints of real facilities environments.

Vision-Language-Action Models

We develop VLA models that ground language and visual understanding directly in physical action. Our models take sensor observations as input and output motor commands — trained end-to-end on real-world manipulation data collected across live facilities deployments.
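The observation-to-action mapping described above can be sketched as a minimal policy interface. This is an illustrative sketch only: the class name, feature encoding, and action dimensions are assumptions for exposition, not Griffin Labs' actual architecture, and the network is stubbed out where a trained model would run a forward pass.

```python
import numpy as np

class VLAPolicy:
    """Illustrative vision-language-action policy interface.

    Maps a camera observation plus a natural-language instruction to a
    low-level motor command, end to end. The "network" here is a stub;
    a real VLA model would be a trained neural network.
    """

    def __init__(self, action_dim: int = 7):
        self.action_dim = action_dim  # e.g. 6-DoF arm plus gripper

    def encode(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # Placeholder fusion of visual and language features.
        visual = image.astype(np.float32).mean(axis=(0, 1))  # per-channel mean
        text = np.array([float(len(instruction))])           # toy text feature
        return np.concatenate([visual, text])

    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # In a trained model this would be a forward pass emitting motor
        # commands (joint velocities or end-effector deltas).
        features = self.encode(image, instruction)
        rng = np.random.default_rng(0)
        weights = rng.standard_normal((self.action_dim, features.shape[0]))
        return np.tanh(weights @ features / 100.0)  # bounded command

policy = VLAPolicy()
obs = np.zeros((64, 64, 3), dtype=np.uint8)  # stand-in camera frame
command = policy.act(obs, "wipe the table")
```

The point of the sketch is the interface: sensor observation and instruction in, bounded motor command out, with no hand-written perception or planning stage in between.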

Reinforcement Learning from Real-World Data

We use reinforcement learning fine-tuned on deployment feedback — not synthetic rollouts. Every robot run in the field generates a reward signal. Our RL pipeline continuously updates policies based on what actually works in high-variation, uncontrolled environments.
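The loop described above — logged runs in, policy update out — can be sketched with a REINFORCE-style update. The reward definition, linear-Gaussian policy, and update rule here are assumptions chosen for brevity, not the pipeline's actual algorithm.

```python
import numpy as np

def reinforce_update(weights, episodes, lr=0.01):
    """One REINFORCE-style update from logged deployment episodes.

    Each episode is (features, action, reward): the observation features
    the policy saw, the action it took, and the scalar outcome of the run.
    Policy model (an assumption): action ~ N(weights @ features, I).
    """
    baseline = np.mean([r for _, _, r in episodes])  # variance-reduction baseline
    grad = np.zeros_like(weights)
    for features, action, reward in episodes:
        advantage = reward - baseline
        # Gradient of log N(action | weights @ features, I) w.r.t. weights.
        grad += advantage * np.outer(action - weights @ features, features)
    return weights + lr * grad / len(episodes)

# Toy usage: two logged runs; the higher-reward run pulls the policy toward
# the actions it took, the failed run pushes away from its actions.
rng = np.random.default_rng(0)
w = np.zeros((2, 3))
episodes = [
    (rng.standard_normal(3), rng.standard_normal(2), 1.0),   # successful run
    (rng.standard_normal(3), rng.standard_normal(2), -1.0),  # failed run
]
w_new = reinforce_update(w, episodes)
```

The design point the sketch illustrates is that the gradient is computed entirely from field data: no simulator rollouts appear anywhere in the update.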

Teleoperation & In-the-Wild Data Collection

We operate a teleoperation infrastructure to collect high-quality demonstration data directly in live facilities — not in labs or curated environments. This in-the-wild data is the foundation of our training pipeline and gives our models a distribution advantage that cannot be replicated in simulation.

Deployments & Partnerships

Deploy robots in your facility.

We are running pilots in commercial facilities across Southeast Asia and are open to research collaborations and investor conversations. Reach us directly.