Our Products

Cutting-edge assistive technologies powered by AI, robotics, and computer vision, designed to empower people with motor neuron diseases and disabilities

Control Methods

Gaze
Blinking
Head motion
Facial gestures

Privacy-First Edge Computing

All our products process data locally using edge computing. No data is transmitted to the cloud—your privacy is protected.

URNights

Night Assistant

URNights enables you to call for assistance in seconds. It can be controlled by gaze, blinking, head motion, and facial gestures. The smart night-assistance system continuously monitors and analyses your facial muscles and eye movements via a specialised camera, and it works even in the dark.

The Challenge

Patients in the advanced stages of Amyotrophic Lateral Sclerosis (ALS), Spinal Muscular Atrophy (SMA), and other genetic or Motor Neuron Disorders (MND) are unable to speak or move their limbs. Most of these patients are on a permanent mechanical ventilator and breathe through a tracheostomy tube in their throat. Some patients with spinal cord injuries suffer from similar problems. Imagine for a moment that you are a conscious person with Severe Speech and Motor Impairment (SSMI), unable to speak or move. How can you ask for help with any of your needs in the middle of the night? Patients in this situation cannot seek help; they must wait until their caregivers wake up in the morning.

Our Solution

URNights, URATech's night assistant, was developed to address this problem. Patients with severe speech and motor impairment can now call for assistance in seconds, ensuring they never have to wait helplessly through the night again.

Key Features

Works even in complete darkness
State-of-the-art machine learning and AI
Edge computing—all processing happens locally
No data transmitted to the cloud
Multiple control modalities

Watch URNights in Action

URAComm

Assistive Communication Board


URAComm is a smart communication-board system that continuously monitors and analyses your facial muscles and eye movements via a specialised camera. It can be controlled by gaze, blinking, head motion, and facial gestures.
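URATech has not published URAComm's internals, but camera-based blink input of this kind is commonly built on the eye aspect ratio (EAR) computed from facial landmarks: the ratio collapses when the eye closes, and holding it low for a few frames distinguishes a deliberate blink from a reflexive one. The sketch below illustrates that general technique only; the landmark layout, threshold, and frame count are illustrative assumptions, not URAComm's actual parameters.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks ordered p1..p6 around the eye:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Open eyes give roughly 0.25-0.35;
    the value drops toward zero as the eyelids close."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

class BlinkDetector:
    """Reports a blink when the EAR stays below a threshold for a
    minimum number of consecutive frames (values are illustrative)."""
    def __init__(self, threshold=0.21, min_frames=3):
        self.threshold = threshold
        self.min_frames = min_frames
        self.closed_frames = 0

    def update(self, ear):
        """Feed one frame's EAR; returns True on the frame the eye
        reopens after a sufficiently long closure."""
        if ear < self.threshold:
            self.closed_frames += 1
            return False
        blinked = self.closed_frames >= self.min_frames
        self.closed_frames = 0
        return blinked
```

In a real system the landmarks would come from a face-tracking model running on the edge device, and the per-user threshold would be set during calibration.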

The Challenge

People with severe speech and motor impairments often lose the ability to communicate with their loved ones and caregivers, leading to isolation and reduced quality of life. Traditional communication methods become impossible when patients cannot speak or move their limbs.

Our Solution

URAComm runs state-of-the-art machine learning and AI algorithms locally on the camera and the attached computer. Using edge computing, all processing is performed on the device, and for ultimate privacy, users' data is never transmitted to the cloud. URATech's dedicated engineers or your local URATech representatives can advise on the best URAComm configuration for your personal needs.

Key Features

Multiple control modalities
Local processing for ultimate privacy
Customisable to individual needs
Professional configuration support
Works with various facial gestures

URVHeadCtrl

Your Vision-Based Head-Controlled Mouse


URVHeadCtrl was developed as an additional control modality: a broadly accessible, long-lasting computer-access system for people with degenerative conditions. It uses cutting-edge Computer Vision (CV), Machine Learning (ML), and Artificial Intelligence (AI) techniques.

The Challenge

Severe Speech and Motor Impairment (SSMI) from conditions such as Amyotrophic Lateral Sclerosis (ALS), Spinal Muscular Atrophy (SMA), and other genetic or Motor Neuron Disorders (MND) can prevent individuals from effectively using a mouse and keyboard, locking them out of technology. While various access tools are available, computer control methods that rely on a single input form are frequently ineffective or short-lived as symptoms change.

Our Solution

URVHeadCtrl pairs a camera with cutting-edge Computer Vision, Machine Learning, and AI techniques. It continuously monitors the user's head, eyes, blinking, and facial gestures, and controls the computer's mouse and keyboard from the user's head motions, providing a long-lasting accessibility solution.
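URVHeadCtrl's actual control mapping is not published. As a minimal sketch of how head-controlled pointing generally works, the class below maps head yaw and pitch (relative to a calibrated neutral pose) to cursor velocity, with a dead zone so small involuntary movements do not drift the pointer and exponential smoothing to reduce jitter. All parameter values are hypothetical.

```python
class HeadMouseMapper:
    """Maps head yaw/pitch in degrees (relative to a calibrated
    neutral pose) to a smoothed cursor velocity in pixels/second.
    Angles inside the dead zone produce no motion."""
    def __init__(self, dead_zone_deg=2.0, gain=25.0, smoothing=0.5):
        self.dead_zone = dead_zone_deg
        self.gain = gain        # pixels/sec per degree beyond the dead zone
        self.alpha = smoothing  # 0..1; higher = snappier, lower = smoother
        self.vx = 0.0
        self.vy = 0.0

    def _axis(self, angle):
        """Dead-zoned proportional response along one axis."""
        if abs(angle) <= self.dead_zone:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        return sign * (abs(angle) - self.dead_zone) * self.gain

    def update(self, yaw_deg, pitch_deg):
        """Feed one frame's head pose; returns (vx, vy)."""
        target_x = self._axis(yaw_deg)
        target_y = self._axis(-pitch_deg)  # looking up moves the cursor up
        self.vx += self.alpha * (target_x - self.vx)
        self.vy += self.alpha * (target_y - self.vy)
        return self.vx, self.vy
```

In practice the yaw/pitch input would come from a head-pose estimator running on the camera feed, and blinks or facial gestures would trigger clicks.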

Key Features

Camera-based—no physical contact needed
Adapts to changing symptoms
Controls both mouse and keyboard
Long-lasting accessibility solution
Cutting-edge AI technology

The KOBE Robot

Keep One Breathing Everywhere — An Autonomous Mobile Robotic Platform for Paediatric Respiratory Ventilation Assistance


KOBE (Keep One Breathing Everywhere) is a novel tethered, ventilator-carrying robot developed to empower paediatric patients who require round-the-clock mechanical ventilator support for breathing.

The Challenge

Paediatric patients requiring continuous mechanical ventilation are severely limited in their mobility and daily activities. Traditional ventilator setups confine children to beds or wheelchairs, significantly impacting their quality of life, development, and sense of independence during crucial formative years.

Our Solution

The robot employs person-following algorithms that integrate inputs from obstacle-detection sensors and a purpose-built tether sensor. The tether sensor plays several roles pivotal to the person-following functionality: it provides critical data on the robot's state relative to the child, and it gives the child a sense of security and companionship by simulating the connection to a leashed pet. The robot underwent a rigorous design process, and its performance was extensively tested across various indoor and outdoor environments.
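The published description does not detail KOBE's control law, but a tether-based follower of this kind can be understood as proportional control on two tether measurements: length error drives forward speed (a taut tether means the child is pulling ahead), and tether angle drives turning (an off-centre tether means the robot should rotate toward the child). The function below is a toy sketch of that idea; the function name, gains, limits, and target distance are all assumptions.

```python
def follow_command(tether_len_m, tether_angle_rad,
                   target_len_m=1.5, obstacle=False,
                   k_lin=0.8, k_ang=2.0, v_max=0.6, w_max=1.2):
    """Toy tether-following controller. Proportional control keeps the
    robot target_len_m behind the child (via tether length) and turns
    it toward the child (via tether angle, radians, 0 = straight ahead).
    Returns (linear m/s, angular rad/s); halts if an obstacle is seen."""
    if obstacle:
        return 0.0, 0.0                         # safety stop overrides following
    v = k_lin * (tether_len_m - target_len_m)   # taut tether -> speed up
    w = k_ang * tether_angle_rad                # off-centre tether -> turn
    clamp = lambda x, lim: max(-lim, min(lim, x))
    return clamp(v, v_max), clamp(w, w_max)
```

A real implementation would also fuse the obstacle sensors into steering (not just a binary stop) and rate-limit the commands so the ventilator payload is never jolted.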

Key Features

Autonomous person-following technology
Advanced obstacle detection sensors
Purpose-built tether sensor system
Tested in real-life scenarios
Indoor and outdoor capable

Watch The KOBE Robot in Action

URArm

Upper-Limb Orthotic Exoskeleton for Telerehabilitation with Multimodal User Interface


URArm is an upper-limb orthotic exoskeleton for telerehabilitation with a multimodal Human-Machine Interface (HMI). It can be controlled either by a rehabilitation therapist, working locally or remotely during telerehabilitation sessions, or by the user's own Surface Electromyography (sEMG) signals.
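URArm's sEMG pipeline is not described here, but a common first stage in sEMG-driven control is to rectify the raw signal and smooth it into an amplitude envelope, then compare the envelope against a calibrated threshold to decide whether the user is contracting. The sketch below shows that generic technique only; the window size and threshold are illustrative assumptions, not URArm's parameters.

```python
def semg_envelope(samples, window=50):
    """Full-wave rectify a raw sEMG sample stream and smooth it with a
    moving average to produce an amplitude envelope (one value per sample)."""
    env, buf, total = [], [], 0.0
    for s in samples:
        r = abs(s)              # rectification
        buf.append(r)
        total += r
        if len(buf) > window:   # slide the averaging window
            total -= buf.pop(0)
        env.append(total / len(buf))
    return env

def contraction_active(envelope_value, on_threshold=0.3):
    """Simple on/off intent detection: an envelope above the calibrated
    threshold means the user is contracting and assistance should engage."""
    return envelope_value >= on_threshold
```

A deployed controller would add hysteresis (separate on/off thresholds) and per-user calibration so fatigue or electrode placement does not cause spurious triggering.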

The Challenge

Loss of mobility in the upper extremities, such as paralysis, hemiparesis, or long-term muscle weakness, can result from various neuromuscular or neurological diseases, strokes, or accidents. Patients with these conditions may partially or fully lose control of their upper extremities, and traditional therapy methods may not restore the lost functions.

Our Solution

It has been shown that Robot-Assisted Rehabilitation Therapy (RART) can improve the outcome of rehabilitation efforts. Furthermore, robotic orthotic exoskeletons can partially replace some of the lost functions and positively affect patients' Activities of Daily Living (ADL). URArm presents a solution that enables both local and remote telerehabilitation sessions.

Key Features

Multimodal Human-Machine Interface
Telerehabilitation capability
sEMG signal control
Local and remote therapist control
Improves Activities of Daily Living

Interested in Our Products?

Sign up to receive updates and learn more about how URATech can help you or your loved ones.