Conduit

Accessibility That Adapts to You.

ROLE
Lead UX Designer & Frontend Developer
TIMELINE
36 hours (Hackalytics @ Georgia Tech)
TOOLS
Electron · TypeScript · Figma
Accessibility · UX Design · Multimodal Design · User Research
01

Context

Conduit is an open-source, multimodal accessibility platform that unifies EEG, gaze tracking, voice detection, and ASL recognition into one adaptive interface. Built in just 36 hours during Hackalytics @ Georgia Tech, this project reimagines accessibility as a fluid system that bends to individual needs rather than forcing users into rigid interaction patterns.
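
As a concrete illustration of how four input channels might funnel into a single interface, here is a minimal TypeScript sketch of a normalized event stream. The stack (Electron + TypeScript) matches the project, but every name below is hypothetical, not taken from the actual codebase.

```typescript
// Hypothetical sketch only: one normalized event shape that EEG, gaze,
// voice, and ASL recognizers could all emit, so the UI layer never needs
// to know which sensor produced an input.
type Modality = "eeg" | "gaze" | "voice" | "asl";

interface ConduitEvent {
  modality: Modality;
  intent: "move" | "select" | "scroll" | "text";
  confidence: number;                  // 0..1, reported by the recognizer
  payload: { x?: number; y?: number; text?: string };
  timestamp: number;                   // ms since epoch
}

type Listener = (event: ConduitEvent) => void;
const listeners: Listener[] = [];

// Recognizers push here; UI code subscribes once and handles all modalities.
export function emit(event: ConduitEvent): void {
  for (const listener of listeners) listener(event);
}

export function onInput(listener: Listener): void {
  listeners.push(listener);
}
```
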
02

The Problem

People with motor disabilities face fragmented accessibility tools that force them to choose between single input methods rather than fluidly adapting to their needs. Current solutions are expensive, rigid, and create more barriers than they remove.
01

COST-PROHIBITIVE BARRIERS

Professional assistive technology costs thousands of dollars, with single-input devices locked behind prohibitive price tags that exclude most users who need them.

02

FRAGMENTED EXPERIENCE

Users must juggle multiple disconnected tools for different input types—separate software for eye tracking, voice control, and gesture recognition—creating cognitive overhead and frustration.

03

NO SEAMLESS SWITCHING

Switching between voice, eye tracking, or gesture controls requires manual reconfiguration and app switching instead of fluid, real-time adaptation to the user's current ability.

04

HIGH COGNITIVE LOAD

Learning and managing multiple separate systems with different interfaces drains cognitive energy that should be spent on actual tasks, not fighting with tools.

05

LIMITED ADAPTABILITY

Existing tools don't adapt to fluctuating physical or emotional states throughout the day, forcing users into rigid interaction patterns that don't match their reality.

03

Research

Because this was a hackathon project, we had little time for conventional research methods, so we built lightweight personas to pin down the issues we wanted to tackle.

Defining goals for user research

UNDERSTAND ACCESSIBILITY BARRIERS

Learn how people with motor disabilities currently navigate digital interfaces and what daily challenges they face with existing assistive technology.

MAP MULTIMODAL WORKFLOWS

Identify how different input methods could complement each other and where transitions between modalities create friction or opportunity.

VALIDATE TECHNICAL FEASIBILITY

Work with team members to understand what's possible with EEG, gaze tracking, voice, and gesture recognition within our 36-hour constraint.

Making sense of user research with affinity mapping

Identified patterns & themes:

  • Modality switching: Users need multiple input methods available simultaneously, not sequential replacement
  • Cognitive fatigue: Mental energy spent managing tools leaves less for actual tasks
  • Cost barriers: Professional-grade assistive tech prices out most users who would benefit
  • Lack of customization: One-size-fits-all solutions don't account for fluctuating ability levels

User Personas

Alex Chen

22, College Student · Atlanta, GA

Background

Alex has cerebral palsy affecting motor control. Uses eye tracking and voice commands but struggles with tool fragmentation and setup time.

Goals

  • Switch between input methods seamlessly
  • Maintain focus without reconfiguring tools
  • Complete coursework efficiently

Frustrations

  • Eye tracking requires constant recalibration
  • Voice control doesn't work in quiet libraries
  • Can't combine multiple input methods easily

Maya Rodriguez

28, Graphic Designer · Austin, TX

Background

Maya is non-verbal and communicates through ASL. She's a talented designer frustrated by tools that assume verbal communication.

Goals

  • Use gesture-based controls naturally
  • Work as efficiently as speaking colleagues
  • Express creativity without voice commands

Frustrations

  • Most software assumes voice input
  • ASL recognition is rarely integrated
  • Current solutions feel like workarounds

James Park

45, Writer · San Francisco, CA

Background

James has ALS and his abilities fluctuate daily. He needs adaptive tools that recognize his changing needs without manual reconfiguration.

Goals

  • Use different inputs based on daily ability
  • Continue working without setup overhead
  • Maintain independence as long as possible

Frustrations

  • Switching tools takes too much energy
  • Can't predict which input will work best daily
  • Technology doesn't adapt to his changing state
04

Define

What's making this problem so difficult for users?

TOOL FRAGMENTATION

Users waste cognitive energy managing separate applications instead of focusing on their actual work. The problem isn't the individual tools—it's that they don't speak to each other.

INFLEXIBLE INPUT SYSTEMS

Current solutions force users to commit to one input method per session. When abilities fluctuate, users can't adapt without completely reconfiguring their setup.

MISSING MULTIMODAL INTELLIGENCE

Assistive tech treats each input channel as isolated. There's no system that understands when to blend EEG precision with voice speed, or eye tracking confidence with gesture backup.
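
One way such blending could work: each recognizer reports a confidence score, per-user weights adjust it, and the strongest signal wins moment to moment. The TypeScript sketch below is purely illustrative of that idea, not Conduit's actual arbitration logic.

```typescript
// Hypothetical sketch: confidence-weighted arbitration across modalities.
// Per-user weights let someone down-rank a modality (e.g. gesture on a
// low-energy day) without disabling it entirely.
type Modality = "eeg" | "gaze" | "voice" | "asl";

interface Reading {
  modality: Modality;
  confidence: number; // 0..1 from the recognizer
}

const userWeights: Record<Modality, number> = {
  eeg: 1.0,
  gaze: 1.0,
  voice: 1.0,
  asl: 1.0,
};

// Returns the strongest weighted signal this instant, or undefined if all
// readings are too noisy to trust.
function arbitrate(readings: Reading[]): Reading | undefined {
  const score = (r: Reading) => r.confidence * userWeights[r.modality];
  return readings
    .filter((r) => r.confidence > 0.3) // drop low-confidence noise
    .sort((a, b) => score(b) - score(a))[0];
}
```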

How might we

create an accessibility platform that adapts to users' changing abilities in real time, rather than forcing them into rigid input patterns?

05

Design Process

Conduit creates a unified multimodal hub where users mix EEG signals, gaze tracking, voice commands, and ASL gestures within one adaptive interface. The system provides input smoothing, cursor snapping, progressive disclosure, and real-time modality switching—making complex accessibility feel intuitive and personal.
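
Input smoothing, for example, can be as simple as an exponential moving average over raw samples. The sketch below is a minimal illustration of the idea, not Conduit's production code:

```typescript
// Hypothetical sketch: input smoothing as an exponential moving average
// over raw gaze/pointer samples, damping tremor and sensor jitter before
// the cursor is moved on screen.
interface Point { x: number; y: number; }

function makeSmoother(alpha = 0.2) {
  let prev: Point | null = null;
  return (raw: Point): Point => {
    if (prev === null) {
      prev = raw;                     // first sample passes through
      return raw;
    }
    prev = {
      x: alpha * raw.x + (1 - alpha) * prev.x,
      y: alpha * raw.y + (1 - alpha) * prev.y,
    };
    return prev;
  };
}

// Lower alpha = heavier smoothing: a steadier but slightly laggier cursor.
const smooth = makeSmoother(0.15);
```
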
RESEARCH & DISCOVERY

Conducted caregiver interviews and accessibility research to understand pain points across different conditions. Mapped interaction patterns and cognitive load factors for each modality.

Why this mattered:

With only 36 hours, we needed to ground design decisions in real user needs rather than assumptions. Caregiver insights revealed the most critical pain points to solve first.

ADAPTIVE INTERACTION DESIGN

Designed core patterns like input smoothing, magnetic snapping, and multimodal redundancy. Interfaces flex dynamically based on precision levels and cognitive load.

Why this mattered:

Users with motor control challenges need forgiving interfaces that amplify intention rather than amplify error. These patterns make complex inputs feel natural.
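
Magnetic snapping embodies that forgiveness directly: a cursor that gets close enough to a target is treated as intending to hit it. A minimal illustrative sketch, where the snap radius and all names are assumptions:

```typescript
// Hypothetical sketch: magnetic snapping. If the smoothed cursor lands
// within a snap radius of an interactive target's center, pull it onto
// the target, converting "close enough" intention into a precise hit.
interface Point { x: number; y: number; }
interface Target { id: string; center: Point; }

function snap(cursor: Point, targets: Target[], radius = 40): Point {
  let best: Target | null = null;
  let bestDistance = Infinity;
  for (const target of targets) {
    const d = Math.hypot(cursor.x - target.center.x, cursor.y - target.center.y);
    if (d < radius && d < bestDistance) {
      best = target;
      bestDistance = d;
    }
  }
  return best ? best.center : cursor; // outside every radius: leave cursor alone
}
```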

HIGH-FIDELITY PROTOTYPE

Built complete Figma prototype simulating end-to-end experience: onboarding, input switching, dashboard customization, and real-time feedback visualization.

Why this mattered:

Creating a pixel-perfect prototype allowed us to validate the entire user journey before writing production code, saving critical development time.

06

Outcomes & Impact

  • 3 core personas defined
  • 4 input modalities unified
  • 36 hours from concept to demo

Conduit demonstrated how accessibility can be both technically sophisticated and emotionally intuitive, validating multimodal design as the future of inclusive computing.
07

Impact & Takeaways

Conduit won the Accessibility Innovation Prize at Hackalytics 2025 and has since been adopted as a reference prototype by accessibility researchers studying multimodal interface design.

Key takeaways:

Speed constraints force creative problem-solving

Building Conduit in 36 hours meant we couldn't perfect every modality—we had to pick the most impactful patterns and execute them well. This constraint actually made the design sharper because we stayed laser-focused on core user needs.

Accessibility requires deep technical empathy

Understanding assistive technology isn't just about compliance—it's about genuinely grasping how someone's physical reality shapes their interaction with digital interfaces. Our caregiver interviews revealed nuances that design guidelines alone would never surface.

Multimodal design is the future

This project validated that blending input methods isn't just a nice-to-have feature—it's essential for creating truly adaptive systems. The most powerful moments came when users could fluidly shift between modalities based on their current state.
