
Conduit
Accessibility That Adapts to You.
Context
The Problem
COST-PROHIBITIVE BARRIERS
Professional assistive technology costs thousands of dollars; even single-input devices carry price tags that put them out of reach for most of the people who need them.
FRAGMENTED EXPERIENCE
Users must juggle multiple disconnected tools for different input types—separate software for eye tracking, voice control, and gesture recognition—creating cognitive overhead and frustration.
NO SEAMLESS SWITCHING
Switching between voice, eye tracking, or gesture controls requires manual reconfiguration and app switching instead of fluid, real-time adaptation to the user's current ability.
HIGH COGNITIVE LOAD
Learning and managing multiple separate systems with different interfaces drains cognitive energy that should be spent on actual tasks, not fighting with tools.
LIMITED ADAPTABILITY
Existing tools don't adapt to fluctuating physical or emotional states throughout the day, forcing users into rigid interaction patterns that don't match their reality.
Research
Since this was a hackathon project, we had limited time for conventional research methods, so we built a small set of personas to pin down the issues we wanted to tackle.
Defining goals for user research
UNDERSTAND ACCESSIBILITY BARRIERS
Learn how people with motor disabilities currently navigate digital interfaces and what daily challenges they face with existing assistive technology.
MAP MULTIMODAL WORKFLOWS
Identify how different input methods could complement each other and where transitions between modalities create friction or opportunity.
VALIDATE TECHNICAL FEASIBILITY
Work with team members to understand what's possible with EEG, gaze tracking, voice, and gesture recognition within our 36-hour constraint.
Making sense of user research with affinity mapping

Identified patterns & themes:
- Modality switching: Users need multiple input methods available simultaneously, not sequential replacement
- Cognitive fatigue: Mental energy spent managing tools leaves less for actual tasks
- Cost barriers: Professional-grade assistive tech prices out most users who would benefit
- Lack of customization: One-size-fits-all solutions don't account for fluctuating ability levels
User Personas
Alex Chen
Background
Alex has cerebral palsy that affects their motor control. They use eye tracking and voice commands but struggle with tool fragmentation and setup time.
Goals
- Switch between input methods seamlessly
- Maintain focus without reconfiguring tools
- Complete coursework efficiently
Frustrations
- Eye tracking requires constant recalibration
- Voice control doesn't work in quiet libraries
- Can't combine multiple input methods easily
Maya Rodriguez
Background
Maya is non-verbal and communicates through ASL. She's a talented designer frustrated by tools that assume verbal communication.
Goals
- Use gesture-based controls naturally
- Work as efficiently as speaking colleagues
- Express creativity without voice commands
Frustrations
- Most software assumes voice input
- ASL recognition is rarely integrated
- Current solutions feel like workarounds
James Park
Background
James has ALS and his abilities fluctuate daily. He needs adaptive tools that recognize his changing needs without manual reconfiguration.
Goals
- Use different inputs based on daily ability
- Continue working without setup overhead
- Maintain independence as long as possible
Frustrations
- Switching tools takes too much energy
- Can't predict which input will work best daily
- Technology doesn't adapt to his changing state
Define
What's making this problem so difficult for users?
TOOL FRAGMENTATION
Users waste cognitive energy managing separate applications instead of focusing on their actual work. The problem isn't the individual tools—it's that they don't speak to each other.
INFLEXIBLE INPUT SYSTEMS
Current solutions force users to commit to one input method per session. When abilities fluctuate, users can't adapt without completely reconfiguring their setup.
MISSING MULTIMODAL INTELLIGENCE
Assistive tech treats each input channel as isolated. There's no system that understands when to blend EEG precision with voice speed, or eye tracking confidence with gesture backup.
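To make the gap concrete, here is a minimal TypeScript sketch of what confidence-weighted modality arbitration could look like. Everything in it (the types, the 0.3 threshold, the fusion rule) is our illustrative assumption, not an existing assistive-tech API or Conduit's implementation:

```typescript
// Hypothetical sketch: each input channel reports a live confidence score,
// and the system blends pointing signals or falls back to the most
// trustworthy discrete command. Names and thresholds are illustrative.

type Modality = "eeg" | "gaze" | "voice" | "gesture";

interface InputSignal {
  modality: Modality;
  confidence: number; // 0..1, how sure the recognizer is right now
  point?: { x: number; y: number }; // pointing modalities (gaze, gesture)
  command?: string; // discrete modalities (voice, EEG)
}

function fuse(signals: InputSignal[]): InputSignal | null {
  // Blend pointing signals by confidence when any channel is trustworthy.
  const pointing = signals.filter((s) => s.point && s.confidence > 0.3);
  if (pointing.length > 0) {
    const total = pointing.reduce((sum, s) => sum + s.confidence, 0);
    const x = pointing.reduce((sum, s) => sum + s.point!.x * s.confidence, 0) / total;
    const y = pointing.reduce((sum, s) => sum + s.point!.y * s.confidence, 0) / total;
    const best = pointing.reduce((a, b) => (a.confidence >= b.confidence ? a : b));
    return { ...best, point: { x, y } };
  }
  // No trustworthy pointer: take the most confident discrete command, if any.
  const discrete = signals.filter((s) => s.command);
  if (discrete.length === 0) return null;
  return discrete.reduce((a, b) => (a.confidence >= b.confidence ? a : b));
}
```

The hard design question isn't the arithmetic; it's choosing blend and fallback rules per user and per moment, which is exactly the intelligence existing tools lack.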
How might we
create an accessibility platform that adapts to users' changing abilities in real time, rather than forcing them into rigid input patterns?
Design Process

RESEARCH & DISCOVERY
Conducted caregiver interviews and accessibility research to understand pain points across different conditions. Mapped interaction patterns and cognitive load factors for each modality.
Why this mattered:
With only 36 hours, we needed to ground design decisions in real user needs rather than assumptions. Caregiver insights revealed the most critical pain points to solve first.

ADAPTIVE INTERACTION DESIGN
Designed core patterns like input smoothing, magnetic snapping, and multimodal redundancy. Interfaces flex dynamically based on precision levels and cognitive load.
Why this mattered:
Users with motor control challenges need forgiving interfaces that amplify intention rather than error. These patterns make complex inputs feel natural; two of them are sketched below.
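A minimal TypeScript sketch of input smoothing and magnetic snapping, assuming function names, a smoothing weight, and a snap radius of our own choosing rather than Conduit's production values:

```typescript
interface Point { x: number; y: number; }

// Input smoothing: an exponential moving average damps tremor and sensor
// jitter. A lower alpha means heavier smoothing (more lag, less noise).
function smooth(prev: Point, raw: Point, alpha = 0.25): Point {
  return {
    x: prev.x + alpha * (raw.x - prev.x),
    y: prev.y + alpha * (raw.y - prev.y),
  };
}

// Magnetic snapping: if the cursor lands near an interactive target, pull it
// to the nearest target's center so imprecise input still resolves to clear intent.
function snap(cursor: Point, targets: Point[], radius = 40): Point {
  let best: Point | null = null;
  let bestDist = Infinity;
  for (const t of targets) {
    const d = Math.hypot(t.x - cursor.x, t.y - cursor.y);
    if (d < radius && d < bestDist) {
      best = t;
      bestDist = d;
    }
  }
  return best ?? cursor;
}
```

In a fully adaptive system, the smoothing weight and snap radius would themselves adjust to the user's measured precision, which is the "flex dynamically" part of the pattern.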

HIGH-FIDELITY PROTOTYPE
Built complete Figma prototype simulating end-to-end experience: onboarding, input switching, dashboard customization, and real-time feedback visualization.
Why this mattered:
Creating a pixel-perfect prototype allowed us to validate the entire user journey before writing production code, saving critical development time.
Outcomes & Impact
Conduit won the Accessibility Innovation Prize at Hackalytics 2025 and has since been adopted as a reference prototype by accessibility researchers studying multimodal interface design.
Key takeaways:
Speed constraints force creative problem-solving
Building Conduit in 36 hours meant we couldn't perfect every modality—we had to pick the most impactful patterns and execute them well. This constraint actually made the design sharper because we stayed laser-focused on core user needs.
Accessibility requires deep technical empathy
Understanding assistive technology isn't just about compliance—it's about genuinely grasping how someone's physical reality shapes their interaction with digital interfaces. Our caregiver interviews revealed nuances that design guidelines alone would never surface.
Multimodal design is the future
This project validated that blending input methods isn't just a nice-to-have feature—it's essential for creating truly adaptive systems. The most powerful moments came when users could fluidly shift between modalities based on their current state.
More Visuals