I designed Moon Surgical’s first AI analytics platform from scratch. Surgeons had no way to learn from their robotic procedures. I built a system that turns raw surgical data into insights they actually act on. 58% increase in robot utilization after launch.
Maestro Insights Platform — live since December 2024
Maestro Insights has two faces: a surgeon dashboard for personal performance, and an admin dashboard for hospital-wide robot management. Both pull from the same data the robot collects during every procedure.
The surgeon’s dashboard opens with trend cards that surface changes in their performance KPIs. No digging, no spreadsheets. If your start time improved or your case duration spiked, you see it immediately.
Interactive charts show procedure volume over time. Surgeons track whether they’re hitting their targets, and the data updates after every case.
Each case gets a detailed breakdown: timing, instruments used, staff, and notes. The AI synopsis generates a summary with actionable takeaways. No manual review needed.
Recent cases with timing, staff, and procedure details
Clicking into a case reveals the AI-generated synopsis — an LLM-written summary of what happened, what changed, and what to pay attention to next.
AI synopsis with performance analysis and actionable takeaways
Every procedure generates hundreds of data points — arm force readings, instrument switches, phase transitions, timing deltas. This raw data flows through our processing pipeline and into an LLM with system prompts tuned to surface what surgeons care about: what changed from their baseline, what took longer than expected, and what to watch next time. I worked with our data scientist to define which metrics mattered and iterated on the prompt structure until the summaries matched the language surgeons actually use in post-op discussions.
Case Intelligence: proactive AI notifications surfaced after each procedure
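As a rough illustration of the delta-first framing described above, here is a minimal sketch of how per-case metrics might be turned into an LLM prompt. The metric names, baseline logic, and prompt wording are all illustrative assumptions, not the production pipeline:

```python
from dataclasses import dataclass

@dataclass
class CaseMetrics:
    """A few of the per-case data points described above (names are illustrative)."""
    case_duration_min: float
    setup_time_min: float
    instrument_switches: int

def build_synopsis_prompt(current: CaseMetrics, baseline: CaseMetrics) -> str:
    """Turn raw metrics into a delta-focused prompt so the model talks about
    what changed from the surgeon's baseline rather than restating raw numbers."""
    deltas = {
        "case duration (min)": current.case_duration_min - baseline.case_duration_min,
        "setup time (min)": current.setup_time_min - baseline.setup_time_min,
        "instrument switches": current.instrument_switches - baseline.instrument_switches,
    }
    lines = [
        f"- {name}: {'+' if d >= 0 else ''}{d:.1f} vs. baseline"
        for name, d in deltas.items()
    ]
    return (
        "You are a surgical performance assistant. Summarize this case for the "
        "surgeon in the language of a post-op debrief. Focus on what changed, "
        "what took longer than expected, and one thing to watch next time.\n"
        "Deltas from the surgeon's rolling baseline:\n" + "\n".join(lines)
    )
```

Framing the input as deltas rather than absolutes is one way to bias the summary toward "what changed", which is what the surgeon interviews asked for.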
Surgeons can’t type notes during a procedure. I designed a voice interface: say “Hey Maestro” and speak. There’s no screen UI for this — it’s an OR environment. Instead, the robot’s RGB strip pulses with an organic halo to show it’s listening. The glow responds to speech cadence and fades when done.
Once the voice is captured, the system processes it into structured notes. Surgeons can review AI-generated summaries, listen to recorded clips, and approve or edit annotations after the procedure. This approve/edit/delete flow was a deliberate safety-first decision — in a clinical environment, no AI-generated content should persist without surgeon verification.
AI notes, voice recordings, and annotations — captured hands-free during surgery
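The approve/edit/delete rule above amounts to a small state machine: AI-generated content starts unverified and only enters the case record once a surgeon acts on it. A minimal sketch (class and status names are my own, not the shipped data model):

```python
from enum import Enum

class NoteStatus(Enum):
    PENDING = "pending"      # AI-generated, not yet surgeon-verified
    APPROVED = "approved"
    EDITED = "edited"
    DELETED = "deleted"

class VoiceNote:
    """An AI-generated annotation that must be verified before it persists."""

    def __init__(self, transcript: str):
        self.transcript = transcript
        self.status = NoteStatus.PENDING

    def approve(self) -> None:
        self.status = NoteStatus.APPROVED

    def edit(self, new_text: str) -> None:
        self.transcript = new_text
        self.status = NoteStatus.EDITED

    def delete(self) -> None:
        self.status = NoteStatus.DELETED

    @property
    def visible_in_record(self) -> bool:
        # Safety-first rule: nothing unverified persists in the case record.
        return self.status in (NoteStatus.APPROVED, NoteStatus.EDITED)
```

The key design property is that `PENDING` is the default and visibility is derived from status, so forgetting to review a note can never silently publish it.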
The admin side functions as a real-time observability layer for the entire robotics program. Administrators monitor site-wide performance — surgeon adoption, case volume, service line breakdown — and can drill into live OR activity as it happens.
Site-wide KPIs: surgeon adoption, case volume, and top service lines at a glance
OR Traffic Control is the real-time monitoring view. Each room shows live case status, timeline progress, staff allocation, and delay alerts — with a modal for drilling into active procedure events as they happen. The UI handles non-deterministic state: cases run long, rooms turn over unpredictably, and delays cascade.
OR Traffic Control with live case modal showing real-time procedure events
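One way to make that non-determinism tractable, sketched below, is to derive delay alerts from the current case state on every update instead of storing them, so the board can never show a stale alert. The field names and 15-minute threshold are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ORCase:
    """Live state for one room's active case (illustrative fields)."""
    room: str
    scheduled_min: int   # planned case duration
    elapsed_min: int     # time elapsed so far

@dataclass
class RoomBoard:
    """Derives delay alerts instead of storing them: because live OR state is
    non-deterministic, recomputing on every update avoids stale alerts."""
    delay_threshold_min: int = 15
    cases: list = field(default_factory=list)

    def alerts(self) -> list:
        """Return (room, minutes-over) pairs for cases running long."""
        return [
            (c.room, c.elapsed_min - c.scheduled_min)
            for c in self.cases
            if c.elapsed_min - c.scheduled_min > self.delay_threshold_min
        ]
```

Because `alerts()` is a pure function of current state, a case that recovers (or a room that turns over) clears its alert automatically on the next refresh.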
Maestro captures data through sensors and cameras during every surgery: timing, instrument usage, force, positioning. This data was supposed to power a feedback loop — surgeons learn from their cases, hospitals track adoption. But when I joined, the loop was broken. Surgeons got monthly PDF reports weeks after their procedures. By then, the context was gone.
Maestro Insights bridges the gap — real-time data from the robot, reflected back as actionable insights
Two user groups needed this data for very different reasons. For this case study, I’m focusing on the surgeon experience — the research, testing, and design decisions that shaped how surgeons interact with their performance data.
Surgeons want to track their own performance, learn from each case, and improve their robotic technique over time.
Administrators need to justify ROI, track robot utilization across the site, and manage OR scheduling around robotic cases.
I ran three research tracks focused on surgeons to understand what they actually need from their data:
1:1 interviews and OR observations with surgeons and staff
MyIntuitive, VersiusConnect, Stryker — what exists, what’s missing
Surgical chiefs from partner hospitals across the U.S.
The research surfaced clear patterns. Three themes appeared across every conversation with surgeons:
Surgeons wanted per-case performance impact. Admins needed live KPIs to justify ROI and resource allocation.
Monthly PDFs were too delayed and too generic. Both groups needed on-demand access to make timely decisions.
Surgeons couldn’t drill into individual cases. They needed transparency across phases and usage metrics.
I worked with the PM and team to map every feature request against impact and effort. We focused on what surgeons would use daily and deferred the rest:
We considered three delivery formats for surgical insights:
Static, easy to ignore, no interactivity
Monthly, generic, too delayed for decisions
Real-time, personalized, AI-powered
I built the design system from the Maestro brand kit — pulling colors, gradients, and iconography from the robot’s hardware identity and translating them into a cohesive UI system for screens.
Brand kit to screen — color and gradient mapping from the Maestro hardware identity
Core tokens for the Maestro Insights platform — dark-first, data-rich, and optimized for clinical readability.
We built out the dashboard concept into high-fidelity mockups and tested them with 11 surgeons:
The dashboard concepts we scored — clean visuals, but surgeons didn’t find them useful
Surgeons scored the dashboard across three categories:
The graphs were clear. The visuals were clean. But usefulness scored 2.62 out of 5. Surgeons told us exactly why:
This feels more like something for data enthusiasts. I don’t see myself using it regularly.
I don’t have time to scan graphs or numbers. Just tell me what I need to know.
Looks better than MyIntuitive for sure, but it’s the same kind of data — and I don’t really use that either.
Surgeons didn’t want data. They wanted insights. Charts and numbers are work — someone has to interpret them. I presented the findings to our CEO and CSO, and we pivoted from a data dashboard to an AI-powered insights platform. The LLM generates case summaries, surfaces what changed, and tells surgeons what to pay attention to. That’s what became Maestro Insights.
The pivot: from data-heavy dashboard to AI-powered insights
We measured impact across every hospital deployment for three months after launch.
I actually check this after every case now. It’s the first tool that tells me what changed without me having to dig for it.
— Orthopedic Surgeon, partner hospital

The admin view finally gives us the data we need to justify expanding the robotics program to the board.
— Hospital Administrator, post-launch review

After three months live, surgeons and hospital partners started asking for more. Three directions emerged from post-launch conversations:
I was the only designer on the project. Seven months, from blank canvas to live product.
Seven months on a 0→1 product — including a mid-project pivot — taught me more than any redesign could.
We scored 4.46 on graph comprehension and 2.62 on usefulness. The charts were clear. The surgeons didn’t care. They needed someone to interpret the data for them. That’s why we pivoted to AI. I won’t confuse clarity with value again.
A 2.62 usefulness score felt like failure. It was the single most important number in the project. Without it, we would have shipped a product surgeons ignored. The discomfort of bad results is what makes them valuable.
Designing “Hey Maestro” forced me to think about feedback without a screen. A pulsing RGB halo on the robot was the answer — organic, visible from across the room, and impossible to confuse with any other signal in the OR. Sometimes the best interface is light. If I revisited this, I’d explore haptic confirmation through the instrument handles so the surgeon doesn’t need to look away from the patient at all.