We are investigating autonomous tumor localization and extraction with the da Vinci surgical robot. In the same way that human drivers can benefit from the automation of well-defined and tedious subtasks such as parallel parking and driving on open freeways, surgeons can benefit from
the automation of surgical subtasks. We have created tooling and software to autonomously perform a multi-step surgical procedure: palpating to localize a subcutaneous tumor, dissecting the skin, retracting the skin to reveal the tumor, reaching in to extract the tumor, and sealing the incision with surgical adhesive. Each of these steps is performed without human intervention. This video demonstrates a successful trial of the entire five-step procedure, in which human input is required only at four points to change tools. We use silicone-based tissue phantoms with silicone-rubber cylinders representing the tumors. One tumor is placed in a random position
under the silicone skin layer.

The first step is tumor localization. We designed and implemented a low-cost, single-use palpation probe that fits on the end of a standard da Vinci Classic tool. It uses a Hall-effect sensing mechanism to localize subcutaneous tumors based on end-effector displacement. Here we see the probe scanning the surface in eight parallel passes to estimate the tumor location.

The second step is a surgical incision to access the tumor. The system makes the incision at an offset from the palpation estimate, using a sequence of short motions and then sweeping the scalpel along the full incision.

The next steps are retraction and tumor debridement. One gripper retracts the skin to expose the tumor, while the second gripper reaches in
to detach and remove the tumor.

The final step is closing the incision wound with a surgical adhesive. One arm stabilizes the incision while the other uses an automated injector, which we designed and implemented, to apply the adhesive at a constant rate.

Here are some failure modes. We are working on experiments to characterize the reliability of each step, and on incorporating computer vision and new probing algorithms to improve robustness.
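To illustrate the localization step, here is a minimal sketch of how a tumor position might be estimated from a grid of palpation readings collected over parallel scan passes. The array shapes, spacing parameters, simulated data, and the `localize_tumor` helper are all hypothetical; the thresholded, displacement-weighted centroid stands in for the system's actual probing algorithm, which is not specified in this narration.

```python
import numpy as np

# Hypothetical Hall-effect probe readings: one row per scan pass, one column
# per sample along the pass (mm of end-effector displacement; larger
# displacement suggests stiffer tissue beneath the probe).
rng = np.random.default_rng(0)
readings = rng.normal(0.1, 0.02, size=(8, 50))
readings[4:6, 20:26] += 0.5  # simulated stiff region where the tumor sits

def localize_tumor(readings, pass_spacing_mm=5.0, sample_spacing_mm=1.0):
    """Estimate the tumor (x, y) as the displacement-weighted centroid of
    readings well above the background stiffness level."""
    background = np.median(readings)
    excess = np.clip(readings - background, 0.0, None)
    mask = excess >= 0.5 * excess.max()      # keep only strong responses
    rows, cols = np.nonzero(mask)
    weights = excess[rows, cols]
    y_mm = np.average(rows * pass_spacing_mm, weights=weights)
    x_mm = np.average(cols * sample_spacing_mm, weights=weights)
    return x_mm, y_mm

x_mm, y_mm = localize_tumor(readings)
```

Using a weighted centroid over all strong responses, rather than a single peak reading, makes the estimate less sensitive to noise in any one palpation sample.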