To enable the human mobility necessary to effectively explore near-Earth asteroids and deep space, a new extravehicular activity (EVA) jetpack is under development. The new design leverages knowledge and experience gained from the current astronaut rescue device, the Simplified Aid for EVA Rescue (SAFER). Whereas the primary goal for a rescue device is to return the crew to a safe haven, in-space exploration and navigation require an expanded set of capabilities. To accommodate the range of tasks astronauts may be expected to perform while utilizing the jetpack, a hands-free method of control was researched. This hands-free control method would enable astronauts to command their motion while transporting payloads and conducting two-handed tasks.
The approach for the jetpack research effort was to leverage the existing SAFER avionics system and add functionality in the form of a new hands-free method of control (replacing the existing six-DOF hand controller). For the initial design, both voice and foot sensor controls were selected and tested as input devices. The voice command modality utilizes a wearable headset with wireless communications, an adjustable operator display screen, and a microphone. The foot sensor design is derived from deep-sea diving applications of foot pedal controls, adapted for inclusion into an astronaut’s spacesuit boot worn during a microgravity EVA.
The voice control solution utilizes natural language speech commands as inputs to the SAFER control avionics. A verbal command taxonomy was developed to control the three translational and three rotational DOF available to the astronaut. This taxonomy leverages the vocabulary used to teach SAFER operations to astronauts during their flight training; for example “Plus X” or “Minus Yaw.”
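The taxonomy described above can be sketched as a simple parser. This is an illustrative sketch only, not the actual SAFER avionics interface: the function and data names are hypothetical, and it assumes each command is a sign word ("Plus"/"Minus") followed by one of the six DOF names.

```python
# Hypothetical sketch of the verbal command taxonomy: maps a spoken phrase
# such as "Plus X" or "Minus Yaw" to an (axis, direction) pair covering the
# three translational (X, Y, Z) and three rotational (Roll, Pitch, Yaw) DOF.
# All names are illustrative, not the actual SAFER avionics API.

AXES = ("X", "Y", "Z", "ROLL", "PITCH", "YAW")
SIGNS = {"PLUS": +1, "MINUS": -1}

def parse_command(phrase: str):
    """Parse a phrase like 'Minus Yaw' into (axis, sign); None if unrecognized."""
    words = phrase.strip().upper().split()
    if len(words) != 2:
        return None
    sign_word, axis = words
    if sign_word not in SIGNS or axis not in AXES:
        return None
    return axis, SIGNS[sign_word]
```

For example, `parse_command("Plus X")` yields `("X", 1)` and `parse_command("Minus Yaw")` yields `("YAW", -1)`, matching the vocabulary used in SAFER flight training.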
The second concept chosen for initial development was foot sensor control, based on methods used in Deep Worker Sub and Atmospheric Diving Suit applications. However, in the absence of gravity or a platform on which to mount pedals during EVA use, the concept was adapted for pedal-free, friction-free commanding requiring minimal training. Additional requirements were to (1) provide discrete on/off thruster control for the three axes of motion available during initial testing (±X, ±Y, and ±Yaw), (2) create a system suitable for various anthropometries, and (3) provide a robust system for demonstration purposes.
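The discrete on/off commanding requirement above can be sketched as follows. This is a hypothetical illustration, not the flight design: the channel names and the rule that opposing inputs on the same axis cancel (so conflicting sensor states cannot fire both thrusters of an axis) are assumptions for the sketch.

```python
# Hypothetical sketch of discrete on/off thruster commanding from foot
# sensor states, for the three axes tested (±X, ±Y, ±Yaw). Sensor channel
# names are illustrative. Opposing inputs on the same axis cancel, so a
# conflicting input cannot command both thrusters of an axis at once.

def thruster_commands(sensor_states):
    """sensor_states: dict of channel name ('+X', '-YAW', ...) -> bool.
    Returns a per-axis command of -1, 0, or +1."""
    commands = {}
    for axis in ("X", "Y", "YAW"):
        plus = sensor_states.get("+" + axis, False)
        minus = sensor_states.get("-" + axis, False)
        commands[axis] = (1 if plus else 0) - (1 if minus else 0)
    return commands
```

For instance, activating only the `+X` sensor yields `{"X": 1, "Y": 0, "YAW": 0}`, while activating both `+Y` and `-Y` yields a zero command on the Y axis.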
The first iteration of this system was successfully demonstrated in an air bearing facility, with follow-on studies planned to enhance the human-interface data presented on the wearable display and to test a hands-free solution using a combination of the voice and foot sensor concepts.
This work was done by Jennifer Rochlis Zumbado and Pedro H. Curiel of Johnson Space Center. MSC-25512-1