Toward adaptive robotic sampling of phytoplankton in the coastal ocean

Currents, wind, bathymetry, and freshwater runoff are some of the factors that make coastal waters heterogeneous, patchy, and scientifically interesting, and they make it challenging to resolve the spatiotemporal variation within the water column. We present methods and results from field experiments using an autonomous underwater vehicle (AUV) with embedded algorithms that focus sampling on features in three dimensions. This was achieved by combining Gaussian process (GP) modeling with onboard robotic autonomy, allowing volumetric measurements to be made at fine scales. Special focus was given to the patchiness of phytoplankton biomass, measured as chlorophyll a (Chla), an important factor for understanding biogeochemical processes, such as primary productivity, in the coastal ocean. During multiple field tests in Runde, Norway, the method was successfully used to identify, map, and track the subsurface chlorophyll a maximum (SCM). Results show that the algorithm was able to estimate the SCM volumetrically, enabling the AUV to track the maximum concentration depth within the volume. These data were subsequently verified and supplemented with remote sensing, time series from a buoy, and ship-based measurements from a fast repetition rate fluorometer (FRRf), particle imaging systems, and discrete water samples, covering both the large and small scales of the microbial community shaped by coastal dynamics. By bringing together diverse methods from statistics, autonomous control, imaging, and oceanography, the work offers an interdisciplinary perspective on robotic observation of our changing oceans.
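
As a rough illustration of the sampling idea (not the authors' implementation), the sketch below fits a Gaussian process to chlorophyll-a measurements at 3-D locations and picks the next AUV waypoint with an upper-confidence-bound rule, so the vehicle steers toward regions where Chla is high or still poorly resolved. The kernel parameters, candidate grid, and synthetic measurements are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): GP modeling of a
# chlorophyll-a field plus an upper-confidence-bound rule for choosing
# the AUV's next waypoint. All numbers below are illustrative.
import numpy as np

def rbf_kernel(A, B, length_scale=25.0, signal_var=1.0):
    """Squared-exponential covariance between two sets of 3-D points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_query, noise_var=0.05):
    """Posterior mean and std of the Chla field at query locations."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf_kernel(X_query, X_query).diagonal() - (v**2).sum(0)
    return mean, np.sqrt(np.clip(var, 0.0, None))

def next_waypoint(X_train, y_train, candidates, beta=1.5):
    """UCB acquisition: sample where Chla is high or poorly resolved."""
    mean, std = gp_posterior(X_train, y_train, candidates)
    return candidates[np.argmax(mean + beta * std)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Past AUV measurements: (x, y, depth) in meters, Chla in ug/L (synthetic).
    X = rng.uniform([0, 0, 0], [200, 200, 40], size=(30, 3))
    y = np.exp(-((X[:, 2] - 15.0) ** 2) / 50.0) + 0.05 * rng.standard_normal(30)
    # Candidate waypoints on a coarse volumetric grid.
    gx, gy, gz = np.meshgrid(np.linspace(0, 200, 8),
                             np.linspace(0, 200, 8),
                             np.linspace(0, 40, 10))
    cand = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
    print("next waypoint (x, y, depth):", next_waypoint(X, y, cand))
```

In a real deployment the kernel hyperparameters would be estimated onboard and the candidate set constrained by vehicle dynamics; the sketch only shows the modeling-plus-acquisition loop.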

Source: Sciencemag.org – Science Robotics Latest Content

On the choice of grasp type and location when handing over an object

The human hand is capable of performing countless grasps and gestures that are the basis for social activities. However, which grasps contribute the most to the manipulation skills needed during collaborative tasks, and thus which grasps should be included in a robot companion, is still an open issue. Here, we investigated grasp choice and hand placement on objects during a handover when subsequent tasks are performed by the receiver and when in-hand and bimanual manipulation are not allowed. Our findings suggest that, in this scenario, human passers favor precision grasps during such handovers. Passers also tend to grasp the purposive part of objects and leave “handles” unobstructed for the receivers. Intuitively, this choice allows receivers to comfortably perform subsequent tasks with the objects. In practice, many factors contribute to a choice of grasp, e.g., object and task constraints. However, not all of these factors have received sufficient emphasis in the implementation of grasping by robots, particularly the constraints introduced by a task, which are critical to the success of a handover. Successful robotic grasping is important if robots are to help humans with tasks. We believe that the results of this work can benefit the wider robotics community, with applications ranging from industrial cooperative manipulation to household collaborative manipulation.

Source: Sciencemag.org – Science Robotics Latest Content

AntBot: A six-legged walking robot able to home like desert ants in outdoor environments

Autonomous outdoor navigation requires reliable multisensory fusion strategies. Desert ants travel widely every day, showing unrivaled navigation performance using only a few thousand neurons. In the desert, pheromones are instantly destroyed by the extreme heat. To navigate safely in this hostile environment, desert ants assess their heading from the polarized pattern of skylight and judge the distance traveled based on both a stride-counting method and the optic flow, i.e., the rate at which the ground moves across the eye. This process is called path integration (PI). Although many methods of endowing mobile robots with outdoor localization have been developed recently, most of them are still prone to considerable drift and uncertainty. We tested several ant-inspired solutions to outdoor homing navigation problems on a legged robot using two optical sensors equipped with just 14 pixels, two of which were dedicated to an insect-inspired compass sensitive to ultraviolet light. When combined with two rotating polarized filters, this compass was equivalent to two costly arrays composed of 374 photosensors, each of which was tuned to a specific polarization angle. The other 12 pixels were dedicated to optic flow measurements. Results show that our ant-inspired navigation methods achieve precise performance. The mean homing error recorded during the overall trajectory was as small as 0.67% under lighting conditions similar to those encountered by ants. These findings show that ant-inspired PI strategies can be used to complement classical techniques with a high level of robustness and efficiency.
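
The path-integration idea lends itself to a very small sketch. The following is a toy dead-reckoning loop, not AntBot's firmware: the headings stand in for the UV-polarization compass output, the per-stride distances for the stride counter fused with optic flow, and the accumulated vector gives the homing bearing and distance.

```python
# Toy path-integration (PI) sketch inspired by the description above.
# Headings would come from the celestial compass and distances from
# stride counting fused with optic flow; here they are plain numbers.
import math

class PathIntegrator:
    """Accumulates an outbound vector and returns the homing command."""

    def __init__(self):
        self.x = 0.0  # meters east of the nest
        self.y = 0.0  # meters north of the nest

    def step(self, heading_rad, distance_m):
        """Update the position estimate after one stride."""
        self.x += distance_m * math.cos(heading_rad)
        self.y += distance_m * math.sin(heading_rad)

    def home_vector(self):
        """Heading (rad) and distance (m) that point back to the start."""
        return math.atan2(-self.y, -self.x), math.hypot(self.x, self.y)

if __name__ == "__main__":
    pi = PathIntegrator()
    # Illustrative outbound walk: three legs with compass headings in degrees.
    for heading_deg, dist in [(0, 5.0), (90, 3.0), (45, 2.0)]:
        pi.step(math.radians(heading_deg), dist)
    heading, dist = pi.home_vector()
    print(f"home bearing {math.degrees(heading):.1f} deg, distance {dist:.2f} m")
```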

Source: Sciencemag.org – Science Robotics Latest Content

Soft robot perception using embedded soft sensors and recurrent neural networks

Recent work has begun to explore the design of biologically inspired soft robots composed of soft, stretchable materials for applications including the handling of delicate materials and safe interaction with humans. However, the solid-state sensors traditionally used in robotics are unable to capture the high-dimensional deformations of soft systems. Embedded soft resistive sensors have the potential to address this challenge. However, both the soft sensors and the encasing dynamical system often exhibit nonlinear, time-variant behavior, which makes them difficult to model. In addition, the problems of sensor design, placement, and fabrication require a great deal of human input and prior knowledge. Drawing inspiration from the human perceptive system, we created a synthetic analog that builds models using a redundant and unstructured sensor topology embedded in a soft actuator, a vision-based motion capture system for ground truth, and a general machine learning approach. This allows us to model an unknown soft actuated system. We demonstrate that the proposed approach is able to model the kinematics of a soft continuum actuator in real time while being robust to sensor nonlinearities and drift. In addition, we show how the same system can estimate the applied forces while interacting with external objects. The role of action in perception is also presented. This approach enables the development of force and deformation models for soft robotic systems, which can be useful for a variety of applications, including human-robot interaction, soft orthotics, and wearable robotics.
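
A minimal sketch of this idea (not the authors' code) is shown below: a recurrent network maps a redundant set of embedded resistive-sensor readings to the actuator tip position, with motion-capture data standing in as the training target. The layer sizes, sensor count, and synthetic data are assumptions.

```python
# Sketch of RNN-based soft-robot proprioception under assumed dimensions:
# 12 embedded resistive sensors -> LSTM -> 3-D tip position, trained
# against motion-capture ground truth. Not the authors' implementation.
import torch
import torch.nn as nn

class SoftProprioception(nn.Module):
    def __init__(self, n_sensors=12, hidden=64, n_outputs=3):
        super().__init__()
        self.rnn = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)  # x, y, z of the tip

    def forward(self, sensor_seq):
        # sensor_seq: (batch, time, n_sensors) raw resistance readings
        features, _ = self.rnn(sensor_seq)
        return self.head(features)  # per-timestep tip position estimate

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SoftProprioception()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Synthetic stand-ins: sensor sequences and motion-capture tip positions.
    sensors = torch.randn(8, 100, 12)   # 8 trials, 100 timesteps, 12 sensors
    mocap_xyz = torch.randn(8, 100, 3)  # ground-truth tip trajectory

    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(sensors), mocap_xyz)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```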

Source: Sciencemag.org – Science Robotics Latest Content

Vision-based grasp learning of an anthropomorphic hand-arm system in a synergy-based control framework

In this work, the problem of grasping novel objects with an anthropomorphic hand-arm robotic system is considered. In particular, an algorithm for learning stable grasps of unknown objects has been developed based on object shape classification and on the extraction of associated geometric features. Different concepts, coming from fields such as machine learning, computer vision, and robot control, have been integrated in a modular framework to achieve a flexible solution suitable for different applications. The results presented in this work confirm that the combination of learning from demonstration and reinforcement learning can be an interesting solution for complex tasks, such as grasping with anthropomorphic hands. Imitation learning provides the robot with a good starting point for the learning process, which then improves its abilities through trial and error. The learning process occurs in a reduced-dimension subspace learned upstream from human observation during typical grasping tasks. Furthermore, the integration of a synergy-based control module reduces the number of trials required.
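
The synergy idea can be sketched compactly. The toy example below, which is not the paper's implementation, extracts a low-dimensional postural-synergy basis from demonstrated hand joint angles with PCA, so a trial-and-error learner only has to tune a handful of synergy weights instead of every joint angle; the joint count and data are placeholders.

```python
# Illustrative synergy-subspace sketch: PCA over demonstrated human hand
# postures yields a few "synergies"; grasps are then parameterized by a
# small weight vector. Dimensions and data are assumptions.
import numpy as np

def learn_synergies(hand_postures, n_synergies=3):
    """PCA over demonstrated joint-angle vectors (n_samples, n_joints)."""
    mean = hand_postures.mean(axis=0)
    _, _, Vt = np.linalg.svd(hand_postures - mean, full_matrices=False)
    return mean, Vt[:n_synergies]          # mean posture + synergy basis

def synthesize_grasp(mean, synergies, weights):
    """Map a small weight vector back to a full joint-angle command."""
    return mean + weights @ synergies

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_joints = 20                                   # anthropomorphic hand (assumed)
    demos = rng.standard_normal((200, n_joints))    # stand-in human demonstrations
    mean, synergies = learn_synergies(demos, n_synergies=3)

    # A trial-and-error learner would now tune only these 3 weights.
    weights = np.array([0.8, -0.2, 0.1])
    joint_command = synthesize_grasp(mean, synergies, weights)
    print("joint command shape:", joint_command.shape)
```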

Source: Sciencemag.org – Science Robotics Latest Content

See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion

Humans are able to seamlessly integrate tactile and visual stimuli with their intuitions to explore and execute complex manipulation skills. They not only see but also feel their actions. Most current robotic learning methodologies exploit recent progress in computer vision and deep learning to acquire data-hungry pixel-to-action policies. These methodologies do not exploit intuitive latent structure in physics or tactile signatures. Tactile reasoning is omnipresent in the animal kingdom, yet it is underdeveloped in robotic manipulation. Tactile stimuli are only acquired through invasive interaction, and interpretation of the data stream together with visual stimuli is challenging. Here, we propose a methodology to emulate hierarchical reasoning and multisensory fusion in a robot that learns to play Jenga, a complex game that requires physical interaction to be played effectively. The game mechanics were formulated as a generative process using a temporal hierarchical Bayesian model, with representations for both behavioral archetypes and noisy block states. This model captured descriptive latent structures, and the robot learned probabilistic models of these relationships in force and visual domains through a short exploration phase. Once learned, the robot used this representation to infer block behavior patterns and states as it played the game. Using its inferred beliefs, the robot adjusted its behavior with respect to both its current actions and its game strategy, similar to the way humans play the game. We evaluated the performance of the approach against three standard baselines and demonstrated its fidelity on a real-world implementation of the game.
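
As a toy illustration of the inference step only (a drastic simplification of the paper's hierarchical Bayesian model), the sketch below maintains a belief over two assumed block archetypes and updates it from noisy force and displacement measurements after each exploratory poke; the archetypes, distributions, and numbers are all made up for illustration.

```python
# Toy Bayesian belief update over latent block archetypes from noisy
# force/displacement observations. Not the authors' model; all values
# are illustrative assumptions.
import numpy as np

# Per-archetype Gaussian models of (push force [N], block displacement [mm]).
ARCHETYPES = {
    "loose":        {"mean": np.array([0.5, 4.0]), "std": np.array([0.3, 1.5])},
    "load_bearing": {"mean": np.array([3.0, 0.5]), "std": np.array([1.0, 0.5])},
}

def log_likelihood(obs, mean, std):
    """Independent Gaussian log-likelihood of one (force, displacement) pair."""
    return float(np.sum(-0.5 * ((obs - mean) / std) ** 2
                        - np.log(std * np.sqrt(2 * np.pi))))

def update_belief(prior, obs):
    """Posterior over archetypes after one exploratory poke."""
    log_post = {k: np.log(prior[k]) + log_likelihood(obs, v["mean"], v["std"])
                for k, v in ARCHETYPES.items()}
    m = max(log_post.values())
    unnorm = {k: np.exp(v - m) for k, v in log_post.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

if __name__ == "__main__":
    belief = {"loose": 0.5, "load_bearing": 0.5}
    for poke in [np.array([0.6, 3.2]), np.array([0.4, 4.5])]:
        belief = update_belief(belief, poke)
    print(belief)   # high probability the block is loose -> safe to extract
```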

Source: Sciencemag.org – Science Robotics Latest Content

Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs

Humans can infer concepts from image pairs and apply those in the physical world in a completely different setting, enabling tasks like IKEA assembly from diagrams. If robots could represent and infer high-level concepts, then it would notably improve their ability to understand our intent and to transfer tasks between different environments. To that end, we introduce a computational framework that replicates aspects of human concept learning. Concepts are represented as programs on a computer architecture consisting of a visual perception system, working memory, and action controller. The instruction set of this cognitive computer has commands for parsing a visual scene, directing gaze and attention, imagining new objects, manipulating the contents of a visual working memory, and controlling arm movement. Inferring a concept corresponds to inducing a program that can transform the input to the output. Some concepts require the use of imagination and recursion. Previously learned concepts simplify the learning of subsequent, more elaborate concepts and create a hierarchy of abstractions. We demonstrate how a robot can use these abstractions to interpret novel concepts presented to it as schematic images and then apply those concepts in very different situations. By bringing cognitive science ideas on mental imagery, perceptual symbols, embodied cognition, and deictic mechanisms into the realm of machine learning, our work brings us closer to the goal of building robots that have interpretable representations and common sense.
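
A toy sketch in the spirit of this framework, though not its actual architecture, is given below: scenes are tiny symbolic object sets, the "instruction set" is a few scene-transforming primitives, and inferring a concept means searching for the shortest program that maps every input example to its output. The primitives and example scenes are invented for illustration.

```python
# Toy "concepts as programs" sketch: brute-force induction of a short
# program over a tiny scene-transforming instruction set. Illustrative
# only; not the paper's cognitive-computer architecture.
from itertools import product

# A scene is a frozenset of (color, x) object tuples.
def move_right(scene):
    return frozenset((c, x + 1) for c, x in scene)

def recolor_blue(scene):
    return frozenset(("blue", x) for _, x in scene)

def keep_leftmost(scene):
    leftmost = min(x for _, x in scene)
    return frozenset((c, x) for c, x in scene if x == leftmost)

PRIMITIVES = {"move_right": move_right,
              "recolor_blue": recolor_blue,
              "keep_leftmost": keep_leftmost}

def run(program, scene):
    for name in program:
        scene = PRIMITIVES[name](scene)
    return scene

def induce_program(examples, max_len=2):
    """Return the shortest primitive sequence consistent with all input/output pairs."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, inp) == out for inp, out in examples):
                return program
    return None

if __name__ == "__main__":
    # Concept to infer: "keep the leftmost object and paint it blue."
    examples = [
        (frozenset({("red", 0), ("green", 3)}), frozenset({("blue", 0)})),
        (frozenset({("red", 2), ("red", 5)}),   frozenset({("blue", 2)})),
    ]
    concept = induce_program(examples)
    print("induced concept:", concept)
    # The induced program transfers to a novel scene it has never seen.
    print(run(concept, frozenset({("green", 7), ("red", 4)})))
```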

Source: Sciencemag.org – Science Robotics Latest Content