Robotic Manipulation for Nuclear Sort and Segregation
Deliverables
Report: D1.1 Force cues based on integral error signals
One component of the RoMaNS project is a shared-control architecture that allows a human operator to easily operate the two-arm system during remote manipulation of waste material. The main focus is on deriving meaningful and effective force cues for the operator in (partial) control of the manipulator motion, with the aim of increasing the operator’s situational awareness. To this end, we have proposed different levels of shared autonomy, in all cases with visual and force/haptic feedback for the human operator. A first set of algorithms relied on (classical) instantaneous error quantities, such as the tracking error, to derive a force cue signal. We have since considered more complex cues based on an “integral” error signal evaluated along the whole future trajectory, and thus representative of the future consequences of the operator’s local actions. We have also begun user studies to evaluate the various possibilities in a principled way.
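To make the distinction concrete, the following is a minimal sketch (not the project’s actual controller design): the instantaneous cue is proportional to the current tracking error, while the integral cue accumulates the predicted error along a simulated future trajectory, so that it reflects the future consequences of the operator’s current input. The gains and the trajectory predictor here are hypothetical.

```cpp
// Minimal sketch of instantaneous vs. integral force cues.
// The gains (k_p, k_i) and the predicted trajectory are hypothetical;
// the actual RoMaNS cue design is described in the deliverable itself.
#include <Eigen/Dense>
#include <iostream>
#include <vector>

using Eigen::Vector3d;

// Instantaneous cue: proportional to the current tracking error e = x_des - x.
Vector3d instantaneousCue(const Vector3d& x, const Vector3d& x_des, double k_p) {
    return k_p * (x_des - x);
}

// Integral cue: accumulate the predicted error along a (simulated) future
// trajectory, so the cue reflects the future consequences of the current input.
Vector3d integralCue(const std::vector<Vector3d>& predicted_x,
                     const std::vector<Vector3d>& desired_x,
                     double k_i, double dt) {
    Vector3d cue = Vector3d::Zero();
    for (std::size_t t = 0; t < predicted_x.size(); ++t)
        cue += (desired_x[t] - predicted_x[t]) * dt;  // discrete integral of e(t)
    return k_i * cue;
}

int main() {
    // Toy example: the predicted trajectory drifts away from the desired one.
    std::vector<Vector3d> pred, des;
    for (int t = 0; t < 50; ++t) {
        des.push_back(Vector3d(0.0, 0.0, 0.0));
        pred.push_back(Vector3d(0.002 * t, 0.0, 0.0));  // growing drift
    }
    std::cout << "integral cue: "
              << integralCue(pred, des, /*k_i=*/5.0, /*dt=*/0.02).transpose()
              << std::endl;
}
```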
Report: D1.2 Mapping between human and robot hands
Part of the RoMaNS activities has been devoted to finding optimal ways of coupling the degrees of freedom (DOFs) of a human hand to those of a robot hand, with the aim of replicating human manipulative skills on the RoMaNS robotic setup. This deliverable reports the consortium’s activities on this topic.
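Purely as an illustration of what such a coupling can look like, the simplest family is a linear joint-space mapping q_robot = M · q_human, where the matrix M is calibrated offline (e.g. from a synergy analysis). The dimensions and matrix below are invented for the sketch and do not correspond to the actual CEA hand.

```cpp
// Minimal sketch of a linear human-to-robot hand mapping q_r = M * q_h.
// The dimensions and the mapping matrix are invented for illustration;
// the deliverable investigates how to choose such couplings optimally.
#include <Eigen/Dense>
#include <iostream>

int main() {
    const int humanDofs = 4;   // hypothetical: measured human joint angles
    const int robotDofs = 3;   // hypothetical: robot hand joints

    // Calibrated (here: made-up) mapping matrix.
    Eigen::MatrixXd M(robotDofs, humanDofs);
    M << 1.0, 0.0, 0.0, 0.0,
         0.0, 0.5, 0.5, 0.0,
         0.0, 0.0, 0.0, 1.0;

    Eigen::VectorXd q_human(humanDofs);
    q_human << 0.2, 0.4, 0.6, 0.1;            // radians, e.g. from a data glove

    Eigen::VectorXd q_robot = M * q_human;    // robot hand joint targets
    std::cout << "q_robot = " << q_robot.transpose() << std::endl;
}
```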
Report: D2.1 Multi-modal multi-category object modelling
This report describes work done as part of task T2.2, which aims to learn models of objects and materials and to develop robust and efficient algorithms for their inference. Two main elements of work have been developed, both using state-of-the-art deep learning techniques to achieve semantic reconstruction of objects and scenes. Annexe A.1 presents a novel method for simultaneous 3D reconstruction of scenes and recognition and segmentation/labelling of the materials in those scenes (metal, concrete, painted surfaces, fabric, etc.). Annexe A.2 presents a novel method for simultaneous 3D reconstruction of scenes, detection and pixel-wise segmentation of objects in the scenes, and object category recognition and pixel-wise labelling. In both cases the semantic reconstruction is achieved at real-time frame rates.
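To give a flavour of how per-frame network outputs can be fused into a 3D reconstruction (a generic sketch, not the specific methods of Annexe A.1 or A.2): each reconstructed surface element keeps a running class histogram that is updated with the per-pixel class probabilities of every frame observing it, and is labelled with the most likely class.

```cpp
// Generic sketch of fusing per-pixel semantic predictions into a 3D map:
// each voxel keeps a running (unnormalised) class histogram, updated by every
// camera frame that observes it. Class names and values are illustrative only.
#include <array>
#include <cstdio>

constexpr int kNumClasses = 4;                 // e.g. metal, concrete, paint, fabric
using ClassHist = std::array<float, kNumClasses>;

struct Voxel {
    ClassHist hist{};                          // accumulated class evidence

    // Fuse one per-pixel probability vector from the segmentation network.
    void fuse(const ClassHist& pixelProb) {
        for (int c = 0; c < kNumClasses; ++c) hist[c] += pixelProb[c];
    }

    int mostLikelyClass() const {
        int best = 0;
        for (int c = 1; c < kNumClasses; ++c)
            if (hist[c] > hist[best]) best = c;
        return best;
    }
};

int main() {
    Voxel v;
    v.fuse({0.7f, 0.1f, 0.1f, 0.1f});          // frame 1: probably metal
    v.fuse({0.6f, 0.2f, 0.1f, 0.1f});          // frame 2: agrees
    std::printf("label = %d\n", v.mostLikelyClass());  // prints 0 (metal)
}
```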
Paper A.2 also reports on the development of a novel nuclear waste object dataset, including 2D and 3D/point-cloud views, with annotated ground-truth, for a wide variety of different nuclear waste-like objects.
While the work described in this report uses the most recent academic research methods (deep learning), it has already attracted considerable interest for very practical industrial applications, from (highly conservative) end-user customer Sellafield Ltd, who manage the Sellafield nuclear site, which represents the largest and most challenging environmental remediation problem in the whole of Europe.
Report: D2.2 Active Segmentation
In environments with unknown and cluttered objects, it is often very challenging to assess which parts of the scene belong together and which do not, even with multi-modal sensory input. For robotic manipulation, however, this information is essential, e.g. for successful grasping. Active segmentation gathers such information by perturbing the scene and reasoning from the observed motion. An important component is the observation of movements in the scene. Usually, such movement detection is based on object recognition, which relies on visual features derived from the texture of the objects. However, in settings such as the RoMaNS sort and segregation rig, we cannot assume highly textured objects. Instead, we must handle objects such as pipes, barrels, canisters, or concrete bricks, which have little or no texture. To deal with such situations, we propose an active segmentation algorithm which uses shape in addition to color for computing local feature descriptors. In addition, we apply an improved initialization of the over-segmentation. We show in experiments on real robots how these changes improve active segmentation for textureless objects.
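As a rough sketch of the idea of combining color and shape in a local descriptor (the concrete descriptor and over-segmentation initialization used in the deliverable differ in detail): concatenate a coarse color histogram with a histogram of angles between neighbouring surface normals, so that textureless but curved or edged surfaces remain distinguishable.

```cpp
// Rough sketch of a color+shape local descriptor for textureless objects:
// concatenate a coarse color histogram with a histogram of angles between
// the point's normal and its neighbours' normals (an FPFH-like shape cue).
// Bin counts and neighbourhood handling are simplified for illustration.
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

struct Point {
    Eigen::Vector3d rgb;      // color in [0,1]^3
    Eigen::Vector3d normal;   // unit surface normal
};

std::vector<double> describe(const Point& p, const std::vector<Point>& neighbours) {
    const int colorBins = 4, shapeBins = 4;
    std::vector<double> desc(3 * colorBins + shapeBins, 0.0);

    // Color part: one coarse histogram per channel.
    for (int ch = 0; ch < 3; ++ch) {
        int bin = std::min(colorBins - 1, int(p.rgb[ch] * colorBins));
        desc[ch * colorBins + bin] += 1.0;
    }

    // Shape part: histogram of normal-to-normal angles in the neighbourhood,
    // which stays informative even when the surface has no texture.
    for (const Point& q : neighbours) {
        double angle = std::acos(std::clamp(p.normal.dot(q.normal), -1.0, 1.0));
        int bin = std::min(shapeBins - 1, int(angle / kPi * shapeBins));
        desc[3 * colorBins + bin] += 1.0 / neighbours.size();
    }
    return desc;
}

int main() {
    Point p{Eigen::Vector3d(0.8, 0.2, 0.2), Eigen::Vector3d(0, 0, 1)};
    std::vector<Point> nbrs{{Eigen::Vector3d(0.8, 0.2, 0.2),
                             Eigen::Vector3d(0, 0.1, 0.995).normalized()}};
    std::cout << "descriptor size: " << describe(p, nbrs).size() << std::endl;
}
```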
Report: D2.3 Adaptive visual tracking for arbitrary objects without training
For nuclear sort and segregation, it is important to be able to visually track both parts of robots and also manipulated objects, in difficult conditions of extremely cluttered scenes. This visual tracking is also necessary for a variety of tasks: visual servoing of robots onto objects; controlling pushing type manipulations; detecting errors such as dropped objects or potential collisions; assisting the human operator with monitoring; and providing data for augmented reality (e.g. highlighting tracked objects on the operator’s visual display).
We have achieved the main objectives of this task. Our technical contributions follow the subtasks of T2.4 (as listed in the TA). This work has achieved state-of-the-art performance on the leading public benchmark tracking challenge datasets, and has been published in elite computer vision proceedings CVPR and ECCV. We present several high-impact publications of our novel object trackers, and additional reports and publications on 3D pose tracking and its applications to visual servoing of robots.
Report: D2.4 Real-time visual localisation and control
Using visual information for localisation and control is an important component of the RoMaNS project. Vision provides more robustness against modelling issues and allows for sensor-based control strategies which can account for changes in the environment at runtime. Two complementary elements of research have been developed which incorporate real-time visual localisation and control:
1) CNRS has developed a bi-manual robot system. One robot arm is the slave arm used for grasping and manipulation. The second robot arm carries a wrist-mounted camera. This camera provides situational awareness to the human operator, and also provides images to algorithms which use visual information to assist the human with controlling the slave arm. Assistance is given by the intelligent agent providing various kinds of haptic cues to the human for guidance. Additionally, the intelligent agent automatically controls the camera arm, so as to always provide the best view of the object being grasped while avoiding occlusions. This work relies on real-time tracking and localisation of the object being grasped w.r.t. the slave gripper. We used a state-of-the-art visual tracker and proposed several visual servoing control laws for commanding the manipulator carrying the camera, so as to always keep a good vantage point on the objects of interest and the gripper while avoiding mutual visual occlusions (a minimal sketch of one such control law is given after this list). The tracker can track objects based on their CAD models or by exploiting markers placed on them.
2) The University of Birmingham further showed how real-time visual tracking can be achieved for arbitrarily shaped objects. This tracking is then used to autonomously control an arm and hand for dynamic grasping of moving objects. This work demonstrates real-time 6-DOF localisation of objects of arbitrary geometry, and the use of vision for real-time control of sophisticated autonomous grasping, incorporating grasp planning, reaching to grasp, and collision avoidance with respect to the scene and vision-captured obstacles.
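For concreteness, a classical image-based visual servoing law of the kind underlying the camera-arm control in 1) computes a camera velocity v = -λ L⁺ e from the feature error e = s - s* and the interaction matrix L. The sketch below uses a single image-point feature and made-up values; the deliverable’s actual control laws additionally handle occlusion avoidance and vantage-point selection.

```cpp
// Minimal image-based visual servoing sketch: v = -lambda * pinv(L) * e,
// for a single normalised image point (x, y) at depth Z. Values are made up;
// the deliverable's control laws also handle occlusions and viewpoints.
#include <Eigen/Dense>
#include <iostream>

int main() {
    double x = 0.10, y = -0.05, Z = 0.8;       // current feature and depth
    Eigen::Vector2d s(x, y), s_star(0.0, 0.0); // desired: feature at image centre
    Eigen::Vector2d e = s - s_star;

    // Standard interaction matrix of an image point (Chaumette & Hutchinson).
    Eigen::Matrix<double, 2, 6> L;
    L << -1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y,
          0.0,    -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x;

    double lambda = 0.5;
    // Camera velocity screw (vx, vy, vz, wx, wy, wz).
    Eigen::Matrix<double, 6, 1> v =
        -lambda * L.completeOrthogonalDecomposition().pseudoInverse() * e;

    std::cout << "camera velocity: " << v.transpose() << std::endl;
}
```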
Report: D3.1 Scientific report on learning with the different types of instructions
Our project needs to leverage various types of human instruction in order to achieve efficient robot learning for semi-autonomous assistance. This report gives an overview of existing methods for learning from human instruction, covering behavioural cloning, inverse reinforcement learning, and preference learning. It clarifies the features and limitations of these existing methods, indicating what we need to develop in this project beyond the limitations of the state of the art.
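To fix ideas, behavioural cloning, the simplest of the three, reduces learning from demonstration to supervised regression from observed states to demonstrated actions. Below is a minimal linear version on synthetic data; it assumes nothing about the methods ultimately chosen in the project.

```cpp
// Minimal behavioural-cloning sketch: fit a linear policy a = w . s by
// least squares on (state, action) pairs from demonstrations. The synthetic
// "demonstrations" here follow a = 2*s1 - s2, which the fit should recover.
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    const int n = 100;                       // demonstration samples
    Eigen::MatrixXd S(n, 2);                 // states (2-D)
    Eigen::VectorXd A(n);                    // demonstrated scalar actions
    for (int i = 0; i < n; ++i) {
        S(i, 0) = 0.01 * i;
        S(i, 1) = std::sin(0.1 * i);
        A(i) = 2.0 * S(i, 0) - S(i, 1);      // the demonstrator's policy
    }

    // Least-squares fit of the policy weights via QR decomposition.
    Eigen::VectorXd w = S.colPivHouseholderQr().solve(A);
    std::cout << "learned weights: " << w.transpose()   // ~ (2, -1)
              << std::endl;
}
```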
Report: D3.2 Semi-Autonomous Learning for Trajectory and Grasp Planning
Semi-autonomous learning is a key aspect of the RoMaNS project. Traditional autonomous machine learning approaches such as reinforcement learning do not exploit the presence of a human teacher. For example, reinforcement learning assumes the existence of a hand-coded reward function that gives evaluative feedback on the quality of the executed behavior. However, such hand-coded reward functions suffer from several problems. Firstly, for many tasks it is rather complicated to define a reward function that yields the desired behavior, and a lot of fine-tuning of the reward function’s parameters is needed. Moreover, the reward function alone is a sparse feedback signal, and it therefore typically requires many executions on the real robot to find a good solution.
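As a concrete illustration of the problem (purely an example, not a reward function used in the project): even for a simple reaching task, a hand-coded reward is typically a weighted sum of competing terms, and the weights must be tuned by trial and error before the learner produces the desired behavior.

```cpp
// Illustration of a hand-coded reward for a reaching task: a weighted sum of
// competing terms whose weights must be hand-tuned. Shifting w1/w2 changes
// the behaviour the learner converges to, which is the tuning problem above.
#include <Eigen/Dense>
#include <iostream>

double reward(const Eigen::Vector3d& gripper, const Eigen::Vector3d& target,
              const Eigen::VectorXd& jointTorques, double w1, double w2) {
    double distCost   = (gripper - target).norm();     // encourage reaching
    double effortCost = jointTorques.squaredNorm();    // discourage wild motion
    return -w1 * distCost - w2 * effortCost;           // higher is better
}

int main() {
    Eigen::Vector3d g(0.3, 0.1, 0.5), t(0.3, 0.1, 0.2);
    Eigen::VectorXd tau(3);
    tau << 1.0, -0.5, 0.2;
    // Two weightings of the same terms prefer different behaviours.
    std::cout << reward(g, t, tau, 1.0, 0.01) << " vs "
              << reward(g, t, tau, 1.0, 1.0) << std::endl;
}
```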
Report: D3.3 Reactive Grasping and Disentangling of Objects
Robots have transformed the manufacturing industry thanks to their ability to autonomously handle large and heavy objects with high speed and precision; car manufacturing is a prime example. Robots have similarly large potential for performing tasks in less structured environments, such as sorting and segregating heaps of waste. However, even though robot vacuum cleaners and autonomous driving prototypes exist, robots in waste segregation have mainly been applied to classifying already-separated objects and picking them up with a simple grasp type that is assumed to work for any object. Waste segregation can be challenging due to the need to disentangle objects of unknown shape that may be entangled in complex ways not fully perceivable from limited sensor information. Moreover, the effects of the robot’s forceful actions are very hard to predict, requiring the robot to consider uncertainty during the decision-making process.
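One standard way to consider uncertainty during decision making, sketched generically below (this is not the deliverable’s specific algorithm), is to score each candidate action by its expected utility over sampled hypotheses of the unobserved state, e.g. possible entanglement configurations, and to execute the best-scoring action.

```cpp
// Generic sketch of decision making under uncertainty: score each candidate
// action by its average (expected) utility over sampled state hypotheses,
// e.g. possible entanglement configurations, and pick the best.
// The utility function and hypotheses here are placeholders.
#include <cstdio>
#include <vector>

struct Hypothesis { double tangleSeverity; };   // sampled unobserved state

// Placeholder utility: gentler actions do better on badly tangled hypotheses.
double utility(int action, const Hypothesis& h) {
    double force = 0.2 + 0.4 * action;          // actions 0..2 = increasing force
    return 1.0 - h.tangleSeverity * force;
}

int main() {
    std::vector<Hypothesis> samples = {{0.1}, {0.5}, {0.9}};  // from perception
    int bestAction = 0;
    double bestScore = -1e9;
    for (int a = 0; a < 3; ++a) {
        double score = 0.0;
        for (const auto& h : samples) score += utility(a, h);
        score /= samples.size();                // expected utility
        if (score > bestScore) { bestScore = score; bestAction = a; }
    }
    std::printf("chosen action: %d (expected utility %.2f)\n",
                bestAction, bestScore);
}
```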
Report: D4.1 Project interfaces and data types
This report describes the activities related to software design and development methodologies to facilitate software integration within the project. It includes work on project interfaces and data types which will be used for the integration of partners’ software components during project demonstrations.
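As a purely hypothetical example of the kind of shared data type such interfaces define (the actual RoMaNS message definitions are specified in the deliverable itself): a grasp-command structure that perception, planning, and control components from different partners can all produce or consume.

```cpp
// Hypothetical example of a shared integration data type (not the actual
// RoMaNS definition): a grasp command that perception, planning and control
// components from different partners can all produce or consume.
#include <array>
#include <cstdint>
#include <string>

struct GraspCommand {
    std::uint64_t timestampNs;         // acquisition time of the underlying data
    std::string   frameId;             // coordinate frame, e.g. "workcell_base"
    std::array<double, 3> position;    // grasp point (metres)
    std::array<double, 4> orientation; // grasp orientation quaternion (x,y,z,w)
    double        gripperOpening;      // commanded opening (metres)
    double        maxForce;            // force limit (newtons)
};

int main() {
    GraspCommand cmd{0, "workcell_base",
                     {0.40, -0.10, 0.25}, {0.0, 0.0, 0.0, 1.0}, 0.06, 40.0};
    (void)cmd;  // in practice this would be serialised and sent to a controller
}
```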
Report: D4.2 Controllers for arms and grippers
This deliverable comprises software in the form of drivers for the various robot arms, hands and grippers being used in the project. Devices for which we have created drivers include: KUKA industrial arms; KUKA LBR iiwa compliant arms; CEA novel backdrivable slave arm; Haption haptic master arm; Schunk and Zimmer industrial parallel jaw grippers; CEA novel robot hand.
Report: D4.3 Simulators of novel arms and grippers
This deliverable addresses the design rationale of the simulation software toolkit for the newly developed CEA hardware (i.e. the backdrivable slave arm, the master exoskeleton haptic device, and the slave multifingered gripper). First, the document provides the geometric and kinematic properties of the CEA systems designed within WP1: these may be used by partners for high-level control and standalone simulations on their side. At a higher level, the general purpose of the simulation environment developed in this deliverable is first to build high-quality software modules implementing the approaches of WP1-3, and then to integrate these modules into the demonstration applications of WP5. The toolkit is designed for reusability and flexibility, so that it operates robustly under each partner’s environment conditions, and for interoperability, so that it cooperates fluently with the software modules developed by each partner. In particular, the software interface, as specified in the WP4 requirements, is written in C++ and specifies how the module can be connected with the interfaces of other project modules.
In connection with “D4.2 Controllers for novel arms, grippers and off-shelf robots (UoB)”, the simulation content is also made compatible with hardware communication, so that the embedded simulation of the arms and grippers can interact bilaterally with the hardware (for supervision or virtual reality, for instance).
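To illustrate the kind of C++ module interface described above (a hypothetical sketch; the actual WP4 interface specification is given in the deliverable): an abstract simulation-module class against which both a pure simulation back-end and a hardware-coupled back-end can be implemented, so that other modules connect to either transparently.

```cpp
// Hypothetical sketch of a C++ module interface of the kind described above:
// both a pure simulation back-end and a hardware-coupled back-end can
// implement it, so other partners' modules connect to either transparently.
#include <vector>

class ManipulatorModule {
public:
    virtual ~ManipulatorModule() = default;

    // Advance the internal model (or poll the hardware) by dt seconds.
    virtual void step(double dt) = 0;

    // Joint-space access shared by simulation and hardware back-ends.
    virtual std::vector<double> jointPositions() const = 0;
    virtual void setJointTargets(const std::vector<double>& q) = 0;
};

// A trivial simulated back-end: joints jump straight to their targets.
class SimulatedArm : public ManipulatorModule {
public:
    explicit SimulatedArm(std::size_t dofs) : q_(dofs, 0.0), target_(dofs, 0.0) {}
    void step(double /*dt*/) override { q_ = target_; }
    std::vector<double> jointPositions() const override { return q_; }
    void setJointTargets(const std::vector<double>& q) override { target_ = q; }
private:
    std::vector<double> q_, target_;
};

int main() {
    SimulatedArm arm(6);
    arm.setJointTargets({0.1, 0.2, 0.3, 0.0, 0.0, 0.0});
    arm.step(0.01);
    return arm.jointPositions().size() == 6 ? 0 : 1;
}
```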
Report: D4.4 Cameras and other sensors
This deliverable comprises software in the form of drivers for the various cameras and other sensors being used in the project. Devices for which we have created drivers include: RGB cameras; several different kinds of point-cloud depth cameras; and force-torque sensors.
Report: D5.1 Basic test bed
This report provides an overview of the RoMaNS (Robotic Manipulation for Nuclear Sort and Segregation) test rig, which is located at the NNL Workington Laboratory in West Cumbria, United Kingdom. Throughout this three-year project, partner technologies will be integrated into this test rig, permitting them to be tested and demonstrated.
The RoMaNS project will advance the state of the art in autonomous, teleoperated, and shared control for remote manipulation and decision making, and will deliver the required improvements in safety, reliability, and throughput by significantly simplifying grasping operations from the operator’s perspective. This has far-reaching cross-sector applications in nuclear, aerospace, oil and gas, space, food, and agriculture. Within the nuclear industries of multiple EU states, it applies across the entire sector, including waste processing, decommissioning, asset care, maintenance, repair, characterization, and sampling. The novel technology produced within this project will be applied to a very challenging and safety-critical nuclear “sort and segregate” industrial problem, driven by urgent market and societal needs. The inspiration for this project is the Box Encapsulation Plant (BEP) at the Sellafield nuclear site in the United Kingdom, which is constructing a Sort and Segregate facility to process the UK’s 1.4 million cubic metres of Intermediate Level Waste.
Report: D5.2 Shipping of Master-Slave system
A major part of the RoMaNS project involves the development of an advanced, haptic-feedback master-slave system by CEA. The system will comprise a master arm and exoskeleton glove, coupled with a slave arm equipped with a slave hand at the end-effector. By the end of the project, a full master-slave hand-arm system should have been shipped to partners UoB and TUDa, who will then transport their systems to industry partner NNL, to enable a fully bi-manual master-slave system to be demonstrated at the end of the project.
Report: D5.3 Baseline performance measurement of expert MSM operator population
This report describes initial pilot experiments to investigate the remote manipulation capabilities of expert operators using conventional mechanical Master-Slave Manipulators (MSMs).
So far, there has been remarkably little penetration of robots into the nuclear industry. Instead, the vast majority of remote manipulation at nuclear sites worldwide is performed using mechanical MSM devices, controlled by very highly skilled MSM operators. MSMs have been in use since the 1940s, and are a well-established and trusted technology in the industry.
In contrast, the RoMaNS project seeks to advance the state of the art in robotic manipulation for nuclear applications. The capabilities and applications of robots are somewhat different from those of MSMs, and it is not necessarily possible to directly or fully compare the two technologies. However, we feel it is important to try to gain some understanding of the manipulative capabilities of MSM devices (and their human operators) before evaluating the performance of the new robotic manipulation technologies being developed in RoMaNS. Similar manipulative tasks have also been used in RoMaNS deliverable report D5.4, which explores baselining the performance of humans using conventional teleoperation of robot arms and grippers to carry out remote manipulations.
While there is a significant body of patent literature, and some academic engineering literature, which discusses the engineering design of MSM devices, we are not aware of previous attempts to systematically evaluate the capabilities of humans to use such machinery to perform practical remote manipulation tasks.
As a first step towards principled evaluation of MSM capabilities, we have designed three benchmark manipulative tasks: shape sorting, block stacking, and point-to-point dexterity tasks. Six expert MSM operators were invited to complete these tasks, and a number of metrics of their performance and workload were measured and assessed. While this is an initial small study, and by no means yet comprehensive, it has yielded some useful indications of baseline performance of expert MSM operators. This deliverable is intended to remain a live, working document, so that we can grow and extend this initial pilot study as the RoMaNS project progresses.
Report: D5.4 Benchmark performance measurement, on standardised tasks, of human operators using simple/basic current industry practices for remote operation of robot arms
In order to demonstrate the utility of new robotic manipulation technologies, being developed during the RoMaNS project, it is necessary to evaluate these new technologies in comparison to the previous state-of-the-art in the nuclear industry (simple direct teleoperation with CCTV cameras for situational awareness). This report describes a variety of pilot studies, which aim to objectively evaluate the ability of human operators to carry out remote manipulation tasks, using simple tele-operation of a robot arm and gripper. Our intention is to compare this simple teleoperation data against future results of evaluating the performance of novel RoMaNS technologies.
As an additional step, beyond the original requirements of this deliverable, we also describe an experiment in which we compare human-supervised autonomy (vision-guided autonomous grasping and trajectory planning) against direct teleoperation, for a remote manipulation task involving grasping and stacking of blocks.
The benchmark manipulative tasks, explored in D5.4 with robot arms, are the same as those used in D5.3 to measure the performance of MSM operators using mechanical Master-Slave Manipulator devices. In some cases the test equipment was scaled to eliminate the confounding factor of the manipulators’ different-sized workspaces. However, a direct comparison of robots against MSM devices has not yet been fully possible, because we were only able to test MSMs with expert operators, while the experiments with teleoperated robots were performed with novice operators.
However, initial results suggest that, while novice teleoperation of robots is much slower than expert performance with MSM devices, novices supervising semi-autonomous robots perform over three times faster than when using direct teleoperation, approaching the time-to-completion rates of expert MSM operators. This suggests that advanced robot control methods might indeed be used to rapidly enable a new generation of nuclear workers to take over from the very highly skilled, but aging (mean age 55 years in UK and USA) current nuclear workforce, consistent with the overall aims of RoMaNS.
Report: D5.5 Final RoMaNS robotic manipulation system
This report describes the final integrated robotic manipulation systems that have been developed for nuclear sorting and segregation waste handling operations. The industry end-user requires a range of different control methods to be available, and these are reflected in our final systems designs. There is no conflict between autonomy and teleoperation – both are considered important and useful tools to have available. For safety-critical applications, it is essential that direct teleoperation always be available to the human operator. However, due to the enormous amounts of waste that need to be processed, it is also important to incorporate a variety of advanced control methods to enable waste handling operations to proceed more quickly and efficiently. In particular, direct teleoperation of grasping is extremely slow and difficult using conventional methods (joystick control and CCTV camera views), and we therefore provide a variety of operator assistance technologies. In keeping with end-user requirements, we provide a range of control methods which can conveniently be switched between dynamically and ergonomically during operations:
1) Direct teleoperation using arm-hand exoskeletons with haptic feedback.
2) Fully autonomous grasp planning and execution.
3) Variable autonomy - dynamic switching between options 1) and 2), using an augmented reality display to inform the human operator, in real time, of grasps planned by the AI.
4) A shared control mode, wherein the AI uses haptic cues to guide the human operator towards stable grasps.
The system has been implemented on the fully industrial RoMaNS robot test-rig at the NNL Workington nuclear industry site, under full nuclear security and nuclear safety regulations, where it has been tested by nuclear industry workers handling nuclear waste simulants. The system goes far beyond the previous state-of-the-art for the nuclear industry, and has received very positive feedback from experienced industry workers.
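By way of illustration of the dynamic switching between these four modes (a schematic sketch, not the deployed implementation): a supervisor selects, at each control cycle, how operator input and AI output are combined into the command sent to the robot.

```cpp
// Schematic sketch (not the deployed system) of dynamic switching between the
// four control modes listed above: a supervisor selects, per control cycle,
// how operator input and AI output are combined into the robot command.
#include <cstdio>

enum class Mode { DirectTeleop, FullAutonomy, VariableAutonomy, SharedControl };

struct Command { double operatorInput, aiInput; };

double robotCommand(Mode mode, const Command& c) {
    switch (mode) {
        case Mode::DirectTeleop:     return c.operatorInput;
        case Mode::FullAutonomy:     return c.aiInput;
        case Mode::VariableAutonomy: // operator retains authority; the AI's
            return c.operatorInput;  // planned grasps are shown via AR overlay
        case Mode::SharedControl:    // AI guidance blended as haptic cueing
            return 0.7 * c.operatorInput + 0.3 * c.aiInput;
    }
    return 0.0;
}

int main() {
    Command c{0.5, 0.9};
    std::printf("shared-control command: %.2f\n",
                robotCommand(Mode::SharedControl, c));
}
```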