With the rapid growth in the number of end-of-life smartphones, enhancing the automation and intelligence of their recycling processes has become an urgent challenge. At present, the disassembly of discarded smartphones relies predominantly on manual labor, which is not only inefficient but also associated with environmental pollution and high labor intensity. End-of-life smartphone recycling commonly involves complex situations such as stacking and occlusion, where accurate pose information provides critical data for precise robotic grasping and thereby improves the automation and efficiency of recycling and disassembly. This research proposes a pose estimation method tailored for stacked discarded smartphones that integrates an improved Mask R-CNN instance segmentation model with Iterative Closest Point (ICP) point cloud registration. The method first segments stacked smartphones accurately using a model trained on both real and synthetic datasets. Pose information is then extracted through the proposed estimation approach and used to guide the robotic arm's grasping actions, improving sorting efficiency and minimizing manual intervention. To enhance practical applicability, an interactive pose recognition system is developed, enabling visualization of and dynamic interaction with pose data. Experimental results demonstrate the effectiveness of the transfer learning strategy, which combines a large volume of synthetic data with a small amount of real-world data. This research offers valuable theoretical insights and technical solutions for advancing the automated and intelligent disassembly of end-of-life smartphones.
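Although the abstract does not give implementation details, the segmentation-then-registration pipeline it describes can be illustrated with a minimal sketch. Assuming Open3D for point cloud handling, the sketch below back-projects the depth pixels inside a predicted instance mask into a point cloud and aligns a reference smartphone template to it with point-to-point ICP to recover a 6-DoF pose. The function names, camera intrinsics, and threshold values are illustrative assumptions, not the authors' code; the instance mask is taken as given from the segmentation stage.

```python
import numpy as np
import open3d as o3d

# Hypothetical camera intrinsics (fx, fy, cx, cy); placeholders, not values from the paper.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def mask_to_point_cloud(depth_m, mask):
    """Back-project the depth pixels covered by one instance mask into a 3-D point cloud."""
    v, u = np.nonzero(mask)          # pixel coordinates inside the mask
    z = depth_m[v, u]
    valid = z > 0                    # drop missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.stack([x, y, z], axis=1))
    return pcd

def estimate_pose_icp(scene_pcd, template_pcd, init=np.eye(4), threshold=0.01):
    """Align a reference smartphone template to the segmented scene cloud with ICP.

    Returns the 4x4 transform placing the template in the camera frame (the estimated pose)
    and the registration fitness score.
    """
    result = o3d.pipelines.registration.registration_icp(
        template_pcd, scene_pcd, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=100))
    return result.transformation, result.fitness

# Illustrative usage: `mask` would come from the Mask R-CNN stage, `depth` from an
# RGB-D sensor, and `template` from a reference smartphone scan.
# scene = mask_to_point_cloud(depth, mask)
# pose, fitness = estimate_pose_icp(scene, template)
```

In practice a coarse initial transform (e.g., from the mask centroid and a global feature-based registration) would be supplied as `init`, since plain ICP only refines an alignment that is already roughly correct.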
Keywords: End-of-life management; Instance segmentation; Iterative closest point; Pose estimation; Stacked smartphones.