Feature Article

Autonomous Inspection of Undersea Structures
By Dan McLeod • John Jacobson • Mark Hardy
Frequent risk-based assessment of the condition and integrity of subsea equipment is vital to predicting the life of that equipment and preventing the uncontrolled release of hydrocarbons into the environment. Oil and gas operators must know the state of equipment that is often thousands of meters below the ocean's surface, shrouded in darkness. Traditionally, equipment inspection is accomplished with visual sensors, such as video or still cameras, mounted on ROVs hardwired to the operators controlling the vehicle from a ship above the inspection site. This general visual inspection (GVI) requires significant topside support equipment and numerous skilled operators on site, along with a large vessel support crew, to control, observe, and maintain the ROV and interpret the images. While image quality has improved with the advent of digital HD sensors, the images are often degraded by camera movement and water turbidity, reducing inspection effectiveness. In addition, delivering the data to clients requires laborious review of recorded video that must be archived and revisited by experts for interpretation.
Lockheed Martin's Marlin AUV operating in the Gulf of Mexico.
With the advent of 3D imaging sonar and 3D lidar sensors, visual images can be augmented or replaced altogether with 3D models of the subsea equipment geolocated in their respective positions on the seabed. These 3D models can then be imported into a variety of third-party software tools and compared digitally with the as-built models of the structure, simplifying engineering analysis. 3D models also form the basis for automated change detection in subsequent inspections.
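The digital comparison against as-built models described above can be reduced to a simple idea: for each point in the inspection model, measure its distance to the nearest point of the as-built reference, and flag points that deviate beyond an engineering tolerance. The sketch below illustrates that idea only; it is not Lockheed Martin's software, and the pipe geometry, tolerance, and brute-force nearest-neighbor search are invented for illustration (production tools use spatial indexing on much larger clouds).

```python
import math

def nearest_distance(p, model_points):
    """Distance from point p to its nearest neighbor in the as-built model."""
    return min(math.dist(p, q) for q in model_points)

def detect_changes(scan, model, tolerance):
    """Return scan points farther than `tolerance` from any as-built point."""
    return [p for p in scan if nearest_distance(p, model) > tolerance]

# As-built model: a short pipe run sampled every 0.1 m along x (invented data).
model = [(0.1 * i, 0.0, 0.0) for i in range(50)]

# Inspection scan: the same pipe, with one point displaced laterally by 0.3 m
# (standing in for a dent or dropped object).
scan = [(0.1 * i, 0.0, 0.0) for i in range(50)]
scan[25] = (2.5, 0.3, 0.0)

print(detect_changes(scan, model, tolerance=0.05))  # flags [(2.5, 0.3, 0.0)]
```

The same distance test, run between two inspection models collected at different times, is the basis of automated change detection between surveys.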
When these sensors are employed on an AUV, such as the Marlin developed by Lockheed Martin (Bethesda, Maryland), structural inspections can be executed autonomously, reducing the need to deploy specialists and highly trained ROV operators offshore.
Today, the use of 3D sonar and 3D lidar is limited to deployment on tripods, or on ROVs set on the seabed, to provide a stable platform. This approach yields a limited field of view, both horizontally and vertically, essentially imaging one portion of the scene at a time. Imaging an entire structure requires the tripod or ROV to be moved incrementally until the sensor's view has gradually encompassed the whole structure. The images are then post-processed into a mosaic revealing only the area covered by the sensor's viewing field. When mounted on an ROV, the sensors must be stabilized by the ROV fixing itself to a rigid structure and/or sitting on the seabed while measurements are made. Both techniques produce limited, incomplete views of the scene and, therefore, limited information for the client, while requiring repeated repositioning of the sensor and significant post-processing to generate a final image.
Lockheed Martin's Marlin not only carries the sensors but also autonomously coordinates them with the vehicle's navigation and control system to produce high-quality, motion-compensated 3D models within minutes of retrieving the vehicle to the surface. 3D imaging "on the fly" enables users to collect a whole-perspective view of the subsea field, with accurate 3D models generated for every structure imaged over the course of a data collection mission. These models are not limited to the scanning field of view of a tripod-mounted sensor, and Lockheed Martin's Feature Based Navigation system produces models that are accurate to millimeters or centimeters, depending on the 3D sensor employed.
Lockheed Martin and 3D at Depth LLC (Boulder, Colorado) are incorporating the SL2 3D lidar onto the proven Marlin AUV, resulting in a transformational capability to produce georegistered 3D images of subsea equipment on-the-fly with millimeter resolution.
Given the precision placement and small diameter of a laser's spot size, the high scan rate, and the potential distortion of an image resulting from sensor motion, tight coupling of the lidar sensor with the Marlin AUV's motion is required to produce accurate 3D models. Our team's goal is to collect high-precision georegistered image points underwater at a rate of 40,000 points per second while traveling at 2 knots, processing the data in real time and building a 3D image on the fly.
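At its core, that tight coupling means tagging each laser return with the vehicle's navigation solution at the instant of measurement, then rotating and translating the sensor-frame point into a fixed world frame. The sketch below shows the geometry with a yaw-only rotation and invented pose values; the real system fuses full attitude (pitch, roll), position, and the sensor-mount offset at the lidar's pulse rate.

```python
import math

def georegister(point_sensor, vehicle_pos, heading_rad):
    """Rotate a sensor-frame point by vehicle heading and translate by
    vehicle position to obtain world-frame coordinates (yaw-only sketch;
    a full solution also applies pitch, roll, and a lever-arm offset)."""
    x, y, z = point_sensor
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    xw = vehicle_pos[0] + c * x - s * y
    yw = vehicle_pos[1] + s * x + c * y
    zw = vehicle_pos[2] + z
    return (xw, yw, zw)

# A return 10 m ahead of the sensor while the AUV, at (100, 200, -1500) m,
# heads 90 deg: the point lands 10 m along the +y world axis from the vehicle.
p = georegister((10.0, 0.0, 0.0), (100.0, 200.0, -1500.0), math.radians(90.0))
print([round(v, 3) for v in p])  # [100.0, 210.0, -1500.0]
```

Because every point is placed in world coordinates as it is measured, the model accumulates correctly even while the vehicle is moving, rather than being assembled from static snapshots afterward.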
Using software algorithms developed for lidar mounted on land-based and aerial vehicles, Lockheed Martin engineers produced detailed 3D models from point clouds generated by the CodaOctopus (Edinburgh, Scotland) Echoscope sonar. Lockheed Martin is now adapting this proven sonar software to process the SL2 lidar point clouds and generate 3D models on the fly.
Testing conducted by 3D at Depth and its partners has confirmed the SL2's baseline performance in a number of operating environments, which makes integrating the SL2 onto the Marlin less complex and difficult.
Dan McLeod is the deputy director of Offshore Systems & Sensors at Lockheed Martin Mission Systems & Training in Riviera Beach, Florida. He has been a key contributor to Lockheed Martin's undersea programs, beginning at Perry Technologies and continuing through its acquisition by Lockheed Martin in 1990. His underwater vehicle career spans three decades and includes the design, development, and management of manned submersibles, saturation diving systems, numerous ROV systems, and now AUV systems for military and commercial use.
John Jacobson is a senior program manager for Lockheed Martin and has more than 30 years of experience in the design and manufacture of underwater systems, including ROVs, AUVs, subsea tooling and manned submersibles. He has held various senior management positions with subsea technology companies, including Perry Tritech, Stolt Offshore, OceanWorks International and Lockheed Martin.
Mark Hardy is the co-founder and subsea lidar evangelist for 3D at Depth. Throughout his career in technology, he has solved mission-critical business problems by leveraging GIS and 3D data sets, including founding two previous start-ups that used 3D and GIS information to provide decision support for utilities, telecommunications, and the military. He is originally from the Boston area but found his way to Texas, where he graduated from the University of Texas with a B.S. in mechanical engineering. He resides in Colorado.