


Feature Article

Autonomous Underwater Optical Imaging


By Chad Collett


Typical ROV front setup.


Autonomy is the future of underwater systems. The cost of the ships, fuel and personnel required to support exploration, oil production and equipment inspection limits the amount of undersea support activity that is financially sustainable. Because visual data (color and resolution) are easily understood by human operators, optical video camera systems have been one of the primary inspection support tools on ROV systems.

While footage can be stored and replayed at any time, operating a camera system is normally a manual process requiring live interaction to adjust focus, zoom and other parameters. The front of an ROV commonly hosts a combination of cameras, LEDs, pan-and-tilt and sonar units. The camera sensors in older systems are SD video; newer systems add HD video to enable high-resolution inspections, as well as digital stills for fine detail. All of these sensors often require real-time human interaction.

The challenge, then, is how to automate the camera system so that it is usable on AUVs or on partially autonomous hybrid ROVs. This article covers the core challenges and the solutions designed to address them.


Image Focus
Target focus sharpness is a product of the depth of field, set by the sensor/lens parameters, and the distance to the target. The depth of field issue can be addressed by proper lens and sensor selection to deepen the focus plane, which reduces the amount of autonomous focus adjustment required.

A more complex problem is determining the distance to the target. Auto-focus systems are often distracted by silt and objects in the water column between the camera and the target, which can send them into a “hunting” mode, where the lens is adjusted near-far-near in search of the sharpest setting. Common AUV systems have altimeters, but these only report the vertical distance to the seafloor, not the diagonal distance to the target.

A method that SubC is developing uses the parallel lasers built into our cameras to perform a laser-based auto-focus. First, following a camera calibration, detect the laser points in the image. Then, calculate the distance to the target using the calibration. Finally, adjust the lens focus distance to this target distance.

The lasers are projected from the camera, so they are always painted on the object in the center of the frame; it does not matter what angle or direction the camera is pointed. Additionally, the lasers provide a linear scale for the captured image, from which objects can be selected for linear measurements.
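A minimal sketch of those three steps is shown below. The red laser color, the 0.10-m laser baseline and the focal length in pixels are illustrative assumptions, and the detection method and focus call are placeholders, not SubC's implementation.

```python
# Sketch of laser-based auto-focus: detect the two parallel laser dots,
# estimate range from their pixel separation, then drive the lens focus.
import cv2
import numpy as np

LASER_BASELINE_M = 0.10      # assumed physical spacing of the parallel lasers
FOCAL_LENGTH_PX = 2500.0     # assumed focal length in pixels, from calibration

def find_laser_dots(bgr_frame):
    """Return centroids of the two largest bright-red blobs (the laser dots)."""
    b, g, r = cv2.split(bgr_frame)
    mask = ((r.astype(np.int16) - g.astype(np.int16)) > 80).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    dots = []
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:2]:
        m = cv2.moments(c)
        if m["m00"] > 0:
            dots.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return dots

def distance_to_target(dots):
    """Pinhole relation: dots a fixed B metres apart appear p pixels apart,
    so range d = f_px * B / p."""
    (x1, y1), (x2, y2) = dots
    p = float(np.hypot(x2 - x1, y2 - y1))
    return FOCAL_LENGTH_PX * LASER_BASELINE_M / p

frame = cv2.imread("still.png")          # one captured frame (example file name)
dots = find_laser_dots(frame)
if len(dots) == 2:
    d = distance_to_target(dots)
    # lens.set_focus_distance(d)  # hypothetical call to drive the focus motor
    print(f"estimated target range: {d:.2f} m")
```

The same pixel separation also gives the linear scale of the image (B metres per p pixels), which is what enables on-screen linear measurements.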


Resolution and Zoom
The purpose of an optical zoom is to increase the pixel density of the image. The lens does this by decreasing the field of view and the perceived distance to the target area. The same number of sensor pixels then covers a smaller view area, which increases the detail of objects. With modern camera technology, high pixel density can instead be achieved by increasing the number of pixels on the sensor.

A practical replacement for manual zoom on an autonomous system could be high-resolution digital stills. While HD live video cameras at 25 to 30 Hz are necessary with human operators, an autonomous system may be fine with 2- to 5-Hz images.

Using simple math that considers camera field of view, image size (in megapixels) and distance to target, it is possible to compare the pixel density of a 10x zoom camera with that of a 12-megapixel stills camera. It turns out that 12-megapixel digital stills deliver comparable pixel density while covering a wider field of view.
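As a sketch of that math, the helper below computes pixels per metre across the scene from the horizontal pixel count, field of view and stand-off distance. The field-of-view and resolution figures are illustrative assumptions, not SubC specifications.

```python
# Pixel density (pixels per metre at the target) for two camera configurations.
import math

def pixel_density(h_pixels, hfov_deg, distance_m):
    """Pixels per metre across the scene at a given stand-off distance."""
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return h_pixels / scene_width_m

d = 3.0  # metres to target (example)

# HD video camera (1,920 px wide) with an assumed 6° field of view at 10x zoom
zoomed_video = pixel_density(1920, 6.0, d)

# 12-megapixel stills camera (4,000 px wide) with an assumed wider 12° field of view
stills = pixel_density(4000, 12.0, d)

print(f"10x zoom video: {zoomed_video:.0f} px/m, 12 MP stills: {stills:.0f} px/m")
```

With these assumed numbers, the two come out comparable in pixel density while the stills camera images roughly twice the width of seafloor.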


SubC’s Aquorea Mk2 hybrid LED strobe.



Lighting, Power and Synchronization
Video systems run at 25 or 30 Hz and have exposure times of up to 40 or roughly 33 ms, which means the sensor, and therefore the lighting, is active essentially 100 percent of the time. This makes video lighting systems consume large amounts of power.

Autonomous systems have power restrictions. An efficient alternative to video is digital stills with significantly shorter exposure durations (1 to 2 ms) taken less often (3 to 5 Hz). For this to work, the camera sensor and the lighting require tight synchronization; because the exposure time is so short, any delay in activating the lighting will make the image dimmer.
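The power saving can be seen with a quick duty-cycle comparison using the figures above:

```python
# Back-of-the-envelope lighting duty cycle: strobed stills vs. continuous video lighting.
pulse_s = 0.002                    # 2-ms strobe pulse
rate_hz = 5                        # 5 stills per second
strobe_duty = pulse_s * rate_hz    # fraction of time the strobe is lit
video_duty = 1.0                   # video lighting is on continuously

print(f"strobe duty cycle: {strobe_duty:.1%}, video lighting: {video_duty:.0%}")
# The strobe is lit roughly 1 percent of the time, so for similar optical output
# the average lighting power drops by about two orders of magnitude.
```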

Common strobe solutions use xenon bulbs. They have a high output, require considerable power and rely on a glass bulb, resulting in a large package that should not be considered reliable for an underwater system. SubC set out to design what we consider the ideal solution using new LED technology (15,000 to 25,000 lumens). Because it is an LED, there are no elements to break or burn out, and it will last 10,000 to 20,000 hr.

It uses low-power trickle charging with adaptive power limiting and an internal charge bank, so the instantaneous power draw on the system is fully adjustable from 20 to 100 W. The charge bank allows high current output (1.3x max pulse) for a full 25,000-lumen pulse. The designed quick reaction time (less than 150 ns) enables shutter speeds as fast as 1/4,000 sec. The output is trigger-synchronized with the sensor of SubC cameras, and the LED is driven directly from the camera, resulting in simpler system integration.

The beam angle was designed to be adjustable: the front element is replaceable to produce beams from 60° to 140°, supporting multiple applications.

The LED is thermally coupled to the water, meaning a higher output can be achieved at lower power. There is built-in thermal protection in case the unit is run in air, as well as protection against short circuits and wiring faults.


Inspection Speed and Data Logging
Video frame rates and exposure times also limit inspection speed. Moving at more than 0.5 kt. will cause blurring with video systems at a 25- to 30-Hz frame rate. It again makes sense to use digital-still systems: short exposure times, tightly synced to the lighting strobes, enable quick coverage of an area.

Many of the inspection parameters can be automated with an autonomous inspection planner built into the camera. This is possible with the addition of another piece of technology built into the camera: an inertial measurement unit (IMU). To determine the maximum platform speed, several factors must be considered: distance from camera to seafloor (platform altimeter and camera laser points); angle of the camera unit (IMU tilt data); camera field of view (intrinsic camera parameters); required image overlap (tracked against IMU positional data); and amount of light (calculated from sample image parameters).
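The geometry behind such a planner can be sketched as follows; the tilt, field-of-view and 80 percent overlap values are illustrative assumptions, and this bound ignores motion blur and lighting limits.

```python
# Overlap-limited platform speed from altitude, camera tilt and field of view.
import math

def max_platform_speed(altitude_m, tilt_deg, fov_along_deg,
                       overlap_fraction, capture_rate_hz):
    """Upper bound on along-track speed that still achieves the requested overlap."""
    slant_m = altitude_m / math.cos(math.radians(tilt_deg))        # range to footprint centre
    footprint_m = 2.0 * slant_m * math.tan(math.radians(fov_along_deg) / 2.0)
    advance_m = footprint_m * (1.0 - overlap_fraction)             # travel allowed between shots
    return advance_m * capture_rate_hz                             # metres per second

v = max_platform_speed(altitude_m=7.0, tilt_deg=20.0, fov_along_deg=45.0,
                       overlap_fraction=0.8, capture_rate_hz=5.0)
print(f"overlap-limited speed ≈ {v:.1f} m/s ({v * 1.944:.1f} kt)")
```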

The user or AUV sets the camera’s image overlap. Altimeter, depth, positional, heading and time data are fed in from the AUV system at constant intervals. The camera uses its internal IMU, along with the AUV sensor data, to build a model of the 3D distance between images.

Each image would have the following stored parameters: image key (file name), IMU pitch, IMU roll, IMU heading, IMU position, system time, system easting, system northing, system depth, system altimeter, and system heading. The camera could then use the data to calculate automatically when a photo should be taken to get the desired overlap.
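One possible way to hold that per-image record, together with a simple overlap-based trigger, is sketched below; the field names, units and helper are illustrative, not an actual SubC log format.

```python
# Per-image metadata record and an overlap-based capture trigger (illustrative).
from dataclasses import dataclass

@dataclass
class ImageRecord:
    image_key: str        # file name
    imu_pitch: float      # degrees
    imu_roll: float
    imu_heading: float
    imu_position: tuple   # relative position from the IMU, metres
    system_time: float    # seconds
    easting: float        # metres
    northing: float
    depth: float
    altimeter: float      # metres above seafloor
    system_heading: float

def should_capture(last: ImageRecord, easting: float, northing: float,
                   advance_needed_m: float) -> bool:
    """Trigger a new still once the platform has moved far enough from the
    last exposure to keep the requested image overlap."""
    moved = ((easting - last.easting) ** 2 + (northing - last.northing) ** 2) ** 0.5
    return moved >= advance_needed_m
```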

Recognizing that many platforms are different, ideally the camera can optionally accept PPS plus GGA/ZDA sentences to keep time, or output a TTL/RS232/Ethernet pulse at the start and end of each exposure. Most AUVs have an inspection package, so it is important to sync image exposure times with the AUV’s relative position in some way, either in the camera or in the AUV. With all the data logged, and proper sync between camera and strobe, autonomous inspection platform speeds of 5 kt. at 5 to 10 m from the seafloor should be possible.
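As an illustration of the timekeeping option, the sketch below parses an NMEA ZDA sentence into a UTC timestamp; checksum validation and alignment to the PPS edge are omitted, and the sentence shown is only an example.

```python
# Minimal ZDA parser: $--ZDA,hhmmss.ss,day,month,year,zone_h,zone_m*checksum
from datetime import datetime, timezone

def parse_zda(sentence: str) -> datetime:
    """Convert a ZDA time sentence into a timezone-aware UTC datetime."""
    body = sentence.split("*")[0]        # drop the checksum field
    fields = body.split(",")
    hhmmss, day, month, year = fields[1], int(fields[2]), int(fields[3]), int(fields[4])
    hour, minute = int(hhmmss[0:2]), int(hhmmss[2:4])
    second = float(hhmmss[4:])
    return datetime(year, month, day, hour, minute, int(second),
                    int((second % 1) * 1e6), tzinfo=timezone.utc)

print(parse_zda("$GPZDA,160012.71,11,03,2024,00,00*63"))
```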



Example digital still from an underwater platform.


Interpretation of Environment
Traditional optical inspection involves manual review of video and image data, which often represents a two-fold time cost of the original inspection: the footage is captured, taking inspection time x, and is then completely reviewed later, taking time y. An ideal system would interpret the inspection autonomously, flag anomalies and log the data with the video. The flagged anomalies could then be quickly reviewed by a human in a much shorter time y.

There are, of course, several challenges to overcome when developing such an automated system. How to adapt to changing water conditions (turbidity)? How to determine what objects constitute an anomaly? How to display the data to the user in a time-saving and meaningful way?

The first problem, water conditions, can be partially solved with an automatic image enhancement algorithm. Based on various image parameters, the many camera settings can be adjusted by a program running on the camera’s operating system. Detected features can be enhanced for better color representation, sharpness and clarity, assisting the human reviewer in analyzing the footage.
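The article does not describe the enhancement algorithm itself; as one plausible illustration, the sketch below applies contrast-limited adaptive histogram equalization (CLAHE) to the lightness channel, a common step in underwater image enhancement, using OpenCV.

```python
# Simple contrast enhancement on the lightness channel of an underwater still.
import cv2

def enhance(bgr_frame, clip_limit=2.0):
    """Boost local contrast while roughly preserving colour balance."""
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

frame = cv2.imread("raw_still.png")          # example input file
cv2.imwrite("enhanced_still.png", enhance(frame))
```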

Once a reliable object and feature change detection algorithm is developed, flags can easily be set in the data to alert the human operator. Over time, as detections are refined through the algorithm-reviewer-algorithm feedback process, the proportion of true flags to false flags will increase. Additionally, if the object requiring detection is already understood, it can be preprogrammed into the detection algorithm. Many objects could be preloaded into the algorithm and assigned names; tags and decisions could then be made by the autonomous system based on a positive detection.

There are many methods for displaying the autonomous inspection results to the reviewer. Each time an object of interest is detected, or the system gives the camera a command to capture footage, it makes sense to take multiple overlapping digital stills. Because all the positional sensor data are logged with each still, it is then possible to build an image mosaic: the individual stills can be positioned in 3D space relative to each other, and common objects and features on the target or seafloor can be used to group related stills. The benefits of combining multiple images into one large image, versus reviewing many images, are time savings and a better large-scale perspective of the area.
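As a rough illustration of positioning stills from their logged coordinates, the sketch below places each still on a shared canvas using easting/northing and a ground-sample scale derived from altitude and field of view; registration refinement, blending and bounds checking are left out, and the helper names are illustrative.

```python
# Place geo-tagged stills on a common mosaic canvas (nadir, flat-seafloor case).
import math

def metres_per_pixel(altitude_m, hfov_deg, image_width_px):
    """Ground sample distance of one still."""
    return 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0) / image_width_px

def place_still(canvas, origin_easting, origin_northing, m_per_px,
                still, easting, northing):
    """Paste one still at its surveyed position on the canvas array."""
    h, w = still.shape[:2]
    col = int((easting - origin_easting) / m_per_px)
    row = int((origin_northing - northing) / m_per_px)   # north is "up" on the canvas
    canvas[row:row + h, col:col + w] = still
    return canvas
```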

Automating a camera system for use on an AUV provides the additional advantage of assisting human operators who are flying ROVs. The generation of data beyond video and digital stills also decreases the time to decisions. Auto-detection of objects of interest can increase the value of inspections by ensuring fewer important objects are missed. It also decreases the time clients spend reviewing inspections onshore.


Application of Concepts
Much of this article covers new concepts in development. SubC strives to develop, test in the real world and rapidly deploy its new concepts. Linear laser measurements and distance to target, based on the parallel laser dots built into the cameras, will be released soon as a free software update for SubC customers.



Chad Collett began his underwater career in 2001 with the Canadian Navy as an inspection diver. He built upon this experience with the Canadian National Research Council, Oceaneering and other ocean-centered organizations. He founded SubC in 2010 to build the highest-quality underwater video equipment.




