Mapping life on the seafloor

I’m experimenting with mapping the seafloor for restoration projects. This is what I have done so far:

  1. Record a time-lapse sequence, swimming as slowly as possible (GoPro Hero 7 Black, Linear, 0.5sec)
  2. Buy Agisoft Metashape Standard (about NZ$300)
  3. Import photos to chunk
  4. Align photos (Highest accuracy gave duplication errors, High worked best; Reference preselection [Sequential]; Guided image matching)
  5. Don’t clean up the point cloud (not needed for orthomosaics)
  6. Build mesh (use sparse cloud source data, Arbitrary 3D)
  7. Build texture (Adaptive orthophoto, Mosaic, Enable hole filling, Enable ghosting filter)
  8. Capture view (hide cameras and other visual aids first then export .PNG at a ridiculous resolution)

Agisoft Metashape worked much better than Adobe’s photo-stitching software (Photoshop & Lightroom) on the same data. But I need more overlapping images: none of the software packages could match every image in any of the four test sequences I did.

I’m going to test shooting in video next. The frames will be smaller at 2704×1520 (if I stick with Linear to avoid extra processing for lens distortion) instead of the time-lapse’s 4000×3000, but I’m hoping all the extra frames will more than compensate (2FPS => 24FPS).

In theory an ROV would be better, but I don’t think there are any on the market that know where they are based on what they can see. All the workarounds for knowing where you are underwater are expensive; here are two: UWIS & Nimrod. I want to see if we can do this with divers and no location data. I don’t think towing a GPS will be accurate enough to match up the photos, though it does seem to work with drones photographing bigger scenes (I want this to work in 50cm visibility). I expect that if I want large complete images the diver will need to follow another diver who has left a line on the seafloor. One advantage of this is that the line could have a scale on it, but I’m hoping to avoid it as the lines will be ugly 😀 So far I can do only two turns before the alignment fails. There are three patterns that might work (Space invaders, Spirals and Pick up sticks). For my initial trials I am focusing on Space invaders.

Video provides lots more frames and the conversion is easy. A land-based test with GPS disabled, multiple turns, 2500 photos, 2704 x 2028 linear, space invader pattern at 1.2m from the ground worked perfectly. However I can’t get it to work underwater. In every test so far Metashape will only align 50-100 frames. I tried shooting on a sunny day, which was terrible, as the reflection of the waves dancing on the seafloor confuses the software. But two follow-up shoots also failed; when I look at the frames Metashape can’t match, I just don’t see why it can’t align them. These two images are in sequence; one gets aligned and the next one is dropped!

Here is what the test footage looks like (I have increased the contrast).

I have also tried exporting the frames at 8fps to see if the alignment errors are happening because the images are too similar, but I got similar results (just faster to process).
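If you want to script the video-to-frames conversion yourself, here is a minimal sketch using Python and OpenCV. The file names and target fps are placeholders (my assumptions), not my actual settings:

```python
# Minimal sketch: pull frames out of a GoPro video at a chosen rate with
# OpenCV. Paths and fps values are placeholders - adjust for your own footage.
import os
import cv2

def export_frames(video_path: str, out_dir: str, target_fps: float) -> int:
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    source_fps = cap.get(cv2.CAP_PROP_FPS)          # e.g. 24 for GoPro video
    step = max(1, round(source_fps / target_fps))   # keep every Nth frame
    kept = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:                                  # end of video
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{kept:05d}.jpg"), frame)
            kept += 1
        index += 1
    cap.release()
    return kept

# e.g. the 8fps experiment mentioned above:
print(export_frames("dive.mp4", "frames_8fps", target_fps=8))
```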

Detailed advice from Metashape:

Since you are using Sequential preselection, you wouldn’t get matching points for the images from the different lines of the “space invader” or “pick up sticks” scenarios, or from different radii of the “spiral” scenario.

If you are using “space invader” scenario and have hundreds or thousands of images, it may be reasonable to align the data in two iterations: with sequential preselection and then with estimated preselection, providing that most of the cameras are properly aligned.

As for the mesh reconstruction – using the Sparse Cloud source would give you a very rough model, so you may consider building the model from the depth maps with medium/high quality in Arbitrary mode. As for the texture reconstruction, I can suggest generating it in Generic mode, then in the model view switching from Perspective to Orthographic, orienting the viewpoint the desired way and using the Capture View option to get a kind of planar orthomosaic projection of your model.

Aligning with ‘Sequential’ only ever gets about 5% of the shots. Repeating the alignment with ‘Estimated’ picks up the rest, but the camera alignment comes out curved. I think I have calibrated the cameras to 24mm (it’s hard to tell whether that has been applied) but it doesn’t seem to change things.

I tried an above-water test and made a two-minute video of the Māori fish dams at Tahuna Torea. I used the same settings as above, but dropped the quality down to medium. It looks great!

The differences between above and below water are: camera distance to subject, flotsam, visibility / image quality, and colour range. If the footage I am gathering is too poor for Metashape to align, it might mean we need less suspended sediment in the water to make the images. That’s a problem, as the places I want to map are suffering from suspended sediment – which is why they would benefit from shellfish restoration.

The Agisoft support team are awesome. They processed my footage with f = 1906 in the camera calibration, aligned the photos without using preselection, and set a 10,000 tie point limit. The alignment took 2.5 days but worked perfectly (click on the image below). There are a few glitches but I think the result is good enough for mapping life on the seafloor. I will refine the numbers a bit and post them in a separate blog post, wahoo!

Update Jul 2022: Great paper explaining the process with more sophisticated hardware


Final method

Here is my photogrammetry process / settings for GoPro underwater. I am updating them as I learn more. Please let me know if you discover an improvement. Thanks to Vanessa Maitland for her help refining the process.

Here is an example image made using the process, the area is about 8m in diameter, click on the image to see it full size.

Step 1: Make a video of the seafloor using a GoPro

  • If you’re at less than 5m deep you will need to go on a cloudy day for even light
  • Make sure you shoot 4k, Linear. Also make sure the orientation is locked in preferences.
  • Record your location and the direction you’re going to swim if you want to put your orthomosaic on a map
  • Swim in a spiral using a line attached to a peg to keep the distance from it even
  • Don’t leave any gaps or you will generate a spherical image

Step 2: Edit your video

  • There are lots of software packages that will do this. I use Adobe After Effects, where I can increase the contrast in the footage and add more red depending on the depth (a rough scripted stand-in is sketched below). You might also find it easier to trim your video here, but you can also do it in Step 5.
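If you don’t have After Effects, here is a very rough Python/OpenCV stand-in for the same grade: lift the contrast and add back some of the red lost with depth. The gain values are illustrative guesses, not calibrated numbers; tune them per site and depth.

```python
# Rough grade sketch: contrast stretch plus a red-channel boost.
# The 1.3 / 1.2 gains are guesses - tune per dive.
import cv2
import numpy as np

def grade(frame: np.ndarray, contrast: float = 1.3, red_gain: float = 1.2) -> np.ndarray:
    # Stretch contrast around mid-grey (beta recentres the histogram).
    out = cv2.convertScaleAbs(frame, alpha=contrast, beta=128 * (1 - contrast))
    # OpenCV frames are BGR, so channel 2 is red.
    out = out.astype(np.float32)
    out[:, :, 2] *= red_gain
    return np.clip(out, 0, 255).astype(np.uint8)

cap = cv2.VideoCapture("dive.mp4")   # placeholder file name
ok, frame = cap.read()
if ok:
    cv2.imwrite("graded_preview.jpg", grade(frame))
cap.release()
```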

Step 3: Install and launch Agisoft Metashape Standard.

  • These instructions are for version 1.8.1

Step 4: Use GPU.

  • In ‘Preferences’, ‘GPU’, select your GPU if you have one. I also checked ‘Use CPU when performing accelerated processing’.
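If you have the Professional edition, the same preference can be set from the Python console (Standard has no scripting, so use the dialog instead). A minimal sketch:

```python
import Metashape

# Turn on every detected GPU and keep the CPU working alongside them.
Metashape.app.gpu_mask = 2 ** len(Metashape.app.enumGPUDevices()) - 1
Metashape.app.cpu_enable = True
```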

Step 5: Import video

  • From the menu choose: ‘File’, ‘Import’, ‘Import Video’.
  • The main setting to play with here is the ‘Frame Step’. We have had success using every third frame, which cuts down on processing time.
  • If you have multiple videos you will have to import multiple chunks. I recommend combining them before processing using ‘Workflow’, ‘Merge Chunks’; I have had better results doing this than processing each chunk individually and then choosing ‘Workflow’, ‘Align Chunks’.

Step 6: Camera calibration

  • From the menu choose: ‘Tools’, ‘Camera calibration’
  • Change ‘Type’ from ‘Auto’ to ‘Precalibrated’
  • Change the ‘f’ value to 1906
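For anyone scripting this step, here is a hedged sketch using the Metashape Python API (Professional edition only). The frame dimensions below are my assumption for 4K Linear footage; match them to your extracted frames.

```python
import Metashape

chunk = Metashape.app.document.chunk
sensor = chunk.sensors[0]

calib = Metashape.Calibration()
calib.width, calib.height = 3840, 2160   # assumption: 4K Linear frames
calib.f = 1906                           # the value that worked for my GoPro
sensor.user_calib = calib                # equivalent of Type = 'Precalibrated'
```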

Step 7: Align photos

  • From the menu choose: ‘Workflow’, ‘Align Photos’
  • Check ‘Generic preselection’
  • Uncheck ‘Reference preselection’
  • Uncheck ‘Reset current alignment’
  • Select ‘High’ & ‘Estimated’ from the dropdown boxes
    Under ‘Advanced’ choose:
  • ‘Key point limit’ value to 40,000
  • ‘Tie point limit’ value to 10,000
  • Uncheck ‘Guided image matching’ & ‘Adaptive camera model fitting’
  • Leave ‘Exclude stationary tie points’ checked

If not all cameras are aligned, try Step 8; otherwise skip to Step 9.

Step 8: Align photos (again)

  • From the menu choose: ‘Workflow’, ‘Align Photos’
  • Uncheck ‘Generic preselection’
  • Check ‘Reference preselection’
  • Uncheck ‘Reset current alignment’
  • Select ‘High’ & ‘Sequential’ from the dropdown boxes
    Under ‘Advanced’ choose:
  • ‘Key point limit’ value to 40,000
  • ‘Tie point limit’ value to 4,000
  • Uncheck ‘Guided image matching’ & ‘Adaptive camera model fitting’
  • Leave ‘Exclude stationary tie points’ checked

Now all the photos should be aligned. If not, repeat Steps 7 & 8 with higher settings and check ‘Reset current alignment’ in Step 7 only. I have been happy with models that have up to 10% of photos unaligned.
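For Professional-edition users, here is how I understand Steps 7 and 8 would look as a script, using the 1.8 Python API names. Treat it as a sketch rather than a tested pipeline:

```python
import Metashape

chunk = Metashape.app.document.chunk

# Pass 1 (Step 7): 'High' accuracy is downscale=1, generic preselection only.
chunk.matchPhotos(downscale=1,
                  generic_preselection=True,
                  reference_preselection=False,
                  keypoint_limit=40000,
                  tiepoint_limit=10000,
                  guided_matching=False)
chunk.alignCameras(reset_alignment=False)

# Pass 2 (Step 8): sequential preselection to pick up any unaligned cameras.
if any(camera.transform is None for camera in chunk.cameras):
    chunk.matchPhotos(downscale=1,
                      generic_preselection=False,
                      reference_preselection=True,
                      reference_preselection_mode=Metashape.ReferencePreselectionSequential,
                      keypoint_limit=40000,
                      tiepoint_limit=4000,
                      guided_matching=False)
    chunk.alignCameras(reset_alignment=False)
```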

Step 9: Tools / Optimize Camera Locations

Just check the checkboxes below (the default settings):

  • Fit f, Fit k1, Fit k2, Fit k3
  • Fit cx, cy, Fit p1, Fit p2

Leave the other checkboxes (including Advanced settings) unchecked.
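The scripted equivalent is a single call (Professional edition; a sketch, with flags mirroring the checkboxes above):

```python
import Metashape

chunk = Metashape.app.document.chunk
chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_k1=True, fit_k2=True, fit_k3=True,
                      fit_p1=True, fit_p2=True,
                      adaptive_fitting=False)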

Step 10: Resize region

Use the region editing tools in the graphical menu to make sure the region covers all the photos you want to turn into a 3D mesh. You can change what is displayed in the viewport under ‘Model’, ‘Show/Hide Items’.
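If you are scripting this (Professional edition), resetRegion() refits the bounding box around the tie points automatically; you can then tweak it by hand. A sketch:

```python
import Metashape

chunk = Metashape.app.document.chunk
chunk.resetRegion()

region = chunk.region
region.size = region.size * 1.2   # e.g. grow 20% in every direction
chunk.region = region
```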

Step 11: Build dense cloud

  • From the menu choose: ‘Workflow’, ‘Build Dense Cloud’
  • Quality ‘High’
  • Depth filtering ‘Moderate’
  • Uncheck all other boxes.
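Via the 1.8 API this step is two calls (a sketch; note buildDenseCloud was renamed buildPointCloud in Metashape 2.x):

```python
import Metashape

chunk = Metashape.app.document.chunk
chunk.buildDepthMaps(downscale=2,   # 'High' quality
                     filter_mode=Metashape.ModerateFiltering)
chunk.buildDenseCloud()
```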

Step 12: Build mesh

  • From the menu choose: ‘Workflow’, ‘Build Mesh’
  • Select ‘Dense cloud’, ‘Arbitrary’
  • Select ‘Face count’ ‘High’
  • Under ‘Advanced’ leave ‘Interpolation’ ‘Enabled’
  • Leave ‘Calculate vertex colours’ checked
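The same step via the API (hedged sketch, 1.8 names):

```python
import Metashape

chunk = Metashape.app.document.chunk
chunk.buildModel(source_data=Metashape.DenseCloudData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.HighFaceCount,
                 interpolation=Metashape.EnabledInterpolation,
                 vertex_colors=True)
```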

Step 13: Build texture

  • From the menu choose: ‘Workflow’, ‘Build Texture’
  • Select ‘Diffuse map’, ‘Images’, ‘Generic’, ‘Mosaic (default)’ from the dropdown menus.
  • The texture size should be edited to suit your output requirements. The default is ‘4096 x 1’ (one 4096-pixel texture page).
  • Under ‘Advanced’, turn off ‘Enable hole filling’ & ‘Enable ghosting filter’ if you’re using the image for scientific rather than advocacy purposes.
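And the API version of this step (hedged sketch; 8192 is just an example texture size):

```python
import Metashape

chunk = Metashape.app.document.chunk
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending,
                   texture_size=8192,
                   fill_holes=False,        # off for scientific use
                   ghosting_filter=False)   # off for scientific use
```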

Step 14: Export orthomosaic

You can orient the view using the tools in the graphical menu. Make sure the view is in orthographic before you export the image (press 5 on the keypad). Then choose ‘View’, ‘Capture View’ from the menu. The maximum pixel dimensions are 16,384 x 16,384. Alternatively, you can export the texture.

==

Let me know if you have experimented with converting time-lapse / hyperlapse video to photogrammetry. There may be some advantages.

Stream Health Monitoring guides

As communities get increasingly worried about the declining quality of their waterways, there is more interest in stream health assessments. I am a huge fan of the Waicare Invertebrate Monitoring Protocol (WIMP), which is simple enough that school students can use it. However, the Waicare programme has been largely defunded by Auckland Council and there is no way for the public to share WIMP data. NIWA and Federated Farmers of New Zealand have put together https://nzwatercitizens.co.nz/ based on the New Zealand Stream Health Monitoring and Assessment Kit (SHMAK). It is great but incredibly hard to use; the manual is horrific. I believe this is being addressed but it will take years. To help, the Science Learning Hub has made this great guide for teachers and students. NIWA have put together some videos. They are not published together anywhere online, so I have posted the list below:

My macro setup Jan 2019

Updated from 2017. I have upgraded to some second-hand 600EX-RTs, a radio transmitter, a third flash and some custom-designed, 3D-printed soft boxes. Files uploaded here. The soft boxes are printed in transparent PLA, which has a naturally frosty finish and produces lovely diffuse shadows (0.2mm @ 5 layers). I painted them black & yellow and lined the mouth with black tape so as not to scratch the flashes. The 600EX-RTs are quite heavy and I had to use epoxy glue to reinforce the cold shoes. I fibreglassed a giant nail to the base of a Manfrotto monopod to create the portable outdoor light stand.

Example photo of a native wasp (Netelia ephippiata).

3D printed stencil

I tried 3D printing this stencil for a penguin box so I didn’t have to cut out the letters. Five layers at 0.2mm in PLA. It worked great and the thin lines came out really well, but you can’t leave the stencil in the sun or it warps!

Night counts

I have been counting a lot of birds lately, trying to build a solid picture of how the shorebirds use the Tāmaki Estuary. In addition to regular wader counts I decided to also count them at night, as they might use the roosting areas differently. After testing various devices, including long-exposure photography, Pieter and I decided on expensive gear for the survey: I bought a Luna Optics LN-DM50-HRSD Digital Night Vision Monocular (example image above) and Pieter purchased a Pulsar Helion XP38 Thermal Imaging Handheld. Using these devices helped us count the birds without disturbing them, which was quite important to us. Details of the trial here: Tamaki Estuary shorebird survey – Wildlands.

The feeding observations were really interesting. Though in regards to extra light – it was disappointing not to find shorebirds at Mt Wellington War Memorial Reserve, where I found banded dotterel and flocks of SIPO at night last year. The reserve now has large lights for playing sports in the dark.

I think the trial study has given us enough data to understand how the shorebirds use the estuary at night. More detail would be interesting but is unlikely to affect future management decisions. Here is the Raw data. I plan to keep a better eye on the Pakuranga Sailing Club and include the data in a report to Council on roosting in the Tāmaki Estuary.

== SUPPLEMENTARY NOTES ==

Access to viewing points was limited because we wanted to do the survey without disturbing the birds. All the waders identified were more easily scared at night with the exception of the mixed flock at the Pakuranga Sailing Club that was unusually calm in our presence (both day and night). Day counts followed the limited routes of the night counts for consistency, however a few birds were often seen during the day that would have been hard to see at night from the same vantage point.

No variations in flock shape were observed between day and night. The main thing that correlated with flock shape was whether the SIPO were feeding. The SIPO flock spread out over 100m in diameter when feeding at Point England, both on the sports fields (at night) and the paddocks (during the day). The pied stilts were always grouped closely together.

I think the roosting patterns suggest the waders are primarily avoiding disturbance. Wide open spaces with no obstructions or light were preferred… maybe close to the water too – especially for the smaller waders? Seaside Park is a good example – I was surprised to see SIPO there at night. The lack of birds at Mt Wellington, both day and night, in 2018 vs 2017 is odd. Maybe the addition of the lights has also reduced the site’s daytime roosting function.

I totalled the counts for SIPO and stilts to see if there are variations between the total day and night numbers. We are short 22.5% on SIPO; I feel we would not have been that far off, so some birds may be heading elsewhere to roost, though it’s not a huge number. I am sure we missed some birds too; for example, during the day the NNZD at Point England are much easier to spot than at night.

How to read coloured bird bands

Here is a one-page instructional guide on how to read coloured bird bands. Band observations should be sent to bandingoffice@doc.govt.nz. Many birds now have ELFs (Engraved Leg Flags); sightings of ELFs on New Zealand dotterel (even if you cannot read the letters on the flag) should also be sent to the Department of Conservation banding office.

Additional notes for Tara iti / NZ Fairy tern (2024):

B = Blue (blue can fade to gray)
K = Black
M = Metal
pG = Pale green
L = Lime green

A decoy dotterel

I have been working with Northern New Zealand dotterel (NNZD) for nearly five years now. I know this bird really well, but I had to look a lot closer when I decided to try and make a decoy. Southern New Zealand dotterel (SNZD) are in big trouble, with a small and declining population. I decided to see if a decoy would help capture SNZD, which have been difficult to catch outside of the breeding season. A well-placed decoy could help position one or more birds near a leg-hold noose-mat trap or in position for a cannon net.

I put the decoy about 20m away from some NNZD at Point England, and on both occasions each NNZD walked over to the decoy (with head bobs) in less than two minutes. They stayed just under 1m away for two minutes before I removed the decoy.

I need to fix the beak, refine the plumage and add spikes to the removable legs but so far it looks promising.

UPDATE: I have made a small flock of them. They are not quite right but should do the job.

Underwater tripod


Price, durability and stability were the big factors when designing this underwater tripod. The final design ended up incredibly simple and gets great results (I will upload some time-lapse footage at a later date). I have made four of them, and I got this great shot of a red moki (which usually avoid me) at Goat Island on Monday.

Red Moki - Goat Island