Here is a simplified version of my photogrammetry settings in Agisoft Metashape Standard 1.8.1 (mostly for my own reference). A detailed explanation is in Mapping life on the seafloor.
I figured out how to dive from my kayak this summer. It’s been really fun with lots of advantages including:
At first I tried towing my kayak to the water with my tank onboard. The C-Tug SandTrakz Cart Kayak Trolley is great but not designed to carry so much weight. Even without the tank I have broken one of the straps. I now carry my tank and BCD while towing the kayak with everything else onboard including my weights.
The other thing I find very useful is these stainless steel D-Eye Swivel Snap Hooks. The anchor, wheels and camera all get clipped on so nothing falls off. Even though I have clips for my paddle, I tie it on as I really don't want my paddle gone when I resurface.
I found putting the BCD on in the kayak too hard, so I just do that in the water; it has its own little rope & clip so it does not float away. I also 3D printed a fitting to get a little dive flag going, but it should be bigger.
I love having more gear close at hand when shore diving, including my phone, dry keys and something warm to eat as soon as I get out of the water :D. I'm sure my setup will evolve over time (I need a slightly heavier anchor for soft substrates) but I'm really happy with this and plan to do lots more.
Extra photos by Kirk Tucker
Update August 2024
With the C-Tug Double Up Bar (four wheels!) I can now take everything down to the beach in one trip. I'm not quite strong enough to get it back up the steepest bit of sand in one trip, so I take the dive tank back up separately.
I have been helping out Auckland Zoo and the Department of Conservation with important conservation work, and last year Auckland Zoo had an unusual request.
“Can you make flamingo eggs? Our flock of Greater flamingos have a tendency to kick their eggs into the water, so we give them a ‘dummy’ egg whilst we place their precious egg safely in an incubator.”
In the past I have only assisted with endemic or threatened species, so I was a little hesitant, at least until I went on a short tour of the Zoo's flamingo habitat and met the birds. I learnt that in the wild, flamingo habitat is indeed threatened, and I was captivated by these elegant, head-high birds. One of the young females named 'Otis' wandered over and gave me a friendly chest bump. Immediately smitten, I have since made 21 eggs for the flock. There were two technical challenges:
One of the eggs has been successfully tested and I hope Otis & co will be happier spending more time sitting on eggs.
I regularly get asked for several graphics from the State of Our Gulf 2020 report, so I am posting them here so everyone can access them.
I’m experimenting with mapping the seafloor for restoration projects. This is what I have done so far:
Agisoft Metashape worked much better than Adobe's photo-stitching tools (Photoshop & Lightroom) on the same data. But I need more overlapping images, as none of the software packages was able to match all of the images in any of the four test sequences I did.
I'm going to test shooting in video next. The frames will be smaller, 2704×1520 (if I stick with linear to avoid extra processing for lens distortion), instead of 4000×3000 with the time-lapse, but I'm hoping all the extra frames (2 fps => 24 fps) will more than compensate.
In theory an ROV would be better, but I don't think there are any on the market that know where they are based on what they can see. All the workarounds for knowing where you are underwater are expensive; here are two: UWIS and Nimrod. I want to see if we can do this with divers and no location data. I don't think towing a GPS will be accurate enough to match up the photos, but it does seem to work with drones taking images of bigger scenes (I want this to work in 50cm visibility). I expect that if I want large, complete images the diver will need to follow another diver who has laid a line on the seafloor. One advantage of this is that the line could have a scale on it, but I'm hoping to avoid it as the lines will be ugly 😀 So far I can only do two turns before the alignment fails. There are three swim patterns that might work (Space invaders, Spirals and Pick up sticks). For my initial trials I am focusing on Space invaders.
Video provides lots more frames and the conversion is easy. A land-based test with GPS disabled, multiple turns, 2500 photos, 2704 x 2028 linear, space invader pattern at 1.2m from the ground worked perfectly. However I can't get it to work underwater. In every test so far Metashape will only align 50-100 frames. I tried shooting on a sunny day, which was terrible as the reflection of the waves dancing on the seafloor confuses the software. But two follow-up shoots also failed; when I look at the frames Metashape can't match I just don't see why it can't align them. These two images are in sequence; one gets aligned and the next one is dropped!
Here is what the test footage looks like; I have increased the contrast.
I have also tried exporting the frames at 8 fps to see if the alignment errors are happening because the images are too similar, but got similar results (just faster).
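If you want to pull the frames out of the video yourself rather than using Metashape's importer, a short script does the trick. This is just a minimal sketch using OpenCV; the file names, output folder and target frame rate are placeholders, not my actual settings.

```python
# Minimal sketch: save every Nth frame of a GoPro clip as JPEGs using OpenCV.
# The input path, output folder and target rate below are placeholders.
import os
import cv2

VIDEO_PATH = "dive.mp4"                    # assumed input clip
OUTPUT_DIR = "frames"
TARGET_FPS = 8                             # frames per second to keep

os.makedirs(OUTPUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
source_fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
step = max(1, round(source_fps / TARGET_FPS))   # keep every Nth frame

frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % step == 0:
        cv2.imwrite(os.path.join(OUTPUT_DIR, f"frame_{saved:05d}.jpg"), frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} frames at roughly {TARGET_FPS} fps")
```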
Detailed advice from the Agisoft Metashape support team:
Since you are using Sequential pre-selection, you wouldn’t get matching points for the images from the different lines of “space invader” or “pick up sticks” scenarios or from different radius of “spiral” scenario.
If you are using “space invader” scenario and have hundreds or thousands of images, it may be reasonable to align the data in two iterations: with sequential preselection and then with estimated preselection, providing that most of the cameras are properly aligned.
As for the mesh reconstruction – using Sparse Cloud source would give you a very rough model, so you may consider building the model from the depth maps with medium/high quality in Arbitrary mode. As for the texture reconstruction, I can suggest to generate it in Generic mode and then in the model view switch the view from Perspective to Orthographic, then orient the viewpoint in the desired way and use the Capture View option to get a kind of planar orthomosaic projection for your model.
Aligning with 'sequential' preselection only ever gets about 5% of the shots. Repeating the alignment procedure with 'estimated' preselection picks up the rest, but the camera alignment gets curved. I think I have calibrated the cameras to 24mm (it's hard to tell if that has been applied) but it doesn't seem to change things.
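If you were scripting that two-pass approach, it would look roughly like the sketch below. Note the Python API is part of Metashape Professional (I'm on Standard and do all of this through the GUI), so this is only a sketch of the idea and the parameter values are illustrative assumptions, not tested settings.

```python
# Sketch of the two-pass alignment Agisoft describe above, via the Metashape
# Python API (Professional edition only); values here are illustrative.
import Metashape

def align_two_pass(chunk: Metashape.Chunk) -> None:
    # Pass 1: sequential preselection matches each frame against its
    # neighbours in the video, which aligns the individual swim lines.
    chunk.matchPhotos(
        downscale=1,                       # High accuracy
        generic_preselection=False,
        reference_preselection=True,
        reference_preselection_mode=Metashape.ReferencePreselectionSequential,
    )
    chunk.alignCameras()

    # Pass 2: estimated preselection uses the camera positions found so far
    # to connect the parallel lines of the "space invader" pattern.
    chunk.matchPhotos(
        downscale=1,
        generic_preselection=False,
        reference_preselection=True,
        reference_preselection_mode=Metashape.ReferencePreselectionEstimated,
        reset_matches=False,
    )
    chunk.alignCameras(reset_alignment=False)
```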
I tried an above-water test and made a two-minute video of the Māori fish dams at Tahuna Torea. I used the same settings as above, but dropped the quality down to medium. It looks great!
The differences between above and below water are: camera distance to subject, flotsam, visibility / image quality and colour range. If the footage I am gathering is too poor for Metashape to align, it might mean we need less suspended sediment in the water to make the images. That's a problem, as the places I want to map are suffering from suspended sediment – which is why they would benefit from shellfish restoration.
The Agisoft support team are awesome. They processed my footage with f = 1906 in the camera calibration, aligned the photos without using preselection, and used a 10,000 tie point limit. The alignment took 2.5 days but worked perfectly (click on the image below). There are a few glitches but I think the result is good enough for mapping life on the seafloor. I will refine the numbers a bit and post them in a separate blog post, wahoo!
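For anyone scripting this, here is roughly what those settings look like in the Metashape Python API. Again, the API is Professional-only and I ran everything through the Standard GUI, so treat this as a sketch; the frame list, keypoint limit and project name are placeholders.

```python
# Sketch of the settings that finally worked, expressed via the Metashape
# Python API (Professional edition only); placeholders marked in comments.
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["frames/frame_00000.jpg"])   # placeholder list of frames

# Seed the camera calibration with f = 1906 pixels (in the GUI this is the
# Initial tab of Tools > Camera Calibration).
sensor = chunk.sensors[0]
calib = Metashape.Calibration()
calib.width = sensor.width
calib.height = sensor.height
calib.f = 1906
sensor.user_calib = calib

# Align photos with no preselection and a raised tie point limit.
chunk.matchPhotos(
    downscale=1,                  # High accuracy
    generic_preselection=False,
    reference_preselection=False,
    keypoint_limit=40000,         # assumed default
    tiepoint_limit=10000,
)
chunk.alignCameras()
doc.save("seafloor.psx")          # placeholder project name
```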
Update Jul 2022: Great paper explaining the process with more sophisticated hardware
Final method
Here is my photogrammetry process / settings for GoPro underwater. I am updating them as I learn more. Please let me know if you discover an improvement. Thanks to Vanessa Maitland for her help refining the process.
Step 1: Make a video of the seafloor using a GoPro
Step 2: Edit your video
Step 3: Install and launch Agisoft Metashape Standard.
Step 4: Use GPU.
Step 5: Import video
Step 6: Camera calibration
Step 7: Align photos
If 100% of the cameras are not aligned, try Step 8; otherwise skip to Step 9.
Step 8: Align photos (again)
Now all the photos should be aligned. If not, repeat Steps 7 & 8 with higher settings and check 'Reset current alignment' in Step 7 only. I have been happy with models that have 10% of the photos not aligned.
Step 9: Tools / Optimize Camera Locations
Just check the check boxes below (default settings):
Leave the other checkboxes (including Advanced settings) unchecked.
Step 10: Resize region
Use the region editing tools in the graphical menu to make sure that the region covers all the photos you want to turn into a 3D mesh. You can change what is being displayed in the viewport under 'Model', 'Show/Hide Items'.
Step 11: Build dense cloud
Step 12: Build mesh
Step 13: Build texture
Step 14: Export orthomosaic
You can orientate the view using the tools in the graphical menu. Make sure the view is in orthographic before you export the image (5 on the keypad). Then choose 'View', 'Capture View' from the menu. The maximum pixel dimensions are 16,384 x 16,384. Alternatively you can export the texture.
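For completeness, here is roughly what Steps 9 to 13 would look like driven from the Metashape Python API (Professional edition only; I do all of this in the Standard GUI). The quality settings are assumptions to illustrate the flow, not my exact values.

```python
# Sketch of Steps 9-13 via the Metashape Python API (Professional edition
# only); quality settings here are assumptions, not my exact values.
import Metashape

doc = Metashape.Document()
doc.open("seafloor.psx")          # placeholder project with aligned cameras
chunk = doc.chunk

# Step 9: optimize camera alignment with the default parameter set.
chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_k1=True, fit_k2=True, fit_k3=True,
                      fit_p1=True, fit_p2=True)

# Step 10 (resize region) is easiest to do interactively in the GUI.

# Step 11: depth maps and dense cloud (downscale=4 is Medium quality).
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()

# Step 12: mesh from the depth maps in Arbitrary mode, as Agisoft suggested.
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 source_data=Metashape.DepthMapsData,
                 interpolation=Metashape.EnabledInterpolation)

# Step 13: generic UV mapping and a mosaic-blended texture.
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=8192)

doc.save()

# Step 14 (the orthomosaic export) stays in the GUI: switch to Orthographic
# view and use Capture View, as described above.
```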
==
Let me know if you have experimented with converting timelapse / hyperlapse video to photogrammetry. There may be some advantages.
As communities get increasingly worried about the declining quality of their waterways there is more interest in stream health assessments. I am a huge fan of the Waicare Invertebrate Monitoring Protocol (WIMP), which is simple enough that school students can use it. However the Waicare programme has been largely defunded by Auckland Council and there is no way for the public to share WIMP data. NIWA and Federated Farmers of New Zealand have put together https://nzwatercitizens.co.nz/ based on the New Zealand Stream Health Monitoring and Assessment Kit (SHMAK). It is great but incredibly hard to use, and the manual is horrific. I believe this is being addressed but it will take years. To help, the Science Learning Hub has made this great guide for teachers and students. NIWA have put together some videos. They are not published together anywhere online, so I have posted the list below:
Updated from 2017. I have upgraded to some second-hand 600EX-RTs, a radio transmitter, a third flash and some custom-designed and 3D-printed soft boxes. Files uploaded here. The soft boxes are printed using transparent PLA, which has a natural frosty finish and produces lovely diffuse shadows (0.2mm @ 5 layers). I painted them black & yellow and lined the mouth with black tape so as to not scratch the flashes. The 600EX-RTs are quite heavy and I had to use epoxy glue to reinforce the cold shoes. I fibreglassed a giant nail to the base of a Manfrotto monopod to create a portable outdoor light stand.
I tried 3D printing this stencil for a penguin box so I didn't have to cut out the letters. 5 layers at 0.2mm PLA. It worked great and the thin lines come out really well, but you can't leave the stencil in the sun or it warps!
I have been counting a lot of birds lately, trying to build a solid picture of how the shorebirds use the Tāmaki Estuary. In addition to regular wader counts I decided to also try counting them at night, as they might use the roosting areas differently. After testing various devices, including long-exposure photography, Pieter and I decided on expensive gear to do the survey: I bought a Luna Optics LN-DM50-HRSD Digital Night Vision Monocular (example image above) and Pieter purchased a Pulsar Helion Thermal Imaging Handheld XP38. Using these devices helped us count the birds without disturbing them, which was quite important to us. Details of the trial are here: Tamaki Estuary shorebird survey – Wildlands.
The feeding observations were really interesting. Though with regard to extra light: it was disappointing not to find shorebirds at Mt Wellington War Memorial Reserve, where I found banded dotterel and flocks of SIPO at night last year. The reserve now has large lights for playing sports in the dark.
I think the trial study has given us enough data to understand how the shorebirds use the estuary at night. More detail would be interesting but is unlikely to affect future management decisions. Here is the raw data. I plan to keep a better eye on the Pakuranga Sailing Club and include the data in a report to Council on roosting in the Tāmaki Estuary.
== SUPPLEMENTARY NOTES ==
Access to viewing points was limited because we wanted to do the survey without disturbing the birds. All the waders identified were more easily scared at night, with the exception of the mixed flock at the Pakuranga Sailing Club, which was unusually calm in our presence (both day and night). Day counts followed the limited routes of the night counts for consistency; however, a few birds were often seen during the day that would have been hard to see at night from the same vantage point.
No variations in flock shape were observed between day and night. The main thing that correlated with flock shape was whether the SIPO were feeding. The SIPO flock spread out over 100m in diameter when feeding at Point England, both on the sportsfields (at night) and the paddocks (during the day). The pied stilts were always grouped closely together.
I think the roosting patterns suggest the waders are primarily avoiding disturbance. Wide open spaces with no obstructions or light were preferred… maybe close to the water too – especially for the smaller waders? Seaside Park is a good example – I was surprised to see SIPO there at night. The lack of birds at Mt Wellington, both day and night, in 2018 vs 2017 is odd. Maybe the addition of the lights has also reduced the site's daytime roosting function.
I totalled the counts for SIPO and stilts to see if there are variations between the total day and night numbers. We are short 22.5% on SIPO at night; I feel we would not have been that far off, so some birds are probably heading elsewhere to roost, though it's not a huge number. I am sure we missed some birds too; for example, during the day the NNZD at Point England are so much easier to spot than at night.