Vlad (Kuzmin) Erium
10 min read · Nov 25, 2020

Review of “Getting started with RealityCapture with Timothy Hanson – Webinar” by Vlad Kuzmin

Short version:

Timothy is an extremely brave person! Without good knowledge of RealityCapture and its UI and UX, without understanding how RealityCapture works inside, and with delusions about why some settings "work" for badly captured datasets with wrongly chosen cameras and capture patterns, he made this webinar video and gives recommendations on how to "get started".

Long version:

I will try to mention only the worst moments, not everything bad or questionable.

Camera recommendation

Timothy probably comes from the old VFX industry, and he definitely prefers and recommends a DSLR camera in a webinar made for hobbyists who are starting their way in photogrammetry. Because of this choice, the DSLR user is forced to use the optical viewfinder, which does allow focusing precisely on some part of the object, but at the same time forces the DSLR owner into an uncomfortable pose and shooting pattern: squatting and leaning from a squatting pose.

The result is easily visible in the final camera poses: only 3 loops at different elevation levels, with almost 300 images in one loop. For RC, 300–900 images is not an issue. But clearly visible are clusters of 10–15 images with minimal parallax change, and huge gaps between these clusters and loops.

Why does this happen? Because a DSLR is not comfortable to shoot in Live View mode, where you can stand still, hold your camera in any pose, and see a real-time view on the display. For that reason, and since we are talking about "Getting Started", it would be fairer to recommend any modern smartphone or mirrorless camera. Both have a real-time live view that is extremely useful for rookies, or simply in situations where you can't or don't want to use a tripod but still want correct camera overlap and distribution.
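To make the point concrete, here is a rough rule-of-thumb calculation (my own, not from the webinar): with a live view you can comfortably keep an even angular step around the object, and an even 10–15 degree step needs surprisingly few images per loop.

```python
import math

def images_per_loop(step_deg: float) -> int:
    """Images needed for one full 360-degree loop at a given angular step."""
    return math.ceil(360.0 / step_deg)

# An even 10-15 degree step needs only ~24-36 well-distributed images
# per loop, instead of dense clusters with big gaps between them.
for step in (10.0, 12.0, 15.0):
    print(f"{step:>4.0f} deg step -> {images_per_loop(step)} images per loop")
```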

Table

We could argue for a long time about what type of table helps you scan small objects. But no one with even a little experience in photo or video would ever recommend a vividly colored table for scanning something. If you care at all about textures, you will never use a random vivid-color plate like this, even if it lets you stand or sit every 10–15 degrees around the object, simply because its reflection will add random color tints to any non-rough surface of your object. And if we are thinking about helping photogrammetry, there is no question whether to use newspapers for that: at home it will be a newspaper or something else with random but monochrome texture. That is what helps align cameras when you scan a featureless/textureless object like the one we see in the video.

The colored target we see in the video can only help you understand where to sit or stand, and gives almost no support for better alignment or even focusing. And squatting every 15 degrees and shooting mostly from the same position, as we saw before, does not use even that feature of such a target/table/plate.

The wrong way to use 16-bit TIFF images

Every experienced RealityCapture user knows that alignment and meshing inside RealityCapture use 8-bit data, because for those steps contrast matters more than bit depth.

Capturing Reality added 16/32-bit image support specifically because of huge demand from the VFX industry to use higher bit depth images for the textures of scans, which quite often need delighting in post. Camera triangulation and meshing still require only 8-bit, and in 99.9999% of cases using higher bit depth in meshing or alignment will only slow down the process, and in some cases can even give you worse alignment precision.

The option to convert HDRI/16-bit images was added specifically for lazy workflows where the user does not need maximal detail on the mesh and only cares about textures. It allows optimizing and tonemapping HDRI images into an 8-bit "Geometry" image layer used in SfM/MVS, while the original 16-bit/HDRI image layer is used for textures.
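To illustrate the idea outside of RealityCapture (a minimal sketch; the file names and the simple gamma tonemap are my own assumptions, RealityCapture handles this internally via its image layers):

```python
# A minimal sketch of the idea, done outside RealityCapture just to
# illustrate: keep the 16-bit original for texturing, but feed a
# tonemapped 8-bit copy to alignment/meshing. File names are made up.
import numpy as np
import imageio.v3 as iio

img16 = iio.imread("shot_0001_16bit.tif")          # uint16, 0..65535
img = img16.astype(np.float32) / 65535.0           # normalize to 0..1
img = np.power(img, 1.0 / 2.2)                     # simple gamma tonemap
img8 = (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)
iio.imwrite("shot_0001_geometry_8bit.tif", img8)   # SfM/MVS "geometry" layer
```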

AWFUL alignment settings

That makes any RealityCapture expert's hair stand up over their whole body. (My ass is still itching, because I can't calm down the hairs on it.)

RealityCapture's default settings were not chosen randomly. They were chosen by the RealityCapture authors, who are at the same time the authors of some unique modern SfM and MVS algorithms. Knowing how photogrammetry works, and how to capture images for it correctly, let them choose settings that work with most reasonably good datasets. That means if you captured your images correctly, you do not need to change the alignment settings from the defaults, and the software will align and mesh perfectly.

We live in the real world, and not everyone was born a photogrammetry expert. In some use cases you simply can't have an optimal image/camera count (for example, multi-camera rigs, which are always a compromise between the highest quality you need, the budget you have, and the software's ability to work with non-optimal datasets).

But if you increase Max features and the Preselector values, it only means your dataset is not optimal. If I set 40,000/80,000 features instead of the 10,000/40,000 defaults, and 20,000 preselector features instead of the default 10,000, it means my camera rig uses a layout roughly 2 times less optimal.

In the video we see 100,000/400,000 features and 200,000 preselector features. What does that mean? It means the "Guru" is clearly telling us: sorry, children, my way of doing photogrammetry is 10 times worse than optimal!
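To see why this hurts, here is a crude back-of-envelope sketch (my own simplification, not RealityCapture's actual matcher): the work of matching an image pair grows roughly with the product of the candidate feature counts on each side, so raising the preselector count from 10,000 to 200,000 does not make matching 20 times slower, it makes it closer to 400 times more work.

```python
# Crude back-of-envelope, NOT RealityCapture's real matcher: pairwise
# descriptor matching work scales roughly with the product of the
# candidate feature counts of the two images in a pair.
def relative_pair_cost(preselector_features: int, default: int = 10_000) -> float:
    return (preselector_features / default) ** 2

for n in (10_000, 20_000, 200_000):
    print(f"{n:>7} preselector features -> "
          f"~{relative_pair_cost(n):,.0f}x the default matching work")
```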

Well, there can be another, less shameful reason. Timothy comes from the old Agisoft Photoscan photogrammetry camp and teaches how to do Agisoft Photoscan inside RealityCapture. Agisoft users have a strange love for point clouds, and believe that a sparse cloud with a higher point count will give better results in the end.

And in Photoscan/Metashape, one of the favorite workflows is raising the values at the camera alignment (SfM) step to the maximum your hardware allows.

And yes, here we see Max features set higher than the number of features the images actually have.

Timothy forces RealityCapture to search for every possible feature in the images, with settings 5 times higher than the number of features the images can actually provide.

Preselector features (alignment settings, part 2)

There is a clear way to tell whether a person knows more than an average rookie about RealityCapture: check what value they use for Preselector features. If you see 200,000 preselector features in the settings, JUST RUN, RUN AND DON'T LOOK BACK!!!

I will just cry:

I can only say that the default of 10,000 preselector features is already many times more than is actually required to correctly and precisely estimate all camera poses and lens intrinsics. Using extremely high values not only slows down the alignment step but also gives you a much worse final mesh, due to the enormous number of false-positive features and matches.

RealityCapture is not Photoscan!

Using Agisoft Photoscan workflows directly in RealityCapture is a bad idea. They either don't work, or work slower, or simply give you worse (or just bad) results compared to what you can get with correct modern photogrammetry workflows in RealityCapture and other modern photogrammetry apps.

In the webinar video I counted three or more moments where the point cloud was mentioned in situations where it adds no value. This is another sign of the wrong workflow being used in the wrong place.

OK, sorry, I need to relax a bit and watch some funny cat videos on YouTube to recover my neurons.

User Interface and User Experience (UI/UX)

A lot of users argue that RealityCapture has an uncommon UI and UX. But if you invest some time in reading the RealityCapture manual, and maybe a little more in learning its workflows and UI/UX, you will find that the RealityCapture authors anticipated a huge number of different workflows and built a good, fast UI/UX into RealityCapture.

And the audience of a "Getting Started" video can also expect its author to show a good, logical workflow even for simple tasks.

What do we see in the video? A 4-window layout where not a single 2D view uses a "color cursor". Color cursors are one of the basic but most powerful features inside RealityCapture, made specifically for control points and similar workflows.

You don't have to use all the color cursors, but the blue one should be your best friend. Define a 2D view as the blue cursor, and every camera or image you click will automatically open in that 2D view. Press 1 and your click will open in the green cursor window, and so on. But this is magic already, and not everyone uses it. You should at least know what you are missing.

For "Getting Started", the most logical and visual way would be to use the blue cursor to select a specific camera from the 3D view in the different components, instead of scrolling through 1,000 images to find something usable.

Color cursors were specifically designed for huge city-scale datasets, which are often only possible to process in RealityCapture. But they are extremely useful with small datasets too.

Merging Components

At this moment my brain started overheating. How can you teach someone how to use RealityCapture better, or even just start using it, if you do the complete opposite of what the application expects?

Talking about merging components, wanting to merge them by pressing the Merge Components button, but specifically informing everyone that you set Force Component Rematch to YES?!

Force component rematch is made specifically to force RealityCapture to ignore all found camera positions and orientations, find all features one more time (or load them from cache), and MATCH all the preselected FEATURES FROM SCRATCH! This is almost equal to creating a new project, importing all images, and aligning for the first time, except maybe for the reuse of previously calculated lens intrinsics and detected (not matched) features.

Using Force Component Rematch with Merge Components is equal to pressing Align with force component rematch. In this case it is better to speak about aligning, not merging, the imported components.
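My simplified mental model of the difference, as runnable pseudocode (the helper names are made up for illustration; this is NOT RealityCapture's actual internals):

```python
# A simplified mental model, NOT RealityCapture's actual internals.
# All helper names below are made up for illustration.

def load_or_detect_features(components):
    return [f for c in components for f in c.get("features", [])]

def align_from_scratch(features):
    return {"cameras": "solved from scratch", "features": features}

def link_existing_solutions(components):
    return {"cameras": [c.get("cameras") for c in components]}

def merge_components(components, force_rematch: bool):
    if force_rematch:
        # Discard all solved poses; reuse only cached detected features
        # and lens intrinsics, then match and solve everything again.
        return align_from_scratch(load_or_detect_features(components))
    # Normal merge: keep each component's solved poses and just link
    # the components together through shared cameras and points.
    return link_existing_solutions(components)
```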

Filtering the mesh

Personally, when I show any workflow to a brand-new RealityCapture user, I always assume this person may not have the most optimal hardware, because they are just starting to learn the amazing world of photogrammetry. Based on this, I will always recommend using the lasso tool with the Ctrl modifier to add to the selection, or Shift to deselect (Ctrl+Shift intersects). This lets you select all the polygons you need to remove and run Filter Selection (remove polygons) as one operation. For 100K–1M polygon meshes it is not a big problem to get a new mesh every time you filter out some part, but the scene tree becomes long and confusing when you have many temporary meshes.

The correct and more optimal workflow is to use the lasso or any other selection tool, with the Ctrl key to add more polygons to the selection or Shift to remove polygons selected by mistake, and then filter everything at once.

But the way shown in the video is not that bad, just not optimal. So it can be used.

Export settings

This one is more of a personal preference, but it is based on some bad experiences of other users who are not so skilled in 3D.

For newcomers it is better to recommend disabling Vertex Colors on export. If they use such models in 3D apps like Marmoset or Sketchfab, the present vertex colors can add a weird tint or simply make renders look darker, because vertex colors are multiplied with the texture.
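A tiny numerical illustration (my own example): many viewers combine vertex colors and the texture with a multiply, so a leftover baked vertex color darkens and tints the albedo.

```python
# A tiny illustration (my own example) of why baked vertex colors can
# darken a textured render: many viewers multiply the two together.
import numpy as np

texture_rgb = np.array([0.80, 0.70, 0.60])   # albedo sampled from texture
vertex_rgb  = np.array([0.55, 0.50, 0.45])   # leftover baked vertex color

final_rgb = texture_rgb * vertex_rgb          # typical "multiply" combine
print(final_rgb)                              # [0.44 0.35 0.27] -> darker, tinted
```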

Post Scriptum

I am not against Timothy as a person; he is braver than me. I am only brave enough to post some small but useful tips in FB groups or on Twitter, for free. He has his own video tutorial website with lots of paid tutorials containing questionable advice.

I can even imagine that some people will find something useful in his words or his videos about VFX. But I think a person who tries to teach people how to use specific software is in charge of ANY WORD AND ANY WORKFLOW THEY RECOMMEND. And they should think many times before saying something, because wrong advice can send students in the wrong direction and cost a lot of lost time. Or, even worse, cost a lot of time for software support, who will have to figure out why users' datasets align so slowly and with such bad quality compared to other commercial software.

And this will happen more and more as people use such extremely high and incorrect alignment settings.
Or people will send more angry messages to support groups saying that RC is unstable and their system is crashing. You will see more of this, because they will use 16-bit images everywhere, and that is 4–5 times more data to transfer and process, and working with such huge datasets requires carefully configured hardware.
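Some rough arithmetic on why (my own numbers, assuming a 24-megapixel, 3-channel image; the exact multiple depends on compression):

```python
# Rough arithmetic, my own assumptions: a 24-megapixel, 3-channel image.
MPIX, CHANNELS = 24e6, 3

mb_8bit  = MPIX * CHANNELS * 1 / 1e6   # ~72 MB uncompressed in memory
mb_16bit = MPIX * CHANNELS * 2 / 1e6   # ~144 MB uncompressed in memory

print(f"8-bit buffer:  ~{mb_8bit:.0f} MB per image")
print(f"16-bit buffer: ~{mb_16bit:.0f} MB per image")
# Every cache, every GPU upload, every intermediate buffer doubles too,
# and compared to typical compressed 8-bit JPEG source files on disk the
# gap is even larger -- which is where the "4-5 times more data"
# estimate comes from.
```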

I know that some other RealityCapture experts also found strange advice in this video, some questionable, some even weird. I have not mentioned it only because it goes far beyond "Getting Started". But let's discuss it on FB or Twitter.

Link to the webinar: https://www.youtube.com/watch?v=jeeccnjWIZs. Watch it and give your humble opinion.

Best regards,
Vlad Kuzmin
