We are doing inverse procedural modeling rather than photogrammetry, so some idealization/iconification will take place - but this also lets end-users bring in their own material libraries in a pretty seamless way.
This page [1] gives a bit of an overview of this technique, which was new to me:
> Inverse procedural modeling discovers a procedural representation of an existing geometric model and the discovered procedural model then supports synthesizing new similar models.
Not sure what "discovers" actually means here; need to find a better reference. This page [2] summarizes the method like this:
> We propose an inverse modeling approach for stochastic trees that takes polygonal tree models as input and estimates the parameters of a procedural model so that it produces trees similar to the input.
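To unpack what "estimates the parameters" means in practice, here is a minimal sketch of that fitting loop (not the method from [2]; the toy generator, its two parameters, and the chamfer-style distance are all illustrative assumptions): a small procedural generator runs inside an optimizer, and its parameters are nudged until the geometry it emits resembles the input model.

```python
import numpy as np
from scipy.optimize import minimize

def generate_tree(params, seed=0):
    """Toy procedural generator: emits 2D branch-tip points for a stylized tree
    from two parameters (branch count, spread angle). Purely illustrative."""
    n_branches = max(int(round(params[0])), 1)
    spread = abs(params[1])
    rng = np.random.default_rng(seed)          # fixed seed -> deterministic output
    angles = rng.uniform(-spread, spread, size=n_branches)
    lengths = rng.uniform(0.5, 1.0, size=n_branches)
    return np.stack([lengths * np.sin(angles), lengths * np.cos(angles)], axis=1)

def chamfer(a, b):
    """Symmetric nearest-neighbour (chamfer-style) distance between point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def fit_parameters(input_points, initial_guess=(5.0, 0.5)):
    """Estimate procedural parameters so the generator reproduces input_points."""
    objective = lambda p: chamfer(generate_tree(p), input_points)
    result = minimize(objective, x0=np.array(initial_guess), method="Nelder-Mead")
    return result.x   # compact procedural representation of the input geometry
```

The point being that the recovered parameter vector, rather than the polygons themselves, becomes the stored representation of the model.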
So I guess that's roughly the same, then. I still don't quite understand it: if you have to manually model a building and then extract a procedural model's parameters from that, you still had to model it (= do a lot of work)? Is the benefit that you then can store the parameters and use the model to regenerate the building, thus compressing the representation a bunch?
Our inverse procedural modeling process produces 3D models representing/explaining the structures we see in our input sensor data, so it doesn't require hand-modeling first.
> Is the benefit that you then can store the parameters and use the model to regenerate the building, thus compressing the representation a bunch?
A few benefits: we can automatically generate the mesh model at different levels of detail by stripping out elements of the procedural recipe (rather than relying on mesh decimation, which gives ugly results). And yeah, compression + error correction also play a role. Plus, compared to photogrammetric models, we have the metadata needed for interactive lighting/simulation.
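To illustrate the "stripping out elements of the procedural recipe" idea, here is a hedged sketch (the RecipeElement structure and the LOD tags are hypothetical, not the poster's actual data model): each construction element carries the coarsest level of detail at which it is still emitted, so a lower-LOD model is produced by filtering the recipe before meshing rather than by decimating the finished mesh.

```python
from dataclasses import dataclass

@dataclass
class RecipeElement:
    """One step of a procedural building recipe (illustrative fields only)."""
    name: str        # e.g. "wall", "window_frame", "cornice_moulding"
    min_lod: int     # coarsest LOD (0 = always present) at which this element is kept
    params: dict     # parameters the generator needs to emit this element's geometry

def recipe_at_lod(recipe, lod):
    """Drop fine-detail elements instead of decimating the final mesh."""
    return [e for e in recipe if e.min_lod <= lod]

# Example recipe for a simple facade; metadata (e.g. material IDs) rides along
# with each element, which is what supports interactive lighting/simulation later.
recipe = [
    RecipeElement("wall",             min_lod=0, params={"material": "brick"}),
    RecipeElement("roof",             min_lod=0, params={"material": "slate"}),
    RecipeElement("window",           min_lod=1, params={"rows": 3, "cols": 4}),
    RecipeElement("window_frame",     min_lod=2, params={"profile": "bevel_2cm"}),
    RecipeElement("cornice_moulding", min_lod=3, params={"profile": "ogee"}),
]

coarse = recipe_at_lod(recipe, lod=0)   # walls + roof only, for distant views
full   = recipe_at_lod(recipe, lod=3)   # every element, for close-up rendering
```

Because the metadata travels with each recipe element, it remains available at every LOD, which is the other advantage mentioned above.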