Digital Reconstruction

Summary🔗

This scenario covers the digital representation of the physical world evolving over time.

The digital representation includes images, point_cloud, voxel_cloud, reconstructed_geometry, and as-built.

Object recognition, as well as the semantic interpretation of points and surfaces, is intentionally left out of scope for the BIMprove project.

Models🔗

images🔗

This model contains all the images recorded manually (e.g., by a smartphone) or automatically (e.g., by a UAV or by statically installed cameras during a recording) over time.

We expect the images as JPEGs. Metadata such as orientation, position, and sensor range (e.g., for thermal images (from thermal_inspection)) is expected in EXIF.
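As a sketch of the metadata we expect to travel with each image, here is an illustrative structure (the field names and units are our own assumptions, not a fixed schema):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ImageMetadata:
    """Metadata expected alongside each JPEG (carried in EXIF)."""
    timestamp: str  # e.g., EXIF DateTimeOriginal
    position: Tuple[float, float, float]  # camera position in site coordinates
    orientation: Tuple[float, float, float]  # yaw, pitch, roll in degrees
    sensor_range: Optional[Tuple[float, float]] = None  # e.g., thermal min/max in °C

# Example: a thermal image with its sensor range set
meta = ImageMetadata(
    timestamp="2021:01:20 10:30:00",
    position=(12.5, 3.0, 1.8),
    orientation=(90.0, -15.0, 0.0),
    sensor_range=(-10.0, 120.0),  # only present for thermal images
)
```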

point_cloud🔗

This model encompasses all the points of the physical building over time.

The expected format of the point cloud is E57. Our system will use E57 as an exchange format (e.g., for import/export). The backend can use an arbitrary format for storage and manipulation.

voxel_cloud🔗

This model includes all the voxels over time.

reconstructed_geometry🔗

This model captures all the reconstructed surfaces.

as-built🔗

The as-built model is obtained by combining the point cloud and the latest version of bim3d (from evolving_plan).

The model is updated continuously by a bimmer.

Unlike bim3d (from evolving_plan), this model is not official.

The entity identifiers from this model should match the identifiers from bim3d (from evolving_plan).

Definitions🔗

recording🔗

The recording is a sequence of images and points recorded by a UXV (from uxv_recording) (with different sensors (from uxv_recording) such as photo cameras, FARO lasers, LiDARs, thermal cameras, etc.).

The recording is assumed atomic (i.e. "discrete", as opposed to continuous recording from, say, a static camera observing a scene).

We also assume that the relevant objects do not move during the recording. Some movement is nevertheless possible (e.g., workers and vehicles on the site, other UXVs (from uxv_recording), etc.).

The sensor (from uxv_recording) as well as the spatial accuracy (from uxv_recording) of the sensor should be defined in the recording.

For example, lasers might have a spatial accuracy (from uxv_recording) in millimeters, while the spatial accuracy (from uxv_recording) of photo odometry can be in low single-digit centimeters (depending on the texture, lighting conditions, etc.).

image🔗

An image is a picture taken by a camera.

The camera can be a photo camera (taking RGB images), but can also be a thermal camera (taking thermal images (from thermal_inspection)).

point🔗

A point is a 3D representation of a physical building.

Each point has a 3D position (in the site coordinate system (from evolving_plan)), a color, and a time stamp.

voxel🔗

A voxel is a small cube, or a union of cubes, abstracting the individual points.

The voxel cloud is stitched together from the point cloud by binning points into pre-defined cubes and unions of cubes.

It is a poor man's reconstructed geometry.

Each volumetric shape is also associated with a time stamp based on the underlying points.

The color of a shape is inferred from the underlying points. (We might consider perceptual color spaces such as CIELAB; a vector average of RGB is perceptually wrong.)

The color indicates the similarity of the points (in addition to their mutual proximity), so cubes with different colors should not be merged.

Analogously, the time stamp of the voxel is given by the average over the corresponding points.
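The binning and averaging described above can be sketched as follows. The voxel size and the plain-RGB color average are illustrative simplifications (as noted above, a perceptual color space such as CIELAB would be preferable):

```python
from collections import defaultdict

VOXEL_SIZE = 0.1  # cube edge length in metres (illustrative value)

def voxelize(points):
    """Bin points into cubes; each point is (x, y, z, rgb, timestamp).

    Returns a dict mapping voxel index -> (mean color, mean timestamp).
    The voxel's time stamp is the average over its points, analogously
    to how its color is inferred from the underlying points.
    """
    bins = defaultdict(list)
    for x, y, z, rgb, ts in points:
        key = (int(x // VOXEL_SIZE), int(y // VOXEL_SIZE), int(z // VOXEL_SIZE))
        bins[key].append((rgb, ts))
    voxels = {}
    for key, members in bins.items():
        colors = [c for c, _ in members]
        stamps = [t for _, t in members]
        mean_rgb = tuple(sum(channel) / len(colors) for channel in zip(*colors))
        voxels[key] = (mean_rgb, sum(stamps) / len(stamps))
    return voxels

# Two nearby points fall into the same voxel; color and time stamp are averaged.
pts = [
    (0.05, 0.05, 0.05, (100, 100, 100), 10.0),
    (0.06, 0.04, 0.05, (200, 200, 200), 20.0),
]
voxels = voxelize(pts)
```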

surface🔗

A surface is a 3D shape given as a surface representation (rather than as individual points or voxels).

The surface is reconstructed based on the point cloud.

Reconstructed surfaces are not semantically interpretable. The main purpose of the geometry is visualization, not semantic recognition. Hence there is no link to BIM models and this is intentionally left out-of-scope.

The time stamp of a surface is computed based on the average of the time stamps of its points.

object recognition🔗

Recognized objects (based on images or point cloud) are not part of the BIM model.

Such objects include tools, debris, safety nets etc.

Some objects, such as safety nets, cannot be detected from a point cloud and need to rely on image texture instead.

This is a non-goal, as we lack the resources (foremost data, but also time, qualifications, and focus). However, we should keep in mind that the system should prepare the data for recognition in a future project.

bimmer🔗

This person updates the geometry of as-built on a continuous basis.

Scenario🔗

As-planned🔗

The as-planned data is coming from bim3d (from evolving_plan).

As-observed🔗

Raw. The "raw" observations come from the images (manual snapshots or images recorded by a UAV during a recording) and are kept in images.

Point cloud. The point cloud is reconstructed by external software from the images and stored to point_cloud. (The external software is not part of the BIMprove development efforts.)

Additionally, point_cloud also includes the points recorded by lasers (in case such a sensor (from uxv_recording) is used during a recording).

Finally, the BIMprove system should provide an import end-point so that arbitrary point clouds can be imported. For example, the crew could use external smartphone apps to record a point cloud, special hand-held LiDARs, etc. We do not want to constrain the range of recording devices. The source as well as the sensor accuracy need to be specified accordingly.
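A minimal sketch of the provenance check such an import end-point could perform. The payload shape and field names (`source`, `spatial_accuracy_m`) are our own assumptions, not a fixed schema:

```python
def validate_import(payload: dict) -> list:
    """Check that an imported point cloud declares source and accuracy.

    Returns a list of error messages; an empty list means the payload
    carries the required provenance.
    """
    errors = []
    if not payload.get("source"):
        errors.append("missing source (e.g., 'smartphone app', 'hand-held LiDAR')")
    accuracy = payload.get("spatial_accuracy_m")
    if accuracy is None or accuracy <= 0:
        errors.append("missing or invalid spatial accuracy (metres)")
    return errors

# A well-formed import declares both fields; an empty payload fails both checks.
ok = validate_import({"source": "hand-held LiDAR", "spatial_accuracy_m": 0.01})
bad = validate_import({})
```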

Visualizable abstractions. We further abstract points from the point_cloud into voxels (kept in voxel_cloud) and reconstructed surfaces (kept in reconstructed_geometry).

The main aim of both the voxels and surfaces is easier visualization of the physical world.

(We will probably not have time to implement the geometry reconstruction in the BIMprove system. There might be libraries to easily reconstruct it from the point cloud on the backend side. As we do not know that at this point (2021-01-20), we leave it here as a nice-to-have.)

As-built plan.

In addition to semantically uninterpretable observations (such as images and points), our system needs an interpretable observation.

This is given in the model as-built. This model is continuously updated (e.g., daily) by the bimmer, who uses external software to digitally model the current state of the physical building, based on the official plans bim3d (from evolving_plan), the previous version of as-built (if available), and point_cloud.

The external software for BIM reconstruction (and adaptation to observed data) is not part of the BIMprove development efforts. Some software solutions include:

Divergence🔗

All the observations (like images and point cloud) need to be converted to our main site coordinate system (from evolving_plan).

Whatever data comes into the backend needs to be pre-processed appropriately to conform to site coordinate system (from evolving_plan).
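As an illustration of such pre-processing, here is a sketch of a rigid transform into the site frame. A full registration would use a calibrated 3D rigid (or similarity) transform; this simplified version assumes the recording frame differs from the site coordinate system only by a yaw angle and an offset (both values below are made up):

```python
import math

def to_site_coordinates(point, yaw_deg, translation):
    """Rotate a point about the z-axis and translate it into the site frame."""
    x, y, z = point
    a = math.radians(yaw_deg)
    # Standard 2D rotation in the x-y plane (yaw only, for brevity)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    tx, ty, tz = translation
    return (xr + tx, yr + ty, z + tz)

# A point on the x-axis, rotated 90° about z with no offset, lands on the y-axis.
p = to_site_coordinates((1.0, 0.0, 0.0), 90.0, (0.0, 0.0, 0.0))
```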

The remaining aspect sections are intentionally left empty.

Test Cases🔗

Acceptance Criteria🔗

timestamps of points manageable🔗

The timestamps for individual points of the point cloud should be appropriately compressed, e.g., by adding only a single word (2 bytes) per point or by attaching one timestamp to a group of points.
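One way to meet this criterion is to store per-point timestamps as 2-byte offsets from a per-cloud base epoch. The 60-second resolution below is an assumption for illustration (16 bits at that resolution cover roughly 45 days):

```python
def compress_timestamps(base_epoch, timestamps, resolution_s=60):
    """Encode each timestamp as a 2-byte (0..65535) offset from a base epoch."""
    offsets = []
    for ts in timestamps:
        delta = round((ts - base_epoch) / resolution_s)
        if not 0 <= delta <= 0xFFFF:
            raise ValueError("timestamp out of range for a 2-byte offset")
        offsets.append(delta)
    return offsets

def decompress_timestamps(base_epoch, offsets, resolution_s=60):
    """Recover (quantized) timestamps from the 2-byte offsets."""
    return [base_epoch + o * resolution_s for o in offsets]

# Two points recorded an hour apart, relative to the same base epoch.
base = 1_600_000_000
offsets = compress_timestamps(base, [base, base + 3600])
restored = decompress_timestamps(base, offsets)
```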

granularity of timestamps🔗

As points of the point cloud are attributed a time stamp, the time stamp does not need to be extremely precise. For example, if a single point appears in 50 images, its time stamp can be the average (or minimum, or maximum) of the corresponding timestamps of the images.
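The aggregation described above is straightforward; a minimal sketch (the `mode` parameter is our own naming):

```python
def point_timestamp(image_timestamps, mode="mean"):
    """Derive one time stamp for a point from the images in which it appears."""
    if mode == "mean":
        return sum(image_timestamps) / len(image_timestamps)
    if mode == "min":
        return min(image_timestamps)
    if mode == "max":
        return max(image_timestamps)
    raise ValueError(f"unknown mode: {mode}")
```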

visual veracity of geometry🔗

The geometry might look "melted" as we lack points from all the viewpoints.