Virtual Inspection

Summary🔗

This scenario is about manual virtual inspections of the construction site, usually performed from the site office.

Models🔗

Intentionally left empty.

Definitions🔗

Intentionally left empty.

Scenario🔗

The inspection is based on:

The data is visualized as a 3D model.

As-planned🔗

Visualization. The system displays a selected revision (from evolving_plan) of bim3d (from evolving_plan).

Two revisions (from evolving_plan) can be compared visually, e.g., by using color tinting. (Mind that the difference between the two models is not computed explicitly, only demonstrated visually.)

Filtering of the displayed elements. The elements can be filtered by a set of one or more guids (from evolving_plan) or unique resource identifiers (URIs).

The elements can be further filtered by type and property values.

To spatially and logically filter elements, a bounding box, a set of related zones and/or a set of related groups can be selected.

Furthermore, the elements can be filtered by the related task shadows (from scheduling).

The task shadows (from scheduling) can be filtered by time range, task type and/or a set of related actors (from actor_management) (manually picked). The filtering of tasks dictates the filtering of displayed elements.

The elements can be further filtered by a set of related topics (from topic_management). The topics (from topic_management) can be filtered by priority, status and time range.

All these filters can be combined into compositional filters as intersections and/or unions of filters.
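For illustration, such compositional filters could be modelled as predicates over elements, combined by intersection (AND) and union (OR). This is only a minimal sketch; the filter names and element fields are assumptions, not part of the specification:

```python
# Sketch: filters as predicates over elements, combinable by
# intersection (AND) and union (OR). All names are illustrative.
def by_guid(guids):
    return lambda element: element["guid"] in guids

def by_type(element_type):
    return lambda element: element["type"] == element_type

def intersection(*filters):
    return lambda element: all(f(element) for f in filters)

def union(*filters):
    return lambda element: any(f(element) for f in filters)

elements = [
    {"guid": "a1", "type": "IfcWall"},
    {"guid": "b2", "type": "IfcWindow"},
    {"guid": "c3", "type": "IfcWall"},
]

# Walls whose GUID is a1, OR any window:
selected = [
    e for e in elements
    if union(intersection(by_guid({"a1"}), by_type("IfcWall")),
             by_type("IfcWindow"))(e)
]
```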

Note that the filtering is also relevant for other components of the system so that they can link to virtual inspection. This implies that the query underlying the filtering should be representable as a string that can be easily embedded in a URL.
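One way to satisfy this requirement is to represent a filter query as a flat mapping that can be serialized into a URL query string and parsed back. A sketch; the parameter names and the URL are illustrative assumptions:

```python
from urllib.parse import urlencode, parse_qs

# Sketch: a filter query as a flat mapping, so it can be embedded in a
# URL and recovered on the receiving side. Parameter names are assumed.
query = {
    "guids": "a1,c3",
    "type": "IfcWall",
    "time_from": "2021-01-05T00:00",
    "time_to": "2021-01-13T16:00",
}

query_string = urlencode(query)
url = "https://example.com/virtual-inspection?" + query_string

# The receiving component recovers the same query:
recovered = {k: v[0] for k, v in parse_qs(query_string).items()}
```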

The hidden elements can either be completely hidden or displayed as markedly transparent.

(Note about the implementation priorities: it remains to be seen during the implementation what queries are ergonomic and what can be practically implemented. We need to start with a simple solution and progress to the more sophisticated ones.

Please also consider the remark regarding the importance of filtering when inspecting the divergence between the plans and as-built observations.)

Risk zone visualization. The risk zones (from risk_management) are a special kind of zones and need proper visualization.

For example, they can be coloured appropriately by their risk level (from risk_management).

We also need to take into account the temporal validity of a risk zone (e.g., see risk (from risk_management)).
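A minimal sketch of such a temporal validity check, assuming hypothetical `valid_from`/`valid_to` fields on a risk zone (the actual risk model may differ):

```python
from datetime import datetime

# Sketch: is a risk zone active at a given moment? The fields
# valid_from/valid_to are assumptions about the risk model.
def is_active(zone, at):
    return zone["valid_from"] <= at <= zone["valid_to"]

scaffolding_zone = {
    "risk_level": "high",
    "valid_from": datetime(2021, 1, 5),
    "valid_to": datetime(2021, 1, 20),
}

active = is_active(scaffolding_zone, datetime(2021, 1, 13))
```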

The user can turn these zone visualizations on/off.

This visualization is particularly useful for the briefing sessions.

Topic visualization. We visualize all the related topics (from topic_management) of the elements currently in the view (e.g., as icons). You can follow the link straight to "Topic Management" to read about that topic (from topic_management).

You can also filter the visualized topics (from topic_management) by setting a priority filter, status (open, candidate, resolved, +user defined etc.) and time range. Mind that this is different from filtering the elements by filtering their related topics.

Visualizing the content of a topic (from topic_management) is out of scope for the project. We deem such a visualization to be confusing. We expect the user to use "Topic Management" for general topic management.

viewquery (from topic_management) creation. To create a viewquery (from topic_management), the user needs to filter the view down to the relevant data (as-planned, as-observed and divergence).

The user is provided a special GUI element to convert the query into a viewquery (from topic_management), preview it and attach it as a comment (from topic_management) to a topic (from topic_management).

Information about an element. An individual BIM element can be selected (e.g., by clicking on it).

The information about the element is displayed based on the extended information from bim_extended (from evolving_plan). This includes relations of an element to actors, tasks, costs etc.

Once the relations are displayed, we can follow the reference of, say, a relatedIfcTask and jump to "Scheduling".

Additionally, we list all the topics (from topic_management) (and, consequently, the comments (from topic_management)) involved with the element. The user can jump to "Topic Management" to see the details.

Access. We do not distinguish the relations in different access categories. The user is either allowed to see all the relations OR she is not allowed to see any (in which case, she sees only the geometry and the point clouds).

As-observed🔗

Observed uninterpretable geometry. The user can select what observations should be displayed:

These observations can be filtered by:

The system should be able to display multiple overlapping observations.

They can be differentiated by color tinting related to a selection of multiple time ranges. For example, the points corresponding to the time range 12:00-16:00 on 2021-01-13 are tinted red, and the points corresponding to the time range 00:00-23:59 on 2021-01-05 are tinted blue.
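The per-time-range colouring could be sketched as follows. The ranges and colours are the example values from the text; the function and its representation are assumptions:

```python
from datetime import datetime

# Sketch: colour points by the time range of their recording.
# Ranges and colours taken from the example in the text.
RANGES = [
    ((datetime(2021, 1, 13, 12, 0), datetime(2021, 1, 13, 16, 0)), "red"),
    ((datetime(2021, 1, 5, 0, 0), datetime(2021, 1, 5, 23, 59)), "blue"),
]

def tint(point_timestamp):
    for (start, end), colour in RANGES:
        if start <= point_timestamp <= end:
            return colour
    return None  # point falls outside all selected ranges

colour = tint(datetime(2021, 1, 13, 14, 30))
```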

Visualization considerations. The system should be careful how the points are transferred to the user.

A scan of a room with lasers can easily become a gigabyte, and sometimes there are even multiple scans of the same room.

Instead of just pushing all the data to the client, the server needs to compress the point_cloud (from digital_reconstruction) somehow.

One approach would be to only display reconstructed_geometry (from digital_reconstruction). In parallel, the system can also compress the points by melting together the points which are too far away (by gracefully reducing the resolution, brute-force sub-sampling, voxelization etc.).

Hence the backend needs to continuously talk to the client and update the set of visible points as well as "melt together" the points too far away and "unmelt" the points closer up.
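The "melting together" of points could, for instance, be implemented as voxel-grid downsampling: all points falling into the same voxel are averaged into a single representative point. A simplified sketch under that assumption:

```python
# Sketch: voxel-grid downsampling. Points that fall into the same
# voxel of edge length `voxel` are averaged into one point.
def voxel_downsample(points, voxel):
    buckets = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        buckets.setdefault(key, []).append((x, y, z))
    # one averaged point per occupied voxel
    return [
        tuple(sum(coords) / len(coords) for coords in zip(*bucket))
        for bucket in buckets.values()
    ]

points = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
reduced = voxel_downsample(points, voxel=1.0)
```

In a real backend, the voxel size would shrink as the camera moves closer ("unmelting") and grow for distant regions ("melting").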

Images. Images are displayed as icons anchored at their position.

The user can view individual images (e.g., by clicking on their icon).

The user can filter the images similar to the point cloud:

The thermal images (from thermal_inspection) are a special case where we want to include semantic information about the temperature. Instead of displaying a general icon, we want to summarize the temperature. For example, we could show minimum, 10%-quantile, 90%-quantile and maximum as a rectangle. (How we display this information is to be decided during the implementation.)
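The temperature summary could be computed like this (a sketch only; how it is displayed is, as noted, to be decided during the implementation):

```python
from statistics import quantiles

# Sketch: summarize a thermal image (flattened to a list of per-pixel
# temperatures in degrees Celsius) as min / 10%- / 90%-quantile / max.
def thermal_summary(temperatures):
    q = quantiles(temperatures, n=10)  # deciles: q[0] ~ 10%, q[-1] ~ 90%
    return {
        "min": min(temperatures),
        "p10": q[0],
        "p90": q[-1],
        "max": max(temperatures),
    }

summary = thermal_summary([18.0, 19.5, 20.0, 20.5, 21.0, 21.5, 22.0, 35.0])
```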

As-built. Additionally, the user can display the geometry corresponding to a version of as-built (from digital_reconstruction). The user can also visually compare two different as-built versions (e.g., by color tinting).

The filtering functionality is the same as for the as-planned elements.

Divergence🔗

The as-planned and as-observed views can be merged in a single one by stacking.

If both bim3d (from evolving_plan) and as-built (from digital_reconstruction) are to be displayed, we compare them visually, e.g., by color tinting to highlight one or the other.

Since the elements in as-built (from digital_reconstruction) share the identifiers with bim3d (from evolving_plan), we can also apply the same filtering mechanism as for as-planned elements.

The rich filtering capabilities are particularly relevant for the visualization of as-built versus as-planned differences. While their benefit is rather marginal when inspecting as-planned elements alone, spotting important deviations becomes much harder without a proper focus on specific kinds of elements, as not all deviations are equal.

For example, the deviation of 1cm might be acceptable for one wall, but unacceptable for another or for the width of a window frame. Where elements need to be inspected manually, showing only the relevant subset (both the planned elements as well as their observations) is crucial.

We also need to provide a functionality to manually measure distances. For example, this can be related to "Risk Management" where we need to check that emergency exits are close enough. Another example is to verify manually if a building element has been properly constructed.
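A manual distance measurement between two picked points boils down to the Euclidean distance. A trivial sketch (the coordinates are made up for illustration):

```python
import math

# Sketch: distance between two manually picked 3D points, e.g., to
# verify that an emergency exit is close enough or that an opening
# has the planned width. Coordinates in metres.
def measure(p1, p2):
    return math.dist(p1, p2)

# Width of a window opening picked at two points:
width = measure((2.00, 0.0, 1.0), (3.05, 0.0, 1.0))
```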

Example. We examine the following story.

"The quality survey shows that the window on the 1st floor at this position is 5 cm larger. We will need to order larger windows if the hole remains the same size."

Here are the steps to carry out this story using virtual inspection:

  1. Select the latest revision (from evolving_plan) of bim3d (from evolving_plan).

    Filter the elements by the desired date (to hide all the irrelevant elements from bim3d (from evolving_plan)).

  2. Select the time range to include the relevant recording(s).

  3. Navigate to the element in question.

  4. Examine the element comparing the as-planned geometry from bim3d (from evolving_plan) with the point cloud, voxel cloud, reconstructed geometry and/or images (filtered by the appropriate time range) and as-built geometry (selected by the appropriate version).

  5. Select the element to see its relations.

  6. Go through its relations to tasks and search for the task corresponding to the installation.

  7. Follow the reference to see the details of the task and finally retrieve the relationship to the installer.

  8. Create a new topic (from topic_management) describing the order and tracking further steps.

(As it might be obvious from this example, we still need to figure out how filtering can be made ergonomic. For example, it might make sense to pick a single time range and have it applied both to filtering elements in bim3d (from evolving_plan) by filtering tasks, to select the appropriate version of as-built (from digital_reconstruction) and to filter the images, point cloud and other observations.

We need to experiment with the filtering during the implementation. We know for now that the backend should provide a versatile query mechanism.)
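For illustration, the option mentioned above, a single time range driving all data sources at once, could look roughly like this. All function and field names (`start`, `end`, `elements`, `recorded`, `timestamp`) are assumptions:

```python
from datetime import datetime

# Sketch: one time range applied uniformly to tasks (and thereby to
# the elements they select), the as-built version and the observations.
def apply_time_range(start, end, tasks, as_built_versions, observations):
    # tasks overlapping the range select the elements to display
    tasks_in_range = [t for t in tasks if t["start"] <= end and start <= t["end"]]
    element_guids = {g for t in tasks_in_range for g in t["elements"]}
    # latest as-built version recorded no later than the end of the range
    version = max(
        (v for v in as_built_versions if v["recorded"] <= end),
        key=lambda v: v["recorded"],
        default=None,
    )
    # observations (images, point clouds, ...) inside the range
    obs = [o for o in observations if start <= o["timestamp"] <= end]
    return element_guids, version, obs

tasks = [
    {"start": datetime(2021, 1, 4), "end": datetime(2021, 1, 6), "elements": ["w1"]},
    {"start": datetime(2021, 2, 1), "end": datetime(2021, 2, 2), "elements": ["w2"]},
]
versions = [
    {"id": "v1", "recorded": datetime(2021, 1, 5)},
    {"id": "v2", "recorded": datetime(2021, 3, 1)},
]
observations = [{"timestamp": datetime(2021, 1, 5, 10, 0)}]

guids, version, obs = apply_time_range(
    datetime(2021, 1, 1), datetime(2021, 1, 31), tasks, versions, observations
)
```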

Analytics🔗

Intentionally left empty.

Scheduling🔗

Intentionally left empty.

Safety🔗

Intentionally left empty.

Test Cases🔗

behavioral test🔗

We pick two testers, Tester 1 and Tester 2.

Stage 1. Present observations (images (from digital_reconstruction), point_cloud (from digital_reconstruction), etc.) to Tester 1.

(Change the plan behind the tester's back to deliberately introduce a deviation.)

Present the plan (bim3d (from evolving_plan), deliberately modified).

The tester should find a deviation.

The tester should create a topic (from topic_management) (this would be a new topic as we deliberately modified bim3d (from evolving_plan)).

Stage 2. Pick another tester, Tester 2.

We modify the plan (bim3d (from evolving_plan)) back to the original state.

We ask Tester 2 to act on it and see if s/he resolves the topic (from topic_management).

magnitude🔗

We need to generate mock data to test for plan_magnitude and observation_magnitude (both on the backend and the frontend!).

fuzzy queries🔗

We automatically generate queries and make sure that the system does not break.

(We have to figure out the details during the implementation.)
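As a starting point, such a fuzzing harness could generate random combinations from a parameter vocabulary; the vocabulary below is a pure assumption:

```python
import random

# Sketch: randomly generated filter queries for fuzz testing.
# The parameter vocabulary is an illustrative assumption.
FIELDS = {
    "type": ["IfcWall", "IfcWindow", "IfcDoor"],
    "status": ["open", "candidate", "resolved"],
    "priority": ["low", "medium", "high"],
}

def random_query(rng):
    chosen = rng.sample(sorted(FIELDS), k=rng.randint(1, len(FIELDS)))
    return {field: rng.choice(FIELDS[field]) for field in chosen}

rng = random.Random(42)  # seeded so failures are reproducible
queries = [random_query(rng) for _ in range(100)]

# A real harness would send each query to the backend and only check
# that the system responds without crashing.
```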

Acceptance Criteria🔗

plan magnitude🔗

There are thousands of elements to be displayed.

For each element, there might be hundreds of related tasks (from scheduling), related actors (from actor_management) and related topics (from topic_management).

observation magnitude🔗

A scan of a room can easily reach a gigabyte, with a very large number of points.

We can show point clouds in megabytes (so only a fraction), say, for a selected element or a smaller volume of interest.
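Cropping the point cloud to such a volume of interest could be sketched as follows (an axis-aligned bounding box is assumed for simplicity):

```python
# Sketch: reduce a point cloud to an axis-aligned volume of interest
# so that only a fraction of the data is transferred and shown.
def crop(points, lo, hi):
    return [
        p for p in points
        if all(lo[i] <= p[i] <= hi[i] for i in range(3))
    ]

points = [(0.5, 0.5, 0.5), (2.0, 2.0, 2.0), (0.9, 0.1, 0.3)]
inside = crop(points, lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0))
```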

However, there needs to be a way to view the point cloud in sufficient detail. At the moment (2021-01-27), we do not know precisely how to do this and need to leave it as an important implementation detail.

observation selection🔗

The system should be able to display up to three (3) recordings (from digital_reconstruction).

We don't have to present more recordings (from digital_reconstruction), as that would be confusing for the user and also very difficult to handle on the backend due to the volume of the data.

For example, if you have more than three recordings, you would have a very large number of recorded points (from digital_reconstruction) for a single location (e.g., imagine 4-5 drones or robots recording in parallel -- you cannot present the points of all of them at a fine-grained resolution).