We introduce an interactive tool that enables a user to quickly
assemble an architectural model directly over a 3D point cloud
acquired from large-scale scanning of an urban scene. The user
loosely defines and manipulates simple building blocks, which we
call SmartBoxes, over the point samples. These boxes quickly snap
to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense
that their locations and sizes are automatically adjusted on the fly
to fit the point data well, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization that balances two
snapping forces, defined respectively by a data-fitting term and a
contextual term, which together assist the user in reconstructing the
architectural model from a sparse and noisy point cloud. We show
that a combination of the user’s interactive guidance and high-level
knowledge about the semantics of the underlying model, together
with the snapping forces, allows the reconstruction of structures
that are partially or even completely missing from the input.
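As a rough illustration of this balance (the specific functional form and the weight $\alpha$ are assumptions for exposition, not the exact formulation used by the method), the snapping score of a candidate box $B$ may be viewed as

\[
E(B) \;=\; \alpha\, E_{\mathrm{data}}(B) \;+\; (1-\alpha)\, E_{\mathrm{context}}(B),
\]

where $E_{\mathrm{data}}(B)$ rewards tight agreement between the box and the underlying point samples, $E_{\mathrm{context}}(B)$ rewards consistency with nearby similar blocks, and the discrete optimization selects, from a set of candidate placements, the one with the best score.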