MASSIVE-SCALE URBAN RECONSTRUCTION, CLASSIFICATION, AND RENDERING FROM REMOTE SENSOR IMAGERY

Consortium

Presagis Inc

Presagis is a Montreal-based software company that supplies simulation and graphics software to the world's top 100 defense and aeronautics companies. Over the last decade, Presagis has built a strong reputation for recreating the complexity of the real world in virtual environments. Its deep understanding of the defense and aeronautics industries, combined with expertise in synthetic environments, simulation and visualization, human-machine interfaces, and sensors, positions it to meet today's goals and prepare for tomorrow's challenges. Today, Presagis is investing heavily in research and innovation in virtual reality, artificial intelligence, and big-data analysis. By leveraging its experience and recognizing emerging trends, its pioneering team of experts, former military personnel, and programmers is challenging the status quo and building tomorrow's technology today.

Concordia University, Montreal, Quebec

Immersive and Creative Technologies Lab

The Immersive and Creative Technologies (ICT) Lab was founded in late 2011 and has since focused on fundamental and applied research in computer vision, computer graphics, virtual/augmented reality, and creative technologies, and on their application across a wide range of fields. More specifically, the long-term objectives of the research at the ICT Lab are to create (a) virtual worlds that are indistinguishable in all aspects from the real-world areas they represent and (b) visualizations that employ these realistic virtual worlds for a wide range of applications.
The ICT Lab is part of the Department of Computer Science and Software Engineering in the Faculty of Engineering and Computer Science at Concordia University.

Research

Research objectives

Image-based Modeling

Classification of geospatial features and road extraction

Photorealistic rendering

DAEDALUS research programme

Publications

IEEE 3DTV-CON 2018

Single-shot Dense Reconstruction with EpicFlow

Chen Qiao, Charalambos Poullis
IEEE 3DTV-CON, 2018
In this paper we present a novel method for generating dense reconstructions by applying only structure-from-motion (SfM) to large-scale datasets, without the need for multi-view stereo as a post-processing step.
A state-of-the-art optical flow technique is used to generate dense matches. The matches are encoded so that they can be verified for correctness, and are stored in an on-disk database. This out-of-core approach shifts the large memory requirement to disk, thereby allowing for the processing of even larger-scale datasets than before. We compare our approach with the state of the art and present results that verify our claims.
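The verify-then-store idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the forward-backward consistency check, the 1-pixel threshold, and the SQLite schema are all assumptions made for the example.

```python
import sqlite3
import numpy as np

def verify_matches(flow_fw, flow_bw, threshold=1.0):
    """Hypothetical verification step: keep pixels whose forward flow,
    warped back by the backward flow, returns close to the start point."""
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # forward-mapped coordinates
    xf = xs + flow_fw[..., 0]
    yf = ys + flow_fw[..., 1]
    # sample the backward flow at the (rounded) forward positions
    xi = np.clip(np.rint(xf).astype(int), 0, w - 1)
    yi = np.clip(np.rint(yf).astype(int), 0, h - 1)
    xb = xf + flow_bw[yi, xi, 0]
    yb = yf + flow_bw[yi, xi, 1]
    err = np.hypot(xb - xs, yb - ys)
    return err < threshold  # boolean mask of verified matches

def store_matches(db_path, pair_id, flow_fw, mask):
    """Append the verified matches for one image pair to an on-disk
    table, so memory use stays bounded regardless of dataset size."""
    ys, xs = np.nonzero(mask)
    rows = [(pair_id, int(x), int(y),
             float(x + flow_fw[y, x, 0]), float(y + flow_fw[y, x, 1]))
            for y, x in zip(ys, xs)]
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS matches "
                "(pair TEXT, x1 INT, y1 INT, x2 REAL, y2 REAL)")
    con.executemany("INSERT INTO matches VALUES (?,?,?,?,?)", rows)
    con.commit()
    con.close()
    return len(rows)
```

In a real pipeline the database would live on disk and be populated pair by pair; only the verified matches for the current pair ever reside in memory.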

CRV 2018

Deep Autoencoders with Aggregated Residual Transformations for Urban Reconstruction from Remote Sensing Data

Timothy Forbes, Charalambos Poullis
15th Conference on Computer and Robot Vision, 2018
In this work we investigate urban reconstruction and propose a complete, automatic framework for reconstructing urban areas from remote sensing data.
Firstly, we address the complex problem of semantic labeling and propose a novel network architecture named SegNeXT. It combines the strengths of deep autoencoders with feed-forward links, which generate smooth predictions and reduce the number of learning parameters, with cardinality-enabled residual-based building blocks, which have been shown to improve prediction accuracy and to outperform deeper/wider network architectures while using fewer learning parameters. The network is trained on benchmark datasets, and the reported results show that it provides comparable, and in some cases better, classification than the state of the art.
Secondly, we address the problem of urban reconstruction and propose a complete pipeline for automatically converting semantic labels into virtual representations of the urban areas. Agglomerative clustering is performed on the points according to their classification, producing a set of contiguous, disjoint clusters. Finally, each cluster is processed according to the class it belongs to: tree clusters are substituted with procedural models, cars are replaced with simplified CAD models, building boundaries are extruded to form 3D models, and road, low-vegetation, and clutter clusters are triangulated and simplified. The result is a complete virtual representation of the urban area. The proposed framework has been extensively tested on large-scale benchmark datasets, and the semantic labeling and reconstruction results are reported.

Contact

Immersive and Creative Technologies Lab
Department of Computer Science and Software Engineering
Concordia University
1455 de Maisonneuve Blvd. West, EV03.183,
Montréal, Québec,
Canada, H3G 1M8