NAUKA

Optimising UAV data acquisition and processing for photogrammetry: a review

Unmanned aerial vehicles (UAVs) are used to acquire measurement data in a growing number of applications. Thanks to significant advances in computer vision techniques, photogrammetry, and equipment miniaturisation, photogrammetric studies based on UAV data achieve accuracy sufficient for many engineering and non-engineering applications. Besides accuracy, the time and cost of data acquisition and processing are also important. The aim of this paper is to present potential limitations in the use of UAVs for acquiring measurement data, and to review measurement and processing techniques that optimise the work in terms of both accuracy and economy. The analysis covers the type of drone used (multi-rotor, fixed-wing), the type of camera shutter (rolling shutter, global shutter), the camera calibration method (pre-calibration, self-calibration), the georeferencing method (direct, indirect), the technique of measuring the exterior orientation parameters of the images (RTK, PPK, PPP), flight design methods, and the type of software used.
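The flight design trade-off between accuracy and economy mentioned above can be illustrated with a minimal sketch. The formulas below are standard photogrammetric relations (GSD from altitude, focal length, and pixel pitch); the sensor parameters in the example are illustrative assumptions, not values from the paper.

```python
def flight_altitude_for_gsd(target_gsd_m, focal_length_mm, pixel_size_um):
    """Altitude above ground [m] that yields the target GSD [m/px]
    for a nadir-looking camera: GSD = pixel_size * altitude / focal_length."""
    return target_gsd_m * (focal_length_mm * 1e-3) / (pixel_size_um * 1e-6)

def photo_base(gsd_m, image_height_px, forward_overlap):
    """Along-track distance [m] between exposures for a given forward overlap."""
    footprint = gsd_m * image_height_px          # ground footprint of one frame
    return footprint * (1.0 - forward_overlap)   # advance by the non-overlapping part

# Example (assumed camera): 24 mm lens, 4.4 um pixels,
# 2 cm/px target GSD, 80 % forward overlap on a 4000 px frame
alt = flight_altitude_for_gsd(0.02, 24.0, 4.4)   # ~109 m
base = photo_base(0.02, 4000, 0.80)              # ~16 m between exposures
print(round(alt, 1), round(base, 1))
```

Lowering the target GSD (finer detail) pulls the flight altitude down and shrinks the photo base, which directly increases the number of images and thus acquisition and processing cost.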


Vehicle detection and masking in UAV images using YOLO to improve photogrammetric products

Photogrammetric products obtained by processing data acquired with unmanned aerial vehicles (UAVs) are used in many fields. Various structures are analysed, including roads. Many roads located in cities are characterised by heavy traffic. This makes it impossible to avoid the presence of cars in aerial photographs. However, they are not an integral part of the landscape, so their presence in the generated photogrammetric products is unnecessary. The occurrence of cars in the images may also lead to errors such as irregularities in digital elevation models (DEMs) in roadway areas and the blurring effect on orthophotomaps. The research aimed to improve the quality of photogrammetric products obtained with the Structure from Motion algorithm. To fulfil this objective, the YOLO v3 algorithm was used to automatically detect cars in the images. Neural network training was performed using data from a different flight to ensure that the obtained detector could also be used in independent projects. The photogrammetric process was then carried out in two scenarios: with and without masks. The obtained results show that the automatic masking of cars in images is fast and allows for a significant increase in the quality of photogrammetric products such as DEMs and orthophotomaps.
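The masking step described above can be sketched as follows: detector output is rasterised into a binary image mask that the SfM software can use to ignore vehicle pixels. The hard-coded boxes below stand in for YOLO detections; the function name and the 255/0 mask convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def boxes_to_mask(image_shape, boxes):
    """Return a uint8 mask: 255 = usable pixel, 0 = masked vehicle.

    boxes: iterable of (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    h, w = image_shape[:2]
    mask = np.full((h, w), 255, dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        # Clamp the box to the image bounds and zero out the car region
        x0, x1 = max(0, x0), min(w, x1)
        y0, y1 = max(0, y0), min(h, y1)
        mask[y0:y1, x0:x1] = 0
    return mask

# Stand-in detections for a 1920 x 1080 frame
detections = [(100, 200, 180, 260), (400, 50, 470, 110)]
mask = boxes_to_mask((1080, 1920), detections)
print(mask.shape, int((mask == 0).sum()))
```

One mask per image, saved alongside the photos, is the usual way such exclusion regions are fed to SfM pipelines.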


Determining optimal photogrammetric adjustment of images obtained from a fixed-wing UAV

Photogrammetry with unmanned aerial vehicles (UAVs) has become a source of data with extensive applications. Accuracy is of utmost significance, yet the aim is also to find solutions for data acquisition that are optimal in economic terms. The objective of the research was the analysis of various variants of the bundle block adjustment. The analysis concerns data that vary with respect to the type of shutter (rolling/global), the measurement of exterior orientation elements, the overlap, and the number of ground control points (GCPs).
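The economic side of the overlap choice can be illustrated with a rough block-planning sketch: image count grows quickly as forward and side overlap increase. All numbers below (block size, footprint, overlaps) are assumed for illustration only.

```python
import math

def images_in_block(area_w, area_l, footprint_w, footprint_l,
                    side_overlap, forward_overlap):
    """Approximate image count needed to cover an area_w x area_l block [m]
    with a footprint_w x footprint_l ground footprint per image."""
    strip_spacing = footprint_w * (1.0 - side_overlap)   # distance between strips
    base = footprint_l * (1.0 - forward_overlap)         # distance between exposures
    n_strips = math.ceil(area_w / strip_spacing)
    per_strip = math.ceil(area_l / base)
    return n_strips * per_strip

# 500 m x 500 m block, 120 m x 90 m image footprint
low  = images_in_block(500, 500, 120, 90, 0.60, 0.70)  # moderate overlap
high = images_in_block(500, 500, 120, 90, 0.80, 0.90)  # high overlap
print(low, high)
```

Going from 60/70 % to 80/90 % overlap multiplies the image count several times over, which is why the adjustment variants are compared not only for accuracy but also for cost.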

Semantic segmentation-driven integration of point clouds from mobile scanning platforms in urban environments

Precise and complete 3D representations of architectural structures or industrial sites are essential for various applications, including structural monitoring and cadastre. However, acquiring these datasets can be time-consuming, particularly for large objects. Mobile scanning systems offer a solution in such cases. For complex scenes, multiple scanning systems are required to obtain point clouds that can be merged into a comprehensive representation of the object. Merging individual point clouds obtained from different sensors or at different times can be difficult due to discrepancies caused by moving objects or changes in the scene over time, such as seasonal variations in vegetation. In this study, we present the integration of point clouds obtained from two mobile scanning platforms, a quadruped robot and an unmanned aerial vehicle (UAV), within a built-up area. The PointNet++ network was employed for semantic segmentation, enabling the detection of non-ground objects. The experimental tests used the Toronto 3D and DALES datasets for network training; based on its performance, the model trained on DALES was chosen for further research. The proposed integration algorithm involved semantic segmentation of both point clouds, dividing them into square subregions, and performing subregion selection: empty subregions were identified first, and where both subregions contained points, parameters such as local density, centroids, coverage, and Euclidean distance were evaluated. Point cloud merging and augmentation, enhanced with semantic segmentation and clustering, resulted in the exclusion of points associated with movable objects. The proposed method was compared with simple merging on the basis of file size, number of points, mean roughness, and noise estimation, and provided adequate results with improved point cloud quality indicators.
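The subregion step described above can be sketched in a simplified form: each cloud is binned into square cells on the ground plane, empty cells are resolved trivially, and where both clouds populate a cell a selection rule decides which source to keep. The cell size and the "keep the denser cell" rule below are simplifying assumptions standing in for the paper's full criteria (local density, centroids, coverage, Euclidean distance).

```python
from collections import defaultdict

import numpy as np

def cell_index(points, cell=2.0):
    """Map each point (x, y, z) to an integer (ix, iy) subregion key."""
    return [tuple(k) for k in np.floor(points[:, :2] / cell).astype(int)]

def merge_by_subregion(cloud_a, cloud_b, cell=2.0):
    """Merge two clouds cell by cell: take the non-empty cell,
    or the locally denser one when both clouds contain points."""
    cells_a, cells_b = defaultdict(list), defaultdict(list)
    for p, k in zip(cloud_a, cell_index(cloud_a, cell)):
        cells_a[k].append(p)
    for p, k in zip(cloud_b, cell_index(cloud_b, cell)):
        cells_b[k].append(p)
    merged = []
    for k in set(cells_a) | set(cells_b):
        a, b = cells_a.get(k, []), cells_b.get(k, [])
        merged.extend(a if len(a) >= len(b) else b)  # emptiness handled implicitly
    return np.array(merged)

# Toy clouds: cloud_a dominates cell (0, 0), each cloud owns one other cell
cloud_a = np.array([[0.5, 0.5, 1.0], [1.0, 1.0, 1.0], [2.5, 0.5, 1.0]])
cloud_b = np.array([[0.2, 0.2, 2.0], [0.5, 2.5, 2.0]])
merged = merge_by_subregion(cloud_a, cloud_b)
print(merged.shape)
```

In the toy example, cell (0, 0) is taken from the denser cloud_a, the two single-source cells survive unchanged, and cloud_b's lone point in the contested cell is dropped.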