From Physical Objects to Virtual Models
The process of developing a 3D model begins with gathering initial data using methods such as LiDAR scanning and photogrammetry to capture detailed spatial information about the object or environment. The raw data is then processed to create a point cloud, a step that involves filtering noise, aligning multiple scans, and verifying that measurements are accurate.
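Where point-cloud tools are available, this cleaning and alignment step can be sketched in a few lines. The snippet below uses the open-source Open3D library; the file names, neighbour counts, and correspondence distance are illustrative assumptions rather than values taken from this work.

```python
import open3d as o3d

# Load two overlapping scans (illustrative file names).
source = o3d.io.read_point_cloud("scan_01.ply")
target = o3d.io.read_point_cloud("scan_02.ply")

# Noise filtering: discard points whose distance to their neighbours
# deviates strongly from the local average.
source, _ = source.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
target, _ = target.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Alignment: refine the pose of `source` against `target` with ICP,
# assuming the two scans are already roughly aligned.
result = o3d.pipelines.registration.registration_icp(source, target, 0.02)
source.transform(result.transformation)

# Merge the aligned scans into a single point cloud.
merged = source + target
o3d.io.write_point_cloud("merged.ply", merged)
```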
A point cloud is a set of discrete, unconnected points; on its own, it does not define a shape. The point cloud data is therefore converted into a mesh of vertices, edges, and faces that defines the object’s shape. Textures and additional details may then be applied to enhance realism, accurately portraying colours, surfaces, and fine features. Finally, the completed 3D model is validated to verify its accuracy and completeness. Because high-density point clouds often carry higher computational costs, further optimisation may be performed to reduce file size or improve performance for specific applications.
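The conversion from point cloud to mesh, and a simple optimisation pass, can likewise be illustrated with Open3D. This is a minimal sketch assuming a cleaned, aligned point cloud; the Poisson depth and the target triangle count are assumptions, not recommendations from the source.

```python
import open3d as o3d

# Cleaned, aligned point cloud from the previous step (illustrative name).
pcd = o3d.io.read_point_cloud("merged.ply")

# Poisson surface reconstruction needs per-point normals.
pcd.estimate_normals()

# Convert the point cloud into a triangle mesh (vertices, edges, faces).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Optimisation: simplify the mesh to reduce file size for downstream use.
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
o3d.io.write_triangle_mesh("model.obj", mesh)
```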
LiDAR, RGB-D cameras, and photogrammetry operate on different principles to capture spatial data for creating 3D models. LiDAR uses laser pulses to measure distances precisely by calculating the time it takes for the light to bounce back from a surface. RGB-D cameras use structured light or time-of-flight technology to measure depth, producing a depth map along with colour information (RGB) and providing real-time 3D data. Photogrammetry involves taking multiple overlapping 2D images of an object or environment from different angles and using software to stitch these images together into a 3D model. It relies on identifying common points across the images to reconstruct the spatial structure.
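The time-of-flight relationship underlying LiDAR (and many depth cameras) is simple enough to state directly: the pulse travels to the surface and back, so the range is half the round-trip time multiplied by the speed of light. The example values below are purely illustrative.

```python
# Time-of-flight ranging as used by LiDAR and some RGB-D sensors.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def tof_range(round_trip_time_s: float) -> float:
    """Return the one-way distance in metres for a measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2

# Example: a pulse returning after 66.7 nanoseconds corresponds to roughly 10 m.
print(tof_range(66.7e-9))  # ~10.0
```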
In terms of the captured data, LiDAR produces highly accurate, dense point clouds that capture fine details of complex surfaces across large areas, and, being an active sensor, it remains effective across varied lighting conditions. RGB-D technology is well suited to indoor, short-range, and dynamic environments, as it provides real-time data useful for applications such as robotics and augmented reality. Photogrammetry can produce detailed, photorealistic 3D models, making it ideal for applications requiring visual fidelity and texture detail. However, its accuracy depends on the quality and number of images, and it is sensitive to lighting conditions and surface textures.
Software tools such as Pix4D and Agisoft Metashape are then used for processing and analysing aerial imagery and point cloud data. In agriculture, these tools are instrumental for creating detailed virtual 3D representations of fields and crops, facilitating precision agriculture practices such as crop monitoring, yield estimation, and soil analysis through accurate spatial data processing and visualisation.
Examples of 3D Modelling for Crop Growth Monitoring and Post-Harvest Handling
The agricultural sector has long recognised the importance of developing time- and work-saving systems while maintaining the necessary efficiency and accuracy of the outcome. For crop growth monitoring, an accurate 3D representation of the crop is extremely important, but producing one is highly time-consuming.
In our work at the Smart Farming Technology Research Centre, Universiti Putra Malaysia (UPM), in collaboration with the Malaysian Agricultural Research and Development Institute (MARDI), we utilised 3D modelling technology for monitoring crops grown in an indoor farming facility at MARDI. Specifically, we explored the feasibility of using photogrammetry-reconstructed 3D point clouds for crop height measurement by comparing the height measured on the 3D point cloud with the physical height of the crop. The images used in the study were captured with a mobile phone camera, which is far more accessible to farmers and growers than LiDAR or RGB-D technologies.
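The article does not prescribe a particular algorithm for reading height off the reconstructed cloud at this point, but one common approach is to fit the ground or tray plane and measure how far the crop points rise above it. The sketch below, using Open3D and NumPy, illustrates that idea under assumed file names and a point cloud already scaled to metric units; it is not the exact procedure used in the study.

```python
import numpy as np
import open3d as o3d

# Photogrammetry-reconstructed point cloud of one plant and its tray
# (illustrative file name; the cloud is assumed to be in metres).
pcd = o3d.io.read_point_cloud("lettuce_sample.ply")

# Fit the dominant plane (assumed ground / growing tray) with RANSAC.
(a, b, c, d), inliers = pcd.segment_plane(distance_threshold=0.005,
                                          ransac_n=3,
                                          num_iterations=1000)

# Perpendicular distance of every remaining (crop) point above that plane.
crop = pcd.select_by_index(inliers, invert=True)
pts = np.asarray(crop.points)
heights = np.abs(pts @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])

# Use a high percentile rather than the single highest point to reduce
# the influence of stray reconstruction noise.
print(f"Estimated crop height: {np.percentile(heights, 99):.3f} m")
```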
Butterhead lettuce and rubber seedlings (n = 20 for each type) were randomly selected over a period of 45 days. A phone camera with 12 MP resolution and a focal length of 26 mm was used to capture 30 to 50 images of each plant sample. Images were captured all around the crop without any preferential position, at irregular intervals, using ambient lighting (see Figure 1a). For physical measurement, a ruler was placed vertically next to the plant to record its height.