AI for 3D CAD, augmented reality and virtual reality | Pipeline Magazine
By: Dijam Panigrahi
No one knows which of the world’s biggest tech companies will power the best future tools, technologies, and resources for manufacturing, healthcare, construction, and other industries. Organizations are therefore working hard to deliver changes with a lasting impact on humanity, an effort that began with recent advances in artificial intelligence (AI) and immersive mixed reality technologies such as augmented reality (AR) and virtual reality (VR).
Although these technologies differ from one another, they now work together in advanced three-dimensional (3D) applications and environments, to the benefit of businesses and their customers.
In virtual reality, a user wears a headset that provides entry into a new world, one that can even mimic the real one. The visual and aural experience can replicate a real-world setting, such as a manufacturing floor.
Augmented reality is conceptually similar to virtual reality, but it displays digital content in the real world. A manufacturer of electrical, utility, or industrial equipment designing a new machine can see the virtual specifications of the design and, in turn, how it would operate in a real utility or power-generation environment.
These technologies are promising, of course. The challenge is that they require large amounts of data, the ability to process that data at remarkable speeds, and the ability to scale projects, demands that typical office environments rarely support.
Immersive mixed reality calls for a precise and persistent fusion of the real and virtual worlds. Complex models and scenes must be rendered with photorealistic detail, in the right physical location, at the right scale, and with a precise pose. To take advantage of AR/VR to design, build, or repair components, that precision must persist as the user moves.
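As a concrete illustration (not from the article, and with hypothetical function names), placing a virtual model "in the right physical location with the right scale and precise pose" comes down to composing a single transform from scale, rotation, and position, applied to every model vertex before rendering:

```python
import math

# Sketch: compose a 4x4 model transform from uniform scale, a rotation about
# the vertical (y) axis, and a world-space position. The names model_matrix
# and apply are illustrative, not a real renderer's API.

def model_matrix(scale, yaw, position):
    """4x4 transform: uniform scale, yaw rotation, then translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [
        [scale * c,  0.0, scale * s, position[0]],
        [0.0,      scale, 0.0,       position[1]],
        [-scale * s, 0.0, scale * c, position[2]],
        [0.0,        0.0, 0.0,       1.0],
    ]

def apply(matrix, vertex):
    """Transform a 3D vertex by the 4x4 matrix (homogeneous coordinates)."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(matrix[r][i] * v[i] for i in range(4)) for r in range(3))

# A unit-cube corner rendered at half scale, rotated 90 degrees, 2 m ahead.
print(apply(model_matrix(0.5, math.pi / 2, (0.0, 0.0, 2.0)), (1.0, 1.0, 1.0)))
```

If the estimated scale or pose drifts even slightly, every transformed vertex lands in the wrong place, which is why the alignment must be both precise and persistent.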
This is currently achieved by rendering on discrete server-side GPUs and delivering the rendered frames wirelessly to head-mounted displays (HMDs) such as the Microsoft HoloLens and Oculus Quest.
One of the main requirements of mixed reality applications is to superimpose an object’s model, its digital twin, precisely onto the physical object. This enables work instructions for assembly and training, and allows potential manufacturing errors to be detected. It also allows the application to track the object and update the rendering as work progresses.
Most on-device object tracking systems use 2D image-based and/or marker-based tracking. This severely limits the accuracy of the 3D overlay: 2D tracking cannot estimate depth, and therefore scale and pose, with high accuracy. A user may see what appears to be a good match from one angle or position, but the overlay loses alignment as the user moves with six degrees of freedom (6DOF).
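The depth limitation can be made concrete with a minimal sketch (my own, not from the article) of a pinhole camera projection: a small object close to the camera and an object twice as large, twice as far away, produce the identical 2D observation, so a purely 2D tracker cannot tell them apart.

```python
# Sketch: the scale-depth ambiguity of a pinhole projection, which is why
# purely 2D image tracking cannot pin down an object's true scale and pose.

def project(point_3d, focal_length=1.0):
    """Project a 3D point (x, y, z) onto the image plane of a pinhole camera."""
    x, y, z = point_3d
    return (focal_length * x / z, focal_length * y / z)

# A point on a small object 1 m from the camera...
near_small = (0.1, 0.2, 1.0)
# ...and the corresponding point on an object twice as large, 2 m away.
far_large = (0.2, 0.4, 2.0)

print(project(near_small))  # (0.1, 0.2)
print(project(far_large))   # (0.1, 0.2) -- identical 2D observation
```

Every scaled copy of the object along the viewing ray projects to the same pixels, so recovering true scale and 6DOF pose requires 3D information that 2D tracking alone does not provide.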
Object registration, the detection, identification, and estimation of an object’s scale and orientation, must also be performed. In most cases this is done analytically or with simple computer vision methods backed by standard training libraries (for example, Google MediaPipe or VisionLib). That works well for regular, smaller, simpler objects such as hands, faces, cups, tables, chairs, wheels, and other regular geometric structures. For the large, complex objects in enterprise use cases, however, labeled training data, especially in 3D, is not readily available. As a result, using 2D image-based tracking to persistently align, overlay, and track the object and merge the rendered model with it in 3D is extremely difficult, if not impossible. Enterprise users are overcoming these hurdles by leveraging 3D environments and AI technology in their immersive mixed reality design-build projects.
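When 3D data for both the object and its digital twin is available, a classic building block for the orientation part of registration is the Kabsch/SVD method. The sketch below is illustrative only, not the pipeline any vendor named above uses: given corresponding 3D points on the model and as sensed in the scene, it estimates the rotation and translation that align them.

```python
import numpy as np

# Sketch: rigid 3D registration via the Kabsch/SVD method -- estimate the
# rotation R and translation t that best align a digital-twin model's points
# with the corresponding sensed 3D points (least squares).

def register_rigid(model_pts, sensed_pts):
    """Return R, t such that sensed ~= R @ model + t for each point."""
    mu_m = model_pts.mean(axis=0)
    mu_s = sensed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (sensed_pts - mu_s)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_m
    return R, t

# Toy check: transform a point set by a known pose and recover that pose.
rng = np.random.default_rng(0)
model = rng.standard_normal((50, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
sensed = model @ R_true.T + t_true

R_est, t_est = register_rigid(model, sensed)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

Real scenes add noise, partial views, and unknown correspondences, which is where the deep-learning approaches described next come in, but the core idea of estimating a 6DOF pose against the digital twin is the same.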
Deep-learning-based 3D AI can identify 3D objects of arbitrary shape and size, in any orientation, with high precision in 3D space. The approach scales to arbitrary shapes and suits business use cases that require complex 3D render overlays.