Arvizio, a tech firm in Ontario, Canada, is developing enterprise-grade augmented and mixed reality (AR/MR) solutions for construction and industrial on-site workflows.
The firm offers a wide suite of AR/MR tools to optimise and share complex enterprise-grade 3D models, and has designed its solution for key immersive devices such as Microsoft’s HoloLens 2 and Magic Leap. Arvizio’s selection of solutions includes ‘Immerse 3D’ and ‘Instructor’.
With these immersive tools, Arvizio aims to bring innovation to Industry 4.0 by assisting frontline workers to monitor and navigate industrial environments.
Most recently, the firm released AR Instructor, which enables on-site workers to integrate custom work instructions on AR heads-up displays, including 2D documents, videos, and 3D digital twins.
Additionally, updates to AR Instructor connect dispersed frontline workers by enabling instant remote collaboration and guidance. For example, senior experts can annotate the field of view (FoV) of an on-site worker via direct audio and video connections.
XR Today spoke with Jonathan Reeves, CEO of Arvizio, to discuss how his firm is laying the foundation for a safe and reliable Industry 4.0.
XR Today: How does Arvizio help to empower Industry 4.0?
Jonathan Reeves: It’s clear that AR is one of the next great technologies to break through and begin to revolutionise Industry 4.0, taking it to the ‘next stage.’
As part of this, there’s a significant digital transformation taking place across industries and sectors, and I think it’s fair to say that, while AR has been a good topic for a lot of hype and discussion in the market, it’s only now beginning to break through.
There are some very good reasons for that. You asked about the COVID environment and that is a catalyst. So, with tools like the Arvizio Immerse 3D solution, or our Arvizio Instructor, a very key and fundamental concept of both products is the ability to enable remote collaboration.
It’s our view that remote collaboration is one of the key drivers. A lot of the Metaverse discussions are taking place, and most people think of the Metaverse [as] a remote collaboration-type scenario.
We buy into that and think the ability to have team members work across geographies is particularly important. For example, with the Immerse 3D product, what that zeroes in on is the ability to conduct design reviews using 3D models and content.
Workers can edit models such as very complex computer-aided design (CAD) files; on-site workers can use the Immerse 3D product to edit Revit or Navisworks models, for example.
XR Today: What challenges did you find when translating real-time 3D (RT3D) models and digital twins into Arvizio solutions?
Jonathan Reeves: A lot of the challenge with visualising large-scale models is the complexity of the models themselves. So, we work with customers in the energy field, and they might have a real-time 3D (RT3D) model of an entire chemical plant or an oil refinery, whatever it may be.
That RT3D model may have millions of objects, so every individual valve, every individual pump, and so forth in that model, is a separate object. Devices like the Microsoft HoloLens, and even your mobile phone or iPad, don’t have the horsepower to render RT3D models of that size and complexity.
There are a couple of key techniques we’ve had to develop to tackle that head-on. One is that we’ve built a very powerful optimisation tool that automatically takes a very large-scale model, with millions of objects, and optimises every individual object in it.
Take, as an example, a small part, like a pump, that’s in the 3D model. There’s all the other pipework and machinery and so forth around it. Now, if that pump is a long distance away from me when I’m looking at the entire model, I don’t need to show it in very high detail. But, the closer I get to that pump, the more detail I need to see.
Our optimisation engine creates different levels of detail for each of those objects. Then, based on the distance from the object, we use a greater or lesser level of detail.
What that allows us to do is use fewer resources on the headset to render that, because if the object is just a few pixels, there’s no point rendering the hundreds of thousands of triangles associated with that model.
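The distance-based selection Reeves describes can be sketched in a few lines. This is a minimal illustration, not Arvizio’s actual engine: the distance thresholds, triangle counts, and the `LOD_LADDER` structure are all assumptions for the example.

```python
# Illustrative sketch of distance-based level-of-detail (LOD) selection,
# assuming a pre-built ladder of simplified meshes per object.

from dataclasses import dataclass

@dataclass
class LODLevel:
    max_distance: float   # metres; use this level up to this distance
    triangle_count: int   # complexity of the simplified mesh

# Hypothetical LOD ladder: closer viewers get denser meshes.
LOD_LADDER = [
    LODLevel(max_distance=2.0,  triangle_count=50_000),
    LODLevel(max_distance=10.0, triangle_count=5_000),
    LODLevel(max_distance=50.0, triangle_count=500),
]

def select_lod(distance_m: float) -> LODLevel:
    """Pick the coarsest mesh that still looks correct at this distance."""
    for level in LOD_LADDER:
        if distance_m <= level.max_distance:
            return level
    # Beyond the last threshold the object covers only a few pixels,
    # so the coarsest mesh (or a flat billboard) is enough.
    return LOD_LADDER[-1]

print(select_lod(1.5).triangle_count)   # a nearby pump: full detail
print(select_lod(40.0).triangle_count)  # a distant pump: heavily simplified
```

The headset then spends its limited rendering budget only where the viewer is actually looking closely, which is the resource saving Reeves describes next.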
A second key piece builds on technologies like 5G and high-speed broadband: we can leverage server technology to render the objects, an approach we call ‘hybrid rendering’. Here, the headset renders some of the objects, and a cloud or edge server renders the rest.
What we do is essentially render the 3D model and stream a left-eye/right-eye video image into the headset. Now, part of the trick here is that you have to keep the round-trip latency low, because if a worker moves their head, you have to re-render a different scene.
So, what we’re doing is we’re sending real-time information from the headset to the server, re-rendering the scene, and then streaming it back. That’s the technique we use for the most complex models.
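The round trip Reeves describes, pose out, stereo frame back, can be sketched as a simple loop. This is a hedged illustration only: the function names, the stand-in server, and the 20 ms latency budget are assumptions, not Arvizio’s implementation.

```python
# Simplified sketch of a hybrid-rendering round trip: the headset sends
# its head pose to a render server, which returns a left/right-eye frame.

import time

LATENCY_BUDGET_S = 0.020  # assumed motion-to-photon target, not a real spec

def render_on_server(pose):
    """Stand-in for the remote renderer: returns a stereo frame pair."""
    # A real server would rasterise the full-complexity model from this pose.
    return {"left_eye": f"frame@{pose}", "right_eye": f"frame@{pose}"}

def hybrid_frame(pose):
    """Send the pose, receive a stereo frame, and flag it if it arrived late."""
    start = time.monotonic()
    frame = render_on_server(pose)            # network hop + server render
    round_trip = time.monotonic() - start
    # If the round trip exceeds the budget, a real headset would reproject
    # the previous frame to the new pose rather than show stale imagery.
    frame["stale"] = round_trip > LATENCY_BUDGET_S
    return frame

frame = hybrid_frame(pose=(0.0, 1.6, 0.0))
print(frame["left_eye"], frame["stale"])
```

The key design point is that the headset only streams a small pose update upstream, while the heavy rendering happens on the server, which is why low-latency links like 5G matter for this approach.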
XR Today: Do Arvizio solutions improve safety and accessibility within these hazardous work environments?
Jonathan Reeves: When overlaying instructions on a piece of machinery, we can create warnings and indications such as ‘don’t touch this area,’ or ‘keep away from this kind of thing.’ That’s an important and key aspect.
Also, combining the ‘Remote Expert’ feature into one product gives it an additional advantage, because we often find that when field engineers go on-site, things are usually poorly documented. So, somebody changed something, and the database doesn’t reflect that.
By having the real-time video component, they can quickly jump on a call, consult with the experts to bring them up to speed on what’s different, and then take over from there. That, we think, greatly assists with safety.