17 Sep Augmented Reality Software Tools
Programming or pinning?
The best of both worlds – that’s what many advertising slogans say when it comes to combining two aspects: emotion and innovation, naturalness and modernity, pragmatism and perfectionism. But what if two worlds collide and open up endless possibilities? Augmented Reality (AR) combines reality and virtuality. The product is the best of both worlds.
AR is an interface between real objects and virtual content. What sounds simple is in fact a promising technology with considerable advantages. This becomes visible, for example, when learning with AR. If content is prepared as virtual or spatial simulations, learners don’t have to navigate a user interface or search for information. Their cognitive load is minimized, they can focus entirely on the content, and the quality of learning improves.
Customer-specific knowledge transfer needs good programmers
Software Development Kits (SDKs) and software tools for implementing such interfaces are already widely available, e.g. ARCore, Apple ARKit and the HoloLens Toolkit. Guidelines based on practical experience and extensive studies are also available. But teaching content and companies’ expectations of knowledge transfer are mostly subject-specific and individual. This is why it takes programmers to develop tailor-made solutions for each individual customer. This process is costly and time-consuming: the programmer has to deal with tracking and with the interaction between virtual and real objects, create user interfaces, and pay attention to a good user experience and easy handling.
Despite these individual demands, parallels can be found among the use cases, which revolve around learning and training workflows. In the industrial sector, for example, animating and describing certain work steps with audiovisual and spatial instructions makes sense. This is why VISCOPIC developed tools that let users generate content and manuals themselves easily, with no knowledge of tracking, interaction or user-interface design and no programming skills required. With VISCOPIC Pins, virtual content can be ‘pinned’ onto real objects. With VISCOPIC Steps, step-by-step instructing, learning and training on virtual objects is made possible.
A development environment for all SDKs – Unity
To implement an ideal AR learning environment, the interaction of hardware and software components is crucial. How skillfully these components are combined in an application depends on the abilities of the programmer. In addition, a development environment that suits these needs is required. For AR applications, Unity is a strong choice, since it is compatible with different AR SDKs, such as those of the HoloLens and the Meta 2, which each have their own SDK.
The SDKs offer scripts that enable uncomplicated and quick development of applications. The developer can access basic functionalities of the hardware, like the camera image, microphone and speaker, the sensors for spatial mapping, and the user’s interactions. Individual content that should be presented still has to be generated and integrated by a developer on top of the SDK.
Besides AR glasses, smartphones are also suitable for AR applications, because they contain built-in sensors similar to those used in AR glasses. A significant advantage of smartphones is their omnipresence: compared to AR glasses, AR apps on smartphones can quickly reach a broad audience. SDKs are available for numerous platforms, like Google ARCore (Android) and ARKit (iOS).
Apart from these SDKs, which are tailored to specific hardware or operating systems, there are also independent SDKs. Vuforia and Kudan, for example, specialize in object tracking and are versatile. The development environment Unity works with all of the SDKs mentioned, and the software can be delivered to different platforms.
Software vendors such as Microsoft, Google, Meta and Apple add a collection of basic recommendations and guidelines to their SDKs. These are based on empirical data and values from concrete applications. They provide information on how interactions and user interfaces can be designed to improve the user experience. For example, the required frame rate must be maintained, or, depending on the hardware and software, the correct distance must be chosen so that virtual objects and interface elements are presented to the user in a pleasant way.
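To make the distance guideline concrete: Microsoft’s comfort guidelines for the HoloLens recommend placing holograms roughly between 1.25 m and 5 m from the user, with about 2 m being most comfortable. A minimal sketch of enforcing such a rule before placement (treat the exact numbers as an assumption for this sketch):

```python
# Clamp a hologram's requested distance into a comfortable viewing range.
# The 1.25 m – 5 m range follows Microsoft's HoloLens comfort guidance;
# the exact values here are an assumption for illustration.

MIN_DISTANCE_M = 1.25   # closer holograms strain the eyes
MAX_DISTANCE_M = 5.0    # farther holograms lose stereo depth cues

def clamp_placement_distance(requested_m: float) -> float:
    """Return a viewing distance inside the comfortable range."""
    return max(MIN_DISTANCE_M, min(MAX_DISTANCE_M, requested_m))

print(clamp_placement_distance(0.5))   # too close  -> 1.25
print(clamp_placement_distance(2.0))   # fine       -> 2.0
print(clamp_placement_distance(12.0))  # too far    -> 5.0
```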
3 components + a good idea = an interactive application
In order to build an interactive app, a good idea is indispensable. It also helps to know the three main components of every AR app: the tracking system, the virtual content, and the interaction.
For AR glasses to be more than just a screen, they need a tracking system. This anchors a virtual object in space so that it maintains its position even when the viewer moves around the object.
The proprietary tracking systems of glasses can be extended by tracking-SDKs. These can usually be integrated independently of special glasses. With these additional tracking-SDKs, not only the position of the glasses and the general environment can be detected, but also special objects in the environment, such as people, tools, machines or gestures.
The HoloLens uses inside-out tracking, which is typical for AR devices. It uses the sensors built directly into the glasses (cameras, infrared cameras, light sensors and inertial measurement units) to position itself and objects in space. The glasses look for so-called “features”, i.e. “interesting” points in the room, which they reliably recognize over several frames and from several angles. Thus the glasses can determine how they are positioned relative to their environment.
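The anchoring idea behind this can be sketched in a few lines: once the device knows its own pose, a world-space anchor is simply re-expressed in device coordinates every frame, which is why the hologram appears to stay put as the viewer moves. The sketch below reduces the full 6-DoF pose to a 2D position plus a yaw angle, which is an assumption for readability:

```python
import math

# A hologram anchored at a fixed world position stays put because each
# frame the tracking system re-expresses that world position in the
# glasses' own coordinate frame. Simplified: the device pose is just a
# yaw angle and a 2D position (real systems use full 6-DoF poses).
# Convention here: the device's local x-axis points forward.

def world_to_device(anchor_xy, device_xy, device_yaw_rad):
    """Transform a world-space anchor into device coordinates."""
    dx = anchor_xy[0] - device_xy[0]
    dy = anchor_xy[1] - device_xy[1]
    c, s = math.cos(-device_yaw_rad), math.sin(-device_yaw_rad)
    return (c * dx - s * dy, s * dx + c * dy)

anchor = (2.0, 0.0)  # hologram pinned 2 m in front of the world origin

# Viewer at the origin, facing +x: the hologram is 2 m straight ahead.
print(world_to_device(anchor, (0.0, 0.0), 0.0))
# Viewer steps 1 m toward it: the hologram is now 1 m ahead, unmoved in the world.
print(world_to_device(anchor, (1.0, 0.0), 0.0))
```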
Additional algorithms can access the sensors. There are usually two different optical approaches. The first is based on recognizing natural features, such as entire objects, and is followed by VisionLib and OpenCV, for instance. The disadvantage is that the required computing power is often comparatively high and the applications prove less reliable. On the positive side, the environment does not need to be specially prepared.
The second approach is to place markers or QR codes (so-called fiducial markers) at prominent positions or on objects. The camera can easily distinguish these from other objects. Marker-based SDKs include Wikitude and ARToolKit, both of which already integrate with Unity.
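One reason fiducial markers make tracking so much cheaper: their physical size is known in advance, so the standard pinhole camera model yields the marker’s distance from a single image. A minimal sketch (the focal length and sizes are illustrative values, not taken from any real device):

```python
# Pinhole camera model: an object of known real size at distance Z
# appears with apparent size x in the image, where x = f * X / Z.
# Solving for Z gives the marker's distance:
#   distance = focal_length_px * real_size_m / apparent_size_px

def marker_distance_m(focal_px: float, real_size_m: float, apparent_px: float) -> float:
    """Distance to a fiducial marker of known physical size."""
    return focal_px * real_size_m / apparent_px

# A 10 cm marker imaged at 100 px width with a 1000 px focal length sits at 1 m.
print(marker_distance_m(1000.0, 0.10, 100.0))  # -> 1.0
```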
Tracking can be improved with additional hardware. A current research approach deals with exo-suits, such as the HoloSuit, for body tracking or additional sensors such as the LeapMotion, which specializes in tracking individual fingers. There are currently no field-proven systems in conjunction with the HoloLens, even though the initial results of the research and development teams are very promising.
Which tracking system you choose depends strongly on the use case. Sometimes the standard functions of the glasses are sufficient, if the content can be viewed independently of the environment. In more specific cases, the advantages and disadvantages of additional SDKs must be weighed up in order to achieve the best possible result.
To view holograms, 3D models can simply be imported into Unity. Most of the time, however, this is not the end of the story, because you usually don’t just want to view a model, but want to add further content such as animations, instructions, text, images, and videos. If there are only a few additions, entering them into Unity is not a problem. If, on the other hand, you want to display dozens or hundreds of work instructions, animations or media items, scaling the amount of content becomes difficult.
With the help of an editor or content creation tool, entering content can be handed over to people who do not need a deeper understanding of 3D engines, instead of having to assign it to a programmer. Building such a tool, however, means a large initial development effort.
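The core idea of such a tool is to externalize the content into a data format that the editor writes and the app merely loads, so adding a work step requires no code change. A minimal sketch with a hypothetical JSON schema (the field names and file contents are invented for illustration):

```python
import json

# Sketch of data-driven AR work instructions: a content-creation tool
# writes this JSON, and the AR app only loads and renders it. The
# schema ("id", "title", "media", "anchor") is hypothetical.
steps_json = """
[
  {"id": 1, "title": "Remove cover", "media": "cover.mp4", "anchor": "housing"},
  {"id": 2, "title": "Loosen bolts", "media": "bolts.png", "anchor": "flange"}
]
"""

steps = json.loads(steps_json)
for step in steps:
    # A real app would spawn a hologram at the named anchor and attach the media.
    print(f"Step {step['id']}: {step['title']} (media: {step['media']})")
```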
In addition to the amount of content, it must be considered that the size of the graphic objects must be adapted to the respective glasses and their capabilities. The HoloLens has a display resolution of 1268×720 px and a custom-made processor. Nevertheless, you should not overload the glasses with the rendering of large files. Images and videos must therefore be compressed and the polygon counts of 3D objects reduced to guarantee a smooth 60 FPS (frames per second).
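The arithmetic behind that 60 FPS target is simple: each frame must finish in 1000/60 ≈ 16.7 ms, and rendering shares that budget with tracking and interaction. A rough budget check for a mesh might look like this (the triangle throughput is an assumed figure for mobile-class hardware, not a measured one):

```python
# At 60 FPS, each frame has 1000/60 ≈ 16.67 ms. Rendering must share
# that budget with tracking, interaction handling, etc.

FRAME_BUDGET_MS = 1000.0 / 60.0   # ≈ 16.67 ms per frame
TRIANGLES_PER_MS = 30_000         # assumed throughput, not a measured value

def fits_frame_budget(triangle_count: int, other_work_ms: float = 8.0) -> bool:
    """Estimate whether rendering a mesh still leaves the frame on budget."""
    render_ms = triangle_count / TRIANGLES_PER_MS
    return render_ms + other_work_ms <= FRAME_BUDGET_MS

print(fits_frame_budget(100_000))    # ~3.3 ms of rendering -> True
print(fits_frame_budget(1_000_000))  # ~33 ms of rendering  -> False
```

Under these assumptions, a million-triangle model blows the budget on its own, which is exactly why polygon reduction matters.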
A tracking system and virtual content are the basic requirements; interaction, however, is an essential aspect of every AR app. Playing an animation, starting a video or placing holograms in space – there are few applications where the user does not want to interact with the virtual world.
Depending on the AR glasses, interactions look different. They can be summarized as follows:
- Gestures: Hand gestures allow the user to perform actions; what they trigger depends on the application. Frequently, gestures are used to perform a simple mouse click. The position of the hand is tracked by the glasses. Even more complex interactions, such as scrolling or dragging objects in space, can be performed.
- Gaze: On the HoloLens, for example, a virtual cursor is placed in the middle of the field of view. If the cursor is positioned on an object and the user makes a click gesture, the system interacts with the object under the cursor. This cursor can also be used without gestures: a video can play for as long as the user is looking at it, for example, and stop automatically when the user looks away. This is helpful if you don’t want the user to have to learn the hand gestures.
- Voice input: Actions can also be carried out without hand or head movements: with the help of the voice. If “Go Back” means returning to a previous step, the user does not have to look at the Go Back button and perform the click gesture, but can simply say the keyword. This can save time or be useful if the user needs the hands for other tasks.
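Under the hood, gaze selection boils down to a ray cast: a ray from the head along the viewing direction is tested against each hologram, and the first hit becomes the click target. A minimal sketch, modelling holograms as spheres to keep the intersection test short (a real engine tests against actual mesh colliders):

```python
import math

# Gaze selection as a ray cast: a ray from the head along the view
# direction is tested against a hologram, modelled here as a sphere.

def gaze_hits(origin, direction, center, radius):
    """True if the gaze ray (origin, unit-length direction) hits the sphere."""
    oc = [c - o for o, c in zip(origin, center)]
    t = sum(d * v for d, v in zip(direction, oc))       # closest approach along ray
    if t < 0:
        return False                                    # hologram is behind the user
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist2 = sum((p - c) ** 2 for p, c in zip(closest, center))
    return dist2 <= radius ** 2

head = (0.0, 0.0, 0.0)
forward = (0.0, 0.0, 1.0)   # unit vector: user is looking along +z
print(gaze_hits(head, forward, (0.0, 0.1, 2.0), 0.3))  # nearly centered -> True
print(gaze_hits(head, forward, (1.0, 0.0, 2.0), 0.3))  # off to the side -> False
```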
In addition to the how, the question of where one interacts plays an important role. Traditional user interfaces are arranged in windows. Since the user of an AR app should be able to operate it from anywhere and from any viewing direction, the control elements must be arranged in easily accessible places. Holograms placed in space often serve as a reference point.
Sometimes a menu or a panel does not belong to a certain hologram, but should be accessible from everywhere. In this case, voice input can be used, or the window can move along with the user’s viewing direction.
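Such a window that follows the viewing direction can be implemented by re-placing it each frame at a fixed distance along the gaze ray. A minimal sketch (the 2 m reading distance is an assumption; real implementations also smooth the motion rather than snapping instantly):

```python
# A panel that is not tied to one hologram can follow the user: each
# frame it is re-placed at a fixed distance along the view direction.
# Real implementations interpolate toward the target; this snaps instantly.

PANEL_DISTANCE_M = 2.0   # assumed comfortable reading distance

def panel_position(head_pos, view_dir):
    """Place the panel PANEL_DISTANCE_M metres along the (unit) gaze ray."""
    return tuple(p + PANEL_DISTANCE_M * d for p, d in zip(head_pos, view_dir))

# Head at eye height 1.6 m, looking along +z: the panel sits 2 m ahead.
print(panel_position((0.0, 1.6, 0.0), (0.0, 0.0, 1.0)))  # -> (0.0, 1.6, 2.0)
```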
Which interaction possibilities should be offered depends on the use case. The target group must be considered and the complexity as well as the number of interactions must be adapted accordingly.
It won’t work without programming effort, but it can be minimized.
In summary, it can be said that with the common SDKs there is no getting around programming and integrating tracking, content and the user interface for an AR app. A software developer has to work on an application for several weeks before it can present the desired content. Each SDK comes with its own peculiarities, which the programmer has to study extensively beforehand. This results in an immense amount of work for each individual AR app.
With VISCOPIC Steps and Pins, you can reduce the amount of work needed to create work steps, because minimal programming effort is required and user-friendly interfaces make working with the systems easy. AR content can be created via drag and drop. The platforms make it possible to create individual learning content for the HoloLens, conventional PCs or tablets.