Apple, Google Enter the AR Fray

Technology News |
By eeNews Europe

One might think that Apple’s new ARKit and Google’s ARCore will rescue developers from AR oblivion, but a quick look around reveals a plethora of AR options already available for a range of platforms. Having heavy hitters like Apple and Google in the mix will not hurt, but the likes of Microsoft and Intel have been hitting the AR gong for quite a while already. Likewise, solutions like Scope AR’s WorkLink (see figure 1) have been available for a number of years.

Fig. 1: Scope AR’s WorkLink is a deployable AR
solution that works with tablets and AR glasses
to provide training and support.

Google, for its part, has brought Google Glass back as an industrial tool (see figure 2). The main difference between this effort and Google’s ARCore is the target audience. The Google Glass hardware is paired with software from Google solution providers that work directly with the customers.

Fig. 2: Google Glass is back, in the factory.
Customers work with Google solution providers
instead of getting hardware and software
directly from Google.

Apple’s ARKit and Google’s ARCore target iOS and Android developers, respectively, with smartphones as the target platform. Both look to take advantage of the massive smartphone population that is dominated by iOS and Android. Likewise, even midrange smartphones now come equipped with high-resolution cameras that pair well with AR applications. Of course, Apple and Google’s offerings highlight the importance of AR.

“Google and Apple’s AR technology is exciting and confirms what we’ve believed at Meta since before our inception: Augmented reality is the next paradigm of computing,” said David Oh, head of developer relations at Meta. “We are working closely with developers to design the most productive and intuitive AR applications, in line with Meta’s just-published Spatial Design Guidelines. Once these applications offer a seamless, natural and productive experience, we are excited to see how quickly and easily these high-quality apps will be ported to glasses and drive the entire industry forward.”

Apple’s ARKit (see figure 3) uses Visual Inertial Odometry (VIO) to track what the iPad or iPhone camera sees, allowing content to be mapped onto the screen presentation. It reads the devices’ built-in sensors through the Core Motion framework. Of course, this is the same type of thing that is done within any AR framework or software development kit (SDK); it is just a matter of nomenclature. At this point ARKit requires an Apple A9 or A10 processor, found in the latest versions of Apple’s hardware. The support arrives with iOS 11 and the Xcode 9 development tools.
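The ARKit setup just described can be sketched in a few lines of Swift. This is a minimal illustration rather than production code; it assumes an iOS 11 project with an ARSCNView (here named sceneView) wired up in the storyboard:

```swift
import ARKit
import SceneKit
import UIKit

class ARViewController: UIViewController {
    // Assumed: an ARSCNView placed in the storyboard.
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking drives the VIO described above:
        // camera frames plus Core Motion data, fused by ARKit.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```

Once the session is running, ARKit delivers tracked camera frames and detected anchors; the app only decides what virtual content to attach to them.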

Fig. 3: ARKit targets Apple’s iPad and iPhone
platforms, turning them into AR viewers.

Google’s ARCore targets Google Android (see figure 4). ARCore is a relative newcomer to the AR space, and some of its details are still sparse. Like ARKit, it supports third-party gaming engines like Unity and Unreal. This is key for generating the 3D images that are overlaid on the real-world view in AR mode.

ARKit and ARCore have an advantage over third-party solutions because of their integration with the operating system. Likewise, this support will come as part of the operating system package rather than as a separate application. This, in theory, opens the door to integration of multiple AR apps in the future.

Fig. 4: ARCore brings AR to Google Android.
The 3D objects on the table are computer-generated
and oriented to match the real table’s position.

AR SDKs typically hide much of the underlying complexity of sensor integration, scene analysis, and so on. Applications can then utilize this information and merge it with 2D and 3D content that is then displayed and manipulated by application users. This may sound simple but is actually quite complex, as details like lighting need to be taken into account.
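Lighting is a good concrete example of that hidden complexity. ARKit, for instance, estimates the ambient light in each camera frame so an app can match its virtual content to the real scene. A hedged Swift sketch, assuming an ARSCNView named sceneView with a running session (the function name is illustrative):

```swift
import ARKit
import SceneKit

// Read ARKit's per-frame ambient light estimate and apply it to a
// SceneKit light so virtual objects match the real scene's lighting.
func applyLightEstimate(to lightNode: SCNNode, in sceneView: ARSCNView) {
    guard let estimate = sceneView.session.currentFrame?.lightEstimate else {
        return  // no estimate available yet (e.g. session still starting)
    }
    // ambientIntensity is in lumens; ambientColorTemperature in kelvin.
    lightNode.light?.intensity = estimate.ambientIntensity
    lightNode.light?.temperature = estimate.ambientColorTemperature
}
```

The SDK does the scene analysis; the application simply copies the result onto its own light sources each frame.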

Likewise, many AR applications do not simply overlay information but require that this information be positioned based on camera images. This requires image recognition and scene analysis. This can require significant amounts of computing power, and even artificial intelligence and machine learning come into play, as these tools are used to recognize items in a scene and even the relationship between objects.

ARKit and ARCore work within their respective programming platforms and frameworks. This allows access to other app development support from buttons to gesture recognition, since the AR aspect will not be the only part of the app that a user will work with. These SDKs also work with other 3D tools that already exist, such as Apple’s SceneKit (which supports 3D animation) and SpriteKit (which handles 2D animation).
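With the SceneKit integration, for example, placing 3D content becomes a matter of responding to anchors through the ARSCNViewDelegate protocol. A minimal sketch, with the class name hypothetical and a plain box standing in for real content:

```swift
import ARKit
import SceneKit

// Hypothetical delegate; assign an instance to sceneView.delegate.
class PlaneContentDelegate: NSObject, ARSCNViewDelegate {
    // Called when ARKit detects a new anchor, e.g. a horizontal plane.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        // Attach a 10 cm SceneKit box at the detected plane; ARKit
        // keeps the node correctly positioned as tracking updates.
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1,
                         chamferRadius: 0)
        node.addChildNode(SCNNode(geometry: box))
    }
}
```

The same pattern scales from a test box to full animated 3D models built with SceneKit or imported from a game engine.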

I did mention that there were other solutions out there, including support from Microsoft and Intel, but here are a few more that have SDKs and plugins with cross-platform support, including iOS and Android: ARToolKit, EasyAR, Kudan, Maxst, Vuforia, Wikitude, and XZIMG. Many include additional features like face recognition.

All of these can be used to create AR games and applications, but they will require a significant amount of coding and understanding of the SDK to deliver an AR application. Developers will need to know whether they want to work at this level or work at customizing an existing solution like Scope AR’s WorkLink.

If anyone is wondering what all that computing power in a smartphone will be used for, they should look at the apps being generated using these toolkits and frameworks.


This article originally appeared in Electronic Design

