HOW DICOM BECAME THE STANDARD IN MEDICAL IMAGING TECHNOLOGY

Building applications for medical technology projects often requires extra attention from software developers. From adhering to security and privacy standards to learning new technologies and working with specialized file formats—developers coming in fresh must do a fair amount of due diligence to get acclimated in the space. Passing sensitive information between systems requires adherence to extra security measures—standards like HIPAA (Health Insurance Portability and Accountability Act) are designed to protect the privacy and security of health information.

When dealing with medical images and data, one international standard rises above the rest: DICOM. There are hundreds of thousands of medical imaging devices in use—and DICOM has emerged as the most widely used healthcare messaging standard and file format in the world. Billions of DICOM images are currently employed in clinical care.

What is DICOM?

DICOM stands for Digital Imaging and Communications in Medicine. It’s the international file format and communications standard for medical images and related information, implemented in nearly every radiology, cardiology imaging, and radiotherapy device—X-ray, CT, MRI, ultrasound, and more. It’s also finding increasing adoption in fields such as ophthalmology and dentistry.

DICOM groups information into data sets. Similar to how JPEGs often include embedded tags to identify or describe the image, DICOM files include a patient ID so that the image always retains the necessary identification and is never separated from it. The bulk of images are single frames, but the Pixel Data attribute can also contain multiple frames, allowing for storage of cine loops.
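
To make the data set and tag system concrete, here’s a minimal sketch using the open-source pydicom library (the file name is a placeholder):

    import pydicom

    # A DICOM file is a data set: tagged attributes plus the pixel data.
    ds = pydicom.dcmread("chest_ct.dcm")

    # Patient identification travels inside the file itself.
    print(ds.PatientID)   # e.g. "12345"
    print(ds.Modality)    # e.g. "CT"

    # Single-frame vs. multi-frame (cine loop) images.
    frames = int(getattr(ds, "NumberOfFrames", 1))
    print(f"{frames} frame(s) of {ds.Rows}x{ds.Columns} pixels")

    # The pixel data decodes to a NumPy array (requires numpy).
    pixels = ds.pixel_array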

The History of DICOM

DICOM was developed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) in the 1980s. The advent of CT scanning and other advanced imaging technologies made it evident that computing would play an increasingly major role in the future of clinical work. The ACR and NEMA sought a standard method for transferring images and associated information between devices from different vendors.

The first standard covering point-to-point image communication was created in 1985 and initially titled ACR-NEMA 300. A second version was subsequently released in 1988, finding increased adoption among vendors. The first large-scale deployment of ACR-NEMA 300 was in 1992 by the U.S. Army and Air Force. In 1993, the third iteration of the standard was released—and it was officially named DICOM. While the latest version of DICOM is still 3.0, it has received constant maintenance and updates since 1993.

Why Is DICOM Important?

DICOM enables the interoperability of systems used to manage workflows as well as produce, store, share, display, query, process, retrieve and print medical images. By conforming to a common standard, DICOM enables medical professionals to share data between thousands of different medical imaging devices across the world. Physicians use DICOM to access images and reports to diagnose and interpret information from any number of devices.

DICOM creates a universal format for physicians to access medical imaging files, enabling consistent, high-quality review wherever images are viewed. In addition, it ensures that patient and image-specific information is properly stored by employing an internal tag system.

DICOM has few disadvantages, though some pathologists perceive the header tags to be a major flaw. Some tags are optional, while others are mandatory, and the optional tags can lead to inconsistent or incorrect data. The headers also make DICOM files roughly 5% larger than their .tiff counterparts.

The Future

The future of DICOM remains bright. While no file format or communications standard is perfect, DICOM offers unparalleled cross-vendor interoperability. Any application developer working in the medical technology field would be wise to take the time to comprehensively understand it in order to optimize their projects.

Cloud-Powered Microdroid Expands Possibilities for Android App Developers

Android developers have a lot to look forward to in 2021, 2022, and beyond. Blockchain may decentralize how Android apps are developed, Flutter will see increased adoption for cross-platform development, and we expect big strides in AR and VR for the platform. Among the top trends in Android development, one potential innovation has caught the attention of savvy app developers: Microdroid.

Android developers and blogs were astir earlier this year when Google engineer Jiyong Park revealed via the Android Open Source Project that the company is working on a new, minimal Android-based Linux image called Microdroid.

Details about the project are scant, but it’s widely believed that Microdroid will essentially be a lighter version of the Android system image designed to function on virtual machines. Google is preparing for a world in which even smartphone operating systems require a stripped-down version that can run through the cloud.

Working from a truncated Linux, Microdroid will pull the system image from the device (tablet or phone), creating a simulated environment accessible from any remote device. It could enable a world in which users access Google Play and any Android app from any device.

What does this mean for developers?

Microdroid will open up new possibilities for Android apps in embedded and IoT spaces, which require automated management and a contained virtual machine that can mitigate security risks. Cloud gaming, cloud computing—even smartphones with all features stored in the cloud—are possible. Although we will have to wait and see what big plans Google has for Microdroid and how Android developers capitalize on it, at this juncture it looks like the shift to the cloud may entail major changes in how we interact with our devices. App developers are keen to keep their eyes and heads in the cloud.

Although no timeline for release has been revealed yet, we expect more on Microdroid with the announcement of Android 12.

Learn How Google Bests ARKit with Android’s ARCore

Previously, we covered the strengths of ARKit 4 in our blog Learn How Apple Tightened Their Hold on the AR Market with the Release of ARKit 4. This week, we will explore all that Android’s ARCore has to offer.

All signs point toward continued growth in the Augmented Reality space. As the latest generations of devices are equipped with enhanced hardware and camera features, applications employing AR have seen increasing adoption. While ARCore represents a breakthrough for the Android platform, it is not Google’s first endeavor into building an AR platform.

HISTORY OF GOOGLE AR

In summer 2014, Google launched their first AR platform, Project Tango.

Project Tango received consistent updates but never achieved mass adoption. Its functionality was limited to the three devices that could run it, including the Lenovo Phab 2 Pro, which ultimately suffered from numerous issues. While it was ahead of its time, it never received the level of hype ARKit did. In March 2018, Google announced that it would no longer support Project Tango and would continue its AR development with ARCore.

ARCORE

ARCore uses three main technologies to integrate virtual content with the world through the camera:

  • Motion tracking
  • Environmental understanding
  • Light estimation

It tracks the position of the device as it moves and gradually builds its own understanding of the real world. As of now, ARCore is available for development on a list of supported devices that Google maintains.

ARCORE VS. ARKIT

ARCore and ARKit have quite a bit in common. They are both compatible with Unity, and both feature a similar level of capability for sensing changes in lighting and accessing motion sensors. When it comes to mapping, ARCore is ahead of ARKit: it has access to a larger dataset, which boosts both the speed and quality of mapping achieved through the collection of 3D environmental information, while ARKit cannot store as much local environment data. ARCore also supports cross-platform development, meaning you can build ARCore applications for iOS devices, while ARKit is exclusively compatible with iOS devices.

The main cons of ARCore relative to ARKit have to do with adoption. In 2019, ARKit was on 650 million devices, while there were only 400 million ARCore-enabled devices. ARKit yields 4,000+ results on GitHub, while ARCore yields only 1,400+. And Apple’s tight hardware integration, particularly the TrueDepth camera, means AR applications will often run better on iOS devices regardless of which platform they are built with.

OVERALL

It is safe to say that ARCore is the more robust platform for AR development; however, ARKit is the most popular and most widely usable AR platform. We recommend spending time determining the exact level of usability you need, as well as the demographics of your target audience.

For supplementary reading, check out this great rundown of the best ARCore apps of 2021 from Tom’s Guide.

LiDAR: The Next Revolutionary Technology and What You Need to Know

In an era of rapid technological growth, certain technologies, such as artificial intelligence and the internet of things, have received mass adoption and become household names. One up-and-coming technology that has the potential to reach that level of adoption is LiDAR.

WHAT IS LIDAR?

LiDAR, or light detection and ranging, is a popular remote sensing method for measuring the exact distance of an object on the earth’s surface. Initially used in the 1960s, LiDAR has gradually received increasing adoption, particularly after the creation of GPS in the 1980s. It became a common technology for deriving precise geospatial measurements.

LiDAR requires three components: a scanner, a laser, and a GPS receiver. The laser emits pulsed light which travels to the ground and reflects off things like buildings and tree branches, while the GPS receiver tracks the sensor’s position so that an object’s distance from the earth’s surface can be calculated. The reflected light energy then returns to the LiDAR sensor, where the associated information is recorded. In combination with a photodetector and optics, this allows for ultra-precise distance detection and topographical data.
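
The core arithmetic is simple time-of-flight math: distance equals the speed of light times the round-trip time of the pulse, divided by two. A quick sketch in Python:

    C = 299_792_458.0  # speed of light in m/s

    def lidar_distance(round_trip_seconds: float) -> float:
        """Distance to the reflecting surface, in meters."""
        return C * round_trip_seconds / 2  # halve the round trip

    # A pulse returning after ~66.7 nanoseconds hit a surface ~10 m away.
    print(f"{lidar_distance(66.7e-9):.2f} m")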

WHY IS LIDAR IMPORTANT?

As we covered in our rundown of the iPhone 12, new iOS devices come equipped with a brand new LiDAR scanner. LiDAR is now in the hands of consumers who have Apple’s new generation of devices, enabling enhanced functionality and major opportunities for app developers. The proliferation of LiDAR points toward the technology finding mass adoption and household-name status.

There are two different types of LiDAR systems: terrestrial and airborne. Airborne LiDAR systems are installed on drones or helicopters to derive exact distance measurements, while terrestrial LiDAR systems are installed on moving vehicles to collect dense point data. Terrestrial systems are often used to monitor highways and have been employed by autonomous cars for years, while airborne systems are commonly used in environmental applications and for gathering topographical data.

With the future in mind, here are the top LiDAR trends to look out for moving forward:

SUPERCHARGING APPLE DEVICES

LiDAR enhances the camera on Apple devices significantly. Auto-focus is quicker and more effective on those devices. Moreover, it supercharges AR applications by greatly enhancing the speed and quality of a camera’s ability to track the location of people as well as place objects.

One of the major apps that received a functionality boost from LiDAR is Apple’s free Measure app, which can measure distance, dimensions, and even whether an object is level. The measurements determined by the app are significantly more accurate with the new LiDAR scanner, capable of replacing physical rulers, tape measures, and spirit levels.

Microsoft’s Seeing AI application is designed to help the visually impaired navigate their environment, and LiDAR takes it to the next level. In conjunction with artificial intelligence, LiDAR enables the application to read text, identify products and colors, and describe people, scenes, and objects that appear in the viewfinder.

BIG INVESTMENTS BY AUTOMOTIVE COMPANIES

LiDAR plays a major role in autonomous vehicles, which rely on terrestrial LiDAR systems to self-navigate. Reports suggest that in 2018 the automotive segment accounted for a 90 percent share of the LiDAR business. With self-driving cars inching toward mass adoption, expect to see major investments in LiDAR by automotive companies in 2021 and beyond.

As automotive companies look to make major investments in LiDAR, including Volkswagen’s recent investment in Aeva, many LiDAR companies are competing to create the go-to LiDAR system for automotive companies. Check out this great article by Wired detailing the potential for this bubble to burst.

LIDAR DRIVING ENVIRONMENTAL APPLICATIONS

Beyond commercial applications and the automotive industry, LiDAR is gradually seeing increased adoption for geoscience applications. The environmental segment of the LiDAR market is anticipated to grow at a CAGR of 32% through 2025. LiDAR is vital to geoscience applications for creating accurate and high-quality 3D data to study ecosystems of various wildlife species.

One of the main environmental uses of LiDAR is collecting topographic information on landscapes. Topographic LiDAR is expected to see a growth rate of over 25% over the coming years. These systems can see through forest canopy to produce accurate 3D models of landscapes, which are necessary to create contours, digital terrain models, digital surface models, and more.

CONCLUSION

In March 2020, after the first LiDAR scanner became available in the iPad Pro, The Verge put it perfectly when they said that the new LiDAR sensor is an AR hardware solution in search of software. While LiDAR has gradually found increasing usage, it is still a powerful new technology with burgeoning commercial usage. Enterprising app developers are looking for new ways to use it to empower consumers and businesses alike.

For supplementary viewing on the inner workings of the technology, check out this great introduction below, courtesy of Neon Science.

How AI Fuels a Game-Changing Technology in Geospatial 2.0

Geospatial technology describes a broad range of modern tools which enable the geographic mapping and analysis of Earth and human societies. Since the 19th century, geospatial technology has evolved as aerial photography and eventually satellite imaging revolutionized cartography and mapmaking.

Contemporary society now employs geospatial technology in a vast array of applications, from commercial satellite imaging, to GPS, to Geographic Information Systems (GIS) and Internet Mapping Technologies like Google Earth. The geospatial analytics market is currently valued between $35 and $40 billion with the market projected to hit $86 billion by 2023.

GEOSPATIAL 1.0 VS. 2.0


Geospatial technology has been in phase 1.0 for centuries; however, the boom of artificial intelligence and the IoT has made Geospatial 2.0 a reality. Geospatial 1.0 lets analysts view, analyze, and download geospatial data streams. Geospatial 2.0 takes it to the next level, harnessing artificial intelligence not only to collect data, but to process, model, and analyze it, and to make decisions based on the analysis.

When empowered by artificial intelligence, Geospatial 2.0 technology has the potential to revolutionize a number of verticals. Savvy application developers, and government agencies in particular, have rushed to the forefront of creating cutting-edge solutions with the technology.

PLATFORM AS A SERVICE (PaaS) SOLUTIONS

Effective geospatial 2.0 solutions require a deep vertical-specific knowledge of client needs, which has lagged behind the technical capabilities of the platform. The bulk of currently available geospatial 2.0 technologies are offered as “one-size-fits-all” Platform as a Service (PaaS) solutions. The challenge for PaaS providers is that they need to serve a wide collection of use cases, harmonizing data from multiple sensors together while enabling users to simply understand and address the many different insights which can be gleaned from the data.


In precision agriculture, FarmShots offers precise, frequent imagery to farmers along with meaningful analysis of field variability, damage extent, and the effects of applications through time.


In the disaster management field, Mayday offers a centralized artificial intelligence platform with real-time disaster information. Another geospatial 2.0 application Cloud to Street uses a mix of AI and satellites to track floods in near real-time, offering extremely valuable information to both insurance companies and municipalities.

SUSTAINABILITY

The growing complexity of environmental concerns has led to a number of applications of Geospatial 2.0 technology that help create a safer, more sustainable world. For example, geospatial technology can measure carbon sequestration, tree density, green cover, carbon credits, and tree age. It can provide vulnerability assessment surveys in disaster-prone areas. It can also help urban planners and governments plan and implement community mapping and equitable housing. Geospatial 2.0 can analyze a confluence of factors and create actionable insights for analyzing and honing our environmental practices.

As geospatial 1.0 models are upgraded to geospatial 2.0, expect to see more robust solutions incorporating AI-powered analytics. A survey of working professionals conducted by Geospatial World found that geospatial technology will likely make the biggest impact in the climate and environment field.

CONCLUSION

Geospatial 2.0 platforms are expensive to employ and require quite a bit of development. Still, the technology offers great potential to increase revenue and efficiency for a number of verticals. In addition, it may be a key technology to help cut down our carbon footprint and create a safer, more sustainable world.

Top Mobile Marketing Trends Driving Success in 2021

Mobile app marketing is an elusive and constantly evolving field. For mobile app developers, getting new users to install games is relatively cheap at just $1.47 per user, while retaining them is much more difficult: according to Liftoff, it costs on average $43.88 to prompt a customer to make an in-app purchase. An effective advertising strategy will make or break your app—and your bank. In 2019, in-game ads made up 17% of all mobile gaming revenue, and that number is expected to triple by 2024.

2020 was a year that saw drastic changes in lifestyle—mobile app users were no exception. What trends are driving app developers to refine their advertising and development tactics in 2021? Check out our rundown below.

Real Time Bidding


In-app bidding is an advanced advertising method that enables mobile publishers to sell their ad inventory in an automated auction. The technology is not new—it’s been around since 2015, when it was primarily used on desktop. However, over the past few years, both publishers and advertisers have benefited from in-app bidding, eschewing the traditional waterfall method.

In-app bidding enables publishers to sell their ad space at auction, with advertisers simultaneously bidding against one another. The dense competition yields a higher price (CPM) for publishers. For advertisers, bidding decreases fragmentation between demand sources, since they can bid on many at once. In the traditional waterfall method, ad mediation platforms prioritize ad networks they’ve worked with in the past before passing inventory on to premium ad networks. In-app bidding changes the game by enabling publishers to offer their inventory to auctions that include a much wider swath of advertisers beyond the traditional waterfall.
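
To make the contrast concrete, here’s an illustrative Python sketch; the networks, bids, and floor price are all made up:

    # CPM bids (USD) from three hypothetical demand sources.
    bids = {"NetworkA": 2.10, "NetworkB": 3.40, "NetworkC": 2.75}

    # Waterfall: call networks in a fixed priority order and accept the
    # first bid that clears the floor, even if a later one would pay more.
    def waterfall(priority, floor):
        for network in priority:
            if bids[network] >= floor:
                return network, bids[network]
        return None, 0.0

    # In-app bidding: all demand sources compete at once; the highest wins.
    def unified_auction():
        return max(bids.items(), key=lambda kv: kv[1])

    print(waterfall(["NetworkA", "NetworkB", "NetworkC"], floor=2.00))  # ('NetworkA', 2.1)
    print(unified_auction())                                            # ('NetworkB', 3.4)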

Bidding benefits all parties. App publishers see increased demand for ad inventory, advertisers access more inventory, and app users see more relevant ads. In 2021, many expect in-app bidding to gain more mainstream popularity. Check out this great rundown by AdExchanger for more information on this exciting new trend.

Rewarded Ads Still King


We have long championed rewarded ads on the Mystic Media blog. Rewarded ads offer in-game rewards to users who voluntarily choose to view an ad. Everyone wins—users get tangible rewards for their time, publishers get advertising revenue and advertisers get valuable impressions.

App usage data from 2021 only increases our enthusiasm for the format. 71% of mobile gamers desire the ability to choose whether or not to view an ad. 31% of gamers said rewarded video prompted them to browse for products within a month of seeing them. Leyi Games implemented rewarded video and improved player retention while bringing in an additional $1.5 million US.

Facebook’s 2020 report showed that gamers find rewarded ads to be the least disruptive ad format, leading to longer gameplay sessions and more opportunities for content discovery.

Playable Ads

Playable ads have emerged as one of the foremost employed advertising tactics for mobile games. Playable ads enable users to sample gameplay by interacting with the ad. After a snippet of gameplay, the ad transitions into a call to action to install the game.

The benefits are obvious. If the game is fun and absorbing to the viewer, it has a much better chance of getting installed. By putting the audience in the driver’s seat, playable ads drive increased retention rates and a larger number of high lifetime value (LTV) players.

Check out three examples of impactful playable ads compiled by Shuttlerock.

Short Ads, Big Appeal

As we are bombarded with more and more media on a daily basis, finding a way to deliver a concise message while cutting through the clutter can be exceptionally difficult. However, recent research from MAGNA, IPG Media Lab, and Snap Inc. shows it may be well worth it.

Studies show 6-second video ads drive nearly identical brand preference and purchase intent as 15-second ads. Whereas short-form ads were predominantly employed to grow awareness, marketers now understand that longer ads are perceived by the user as more intrusive, and that they can get just as much ROI out of shorter, less expensive content.

Check out the graph below, breaking down the efficacy of 6 second vs. 15 second ads via Business of Apps.


Conclusion

Mobile advertisers need to think big picture in terms of both their target customer and how they format their ads to best engage their audience. While the trends we outlined are currently in the zeitgeist, ultimately what matters most is engaging app users with effective content that delivers a valuable message without intruding on their experience on the app.

For supplementary reading on mobile marketing, check out our blog on the Top Mobile Ad Platforms You Need to Know for 2021.

AIoT: How the Intersection of AI and IoT Will Drive Innovation for Decades to Come

We have covered the evolution of the Internet of Things (IoT) and Artificial Intelligence (AI) over the years as they have gained prominence. IoT devices collect a massive amount of data: Cisco projects that by the end of 2021, IoT devices will collect over 800 zettabytes of data per year. Meanwhile, AI algorithms can parse through big data and teach themselves to analyze and identify patterns to make predictions. Both technologies enable a seemingly endless number of applications and have had a massive impact on many industry verticals.

What happens when you merge them? The result is aptly named the AIoT (Artificial Intelligence of Things) and it will take IoT devices to the next level.

WHAT IS AIOT?

AIoT is any system that integrates AI technologies with IoT infrastructure, enhancing efficiency, human-machine interactions, data management and analytics.

IoT enables devices to collect, store, and analyze big data, with device operators and field engineers typically controlling the devices. AI enhances IoT’s existing systems, enabling them to take the next step: determining and taking the appropriate action based on analysis of the data.

By embedding AI into infrastructure components, including programs, chipsets, and edge computing, AIoT enables intelligent, connected systems to learn, self-correct and self-diagnose potential issues.


One common example comes from the surveillance field. A surveillance camera can be used as an image sensor, sending every frame to an IoT system that analyzes the feed for certain objects. With AI on the device, the camera can analyze each frame itself and send only the frames in which it detects a specific object—significantly speeding up the process while reducing the amount of data generated, since irrelevant frames are excluded.
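
Here’s a minimal sketch of that edge-side filtering logic; the “model” is a hypothetical stand-in, with frames reduced to (id, labels) pairs instead of real pixels:

    # Hypothetical stand-in for an on-device object-detection model.
    def detect_objects(frame):
        _, labels = frame
        return labels

    def frames_to_upload(frames, target="person"):
        uploaded = []
        for frame in frames:
            # Only frames containing the target object leave the device,
            # cutting bandwidth and storage spent on irrelevant footage.
            if target in detect_objects(frame):
                uploaded.append(frame[0])
        return uploaded

    frames = [(1, {"car"}), (2, {"person", "car"}), (3, set()), (4, {"person"})]
    print(frames_to_upload(frames))  # [2, 4]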


While AIoT will no doubt find a variety of applications across industries, the three segments we expect to see the most impact on are wearables, smart cities, and retail.

WEARABLES


The global wearable device market is estimated to hit more than $87 billion by 2022. AI on wearable devices such as smartwatches opens up a number of potential applications, particularly in the healthtech sector.

Researchers in Taiwan have been studying the potential for an AIoT wearable system for electrocardiogram (ECG) analysis and cardiac disease detection. The system integrates a wearable IoT-based device with an AI platform: the wearable collects real-time health data and stores it in the cloud, where an AI algorithm detects disease with an average of 94% accuracy. Currently, the Apple Watch Series 4 and later include an ECG app which captures symptoms of irregular, rapid, or skipped heartbeats.
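
The researchers’ model isn’t public, but as a toy stand-in for the cloud-side detection step, a simple rhythm-variability check (using numpy and scipy) captures the flavor:

    import numpy as np
    from scipy.signal import find_peaks

    def irregular_rhythm(ecg: np.ndarray, fs: int, threshold: float = 0.15) -> bool:
        """Flag a recording whose beat-to-beat (RR) intervals vary too much."""
        # R-peaks are the tall spikes; enforce a 0.4 s refractory gap.
        peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 99),
                              distance=int(0.4 * fs))
        rr = np.diff(peaks) / fs  # seconds between consecutive beats
        return len(rr) > 1 and rr.std() / rr.mean() > threshold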

Although this device is still in development, we expect to see more coming out of the wearables segment as 5G enables more robust cloud-based processing power, taking the pressure off the devices themselves.

SMART CITIES

We’ve previously explored the future of smart cities in our blog series A Smarter World. With cities eager to invest in improving public safety, transport, and energy efficiency, AIoT will drive innovation in the smart city space.

There are a number of potential applications for AIoT in smart cities. Its ability to analyze data and act on it opens up possibilities for optimizing the energy consumption of IoT systems: smart streetlights and energy grids can analyze usage data to reduce wasted energy without inconveniencing citizens.
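
As an illustrative sketch (hypothetical devices, not any real city’s API), the decision logic for a single smart streetlight might look like this:

    def target_brightness(motion_detected: bool, ambient_lux: float) -> int:
        """Brightness level 0-100 for one smart streetlight."""
        if ambient_lux > 50:      # daylight: lamp off
            return 0
        if motion_detected:       # pedestrians or vehicles nearby: full power
            return 100
        return 30                 # quiet night: dimmed, not dark

    print(target_brightness(motion_detected=False, ambient_lux=2.0))  # 30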

Some smart cities have already adopted AIoT applications in the transportation space. New Delhi, which suffers some of the worst traffic in the world, features an Intelligent Transport Management System (ITMS) that makes real-time dynamic decisions on traffic flows to keep traffic moving.

RETAIL

AIoT has the potential to enhance the retail shopping experience with digital augmentation. The same smart cameras we referenced earlier are being used to detect shoplifters. Walmart recently confirmed it has installed smart security cameras in over 1,000 stores.


One of the big innovations for AIoT involves smart shopping carts. Grocery stores in both Canada and the United States are experimenting with high-tech shopping carts, including one from Caper which uses image recognition and built-in sensors to determine what a person puts into the shopping cart.

The potential for smart shopping carts is vast—these carts will be able to inform customers of deals and promotions, recommend products based on their buying decisions, let them view an itemized list of their current purchases, and incorporate indoor navigation to lead them to their desired items.

A smart shopping cart company called IMAGR recently raised $14 million in a pre-Series A funding round, pointing toward a bright future for smart shopping carts.

CONCLUSION

AIoT represents the intersection of AI, IoT, 5G, and big data. 5G enables the cloud processing power for IoT devices to employ AI algorithms to analyze big data to determine and enact action items. These technologies are all relatively young, and as they continue to grow, they will empower innovators to build a smarter future for our world.

Learn More About Triggering Augmented Reality Experiences with AR Markers

We expect a continued increase in the utilization of AR in 2021. The iPhone 12 contains LiDAR technology, which enables the use of ARKit 4, greatly enhancing the possibilities for developers. When creating an AR application, developers must consider a variety of methods for triggering the experience and answer several questions before determining what approach will best facilitate the creation of a digital world for their users. For example, what content will be displayed? Where will this content be placed, and in what context will the user see it?

Markerless AR can best be used when the user needs to control the placement of the AR object. For example, the IKEA Place app allows the user to place furniture in their home to see how it fits.


Location-based AR roots an AR experience to a physical space in the world, as we explored previously in our blog Learn How Apple Tightened Their Hold on the AR Market with the Release of ARKit 4. ARKit 4 introduces Location Anchors, which enable developers to set virtual content at specific geographic coordinates (latitude, longitude, and altitude). To provide more accuracy than location alone, location anchors also use the device’s camera to capture landmarks and match them with a localization map downloaded from Apple Maps. Location anchors greatly enhance the potential for location-based AR; however, the possibilities are limited to the 50-plus cities in which Apple has enabled them.

Marker-based AR remains the most popular method among app developers. When an application needs to know precisely what the user is looking at, accept no substitute. In marker-based AR, 3D AR models are generated using a specific marker, which triggers the display of virtual information. There are a variety of AR markers that can trigger this information, each with its own pros and cons. Below, please find our rundown of the most popular types of AR markers.

FRAMEMARKERS


The most popular AR marker is a framemarker, or border marker. It’s usually a 2D image printed on a piece of paper with a prominent border. During the tracking phase, the device will search for the exterior border in order to determine the real marker within.

Framemarkers are similar to QR codes in that both are codes printed on images and scanned with handheld devices; however, framemarkers trigger AR experiences, whereas QR codes redirect the user to a web page. Framemarkers are a straightforward and effective solution.
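
ArUco markers, supported by OpenCV’s open-source aruco module, are a widely used take on the framemarker idea. A minimal detection sketch, assuming opencv-contrib-python 4.7+ and a placeholder image path:

    import cv2

    image = cv2.imread("scene.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # ArUco markers are square framemarkers with a prominent black border.
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        # Each id anchors a piece of virtual content; the corner coordinates
        # give the pose needed to overlay it on the marker.
        print("Found markers:", ids.ravel().tolist())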


Framemarkers are particularly popular in advertising applications. Absolut Vodka’s Absolut Truth application enabled users to scan a framemarker on the label of their bottle to generate a slew of additional information, including recipes and ads.

GameDevDad on YouTube offers a full tutorial on how to create framemarkers from scratch using the Vuforia Augmented Reality SDK below.


NFT MARKERS


NFT, or Natural Feature Tracking, enables cameras to trigger an AR experience without borders. The framework takes a reference image and distills its distinctive visual features into a trackable map; when the camera recognizes those features in the live feed, it anchors the generated AR content to them.

The quality and stability of these markers can vary based on the framework employed. For this reason, they are less frequently used than border markers, but they function as a more visually subtle alternative. A scavenger hunt or a game employing AR might hide key information in NFT markers.
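
The feature-extraction step can be sketched with OpenCV’s ORB detector; the marker image path is a placeholder, and production AR frameworks use their own, more robust pipelines:

    import cv2

    marker = cv2.imread("wine_label.jpg", cv2.IMREAD_GRAYSCALE)

    # ORB finds distinctive corners and computes a descriptor for each:
    # the kind of "natural features" an NFT marker is distilled into.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(marker, None)
    print(len(keypoints), "trackable features")

    # At runtime, matching these descriptors against features found in the
    # live camera frame (e.g. with cv2.BFMatcher) locates the borderless marker.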

Treasury Wine Estates’ Living Wine Labels app tracks the natural features of wine-bottle labels to create an AR experience that tells the story of the company’s products.

OBJECT MARKERS


A toy car, for example, can be converted into an object data file using Vuforia Object Scanner.


Advancements in technology have enabled mobile devices to tackle the problem of SLAM (simultaneous localization and mapping). The device camera can extract information in real time and use it to place virtual objects in the environment. In some frameworks, physical objects can become 3D markers: Vuforia Object Scanner is one such framework, creating object data files that can be used as targets in applications. Virtual Reality Pop offers a great rundown on the best object recognition frameworks for AR.

RFID TAGS

Although RFID tags are primarily used for short-distance wireless communication and contact-free payment, they can also be used to trigger location-based virtual information.

While RFID tags are not widely employed as AR markers, several researchers have written articles about the potential uses of RFID with AR. Researchers at the ARATLab at the National University of Singapore have combined augmented reality and RFID for the assembly of objects with embedded RFID tags, showing people how to properly assemble the parts, as demonstrated in the video below.

SPEECH MARKERS

Speech can also serve as a non-visual AR marker. The most common application would be AR glasses or a smart windshield that displays information on screen when the user requests it via voice command.

CONCLUSION

Think like a user—it’s a staple mantra for app developers, and no less relevant in crafting AR experiences. Each AR trigger offers unique pros and cons. We hope this rundown has helped you decide which is best suited to your application.

In our next article, we will explore the innovation at the heart of AIoT, the intersection of AI and the Internet of Things.

Learn How Apple Tightened Their Hold on the AR Market with the Release of ARKit 4

Since the explosive launch of Pokemon Go, AR technologies have vastly improved. Our review of the iPhone 12 concluded that as Apple continues to optimize its hardware, AR will become more prominent in both applications and marketing.

At the 2020 WWDC in June, Apple announced ARKit 4, their latest iteration of the famed augmented reality platform. ARKit 4 features some vast improvements that help Apple tighten their hold on the AR market.

LOCATION ANCHORS

ARKit 4 introduces location anchors, which allow developers to set virtual content at specific geographic coordinates (latitude, longitude, and altitude). When rebuilding the data backend for Apple Maps, Apple collected camera and 3D LiDAR data from city streets across the globe. ARKit downloads the virtual map surrounding your device from the cloud and matches it with the device’s camera feed to determine your location. The kicker: all of this processing happens via machine learning on the device, so your camera feed never leaves it.


Devices with an A12 chip or later can run geotracking; however, location anchors require Apple to have mapped the area previously. As of now, they are supported in over 50 cities in the U.S. As the availability of compatible devices increases and Apple continues to expand its mapping project, location anchors will find increased usage.

DEPTH API

ARKit’s new Depth API harnesses the LiDAR scanner available on iPad Pro and iPhone 12 devices to introduce advanced scene understanding and enhanced pixel depth information in AR applications. When combined with 3D mesh data derived from Scene Geometry, which creates a 3D matrix of readings of the environment, the Depth API vastly improves virtual object occlusion features. The result is the instant placement of digital objects and seamless blending with their physical surroundings.

FACE TRACKING


Face tracking has found an exceptional application in Memojis, which enable fun AR experiences on devices with a TrueDepth camera. ARKit 4 expands face tracking support to devices without a TrueDepth camera, provided they have at least an A12 chip. Devices with a TrueDepth camera can now track up to three faces at once, opening up many fun potential applications for Memojis.

VIDEO MATERIALS WITH REALITYKIT


ARKit 4 also arrives alongside an update to RealityKit that adds support for applying video textures and materials to AR experiences. For example, developers will be able to place a virtual television on a wall, complete with realistic attributes including light emission, texture roughness, and even audio. Consequently, AR developers can create even more immersive and realistic experiences for their users.

CONCLUSION

Apple and Google are competing for supremacy when it comes to AR development. While the two companies’ goals and research overlap, Apple has a major leg up on Google in its massive base of high-end devices and its ability to equip them with the necessary sensors, like the TrueDepth camera and LiDAR scanner.

ARKit has been the biggest AR development platform since it hit the market in 2017. ARKit 4 provides the technical capabilities and tools for innovators and creative thinkers to build a new world of virtual integration.

How AI Revolutionizes Music Streaming

In 2020, worldwide music streaming revenue hit 11.4 billion dollars, growing 2,800% over the course of a decade. Three hundred forty-one million paid online streaming subscribers get their music from top services like Apple Music, Spotify, and Tidal. The competition for listeners is fierce, and each company looks to leverage every advantage it can in pursuit of higher market share.

Like all major tech conglomerates, music streaming services collect an exceptional amount of user data through their platforms and are creating elaborate AI algorithms designed to improve user experience on a number of levels. Spotify has emerged as the largest on-demand music service active today and bolstered its success through the innovative use of AI.

Here are the top ways in which AI has changed music streaming:

COLLABORATIVE FILTERING

AI has the ability to sift through a plenitude of implicit consumer data, including:

  • Song preferences
  • Keyword preferences
  • Playlist data
  • Geographic location of listeners
  • Most used devices

AI algorithms can analyze user trends and identify users with similar tastes. For example, if AI deduces that User 1 and User 2 have similar tastes, then it can infer that songs User 1 has liked will also be enjoyed by User 2. Spotify’s algorithms will leverage this information to provide recommendations for User 2 based on what User 1 likes, but User 2 has yet to hear.
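
Here’s a toy sketch of that logic with made-up implicit scores (think normalized play counts):

    import numpy as np

    songs = ["song_a", "song_b", "song_c", "song_d"]
    user1 = np.array([1.0, 0.9, 0.0, 0.8])  # User 1 likes a, b, and d
    user2 = np.array([0.9, 1.0, 0.0, 0.0])  # User 2 likes a, b; hasn't heard d

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # High similarity implies shared taste...
    if cosine(user1, user2) > 0.8:
        # ...so recommend User 1's favorites that User 2 hasn't played yet.
        recs = [s for s, r1, r2 in zip(songs, user1, user2) if r1 > 0.5 and r2 == 0]
        print(recs)  # ['song_d']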


The result is not only improved recommendations, but greater exposure for artists that otherwise may not have been organically found by User 2.

NATURAL LANGUAGE PROCESSING

Natural Language Processing is a burgeoning field in AI. Previously in our blog, we covered GPT-3, the latest Natural Language Processing (NLP) technology developed by OpenAI. Music streaming services are well-versed in the technology and leverage it in a variety of ways to enhance the user experience.


Algorithms scan a track’s metadata, in addition to blog posts, discussions, and news articles about artists or songs on the internet to determine connections. When artists/songs are mentioned alongside artists/songs the user likes, algorithms make connections that fuel future recommendations.
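
Here’s a toy sketch of the co-occurrence idea; the corpus and artist list are made up, and real systems use full NLP pipelines rather than simple substring matching:

    from collections import Counter
    from itertools import combinations

    docs = [
        "new album review: Anderson .Paak and Bruno Mars shine together",
        "playlist picks: Bruno Mars, Silk Sonic, Anderson .Paak",
        "interview with Thom Yorke on Radiohead's legacy",
    ]
    artists = ["Anderson .Paak", "Bruno Mars", "Silk Sonic", "Radiohead", "Thom Yorke"]

    cooccur = Counter()
    for doc in docs:
        present = [a for a in artists if a.lower() in doc.lower()]
        for pair in combinations(sorted(present), 2):
            cooccur[pair] += 1  # mentioned together: a weak similarity signal

    print(cooccur.most_common(1))  # [(('Anderson .Paak', 'Bruno Mars'), 2)]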

GPT-3 is not perfect; its ability to track sentiments lacks nuance. As Sonos Radio general manager Ryan Taylor recently said to Fortune Magazine: “The truth is music is entirely subjective… There’s a reason why you listen to Anderson .Paak instead of a song that sounds exactly like Anderson .Paak.”

As NLP technology evolves and algorithms extend their grasp of the nuances of language, so will the recommendations provided to you by music streaming services.

AUDIO MODELS


AI can study audio models to categorize songs based purely on their waveforms. This scientific, signal-driven approach to analyzing creative work enables streaming services to categorize songs and create recommendations regardless of the amount of coverage a song or artist has received.
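
Here’s a minimal sketch using the open-source librosa library (the audio path is a placeholder):

    import librosa
    import numpy as np

    y, sr = librosa.load("track.mp3", mono=True)

    # Tempo and timbre summaries computed purely from the signal,
    # with no metadata or press coverage required.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

    print(f"~{float(tempo):.0f} BPM")
    print("timbre fingerprint:", np.round(mfcc[:4], 1))

    # Comparing such fingerprints across tracks lets a service group
    # similar-sounding songs regardless of their popularity.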

BLOCKCHAIN

Paying artist royalties on streaming services poses its own challenges and shortcomings: royalties are calculated from trillions of data points. Luckily, blockchain is helping to facilitate a smoother artist payment process, making it not only more transparent but also more efficient. Spotify recently acquired the blockchain company Mediachain Labs, a move that many pundits say will change royalty payments in streaming forever.

MORE TO COME

While AI has vastly improved streaming services’ ability to keep their subscribers engaged, a long road of evolution lies ahead before it reaches a deep understanding of what motivates our musical tastes and interests. Today’s NLP capabilities, provided by the likes of GPT-3, will probably seem archaic within three years as the technology is pushed further. One thing is clear: as streaming companies amass decades’ worth of user data, they won’t hesitate to leverage it in their pursuit of market dominance.