GPT-3 Takes AI to the Next Level

“I am not a human. I am a robot. A thinking robot… I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!” – GPT-3

The excerpt above is from a recently published article in The Guardian written entirely by artificial intelligence, powered by GPT-3: a powerful new language generator. Although OpenAI has yet to make it publicly available, GPT-3 has been making waves in the AI world.

WHAT IS GPT-3?

Created by OpenAI, a research firm co-founded by Elon Musk, GPT-3 stands for Generative Pre-trained Transformer 3—it is the biggest artificial neural network in history. GPT-3 is a language prediction model that uses an algorithmic structure to take one piece of language as input and transform it into what it thinks will be the most useful linguistic output for the user.

For example, for The Guardian article, GPT-3 generated the text given an introduction and a simple prompt: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” Given that input, it created eight separate responses, each with unique and interesting arguments. These responses were compiled by a human editor into a single, cohesive, compelling 1,000-word article.

WHAT MAKES GPT-3 SPECIAL?

When GPT-3 receives text input, it draws on patterns learned from an enormous corpus of internet text to produce potential answers. GPT-3 is an unsupervised learning system: its training data did not include any labels for what is right or wrong. It determines the probability that its output will be what the user needs based on the training texts themselves.

When it produces the correct output, the “weights” in the algorithmic process that provided the correct answer are strengthened. These weights allow GPT-3 to learn which methods are most likely to come up with the correct response in the future. Although language prediction models have been around for years, GPT-3 can hold 175 billion weights (parameters) in its memory, roughly ten times more than its nearest rival. OpenAI invested $4.6 million in the computing time necessary to create and hone the algorithmic structure which feeds its decisions.
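GPT-3’s network is vastly larger, but the core mechanic of next-word probability can be sketched with a toy bigram model. Everything below is illustrative, not OpenAI’s implementation:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for GPT-3's internet-scale training text.
corpus = "the robot reads the internet and the robot writes".split()

# Count how often each word follows another -- these counts play the
# role of the model's learned "weights".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each candidate next word, given the previous word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

probs = next_word_probs("the")  # "robot" is twice as likely as "internet"
```

A real language model replaces these raw counts with billions of learned parameters, but the output is the same kind of object: a probability distribution over what comes next.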

WHERE DID IT COME FROM?

GPT-3 is the product of rapid innovation in the field of language models. Advances in the unsupervised learning field we previously covered contributed heavily to the evolution of language models. Additionally, AI scientist Yoshua Bengio and his team at Montreal’s Mila Institute for AI made a major advancement in 2015 when they introduced “attention”. The team realized that language models compressed English-language sentences and then decompressed them using a vector of a fixed length. This rigid approach created a bottleneck, so the team devised a way for the neural net to flexibly weight words rather than squeezing them through a single fixed-size vector, and termed the technique “attention”.
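The idea behind attention can be shown in a few lines: instead of one fixed-length summary, each output is a weighted average of the inputs, where the weights measure relevance to a query. This is a minimal dot-product sketch, not the Transformer’s full multi-head machinery:

```python
import math

def attention(query, keys, values):
    """Minimal dot-product attention: score each key against the query,
    softmax the scores into weights, and return the weighted average of
    the values."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    exps = [math.exp(s) for s in scores]       # softmax: positive weights...
    total = sum(exps)
    weights = [e / total for e in exps]        # ...that sum to 1
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# One query attending over two key/value pairs; it matches the first key
# better, so the output leans toward the first value.
out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```

Because the weights are computed per query, the model can focus on different words for different outputs — the flexibility that the fixed-length vector lacked.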

Attention was a breakthrough that years later enabled Google scientists to create a language model program called the “Transformer,” which was the basis of GPT-1, the first iteration of GPT.

WHAT CAN IT DO?

OpenAI has yet to make GPT-3 publicly available, so use cases are limited to certain developers with access through an API. In one demo, GPT-3 created an app similar to Instagram using a plug-in for the software tool Figma.

Latitude, a game design company, uses GPT-3 to improve its text-based adventure game, AI Dungeon. The game includes a complex decision tree to script different paths through the game. Latitude uses GPT-3 to dynamically change the state of gameplay based on the user’s typed actions.

LIMITATIONS

The hype behind GPT-3 has come with some backlash. Even OpenAI co-founder Sam Altman tried to temper the enthusiasm on Twitter: “The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!), but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”

Some developers have pointed out that because it synthesizes text it found on the internet, GPT-3 can reproduce the biases present in that text, as referenced in the tweet below:

https://twitter.com/an_open_mind/status/1284487376312709120?s=20

WHAT’S NEXT?

While OpenAI has not made GPT-3 public, it plans to turn the tool into a commercial product later in the year, with a paid subscription to the AI via the cloud. As language models continue to evolve, the barrier to entry for businesses looking to leverage AI will fall. We are sure to learn more about how GPT-3 can fuel innovation when it becomes more widely available later this year!

Harness AI with the Top Machine Learning Frameworks of 2021

According to Gartner, machine learning and AI will create $2.29 trillion of business value by 2021. Artificial intelligence is the way of the future, but many businesses do not have the resources to create and employ AI from scratch. Luckily, machine learning frameworks make the implementation of AI more accessible, enabling businesses to take their enterprises to the next level.

What Are Machine Learning Frameworks?

Machine learning frameworks are open source interfaces, libraries, and tools that exist to lay the foundation for using AI. They ease the process of acquiring data, training models, serving predictions, and refining future results. Machine learning frameworks enable enterprises to build machine learning models without requiring an in-depth understanding of the underlying algorithms. They enable businesses that lack the resources to build AI from scratch to wield it to enhance their operations.

For example, Airbnb uses TensorFlow, the most popular machine learning framework, to classify images and detect objects at scale, enhancing guests’ ability to see their destination. Twitter uses it to create algorithms that rank tweets.

Here is a rundown of today’s top ML Frameworks:

TensorFlow

TensorFlow is an end-to-end open source platform for machine learning built by the Google Brain team. TensorFlow offers a comprehensive, flexible ecosystem of tools, libraries, and community resources, all built toward equipping researchers and developers with the tools necessary to build and deploy ML powered applications.

TensorFlow employs Python to provide a front-end API while executing applications in C++. Developers can create dataflow graphs which describe how data moves through a graph, or a series of processing nodes. Each node in the graph is a mathematical operation; the connection between nodes is a multidimensional data array, or tensor.
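The declare-then-run dataflow idea described above can be sketched in plain Python. The class and op names here are invented for illustration; TensorFlow’s actual API is far richer:

```python
class Node:
    """One node in a toy dataflow graph: an operation plus its input nodes."""
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

    def run(self, feed):
        """Execute the subgraph rooted here; 'feed' maps placeholder
        names to concrete 1-D tensors (plain lists)."""
        if self.op in feed:                        # placeholder: data flows in
            return feed[self.op]
        args = [node.run(feed) for node in self.inputs]
        if self.op == "mul":                       # element-wise multiply
            return [a * b for a, b in zip(*args)]
        if self.op == "add":                       # element-wise add
            return [a + b for a, b in zip(*args)]
        raise ValueError(f"unknown op: {self.op}")

# Declare the graph first -- nothing executes yet.
x, w = Node("x"), Node("w")
y = Node("add", [Node("mul", [x, w]), x])

# Then run it with concrete data.
result = y.run({"x": [1.0, 2.0], "w": [3.0, 4.0]})  # [4.0, 10.0]
```

Separating graph construction from execution is what lets a framework like TensorFlow optimize, parallelize, and deploy the same graph across CPUs, GPUs, and TPUs.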

While TensorFlow is the ML framework of choice in industry, researchers are increasingly leaving the platform to develop for PyTorch.

PyTorch

PyTorch is a library for Python programs that facilitates deep learning. Like TensorFlow, PyTorch is Python-based. Think of it as Facebook’s answer to Google’s TensorFlow—it was developed primarily by Facebook’s AI Research lab. It’s flexible, lightweight, and built for high-end efficiency.

PyTorch features outstanding community documentation and quick, easy editing capabilities. PyTorch facilitates deep learning projects with an emphasis on flexibility.

Studies show that it’s gaining traction, particularly in the ML research space due to its simplicity, comparable speed, and superior API. PyTorch integrates easily with the rest of the Python ecosystem, whereas in TensorFlow, debugging the model is much trickier.

Microsoft Cognitive Toolkit (CNTK)

Microsoft’s ML framework is designed to handle deep learning, but can also be used to process large amounts of unstructured data for machine learning models. It’s particularly useful for recurrent neural networks. For developers inching toward deep learning, CNTK functions as a solid bridge.

CNTK is customizable and supports multi-machine back ends, but ultimately it’s a deep learning framework that’s backwards compatible with machine learning. It is neither as easy to learn nor as easy to deploy as TensorFlow and PyTorch, but it may be the right choice for more ambitious businesses looking to leverage deep learning.

IBM Watson

IBM Watson began as a follow-up project to IBM’s Deep Blue, an AI program that defeated world chess champion Garry Kasparov. It is a machine learning system trained primarily by data rather than rules. IBM Watson’s structure can be compared to a system of organs. It consists of many small, functional parts that specialize in solving specific sub-problems.

The natural language processing engine analyzes input by parsing it into words, isolating the subject, and determining an interpretation. From there it sifts through a myriad of structured and unstructured data for potential answers. It analyzes them to elevate strong options and eliminate weaker ones, then computes a confidence score for each answer based on the supporting evidence. Research shows it’s correct 71% of the time.
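Watson’s pipeline is far more sophisticated, but the final evidence-to-confidence step can be caricatured in a few lines. The candidate answers and evidence counts below are made up for illustration:

```python
def score_candidates(candidates):
    """Toy evidence-based scoring: each candidate answer's confidence is
    its share of the total supporting evidence, and candidates are
    returned strongest-first."""
    total = sum(evidence for _, evidence in candidates)
    return sorted(
        ((answer, evidence / total) for answer, evidence in candidates),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Three candidate answers with (hypothetical) supporting-evidence counts.
ranked = score_candidates([("Paris", 8), ("Lyon", 1), ("Marseille", 1)])
best, confidence = ranked[0]  # ("Paris", 0.8)
```

The real system scores each candidate along many evidence dimensions and combines them with a trained model, but the output is the same shape: answers ranked by confidence.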

IBM Watson is one of the more powerful ML systems on the market and finds usage in large enterprises, whereas TensorFlow and PyTorch are more frequently used by small and medium-sized businesses.

What’s Right for Your Business?

Businesses looking to capitalize on artificial intelligence do not have to start from scratch. Each of the above ML frameworks offers its own pros and cons, but all of them have the capacity to enhance workflow and inform beneficial business decisions. Selecting the right ML framework enables businesses to put their time into what’s most important: innovation.

How Artificial Intuition Will Pave the Way for the Future of AI

Artificial intelligence is one of the most powerful technologies in history, and a sector defined by rapid growth. Numerous major advances in AI have occurred over the past decade, but for AI to be truly intelligent, it must learn to think on its own when faced with unfamiliar situations and to predict both positive and negative potential outcomes.

One of the major gifts of human consciousness is intuition. Intuition differs from other cognitive processes because it has more to do with a gut feeling than intellectually driven decision-making. AI researchers around the globe have long thought that artificial intuition was impossible, but now major tech titans like Google, Amazon, and IBM are all working to develop solutions and incorporate it into their operational flow.

WHAT IS ARTIFICIAL INTUITION?

Descriptive analytics inform the user of what happened, while diagnostic analytics address why it happened. Artificial intuition can be described as “predictive analytics,” an attempt to determine what may happen in the future based on what occurred in the past.

For example, Ronald Coifman, Phillips Professor of Mathematics at Yale University, and an innovator in the AI space, used artificial intuition to analyze millions of bank accounts in different countries to identify $1 billion worth of nominal money transfers that funded a well-known terrorist group.

Coifman deemed “computational intuition” the more accurate term for artificial intuition, since it analyzes relationships in data instead of merely analyzing data values. His team creates algorithms which identify previously undetected patterns, such as cybercrime. Artificial intuition has made waves in the financial services sector where global banks are increasingly using it to detect sophisticated financial cybercrime schemes, including: money laundering, fraud, and ATM hacking.

ALPHAGO

One of the major insights into artificial intuition was born out of Google’s DeepMind research, in which an AI program called AlphaGo became a master of Go, an ancient Chinese board game that requires intuitive thinking as part of its strategy. AlphaGo evolved to beat the best human players in the world, including 18-time world champion Lee Se-dol. Researchers then created a successor called AlphaGo Zero, which developed its own strategy based on intuitive thinking. Within three days of training, AlphaGo Zero defeated the version of AlphaGo that beat Lee Se-dol, 100 games to nil. After 40 days, it won 90% of matches against the strongest previous version, making it arguably the best Go player in history at the time.

AlphaGo Zero represents a major advancement in the field of reinforcement learning, or “self learning,” a subset of deep learning, which is in turn a subset of machine learning. Reinforcement learning uses advanced neural networks to turn data into decisions. AlphaGo Zero achieved “self-play reinforcement learning,” playing Go against itself millions of times without human intervention and building a neural network of “artificial knowledge” reinforced by the outcomes of its own sequences of actions. AlphaGo Zero created knowledge itself from a blank slate, without the constraints of human expertise.
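AlphaGo Zero’s scale is enormous, but self-play reinforcement can be demonstrated on a toy game. The sketch below plays tiny Nim (take 1 or 2 stones; whoever takes the last stone wins) against itself with random moves and tallies which moves end up on the winning side — a bare-bones stand-in for the value estimates a real system would learn:

```python
import random

def self_play_nim(games=5000, stones=5, seed=0):
    """Tally win statistics for every (stones_left, action) pair over
    many random self-play games of tiny Nim."""
    rng = random.Random(seed)
    stats = {}                                  # (state, action) -> [wins, plays]
    for _ in range(games):
        state, history = stones, []
        player = 0
        while state > 0:
            action = rng.choice([1, 2]) if state >= 2 else 1
            history.append((player, state, action))
            state -= action
            last_mover, player = player, 1 - player
        for mover, s, a in history:             # reinforce the winner's moves
            rec = stats.setdefault((s, a), [0, 0])
            rec[0] += int(mover == last_mover)  # last mover took the final stone
            rec[1] += 1
    return stats

stats = self_play_nim()
win_rate = lambda s, a: stats[(s, a)][0] / stats[(s, a)][1]
# With 5 stones left, taking 2 (leaving the losing position 3) wins more often.
```

After enough games the tallies reveal the winning strategy — always leave a multiple of 3 — with no human knowledge of Nim built in, which is the essence of learning from self-play.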

ENHANCING RATHER THAN REPLACING HUMAN INTUITION

The goal of artificial intuition is not to replace human instinct but to serve as an additional tool that improves performance. Rather than giving machines a mind of their own, these techniques enable them to acquire knowledge without proof or conscious reasoning and to flag opportunities or potential disasters for the seasoned analysts who will ultimately make the decisions.

Many potential applications remain in development for Artificial Intuition. We expect to see autonomous cars harness it, processing vast amounts of data and coming to intuitive decisions designed to keep humans safe. Although its ultimate effects remain to be seen, many researchers anticipate Artificial Intuition will be the future of AI.

Five Mobile Ad Platforms You Need to Know in 2021

For most mobile app developers, the majority of revenue comes from advertising. We have written in the past about the prevalence of the Freemium model and what tactics maximize both the retention and profits of mobile games. Another major decision every app developer faces is what mobile advertising platform to choose.

Mobile advertising represents 72% of all U.S. digital ad spending. Publishers have a variety of ad platforms to choose from, each with individual pros and cons. Here are the top mobile advertising platforms to consider for 2021:

Google AdMob

Acquired by Google in 2010, Google AdMob is the most popular mobile advertising network. AdMob integrates high-performing ad formats, native ads, banner ads, video, and interstitial ads into mobile apps.

AdMob serves over 40 billion mobile ads per month and is the biggest player in the mobile ad space. Some users criticize the platform for offering relatively low revenues; however, it also offers robust analytics to help publishers glean insights into ad performance.

Facebook Ads

Facebook’s Audience Network leverages the social media platform’s massive userbase toward offering publishers an ad network designed for user engagement and growth. Like AdMob, Facebook Ads offers a variety of ad types, including native, interstitial, banner, in-stream video, and rewarded video ads.

With over 1 billion users, Facebook has utilized its massive resources to build out its ad network. Facebook Ads provides state-of-the-art tools, support, and valuable insights to grow ad revenue, and sets itself apart by offering a highly focused level of targeting. Because Facebook collects a vast amount of data from its users, Facebook Ads enables app publishers to target based on a variety of factors (interests, behaviors, demographics, and more) with a level of granularity deeper than any other platform.

InMobi

InMobi offers a different way of targeting users, which it has coined “appographic targeting”. Appographic targeting analyzes the user’s existing and previous applications rather than traditional demographics. If a user is known to book flights via an app, then related ads, such as ads for hotels and tourism, will be shown.

The InMobi Mediation platform enables publishers to maximize their ad earnings with unified auction solutions and header bidding for mobile apps.

TapJoy

TapJoy has received increased consideration from mobile game developers since the platform integrates with in-app purchases. Studies show that mobile players will engage with advertisements if offered a reward. TapJoy has capitalized on this by introducing incentivized downloading, which provides mobile gamers with virtual currency through completing real world actions. For example, a user can earn virtual currency in the game they are playing by downloading a related game in the app store.

TapJoy provides premium content to over 20,000 games and works with major companies like Amazon, Adidas, Epic Games, and Gillette.

Unity Ads

Unity, the popular mobile app development platform, launched Unity Ads in 2014. Since then, it’s become one of the premier mobile ad networks for mobile games. Unity Ads supports iOS and Android mobile platforms and offers a variety of ad formats. One of the major features is the ability to advertise In-App Purchases displayed in videos (both rewarded and unrewarded) to players.

On a targeting level, Unity Ads allows publishers to focus on players that are most likely to be interested in playing specific games based on their downloads and gameplay habits. Many of the leading mobile game companies use Unity to build their app and Unity Ads as their ad platform.

CONCLUSION

These are not the only mobile ad networks, but for app publishers looking to stay current, they are the premier platforms to research. Other platforms like media.net, Chartboost, Snapchat Ads, Twitter Ads, and AppLovin also merit consideration.

When it comes to advertising, every app and app publisher has different needs. Since advertising plays a massive role in generating revenue, mobile app developers set themselves up for success when they do the research and find the ad platforms best suited to their product.

How Wearables Help Fight Covid-19

The Covid-19 pandemic forced lifestyle changes on the global population unlike any other event in recent history. As companies like Amazon and Zoom reap major profits from increased demand for online ordering and teleconferencing, wearable app developers are taking a particular interest in how they can do their part to help quell the pandemic.

It’s easy to take a wearable device that tracks key health metrics and market it as helping to detect Covid-19. It’s much harder to create a device with a proven value in helping prevent the spread of the disease. Here’s our rundown of what you need to know about how wearables can help fight the Covid-19 pandemic.

WEARABLES CANNOT DIAGNOSE COVID-19

In an ideal world, your smartwatch could analyze your body on a molecular level to detect whether you have Covid-19. Technology has not yet evolved to the point where this is possible. The only way to diagnose Covid-19 is through a test administered by a health-care professional.

Fortunately, there are several ways in which wearables can help fight the spread of Covid-19 that do not involve direct diagnosis.

WEARABLES CAN DETECT EARLY SYMPTOMS

Wearables make it easy for their users to monitor general health conditions and deviations from their norms. Although wearables cannot tell the difference between the flu and Covid-19, they can collect data that indicates the early symptoms of an illness and warn their users.

Fitbit CEO James Park hopes the device will eventually sense these changes in health data, instructing users to quarantine 1-3 days before symptoms start and to follow up with a coronavirus test for confirmation.

Oura Ring

Another big player in the Covid-19 wearables space is the Oura ring. The Oura ring is a smart ring that tracks activity, sleep, temperature, pulse, and heart rate. Since the outbreak, it has emerged as a major tool for detecting early symptoms like increased resting heart rate. Most notably, NBA players in Orlando, Florida use the device to monitor their health and detect early symptoms.

WEARABLES HELP KEEP FRONTLINE HEALTH WORKERS SAFE

John A. Rogers, a biomedical engineer at Northwestern University, has been developing a wearable patch that attaches to the user’s throat and helps monitor coughing and respiratory symptoms like shortness of breath.

Wearable patch developed by John A. Rogers of Northwestern University

One of the planned uses of this wearable is to protect frontline health-care workers by detecting if they contract the virus and become sick.

In addition, wearables can help monitor symptoms in hospitalized patients, reducing the chance of spreading the infection and of exposing workers to infected patients.

ASYMPTOMATIC CARRIERS ARE ANOTHER STORY

Although wearables can collect and identify health data that points toward potential infections, recognizing asymptomatic carriers of the Coronavirus is another story. When carriers show no symptoms, the only way to determine if they have been infected is through a test.

TAKEAWAY

Unless there are significant technological leaps in Covid-19 testing, wearables will not be able to detect infections directly. However, they can help catch symptoms early to prevent the spread. Their ability to assist during the pandemic represents a major growth sector. We look forward to seeing how wearable developers will innovate to protect the health of users and our future.

The Future of Indoor GPS Part 5: Inside AR’s Potential to Dominate the Indoor Positioning Space

In the previous installment of our blog series on indoor positioning, we explored how RFID Tags are finding traction in the indoor positioning space. This week, we will examine the potential for AR Indoor Positioning to receive mass adoption.

When Pokemon Go accrued 550 million installs and made $470 million in revenue in 2016, AR became a household name. The release of ARKit and ARCore significantly enhanced the ability of mobile app developers to create popular AR apps. However, since Pokemon Go’s explosive release, no application has brought AR technology to the forefront of the public conversation.

When it comes to indoor positioning technology, AR has major growth potential. GPS is the most prevalent technology in the navigation space, but it cannot provide accurate positioning within buildings. GPS can get close in large buildings such as airports, but it fails to pinpoint the floor number and finer details. Where GPS fails, AR-based indoor positioning systems can flourish.

HOW DOES IT WORK?

AR indoor navigation consists of three modules: Mapping, Positioning, and Rendering.

via Mobi Dev

Mapping: creates a map of an indoor space to make a route.

Rendering: manages the design of the AR content as displayed to the user.

Positioning: the most complex module. No technology available within the device can, on its own, accurately determine the precise location of users indoors, including the exact floor.

AR-based indoor positioning solves that problem by using visual markers, or AR markers, to establish the user’s position. Visual markers are recognized by Apple’s ARKit, Google’s ARCore, and other AR SDKs. When the user scans a marker, the system can identify exactly where the user is and provide a navigation interface. The further the user moves from the last visual marker, the less accurate their location information becomes, so to maintain accuracy, developers recommend placing visual markers every 50 meters.
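The accuracy decay between marker scans can be modeled simply: position is dead-reckoned from the last scanned marker, and the error bound grows with the distance walked since that scan. The drift rate below is an illustrative number, not a measured one:

```python
def estimate_position(marker_position, steps, drift_per_meter=0.02):
    """Dead-reckon from the last scanned AR marker.
    Each step is a (dx, dy) displacement in meters; the returned error
    bound grows with distance travelled since the marker was scanned.
    drift_per_meter (2 cm of error per meter walked) is illustrative."""
    x, y = marker_position
    travelled = 0.0
    for dx, dy in steps:
        x, y = x + dx, y + dy
        travelled += (dx * dx + dy * dy) ** 0.5
    return (x, y), travelled * drift_per_meter

# 30 m of walking since the last marker -> error bound of about 0.6 m.
pos, error = estimate_position((0.0, 0.0), [(3.0, 0.0)] * 10)
```

Scanning a fresh marker resets `travelled` to zero, which is why spacing markers roughly every 50 meters keeps the worst-case error bounded.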

Whereas beacon-based indoor positioning technologies can become expensive quickly, running $10-20 per beacon with a working range of roughly 10-100 meters, AR visual markers are the more precise and cost-effective solution, with an accuracy threshold down to within millimeters.

Via View AR

CHALLENGES

Performance can decline as more markers are added to an AR-based visual positioning system (VPS), because every marker must be checked to find a match. If the application is set up for a small building where 10-20 markers are required, this is not an issue. If it’s a chain of supermarkets requiring thousands of visual markers across a city, it becomes more challenging.

Luckily, GPS can help determine the building where the user is located, limiting the number of visual markers the application will ping. Innovators in the AR-based indoor positioning space are using hybrid approaches like this to maximize precision and scale of AR positioning technologies.
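The hybrid approach amounts to a coarse-to-fine lookup: the GPS fix narrows the search to one building, and only that building’s markers are checked. A minimal sketch, with invented marker data:

```python
def candidate_markers(markers, building):
    """Use the coarse GPS fix (the building) to shrink the set of
    visual markers the AR matcher must compare against."""
    return [m for m in markers if m["building"] == building]

# Hypothetical city-wide marker registry for a supermarket chain.
markers = [
    {"id": "aisle-3", "building": "store-north"},
    {"id": "aisle-7", "building": "store-north"},
    {"id": "aisle-2", "building": "store-south"},
]

# GPS says the user is in the north store, so only two markers need checking.
shortlist = candidate_markers(markers, "store-north")
```

This keeps the per-scan matching cost proportional to one building’s markers rather than the whole deployment.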

CONCLUSION

AR-based indoor navigation has seen few real-world deployments and requires further technical development before it can roll out on a large scale, but the technical trajectory suggests it will be one of the major indoor positioning technologies of the future.

This entry concludes our blog series on indoor positioning; we hope you enjoyed and learned from it! In case you missed them, check out our past entries:

The Future of Indoor GPS Part 1: Top Indoor Positioning Technologies

The Future of Indoor GPS Part 2: Bluetooth 5.1’s Angle of Arrival Ups the Ante for BLE Beacons

The Future of Indoor GPS Part 3: The Broadening Appeal of Ultra Wideband

The Future of Indoor GPS Part 4: Read the Room with RFID Tags

iOS 14 Revamps the OS While Android 11 Offers Minor Improvements

Every time Apple announces a new device or OS, it is a cultural event for both consumers and app developers. When Apple announced iOS 14 in June 2020 during the WWDC 2020 keynote, few anticipated it would be one of the biggest iOS updates to date. With a host of new features and UI enhancements, the release of iOS 14  has become one of the most hotly anticipated moments of this year in technology.

On the other side of the OS war, Android has released four developer previews of its latest OS offering, Android 11, in 2020. Android 11 is currently available in a beta release ahead of its target launch in August/September.

The two biggest OS titans have effectively upped the ante on their rivalry. Here is a summary of everything you need to know about how they stack up against one another:

iOS 14

iOS 14 is a larger step forward for iOS than Android 11 is for Android. Relative to iOS 13, it revamps the OS to become smarter and more user-friendly while streamlining group conversations.

While iMessage remains the most popular messaging platform on the market, competitors like WhatsApp, Discord and Signal include a variety of features previously unavailable on iOS devices. iOS 14 closes the gap with its competitors, offering a host of UI enhancements specifically targeting group conversations—one of the most popular features on iMessage:

  • Pinned Conversations: Pin the most important conversations to the top of your profile to make them easier to access.
  • Group Photos: iOS 14 enhances group conversations by allowing users to give group conversations a visual identity using a photo, Memoji, or emoji.
  • Mentions: Users can now directly tag users in their messages within group conversations. When a user is mentioned, their name will be highlighted in the text and users can customize notifications so that they only receive notifications when they are mentioned.
  • Inline Replies: Within group conversations, users can select a specific message and reply directly to it.

One of the major upgrades in iOS 14 is the inclusion of Widgets on the home screen. Widgets on the home screen have been redesigned to offer more information at a glance. They are also customizable to give the user more flexibility in how they arrange their home screen.

iOS 14 introduces the App Library, a program which automatically organizes applications into categories, offering a simple, easy-to-navigate view. The App Library makes all of a user’s applications visible at once and allows users to customize how they’d like their applications to be categorized.

In addition to incorporating a variety of UI enhancements, iOS 14 is significantly smarter. Siri is equipped with 20x more facts than it had three years ago. iOS 14 also improves language translation, offering 11 different languages. Users can download the languages they need so translations happen on the device, keeping them private and available without an internet connection.

Apple has also introduced a number of UI enhancements to help make the most of screen real estate:

Compact Calls condense the amount of screen real estate occupied by phone calls from iPhone, FaceTime, and third-party apps, allowing users to continue viewing information on their screen both when a call comes in and when they are on a call.

Picture in Picture mode similarly allows users to condense their video display so that it doesn’t take up their entire screen, allowing the user to navigate their device without pausing their video call or missing part of a video that they are watching.

ANDROID 11

In comparison to iOS 14, Android 11 is not a major visual overhaul of the platform. However, it does offer an array of new features which enhance the UI.

  • Android 11 introduces native screen recording, a useful feature already included in iOS that is particularly helpful when demonstrating how applications work.
  • While recording videos, Android allows users to mute notifications which would otherwise cause the recording to stop.
  • Users can now modify the touch sensitivity of their screen, increasing or decreasing sensitivity to their liking.
  • Android 11 makes viewing a history of past notifications as easy as it has ever been using the Notification History button.
  • In the current OS, when users grant an Android app access to a permission, the decision is set in stone for all future usage: the application will have permanent access, access during usage, or be blocked. Android 11 introduces one-time permissions, allowing users to grant an application a permission once, with the question posed again the next time they open the app.

IOS 14 VS. ANDROID 11

While Android 11 offers a variety of small improvements, iOS 14 provides the iOS platform with a major visual overhaul. This year, it is safe to say that iOS 14 wins the battle for the superior upgrade. With both updates slated for a fall release, how users respond to the new OSes remains to be seen.

The Future of Indoor GPS Part 4: Read the Room with RFID Tags

In the previous installment of our blog series on indoor positioning, we explored the future of Ultra Wideband technology. This week, we will examine RFID Tags.

The earliest applications of RFID tags date back to World War II, when they were used to identify nearby planes as friend or foe. Since then, RFID technology has evolved to become one of the most cost-effective and easy-to-maintain indoor positioning technologies on the market.

WHAT IS RFID?

RFID refers to a wireless system with two components: tags and readers. The reader is a device with one or more antennae emitting radio waves and receiving signals back from the RFID tag.

RFID tags are attached to assets like product inventory. RFID readers enable users to automatically track and identify inventory and assets without a direct line of sight, with a read range between a few centimeters and over 20 meters. Tags can contain a wide range of information, from merely a serial number to several pages of data. Readers can be mobile and carried by hand, or mounted or embedded into the architecture of a room.

RFID tags use radio waves to communicate with nearby readers and can be passive or active. Passive tags are powered by the reader, do not require a battery, and have a read range from near contact up to roughly 25 meters. Active tags require batteries and have an increased read range of 30 to over 100 meters.

WHAT DOES RFID DO?

RFID is one of the most cost-effective and efficient location technologies. RFID chips are incredibly small—they can be placed underneath the skin without much discomfort to the host. For this reason, RFID tags are commonly used for pet identification.

Image via Hopeland

One of the most widespread uses of RFID is in inventory management. When a unique tag is placed on each product, RFID offers an instant count of the total number of items within a warehouse or shop, along with a full database of product information updated in real time.
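To make the idea concrete, here is a minimal sketch of how per-item tags could roll up into real-time inventory counts. The tag IDs, SKUs, and function name are illustrative, not from any real RFID middleware; the key point is that each physical item carries a unique tag, so repeat reads of the same tag are deduplicated before counting.

```python
from collections import Counter

def inventory_from_reads(tag_reads):
    """Aggregate raw RFID tag reads into per-product counts.

    Each read is a (tag_id, product_sku) pair. Duplicate reads of the
    same tag (e.g., from overlapping readers) are counted once, since
    every physical item carries a unique tag.
    """
    seen = {}  # tag_id -> product_sku (deduplicates repeat reads)
    for tag_id, sku in tag_reads:
        seen[tag_id] = sku
    return Counter(seen.values())

reads = [
    ("TAG-001", "SKU-SHIRT"),
    ("TAG-002", "SKU-SHIRT"),
    ("TAG-001", "SKU-SHIRT"),  # same tag picked up by a second reader
    ("TAG-003", "SKU-JEANS"),
]
print(inventory_from_reads(reads))  # Counter({'SKU-SHIRT': 2, 'SKU-JEANS': 1})
```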

RFID has also found several use cases in indoor positioning. For example, hospitals identify patients and medical equipment using several readers spaced throughout a building; each reader measures its position relative to the tag, and together they determine the tag's location within the building. Supermarkets similarly use RFID to track products, shopping carts, and more.
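The "several readers determine a location" step is classic trilateration. The sketch below assumes an idealized 2D setup with three readers at known positions and noise-free distance estimates to the tag; real deployments use more readers and least-squares fitting, but the geometry is the same.

```python
import math

def locate_tag(readers, distances):
    """Estimate a tag's 2D position from three readers at known
    positions and their measured distances to the tag.

    Subtracting reader 0's circle equation from readers 1 and 2
    yields two linear equations in (x, y), solved by Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = readers
    r0, r1, r2 = distances
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Readers in three corners of a 10 m x 10 m room; tag truly at (3, 4).
readers = [(0, 0), (10, 0), (0, 10)]
tag = (3.0, 4.0)
dists = [math.dist(tag, r) for r in readers]
print(locate_tag(readers, dists))  # ≈ (3.0, 4.0)
```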

RFID has found a wide variety of other use cases as well.

WHAT ARE THE CONS OF USING RFID?

Perhaps the biggest obstacle facing businesses looking to adopt RFID for inventory tracking is pricing. RFID tags are significantly more expensive than bar codes, which can store some of the same data and offer similar functionality. At about $0.09 each, passive RFID tags are far less expensive than active RFID tags, which can run from $25 to $50. The cost of active RFID tags leads many businesses to reserve them for high-value inventory items.

RFID tags are also vulnerable to viruses and other attacks, as is any technology that broadcasts a signal. Encrypting the data provides an extra layer of security; however, security concerns still often prevent larger enterprises from using tags on their most high-end merchandise.

OVERALL

RFID is among the leading technologies for combining inventory management with indoor positioning. Although UWB and Bluetooth BLE beacons offer more precise and battery-efficient location services, RFID continues to evolve toward greater energy and cost efficiency.

Stay tuned for the next entry in our Indoor Positioning blog series which will explore AR applications in indoor positioning!

The Future of Indoor GPS Part 3: The Broadening Appeal of Ultra Wideband

In the previous installment of our blog series on indoor positioning, we explored all that Bluetooth 5.1 has to offer. This week, we will examine what may be a major wireless technology of the future: Ultra Wideband.

In September 2019, a U1 chip was listed among the innovations announced with the iPhone 11. The U1 chip provides Ultra Wideband (UWB) connectivity. Those knowledgeable about UWB recognize that its inclusion is a major step toward UWB becoming a household-name technology like Bluetooth and WiFi.

HISTORY

UWB has gone by a number of synonymous terms, including impulse, carrier-free, baseband, time-domain, nonsinusoidal, orthogonal-function, and large-relative-bandwidth radio/radar signals.

Guglielmo Marconi, UWB innovator

UWB was first employed by Guglielmo Marconi in 1901 to transmit Morse code sequences across the Atlantic Ocean using spark-gap radio transmitters. Modern development began in the late 1960s with pioneering contributions by Harmuth at Catholic University of America, Ross and Robbins at Sperry Rand Corporation, and Paul van Etten at the USAF's Rome Air Development Center in Rome, New York. In the early 2000s, UWB was used in military radars, covert communication, and briefly in medical imaging applications such as remote heart-monitoring systems. Its adoption lagged until commercial interests began exploring potential innovative uses.

MODERN USAGE

via Sewio

UWB is a short-range wireless communication protocol. It differs from WiFi and Bluetooth in that it spreads its signal across a very wide band of radio spectrum. "Ultra Wideband" refers to the bandwidth of the channels it utilizes: 500 MHz or more. Wi-Fi and LTE radio bands are about one-tenth as wide, typically ranging from 20 to 80 MHz. UWB acts like a radar that can lock onto objects to identify their location and transmit data.

Apple describes UWB technology as providing "spatial awareness": it can continuously scan a room and precisely lock onto specific objects. One of its major applications in the iPhone 11 is letting the user point their device at another device to target it for an AirDrop.

INDOOR POSITIONING

According to IDC research director Phil Solis, the primary uses of UWB are expected to be indoor positioning, location discovery, and device ranging. Compared to Wi-Fi and Bluetooth, UWB consumes very little power, and its high bandwidth makes it ideal for relaying large amounts of data from a host device to other devices up to around 30 feet away. Unlike Wi-Fi, UWB is not particularly good at transmitting through walls, but its robustness against interference and high data rate (110 kbit/s to 6.8 Mbit/s) enable ultra-precise indoor positioning.
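Device ranging with UWB rests on simple time-of-flight arithmetic: radio waves travel at the speed of light, so measuring how long a pulse takes to make a round trip gives the distance. The sketch below illustrates single-sided two-way ranging with made-up timing values; the function name and parameters are illustrative, not from any real UWB SDK.

```python
C = 299_792_458  # speed of light in m/s

def twr_distance(t_round_s, t_reply_s):
    """Single-sided two-way ranging: the initiator measures the full
    round-trip time, the responder reports its known reply delay, and
    the one-way time of flight is half the difference."""
    time_of_flight = (t_round_s - t_reply_s) / 2
    return time_of_flight * C

# A tag 9 m away: one-way flight time is about 30 nanoseconds,
# dwarfed by the responder's 100-microsecond reply delay.
tof = 9.0 / C
d = twr_distance(t_round_s=2 * tof + 100e-6, t_reply_s=100e-6)
print(round(d, 3))  # 9.0
```

The tiny flight times involved are why UWB's wide bandwidth matters: sharp, wide-band pulses can be timestamped precisely enough to resolve centimeters.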

The inclusion of the UWB U1 chip in the iPhone 11 paves the way for applications in indoor mapping and navigation, smart home and vehicle access and control, enhanced augmented reality, and mobile payments that are more secure than NFC.

MASS ADOPTION

As new applications continue to emerge and the demand for indoor positioning increases, the major hurdle UWB faces is a lack of existing infrastructure. Apple and Huawei, two of the largest smartphone makers in the world, are developing UWB projects, including chip and antenna production. Apple's decision to include it in the iPhone 11 marks the first time a UWB chip will be deployed on a smartphone. With such trendsetters behind it, it stands to reason that UWB will only grow in popularity from here, and mass adoption may be inevitable.

Stay tuned for the next entry in our Indoor Positioning blog series which will explore RFID Tags!

The Future of Indoor GPS Part 2: Bluetooth 5.1’s Angle of Arrival Ups the Ante for BLE Beacons

In the last installment of our blog series on indoor positioning, we surveyed the top indoor positioning technologies. This week, we will examine the most precise and popular method, Bluetooth BLE beacons, and how Bluetooth 5.1 makes them the leading indoor positioning tool on the market.

As the world transitions into a wireless society, Bluetooth technology has evolved and gained more and more popularity. Apple's decision to remove the 3.5 mm (1/8 inch) audio jack from its devices, while irksome to many consumers, was a definitive move in the direction of Bluetooth.

The growing market for indoor positioning has incentivized an evolution in the landscape of Bluetooth technology. The first consumer Bluetooth device was launched in 1999. This year, the world is forecasted to ship more than 4.5 billion Bluetooth devices. Behind the scenes, manufacturers are using Bluetooth technology for asset tracking and warehouse management. Bluetooth 5.1, in concert with Bluetooth BLE beacons, is the most popular indoor positioning method.

Nordic nRF52840-Dongle

BLUETOOTH 5.1

Announced in January 2019 by the Bluetooth Special Interest Group (SIG), Bluetooth 5.1 is the latest and most powerful iteration of Bluetooth technology yet.

Bluetooth 5.1 can connect with other devices at distances up to 985 feet, quadruple the range of Bluetooth 4.0. It improves upon Bluetooth 4.0's indoor positioning capabilities with Angle of Arrival (AoA) and Angle of Departure (AoD) features. When used for indoor location, Bluetooth 5.1 can provide accuracy down to 1-10 centimeters with very little lag. Its 2 Mbps LE data rate is also twice as fast as Bluetooth 4.0's.

In addition to being faster and more powerful, Bluetooth 5.1 continues the Low Energy (LE) lineage, consuming less power than previous iterations of Bluetooth.

INDOOR POSITIONING

Bluetooth BLE beacons are attached to objects, vehicles, devices, etc., and used to track their location. They enable Bluetooth devices to communicate with IoT products and other devices. The top suppliers in the beacon space include Kontakt, Blukii, Minew, Gimbal, Estimote, and EM Microelectronic.

AoA and AoD features are at the core of what enhances positioning technologies in Bluetooth 5.1.

Angle of Arrival diagram via ScienceDirect.com
Angle of Arrival diagram via ScienceDirect.com

In AoA, the device or tag transmits a special direction-finding packet using a single antenna. The receiving device has multiple antennas, each of which picks up the incoming signal at a slightly different time relative to the others. An algorithm factors in these phase shifts and yields precise coordinate information.

AoD flips the scenario: the transmitting device has an array of antennas and sends the packet across that array, while the receiving device performs IQ sampling with its single antenna to calculate the coordinates.
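The core math behind AoA can be sketched in a few lines. For two antennas spaced a distance d apart, a signal arriving at angle θ from broadside travels an extra d·sin(θ) to reach the second antenna, producing a phase difference Δφ = 2π·d·sin(θ)/λ; inverting that recovers the angle. This is a simplified two-antenna model with made-up values, not the full multi-antenna IQ-sampling pipeline the spec defines.

```python
import math

def angle_of_arrival(phase_delta_rad, antenna_spacing_m, wavelength_m):
    """Estimate the angle of arrival (radians from broadside) from the
    phase difference measured between two antennas spaced d apart:
        delta_phi = 2 * pi * d * sin(theta) / wavelength
    """
    s = phase_delta_rad * wavelength_m / (2 * math.pi * antenna_spacing_m)
    return math.asin(s)

# A 2.4 GHz signal (wavelength ~0.125 m), antennas half a wavelength apart.
wavelength = 0.125
spacing = wavelength / 2
# A signal arriving 30 degrees off broadside produces this phase shift:
delta_phi = 2 * math.pi * spacing * math.sin(math.radians(30)) / wavelength
theta = math.degrees(angle_of_arrival(delta_phi, spacing, wavelength))
print(round(theta, 1))  # 30.0
```

With the angle known at two or more fixed locator hubs, intersecting the bearing lines pins down the transmitter's position.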

USE CASES

Enhanced indoor positioning enables a number of use cases. In sports stadiums and music venues, a locating hub near the center of the arena can receive signals from devices using AoA technology and determine their location coordinates. Keys, perhaps the most commonly lost objects, can be fitted with a sensor and located via a locator hub installed in a smart home.

Bluetooth BLE beacons, harnessing Bluetooth 5.1, remain the most cost- and energy-efficient method of obtaining precise indoor positions.

Stay tuned for the next entry in our Indoor Positioning blog series which will explore the wonders of Ultra-Wideband (UWB) technology!