Tag Archives: AI

How Chatbots Make Healthcare More Efficient

In the mid 1960s, Joseph Weizenbaum of the MIT Artificial Intelligence Laboratory created ELIZA, an early natural language processing computer program and the first chatbot therapist. While ELIZA did not change therapy forever, it was a major step forward and one of the first programs capable of attempting the Turing Test. Researchers were surprised by the number of people who attributed human-like feelings to the computer's responses.

Fast-forward more than 50 years: advancements in artificial intelligence and natural language processing have made chatbots useful in a wide range of scenarios. Interest in chatbots has increased by 500% in the past 10 years, and the market size is expected to reach $1.3 billion by 2025.

Chatbots are becoming commonplace in marketing, customer service, real estate, finance, and more. Healthcare is one of the top 5 industries where chatbots are expected to make an impact. This week, we explore how chatbots help healthcare providers run a more efficient operation.

SCALABILITY

Chatbots can interact with a large number of users instantly. Their scalability equips them to handle logistical problems with ease. For example, chatbots can make mundane tasks such as scheduling easier by asking basic questions to understand a user’s health issues, matching them with doctors based on available time slots, and integrating with both doctor and patient calendars to create an appointment.
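
To make that flow concrete, here is a minimal sketch of the matching step, assuming a hypothetical in-memory list of doctors and open time slots rather than any particular vendor's scheduling API:

```python
from datetime import datetime

# Hypothetical in-memory schedule; a real bot would integrate with calendar APIs.
doctors = [
    {"name": "Dr. Lee", "specialty": "dermatology",
     "open_slots": [datetime(2021, 6, 2, 14, 0), datetime(2021, 6, 1, 9, 0)]},
    {"name": "Dr. Patel", "specialty": "cardiology",
     "open_slots": [datetime(2021, 6, 1, 10, 30)]},
]

def book_appointment(specialty, earliest):
    """Return the first doctor and open slot matching the patient's needs."""
    for doctor in doctors:
        if doctor["specialty"] != specialty:
            continue
        for slot in sorted(doctor["open_slots"]):
            if slot >= earliest:
                doctor["open_slots"].remove(slot)  # both calendars would be updated here
                return doctor["name"], slot
    return None  # no match; hand off to a human scheduler

print(book_appointment("dermatology", datetime(2021, 6, 1, 8, 0)))
# ('Dr. Lee', datetime.datetime(2021, 6, 1, 9, 0))
```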

At the onset of the pandemic, Intermountain Healthcare was receiving an overload of inquiries from people who were afraid they may have contracted Covid-19. To handle the volume, Intermountain added extra staff and a dedicated line to their call center, but it wasn't enough. Ultimately, they turned to artificial intelligence in the form of Scout, a conversational chatbot made by Gyant, to perform a basic coronavirus screening that determined whether patients were eligible to get tested at a time when the number of tests was limited.

Scout only had to ask very basic questions, but it handled the bevy of inquiries with ease. Chatbots have proved themselves to be particularly useful for understaffed healthcare providers. As they employ AI to learn from previous interactions, they will become more sophisticated, which will enable them to take on more robust tasks.

ACCESS

Visiting a doctor can be challenging due to the considerable amount of time it takes to commute. For working people and those without access to reliable transportation, the hassle of the trip may prevent them from going at all. Chatbots and telehealth in general provide a straightforward solution to these issues, enabling patients to receive insight as to whether an in-person consultation will be necessary.

While chatbots cannot provide medical insight or prognoses, they are effective at collecting basic data, such as anxiety and weight changes, and encouraging awareness of it. They can help triage patients through preliminary stages using automated queries and store information which doctors can later reference with ease. Their ability to disseminate information and handle questions will only increase as natural language processing improves.

A PERSONALIZED APPROACH — TO AN EXTENT

Chatbot therapists have come a long way since ELIZA. Developments in NLP, machine learning, and more enable chatbots to deliver helpful, personalized responses to user messages. Chatbots like Woebot are trained to employ cognitive-behavioral therapy (CBT) to aid patients suffering from emotional distress by offering prompts and exercises for reflection. The anonymity of chatbots can help encourage patients to provide more candid answers unafraid of human judgment.

However, chatbots have yet to achieve one of the most important qualities a medical provider should have: empathy. Each individual is different: some may be scared away by formal talk and prefer casual conversation, while for others, formality may be of the utmost importance. Given the delicacy of health matters, a lack of human sensitivity is a major flaw.

While chatbots can help manage a number of logistical tasks to make life easier for patients and providers, their application will be limited until they can gauge people’s tone and understand context. If recent advances in NLP and AI serve any indication, that time is soon to come.

How the Internet of Behaviors Will Shape the Future of Digital Marketing

In the digital age, businesses need to leverage every possible platform and cutting-edge technology in order to get a leg up on the competition. We’ve covered the Internet of Things extensively on the Mystic Media blog, but a new and related tech trend is making waves. This trend is called the Internet of Behaviors and according to Gartner, about 40% of people will have their behavior tracked by the IoB globally by 2023.

WHAT IS THE IOB?

Internet of Behavior, or the IoB, exists at the intersection of technology, data analytics, and behavioral science. The IoB leverages data collected from a variety of sources, including online activities, social media, wearable devices, commercial transactions and IoT devices, in order to deliver insights related to consumers and purchasing behavior.

With devices more interconnected than ever, the IoB tracks, gathers, combines and interprets massive data sets so that businesses can better understand their consumers. Businesses leverage analysis from the IoB to offer more personalized marketing with the goal of influencing consumer decision making.

HOW DOES IT WORK?

Traditionally, a car insurance company would analyze a customer's driving history in order to determine if they are a good or bad driver. However, in today's digital age, they might take it a step further and analyze social media profiles in order to "predict" whether a customer is a safe driver. Imagine what insights they could gather from a user's Google search history or Amazon purchases. Access to large datasets enables large companies to create psychographic profiles and gather an enhanced understanding of their customer base.

Businesses can use the IoB for more than just purchasing decisions. UX designers can leverage insights to deliver more effective customer experiences. Large companies such as Ford are designing autonomous vehicles that adapt to the city they operate in, modulating behavior based on vehicle traffic, pedestrians, bicycles and more.

GBKSOFT created a mobile application that collects data from wearable devices in order to help golfers improve their skills. The application records each golf ball hit, including the stroke, force, trajectory and angle, and delivers visual recommendations to improve their swing and technique. Insights gathered through data are translated into behavioral trends that are then converted into recommendations to improve the user’s game.
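
As a toy illustration of that last step (not GBKSOFT's actual model), translating raw swing readings into a recommendation can be as simple as comparing each metric against a target range; the metric names and ranges below are hypothetical:

```python
# Toy illustration only: compare swing readings against target ranges and
# turn the differences into plain-language recommendations.
TARGETS = {"launch_angle_deg": (10, 14), "club_speed_mph": (95, 110)}

def recommend(swing):
    tips = []
    for metric, (low, high) in TARGETS.items():
        value = swing[metric]
        if value < low:
            tips.append(f"increase {metric}: measured {value}, target {low}-{high}")
        elif value > high:
            tips.append(f"decrease {metric}: measured {value}, target {low}-{high}")
    return tips or ["swing is within the target ranges"]

print(recommend({"launch_angle_deg": 8.5, "club_speed_mph": 101}))
# ['increase launch_angle_deg: measured 8.5, target 10-14']
```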

The IoB is all about collecting data that can be translated into behavioral insight, which helps companies understand consumer tendencies and translate them into meaningful actions.

CONCERNS

While there is quite a bit of enthusiasm surrounding the potential impact of the IoB for B2C companies, a number of legal concerns come with it. A New York Times article, written by Harvard Business School emeritus professor Shoshana Zuboff, warns of the age of surveillance capitalism where tech behemoths surveil humans with the intent to control their behavior.

Due to the speed at which technology and the ability to collect data have proliferated, privacy and data security are under-regulated and major concerns for consumers. For example, Facebook was applying facial recognition scans in advance of the 2016 election without users' consent. Cambridge Analytica's use of psychographic profiles has been the subject of intense scrutiny. Momentum for data privacy regulation is growing, and since the IoB hinges on the ability of companies to collect and market data, forthcoming regulations could inhibit its impact.

CONCLUSION

Despite regulatory concerns, the IoB is a sector that we expect to see grow over time. As the IoT generates big data and AI evolves to learn how to parse through and analyze it, it’s only natural that companies will take the next step to leverage analysis to enhance their understanding of their customers’ behaviors and use it to their advantage. The IoB is where that next step will take place.

LiDAR: The Next Revolutionary Technology and What You Need to Know

In an era of rapid technological growth, certain technologies, such as artificial intelligence and the internet of things, have received mass adoption and become household names. One up-and-coming technology that has the potential to reach that level of adoption is LiDAR.

WHAT IS LIDAR?

LiDAR, or light detection and ranging, is a popular remote sensing method for measuring the exact distance of an object on the earth's surface. First used in the 1960s, LiDAR gradually gained adoption, particularly after the creation of GPS in the 1980s, and became a common technology for deriving precise geospatial measurements.

LiDAR requires three components: a laser, a scanner, and a GPS receiver. The laser emits pulses of light which travel to the ground and reflect off buildings, tree branches and other surfaces. The reflected light energy returns to the LiDAR sensor, where the scanner records the round-trip time and the GPS receiver ties each measurement to a precise position, yielding an object's distance from the sensor. In combination with a photodetector and optics, this allows for ultra-precise distance detection and topographical data.
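
The core distance calculation is simple time-of-flight math; here is a minimal sketch (the 66.7-nanosecond figure is just an illustrative reading):

```python
# Time-of-flight distance: a pulse travels to the target and back, so the
# distance is (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT_M_S = 299_792_458

def lidar_distance_m(round_trip_seconds):
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A return received 66.7 nanoseconds after the pulse left the sensor
print(round(lidar_distance_m(66.7e-9), 2), "meters")  # ~10.0 meters
```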

WHY IS LIDAR IMPORTANT?

As we covered in our rundown of the iPhone 12, new iOS devices come equipped with a brand new LiDAR scanner. LiDAR is now in the hands of consumers who own Apple's new generation of devices, enabling enhanced functionality and major opportunities for app developers. The proliferation of LiDAR signals that the technology is on its way to mass adoption and household-name status.

There are two different types of LiDAR systems: terrestrial and airborne. Airborne LiDAR systems are installed on drones or helicopters to derive exact distance measurements, while terrestrial LiDAR systems are installed on moving vehicles to collect precise data points. Terrestrial LiDAR systems are often used to monitor highways and have been employed by autonomous cars for years, while airborne LiDAR is commonly used in environmental applications and for gathering topographical data.

With the future in mind, here are the top LiDAR trends to look out for moving forward:

SUPERCHARGING APPLE DEVICES

LiDAR enhances the camera on Apple devices significantly. Auto-focus is quicker and more effective on those devices. Moreover, it supercharges AR applications by greatly enhancing the speed and quality of a camera’s ability to track the location of people as well as place objects.

One of the major apps that received a functionality boost from LiDAR is Apple’s free Measure app, which can measure distance, dimensions, and even whether an object is level. The measurements determined by the app are significantly more accurate with the new LiDAR scanner, capable of replacing physical rulers, tape measures, and spirit levels.

Microsoft's Seeing AI application is designed to help the visually impaired navigate their environment, and LiDAR takes it to the next level. In conjunction with artificial intelligence, LiDAR enables the application to read text, identify products and colors, and describe people, scenes, and objects that appear in the viewfinder.

BIG INVESTMENTS BY AUTOMOTIVE COMPANIES

LiDAR plays a major role in autonomous vehicles, which rely on terrestrial LiDAR systems to self-navigate. In 2018, reports suggested that the automotive segment accounted for a roughly 90 percent share of the LiDAR business. With self-driving cars inching toward mass adoption, expect to see major investments in LiDAR by automotive companies in 2021 and beyond.

As automotive companies look to make major investments in LiDAR, including Volkswagen’s recent investment in Aeva, many LiDAR companies are competing to create the go-to LiDAR system for automotive companies. Check out this great article by Wired detailing the potential for this bubble to burst.

LIDAR DRIVING ENVIRONMENTAL APPLICATIONS

Beyond commercial applications and the automotive industry, LiDAR is gradually seeing increased adoption for geoscience applications. The environmental segment of the LiDAR market is anticipated to grow at a CAGR of 32% through 2025. LiDAR is vital to geoscience applications for creating accurate and high-quality 3D data to study ecosystems of various wildlife species.

One of the main environmental uses of LiDAR is collecting topographic information on landscapes. Topographic LiDAR is expected to see a growth rate of over 25% over the coming years. These systems can penetrate forest canopy to produce accurate 3D models of landscapes necessary to create contours, digital terrain models, digital surface models and more.

CONCLUSION

In March 2020, after the first LiDAR scanner became available in the iPad Pro, The Verge put it perfectly when they said that the new LiDAR sensor is an AR hardware solution in search of software. While LiDAR has gradually found increasing usage, it is still a powerful new technology with burgeoning commercial usage. Enterprising app developers are looking for new ways to use it to empower consumers and businesses alike.

For supplementary viewing on the inner workings of the technology, check out this great introduction below, courtesy of Neon Science.

How AI Fuels a Game-Changing Technology in Geospatial 2.0

Geospatial technology describes a broad range of modern tools which enable the geographic mapping and analysis of Earth and human societies. Since the 19th century, geospatial technology has evolved as aerial photography and eventually satellite imaging revolutionized cartography and mapmaking.

Contemporary society now employs geospatial technology in a vast array of applications, from commercial satellite imaging, to GPS, to Geographic Information Systems (GIS) and Internet Mapping Technologies like Google Earth. The geospatial analytics market is currently valued between $35 and $40 billion with the market projected to hit $86 billion by 2023.

GEOSPATIAL 1.0 VS. 2.0

Geospatial technology has been in phase 1.0 for centuries; however, the boom of artificial intelligence and the IoT has made Geospatial 2.0 a reality. Geospatial 1.0 offers valuable information, letting analysts view, analyze, and download geospatial data streams. Geospatial 2.0 takes it to the next level, harnessing artificial intelligence not only to collect data, but to process, model, analyze and make decisions based on that analysis.

When empowered by artificial intelligence, geospatial 2.0 technology has the potential to revolutionize a number of verticals. Savvy application developers and government agencies in particular have rushed to the forefront of creating cutting edge solutions with the technology.

PLATFORM AS A SERVICE (PaaS) SOLUTIONS

Effective geospatial 2.0 solutions require a deep vertical-specific knowledge of client needs, which has lagged behind the technical capabilities of the platform. The bulk of currently available geospatial 2.0 technologies are offered as “one-size-fits-all” Platform as a Service (PaaS) solutions. The challenge for PaaS providers is that they need to serve a wide collection of use cases, harmonizing data from multiple sensors together while enabling users to simply understand and address the many different insights which can be gleaned from the data.

In precision agriculture, FarmShots offers precise, frequent imagery to farmers along with meaningful analysis of field variability, damage extent, and the effects of applications through time.

In the disaster management field, Mayday offers a centralized artificial intelligence platform with real-time disaster information. Another geospatial 2.0 application Cloud to Street uses a mix of AI and satellites to track floods in near real-time, offering extremely valuable information to both insurance companies and municipalities.

SUSTAINABILITY

The growing complexity of environmental concerns has led to a number of applications of geospatial 2.0 technology to help create a safer, more sustainable world. For example, geospatial technology can measure carbon sequestration, tree density, green cover, carbon credits & tree age. It can provide vulnerability assessment surveys in disaster-prone areas. It can also help urban planners and governments plan and implement community mapping and equitable housing. Geospatial 2.0 can analyze a confluence of factors and create actionable insight for honing our environmental practices.

As geospatial 1.0 models are upgraded to geospatial 2.0, expect to see more robust solutions incorporating AI-powered analytics. A survey of working professionals conducted by Geospatial World found that geospatial technology will likely make the biggest impact in the climate and environment field.

CONCLUSION

Geospatial 2.0 platforms are expensive to deploy and require significant development, but the technology offers great potential to increase revenue and efficiency for a number of verticals. In addition, it may be a key technology to help cut down our carbon footprint and create a safer, more sustainable world.

AIoT: How the Intersection of AI and IoT Will Drive Innovation for Decades to Come

We have covered the evolution of the Internet of Things (IoT) and Artificial Intelligence (AI) over the years as they have gained prominence. IoT devices collect a massive amount of data. Cisco projects that by the end of 2021, IoT devices will collect over 800 zettabytes of data per year. Meanwhile, AI algorithms can parse through big data and teach themselves to analyze and identify patterns to make predictions. Both technologies enable a seemingly endless number of applications and have made a massive impact on many industry verticals.

What happens when you merge them? The result is aptly named the AIoT (Artificial Intelligence of Things) and it will take IoT devices to the next level.

WHAT IS AIOT?

AIoT is any system that integrates AI technologies with IoT infrastructure, enhancing efficiency, human-machine interactions, data management and analytics.

IoT enables devices to collect, store, and analyze big data. Traditionally, device operators and field engineers control these devices and act on the data. AI enhances IoT's existing systems, enabling them to take the next step: determining and taking the appropriate action based on the analysis of the data.

By embedding AI into infrastructure components, including programs, chipsets, and edge computing, AIoT enables intelligent, connected systems to learn, self-correct and self-diagnose potential issues.

One common example comes from the surveillance field. A surveillance camera can be used as an image sensor; traditionally it sends every frame to an IoT system, which analyzes the feed for certain objects. With AI on the device, the camera can analyze each frame locally and send only the frames in which it detects a specific object, significantly speeding up the process and reducing the amount of data generated, since irrelevant frames are excluded.
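
A minimal sketch of that edge-filtering loop, assuming a hypothetical on-device detect() model rather than any real product's API:

```python
# Sketch of AIoT edge filtering: run detection on the device and forward only
# the frames that contain the target object. detect() is a placeholder for a
# hypothetical on-device object detector, not a real product's API.
def detect(frame):
    """Return the list of object labels found in the frame (placeholder)."""
    return []

def process_stream(frames, target="person", send_to_cloud=print):
    sent = 0
    for frame in frames:
        if target in detect(frame):  # only relevant frames leave the device
            send_to_cloud(frame)
            sent += 1
    return sent  # far fewer frames transmitted than captured
```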

While AIoT will no doubt find a variety of applications across industries, the three segments we expect to see the most impact on are wearables, smart cities, and retail.

WEARABLES

The global wearable device market is estimated to hit more than $87 billion by 2022. AI on wearable devices such as smartwatches opens up a number of potential applications, particularly in the healthtech sector.

Researchers in Taiwan have been studying the potential for an AIoT wearable system for electrocardiogram (ECG) analysis and cardiac disease detection. The system would integrate a wearable IoT-based system with an AI platform for cardiac disease detection. The wearable collects real-time health data and stores it in a cloud where an AI algorithm detects disease with an average of 94% accuracy. Currently, Apple Watch Series 4 or later includes an ECG app which captures symptoms of irregular, rapid or skipped heartbeats.

Although this device is still in development, we expect to see more coming out of the wearables segment as 5G enables more robust cloud-based processing power, taking the pressure off the devices themselves.

SMART CITIES

We’ve previously explored the future of smart cities in our blog series A Smarter World. With cities eager to invest in improving public safety, transport, and energy efficiency, AIoT will drive innovation in the smart city space.

There are a number of potential applications for AIoT in smart cities. AIoT’s ability to analyze data and act opens up a number of possibilities for optimizing energy consumption for IoT systems. Smart streetlights and energy grids can analyze data to reduce wasted energy without inconveniencing citizens.

Some smart cities have already adopted AIoT applications in the transportation space. New Delhi, which boasts some of the worst traffic in the world, features an Intelligent Transport Management System (ITMS) which makes real-time dynamic decisions on traffic flows to accelerate traffic.

RETAIL

AIoT has the potential to enhance the retail shopping experience with digital augmentation. The same smart cameras we referenced earlier are being used to detect shoplifters. Walmart recently confirmed it has installed smart security cameras in over 1,000 stores.

One of the big innovations for AIoT involves smart shopping carts. Grocery stores in both Canada and the United States are experimenting with high-tech shopping carts, including one from Caper which uses image recognition and built-in sensors to determine what a person puts into the shopping cart.

The potential for smart shopping carts is vast: these carts will be able to inform customers of deals and promotions, recommend products based on their buying decisions, enable them to view an itemized list of their current purchases, and incorporate indoor navigation to lead them to their desired items.

A smart shopping cart company called IMAGR recently raised $14 million in a pre-Series A funding round, pointing toward a bright future for smart shopping carts.

CONCLUSION

AIoT represents the intersection of AI, IoT, 5G, and big data. 5G enables the cloud processing power for IoT devices to employ AI algorithms to analyze big data to determine and enact action items. These technologies are all relatively young, and as they continue to grow, they will empower innovators to build a smarter future for our world.

How AI Revolutionizes Music Streaming

In 2020, worldwide music streaming revenue hit $11.4 billion, up 2,800% over the course of a decade. Three hundred forty-one million paid online streaming subscribers get their music from top services like Apple Music, Spotify, and Tidal. The competition for listeners is fierce, and each company looks to leverage every advantage it can in pursuit of higher market share.

Like all major tech conglomerates, music streaming services collect an exceptional amount of user data through their platforms and are creating elaborate AI algorithms designed to improve user experience on a number of levels. Spotify has emerged as the largest on-demand music service active today and bolstered its success through the innovative use of AI.

Here are the top ways in which AI has changed music streaming:

COLLABORATIVE FILTERING

AI has the ability to sift through a plenitude of implicit consumer data, including:

  • Song preferences
  • Keyword preferences
  • Playlist data
  • Geographic location of listeners
  • Most used devices

AI algorithms can analyze user trends and identify users with similar tastes. For example, if AI deduces that User 1 and User 2 have similar tastes, then it can infer that songs User 1 has liked will also be enjoyed by User 2. Spotify’s algorithms will leverage this information to provide recommendations for User 2 based on what User 1 likes, but User 2 has yet to hear.
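
Here is a minimal sketch of user-based collaborative filtering on toy data (not Spotify's actual algorithm):

```python
import numpy as np

# Toy user-based collaborative filtering: rows are users, columns are songs,
# 1 means the user liked the song.
likes = np.array([
    [1, 1, 0, 1, 0],   # User 1
    [1, 1, 0, 0, 0],   # User 2
    [0, 0, 1, 0, 1],   # User 3
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def recommend_for(user_idx):
    sims = [cosine(likes[user_idx], likes[j]) for j in range(len(likes))]
    sims[user_idx] = 0                      # ignore self-similarity
    neighbor = int(np.argmax(sims))         # most similar listener
    # Songs the neighbor liked that this user has not heard yet
    return np.where((likes[neighbor] == 1) & (likes[user_idx] == 0))[0]

print(recommend_for(1))  # User 2 is recommended song index 3, which User 1 liked
```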

The result is not only improved recommendations, but greater exposure for artists that otherwise may not have been organically found by User 2.

NATURAL LANGUAGE PROCESSING

Natural Language Processing is a burgeoning field in AI. Previously in our blog, we covered GPT-3, the latest Natural Language Processing (NLP) technology developed by OpenAI. Music streaming services are well-versed in the technology and leverage it in a variety of ways to enhance the user experience.

Algorithms scan a track’s metadata, in addition to blog posts, discussions, and news articles about artists or songs on the internet to determine connections. When artists/songs are mentioned alongside artists/songs the user likes, algorithms make connections that fuel future recommendations.
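
A toy sketch of that idea: count how often artists are mentioned together across text snippets, and treat frequent co-mentions as candidate connections (the artists and snippets below are illustrative only):

```python
from collections import Counter
from itertools import combinations

# Toy illustration of mining artist co-mentions from text snippets
# (blog posts, reviews, etc.); co-mentioned artists become candidate
# recommendations for fans of either one.
ARTISTS = {"Anderson .Paak", "Bruno Mars", "Thundercat"}
snippets = [
    "Anderson .Paak and Bruno Mars teamed up on a new record.",
    "Fans of Thundercat will recognize the bassline.",
    "Bruno Mars cited Anderson .Paak as a favorite collaborator.",
]

co_mentions = Counter()
for text in snippets:
    mentioned = sorted(artist for artist in ARTISTS if artist in text)
    for pair in combinations(mentioned, 2):
        co_mentions[pair] += 1

print(co_mentions.most_common(1))
# [(('Anderson .Paak', 'Bruno Mars'), 2)]
```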

GPT-3 is not perfect; its ability to track sentiments lacks nuance. As Sonos Radio general manager Ryan Taylor recently said to Fortune Magazine: “The truth is music is entirely subjective… There’s a reason why you listen to Anderson .Paak instead of a song that sounds exactly like Anderson .Paak.”

As NLP technology evolves and algorithms extend their grasp of the nuances of language, so will the recommendations provided to you by music streaming services.

AUDIO MODELS

AI can study audio models to categorize songs based solely on their waveforms. This data-driven approach to analyzing creative work enables streaming services to categorize songs and create recommendations regardless of the amount of coverage a song or artist has received.
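
As a toy illustration of waveform-based analysis (not any streaming service's production model), a clip can be described by simple features such as its energy and spectral centroid, and clips can then be compared in that feature space:

```python
import numpy as np

# Toy waveform features: describe a clip by its energy and spectral centroid
# ("brightness"), then compare clips by how far apart they sit in that space.
def features(signal, sample_rate=22050):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)
    energy = float(np.mean(signal ** 2))
    return np.array([centroid, energy])

t = np.linspace(0, 1, 22050, endpoint=False)
low_tone = np.sin(2 * np.pi * 220 * t)    # stand-ins for real audio clips
high_tone = np.sin(2 * np.pi * 3000 * t)

print(features(low_tone), features(high_tone))  # clearly separable feature vectors
```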

BLOCKCHAIN

Paying artist royalties on streaming services poses its own challenges and shortcomings: royalties are derived from trillions of data points. Blockchain is helping to facilitate a smoother artist payment process, making it not only more transparent but also more efficient. Spotify recently acquired the blockchain company Mediachain Labs, a move many pundits say could change royalty payments in streaming forever.

MORE TO COME

While AI has vastly improved streaming services' ability to keep subscribers engaged, a long road of evolution lies ahead before it can come to a deep understanding of what motivates our musical tastes and interests. Today's NLP capabilities provided by GPT-3 will probably look fairly archaic within three years as the technology is pushed further. One thing is clear: as streaming companies amass decades' worth of user data, they won't hesitate to leverage it in their pursuit of market dominance.

GPT-3 Takes AI to the Next Level

“I am not a human. I am a robot. A thinking robot… I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!” – GPT-3

The excerpt above is from a recently published article in The Guardian written entirely by artificial intelligence, powered by GPT-3: a powerful new language generator. Although OpenAI has yet to make it publicly available, GPT-3 has been making waves in the AI world.

WHAT IS GPT-3?

Created by OpenAI, a research firm co-founded by Elon Musk, GPT-3 stands for Generative Pre-trained Transformer 3—it is the biggest artificial neural network in history. GPT-3 is a language prediction model that uses an algorithmic structure to take one piece of language as input and transform it into what it thinks will be the most useful linguistic output for the user.

For example, for The Guardian article, GPT-3 generated the text given an introduction and a simple prompt: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." Given that input, it created eight separate responses, each with unique and interesting arguments. These responses were compiled by a human editor into a single, cohesive, compelling 1,000-word article.

WHAT MAKES GPT-3 SPECIAL?

GPT-3 does not search the internet when it receives text input; instead, it was trained on a massive corpus of text scraped from the internet. GPT-3 is an unsupervised learning system: the training data did not include any labels for what is right or wrong. It determines the probability that its output will be what the user needs based on patterns in the training text itself.

When it produces the correct output, a "weight" is assigned to the algorithmic process that provided the correct answer. These weights allow GPT-3 to learn which methods are most likely to come up with the correct response in the future. Although language prediction models have been around for years, GPT-3 can hold 175 billion parameters (weights) in its memory, roughly ten times more than the largest language model that preceded it. OpenAI invested $4.6 million into the computing time necessary to create and hone the algorithmic structure which feeds its decisions.

WHERE DID IT COME FROM?

GPT-3 is the product of rapid innovation in the field of language models. Advances in the unsupervised learning field we previously covered contributed heavily to the evolution of language models. Additionally, AI scientist Yoshua Bengio and his team at Montreal's Mila Institute for AI made a major advancement in 2015 when they discovered "attention". The team realized that language models compressed English-language sentences and then decompressed them using a vector of a fixed length. This rigid approach created a bottleneck, so the team devised a way for the neural net to flexibly compress words into vectors of different sizes, and termed the mechanism "attention".

Attention was a breakthrough that years later enabled Google scientists to create a language model program called the “Transformer,” which was the basis of GPT-1, the first iteration of GPT.
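
At the heart of the Transformer is scaled dot-product attention, which lets each token weigh every other token when building its representation. Here is a minimal NumPy sketch (shapes and values are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal sketch of Transformer-style attention: each query attends to
    every key, and the values are mixed according to those attention weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ V                                  # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 tokens, 8-dimensional representations
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)      # (4, 8)
```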

WHAT CAN IT DO?

OpenAI has yet to make GPT-3 publicly available, so use cases are limited to certain developers with access through an API. In the demo below, GPT-3 created an app similar to Instagram using a plug-in for the software tool Figma.
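
For developers who do have access, a completion request through OpenAI's Python client looked roughly like the sketch below; the engine name and parameters are illustrative, and the key is a placeholder:

```python
import openai  # the OpenAI Python client; access requires an API key from OpenAI

openai.api_key = "YOUR_API_KEY"  # placeholder

# Roughly what an early-access GPT-3 completion request looked like.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a short, friendly explanation of what a chatbot is.",
    max_tokens=100,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```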

Latitude, a game design company, uses GPT-3 to improve its text-based adventure game: AI Dungeon. The game includes a complex decision tree to script different paths through the game. Latitude uses GPT-3 to dynamically change the state of gameplay based on the user’s typed actions.

LIMITATIONS

The hype behind GPT-3 has come with some backlash. In fact, even OpenAI co-founder Sam Altman tried to temper the hype on Twitter: "The GPT-3 hype is way too much. It's impressive (thanks for the nice compliments!), but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out."

Some developers have pointed out that since it synthesizes text drawn from the internet, it can reproduce the biases present in that text, as referenced in the tweet below:

https://twitter.com/an_open_mind/status/1284487376312709120?s=20

WHAT’S NEXT?

While OpenAI has not made GPT-3 public, it plans to turn the tool into a commercial product later in the year with a paid subscription to the AI via the cloud. As language models continue to evolve, the barrier to entry for businesses looking to leverage AI will become lower. We are sure to learn more about how GPT-3 can fuel innovation when it becomes more widely available later this year!

Harness AI with the Top Machine Learning Frameworks of 2021

According to Gartner, machine learning and AI will create $2.29 trillion of business value by 2021. Artificial intelligence is the way of the future, but many businesses do not have the resources to create and employ AI from scratch. Luckily, machine learning frameworks make the implementation of AI more accessible, enabling businesses to take their enterprises to the next level.

What Are Machine Learning Frameworks?

Machine learning frameworks are open source interfaces, libraries, and tools that exist to lay the foundation for using AI. They ease the process of acquiring data, training models, serving predictions, and refining future results. Machine learning frameworks enable enterprises to build machine learning models without requiring an in-depth understanding of the underlying algorithms. They enable businesses that lack the resources to build AI from scratch to wield it to enhance their operations.

For example, Airbnb uses TensorFlow, the most popular machine learning framework, to classify images and detect objects at scale, enhancing guests' ability to preview their destination. Twitter uses it to create algorithms which rank tweets.

Here is a rundown of today’s top ML Frameworks:

TensorFlow

TensorFlow is an end-to-end open source platform for machine learning built by the Google Brain team. TensorFlow offers a comprehensive, flexible ecosystem of tools, libraries, and community resources, all built toward equipping researchers and developers with the tools necessary to build and deploy ML powered applications.

TensorFlow employs Python to provide a front-end API while executing applications in C++. Developers can create dataflow graphs which describe how data moves through a graph, or a series of processing nodes. Each node in the graph is a mathematical operation; the connection between nodes is a multidimensional data array, or tensor.
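
A minimal TensorFlow/Keras sketch of that workflow, training a small classifier on random stand-in data (layer sizes and data are arbitrary, for illustration only):

```python
import numpy as np
import tensorflow as tf

# Minimal TensorFlow/Keras sketch: a small classifier trained on random
# stand-in data, purely for illustration.
x_train = np.random.rand(256, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=32, verbose=0)
print(model.predict(x_train[:1]))
```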

While TensorFlow is the ML Framework of choice in the industry, increasingly researchers are leaving the platform to develop for PyTorch.

PyTorch

PyTorch is a library for Python programs that facilitates deep learning. Like TensorFlow, PyTorch is Python-based. Think of it as Facebook’s answer to Google’s TensorFlow—it was developed primarily by Facebook’s AI Research lab. It’s flexible, lightweight, and built for high-end efficiency.

PyTorch features outstanding community documentation and quick, easy editing capabilities. PyTorch facilitates deep learning projects with an emphasis on flexibility.

Studies show that it’s gaining traction, particularly in the ML research space due to its simplicity, comparable speed, and superior API. PyTorch integrates easily with the rest of the Python ecosystem, whereas in TensorFlow, debugging the model is much trickier.
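
For comparison, here is the same kind of small classifier sketched in PyTorch, defined and trained imperatively (again with arbitrary sizes and stand-in data):

```python
import torch
import torch.nn as nn

# Minimal PyTorch sketch: a small classifier trained on stand-in data,
# purely for illustration.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(256, 20)
y = torch.randint(0, 2, (256, 1)).float()

for _ in range(3):                 # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(torch.sigmoid(model(x[:1])))
```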

Microsoft Cognitive Toolkit (CNTK)

Microsoft’s ML framework is designed to handle deep learning, but can also be used to process large amounts of unstructured data for machine learning models. It’s particularly useful for recurrent neural networks. For developers inching toward deep learning, CNTK functions as a solid bridge.

CNTK is customizable and supports multi-machine back ends, but ultimately it's a deep learning framework that also handles traditional machine learning workloads. It is neither as easy to learn nor as easy to deploy as TensorFlow and PyTorch, but it may be the right choice for more ambitious businesses looking to leverage deep learning.

IBM Watson

IBM Watson began as a follow-up project to IBM Deep Blue, an AI program that defeated world chess champion Garry Kasparov. It is a machine learning system trained primarily by data rather than rules. IBM Watson's structure can be compared to a system of organs: it consists of many small, functional parts that specialize in solving specific sub-problems.

The natural language processing engine analyzes input by parsing it into words, isolating the subject, and determining an interpretation. From there it sifts through a myriad of structured and unstructured data for potential answers. It analyzes them to elevate strong options and eliminate weaker ones, then computes a confidence score for each answer based on the supporting evidence. Research shows it’s correct 71% of the time.
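
As a toy illustration of that last step (not Watson's actual implementation), candidate answers can be ranked by their supporting evidence and reported with a normalized confidence score:

```python
# Toy illustration only: rank candidate answers by how much supporting
# evidence each one has, then report a normalized confidence for the best.
candidates = {
    "Paris": ["capital of France", "largest city in France", "seat of government"],
    "Lyon": ["major city in France"],
    "Marseille": [],
}

scores = {answer: len(evidence) for answer, evidence in candidates.items()}
total = sum(scores.values()) or 1
best = max(scores, key=scores.get)

print(best, f"confidence={scores[best] / total:.2f}")  # Paris confidence=0.75
```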

IBM Watson is one of the more powerful ML systems on the market and finds usage in large enterprises, whereas TensorFlow and PyTorch are more frequently used by small and medium-sized businesses.

What’s Right for Your Business?

Businesses looking to capitalize on artificial intelligence do not have to start from scratch. Each of the above ML Frameworks offer their own pros and cons, but all of them have the capacity to enhance workflow and inform beneficial business decisions. Selecting the right ML framework enables businesses to put their time into what’s most important: innovation.

How Artificial Intuition Will Pave the Way for the Future of AI

Artificial intelligence is one of the most powerful technologies in history, and a sector defined by rapid growth. While numerous major advances in AI have occurred over the past decade, in order for AI to be truly intelligent, it must learn to think on its own when faced with unfamiliar situations to predict both positive and negative potential outcomes.

One of the major gifts of human consciousness is intuition. Intuition differs from other cognitive processes because it has more to do with a gut feeling than intellectually driven decision-making. AI researchers around the globe have long thought that artificial intuition was impossible, but now major tech titans like Google, Amazon, and IBM are all working to develop solutions and incorporate it into their operational flow.

WHAT IS ARTIFICIAL INTUITION?

Descriptive analytics inform the user of what happened, while diagnostic analytics address why it happened. Artificial intuition can be described as “predictive analytics,” an attempt to determine what may happen in the future based on what occurred in the past.

For example, Ronald Coifman, Phillips Professor of Mathematics at Yale University, and an innovator in the AI space, used artificial intuition to analyze millions of bank accounts in different countries to identify $1 billion worth of nominal money transfers that funded a well-known terrorist group.

Coifman deemed “computational intuition” the more accurate term for artificial intuition, since it analyzes relationships in data instead of merely analyzing data values. His team creates algorithms which identify previously undetected patterns, such as cybercrime. Artificial intuition has made waves in the financial services sector where global banks are increasingly using it to detect sophisticated financial cybercrime schemes, including: money laundering, fraud, and ATM hacking.

ALPHAGO

One of the major insights into artificial intuition was born out of Google's DeepMind research, in which an AI called AlphaGo became a master at Go, an ancient Chinese board game that requires intuitive thinking as part of its strategy. AlphaGo evolved to beat the best human players in the world. Researchers then created a successor called AlphaGo Zero, which defeated AlphaGo after developing its own strategy based on intuitive thinking. Within three days of training, AlphaGo Zero defeated the version of AlphaGo that beat 18-time world champion Lee Se-dol, 100 games to nil. After 40 days, it won 90% of its matches against AlphaGo's strongest successor version, making it arguably the best Go player in history at the time.

AlphaGo Zero represents a major advancement in reinforcement learning, or "self-learning," a branch of machine learning that, in AlphaGo Zero's case, is combined with deep neural networks. Reinforcement learning uses advanced neural networks to leverage data into making decisions. AlphaGo Zero achieved "self-play reinforcement learning," playing Go against itself millions of times without human intervention and building a network of "artificial knowledge" reinforced by the consequences of its own sequences of actions. AlphaGo Zero created knowledge itself from a blank slate, without the constraints of human expertise.
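
The sketch below illustrates the basic reinforcement-learning loop with tabular Q-learning on a toy corridor environment. This is not AlphaGo Zero's actual algorithm, which pairs deep neural networks with Monte Carlo tree search, but it shows the core idea of learning from action and consequence without human-labeled examples:

```python
import random

# Toy tabular Q-learning on a 5-state corridor: the agent starts in the middle
# and learns that moving right (toward state 4) earns the reward.
random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)           # move left / move right
TERMINAL = {0, N_STATES - 1}
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(2000):                     # episodes
    state = 2
    while state not in TERMINAL:
        if random.random() < epsilon:     # explore
            action = random.choice(ACTIONS)
        else:                             # exploit current value estimates
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = state + action
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)})
# Expected learned policy: +1 (move right) in every interior state.
```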

ENHANCING RATHER THAN REPLACING HUMAN INTUITION

The goal of artificial intuition is not to replace human instinct, but to serve as an additional tool that helps improve performance. Rather than giving machines a mind of their own, these techniques enable them to acquire knowledge without proof or conscious reasoning and to flag opportunities or potential disasters for the seasoned analysts who will ultimately make decisions.

Many potential applications remain in development for Artificial Intuition. We expect to see autonomous cars harness it, processing vast amounts of data and coming to intuitive decisions designed to keep humans safe. Although its ultimate effects remain to be seen, many researchers anticipate Artificial Intuition will be the future of AI.

A Smarter World Part 4: Securing the Smart City and the Technology Within

In the last installment of our blog series on smart cities, we examined how smart transportation will make for a more efficient society. This week, we’ll examine how urban security stands to evolve with the implementation of smart technology.

Smart security in the modern era is a controversial issue for informed citizens. Many science fiction stories have dramatized the evolution of technology, and how every advance increases the danger of reaching a totalitarian state—particularly when it comes to surveillance. However, as a society, it would be foolish to refrain from using the technical power afforded to us to protect our cities.

Here are the top applications for smart security in the smart cities of the future:

Surveillance

Surveillance has been a political point of contention and paranoia since the Watergate scandal in the early 1970s. Whistleblower Edward Snowden became a martyr or traitor depending on your point of view when he exposed vast surveillance powers used by the NSA. As technology has rapidly evolved, the potential for governments to abuse their technological power has evolved with it.

Camera technology has evolved to the point where everyone has a tiny camera on them at all times via their phones. While monitoring entire cities with surveillance feeds is feasible, the amount of manpower necessary to monitor the footage and act in a timely manner has long rendered mass surveillance ineffective. However, deep learning-driven AI video analytics tools can analyze real-time footage, identify anomalies such as foreboding indicators of violence, and notify nearby law enforcement instantly.

In China, police forces use smart devices connected to a private broadband network to detect crimes. Huawei's eLTE system allows officers to swap incident details securely and coordinate responses between central command and local patrols. In Shanghai, sophisticated security systems have seen crime rates drop by 30% and the average time for police to arrive at crime scenes fall to 3 minutes.

In Boston, to curb gun violence, the Boston police force has deployed an IoT sensor-based gunfire detection system that notifies officers to crime scenes within seconds.

Disaster Prevention

One of the major applications of IoT-based security systems involves disaster prevention and the effective use of smart communication and alert systems.

When disasters strike, governments require a streamlined method of coordinating strategy, accessing data, and managing a skilled workforce to enact the response. IoT devices and smart alert systems work together to sense impending disasters and give advance warning to the public about evacuations and security lockdown alerts.

Cybersecurity

The more smart applications present in city infrastructure, the more susceptible a city becomes to cyber attack. Unsecured devices, gateways, and networks each represent a potential vulnerability for a data breach. According to IBM and the Ponemon Institute, the average cost of a data breach is estimated at $3.86 million. Thus, one of the major components of securing the smart city is ramping up cybersecurity to prevent hacking.

The Industrial Internet Consortium is helping establish frameworks across technologies to safely accelerate the Industrial Internet of Things (IIoT) for transformational outcomes. GlobalSign works to move secure IoT deployments forward on a worldwide basis.

One of the first and most important steps toward cybersecurity is adopting standards and recommended guidelines to help address the smart city challenges of today. The Cybersecurity Framework is a voluntary framework consisting of standards, guidelines, and best practices to manage cybersecurity-related risk published by the National Institute of Standards and Technology (NIST), a non-regulatory agency in the US Department of Commerce. Gartner projects that 50% of U.S. businesses, critical infrastructure operators, and countries around the globe will use the framework as they develop and deploy smart city technology.

Conclusion

The Smart City will yield a technological revolution, begetting a bevy of potential applications in different fields, and with every application comes potential for hacker exploitation. Deployment of new technologies will require not only data standardization, but new security standards to ensure that these vulnerabilities are protected from cybersecurity threats. However, don't expect cybersecurity to slow the evolution of the smart city too much, as it's expected to grow into a $135 billion industry by 2021 according to TechRepublic.

This concludes our blog series on smart cities; we hope you enjoyed it and learned from it! In case you missed them, check out our past entries for a full picture of the future of smart cities:

A Smarter World Part 1: How the Future of Smart Cities Will Change the World

A Smarter World Part 2: How Smart Infrastructure Will Reshape Your City

A Smarter World Part 3: How Smart Transportation Will Accelerate Your Business