How the Internet of Behaviors Will Shape the Future of Digital Marketing

In the digital age, businesses need to leverage every possible platform and cutting-edge technology in order to get a leg up on the competition. We’ve covered the Internet of Things extensively on the Mystic Media blog, but a new and related tech trend is making waves: the Internet of Behaviors. According to Gartner, roughly 40% of the global population will have their behavior tracked by the IoB by 2023.

WHAT IS THE IOB?

The Internet of Behaviors, or the IoB, exists at the intersection of technology, data analytics, and behavioral science. The IoB leverages data collected from a variety of sources, including online activities, social media, wearable devices, commercial transactions, and IoT devices, in order to deliver insights into consumers and their purchasing behavior.

With devices more interconnected than ever, the IoB tracks, gathers, combines and interprets massive data sets so that businesses can better understand their consumers. Businesses leverage analysis from the IoB to offer more personalized marketing with the goal of influencing consumer decision making.

HOW DOES IT WORK?

Traditionally, a car insurance company would analyze a customer’s driving history in order to determine whether they are a good or bad driver. In today’s digital age, however, it might take things a step further and analyze social media profiles in order to “predict” whether a customer is a safe driver. Imagine the insights it could gather from a user’s Google search history or Amazon purchases. Access to large datasets enables large companies to create psychographic profiles and gain an enhanced understanding of their customer base.

Businesses can use the IoB for more than just purchasing decisions. UX designers can leverage its insights to deliver more effective customer experiences. Large companies such as Ford are designing autonomous vehicles that adapt to each city, modulating behavior in response to vehicle traffic, pedestrians, bicycles, and more.

GBKSOFT created a mobile application that collects data from wearable devices in order to help golfers improve their skills. The application records each golf ball hit, including the stroke, force, trajectory and angle, and delivers visual recommendations to improve their swing and technique. Insights gathered through data are translated into behavioral trends that are then converted into recommendations to improve the user’s game.

The IoB is all about collecting data that can be translated into behavior which helps companies understand consumer tendencies and translate them into meaningful actions.

CONCERNS

While there is quite a bit of enthusiasm surrounding the potential impact of the IoB for B2C companies, it also raises a number of legal concerns. A New York Times article by Shoshana Zuboff, professor emerita at Harvard Business School, warns of an age of surveillance capitalism in which tech behemoths surveil humans with the intent to control their behavior.

Because technology and the ability to collect data have proliferated so quickly, privacy and data security remain under-regulated and are major concerns for consumers. For example, Facebook applied facial recognition scans in advance of the 2016 election without users’ consent, and Cambridge Analytica’s use of psychographic profiles has been the subject of much controversy. Momentum for data privacy regulation is growing, and since the IoB hinges on companies’ ability to collect and market data, forthcoming regulations could inhibit its impact.

CONCLUSION

Despite regulatory concerns, the IoB is a sector that we expect to see grow over time. As the IoT generates big data and AI evolves to learn how to parse through and analyze it, it’s only natural that companies will take the next step to leverage analysis to enhance their understanding of their customers’ behaviors and use it to their advantage. The IoB is where that next step will take place.

How Apple & Google Are Enhancing Battery Life and What We as App Developers Can Do to Help

In 1799, Italian physicist Alessandro Volta created the first electrical battery, disproving the theory that electricity could only be generated by living beings. A century and a half later, brands like Duracell and Energizer popularized alkaline batteries, which were effective, inexpensive, and soon became the key to powering household devices. In 1991, Sony released the first commercial rechargeable lithium-ion battery. Although lithium-ion batteries have come a long way since the 90s, to this day they power most smartphones and many other modern devices.

While batteries have come a long way, so have the capabilities of the devices which need them. For consumers, battery life is one of the most important features when purchasing hardware. Applications which drain a device’s battery are less likely to retain their users. Software developers are wise to understand the latest trends in battery optimization in order to build more efficient and user-friendly applications.

HARDWARE

Lithium-ion batteries remain the most prevalent battery technology, but a new technology lies on the horizon. Graphene batteries are similar to traditional batteries; however, the composition of one or both electrodes differs. Graphene batteries increase electrode density, enable faster charge cycles, and can improve a battery’s lifespan. Samsung is reportedly developing a smartphone powered by a graphene battery that could fully charge within 30 minutes. Although the technology is thinner, lighter, and more efficient, producing pure graphene batteries can be incredibly expensive, which may inhibit their proliferation in the short term.

Hardware companies are also pursuing less technologically innovative solutions to improve battery life. Many simply attempt to cram larger batteries into devices. A more elegant solution is the inclusion of multiple batteries: the OnePlus 9, for example, has a dual-cell battery. Two smaller cells can charge in parallel, so the pack charges faster than a single-cell battery of the same capacity.

SOFTWARE

Apple and Google are eager to please their end-users by employing techniques to help optimize battery life. In addition, they take care to keep app developers updated with the latest techniques via their respective developer sites.

Android 11 includes a feature that freezes cached apps to prevent their execution. Android 10 introduced a SystemHealthManager that resets battery usage statistics whenever the device is unplugged after being fully charged, or after it goes from mostly empty to mostly charged, which the OS considers a “major charging event.”

iOS devices have a strong track record on battery consumption. iOS 13 and later introduced Optimized Battery Charging, which enables iPhones to learn from the user’s daily charging routine to improve battery lifespan. The feature prevents iPhones from charging all the way to 100% immediately, reducing the amount of time the battery spends fully charged. On-device machine learning then ensures the battery is fully charged by the time the user typically wakes up.

Apple also offers a comprehensive graph showing how much battery each app consumes, both on screen and off, under the Battery tab of each device’s Settings.

WHAT APPLICATION DEVELOPERS CAN DO

Mobile apps see an average churn rate of 73% within the first 90 days after download, leaving very little room for errors or negative factors like battery drainage. There are a number of techniques application developers can employ in their designs to reduce and optimize battery-intensive processes.

It’s vital to review each platform’s battery-saving standards. Both Google and Apple offer a variety of simple yet vital tips for reducing battery drain, such as limiting how often an app requests the device’s location and minimizing inter-app broadcasting.

One of the most important tips is to reduce the frequency of network refreshes. Identify redundant operations and cut them out. For instance, can downloaded data be cached rather than using the radio repeatedly to re-download it? Are there tasks that can be deferred by the app until the device is charging? Backing up data to the cloud can consume a lot of battery on a task that is not always time sensitive.
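The cache-first idea can be sketched in a few lines. This is an illustrative helper, not any platform API; the function name, TTL, and in-memory cache are our own inventions:

```python
import time

# Illustrative in-memory cache; a real app would persist to disk.
_cache = {}
CACHE_TTL_SECONDS = 15 * 60  # hit the network at most every 15 minutes

def fetch_profile(user_id, download, now=time.time):
    """Return cached data while it is fresh; use the radio only when stale."""
    entry = _cache.get(user_id)
    if entry and now() - entry["fetched_at"] < CACHE_TTL_SECONDS:
        return entry["data"]  # served from cache, no network usage
    data = download(user_id)  # the expensive network call
    _cache[user_id] = {"data": data, "fetched_at": now()}
    return data
```

The same structure applies to deferrable work such as cloud backups: queue the task, and drain the queue only when the OS reports the device is charging.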

Wake locks keep the phone’s screen on while an app is in use. There was a time when wake locks were frequently employed, but the practice is now frowned upon. Use wake locks only when absolutely necessary, if at all.

CONCLUSION

Software developers need to be attentive to battery drain throughout the process of building their application. This begins at conception, through programming, all the way into a robust testing process to identify potential battery drainage pitfalls. Attention to the details of battery optimization will lead to better, more user-friendly applications.

How Gamification Can Boost Retention on Any App Part 3: Techniques to Keep Users Coming Back & Increase Retention

The Mystic Media Blog is in the midst of a three-part series on how gamification mechanics can boost retention on any app, not just gaming apps but also utility apps, business apps, and more. In this third entry, we explore additional techniques to keep users coming back and increase retention.

Your users have downloaded your app and are acclimated to its features. You’ve perfected your core loop to ensure users can complete meaningful actions in the app on a daily basis. Now the question becomes: how do you retain ongoing usage? The average cost to acquire a mobile app user is around $4, an investment that is wasted if the user churns. Statistics show that a 5% increase in retention can boost profitability by up to 75%.

There are a variety of techniques employed by mobile games that developers can use in non-gaming apps to keep users engaged long after the initial download.

INVEST IN THE FUTURE

An optimized application development process requires thinking about how your product can evolve beyond the initial release. Features are often cut from a first release due to schedule and budgetary constraints; it is natural in any creative endeavor to have more ideas than time and money to complete them. Saving those ideas for later releases can be an advantage, however: new features entice users to continue using the application after download and to allow push notifications for fear of missing out on updates.

Mobile games often have to confront this since the amount of content they offer is finite: a certain number of levels, achievements, and unlockables that can be completed. Games can offer additional modes and levels to entice users to come back. Similarly, non-gaming apps can offer new content, such as informative blogs, new features, and new product lines.

During the development process, plan out multiple phases and deliver new features and content updates on a regular basis. If you have a blog, host it on your application and keep users coming back for content updates.

IMPLEMENT SOCIAL FEATURES

Game developers know that “Socializers”, or users who thrive on social interaction, constitute one of the most important Bartle Types. Social features are crucial not only to retaining interest and daily usage of an application, but as a marketing technique to encourage users to engage with one another and spread the word. Once your userbase is established, implementing social features will increase engagement.

Consider implementing the following social features in phase 2 of your application:

  • Customizable user profiles: Enabling usernames, profile pictures, bios, and other customization features helps users feel more connected to the app via their profiles.
  • Rewarded social sharing: Encourage users to spread the love by rewarding them with discounts and reward points when they share to social media.
  • Likes and comments on products: Implementing comments and likes not only gives users another avenue for engagement; it also creates a platform for automated push notifications that will likely result in more daily opens.
  • Follow and friend other users: Allowing users to connect can result in meaningful social relationships which will increase their connection with your application.
  • Rewarded actions: Encourage users to complete an action for the first time by offering them some kind of reward.

PUSH NOTIFICATIONS STRATEGY

Push notifications are integral to every app developer’s retention strategy. They are the most effective vehicle for delivering timely reminders and relevant notifications about new features. Users can disallow push notifications at any time, so developers need to pick their spots or risk losing one of their most prized tools.

When developing your push notification strategy, consider the following:

  • Timing: Rather than sending push notifications all at once, target users based on their time zone. Make sure the timing of your notifications makes sense based on the message.
  • Personalization: Track app usage data and leverage it for personalized push notifications. Personalizing notifications based on a user’s behavior, such as their purchase history, helps build app loyalty and keeps notifications relevant.
  • Prudence: If you bombard users with irrelevant notifications, the decision to unsubscribe to push notifications becomes easy. Exercise restraint when sending push notifications and only send valuable information and reminders.

Users are always looking for value and discounts, which is why delivery and transportation applications often use push notifications to send discount codes. Shopping apps can also send push notifications when users have items left in their cart; a timely prompt to finish the purchase can directly lead to revenue.

KEEP INNOVATING

The app development process does not have to end with an app’s initial release into the app stores. Rolling out new features to maintain engagement with your audience and bolster your application will result in improved retention.

How Gamification Can Boost Retention on Any App Part 2: Optimize Onboarding with Gamification

The Mystic Media Blog is in the midst of a three-part series on how gamification mechanics can boost retention on any app, not just gaming apps but also utility apps, business apps, and more. In this second entry, we explore how to refine and gamify your onboarding process to keep customers coming back.

ONBOARDING

Your app has been downloaded—a hard-fought battle in and of itself—but the war isn’t over; the onboarding process has just begun.

App onboarding is the first point of contact a user has with an application, and it’s one of the most crucial parts of the user experience. Situating users in your application is the first step to ensuring they come back. Twenty-five percent of apps are opened only once after being downloaded. Many apps simply do not make it easy enough for users to understand their value and get the hang of the application, which is step one in any retention process.

Here are the top tips for smooth onboarding:

MINIMIZE REGISTRATION

A prolonged registration process can turn off new users. Users do not always have time to fill out extensive forms and can quickly become resentful of the pacing of your app. Keep registration to a minimum, minimize required fields, and get users going faster.

We recommend letting users skip registration altogether with “Continue as Guest” functionality. Games typically employ this approach; it enables users to get hands-on with the application before they undergo the tedious account creation process. Hook them with your app, then let them handle the administrative aspects later. Account creation with Google, Facebook, or Twitter can also save quite a bit of time.

Gamification is all about rewarding the user. Offer users an incentive to create their account to positively reinforce the process, and you will see more accounts created. If they haven’t created an account, send prompts to remind them of the reward they are missing out on. As we detailed in our last entry, FOMO is a powerful force in gamification.

TUTORIAL BEST PRACTICES

When a user enters your application for the first time, they generally need a helping hand to understand how to use it. Many games incorporate interactive tutorials to guide the user through functionality, and business apps are wise to do the same. However, an ineffective tutorial will only be a detriment to your application.

Pacing is key. A long tutorial not only bogs down the onboarding process; too much information at once is unlikely to stick. Space your tutorial out and break it into sections, introducing key mechanics as they become relevant. On-the-go tutorials, like Wavely’s four-screen carousel, help acclimate users quickly and easily.

And don’t forget to offer a reward! Offer users some kind of reward or positive reinforcement upon completing tutorials to encourage them to continue using the application.

AVOID DEAD ENDS AND EMPTY STATES

An empty state is a place in an application that isn’t yet populated with any information: favorites, order history, accomplishments, and so on, which require usage before they fill with content. New users who land on these pages can become confused or discouraged. Many applications offer a self-evident statement such as “No Favorites Selected”; in the case of UberEats, no message is displayed at all.

It’s confusing and discouraging for users to see these statements. Avoid discouraging your users by offering more information, for example: “Save your favorite restaurants and find them here.” Twitter’s message for users who have yet to favorite a tweet is exemplary.

CONCLUSION

Onboarding is the first and most crucial step to building a relationship with your userbase. One of the major things business apps can learn from gaming apps is that time is of the essence when it comes to capturing a user’s attention. Keep it short, punchy, and to the point.

The Top In-App Purchase Tactics for 2022

According to Sensor Tower, consumers spent $111 billion on in-app purchases, subscriptions, and premium apps on the Apple App Store and Google Play Store in 2020. How can your app take advantage and maximize revenue? Every app is different and begets a unique answer to the all-important question: what’s the best way to monetize?

App Figures recently published a study which showed only 5.9% of Apple App Store apps are paid, compared to a paltry 3.7% on Google Play. Thus, the freemium model reigns supreme—according to app sales statistics, 48.2% of all mobile app revenue derives from in-app purchases.

When creating an in-app purchase ecosystem, many psychological and practical considerations must be evaluated. Below, please find the best practices for setting in-app purchase prices in 2022.

BEHAVIORAL ECONOMICS

Behavioral economics is a method of economic analysis that applies psychological insights into human behavior to explain economic decision-making. Creating an in-app purchase ecosystem begins with understanding and introducing the psychological factors which incentivize users to make purchases. For example, the $0.99 pricing model banks on users perceiving items that cost $1.99 to be closer to a $1 price point than $2. Reducing whole dollar prices by one cent is a psychological tactic proven to be effective for both in-app purchases and beyond.

Another psychological pricing tactic is to remove the dollar sign or local currency symbol from the IAP storefront and employ a purchasable in-app currency required to purchase IAPs. By removing the association with real money, users see the value of each option on a lower stakes scale. Furthermore, in-app currencies can play a major role in your retention strategy.

ANCHORING

Anchoring is a cognitive bias in which people rely heavily on the first piece of information they receive when making decisions. In pricing, app developers establish an initial price point as an anchoring reference, then slash it to convey value. For example, an in-app purchase might be advertised at $4.99, then cut to $1.99 (60% off) as a daily deal. When users see the value in relation to the initial price point, they become more incentivized to buy.

Anchoring also relates to the presentation of pricing. We have all seen bundles and subscriptions present their value in relation to higher pricing tiers: for example, an annual subscription that costs $20/year advertised as a $36 value relative to a monthly price of $2.99/month. For your users to understand the value of a purchase, you have to hammer the point home through UI design.
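The arithmetic behind both examples is simple enough to sanity-check in code (the helper names are ours, purely for illustration):

```python
def discount_percent(anchor_price, sale_price):
    """Percent off relative to the anchor price."""
    return round((anchor_price - sale_price) / anchor_price * 100)

def advertised_annual_value(monthly_price):
    """What the monthly plan would cost over a year: the 'value' shown
    next to the annual price."""
    return round(monthly_price * 12, 2)

print(discount_percent(4.99, 1.99))   # the "$4.99 slashed to $1.99" deal
print(advertised_annual_value(2.99))  # yearly cost of the monthly plan
```

A $4.99 anchor cut to $1.99 works out to 60% off, and $2.99/month comes to $35.88/year, which rounds up to the advertised “$36 value.”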

OPTIMIZE YOUR UI

UI is very important when it comes to presenting your in-app purchases. A well-designed monetization strategy can be made moot by insufficient UI design. Users should always be 1-2 taps away from the IAP storefront where they can make purchases. The prices and discounts of each pricing option should be clearly delineated on the storefront.

Furthermore, make sure you are putting your best foot forward with how you present your prices. Anchoring increases the appeal of in-app purchases, but in order for the user to understand the deal, you have to highlight the value in your UI design by advertising it front and center in your IAP UI.

OFFER A VARIETY OF CHOICES

There are a number of IAPs trending across apps. In order to target the widest variety of potential buyers, we recommend offering a variety of options. Here are a few commonly employed options:

  • BUNDLES: Offer your IAPs either à la carte or as a bundle for a discount. Users are always more inclined to make a bigger purchase when they understand they are receiving an increased value.
  • AD FREE: Offer an ad-free experience to your users. This is one of the more common tactics and die-hard users will often be willing to pay to get rid of the ad experience.
  • SPECIAL OFFERS: Limited-time offers with major discounts are far more likely to attract user attention. Special offers create a feeling of scarcity as well as instill the feeling of urgency. Consider employing holiday specials and sending personalized push notifications to promote them.
  • MYSTERY BOX: Many apps offer mystery boxes—bundles often offered for cheap that contain a random assortment of IAPs. Users may elect to take a chance and purchase in hopes of receiving a major reward.
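Mechanically, a mystery box is a weighted random draw. A minimal sketch, with a drop table whose rewards and weights are invented for illustration:

```python
import random

# Hypothetical drop table: (reward, weight). Rare rewards get small weights.
DROP_TABLE = [("100 coins", 70), ("500 coins", 25), ("rare skin", 5)]

def open_mystery_box(rng=random):
    """Draw one reward, with probability proportional to its weight."""
    rewards = [reward for reward, _ in DROP_TABLE]
    weights = [weight for _, weight in DROP_TABLE]
    return rng.choices(rewards, weights=weights, k=1)[0]
```

Tuning the weights controls how often the “major reward” appears, and therefore how exciting, and how costly, the box feels.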

While offering users a variety of IAP choices is key, too many choices can cause analysis paralysis: users hesitate to make an in-app purchase because they’ve been given too many options. Restrict your IAPs to the most appealing options to make decisions easy for your users.

TESTING IS KEY

As with any component of app development, testing is the key to understanding your audience and refining your techniques. We recommend testing your app with a random user group, collecting their feedback, and having them fill out a questionnaire. A/B testing, or split-run testing, presents two different user groups with two different app experiences, enabling developers to see how users react to each and to evaluate which tactics are most effective.
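A common way to implement the split is deterministic hash-based bucketing, so each user always lands in the same group across sessions without storing any state. A sketch (the experiment name and variants are placeholders):

```python
import hashlib

def assign_variant(user_id, experiment, variants=("A", "B")):
    """Deterministically bucket a user so they always see the same experience."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the bucket is derived from the user ID and experiment name rather than a random roll, re-running the assignment always yields the same variant, which keeps each user’s experience consistent for the duration of the test.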

There are many tactics to help incentivize users to make that big step and invest capital in an app. Savvy developers innovate every day—stay tuned on the latest trends to keep your in-app purchase strategy on the cutting edge.

How Bluetooth Became the Gold Standard of Wireless Audio Technology

Bluetooth technology has established itself over the years as the premiere wireless audio technology and a staple of every smartphone user’s daily mobile experience. From wireless headphones, to speakers, to keyboards, gaming controllers, IoT devices, and instant hotspots—Bluetooth is used for a growing variety of functions every year.

While Bluetooth is now a household name, its path to popularity was built over the course of more than 20 years.

CONCEPTION

In 1994, Dr. Jaap Haartsen, an electrical engineer working in Ericsson’s Mobile Terminal Division in Lund, was tasked with creating an indoor wireless communication system for short-range radio connections. He ultimately created the Bluetooth protocol. Named after Harald “Bluetooth” Gormsson, the Viking king who united Denmark and Norway around 958 AD, the protocol was designed to replace RS-232 telecommunication cables using short-range UHF radio waves between 2.4 and 2.485 GHz.

In 1998, he helped create the Bluetooth Special Interest Group, driving the standardization of the Bluetooth radio interface and obtaining worldwide regulatory approval for Bluetooth technology. To this day, Bluetooth SIG publishes and promotes the Bluetooth standard as well as revisions.

BLUETOOTH REACHES CONSUMERS

In 1999, Ericsson introduced the first major Bluetooth product for consumers in the form of a hands-free mobile headset. The headset won the “Best of Show Technology” award at COMDEX and was equipped with Bluetooth 1.0.

Each iteration of Bluetooth has three main distinguishing factors:

  • Range
  • Data speed
  • Power consumption

The strength of these factors is determined by both the modulation scheme and the data packet structure employed. As you might imagine, Bluetooth 1.0 was far slower than the Bluetooth we’ve become accustomed to in 2021: a nominal data rate of 1 Mbps (roughly 0.7 Mbps of actual throughput) with a range of up to 10 meters. While we use Bluetooth to listen to audio on a regular basis today, version 1.0 was hardly equipped to handle music and was primarily designed for wireless voice calls.

THE BLUETOOTH EVOLUTION

The Bluetooth we currently enjoy in 2021 is version 5. Over the years, Bluetooth’s range and data speed have increased dramatically while its power consumption has dropped.

In 2004, Bluetooth 2.0 focused on enhancing the data rate, pushing from 0.7Mbps in version 1 to 1-3Mbps while increasing range from 10m to 30m. Bluetooth 3.0 increased speeds in 2009, allowing up to 24Mbps.

In 2011, Bluetooth 4.0 introduced a major innovation: Bluetooth Low Energy (BLE). BLE is an alternate Bluetooth segment designed for very low-power operation, giving manufacturers the flexibility to build products that meet the unique connectivity requirements of their markets. BLE is tailored toward burst-like communications, remaining in sleep mode before and after a connection initiates. The decreased power consumption takes IoT devices such as industrial monitoring sensors, blood pressure monitors, and Fitbit trackers to the next level: they can run at 1 Mbps with very low power consumption. In addition, Bluetooth 4.0 doubled the typical maximum range from 30m in Bluetooth 3.0 to 60m.

BLUETOOTH 5

Bluetooth 5 is the latest version of the technology. It doubles the bandwidth of low-energy transmissions by doubling the transmission speed, and it quadruples the typical maximum range, bringing it up to 240m. The 5.2 revision also introduces Bluetooth Low Energy Audio, which enables one device to share audio with multiple other devices.

CONCLUSION

Bluetooth is a game-changing technology that stands to revolutionize more than just audio. IoT devices, health tech, and more stand to improve as the Bluetooth SIG continues to upgrade the protocol. After nearly three decades of improvement, the possibilities remain vast for savvy developers looking to leverage the latest Bluetooth protocols to build futuristic wireless technologies.

HL7 Protocol Enhances Medical Data Transmissions–But Is It Secure?

In our last blog, we examined how DICOM became the standard format for transmitting files in medical imaging technology. As software developers, we frequently find ourselves working in the medical technology field navigating new formats and devices which require specialized attention.

This week, we will jump into one of the standards all medical technology developers should understand: the HL7 protocol.

The HL7 protocol is a set of international standards for the transfer of clinical and administrative data between hospital information systems. It refers to a number of flexible standards, guidelines, and methodologies by which various healthcare systems communicate with each other. HL7 connects a family of technologies, providing a universal framework for the interoperability of healthcare data and software.

Founded in 1987, Health Level Seven International (HL7) is a non-profit, ANSI-accredited standards developing organization that manages updates to the HL7 protocol. With over 1,600 members from more than 50 countries, HL7 International represents a brain trust incorporating the expertise of healthcare providers, government stakeholders, payers, pharmaceutical companies, vendors/suppliers, and consulting firms.

HL7 has primary and secondary standards. The primary standards are the most popular and integral for system integrations, interoperability, and compliance. Primary standards include the following:

  • Version 2.x Messaging Standard–an interoperability specification for health and medical transactions
  • Version 3 Messaging Standard–an interoperability specification for health and medical transactions
  • Clinical Document Architecture (CDA)–an exchange model for clinical documents, based on HL7 Version 3
  • Continuity of Care Document (CCD)–a US specification for the exchange of medical summaries, based on CDA.
  • Structured Product Labeling (SPL)–the published information that accompanies a medicine based on HL7 Version 3
  • Clinical Context Object Workgroup (CCOW)–an interoperability specification for the visual integration of user applications

While HL7 is employed worldwide, it is also the subject of controversy due to underlying security issues. In 2019, researchers from the University of California conducted an experiment simulating an HL7 cyber attack, which revealed a number of encryption and authentication vulnerabilities. By simulating a man-in-the-middle (MITM) attack, the experiment proved a bad actor could potentially modify medical lab results, which could lead to any number of catastrophic medical miscues, from misdiagnosis to the prescription of ineffective medications and more.

As software developers, we advise employing advanced security technology to protect patient data. Medical professionals are urged to consider the following additional safety protocols:

  • A strictly enforced password policy with multi-factor authentication
  • Third-party applications which offer encrypted and authenticated messaging
  • Network segmentation, virtual LAN, and firewall controls

While HL7 provides unparalleled interoperability for health care data, it does not provide ample security given the level of sensitivity of medical data—transmissions are unauthenticated and unvalidated and subject to security vulnerabilities. Additional security measures can help medical providers retain that interoperability across systems while protecting themselves and their patients from having their data exploited.

How DICOM Became the Standard in Medical Imaging Technology

Building applications for medical technology projects often requires extra attention from software developers. From adhering to security and privacy standards to learning new technologies and working with specialized file formats—developers coming in fresh must do a fair amount of due diligence to get acclimated in the space. Passing sensitive information between systems requires adherence to extra security measures—standards like HIPAA (Health Insurance Portability and Accountability Act) are designed to protect the security of health information.

When dealing with medical images and data, one international standard rises above the rest: DICOM. There are hundreds of thousands of medical imaging devices in use, and DICOM has emerged as one of the most widely used healthcare messaging standards and file formats in the world. Billions of DICOM images are currently in use for clinical care.

What is DICOM?

DICOM stands for Digital Imaging and Communications in Medicine. It is the international file format and communications standard for medical images and related information, implemented in nearly every radiology, cardiology, imaging, and radiotherapy device, including X-ray, CT, MRI, and ultrasound machines. It is also finding increasing adoption in fields such as ophthalmology and dentistry.

DICOM groups information into data sets. Much as a JPEG often includes embedded tags that identify or describe the image, a DICOM file embeds the patient ID so that the image always retains the necessary identification and is never separated from it. Most DICOM images are single frames, but the pixel data attribute can also hold multiple frames, allowing storage of cine loops.
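Conceptually, a DICOM data set is a mapping from numeric (group, element) tags to values, with the pixel data stored alongside patient identification. The toy sketch below uses real standard tag numbers but is otherwise a simplification: actual DICOM files are a binary encoding with typed value representations, which libraries such as pydicom handle for you.

```python
# A minimal, illustrative model of a DICOM data set: (group, element)
# tags mapped to values. The tag numbers below are the real standard
# tags; everything else (the values, the toy "image") is made up.

PATIENT_NAME = (0x0010, 0x0010)   # Patient's Name
PATIENT_ID   = (0x0010, 0x0020)   # Patient ID
PIXEL_DATA   = (0x7FE0, 0x0010)   # Pixel Data

dataset = {
    PATIENT_NAME: "DOE^JANE",
    PATIENT_ID:   "12345",
    PIXEL_DATA:   [[0, 12, 40], [7, 255, 3]],   # toy 2x3 "image"
}

# Because identification lives inside the same data set as the image,
# any system that receives the image also receives the patient context.
print(dataset[PATIENT_ID])   # → 12345
```

This co-location of identification and image data is what keeps a scan from being orphaned as it moves between systems from different vendors.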

The History of DICOM

DICOM was developed by the American College of Radiology (ACR) and the National Electrical Manufacturer’s Association (NEMA) in the 1980s. Technologies such as CT scans and other advanced imaging technologies made it evident that computing would play an increasingly major role in the future of clinical work. The ACR and NEMA sought a standard method for transferring images and associated information between devices from different vendors.

The first standard covering point-to-point image communication was created in 1985 and initially titled ACR-NEMA 300. A second version was subsequently released in 1988, finding increased adoption among vendors. The first large-scale deployment of ACR-NEMA 300 was in 1992 by the U.S. Army and Air Force. In 1993, the third iteration of the standard was released—and it was officially named DICOM. While the latest version of DICOM is still 3.0, it has received constant maintenance and updates since 1993.

Why Is DICOM Important?

DICOM enables the interoperability of systems used to manage workflows as well as produce, store, share, display, query, process, retrieve and print medical images. By conforming to a common standard, DICOM enables medical professionals to share data between thousands of different medical imaging devices across the world. Physicians use DICOM to access images and reports to diagnose and interpret information from any number of devices.

DICOM creates a universal format for physicians to access medical imaging files, enabling high-performance review whenever images are viewed. In addition, it ensures that patient and image-specific information is properly stored by employing an internal tag system.

DICOM has few disadvantages, though some pathologists perceive the header tags as a significant flaw. Some tags are optional while others are mandatory, and the additional tags can introduce inconsistent or incorrect data. The extra header data also makes DICOM files roughly 5% larger than their TIFF counterparts.

The Future

The future of DICOM remains bright. While no file format or communications standard is perfect, DICOM offers unparalleled cross-vendor interoperability. Any application developer working in the medical technology field would be wise to take the time to comprehensively understand it in order to optimize their projects.

Cloud-Powered Microdroid Expands Possibilities for Android App Developers

Android developers have a lot to look forward to in 2021, 2022, and beyond. Blockchain may decentralize how Android apps are developed, Flutter will see increased adoption for cross-platform development, and we expect big strides in AR and VR for the platform. Among the top trends in Android development, one potential innovation has caught the attention of savvy app developers: Microdroid.

Android developers and blogs were astir earlier this year when Google engineer Jiyong Park announced via the Android Open Source Project that they are working on a new, minimal Android-based Linux image called Microdroid.

Details about the project are scant, but it’s widely believed that Microdroid will essentially be a lighter version of the Android system image designed to function on virtual machines. Google is preparing for a world in which even smartphone OS’s require a stripped-down version that can be run through the cloud.

Working from a truncated Linux image, Microdroid will pull the system image from the device (tablet or phone), creating a simulated environment accessible from any remote device. It could enable a world in which users access Google Play and any Android app from any device.

What does this mean for developers?

Microdroid will open up new possibilities for Android apps in embedded and IoT spaces that require automated management and a contained virtual machine to mitigate security risks. Cloud gaming, cloud computing, and even smartphones with all features stored in the cloud are possible. Although we will have to wait and see what big plans Google has for Microdroid and how Android developers capitalize on it, at this juncture it looks like the shift to the cloud may entail major changes in how we interact with our devices. App developers are keen to keep their eyes and heads in the cloud.

Although no timeline for release has been revealed yet, we expect more on Microdroid with the announcement of Android 12.