
What Is Augmented Reality?

Augmented Reality (AR) has emerged as a transformative technology in recent years, revolutionizing the way people interact with digital content. From fun Snapchat filters to the ability to visualize furniture in your home before making a purchase, AR has found its way into almost every industry. With the ability to create immersive experiences, businesses now have a unique opportunity to increase engagement, drive sales, and build brand loyalty. It’s no surprise that AR has become a popular topic of discussion among businesses and consumers alike.   

WHAT IS AR?

Augmented Reality (AR) is a technology that allows users to overlay digital content onto the physical world in real-time. AR enhances the existing environment by adding digital elements to it. This technology is typically accessed through a mobile device or a wearable, such as smart glasses. AR works by using the device’s camera and sensors to scan and map the physical environment. The AR software then analyzes this information and superimposes digital elements onto the user’s view of the real world. These digital elements can be anything from 3D models to text to video and can be interactive, allowing users to manipulate or interact with them.  

A BRIEF HISTORY OF AUGMENTED REALITY

The concept of Augmented Reality dates back to the 1960s when Ivan Sutherland, a computer scientist, developed the first head-mounted display, which he called “The Sword of Damocles.” The device was bulky and expensive, but it laid the groundwork for future AR technology. It wasn’t until the 1990s that AR began to gain mainstream attention, with the development of the first AR application, called Virtual Fixtures, by the US Air Force.
The early 2000s saw the emergence of AR as a commercial technology, with the launch of ARToolKit, an open-source software library that enabled developers to create AR applications easily. Since then, the technology has evolved significantly, with advances in computer vision, graphics, and mobile computing enabling the creation of more sophisticated AR experiences.  

TYPES OF AUGMENTED REALITY

There are three main types of Augmented Reality: NativeAR, WebAR, and SocialAR. Each type has its unique features and capabilities, and businesses can choose the type that best suits their needs.

NativeAR

NativeAR refers to AR experiences that are created using native app development tools for specific platforms like iOS or Android. Game development tools like Unity and Unreal can build AR experiences for both major platforms at the same time. These native experiences typically provide the most advanced features and capabilities, such as 3D object recognition and tracking as well as offline access. That said, creating a NativeAR app can be time-consuming and costly, and it requires users to download an app, which can take several minutes.
Ikea makes use of NativeAR through their app Ikea Place. Through advanced 3D object recognition and tracking technology, this app enables customers to preview furniture in their homes before making a purchase.

WebAR

WebAR refers to AR experiences that are accessed through a web browser, without the need for a dedicated app. This type of AR is typically easier and more affordable to create than NativeAR and can be accessed on a wider range of devices. However, WebAR experiences are limited in tracking capabilities (most require a horizontal plane, so wall tracking isn’t supported) and require an internet connection. One example of WebAR is the Moët-Hennessy Virtual Concierge activation. This experience allows users to place the concierge in their home or space and then answer a series of questions that lead to their perfect product recommendation.

SocialAR

SocialAR refers to AR experiences that are integrated into social media platforms, such as Snapchat or Instagram. These experiences are designed to be shared with friends and can be used to promote products, services, or events. SocialAR experiences are typically easy to use and accessible to a wide audience, but they may be limited in terms of functionality. One example of SocialAR is the Steve Maddenverse Big Head Girls AR filters. The Instagram filters enabled users to see what they would look like as 3D avatars of Normani, Nessa Barrett, Jordan Alexander, Sydney Sweeney, and Justine Skye in Steve Madden’s iconic “Big Head Girl” style from the ’90s and early ’00s.

TOP AR SOFTWARE 

Augmented Reality Software applications are essential for creating immersive AR experiences. The software offers developers the tools and resources necessary to create AR experiences that meet the demands of modern consumers. Here are some of the top development platforms for AR:

8th Wall

8th Wall is a cloud-based development platform for creating WebAR experiences. It stands out from other platforms by providing developers with a set of tools that can help them create AR experiences that are highly responsive, reliable, and scalable. With 8th Wall, developers can create AR experiences that work seamlessly on both iOS and Android devices, and can be customized to meet the needs of any brand.

Zappar

Zappar is an AR development platform that provides developers with tools for creating both marker-based and markerless AR experiences. It offers an extensive analytics suite that enables businesses to measure the impact of their AR experiences on their audience. 

Blippar

Blippar is an AR development platform that provides businesses with tools for creating immersive and interactive AR experiences. It provides a wide range of customization options and templates, making it easy for businesses to create AR experiences that are tailored to their brand and audience. 

ARToolKit

ARToolKit is an open-source software library for creating marker-based AR experiences. It is easy to use and provides a range of features and customization options that enable developers to create highly interactive, customizable AR experiences.

INDUSTRY USES OF AR

AR has found a place in almost every industry, and its potential use cases are vast. Some of the top use cases for AR include:

Retail and Fashion

In the Retail and Fashion Industry, AR has the potential to enhance the customer experience by providing an immersive and interactive way to shop for products. Retailers can use AR to create virtual showrooms, allow customers to try on virtual clothes and accessories, and enable customers to visualize products in their own space before making a purchase. The technology allows retailers to create immersive experiences that bridge the gap between online and in-store shopping and help customers make more informed purchase decisions. Research has found that these informed online purchase decisions that AR enables have led to a 25–66% decrease in returns, a 50% increase in product sampling time, and a 44% increase in likelihood to add an item to the cart.

An example of this is Bloomingdale’s 150th Anniversary AR Catalog, which brought their collection to life. Customers could scan AR-enabled looks within the catalog to see how they look and move in real life, not to mention click to purchase inside the experience. This experience had a 23.4% conversion rate and a 38% engagement rate.

Education 

For Black History Month this year, ROSE created our Marching Forward AR exhibition, which commemorated moments of Black resistance in recent history. Users can tap to place the exhibit anywhere and walk along the iconic Black Lives Matter street art text, commemorating the 10th anniversary of the founding of the movement, while looking at and interacting with statues corresponding to these moments.

This experience highlights how augmented reality can be used as an important tool for immersive and accessible educational experiences. Augmented reality can be used to teach in the classroom and alongside, or instead of, IRL museum exhibits, making educational resources available to all.

Food and Beverage

In the Food and Beverage Industry, augmented reality has been leveraged to personalize the purchase experience and offer consumers a unique and memorable experience. Through AR, Consumer Product brands can set themselves apart from competitors and be successful in increasing consumer engagement and driving sales.  An example of this is ROSE’s Patrón Tequila Virtual Gift Wrapper activation. This experience allows consumers to personalize their bottles of Patrón tequila with custom messages and virtual ribbons. The user can then share their bottle on social media or purchase it directly from the Patrón website. This activation had an average session duration of 1.5 minutes, and 53% of users clicked through to purchase Patrón before exiting.

Tourism

Augmented Reality can enhance the tourism industry by providing a more interactive and engaging experience for travelers. AR can provide travelers with information and context about the places they are visiting in a fun and immersive way. It can also be used to create virtual tours of destinations, allowing travelers to explore new places without ever leaving their homes.

For example, in partnership with the Miami Design District, Mastercard™ has provided cardholders with an AR-powered virtual tour of the area. Using their mobile device, users are transported to the Miami Design District and can experience a 360-degree view of selected art pieces.

AR AND THE FUTURE

Augmented Reality is a technology that has the potential to transform the way we interact with digital content and the real world. With its ability to enhance learning, create memorable consumer experiences, and provide innovative solutions across a range of industries, AR is a technology that businesses cannot afford to ignore. By understanding the different types of AR, their industry uses, and the software available, businesses can leverage this technology to create engaging and immersive experiences that delight their customers and drive business growth.  

What Is Extended Reality?

Extended Reality (XR) is one of the many “-R” abbreviations used in the immersive technology space these days. With so many similar terms floating around, it’s easy to get confused. Fortunately, “XR” is a sort of umbrella term that probably includes any other “-R” term out there.  

What Is XR?

The difficulty with the definitions comes from the “X.” Depending on who you ask, it might not stand for anything. Some people use it as a placeholder, like a variable in a math problem. Some even pronounce “XR” as “X Reality.” Others use XR not to mean “any reality” but to mean “all reality” – for example, to discuss immersive technology generally rather than one form at a time. People in this camp are more likely to say “XR” as “Extended Reality.” People have their preferences between the two uses, but both can be handy in different situations depending on what you’re talking about.

A lot of companies getting into immersive activations want to do it because they’re flashy. They might know that they want to do something with immersive technology but not know whether they want to use AR, VR, or MR. Here the first use, X Reality, can be fitting, because they’ll only use one form of immersive technology but don’t yet know which one. A lot of academics, journalists, and technologists use “XR” as “Extended Reality” because they’re not just talking about AR or MR or VR – they’re talking about all of these technologies at once. This use is particularly helpful when talking about solutions like Varjo Reality Cloud, which operates more like AR for an on-site user and more like VR for remote users.

So, what are the differences between the other “-R” terms? And why might it be important to specify how they are being grouped?

The “-R” Abbreviations in Immersive Technology

VR, AR, MR – in all of those familiar abbreviations the “R” stands for “reality,” and that’s true for “XR” as well. But with XR being an umbrella term, it’s easier to understand if you first have a firm grasp on the other Rs.

AR – Augmented Reality

Augmented reality places virtual elements into a user’s view of their physical surroundings using a camera and either a transparent lens or a live view of a camera feed, often through a mobile phone. Most modern virtual reality headsets have a similar function called “passthrough,” but this particular technology is still largely experimental except on professional-grade devices. The virtual elements in augmented reality activations aren’t usually responsive – they add value to the user’s surroundings, or the user’s surroundings add impact to the virtual elements. For example, in the AR lookbook that ROSE developed with KHAITE, users could see models walking in their actual surroundings or view virtual representations of items in their own homes.

MR – Mixed Reality

Mixed reality is similar to augmented reality in that it all starts with the user’s environment. However, the virtual elements in an MR experience are much more intelligent and interactive. They may interact believably with one another or with the environment. They may also collect and display information on the environment from connected devices or onboard sensors. Mixed reality requires a lot more computing power, both to drive the interactive virtual elements and to display them in a meaningful way. As a result, most mixed reality experiences are made available exclusively on dedicated mixed reality devices like Magic Leap or Microsoft’s HoloLens. GigXR’s Insight series with ANIMA RES uses a HoloLens headset to display detailed and interactive anatomy models in a healthcare and education solution. If more than one person has a HoloLens, they can both join that session, or one presenter with a headset can stream or record a session to remote users without access to headsets.

VR – Virtual Reality

Virtual Reality is entirely virtual. The user’s natural field of view is entirely replaced by computer-rendered settings and elements, potentially including other users represented as avatars. That doesn’t mean that everything in a VR experience has to be built from the ground up. For example, products like Microsoft Mesh can place a live volumetric capture of an individual within a virtually constructed environment. Similarly, some VR experiences take place within 3D images or videos. VR is popular in gaming and social applications but is also used for remote collaboration, design, and training. In fact, 3lb XR and 3lb Games design enterprise training simulations and other solutions as well as games, cross-pollinating one another to make intuitive and immersive enterprise solutions as well as fun and challenging entertainment experiences.

The Acronym of Possibilities

Whether XR means one unspecified form of immersive technology or all forms of immersive technology together depends on who uses it and in what context. It’s also one of those terms that end users of the technology don’t really use at all – it’s primarily used at a relatively higher level of discourse. With this knowledge, you’ll probably be able to tease out what someone means when they say “XR” and if you don’t it’s okay to ask them to clarify. This is an emerging technology with an emerging dictionary of terms and everyone being on the same page is more important than appearing to understand nuanced specialist terminology.

Why is AR So Appealing to Marketers?

Augmented Reality is often touted as a user-friendly and efficient way to bring brands to consumers. However, due to the shock value of the still nascent technology and the engagement of a well-designed experience, AR can also be a great way to bring consumers into your brand.  

Augmented Experiences – Not Augmented Ads

Augmented Reality (AR) technology uses digital elements superimposed over a user’s live camera feed. Because most modern smartphones can run most AR experiences, just about everyone has access to AR content. That’s a powerful tool for companies looking to grow their brands. “Currently, for brands using AR to sell goods, it is quite common to use the technology to digitally place real-world items in the user’s environment. While this is a good application of the technology, brands would be mistaken to stop just there. With a bit more imagination AR can be used to create an experience that has a much more emotional impact on the user.” Because AR relies on the view of the user’s physical surroundings, including the objects, people, and settings that are meaningful to them, AR experiences are inherently personal in a way that no other medium is. This is helpful because a strong brand isn’t just about “stuff” either. Bridging physical and digital experiences can help to convey values that aren’t just material. Often, the most successful branded activations aren’t about selling things at all. Rather, experiential AR is about communicating with people on a personal level by letting them explore the world around them through the window of augmented reality.  

Experience Something New

Because AR is an emerging technology, it’s easy to limit ourselves by thinking of it strictly as a way to experience futuristic applications. AR can also allow users to put themselves in the past or experience another place as it is today. For brands that have long histories or a far reach, this can be a surprisingly impactful way to engage your community.

Step Into a Memory

Martin was an iconic sitcom that ran from 1992 to 1997. The show opened to cast members posing and dancing in front of solid color backgrounds and their names in the show’s memorable font. This unique and memorable piece of television history was begging to get an AR twist. In 2022, the surviving cast members reunited on BET+ to celebrate the 30th anniversary of the show’s premiere. At the event, visitors had access to a screen where they could dance and pose to have the magic of AR place them into the show’s familiar opening. They could then keep and share the clips, or edit them together to make their own show openings. Fans of the series were already excited to be at the reunion, which was its own piece of Martin history. However, the AR experience allowed them to be more than viewers of an event. They were able to participate in the show’s history in a unique and memorable way.

Visit Miami Without Leaving Home

“Priceless” is a promotional initiative for MasterCard holders, giving them access to membership perks including online experiences. These experiences are increasingly taking place in Augmented Reality. MasterCard recently worked with ROSE and 8th Wall to create an AR tour of artwork in Miami’s Design District. In this window-to-another-world experience, the user’s phone became their ticket to a guided tour of the renowned art installations. Touchscreen navigation even allowed Priceless members to move around the artworks to see them from any angle in their 360-degree virtual view, just as they would if touring the Design District in person. A plane ticket to the same experience in person would have been a hefty gift from MasterCard and a hefty commitment from cardholders, but the AR experience was achievable for both.

See Yourself Differently

In the Martin example, the background was all that was augmented and the people stayed the same. However, AR filters and lenses – the joy of modern social media – can help viewers see themselves in new ways as well.

Using AR for social media marketing is also a good business strategy. Social media users use the platform to share their lives with their friends as well as to share in the lives of their friends. A well-designed AR experience can bring viewers into your brand, but viewers are also more likely to share their experiences with their own followings.

Enter the Maddenverse

Clothing company Steve Madden already has a strong conventional social media strategy, which encourages customers to tag the company in social media posts that feature themselves wearing Steve Madden apparel. The company can then feature these customers’ user-generated content on its own social media platforms and website, both of which provide purchase options. In 2021, the company decided to get more immersive in their social media campaigns and launched “the Maddenverse.” For one activation, the company worked with ROSE to produce an AR filter for Instagram that turned user selfies into avatars of Steve Madden models. Users were again encouraged to share the images and tag the company’s profile. Like the Martin experience, this Maddenverse activation didn’t cost any money for users or make any money for the company. That wasn’t the point. Rather, the experience allowed fans to express their brand support in a new and fun way, growing their loyalty to the brand while also encouraging them to put the brand in front of their own social media followings. In just one week, almost 18,000 people used the filter to create personalized AR images of themselves in the Maddenverse. The users sharing those images resulted in a total of 675,000 impressions in the first week. This illustrates the kind of scale that using AR for social media marketing can achieve when users are encouraged to share their creations with others.

Give Your Audience Whatever They Want

Customers aren’t just customers anymore. They can be your audience, but they can also be creators working in a sort of partnership as casual ambassadors for your brand. This has huge potential, but it will only work if you cultivate a meaningful relationship with them. Experiential AR and using AR on social media can help to remind your audience why they’re passionate about your brand, and it can allow them to express that passion to others. But it may mean rethinking what you want to give your community and what your community wants from your brand, other than just a purchasable product.

Giving the Gift of AR this Holiday Season

Some people can be hard to shop for. There are two great ways to get around this problem.  The first is to give them something completely unique and personal to them. The second is to give them something that you already know that they love but give it to them in a unique way. Augmented Reality can be an exciting and unexpected way to explore either of these approaches.  

Isn’t VR cooler?

There’s a lot of hype around virtual reality right now – and with good reason. However, virtual reality (in addition to requiring more robust hardware) means that everything is digital. That means that everything has to be created. Items, landscapes, maybe even representations of other users. That takes a lot of time, effort, and money. Augmented reality primarily uses a person’s physical surroundings, with a couple of changes brought to you by creative technologists. That means that a single item, character, or special effect can create a completely unique experience without needing to reinvent the wheel – and everything else – on a computer.  

AR: The Gift That Keeps on Giving

Beyond all of that, AR draws on the viewer’s connection to their physical environment. It uses computer magic to bring a little something extra to the way that they experience the places, items, and even people that they already love. That brings us back to using AR to solve tricky problems on your holiday gift list.

Give Something Truly Unique

Everything experienced in AR is completely unique to the viewer because what is going on in the camera feed is different every time. No matter how special the experience is, the physical setting where the user chooses to launch that experience makes it even more personal and meaningful. ROSE created a virtual model of the real-life Edmund Pettus Bridge for an educational AR experience that viewers could visit from anywhere in the world. Some chose to go through the experience wherever it was convenient or practical for them. But users could also choose to place the experience in an area that has emotional significance to them.

A complete experience may be difficult to give as a gift, but it is possible for you to create a one-of-a-kind AR item. That could be an object or character that only exists in the digital world. It could also be a 3D model of a physical object with a special significance to the friend or loved one to whom you present it.

The great thing about digital objects is that they don’t have to exist in one format. While you might choose a special experience for the initial gifting, consider giving the file of the object itself as part of the gift. That way, the receiver can take their digital object or character with them into other virtual worlds and digital experiences.

Give Something Physical – but Augment it

Some augmented reality experiences originate in the digital world and project out into the environment, like the digital objects that we were just talking about. Other augmented reality experiences start with a physical object that computer magic only enhances. In this way, you can give a “normal” gift that stands out a lot more. Patrón’s digital wrapping project took a bottle and some care to create a magical holiday gift. Gifters created a personalized virtual wrapping for a Patrón bottle, including photographs, text messages, and other AR customizations. As a result, the end gift wasn’t “just a bottle of liquor” – it was a meaningful and personal one-of-a-kind experience, through the magic of AR.

Get Really Creative

Some AR gifts combine everything that we’ve talked about: a digitally enabled personal experience, a virtual object, and a physical object with augmented value.

The adidas DEERUPT sneaker launch involved a physical box that appeared empty. Inside that box was a grid that served as a target for a social media-friendly AR version of the shoe. This allowed fans to enjoy a product “unboxing” before the shoe was physically available. Giving a gift like this allowed a special early opening of a product naturally followed by the object itself.

It’s not every day that a company does something like a virtual unboxing. However, you can apply this idea to your own gifts. Give someone a marker that launches an AR experience, even a simple one, while the “real” gift is something much bigger. That could be an item that hasn’t arrived yet, a trip someplace special – anything that you can think of. You can also use AR to let your friend or family member choose their own gift. Fashion brand KHAITE partnered with ROSE to bring models and fashions into a user’s home using augmented reality. Users got to see a personalized fashion show in their own chosen environment – and then had the option to buy the fashions that they viewed.

Think Outside the Box

This article has provided a few ideas and a few examples. But no article could capture all of the possibilities that AR presents for gift giving. In part, that’s because AR allows us to think outside of the box – or any other physical constraints. So, let your imagination run wild.

Freely available AR object and experience building platforms are proliferating but still require a certain amount of skill. So, this article has included links to sites that you can use to have an expert help you create a digital item or experience of your own. You can also keep an eye out for ready-made experiences from brands who are increasingly using AR in creative ways.

Off with their heads: Body and clothing segmentation in Spark AR

Implementing a filter with three different effects on the background, body and clothing.
 
Steve Maddenverse Instagram filters
Many of the augmented reality experiences that ROSE produces are focused on adding new objects to or ways of interacting with the already existing world around us, allowing the user to interface with virtual extensions of a product, brand, or idea. However, lately we have seen a renewed interest from brands in creating person-centric experiences, i.e., the selfie. Most recently, we delved into this world when working on the Steve Maddenverse campaign’s Instagram filters.
Of course, person-centric experiences are hardly a new idea. Selfie filters developed for Instagram and Snapchat abound, having exploded in popularity through the last five years. These filters can do anything from magically beautifying someone’s face to aging them, warping them into fearsome orcs and goblins, changing their hair or facial features or jewelry and accessories, or swapping them entirely with someone else’s. This, too, is a kind of augmented reality, and it has its own huge potential.  
An Instagram face swap filter. Credit to amankerstudio on Instagram, 2017.
Alongside that potential come several unique challenges, of which the main one is body tracking. An AR engine needs to identify what sections of the camera feed belong to a person as well as how and where they move — perhaps even tracking the position and orientation of individual body parts. And once we have that information, we can take it a step further to address an even more specific hurdle: segmentation.  
A body tracking algorithm in action. Credit to MediaPipe and Google AI Blog, 2020.

What is Segmentation?

Segmentation is the process of identifying a body part or real object within a camera feed and isolating it, creating a “cutout” that can be treated as an individual object for purposes like transformation, occlusion, localization of additional effects, and so on.
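In image terms, that “cutout” idea is just an alpha channel derived from a mask. Here is a minimal NumPy sketch (an illustration only — it assumes a binary mask has already been produced by some segmentation step, which is outside this snippet):

```python
import numpy as np

def cutout(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Turn a camera frame plus a binary segmentation mask into an RGBA
    "cutout" that can then be transformed, occluded, or composited as an
    individual object.

    frame: H x W x 3 RGB image.
    mask:  H x W array of 0/1, where 1 marks the segmented region.
    """
    h, w, _ = frame.shape
    rgba = np.zeros((h, w, 4), dtype=frame.dtype)
    rgba[..., :3] = frame                           # keep the original colors
    rgba[..., 3] = mask.astype(frame.dtype) * 255   # opaque inside, transparent outside
    return rgba
```

Everything outside the mask ends up fully transparent, so the isolated region behaves like its own layer in any downstream compositing step.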

Types of Segmentation:

Hair Segmentation: Changing a user’s hairstyle requires precise segmentation of that user’s real hair so that it can be recolored, resized, or even removed from the rendered scene and replaced entirely without affecting other parts of the scene, such as the user’s face.

Body Segmentation: Allows the user’s background to be replaced without tools like a green screen, throwing the user into deep space, lush jungles, the Oval Office, or anywhere else you would like to superimpose your body outline against.

Skin Segmentation: Identifies the user’s skin. This could power an experience in which a user wears virtual tattoos that stop at the boundaries of their clothes and move along with their tracked body parts – almost perfectly lifelike.

Object Segmentation: Gives us the ability to perform occlusion, so that AR objects might be partially hidden behind or beneath real ones as they would logically be in reality, or even to “cut and paste” those real objects into virtual space.
Person, skin, and hair segmentation via Spark AR. Credit to Facebook, 2021.
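To make one of these concrete, here is a minimal sketch of the skin-segmentation example: a tattoo texture blended onto the frame only where a precomputed skin mask says there is skin, so the effect stops at clothing boundaries. This is plain NumPy, not Spark AR API code, and the mask and texture inputs are hypothetical:

```python
import numpy as np

def apply_virtual_tattoo(frame: np.ndarray, tattoo: np.ndarray,
                         skin_mask: np.ndarray) -> np.ndarray:
    """Blend an RGBA tattoo overlay onto an RGB frame, clipped to skin.

    frame:     H x W x 3 RGB camera frame.
    tattoo:    H x W x 4 RGBA overlay already aligned to the frame.
    skin_mask: H x W array of 0/1 from a skin segmentation step.
    """
    # Effective alpha = the tattoo's own alpha, zeroed wherever there is no skin.
    alpha = (tattoo[..., 3:4] / 255.0) * skin_mask[..., None]
    blended = alpha * tattoo[..., :3] + (1.0 - alpha) * frame
    return blended.astype(frame.dtype)
```

Because the mask moves with the tracked body, re-running this blend per frame keeps the tattoo attached to the skin rather than floating over clothing.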

Achieving Segmentation

How do we achieve segmentation? Approximating shapes from a database would never be even close to realistic. Identifying boundaries by color contrast is a no-go for people with hair or clothes that are close to their skin tone. Establishing a body position at experience start (“Strike a pose as per this outline”) and then tracking changes over time is clunky and unreliable. We need something near-instantaneous that can recalibrate on the fly and has a wide margin of approximation for adjustment. We need something smarter!

The answer, of course, is artificial intelligence. These days, “AI” is more often than not a buzzword thrown around to mean everything and yet nothing at all, but in this case we have a practical application for a specific form of AI: neural networks. These are machine learning algorithms that can be trained to recognize shapes or perform operations on data. By taking huge sets of data (for example, thousands and thousands of photos with and without people in them) and comparing them, neural networks have been trained to recognize hands, feet, faces, hair, horses, cars, and various other animate and inanimate entities – perfect for our use case.
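Assuming such a trained network outputs a per-pixel confidence in [0, 1] that each pixel belongs to the target class (the model itself is out of scope here), the step from network output to the binary mask an effect consumes is a cheap per-frame threshold — which is exactly why it can recalibrate on the fly:

```python
import numpy as np

def confidence_to_mask(confidence: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Collapse a segmentation network's soft per-pixel confidence map
    into the binary is/isn't mask that downstream AR effects consume.
    Cheap enough to rerun on every camera frame."""
    return (confidence >= threshold).astype(np.uint8)
```

Raising the threshold trades a tighter, more conservative cutout against the risk of eating into the segmented subject; real pipelines often also feather the mask edge rather than using a hard cut.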

Training a neural network to identify objects and remove backgrounds. Credit to Cyril Diagne, 2020.

All of this is not to say that segmentation is on the cutting edge of new technology. Spark AR, for example, has had segmentation capabilities for at least two years. However, a pretty recent update to the platform allows users to combine multiple classes of segmentation in a single effect, and you can read more about that update here. This new capability opens the door to a host of more complex effects, and so in this case study, we use multiple-class segmentation to apply separate effects to the user’s background, body (face, hair, and skin), and clothing.
Sketching out a triple segmentation filter. Credit to Eric Liang, 2021.
Each of these layers is easily accomplished on its own using a segmentation texture from the camera. For example, Spark AR provides a "Background" template that shows how to accomplish person segmentation and insert a background image. Breaking the template down, we see that this is accomplished by first creating two flat image rectangles that overlay and fill the device screen. The topmost of these will be the person, and the one underneath will feature the background image.

For the top layer (named "user" in the template), the extracted camera feed is used as a color texture. Beginners will observe that there's no visible distinction from a blank front-facing camera project at this point. This is because the normal display is, for all practical purposes, exactly that: a flat image rectangle that fills the screen and displays the camera feed. We've essentially duplicated it in a way that we can tinker with and put our version on top, obscuring the "normal" display.

Next, a person segmentation texture is created and used as the alpha texture for the user rectangle. This sets the alpha value, which determines transparency, for all parts of the user rectangle outside of the identified person to 0, so that those areas are completely transparent and show what is layered underneath instead. Within the area that is an identified person, the camera feed continues to show through. This shows us that the segmentation texture is actually made up of two binary areas, "is" and "isn't", without any information as to what that is/isn't actually refers to. Those familiar with image manipulation know this concept as "layer masking". The camera feed is accessed twice per frame: once to determine that is/isn't binary and create a texture map (practically equivalent to a layer mask) recording that information, and once to check what color each pixel within that map should be. (Astute observers will note that it doesn't matter in which order these checks occur.)
Finally, the template allows for any desired background image to be slotted in as the background rectangle’s color map. Voilà: person segmentation! We’ll replace the stock image with a bit of outer space for our aesthetic.  
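The layer-masking logic above can be sketched in a few lines, with NumPy arrays standing in for the template's textures; shapes and values are illustrative, not Spark AR's real API.

```python
import numpy as np

# Sketch of the template's layer masking. NumPy arrays stand in for
# Spark AR's textures; shapes and values are illustrative.

h, w = 4, 4
camera_feed = np.full((h, w, 3), 200.0)  # the "user" rectangle's color texture
background = np.zeros((h, w, 3))         # the outer-space image underneath
person_mask = np.zeros((h, w))           # the person segmentation texture
person_mask[1:3, 1:3] = 1.0              # 1 = "this pixel is a person"

# The mask becomes the alpha channel: opaque over the person, transparent
# elsewhere, letting the background rectangle show through.
alpha = person_mask[..., None]
composite = alpha * camera_feed + (1.0 - alpha) * background
```

Inside the mask the camera feed survives; outside it, the background image wins, which is exactly the behavior the template produces on screen.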
Background segmentation using Spark AR’s template.

Next step: adding an effect to the face. Problem: we don’t have a built-in “clothes” segmentation! We have “hair” and “body”, but nothing that will allow us to easily separate face and skin from clothes. Snapchat’s Lens Studio is nice enough to provide built-in “upper garment” segmentation, but Spark AR is not so forthcoming. We’ll have to get a little creative with the options available to us. Quick thinkers may have already seen the simple mathematical solution. Our segmentation options are “person”, “hair”, and “skin”. Person minus hair and skin is…exactly what we’re looking for. By combining the hair and skin segmentation textures and subtracting that from the person texture, we get the clothes left behind. Let’s get cracking on what exactly this looks like in patch form.  
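In array terms, the subtraction described above looks like this, with toy binary masks standing in for the segmentation textures.

```python
import numpy as np

# The "person minus hair and skin" arithmetic in array form. Toy binary
# masks stand in for Spark AR's segmentation textures.

person = np.array([[1, 1, 1],
                   [1, 1, 1],
                   [0, 1, 0]])
hair = np.array([[1, 1, 1],
                 [0, 0, 0],
                 [0, 0, 0]])
skin = np.array([[0, 0, 0],
                 [1, 1, 1],
                 [0, 0, 0]])

# Union of hair and skin, subtracted from person; clipping keeps it binary.
clothes = np.clip(person - np.clip(hair + skin, 0, 1), 0, 1)
```

Whatever part of the person is neither hair nor skin remains, and that remainder is the clothes mask we feed into the patch graph.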
Demonstrating multiple segmentation.

As a very basic implementation of the concept, it’s a little rough around the edges, but it gives us what we need. I implement some tweaks for the sample screenshots, but they will not be covered in this case study, and I encourage you to explore, create, and refine your own solutions! “EZ Segmentation” is a patch asset straight from the Spark AR library, and provides options for adding effects to either the foreground (body) or the background (clothes). It’s pretty easy to build effects on their own and then pass the texture into the slot. Here, we add in a light glow gradient paired with a rippled lens flare to the foreground and a starry animation sequence to the background.  
The filter in action.

You can already imagine the kinds of things we can do here with the power to animate designs on the user's clothing. Conversely, we can leave the clothing untouched and add effects to the user's skin, whether that be coloring it in à la Smurf or Hulk, or erasing it entirely for an "Invisible Man"-type filter. These suggestions are just a place to start, of course; multiple-class segmentation is powerful enough to open the door to a galaxy's worth of potential. Show us what you can do!

Render streaming: taking AR to the next level

What’s the deal with AR, anyway?

XR technology is widely touted as having infinite potential to create new worlds. You can design scenes with towering skyscrapers, alien spacecraft, magical effects, undersea expanses, futuristic machinery, really anything your heart desires. Within those spaces, you can fly, throw, slash, burn, freeze, enchant, record, create, draw and paint⁠ — any verb you can come up with. The only limit is your imagination!
Painting in VR with Mozilla’s A-Painter XR project. Credit: Mozilla 2018.

Sounds cool. What’s the problem?

Well, all of that is true — to a point. Despite all of our optimism about AR and VR's potential, we are still bound by the practical limitations of the hardware. One of the biggest obstacles to creating immersive, interactive, action-packed, high-fidelity XR experiences is that the machines used to run them just don't have the juice to render them well. And the machines that do are either high-end devices with a steep monetary barrier to entry, making them inaccessible, or too large to be portable and therefore ill-suited to the free movement you would expect from an immersive experience.

That's not to say that we can't do cool things with modern XR technology. We're able to summon fashion shows in our living rooms, share cooperative creature-catching gaming experiences, alter our faces, clothing, and other aspects of our appearance, and much, much more. But it's easy to imagine what we could do past our hardware limitations. Think of the depth, detail, and artistry boasted by popular open-world games on the market: The Elder Scrolls: Skyrim, The Legend of Zelda: Breath of the Wild, No Man's Sky, and Red Dead Redemption 2, just to name a few. Now imagine superimposing those kinds of experiences onto the real world, augmenting our reality with endless new content: fantastic flora and fauna wandering our streets, digital store facades overlaying real ones, and information and quests to discover at landmarks and local institutions.
Promotional screenshot from The Legend of Zelda: Breath of the Wild. Credit: Nintendo 2020.
There are many possibilities outside of the gaming and entertainment sphere, too. Imagine taking a walking tour through the Roman Colosseum or Machu Picchu or the Great Wall of China from your own home, with every stone in as fine detail as you might see if you were really there. Or imagine browsing through a car dealership or furniture retailer's inventory with the option of seeing each item in precise, true-to-life proportion and detail in whatever space you choose. We want to get to that level, obviously, but commercially available AR devices (i.e., typical smartphones) simply cannot support such experiences. High-fidelity 3D models can be huge files with millions of faces and vertices. Large open worlds may have thousands of objects that require individual shadows, lighting, pathing, behavior, and other rendering considerations. User actions and interactions within a scene may require serious computational power. Without addressing these challenges and more, AR cannot live up to the wild potential of our imaginations.

So what can we do about it?

Enter render streaming. Realistically, modern AR devices can't take care of all these issues…but desktop machines have more than enough horsepower. The proof is in the pudding: the open-world video games mentioned above show that we can very much create whole worlds from scratch and render them fluidly at high frame rates. So let's outsource the work!

The process of render streaming starts with an XR application running on a machine with a much stronger GPU than a smartphone (at scale, a server, physical or cloud-based). Each processed, rendered frame of the experience, generated in real time, is sent to the display device (your smartphone). Any inputs from the display device, such as the camera feed and touch, gyroscope, and motion sensors, are transmitted back to the server to be processed in the XR application, and then the next updated frame is sent to the display device. It's like on-demand video streaming, with an extra layer of input from the viewing device.

This frees the viewing device from having to handle the computational load. Its only responsibility now is to stream the graphics and audio, which modern devices are more than capable of doing efficiently. Even better, this streaming solution is browser-compatible through the WebRTC protocol, meaning that developers don't need to worry about cross-platform compatibility and users don't need to download additional applications.
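The round trip described above can be sketched as a toy loop, with plain Python queues standing in for the WebRTC transport; every name here is illustrative, not any engine's real API.

```python
from collections import deque

# Toy simulation of the render-streaming loop: the device sends inputs up,
# the server renders, and the finished frame comes back down.

def render_frame(frame_id, inputs):
    # Server side: the heavy rendering happens here, far from the phone.
    return {"frame": frame_id, "camera_pose": inputs["pose"]}

def streaming_session(num_frames):
    uplink = deque()    # device -> server: sensor/touch input
    downlink = deque()  # server -> device: rendered frames
    device_pose = (0, 0, 0)
    received = []
    for frame_id in range(num_frames):
        # 1. Device sends its latest inputs upstream.
        uplink.append({"pose": device_pose})
        # 2. Server renders the next frame from those inputs.
        downlink.append(render_frame(frame_id, uplink.popleft()))
        # 3. Device displays whatever frame arrives.
        received.append(downlink.popleft())
        device_pose = (frame_id + 1, 0, 0)  # the user moves between frames
    return received

frames = streaming_session(3)
```

In a real deployment each queue hop is a network trip, which is exactly where the latency discussed below creeps in.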
Diagram of render streaming process using Unreal Engine. Credit: Unreal Engine 2020.
There is just one problem: it takes time for input signals to move from the streaming device to the server, be processed, and have the results transmitted back. This is not a new challenge; we have long struggled with the same latency issue in modern multiplayer video games and other network applications. For render streaming to become an attractive, widespread option, 5G network connectivity and speeds will be necessary to reduce latency to tolerable levels. Regardless, it would be wise for developers to get familiar with the technology. All the components are already at hand: not only is 5G availability increasing, but Unity and Unreal Engine have also released native support for render streaming, and cloud services catering to clients who want render streaming at scale are beginning to crop up. The future is already here — we just need to grab onto our screens and watch as the cloud renders the ride.

How AR Brought KHAITE’s Latest Fashion Line Directly To Consumers

At ROSE we build relationships around fast and comprehensive solutions. Our goal when taking on projects is to build seamless solutions and provide a path for further innovation. We want to be a repeat partner for augmented reality work: we find the path forward through innovation and then build on that existing framework. This process has led us to our second partnership with KHAITE. This week we launched our second experience with the high-powered fashion brand, and over a short period of time we've been able to increase sales and bring AR into the hands of fashion lovers.

What We Did

As the fashion world had to adapt and move to a purely digital landscape — fashion shows pushed to video, new clothing lines shipped to prospective buyers — brands had to move quickly to break through the noise. ROSE and Chandelier Creative helped KHAITE bring their newest collection to life. With emerging technology, ROSE brought KHAITE's footwear designs into the homes of their customers, buyers, and the wider market, giving them a deep visual experience unlike any other fashion brand has been able to accomplish. As the world continues to grapple with these unprecedented times, this technology will become a cornerstone of how fashion powerhouses market their designs to their customers.

ROSE decided to build a WebAR application for accessibility and to take the burden off consumers. The WebAR experience is widely supported, deeply interactive, and highlights the unique details of KHAITE's footwear designs in a way that offers endless creative freedom for the user. KHAITE shipped lookbooks, made by Chandelier Creative, with embedded QR codes that, when scanned, take users to the AR experience, where they can see the shoes to scale in their own homes. Consumers can tap whichever shoes they'd like to get a closer look at and place them in their space, getting a feel for the items without seeing them in person. The experience allowed KHAITE to create a visual showcase that otherwise would only exist inside one of their showrooms.

In the second iteration of the experience, for KHAITE's pre-fall 2021 collection, ROSE expanded the experience to include models rendered in augmented reality, allowing users to see the clothing the way it was meant to be seen.
While still using WebAR, this second experience utilized green screen video to build a full runway show with models wearing the new line as they walk up and down whatever environment the user chooses.
 

Challenges

Understanding the mathematics of 3D space is a learning curve in itself, but creating an experience accessible in a browser, as opposed to a native mobile application, makes things even more difficult, with issues like sensor permissions and browser compatibility. Adding light sources to a scene requires a careful balance between the existing, real-life lighting observed by the camera and computed lighting that best accentuates the highlights and shadows of the models in the AR scene. This challenge was multiplied tenfold as we created specific lighting setups to complement each unique shoe model. The material of each model was a major consideration; a shoe with a soft, quilted insole and white leather straps needed soft, glowing illumination, whereas a black patent leather boot needed bright point lights that played off the glossy reflectivity of the material. The end result was an experience tailored to each model, allowing users to see each one in its best light.

When we started on the second KHAITE experience, we ran up against challenges that came with showcasing an entire clothing line. KHAITE is a premium brand that places a lot of emphasis on the quality and texture of the materials in its garments and accessories. WebAR is a resource-constrained medium, meaning lower file sizes and compression are required, and capturing 4K, high-framerate, high-quality assets for delivery via the web is a challenge. Involving models and movement increases that difficulty further. Thankfully, we were able to get incredibly high-quality green screen footage, enabling the quality of the looks to shine through.

Impact

As the fashion world grapples with how to convert sales and stay afloat amid the pandemic, finding ways to integrate experiences with seamless shopping capabilities is now the only viable option. Within this experience, the sales were proof enough that this execution works for high-fashion labels. Fashion is a tactile and textured experience, and amid social distancing, brands have hurdles to jump to create moving experiences for consumers. Companies are integrating new technology to bring fashion shows to people's phones, computers, and homes.

For the first experience ROSE built for KHAITE, sales increased significantly in just a few short weeks. Evan Rose, CEO and founder of ROSE, said, "We are proud to have partnered with KHAITE and Chandelier Creative to create an experience that changes how consumers engage with physical products in an increasingly digital world. We're excited to be a part of driving how the retail and fashion industries engage with consumers."

As this climate continues and consumer confidence decreases, focusing on the clothes and the experience that can be had without in-person shopping is more important than ever. Using augmented reality to elevate fashion in this time of social distancing allows for a rich, interactive experience for all users and customers, letting the color, texture, and life of the garments come through.

ROSE And Patrón Partner To Build The Spirit Industry’s First User-Generated AR Experience

Amid a global pandemic the solutions to some of our most basic problems need some creativity. With COVID’s continued presence in our lives, social distancing may have to continue into a time that is usually filled with parties, family gatherings and holiday festivities. People will be looking for ways to make new traditions, and to connect with their loved ones from afar.

Patrón needed a way to help customers connect despite holiday plans shifting across the country, while also maintaining their brand narrative. We worked with Patrón to create a first-of-its-kind digital wrapping as a special gift this holiday season, and beyond, to solve this specific problem. This experience provides a sentimental and original take on gifting alcohol as well as gives customers first-hand experience not just using augmented reality, but harnessing it to make something themselves.

How Does It Work?

Gifters of Patrón can use a microsite developed by ROSE to create a custom wrapping including a photo, text, and stickers that will transform into a 360-degree augmented reality (AR) gift wrapping around their Patrón bottle. This gives customers a chance to use this emerging technology in a new way that hasn’t been available in retail before.

Select A theme, add photos, text, and stickers and then see it come to life.

“With COVID-19 impacting most celebrations this holiday season, we wanted to give customers a way to continue to celebrate with each other while social distancing,” Nicole Riemer, the art director on the project said. “By creating a custom wrapping, customers can take the act of gifting alcohol from an easy to a thoughtful one. During a time when you might not be able to gift in person, creating a custom wrapping with photos, stickers, and text provides that personal touch that is missing from not being able to gift it in person.”

Using WebGL in both 2D and 3D allows users to see their content change between dimensions in real time. Gifters can then use built-in recording and sharing technology to share the gift with the recipient as well as on social media.

“Creating these designs digitally allows for the process to be instantaneous and affordable, rather than waiting for something to get engraved or physically customized, without losing the ability to share that someone is thinking of you on social media,” Riemer said.

By providing customers the ability to customize their gift of Patrón for both different occasions and gift recipients, we are showing them that Patrón isn’t the “mass brand” they think it is. This virtual gift allows distance to not be a barrier in creating something thoughtful that nurtures customers’ need for growing and maintaining their relationships.

Using augmented reality for this experience had several advantages. The most obvious is that it provides a sentimental gift without having to enter a store or be in the same physical space as the recipient — helping maintain social distancing amid the pandemic. Additionally, augmented reality provides a way for users to generate their own content while maintaining the PATRÓN brand.

“The challenge with AR has always been figuring out how we can take new dimensions and connect them to the ones we’re familiar with in creative, expressive, and helpful ways,” Eric Liang, front-end/AR engineer on the project said. “The AR experiences that ROSE has previously created have each addressed that challenge by taking something important to us — something unseen or out of the ordinary that we wanted to showcase — and constructing it in the user’s world. This time, we’re handing the reins to the user. In this new collaboration, we’re letting users create and realize something that’s uniquely their own.”

Harnessing the power of AR will bring all the holiday cheer customers could be missing into the palm of their hand and inside their home — connecting people who want to be together this holiday. Additionally, PATRÓN has a history of creating limited-run packaging and bottles, and this experience offers customers peak exclusivity with the ability to customize every individual bottle they purchase, so the virtual expansion of exclusive boxes was a natural progression for the brand.

In designing this web application, we identified two different types of users. As Patrón’s target demographic for this experience is 21–35, we were less concerned with the technological literacy of the user. Additionally, since this started as a concept that would be mainly pushed through social media, we were bound to attract younger users that would already be at least slightly familiar with augmented reality from exposure through SnapChat and Instagram. After determining this demographic information for our target user, the next question was what a user would want to create when using this tool. This led us to determining the following use cases:

Creator 1: The user that wants to create a really thoughtful collage that they want the recipient to see that they spent time on. They expect that their gift will be shown to others and potentially shared on social media in a similar fashion to birthday posts.

Creator 2: The user that is looking to create a quick gift that still wows the intended recipient. They want to expend minimal effort, but get the same praise and reaction as someone who spends a lot of time on their creation.

In order to satisfy the need for a quick gift, we created "themes" that someone can choose from at the start of the experience, allowing them to upload a single photo and create a designed bottle in five clicks (including previewing their design). For those who want to spend more time on their creation, we provide the ability to start from scratch and choose the content that goes on every side of the bottle.

Select A Theme

In choosing the predetermined content that users can apply to their digital bottles, we focused on a few things. The first was to choose assets that could be used for multiple occasions, holidays, and were non-denominational. The second was to underscore the socially distant benefit of this gift and continue to have people drink responsibly even when gatherings are not encouraged. The third was to make sure that the assets could be used in many combinations and still create a wrapping that looks high end.

Once we determined the user experience and the content types that could be placed on the wrappings, we had to find a way to map each user's content to a 3D bottle in real time, show the user their creation on this model before they send the augmented reality link to their recipient, and ultimately render each individual experience in augmented reality.

The technical inspiration for this experience began with an understanding of how WebXR, the implementation of augmented reality in a web browser, operates. WebXR maintains the conceptual model of everything that exists in an extended reality scene: where each virtual object is, where light is coming from, where the "camera" stands and observes, how the user interacts with and changes all of these things, and so on. Imagine closing your eyes and understanding where everything around you in the room is: your desk, the floor, a lamp, rays of sunlight coming through a window, even your own hands. Now open your eyes and actually observe those things. That is what WebGL does: it is the graphics engine that takes the theoretical model processed by WebXR and paints it on a screen, rendering the virtual existence of matter and light into visibility.

While we wanted to capture the same magic of seeing something you create exist in 3D space, it was important that it would be accessible to everyone, both in terms of the technology and creativity. We wanted it to be usable from an everyday mobile device, without the need for expensive VR technology. We also didn’t want to require the user to be a painter, have an empty warehouse to dance around with VR goggles on, or have an intricate understanding of 3D sculpture or set design to maximize the reward of the experience.

There were a lot of moving parts that needed to be addressed. There needed to be a simple, intuitive interface for the user to customize their design and we needed to apply the design to a 3D model composed of a number of different materials and textures, from soft cork to clear pebbled glass to shiny metallic gift wrap. The experience needed to show that customized bottle back to the user in an interactive, attention-grabbing 3D experience. And finally, we needed to be able to scale the experience for a mass marketing campaign, which meant preparing for a large number of concurrent users with different devices and intents. We settled on technologies to address each of these challenges: a React/HTML Canvas microsite to design the wrapping, an 8th Wall/A-FRAME experience to view it, and a serverless API backend with cloud storage to support scale.

The next step was to source a 3D model of the bottle. We worked with a 3D artist and modeller, iterating over the model until each detail was as accurate as possible, and then continued to optimize our renders. This involved adjusting lighting through trial and error until we found the best setup to illuminate the bottle and make the glass and its reflectiveness as lifelike as possible, as well as customizing the physical material shaders for each node of the finalized model: the cork, the ribbon, the glass, the liquid, and the wrapping.

3D model renderings of the Silver Patrón bottle.

Later on, we also realized that we needed a dynamic approach to the wrapping's transparency. If the user chose to lay their graphics directly over the glass without using a background, those stickers, photos, and text would need to be opaque while leaving the glass transparent. The answer was taking the texture maps we generated with each user-created design and filtering them into black and white, so they effortlessly served double duty as alpha maps to control transparency.
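That black-and-white filtering step can be sketched as a simple threshold, with a NumPy array standing in for the generated texture map; sizes and values are illustrative.

```python
import numpy as np

# Sketch of deriving an alpha map from a user-created wrapping texture:
# any pixel the user drew on becomes white (opaque), the rest black
# (transparent glass). The array is a stand-in for the real texture map.

design = np.zeros((4, 4, 4), dtype=np.uint8)  # RGBA texture, fully transparent
design[1:3, 1:3] = [255, 0, 0, 255]           # a red sticker the user placed

# Threshold the alpha channel into a black-and-white map.
alpha_map = np.where(design[..., 3] > 0, 255, 0).astype(np.uint8)
```

Because the map is derived from the design itself, every user-generated wrapping automatically gets a matching transparency mask with no extra authoring.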

Example of an alpha map.

While the experience would be accessible to everyone, we wanted those who had a Patrón bottle handy to be able to integrate it into the experience. It's not yet feasible to use a real-life bottle of Patrón to anchor the experience, so we looked outside of the box — and settled on the actual box that each bottle of Patrón comes in. This gave us the opportunity to leverage 8th Wall's image target feature, using the Patrón bottle image on the side of each box to trigger the dramatic emergence of the virtual bottle from the physical box.

Built to share on social, this augmented reality experience allows for recording within the WebAR experience.

Those without a box can watch the bottle appear on the plane they have placed it on in the experience. Adding some typical controls like pinch to zoom and finger rotation made it easy for the user to examine the bottle and the details of the design, and we added in 8th Wall’s Media Recorder capability to further boost the shareability of the experience.
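The geometry behind those controls is straightforward; below is a generic sketch of pinch-to-zoom and two-finger rotation (this is the underlying math, not 8th Wall's API, and the function names are our own).

```python
import math

# Generic touch-gesture math: scale from the change in finger distance,
# rotation from the change in the angle between the two fingers.

def pinch_scale(p0, p1, q0, q1, scale, lo=0.5, hi=3.0):
    """Scale by the ratio of finger distances, clamped to [lo, hi]."""
    before = math.dist(p0, p1)  # finger distance at gesture start
    after = math.dist(q0, q1)   # finger distance now
    return max(lo, min(hi, scale * after / before))

def twist_angle(p0, p1, q0, q1):
    """Rotation: change in the angle of the line between the two fingers."""
    a0 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a1 = math.atan2(q1[1] - q0[1], q1[0] - q0[0])
    return a1 - a0
```

Clamping the scale keeps users from shrinking the bottle to a speck or blowing it up past the point of usability.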

As companies look ahead to a greener and more sustainable future, the concept of virtual wrapping and virtual packaging is likely to expand. As augmented reality moves from an emerging technology to an adopted one, user-generated AR content will take center stage, and experiences like this one will enable everyday users to create with AR technology. As all industries grapple with how to stay competitive and stay afloat, innovation is the answer to moving forward. This is the tip of the iceberg when it comes to what augmented reality can accomplish.

We are excited to continue innovating and bringing projects like these to life. We believe anyone can innovate and that process is vital amid the current economic landscape. Our passion for emerging technologies and augmented reality is immense and our work will only continue to reflect that. We’re looking forward to sharing more soon.

Ashley Nelson: Concept and Strategy, UX Copywriter

Eric Liang: Front-end/AR engineer

Eugene Park: Experience Design

Leonardo Malave: Back-end/AR engineer

Marie Liao: QA Engineer

Nicole Riemer: Concept and Strategy, Art Direction, and Experience Design

Yolan Baker: Project Manager

Pay To Play: Visualizing Presidential Campaign Spending Using Augmented Reality

Pay to Play, the AR data visualization experience, showcasing how money spent on presidential campaigns equates to the cost of large U.S. infrastructure projects.

Believe it or not, a few short months ago the main event dominating the news cycle wasn't coronavirus, but the Presidential election. The Democratic primaries were different from years past, and not just because the number of candidates running could fill a small football field. One thing that stood out to our team was the record spending that occurred this election cycle. Discussions began to swirl around campaign finance, specifically when Michael Bloomberg entered the race, funding his entire campaign with his personal fortune and raising questions about what money should and shouldn't buy while running for office.

We began thinking about a way to contextualize the immensity of campaign spending through the language we speak best: technology. Those conversations, and the desire to use technology to answer that question, were the origin of Pay to Play. With primaries postponed and the race narrowed to a single candidate from each party, we considered not releasing this experience.

However, with the new economic pressures on American families due to coronavirus and the current volatile international economy, we believed the relationship between money and politics was worth exploring. This project considers the disconnect between the monetary impact of the political process and the needs of everyday Americans.

The staggering amount of money spent by Democratic candidates in the 2020 election left us wondering how that money could have been spent on infrastructure and on funding the platforms those candidates ran on. We designed Pay to Play as a way to look back on the record amount of money spent by Democratic candidates who have ended their bids. We also included how much several Republican contenders in the 2016 presidential election spent on their campaigns as another point of comparison.

We designed this experience to visualize our internal discussions and the conversations happening in the U.S. during this tumultuous time, and in doing so we wanted to answer the question: “What else could we have done with that money?”

How Does It Work

Try it for yourself at campaignspending.rosedigital.co or scan the QR code.

Why Use Augmented Reality

The Build, 3D Modeling, and Optimization

Optimizing for size by reducing face counts and textures in Blender

We found that much of the challenge of this project was using AR in a way that was accessible to as many people as possible while still maintaining the core identity of the project — using numerical scale as a way to evoke a reaction from the user. Rendering any 3D model in a web browser can be an expensive operation; rendering thousands of them would tax a smartphone's hardware to the point of unusability. We approached this by leaning into the idea of scale: we didn't need exacting detail if the idea was to overwhelm the user with a huge pile of items; we just needed enough to make it clear what each item was. So we selected simple models with few polygons, decimated their face counts as far as we could, and reduced the resolution of their textures to minimize file size. The end result worked out — we had piles of apples that were clearly recognizable and deeply satisfying to watch cascade down from the sky.
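To get a rough sense of the savings from shrinking textures, note that an uncompressed texture's memory cost grows with the square of its side length; the sketch below uses illustrative numbers, not the project's actual assets.

```python
# Back-of-envelope math behind reducing texture resolution: an uncompressed
# RGBA texture costs width * height * 4 bytes in memory.

def texture_bytes(side, channels=4):
    """Bytes for a square, uncompressed texture of the given side length."""
    return side * side * channels

full = texture_bytes(2048)     # 2048x2048 RGBA: ~16.8 MB
reduced = texture_bytes(512)   # 512x512 RGBA: ~1.0 MB
savings = full // reduced      # a 16x reduction per texture
```

Multiplied across thousands of falling apples, that per-texture reduction is the difference between a usable experience and a frozen phone.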

Additional challenges came from the technologies we used to build the experience itself. Web AR platforms advance every day, but there are still severe limitations to their capabilities. For example, 8th Wall, the platform on which this experience runs, offers surface occlusion capabilities only for its Unity integration into native apps. For browser-based experiences that don’t yet have access to that plane detection technology, we have to emulate a floor by placing a vast invisible sheet at a defined distance below the camera. The distance to the “floor” is not dynamic and doesn’t change whether the user is sitting or standing, resulting in an imperfect representation of reality. This process only makes us more excited to see the next steps web AR will take, as the technology continues to improve and provide us with new and even more compelling ways to augment our reality.

Conclusion

Credits:

Nicole Riemer: Art Direction and Experience Design

Eric Liang: Experience Design and Development