
What Is Extended Reality?

Extended Reality (XR) is one of the many “-R” abbreviations used in the immersive technology space these days. With so many similar terms floating around, it’s easy to get confused. Fortunately, “XR” is an umbrella term that covers just about any other “-R” term out there.

What Is XR?

The difficulty with the definitions comes from the “X.” Depending on who you ask, it might not stand for anything. Some people use it as a placeholder, like a variable in a math problem, and they often pronounce “XR” as “X Reality.” Others use XR to mean not “any reality” but “all reality” – for example, to discuss immersive technology in general rather than one form at a time. People in this camp are more likely to say “XR” as “Extended Reality.” Both uses can be handy, depending on what you’re talking about.

A lot of companies getting into immersive activations want to do it because they’re flashy. They might know that they want to do something with immersive technology but not know whether they want to use AR, VR, or MR. Here the first use, X Reality, can be fitting: they’ll only use one form of immersive technology, but they don’t yet know which one. A lot of academics, journalists, and technologists use “XR” as “Extended Reality” because they’re not talking about just AR or MR or VR – they’re talking about all of these technologies at once. This use is particularly helpful when discussing solutions like Varjo Reality Cloud, which operates more like AR for an on-site user and more like VR for remote users.

So, what are the differences between the other “-R” terms? And when does it matter how they are being grouped?

The “-R” Abbreviations in Immersive Technology

VR, AR, MR – in all of those familiar abbreviations the “R” stands for “reality,” and that’s true for “XR” too. But, with XR being an umbrella term, it’s easier to understand once you have a firm grasp of the other Rs.

AR – Augmented Reality

Augmented reality places virtual elements into a user’s view of their physical surroundings using either a transparent lens or a live camera feed, often on a mobile phone. Most modern virtual reality headsets have a similar function called “passthrough,” but this technology is still largely experimental outside of professional-grade devices. The virtual elements in augmented reality activations aren’t usually responsive – they add value to the user’s surroundings, or the user’s surroundings add impact to the virtual elements. For example, in the AR lookbook that ROSE developed with KHAITE, users could see models walking in their actual surroundings or view virtual representations of items in their own homes.

MR – Mixed Reality

Mixed reality is similar to augmented reality in that it all starts with the user’s environment. However, the virtual elements in an MR experience are much more intelligent and interactive. They may interact believably with one another or with the environment. They may also collect and display information on the environment from connected devices or onboard sensors. Mixed reality requires a lot more computing power, both to drive the interactive virtual elements and to display them in a meaningful way. As a result, most mixed reality experiences are made available exclusively on dedicated mixed reality devices like Magic Leap or Microsoft’s HoloLens. GigXR’s Insight series with ANIMA RES uses a HoloLens headset to display detailed and interactive anatomy models in a healthcare and education solution. If more than one person has a HoloLens, they can both join the same session, or one presenter with a headset can stream or record a session for remote users without access to headsets.

VR – Virtual Reality

Virtual Reality is entirely virtual. The user’s natural field of view is entirely replaced by computer-rendered settings and elements, potentially including other users represented as avatars. That doesn’t mean that everything in a VR experience has to be built from the ground up. For example, products like Microsoft Mesh can place a live volumetric capture of an individual within a virtually constructed environment. Similarly, some VR experiences take place within 3D images or videos. VR is popular in gaming and social applications but is also used for remote collaboration, design, and training. In fact, 3lb XR and 3lb Games design games as well as enterprise training simulations and other solutions, with each practice informing the other to produce intuitive, immersive enterprise tools and fun, challenging entertainment experiences.

The Acronym of Possibilities

Whether XR means one unspecified form of immersive technology or all forms of immersive technology together depends on who uses it and in what context. It’s also one of those terms that end users of the technology don’t really use at all – it’s primarily used at a higher level of discourse. With this knowledge, you’ll probably be able to tease out what someone means when they say “XR,” and if you can’t, it’s okay to ask them to clarify. This is an emerging technology with an emerging dictionary of terms, and everyone being on the same page is more important than appearing to understand nuanced specialist terminology.

Why is AR So Appealing to Marketers?

Augmented Reality is often touted as a user-friendly and efficient way to bring brands to consumers. However, due to the shock value of the still nascent technology and the engagement of a well-designed experience, AR can also be a great way to bring consumers into your brand.  

Augmented Experiences – Not Augmented Ads

Augmented Reality (AR) technology superimposes digital elements over a user’s live camera feed. Because most modern smartphones can run most AR experiences, just about everyone has access to AR content. That’s a powerful tool for companies looking to grow their brands.

“Currently, for brands using AR to sell goods, it is quite common to use the technology to digitally place real-world items in the user’s environment. While this is a good application of the technology, brands would be mistaken to stop just there. With a bit more imagination AR can be used to create an experience that has a much more emotional impact on the user.”

Because AR relies on the view of the user’s physical surroundings, including the objects, people, and settings that are meaningful to them, AR experiences are inherently personal in a way that no other medium is. That matters because a strong brand isn’t just about “stuff” either. Bridging physical and digital experiences can help to convey values that aren’t just material. Often, the most successful branded activations aren’t about selling things at all. Rather, experiential AR is about communicating with people on a personal level by letting them explore the world around them through the window of augmented reality.

Experience Something New

Because AR is an emerging technology, it’s easy to limit ourselves by thinking of it strictly as a way to experience futuristic applications. But AR can also allow users to put themselves in the past or experience another place as it is today. For brands that have long histories or a far reach, this can be a surprisingly impactful way to engage your community.

Step Into a Memory

Martin was an iconic sitcom that ran from 1992 to 1997. The show opened with cast members posing and dancing in front of solid-color backgrounds, their names in the show’s memorable font. This unique piece of television history was begging for an AR twist. In 2022, the surviving cast members reunited on BET+ to celebrate the 30th anniversary of the show’s premiere. At the event, visitors had access to a screen where they could dance and pose while the magic of AR placed them into the show’s familiar opening. They could then keep and share the clips, or edit them together to make their own show openings. Fans of the series were already excited to be at the reunion, which was its own piece of Martin history. But the AR experience allowed them to be more than viewers of an event – they were able to participate in the show’s history in a unique and memorable way.

Visit Miami Without Leaving Home

“Priceless” is a promotional initiative for MasterCard holders, giving them access to membership perks including online experiences. These experiences are increasingly taking place in Augmented Reality. MasterCard recently worked with ROSE and 8th Wall to create an AR tour of artwork in Miami’s Design District. In this window-to-another-world experience, the user’s phone became their ticket to a guided tour of the renowned art installations. Touchscreen navigation even allowed Priceless members to move around the artworks to see them from any angle in their 360-degree virtual view, just as they would if touring the Design District in person. A plane ticket to the same experience in person would have been a hefty gift from MasterCard and a hefty commitment from cardholders, but the AR experience was achievable for both.

See Yourself Differently

In the Martin example, the background was all that was augmented and the people stayed the same. However, AR filters and lenses – the joy of modern social media – can help viewers see themselves in new ways as well.

Using AR for social media marketing is also good business strategy. People use social media to share their lives with their friends and to share in the lives of their friends. A well-designed AR experience can bring viewers into your brand, and those viewers are likely to share the experience with their own followings.

Enter the Maddenverse

Clothing company Steve Madden already has a strong conventional social media strategy, which encourages customers to tag the company in social media posts that feature themselves wearing Steve Madden apparel. The company can then feature these customers’ user-generated content on its own social media platforms and website, both of which provide purchase options. In 2021, the company decided to get more immersive in its social media campaigns and launched “the Maddenverse.”

For one activation, the company worked with ROSE to produce an AR filter for Instagram that turned user selfies into avatars of Steve Madden models. Users were again encouraged to share the images and tag the company’s profile. Like the Martin experience, this Maddenverse activation didn’t cost users any money or make the company any money. That wasn’t the point. Rather, the experience allowed fans to express their brand support in a new and fun way, growing their loyalty while also encouraging them to put the brand in front of their own social media followings.

In just one week, almost 18,000 people used the filter to create personalized AR images of themselves in the Maddenverse. The users sharing those images resulted in a total of 675,000 impressions in the first week. This illustrates the kind of scale that using AR for social media marketing can achieve when users are encouraged to share their creations with others.

Give Your Audience Whatever They Want

Customers aren’t just customers anymore. They can be your audience, but they can also be creators working in a sort of partnership as casual ambassadors for your brand. This has huge potential, but it will only work if you cultivate a meaningful relationship with them. Experiential AR and using AR on social media can help to remind your audience why they’re passionate about your brand, and it can allow them to express that passion to others. But it may mean rethinking what you want to give your community and what your community wants from your brand, other than just a purchasable product.

How AR Impacts Shopping in the Fashion and Retail Industry

Extended Reality can replace a lot of things, like most in-person work meetings or conventional product design. But it will never replace the runway or the changing room. Right? Buying clothing can be a hands-on process that can feel very intimate, and in some situations, that’s not likely to change. But consumer fashion is increasingly getting help from augmented reality, from virtual clothing try-ons to virtual fashion shows.

The Role of AR in Retail

Augmented reality has a growing role in the fashion industry, from clothing design to completely virtual clothing. But for the average shopper buying a physical garment, what good is AR? AR – displaying virtual elements in a user’s physical environment through smart devices like phones – allows the shopper to get a fair understanding of a garment without having direct access to that garment.

Throughout the rest of this article, we’ll talk about things like fashion shows viewable from anywhere, or trying on a clothing item before it’s even in the store. AR technology means viewers don’t have to travel to a fashion show to see the latest looks – they can do it from home. They can see what a clothing item would look like on them without going to the store, checking if it’s in stock, and trying it on. These factors, and many others, reduce costs for manufacturers and result in more satisfied customers who are less likely to return the items they buy.

It’s true, augmented reality might not let you see exactly how light would dance off of a reflective bauble, or let you feel the material on your skin. At least, not yet. Still, it’s never been easier to get to know a garment without having it in your hands.

Virtual Fashion Shows

Fashion shows are one of the industry’s standard methods of introducing the world to their new products. These are the first opportunities for people outside of the designer’s studio to see what a garment looks like on a person, how it flows and moves. However, conventional fashion shows are typically restricted to people in the fashion industry. Even if the average person could find the time and money to travel to one, they probably wouldn’t be allowed in the door. AR helps to bring fashion shows to the people. By replacing the catwalk with a capture studio, experienced designers can make high-quality virtual versions of fashion models. Apply a little techno-wizardry, and these models do their walks wherever a viewer points their mobile device. If the viewer is only interested in a few looks in the collection, they can view those fashions without sitting through the whole show.

See it in Action

Bloomingdale’s created an AR fashion show for its 150th anniversary, working with ROSE on designs exclusively available for the celebration. The fashion show was visible in Bloomingdale’s stores, or in the homes of over 400,000 shoppers who received an AR-activated catalog. You can still view the experience by scanning a QR code on Bloomingdale’s website. Bloomingdale’s reported a 38 percent increase in shopper engagement and a 22 percent increase in conversions. A lot of those conversions were thanks to a click-to-buy feature that let shoppers purchase looks from within the experience just by tapping their favorite fashions.

A similar activation by ROSE and KHAITE led to a 400 percent increase in sales. Further, users browsed an average of 16 looks, spending over four minutes in the experience. An experience for Selfridges that was only available in-store drew an average dwell time of 51 seconds. Compare that to the amount of time that shoppers spend looking at mannequins.

Becoming the Model

Watching a fashion show can be fun. But what about being the model? ROSE worked with Steve Madden for an initiative in “the Maddenverse.” This time, it was an Instagram effect that turned users into stylized Steve Madden virtual avatars. In this case, the idea wasn’t to realistically represent clothing that the viewer could actually buy, but rather to give them a fun opportunity to engage with the brand itself. But that’s a topic for another day.

Virtual Try-Ons and Samples

Seeing an outfit on a model is nice, but when people buy an item, they’re going to care more about how that item looks on them. This can be trickier with augmented reality, but it’s possible. Creating a virtual version of a garment can allow a prospective buyer to see how it works for them. They can see how it matches other items in their collection, their skin color and makeup choices, their hairstyles, and even the places where they anticipate wearing the garment, like their home or their favorite restaurant.

AR try-on in eCommerce may present another “hurdle” – it’s an extra screen tap – but many shoppers find it fun. Further, they’re statistically less likely to return items that they purchase after engaging with them in AR. In fact, ROSE worked with Adidas to create a virtual model of one of their shoes that buyers could “unbox” on social media before the physical shoe shipped.

Creating a virtual clothing item can be a lot of work. But, increasingly, that work is already done. Designers and manufacturers frequently start with a digital model because it’s easier to see potential changes than with physical prototypes. In some cases, these design models can be adapted for virtual try-on.

The Future of Virtual Try-On

We’ve discussed some things AR clothing doesn’t do too well: faithfully replicating reflective surfaces, or the way a fabric moves – or even how a garment will actually fit. Fortunately, all of these are aspects of the technology that are improving year over year. While you’ll (probably) never be able to feel an AR garment before you have the physical version, effects like how a piece of clothing reflects light in your environment are getting better. This kind of realism was largely pioneered for jewelry try-on in particular, but we’ll likely see the approach extended to other materials as well.

If you’ve tried virtual try-on before, you have probably had one or two unsatisfactory experiences. Either the jacket that you’re trying on doesn’t move at all, or those earrings that you’re trying on make it look like there’s an earthquake. These technologies are improving too, but there are two big hang-ups. First, these effects are powered by physics engines, and different platforms can use different physics engines, so it can be hard to get a quality experience without optimizing for each app and website that you want to publish on. Second, the more advanced the models and effects are, the heavier the experience is. To reach customers where they are, you’ve got to publish on the apps and devices they use, and sometimes that can mean making compromises.

Fortunately, this is also being addressed by the move toward cloud and edge computing, which takes some of that burden off of a user’s device. After all, movie-quality virtual clothing can’t yet be rendered well in real time within Snapchat. Even Hollywood finds that tech tricky to pull off realistically – that’s one of the reasons it loves costumed and masked characters so much. That kind of work is closer to VR than AR.

The Final Frontier of “Virtual Clothing”?

Right now, we’re talking about virtual clothing as a way to drive purchases of physical clothing. However, in recent years, there has been growth in the idea of virtual clothing that stays virtual. That can mean clothing for avatars or virtual clothing that appears in photographs and videos on social media. Whether or not you might ever be interested in buying and selling digital clothing, breakthroughs in this field will likely help to improve the technology as it appears in other use cases as well.

Giving the Gift of AR this Holiday Season

Some people can be hard to shop for. There are two great ways to get around this problem.  The first is to give them something completely unique and personal to them. The second is to give them something that you already know that they love but give it to them in a unique way. Augmented Reality can be an exciting and unexpected way to explore either of these approaches.  

Isn’t VR cooler?

There’s a lot of hype around virtual reality right now – and with good reason. However, virtual reality (in addition to requiring more robust hardware) means that everything is digital. That means that everything has to be created. Items, landscapes, maybe even representations of other users. That takes a lot of time, effort, and money. Augmented reality primarily uses a person’s physical surroundings, with a couple of changes brought to you by creative technologists. That means that a single item, character, or special effect can create a completely unique experience without needing to reinvent the wheel – and everything else – on a computer.  

AR: The Gift That Keeps on Giving

On top of all of that, AR draws on the viewer’s connection to their physical environment. It uses computer magic to bring a little something extra to the way that they experience the places, items, and even people that they already love. That brings us back to using AR to solve tricky problems on your holiday gift list.

Give Something Truly Unique

Everything experienced in AR is completely unique to the viewer, because what is going on in the camera feed is going to be different every time. No matter how special the experience is, the physical setting where the user chooses to launch it makes it even more personal and meaningful. ROSE created a virtual model of the real-life Edmund Pettus Bridge for an educational AR experience that viewers could visit from anywhere in the world. Some chose to go through the experience wherever it was convenient or practical for them. But users could also choose to place the experience in an area that had emotional significance to them.

A complete experience may be difficult to give as a gift, but it is possible to create a one-of-a-kind AR item. That could be an object or character that only exists in the digital world. It could also be a 3D model of a physical object with a special significance to the friend or loved one to whom you present it. The great thing about digital objects is that they don’t have to exist in one format. While you might choose a special experience for the initial gifting, consider giving the file of the object itself as part of the gift. That way, the receiver can take their digital object or character with them into other virtual worlds and digital experiences.

Give Something Physical – but Augment it

Some augmented reality experiences originate in the digital world and project out into the environment, like the digital objects that we were just talking about. Other augmented reality experiences start with a physical object that computer magic only enhances. In this way, you can give a “normal” gift that stands out a lot more. Patrón’s digital wrapping project took a bottle and some care to create a magical holiday gift. Gifters created a personalized virtual wrapping for a Patrón bottle, including photographs, text messages, and other AR customizations. As a result, the end gift wasn’t “just a bottle of liquor” – it was a meaningful and personal one-of-a-kind experience, through the magic of AR.

Get Really Creative

Some AR gifts combine everything that we’ve talked about: a digitally enabled personal experience, a virtual object, and a physical object with augmented value. The adidas DEERUPT sneaker launch involved a physical box that appeared empty. Inside that box was a grid that served as a target for a social media-friendly AR version of the shoe. This allowed fans to enjoy a product “unboxing” before the shoe was physically available. A gift like this allows a special early reveal of a product, naturally followed by the object itself.

It’s not every day that a company does something like a virtual unboxing. However, you can apply this idea to your own gifts. Give someone a marker that launches an AR experience, even a simple one, while the “real” gift is something much bigger. That could be an item that hasn’t arrived yet, a trip someplace special, anything that you can think of. You can also use AR to let your friend or family member choose their own gift. Fashion brand KHAITE partnered with ROSE to bring models and fashions into a user’s home using augmented reality. Users got to see a personalized fashion show in their own chosen environment – and then had the option to buy the fashions that they viewed.

Think Outside the Box

This article has provided a few ideas and a few examples. But, no article could capture all of the possibilities that AR presents for gift giving. In part, that’s because AR allows us to think outside of the box – or any other physical constraints. So, let your imagination run wild.  Freely available AR object and experience building platforms are proliferating but still require a certain amount of skill. So, this article has included links to sites that you can use to have an expert help you create a digital item or experience of your own. You can also keep an eye out for ready-made experiences from brands who are increasingly using AR in creative ways.

Off with their heads: Body and clothing segmentation in Spark AR

Implementing a filter with three different effects on the background, body and clothing.
 
Steve Maddenverse Instagram filters
Many of the augmented reality experiences that ROSE produces focus on adding new objects to – or new ways of interacting with – the existing world around us, allowing the user to interface with virtual extensions of a product, brand, or idea. Lately, however, we have seen a renewed interest from brands in creating person-centric experiences, i.e. the selfie. Most recently, we delved into this world when working on the Steve Maddenverse campaign’s Instagram filters.
Of course, person-centric experiences are hardly a new idea. Selfie filters developed for Instagram and Snapchat abound, having exploded in popularity over the last five years. These filters can do anything from magically beautifying someone’s face to aging them, warping them into fearsome orcs and goblins, changing their hair, facial features, jewelry, or accessories, or swapping their face entirely with someone else’s. This, too, is a kind of augmented reality, and it has its own huge potential.
An Instagram face swap filter. Credit to amankerstudio on Instagram, 2017.
Alongside that potential come several unique challenges, of which the main one is body tracking. An AR engine needs to identify what sections of the camera feed belong to a person as well as how and where they move — perhaps even tracking the position and orientation of individual body parts. And once we have that information, we can take it a step further to address an even more specific hurdle: segmentation.  
A body tracking algorithm in action. Credit to MediaPipe and Google AI Blog, 2020.

What is Segmentation?

Segmentation is the process of identifying a body part or real object within a camera feed and isolating it, creating a “cutout” that can be treated as an individual object for purposes like transformation, occlusion, localization of additional effects, and so on.

Types of Segmentation:

Hair Segmentation: Changing a user’s hairstyle requires precise segmentation of that user’s real hair so that it can be recolored, resized, or even removed from the rendered scene and replaced entirely without affecting other parts of the scene, such as the user’s face.

Body Segmentation: Allows the user’s background to be replaced without tools like a green screen, throwing the user into deep space, lush jungles, the Oval Office, or anywhere else you would like to superimpose your body outline against.

Skin Segmentation: Identifies the user’s skin. This could power an experience in which a user wears virtual tattoos that stop at the boundaries of their clothes and move along with their tracked body parts – almost perfectly lifelike.

Object Segmentation: Gives us the ability to perform occlusion, so that AR objects might be partially hidden behind or beneath real ones as they would logically be in reality, or even to “cut and paste” those real objects into virtual space.
Person, skin, and hair segmentation via Spark AR. Credit to Facebook, 2021.

Achieving Segmentation

How do we achieve segmentation? Approximating shapes from a database would never be even close to realistic. Identifying boundaries by color contrast is a no-go for people with hair or clothes that are close to their skin tone. Establishing a body position at experience start (“Strike a pose as per this outline:”) and then tracking changes over time is clunky and unreliable. We need something near-instantaneous that can recalibrate on the fly and tolerate a wide margin of approximation. We need something smarter!

Of course, then, the answer is artificial intelligence. These days, “AI” is more often than not a buzzword thrown around to mean everything and yet nothing at all, but in this case we have a practical application for a specific form of AI: neural networks. These are machine learning algorithms that can be trained to recognize shapes or perform operations on data. By taking huge sets of data (for example, thousands and thousands of photos with and without people in them) and comparing them, neural networks have been trained to recognize hands, feet, faces, hair, horses, cars, and various other animate and inanimate entities…perfect for our use case.

Training a neural network to identify objects and remove backgrounds. Credit to Cyril Diagne, 2020.
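To make that concrete, here is a minimal sketch of browser-based person segmentation using BodyPix, a publicly available TensorFlow.js model. This is an assumption for illustration: Spark AR ships its own internal models, so treat this as an analogy rather than the platform’s API.

```ts
// A sketch of browser-based person segmentation with a pretrained network.
// BodyPix is a publicly available TensorFlow.js model; Spark AR uses its own
// internal models, so this is an analogy, not the platform's API.
import "@tensorflow/tfjs";
import * as bodyPix from "@tensorflow-models/body-pix";

async function personMask(video: HTMLVideoElement): Promise<Uint8Array> {
  const net = await bodyPix.load(); // downloads pretrained weights
  const segmentation = await net.segmentPerson(video, {
    internalResolution: "medium",
    segmentationThreshold: 0.7, // per-pixel confidence cutoff
  });
  // One byte per pixel: 1 = person, 0 = background.
  return segmentation.data;
}
```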

All of this is not to say that segmentation is on the cutting edge of new technology. Spark AR, for example, has had segmentation capabilities for at least two years. However, it is a pretty recent update to the platform that allows users to combine multiple classes of segmentation in a single effect, and you can read more about that update here. This new capability opens the door to a host of more complex effects, and so in this case study, we use multiple-class segmentation to apply separate effects to the user’s background, body (face, hair, and skin), and clothing.
Sketching out a triple segmentation filter. Credit to Eric Liang, 2021.
Each of these layers is easily accomplished on its own using a segmentation texture from the camera. For example, Spark AR provides a “Background” template that shows how to accomplish person segmentation and insert a background image. Breaking the template down, we see that this is accomplished by first creating two flat image rectangles that overlay and fill the device screen. The topmost of these will be the person, and the one underneath will feature the background image.

For the top layer (named “user” in the template), the extracted camera feed is used as a color texture. Beginners will observe that there’s no visible distinction from a blank front-facing camera project at this point. This is because the normal display is, for all practical purposes, exactly that: a flat image rectangle that fills the screen and displays the camera feed. We’ve essentially duplicated that display in a way that we can tinker with and put our version on top, obscuring the “normal” one.

Next, a person segmentation texture is created and used as the alpha texture for the user rectangle. This sets the alpha value, which determines transparency, to 0 for all parts of the user rectangle outside of the identified person, so those areas are completely transparent and show whatever is layered underneath. Within the area identified as a person, the camera feed continues to show through. This reveals that the segmentation texture is really made up of two binary areas – “is” and “isn’t” – without any information about what that is/isn’t actually refers to. Those familiar with image manipulation will know this concept as “layer masking.” The camera feed is accessed twice per frame: once to determine that is/isn’t binary and create a texture map (practically equivalent to a layer mask) recording that information, and once to check what color each pixel within that map should be. (Astute observers will note that it doesn’t matter in which order these checks occur.)

Finally, the template allows any desired background image to be slotted in as the background rectangle’s color map. Voilà: person segmentation! We’ll replace the stock image with a bit of outer space for our aesthetic.
Background segmentation using Spark AR’s template.
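Outside of Spark AR’s patch editor, the same layering logic can be expressed directly as pixel math. Below is a minimal sketch using the browser’s canvas ImageData type; the function name and structure are illustrative, not part of the template itself.

```ts
// A sketch of the "Background" template's layering as canvas pixel math.
// personMask is assumed to come from a segmentation model (1 = person, 0 = not).
function composite(
  camera: ImageData,      // live camera frame: the "user" layer's color texture
  background: ImageData,  // replacement image: the bottom rectangle
  personMask: Uint8Array, // segmentation texture, one byte per pixel
): ImageData {
  const out = new ImageData(camera.width, camera.height);
  for (let p = 0; p < personMask.length; p++) {
    const i = p * 4; // 4 channels (RGBA) per pixel
    // The mask acts as the alpha texture: person pixels show the camera,
    // everything else falls through to the background layer.
    const src = personMask[p] ? camera.data : background.data;
    out.data[i] = src[i];
    out.data[i + 1] = src[i + 1];
    out.data[i + 2] = src[i + 2];
    out.data[i + 3] = 255; // the composited result is fully opaque
  }
  return out;
}
```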

Next step: adding an effect to the face. Problem: we don’t have a built-in “clothes” segmentation! We have “hair” and “skin,” but nothing that will allow us to easily separate face and skin from clothes. Snapchat’s Lens Studio is nice enough to provide built-in “upper garment” segmentation, but Spark AR is not so forthcoming. We’ll have to get a little creative with the options available to us.

Quick thinkers may have already seen the simple mathematical solution. Our segmentation options are “person,” “hair,” and “skin.” Person minus hair and skin is…exactly what we’re looking for. By combining the hair and skin segmentation textures and subtracting that from the person texture, we get the clothes left behind. Let’s get cracking on what exactly this looks like in patch form.
Demonstrating multiple segmentation.
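For readers who think in code rather than patches, here is the same subtraction written out as a sketch over raw mask buffers; the function and the mask layout are assumptions for illustration.

```ts
// A sketch of "person minus hair and skin leaves clothes" as per-pixel math.
// The masks are assumed to be aligned, one value in [0, 255] per pixel.
function clothesMask(
  person: Uint8Array,
  hair: Uint8Array,
  skin: Uint8Array,
): Uint8Array {
  const clothes = new Uint8Array(person.length);
  for (let i = 0; i < person.length; i++) {
    // Union of hair and skin, subtracted from person, clamped at zero.
    const bodyParts = Math.max(hair[i], skin[i]);
    clothes[i] = Math.max(person[i] - bodyParts, 0);
  }
  return clothes;
}
```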

The patch setup above is a very basic implementation of the concept – a little rough around the edges, but it gives us what we need. I implement some tweaks for the sample screenshots, but they will not be covered in this case study, and I encourage you to explore, create, and refine your own solutions! “EZ Segmentation” is a patch asset straight from the Spark AR library, and it provides options for adding effects to either the foreground (body) or the background (clothes). It’s pretty easy to build effects on their own and then pass the texture into the slot. Here, we add a light glow gradient paired with a rippled lens flare to the foreground and a starry animation sequence to the background.
The filter in action.

You can already imagine the kinds of things we can do here with the power to animate designs on the user’s clothing. Inversely, we can leave the clothing untouched and add effects to the user’s skin, whether that be coloring it in à la Smurf or Hulk, or erasing it entirely for an “Invisible Man”-type filter. These suggestions are just a place to start, of course; multiple-class segmentation is powerful enough to open the door to a galaxy’s worth of potential. Show us what you can do!

Render streaming: taking AR to the next level

What’s the deal with AR, anyway?

XR technology is widely touted as having infinite potential to create new worlds. You can design scenes with towering skyscrapers, alien spacecraft, magical effects, undersea expanses, futuristic machinery, really anything your heart desires. Within those spaces, you can fly, throw, slash, burn, freeze, enchant, record, create, draw and paint⁠ — any verb you can come up with. The only limit is your imagination!
Painting in VR with Mozilla’s A-Painter XR project. Credit: Mozilla 2018.

Sounds cool. What’s the problem?

Well, all of that is true — to a point. Despite all of our optimism about this AR and VR potential, we find that we are still bound by the practical limitations of the hardware. One of the biggest obstacles to creating immersive, interactive, action-packed, high-fidelity XR experiences is that the machines used to run them just don’t have the juice to render them well. Or, if they do, they’re either high-end devices with a steep monetary barrier to entry, making them inaccessible, or too large to be portable and therefore not conducive to the free movement you would expect from an immersive experience.

That’s not to say that we can’t do cool things with our modern XR technology. We’re able to summon fashion shows in our living rooms, share cooperative creature-catching gaming experiences, alter our faces, clothing, and other aspects of our appearance, and much, much more. But it’s easy to imagine what we could do past our hardware limitations. Think of the depth, detail, and artistry boasted by popular open-world games on the market: The Elder Scrolls V: Skyrim, The Legend of Zelda: Breath of the Wild, No Man’s Sky, and Red Dead Redemption 2, just to name a few. Now imagine superimposing those kinds of experiences against the real world, augmenting our reality with endless new content: fantastic flora and fauna wandering our streets, digital store facades that overlay real ones, and information and quests to discover at landmarks and local institutions.
Promotional screenshot from The Legend of Zelda: Breath of the Wild. Credit: Nintendo 2020.
There are many possibilities outside of the gaming and entertainment sphere, too. Imagine taking a walking tour through the Roman Colosseum or Machu Picchu or the Great Wall of China in your own home, with every stone in as fine detail as you might see if you were really there. Or imagine browsing a car dealership or furniture retailer’s inventory with the option of seeing each item in precise, true-to-life proportion and detail in whatever space you choose. We want to get to that level, obviously, but commercially available AR devices (i.e. typical smartphones) simply cannot support experiences like these. High-fidelity 3D models can be huge files with millions of faces and vertices. Large open worlds may have thousands of objects that require individual shadows, lighting, pathing, behavior, and other rendering considerations. User actions and interactions within a scene may require serious computational power. Without addressing these challenges and more, AR cannot live up to the wild potential of our imaginations.

So what can we do about it?

Enter render streaming. Realistically, modern AR devices can’t take care of all these issues…but desktop machines have more than enough horsepower. The proof is in the pudding: the open-world video games mentioned above show that we can very much create whole worlds from scratch and render them fluidly at high frame rates. So let’s outsource the work!

The process of render streaming starts with an XR application running on a machine with a much stronger GPU than a smartphone (at scale, a server, physical or cloud-based). Each processed, rendered frame of the experience, generated in real time, is sent to the display device (your smartphone). Any inputs from the display device, such as the camera feed and touch, gyroscope, and motion sensors, are transmitted back to the server to be processed in the XR application, and then the next updated frame is sent to the display device. It’s like on-demand video streaming, with an extra layer of input from the viewing device.

This frees the viewing device from actually having to handle the computational load. Its only responsibility now is to receive and display the streamed graphics and audio, which modern devices are more than capable of doing efficiently. Even better, this streaming solution is browser-compatible through the WebRTC protocol, meaning that developers don’t need to worry about cross-platform compatibility, and users don’t need to download additional applications.
Diagram of render streaming process using Unreal Engine. Credit: Unreal Engine 2020.
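As a rough sketch of the display-device side, the browser’s standard WebRTC APIs are enough to receive rendered frames and send inputs back. The signaling exchange with the render server is omitted here, and the element IDs and message shapes are illustrative assumptions.

```ts
// A sketch of the display-device half of a render-streaming session using the
// browser's standard WebRTC APIs. Signaling (exchanging the SDP offer/answer
// and ICE candidates with the render server) is omitted; element IDs and the
// input message shape are illustrative.
const video = document.getElementById("stream") as HTMLVideoElement;
const pc = new RTCPeerConnection();

// Rendered frames arrive from the server as an ordinary media track.
pc.ontrack = (event) => {
  video.srcObject = event.streams[0];
};

// Inputs flow the other way over a data channel.
const input = pc.createDataChannel("input");
window.addEventListener("pointermove", (e) => {
  if (input.readyState === "open") {
    input.send(JSON.stringify({ type: "pointer", x: e.clientX, y: e.clientY }));
  }
});
// ...perform signaling here, then frames stream in and inputs stream out...
```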
There is just one problem: it takes time for input signals to move from the streaming device to the server, be processed, and have the results transmitted back. Nor is this a new challenge; we have long struggled with the same latency issue in modern multiplayer video games and other network applications. For render streaming to become an attractive, widespread option, 5G network connectivity and speeds will be necessary to reduce latency to tolerable levels. Regardless, it would be wise for developers to get familiar with the technology. All the components are already at hand; not only is 5G availability increasing, but Unity and Unreal Engine have also released native support for render streaming, and cloud services catering to clients who want render streaming at scale are beginning to crop up. The future is already here — we just need to grab onto our screens and watch as the cloud renders the ride.

How AR Brought KHAITE’s Latest Fashion Line Directly To Consumers

At ROSE we build relationships around fast and comprehensive solutions. Our goal when taking on projects is to build seamless solutions and provide a path for further innovation. We want to be a repeat partner for augmented reality: we find the path forward through innovation and then build on that existing framework. This process has led us to our second partnership with KHAITE. This week we launched our second experience with the high-powered fashion brand, and over a short period of time we’ve been able to increase sales and bring AR into the hands of fashion lovers.

What We Did

As the fashion world had to adapt and move to a purely digital landscape — fashion shows pushed to video, new clothing lines shipped to prospective buyers — brands had to move quickly to break through all of the noise. ROSE and Chandelier Creative helped KHAITE bring their newest collection to life. With emerging technology, ROSE brought KHAITE’s footwear designs into the homes of their customers, buyers, and the market, giving customers a deep visual experience unlike anything another fashion brand has been able to accomplish. As the world continues to grapple with these unprecedented times, this technology will become a cornerstone of how fashion powerhouses market their designs to their customers.

ROSE decided to build a WebAR application for accessibility purposes and to take the burden off consumers. The WebAR experience is widely supported, deeply interactive, and highlights the unique details of KHAITE’s footwear designs in a way that offers endless creative freedom for the user. KHAITE shipped lookbooks, made by Chandelier Creative, with embedded QR codes that, when scanned, take users to the AR experience, where they can see the shoes to scale in their own homes. Consumers can tap whichever shoes they’d like a closer look at and place them in their homes, getting a feel for the items without seeing them in person. This allowed KHAITE to create a visual experience that otherwise would only exist inside one of their showrooms.

In the second iteration of the experience, for KHAITE’s pre-fall 2021 collection, ROSE expanded the experience to include models rendered in augmented reality, allowing users to see the clothing the way it was meant to be seen. While still using WebAR, this second experience utilized green screen video to build a full runway show, with models wearing the new line as they walk up and down whatever environment the user chooses.
 

Challenges

Understanding the mathematics of 3D space is a learning curve in itself, but creating an experience accessible in a browser, as opposed to a native mobile application, makes things even more difficult, with issues like sensor permissions and browser compatibility. Adding light sources to a scene requires a careful balance between the existing, real-life lighting observed by the camera and computed lighting that best accentuates the highlights and shadows of the models in the AR scene. This challenge was multiplied tenfold as we created specific lighting setups to complement each unique shoe model. The material of each model was a major consideration; a shoe with a soft, quilted insole and white leather straps needed soft, glowing illumination, whereas a black patent leather boot needed bright point lights that played off the glossy reflectivity of the material. The end result was an experience tailored to each model, allowing users to see each one in its best light.

When we started on the second KHAITE experience, we ran up against challenges that came with showcasing an entire clothing line. KHAITE is a premium brand that places a lot of emphasis on the quality and texture of the materials for their garments and accessories. WebAR is a resource-constrained medium, meaning lower file sizes and compression are required. Capturing 4K, high-framerate, high-quality assets and delivering them via the web is a challenge, and involving models and movement increases the difficulty. Thankfully, we were able to get incredibly high quality green screen footage, enabling the quality of the looks to shine through.
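To illustrate the idea of per-model lighting setups, here is a hedged sketch using three.js (the engine underneath many WebAR stacks, including A-FRAME). The preset names and light values are invented for illustration; the production setups were tuned by hand against the real assets.

```ts
// An illustrative sketch of per-model lighting presets in the spirit described
// above, using three.js. The preset names and light values are invented.
import * as THREE from "three";

const lightingPresets: Record<string, (scene: THREE.Scene) => void> = {
  // Soft, glowing illumination for matte materials like a quilted insole.
  softQuilted: (scene) => {
    scene.add(new THREE.AmbientLight(0xfff4e6, 0.9));
    scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 0.6));
  },
  // Bright point lights that play off glossy patent leather.
  glossyPatent: (scene) => {
    scene.add(new THREE.AmbientLight(0xffffff, 0.3));
    const key = new THREE.PointLight(0xffffff, 1.2);
    key.position.set(2, 3, 2);
    scene.add(key);
  },
};

// Called when the user taps a shoe to place it in their space.
function lightSceneFor(model: string, scene: THREE.Scene): void {
  (lightingPresets[model] ?? lightingPresets.softQuilted)(scene);
}
```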

Impact

As the fashion world grapples with how to convert sales and stay afloat amid the pandemic, finding ways to integrate experiences with seamless shopping capabilities is now the only viable option. Within this experience, the sales were proof enough that this execution works for high-fashion labels. Fashion is a tactile and textured experience, and amid social distancing, brands have hurdles to jump to create moving experiences for consumers. Companies are integrating new technology to bring fashion shows to people’s phones, computers, and homes. For the first experience ROSE built for KHAITE, sales increased significantly in just a few short weeks.

Evan Rose, CEO and founder of ROSE, said, “We are proud to have partnered with KHAITE and Chandelier Creative to create an experience that changes how consumers engage with physical products in an increasingly digital world. We’re excited to be a part of driving how the retail and fashion industries engage with consumers.”

As this current climate continues, and shoppers continue to have decreased consumer confidence, focusing on the clothes and the experience that can be had without in-person shopping is more important than ever. Using augmented reality to elevate fashion in this time of social distancing allows for a rich, interactive experience for all users and customers, letting the color, texture, and life of garments come through.

ROSE And Patrón Partner To Build The Spirit Industry’s First User-Generated AR Experience

Amid a global pandemic, solving some of our most basic problems takes some creativity. With COVID’s continued presence in our lives, social distancing may have to continue into a time that is usually filled with parties, family gatherings, and holiday festivities. People will be looking for ways to make new traditions and to connect with their loved ones from afar.

Patrón needed a way to help customers connect despite holiday plans shifting across the country, while also maintaining their brand narrative. We worked with Patrón to create a first-of-its-kind digital wrapping as a special gift this holiday season, and beyond, to solve this specific problem. This experience provides a sentimental and original take on gifting alcohol as well as gives customers first-hand experience not just using augmented reality, but harnessing it to make something themselves.

How Does It Work?

Gifters of Patrón can use a microsite developed by ROSE to create a custom wrapping including a photo, text, and stickers that will transform into a 360-degree augmented reality (AR) gift wrapping around their Patrón bottle. This gives customers a chance to use this emerging technology in a new way that hasn’t been available in retail before.

Select a theme, add photos, text, and stickers, and then see it come to life.

“With COVID-19 impacting most celebrations this holiday season, we wanted to give customers a way to continue to celebrate with each other while social distancing,” said Nicole Riemer, the art director on the project. “By creating a custom wrapping, customers can take the act of gifting alcohol from an easy to a thoughtful one. During a time when you might not be able to gift in person, creating a custom wrapping with photos, stickers, and text provides that personal touch that is missing from not being able to gift it in person.”

Using WebGL in both 2D and 3D allows users to see their content change between dimensions in real time. Gifters can then use built-in recording and sharing technology to share the gift with the recipient as well as on social media.

“Creating these designs digitally allows for the process to be instantaneous and affordable, rather than waiting for something to get engraved or physically customized, without losing the ability to share that someone is thinking of you on social media,” Riemer said.

By providing customers the ability to customize their gift of Patrón for different occasions and gift recipients, we are showing them that Patrón isn’t the “mass brand” they think it is. This virtual gift ensures that distance isn’t a barrier to creating something thoughtful, nurturing customers’ need to grow and maintain their relationships.

Using augmented reality for this experience had several advantages. The most obvious is that it provides a sentimental gift without having to enter a store or be in the same physical space as the recipient — helping maintain social distancing amid the pandemic. Additionally, augmented reality provides a way for users to generate their own content while maintaining the PATRÓN brand.

“The challenge with AR has always been figuring out how we can take new dimensions and connect them to the ones we’re familiar with in creative, expressive, and helpful ways,” Eric Liang, front-end/AR engineer on the project said. “The AR experiences that ROSE has previously created have each addressed that challenge by taking something important to us — something unseen or out of the ordinary that we wanted to showcase — and constructing it in the user’s world. This time, we’re handing the reins to the user. In this new collaboration, we’re letting users create and realize something that’s uniquely their own.”

Harnessing the power of AR will bring all the holiday cheer customers could be missing into the palm of their hand and inside their home — connecting people who want to be together this holiday. Additionally, PATRÓN has a history of creating limited-run packaging and bottles and this experience offers customers peak exclusivity with the ability to customize every individual bottle they purchase, so the virtual expansion of exclusive boxes was a natural progression for the brand.

In designing this web application, we identified two different types of users. As Patrón’s target demographic for this experience is 21–35, we were less concerned with the technological literacy of the user. Additionally, since this started as a concept that would be mainly pushed through social media, we were bound to attract younger users who would already be at least slightly familiar with augmented reality from exposure to Snapchat and Instagram. After determining this demographic information for our target user, the next question was what a user would want to create when using this tool. This led us to determine the following use cases:

Creator 1: The user who wants to create a really thoughtful collage and wants the recipient to see that they spent time on it. They expect that their gift will be shown to others and potentially shared on social media in a similar fashion to birthday posts.

Creator 2: The user who is looking to create a quick gift that still wows the intended recipient. They want to expend minimal effort but get the same praise and reaction as someone who spends a lot of time on their creation.

To satisfy the need for a quick gift, we created “themes” that someone can choose from at the start of the experience, allowing them to upload a single photo and have a designed bottle in five clicks (including previewing their design). For those who want to spend more time on their creation, we provide the ability to start from scratch and choose the content that goes on every side of the bottle.

Select A Theme

In choosing the predetermined content that users can apply to their digital bottles, we focused on a few things. The first was to choose assets that could be used for multiple occasions, holidays, and were non-denominational. The second was to underscore the socially distant benefit of this gift and continue to have people drink responsibly even when gatherings are not encouraged. The third was to make sure that the assets could be used in many combinations and still create a wrapping that looks high end.

Once we determined the user experience and the content types that could be placed on the wrappings, we had to find a way to map that content to a 3D bottle in real time, show the user their creation on this model before sending an augmented reality link to their recipient, and then ultimately render each individual experience in augmented reality.

The technical inspiration for this experience began in an understanding of how WebXR, the implementation of augmented reality in a web browser, operates. WebXR is the conceptual model of everything that exists in an extended reality scene: where each virtual object is, where light is coming from, where the “camera” stands and observes, how the user interacts and changes all of these things, and so on. Imagine closing your eyes and understanding where everything around you in the room is: your desk, the floor, a lamp, rays of sunlight coming through a window, even your own hands. Now open your eyes and actually observe those things. That’s what WebGL does. WebGL is the graphics engine that takes the theoretical model processed by WebXR and paints it on a screen, rendering the virtual existence of matter and light into visibility.

While we wanted to capture the same magic of seeing something you create exist in 3D space, it was important that it would be accessible to everyone, both in terms of the technology and creativity. We wanted it to be usable from an everyday mobile device, without the need for expensive VR technology. We also didn’t want to require the user to be a painter, have an empty warehouse to dance around with VR goggles on, or have an intricate understanding of 3D sculpture or set design to maximize the reward of the experience.

There were a lot of moving parts that needed to be addressed. There needed to be a simple, intuitive interface for the user to customize their design and we needed to apply the design to a 3D model composed of a number of different materials and textures, from soft cork to clear pebbled glass to shiny metallic gift wrap. The experience needed to show that customized bottle back to the user in an interactive, attention-grabbing 3D experience. And finally, we needed to be able to scale the experience for a mass marketing campaign, which meant preparing for a large number of concurrent users with different devices and intents. We settled on technologies to address each of these challenges: a React/HTML Canvas microsite to design the wrapping, an 8th Wall/A-FRAME experience to view it, and a serverless API backend with cloud storage to support scale.

The next step was to source a 3D model of the bottle. We worked with a 3D artist and modeler, iterating over the model until each detail was as accurate as possible, and then continued to optimize our renders. This involved adjusting lighting through trial and error until we found the best setup to illuminate the bottle and make the glass and its reflectiveness as lifelike as possible, as well as customizing the physical material shaders for each node of the finalized model: the cork, the ribbon, the glass, the liquid, and the wrapping.

3D model renderings of the Silver Patrón bottle.

Later on, we also realized that we needed a dynamic approach to the wrapping’s transparency. If the user chose to lay their graphics directly over the glass without using a background, those stickers, photos, and text would need to be opaque while leaving the glass transparent. The answer was to take the texture maps we generated for each user-created design and filter them into black and white, letting them effortlessly serve double duty as alpha maps to control transparency.

Example of an alpha map.
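As a minimal sketch, assuming the design is rendered to an HTML canvas, the filtering step might look like the following; the function name and the content test are illustrative rather than the production code.

```ts
// A sketch of deriving an alpha map from a user's wrapping design. Wherever
// the user placed content, the map is white (opaque over the glass);
// untouched areas become black (transparent).
function designToAlphaMap(design: ImageData): ImageData {
  const map = new ImageData(design.width, design.height);
  for (let i = 0; i < design.data.length; i += 4) {
    // A pixel counts as content if the user drew anything there at all.
    const value = design.data[i + 3] > 0 ? 255 : 0;
    map.data[i] = value;     // R
    map.data[i + 1] = value; // G
    map.data[i + 2] = value; // B
    map.data[i + 3] = 255;   // the map itself stays fully opaque
  }
  return map;
}
```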

While the experience would be accessible to everyone, we wanted those who had a Patrón bottle handy to be able to integrate it into the experience. It’s not yet feasible to use a real-life bottle of Patrón to anchor the experience, so we looked outside of the box — and settled on the actual box that each bottle of Patrón comes in. This gave us the opportunity to leverage 8th Wall’s image target feature, using the Patrón bottle image on the side of each box to trigger the dramatic emergence of the virtual bottle from the physical box.
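In an A-FRAME scene, 8th Wall surfaces image-target detections as DOM events; the sketch below listens for one, though the target name, element IDs, and emergence animation are placeholders rather than the project’s actual assets.

```ts
// A sketch of reacting to an image-target detection in an A-FRAME scene.
// 8th Wall surfaces detections as DOM events such as "xrimagefound"; the
// target name, element IDs, and emergence animation are placeholders.
const scene = document.querySelector("a-scene")!;
const bottle = document.getElementById("virtual-bottle") as any; // A-FRAME entity

scene.addEventListener("xrimagefound", (event: Event) => {
  const { name } = (event as CustomEvent).detail;
  if (name === "patron-box-label") { // hypothetical image target name
    // Play the bottle's emergence from the physical box.
    bottle.emit("emerge-from-box"); // A-FRAME entities can emit custom events
  }
});
```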

Built to share on social, this augmented reality experience allows for recording within the WebAR experience.

Those without a box can place the virtual bottle on a surface of their choosing and watch it appear there. Adding familiar controls like pinch-to-zoom and one-finger rotation made it easy for users to examine the bottle and the details of their design, and we added 8th Wall’s Media Recorder capability to further boost the shareability of the experience.
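
As a rough illustration of how such controls can be wired up in a web view (the entity id and sensitivity constants below are ours, not production values):

```javascript
// Sketch: one finger drags to spin the bottle; two fingers pinch to scale it.
let lastX = null;
let lastPinch = null;

window.addEventListener('touchmove', (e) => {
  const bottle = document.getElementById('bottle').object3D; // hypothetical entity
  if (e.touches.length === 1) {
    if (lastX !== null) bottle.rotation.y += (e.touches[0].clientX - lastX) * 0.01;
    lastX = e.touches[0].clientX;
  } else if (e.touches.length === 2) {
    const dx = e.touches[0].clientX - e.touches[1].clientX;
    const dy = e.touches[0].clientY - e.touches[1].clientY;
    const pinch = Math.hypot(dx, dy);
    if (lastPinch !== null) bottle.scale.multiplyScalar(pinch / lastPinch);
    lastPinch = pinch;
  }
});

window.addEventListener('touchend', () => { lastX = null; lastPinch = null; });
```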

As companies look ahead to a greener and more sustainable future, the concept of virtual wrapping and virtual packaging is likely to expand. As augmented reality moves from an emerging technology to an adopted one, user-generated AR content will take center stage, and experiences like this one will enable everyday users to create with AR technology. As all industries grapple with how to stay competitive, and even stay afloat, innovation is the way forward. This is only the tip of the iceberg of what augmented reality can accomplish.

We are excited to continue innovating and bringing projects like these to life. We believe anyone can innovate, and that innovation is vital amid the current economic landscape. Our passion for emerging technologies and augmented reality is immense, and our work will only continue to reflect that. We’re looking forward to sharing more soon.

Ashley Nelson: Concept and Strategy, UX Copywriter

Eric Liang: Front-end/AR engineer

Eugene Park: Experience Design

Leonardo Malave: Back-end/AR engineer

Marie Liao: QA Engineer

Nicole Riemer: Concept and Strategy, Art Direction, and Experience Design

Yolan Baker: Project Manager

Technology for Good: Supporting Black Lives Matter Through The Bail Out Network

As a Black-owned business, the current state of the world has changed ROSE’s daily operations as a company. We view ROSE as a vehicle for improving the world, both in how we support each other internally and in the impact of the products we bring to life. We also acknowledge that, as a Black-owned tech firm, we have an innate privilege in our platform. We have the ability to create change through technology, and with that privilege comes a deeply rooted responsibility. Bail Out Network was our way of assisting in the fight for Black equality without distracting from the injustices currently happening.

After the death of George Floyd sparked protests across the world, we found ourselves in conversations with the entire staff about how we could make a positive impact. We quickly saw the systematic arrest of protesters around the country, and the flood of donations to community bail funds that have been integral to fighting police overreach for decades.

We wanted to make locating these funds, and their donation portals, as easy as possible. So we scoured the internet for every bail fund we could find, along with any lawyers or law firms that had vocalized their willingness to represent those arrested for protesting free of charge. The website launched in under 24 hours, on June 2, as a rapid response to the police violence that was plainly visible across the country.

This project started as a way for us to collect bail funds in one place for people looking to lend a hand in their communities as police began systematically targeting protesters. As time passed, we wanted to expand how Bail Out Network could be utilized as a resource in the fight against police brutality and for equity for Black and brown bodies.

Using Airtable, we created a database for bail networks, organizations supporting marginalized communities, and lawyers that support the fight.
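
That setup is simple for the site to consume. The sketch below reads records through Airtable’s standard REST records endpoint; the base id, table name, field names, and credential are hypothetical placeholders.

```javascript
// Sketch: read resource records from an Airtable base over its REST API.
const AIRTABLE_API_KEY = 'keyXXXXXXXXXXXXXX'; // placeholder credential

async function fetchBailFunds() {
  const response = await fetch(
    'https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Bail%20Funds',
    { headers: { Authorization: `Bearer ${AIRTABLE_API_KEY}` } }
  );
  const { records } = await response.json();
  return records.map((record) => ({
    name: record.fields.Name,
    region: record.fields.Region,
    donateUrl: record.fields['Donation Link'],
  }));
}
```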

The site now hosts a collection of resources dedicated to helping the most marginalized communities, focusing on the Black Trans community, the Black LGBTQ+ community, Black youth, Black incarcerated people, and other groups that lack protection from this country’s institutions. The site remains open for submissions, so visitors can add new resources at any time.

We firmly believe, as all people should, that Black Lives Matter and that the people protesting should not face punishment for doing what is right. As people continue to protest, fight for the dismantling of systemic racism, and demand a complete overhaul of the United States policing system, they’re going to need more money, more resources, and more bodies to achieve these goals. We will continue to promote organizations that protect those who have been disenfranchised by the current political, economic, and social structures woven into the fabric of the United States. This database will be updated as more resources become available.

The Team

Launched in June 2020, this project built upon Rose Digital’s history of using technology for good in times of public crisis (see also Help or Get Help). Ashley Nelson, copywriter, originated the idea, identifying the need to connect those protesting with legal aid and easy access to resources. Nicole Riemer, art director, created the visual direction and UX of the site, with Ashley writing the copy.

Bail Out Network will continue to consolidate resources that anyone can use to support the Black Lives Matter movement. For more information on ROSE, please visit builtbyrose.co.


Pay To Play: Visualizing Presidential Campaign Spending Using Augmented Reality

Pay to Play, the AR data visualization experience, showcasing how money spent on presidential campaigns equates to the cost of large U.S. infrastructure projects.

Believe it or not, a few short months ago the main event dominating the news cycle wasn’t coronavirus but the Presidential election. The Democratic primaries were different from years past, and not just because the number of candidates running could fill a small football field. One thing that stood out to our team was the record spending that occurred this election cycle. Discussions began to swirl around campaign finance, specifically when Michael Bloomberg entered the race, funding his entire campaign with his personal fortune and raising questions about what money should and shouldn’t buy while running for office. We began thinking about a way to contextualize the immensity of campaign spending through the language we speak best: technology. Those conversations, and the desire to use technology to answer that question, were the origin of Pay to Play. With primaries postponed and the race narrowed to a single candidate from each party, we considered not releasing this experience.

However, with the new economic pressures on American families due to coronavirus and the current volatile international economy, we believed the relationship between money and politics was worth exploring. This project considers the disconnect between the monetary impact of the political process and the needs of everyday Americans.

The staggering amount of money spent by Democratic candidates in the 2020 election left us wondering how that money could instead have funded infrastructure and the very platforms those candidates campaigned on. We designed Pay to Play as a way to look back on the record sums spent by Democratic candidates who have since ended their bids. We also included how much several Republican contenders in the 2016 presidential election spent on their campaigns, as another point of comparison.

We designed this experience to visualize our internal discussions and the conversations happening in the U.S. during this tumultuous time, and in doing so we wanted to answer the question: “What else could we have done with that money?”

How Does It Work?

Try it for yourself at campaignspending.rosedigital.co or scan the QR code.

Why Use Augmented Reality?

The Build, 3D Modeling, and Optimization

Optimizing for size by reducing face counts and textures in Blender

We found that much of the challenge of this project was using AR in a way that was accessible to as many people as possible while still maintaining the core identity of the project: using numerical scale to evoke a reaction from the user. Rendering any 3D model in a web browser can be an expensive operation; rendering thousands of them would tax a smartphone’s hardware to the point of unusability. We approached this by leaning into the idea of scale: we didn’t need exacting detail if the goal was to overwhelm the user with a huge pile of items; we just needed enough detail to make clear what each item was. So we selected simple models with few polygons, decimated their face counts as far as we could, and reduced the resolution of their textures to minimize file size. The end result worked: we had piles of apples that were clearly recognizable and deeply satisfying to watch cascade down from the sky.
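
A-FRAME is built on top of three.js, so another lever for keeping thousands of items cheap is instancing, which draws an entire pile in a single draw call. The sketch below illustrates the idea; the stand-in geometry, counts, and spread are placeholders rather than our exact production setup.

```javascript
import * as THREE from 'three';

// Sketch: scatter a pile of low-poly "apples" as a single InstancedMesh,
// so the whole pile costs one draw call. `scene` is assumed to exist.
const COUNT = 2000;
const appleGeometry = new THREE.IcosahedronGeometry(0.05, 1); // stand-in low-poly shape
const appleMaterial = new THREE.MeshStandardMaterial({ color: 0xcc2222 });
const apples = new THREE.InstancedMesh(appleGeometry, appleMaterial, COUNT);

const pose = new THREE.Object3D();
for (let i = 0; i < COUNT; i++) {
  pose.position.set(
    (Math.random() - 0.5) * 4, // scatter across the pile
    Math.random() * 5 + 2,     // start above the user, ready to cascade down
    (Math.random() - 0.5) * 4
  );
  pose.rotation.set(Math.random() * Math.PI, Math.random() * Math.PI, 0);
  pose.updateMatrix();
  apples.setMatrixAt(i, pose.matrix);
}
apples.instanceMatrix.needsUpdate = true;
scene.add(apples);
```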

Additional challenges came from the technologies we used to build the experience itself. Web AR platforms advance every day, but there are still severe limitations to their capabilities. For example, 8th Wall, the platform on which this experience runs, offers surface occlusion capabilities only for its Unity integration into native apps. For browser-based experiences that don’t yet have access to that plane detection technology, we have to emulate a floor by placing a vast invisible sheet at a defined distance below the camera. The distance to the “floor” is not dynamic and doesn’t change whether the user is sitting or standing, resulting in an imperfect representation of reality. This process only makes us more excited to see the next steps web AR will take, as the technology continues to improve and provide us with new and even more compelling ways to augment our reality.
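
A minimal sketch of that floor emulation, assuming a three.js scene like the one underlying A-FRAME; the camera-to-floor offset is a placeholder value.

```javascript
import * as THREE from 'three';

// Sketch: emulate a floor with a vast invisible plane a fixed distance below
// the camera's starting pose. As noted above, the offset can't adapt to
// whether the user is sitting or standing. `scene` is assumed to exist.
const FLOOR_OFFSET = 1.5; // assumed camera height above the floor, in meters

const floor = new THREE.Mesh(
  new THREE.PlaneGeometry(1000, 1000),
  new THREE.MeshBasicMaterial({ visible: false }) // used for raycasts, never drawn
);
floor.rotation.x = -Math.PI / 2;  // lie flat
floor.position.y = -FLOOR_OFFSET; // fixed distance below the camera
scene.add(floor);
```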

Conclusion

Credits:

Nicole Riemer: Art Direction and Experience Design

Eric Liang: Experience Design and Development