Implementing a filter with three different effects on the background, body, and clothing.
Steve Maddenverse Instagram filters
Many of the augmented reality experiences that ROSE produces are focused on adding new objects to, or new ways of interacting with, the existing world around us, allowing the user to interface with virtual extensions of a product, brand, or idea. However, lately we have seen a renewed interest from brands in creating person-centric experiences, i.e. the selfie. Most recently, we delved into this world when working on the Steve Maddenverse campaign’s Instagram filters.
Of course, person-centric experiences are hardly a new idea. Selfie filters developed for Instagram and Snapchat abound, having exploded in popularity over the last five years. These filters can do anything from magically beautifying someone’s face to aging them, warping them into fearsome orcs and goblins, changing their hair, facial features, jewelry, or accessories, or swapping their face entirely with someone else’s. This, too, is a kind of augmented reality, and it has its own huge potential.
An Instagram face swap filter. Credit to amankerstudio on Instagram, 2017.
Alongside that potential come several unique challenges, of which the main one is body tracking. An AR engine needs to identify what sections of the camera feed belong to a person as well as how and where they move — perhaps even tracking the position and orientation of individual body parts. And once we have that information, we can take it a step further to address an even more specific hurdle: segmentation.
A body tracking algorithm in action. Credit to MediaPipe and Google AI Blog, 2020.
What is Segmentation?
Segmentation is the process of identifying a body part or real object within a camera feed and isolating it, creating a “cutout” that can be treated as an individual object for purposes like transformation, occlusion, localization of additional effects, and so on.
Types of Segmentation:
Hair Segmentation: Changing a user’s hairstyle requires precise segmentation of that user’s real hair so that it can be recolored, resized, or even removed from the rendered scene and replaced entirely without affecting other parts of the scene, such as the user’s face.
Body Segmentation: Allows the user’s background to be replaced without tools like a green screen, throwing the user into deep space, lush jungles, the Oval Office, or anywhere else you would like to superimpose your body outline against.
Skin Segmentation: Identifies the user’s skin. This could power an experience in which a user wears virtual tattoos that stop at the boundaries of their clothes and move along with their tracked body parts — almost perfectly lifelike.
Object Segmentation: Gives us the ability to perform occlusion, so that AR objects can be partially hidden behind or beneath real ones as they logically would be in reality, or even to “cut and paste” those real objects into virtual space.
Person, skin, and hair segmentation via Spark AR. Credit to Facebook, 2021.
Achieving Segmentation
How do we achieve segmentation? Approximating shapes from a database would never be even close to realistic. Identifying boundaries by color contrast is a no-go for people whose hair or clothes are close to their skin tone. Establishing a body position at the start of the experience (“Strike a pose as per this outline:”) and then tracking changes over time is clunky and unreliable. We need something near-instantaneous that can recalibrate on the fly and tolerate a wide margin of error. We need something smarter!
Of course, then, the answer is artificial intelligence. These days, “AI” is more often than not a buzzword thrown around to mean everything and yet nothing at all, but in this case we have a practical application for a specific form of AI: neural networks. These are machine learning algorithms that can be trained to recognize shapes or perform operations on data. By taking huge sets of data (for example, thousands and thousands of photos with and without people in them) and comparing them, neural networks have been trained to recognize hands, feet, faces, hair, horses, cars, and various other animate and inanimate entities…perfect for our use case.
Training a neural network to identify objects and remove backgrounds. Credit to Cyril Diagne, 2020.
All of this is not to say that segmentation is on the cutting edge of new technology. Spark AR, for example, has had segmentation capabilities for at least two years. However, the ability to combine multiple classes of segmentation in a single effect is a fairly recent addition to the platform, and you can read more about that update here. This new capability opens the door to a host of more complex effects, and so in this case study, we use multiple-class segmentation to apply separate effects to the user’s background, body (face, hair, and skin), and clothing.
Sketching out a triple segmentation filter. Credit to Eric Liang, 2021.
Each of these layers is easily accomplished on its own using a segmentation texture from the camera. For example, Spark AR provides a “Background” template that shows how to accomplish person segmentation and insert a background image. Breaking the template down, we see that this is accomplished by first creating two flat image rectangles that overlay and fill the device screen. The topmost of these will be the person, and the one underneath will feature the background image.
For the top layer (named “user” in the template), the extracted camera feed is used as a color texture. Beginners will observe that there’s no visible distinction from a blank front-facing camera project at this point. This is because the normal display is, for all practical purposes, exactly that: just a flat image rectangle that fills the screen and displays the camera feed. We’ve basically just doubled that in a way that we can tinker with and put our version on top, obscuring the “normal” display.
Next, a person segmentation texture is created and used as the alpha texture for the user rectangle. This sets the alpha value, which determines transparency, to 0 for all parts of the user rectangle outside of the identified person, so that it is completely transparent and shows what is layered underneath it instead. Within the area that is an identified person, the camera feed continues to show through. This shows us that the segmentation texture is actually made up of two binary areas: is and isn’t, without any information as to what that is/isn’t actually refers to. Those familiar with image manipulation know this concept as “layer masking”. The camera feed is accessed twice per frame: once to determine that is/isn’t binary and create a texture map (practically, the equivalent of a layer mask) recording that information, and once to check what color each pixel within that map should be. (Astute observers will note that it doesn’t matter in which order these checks occur.)
Finally, the template allows for any desired background image to be slotted in as the background rectangle’s color map. Voilà: person segmentation! We’ll replace the stock image with a bit of outer space for our aesthetic.
Background segmentation using Spark AR’s template.
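To make the layering concrete outside of the patch editor, here is a minimal sketch of the same idea in plain web code (TypeScript and the Canvas API), not Spark AR itself. It assumes you already have the camera frame, a replacement background, and a person mask whose alpha channel is opaque where a person was detected and transparent elsewhere; the function names are illustrative.

```typescript
// Minimal sketch of the "background template" layering. Assumes `personMask`
// is transparent wherever no person was detected (i.e. the mask's alpha
// channel already encodes the is/isn't binary).
function compositeFrame(
  ctx: CanvasRenderingContext2D,
  cameraFrame: CanvasImageSource,
  personMask: CanvasImageSource,
  background: CanvasImageSource,
  width: number,
  height: number
): void {
  // Layer 1 (bottom): the replacement background fills the whole screen.
  ctx.globalCompositeOperation = "source-over";
  ctx.drawImage(background, 0, 0, width, height);

  // Layer 2 (top): draw the camera feed, then use the segmentation mask as
  // an alpha map so only the "is a person" pixels survive -- the equivalent
  // of wiring the segmentation texture into the user rectangle's alpha slot.
  const personLayer = new OffscreenCanvas(width, height);
  const personCtx = personLayer.getContext("2d")!;
  personCtx.drawImage(cameraFrame, 0, 0, width, height);
  personCtx.globalCompositeOperation = "destination-in"; // keep only masked pixels
  personCtx.drawImage(personMask, 0, 0, width, height);

  ctx.drawImage(personLayer, 0, 0, width, height);
}
```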
Next step: adding an effect to the face. Problem: we don’t have a built-in “clothes” segmentation! We have “person”, “hair”, and “skin”, but nothing that will allow us to easily separate face and skin from clothes. Snapchat’s Lens Studio is nice enough to provide built-in “upper garment” segmentation, but Spark AR is not so forthcoming. We’ll have to get a little creative with the options available to us. Quick thinkers may have already seen the simple mathematical solution. Person minus hair and skin is…exactly what we’re looking for. By combining the hair and skin segmentation textures and subtracting that from the person texture, we get the clothes left behind. Let’s get cracking on what exactly this looks like in patch form.
Demonstrating multiple segmentation.
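The same subtraction can also be written out as explicit per-pixel math. The sketch below is not patch-editor code; it assumes the three segmentation textures have been read back as single-channel grayscale arrays (0 = not that class, 255 = fully that class), and the function name is hypothetical.

```typescript
// clothes = person - max(hair, skin): anything that is part of the person but
// is neither hair nor skin must be clothing. All arrays are the same size,
// one byte per pixel.
function clothesMask(
  person: Uint8ClampedArray,
  hair: Uint8ClampedArray,
  skin: Uint8ClampedArray
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(person.length);
  for (let i = 0; i < person.length; i++) {
    const bodyParts = Math.max(hair[i], skin[i]); // combined hair + skin coverage
    out[i] = Math.max(0, person[i] - bodyParts);  // subtract it from the person mask
  }
  return out;
}
```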
As a very basic implementation of the concept, it’s a little rough around the edges, but it gives us what we need. I implemented some tweaks for the sample screenshots that won’t be covered in this case study, and I encourage you to explore, create, and refine your own solutions! “EZ Segmentation” is a patch asset straight from the Spark AR library, and it provides options for adding effects to either the foreground (body) or the background (clothes). It’s easy to build the effects on their own and then pass each texture into the appropriate slot. Here, we add a light glow gradient paired with a rippled lens flare to the foreground and a starry animation sequence to the background.
The filter in action.
You can already imagine the kinds of things we can do with the power to animate designs on the user’s clothing. Conversely, we can leave the clothing untouched and add effects to the user’s skin, whether that be coloring it in à la Smurf or Hulk, or erasing it entirely for an “Invisible Man”-type filter. These suggestions are just a place to start, of course; multiple-class segmentation is powerful enough to open the door to a galaxy’s worth of potential. Show us what you can do!
XR technology is widely touted as having infinite potential to create new worlds. You can design scenes with towering skyscrapers, alien spacecraft, magical effects, undersea expanses, futuristic machinery, really anything your heart desires. Within those spaces, you can fly, throw, slash, burn, freeze, enchant, record, create, draw and paint — any verb you can come up with. The only limit is your imagination!
Painting in VR with Mozilla’s A-Painter XR project. Credit: Mozilla 2018.
Sounds cool. What’s the problem?
Well, all of that is true — to a point. Despite all of our optimism about AR and VR’s potential, we are still bound by the practical limitations of hardware. One of the biggest obstacles to creating immersive, interactive, action-packed, high-fidelity XR experiences is that the machines used to run them just don’t have the juice to render them well. Or, if they do, they’re either high-end devices with a steep monetary barrier to entry, making them inaccessible, or too large to be portable and therefore not conducive to the free movement you would expect from an immersive experience. That’s not to say that we can’t do cool things with modern XR technology. We’re able to summon fashion shows in our living rooms, share cooperative creature-catching gaming experiences, alter our faces, clothing, and other aspects of our appearance, and much, much more. But it’s easy to imagine what we could do past our hardware limitations. Think of the depth, detail, and artistry boasted by popular open-world games on the market: The Elder Scrolls V: Skyrim, The Legend of Zelda: Breath of the Wild, No Man’s Sky, and Red Dead Redemption 2, just to name a few. Now imagine superimposing those kinds of experiences onto the real world, augmenting our reality with endless new content: fantastic flora and fauna wandering our streets, digital store facades that overlay real ones, and information and quests to discover at landmarks and local institutions.
Promotional screenshot from The Legend of Zelda: Breath of the Wild. Credit: Nintendo 2020.
There are many possibilities outside of the gaming and entertainment sphere, too. Imagine taking a walking tour through the Roman Colosseum or Machu Picchu or the Great Wall of China in your own home, with every stone in as fine detail as you might see if you were really there. Or imagine browsing a car dealership or furniture retailer’s inventory with the option of seeing each item in precise, true-to-life proportion and detail in whatever space you choose. We want to get to that level, obviously, but commercially available AR devices (i.e. typical smartphones) simply cannot support such experiences. High-fidelity 3D models can be huge files with millions of faces and vertices. Large open worlds may have thousands of objects that require individual shadows, lighting, pathing, behavior, and other rendering considerations. User actions and interactions within a scene may require serious computational power. Without addressing these challenges and more, AR cannot live up to the wild potential of our imaginations.
So what can we do about it?
Enter render streaming. Realistically, modern AR devices can’t take care of all these issues…but desktop machines have more than enough horsepower. The proof is in the pudding: the open-world video games mentioned above show that we can very much create whole worlds from scratch and render them fluidly at high frame rates. So let’s outsource the work! The process of render streaming starts with an XR application running on a machine with a much stronger GPU than a smartphone (at scale, a server, physical or cloud-based). Each processed, rendered frame of the experience, generated in real time, is sent to the display device (your smartphone). Any inputs from the display device, such as the camera feed and touch, gyroscope, and motion sensors, are transmitted back to the server to be processed in the XR application, and then the next updated frame is sent to the display device. It’s like on-demand video streaming, with an extra layer of input from the viewing device. This frees the viewing device from actually having to handle the computational load. Its only responsibility now is to display the streamed graphics and audio, which modern devices are more than capable of doing efficiently. Even better, this streaming solution is browser-compatible through the WebRTC protocol, meaning that developers don’t need to worry about cross-platform compatibility, and users don’t need to download additional applications.
Diagram of render streaming process using Unreal Engine. Credit: Unreal Engine 2020.
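For a sense of what the display-device half of this looks like in a browser, here is a minimal sketch using the standard WebRTC APIs. It assumes signaling with the render server (exchanging the WebRTC offer/answer) is handled elsewhere, and the element ID and input message format are made up for illustration; real engines such as Unity and Unreal define their own input protocols.

```typescript
// Display-device side of a render-streaming session (sketch only).
// Assumes SDP/ICE signaling with the server happens elsewhere.
const peer = new RTCPeerConnection();

// 1. The server pushes rendered frames as a video track; we just play it.
peer.ontrack = (event) => {
  const video = document.getElementById("stream") as HTMLVideoElement; // hypothetical element
  video.srcObject = event.streams[0];
  video.play();
};

// 2. Local touch and sensor input goes back over a data channel so the server
// can update the scene before rendering the next frame.
const input = peer.createDataChannel("input");

window.addEventListener("touchmove", (e) => {
  if (input.readyState !== "open") return;
  const touch = e.touches[0];
  // Hypothetical message format -- the real protocol is engine-specific.
  input.send(JSON.stringify({ type: "touch", x: touch.clientX, y: touch.clientY }));
});

window.addEventListener("deviceorientation", (e) => {
  if (input.readyState !== "open") return;
  input.send(JSON.stringify({ type: "orientation", alpha: e.alpha, beta: e.beta, gamma: e.gamma }));
});
```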
There is just one problem: it takes time for input signals to move from the streaming device to the server, be processed, and have the results transmitted back. This is not a new challenge; we have long struggled with the same latency issue in modern multiplayer video games and other network applications. For render streaming to become an attractive, widespread option, 5G network connectivity and speeds will be necessary to reduce latency to tolerable levels. Regardless, it would be wise for developers to get familiar with the technology. All the components are already at hand: not only is 5G availability increasing, but Unity and Unreal Engine have also released native support for render streaming, and cloud services catering to clients who want render streaming at scale are beginning to crop up. The future is already here — we just need to grab onto our screens and watch as the cloud renders the ride.
At ROSE we build relationships around fast and comprehensive solutions. Our goal when taking on projects is to build seamless solutions and provide a path for further innovation. We want to be a repeat partner for augmented reality: we find the path forward through innovation and then build on that existing framework. This process has led us to our second partnership with KHAITE. This week we launched our second experience with the high-powered fashion brand, and over a short period of time we’ve been able to increase sales and bring AR into the hands of fashion lovers.
What We Did
As the fashion world had to adapt and move to a purely digital landscape — fashion shows had to be pushed to video, new clothing lines had to be shipped to prospective buyers — brands had to move quickly to break through all of the noise. ROSE and Chandelier Creative helped KHAITE bring their newest collection to life. With emerging technology, ROSE was able to bring KHAITE’s footwear designs to the homes of their customers, buyers, and the market, giving customers a deep visual experience unlike any other fashion brand has been able to accomplish. As the world continues to grapple with these unprecedented times, this technology will become a cornerstone of how fashion powerhouses market their designs to their customers.
ROSE decided to build a WebAR application for accessibility purposes and to take the burden off consumers. The WebAR experience is widely supported, deeply interactive, and highlights the unique details of KHAITE’s footwear designs in a way that offers endless creative freedom for the user. KHAITE shipped lookbooks, made by Chandelier Creative, with embedded QR codes that, when scanned, take users to the AR experience, where they can see the shoes to scale in their own homes. Consumers can tap whichever shoes they’d like to get a closer look at and place them in their homes, getting a feel for the items without being able to see them in person. This allowed KHAITE to create a visual experience that otherwise would only exist inside one of their showrooms.
In the second iteration of the experience, for KHAITE’s pre-fall 2021 collection, ROSE expanded the experience to include models rendered in augmented reality, allowing users to see the clothing the way it was meant to be seen. While still using WebAR, this second experience utilized green screen video to build a full runway show, with models wearing the new line as they walk up and down whatever environment the user chooses.
Challenges
Understanding the mathematics of 3D space is a learning curve in itself, but creating an experience accessible in a browser, as opposed to a native mobile application, makes things even more difficult with issues like sensor permissions and browser compatibility.
Adding light sources to a scene requires a careful balance between the existing, real-life lighting observed by the camera and computed lighting that best accentuates the highlights and shadows of the models in the AR scene. This challenge was multiplied tenfold as we created specific lighting setups to complement each unique shoe model. The material of each model was a major consideration; a shoe with a soft, quilted insole and white leather straps needed soft, glowing illumination, whereas a black patent leather boot needed bright point lights that played off the glossy reflectivity of the material. The end result was an experience tailored to each model, allowing users to see each one in its best light.
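As an illustration of what per-model lighting presets can look like in code, here is a small sketch using Three.js. The article does not specify the exact engine or values used for KHAITE, so the intensities, positions, and preset names below are illustrative assumptions, not the production setup.

```typescript
import * as THREE from "three";

// Sketch of per-model lighting presets: each shoe gets the rig that flatters
// its materials. All names and numbers are placeholders.
type LightingPreset = (scene: THREE.Scene) => void;

const presets: Record<string, LightingPreset> = {
  // Soft quilted insole + white leather straps: broad, diffuse illumination.
  quiltedSandal: (scene) => {
    scene.add(new THREE.AmbientLight(0xffffff, 0.7));
    scene.add(new THREE.HemisphereLight(0xffffff, 0xdddddd, 0.5));
  },
  // Black patent leather boot: sharp point lights to play off glossy highlights.
  patentBoot: (scene) => {
    scene.add(new THREE.AmbientLight(0xffffff, 0.3));
    const key = new THREE.PointLight(0xffffff, 1.2);
    key.position.set(1, 2, 1);
    scene.add(key);
    const rim = new THREE.PointLight(0xffffff, 0.8);
    rim.position.set(-1, 1.5, -1);
    scene.add(rim);
  },
};

// Applied when the user selects a shoe, e.g.: presets["patentBoot"](scene);
```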
When we started on the second KHAITE experience, we ran up against challenges that came with showcasing an entire clothing line. KHAITE is a premium brand that places a lot of emphasis on the quality and texture of the materials in its garments and accessories. WebAR is a resource-constrained medium, meaning smaller file sizes and compression are required. Capturing 4K, high-framerate, high-quality assets for delivery via the web is a challenge, and involving models and movement only increases the difficulty. Thankfully, we were able to get incredibly high-quality green screen footage, enabling the quality of the looks to shine through.
Impact
As the fashion world grapples with how to convert sales and stay afloat amid the pandemic, finding ways to integrate experiences with seamless shopping capabilities is now the only viable option. For this experience, the sales were proof enough that this execution works for high-fashion labels. Fashion is a tactile and textured experience, and amid social distancing, brands have hurdles to jump to create moving experiences for consumers. Companies are integrating new technology to bring fashion shows to people’s phones, computers, and homes.
For the first experience ROSE built for KHAITE, sales increased significantly in just a few short weeks. Evan Rose, CEO and founder of ROSE, said, “We are proud to have partnered with KHAITE and Chandelier Creative to create an experience that changes how consumers engage with physical products in an increasingly digital world. We’re excited to be a part of driving how the retail and fashion industries engage with consumers.”
As the current climate continues and consumer confidence remains low, focusing on the clothes and on the experiences that can be had without meeting in person is more important than ever. Using augmented reality to elevate fashion in this time of social distancing allows for a rich, interactive experience for all users and customers, letting the color, texture, and life of garments shine through.
Amid a global pandemic, solving some of our most basic problems requires creativity. With COVID’s continued presence in our lives, social distancing may have to continue into a time that is usually filled with parties, family gatherings, and holiday festivities. People will be looking for ways to make new traditions and to connect with their loved ones from afar.
Patrón needed a way to help customers connect despite holiday plans shifting across the country, while also maintaining its brand narrative. To solve this problem, we worked with Patrón to create a first-of-its-kind digital wrapping as a special gift this holiday season and beyond. The experience provides a sentimental and original take on gifting alcohol, and it gives customers first-hand experience not just using augmented reality, but harnessing it to make something themselves.
How Does It Work?
Gifters of Patrón can use a microsite developed by ROSE to create a custom wrapping including a photo, text, and stickers that will transform into a 360-degree augmented reality (AR) gift wrapping around their Patrón bottle. This gives customers a chance to use this emerging technology in a new way that hasn’t been available in retail before.
“With COVID-19 impacting most celebrations this holiday season, we wanted to give customers a way to continue to celebrate with each other while social distancing,” said Nicole Riemer, the art director on the project. “By creating a custom wrapping, customers can turn the act of gifting alcohol from an easy gesture into a thoughtful one. During a time when you might not be able to gift in person, creating a custom wrapping with photos, stickers, and text provides that personal touch that would otherwise be missing.”
Using WebGL in both 2D and 3D allows users to see their content change between dimensions in real time. Gifters can then use built-in recording and sharing technology to share the gift with the recipient as well as on social media.
“Creating these designs digitally allows for the process to be instantaneous and affordable, rather than waiting for something to get engraved or physically customized, without losing the ability to share that someone is thinking of you on social media,” Riemer said.
By providing customers the ability to customize their gift of Patrón for different occasions and recipients, we are showing them that Patrón isn’t the “mass brand” they think it is. This virtual gift ensures distance isn’t a barrier to creating something thoughtful, nurturing customers’ need to grow and maintain their relationships.
Why Use Augmented Reality?
Using augmented reality for this experience had several advantages. The most obvious is that it provides a sentimental gift without having to enter a store or be in the same physical space as the recipient — helping maintain social distancing amid the pandemic. Additionally, augmented reality provides a way for users to generate their own content while maintaining the Patrón brand.
“The challenge with AR has always been figuring out how we can take new dimensions and connect them to the ones we’re familiar with in creative, expressive, and helpful ways,” Eric Liang, front-end/AR engineer on the project said. “The AR experiences that ROSE has previously created have each addressed that challenge by taking something important to us — something unseen or out of the ordinary that we wanted to showcase — and constructing it in the user’s world. This time, we’re handing the reins to the user. In this new collaboration, we’re letting users create and realize something that’s uniquely their own.”
Harnessing the power of AR brings all the holiday cheer customers could be missing into the palm of their hand and inside their home — connecting people who want to be together this holiday. Additionally, Patrón has a history of creating limited-run packaging and bottles, and this experience offers customers the height of exclusivity: the ability to customize every individual bottle they purchase. The virtual expansion of exclusive boxes was a natural progression for the brand.
Design Considerations
In designing this web application, we identified two different types of users. As Patrón’s target demographic for this experience is 21–35, we were less concerned with the technological literacy of the user. Additionally, since this started as a concept that would be mainly pushed through social media, we were bound to attract younger users who would already be at least slightly familiar with augmented reality from exposure through Snapchat and Instagram. After determining this demographic information for our target user, the next question was what a user would want to create when using this tool. This led us to determine the following use cases:
Creator 1: The user who wants to create a really thoughtful collage and wants the recipient to see that they spent time on it. They expect that their gift will be shown to others and potentially shared on social media in a similar fashion to birthday posts.
Creator 2: The user who is looking to create a quick gift that still wows the intended recipient. They want to expend minimal effort, but get the same praise and reaction as someone who spends a lot of time on their creation.
In order to satisfy the need for a quick gift, we created “themes” that someone can choose from at the start of the experience, allowing them to upload a single photo and have a designed bottle in five clicks (including previewing their design). For those who want to spend more time on their creation, we provide the ability to start from scratch and choose the content that goes on every side of the bottle.
In choosing the predetermined content that users can apply to their digital bottles, we focused on a few things. The first was to choose assets that could be used for multiple occasions and holidays and were non-denominational. The second was to underscore the socially distant benefit of this gift and to continue encouraging people to drink responsibly even when gatherings are not encouraged. The third was to make sure that the assets could be used in many combinations and still create a wrapping that looks high end.
Once we determined the user experience and the content types that could be placed on the wrappings, we had to find a way to map that content to a 3D bottle in real time, show the user their creation on this model before sending the augmented reality link to their recipient, and then ultimately render each individual experience in augmented reality.
How We Built This
The technical inspiration for this experience began with an understanding of how WebXR, the implementation of augmented reality in a web browser, operates. WebXR maintains the conceptual model of everything that exists in an extended reality scene: where each virtual object is, where light is coming from, where the “camera” stands and observes, how the user interacts with and changes all of these things, and so on. Imagine closing your eyes and understanding where everything around you in the room is: your desk, the floor, a lamp, rays of sunlight coming through a window, even your own hands. Now open your eyes and actually observe those things. That’s what WebGL does. WebGL is the graphics engine that takes the theoretical model processed by WebXR and paints it on a screen, rendering the virtual existence of matter and light into visibility.
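A stripped-down sketch of where those two layers meet in code is shown below, assuming a browser with WebXR support and the WebXR type definitions available; the drawScene call is a placeholder rather than the project's actual renderer.

```typescript
// Minimal sketch: WebXR tracks where everything is, WebGL paints it.
async function startAR(canvas: HTMLCanvasElement) {
  const gl = canvas.getContext("webgl", { xrCompatible: true })!;

  // WebXR: open an AR session and bind it to our WebGL context.
  const session = await navigator.xr!.requestSession("immersive-ar");
  await session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });
  const refSpace = await session.requestReferenceSpace("local");

  // Every frame, WebXR tells us where the viewer is; WebGL draws the scene.
  session.requestAnimationFrame(function onFrame(time, frame) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      gl.bindFramebuffer(gl.FRAMEBUFFER, session.renderState.baseLayer!.framebuffer);
      // drawScene(gl, pose) would go here -- placeholder for the actual rendering.
    }
    session.requestAnimationFrame(onFrame);
  });
}
```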
While we wanted to capture the same magic of seeing something you create exist in 3D space, it was important that it would be accessible to everyone, both in terms of the technology and creativity. We wanted it to be usable from an everyday mobile device, without the need for expensive VR technology. We also didn’t want to require the user to be a painter, have an empty warehouse to dance around with VR goggles on, or have an intricate understanding of 3D sculpture or set design to maximize the reward of the experience.
There were a lot of moving parts that needed to be addressed. There needed to be a simple, intuitive interface for the user to customize their design, and we needed to apply the design to a 3D model composed of a number of different materials and textures, from soft cork to clear pebbled glass to shiny metallic gift wrap. The experience needed to show that customized bottle back to the user in an interactive, attention-grabbing 3D experience. And finally, we needed to be able to scale the experience for a mass marketing campaign, which meant preparing for a large number of concurrent users with different devices and intents. We settled on technologies to address each of these challenges: a React/HTML Canvas microsite to design the wrapping, an 8th Wall/A-Frame experience to view it, and a serverless API backend with cloud storage to support scale.
The next step was to source a 3D model of the bottle. We worked with a 3D artist and modeller, iterated over the model until each detail was as accurate as possible, and then continued to optimize our renders. This involved adjusting lighting through trial and error until we found the best setup to illuminate the bottle and make the glass and its reflectiveness as lifelike as possible, as well as customizing the physical material shaders for each node of the finalized model: the cork, the ribbon, the glass, the liquid, and the wrapping.
Later on, we also realized that we needed a dynamic approach to the wrapping’s transparency. If the user chose to lay their graphics directly over the glass without using a background, those stickers, photos, and text would need to be opaque while leaving the glass transparent. The answer was taking the texture maps we generated from each user-created design and filtering them into black and white, so that they effortlessly served double duty as alpha maps to control transparency.
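A minimal sketch of that black-and-white filtering step is below, assuming the user's design has already been composited onto a transparent canvas; the function name is hypothetical. Wherever the user placed content, the derived map is white, and everywhere else it stays black, so it can be fed straight into a material's alpha map to keep the bare glass transparent.

```typescript
// Sketch: derive a black-and-white alpha map from the user's wrapping design.
// Assumes `design` holds the composited stickers/photos/text on a transparent
// background.
function buildAlphaMap(design: HTMLCanvasElement): HTMLCanvasElement {
  const alphaMap = document.createElement("canvas");
  alphaMap.width = design.width;
  alphaMap.height = design.height;

  const src = design.getContext("2d")!.getImageData(0, 0, design.width, design.height);
  const dst = alphaMap.getContext("2d")!.createImageData(design.width, design.height);

  for (let i = 0; i < src.data.length; i += 4) {
    const covered = src.data[i + 3] > 0 ? 255 : 0; // any content => white (opaque wrapping)
    dst.data[i] = dst.data[i + 1] = dst.data[i + 2] = covered;
    dst.data[i + 3] = 255;
  }
  alphaMap.getContext("2d")!.putImageData(dst, 0, 0);
  return alphaMap;
}
```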
While the experience would be accessible to everyone, we wanted those who had a Patrón bottle handy to be able to integrate it into the experience. It’s not yet feasible to use a real-life bottle of Patrón to anchor the experience, so we looked outside of the box — and settled on the actual box that each bottle of Patrón comes in. This gave us the opportunity to leverage 8th Wall’s image target feature, using the Patrón bottle image on the side of each box to trigger the dramatic emergence of the virtual bottle from the physical box.
Those without a box can watch the bottle appear on the plane they have placed it on in the experience. Adding some typical controls like pinch to zoom and finger rotation made it easy for the user to examine the bottle and the details of the design, and we added in 8th Wall’s Media Recorder capability to further boost the shareability of the experience.
Conclusion
As companies look ahead to a greener and more sustainable future, the concept of virtual wrapping and virtual packaging is likely to expand. As augmented reality moves from an emerging technology to an adopted one, user-generated AR content will take center stage, and experiences like this one will enable everyday users to create using AR technology. As all industries grapple with how to stay competitive and stay afloat, innovation is the answer to moving forward. This is the tip of the iceberg when it comes to what augmented reality can accomplish.
We are excited to continue innovating and bringing projects like these to life. We believe anyone can innovate and that process is vital amid the current economic landscape. Our passion for emerging technologies and augmented reality is immense and our work will only continue to reflect that. We’re looking forward to sharing more soon.
Credits:
Ashley Nelson: Concept and Strategy, UX Copywriter
Eric Liang: Front-end/AR engineer
Eugene Park: Experience Design
Leonardo Malave: Back-end/AR engineer
Marie Liao: QA Engineer
Nicole Riemer: Concept and Strategy, Art Direction, and Experience Design
As a Black-owned business, ROSE has seen the current state of the world change its daily motions as a company. We view ROSE as a vehicle for improving the world, both in how we support each other internally and in the impact of the products we bring to life. We also acknowledge that as a Black-owned tech firm, we have an innate privilege with our platform. We have the ability to create change through technology, and with that privilege comes a deeply rooted responsibility. Bail Out Network was our way of assisting in the fight for Black equality without distracting from the injustices currently happening.
After the death of George Floyd sparked protests across the world, we found ourselves in conversations with the entire staff about how we could make a positive impact. We quickly saw the systematic arrest of protesters around the country, and the flood of donations to community bail funds that have been integral to fighting police overreach for decades.
We wanted to make locating these funds, and their donation portals, as easy as possible as police violence became visible on a daily basis. So we scoured the internet looking for every bail fund we could find, and for any lawyers or law firms that vocalized their willingness to represent those arrested for protesting free of charge. The website launched in under 24 hours, on June 2, as a rapid response to the police violence being seen across the country.
This project started as a way for us to collect bail funds in one place for people looking to lend a hand in their communities as police began systematically targeting protesters. As more time passed, we wanted to expand how Bail Out Network could be utilized as a resource in the fight against police brutality and for equity for Black and brown bodies.
Using Airtable, we created a database for bail networks, organizations supporting marginalized communities, and lawyers that support the fight.
The site now has a collection of resources dedicated to helping the most marginalized communities — focusing on the Black Trans community, the Black LGBTQ+ community, Black youth, Black incarcerated people, and other groups that lack protection from this country’s institutions. The site remains open for submissions, so anyone who comes to the site wanting to contribute can add new resources at any time.
We firmly believe, as should all people, that Black Lives Matter and that the people protesting should not face punishment for doing what is right. As people continue to protest, fight for the destruction of systemic racism, and demand a complete overhaul of the United States policing system, they’re going to need more money, more resources, and more bodies to achieve these goals. We will continue to promote organizations that are protecting those who have been disenfranchised by the current political, economic, and social structures that exist within the fabric of the United States. This database will be updated as more resources become available.
The Team
Launched in June 2020, this project built upon Rose Digital’s history of using technology for good in times of public crisis (see also Help or Get Help). Ashley Nelson, copywriter, originated the idea and identified the need within the current environment to connect those protesting with legal aid and easy access to resources. Nicole Riemer, art director, created the visual direction and UX of the site, with Ashley writing the copy.
Bail Out Network will continue to consolidate resources that anyone can use to support the Black Lives Matter movement. For more information on ROSE, please visit builtbyrose.co.
Pay to Play, the AR data visualization experience, showcasing how money spent on presidential campaigns equates to the cost of large U.S. infrastructure projects.
Believe it or not, a few short months ago the main event dominating the news cycle wasn’t coronavirus, but the presidential election. The Democratic primaries were different from years past, and not just because the number of candidates running could fill a small football field. One thing that stood out to our team was the record spending that occurred this election cycle. Discussions began to swirl around campaign finance, specifically when Michael Bloomberg entered the race, funding his entire campaign with his personal fortune and raising questions about what money should and shouldn’t buy while running for office. We began thinking about a way to contextualize the immensity of campaign spending through the language we speak best — technology. Those conversations and the desire to use technology to answer that question were the origin of Pay to Play. Due to the primaries being postponed and the race being narrowed down to a single candidate from each party, we considered not releasing this experience.
However, with the new economic pressures on American families due to coronavirus and the current volatile international economy, we believed the relationship between money and politics was worth exploring. This project considers the disconnect between the monetary impact of the political process and the needs of everyday Americans.
The staggering amount of money spent by Democratic candidates in the 2020 election left us wondering how that money could have been spent on infrastructure and on funding the platforms those candidates promoted as part of their campaigns. We designed Pay to Play as a way to look back on the record amount of money spent by Democratic candidates who have ended their bids. We also included how much several Republican contenders in the 2016 presidential election spent on their campaigns as another comparison.
We designed this experience to visualize our internal discussions and the conversations happening in the U.S. during this tumultuous time, and in doing so we wanted to answer the question: “What else could we have done with that money?”
How Does It Work
Pay to Play was developed using 8th Wall as the hosting platform. As a web-specific AR toolkit, 8th Wall allows anyone with a mobile device and an internet connection to access the comparative experience. Users can compare campaign spending amounts from the top seven Democratic candidates who spent the most on their presidential run, as well as the top seven Republican candidates from the 2016 presidential election. The experience has different “common good” filters, and each filter has been paired with a representative 3D object that fills the space with the appropriately scaled number of objects. With each selection, the data simultaneously updates in the upper left corner.
Using augmented reality for data visualization allows for emotional reactions from the user. This experience showcases the immensity of campaign spending by using cascading scaled objects that fill the user’s view, as though they could overflow from the screen at any moment. Because the experience was created with 8th Wall for the web, decreasing file size and the number of objects rendered was important for optimizing load time. To speed up load time and allow for easier comparison, the number of objects was scaled down. While AR can make data more manageable for users, it can also create emotional connections through hands-on participation with the product.
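The scaling logic itself is simple arithmetic. The sketch below shows the general shape of it; the unit costs and the objects-per-instance factor are placeholder values, not the figures used in the live experience.

```typescript
// Sketch: turn a campaign-spend figure into a renderable number of objects.
// All dollar amounts and the scale factor are illustrative placeholders.
const UNIT_COST = {
  apples: 0.5,         // dollars per apple (placeholder)
  schoolLunches: 3.0,  // dollars per lunch (placeholder)
};

// One rendered object stands in for this many real ones, keeping the scene
// light enough for a phone browser while preserving the sense of scale.
const OBJECTS_PER_INSTANCE = 100_000;

function objectCount(campaignSpendUsd: number, good: keyof typeof UNIT_COST): number {
  const realCount = campaignSpendUsd / UNIT_COST[good];
  return Math.max(1, Math.round(realCount / OBJECTS_PER_INSTANCE));
}

// e.g. objectCount(500_000_000, "apples") -> how many apple models to spawn
```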
The Build, 3D Modeling, and Optimization
We found that the best way to offer an immersive extended reality experience, while still offering relevant information and options to a user, is to combine the XR portion with a heads-up display that lies on top without obstructing the view. As such, this project could immediately be divided into two parts: building the HUD and coding the 3D model portion. We used A-Frame, a 3D framework built on Three.js and HTML, to bridge the gap. By representing our 3D assets and behaviors as HTML, we were free to create our HUD in pure HTML and have it communicate and interact seamlessly with the A-Frame components.
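Because A-Frame entities are just DOM elements, the bridge between the HTML HUD and the 3D scene can be as small as the sketch below. The element IDs and the object-pile component name are hypothetical stand-ins for the real ones.

```typescript
// Sketch of the HUD-to-scene bridge: a plain HTML control drives the A-Frame
// scene through ordinary DOM APIs.
const candidateSelect = document.getElementById("candidate") as HTMLSelectElement; // HUD dropdown
const pile = document.querySelector("#pile"); // an <a-entity> inside the A-Frame scene

candidateSelect.addEventListener("change", () => {
  // Updating a component attribute re-runs that component's update() handler,
  // which can respawn the correct number of objects for the new selection.
  pile?.setAttribute("object-pile", `candidate: ${candidateSelect.value}`);
});
```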
Optimizing for size by reducing face counts and textures in Blender
We found that much of the challenge of this project was using AR in a way that was accessible to as many people as possible while still maintaining the core identity of the project — using numerical scale to evoke a reaction from the user. Rendering any 3D model in a web browser can be an expensive operation; rendering thousands of them would tax a smartphone’s hardware to the point of unusability. We approached this by leaning into the idea of scale: we didn’t need exacting detail if the idea was to overwhelm the user with a huge pile of items; we just needed enough to make it clear what each item was. So we selected simple models with fewer polygons, decimated their face counts as far as we could, and reduced the resolution of their textures to minimize file size. The end result worked out — we had piles of apples that were clearly recognizable and deeply satisfying to watch cascade down from the sky.
Additional challenges came from the technologies we used to build the experience itself. Web AR platforms advance every day, but there are still severe limitations to their capabilities. For example, 8th Wall, the platform on which this experience runs, offers surface occlusion capabilities only for its Unity integration into native apps. For browser-based experiences that don’t yet have access to that plane detection technology, we have to emulate a floor by placing a vast invisible sheet at a defined distance below the camera. The distance to the “floor” is not dynamic and doesn’t change whether the user is sitting or standing, resulting in an imperfect representation of reality. This process only makes us more excited to see the next steps web AR will take, as the technology continues to improve and provide us with new and even more compelling ways to augment our reality.
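For illustration, an emulated floor of this kind can be created with a few lines of A-Frame DOM scripting, sketched below. The 1.5-meter offset is an assumed phone height, and the static-body attribute assumes a physics component such as aframe-physics-system is present; neither value is taken from the actual project.

```typescript
// Sketch of the emulated floor: a large, invisible plane placed at an assumed
// camera height below the origin so that falling objects have something to land on.
const ASSUMED_CAMERA_HEIGHT_METERS = 1.5; // guess at a standing user's phone height

const scene = document.querySelector("a-scene");
const floor = document.createElement("a-plane");
floor.setAttribute("rotation", "-90 0 0");                        // lie flat
floor.setAttribute("width", "100");
floor.setAttribute("height", "100");
floor.setAttribute("position", `0 ${-ASSUMED_CAMERA_HEIGHT_METERS} 0`);
floor.setAttribute("material", "opacity: 0; transparent: true");  // invisible
floor.setAttribute("static-body", "");                            // collider, if a physics system is loaded
scene?.appendChild(floor);
```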
Conclusion
The political process is often a complicated and convoluted one, and accessing data on campaign finance can be overwhelming. Conceptualizing how much candidates spend on their campaigns shows the immensity of American politics. By using AR, it becomes easier to visualize the power that the people funding these campaigns have, and raises real questions about the possibility of sweeping change if these funds were made available.
Credits:
Jordan Long: Concept and Strategy
Nicole Riemer: Art Direction and Experience Design
American productivity is being saved by smart architecture. Is your internal infrastructure up to par?
Remote work is a decidedly modern phenomenon, especially at the scale at which it currently exists. It was not planned for, as black swan events cannot be. As COVID-19 swept the country, governors and mayors one by one ordered non-essential employees to go home and shelter in place, and hundreds of companies scrambled to become enabled for remote work. America went from a country where only 45% of businesses had formal remote work policies to 100% remote in about two seconds flat. The transition could have been significantly rockier. You might hear of companies that had to purchase millions of dollars’ worth of laptops, of IT departments flooded with helpdesk tickets from people who had never accessed a VPN, of concerns about the security of video conferencing systems, or of the challenges of balancing a full house with a full-time job. One thing you’re not hearing about? Connectivity issues.
All of that silky smooth video, showcasing Zoom backgrounds straight out of a 1990s Macy’s family picture, is thanks to an infrastructure designed roughly 50 years ago by Vinton Cerf. There were a few groans at the onset, but global providers were quickly able to manage the shifting of loads and surges without a major outage. In the US alone, peak traffic was up by nearly a third, with major metropolitan areas seeing surges as high as 60% above normal. Name one utility that could handle that level of increase without massive outages. You can’t, because utilities were not designed to withstand nuclear war. The internet — surprisingly enough — was. However, neither nuclear war nor a pandemic is required to take the lessons of Vinton Cerf and apply them to your own infrastructure choices. Doing that is perhaps more important than ever as the scramble continues to bring previously non-digitized services — such as small business loan processing — online. The internet’s underlying protocols adapt to shifting conditions, working around trouble spots to find efficient routes and managing glitches in ways that make sure you can access your spreadsheet (or your cache of cat GIFs) from the cloud while tens of millions of other people do the same.
To break it down further: if you want a system that is always (or almost always) up, consider the following best practices, inspired by Vinton Cerf, in your architecture decisions:
Architect for uncertainty: When building systems for scale, you can’t anticipate every eventuality. You have to bake uncertainty, resilience, and redundancy into your architecture to account even for ‘acts of god’ like COVID-19. Netflix, for example, has a tool called Chaos Monkey that randomly shuts down services in production to constantly test system stability and resilience.
Simple > Complex: A simple, stable system always outperforms a sprawling and brittle one. For enterprise and large-scale systems, it is sometimes challenging to boil them down to their core, but that’s the job of the systems architect. Identify common processes and uses and abstract them out into simple, reusable services.
Favor protocol over oversight: Humans break things. Always. When building systems, you have the opportunity to create your own tiny world that can be scrubbed free of human error (thank you, QA!). But these systems don’t live in a vacuum. They’re often used and governed by humans. Humans who are breaking things right now, even as you read this. If you take that into account and simplify human involvement in governance by creating both technical AND governance protocols, you can reduce the instability of the system over time.
Consensus over mandate: The wisdom of the crowd is greater than the wisdom of individuals (see Wikipedia and the Internet…for the most part). The power of bringing smart, invested people together to solve challenges is immense, but it is hard to harness with mandates. Fostering open discussion and driving toward the best solution for all can be a powerful driver of system stability and resilience.
Insights By
Evan Rose, Founder of Rose Digital, has been building web and mobile applications since 2009. His focus is on usable, performant application interfaces. He attended Harvard University and graduated with a degree in Social Anthropology. Evan has launched two venture-backed startups, is a board member and investment team member of NFTE Ventures, and chairs the Harvard Club Tech and Entrepreneurship Panel. Most recently he was a Senior Presentation Layer Engineer and Mobile Application Architect at Razorfish.