YouTube TV app to gain support for YouTube Shorts


The company has told partners it wants to bring support for YouTube Shorts to YouTube’s app for Android TV and Google TV.

YouTube Shorts are making their way to a larger screen.

YouTube wants people to watch more vertical videos on their TVs: The Google-owned video service is getting ready to support YouTube Shorts, its take on TikTok videos, within its smart TV app, Protocol has learned.

The smart TV app will also gain better support for YouTube Music, and the company’s paid TV service is looking to launch split-screen viewing for its subscribers. A representative for YouTube declined to comment.

YouTube employees shared these plans at an internal partner event with hardware manufacturers last month. The event was dedicated to Google’s own Android TV and Google TV platforms, but YouTube generally aims to keep its experience on par across smart TV platforms, making it likely that these features will eventually launch on TVs manufactured by companies like Samsung and LG as well.

YouTube Shorts has been a big hit on mobile, where the brief vertical videos reached 30 billion daily views earlier this year. However, there’s little to no support for the feature on larger screens. YouTube’s mobile app doesn’t let people cast Shorts to the TV, and the service’s TV app doesn’t surface the clips to viewers.

That’s about to change with an update that will roll out in the coming months. A mock-up slide presented to the audience of Google’s partner event, which was leaked to Protocol, showed a vertical video at the center of the screen, with the video’s title, the name of the song used in the clip and quick access to thumbs-up and thumbs-down buttons off to the side. There was no full-screen scroll bar, suggesting that the implementation doesn’t use the interface of the normal YouTube video player.

YouTube isn’t the first service to bring short-form vertical videos to the TV screen. TikTok has been experimenting with smart TV interfaces for some time, and the app launched on smart TVs made by LG and Samsung as well as TVs and streaming devices running both Amazon’s and Google’s platforms in November.

However, YouTube has a massive advantage over TikTok on TVs. YouTube’s app is installed on virtually every smart TV these days, while people have to actively seek out TikTok’s app — something few people may feel inclined to do, due to the assumption that TikTok is a mobile-only service.

TikTok hasn’t said how much traction it has gotten on TVs, but its Android TV app has been downloaded fewer than 5 million times, according to publicly available Google Play data.

In addition to bringing vertical videos to the living room, YouTube has also planned a number of other features for its TV app. The service’s Android TV and Google TV apps are supposed to get stability and performance improvements, and YouTube Music will gain additional features on smart TVs as well. These will include the ability to browse playlists and albums, add them to one’s library directly from the TV screen and more.

Finally, YouTube is also looking to update its YouTube TV service with a few new features. Chief among them: YouTube TV will gain something called Mosaic Mode, which will allow subscribers to watch up to four live feeds at the same time by dividing the TV screen into quadrants.

Google regularly briefs its partners about upcoming changes to its smart TV platforms and apps before they are officially announced to the public. At the same event, company representatives also previewed plans to integrate fitness trackers with Google TV and allow owners of Nest Audio devices to use them as wireless TV speakers.

Janko Roettgers (@jank0) is a senior reporter at Protocol, reporting on the shifting power dynamics between tech, media, and entertainment, including the impact of new technologies. Previously, Janko was Variety's first-ever technology writer in San Francisco, where he covered big tech and emerging technologies. He has reported for Gigaom, Frankfurter Rundschau, Berliner Zeitung, and ORF, among others. He has written three books on consumer cord-cutting and online music and co-edited an anthology on internet subcultures. He lives with his family in Oakland.

Don’t know what to do this weekend? We’ve got you covered.

Our recommendations for your weekend.


What better way to spend the weekend than by listening to Mark Zuckerberg and Joe Rogan talk for three hours? Once you’re done, check out “Lost Ollie” with the kids and test your Netflix knowledge with Heads Up!

Think what you will of Joe Rogan, but when Zuckerberg sits down with the podcaster to share some exclusive news (Project Cambria is coming in October) as well as his thoughts on Meta’s hardware strategy, the emergence of VR fitness (“It happened way sooner than I thought”) and the future of visual computing and brain-computer interfaces, you kind of have to tune in. Just be warned: The whole conversation is almost three hours long!

The story of lost or discarded toys trying to find their way back to their owners is a tale as old as time, and there have been what feels like a dozen “Toy Story” movies dealing with the same subject. Still, Netflix’s new limited series “Lost Ollie” stands out from the crowd with its own take on growing up, the fleeting nature of childhood memories and the types of adventures only children and the young at heart can undertake. A great four-parter to watch with your little ones this weekend.

The charades game Heads Up has been a hit on iOS and Android for some time. Now Netflix has licensed the title as part of its growing mobile games initiative. But instead of replacing the existing version, the video service simply released a Netflix-specific version with tons of charades prompts related to shows like “Stranger Things,” “Bridgerton” and “Squid Game,” as well as categories like “Strong Black Lead,” “Netflix Family” and “True Crime.” It’s a fun game to play with all the TV and streaming nerds in your life. A Netflix subscription is required.

Microsoft wants to acquire Activision Blizzard for $68.7 billion. Take-Two has spent $12.7 billion to acquire Zynga. Sony has paid $3.6 billion for Bungie. All together, the video game industry has seen 651 transactions totaling $107 billion during the first half of this year alone. Will this trend continue, what is it driven by and what does it mean for game developers, players and the industry at large? In this deep dive, The Ringer explores the age of the gaming mega mergers, and it’s well worth a read.



If you thought remote work, independent contracting and contingent work rose sharply during the pandemic, just wait: The next few months will bring an even bigger uptick in the on-demand talent economy.

Rising workloads, the stress of commuting and a taste of the flexible work-from-anywhere lifestyle have all contributed to what many are calling the Great Resignation, and that is only the beginning of the headwinds organizations are facing, says Tim Sanders, vice president of client strategy at Upwork, a marketplace that connects businesses with independent professionals and agencies around the globe.

“It began with front-line workers, but it’s not going to end there,” Sanders notes. “Recent data suggests that the biggest industries for quits are now software and professional services, and on top of that, I predict that we’ll see more leaders and managers continuing to quit their jobs.”

As the economy leans toward a recession, and layoffs across dozens of tech firms make headlines, Sanders predicts companies will increasingly turn to on-demand talent. “These highly skilled independent contractors and professionals offer the speed, flexibility and agility companies are seeking right now. Leaders are becoming more empowered to fully embrace a hybrid workforce and shift away from rigid models.”

Leaning into headwinds: Driving growth amid uncertainty

A recent report from Upwork, The Adaptive Enterprise, underscores the importance of flexible on-demand talent during uncertain times. Sanders notes: “A growing number of organizations, including Upwork and customers like Microsoft, Airbnb and Nasdaq understand that on-demand talent enables companies to reduce risk, drive cost savings, and at the same time, protect their people from burnout. Flexible workforce models also allow businesses to respond to and recover faster from crises than more traditional models.”

Some crises come in the form of economic slowdowns, while others can take the shape of geopolitical conflicts that disrupt life and work as we know it. Mitigating risk — such as a pandemic wave striking a certain region housing the majority of a company’s staff — is one reason businesses turn to on-demand talent, but it’s certainly not the only one.

CEOs surveyed by Deloitte in 2022 see talent shortages as the biggest threat to their growth plans. The survey also reports that CEOs consider talent the top disruptor to their supply chains, and that there is more to be gained within their workforce by providing greater flexibility (83% in agreement) than by merely offering more financial incentives. Top of mind for many business leaders is filling talent and skills gaps so they can deliver new products and enhanced services. In other words, companies are struggling to find the specific skill sets needed to advance their business objectives and innovation agendas.

The biggest benefit of leveraging on-demand talent is often tapping into the talent and skills that businesses can’t find elsewhere. Upwork’s recent report highlights that 53% of on-demand talent provide skills that are in short supply for many companies, including IT, marketing, computer programming and business consulting.

By harnessing a global talent pool from digital marketplaces like Upwork, businesses gain wider access to skilled talent who can accelerate what those companies offer to customers at a fraction of the cost. “Skillsourcing” on-demand talent helps companies maintain a more compact population of full-time employees who concentrate on the work that only they can do and maximize their strengths, while independent professionals handle the rest.

Behind the growth: Speed, flexibility and agility

Speed, flexibility and agility are three critical benefits offered by on-demand talent to businesses seeking competitive advantages in their sector. While on-demand talent solutions give companies speed-to-market advantages, Sanders sees that they also give organizations a strategic form of flexibility.

“An agile organization is able to make bold and quick moves without breaking everything,” Sanders says, “and look at a number of our Fortune 100 customers that have a workforce made up of almost half on-demand talent, and how they can pivot on a dime. It's a case of structure enabling strategy.”

As for the speed and efficiency of the actual work, Sanders says clients report that when hiring managers have been given access to on-demand talent, they engage the needed talent within days instead of months, and when they bring them onto projects, the work is completed up to 50% faster than through traditional avenues.

Sanders says, “Businesses have realized that remote work experiences are best led and judged by outcomes, not just time in the office, and more leaders are comfortable and confident opting for a hybrid workforce that can deliver based on those outcomes.”

Upwork’s Labor Market Trends and Insights page shows that organizations are indeed ramping up their hybrid workforces: 60% of businesses surveyed said they plan to use more on-demand talent in the next two years.

“The old way of acquiring talent isn’t efficient,” Sanders says. “Staffing firms aren’t the silver-bullet solution they once were, and more businesses need to rethink and redesign their workforce with on-demand talent as the economy and work rapidly evolve. The conversation is no longer about the future of work, but the future of winning.”

Sommer Panage, Slack’s senior accessibility manager, talks about her goals since joining the company in April and how she hopes to build a more accessible product.

“‘How could someone else experience this?’ is the number one question we ask.”

Sarah (Sarahroach_) writes for Source Code at Protocol. She's based in Boston and can be reached at sroach@protocol.com

Before Sommer Panage joined Slack, there was no centralized team working on accessibility.

Panage said there were some people who focused on desktop accessibility and others who worked on Slack for mobile, but they were scattered across the company. Panage joined Slack a few months ago as senior engineering manager and helped bring the company’s accessibility efforts under one roof. Before joining, she worked on accessibility efforts on iOS at Apple and held roles at Twitter before that.

Slack recently announced updates to improve keyboard navigation and introduced a new interface for screen readers as well as what the company called “an ongoing effort to bridge gaps.” Panage said bringing together one unified accessibility team has helped Slack focus on these different areas of improvement and work with teams across the company to build new features with accessibility in mind. But she stressed that the work is ongoing.

“Accessibility is never done,” she told Protocol. “A common challenge for companies is to say, ‘Oh, we made our product accessible. And now it's done.’ But it's not the case.”

This interview has been edited for clarity and brevity.

How is Slack’s approach to the topic different from others?

In large companies, in the Apples or the Microsofts and the big companies of the world, there's definitely an accessibility team. But I think it’s much less common in the small companies, and often there will be people who care deeply about it, and they might be scattered. They might start a networked effort across the company. I've seen that in various places as well, but it's not necessarily the standard for companies to have an accessibility team, a centralized hub of accessibility. That's one thing that Slack recognized pretty early as it started to grow … It's not super common, but it is super beneficial.

Can you point me to a time, either at Slack or a previous position, when you had an idea that didn’t work out in the way you expected? And on the flip side, what was a change you made that had an immediate impact?

Accessibility is a field in which, especially when I started over 10 years ago, there was not a lot of information. There were standards online that I could read, but there was not much else. So I made a lot of mistakes early in my career. A common one I think folks make as developers is to overlabel things or be overly verbose when you’re thinking about the screen-reader experience. So that was a mistake I made in many ways, multiple times … We started getting this feedback from our screen-reader users saying, “Oh, hey, this is way too verbose. This is not helpful to me.” That was where I learned two lessons. One, managing verbosity is incredibly important for screen-reader users. Two, listening to our users is vital to making good decisions about the product, and certainly that’s something that Slack was already doing before I arrived.

"[L]istening to our users is vital to making good decisions about the product."

As far as things that have gone really well, sometimes a very small idea can be a really big thing. One of the changes that we recently made in our updates at Slack was to add a couple preferences that allow users more fine-grained control over how their messages are read out. And it sounds so simple, right? It's like, “Oh, you read the date first or the date last.” It's the little preference, but this can be so important for someone using a screen reader because listening takes time. If the information I want is up front, you've just made me so much more efficient.
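To make that concrete, here is a minimal sketch of what such a read-out preference could look like. The function and names are hypothetical, chosen for illustration; this is not Slack’s actual code.

```python
# Illustrative sketch (not Slack's code): build the string a screen reader
# speaks for one message, with a user preference for whether the timestamp
# is read first or last. All names here are hypothetical.

def announce_message(author: str, body: str, date: str, date_first: bool) -> str:
    parts = [body, f"from {author}"]
    # Listening is linear and takes time, so letting users choose the order
    # puts the information they care about up front.
    ordered = [date] + parts if date_first else parts + [date]
    return ", ".join(ordered)

print(announce_message("Ada", "Lunch at noon?", "Today 9:14 AM", date_first=False))
# -> Lunch at noon?, from Ada, Today 9:14 AM
print(announce_message("Ada", "Lunch at noon?", "Today 9:14 AM", date_first=True))
# -> Today 9:14 AM, Lunch at noon?, from Ada
```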

How did you decide to focus on these areas of improvement?

At Slack, we focus heavily on what our users tell us and the experiences that they're having. So this work stems from a large amount of time and feedback and process with both external user feedback that comes in through our various feedback systems as well as user groups who are full-time assistive technology users. By combining feedback from these two spots, we've found key pain points within the Slack product that we knew we wanted to really focus on.

And those were really focused around the notion of keyboard navigation and keyboard focus. We had a lot of feedback from our screen-reader users. And so we wanted to make sure we put a lot of work there to make sure the desktop product was fantastic for them.

Since joining Slack, what have been your top goals in terms of accessibility?

One of the things I've really wanted to focus on is thinking about how Slack can really take a stance in accessibility and build the product to be something that says, “This is how Slack should work from an accessibility standpoint. And this is how we believe — with the feedback of our users and with what we've learned in our research — this is what we want to create.”

The other thing is thinking about each platform individually. Just because there's a cohesive picture for accessibility doesn't mean it's going to behave exactly the same on each platform. It might need to be different. Android is different from iOS, which is different from web, etc. And so a second-tier focus there is then thinking about, “OK, so now we've agreed on how we want to approach it. What does that look like on Android? What does that look like on a web product?”

And then certainly as well, looking for any broken windows, any things where we're like, “Hey, this needs to be better.” So one thing that you may have noticed if you happen to use our Android product is significant improvements in our larger text size support.

What do you mean by your goal that you want to “take a stance” on accessibility?

We're thinking about Slack really as the digital headquarters right now. This is a place you go to get work done. Part of that stance is making sure that Slack is a place for everyone to come and to get their work done. And it's really about Slack being this digital headquarters that is equitable, that is delightful to us and that is efficient for all of our users.

And the other part of taking a stance on accessibility is about how we do accessibility. Not just that our product will be equitable, but also how do we actually approach making that happen? And the approach part of it is really strongly based, for us, in user-centered design and user-centered engineering. From both perspectives — from the design perspective and from the perspective of which we build the products — we want to be sure that we're drawing from our users and understanding what they're experiencing and what they experience in other products and on different platforms.

What does the process look like for an accessibility improvement at Slack, from idea to implementation?

We'll prototype an idea and we'll get something working, something functional, say, for example, a screen-reader feature. And so we'll get something prototyped and ready to share with our internal user groups who are full-time assistive technology users. And by doing that we can get that early information as to whether or not this idea was good or not … and there has to be a really big willingness to be wrong, because sometimes we don't get it right.

From there, we’ll have a prototype, we’ll iterate with our internal user group and start to home in on what the product is going to be. And then that will develop into a feature brief and something that becomes part of our road map for the accessibility team. From there, it’s going to go through a pretty standard process of planning and execution all the way through. But through that planning and execution, we’ll continue to iterate with our user group, so it won’t just go into a box and come out the other side, but rather at various milestones, we will go back with them and say, “Hey, can you try this out? Give us feedback, take a pass at it.”

Would you say someone with an accessibility background should be in the room during every conversation about different Slack changes?

To some degree, yes. Obviously having accessibility in the room, in every meeting — we can't scale that. But when a feature is in the early phase — a great example would be something like the Huddles feature, where having someone say, “Hey, these are going to need captions" really early on — that's a really fantastic example of what happens when someone with an accessibility mindset is in that room really early.

What was your perspective as a user before joining Slack versus your experience since joining?

One of the things that drew me to Slack was noticing the progress they were making on accessibility. I'm always tinkering. I will make the text size way big, I will invert the colors on my phone, I will do all kinds of things just to see what happens, to learn about the product. And I was consistently noticing that Slack was making improvements. My friends who have visual impairments and my friends who are hard of hearing had made comments about some things they noticed and been impressed with the product. And so early on, I knew that Slack was a company that was really putting some great effort into accessibility, and that drew me there.

"[T]here has to be a really big willingness to be wrong, because sometimes we don't get it right.

Since coming in and joining, that perspective hasn't really changed. Now I just know the people who are doing this work. But since joining Slack, I noticed that the work was coming from a lot of different places. And so that was kind of what pulled me in. I thought, “Oh, it'd be so great if this were an accessibility team all under one roof working together,” because I think you can be more efficient that way. You can reach out through the company more successfully when you're coming from a centralized place. And it also helps people know who to go to when they have a question. So that was the shift I wanted to help Slack achieve by joining.

How has your background in psychology helped you in your role?

It's one of those degrees I did not anticipate I would utilize, and I found it very helpful in my career in technology and then specifically in accessibility. In studying psychology and doing a bit of psychology research through my undergrad, one of the things that I had to learn was a lot about thinking about how people think. And that particular skill … all of that became very, very useful then when I came to work in accessibility. “How could someone else experience this?” is the number one question we ask.

One of your goals at Slack is to identify “broken windows.” Are there any that you want to focus on in the coming year or so?

One that I’m very excited about is seeing us improve our Android product and how it handles text sizes. One of the challenges the recent changes are trying to solve stems from the fact that Slack is a web product on desktop. I wouldn’t necessarily say there are a lot of broken windows around it, but it creates challenges because it’s a web product running as a desktop app.

For a company that's scaling, how can you keep accessibility in mind?

It’s a really big challenge. No question there. As a company grows, one thing that is important to establish pretty early on is what the accessibility process looks like … It’s really important to have a process for something like accessibility in the same way we would for security, right?

As a company is growing and adding teams, it's important to have a way that says, “This is how Slack does accessibility.” So as a new team spins up, that process is already there for them. They just need to look into it. They don't have to reinvent the wheel as to what these things mean. The process and the “how” is already there. In Slack’s case, that means the design reviews and the accessibility review toward the end, and the office hours.


Snapchat relied on microservices and a multicloud strategy to overhaul its technology approach as it grew.

Jerry Hunter, senior vice president of engineering at Snap, told Protocol about its infrastructure.

Donna Goodison (@dgoodison) is Protocol's senior reporter focusing on enterprise infrastructure technology, from the 'Big 3' cloud computing providers to data centers. She previously covered the public cloud at CRN after 15 years as a business reporter for the Boston Herald. Based in Massachusetts, she also has worked as a Boston Globe freelancer, business reporter at the Boston Business Journal and real estate reporter at Banker & Tradesman after toiling at weekly newspapers.

In 2017, 95% of Snap’s infrastructure was running on Google App Engine. Then came the Annihilate FSN project.

Snap, which launched in 2011, was built on GAE — FSN (Feelin-So-Nice) was the name for the original back-end system — and the majority of Snapchat’s core functionality was running within a monolithic application on it. While the architecture initially was effective, Snap started encountering issues when it became too big for GAE to handle, according to Jerry Hunter, senior vice president of engineering at Snap, where he runs Snapchat, Spectacles and Bitmoji as well as all back-end or cloud-based infrastructure services.

“Google App Engine wasn't really designed to support really big implementations,” Hunter, who joined the company in late 2016 from AWS, told Protocol. “We would find bugs or scaling challenges when we were in our high-scale periods like New Year's Eve. We would really work hard with Google to make sure that we were scaling it up appropriately, and sometimes it just would hit issues that they had not seen before, because we were scaling beyond what they had seen other customers use.”

Today, less than 1.5% of Snap’s infrastructure sits on GAE, a serverless platform for developing and hosting web applications, after the company broke apart its back end into microservices backed by other services inside of Google Cloud Platform (GCP) and added AWS as its second cloud computing provider. Snap now picks and chooses which workloads to place on AWS or GCP under its multicloud model, playing the competitive edge between them.

The Annihilate FSN project came with the recognition that microservices would provide a lot more reliability and control, especially from a cost and performance perspective.

“[We] basically tried to make the services be as narrow as possible and then backed by a cloud service or multiple cloud services, depending on what the service we were providing was,” Hunter said.

Snapchat now has 347 million daily active users who send billions of short videos and photos, known as Snaps, or use its augmented-reality Lenses.

Its new architecture has resulted in a 65% reduction in compute costs, and Hunter said he has come to deeply understand the importance of having competitors in Snap’s supply chain.

“I just believe that providers work better when they've got real competition,” said Hunter, who left AWS as a vice president of infrastructure. “You just get better … pricing, better features, better service. We're cloud-native, and we intend on staying that way, and it's a big expense for us. We save a lot of money by having two clouds.”

The Annihilate FSN process wasn’t without at least one failed hypothesis. Hunter mistakenly thought that Snap could write its applications on one layer and that layer would use the cloud provider that best fit a workload. That proved to be way too hard, he said.

“The clouds are different enough in most of their services and changing rapidly enough that it would have taken a giant team to build something like that,” he said. “And neither of the cloud providers were interested at all in us doing that, which makes sense.”

Instead, Hunter said, there are three types of services that he looks at from the cloud.

“There's one which is cloud-agnostic,” he said. “It's pretty much the same, regardless of where you go, like blob storage or [content-delivery networks] or raw compute on EC2 or GCP. There's a little bit of tuning if you're doing raw compute but, by and large, those services are all pretty much equal. Then there's sort of mixed things where it's mostly the same, but it really takes some engineering work to modify a service to run on one provider versus the other. And then there's things that are very cloud-specific, where … only one cloud offers it and the other doesn't. We have to do this process of understanding where we're going to spend our engineering resources to make our services work on whichever cloud that it is.”
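As a rough illustration of that first, cloud-agnostic tier, a thin provider-neutral interface is often all it takes. The sketch below is hypothetical, not Snap’s code; real backends would wrap each provider’s storage SDK (such as boto3 for S3 or google-cloud-storage for GCS).

```python
# Hypothetical sketch of a "cloud-agnostic" service wrapper: blob storage
# behaves similarly enough across providers that a thin interface can hide
# which cloud sits underneath. Not Snap's implementation.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Provider-neutral interface for the cloud-agnostic tier."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Stand-in backend so the sketch runs without cloud credentials;
    a real subclass would call the AWS or GCP storage SDK instead."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

store: BlobStore = InMemoryBlobStore()  # swap in an S3- or GCS-backed class
store.put("snap/123", b"video-bytes")
assert store.get("snap/123") == b"video-bytes"
```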

Snap’s current architecture also has resulted in reduced latency for Snapchatters.

In its early days, Snap had its back-end monolith hosted in a single region in the middle of the United States — Oklahoma — which impacted performance and the ability for users to communicate instantly. If two people living a mile apart in Sydney, Australia, were sending Snaps to each other, for example, the video would have to traverse Australia's terrestrial network and an undersea cable to the United States, be deposited in a server in Oklahoma and then backtrack to Australia.

“If you and I are in a conversation with each other, and it's taking seconds or half a minute for that to happen, you're out of the conversation,” Hunter said. “You might come back to it later, but you've missed that opportunity to communicate with a friend. Alternatively, if I have just the messaging stack sitting inside of the data center in Sydney … now you're traversing two miles of terrestrial cable to a data center that's practically right next to you, and the entire transaction is so much faster.”
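The physics bear this out. As a back-of-the-envelope illustration (the figures below are rough assumptions of ours, not Snap’s measurements): light in optical fiber covers about 200,000 km per second, and Sydney to the central U.S. is on the order of 14,000 km of cable one way, so the propagation delay alone dwarfs a local round trip.

```python
# Back-of-the-envelope latency floor: propagation delay only, ignoring
# routing hops, server time and the last mile. Distances are rough assumptions.

FIBER_KM_PER_SEC = 200_000  # light travels ~1/3 slower in glass than in vacuum

def round_trip_ms(one_way_km: float) -> float:
    return 2 * one_way_km / FIBER_KM_PER_SEC * 1000

print(f"Sydney -> central U.S. -> Sydney: ~{round_trip_ms(14_000):.0f} ms minimum")
print(f"Sydney -> nearby data center:     ~{round_trip_ms(3):.2f} ms minimum")
# ~140 ms vs ~0.03 ms before any processing is even counted
```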

If I want to experiment and move something to Sydney or Singapore or Tokyo, I can just do it.

Snap wanted to regionalize its services where it made sense. The only way to do that was by using microservices and understanding which services were useful to have close to the customer and which ones weren't, Hunter said.

“Customers benefit by having data centers be physically closer to them because performance is better,” he said. “CDNs can cover a lot of the broadcast content, but when doing one-on-one communications with people — people send Snaps and Snap videos — those are big chunks of data to move through the network.”

That ability to switch regions is one of the benefits of using cloud providers, Hunter said.

“If I want to experiment and move something to Sydney or Singapore or Tokyo, I can just do it,” he said. “I'm just going to call them up and say, ‘OK, we're going to put our messaging stack in Tokyo,’ and the systems are all there, and we try it. If it turns out it doesn't actually make a difference, we turn that service off and move it to a cheaper location.”

Snap has built more than 100 services for very specific functions, including Delta Force.

In 2016, any time a user opened the Snapchat app, it would download or redownload everything, including stories that a user had already looked at but that hadn’t yet timed out of the app.

“It was … a naive deployment of just ‘download everything so that you don't miss anything,’” Hunter said. “Delta Force goes and looks at the client … finds out all the things that you've already downloaded and are still on your phone, and then only downloads the things that are net-new.”
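In pseudocode-level terms, the idea reduces to a simple set difference. The sketch below is a hypothetical illustration of that pattern, not Snap’s implementation.

```python
# Hypothetical sketch of the delta-download pattern Hunter describes:
# the client reports what it already holds, and the server returns only
# the net-new items instead of everything.

def delta_download(server_items: dict[str, bytes],
                   client_has: set[str]) -> dict[str, bytes]:
    """Return only the items missing from the client's cache."""
    return {item_id: blob for item_id, blob in server_items.items()
            if item_id not in client_has}

server_items = {"story-1": b"...", "story-2": b"...", "story-3": b"..."}
cached_on_phone = {"story-1", "story-3"}  # downloaded earlier, not yet timed out

print(delta_download(server_items, cached_on_phone))
# -> {'story-2': b'...'}  (only the new story crosses the network)
```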

This approach had other benefits.

“Of course, that turns out to make the app faster,” Hunter said. “It also costs us way less, so we reduced our costs enormously by implementing that single service.”

Snap uses open-source software to create its infrastructure, including Kubernetes for service development, Spinnaker for its application team to deploy software, Spark for data processing and memcached/KeyDB for caching. “We have a process for looking at open source and making sure we're comfortable that it's safe and that it's not something that we wouldn't want to deploy in our infrastructure,” Hunter said.

Snap also uses Envoy, an edge and service proxy and universal data plane designed for large, microservice service-mesh architectures.

“I actually feel like … the way of the future is using a service mesh on top of your cloud to basically deploy all your security protocols and make sure that you've got the right logins and that people aren't getting access to it that shouldn't,” Hunter said. “I'm happy with the Envoy implementations giving us a great way of managing load when we're moving between clouds.”

Hunter prefers using primitives or simple services from AWS and Google Cloud rather than managed services. A Snap philosophy that serves it well is the ability to move very fast, Hunter said.

“I don't expect my engineers to come back with perfectly efficient systems when we're launching a new feature that has a service as a back end,” he said, noting many of his team members previously worked for Google or Amazon. “Do what you have to do to get it out there, let's move fast. Be smart, but don't spend a lot of time tuning and optimizing. If that service doesn't take off, and it doesn't get a lot of use, then leave it the way it is. If that service takes off, and we start to get a lot of use on it, then let's go back and start to tune it.”

Our total compute cost is so large that little bits of tuning can have really large amounts of cost savings for us.

It’s through that tuning process, and the understanding of how a service operates that comes with it, that cycles of cloud usage can be reduced for instant cost savings, according to Hunter.

“Our total compute cost is so large that little bits of tuning can have really large amounts of cost savings for us,” he said. “If you're not making the sort of constant changes that we are, I think it's fine to use the managed services that Google or Amazon provide. But if you're in a world where we're constantly making changes — like daily changes, multiple-times-a-day changes — I think you want to have that technical expertise in house so that you can just really be on top of things.”

Three factors figure into Snap’s ability to reap cost savings: the competition between AWS and Google Cloud, Snap’s ability to tease out costs through its own engineering work, and regularly going back to the cloud providers to evaluate their new products and services.

“We're in a state of doing those three things all the time, and between those three, [we save] many tens of millions of dollars,” Hunter said.

Snap holds a “cost camp” every year where it asks its engineers to find all the places where costs possibly could be reduced.

“We take that list and prioritize that list, and then I cut people loose to go and work on those things,” he said. “On an annual basis, depending on the year, it’s many tens of millions of dollars of cost savings.”

Snap has considered adding a third cloud provider, and it could still happen some day, although the process is pretty challenging, according to Hunter.

“It's a big lift to move into another cloud, because you've got those three layers,” he said. “The agnostic stuff is pretty straightforward, but then once you get to mixed and cloud-specific, you've got to go hire engineers that are good at that cloud, or you've got to go train your team up on … the nuances of that cloud.”

Enterprises considering adding another cloud provider need to make sure they have the engineering staff to pull it off: 20 to 30 dedicated cloud people as a starting point, Hunter said.

“It's not cheap, and second, that team has to be pretty sophisticated and technical,” he said. “If you don't have a big deployment, it's probably not worth it. I think about a lot of the customers I used to serve when I was in AWS, and the vast majority of them, their implementations … were serving their company's internal stuff, and it wasn't gigantic. If you're in that boat, it's probably not worth the extra work that it takes to do multicloud.”


Tech companies that rely on cloud computing and want to reduce their carbon emissions should take a long look at a new report.

A new report has revealed the most climate-friendly regions in which to operate data centers.

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter ( @l_m_j_) or reach out via email (ljenkins@protocol.com).

The report’s findings point to the challenges holding the sector back from reducing carbon emissions, as well as ways tech companies can mitigate the climate toll of their cloud computing demands.

The report, released Thursday by cloud management platform Cirrus Nexus, analyzed the energy consumed over the course of a week in regions of the U.S. and Europe where major cloud service providers tend to concentrate their data centers. It then estimated each region’s carbon intensity, a metric of the amount of carbon dioxide emitted per unit of electricity generated (in this case, grams per kilowatt hour).
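In concrete terms, a grid’s carbon intensity is a weighted average over its generation mix. The sketch below works one out with illustrative emission factors and made-up mixes of our own choosing; these are not figures from the Cirrus Nexus report.

```python
# Worked example of carbon intensity (gCO2 per kWh) as a weighted average
# over a grid's generation mix. Emission factors and mixes are illustrative
# assumptions, not data from the report.

GRAMS_CO2_PER_KWH = {"coal": 820, "gas": 490, "wind": 11, "nuclear": 12, "hydro": 24}

def carbon_intensity(mix: dict[str, float]) -> float:
    """Weighted gCO2/kWh for a grid, given each source's share (shares sum to 1)."""
    return sum(share * GRAMS_CO2_PER_KWH[source] for source, share in mix.items())

coal_heavy = {"coal": 0.5, "gas": 0.3, "wind": 0.2}      # Midwest-style mix
low_carbon = {"nuclear": 0.7, "hydro": 0.2, "gas": 0.1}  # Sweden/France-style mix

print(f"coal-heavy grid: {carbon_intensity(coal_heavy):.0f} gCO2/kWh")  # ~559
print(f"low-carbon grid: {carbon_intensity(low_carbon):.0f} gCO2/kWh")  # ~62
```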

Chris Noble, CEO and co-founder of Cirrus Nexus, said the report emerged out of a desire to recommend the regions with the least carbon-intensive data centers. However, Noble said, there’s “not a simple answer.” While regions that rely the most on solar, wind, hydro and nuclear power tend to have the lowest carbon intensity, that measure fluctuates dramatically due to renewables’ intermittency when the sun isn’t shining or the wind isn’t blowing.

In the U.S., Midwestern data centers were consistently among the most carbon-intensive due to the grid’s heavy reliance on coal and methane gas. Texas, in comparison, relies on both wind and gas. That leaves it a cut above the Midwest but worse off than the Northwest, where hydropower plays a major role in electricity generation.

In Europe, data centers located in Sweden and France — both of which rely largely on nuclear, though Sweden has abundant hydro resources as well — had the lowest carbon intensities. The countries also avoided the peaks and valleys in carbon intensity common across countries like Italy and Germany, which have solar infrastructure but rely on fossil alternatives when the sun is not shining.

Ireland offered a particularly stark example of the swings in carbon intensity that come with renewables. The country started the week Cirrus Nexus analyzed with a carbon intensity in the middle of the European pack. But when the wind slackened mid-week, it became the dirtiest region in Europe. Once the wind picked up again, though, Ireland rocketed to third-cleanest and even generated excess power, which it exported to the U.K.

The report emphasized the importance of increasing energy storage. Doing so would allow the grid — and the cloud computing infrastructure that relies on it — to smooth out the inconsistency of renewables without relying on fossil fuels in the absence of sun or wind.

Noble said it would behoove companies to factor fluctuating carbon intensities into where they locate their operations, if minimizing their climate toll is deemed a corporate priority: “Companies should also focus on optimizing their operations in order to reduce total emissions, not just use carbon credits to offset,” he added.

However, Noble said companies that buy cloud computing services historically have had a blind spot for the emissions tied to data center operations, and factors like cost and proximity to a company’s main operations generally outweigh carbon intensity when selecting a cloud computing provider.

Complicating matters is the fact that the regions with the lowest carbon intensity also tend to offer the most expensive cloud computing services. And the report points out that if demand for clean computing increases, it could actually drive up prices even more in the short to medium term, at least until more carbon-free generation capacity comes online.

Tech companies with cloud computing workloads generally look to cloud management platforms to oversee both their systems and how much they spend on them. Cirrus Nexus advises companies to design their applications so that their workloads can be moved between data centers to keep costs down as prices fluctuate over time; according to Noble, an increasing number of the company’s clients have asked about managing carbon as well.
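A placement decision like that can be framed as scoring regions on a blend of price and carbon intensity. The sketch below is a made-up illustration of that framing, with invented regions, prices and weights; it is not Cirrus Nexus’ algorithm or data.

```python
# Hypothetical region-placement scorer: blend normalized price and carbon
# intensity, then move movable workloads to the lowest-scoring region.
# All regions, prices and intensities here are invented for illustration.

REGIONS = {
    # name: ($ per vCPU-hour, gCO2 per kWh)
    "us-central":   (0.030, 560),
    "europe-north": (0.034, 45),
    "europe-west":  (0.032, 310),
}

def best_region(carbon_weight: float) -> str:
    """Pick a region; carbon_weight=0 is cost-only, 1 is carbon-only."""
    max_price = max(price for price, _ in REGIONS.values())
    max_carbon = max(carbon for _, carbon in REGIONS.values())

    def score(name: str) -> float:
        price, carbon = REGIONS[name]
        return ((1 - carbon_weight) * price / max_price
                + carbon_weight * carbon / max_carbon)

    return min(REGIONS, key=score)

print(best_region(carbon_weight=0.0))  # -> us-central (cheapest)
print(best_region(carbon_weight=0.8))  # -> europe-north (cleanest wins)
```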

Ultimately, Noble said the carbon intensity of cloud operations is a function of what customers demand. If they suddenly tell cloud computing providers that they will go somewhere else unless the provider minimizes its carbon intensity, Noble said there could be a rush to bolster data centers with solar panels or storage.

But that all starts with companies actually factoring carbon intensity into their decision of where to go to get their cloud computing needs met.

