On a clear, cool late October evening, the residents of the village of Star, U.K., turned off their lights, left their homes and gathered in a field. The mayor of the tiny Welsh hamlet was already there, serving everyone tea and coffee, and people settled into deck chairs set up for the occasion—an unusual sight for a village of 70 or so inhabitants. Because despite having one of the clearest night skies in all of the U.K., it turns out that residents of Wales are the least likely to pause and look up at the stars.
We thought the launch of astrophotography on Pixel 4’s Night Sight mode was a great opportunity to try and change that, and where better to start than the aptly named Star? Photos of the night sky have traditionally been best left to the experts, but Pixel 4 makes it easy for anyone to snap a stunning shot of the Milky Way. So we brought a handful of new phones, along with some chairs and tripods, to give the people of Star a new way to stargaze. Here are a few shots from the night, taken on Pixel 4.
Star, UK in the daylight
Pixel 4 set up at sunset
As the sun goes down, getting ready to take photos.
Left: a Pixel 4 photographer admires his Night Sight shot. Right: another shot of the night sky in Star.
If you’re looking to try your hand at astrophotography, our camera team’s lead engineer, Marc Levoy, has a few tips for you.
Hold your phone still. Use two hands to hold your device. Tuck your elbows into your sides, and hold the phone close to your chest. Spread your feet apart to create a stable base. Lean against a wall or other solid object to keep yourself from swaying back and forth.
Use a makeshift tripod. When the Pixel is held still against anything stable—a tree trunk, a big rock, a car hood—the camera enters a “braced” mode. It will use longer exposures, giving you even more detail and less noise than when you hand-hold it.
Be patient. In very dark environments, the Pixel 4 may need some time to gather enough light. The phone tips you off to that with a countdown timer, which may be up to four minutes if you’re using a tripod. But if you’re about to get interrupted—say a car headlamp is about to come into the picture—you can end the capture early with the stop button.
Let autofocus do its thing. For best results, we recommend just letting the camera do the work and using autofocus. But if you’re determined to strike out on your own, select a manual focus mode (“near” or “far”) on the toolbar. “Far” is what you’ll want for astro shots. If your subject is close to you (within six feet), choose “near.”
Play around with the exposure compensation slider. Took a night sky photo that was too bright or too dark? Try again and adjust the exposure compensation slider: Tap on your subject, move the exposure slider up or down, and take a photo. Tap again to reset it.
Coming up with big ideas in technology used to take the kind of time and money that only large companies had. Now open source tools—like TensorFlow, which provides access to Google’s machine learning technology—mean anyone with a smart concept has the opportunity to make it a reality. Just ask Arjun Taneja and Vayun Mathur, two friends and high school students from Singapore with a big ambition to improve recycling rates.
Arjun and Vayun realized that separating waste is sometimes confusing and cumbersome—something that can derail people's good intentions to recycle. Using TensorFlow, they built a “Smart Bin” that can identify types of trash and sort them automatically. The Smart Bin uses a camera to take a picture of the object inserted in the tray, then analyzes the picture with a Convolutional Neural Network, a type of machine learning algorithm designed to recognize visual objects.
To train the algorithm, Arjun and Vayun took around 500 pictures of trash like glass bottles, plastic bottles, metal cans and paper. Training on that much data would normally be laborious and expensive, but by using Colab, Google’s free hosted notebook platform, the students could access a high-powered graphics processing unit (GPU) in the cloud at no cost. They were also able to use Tensor Processing Units, Google’s machine learning processors that power services like Translate, Photos, Search, Assistant and Gmail. These tools let their system analyze large amounts of data at once, so the students could correct the model whenever it didn’t recognize an object, and the model learned to classify objects more quickly as a result. Once the Smart Bin was trained, all they had to do was place an object in the tray, and the system could predict whether it was metal, plastic, glass or paper—with the answer popping up on a screen.
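For readers curious what that kind of training looks like in practice, here is a minimal sketch using TensorFlow’s Keras API. It is not the students’ actual code: the folder name and model architecture are invented for illustration, assuming the photos are sorted into one directory per material.

```python
import tensorflow as tf

# Assumed layout (illustrative): photos sorted into one folder per material, e.g.
# trash_photos/glass, trash_photos/plastic, trash_photos/metal, trash_photos/paper.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "trash_photos", image_size=(224, 224), batch_size=32)

# A small convolutional neural network that maps an image to one of four materials.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),              # scale pixel values to [0, 1]
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4),                           # one logit per material class
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=10)  # a free cloud GPU makes this step much faster
```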
Building on their successful trials at home, Arjun and Vayun showcased the Smart Bin with a stall at last week’s Singapore Maker Faire, and they continue to work on other projects. It’s a great example of how tools available in the cloud are cutting out processes and costs that might have held back this kind of invention in the past.
Editor’s note: Today’s post comes from Will Butler of Be My Eyes, whose recent partnership with Google makes support more accessible for people who are blind or have low vision.
As a blind person, accessibility is everything. If the products or services that I use aren’t accessible, I can’t communicate with friends and family, hold down a job, buy things, or invest my money. This experience is a reality for the over 253 million people who are blind or have low vision, and the millions more who face daily accessibility barriers with the products, services, and websites they use.
Be My Eyes enables people who are blind or have low vision to live more independently with the support of nearly 3 million sighted volunteers. These volunteers are a lifeline for many people around the world, providing a pair of eyes over a quick video call to help with everyday tasks—like figuring out when the chicken is finished cooking, playing a video game or learning how to use a new washer for the first time.
Left phone: Shows the Google profile on the Be My Eyes Specialized Help platform. Right phone: Shows a Be My Eyes call with a Google support agent helping with a Pixelbook
Often Be My Eyes users need help with a task—like setting up a new phone—that requires an expert pair of eyes. Over the last year, Be My Eyes has partnered with companies and organizations, like Google and Microsoft, who use the app to provide specialized support for their products.
Using the Be My Eyes app, someone can get help directly from Google’s Disability Support team. This means that if someone with low vision wants help turning on screen reader support for their Pixel phone, they can talk to a Google support agent directly to get help with their question. The Google Assistant can help with this, too. By saying "Hey Google, open Be My Eyes for Google" you’ll be immediately connected to the Disability Support team.
One of the coolest parts about the accessibility community is the spirit of collaboration that exists. Today, together with Google and Microsoft, we're calling for more technology companies to join the Be My Eyes platform, make their support centers accessible with video, and help this community of people be more independent. Become part of this mission by joining us today—email [email protected].
With phones becoming more crucial to every part of daily life, more people are taking steps to find their balance with technology. To help them do this, we’re making Digital Wellbeing a part of our products, like Wind Down on Android and Take a Break reminder on YouTube. Today, in support of our efforts to extend our best practices to the community, we’re launching Digital Wellbeing Experiments—a platform to encourage designers and developers to build digital wellbeing into their products. Anyone can use the platform to share their ideas and experimental tools to help people find a better balance with technology.
To kick it off, we created five helpful and even playful digital wellbeing experimental apps. Each experiment centers around a different behavior, offering small ways to help improve your digital wellbeing and find a balance that feels right for you.
For example, Unlock Clock displays the number of times you unlock your phone in a day. We Flip helps families and friends disconnect from technology together by simultaneously flipping a big switch, setting their phones to “Do Not Disturb” mode. And Desert Island helps you to find focus by spending a day only using your essential apps.
We’ve open-sourced the code and created guides for others to make their own experiments. We hope these experiments inspire developers and designers to keep digital wellbeing top of mind when building technology. The more people that get involved, the more we can all learn how to build better technology for everyone. If you want to create your own experiment, you can add it to the collection at g.co/digitalwellbeingexperiments.
Today, Nature published its 150th anniversary issue with the news that Google’s team of researchers has achieved a big breakthrough in quantum computing known as quantum supremacy. It’s a term of art that means we’ve used a quantum computer to solve a problem that would take a classical computer an impractically long amount of time. This moment represents a distinct milestone in our effort to harness the principles of quantum mechanics to solve computational problems.
While we’re excited for what’s ahead, we are also very humbled by the journey it took to get here. And we’re mindful of the wisdom left to us by the great Nobel Laureate Richard Feynman: “If you think you understand quantum mechanics, you don't understand quantum mechanics.”
In many ways, the exercise of building a quantum computer is one long lesson in everything we don’t yet understand about the world around us. While the universe operates fundamentally at a quantum level, human beings don’t experience it that way. In fact, many principles of quantum mechanics directly contradict our surface-level observations about nature. Yet the properties of quantum mechanics hold enormous potential for computing.
A bit in a classical computer can store information as a 0 or a 1. A quantum bit—or qubit—can be both 0 and 1 at the same time, a property called superposition. So with two qubits there are four possible states you can put in superposition, and the number of states doubles with every qubit you add. With 333 qubits there are 2^333, or about 1.7x10^100 (a googol), computational states you can put in superposition, allowing a quantum computer to simultaneously explore a rich space of many possible solutions to a problem.
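That last figure is easy to sanity-check; a couple of lines of Python confirm that 2^333 basis states is indeed on the order of a googol (10^100):

```python
# Each added qubit doubles the number of basis states, so 333 qubits give 2**333.
states = 2 ** 333
print(f"{states:.2e}")  # prints 1.75e+100, roughly a googol (10**100)
```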
As we scale up the computational possibilities, we unlock new computations. To demonstrate supremacy, our quantum machine successfully performed a test computation in just 200 seconds that would have taken the best known algorithms running on the most powerful supercomputers thousands of years to accomplish. We are able to achieve these enormous speeds only because of the quality of control we have over the qubits. Quantum computers are prone to errors, yet our experiment showed the ability to perform a computation with few enough errors at a large enough scale to outperform a classical computer.
For those of us working in science and technology, it’s the “hello world” moment we’ve been waiting for—the most meaningful milestone to date in the quest to make quantum computing a reality. But we have a long way to go between today’s lab experiments and tomorrow’s practical applications; it will be many years before we can implement a broader set of real-world applications.
We can think about today’s news in the context of building the first rocket that successfully left Earth’s gravity to touch the edge of space. At the time, some asked: Why go into space without getting anywhere useful? But it was a big first for science because it allowed humans to envision a totally different realm of travel … to the moon, to Mars, to galaxies beyond our own. It showed us what was possible and nudged the seemingly impossible into frame.
That’s what this milestone represents for the world of quantum computing: a moment of possibility.
It’s been a 13-year journey for Google to get here. In 2006, Google scientist Hartmut Neven started exploring the idea of how quantum computing might help our efforts to accelerate machine learning. This work led to the founding of our Google AI Quantum team, and in 2014, John Martinis and his team at the University of California at Santa Barbara joined us in our efforts to build a quantum computer. Two years later, Sergio Boixo published a paper that focused our efforts around the well-defined computational task of quantum supremacy, and now the team has built the world’s first quantum system that exceeds the capabilities of supercomputers for this particular computation.
We made these early bets because we believed—and still do—that quantum computing can accelerate solutions for some of the world's most pressing problems, from climate change to disease. Given that nature behaves quantum mechanically, quantum computing gives us the best possible chance of understanding and simulating the natural world at the molecular level. With this breakthrough we’re now one step closer to applying quantum computing to—for example—design more efficient batteries, create fertilizer using less energy, and figure out what molecules might make effective medicines.
Those applications are still many years away and we are committed to building the error-corrected quantum computer that will power these discoveries. We’ve always known that it would be a marathon, not a sprint. The thing about building something that hasn’t been proven yet is that there is no playbook. If the team needed a part, they had to invent it and build it themselves. And if it didn’t work—and often, it didn’t—they had to redesign and build it again.
One turning point came in October 2018, when the wildfires were raging in Southern California. I got a message that they would need to close down the Santa Barbara laboratory for a few days out of an abundance of caution. What I didn’t know was that the team had been experiencing one of those periods where progress had slowed to a crawl. The few days of forced vacation helped the team to reset and think about things differently, and a few months later, they made this breakthrough.
As with any advanced technology, quantum computing raises its own anxieties and questions. In thinking through these issues, we’re following a set of AI principles that we developed to help guide responsible innovation of advanced technology. For example, for many years the security community, with contributions from Google, has been working on post-quantum cryptography, and we’re optimistic we are ahead of the curve when it comes to future encryption concerns. We will continue to publish research and help the broader community develop quantum encryption algorithms using our open source framework Cirq. We’ve appreciated the National Science Foundation’s support for our researchers, and we’ve collaborated with NASA Ames and Oak Ridge National Laboratory on this latest result. As was the case with the Internet and machine learning, government support of basic research remains critical to long-term scientific and technological achievement.
I am excited about what quantum computing means for the future of Google and the world. Part of that optimism comes from the nature of the technology itself. You can trace the progress from the mega-computers of the 1950s to advances we’re making in artificial intelligence today to help people in their everyday lives.
Quantum computing will be a great complement to the work we do (and will continue to do) on classical computers. In many ways quantum brings computing full circle, giving us another way to speak the language of the universe and understand the world and humanity not just in 1s and 0s but in all of its states: beautiful, complex, and with limitless possibility.
Quantum computing: It sounds futuristic because until recently, it was. But today we’re marking a major milestone in quantum computing research that opens up new possibilities for this technology.
Unlike classical computing, which runs everything from your cell phone to a supercomputer, quantum computing is based on the properties of quantum mechanics. As a result, quantum computers could potentially solve problems that would be too difficult or even impossible for classical computers—like designing better batteries, figuring out what molecules might make effective medicines or minimizing emissions from the creation of fertilizer. They could also help improve existing advanced technologies like machine learning.
Today, the scientific journal Nature has published the results of Google’s efforts to build a quantum computer that can perform a task no classical computer can; this is known in the field as “quantum supremacy.” In practical terms, our chip, which we call Sycamore, performed a computation in 200 seconds that would take the world’s fastest supercomputer 10,000 years.
This achievement is the result of years of research and the dedication of many people. It’s also the beginning of a new journey: figuring out how to put this technology to work. We’re working with the research community and have open sourced tools to enable others to work alongside us to identify new applications. Learn more about the technical details behind this milestone on our AI blog.
When I served as U.S. Ambassador to Vietnam, it often struck me that young people there had vastly more access to news and information than I did when I first lived in that country 20 years earlier—a sign of how things can change for the better from generation to generation.
The internet has enabled people in Vietnam and across Asia Pacific to learn, connect and express themselves in ways we couldn’t have imagined in the past. We need to keep expanding those opportunities, but we also need to help the next generation explore the internet with confidence as they come online.
In Southeast Asia, this includes programs run by the Indonesian Anti-Slander Society and the Child and Youth Media Institute in Thailand to create video teaching tools for local schools, building on a pilot program we developed with the University of Hong Kong. And today we took the next step, announcing that Google.org will support a new initiative run by the Asian Institute of Journalism and Communication in the Philippines. The funding will enable the AIJC to hold “school summits” across the country, training 300 high school teachers so they can teach media literacy to around 9,000 students each year—helping them tell the difference between misinformation and reliable news online.
We asked Ramon Tuazon, President of the AIJC, to tell us a bit more.
In 2017, the Philippines became the first country in Asia to make media and information literacy (MIL) part of its high school curriculum. Why is this so important?
When we first started discussing adding MIL to the curriculum in 2013, we knew we had to address misrepresentation and propaganda in traditional media as well as social media. But we also had to deal with the new challenges the internet has created, including the fact that young people are becoming media literate online before they learn ethics and responsibility in how to use technology.
With the new campaign, what do you hope students and teachers get out of the experience?
I hope the students gain new perspectives and better understand how to verify news, deal with their biases and be sensitive to misinformation and disinformation. For teachers, I hope the training helps them learn new, creative and engaging teaching approaches. Over the long term, I hope both teachers and students will be able to go out and challenge misinformation on social media and other platforms.
What’s next after this initial campaign?
We’ll be working closely with the Department of Education to continue improving how we teach media and information literacy as part of the curriculum, including through new tools and better teacher training. Our challenge is to expand this new initiative nationwide.
Nest Mini hits shelves today, just in time to help you catch up with what’s been happening in Arendelle as Disney’s “Frozen 2” arrives in theaters. Join Elsa, Anna, Kristoff and Olaf around the campfire for a dozen tales that you can only hear on the new Nest Mini (and other Google Assistant smart speakers), available starting today.
To get started, just say, “Hey Google, tell me a Frozen story” and you can pick which character you’d like to narrate (and more stories will be coming by the end of the year).
For families who want to relive the films at home, Google Home and Nest smart speakers can help read along with you. When you read Disney’s “Frozen” and “Frozen 2” Little Golden Books aloud, the Google Assistant brings the books to life with sound effects and music. Just say, “Hey Google, read along with Frozen 2” to get started.
You can pick up your Nest Mini, Google’s newest smart speaker with better sound, an upgraded Assistant and a sustainable design, at the Google Store, Target, Best Buy and other retailers. So get ready to gather the family around your new Nest Mini, go back to Arendelle with Disney’s “Frozen 2” stories, and be sure to catch the film in theaters on November 22.
Fi helps you get the highest-quality connection on your phone, no matter where you are. On Designed for Fi devices, our network switches between three 4G LTE networks and gives you free, secure access to 2 million Wi-Fi hotspots around the world. Our enhanced network provides you with more security and privacy across all your connections, especially when you’re switching between Wi-Fi and cellular on the go.
Fi's new dual connect technology, available first on Pixel 4, gives you even better coverage by connecting you to two LTE networks at once. This means that if you’re watching a video and Fi switches you to a better network, you won't experience any delays or pauses—you won’t even notice.
Curious how this works? Connecting to two networks at once is possible with Dual SIM Dual Standby (DSDS) hardware. While only one network at a time is used for sending data or making phone calls, dual connect technology improves your connection by keeping both networks ready for use at a moment’s notice and allowing for more frequent network switching. To connect to two networks simultaneously with DSDS, Fi uses a physical SIM card and the eSIM at the same time.
If you purchased a Pixel 4 from Google Fi or the Google Store, your phone will be ready for dual connect technology as it becomes available over the next few weeks. Over time, we’ll expand dual connect technology to more Designed for Fi devices.
Pixel 4 is our latest phone that can help you do a lot of stuff, like take pictures at night or multitask using the Assistant. With on-device AI, your camera can translate foreign text or quickly identify a song that’s playing around you. Everything needed to make these features happen is processed on your phone itself, which means that your Pixel can move even quicker and your information is more secure.
Lens Suggestions
When you point your camera at a phone number, a URL, or an email address using Pixel, Google Lens already helps you take action by showing you Lens Suggestions. You can call the number, visit the URL or add the email address to your contacts with a single tap. Now, there are even more Lens Suggestions on Pixel 4. If you’re traveling in a foreign country and see a language you can’t read, just open your camera and point it at the text, and you’ll see a suggestion to translate that text using Lens. For now, this works for English, Spanish, German, Hindi and Japanese text, with more languages to come soon.
There are also Lens Suggestions for copying text and scanning a document, which are processed and recognized on-device as well. So if you point your camera at a full page document, you’ll see a suggestion to scan it and save it for later using Lens.
Lens will prompt you with a suggestion to translate foreign text, which happens on device. Then, you’ll see the translation in your native language.
Recorder
Remember that time you were in a brainstorm, and everyone had good ideas, but no one could remember them the next day? Or that meeting when you weren’t paying attention because you were too busy taking notes? With the Recorder app on Pixel 4, you can record, transcribe and search for audio clips. It automatically transcribes speech and tags sounds like applause (say your great idea was met with cheers!), music or whistling, and more, so you can find exactly what you’re looking for. You can search within a specific recording, or your entire library of recordings, and everything you record stays on your phone and private to you. We're starting with English for transcription and search, with more languages coming soon.
Now Playing is a Pixel feature that identifies songs playing around you. If that song gets stuck in your head and you want to play it again later, Now Playing History will play it on your favorite streaming service (just find the song you want and tap it to listen on Spotify, YouTube Music and more). On Pixel 4, Now Playing uses a privacy-preserving technology called Federated Analytics, which figures out the most frequently recognized songs on Pixel devices in your region without collecting individual audio data from your phone. This makes Now Playing even more accurate, because the database updates with the songs people are most likely to hear (without Google ever seeing what you listen to).
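As a toy illustration of that aggregate-only idea (and not Google’s actual Federated Analytics implementation), the Python sketch below shows how regional song counts could be tallied when each phone only ever contributes its local counts, never the audio it heard. The song names and numbers are made up.

```python
from collections import Counter

# Each device keeps a local tally of the songs it recognized; only these counts
# (never raw audio) are shared with the aggregator.
device_counts = [
    Counter({"Song A": 3, "Song B": 1}),  # phone 1's local recognitions
    Counter({"Song A": 2, "Song C": 4}),  # phone 2's local recognitions
    Counter({"Song B": 2}),               # phone 3's local recognitions
]

# The aggregator only ever sees the sums, so it learns which songs are popular
# in a region without learning what any individual phone heard.
regional_totals = Counter()
for counts in device_counts:
    regional_totals += counts

print(regional_totals.most_common(2))  # [('Song A', 5), ('Song C', 4)]
```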
With so much processing happening directly on your Pixel 4, it’s even faster to access the features that make you love being a #teampixel member. Pre-order Pixel 4 or head out to your local AT&T, Verizon, T-Mobile or Sprint store on October 24.
Traveling to new places is fun and exciting—but for a lot of people, planning what to do once you’re there is not. It often involves hours of research: finding things to do and see, reading travel guides and blogs, comparing prices and asking friends for recommendations. Even after all that research, there’s always a fear of wasting precious vacation time on disappointing experiences, or missing out on the most important things to do. And travelers who skip the planning process entirely can find themselves paying the price when a popular attraction or activity is sold out or closed.
Our small team decided to work on improving the research and booking experience for tours and activities within Area 120, Google’s internal incubator for experimental ideas. Area 120 was created to provide Google employees a place to pursue and test their promising ideas full-time, with dedicated support and resources. Through this program, we set out to build a new tool that addresses the need for a centralized destination to plan your next vacation.
Touring Bird helps you explore and compare prices and options across providers and makes it easier to book tours, tickets and activities in top destinations around the world. You can do all this in a single place—saving both research time and money. We also wanted to make travel more fun and memorable by making it easier to discover unique things to do, like how to trace the footsteps of Sherlock Holmes in London, visit Senso-ji temple in Tokyo at night or explore hidden catacombs in New York City. For travelers looking for ideas, Touring Bird presents options by interest (like local eats, photo-worthy spots or kid-friendly activities) and offers curated recommendations from travel writers, locals, and destination experts.
After an initial test launch in Paris in early 2018, we expanded Touring Bird to cover 20 of the world’s top destinations later that year. We learned a lot from the people using Touring Bird, and we continued tweaking the product based on their feedback. For example, based on the insight that every traveler has unique interests and needs, we introduced a new way to customize and compare options across multiple providers. And earlier this year, we expanded even further to cover 200 destinations. So far, the reaction from users and partners has been overwhelmingly positive.
Today we are announcing that Touring Bird is successfully moving on from the experimental environment of Area 120 into Google, where the team will continue building compelling experiences for travelers and connecting them to tour and activity providers in destinations around the world. That way, the next time you're booking a vacation, planning what to do during your trip is one less thing to worry about.
Celebrate every goal and dribble along with your favorite team: The 2019/2020 UEFA Champions League season, Europe’s top club football tournament, is underway. The competition, which has been held annually since 1955, sees 32 teams compete across five rounds for the title of best club in European football. This year marks the tournament’s 65th season, and it all leads to Turkey: The final will be hosted at the Atatürk Olympic Stadium in Istanbul on May 30, 2020.
From September through May, you can follow along with our special experience on Google News. Search for and follow the Champions League topic on both desktop and mobile and get pre-match, live and post-match updates and video highlights of every game. You can also dive deep into club and player news including line-ups, game stats, analysis, injury reports, trade rumors, tweets and much more. With our new wheel format, you can quickly view every upcoming opponent and be prepared for the biggest games.
The experience will be available across Android, iOS and web platforms, so you’ll never miss a minute no matter where you are. Will your club be crowned best in Europe? Follow along now with Google News.
What is love? Do I have free will? Is there anybody out there?
These are some of life’s universal questions—questions that many of us, in fact, may bring to Google Search. And while Search can help us get started, we were curious to see what would happen if we brought in performing and visual artists in the UK.
Google Arts & Culture teamed up with BBC Arts to produce “You Asked, Art Answered,” our first collaboration with the BBC. In an unconventional pairing of Search Trends with art, six UK artists from different fields, including the visual arts, poetry and choreography, chose one question and created an imaginative short film to illustrate their response.
“What does it mean to be British?” British-Iranian visual artist and provocateur Sarah Maple asked members of the public in the UK, and lip-synced their answers while dressed up as different British icons.
“How do you know you’re in love?” Writer, performer and illustrator Jessie Cave uses her own script and special characters and illustrations to explore possible responses.
“What is love?” Artist Andy Holden’s avatar seeks the answers as he wanders through a cartoon world and recites lyrics to well-known pop songs.
“Do I have free will?” In choreographer and dancer Jamiel Laurence’s energetic piece, two dancers tussle for control over themselves and each other.
“What if I fall?” In “Sensational Simmy,” writer and filmmaker Runyararo Mapfumo imagines a champion runner who faces challenges on her way home one night.
Jamiel Laurence ponders, “Do I have free will?”
Sarah Maple’s question: What does it mean to be British?
Jessie Cave asked, “How do you know you’re in love?”
Andy Holden meditates on, “What is love?”
Salena Godden would like to know: “Is there anybody out there?”
Runyararo Mapfumo wonders, “What if I fall?”
The artists also delved into our partners’ virtual collections on Google Arts & Culture and, using our search tools like Themes, Mediums and Historical Figures, each chose six artworks that correspond to their question. With images ranging from Picasso to Daffy Duck, Andy Holden hopes we might learn something new about the meaning of love. Sarah Maple, enlightening us about British identity, opted for a photo of the Queen laughing and Sarah Lucas’s “Self Portrait with Fried Eggs.” You can explore their selections, and read exclusive interviews with each artist, on our website. Watch the full films on BBC Arts.
With fall around the corner here in the U.S., our thoughts at Google Cloud are turning to … baking. Apple pies, applesauce, apple crisps—we’ve got it all covered.
Because, for us, when we think of apples, what comes to mind is virtualization, which is the way computer servers are divided up to be more efficient. No, really. Bear with us while we explain why. Most of the computers running the applications you use, like email and web browsing, are not just one computer. They’re a set of computers, divided up into virtual computers, also called virtual machines. (There are millions upon millions of VMs in the world, so you have an idea of the scale.)
When this concept first arrived, it changed computing entirely. Instead of one computer in one physical box (remember those desktop towers we all used to have at work?), that one physical box could now hold multiple computers that members of the IT team managed through software (called a hypervisor). So the one computer that held all of the HR department’s applications and files, for example, could now also hold the finance team’s applications and files too, without having to buy another computer. Here’s where the apples come in: If you think of a single, non-virtualized computer as a single apple, virtualization is that apple, but sliced up.
But what about virtualization in the cloud? In the years since virtualization was invented, it’s come a long way, especially as the cloud has come into the picture. Now, each of those virtual machines (VMs) no longer needs to be managed by that special software on-premises; it can be moved to the cloud and managed there. So there are different ways a company might choose to move their VMs—usually containing most or all of the applications and data that actually run their companies—into the cloud. They might just move those apple slices as is from their grocery store package (on-prem) to a plate (the cloud).
But they might want to update those servers to work better in the cloud and be more efficient, so people get the information they need easily and quickly. In that case, they may modernize the servers—so those apple slices might now be mixed with some cinnamon and baked into a pie. You can still make out the actual slices, but they’re different from raw slices and have different pros and cons. Or, the IT teams moving the virtual servers might go even further with changing and modernizing them, so now they’re applesauce. You can’t make out the individual servers, or slices, anymore. They maintain the same data and information they had before, but now that data can be used and accessed more easily, and by more computers and users, than before.
What we find at Google Cloud is that moving those sliced apples into the cloud as they are is a good place to start. They’re familiar, and look like they did before, so it’s a successful first step in the overall move to cloud. From an IT perspective, it’s easier to keep managing those apple slices because you’re already familiar with them.
But, eventually, your business might yearn for something beyond apple slices. And that’s when you have to start cooking a little bit. A logical step might be to turn those virtual machines into containers instead, which is somewhat akin to baking an apple pie. There are clearly similarities you can see between the virtual machines and your new containers, but it’s still different—and tastier. And, from IT’s perspective, easier to manage, since there aren’t as many separate tools to keep track of. Plus, containers let you pack even more applications in because you can use fewer computing resources for each container vs. those virtual servers you started with.
We see lots of different journeys to cloud and those are just two examples. For us, though, we like to help customers plan how they’ll get all those servers to the cloud. So no matter what you want to do with your “apple slices,” we’ll figure out the best recipe for you based on your goals, requirements and constraints.
Growing up, I loved everything about Halloween: the candy, staying up past my bedtime and my small suburban town that came to life at night. But I always struggled with finding the right costume. I’d ask my friends and roam party stores for hours to no avail. One time, I even dressed up as “binary code”—I wore head-to-toe silver and wrote “Happy Halloween” in binary on my costume—in a moment of last-minute desperation.
Had I worked at Google then, I’m sure this idea would have been more popular with my peers, but it didn’t quite land at the time. But thanks to the tech available today, it’s much easier to come up with a great costume idea. Now, a simple search or voice command can lead me to thousands of ideas instantly, and show me step-by-step how to recreate them myself.
Come to think of it, technology has made so many things about Halloween easier. In celebration of that, we’re sharing 13 tips and tricks from Google Nest for Halloween—whether you’re trick-or-treating, hosting a party or staying in with a scary movie.
1. New! Enable spooky ringtones on Nest Hello. Starting today through early November, all Nest Hello users in the U.S. will have the ability to transform their doorbell chime into a cackling witch, a ghost, a vampire or a scary monster to make your front door a neighborhood destination on Halloween night. And the festive features don’t stop there: Winter ringtones are coming in late November.
2. Get costume and makeup inspiration. With Nest Hub and Nest Hub Max, you can watch YouTube videos with a simple command. For costume inspiration and DIY tips, just say “Hey Google, show me DIY Halloween costume videos,” or “show me Halloween makeup videos on YouTube,” and scroll through the list.
3. “Hey Google, get spooky.” Say this command to one of your Google Nest speakers or displays and your device will start an hour-long playlist of “spooktacular” sounds and music to greet your trick-or-treaters or party guests.
4. Enjoy your favorite scary movie. Use Chromecast to cast your favorite scary movie to your TV (media content subscriptions may be required). To take your experience up a notch, you can create a speaker group for cast-enabled speakers around your entertainment center for room-filling sound effects, too.
5. Get the family involved. If Grandma or Grandpa can’t see your trick-or-treaters all dressed up, simply give them a quick video call using Nest Hub Max and Duo: “Hey Google, video call Grandma.”
6. Conquer your to-do list. Busy families have lots to prep for in the lead-up to Hallow’s Eve. As you remember things you have to do, just add them to a running list of reminders: “Hey Google, remind me to pick up cupcakes for school,” and when you head out for the day, you’ll have the reminder on your phone.
7. Add candy to your shopping list with ease. Just say, “Hey Google, create a list,” which you can then name “Candy Shopping,” and your Google Assistant will ask what you want to add.
8. Learn a festive new recipe. Say “Hey Google, show me recipes for pan de muerto” to your Nest Hub display and see a list of traditional Day of the Dead bread recipes to choose from and follow along, completely hands-free.
9. Protect your home from Mischief Night. Nest cameras like Nest Cam Outdoor and Nest Hello notify you when activity is detected around your house, and you can talk and listen through the Nest app to deter trespassers and TP’ers.
10. Find one-stop shopping near you. Just say, “Hey Google, show me Halloween stores nearby” to one of your smart displays to see options near you. Once you tap on one, you can say “Hey Google, call this store” to give them a ring (in the U.S., U.K., and Canada only).
11. Hear your favorite Halloween playlist in a heartbeat. Google Home Max is our smart speaker made for music lovers. Use it to blast your favorite playlist—whether your ideal Halloween tunes involve “The Monster Mash” or indie rock.
12. Set up a ghostly guest network for your party. Using Google Wifi, you can create a separate network for your party guests and give it a fun name and password, like “Hocus Pocus.”
13. A party to remember, with help from our partners. Google Nest products work with over 30,000 partners in the U.S.—everything from smart lights to Wi-Fi plugs for smoke machines—so you can throw the ultimate Halloween party with a little help from tech.
In today’s mobile-first world, people use a wide range of device types. As a result, app publishers who use banner ads must now serve them across a greater variety of screen sizes and layouts. While some responsive banner ad formats exist, they often produce ads that are too small and not sufficiently tailored to the height and aspect ratio of each device.
To address this, we’ve created a new banner type called adaptive anchor banners. These ads dynamically adjust banner sizes to deliver a creative that is ideally sized across your user’s devices without the need for any custom code.
The adaptive anchor banner advantage
Adaptive anchor banners are designed to be a drop-in replacement for the industry standard 320x50 banner size and the smart banner format. Standard sized banners return the same sized creative across every screen, which often results in ads that appear too small or too large. Smart banners only support fixed heights, so they often return creative that appears too small on high-res devices.
Unlike other banner APIs on the market, adaptive anchor banners consider the device in use, the ad width you’re comfortable using, and the aspect ratios and performance of all available demand. Adaptive anchor banners return creatives with the best height and aspect ratio for each device, with hard limits to prevent the wrong sizes from being served.
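The exact sizing rules live inside the Google Mobile Ads SDK, but the general idea can be sketched in a few lines of Python. The 15 percent scale factor and the 50dp/90dp limits below are made-up placeholders for illustration only, not the SDK’s real constants; the implementation guides linked later in this post show the actual API calls to use.

```python
# Toy sketch of the adaptive idea, NOT the Google Mobile Ads SDK's real algorithm:
# scale the banner height with the device instead of hard-coding 50dp, then clamp
# it to hard limits so creatives are never served at the wrong size.
MIN_HEIGHT_DP = 50   # hypothetical floor: never smaller than a classic banner
MAX_HEIGHT_DP = 90   # hypothetical ceiling: never let the ad dominate the screen

def adaptive_anchor_height(screen_height_dp: int) -> int:
    proposed = round(0.15 * screen_height_dp)   # illustrative 15% of screen height
    return max(MIN_HEIGHT_DP, min(MAX_HEIGHT_DP, proposed))

print(adaptive_anchor_height(480))  # 72: shorter device, shorter banner
print(adaptive_anchor_height(800))  # 90: clamped by the hard ceiling (0.15 * 800 = 120)
```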
Your banners will look better than ever in your app, and writing custom code to handle different devices will be a task of the past. Using this format in place of standard and/or smart banners can help you maximize revenue while also making managing your ads less complex and more efficient.
Standard banner vs. smart banner vs. AdMob’s adaptive anchor banner
Getting started with adaptive anchor banners
Adaptive anchor banners are a great option for AdMob publishers who want the simplest solution to getting the best banner ad returned across any device. This format is still in beta on Google Ad Manager, so publishers who want to try it out on that platform should reach out to their account managers or contact our support team.
Adaptive anchor banners are currently only available for anchored placements—banners locked to the top or bottom of the screen. However, AdMob is actively developing another adaptive algorithm for in-line banners placed in scroll views or within content.
To get started with adaptive anchor banners for AdMob, check out our implementation guides (iOS, Android). We walk you through when it’s appropriate to use adaptive banners, implementation notes, and code examples.
We recommend testing adaptive banners against some of your existing banner ads to understand how they can help you maximize fill rates, engagement, and revenue.
I was diagnosed with breast cancer twice, in 2001 and again in 2004. Thanks to early detection and access to extraordinary care—including multiple rounds of chemo, radiation and more surgery than any one person should ever have in a lifetime—I’m still here and able to write this piece. In fact, I’ve probably never been healthier.
I remember receiving the news. I was initially terrified. Our three kids were only five, seven, and nine at the time of my first diagnosis, and all I wanted was to live to see them grow up. I’m grateful I had options and access to treatments, but no aspect of it was pleasant. Last year, I had the joy of seeing our youngest son graduate from college. In the years since I first learned of my cancer, there’s been remarkable progress in global health care, augmented with pioneering work from medical researchers and technology companies. I know how incredibly fortunate I am, but I also know that for far too many, a diagnosis comes too late and the best care is beyond reach.
And that’s where Google has focused its work: bringing healthcare innovations to everyone. Working at Google, I have had a front-row seat to these technological breakthroughs.
When it comes to breast cancer, Google is looking at how AI can help specialists improve detection and diagnosis. Breast cancer is one of the most common cancers among women worldwide, taking the lives of more than 600,000 people each year. Thankfully, that number is on the decline because of huge advances in care. However, that number could be even lower if we continue to accelerate progress and make sure that progress reaches as many people as possible. Google hopes AI research will further fuel progress on both detection and diagnosis.
Early detection depends on patients and technologies, such as mammography. Currently, we rely on mammograms to screen for cancer in otherwise healthy women, but thousands of cases go undiagnosed each year and thousands more result in confusing or worrying findings that are not cancer or are low risk. Today we can’t easily distinguish the cancers we need to find from those that are unlikely to cause further harm. We believe that technology can help with detection, and thus improve the experience for both patients and doctors.
Just as important as detecting cancer is determining how advanced and aggressive it is. A process called staging helps determine how far the cancer has spread, which shapes the course of treatment. Staging largely depends on clinicians and radiologists looking at patient histories, physical examinations and images. In addition, pathologists examine tissue samples obtained from a biopsy to assess the microscopic appearance and biological properties of each individual patient’s cancer and judge its aggressiveness. However, pathologic assessment is a laborious and costly process that, incredibly, still relies on an individual evaluating microscopic features in biological tissue with the human eye and a microscope.
Last year, Google created a deep learning algorithm that could help pathologists assess tissue and detect the spread and extent of disease better in virtually every case. By pinpointing the location of the cancer more accurately, quickly and at a lower cost, care providers might be able to deliver better treatment for more patients. But doing this will require that these insights be paired with human intelligence and placed in the hands of skilled researchers, surgeons, oncologists, radiologists and others. Google’s research showed that the best results come when medical professionals and technology work together, rather than either working alone.
During my treatment, I was taken care of by extraordinary teams at Memorial Sloan Kettering in New York where they had access to the latest developments in breast cancer care. My oncologist (and now good friend), Dr. Clifford Hudis, is now CEO of the American Society of Clinical Oncology (ASCO), which has developed a nonprofit big data initiative, CancerLinQ, to give oncologists and researchers access to health information to inform better care for everyone. He told me: “CancerLinQ seeks to identify hidden signals in the routine record of care from millions of de-identified patients so that doctors have deeper and faster insights into their own practices and opportunities for improvement.” He and his colleagues don't think they’ll be able to deliver optimally without robust AI.
What medical professionals, like Dr. Hudis and his colleagues across ASCO and CancerLinQ, and engineers at companies like Google have accomplished since the time I joined the Club in 2001 is remarkable.
I will always remember words passed on to me by another cancer survivor, which helped me throughout my treatment. He said when you’re having a good day and you’ve temporarily pushed the disease out of your mind, a little bird might land on your shoulder to remind you that you have cancer. Eventually, that bird comes around less and less. It took many years but I am relieved to say that I haven’t seen that bird in a long time, and I am incredibly grateful for that. I am optimistic that the combination of great doctors and technology could allow us to get rid of those birds for so many more people.
I was born cross-eyed, and after two corrective surgeries, I thought I could see like everyone else. But I still had trouble driving, navigating stairs, and playing sports. In my late twenties, I learned that I mostly saw with one eye, and I couldn’t see in 3D. This is considered a hidden disability (similar to dyslexia or color blindness), and people with hidden disabilities could go years without knowing why some basic daily activities and interactions with technology are challenging.
There are millions of people with hidden disabilities and over 2.2 billion people around the world who have a vision impairment, but more than 70 percent of all websites are inaccessible to them. Often, there is a lack of awareness among developers and designers about both the challenges and how best to design and code for accessibility.
To bridge this gap, our Material Design team updated the accessibility guidelines on how to make images more accessible for websites and applications. The new guidelines explain how to write HTML code in the correct order for images to be read aloud by a screen reader, how to write alt text and captions for sighted and non-sighted people to understand images, and which types of images have to follow accessibility requirements. By following these guidelines, designers and developers can prevent common mistakes that may leave beautifully designed websites and apps difficult to use for people with visual impairments. We’ve started applying these rules to images in the Material Design guidelines, but there's more to do to make the web more inclusive. Here are a few of the key lessons we learned:
Designing and coding should start with inclusivity in mind
Imagine how someone with a visual impairment experiences your website or app. When text is embedded in images, it may not be read aloud by the screen reader software that people with visual impairments use. If you implement captions that describe how the images relate to the topic, and alt text that explains the contents of the images, screen reader users will hear what the images are about.
The captions appear below the photo and explain the who, what, when, and where about the image. The alt text describes the colors, sizes, and location of the objects in the image.
Designers put images in a specific order on a website, such as a four-step recipe with photos showing what to do for each step. However, if the HTML is not in the correct order, the screen reader will read out the alt text for each image in the wrong order, and the screen reader user may follow the recipe incorrectly. To prevent such (untasty) problems, we provided visual and text examples of the correct HTML code order.
The HTML reflects the visual hierarchy by reading the content from the top left (Step 1) to the top right (Step 2), bottom left (Step 3) to bottom right (Step 4).
Not all images are alike
Decorative images, such as illustrations of fruit on a recipe website, may not have to follow accessibility guidelines because they don’t contain critical information. However, informative images, such as the foods in a recipe, should follow the guidelines because they convey information that is relevant to the adjacent text. The updated accessibility guidelines contain information about color contrast, text size, captions, and alt text. Images such as logos, icons, images within a button, and images that are links benefit from alt text that describes their function rather than what they look like.
Making products accessible means that even people beyond your target users may benefit. Captions help sighted people understand images. Alt text appears when images don’t load, helping sighted users understand what they are missing. People reading an online menu in poor lighting, such as during an electricity outage, might experience a temporary disability. They are more likely to be able to read the menu if it has good color contrast and large text.
Disabilities have too often stayed hidden and taboo. I believe we are entering a new age where disabilities can serve as a precursor to improving the world for others. The first time somebody at Google saw me looking at a document that was enlarged to 125 percent, I was absolutely mortified because I wasn’t keen on sharing my visual impairment. But then I realized that, in fact, being open and vocal could help make products more useful and accessible for everyone. I hope that these guidelines can help ensure that developers and designers implement accessibility so that those of us with visual impairments can fully access the content of their websites and apps.
Legendary photographer Annie Leibovitz is unveiling a series of portraits of individuals who are changing the landscape of their time. Shooting exclusively on her Google Pixel, Annie met her subjects in the places where they live, work and are inspired into action.
Annie photographed equal justice lawyer Bryan Stevenson in Alabama.
The pictures portray extraordinary people who are defined by their fierce desire to make the world a better place, no matter how daunting the obstacles. The individuals photographed include soccer player Megan Rapinoe, equal justice lawyer Bryan Stevenson, artist James Turrell, journalist Noor Tagouri, hip-hop activist Xiuhtezcatl Martinez, Army Officer Sarah Zorn, global-health scientist Jack Andraka and more.
Everyone can check out the full collection of these stunning portraits online, along with a behind-the-scenes glimpse of Annie’s work. The Face Forward series will expand with new images as Annie continues to tell the story of today’s changemakers.
Annie on a shoot with soccer player Megan Rapinoe.
This project pushed Annie, who has rarely shot professional portraits on a camera phone. “I wanted to challenge myself to shoot with the camera that’s always in your pocket,” she says. “I’d heard so much about the Pixel and was intrigued.”
Annie with Marc Levoy from the Pixel camera team.
Working closely with Pixel’s camera team, Annie tested new tools on the Pixel 4 including astrophotography. “I’ve been really impressed with the camera. It took me a beat, but it really started clicking when I relaxed and let the camera do the work.”
Finally—for those of you hoping to channel your inner photographer, we’ll leave you with a piece of advice from Annie: “It’s all inside you. You just go do it. It’s all there.”