
Next-Generation 3D Graphics on the Web (Google I/O ’19)

Okay, let’s start. My name is Ricardo, and together with Corentin I will be talking about the future of 3D graphics on the web. But before we do that, let’s have a quick look at the past and the present. WebGL landed in browsers in February 2011.

Chrome 9 and Firefox 4 were the first browsers that implemented it. Back then, together with the Google Creative Lab, we created an interactive music video that aimed to showcase the new powers the technology was bringing to the web. It was a pretty big project involving creators, directors, concept artists and animators; around 100 people worked on the project for half a year, and ten of us were JavaScript graphics developers.

We knew the workflow and tools were very different compared to traditional web development, so we also made the project open source so others could use it as a reference. Some years later, Internet Explorer, Edge and Safari implemented WebGL too, which means that today the same experience works in all major browsers, on desktops, tablets and phones. What I find most remarkable is the fact that we didn’t have to modify the code for that to happen.

Anyone with experience doing graphics programming knows that this is rarely the case. Usually we have to recompile the project every couple of years, when operating systems update or new devices appear. So, a quick recap: WebGL is a JavaScript API that provides bindings to OpenGL ES. It allows web developers to utilize the user’s graphics card in order to create efficient and performant graphics on the web.

It is a low-level API, which means that it is very powerful, but it is also very verbose. For example, a graphics card’s main primitive is the triangle; everything is done with triangles. Here’s the code that we need to write in order to display just one triangle. First, we need to create a canvas element. Then we get the WebGL context for that canvas, and then things get pretty complicated pretty fast: after defining positions for each vertex, we have to add them to a buffer.
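To give a feel for that verbosity, here is a rough sketch of the kind of raw WebGL boilerplate being described. The shader sources and vertex positions are illustrative choices, not code from the talk:

```javascript
// A sketch of the minimal WebGL setup being described. The shader
// sources and vertex data are illustrative; this is not production code.
const vertexSrc = `
  attribute vec2 position;
  void main() { gl_Position = vec4(position, 0.0, 1.0); }
`;
const fragmentSrc = `
  void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }
`;
// One triangle: three 2D vertices.
const positions = new Float32Array([0, 0.5, -0.5, -0.5, 0.5, -0.5]);

function drawTriangle() {
  const canvas = document.createElement('canvas');
  document.body.appendChild(canvas);
  const gl = canvas.getContext('webgl');

  // Upload the vertex positions to a GPU buffer.
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

  // Compile the vertex and fragment shaders and link them into a program.
  const program = gl.createProgram();
  for (const [type, src] of [[gl.VERTEX_SHADER, vertexSrc],
                             [gl.FRAGMENT_SHADER, fragmentSrc]]) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, src);
    gl.compileShader(shader);
    gl.attachShader(program, shader);
  }
  gl.linkProgram(program);
  gl.useProgram(program);

  // Point the "position" attribute at the buffer and draw.
  const loc = gl.getAttribLocation(program, 'position');
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, 3);
}

// Only meaningful in a browser; guarded so the sketch is inert elsewhere.
if (typeof document !== 'undefined') drawTriangle();
```

A library like three.js reduces all of this to creating a scene, a camera, a mesh and a renderer in a handful of lines, which is exactly the productivity gain the talk goes on to describe.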

We send it to the GPU, then link the vertex and fragment shaders and compile a program that will be used by the graphics card to know how to fill those pixels. That’s why a bunch of us back then started creating libraries and frameworks that abstract all that complexity, so developers, and ourselves, could stay productive and focused. Those libraries take care of placing objects in 3D space, material configuration, loading 2D and 3D assets,

interaction, sound, and so on: anything you need for doing any sort of game or application. Designing those libraries takes time, but over the years people have been building amazing projects with them. So let’s have a look at what people are doing today. People are still doing interactive music videos, and that’s good. In fact, this example, Track by Little Workshop, not only works on desktop and mobile; it also works on VR devices, letting you look around while speeding through glowing tunnels. Another clear use of the technology is gaming.

HOME is a beautiful game developed by a surprisingly small team, released for last year’s Christmas Experiments. Another common use is web experiences. In this case, Oat the Goat is an interactive animated storybook designed to teach children about bullying. The folks at Assembly used Maya to model, rig and animate the characters, then exported them to glTF via Blender. For rendering they used three.js, and they wrote around 13,000 lines of TypeScript to make the whole thing work. Yet another very common use is product configurators.

The folks at Little Workshop, again, show how good this can look in this demo. But those cases are not all that people are building: data visualizations, enhanced newspaper articles, virtual tours, documentaries, movie promotions and more. You can check the three.js website and the Babylon.js website to see more of those examples. Alright.

We don’t want to end up in a world where the only HTML elements in your page are just a canvas tag and a script tag. Instead, we must find ways of combining WebGL and HTML. The good news is that lately we have been seeing more and more projects and examples of web designers utilizing bits of WebGL to enhance their HTML pages. Here’s a site that welcomes the user with a beautiful immersive image; we’re able to interact with the 3D scene by moving the mouse around the image.

But after scrolling the page, we reach a traditional static layout with all the information about the product, the way traditional websites usually look. The personal portfolio of Bertrand Candice shows a set of DOM elements affecting a dynamic background. With JavaScript, we can figure out the position of those DOM elements, and then we can use that information to affect the physics simulation that happens in the 3D scene

in the background. For underpowered devices, we can just replace that WebGL scene with a static image and the website is still functional. Another interesting trend we have been seeing is websites that use distortion effects. The website for the Japanese director Terajima has a very impressive use of them. However, the content is actually plain, selectable HTML, which is surprising because, as you know, we cannot do these kinds of effects with CSS.

If we look at it again, what I believe they are doing is copying the pixels of the DOM elements into the background WebGL canvas. Then they hide the DOM element and apply the distortion; when the transition finishes, they put the next DOM element on top. So it’s still something that you can enable or disable depending on the device, it also works on mobile, and it’s something that you can progressively enhance. One more example: this site applies the distortion effect on top of the HTML, basically making the layout truly fluid. Then again, this is surprising because it would not be possible with CSS alone.

So I think those are all great examples of the kind of results you can get by mixing HTML and WebGL, but it still requires the developer to dive into JavaScript, and, as we know, it can be a little bit tedious to connect all the parts. If you’re more used to React, the library react-three-fiber by Paul Henschel can be a great option for you. react-three-fiber mixes React concepts on top of three.js’s abstractions. Here’s the code for this animation; notice how the creator defined Effect and Content components that compose easily into the Canvas, which makes the code much more reusable and easier to maintain.

However, I think we can still make it even simpler. Enter Web Components. I believe Web Components will allow us to finally bring all the power of WebGL right into the HTML layer. We can now encapsulate all those effects in composable custom elements and hide all the code complexity. For example, here’s another project that we did for the WebGL launch eight years ago. It was a kind of globe platform.

It was a project that allowed JavaScript developers to visualize different data sets on top of a globe. You had the library, you had your data, and then you wrote code to map the different parts of the data to the display. But even if we tried to hide the WebGL code, developers still had to write custom JavaScript for loading the data, configuring the globe and appending it to the DOM. The worst part was that developers still had to handle the positioning and resizing of the DOM object, so it was difficult to mix it with a normal HTML page. Today, with Web Components, we can simplify all that code.

With just two lines, the developer only has to include a JavaScript library on the website, and a powerful custom element is now available to place wherever they need it in the DOM. Not only that: just by duplicating that line you can have multiple globes, without having to duplicate all the code, which would again be more code to read and parse. That is a component that is already available to use; the next one is not quite ready yet.
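As a sketch of what wrapping a WebGL visualization in a custom element could look like, here is a hypothetical component. The tag name, attributes and defaults below are invented for illustration; they are not the real globe component’s API:

```javascript
// Pure helper: turn the element's attributes into a config object.
// The attribute names and defaults are hypothetical.
function globeConfig(attrs) {
  return {
    src: attrs.src || 'data.json',     // data set to visualize
    spin: attrs.spin !== 'false',      // auto-rotate unless disabled
  };
}

// Browser-only part: a custom element that boots a renderer when
// it is attached to the page. Guarded so the sketch is inert elsewhere.
if (typeof HTMLElement !== 'undefined') {
  class DataGlobe extends HTMLElement {
    connectedCallback() {
      const config = globeConfig({
        src: this.getAttribute('src'),
        spin: this.getAttribute('spin'),
      });
      const canvas = document.createElement('canvas');
      this.appendChild(canvas);
      // A real component would start its WebGL renderer with `config`
      // here and load the named data set.
      this.dataset.src = config.src;
    }
  }
  customElements.define('data-globe', DataGlobe);
}
```

With this in place, the page author writes only markup like `<data-globe src="population.json"></data-globe>`, and duplicating the tag gives a second globe with no extra code, which is the point being made above.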

This one, &lt;model-viewer&gt;, is nearly ready. For this one, basically, the problem is that displaying 3D models on the web is still pretty hard, so we really wanted to make it as simple as embedding an image in your page, as simple as adding an image tag. That’s the main goal for this one. Again, the developer only has to include a JavaScript library, and then a powerful custom element is ready.

This custom element can display any 3D model, using the glTF open standard. An important feature of HTML tags is accessibility. For low-vision and blind users, we’re trying to communicate both what the 3D model is and the orientation of the model. Here you can see that the view angle is being communicated verbally to the user, so they can follow what’s going on; it also prompts them on how to control the model with the keyboard, and offers an easy exit back to the rest of the page. &lt;model-viewer&gt; also supports AR, augmented reality, and here you can see how it’s already being used on the NASA website. By adding the ar attribute,

it’s going to show an icon and be able to launch the AR viewer on both Android and iOS; for iOS, you have to include a USDZ file. Lastly, while building the component, we realized that, depending on the device, you can only have up to 8 WebGL contexts at once, so if you create a new one, the first one disappears. It is actually a well-known limitation of WebGL, but it’s also good practice

to have only one context, to keep memory in one place. The best solution that we found for this was creating a single WebGL context off-screen, so it’s hidden, and then we use that one to render all the model-viewer elements on the page. We also utilize IntersectionObserver to make sure that we are not rendering objects that are not in view, and ResizeObserver to detect whenever the developer modifies the size, so we know when we have to re-render. But we all know how the web is: sooner or later someone will want to display hundreds of those components at once, and that is great.
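The single-shared-context strategy just described can be sketched as follows. The observer wiring is browser-only and illustrative; the bookkeeping that decides what to render each frame is plain JavaScript:

```javascript
// Sketch of the strategy described above: one hidden WebGL context
// renders every model element, and only elements currently on screen
// are rendered at all.
const visibleModels = new Set();

// IntersectionObserver callback: maintain the set of on-screen elements.
function onIntersection(entries) {
  for (const entry of entries) {
    if (entry.isIntersecting) visibleModels.add(entry.target);
    else visibleModels.delete(entry.target);
  }
}

// Per-frame render: draw only the visible elements through the one
// shared context (renderModel stands in for the real draw routine).
function renderFrame(renderModel) {
  for (const model of visibleModels) renderModel(model);
  return visibleModels.size;
}

// Browser-only wiring; the element selector is hypothetical.
if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver(onIntersection);
  document.querySelectorAll('model-viewer')
    .forEach((el) => observer.observe(el));
}
```

A ResizeObserver would hook into the same bookkeeping, flagging an element for re-render whenever the developer changes its size.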

We want to allow for that, but for that we’ll need to make sure that the underlying API is as efficient as possible. So now Corentin is going to share with us what’s coming up in the future. Thank you. Okay! Thank you, Ricardo. This was an amazing display of what’s possible on the web using GPUs today. Now I’ll give a sneak peek of what’s coming up next, where you’ll be able to extract even more computational power from GPUs on the web. Hey everyone, I’m Corentin Wallez, and for the last two years at Google I’ve been working on an emerging web standard called WebGPU, in collaboration with all the major browsers at the W3C. WebGPU is a new API, the successor to WebGL, and it will unlock the potential of GPUs on the web.

So now you’ll be asking: Corentin, we already have WebGL, so why are you making a new API? The high-level reason is that WebGL is based on an understanding of GPUs as they were 12 years ago, and in 12 years GPU hardware has evolved, but also the way we use GPU hardware has evolved. There is a new generation of native GPU APIs, for example Vulkan, that help do more with GPUs, and WebGPU is built to close the gap with what’s possible in native today. It will improve what’s possible on the web for game developers, but not only them: it will also improve what you can do in visualization, in heavy design applications, for machine learning practitioners, and much more. For the rest of the session, I’ll be going through specific advantages, things that WebGPU improves over WebGL, and show how it will help build better experiences.

First, WebGPU is still a low-level and verbose API, so that you can tailor usage of the GPU to exactly what your application needs. This is the triangle Ricardo just showed, and as a reminder, this was the code to render that triangle in WebGL. Now, this is the minimum WebGPU code to render the same triangle. As we can see, the complexity is similar to WebGL, but you don’t need to worry about it, because if you’re using a framework like three.js or Babylon.js, then you’ll get the benefits transparently, for free, when the framework updates to support WebGPU.

The first limitation that WebGL frameworks run into is the number of elements or objects they can draw each frame, because each drawing command has a fixed cost and needs to be issued individually, every frame. With WebGL, an optimized application can draw a maximum of about a thousand objects per frame, and that’s kind of already pushing it, because if you want to target a variety of mobile and desktop devices, you might need to go even lower than this.

This is a photo of a living room. It’s not rendered, it’s an actual photo, but the idea is that it’s super stylish yet it feels empty and cold; nobody lives there. And this is sometimes what it feels like looking at WebGL experiences, because they lack complexity. In comparison, game developers in native or on consoles are used to, I don’t know, maybe 10,000 objects per frame if they need to, and so they can build richer, more complex, more lifelike experiences, and this is a huge difference. Even with the limitation in the number of objects, WebGL developers have been able to build incredible things, so imagine what they could do if they could render as many objects as native developers can.
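A rough back-of-envelope model shows why a fixed per-draw-call cost caps scene complexity. The per-call cost below is an assumed, illustrative figure, not a measured benchmark:

```javascript
// Back-of-envelope model of why per-object draw calls cap scene size.
// COST_PER_DRAW_CALL_MS is an assumed illustrative figure.
const FRAME_BUDGET_MS = 1000 / 60;   // about 16.7 ms per frame at 60 fps
const COST_PER_DRAW_CALL_MS = 0.01;  // assumed CPU cost of one draw call

function maxObjectsPerFrame(logicBudgetMs) {
  // Whatever the frame budget leaves after application logic
  // goes to issuing draw commands, one per object.
  const drawBudget = FRAME_BUDGET_MS - logicBudgetMs;
  return Math.floor(drawBudget / COST_PER_DRAW_CALL_MS);
}
```

With these assumed numbers, spending 10 ms of each frame on application logic leaves room for only a few hundred draw calls, which is the kind of ceiling described above; cutting the per-call CPU cost is exactly where WebGPU helps.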

Babylon.js is another very popular 3D JavaScript framework, and just last month, when they heard we were starting to implement WebGPU, they were like, hey, can we get some WebGPU now? And we were like, no, it’s not ready, it’s not in Chrome, but here’s a custom build. The demo I’m going to show is what they came back to us with just two days ago. Can we switch to the demo, please? All right, this is a complex scene rendered with WebGL, and it tries to replicate what a more complete game would do if every object was drawn independently and a bit differently. It doesn’t look like it, but all the trees and rocks and so on are independent objects and could be different objects. In the top right corner there are the performance numbers, and we can see that as we zoom out and see more objects,

the performance starts dropping heavily, and that’s because of the relatively high fixed cost of drawing each object, of sending the command to draw each object. The bottleneck here is not the power of the GPU on this machine or anything like that; it’s just JavaScript iterating through every object and sending the commands. Now, let’s look at an initial version of the same demo in WebGPU, and keep in mind

this was done in just two weeks. As the scene zooms out, we can see that the performance stays exactly the same, even if there are more objects to draw. What’s more, we can see that the CPU time spent in JavaScript is basically nothing, so we are able to use more of the GPU’s power because we’re not bottlenecked on JavaScript, and we also have more time on the CPU to run our application’s logic.

So, let’s go back to the slides. What we have seen is that, for this specific and early demo, WebGPU is able to submit three times more drawing commands than WebGL and leaves more room for your application’s logic. A major new version of Babylon.js, Babylon.js 4.0, was released just last week, and the Babylon.js developers are so excited about WebGPU that they are going to implement full support for it.

They will support the initial version of WebGPU in the next version, Babylon.js 4.1. But WebGPU is not just about drawing more complex scenes with more objects. A common operation done on GPUs is post-processing image filters, for example depth-of-field simulation. We see this all the time in cinema and photography; in this photo of a fish, we can see the fish is in focus while the background is out of focus, and this is really important because it gives the feeling that the fish is lost in a giant environment.

This type of effect is important in all kinds of rendering, so we can get a better cinematic experience, but it’s also used in other places like camera applications. Of course, this is just one type of post-processing filter; there are many others, like color grading, image sharpening and a bunch more, and all of them can be accelerated using the GPU. So, for example, the image on the left could be the background behind the fish.

This is before we apply the depth of field, and on the right we see the resulting color of the pixel. What’s interesting is that the color of the pixel depends only on the colors of a small neighborhood of the pixel in the original image. So imagine the grid on the left is a neighborhood of original pixels; we number them in 2D, and the resulting color will be essentially a weighted average of all these pixels.

Another way to look at it is to see that on top we have the output image, and the color of each of the output pixels will depend only on a 5×5 stencil of the input image on the bottom. The killer feature of WebGPU, in my mind, is what we call GPU compute, and one use case of GPU compute is to speed up local image filters like the one we just saw.
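For reference, the weighted-average filter just described can be written as a small function. Uniform 1/25 weights are assumed here for simplicity; a real depth-of-field filter would use non-uniform weights:

```javascript
// Reference implementation of the 5x5 stencil described above:
// each output pixel is a weighted average of the 5x5 input
// neighborhood around it. Uniform 1/25 weights are assumed.
function filterPixel(input, width, x, y) {
  let sum = 0;
  for (let dy = -2; dy <= 2; dy++) {
    for (let dx = -2; dx <= 2; dx++) {
      // Fetch one of the 25 neighboring input pixels.
      sum += input[(y + dy) * width + (x + dx)];
    }
  }
  return sum / 25; // 25 taps, each weighted 1/25
}
```

This is exactly the per-pixel loop that the GPU pseudocode in the next part of the talk runs on every ALU in parallel.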

This is going to be pretty far from DOM manipulation, or React, or amazing web features like CORS headers, so please bear with me; we’re going to go through it in three steps. First, we’ll look at how GPUs are architected and how an image filter in WebGL uses that architecture, and then we’ll see how WebGPU takes better advantage of the architecture to do the same image filter, but faster. So let’s look at how a GPU works, and I have one here: this is a package you can buy in stores.

Can you see it? Oh yes. So this is a package you can buy in stores, and it has a huge heatsink, but if we look inside there’s this small chip, and this is the actual GPU. If we go back to the slides, this is what we call a die shot, which is a transistor-level picture of the GPU. We see a bunch of repeating patterns in it, and we’re going to call them execution units. These execution units are a bit like cores in CPUs, in that they can run in parallel and process different workloads independently.

If we zoom in even more on one of these execution units, this is what we see. In the middle we have a control unit, which is responsible for choosing the next instruction, for example, add two registers, or load something from main memory. Once it has chosen an instruction, it sends it to all the ALUs. The ALUs are the arithmetic and logic units, and when they receive an instruction, they perform it.

For example, if they need to add two registers, they will look at their respective registers and add them together. What’s important to see is that a single instruction from the control unit will be executed at the same time by all the ALUs, just on different data, because they all have their own registers. This is single-instruction, multiple-data (SIMD) processing. This is the part of the execution unit that is accessible from WebGL, and what we see is that it’s not possible for ALUs to talk to one another;

they have no way to communicate. But in practice, GPUs look more like this today: there is a shared memory region in each of the execution units where ALUs can share data with one another. It’s a bit like a memory cache, in that it’s much cheaper to access than the main GPU memory, but you can program it directly and explicitly have ALUs share data there. A big benefit of GPU compute is to give developers access to that shared memory region.

That was the architecture of GPUs and their execution units. Now we’re going to look at how the image filter in WebGL maps to that architecture. As a reminder, this is the algorithm we’re going to look at, and in our example, since our execution unit has 16 ALUs, we’re going to compute a 4×4 block, which is 16 pixels of the output, in parallel, and each ALU will take care of computing the value for one output pixel. This is GPU pseudocode for the filter in WebGL; essentially, it’s just a 2D loop on X and Y that fetches from the input and computes the weighted average of the input pixels.

What’s interesting here is the coordinates argument to the function. It’s a bit special because it’s going to be pre-populated for each of the ALUs, and that’s what will make each ALU execute on different data: they start out populated with different data. This is a table for the execution of the program, and likewise we can see the coordinates are pre-populated.

Each column is the registers for one of the ALUs, and we have 16 columns for the 16 ALUs. The first thing that happens is that the control unit says, hey, initialize sum to 0, so all of them initialize the sum to 0. Then we get to the first iteration of the loop in X, and each ALU gets its own value for x. Likewise, each ALU gets its own value for y, and now we get to the line that does the memory load of a value of the input.

Each ALU has a different value of x and y in its registers, and so each of them will be doing a memory load from a different location of the input. Let’s look at this ALU: it’s going to do a memory load at position (-2, -1). We’re going to get back to this one. Now we do another iteration of the loop in Y; likewise, we update the y register and do a memory load.

What’s interesting here is that the first ALU will do a memory load at (-2, -1) again. That’s a redundant load, because we already did it at the last iteration. Anyway, the loop keeps on looping, there’s more loading and summing that happens, and in the end we get to the return, which means the sum gets written to the output pixel, and the computation for a 4×4 block is finished.

Overall, the execution of the algorithm in WebGL for a 4×4 block did 400 memory loads. The reason is that we have 16 pixels, and each of them did 25 loads. So that was how the filter executes in WebGL; now we’re going to look at how WebGPU uses shared memory to make it more efficient. We take the same shader, the same program as before, so it’s the exact same code, and we’re going to optimize it with shared memory.

We introduce a cache that’s going to contain all the pixels of the input that we need to do the computation. This cache is going to be in shared memory, so it’s cheaper to access than the actual input; it’s like a global variable that lives inside the execution unit. Of course, we need to modify the shader to use that input tile, and because the input tile needs to contain values before the computation starts,

we can’t just start like this. This function is going to be a helper function that computes the value of the pixel, and we’re going to have a real main function that first populates the cache and then calls the computation. Like the previous version of the shader, the coordinates are pre-populated, so each of the ALUs does a different execution, and then all the ALUs work together to populate the cache. There are a bunch of loops and whatnot there,

but it’s not really important. What’s interesting to see is that only 64 pixels of the input are loaded and put in the cache; there are no redundant memory loads. Then we go through the main computation of the value, and likewise this is very similar to what happened before, but on this line the memory load is now from the shared memory instead of the main memory, and this is cheaper.

So, overall, thanks to the caching of a tile of the input, the WebGPU version didn’t do any redundant main memory loads. For a 4×4 block it did 64 memory loads, and, as we saw before, WebGL had to do 400. This looks very biased in favor of WebGPU, but in practice things are a bit more mixed, because WebGPU didn’t do redundant main memory loads, but it did a bunch of shared memory loads, and those are still not free. Also, WebGL is a bit more efficient than this, because GPUs have a memory cache hierarchy, and so some of those memory loads will have hit the cache that’s inside the execution unit.
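The load counts quoted above follow directly from the block and stencil sizes; a quick calculation reproduces them:

```javascript
// Counting main-memory loads for one 4x4 output block with a 5x5
// stencil, reproducing the numbers quoted above.
const BLOCK = 4;    // 4x4 output block, one pixel per ALU
const STENCIL = 5;  // 5x5 neighborhood per output pixel

// Naive version: every ALU loads its whole 5x5 neighborhood itself.
const naiveLoads = BLOCK * BLOCK * STENCIL * STENCIL; // 16 pixels * 25 taps

// Tiled version: the ALUs cooperatively load the block's footprint,
// the 4x4 block plus a 2-pixel apron on every side, exactly once.
const tileSide = BLOCK + (STENCIL - 1); // 4 + 4 = 8
const tiledLoads = tileSide * tileSide; // 8 * 8 tile
```

Under these counts the naive version does 400 main-memory loads per block and the tiled version 64, matching the figures in the talk.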

But the point is that, overall, WebGPU will be more efficient because we are able to explicitly cache input data. The code we just talked about is called an image filter in the graphics world, but in the machine learning world it’s called a convolution, or a convolution operator. All the optimizations we talked about also apply to convolutional neural networks, also known as CNNs. The basic ideas for CNNs were introduced in the late ’80s, but back then it was just too expensive to train and run the models to produce the results we have today.

The ML boom of the last decade became possible because CNNs and other types of models could run efficiently on GPUs, in part thanks to the optimization we just saw. So we are confident that machine learning web frameworks such as TensorFlow.js will be able to take advantage of WebGPU to significantly improve the speed of their algorithms. Finally, some algorithms can be really difficult to write on GPUs in WebGL, and sometimes they are just not possible to write at all.

The problem is that in WebGL, where the output of a computation goes is really constrained. On the other hand, the GPU compute that WebGPU has is much more flexible, because each ALU can read and write memory at any place in GPU memory. This unlocks a whole new class of GPU algorithms, from physics and particle-based fluid simulation, like we see here, to parallel sorting on the GPU, mesh skinning, and many more algorithms that can be offloaded from JavaScript to the GPU.

To summarize, the key benefits of WebGPU are that you can have increased scene complexity for better and more engaging experiences, as we have seen with Babylon.js; it provides performance improvements for scientific computing, like machine learning; and it unlocks a whole new class of algorithms that you can offload from JavaScript CPU time to run on the GPU in parallel. So now you’re like, hey, I want to try this API, and you’re in luck.

WebGPU is a group effort, and everyone is on board: Chrome, Firefox, Edge, Safari, they are all starting to implement the API. Today, we’re making an initial version of WebGPU available in Chrome Canary on macOS, and other operating systems will follow shortly. To try it, you just need to download Chrome Canary on macOS and enable the experimental flag "Unsafe WebGPU". And again, this is an unsafe flag,

so please don’t browse the internet with it on for your daily browsing. More information about WebGPU is available on webgpu.io: there’s the status of implementations, there are links to some samples and demos, and a link to a forum where you can discuss WebGPU, and we’re going to add more stuff to it, with articles to get started and all that. What we’d love is for you to try the API and give us feedback on what the pain points are, what you’d like it to do for you, but also what’s going great and what you like about it. So thank you, everyone, for coming to this session. Ricardo and I will be at the web sandbox for the next hour or so

if you want to discuss more. Thank you.


 


Linux for Chromebooks: Secure Development (Google I/O ’19)

We are here to talk to you about Linux for Chromebooks, also known as Crostini. We will start by introducing ourselves. My name is Sudha; I am a designer on Crostini for Chromebooks. Hi, I’m Dylan, I’m the Chrome OS virtualization lead. And I’m Tom, product manager for Linux on Chromebooks. Now, it’s the end of day two at I/O, and you’ve probably already been to a bunch of different sessions that have talked about all the new frameworks that you need to be using, or the platforms that you need to be building for, and everyone’s right.

You absolutely should be, but we’re not really here to talk about that. Instead, what we want to talk about is you as developers, and how you can get more peace of mind by using Linux on Chromebooks. We give you that peace of mind by balancing simplicity and security. On that note, let’s do a quick user study. How many of you are developers in the audience? Wow, that’s a full room, as expected. Keep your hands raised: how many of you use your computers for anything else other than development, like doing your taxes or checking email? Again, 100% of you. Okay, one last question: how many of you are worried about security? Good. I mean, you all should be, so I’m glad to see many hands up. Anyway, I don’t know about you, but when I start a new project, I get stuck a lot, right?

I hit a lot of walls and a lot of barriers, and when I go to look for a solution, I turn to Google. Luckily, Google’s almost always got a great answer for me. Unluckily, sometimes the answer looks like this, and I know I shouldn’t run this script from evil-site.com and pipe it to sudo. But you know, that deadline’s coming up, and the site looks kind of legit, so in this case I’ll make an exception and I’ll do this. And then it happens again and again, and eventually I end up with a system that I don’t trust as much as I should, because I don’t really know what code I’ve run on it anymore. I don’t have time to read all these scripts.

My solution to this has been to carry two laptops: one for my developer world, and one for the everything-else world that I want to be secure in. But recently I switched to using one laptop, and Tom’s going to talk about how I do that. So, our goal with Chrome OS has been to give you a simple and secure experience from the start, but if you tried it previously, you might have seen that it wasn’t quite ready for developers. In order to be simple and secure,

we couldn’t run all of the tools that developers need to get their job done. But that all changed at I/O last year, when we announced that we were going to start supporting Linux on Chromebooks. Linux on Chromebooks lets you run all of your favorite editors, IDEs and tools, and it’s now supported on over 50% of Chromebooks, including great devices with eighth-generation Intel CPUs like the Lenovo Yoga Chromebook C630 and the Acer Chromebook Spin 13.

If you haven’t seen it, we’re going to run through a few examples. First off, how do you get it? It’s really easy: it’s just a couple of clicks. In the background, this is downloading a virtual machine, setting up containers and configuring it all; Dylan’s going to go more into that in a few minutes. But for you as a developer, it’s just a couple of clicks to get started, and it adds a terminal to your launcher.

Now, if you open that terminal, you’ll see that you have a pretty standard Debian environment, and we’ve already loaded in a bunch of the tools that developers expect, like git and vim. If you need anything else, you have the apt package manager and you can install whatever packages you need, and if you want to instead install programs via .deb files, you can do that too. This gives you access to thousands of world-class developer tools.
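As a quick sketch, a first session in that terminal might look like this (the package choices and the .deb file path are only illustrative examples, not anything from the demo):

```shell
sudo apt update                      # refresh the Debian package lists
sudo apt install -y build-essential  # e.g. grab a compiler toolchain
sudo apt install -y ./some-tool.deb  # a downloaded .deb installs directly (hypothetical file)
```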

Now, once you’ve installed any graphical apps, you’ll find that they all show up in your launcher, just like the rest of your Chrome OS apps, and if you open them, they show up in your window manager, again just like the rest of your Chrome OS apps. This is the simple experience that people have come to expect from Chrome OS, and we didn’t want to change that with Linux. But importantly, this is also secure.

You don’t have to worry about malware accessing your files, snooping on your traffic, or infecting your peripherals. I’d ask you to trust us on that, but this is way too important for you to take on trust alone. So over the course of this talk, Dylan and Sudha are going to walk you through the principles behind the architecture and design of crostini. We’re then going to dissect some common developer flows to show you how these principles apply, and finally, we’re going to share some tips and tricks for advanced usage, for the power users out there.

So now I’m going to hand it over to Dylan to talk about the architecture. Okay, so Chrome OS has always had a layered approach to security, and our big layer has always been the browser and the renderer: running all untrusted code in a nice, isolated renderer keeps the attack surface of your core system to an absolute minimum. Renderers aren’t allowed to make a lot of system calls, and they can’t poke at random bits of your kernel. That worked really well for web pages and web apps.

However, for developer tools, you need to install a lot of different programs. They need a lot of different privileges; they can do anything any app on Linux can do, and that wasn’t acceptable for us on the core of Chrome OS. So we needed another layer: we added a virtualization layer that lives in the main Chrome OS layer and spins up a VM. Now, this VM has a much more limited interface, while still exposing a full Linux kernel to the programs that run inside the VM.

The only way the VM can talk to Chrome OS proper is through a small API that the crosvm program on the left up there exposes to the guest. This was pretty good: now we’ve got a greatly reduced attack surface. We were pretty happy with this, but we wanted to go a little further, so we made sure that the guest VM was also signed by Google and somewhat trusted. This lets us trust some of the actions the guest VM takes, and it’s also read-only.

So users can only break things so much, and no matter what you do, you’re going to be able to boot a VM. However, with all that security solved, we’re back in a situation where you don’t have enough flexibility: your apps can’t do anything. It’s a read-only thing; you can’t install anything in it. So we added another layer, and for this we used LXD from Canonical. That team’s been very helpful in getting this spun up with us.

It’s a pretty standard container runtime; it’s built for running system containers, and in our case we start a system container of Debian and expose that to the user. So that crosvm layer I was talking about is kind of the most important part of the security story here: it’s the last line of defense before something gets into Chrome OS. So we focused on this for a long time and made sure we got it as secure as possible.

We wrote it in a memory-safe programming language: we chose Rust. This eliminates buffer overflows, integer overflows, and a lot of the common memory-safety bugs that are exploited by attackers. We were pretty happy with that, but we again added another layer of security here, in that we broke up the virtualization program into pieces and made sure that each piece that interfaces with the guest only has access to small parts of your host Chrome OS system.

On your host Chrome OS system, you’ve got your bank’s web page open, you’ve got your online tax filing thing open; you’ve got all kinds of personally identifiable information everywhere. We really wanted to protect that, but we needed to give the guest access to things like a random number generator, a display, or a USB device. So each of those got their own jail, and they can only see the thing they need. Our random number generator can generate random numbers, but it can’t access any files; it’s in an empty file system.

From its perspective, it doesn’t have any network access. The display driver can access the display, but again, it can’t touch the network, and it can’t go grab your files and upload them, even if somebody gets into it and tries to make it do things we didn’t intend it to. This is all a little complicated, but we’ve added a great amount of system UI to make this easy for you to use, so when you’re just doing your job as a developer, you don’t have to worry about these pretty pictures I’ve drawn for you; I’ll show you what we did.

Thank you, Dylan. Security is absolutely top of mind for us. While crafting the Linux experience on Chromebooks, we came up with three high-level design goals. The first goal was to keep your experience intuitive. Everyone here in this room has been using computers for a long time, and you have established workflows and habits.

So, basically, what we wanted to do is match those expectations. We wanted to provide an experience that’s natural to you. We want developers everywhere to be using Chromebooks and to feel right at home doing it. The second goal was to make your experience native. We could have taken the easy path by giving you a full Linux desktop in a VM, but that wasn’t good enough. Our goal was to bring the Linux apps you depend on for development into your native Chrome OS experience.

The third goal was to make your experience simple, and I think this is very important. There’s a lot of complexity going on under the hood, and we want to leave it there. Our guiding principle is that complexity shouldn’t interfere with the user experience. There are a couple of things we are trying to balance here: the security concerns that come with installing Linux apps on Chromebooks, and the simplicity that comes with sticking to the design patterns established by Chrome OS. Our mission was to find that sweet spot. All right.

So now we’re going to talk about three common developer flows and see how they work with crostini. The first of these is accessing files. As developers, we have to do this all the time: our editors need to access files, as do our compilers, our source control, and a whole lot more. But the problem is that our file systems hold a lot more than just code. They have our personal photos, our tax returns.

Maybe that novel you’ve been working on. A lot can go wrong: ransomware can hold all of that data hostage, malware can upload your files to some random server, or maybe you just get something that goes and deletes everything for the fun of it. We built crostini with those threats in mind, to limit what can go wrong, and Dylan will tell you how. So our goal in sharing files with your VM and with your container was to make it easy for you to get the files you need for your development tasks.

We wanted to put them where you need them, but not expose things you don’t want exposed to untrusted code, because ultimately we don’t trust the code that’s running inside this VM. To do this, we took a layered approach. Your files all live in Chrome OS at the very bottom, and we share them out to the VM with a 9p server. We named it 9s. Again, we wrote it in Rust, so it’s memory-safe; we fuzzed it to make sure unexpected inputs don’t cause unexpected behavior; and we put it in a tight jail.

So it can access only the files you share with it, and it takes those files and exports them to the VM. The VM mounts them with the 9p client that’s built into Linux, and then LXD takes that mount and exposes it into your container, where your development tools are running. The important thing here is that your container can only see the files you say you want to share with your development environment. Your VM can only see those same files, and even the server we wrote running on Chrome OS can only see those files.

It doesn’t get to see everything. So if somebody exploits this stack all the way back into Chrome OS, they still don’t have access to the files you haven’t shared with the container. That’s a lot of stuff to set up: setting up 9p mounts, bind-mounting things into containers. We had to do this manually for a while when we were developing it, and it was painful, so Sudha is going to show you how easy we made it for you.
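For a sense of what that manual setup looked like, here is an illustrative sketch only; the mount tag, paths, and device name below are hypothetical, not the real crostini internals. Inside the VM you would mount the host’s 9p export, then bind it into the container through LXD:

```shell
# Inside the VM: mount the 9p share exported by the host's 9s server
# ("shared" is a hypothetical mount tag)
sudo mount -t 9p -o trans=virtio,version=9p2000.L shared /mnt/shared

# Still in the VM: bind that mount into the penguin container via LXD
lxc config device add penguin host-files disk \
    source=/mnt/shared path=/mnt/chromeos
```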

There are a lot of layers going on, but let’s see how simple this is in the UI. Right out of the box, you have a directory called Linux files, which is your home directory within Linux. Anything in this directory is automatically shared with Linux. Outside of this directory, anywhere else on the device, Linux doesn’t have access to anything until you grant permissions. I’ll walk you through a couple of examples here. Let’s say you’re working on a project, and you see yourself needing files from one particular folder.

Say it’s a folder called illustrations. To share it, all you have to do is open the right-click menu and click on Share with Linux. In as simple as two steps, you have now shared this folder with Linux. If you notice, this folder is in Google Drive, and that’s a cool thing. When you don’t want to share it anymore, you can do that by going to Settings and unsharing it. Here’s another example, where we made quick edits really simple for you.

You have a data file in your Downloads folder, and when you double-click it, it automatically opens in VS Code. When this happens, in the background it’s implicitly shared, and the sharing lasts until you restart. This is the balance of security and simplicity we wanted to bring you. Thank you. So, for the second developer flow, we’re going to look at running a web server.

Being Chrome OS, we care a lot about people making great web apps, and we want to make sure that they can create those on a Chromebook; being able to run a web server is pretty central to building any web app. Unfortunately, web servers also need a pretty low level of access, and that can cause some problems. Code that can run a web server is also capable of snooping on your internet traffic: it can know what sites you’re accessing and, in some cases, even see the contents of those pages.

This means that a malicious web server could potentially track everything that you’re doing. Again, we thought of this as we designed crostini, and we made sure that we prevented this kind of attack. Dylan will tell you how. I can be called “Linux Dylan”, it’s my job. All right, so starting a web server from crostini is simple; we’ve got a good demo over in the web dev sandbox already. You type a command and fire up your web server, just like you would on any Linux distribution out there.

What’s actually happening under the hood, though, is that you’re in a container, and you open up a port. That port is in a network namespace inside a VM, running under our special hypervisor, which puts its network stack in another namespace on the host, and then finally out to Chrome. So Chrome can’t get back in, which is great for security: you’ve got wonderful isolation. But if I want to test this new PWA or web page I’m running in my VM, how do I get Chrome to talk to it? This was not simple.

So for that, we had to add some daemons along the way; actually, every layer gets a daemon for this. The first one is running in the VM, and it’s sitting there waiting to check if any running container happens to open a port. Then it’s got to figure out which container opened that port, bundle that information up, and send it to Chrome OS: hey, this port in this container is listening.

The user might want to use that port. On the Chrome OS side, the other daemon responds and says: okay, I will set up a route to do some forwarding. I’m going to forward all of this over vsock, which is a protocol used to talk to local VMs on a machine; that’s kept under the hood. So either end talks HTTP to the daemons, and the daemons talk vsock to each other. The key here is that the web server gets to talk HTTP, Chrome gets to talk HTTP, everything’s normal, and everything works just like you would expect. Under the hood we’ve got all these extra daemons and vsock forwarding going on, but we’ve hidden that. One other important thing: we’ve made it trusted, so you can get all your PWA features and you can install it to your desktop. Even though it’s not technically the same machine, we know what it is, because we’ve got the information; we set up the VM, so we allow it to be a trusted domain. All this complexity, I think, makes for one of our best demos today of how complicated we made it under the hood and how simple you’re going to see it is to actually use.

I totally agree that this is very complicated under the hood, but in the UI it’s exactly like you would expect it to be. Let’s say you’re experimenting with building this cool PWA. Here in the terminal, you’re in your pwa-starter-kit folder, you run a command to start your web server, and if you look at the bottom of the screen, it’s listening on port 8080.

At this point, you can launch your browser, go to localhost:8080, and test your web app. On the screen here, on the left you have your web app in Chrome, and on the right, if you notice, it’s in Firefox. Yes, you can test your web app on a Chromebook in Firefox too. And if you noticed, we did not prompt you to give any permissions while we were in this flow. This is because the host is accessing the VM, and not the other way around.
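That flow is easy to approximate yourself. Here is a minimal stand-in for the demo’s server (the demo used a pwa-starter-kit command; any web server on port 8080 behaves the same) plus a localhost check of the kind the browser performs:

```shell
# Serve a tiny page on port 8080, then fetch it over localhost,
# just as the browser does in the demo.
mkdir -p /tmp/pwa-demo
echo '<h1>hello</h1>' > /tmp/pwa-demo/index.html
python3 -m http.server 8080 --directory /tmp/pwa-demo &
SERVER_PID=$!
sleep 1
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/)
echo "$STATUS"    # expect 200
kill "$SERVER_PID"
```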

Again, this is another way we balanced the security and simplicity factors we were talking about. All right, finally, for our third demo, we’re going to talk about testing an Android app. Now, this is really exciting, because just yesterday we announced that Android Studio is officially supported on Chromebooks, and we even created an installer just for Chrome OS to make it really easy to get started with. Now, of course, Android Studio isn’t the only thing that you need in order to build a great Android app.

You also need something to test that app on, usually a phone. Well, you could do that over Wi-Fi with remote ADB and all that sort of stuff, but we wanted to make it easy, just the experience that you’d expect on any other device: I can plug my phone in over USB and test my app that way. Now, if I’m an Android developer, sure, I’ll plug my phone in to test my app, but I’m also going to plug in a lot of other devices over USB over the course of my day. I’ll plug in a USB drive that has a lot of family photos on it, I’ll plug in a wearable that has some health information, and I may even plug in my security key for work.

That gives me all of my access. Malware can take advantage of these devices to uniquely identify you as you move between machines, to spread itself, or even to make changes to them. Again, we thought of these threats when designing crostini and made sure that we were preventing them. Implementing USB was a lot of fun for us; it might have been our most painful stack. The same principles apply: we’ve got our layers, and we protect the host. There’s a lot of attack surface in a host’s USB stack; it’s a very complicated and kind of loosely specced protocol.

Well, it’s an exact spec that’s loosely implemented by a lot of people. So we’ve hidden that and kept it on the host side, and wrote a device that lives in a crosvm jail. Again, we’ve got a USB driver; it’s pretty complicated, and it’s got a lot of code in it. I’m sure there’s a bug or two, so we made sure it was very well isolated: it can’t get to your files, it can’t get to the network, and it also can’t get to any USB device.

You have to explicitly say: hey, I want to give this USB device to my development environment. We’ve tried to make that as easy as possible. What actually happens under the hood is that we’ve always got an emulated USB bus running, so that the guest always sees: I’ve got a USB bus, and there’s nothing plugged in. Once you indicate that you want to give a device to your VM, it says: okay, I’m going to add this device to this bus. Then we show it to the guest, and the guest in turn has to forward it into the container, where the container can see it. There were two things we were really focused on here.

One was security: again, we addressed that with the jail, and we made sure the attack surface was as minimal as possible. It’s also written in Rust, so it’s nice and memory-safe, and it’s fuzzed. The other issue here is privacy, because people somehow use lists of USB devices attached to machines to fingerprint and track users, and we wanted to make sure the untrusted code running inside the container couldn’t be another way to do that.

Again, this is a lot of steps: we have to create a device, export it to a VM, export it to a container, and decide which devices to export and which not to. And again, we’ll have a demo that shows how easy it is. Okay, this is the last demo. Let’s say I’m on my Linux-enabled Chromebook and I plug in my phone: you’ll see a notification that prompts you to connect it to Linux.

At this point, only Chrome OS has knowledge of your phone. Linux doesn’t even know that your phone exists, and that’s a good thing. If you see here, your phone is not listed in the USB list, but when you rerun the command after you connect via the notification, your phone shows up in the list. At this point, you have granted Linux access to your phone. Let’s say you’re working on a project, developing a cool app again in Android Studio, and you’re ready to test it out.

You hit Run and select the phone, and boom, just like that, you’re able to test your app on your phone. At this point you can debug and test out your app. Finally, you can go to Settings and manage what Linux has access to at any point in time. So you can see how security is at the core of your Linux experience on Chromebooks: you, the user, are in full control at all times of what Linux has access to.
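The command-line side of that flow can be sketched like this, assuming a phone with USB debugging enabled has already been connected to Linux via the notification (the APK name is a hypothetical build artifact):

```shell
lsusb                        # the phone now appears on the container's emulated USB bus
adb devices                  # ADB sees it just as on any other Linux machine
adb install app-debug.apk    # push and install a debug build onto the phone
```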

We take advantage of a variety of UX patterns to make it simple to use and also native to Chrome OS. The combination of the principles of Chrome OS and crostini makes this experience pretty unique. Thanks. My turn! All right, good, we’ve got plenty of time. So we’ve been talking about a lot of details, and I’ve been talking a lot about layers and jails; all of that’s important, and it’s a good reason for you to trust our normal flows. When I’m using my Chromebook,

I almost always stay within these common workflows that we’ve polished and made sure work. However, a lot of the technical detail I was talking about is still usable, and we’ve left hooks in for you to play with it. So I’m glad I’ve got time left, so I can go through a few of these examples and just whet your appetite for what else you can do. We don’t test this stuff, and we don’t support this stuff.

We really want the standard flow to be enough for everybody, but every once in a while there might be a reason you want to do something a little more advanced, or you might just want to go have fun and play with things under the hood. We’re tinkerers, right? We’re supposed to be. So we’ll go through and show how some of this stuff works. All of this is going to be from the Chrome OS shell. This has been in Chrome OS for longer than I have, and Ctrl+Alt+T gets you a shell.

There’s a set of debug commands you can run; we’re going to focus on one command, the vmc command, which we added to control virtual machines and containers. The most basic thing you can do is a vmc list: it’ll show you what VMs you have installed on your system. The default VM is called termina; hopefully the font’s big enough that you can see what size it is. The termina VM is the one that all the demos on the earlier slides were done in.

So it’s up and running. We’ve made a shortcut to enter a container inside of a VM. If you want to go into the default container, the container’s name is penguin; again, that’s where we were doing all these demos from. There’s a vmc container command to get you in there. We’ll pop out of there, and then we’ll pop back into just the VM: vmc start enters your virtual machine without entering your container.
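As a sketch, those crosh commands look like this (crosh opens with Ctrl+Alt+T; termina and penguin are the default names, and exact argument forms may vary across Chrome OS versions):

```shell
vmc list                        # show installed VMs; the default is "termina"
vmc container termina penguin   # shortcut into the default container
vmc start termina               # enter the VM itself, without the container
```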

So if you go back to my layers, it’s the one in the middle, the thing that LXD runs in, and the reason you want to be in here is if you want to manipulate or change containers. I mentioned we use LXD, so there are going to be a lot of lxc commands; lxc is the LXD control program. This is well documented online, and most of it will work inside Chrome OS just like it does on a default install. The first one you can do is a list: you can see we’ve got penguin running; we have one container, and it’s up and running.

It’s got an IP address, so we’ve got our one container. We might want to play with it a little bit, and before we do, maybe I want to make sure I can get back to a state where I know it’s good, right? Because I’ve broken them before, and it’s nice to be able to just go back to where I was and play around without worrying. So there’s a standard lxc command called snapshot: you give it your container name, and you can give it the name of your snapshot, and now you’ve got a saved image.

You can go back to it if you break things. It’s copy-on-write (we use Btrfs in the VM), so you’re not eating up a ton of disk space. We can get info on our container; this gives a bunch of information. Again, you can go poke around with this on a Chromebook if you want to. The important bit here is that we’ve got one snapshot at the bottom, the snapshot we just created, and you can have multiple snapshots.

It’s got a date on it to help you remember, if you didn’t use a very creative name. And then when you want to restore it back, there’s lxc restore. These are well-named commands; they did a better job with naming than I did. If you really want to go and play with different things, sometimes you want more than one container. So I’ve got my penguin container, and I’m going to go install some different libraries in this one.
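Those snapshot commands, sketched out (the snapshot name io-demo is just an example):

```shell
lxc snapshot penguin io-demo   # save the container's current state
lxc info penguin               # container details; snapshots are listed at the bottom
lxc restore penguin io-demo    # roll back to the saved state
```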

Maybe I want to have a container that’s got Python 2.7 and a different one that’s got Python 3, or maybe I want a different container for writing Go and another for writing Rust. So we let you do that: you can create as many containers as you want, disk space permitting; these do cost disk space. The most basic way to start off a new container is to copy an existing one, with the lxc copy command.

The example up here copies the default penguin container over to a new container named kingfisher. You can list the containers; we’ve got two. By default, new containers are stopped, so we have to start it. Now we can list the two, and there it is, running, and you can jump in: you say, hey, I want to run bash in kingfisher, and now I’ve got a shell in my new container, and I can go off and install whatever random toolchain I didn’t want in my default container.

Taking that one step further: we chose Debian because it was kind of the easiest thing for us to do, but we didn’t want to tie you down to that. We support the Debian workflow, and we support some guest packages that are installed in Debian by default. But some people want to use their favorite distro, and there is a huge number of distros available from the image server that Canonical runs. We’ll install an Arch one here; I’m not an Arch guy, I don’t really know much about Arch, but some of my co-workers talked me into playing with it. So now you can see we’ve got three containers: two Debian containers, my penguin and my kingfisher, and now one called arch-test. Again, I can enter it by telling it to run bash, and if I want to install packages in this one, I’ll use pacman instead of apt. It’s actually Arch, I promise.

That’s just a taste of what you can do from here. If you go and look at the lxc and LXD documentation online, you can get some more ideas; there’s even some help online about installing other distros and getting them to integrate better with the GUI, if you want more than just a command line. All right, so Dylan just showed you a bunch of really cool tricks you can do with crostini when you go under the hood, and if you’re interested in this kind of thing, we really recommend checking out the crostini subreddit.
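The multi-container commands from this section can be collected into one sketch (container names are the ones from the talk; the Arch image alias on the public image server may differ):

```shell
lxc copy penguin kingfisher       # clone the default container
lxc start kingfisher              # new containers start out stopped
lxc exec kingfisher -- bash       # get a shell in the clone

# Pull a different distro from the public image server (alias may vary):
lxc launch images:archlinux arch-test
lxc exec arch-test -- bash        # inside, use pacman instead of apt
```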

The folks there find features as soon as we release them, sometimes even sooner, and they’re also really welcoming to new users of Linux on Chromebooks. So if you have any questions, please check it out, and a big thanks to the folks there. So that’s Linux on Chromebooks. As you can see, we already support a lot of web and Android developer flows, and there’s a lot more to come, both in supporting other developers and in expanding what we can do with new capabilities like multiple containers and backup and restore. We’re going to keep applying these principles of simplicity and security to give you the best developer experience possible. Whenever you’re ready, we hope you’ll join us.

Thank you.


 

Categories
Online Marketing

Linux for Chromebooks: Secure Development (Google I/O ’19)

We are here to talk to you about Linux for Chromebooks, also known as crostini. We will start by introducing ourselves. My name is Sudha; I am a designer on crostini for Chromebooks. Hi, I’m Dylan, the Chrome OS virtualization lead. And I’m Tom, product manager for Linux on Chromebooks. Now, it’s the end of day two at I/O, and you’ve probably already been to a bunch of different sessions that have talked about all the new frameworks you need to be using or the platforms you need to be building for, and everyone’s right.

You absolutely should be, but we’re not really here to talk about that. Instead, what we want to talk about is you as developers and how you can get more peace of mind by using Linux on Chromebooks, we give you that peace of mind by balancing simplicity and security. On that note, let’s do a quick user study. How many of you are developers in the audience? Wow, that’s full room as expected. Keep your hands raised, how many of you use your computers for anything else, other than development like doing your taxes, checking email, again, 100 % of you, okay, one last question: how many of you are worried about security? Good, that’s pretty! I mean you all should be so I’m glad to see many hands up anyway, so I don’t know about you, but when I start a new project I I get stuck a lot right.

I hit a lot of walls and I hit a lot of barriers and go to look for a problem, go to look for a solution and I turn to Google. Luckily, Google’s almost always got a great answer for me. Unluckily. Sometimes the answer looks like this, and I know I shouldn’t run this script from evil site, comm and pipe it to soo do. But you know that deadlines coming up. This may be, the site, looks kind of legit, so in this case I’ll make an exception and I’ll do this and then it happens again and again and eventually I end up with a system that I don’t trust as much as I should, because I don’t really Know what code I’ve run on it anymore? I don’t have time to read all these scripts.

My solution to this has been to carry two laptops, one for my developer world and one for my everything else world that I want to be secure in, but recently I switched to using one laptop and Tom’s going to talk about how I do that. So our goal with Chrome OS has been to give you a simple and secure experience from the start, but if you tried it previously, you might have seen that it wasn’t quite ready for developers in order to be simple and secure.

We couldn’t run all of the tools that developers need to get their job done, but that all changed at i/o. Last year, when we announced that we were going to start supporting Linux on Chromebooks Linux on Chromebooks lets, you run all of your favorite editors, IDs and tools, and it’s now supported on over 50 % of Chromebooks, including great devices with eighth generation. Intel CPUs like the Lenovo yoga book C 630 and the Acer Chromebook spin 13.

If you haven’t seen it we’re going to run through a few examples. First off, how do you get it? It’s really easy! It’s just a couple clicks now. In the background, this is downloading a virtual machine setting up containers configuring, it all Dylan’s, going to go more into that in a few minutes. But for you as a developer, it’s just a couple clicks to get started and this adds a terminal to your launcher.

Now, if you open that terminal, you’ll see that you have a pretty standard, debian environment and we’ve already loaded in a bunch of the tools that developers expect like git and vim. And if you need anything else, you have the apt package manager and you can install whatever packages you need and if you want to, instead install files or install programs via dev files, you can do that too. This gives you access to thousands of world-class developer tools.

Now, once you’ve installed, any graphical you’ll find that they all show up in your launcher, just like the rest of your Chrome, OS apps, and if you open them, they show up in your window manager again just like the rest of your Chrome, OS apps. This is the simple experience that people have come to expect from Chrome OS and we didn’t want to change that with Linux. But importantly, this is also secure.

You don’t have to worry about malware, accessing your files, snooping on your traffic or infecting your peripherals. I’d ask you to trust us on that, but this is way too important for you to take on Trust alone. So over the course of this talk, Dylan and Sudha are going to walk you through the principles behind the architecture and design of crostini. We’re then, going to dissect some common developer flows to show you how these principles apply and, finally, we’re going to share some tips and tricks for advanced usage for the power users out there.

So now I’m going to hand it over to Dylan to talk about the architecture. Okay, so Chrome OS has always had a layered approach to security, and our big layer has always been the browser and the renderer and running all untrusted code in a nice, isolated renderer, and that keeps the attack surface of your core system to an absolute minimum. They’re not allowed to make a lot of system calls, they can’t poke at random bits of your kernel and that worked really well for webpages web apps.

However, for developer tools, I need to install a lot of different programs. They need a lot of different privileges. They can do anything any app on Linux can do and that wasn’t acceptable for us on the core of Chrome OS. So we need that a layer, so we added a virtualization layer and that lives in the main, Chrome OS layer and that spins up a VM. And now this VM has a much more limited interface, while still exposing a full Linux kernel to the programs that run inside the VM.

The only way the VM can talk to Chrome OS proper is through a small API that that cross VM program on the left up there exposes to the guest. This was pretty good. Now we’ve got a lot greatly reduced attack surface. We were pretty happy with this. We wanted to go a little further, so we made sure that the guest VM was also signed by Google and somewhat trusted. This lets us trust some of the actions the guest VM takes, and it’s also read-only.

So users can only break things so much and that no matter what you do, you’re going to be able to boot a VM. However, with all that security solved, we’re back in a situation where you don’t have enough flexibility, your apps can’t do anything. It’s a read-only thing: you can’t install anything in it, so we had another layer and for this we stole used lxd from canonical. That teams been very helpful in getting this spun up with us.

It’s a pretty standard container runtime. It’s built for running system containers, and in our case we start a system container of Debian and expose that to the user. That crosvm layer I was talking about is kind of the most important part of the security story here; it’s the last line of defense before something gets into Chrome OS. So we focused on this for a long time and made sure we got it as secure as possible.

We wrote it in a memory-safe programming language: we chose Rust. This eliminates buffer overflows, integer overflows, and a lot of the common bugs related to memory safety that are exploited by attackers. We were pretty happy with that, but we again added another layer of security here, in that we broke up the virtualization program into pillars and made sure that each pillar that interfaces with the guest only has access to small parts of your host Chrome OS system.

So on your host Chrome OS system, you’ve got your bank’s web page open, you’ve got your online tax filing thing open, you’ve got all kinds of personally identifiable information everywhere. We really wanted to protect that, but we needed to give the guest access to things like a random number, a display, a USB device. So each of those got their own jail, and they can only see the thing they need. Our random number generator can generate random numbers; it can’t access any files, it’s in an empty file system.

From its perspective, it doesn’t have any network access. The display driver can access the display; again, it can’t touch the network, it can’t go grab your files and upload them, even if somebody gets into it and tries to make it do things we didn’t intend it to. This is all a little complicated, but we’ve added a great amount of system UI to make this easy for you to use, so when you’re just doing your job as a developer, you don’t have to worry about these pretty pictures I’ve drawn for you.

Thank you, Dylan. Security is absolutely top of mind for us while crafting the Linux experience on Chromebooks. We came up with three high-level design goals. The first goal was to keep your experience intuitive. Everyone here in this room has been using computers for a long time, and you have established your workflows and habits.

So basically, what we wanted to do is match those expectations. We wanted to provide an experience that’s natural to you. We want developers everywhere to be using Chromebooks and to feel right at home doing it. The second goal was to make your experience native. We could have taken the easy path by giving you a full Linux desktop in a VM, but that wasn’t good enough. Our goal was to bring the Linux apps you depend on for development into your native Chrome OS experience.

The third goal was to make your experience simple, and I think this is very important. There’s a lot of complexity going on under the hood, and we want to leave it there. Our guiding principle is that complexity shouldn’t interfere with the user experience. There are a couple of things we’re trying to balance here: the security concerns that come with installing Linux apps on Chromebooks, and the simplicity that comes with sticking to design patterns established by Chrome OS. Our mission was to find that sweet spot.

All right, so now we’re going to talk about three common developer flows and see how they work with Crostini. The first of these is accessing files. As developers, we have to do this all the time: our editors need to access files, as do our compilers, our source control, and a whole lot more. But the problem is that our file systems have a lot more than just code. They have our personal photos, our tax returns.

Maybe that novel you’ve been working on. A lot can go wrong: ransomware can hold all of that data hostage, malware can upload your files to some remote server, or maybe you just get something that goes and deletes everything for the fun of it. We built Crostini with those threats in mind to limit what can go wrong, and Dylan will tell you how.

So our goal in sharing files with your VM and with your container was to make it easy for you to get the files you need for your development tasks where you need them, but not expose things you don’t want exposed to untrusted code, because ultimately we don’t trust the code that’s running inside this VM. To do this, we took a layered approach. Your files all live in Chrome OS at the very bottom, and we share them out to the VM with a 9p server. We named it 9s. Again, we wrote it in Rust, so it’s memory safe; we fuzzed it to make sure unexpected inputs don’t cause unexpected behavior; and we put it in a tight jail.

So it can access only the files you share with it, and it takes those files and exports them to the VM. The VM mounts them using the 9p support that’s built into Linux, and then lxd takes that mount and exposes it into your container, where your development tools are running. The important thing here is that your container can only see the files you say you want to share with your development environment. Your VM can only see those same files, and even the server we wrote running on Chrome OS can only see those files.

It doesn’t get to see everything. So if somebody exploits this stack all the way back into Chrome OS, they still don’t have access to the files you haven’t shared with the container. That’s a lot of stuff to set up: 9p mounts, bind-mounting things into containers. We had to do this manually for a while while we were developing it. It was painful, so let’s do a demo to show you how easy we made it for you.
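For the curious, the manual setup described here looked roughly like the following sketch. The mount tag, paths, and device name are illustrative assumptions rather than Crostini's actual configuration:

```shell
# Inside the termina VM: mount the host's 9p export over virtio.
# "shared" is a hypothetical mount tag exposed by the host's 9p server.
mount -t 9p -o trans=virtio,version=9p2000.L shared /mnt/shared

# Then bind-mount that share into the penguin container via lxd.
lxc config device add penguin shared-files disk \
    source=/mnt/shared path=/mnt/chromeos
```

This is exactly the kind of plumbing the Files app UI now does for you behind the scenes.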

There are a lot of layers going on, but let’s see how simple this is in the UI. Right out of the box, you have a directory called Linux files, which is your home directory within Linux. Anything in this directory is automatically shared with Linux. Outside of this directory, anywhere else on the device, Linux doesn’t have access to anything until you grant permissions.

I’ll walk you through a couple of examples here. Let’s say you’re working on a project, and you see yourself needing files from this one folder called illustrations. To share this, all you have to do is open the right-click menu and click on Share with Linux. In as few as two steps, you’ve now shared this folder with Linux. If you notice, this is in Google Drive, and that’s a cool thing. When you don’t want to share it anymore, you can do that by going to Settings and unsharing it. Here’s another example where we made quick edits really simple for you.

You have a data file in your Downloads folder, and when you double-click it, it automatically opens in VS Code. When this happens, in the background it’s implicitly shared, and the sharing lasts until you restart. This is the balance of security and simplicity we wanted to bring you. Thank you.

So for our second developer flow, we’re going to look at running a web server. Now, being Chrome OS, we care a lot about people making great web apps, and we want to make sure that they can create those on a Chromebook. Being able to run a web server is pretty central to being able to build any web app. Unfortunately, web servers also need a pretty low level of access, and that can cause some problems. The code that can run a web server is also capable of snooping on your internet traffic. It can know what sites you’re accessing and, in some cases, even see the contents of those pages.

This means that a malicious web server could potentially track everything that you’re doing. Now again, we thought of this as we designed Crostini, and we made sure that we prevented this kind of attack, and Linux Dylan will tell you how. (I can be called Linux Dylan. It’s my job.) All right, so starting a web server from Crostini is simple; we’ve got a good demo over in the web dev sandbox already. You type a command, you fire up your web server, just like you would on any Linux distribution out there.

What’s actually happening under the hood, though, is that you’re in a container, and you open up a port. That port’s in a network namespace inside a VM running under our special hypervisor, which puts its network stack in another namespace on the host, and then finally out to Chrome. So Chrome can’t get back in, which is great for security: you’ve got wonderful isolation. But if I want to test this new PWA or web page I’m running in my VM, how do I get Chrome to talk to it? This was not simple.

So for that we had to add some daemons along the way; actually, every layer gets a daemon for this. The first one runs in the VM, and it sits there waiting to check if any running container happens to open a port. Then it figures out which container opened that port, bundles that information up, and sends it to Chrome OS: hey, this port in this container is listening.

The user might want to use that port. And on the Chrome OS side, the other daemon responds: OK, I’ll set up a route to do some forwarding. I’m going to forward all of this over vsock, which is a protocol used to talk to local VMs on the machine. That’s kept under the hood, so either end talks HTTP to the daemons, and the daemons talk vsock to each other. The key here is that the web server gets to talk HTTP, Chrome gets to talk HTTP, everything’s normal, everything works just like you would expect. Well, under the hood we’ve got all these extra daemons and vsock forwarding going on, but we’ve hidden that. One other important thing: we’ve made it trusted, so you can get all your PWA features; you can install it to your desktop, even though it’s not technically the same machine. We know it is, because we set up the VM, so we allow that to be a trusted domain. And all this complexity, I think, makes one of our best demos today of how complicated we made it under the hood and how simple you’re going to see it is to actually use.

I totally agree that this is very complicated under the hood, but in the UI it’s exactly like you would expect it to be. Let’s say you’re experimenting with building this cool PWA here in the terminal. You’re in your folder, pwa-starter-kit, and you run a command to start your web server. If you look at the bottom of the screen, it’s listening at port 8080.

At this point, you can launch your browser, go to localhost:8080, and test your web app. On the screen here, on the left you have your web app in Chrome, and on the right, if you’re noticing, it’s in Firefox. Yes, you can test your web app on a Chromebook in Firefox too. If you noticed, we did not prompt you for any permissions while we were in this flow. This is because the host is accessing the VM and not the other way around.
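The demo flow above works with any server. As a minimal sketch (using Python's built-in static server in place of the talk's actual demo command), you can fire one up in the container and check it from the same shell:

```shell
# Start a throwaway static server on port 8080 in the background,
# standing in for the talk's demo web server.
python3 -m http.server 8080 --bind 127.0.0.1 &
SERVER_PID=$!
sleep 1

# On a Chromebook, Chrome reaches this same port through the
# forwarding daemons; here we just fetch localhost directly.
python3 - <<'EOF'
import urllib.request
print(urllib.request.urlopen("http://127.0.0.1:8080/").getcode())
EOF

kill "$SERVER_PID"
```

On Chrome OS, the same localhost:8080 URL works from the browser with no extra configuration, which is the whole point of the forwarding daemons.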

This is another way we balanced the security and simplicity factors we were talking about. All right, finally, for our third demo, we’re going to talk about testing an Android app. Now, this is really exciting, because just yesterday we announced that Android Studio is officially supported on Chromebooks, and we even created an installer just for Chrome OS to make it really easy to get started with. Now, of course, Android Studio isn’t the only thing that you need in order to build a great Android app.

You also need something to test that app on, usually a phone. Well, you could do that over Wi-Fi with remote ADB and all that sort of stuff, but we wanted to make it easy: just the experience that you’d expect on any other device. I can plug my phone in over USB and test my app that way. Now, if I’m an Android developer, sure, I’ll plug my phone in to test my app, but I’m also going to plug in a lot of other devices over USB over the course of my day. I’m going to plug in a USB drive that has a lot of family photos on it, I’m going to plug in a wearable that has some health information, I may even plug in my security key for work.

That gives me all of my access. Malware can take advantage of these devices to uniquely identify you as you move between machines, to spread itself, or even to make changes to them. Again, we thought of these threats when designing Crostini and made sure that we were preventing them. Implementing USB was a lot of fun for us; it might have been our most painful stack. The same principles apply: we’ve got our layers, and we protect the host. There’s a lot of attack surface in a host’s USB stack. It’s a very complicated, kind of loosely specced protocol.

Well, it’s an exact spec that’s loosely implemented by a lot of people. So we’ve hidden that, kept it on the host side, and wrote a device that lives in a crosvm jail. Again, we’ve got a USB driver; it’s pretty complicated, it’s got a lot of code in it. I’m sure there’s a bug or two, so we made sure it was very well isolated. It can’t get to your files, it can’t get to the network, and it also can’t get to any USB device.

You have to explicitly say: hey, I want to give this USB device to my development environment. We’ve tried to make that as easy as possible. What actually happens under the hood is that we’ve always got an emulated USB bus running, so the guest always sees: I’ve got a USB bus, there’s nothing plugged in. Once you indicate that you want to give a device to your VM, it says OK, I’m going to add this device to this bus, and then we show it to the guest. The guest, in turn, has to forward that into the container, and then the container can see it. There are two things we were really focused on here.

One was security: again, we addressed that with the jail, and we made sure the attack surface was as minimal as possible. It’s also written in Rust, so it’s nice and memory safe, and it’s fuzzed. The other issue here is privacy, because people somehow use lists of USB devices attached to machines to fingerprint and track users, and we wanted to make sure the untrusted code running inside the container couldn’t be another way to do that.

Again, this is a lot of steps: we have to create a device, we have to export it to a VM, we have to export it to a container, and we have to decide which device to export and which not. And again, we have a demo that shows how easy it is. OK, this is the last demo. Let’s say I’m on my Linux-enabled Chromebook and I’m plugging in my phone. You’ll see a notification that prompts you to connect it to Linux.

At this point, only Chrome OS has knowledge of your phone. Linux doesn’t even know that your phone exists, and that’s a good thing. If you look here, your phone is not listed in the USB list, but when you rerun the command after you tap Connect on the notification, your phone shows up in the list. At this point, you’ve granted Linux access to your phone. Let’s say you’re working on a project, developing a cool app again in Android Studio, and you’re ready to test it out.
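From inside the container, you can watch this handshake with lsusb. The device line shown is only an example of what a phone might look like (18d1 is Google's USB vendor ID); your output will differ:

```shell
# Before tapping "Connect to Linux": only the emulated bus is visible,
# with nothing attached.
lsusb

# After granting access in the notification, rerun it and the phone
# appears on the emulated bus, e.g.:
#   Bus 001 Device 002: ID 18d1:4ee7 Google Inc. (Pixel device)
lsusb
```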

You hit Run and select the phone, and boom, just like that, you’re able to test your app on your phone. At this point, you can debug and test out your app. Finally, you can go to Settings and manage what Linux has access to at any point in time. So you can see how security is at the core of your Linux experience on Chromebooks: you, the user, are in full control at all times of what Linux has access to.

We take advantage of a variety of UX patterns to make it simple to use and also native to Chrome OS. The combination of the principles of Chrome OS and Crostini makes this experience pretty unique. Thanks.

My turn? All right, good, we’ve got plenty of time. So we’ve been talking about a lot of details, and I’ve been talking a lot about layers and jails. All that’s important, and it’s a good reason for you to trust our normal flows. When I’m using my Chromebook, I almost always stay within these common workflows that we’ve polished and made sure work. However, a lot of that technical detail I was talking about is still usable, and we’ve left hooks in for you to play with it. So I’m glad I’ve got time left, so I can go through a few of these examples and whet your appetite for what else you can do. We don’t test this stuff. We don’t support this stuff.

We really want the standard flow to be enough for everybody, but every once in a while there might be a reason you want to do something a little more advanced, or, you know, you might just want to go have fun and play with things under the hood. We’re tinkerers, right? We’re supposed to be. So we’ll go through and show how some of this stuff works. All of this is going to be from crosh, the Chrome OS shell. This has been in Chrome OS since longer than I have, and Ctrl+Alt+T gets you a shell.

There’s a set of debug commands you can run. We’re going to focus on one command, which is the vmc command that we added to control virtual machines and containers. The basic thing you can do is a vmc list; it’ll show you what VMs you have installed on your system. The default VM is called termina; hopefully the font’s big enough and you can see what size it is. The termina VM is the one that all the demos on the earlier slides were done in.

So it’s up and running. We’ve made a shortcut to enter a container inside of a VM. If you want to go into the default container, the container’s name is penguin; again, that’s where we were doing all these demos from. So there’s a vmc container command to get you in there. We’ll pop out of there, and then we’ll pop back into just the VM: vmc start enters your virtual machine without entering your container.
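Put together, the crosh session for the commands above looks something like this (output, sizes, and prompts will vary by device; these commands only exist on Chrome OS):

```shell
# In crosh (Ctrl+Alt+T on Chrome OS):
vmc list                       # show installed VMs; "termina" is the default

vmc container termina penguin  # shortcut straight into the default container

vmc start termina              # or enter just the VM, without the container
```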

So if you go back to my layers, it’s the one in the middle, the thing that lxd runs in. The reason you want to be in here is if you want to manipulate or change containers. So I mentioned we used lxd; there are going to be a lot of lxc commands, and lxc is the lxd control program. This is well documented online, and most of it will work inside Chrome OS just like it does on a default install. The first one you can do is a list: you can see we’ve got penguin running. We have one container, and it’s up and running.

It’s got an IP address. So we’ve got our one container, and we might want to play with it a little bit. Before we do, maybe I want to make sure I can get back to a state where I know it’s good, right? Because I’ve broken them before, and it’s nice to be able to just go back to where I was and play around without worrying. So there’s a standard lxc command called snapshot: you give it your container name, and you can give it the name of your snapshot, and now you’ve got an image saved.
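From inside the termina VM, that flow looks like this (the container name is the one from the talk; the snapshot name is just an example):

```shell
# Save a known-good image of the penguin container.
lxc snapshot penguin known-good

# Play around, break things... then roll back to the saved state.
lxc restore penguin known-good
```

These are standard lxd commands, so the upstream lxd documentation applies.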

You can go back to it if you break things. It’s copy-on-write (we use Btrfs in the VM), so you’re not eating up a ton of disk space. We can get info on our container; this gives a bunch of information. Again, you can go poke around with this on a Chromebook if you want to. The important bit here is that we’ve got one snapshot at the bottom, the one snapshot we just created. You can have multiple snapshots.

It’s got a date on it to help you remember, if you didn’t use a very creative name. And then, when you want to restore it back: lxc restore. These are well-named commands; they did a better job with this than I did. If you really want to go and play with different things, sometimes you want more than one container. So I’ve got my penguin container, and I’m going to go install some different libraries in this one.

Maybe I want one container that’s got Python 2.7 and a different one that’s got Python 3, or maybe I want a different container for writing Go than the container I have for writing Rust. So we let you do that: you can create as many containers as you want, disk space permitting; these do cost disk space. The most basic way to start off a new container is to copy an existing one. There’s an lxc copy command.
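That copy flow, spelled out with standard lxc commands run inside the termina VM, is:

```shell
# Clone the default container into a new one named kingfisher.
lxc copy penguin kingfisher

# New copies start out stopped, so start it.
lxc start kingfisher

# Run a shell inside it and install whatever toolchain you like.
lxc exec kingfisher -- bash
```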

The example up here copies the default penguin container over to a new container named kingfisher. You can list the containers; now we’ve got two. By default, containers are stopped, so we have to start them; then we can list again, and there it is, running. And you can jump in: you say, hey, I want to run bash in kingfisher, and now I’ve got a shell in my new container, and I can go off and install whatever random toolchain

I didn’t want in my default container. Taking that one step further: we chose Debian because it was kind of the easiest thing for us to do, but we didn’t want to tie you down to that. We support the Debian workflow, and we support some guest packages that are installed in Debian by default, but some people want to use their favorite distro, and there is a huge number of distros available from the image server

that Canonical runs. We’ll install an Arch one here. I’m not an Arch guy; I don’t really know much about Arch, but some of my co-workers talked me into doing this and playing with it. So now you can see we’ve got three containers: I’ve got two Debian containers, my penguin and my kingfisher, and now I’ve got something called archtest. Again, I can enter it by telling it to run bash, and if I want to install packages in this one, I’ll use pacman instead of apt. It’s actually Arch, I promise.
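Spinning up that Arch container can be sketched as follows. The `images:archlinux` alias is an assumption about what the public image server publishes (check `lxc image list images:` for what's actually available):

```shell
# Launch a fresh Arch container from the public image server.
lxc launch images:archlinux archtest

# Enter it with a shell; inside, package management is pacman, not apt.
lxc exec archtest -- bash
```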

That’s just a taste of what you can do from here. If you go and look at the lxc and lxd documentation online, you can get some more ideas; there’s even some help online about installing other distros and getting them to integrate better with the GUI, if you want more than just a command line. All right, so Dylan just showed you a bunch of the really cool tricks you can do with Crostini when you go under the hood, and if you’re interested in this kind of thing, we really recommend checking out the Crostini subreddit.

The folks there are trying features as soon as we release them, sometimes even sooner, and they’re also really welcoming to new users of Linux on Chromebooks. So if you have any questions, please check it out, and a big thanks to the folks there. So that’s Linux on Chromebooks. As you can see, we already support a lot of web and Android developer flows, and there’s a lot more to come, both in supporting other developers and in expanding what we can do with new capabilities like multiple containers and backup and restore. We’re going to keep applying these principles of simplicity and security to give you the best developer experience possible. Whenever you’re ready, we hope you’ll join us.

Thank you.


 


Solving SEO with Headless Chrome (Polymer Summit 2017)

If you managed to pick up on my accent in the last five words, I am indeed Australian, and I’m honored to be followed up by Trey, my fellow Aussie, as well. Prior to joining this team, I worked on the beloved Chrome DevTools. One of my smallest, but maybe my greatest, contributions was adding the ability to rearrange tabs in DevTools; those are probably the greatest five lines I’ve ever written.

I did work on about five other features, so if you find me afterwards, feel free to ask me about them, and I might share a DevTools trick or two. More recently, I’ve had the humbling experience of building webcomponents.org and witnessing all the incredible components that all of you have built and published. For example, the one and only Pokémon selector. And if you’re the person who says, but hang on, there’s only 151 Pokémon in the original set, well, there’s even an option that lets you set that too. So all kudos to Sammy for this.

It was, however, in the process of building webcomponents.org that I encountered what brings us here today. So first, I’m going to cover my story of how I came to encounter this SEO problem while building webcomponents.org. We’ll then look at how I used headless Chrome to solve this, before diving into all the details of how that actually works and how you can use it. So I’m going to take a step back for a moment and talk about what I learned in the process of building webcomponents.org.

The first thing I learned was how the platform supports encapsulation through the use of web components. With this encapsulation comes inherent code reuse, which leads to a specific architecture. I also learned about progressive web apps and how they can provide us with fast, engaging experiences. I learned how the platform provides APIs, such as service workers, to help enable those experiences, and I learned how to compose web components to build a progressive web app.

We heard from Kevin yesterday about the PRPL pattern (push, render, pre-cache, lazy-load) as a method of optimizing delivery of this application to the user, and one of the architectures which enables us to utilize the PRPL pattern is the app shell model. It provides us with instant, reliable performance by using an aggressively cached app shell. You can see that for all the requests which hit our server, we serve the entry point file, which we serve regardless of the route.

The client then requests the app shell, which is the same URL across the application, so we can combine that with a service worker to achieve near-instant loading on repeat visits. The shell is then responsible for looking at the actual route that was requested and requesting the necessary resources to render that route. So at this point, I’d learned how to build a progressive web app using client-side technologies like web components and Polymer, and how to use patterns such as the PRPL pattern to deliver this application quickly to the user.

Then there’s the elephant in the room: SEO. Some of these bots are basically just running curl with that URL, and they stop right there: no rendering, no JavaScript. So what are we left with, with this PWA that we built using the app shell model? We’re left with just your entry point file, which has no information in it at all, and in fact it’s the same generic entry point file that you serve across your entire application.

So this is particularly problematic for web components, which require JavaScript to be executed for them to be useful. This issue applies to all search engine indexers that don’t render JavaScript, but it also applies to the plethora of link-rendering bots out there. There are social bots like Facebook and Twitter, but don’t forget the enormous number of link-rendering bots such as Slack, Hangouts, Gmail, you name it.

So what is it about the app shell model that I’d really like to keep? Well, for me, this approach pushes our application complexity out to the client. You can see that the server has no understanding of routes; it just serves the entry point file, and it has no real understanding of what the user is actually trying to achieve. This allows our server to be significantly decoupled from the front-end application, since it now only needs to expose a simple API to read and manipulate data.

The application that we pushed out to the client is then responsible for serving this data to the user and mediating user interactions to manipulate this data. So I asked: can we keep this simple architecture that we know and love, and also solve this SEO use case with zero performance cost? So then we thought: what if we just use headless Chrome to render on our behalf? So here’s a breakdown of how that would work.

We have our regular users who are making a request, and they would like a cat picture, because who wouldn’t? As part of this approach, we ask: are you a robot? To answer this, we look at the user agent string and check if it’s a known bot that doesn’t render. In this case, the user can render, so we serve the page as we normally would: the server responds with the fetch-cat-picture function, and then the client can go and execute that function to get the rendered result. By the way, this is one of my kittens, which I fostered recently; it’s super adorable.

Now, when we encounter a bot, we look at the user agent string and determine that it doesn’t render, and instead of serving that fetch-cat-picture function, we forward a request to headless Chrome to render this page on our behalf, and then we send the serialized, rendered response back to the bot, so it can see the full contents of the page.
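The user-agent check described here can be sketched as a tiny shell function. The bot list is illustrative rather than exhaustive, and real middleware would live in your server rather than a shell script:

```shell
# Return success (0) if the user agent looks like a non-rendering bot.
is_bot() {
  # Normalize to lowercase so the match is case-insensitive.
  ua=$(printf '%s' "$1" | tr 'A-Z' 'a-z')
  case "$ua" in
    *facebookexternalhit*|*twitterbot*|*slackbot*|*linkedinbot*) return 0 ;;
    *) return 1 ;;
  esac
}

if is_bot "Slackbot-LinkExpanding 1.0"; then
  echo "proxy the request to headless Chrome"
else
  echo "serve the normal app shell"
fi
```

Running this with Slack's link-expanding user agent takes the bot branch; a desktop browser's user agent takes the normal one.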

So I built a proof of concept of this approach for webcomponents.org, and it worked. I wrote a Medium post about it, and people were really interested in this approach and wanted to see more of it. So, based on this response, I eventually decided that instead of my hacky solution, I would build it properly. But then came the most challenging part of any project, and I know you’ve all experienced it as well: naming.

So I asked on our team chat for some suggestions, and I got a ton. These are some of our top ones; there are some great ones in there, like "power-render" and "use the platform (as a renderer)". However, today I’m very pleased to introduce Rendertron: let me render that for you. Rendertron is a Dockerized headless Chrome rendering solution. That’s a mouthful, so let’s break it down. First off, what is Docker, and why did I use it? Well, no one knows what it means, but it’s provocative. In all seriousness:

Docker containers allow you to create lightweight, standalone executable packages which isolate software from its surrounding environment. In Rendertron, we have headless Chrome packaged up in this container so that you can easily clone it and deploy it wherever you like. So what about headless Chrome? It was introduced in Chrome 59 for Linux and Mac, and Chrome 60 for Windows, and it allows Chrome to be run in environments which don’t have a UI, such as a server.

This means that you can now use Chrome as any part of your toolchain. You can use it for automated testing, you can use it for measuring the performance of your application, or for generating PDFs, amongst many other things. Headless Chrome itself exposes a really basic JSON API for managing tabs, with most of the power coming from the DevTools protocol. All of DevTools is built on top of this protocol.

So it’s a pretty powerful API, and one of the key reasons that headless Chrome is great is that it brings the latest and greatest from Chrome, ensuring that all the latest web platform features are supported. With Rendertron, this means that your SEO environment can now be a first-class environment, no different from the rest of your users. So just a quick shout-out: if this all sounds really interesting to you and you would like to include headless Chrome in some other way in your toolchain, there’s a brand new Node library that was published just last week that exposes a high-level API to control Chrome, while also bundling all of Chrome inside that Node package. You can check it out on GitHub at GoogleChrome/puppeteer. So we’ve looked at the high level of how headless Chrome can fit into your application to fulfill your SEO needs; now it’s time to dive into how it works.

But I’ve been talking a lot, so who wants to see Rendertron in action? All right. This is the Hacker News PWA, created by some of my awesome colleagues, and it’s built using Polymer and web components. It loads really fast and all round performs pretty well. We can see that there’s a separate network request which loads the main content that we see, and we can guess that it’s affected by this SEO problem, since it uses web components, which require JavaScript, and it pulls the data in asynchronously.

One quick way to verify this is by disabling JavaScript and refreshing the page. Once we do that, we can see that we still get the app header, since that was in the initial request, but we lose the main content of the page, which isn’t good. So we jump over to Rendertron, the headless Chrome service that is meant to render and serialize this for you. I wrote this UI as a quick way to put in a URL and test the output from Rendertron.

So first off, what are we hoping to see? Because these bots only perform one request, we want to see that whole page come back in that one network request. We also want to see that it doesn’t need any JavaScript to do this. So take a look: I’m going to put in the Hacker News URL and tell Rendertron to render and serialize it, and, despite being built with web components, it renders correctly. I’m going to disable JavaScript and verify that it still works.

So you can see it’s still there, and it all comes back in that single network request. Rendertron automatically detects when your PWA has completed loading. It looks at the page load event and ensures that it has fired, but we know that’s a really poor indication of when the page has actually completed loading, so Rendertron also ensures that any async work has been completed, and it looks at your network requests to make sure they’re finished as well.

In total, you have a ten-second rendering budget. This doesn’t mean that it waits ten seconds, though; it’ll finish as soon as your rendering is complete. If this is insufficient for you, you can also fire a custom event which signals to Rendertron that your PWA has completed loading. Serializing web components is tricky because of shadow DOM, which abstracts away part of the DOM tree, so to keep things simple.

Rendertron uses Shady DOM, which polyfills shadow DOM. This allows Rendertron to effectively serialize the DOM tree so that it can be preserved in the output. So let’s take a look at the News PWA, which you’ve all seen, also built by some of my other colleagues. We’ll plug that into Rendertron and ask Rendertron to render this as well; it’s also using web components, and there we have it.

So what do you need to do to enable this behavior? With Polymer 1 this is super easy, and Rendertron doesn’t actually need to do anything: simply append dom=shady to the URLs that you pass to Rendertron, and Polymer 1 will ensure that Shady DOM is used. With Polymer 2 and Web Components v1, it’s recommended you use webcomponents-loader.js, which pulls in all the right polyfills on different browsers.
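The Polymer 1 case can be sketched as a tiny helper that appends the dom=shady query parameter before the URL is handed to Rendertron (the function name is illustrative; the parameter is the one named in the talk):

```javascript
// Force Polymer 1 to use Shady DOM when Rendertron renders the page,
// by appending the dom=shady query parameter.
function withShadyDom(pageUrl) {
  const u = new URL(pageUrl);
  u.searchParams.set('dom', 'shady');
  return u.toString();
}
```

Using the WHATWG URL API here keeps the logic correct whether or not the URL already has a query string.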

You then set a flag for Rendertron, telling it that you’re using web components, and it will ensure that the polyfills it needs for serialization get enabled. Another feature of Rendertron is that it lets you set HTTP status codes. These status codes are used by indexers as important signals: for example, if a bot comes across a 404, it’s not going to link to that page, because that would be a really poor search result.

Your server, though, is still returning that entry point file with a status code of 200, so it looks like every URL exists. Rendertron lets you configure that status code from within your PWA, which understands when a page is invalid: simply add meta tags, dynamically is fine, to signal to Rendertron what the status code should be. Rendertron will then pick these up and return that status code to the bot. This approach isn’t specific to Polymer or even web components, so let’s plug in fonts.google.com and see what happens when we serialize it.

So that looks pretty good. Who can guess what JavaScript library was used to build Google Fonts? Angular. Rendertron works with any and all client-side technologies that work in Chrome and whose DOM tree can be serialized. The Rendertron endpoint also features screenshot capabilities, so that you can check that headless Chrome and the load-detection function are performing as you expect.
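Going back to status codes for a moment: the meta tag Rendertron looks for is named render:status_code (per the Rendertron docs). A sketch of adding it dynamically; the helper takes the document as a parameter only so it can be exercised outside a browser:

```javascript
// Signal a status code to Rendertron by adding a meta tag dynamically,
// e.g. when the client-side router decides a page does not exist.
function setBotStatusCode(doc, code) {
  const meta = doc.createElement('meta');
  meta.setAttribute('name', 'render:status_code');
  meta.setAttribute('content', String(code));
  doc.head.appendChild(meta);
  return meta;
}

// In the browser you would simply call: setBotStatusCode(document, 404);
```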

Unfortunately, this service is not fast: for each URL that we render, we spin up headless Chrome to render that entire page, so performance is strictly tied to the performance of your PWA. Rendertron does, however, implement a cache. This means that if we have rendered the same page within a certain cache freshness threshold, we’ll serve the cached response instead of re-rendering it again. So how can you get your hands on this today, and how do you use it? Well, first, you need to deploy the Rendertron service to an endpoint.
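The caching behaviour just described can be sketched as a simple freshness check; the shape of the cache entry and the threshold are illustrative, not Rendertron’s actual implementation:

```javascript
// Serve a cached rendering if it is younger than the freshness threshold;
// otherwise return null so the caller re-renders with headless Chrome.
function getCached(cache, url, now, maxAgeMs) {
  const entry = cache.get(url);
  if (entry && now - entry.renderedAt < maxAgeMs) return entry.html;
  return null; // cache miss or stale: caller re-renders and stores the result
}

const cache = new Map();
cache.set('/a', { renderedAt: 0, html: '<html>ok</html>' });
```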

You’ll need to clone the GitHub repo at github.com/GoogleChrome/rendertron. It’s built primarily for Google Cloud, so it’s easy to deploy there, but if you remember, this is a Docker container, so you can deploy it anywhere that supports a Docker image. To make things simple for you to test, we have the demo service endpoint, which you can hit at render-tron.appspot.com, and that’s the one with the UI that we saw earlier.

It is not intended to be used as a production endpoint. You are welcome to use it, but we make no guarantees on uptime. Having this as a ready-to-use service is something we might consider based on the interest we receive. So, just in case you’re wondering, my boss’s Twitter handle is @MattSMcNulty, in case you want to tell him how awesome I am. Once we have that endpoint up, you’re going to need to install some middleware in your application to do the user-agent splitting that I was talking about earlier.

This middleware needs to look at the user agent, figure out whether or not it can render, and if not, proxy the request through the Rendertron endpoint. If you’re using prpl-server, which is a Node server designed to serve production applications using the PRPL pattern, you simply need to specify the bot proxy option and provide it with your Rendertron endpoint. If you’re using Express, there’s a middleware that you can include directly by calling app.use with rendertron.makeMiddleware, passing the proxy endpoint and whether or not you’re using web components. If you’re not using either of these, check the docs for a list of community-maintained middleware; there’s a Firebase function there, as well as a list of existing middleware that Rendertron is compatible with.
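If you do roll your own, the user-agent split can be sketched like this; the bot list and the endpoint are illustrative (real community middleware maintains its own list):

```javascript
// Decide whether a request comes from a bot that can't render JavaScript.
const BOT_UA = /googlebot|bingbot|baiduspider|twitterbot|facebookexternalhit|linkedinbot|slackbot/i;

function isBot(userAgent) {
  return BOT_UA.test(userAgent || '');
}

// Express-style middleware: proxy bot requests through a Rendertron endpoint
// and pass everything else straight through to the app.
function rendertronProxy(endpoint, fetchImpl) {
  return async (req, res, next) => {
    if (!isBot(req.headers['user-agent'])) return next();
    const page = req.protocol + '://' + req.get('host') + req.originalUrl;
    const rendered = await fetchImpl(endpoint + '/render/' + encodeURIComponent(page));
    res.status(rendered.status).send(await rendered.text());
  };
}
```

Regular browsers fall through to next() and get the normal client-rendered app.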

If it’s not listed, it’s also fairly simple to roll your own middleware by simply proxying based on the user-agent string. And that’s it: that’s all the changes you need to make to use Rendertron today, and all these bots can now be happy. Rendertron is available to use today, compatible with any client-side technology, including both Polymer 1 and Polymer 2. Thank you.


 


Build a successful web presence with Google Search (Google I/O ’18)

You skipped lunch to come here about search, no pressure. So my name is Maria and I’m a webmaster trends analyst, coming all the way from Switzerland to talk to you about search. And I’m John Mueller, also a webmaster trends analyst; we’re both from Google Switzerland. Our role at Google is to help the web ecosystem be more successful in Google search.

So we hope we can bring some of this across here as well. Great. I want to start us off with an example of how this actually works, so you can get an idea of what you as a developer can do to be more successful with search. This is a Japanese website, Rakuten Recipes, and they wanted to get more users. They have a ton of delicious recipes on their site, so they were wondering: what can we do to get more people to notice us in search? What they decided to do is change their CMS so that they could mark up every recipe added to the site with structured data markup. What this does is let the search engine know that there are different entities, things like cooking time, different ingredients, a picture, and lets us display the result in a much more attractive way in the search results. And the site itself was in Japanese.

So we decided to swap the markup, for those of you who don’t speak Japanese, to something more legible: instead of dumplings you have here a party coffee cake, but it works the same way for dumplings. And this is how it ends up looking in the search results. You can see that, in addition to the regular elements of the search page, they have a title, they have the URL, and then they have the description.

They also have a really nice picture of dumplings, and they have the cooking time as well. So this worked out pretty well for them, in fact kind of spectacularly: they got 2.7 times more traffic from search. And we thought, you know, developers usually don’t get as much advice around search, and they might not even know about all of the pitfalls and possibilities. Shouldn’t we share some of that knowledge with them, so with you all as well? We hope we can show how search can make your projects a little bit more successful.

So today we’ll look into various types of public web presences and give you a broad overview of specific details that you, as a developer, can watch out for and implement. These details can help make your projects more successful in search, making it easier for search engines to send users to your projects directly. You might be thinking: as a developer, I don’t really care how, or if, my stuff appears in search. But your customers, your users, the people that you’re building these projects for, they probably do care.

And since you control how your content appears in search, you can have a huge impact here; think back to Rakuten Recipes and what they did there. Globally, Google Search and Google News send billions of visitors each month to websites for free, and it’s not just about websites. We’re going to be looking into the various ways that search can work depending on what you’re working on; a website is just the most common format.

You could also be building a web app, which is kind of similar but slightly different as well, or you could be contributing to a content management system, a so-called CMS, which enables others to build websites of their own, or, as a part of that, maybe you’re working on plugins or themes or extensions for these content management systems. We’ll take a brief look at each of these, with detailed recommendations for each of them, and, as I mentioned before, search brings billions of visitors to websites every month.

That’s a lot of visitors. We serve trillions of searches each year, and out of those (this is quite surprising for me every time I look at it) about 15% of the queries every day are completely new ones, things we’ve never seen before. So maybe they’re looking for one of your projects, and regardless of what you’re building, if search engines understand your content, you could get a lot more visitors and potential customers with search.

So you as a developer can control that through the way that you set up your website or content platform. To understand this better, let’s take a quick look at how search works. In order to be successful as a developer in search, you need to know at least the basics of how it works, and I’m going to take you through the super, super high-level picture. If you’re interested in the details, google.com/jobs, you’re welcome to apply, and then we can go into a lot more detail.

But let’s get started with the super high-level picture. We generally talk about three things: first crawling and discovery, then indexing, and finally ranking and serving. I’m going to show you very briefly what each of these things is about. Of course, in order for us to be able to show anything in the search results,

First, we need to be aware that it exists. So we have a series of systems that are going around following links on the web and downloading web pages: HTML files, and all the different resources that go into making a website, like JavaScript files, CSS, images, what have you. Those systems collectively are crawlers, and we call them Googlebot. The goal for us is to find everything that is fresh, new, interesting, relevant and important, and to do that in an efficient way; and in order to know which URLs to crawl, and in which order,

We have another set of systems, known as schedulers. They queue the URLs for the crawlers to go and fetch, and all of this then gets stored. You might think that this is a pretty simple process, but if you consider that we have to do this 20 billion times per day, you get an idea that it’s a little bit trickier than it seems at first sight. In fact, in 2016 we saw a hundred and thirty trillion pages, and for every new link that we see there are usually two more links that we’ve never seen before. So there’s constantly new stuff, and we have to decide what to crawl, how to update, and how to do this in the most efficient manner.

While we find the content, we have a series of other tasks. First, we have to make sure that we are allowed to access that content. Every time we access a site, we’ll first go to a file called robots.txt, which is a pretty simple file containing instructions for search engines and other crawlers; it tells us this is okay to fetch, and this is not okay, and we obey this very strictly.
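A robots.txt file really is that simple; a small illustrative example (the paths are made up):

```
# Allow everything except a private section; point crawlers at the sitemap.
User-agent: *
Disallow: /private/
Allow: /

Sitemap: https://example.com/sitemap.xml
```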

So that’s the first thing that we’ll try to find on a website. The other thing that we’ll try to do is get as much content as possible without troubling the normal work of the server, so the website can function and serve its clients as usual, and then finally we’ll try to handle errors gracefully. As a developer, you have two tasks here. The first, if you remember again that we do fetches 20 billion times a day and we see trillions of pages every year, is that your content should be really easy to discover.

Ways to do that, and John will go into a little bit more detail later, include submitting a list of URLs to us, like a sitemap, and checking that all the resources that are necessary for your site to be rendered are accessible to our crawlers. Once we’ve fetched everything that we were able to fetch, we go to the next stage, and that is indexing. Here we’re going to parse the content: what language is this page in? Are there any images? Is there a title? A description? And other elements on the page. To do that, we also try to render the page, and as a developer, especially if you’re building a lot of really cutting-edge fancy things,

You have to keep in mind that currently the search systems are using Chrome 41 to render pages, so not all of the functionality that you might be thinking about is supported by the search rendering systems. If you want to find out more, I would suggest you have a look at the talk that John did earlier this morning; in case you didn’t wake up at 8:30 to see it, it will be available on YouTube, and you’ll be able to see a lot more about what we support in search and how to render things properly.

Of course, given the huge number of pages on the web, we also don’t want to index more than one copy of each unique thing, so we have a lot of systems in place to eliminate duplicates and keep only one copy of each. And finally, we don’t want error pages and we don’t want any spam, so we kick all of that out. Everything else that we want to keep we put in the index and process so that it’s ready to serve to users when they search.

So for you as a developer, I guess it’s important to remember to make sure that key elements like titles and descriptions are available on each page that your users are creating, and also to check how it’s rendered; John will go into a lot more detail here later. And then finally, once we have everything in the index, when users start searching, we’re going to pull a set of pages that we think are relevant results, add a bunch of information that we’ve already accumulated, like how important they are and how they relate to the user’s query, and then show them in the specific order that we think is most relevant for this user.

This is mostly on our side, and you don’t need to worry about anything here if your content is already accessible and easy to render; but if you’re really interested in ranking and search quality, again, google.com/jobs, there are plenty of interesting problems to solve. So now that you know how search works, let’s summarize the two things that you need to remember: first, you have to help us find the content, and second, you have to help us evaluate the content.

If you’re able to do these two things, you’re pretty much set as a developer. Now, this is super, super high level, so what we’re going to do next is show you how you can apply this to each specific thing that you might build. We’re going to start with websites, and John is going to share some very specific advice about what to do, and what not to do, when you’re building a website for someone. All right, thanks Maria, that was a great introduction to search.

So, like you said, let’s start with websites. You can build and maintain one for yourself to showcase your own content, or maybe you’re doing that for other people, to let them create websites on their own. You might be thinking that showing up in search isn’t really your job as a developer, but like we mentioned before, as a developer you play a really big role in putting everything in place so that search can pick up the content properly.

So that’s really important for us. When it comes to websites, I think it’s worth taking a really big step back and looking at the absolute basics. For us, that’s a URL: essentially the address that’s used to address a single piece of content on the web. Perhaps surprisingly, URLs are the cause of, and solution to, a lot of web search problems. Traditionally, URLs on the web started out quite simple.

They’re requests sent to the server, and the server responds with unique HTML per URL. Fragments within the URL, everything after the hash sign, essentially just lead to a different part of the same page. JavaScript changed that a little bit, and suddenly a single URL could do a little bit more: show different kinds of content and provide extra functionality. To keep state, some JavaScript sites used fragments, since these were easy to set with JavaScript.

However, Google generally doesn’t support this, and as far as I know, no search engine supports addressing unique content with individual fragments. Nowadays we recommend using the JavaScript History API to use normal, traditional-looking URLs; in short, with URLs, stick to something more traditional. Another really important thing that comes into play with URLs is that often you have many different URLs that lead to the same content.

As a developer, that’s usually no big deal. You look at that and think, well, index.html, that’s obviously the homepage; every developer knows that. But for search engines, that’s not so obvious: it could be something completely different. Sometimes you also just tack ad-tracking parameters onto URLs, and all of these different URLs are, for search engines, separate pages that we could look at and say, well, there might be something different here. You can imagine that, at 20 billion times a day, that could lead to a lot of inefficient crawling. So we prefer to have a single URL per piece of content, and there are two ways that you can do that. The first is to consistently use the same URLs across your whole website.

So if you have internal navigation, link to the same pages; if you have a sitemap file, like Maria mentioned, use the same URLs there; if you use anything to guide people to your website, make sure you use the same URLs there, instead of having these different patterns that only lead to the same thing. And secondly, one element that you can also use is the rel="canonical" link element, which is something you can place in the head of a page that tells search engines, or Google:

if you look at this page, this is actually the URL that I prefer you look at; this is the one that I want you to index. Together, this makes it a little bit easier for search engines to pick the right URL. So we have URLs covered; what else is there? Let’s take a look at a typical search results page. On top we have the title, in this case the Google I/O schedule page, and then we have the URL right below it.

In this case it’s a breadcrumb URL; we’ll look at that briefly a bit later as well. And then you have the description. So these are three elements on a search results page that are immediately visible to everyone who is searching for something, and they come from your pages directly. As a developer, they’re really easy to place: when you look at an HTML page, they’re very visible and easy to find. We have the title on top, which is really easy to put in.

We have the canonical tag, the rel="canonical" link element, which is also really easy to place, and we have the meta description. While these elements don’t directly affect the ranking, the order that Maria talked about, they do affect how we show a page in the search results, and with that, they affect whether people actually come and visit your pages or not.
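Taken together, those three elements live in the page’s head; an illustrative example (the URL and text are made up):

```html
<head>
  <!-- Title: shown as the headline of the search result -->
  <title>Google I/O Schedule</title>
  <!-- Preferred URL for this piece of content -->
  <link rel="canonical" href="https://events.google.com/io/schedule">
  <!-- Description: the snippet text shown under the title -->
  <meta name="description" content="Browse the Google I/O session schedule.">
</head>
```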

So we’ve seen a few of the basic elements here: the metadata, the titles, URLs, and descriptions. What could you as a developer do to make that a little bit easier, especially if various people are using your website or your project to put content online? We recommend making it as easy as possible for them to do the right thing: not just for you as a developer to put titles, descriptions, and all of that into your pages, but also for those who are creating pages on your platform.

Here you see a user interface from Blogger with a really easy way to add a description to individual pages, and we feel that the easier it is for the people making your pages to actually put this content in, the more likely they’ll actually do it. When we looked at the search results, we saw that breadcrumb there as well, and a breadcrumb, for us, is something that you can provide on your pages to make it easier to understand where a page belongs within your website.

We call this a type of rich result, because it’s not just the pure text result, and there are different kinds of rich results that you can use. For example, you could add markup for articles, if you have articles on a page, or you could tell us about podcasts, which is really cool, because there’s a podcast player built into the search results.

So if you have a podcast, if you have a project that includes audio content, then suddenly that content is immediately available in the search results without anyone needing to install an extra app, which is really cool. And then finally recipes, of course, which we saw with Rakuten in the beginning. So how do you get all of these rich results? Well, Maria mentioned that briefly: essentially, it’s just a bunch of JSON-LD markup that you can add to the top of your pages.

That gives us a lot more information. This is something that you can just add to the pages, and it’s really easy to add. We have a bunch of different types of markup that you can use here, and there’s a code lab here at I/O as well on adding structured data markup. So if you’re curious about how to do that, definitely take a look at the code lab; I have a link here, and the code lab includes information on finding the right types of markup to add, how to add it, and how to test it.
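As a sketch, recipe markup in JSON-LD looks like this; the property names come from the schema.org Recipe type, while the values and URL are made up:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Party Coffee Cake",
  "image": "https://example.com/coffee-cake.jpg",
  "totalTime": "PT45M",
  "recipeIngredient": ["flour", "sugar", "coffee"]
}
</script>
```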

So that’s a great thing to check out. Another element when it comes to web pages, if you’re working on a general web page project, is speed. For us, speed is a ranking factor at Google, so it helps us determine which pages we should show, and in which order, in the search results. But generally, we’ve also found that speed makes a big difference even outside of search engines, and there are various tools to test speed.

We have a link here that gives you an overview of the different testing tools that we have. One of the tools is PageSpeed Insights, which I showed here; that gives you a great overview of what you could be testing, what you could be looking at, and what you could be improving. And then one other really important tool when it comes to search is Search Console, kind of what the name says. Within Search Console, you get a lot of information about this whole pipeline that Maria showed, everything from discovery to crawling to indexing and to serving, so how we show your pages in the search results.

You can find information about all of this in Search Console. Additionally, we’ll also alert you to critical issues as they arise, so we strongly recommend that everyone check this out if you’re making a public web presence, anything that you want to have indexed and searched, and it looks like a lot of you do. The first step when it comes to Search Console is to verify ownership; we don’t show the data in Search Console to just anyone.

You have to prove to us that this is actually your website. One thing that I find really important here: if you’re making a project for others online, make it as easy as possible for them to verify ownership, so make it possible for them to add any of these verification tokens, so that they don’t always have to go back to the development team and say, hey, I need this special file with this content put on a page.

So we talked about websites quite a bit, but web apps are another really important topic, which I imagine a lot of you have seen in different ways here at I/O already. For us, a web app is kind of like a normal website, but it provides a lot more interactive functionality: interaction, maybe logged-in view personalization. Maybe it has parts that don’t actually need to be indexed as well. For example, a travel business might have information about timetables and general pricing, but also detailed information about specific connection plans for individual connections, or personalized pricing. Or, in this case,

for Search Console, we have a lot of general informational pages, as well as a lot of content that’s unique and where you have to be logged in to actually gain access. For these types of sites, you have to balance between what you want to have indexed and what you don’t. And for web apps in general, I’d also take a look at the JavaScript session from earlier today. There are a few things that we’ve found that are kind of unique when it comes to web apps.

These generally don’t play such a big role on traditional websites, especially if you’re making normal HTML pages. The first one is how to actually find URLs on your site. We talked about URLs briefly; Maria mentioned how important they are for discovering pages, and within web apps we’ve seen that people sometimes don’t use traditional anchor tags to let us know about URLs. In particular, we love finding things like this:

an a tag with a link to a page that we control, which is really easy to find. It’s a lot trickier when you have something like a span that essentially just calls a JavaScript function with an onclick handler. When search engines look at that, they think: I don’t know, what do we need to do here? Does this show a dialog? Does it show a new page? Does this go somewhere? We don’t know, so we can’t crawl this kind of link.

So what you can do, if you want to have an onclick handler and handle things in JavaScript, is combine the two: you have your onclick handler and you have your href attribute to let us know about the other page that we can go off and crawl. Another extreme when it comes to web apps is that we often run into situations where we see tons of different URLs, which again makes it quite inefficient to actually crawl through.
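The combined pattern John describes, where the href keeps the link crawlable and the onclick handler keeps the in-app behavior, can be sketched as (the handler name and path are illustrative):

```html
<!-- Crawlable: search engines follow the href; the app intercepts the click. -->
<a href="/products/42" onclick="navigateInApp(event, '/products/42'); return false;">
  Product 42
</a>
```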

So there are different things that you can do here to let us know about this. The first is obviously to avoid linking to all of these different URLs in the first place: if they don’t provide unique functionality that needs to be indexed separately, maybe you can use other ways of linking to them than an a element. Another thing that you can do is, within Search Console, tell us about individual parameters within the URL that you don’t care about.

This is a really neat tool, but it’s also very strong functionality, in that if you set it up incorrectly, then of course we won’t go off and crawl all of those URLs, and if that’s content you care about, then suddenly we won’t be able to index it. So watch out for this, but it’s a great way of handling this kind of duplication within a website. Again, like we talked about before, a lot of web apps use JavaScript frameworks, and for JavaScript frameworks you have to watch out for some specific details as well, so that we can actually render, crawl, and index the content in an efficient way. For that,

I’d really refer back to the JavaScript session that we had this morning. A really quick way, if you just want a short check of whether or not your JavaScript site or web app works for search, is to use the Mobile-Friendly Test, shown here, which shows the mobile view as mobile Googlebot would see it.

This is really important for us, because we’re switching to mobile-first indexing, where Googlebot actually uses a mobile device for all pages rather than a desktop device. So it definitely makes sense to check this out, and we also have a bunch of best practices and general guidelines that apply more to web apps, which you can check out in the other session as well. So what do you do if you’re not just building one application or one website, but rather a whole platform? I don’t know, Maria.

Can you tell us more? I have some ideas, all right. So you could be building an individual site or a web app, for someone or for yourself, or you could be contributing to an entire content management system or another hosting platform. What I mean by this is any type of platform where other people can create their own online presence. It can come in different flavors: for example, it could be something like WordPress, which you can download and host on your own server; it could be a fully hosted system plus your own domain, like Squarespace; or it could be something where you just get a URL on the platform’s own domain and it’s hosted by them, like Tumblr.

So there are all these different flavors, and you could be working on a system like this which, in its own turn, has a bunch of users. What you do affects all of these people, and that is a lot of power and a lot of responsibility. So we’re going to talk about what you can do to make all these people successful in search by making some changes to the platform itself. And this is a really important topic for us right now, because more than 50%, and growing, of the web is currently built on various CMSs.

So more than half of the content on the web is affected by these systems, and if you’re working on one of them, or planning to do so in the future, it’s really great if you’re able to make those people successful in search as well, because that’s why they came to the web: they wanted to connect to others, maybe find some customers, and so forth. We’ve been thinking a lot about this, and we’ve built a set of APIs to help you integrate search functionality directly into the interface of those systems. I want to show you the APIs and how they’ve been integrated already,

maybe to give you some inspiration and some ideas about what you can do. As John was mentioning before, the first thing that we need in order to show any type of search information or search functionality is proof that you are indeed the owner of the site. He mentioned how this works for individual sites: you can have an HTML file, you can use a DNS entry, and so forth. But for those users, especially the less savvy CMS users, wouldn’t it be great if you could simplify it to one click? It is possible, with the verification API and three-legged OAuth.

We’ve built this API so that you can use it, and if the user authorizes you, you can verify their site, which is hosted on your platform, on their behalf. They just need to click one button, and they immediately have access to all the search information, so the experience for them is really smooth. You can do this for a thousand users, or for two million users, or whatever it is, and then immediately they get access to all kinds of interesting stats.

Which brings me to the next API, the Search Console API, which provides access to aggregated stats per site. You can see things like clicks, impressions, and crawl errors, you can submit a sitemap through there, and you can slice and dice this in many different ways: for example, per country, per time period, or per device. You can build very interesting interfaces with that.

The slide shows an example request that pulls the top 10 queries by clicks for a specific period of time; as a result, you would get a table with the query, clicks, impressions, position, and so forth. Now, a table in itself might be informative, but it’s not really exciting, so let me show you some ways in which existing CMSs have actually integrated this. We’ve been working with Wix, and they created this achievements sidebar for their users. They’re using the search analytics data to give these little badges every time
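A request like the one on the slide can be sketched as follows. The endpoint and body fields match the searchAnalytics.query method of the Search Console (Webmasters v3) API, which sorts rows by clicks in descending order by default; the helper name and the dates are illustrative, and the authorized HTTP call itself is omitted.

```python
import json

# POST (with an OAuth token) to:
# https://www.googleapis.com/webmasters/v3/sites/{siteUrl}/searchAnalytics/query
def top_queries_body(start_date, end_date, row_limit=10):
    """Body for searchAnalytics.query: top queries for a date range.
    Rows come back sorted by clicks, so rowLimit=10 gives the top 10."""
    return {
        "startDate": start_date,   # YYYY-MM-DD
        "endDate": end_date,
        "dimensions": ["query"],   # could also be country, device, page...
        "rowLimit": row_limit,
    }

print(json.dumps(top_queries_body("2019-04-01", "2019-04-30"), indent=2))
```

Each row in the response carries clicks, impressions, CTR, and average position for one query, which is exactly the table described above.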

something happens that they think the user will be happy to hear about. Their users are super excited about this gamified approach, and they’re constantly looking there to see: okay, what did I get now? So here are clicks and impressions built into an achievements sidebar like this. We’ve also been working with Squarespace, and actually just this Monday they announced this new report that they integrated into the interface of their own CMS.

What you see here is the one-click verification: when the user clicks “Connect to Google”, in the backend their site gets verified, and then this report gets populated with information from Search Console. Here the user can see clicks, impressions, and the time series over the last month. Squarespace has a bunch of other analytics reports inside their CMS, so people can compare and build the full picture of how they’re doing in search. At this point they don’t even know that Search Console exists, but they have everything that they need to know how they’re doing, and to accomplish the right tasks, right there in their Squarespace dashboard.

So we’re pretty excited about this kind of functionality, we want to build on it, and we would look forward to working with other CMSs if you’re representing one and you’re interested in this. Another thing that we really wanted to help users with is getting their content into the search results as fast as possible. So we’ve been looking into ways to use the Indexing API that we have in order to get content submitted super quickly, and then also be able to share the indexing decisions: what did our search systems think about the URL, and what do they want to do with it? We worked on this for a few months, and at this point it’s in a place where this can happen within seconds.

So again with Wix, we built a pretty cool integration where, when a user submits a page and it meets certain quality criteria, they can click a button within the Wix interface, the page gets submitted through the Indexing API, and then they immediately get a response telling them whether their page got into the search results or not. For the Wix users this is a pretty cool experience, because they can see their page in the search results immediately after they’ve created it. There’s no waiting, there’s no wondering “am I on search or not”: within seconds they’re on Google. We’re interested in working with other CMSs, and with any platform which lets users create their own presence online, especially if your users are less savvy and don’t really know what to do with search.
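A submission like the one Wix triggers comes down to a single call to the Indexing API. The endpoint and notification types below are the documented ones (note that access to this API is restricted, which is part of why this runs through a partnership); the helper name is illustrative, and authentication is omitted.

```python
# POST https://indexing.googleapis.com/v3/urlNotifications:publish
def url_notification(url, deleted=False):
    """Body telling the Indexing API that a URL was added or updated
    (URL_UPDATED), or that it was removed (URL_DELETED)."""
    return {
        "url": url,
        "type": "URL_DELETED" if deleted else "URL_UPDATED",
    }
```

The response to the publish call is what lets the CMS tell the user, within seconds, what the indexing system decided to do with the URL.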

We are really interested in talking to you, to see if it might be a good fit to participate in the CMS Partnership Program. There’s a link on the slide which will take you to a form, and there you can tell us a little bit more about who you represent and how you would like to work with us. Looking forward to hearing from some of you. Now, you could be contributing not just to the core product of the CMS, but to a bunch of other things which people install in order to enhance the functionality of their site.

One of those things is plugins. A plugin here is defined as any kind of add-on that people would add to their site: for example, a shopping cart, a way to add reviews, a comment plugin, things like that. While a plugin can enhance the functionality of the site, it can also significantly alter the site in terms of performance and other factors. So I wanted to give you a few tips on what to do if you’re building plugins.

First of all, make sure that it doesn’t slow down the performance of the site. In order to do this, have a test site, install the plugin, and then use our speed tools to make sure that the site with the plugin is doing just as well as the site without the plugin. This is webpagetest.org, one of the performance tools that we have, and the neat part about it is that it will give you a super detailed breakdown of what loaded and when, so you can see how your plugin is affecting the performance of the site.

So test that out. Then, if you’re building a comment plugin, and if you’ve been on the Internet in general, you will know that there are a lot of comments out there which are maybe a little bit less valuable than other comments. In some cases they’re altogether spammy, or there are bots going around posting auto-generated stuff in order to create links that they’re hoping search engines will follow to some spammy websites.

This is not pleasant for any user, and if you’re building a plugin like this, you can actually help out a little bit by adding a specific type of annotation to those links by default, so that search engines know not to trust them. This is a link attribute that we call nofollow, and what it does is tell the search engines: don’t follow this link, don’t trust it. So if you’re building a comment plugin, definitely consider adding this to the links in the comments.
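In the rendered HTML this is just a `rel="nofollow"` attribute on the anchor tag. As a sketch, a comment plugin could post-process user-submitted HTML like this; a real implementation would use a proper HTML parser rather than a regex, and the function name here is our own.

```python
import re

def nofollow_links(comment_html):
    """Add rel="nofollow" to every <a> tag that doesn't already have
    a rel attribute, so search engines won't trust comment links."""
    def add_rel(match):
        tag = match.group(0)
        if "rel=" in tag:
            return tag            # leave an existing rel attribute alone
        return tag[:-1] + ' rel="nofollow">'
    return re.sub(r"<a\b[^>]*>", add_rel, comment_html)

print(nofollow_links('Nice post! <a href="https://spam.example">cheap stuff</a>'))
# → Nice post! <a href="https://spam.example" rel="nofollow">cheap stuff</a>
```

Doing this by default means every link a commenter drops is discounted by search engines, which removes the incentive for link-spamming bots in the first place.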

Finally, and most unfortunately, we’ve noticed that one of the main vectors for attacks on websites (and attacks on websites are increasing) is hacks through plugins. A lot of hackers and other malicious people will get access to a site through an old plugin, and if you’re building plugins, there are a few things you can do to make sure that your users are not affected by this. First of all, make sure that every time you release an update, everybody who has the plugin automatically receives it. Then make sure to follow coding best practices so that there are no backdoors that the hackers can exploit.

And finally, if you get tired of a plugin and decide not to support it anymore, make sure it’s clear to people that it is no longer supported, so they don’t go ahead and install something that is actually making their site more vulnerable.

Themes are another thing that is very closely related to CMSs, and a lot of people install them in order to improve the appearance of their site or give it a specific look and feel, so they can change how the site looks and how users perceive it.

But themes can also really affect performance, and they can also affect mobile friendliness. So again, here, test your theme and make sure that it’s responsive. You can do this with the mobile-friendly test that John was showing earlier. For performance specifically, we recommend again having a test site and looking at how it performs with one theme and with another. Lighthouse is one of our speed tools which is really useful in this case, because it runs in the browser: you don’t need to have the site processed by Search in order to test it. So here’s a Blogger site that we use for the purposes of this example.

We install the theme and then we use Lighthouse to do the performance testing. You can see how long it takes until what they call the first meaningful paint, which is when the main elements appear to the user. The overall score for this theme was not super great, and the specific user metrics were not great either. But then we went ahead and switched to another theme, and you can see here it’s much, much faster to load, the user can interact with it much faster as well, and consequently
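The same check can be scripted: run the Lighthouse CLI against the site with one theme, then the other, and pull the performance score and first meaningful paint out of its JSON report. The report field paths below match recent Lighthouse report formats but should be treated as assumptions to verify against your Lighthouse version; the function names and file path are ours.

```python
import json
import subprocess

def theme_metrics(report):
    """Extract the overall performance score (0-1) and the
    first-meaningful-paint timing from a Lighthouse JSON report."""
    score = report["categories"]["performance"]["score"]
    fmp = report["audits"]["first-meaningful-paint"]["displayValue"]
    return score, fmp

def audit_theme(url, out_path="report.json"):
    """Run the Lighthouse CLI (npm install -g lighthouse) on a URL
    and return the metrics for the currently active theme."""
    subprocess.run(["lighthouse", url, "--output=json",
                    "--output-path=" + out_path], check=True)
    with open(out_path) as f:
        return theme_metrics(json.load(f))
```

Running `audit_theme` once per theme on the test site gives directly comparable numbers, which is exactly the before/after comparison shown on the slides.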

the score is also great. So if you’re building themes, definitely make sure they’re performant and also responsive, using the free tools that we provide. There’s a lot of stuff that we covered today, and hopefully, for any of these things that you might be building, you now have enough tools and equipment to go ahead and make improvements and make your users happy and more successful in search.

We know that there are many, many different details and links that we provided, so if you have to remember just four things, pay attention to what John is going to tell you right now. All right, thanks. Wow, that was a lot. Okay, looking back at these things, there are a few common elements that we covered that came up again and again. First: remember the basics. URLs, titles, and descriptions do matter.

They play a big role when it comes to search, and in how people come to your site through search. Secondly, remember to take advantage of structured data, like the Rakuten example from the beginning: they saw a big change in traffic from search, even without ranking changes, just by making their search results look a lot more visually appealing. And then take advantage of all of the tools and APIs that we have available.

Use Search Console, understand how it works, and use the Search Console APIs to make things better for your users, the people who are using your products to create fantastic web presences. And finally, especially if you’re making something for other people to create web presences with, make it as easy as possible for them to do the right thing: make it easy for them to fill in the right fields, to add data about titles and descriptions on pages, and to create high-performance web pages.

These are only some general tips, I think, to get started with. Obviously, a lot of different aspects come into play with search, but we think these aspects are really critical to start with, and we have a lot more information in our developer center and developer guides. We have a Search Console Help Center with more information about search in general and about Search Console specifically.

If you have any more questions, we will be in the web and payments sandbox area later today, so feel free to come by there as well. And finally, there are of course other ways to reach out to us online: you can find us on Twitter, we do live office-hours hangouts on YouTube that you can join, and we’re available in the Webmaster Help Forum if you have any questions. So don’t let questions stick around; make sure you get answers to them from us.

We hope you found this introduction to search interesting. Thank you all for coming. We wish you and your projects more success online through Google Search. Thank you.

