
Next-Generation 3D Graphics on the Web (Google I/O ’19)

Okay, let's start. My name is Ricardo, and together with Corentin I will be talking about the future of 3D graphics on the web. But before we do that, let's have a quick look at the past and the present. WebGL landed in browsers in February 2011.

Chrome 9 and Firefox 4 were the first browsers to implement it. Back then, with the Google Creative Lab, we created an interactive music video that aimed to showcase the new powers the technology was bringing to the web. It was a pretty big project: between creative directors, concept artists and animators, around 100 people worked on it for half a year, and ten of us were JavaScript graphics developers.

We knew the workflow and tools were very different compared to traditional web development, so we also made the project open source, so others could use it as a reference. Some years later, Internet Explorer, Edge and Safari implemented WebGL too, which means that today the same experience works in all major browsers, on desktops, tablets and phones. What I find most remarkable is the fact that we didn't have to modify the code for that to happen.

Anyone with experience doing graphics programming knows that this is rarely the case. Usually you have to recompile the project every couple of years when operating systems update or new devices appear. As a quick recap: WebGL is a JavaScript API that provides bindings to OpenGL. It allows web developers to utilize the user's graphics card in order to create efficient and performant graphics on the web.

It is a low-level API, which means that it is very powerful, but also very verbose. For example, a graphics card's main primitive is the triangle; everything is done with triangles. Here's the code that we need to write in order to display just one triangle. First, we need to create a canvas element and get the WebGL context for that canvas. Then things get pretty complicated pretty fast: after defining positions for each vertex, we have to add them to a buffer and

send it to the GPU, then compile and link the vertex and fragment shaders into a program that tells the graphics card how to fill those pixels. Those libraries take care of placing objects in 3D space, material configuration, loading 2D and 3D assets, interaction, sounds and so on — anything you need for building any sort of game or application. That's why a bunch of us, back then, started creating libraries and frameworks that abstract all that complexity, so developers (and ourselves) could stay productive and focused.
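
In code, that boilerplate looks roughly like this — a minimal sketch, not the exact code from the slide; the shader sources and attribute names are my own illustrative choices:

```javascript
// Drawing a single triangle with raw WebGL, as described above.
const canvas = document.createElement('canvas');
document.body.appendChild(canvas);
const gl = canvas.getContext('webgl');

// Vertex positions for one triangle, uploaded to a GPU buffer.
const positions = new Float32Array([0, 0.5, -0.5, -0.5, 0.5, -0.5]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

// Compile the vertex and fragment shaders and link them into a program.
function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}
const vertexShader = compile(gl.VERTEX_SHADER, `
  attribute vec2 position;
  void main() { gl_Position = vec4(position, 0.0, 1.0); }
`);
const fragmentShader = compile(gl.FRAGMENT_SHADER, `
  void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }
`);
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);

// Tell the GPU how to read positions out of the buffer, then draw.
const positionLocation = gl.getAttribLocation(program, 'position');
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 3);
```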

Designing those libraries takes time, but over the years people have been making amazing projects with them. So let's have a look at what people are doing today. People are still doing interactive music videos, which is good. In fact, this example, Track by Little Workshop, not only works on desktop and mobile, it also works on VR devices, letting you look around while travelling through glowing tunnels. Another clear use of the technology is gaming.

Home is a beautiful game developed by a surprisingly small team and was released in last year's Christmas Experiments. Another one is web experiences: in this case, Oat the Goat is an interactive animated storybook designed to teach children about bullying. The folks at Assembly used Maya to model, rig and animate the characters, then exported them to glTF via Blender; for rendering they used three.js, and they wrote around 13,000 lines of TypeScript to make the whole thing work. Yet another very common use is product configurators.

The folks at Little Workshop, again, show how good this can look in this demo. But those cases are not all that people are doing: there are data visualizations, enhanced newspaper articles, virtual tours, documentaries, movie promotions and more. You can check the three.js website or the Babylon.js site to see more of those examples.

Alright — we don't want to end up in a world where the only HTML elements in your page are a canvas tag and a script tag. Instead, we must find ways of combining WebGL and HTML. The good news is that lately we have been seeing more and more projects and examples of web designers utilizing bits of WebGL to enhance their HTML pages. Here's a site that welcomes the user with a beautiful immersive image; we're able to interact with the 3D scene by moving the mouse around the image.

But after scrolling the page, we reach a traditional static layout with all the information about the product, the way traditional websites usually look. The personal portfolio of Bertrand Candice shows a set of DOM elements affecting the dynamic background: with JavaScript we can figure out the position of those DOM elements and then use that information to affect the physics simulation that happens in the 3D scene in the background.

For underpowered devices, we can just replace that WebGL scene with a static image and the website is still functional. Another interesting trend we have been seeing is websites that use distortion effects. The website for the Japanese director Terajima has a very impressive use of them. However, the content is actually plain, selectable HTML, which is surprising because, as you know, we cannot do these kinds of effects with CSS.

So if we look at it again, what I believe they are doing is copying the pixels of the DOM elements into the background WebGL canvas. Then they hide the DOM elements and apply the distortion; when the transition finishes, they put the next DOM content on top. So it's still something that you can enable or disable depending on the device — it also works on mobile — something that you can progressively enhance. One more example: this site applies a distortion effect on top of the HTML, basically making the layout truly fluid. Again, this is surprising because it would not be possible with CSS.

So I think those are all great examples of the kind of results you can get by mixing HTML and WebGL, but it still requires the developer to dive into JavaScript and, as we know, it can be a little bit tedious to connect all the parts. If you're more used to React, a new library by Paul Henschel can be a great option for you. react-three-fiber mixes React concepts on top of the three.js abstraction. Here's the code for that animation: notice how the creators define Effect and Content components that are easily composed into the Canvas, which makes everything much more reusable and easier to maintain.
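
To give a flavour of that composition style, here is a hedged sketch using the @react-three/fiber package (the current name of react-three-fiber); the Box component and rotation are my own illustration, not the code from the slide:

```javascript
import React, { useRef } from 'react';
import { createRoot } from 'react-dom/client';
import { Canvas, useFrame } from '@react-three/fiber';

function Box() {
  const mesh = useRef();
  // useFrame runs on every rendered frame, like a three.js render loop.
  useFrame(() => { mesh.current.rotation.y += 0.01; });
  return (
    <mesh ref={mesh}>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color="orange" />
    </mesh>
  );
}

createRoot(document.getElementById('root')).render(
  <Canvas>
    <ambientLight />
    <Box />
  </Canvas>
);
```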

However, I think we can make it even simpler: enter Web Components. I believe Web Components will allow us to finally bring all the power of WebGL right into the HTML layer. We can now encapsulate all those effects in composable custom elements and hide all the code complexity. For example, here's another project that we did for the WebGL launch eight years ago. It was a kind of globe platform.

It was a project that allowed JavaScript developers to visualize different data sets on top of a globe. You would have the library, you would have your data, and then you would iterate, choosing which parts of the data to display. But even if we tried to hide the WebGL code, developers still had to write custom JavaScript for loading the data, configuring the globe and appending it to the DOM. The worst part was that developers still had to handle the positioning and resizing of the DOM object, which made it difficult to mix with a normal HTML page. Today, with Web Components, we can simplify all that code.

We reduce it to these two lines: the developer only has to include the JavaScript library on the website, and a powerful custom element is now available to place wherever they need in the DOM. Not only that: just by duplicating one line you can have multiple globes, whereas before you would have had to duplicate all the code, which would again mean more code to read and parse.
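
As a hedged illustration of that two-line idea — the globe element shown in the talk wasn't published as-is, so the tag name, script URL and attribute below are hypothetical:

```html
<script type="module" src="globe-viewer.js"></script>
<globe-viewer data-src="population.json"></globe-viewer>
```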

That globe component is not quite ready to use yet, but this next one, <model-viewer>, is ready. The problem it addresses is that displaying 3D models on the web is still pretty hard, so we really wanted to make it as simple as embedding an image in your page — as simple as adding an image tag. That's the main goal for this one. Again, the developer only has to include a JavaScript library, and then a powerful custom element is ready to display any 3D model; we're using the glTF open standard.

An important feature of HTML tags is accessibility for low-vision and blind users. We're trying to inform them both about what the 3D model is and about the orientation of the model. Here you can see that the view angle is being communicated verbally to the user, so they can stay oriented with what's going on; it also prompts them on how to control the model with the keyboard, and gives an easy exit back to the rest of the page. <model-viewer> also supports AR — augmented reality — and here you can see how it's already being used on the NASA website.

By adding the ar attribute, the element shows an icon and is able to launch the AR viewer on both Android and iOS (for iOS you have to include a USDZ file). Lastly, while building the component we realized that, depending on the device, you can only have up to 8 WebGL contexts at once — if you create a new one, the first one disappears. It is actually a well-known limitation of WebGL, but it's also good practice, since having only one context keeps memory in one place.
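
For reference, the usage described above looks roughly like this in markup — a hedged sketch; the script URL and model files are placeholders, not the NASA assets from the talk:

```html
<script type="module"
        src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<model-viewer src="astronaut.glb"
              ios-src="astronaut.usdz"
              alt="A 3D model of an astronaut"
              ar
              camera-controls></model-viewer>
```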

The best solution we found for this was creating a single off-screen, hidden WebGL context, and then using that one to render all the <model-viewer> elements on the page. We also utilize IntersectionObserver to make sure that we are not rendering objects that are not in view, and ResizeObserver too, to detect whenever the developer modifies the size we render at. But we all know how the web is: sooner or later someone will want to display hundreds of those components at once, and that is great.
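
A simplified sketch of that visibility and resize tracking — the real <model-viewer> implementation is more involved, and the renderModelInto helper here is hypothetical:

```javascript
const visible = new Set();

const intersectionObserver = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    // Only elements currently in view get rendered each frame.
    if (entry.isIntersecting) visible.add(entry.target);
    else visible.delete(entry.target);
  }
});

const resizeObserver = new ResizeObserver((entries) => {
  for (const entry of entries) {
    // Re-render at the new size when the developer resizes the element.
    renderModelInto(entry.target); // hypothetical render helper
  }
});

document.querySelectorAll('model-viewer').forEach((el) => {
  intersectionObserver.observe(el);
  resizeObserver.observe(el);
});
```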

We want to allow for that, but for that we'll need to make sure that the underlying API is as efficient as possible. So now Corentin is going to share with us what's coming up in the future. Thank you. Okay, thank you, Ricardo. This was an amazing display of what's possible on the web using GPUs today. Now I'll give a sneak peek of what's coming next, where you'll be able to extract even more computational power from GPUs on the web. Hey everyone, I'm Corentin Wallez, and for the last two years at Google I've been working on an emerging web standard called WebGPU, in collaboration with all the major browsers at the W3C. WebGPU is a new API that's the successor to WebGL, and it will unlock the potential of GPUs on the web.

Now you'll be asking: "Corentin, we already have WebGL, so why are you making a new API?" The high-level reason is that WebGL is based on an understanding of GPUs as they were 12 years ago, and in 12 years GPU hardware has evolved — but also the way we use GPU hardware has evolved. There is a new generation of native GPU APIs, for example Vulkan, that help do more with GPUs, and WebGPU is built to close the gap with what's possible in native today. So it will improve what's possible on the web for game developers, but not only them: it will also improve what you can do in visualization, in heavy design applications, for machine learning practitioners and much more. For the rest of the session, I'll be going through specific advantages — things that WebGPU improves over WebGL — and show how it will help build better experiences.

First, WebGPU is still a low-level and verbose API, so that you can tailor usage of the GPU to exactly what your application needs. This is the triangle Ricardo just showed; as a reminder, that was the code to render the triangle in WebGL, and this is the minimum WebGPU code to render the same triangle. As we can see, the complexity is similar to WebGL, but you don't need to worry about it, because if you're using a framework like three.js or Babylon.js, you'll get the benefits transparently for free when the framework updates to support WebGPU.
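
For orientation, a hedged sketch of getting hold of a WebGPU device with today's API (the API has changed considerably since this 2019 talk, so this is illustrative, not the slide's code; rendering the triangle itself still needs a shader module, a render pipeline and a command encoder):

```javascript
// Inside an async function or a JS module.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const canvas = document.querySelector('canvas');
const context = canvas.getContext('webgpu');
context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
});
```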

The first limitation that WebGL frameworks run into is the number of elements or objects they can draw per frame, because each drawing command has a fixed cost and needs to be issued individually, each frame. With WebGL, an optimized application can do a maximum of maybe a thousand objects per frame, and that's already pushing it — if you want to target a variety of mobile and desktop devices, you might need to go even lower than this.

This is a photo of a living room — it's not rendered, it's an actual photo — and the idea is that it's super stylish, but it feels empty and cold; nobody lives there. This is sometimes what it feels like looking at WebGL experiences, because they lack complexity. In comparison, game developers in native or on consoles are used to, I don't know, maybe 10,000 objects per frame if they need to, so they can build richer, more complex, more lifelike experiences, and this is a huge difference. Even with the limitation in the number of objects, WebGL developers have been able to build incredible things, so imagine what they could do if they could render that many objects.

Babylon.js is another very popular 3D JavaScript framework, and just last month, when they heard we were starting to implement WebGPU, they were like, "Hey, can we get some WebGPU now?" And we were like, "No, it's not ready, it's not in Chrome — but here's a custom build." The demo I'm going to show is what they came back to us with just two days ago. Can we switch to the demo, please? All right, so this is a complex scene rendered with WebGL, and it tries to replicate what a more complete game would do if every object was drawn independently and a bit differently. It doesn't look like it, but all the trees and rocks and so on are independent objects and could all be different objects. In the top-right corner there are the performance numbers, and we can see what happens as we zoom out and see more objects.

The performance starts dropping heavily, and that's because of the relatively high fixed cost of sending the command to draw each object. The bottleneck here is not the power of the GPU on this machine or anything like that — it's just JavaScript iterating through every object and sending the commands. Now let's look at an initial version of the same demo in WebGPU — and keep in mind, this was done in just two weeks.

As the scene zooms out, we can see that the performance stays exactly the same even if there are more objects to draw, and what's more, we can see that the CPU time spent in JavaScript is basically nothing. So we are able to use more of the GPU's power because we're not bottlenecked on JavaScript, and we also have more time on the CPU to run our application's logic.

Let's go back to the slides. What we have seen is that, for this specific and early demo, WebGPU is able to submit three times more drawing commands than WebGL and leaves more room for your application's logic. A major new version of Babylon.js, Babylon.js 4.0, was released just last week, and today the Babylon.js developers are so excited about WebGPU that they are going to implement full support for the initial version of WebGPU in the next version, Babylon.js 4.1.

But WebGPU is not just about drawing more complex scenes with more objects. A common operation done on GPUs is post-processing image filters, for example depth-of-field simulation. We see this all the time in cinema and photography — for example, in this photo of a fish we can see the fish is in focus while the background is out of focus, and this is really important because it gives the feeling that the fish is lost in a giant environment.

This type of effect is important in all kinds of rendering so we can get a better cinematic experience, but it's also used in other places like camera applications. Of course, this is one type of post-processing filter, but there are many others — color grading, image sharpening and a bunch more — and all of them can be accelerated using the GPU. So, for example, the image on the left could be the background behind the fish before we apply the depth of field, and on the right we see the resulting color of a pixel.

What's interesting is that the color of the output pixel depends only on a small neighborhood of the pixel in the original image. So imagine the grid on the left is a neighborhood of original pixels; we're going to number them in 2D, and the resulting color will essentially be a weighted average of all these pixels.

Another way to look at it is that on top we have the output image, and the color of each output pixel depends only on a 5×5 stencil of the input image on the bottom. The killer feature of WebGPU, in my mind, is what we call GPU compute, and one use case of GPU compute is to speed up local image filters like the one we just saw.
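
To make the idea concrete, here is a plain-JavaScript sketch of that 5×5 weighted-average (convolution) filter for one pixel — the GPU versions discussed below parallelize exactly this loop; the weights and flat grayscale image layout are illustrative:

```javascript
function filterPixel(input, width, height, x, y, weights) {
  let sum = 0;
  for (let dy = -2; dy <= 2; dy++) {
    for (let dx = -2; dx <= 2; dx++) {
      // Clamp reads at the image border.
      const sx = Math.min(width - 1, Math.max(0, x + dx));
      const sy = Math.min(height - 1, Math.max(0, y + dy));
      sum += input[sy * width + sx] * weights[(dy + 2) * 5 + (dx + 2)];
    }
  }
  return sum; // weighted average of the 5x5 neighborhood
}
```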

This is going to be pretty far from DOM manipulation or React or amazing web features like CORS headers, so please bear with me. We're going to go through it in three steps: first we'll look at how GPUs are architected and how an image filter in WebGL uses that architecture, and then we'll see how WebGPU takes better advantage of the architecture to do the same image filter, but faster. So let's look at how a GPU works — and I have one here. This is a package you can buy in stores.

Can you see it? Oh yes. So this is a package you can buy in stores, with a huge heatsink, but if we look inside there's this small chip here, and this is the actual GPU. If we go back to the slides, this is what we call a die shot, which is a transistor-level picture of the GPU. We see a bunch of repeating patterns in it, which we're going to call execution units. These execution units are a bit like cores in CPUs, in that they can run in parallel and process different workloads independently.

If we zoom in even more on one of these execution units, this is what we see. In the middle we have a control unit, which is responsible for choosing the next instruction — for example, add two registers, or load something from main memory — and once it has chosen an instruction, it sends it to all the ALUs. The ALUs are the arithmetic and logic units, and when they receive an instruction, they perform it.

For example, if they need to add two registers, they will each look at their respective registers and add them together. What's important to see is that a single instruction from the control unit is executed at the same time by all the ALUs, just on different data, because they all have their own registers. This is single-instruction, multiple-data (SIMD) processing. This is the part of the execution unit that is accessible from WebGL, and what we see is that it's not possible for ALUs to talk to one another.

They have no way to communicate. But in practice, GPUs look more like this today: there is a shared memory region in each of the execution units where ALUs can share data with one another. It's a bit like a memory cache in that it's much cheaper to access than the main GPU memory, but you can program it explicitly and have the ALUs share data there. A big benefit of GPU compute is that it gives developers access to that shared memory region.

That was the architecture of GPUs and their execution units. Now we're going to look at how the image filter in WebGL maps to that architecture. As a reminder, this is the algorithm we're going to look at. In our example, since our execution unit has 16 ALUs, we're going to compute a 4-by-4 block — 16 pixels of the output — in parallel, and each ALU will take care of computing the value for one output pixel. This is GPU pseudocode for the filter in WebGL, and essentially it's just a 2D loop on X and Y that fetches from the input and computes the weighted average of the input pixels.

What's interesting here is that the coordinates argument to the function is a bit special, because it's pre-populated for each of the ALUs, and that's what makes each ALU execute on different data: they start out populated with different data. This is a table for the execution of the program, and likewise we can see the coordinates are pre-populated.

Each column is the registers for one of the ALUs, and we have 16 of them for the 16 ALUs. The first thing that happens is that the control unit says, "Hey, initialize sum to 0," so all of them initialize the sum to 0. Then we get to the first iteration of the loop in X, and each ALU gets its own value for x. Likewise, each ALU gets its own value for y, and now we get to the line that does the memory load of a value of the input.

Each ALU has a different value of x and y in its registers, so each of them will be doing a memory load from a different location of the input. Let's look at this particular ALU: it's going to do a memory load at position (-2, -1) — we're going to come back to this one. Then we do another iteration of the loop in Y; likewise, we update the y register and we do a memory load.

What's interesting here is that the first ALU will again do a memory load at (-2, -1). That's a redundant load, because we already did it in the last iteration. Anyway, the loop keeps on looping, there's more loading and summing, and in the end we get to the return: the sum gets written to the output pixel and the computation for a 4-by-4 block is finished.

Overall, the execution of the algorithm in WebGL for a 4-by-4 block did 400 memory loads. The reason is that we have 16 pixels and each of them did 25 loads. That was how the filter executes in WebGL; now we're going to look at how WebGPU uses shared memory to make it more efficient. We take the same shader, the same program as before — it's the exact same code — and we're going to optimize it with shared memory.

We introduce a cache that is going to contain all the pixels of the input that we need to do the computation. This cache is going to be in shared memory, so it's cheaper to access than the actual input — it's like a global variable that lives inside the execution unit. Of course, we need to modify the shader to use that input tile, and the input tile needs to contain values before the computation starts.

So we can't just jump straight into the computation. This function becomes a helper function that computes the value of one pixel, and we have a real main function that first populates the cache and then calls the computation. Like the previous version of the shader, the coordinates are pre-populated, so each of the ALUs does a different execution, and then all the ALUs work together to populate the cache — there are a bunch of loops and whatnot there, but it's not really important.

What's interesting to see is that only 64 pixels of the input are loaded and put in the cache; there are no redundant memory loads. Then we go through the main computation of the value, and this is very similar to what happened before, but on this line the memory load is now from shared memory instead of main memory — and this is cheaper.

Overall, thanks to the caching of a tile of the input, the WebGPU version didn't do any redundant main memory loads: for a 4-by-4 block it did 64 memory loads, while, as we saw before, WebGL had to do 400. This looks very biased in favor of WebGPU, but in practice things are a bit more mixed, because the WebGPU version avoided main memory loads but did a bunch of shared memory loads, which are still not free; and WebGL is a bit more efficient than this, because GPUs have a memory cache hierarchy, so some of those memory loads will have hit the cache inside the execution unit.

But the point is that, overall, WebGPU will be more efficient because we are able to cache input data explicitly. The code we just talked about is called image filtering in the graphics world, but in the machine learning world it's called a convolution, or a convolution operator, and all the optimizations we talked about also apply to convolutional neural networks, also known as CNNs. The basic ideas for CNNs were introduced in the late '80s, but back then it was just too expensive to train and run the models to produce the results we have today.

The ML boom of the last decade became possible because CNNs and other types of models could run efficiently on GPUs, in part thanks to the optimization we just saw. So we are confident that machine learning web frameworks such as TensorFlow.js will be able to take advantage of WebGPU to significantly improve the speed of their algorithms. Finally, some algorithms can be really difficult to write on GPUs in WebGL, and sometimes they are just not possible to write at all.

The problem is that in WebGL, where the output of a computation can go is really constrained. On the other hand, the GPU compute that WebGPU has is much more flexible, because each ALU can read and write at any place in GPU memory. This unlocks a whole new class of GPU algorithms, from physics and particle-based fluid simulation, like we see here, to parallel sorting on the GPU, mesh skinning, and many more algorithms that can be offloaded from JavaScript to the GPU.

To summarize, the key benefits of WebGPU are: you can have increased complexity for better and more engaging experiences, as we have seen with Babylon.js; it provides performance improvements for scientific computing, like machine learning; and it unlocks a whole new class of algorithms that you can offload from JavaScript CPU time to run on the GPU in parallel. So now you're like, "Hey, I want to try this API." You're

in luck. WebGPU is a group effort and everyone is on board: Chrome, Firefox, Edge, Safari — they're all starting to implement the API. Today we're making an initial version of WebGPU available in Chrome Canary on macOS, and other operating systems will follow shortly. To try it, you just need to download Chrome Canary on macOS and enable the experimental flag "Unsafe WebGPU" — and again, this is an unsafe flag.

So please don't browse the internet with it on for your daily browsing. More information about WebGPU is available on webgpu.io: there's the status of implementations, links to some samples and demos, and a link to a forum where you can discuss WebGPU, and we're going to add more stuff to this, with articles to get started and all that. What we'd love is for you to try the API and give us feedback on what the pain points are, what you'd like it to do for you, but also what's going great and what you like about it. So thank you, everyone, for coming to this session. Ricardo and I will be at the Web sandbox for the next hour or so

if you want to discuss more. Thank you.


 


AWS Serverless Web App Tutorial

We will be covering a new concept in cloud computing, namely the serverless architecture. What does this mean? Well, essentially, serverless computing allows you to build and run applications and services without the need to manage the actual server. It also provides flexible scaling under heavy load and automated high availability, making it a very versatile tool.

The AWS serverless platform has many capabilities, as you can see, but for the purpose of this tutorial we will be showcasing the creation of a serverless web app from scratch. The structure of this project can be easily understood by following this diagram. All of the front-end code will reside in S3 storage, which allows us to host the website publicly. Therefore, we start by creating an empty S3 bucket, like so: give it a globally unique name and click Create. Now, to upload the static website's content into the bucket,

I simply click Upload and drag and drop the build folder in here, like so. Note that I am using React as a framework, so it might be a bit different for you. The next step is making the bucket public, and for that we go into Permissions and Bucket Policy and paste a bucket policy that allows public reads. Click Save, and do not forget to change the bucket name. Now this bucket has gained public access.
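
For reference, a standard public-read bucket policy of the kind described looks roughly like this — it's the generic AWS example, not necessarily the exact policy used in the video; substitute your own bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}
```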

As far as configuration is concerned, the only thing remaining is enabling static website hosting in S3, which will allow the objects to be available at the website endpoint of the bucket. Click Properties and select the Static Website Hosting card to do so, check "Use this bucket to host a website" and input an index document — for us it will be index.html. Furthermore, you can set up a custom domain for your S3 bucket by following the tutorial linked in the description.

Now you have the endpoint of the website, and if you click it, you will see that the static website has come online. Going back to the diagram, there are a couple of things left to do. We should set up an Amazon Cognito user pool, which enables us to register, log in and authenticate users securely. Then we will write the backbone of the serverless project: the various AWS Lambda functions, which replace the server as they access and manipulate an Amazon DynamoDB NoSQL database and respond to easy-to-use RESTful API calls.

But first, the user pool. Head to Amazon Cognito and click Manage User Pools. Here you can create one or several user pools based on your application — for instance, one for regular users and one for administrators. You give it a name and click Review Defaults. Here there are various options: the password length, some character-type restrictions for it. Verification by email is a really important one, because you also gain two-factor authentication while using Amazon Cognito. Finally, create the user pool.

You can find it under the User Pools category; make sure you note down the pool ID it has been given, as well as the website endpoint from before. The final step of configuring the user pool is creating an app client for your project. I already have one here, but you can always add multiple ones. Just give it a name and make sure you deselect the "Generate client secret" option, because a client secret is not supported by the JavaScript SDK.

You can also manage and configure every single user in your user pool by clicking here — for instance, you have the option to disable or delete users, and even to create or import new ones — and most projects use JavaScript these days. After that, leave the other options as default and click Create App Client; make sure you save the app client ID for future use. Once you have the two IDs, you can insert them into your application and benefit from the various Cognito

APIs, like the login and register ones. Let us test this new functionality on the website. Now we can register a new user, for instance with this email and a password that satisfies the constraints, and a verification code is sent to the said email. It has sent this JSON using the user pool ID and client ID that we have just provided it with, and the user already appears as unconfirmed in the user pool.
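
A hedged sketch of what those register and login calls look like with the amazon-cognito-identity-js SDK — the pool/client IDs and credentials are placeholders, and the tutorial's own React code may differ:

```javascript
import {
  CognitoUserPool,
  CognitoUser,
  AuthenticationDetails,
} from 'amazon-cognito-identity-js';

const userPool = new CognitoUserPool({
  UserPoolId: 'us-east-1_XXXXXXXXX', // the pool ID noted down earlier
  ClientId: 'XXXXXXXXXXXXXXXXXXXXXXXXXX', // the app client ID
});

// Register: Cognito emails the verification code mentioned above.
userPool.signUp('user@example.com', 'Str0ngPassword!', [], null, (err, result) => {
  if (err) return console.error(err);
  console.log('Unconfirmed user created:', result.user.getUsername());
});

// Login, after the user has confirmed their account.
const user = new CognitoUser({ Username: 'user@example.com', Pool: userPool });
user.authenticateUser(
  new AuthenticationDetails({ Username: 'user@example.com', Password: 'Str0ngPassword!' }),
  {
    onSuccess: (session) => console.log('ID token:', session.getIdToken().getJwtToken()),
    onFailure: (err) => console.error(err),
  }
);
```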

After a user verifies themselves, they can also log in by following the same procedure, and of course you would get a warning if the password is incorrect. Once Cognito is out of the way, all that remains is the serverless back end. AWS Lambda interacts with the database, so we must first create one. Head on to DynamoDB and click Create Table, give it a name and a primary key — you can choose between various types — and then click Create.

For instance, here is my database: I have events with various categories. DynamoDB is a non-relational database, and you can edit the data inside at any time, as long as it is not the primary key of the item. For instance, here I have made a mistake — event capacity should be a number — so I change it, click Save, and it successfully changes it. Furthermore, you can customize it by adding new fields inside it, let's say "time of day: morning", and it would add that attribute. Now that we have the database, we need to create a role — more specifically, an Identity and Access Management (IAM) role — for the future function, so that it has access to read and write to this database specifically.

So let us do just that: go to the Overview tab and write down the Amazon Resource Name (ARN) so that we know which database we'll use, then proceed to IAM and create a new role. Here, choose Lambda, as that's the type of service that will use the role; in the filter policies search bar, begin typing "lambda" and choose AWSLambdaBasicExecutionRole, so that it will have permission to write to CloudWatch Logs and you can see whether you have errors or not. Then click Next, provide a name for it, and you're ready to create the actual role.

You can find it here, in my case. Now, to attach the policy for our specific database, click "Add inline policy", choose DynamoDB as the service, and for actions let's give it all the read, write and list types. For resources, with the "Specific" option checked, go to "table" and click "Add ARN"; here you can paste your own database ARN and it will automatically fill in the fields for you. Click Add, and then you can review the policy, giving it a name as well. Once the role is created with both policies in place,

we are now ready to create the actual Lambda function. This is easily done by going to AWS Lambda and clicking Create Function. I leave the "Author from scratch" option enabled; for runtime, give it the programming language of your choice, a name and the role — you can choose an existing role, and here you can see the one that we've just made, example-lambda-role. Click Create Function and a base one will be created for you.

After we create the function, you can see that it already has access to CloudWatch Logs, as well as the Amazon DynamoDB database that we've just created. The function itself is a hello-world stub onto which you can build other functionality. In order to test it, you click here and give the test event a name and some parameters if needed — in our case the function doesn't use them, so it doesn't really matter. Click Create and then Test.

As you can see, it says "hello from lambda" back — this is why we've added the CloudWatch Logs policy. Now, on to a more complex example: we go into Functions and getEvents. This is a Lambda function which fetches all of the events in the database we've previously looked at that are past a certain hard-coded date, due to this line here, using the DynamoDB scan method. We click Test and check out the results, seeing that it does output all of the events.
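
A hedged sketch of what a getEvents-style Lambda looks like in Node.js — the table name, date field and filter are illustrative; the tutorial's actual code lives in its linked GitHub project and may differ:

```javascript
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async () => {
  const result = await dynamo
    .scan({
      TableName: 'Events', // hypothetical table name
      FilterExpression: '#d >= :cutoff', // the hard-coded date mentioned above
      ExpressionAttributeNames: { '#d': 'eventDate' },
      ExpressionAttributeValues: { ':cutoff': '2019-01-01' },
    })
    .promise();

  return {
    statusCode: 200,
    // CORS header so the S3-hosted front end can call this via API Gateway.
    headers: { 'Access-Control-Allow-Origin': '*' },
    body: JSON.stringify(result.Items),
  };
};
```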

In the same way, you can make a postEvent function, which takes in the parameters required to make a new entry in the database. Let's name the entry "test three" in this case and click Test. The function has added "test three", so let us check — and here it is. Going back to the function, you will have noticed that we have something called an API Gateway here as a trigger. What this does is

it exposes a URL which we can use in our application, which we can call with a GET or POST, for instance, and receive the result of the function back in JSON format. This is essentially the final piece of the puzzle that connects all of the services together. To create one — remember, every function gets its own API Gateway — we give it a name, a description if necessary, and an endpoint type; choose edge-optimized in this case, and create the API.

Go to Actions, create a method of the type that your function expects — let's put it at ANY — select Lambda Function, the region, and the function that you want the API for; let's continue with the example function in our case. The "Use Default Timeout" option should be checked as well. You will land on this page, and the API Gateway is almost ready to be deployed. You'll notice here in Method Request that authorization is set to None.

What does this mean? Well, if we change that, we can make it so that, for instance, viewing all of the events in the database is only allowed if you're already logged in; only after we do so will we be able to see the events. This is done by creating a new authorizer: click Cognito and give it a name — let's call it "user" — select the user pool that you've previously created, and the token source should be "Authorization". Click Create, and then, back in Resources, click the ANY method, Method Request, Authorization, refresh, and you can select the newly created authorizer. Now only being logged in to that specific Cognito user pool will allow you to access the specific function. That being said, we are now ready to deploy the API Gateway.

You should also note this option here: Enable CORS (cross-origin resource sharing). I recommend you enable it if, like me, you have opted to use Axios as the method of communication with the API; otherwise it wouldn't work. Finally, Deploy API: select a new deployment stage with the name of your choice and click Deploy. This will yield an invoke URL, which you will put in your application and call with GET or POST, depending on the function that you want to call. The serverless web app's functionality is now complete.
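
A hedged sketch of calling the deployed API Gateway endpoint from the front end with Axios, passing the Cognito ID token so the authorizer accepts the request — the invoke URL and resource path are placeholders:

```javascript
import axios from 'axios';

const API_URL = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod'; // your invoke URL

async function getEvents(idToken) {
  const response = await axios.get(`${API_URL}/events`, {
    // The Cognito authorizer reads the JWT from the Authorization header.
    headers: { Authorization: idToken },
  });
  return response.data; // JSON returned by the Lambda function
}
```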

You have the static website sitting in S3 storage with all the images, CSS and HTML; then the user pool uses Cognito to manage login, registration and authentication; followed by the newly created database, which is populated and manipulated by the various Lambda functions that you will create, which are called from the project through the API Gateway. Thus, it all amounts to a single web application which can be expanded upon easily.

Last but not least, regarding price: the cost of running the serverless web app is low, as AWS charges only for the time its services actually run, which mainly comes down to the amount of traffic and how many times the Lambda functions execute. For me personally it has been less than 0.1 dollars, and I have had the web app running for a period of about two weeks with extensive function tests. Ultimately, all of these resources are available in a GitHub project; check the video description for more details.

Thank you for watching.


 


Web Payments (Chrome Dev Summit 2016)

But if you think registration forms are difficult, we should talk about checkout forms: a lot more form fields, a lot more questions. I think you're going to see a consistent theme emerging through our talks here today, which is "let the browser help you." There are certain advantages that we have as a browser, especially when it comes to reducing friction and making life easier for users — especially things around repetitive data entry, things that users can store inside the browser. What we're trying to do is expose APIs and give you tools to reduce friction and make things easier for your users.

We saw it in credential management, and we'll see a very similar theme with what we're doing in payments. But first, a little audience activity just before we go off for lunch — some questions. Okay, great. First question, just curious: how many people here actually enjoy the process of buying something on the web using their mobile device? Okay, good, yeah — some people, but by and large, no. For those of you who do, come talk to me afterwards.

I'd love to hear what it is that you like about buying things on the mobile web and what it is that you don't as much. Second question — and I would be really impressed here — how many people can remember all the details of their credit card? I'm talking full 16-digit number, CVC and expiration. Okay, it's more than I expected, I've got to be honest — we're still under 15%, but okay, because I've been in payments for like 18 months and I think I have yet to remember a credit card number, so that's great. And then, okay, final question: how many people enjoy the process of handing over all their sensitive credit card information to a random third-party server? I got one.

It's almost like I'm asking these questions to lead up to a particular point — and there was a point. The reality is that most users find payment difficult. They find it insecure and scary and frightening, and they find the process of doing it on the mobile web particularly bad. So we have this number — we talked about it at I/O as well, and it hasn't really changed — which is that on average we tend to see about 66 percent fewer conversions on mobile than on desktop. Again, we think there's an explanation for that, which is all around high

friction, the difficulty, and issues around trustworthiness and security. So we'll talk about how we're addressing those today and how we're trying to bring fast, simple and secure payments to the web platform. But — I'm a PM, and that's a little bit too PM-y for me, actually — so I have a much better mission for us inside the Chrome team, which is: we're trying to save the world from annoying checkout forms.

I'm trying to save the world from virtual keyboards and having to memorize things and all of those terrible experiences. I actually started this joke of the "Better Payments Bureau" a couple of months ago, and now it's become a thing. Anyway, Chrome has actually been fighting the good fight against annoying checkout forms for many years. We started with autofill back in the day — you're probably familiar with autofill.

This is my one slide on it. It's not really the topic today, but consider this my 10-second plea: if nothing else, leave today and set autocomplete types on your checkout forms. It helps us, it helps the users, it helps the browsers, and it basically ensures a hundred percent accuracy on autofill.
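
For reference, setting those autocomplete types looks like this — the tokens are standard; the surrounding form markup is my own illustration, not the slide's:

```html
<form>
  <input name="name"     autocomplete="cc-name"   placeholder="Name on card">
  <input name="cardnum"  autocomplete="cc-number" placeholder="Card number">
  <input name="expiry"   autocomplete="cc-exp"    placeholder="MM/YY">
  <input name="cvc"      autocomplete="cc-csc"    placeholder="CVC">
  <input name="shipping" autocomplete="shipping street-address" placeholder="Shipping address">
</form>
```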

But I'm not here to talk about that today. I'm really here to talk about Payment Request, which is this new API that we're building for the web to help solve a lot of the problems I've been talking about. Before I talk about what Payment Request is, though, I want to talk about what Payment Request isn't, because payment is complicated. There are a lot of players in this space and I just want to set things up front and help alleviate any confusion. First, Payment Request is not a new payment method. We're not trying to create Chrome Pay or Browser Pay or yet another X-Pay button on your website.

That's not fundamentally our goal. Our goal is to help users pay the way that they want to pay, and do it quickly and efficiently. Secondly, we are not trying to become a gateway or a processor or some entity that literally moves money in the ecosystem. We're not trying to step on any toes here or enter into that business. We think the market has actually done an incredible job here.

Players like Stripe and Braintree and others have done a really stellar job over the last couple of years of taking the incredible complexity of accepting online payments and making it really simple. They've removed the burden of things like acquiring banks and all the complexities of PCI compliance, and they've put it all into an easy-to-use API. So our goal is to ensure that whatever we do plays really nicely with all these gateways and processors.

But it's not our goal to become one. The thing about all these great new services, though, is that they've really focused on developers, which is great — they've made your lives easier and made it easier for you to accept payments — but the user experience has largely remained the same. You have to go from a state where you know nothing about a user to knowing everything, and form fields tend to be the way that we do this.

So Payment Request was fundamentally built for users. We think it's pretty good for developers too, and it's pretty easy — we'll talk about code samples — but fundamentally my goal is to think about users and how I can help them get through these burdensome flows on mobile faster and more efficiently. So what exactly is Payment Request? Well, Payment Request, like I said, is a new standards-based API.

Standards-based is something I want to emphasize: we joined the Web Payments Working Group almost a couple of years ago now, and every major browser vendor is in it. We have financial institutions from around the globe, and we're trying really hard to build something that everyone can integrate, that all forms of payment integrate with and all browsers can implement, so that users on a variety of devices and ecosystems can continue to leverage it and have the benefit of it. We're just in the early stages, and we'll talk about where we're at, but that's fundamentally our goal. So when we started to think about what the design of this API would look like,

we had two high-level goals in mind, and they reference back to my original question set. The first one is to build a seamless, easy-to-use checkout experience, on mobile in particular. We wanted to make sure that users could minimize typing and minimize friction as much as possible. The second thing is that we really want to bring more secure payments into the web platform. In many ways the web is one of the last places where it's very commonplace to hand over all of your sensitive information to some unknown third party.

Even though there's an answer to this from the payments community with regards to tokenization, the web really didn't have a great answer for it. That's why we're really excited that we've brought Android Pay into the web platform, and we'll continue to expand that. This brings tokenized forms of payment, so in the event of a data breach or other problems, you as a user are protected, but it also reduces the burden for you as developers and merchants. So those are the two high-level goals that we had. The idea here, at a high level, is that if you think of your traditional checkout flow, it looks something like this.

It's anywhere from two to five pages — maybe one for single-page checkouts — and you have somewhere between 10 and 40 form fields where you're asking a variety of questions: what's your name, what's your shipping address, what's your email, what's your phone number, what's your credit card number, what's your expiration. And then you have users who are trying to do all this on their mobile device, and at some point they're like, "Man, I kind of give up," and maybe they go to desktop later, or, most likely, they don't. Aaron talked a lot about the growth of mobile, so we really think we need to fix this and make it easier. The way this happens with Payment Request is that the browser plays this role and helps facilitate checkout across the highest-friction point.

We take that common set of data, those common things that you request, and leverage our strengths to make it easier for users to be successful. Before I show you a demo, I want to talk about what types of data are actually supported by Payment Request. The first one is probably a little bit obvious, but it's a form of payment: at the end of the day, you need a way to actually request money from the ecosystem, so it needs some sort of form of payment.

Right now in Chrome we support credit cards and Android Pay. I put "etc." on here because the plan is to support more, but we'll talk about that a bit later. You always have to request a form of payment — you can't call Payment Request and not want a form of payment; that would just be weird, and it would then just be a "request arbitrary user data" API. The other big thing that we allow you to request is shipping address and shipping options, for physical goods purchases.

You can leverage the API to say, "Hey, give me their shipping address," and then there's a dynamic mechanism for you to take that address and populate shipping options with updated pricing, etc. You can also request a phone number. You can request an email address, of course, for sending a receipt or even prompting sign-up afterwards. And coming soon — not quite there yet, but in a couple of months — is payer name support.

These are all flexible: you can request any of these, or none of them if you want. The idea is to support a broad range of use cases out there. If you're a ride-pickup service, you probably don't need everything, but you definitely need, let's say, a location — an address — and a name. Or if you're selling a physical good, you may or may not need the payer name, because you'll get that from the shipping address. So it's flexible, and you can accommodate the experience to whatever fits your business. But the really important point here is that all of these data points can be stored and returned by the browser, and users by and large trust Chrome to store this data.

They trust us to store their names, their emails and even their credit card data. So the question is: why put users through the burden of a form that they have to fill out manually? You saw the slide earlier about fat-fingering and the difficulty of typing on a mobile keyboard, and those problems are multiplied across all those form fields. If you can save them the burden of doing that, we think it's worthwhile, and Payment Request is really designed to do that.

But let's go ahead and see it in action. Let's switch over to a demo here — see if we can see it. All right, excellent. I'm going to open up Chrome stable, and I'm actually going to use the exact same shop app — oh, and you see it auto-signed me in, you have to love it when a demo goes right — but otherwise it's the exact same website, the Polymer Shop demo, except I'm going to go a little bit further and actually make a purchase.

I hit the Shop Now button. You know, I definitely don't have enough Google hoodies, so I'll just choose this sweatshirt and buy yet another one. It's a standard shop: you see that there's size and quantity — I won't change those — but you see that there are two buttons at the bottom. There's a typical Add to Cart button, but there's also this Buy Now button. That Buy Now button is based on feature detection: we're checking to see if Payment Request exists, and if it's there, great, let's leverage it, and if not,

you would just see Add to Cart. But I'm going to use the rapid checkout approach, so I tap on the Buy Now button, and you see that this payment sheet slides up from the bottom. This is Payment Request in action: you're looking at natively drawn UI — it's controlled by us, the browser — but it's populated with data from the merchant. You see that my total amount is there, $22.15.

It defaults to the form of payment that I prefer, which is Android Pay if it's available, because it's faster and more secure. You see they're also requesting my email address for the purpose of sending a receipt, and the only thing I need to do here is select the shipping address — it's very difficult to ship a sweatshirt to someone if you don't know where it goes. So I'll tap on that, and you'll see that the payment sheet slides up to full screen, and it has my addresses automatically populated for me.

These are our two Google offices here, so I'll go ahead and ship to the one in San Francisco, where I work. You see that when I do that, the shipping options are automatically populated: we have a free shipping-in-California option or an express shipping option, and if I change those, it will dynamically change the price. You can see here that express shipping changes the total, but of course, why would I pay more? I'm going to go back to the free one — that seems to make a whole lot more sense to me — and now I'm ready to pay.

I just hit the Pay button, and then you'll see the Android Pay screen slide directly up. We're running the test app, so it says "unrecognized app" — you wouldn't see that — and because I've authenticated in the last couple of minutes, I don't even have to do any extra authentication on Android Pay. I'll literally just tap the Continue button, a response comes back, and the transaction is successful. So I paid with Android Pay: no keyboard, no typing.

All I had to do was tap to select and confirm my shipping address. So, really great, really seamless — we're really excited about it. And just to show you: if you don't have Android Pay available, no big deal, we can always change the form of payment, and if I didn't have Android Pay, I would just default back to my credit card — in this case, a Visa card that I have. Once again I'll select my shipping address and options.

I hit Pay, and the only keyboard we can't get rid of is the CVC input — everything else we have — so I'm going to type one-two-three. I used to do this with a live credit card and discovered that it didn't work out well for me, so I've switched to a demo card, but either way the same concept applies. We'll talk about what's happening behind the scenes, but this is all basically client-side, so it's all happening super fast, and it's pretty great — we're really excited about that.

Now maybe we can switch back to the slides and talk more about what it takes to make this actually happen. So how do you leverage Payment Request? Well, it's pretty simple. There are three parts to Payment Request, two of which are required and one of which is completely optional, and we'll talk about them in order. The very first one is payment methods: we need to know basically all the ways that you can get paid.

This could be a wide variety of things: it could be "I accept Visa and Mastercard and Amex and Discover, JCB, UnionPay." In the future it could be "I accept Alipay or iDEAL or PayPal," etc., as long as those parties are built into the ecosystem. Like I said, for now in Chrome we just launched, so we're starting with credit card support and Android Pay. It looks a little bit like this: we basically pass in this thing called methodData.

methodData is an array of objects, and those objects each have an array of supported payment methods. You can see here that the first thing I support is credit cards — I support the standard Visa, Mastercard, Amex and Discover. That's it, nothing else to do; it just says "I accept these." In the future — coming in a couple of months — we've added support for granularity for things like debit or credit or prepaid, but for right now, essentially, when you say Visa, we assume you can accept all Visa cards and don't make a strong differentiation there. The second one is a little bit more interesting, and this is Android Pay; this is an abbreviated version of it.

You see that there’s an additional key inside of that object, which is the data data, is sort of a generic object and it’s a payment method. Specific. The reality is that different payment methods out there have different dependencies different things that you’re going to pass in when you instantiate it by default, so for Android pay, for example, you always have to pass in like your merchant ID, you have to pass in what kind Of token you would like either network or gateway.

We don’t have a full example here, but and then what happens then is when a user chooses to pay with one of those forms of payment, we basically bundled it all up and pass it on to the payment. App so and then the payment app uses that data plus things like origin and assertions from chrome to basically verify that the payment app is the right one, and so the payment can can continue. So it’s pretty simple, but the idea here is that you throw everything you can at the browser for ways that you accept payment.

So if you can accept a hundred different ways of paying around the globe, tell us a hundred different ways to pay, because what the browser does is find the spot in the middle between the set of ways you can get paid and the ways a user can pay you, and give the user an optimized experience around the ones that make the most sense for them. You saw, for example, in the demo that Android Pay and a Visa card were available; if we had removed Visa as an option, then Visa just wouldn't show up, because that wouldn't make any sense. As you go across the globe there are a wide variety of ways to pay, but we recommend giving us all of them, and then we'll find the best experience for the user, optimized around their preferences, their defaults, and what's best for them. The second bit of data is also quite important: now that we know how I can pay you, we need to know how much money you want to get paid, and this is what that looks like.

The first and most important thing that's required is this total attribute. There are two parts to it. The first is the label, which you customize: if you tell us "Total" we'll display "Total", but it could be "Authorization", "Donation", whatever you want. And we have to know an amount, which is composed of a total value and an underlying currency code.

That way we, or the underlying payment app that we transfer to, know what currency to charge in. We also support display items. Just like I showed you, when I tapped on the total, line items came down that told you how the total amount was reached. This is wholly optional: you can pass it in if you want, or ignore it.

We recommend it it’s nice to give a high-level overview to a user about the things that inform the total amount, things like the subtotal tax, shipping, cost, etc. Less of like a full itemized receipt and again in more of like a high-level overview, one important point payment request does not do math we’re not good at floating point math. So if you pass in, you know, you have two line items that sum to five and your total says four like we’re not going to throw anything so you’re totally in control of this thing.

So just keep that in mind; by the way, there might be some use cases where it makes sense for those not to align, but by and large I just want to point that out. The other point to note is that the transaction details can also contain shipping options; if you put them in there, we support default shipping options. We only recommend using this if you're highly confident that your shipping options will not change, that they're not dynamic.

If you support, for example, worldwide free shipping and it never changes no matter what the address is, feel free to populate this by default. But if your shipping is dependent on the user's address, then we recommend waiting until you've gotten a shipping address change event, which we'll talk about in a little bit. You can then use that to dynamically query whatever service you use to calculate prices and repopulate this. That's the important point: the transaction details object can be updated and overwritten throughout the lifecycle of the payment request at certain events and points.
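For the static case, a sketch of default shipping options inside the same details object might look like this (labels and prices are illustrative):

```js
// Only pre-populate shippingOptions if they never depend on the address.
details.shippingOptions = [
  {
    id: 'free',
    label: 'Free shipping in California',
    amount: { currency: 'USD', value: '0.00' },
    selected: true // pre-selects this option in the payment sheet
  },
  {
    id: 'express',
    label: 'Express shipping',
    amount: { currency: 'USD', value: '12.00' }
  }
];
```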

So if a user changes their shipping option, you saw in my demo how, when I changed my shipping, the price and the line items changed. That's because when that event took place, we repopulated that set of transaction details, so you have flexibility and control on those events, and that's how we get the dynamic pricing model that exists out there. Again, don't use default shipping options unless you're highly confident they aren't going to change. The final piece is the extra information, the optional set of options: the things I talked about, like user address, shipping support, name, email and phone, all entirely optional but definitely useful.

I think there was sort of a myth out there that the only drop-off point in the funnel is the process of putting in your credit card, but really the entire checkout funnel is, well, a funnel, and wherever your users experience friction at a step, there's drop-off. So we highly recommend taking advantage of these different pieces. There are a few that we support, like I said, and it's as simple as passing in a bunch of booleans.

Basically: do I want shipping? Yes. Do I want email? Yes. And so on. These can vary, so you can say I don't want shipping but I do want a name and phone number, or you can say I just want an email address to send a receipt to, for example. It's completely configurable, and again the idea is to support a wide variety of use cases. Something minor that we have coming soon in the next couple of months is support for a shipping type value.

It’s pretty simple, but the idea here is that let’s say you are buying a pizza, one does not ship a pizza right. That’s just weird like we deliver pizzas, and so it’s a very minor feature that allows you to actually specify shipping delivery or pickup. As like the address type, so we still call shipping address underlined in the system, but this way in the UI a user would see. Oh, I want my pizza delivered two three four five speed streets.

I don’t want it delivered or if you’re a ride-sharing service, for example, you can say pickup and it’s your pickup address where you’re currently at or located, and so that’s the value of that particular little thing again pretty minor but allows us to just have like a Better user experience underlining the whole system so now we just put it together basically, and we get that whole experience that we talked about, or I just showed you so.

The first thing we do is instantiate a PaymentRequest, and we pass in our method data, the ways we can get paid. We pass in the transaction details: how much money, what currency code, what line items we want. And then our optional options. In the case of the demo that I went through, that would be requestShipping: true and requestPayerEmail: true, but again, that parameter is completely optional.

You see here. I’ve also added an event listener to my shipping address change and we support two events in the system: shipping address change and shipping option change. These are this: is that dynamic mechanism that allows you to receive the events parse out the new details? Let’s say so: if a user selects a shipping address that event fires, you can actually pull out that full shipping address.

We don’t do just the zip code because you can’t get fully accurate shipping information with just a zip code. So you get a full user address. You can use that at that time to call event update with this basically says: hey, you know browser, I’m thinking. I need to calculate this. You can call your back-end api’s and you can update resolve a promise with new transaction details. So again that updated transaction details object can now contain your updated set of shipping options, including the empty set of options and an error which says like opps.

Sorry, we don’t ship to you know wherever it is that you’re trying to ship to so that’s also supported and so and by instantiating payment request or not. There’s no UI. That shows it’s just instantiation when you want that actual payment sheet to slide up from the bottom. Oh, we called that show that show is actually our signal and we actually raised that payment sheet and put the user through the process.

Show returns a promise, and when that promise resolves you have a payment response, which contains the entire set of data that you requested; it's just a JSON object. For a credit card, for example, you would know what the underlying network was, Visa, Mastercard, etc., and then you would see the credit card number, phone number, full CVC, expiration, and so on.

Think of it as the same set of data a user would have typed into your form, which you're now getting from the browser as a JSON response. You can use that to send the response directly off to your gateway or your server, or even, in the case of a processor API like Stripe's, directly over to their APIs for tokenization. It's completely up to you, because these responses are all plaintext.

It’s important to note our our method specific. So if you selected Android pay as your form of payments, then when that response comes back, it’s going to look like an Android pay response. You’re going to be able to select this there’s a key and they’ll. Tell you that, oh there, the form of payment they chose was Android pay and then you’ll have to expect that the Android pay details object, looks different than a credit card, one which may look different than some other form of payment like an alley, pay, etc.

That's a good thing, because different payment methods have different requirements, are different systems, and call things by different names. The final step is that we need you to tell us to close the UI. Once you get this payment response back, we show a little loading spinner, and that spinner is waiting for you to come back and let us know the result of the transaction. We highly encourage, but do not require, that at the time the payment response comes back, you try to make the payment.

There are legitimate use cases where you can't do this, things like 3-D Secure flows, etc., but by and large, if you can submit, we recommend it. So you call complete; you can call it with success or failure, but you can also call it with nothing. This is basically an affordance for the browser to do special UI in the event of success, like a little animated check mark or something. It's totally optional, but the important thing is that when the UI closes we resolve that promise, and that promise is your cue that the UI has been completely torn down.
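The tail end of the flow, show() plus completing the UI, might be sketched like this; chargeOnServer is a hypothetical function that submits the response data to your gateway or server:

```js
request.show()
  .then((response) => {
    // response.methodName says which method was chosen; response.details is
    // method-specific (card fields, an Android Pay token, and so on).
    return chargeOnServer(response) // hypothetical back-end call
      .then((ok) => response.complete(ok ? 'success' : 'fail'))
      .then(() => {
        // The complete() promise resolves once the browser UI is fully torn down.
      });
  })
  .catch((err) => {
    // The user dismissed the sheet, or something else went wrong.
    console.error(err);
  });
```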

So if you have animations or things you're trying to time with the close, wait for that promise to resolve and you'll be guaranteed that any Chrome UI has been stripped from the page. And that's it; with that you basically have the whole experience. In just a few lines of code you help the user avoid all the friction and difficulty of typing in all those annoying form fields. Pretty simple. But also, with Android Pay and other future forms of tokenized payment,

you're getting easy tokenized forms of payment that reduce burdens like CVC entry and memorization. We're really excited about this, and it's all possible because the browser is sitting as the middleman, proxying data back and forth between native apps on the device and the underlying website and the developer that's requesting it. So in my last few minutes here I want to talk about a few UX considerations and forward-looking stuff.

The first one is my very bold, hyperbolic statement: kill the carts. That's maybe a bit strong, but it's my way of saying that for a lot of users who come to your site on mobile and only make a single purchase, why put them through the burden of opening the page, adding to a cart, finding the cart page, clicking through the cart page, going to the review page, going to the checkout page, and then

finally starting the payment process? On mobile you want to optimize their experience, and Payment Request allows you to do that quickly and immediately, so consider adding Buy Now buttons directly to your product pages, especially on mobile, when it makes sense. Again, this won't make sense for all businesses, but I would encourage you to go back, check the numbers, and see if this might be a powerful tool for you to leverage to help your users. A few other things really quickly that I've talked about, and that you've heard mentioned here today: think progressive enhancement.

This is a new API; it won't always be available, so you can't completely depend on it yet, especially in a cross-browser way, although hopefully we'll get there. So think about what happens if it's not available; you'll still need fallback flows, etc. The second one is that we encourage you to keep the set of line items high level, so don't think of it like an itemized receipt. We don't want the user to scroll through a long list in the UI; if possible, keep it high level: subtotal, tax, things like that.

If it’s single item you can put it in there, but by and large we encourage high-level subtotals and things like that and then last one. Something to consider is that if you already have a user’s information – and you already have a credit card on file or some way to pay – I wouldn’t expect you to use. Don’t think you have to use payment requests like give the user the best experience they can and that means go ahead and just leveraging what you already have on file.

But if you don’t have anything and you don’t have the credit card or the credit cards expired, you need a new one, a consider payment request as a tool to help these users. You know we talked about. You know. We talked about sign up first right, but that might not always make sense for your business if you think about it, like maybe your your p0 or your most important thing is getting user through that checkout flow, then you can request an email address from payment requests And now all you need from them at the end of that funnel to sort of optimize.

the experience for next time, is a password. So consider leveraging this; again, these are tools to help you be successful. Now just a quick status update. We are live in Chrome as of M53, so we've only been live for about eight weeks now, and it was sort of a quiet launch. We had a great set of early launch partners that we worked very closely with; they integrated, tested, and gave us a lot of great feedback.

The API is still early; Chrome is the first browser to implement it, so we're really thankful to all of these partners for their great feedback, and from it we're making a lot of changes, improvements, and enhancements to the underlying experience. So I just want to talk a little about what you can expect to come soon. The first thing we're working really hard on is support for third-party payment apps.

As you go around the world, there are a lot of ways to pay. In India you have Paytm and Snapdeal and all these other emerging wallets; in Holland you have iDEAL; and as you go into other countries there are whole new forms of payment that are not just credit cards and Android Pay. We want to be able to support all of this in a nice, open way, where we can support users from all over the globe, no matter what. We're really close to finalizing this, and we hope to have support next year.

Secondly, we have a lot of spec and feature enhancements coming: the shipping address types I talked about, the ability to call this within an iframe coming up soon, as well as many other small enhancements and improvements. We also have a lot of UX improvements; we added card scanning just recently, so if the browser doesn't have your credit card, you can scan it directly into the UI. There are things to make it faster and easier, with fundamentally better onboarding flows. And then just quick timelines here.

Just so you’re aware we’re sort of targeting in 56. That’s our January release as, like our next big major release. It’s going to have all these enhancements all these improvements and we’re really excited about it on you’ll, continue get updates along the way. This is all live in chrome, stable and we’d love to continue to work with you and get your feedback um everything I’ve talked about today is available online.

in a lot more detail: we have integration guides and a bunch of examples and sample code you can bring up on your phones, and we also have a getting-started guide for Android Pay. Android Pay is really simple with Payment Request, it's less than ten lines; we do almost all the heavy lifting for you, so just a quick shout-out there. I'll be around the rest of the day, and I would love to chat with you, learn about your challenges, the things you think you need from the browser, and ways that we can help

you be successful, especially in checkout. So thank you so much.


 

Categories
Online Marketing

Service Workers – The State of the Web

My guest is Jeff Posnick; he's on Google's developer relations team, and today we're talking about service workers and how they're elevating the capabilities of progressive web apps. Let's get started. All right, so Jeff, thanks for being here. In the context of web technologies, what is a worker and what does it actually do? So the whole idea of a worker has been around for a while.

Traditionally there were web workers, which basically serve as a background thread for the web. A worker can execute JavaScript code that's independent from the context of your actual web page, and it's a great way to offload processing or do tasks that might take a certain amount of time without slowing down the main thread for your web page. That's been the traditional model for workers on the web.

So what is a service worker, then? What does that actually do? The service worker builds on that concept and adds some superpowers, really things that you were not able to do before. A service worker is similar to a worker in that it runs independently from your actual web page, and it doesn't have access to things like the DOM or the global scope of your web page, but unlike regular workers, it can respond to specific events, and some of those events relate to network traffic.

One of the really cool things, and the most common use case for a service worker, is to respond to outgoing network requests that your web page might be making. It can sit in between your web page and the network and serve almost as a proxy that you control: you can write code to take advantage of things like the Cache Storage API and say, hey, I know how to respond to this particular request without having to go to the network.

I can just use this cached response, thereby avoiding the uncertainty and unreliability that comes with going against the network. It also enables capabilities like push notifications, etc. Yeah, there's a whole bunch of event-based listeners that you can set up in the service worker, including responding to push notifications that may come from a notification server, fetch requests, and a few other interesting things that are slated for the future as well.
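A minimal sketch of that proxy behavior, a fetch listener that answers from the Cache Storage API when it can and falls back to the network otherwise, might look like this:

```js
// Inside the service worker file (e.g. sw.js).
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve a cached response if we have one; otherwise go to the network.
      return cached || fetch(event.request);
    })
  );
});
```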

So what’s the status of its implementation and support? Yes, the service workers are well supported right now in modern browsers. So pretty much anything Chrome or chromium based, Firefox, Safari and edge at the moment, it’s great. They all have at least a basic level of support for service workers and some of the enabling technologies, like the cache storage API, so they’re they’re ready to use right now.

So websites may experience network reliability issues at any given time. Would you recommend service workers for every website? Should they all be using one? Well, it's tempting to just throw a service worker up and see what happens, but I would suggest taking a little bit more of a considered approach before adding a service worker to your web app. Ideally, a service worker will play the same role that your web server would play, and maybe share the same logic for routing and templating that your web server would normally respond with.

If you have a setup where your web server, as with a lot of single-page apps, can just respond with some static HTML that satisfies any sort of navigation request, that's pretty easy to map into service worker behavior. We call that the app shell model: the service worker can say, hey, you're navigating to XYZ URL, I can just respond with this HTML and it'll always work.

So that’s a really good model for using a serviceworker. If you have a single page app we’re also seeing some success with partners or using models where their servers implemented in JavaScript, they have some routing logic and they have some templating logic. That’S on JavaScript, and that translates over really well to the serviceworker as well, where the serviceworker you just basically fill the role that the server would normally play.

I would say if you have a scenario where your back-end web server is doing a whole bunch of complex templating and remote API calls in a language that is not JavaScript, it might be hard to get your service worker to behave exactly the same way. In those scenarios you can still add a service worker, and we have some provisions in place so you don't pay the price of having that service worker intercepting all requests, doing nothing with them, and just going against the network anyway.

There are ways of saying, hey, we have a service worker, but we're not going to be able to respond with HTML for navigation requests. In those scenarios it is still possible to use the service worker for things like showing a custom offline page when you detect that the user's network connection is down, or implementing an interesting caching strategy like stale-while-revalidate for certain types of resources.
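For sites that can't serve their HTML from the service worker, the custom offline page idea might be sketched like this, assuming an '/offline.html' page was added to a cache during the install step:

```js
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      // Try the network first; fall back to the cached offline page if it fails.
      fetch(event.request).catch(() => caches.match('/offline.html'))
    );
  }
});
```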

So it is still possible to add a service worker in those cases, but you won't necessarily get the same performance and reliability benefits that you get when your service worker responds to all navigations with HTML. By essentially having a network proxy juggling requests and responses, is there a latency cost to having a service worker? Yeah, you're running JavaScript code that sits in between your web app and the network, and that's not free.

Some of it depends on whether the service worker is already running. One of the neat features of a service worker, particularly to preserve battery on mobile devices, is that it's killed pretty aggressively; it doesn't just keep running forever in the background. So sometimes you do have to start the service worker up again, and there is a cost involved in that startup. There's a really good talk from the Chrome Dev Summit that just happened a couple of months ago that goes into some metrics and real-world performance

timings of exactly how long it takes to start up a service worker: you're seeing tens to hundreds of milliseconds depending on the actual device and things like the storage speed of the device. So you are potentially going to be paying that cost when you're using a service worker, and again, that's why it's important to have a strategy in place for responding to requests, ideally by avoiding the network and just going against the Cache Storage API.

Ideally, and if you’re doing that, then you should see the service worker give you an that positive in terms of performance, you know paying tens, maybe even hundreds of milliseconds is nothing compared to the multiple seconds. Simply didn’t see that you might expect from making a network request each time you navigate to a new URL right. What’S the saying the fastest request is the one that you never need to make indeed yeah.

So what are some anti-patterns that you've seen in the way people have implemented service workers? There's a lot of power involved in using a service worker. It's just JavaScript that you write, and it will pretty much do whatever you want, so you can do all sorts of crazy things, some of which are cool as proofs of concept but not necessarily things you want to deploy to production. In terms of the things we've seen as pain points, or things that are unfortunately pretty easy to get wrong when implementing a service worker,

I think one of the most common is caching requests and responses as you go without any sort of upper limit on the amount of data you're storing. You can imagine a website that has a bunch of different articles, and each of those articles has images. It's pretty easy to write a service worker that just intercepts all those requests, takes the responses and saves them in the cache, but those cached responses will never get cleaned up by default.

There's not really any provision in the Cache Storage API for saying, stop when you reach 50 or 100 entries or something like that, so you could very easily keep using up space on your users' devices, and potentially use up space for things that are never going to be used again. If you have an article from a week ago and you're caching all the images in that article, that's kind of cool,

I guess if you’re going to be visit article immediately, but if it’s a page that users never going to go to again, then you’re, really just caching things for no reason. I would say that really one of the important things before you implement your serviceworker kind of have a strategy for each type of request and say: here’s my navigation requests that are being made for HTML; here’s how I’m going to respond to them here.

I would say one of the important things to do before you implement your service worker is to have a strategy for each type of request: here are the navigation requests being made for HTML and here's how I'm going to respond to them; here are the image requests I'm making, and maybe it doesn't make sense to cache them at all, or maybe I only cache certain images and not others. Thinking about that really just means getting comfortable with the network panel in the browser's dev tools and seeing the full list of requests being made. Sometimes your web app is making requests

If you don’t even realize it’s happening and it’s coming from the third-party code and your service worker ends up seeing that too, so you want to make sure that you know what your service work is doing. You know what your web app is doing and just one other. I would know that a lot of times and kind of pain, point and things that could go wrong with me using a service work, but just has to do with controlling updates to resources.

You are stepping in between your web app and the web server, potentially responding with cached resources, and if you're not sure those cached resources are being updated every time you make changes to your actual website and redeploy to your web server, it's possible your users will end up seeing stale content more or less indefinitely. This is a trade-off: seeing stale content but avoiding the network gives you performance benefits.

So that’s that’s good for a lot of scenarios, but you do need to have a provision in place for updating and making sure that you know. Maybe the user sees still content then the next time they visit the site. They get fresh content. So you know you could do that right. Unfortunately, you could get that part wrong and the users can end up the frustrating experience. So you maintain a tool called work box j/s.

What is that? What does it do? Sure, so Workbox is an open-source set of libraries for dealing with service workers and all aspects of building them. We have some tools that integrate with build processes: we have a webpack plugin, a command-line tool, and a Node module. That aspect of the tools is basically something you can drop into your current build process to get a list of all of the assets being produced

every time you rebuild your site, along with some fingerprinting information, say, this is a particular version of your index.html. Workbox will keep track of that for you, and then it will efficiently cache all of the files created by your build process, which helps ensure you don't run into scenarios like I just described, where you've rebuilt

your site and you never get updates to your previously cached resources. We also have some tools as part of Workbox that execute at runtime as part of the service worker: libraries for doing common things like routing requests, and some canonical response strategies for dealing with caching, things like stale-while-revalidate or cache-first.

We have implementations of those strategies inside of Workbox, and then we have some value-adds on top of what you get with the basic service worker spec and the Cache Storage spec. We actually have an implementation of a cache expiration policy that you can apply to caches that would otherwise grow indefinitely; using Workbox you can say, hey, I'd actually like to stop

when I reach ten items, and purge the least-recently-used items from the cache when that happens, plus a few other runtime modules. We see it as a bit of a grab bag for all the things somebody might want to do with a service worker, and we ship them as individual modules, so you can choose the ones you think would be useful for your particular use case. If you don't want to use something, that's fine; you don't have to incur the cost of downloading it or anything like that.
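As an illustration of what that runtime side can look like, here is a small sketch using Workbox's routing, strategy, and expiration modules; the module names match recent Workbox releases, and the cache name and route are just examples:

```js
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';
import { ExpirationPlugin } from 'workbox-expiration';

// Cache image requests with stale-while-revalidate, capped at ten entries.
registerRoute(
  ({ request }) => request.destination === 'image',
  new StaleWhileRevalidate({
    cacheName: 'images',
    plugins: [new ExpirationPlugin({ maxEntries: 10 })]
  })
);
```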

Do you foresee some of those caching and expiration policies making their way back into the Cache Storage API? Yeah, it's kind of interesting: whenever you have something that's almost like a polyfill for some behavior on the web, whether that ends up being implemented back into the standards, the actual library could just fade away and you'd use the underlying standards.

And you know I’d like to see that. I think that where cost has been really great for kind of enabling folks to ship service workers in production and seeing the types of things that they actually need, when you’re shipping somebody in production and a lot of times when you could do that and get points. As a vision thing like yeah, you know it is actually important to have run time, cache expiration.

That could then be used when going to different standards groups and saying, hey, we really do need to extend what's supported natively in the platform to take care of this really common use case. Whether that actually happens or not remains to be seen, but I think Workbox is positioned to help folks with that initial proving-that-these-things-are-necessary stage and take it from there.

So, in terms of adoption, according to the HTTP Archive, less than 1% of websites tested actually include a service worker, which is kind of a misleading number for two reasons. The first is that it's growing at a very fast rate, and the websites that do include one are actually pretty popular websites. Can you give us some examples of those? Yeah, so I think the raw number of unique URLs might be on the lower side, but in terms of traffic, sites as big as Google Search have deployed a service worker for some types of clients.

Partners that we've talked about using Workbox in particular in the past: Starbucks has a nice progressive web app, Pinterest as well, and there are also some sites you might have heard of, like Facebook and Twitter, that are using service workers, not using Workbox, but using them to unlock things like their progressive web app experience, or in some cases just showing notifications, which is an important part of being on the web and having parity with native apps.

So I think the actual number of visits to web pages served with a service worker is probably much higher than the 1% number would indicate. And there are challenges with adding a service worker, especially into legacy sites; it does take the coordination that we talked about before, making sure that your service worker actually behaves in a similar way to your web server, and that doesn't always fit into existing sites.

So a lot of times we’ve seen when working with partners in particular, is like you know: you’re planning a rewrite, re architecture of your site anyway, that’s a great time to add a service worker in and just kind of take care of that story as well. Are there any options for CMS users who may be using things like WordPress or Drupal? So there definitely are, and I think that you know first of all, I’d work for everybody back to another talk from the most recent chrome dev summit.

that goes into some detail about the WordPress ecosystem in general. They have a really cool solution; some folks from the dev rel team at Google have been working on it, and I think it works around that problem I was describing, where the architecture of your back-end web server needs to match up with the service worker implementation, by setting a baseline.

So it’s not an attempt to take any arbitrary, WordPress site that might be out there, which might be executing random PHP code depending upon you know what kind of themes and extensions and all the other stuff is going on. You really are not going to be able to successfully translate that into just a general-purpose serviceworker, but the approach that was subscribed and this talk. It seems to be building on top of a kind of a common baseline of using the amp plugin as a starting point.

Any site that has gone through the effort of meeting all the requirements for using the AMP plugin, and I don't know the full set, but I think that means things like not running external scripts and not doing anything too crazy with other plugins that insert random HTML on the page. Building on top of that, you can then have a service worker that says, okay, I actually do know how to handle this subset of activities that WordPress is doing when it's using the AMP plugin, and it can automatically generate that service worker for you.

So again, it’s part of a migration story. I think it’s not going to just drop into any existing legacy WordPress site, but it does give a nice path forward for folks who are planning on rewriting anyway are planning on making some changes anyway, and plugging into the CMS ecosystem is great way to increase adoption By tens of percents on what yeah absolutely so, what kinds of resources would you recommend for someone who’s just getting started with service workers? We have a lot of material available, some of which is more recent than others.

I would say the thing I worked on most recently is the resilience section of web.dev. If you were to go there, it will walk you through the various steps of thinking about adding a service worker to your website, or really about making your website more resilient in general. It talks about identifying your network traffic, it talks about using the browser's HTTP cache effectively, which is your first line of defense, and then it goes into how you can add Workbox to an existing site and the various steps involved there. So if you want a guided path, I would say that's one option; I'm a little biased

toward that. If you want to just learn more about service workers in general, the material written by my colleague Jake Archibald is probably the best for folks who really want to deep-dive on things. He's somebody who worked on the actual service worker specification, and he knows more than anybody else about these things. He has a really great article about the service worker lifecycle: all the different events that get fired, how you have to handle those events differently, and the implications they have for the state of your caches and updates, and things like that. Diving into that would be my recommended starting point. He also has another article that is almost a cookbook of recipes for caching, so implementations of the stale-while-revalidate

pattern, the cache-first pattern: if you wanted to implement them yourself instead of using Workbox, he walks through the process. Is that the Offline Cookbook? Yes, the Offline Cookbook. And if you want something that's really offline, there are some actual physical books that are pretty cool, related to service workers and progressive web apps in general. There's a new book written by Jason Grigsby in particular that I would recommend; it's not so much about the technical aspects of service workers, but more about why you should think about adding a service worker to your site and why you might want to build a progressive web app in general. It's a really cool book that takes it from a slightly different angle but gives some good perspective. Great, Jeff,

thank you again for being here. Absolutely. You can find links to everything we talked about in the description below. Thanks a lot, and we'll see you next time.

