
Varunkumar Nagarajan – Web components and the future of web development

[Host:] Varun is a software engineer and a blogger, and he's going to be talking about how you actually do web development today, in 2013: how do you use components, and how do you use modular development to build much more quickly than you can now? [Varun:] Is my audio audible at the back? Okay, let's get started. Today I'm going to be talking about web components and what the future holds for developing components on the web. My name is Varun, as I was just introduced.

This is my Twitter handle, and I'm also on Google+, so you can find me there as well. Okay. Once upon a time, the web was just a platform for creating interactive and navigable content, and mostly static content at that. People used HTML to define static text and images, JavaScript to put some animations or interactions on the page, and CSS just to apply some basic presentation.

CSS was purely a presentation-layer thing. The capabilities of the browser were very limited, and we had to depend on back-end servers to accomplish most user tasks; a lot of the heavy lifting was still done by the back-end servers. Things have changed a lot. If you look at today's web, we have the powerful combo of HTML5, CSS3 and JavaScript, and browsers are becoming more and more powerful.

We can do a lot of things on the web now; the web applications of today are almost as powerful as native applications. For example, we can build applications which work offline, so you don't need the internet: everything is downloaded locally. Your web applications can have access to your camera and other devices, and we can even have databases and a complete file system on the web platform.

We can even do real-time communication, and lots more: the web as a platform has come very far. But with great power comes great responsibility. As web applications are getting complex, as we start building more and more and putting more and more features into them, we front-end engineers need to follow better engineering practices.

Back-end engineers have been organizing complexity and modularizing their code for a long time, and they enjoy lots of facilities on the server side. As front-end engineers, we also need to start writing code which is maintainable, and to achieve that, the code should be modular, encapsulated and reusable.

Basically, what this means is that we should start building reusable components which anyone can just plug into their page and use. Some examples of reusable components: the Google +1 button and the Facebook Like button. All these things are components; you take them and you put them on your page.

Okay. As I said, back-end engineers have been achieving some of these things using object-oriented languages; any such language will inherently help you get there. But how do we get there on the web platform? Before that, let's see what's actually missing on the web platform today. Go back to the same example of the Facebook Like button and the Google +1 button, and say both of them provide some sort of commenting capability: they show a text box or some way to share the current page.

There is a chance this could lead to name collisions. Whenever you have both these components on a page, one from Facebook and one from Google+, there is a very good chance they collide: a comment text box within the Facebook component can end up interacting with the Google+ component.

We can't do any sort of DOM encapsulation today, and that can lead to problems like collisions of DOM content: duplicate IDs and things like that, and potentially broken styles. Say Facebook tries to apply some style to its text box; that style might get applied to Google+'s text box as well. Of course, there are ways of overcoming this, with conventions like giving everything a Facebook prefix or some suffix, but the web as a platform inherently doesn't provide any capability to prevent these problems.
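To make the failure mode concrete, here is a minimal sketch; the widget markup is invented for illustration and is not the real Facebook or Google code:

```html
<!-- Two naive widgets on one page -->
<div class="fb-widget">
  <style>input { border: 2px solid blue; }</style> <!-- leaks to the whole page -->
  <input id="comment-box" placeholder="Comment via Facebook">
</div>
<div class="gplus-widget">
  <input id="comment-box" placeholder="Comment via Google+"> <!-- duplicate ID -->
</div>
<script>
  // getElementById returns the first match only, so the second widget
  // can never reliably reach its own text box.
  console.log(document.getElementById('comment-box').placeholder);
</script>
```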

That's what we're going to look at today. As web applications get complex, a set of frameworks has become popular: MVC frameworks, or more generally MV* frameworks. These help to some extent; they help us organize the complexity in our application, and with any of these frameworks we're able to separate the content from the presentation layer, at least partially.

There are so many MV* frameworks available that I'm not going to get into any particular one; I'll just talk about how, in general, any of these frameworks works. Any MVC framework will provide you an option to define views. Views are templates; that's essentially the presentation layer.

Then you associate a model with your view: you create a model object and associate it with a view, and the framework has to provide a way of monitoring changes. All these libraries provide some sort of mechanism for data binding: whenever the model object changes, it should reflect immediately in the view, and whenever the view changes, it should reflect in the model object.

That's the principle on which any of these MVC frameworks works. Let's see how they're able to achieve that on today's web. First, let's look at how templates are done today. There are various mechanisms for creating a template, but one commonly used method is creating DOM which stays off screen: you create it and either hide it or set its display to none.

This is how we do it: we generally create a div element or something similar which is composed of all the component's parts, hide it initially, and change its visibility whenever we want to show it on screen. One obvious advantage is that we're working directly with DOM elements: it's declarative, so you don't have to use JavaScript to define any of this.

It's declarative, but it has some obvious problems. The moment you create this hidden template, whatever resources it references are preloaded: the images and everything get fetched at that very moment. The browser doesn't wait for you to actually start consuming the template.
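A minimal sketch of this off-screen-DOM approach; the element names and markup are my own illustration:

```html
<!-- Hidden "template": declarative, but the image is fetched immediately -->
<div id="user-card-template" hidden>
  <h2 class="name"></h2>
  <img src="avatar-placeholder.png" alt="avatar">
</div>
<script>
  // To use it, clone the hidden markup and reveal the copy.
  var tpl = document.getElementById('user-card-template');
  var card = tpl.cloneNode(true);
  card.removeAttribute('hidden');
  card.querySelector('.name').textContent = 'Varun';
  document.body.appendChild(card);
</script>
```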

That is one problem, and it doesn't solve any of the encapsulation problems I talked about, like when the same name or ID is used somewhere else. So that's the first method. On to the second method... [Audience:] On the previous slide, you had hidden as a value there, as an attribute. Is it valid? [Varun:] Yes, it actually works if we simply put hidden in the DOM.

[Audience:] Okay, that's new; I didn't know about that. [Varun:] The only difference between that and display: none is that this will still occupy the space: say your div is 100 pixels wide, it will still occupy that space. That's the difference between hidden and display: none. [Audience:] So this effectively does the equivalent of visibility: hidden. [Varun:] Yes. Now, on to the second method of creating templates today: we can use strings.

Whatever content we want to put in the template, we can keep as a string; the other commonly used technique is putting that content inside a script tag and overloading the behavior of scripts: you set some type other than JavaScript so that the browser doesn't execute it. The problem here is that we get into string parsing, and that can lead to cross-site scripting attacks and things like that.
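A sketch of the script-tag technique, with a deliberately naive hand-rolled interpolation to show where the parsing risk comes from; the type value and the {{name}} syntax are just conventions I picked for illustration:

```html
<!-- Overloaded script tag: inert because the type is not JavaScript -->
<script id="user-card-template" type="text/x-template">
  <h2 class="name">{{name}}</h2>
</script>
<script>
  // The template arrives as a string, so it must be parsed and
  // interpolated by hand, which is where XSS risks creep in.
  var source = document.getElementById('user-card-template').textContent;
  var html = source.replace('{{name}}', 'Varun'); // unescaped!
  var container = document.createElement('div');
  container.innerHTML = html;
  document.body.appendChild(container);
</script>
```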

This opens up a new set of problems. So we have two methods today; we can roughly classify the techniques into these two buckets, and both of them have their own problems. That's templates today. Now let's see how models look, what's there on the JavaScript side. In JavaScript things are a little better: we can organize the complexity to a better extent because of the module pattern.

Generally, when we create a model object in JavaScript, we tend to use the module pattern or a JavaScript object constructor. If you look at the implementation of any MVC framework, like Backbone, what they do is provide you with one base class for your model, and you have to extend that base class to create your own model class.

The reason for that is that whenever you change any property of your model, the framework needs to know about the change, and inherently there is no way to monitor changes on a plain object. I'm talking about today's web; things are going to change in the future, but in today's web there is no easy way to monitor an object for new or changed properties.

The way these libraries achieve it is by providing you a wrapper object: you use the accessors of that wrapper object, and once you go through them, the library gets to know that you're trying to modify a property. That's how model changes are handled today. Now for the other side of the binding: whenever the view changes, how do we map it back to the model? For that we can use DOM mutation events or some of the event handlers, but these are not very efficient; I have a later slide which talks about the better approaches.
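A rough sketch of the wrapper-accessor idea; this is a simplified imitation of what Backbone-style models do, not Backbone's actual code:

```js
// Simplified model with explicit accessors, so every change is observable.
function Model(attributes) {
  this.attributes = attributes || {};
  this.listeners = [];
}
Model.prototype.get = function (key) {
  return this.attributes[key];
};
Model.prototype.set = function (key, value) {
  this.attributes[key] = value;
  // The library only knows about the change because set() was used.
  this.listeners.forEach(function (fn) { fn(key, value); });
};
Model.prototype.onChange = function (fn) {
  this.listeners.push(fn);
};

var user = new Model({ name: 'Varun' });
user.onChange(function (key, value) {
  console.log('re-render view:', key, '=', value);
});
user.set('name', 'Varunkumar'); // triggers the view update
```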

So even though we're organizing the complexity through these MV* frameworks, some problems still inherently exist today. Let's see how we can achieve encapsulation. Encapsulation is actually a fundamental concept of object-oriented programming; it helps us separate the code we write from the presentation layer.

On the web, we also have this to some extent: we can achieve it with the help of iframes. Iframes help us abstract things away and make them quite secure, but they have their own problems. For example, an iframe doesn't resize depending on its contents: if your content is big, it doesn't resize to fit.

So there is a new spec coming up, called the seamless iframe. It's actually an attribute which you set on the iframe, and it will help solve some of these problems. It's still very new and not available in all the browsers, but it's something you can watch out for.
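The markup for this is minimal; a sketch of the proposed attribute (it never shipped widely, so treat it as historical):

```html
<!-- Proposed seamless attribute: the iframe renders as if its content
     were part of the parent page (inherited styles, no border). -->
<iframe seamless src="like-button-widget.html"></iframe>
```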

What we have seen so far is the principle behind any MVC framework, what its different components are, and how we achieve all of these things on today's web. And of course, there are lots of problems: inherent problems in the web platform. So how are we going to solve them? Web components are the solution to these problems. It's not a single API; it's a set of APIs, each of which provides some of the functionality. Let's look at them. One of the key players of web components is the template: a native HTML way of defining a template, where you define an inert chunk of cloneable DOM and use it later.

I'll talk about each of these things in detail. Web components will also provide you a mechanism to create custom elements. Currently we only have divs, spans, paragraphs and things like that; if you want to create, say, a tab container natively in HTML, there is currently no way to do it, so we use some sort of JavaScript library to get there. Using custom elements, you will be able to create a tab-container element.

Or any element, for that matter, and later you'll be able to use it declaratively. The next one is shadow DOM. Shadow DOM is actually the building block of encapsulation; it will help you hide some of the implementation details from the consumer. We'll see all of this in detail. And there are supporting pieces: we have something called style encapsulation, which will help you confine a style to a particular scope.

We'll see all of these things; let's start with templates. As I said, this is something which is not available in browsers today, but there is a spec for it and browser vendors are working on it, so we can soon expect it in different browsers. How it works: we use the template element, give it an ID, and define whatever DOM elements go inside it.

So again, we're talking about a declarative way of defining the component. [Audience:] A quick question: if you say scripts don't run in your template, what happens with browsers that don't understand template? [Varun:] Currently, they render it inline. [Audience:] Okay, so basically that means you now need a way to preserve browser compatibility in something that is supposed to be inert.

[Varun:] Am I covering that? No, actually, I'm not covering that, but I'll try to see if I can link to it somewhere. [Another audience member:] To Jason's point, I think the way they will maintain backward compatibility will be exactly like they did for the new HTML5 elements: there will hopefully be styles that hide it by default, and then we'll have to script it to an extent so that we don't end up with things being visible in old browsers. You know, you can just put display: none on template and hide it by default in old browsers.

[Audience:] But what do you do with scripts? How do you tell the browser not to run scripts? display: none isn't the only problem: it will still fetch all the image resources; a whole bunch of issues arise when this happens. [Varun:] Yeah. So the advantage here is, again, that we're working directly with DOM, so it's a declarative way of doing things. And the other advantage is that the contents are inert: they're not rendered and not executed, so if you have any images or anything within your template, those won't get loaded.

This is how you have to use it: you select the template you've defined, and template.content gives you a document fragment holding whatever you defined within that template. You get hold of that content, clone it, and attach it anywhere: wherever you want to use the template, all you have to do is clone the content and append it to the place where you want to insert it.
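A minimal sketch of that usage; the template's contents are my own placeholder:

```html
<template id="user-card-template">
  <h2 class="name"></h2>
  <img src="avatar-placeholder.png" alt="avatar"> <!-- not fetched while inert -->
</template>
<script>
  var tpl = document.querySelector('#user-card-template');
  // .content is an inert DocumentFragment; importNode deep-clones it.
  var clone = document.importNode(tpl.content, true);
  clone.querySelector('.name').textContent = 'Varun';
  document.body.appendChild(clone); // only now are resources loaded
</script>
```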

This solves all the problems we discussed: we're not doing any string parsing, and scripts and resources aren't loaded prematurely. There is, of course, the backward-compatibility issue, but other than that it solves all the problems natively in HTML5 itself. Okay, next, let's see what shadow DOM is. Shadow DOM is currently available in Chrome Canary; I'm not sure whether it has landed in stable Chrome, but it's already in Canary, and I'm using Chrome Canary.

The concept itself is a separate topic; I'm going to cover some of the basics, how to define insertion points and a few other aspects of it. And it turns out this is not something entirely new: some DOM elements we're already aware of do this to some extent. For example, look at an audio or video element, or at the date and time input elements: they're composed of a complex set of controls.

It's not a single control: you have a slider, a timeline, and then the control to pause and play. These are all different components. But when you actually inspect these things... let me go there. What you're actually seeing here is a drop-down and a spinner: there's a whole set of complex DOM elements in there, but when you inspect the element, all you're seeing is a single input.

Essentially, what browsers are doing is hiding some DOM nodes within certain other DOM nodes. This has been in browsers for some time; it just hasn't been exposed to end users. Shadow DOM is the specification, the API, which lets us do the same: we can also hide certain DOM nodes within other DOM nodes. Okay, so this is how it works.

Let's say you have the initial DOM tree, something like this: you have a host and certain children, and let's say you want to abstract away this host node and render some other content instead of the children it currently renders. Let me just jump to the example. In what you're seeing at the top, I'm defining a div, the host, and within it I have certain other elements: it has an h1 showing my name and my place.

And some other div content; that's the original content which is there in the DOM tree. Now I define a shadow root. Currently you have to use the webkit prefix to get there: there is a method called createShadowRoot, so what I do is select the host, and for that host I get the shadow root.

Once I get the shadow root, I can add any elements to it; here I'm just putting in an h2 and some other div content. When it's actually rendered in the browser, instead of seeing the original content, you'll be seeing the shadow content. Let me show the demo; let's see what we're seeing. Is this visible at the back? Okay, I'll just zoom in.
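A sketch of what the demo code looks like with the prefixed API of the time (later standardized as attachShadow); the content strings are mine:

```html
<div id="host">
  <h1>Varun, Bangalore</h1> <!-- original content, replaced on render -->
</div>
<script>
  var host = document.querySelector('#host');
  // 2013-era prefixed API; modern code uses host.attachShadow({ mode: 'open' }).
  var root = host.webkitCreateShadowRoot();
  root.innerHTML = '<h2>Hello from the shadow DOM</h2><div>hidden internals</div>';
  // The page now renders the shadow content, while the inspector still
  // shows the host's original children.
</script>
```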

Okay, I'll just read out what's there. What you see in the inspector is still the original DOM content: it still shows the host's ID, and within it the h1 element and the h2 showing the name, the place and things like that. But what actually got rendered is something different. You have a shadow host: a host which has certain children initially.

But you have created a shadow root and added some content to it, and when you attach it, the rendering gets replaced: whatever content you initially had in your host is replaced by this new content. To top this off, in Chrome developer tools there is a way to actually see what is being rendered. There is an option called 'Show shadow DOM'; once you enable it, instead of only the original content, you'll also be able to see the shadow DOM.

For that we need to close and reopen the dev tools. Now if I inspect, and I'll read it out again: apart from the original contents of that host, you're now seeing the shadow DOM, the content that replaced it. That's how it works: you have to enable 'Show shadow DOM' in your dev tools. [Audience:] I have a question.

[Audience:] Is there a way to define the shadow DOM declaratively? Once you get that shadow root object, rather than doing it in JavaScript, can I do it in HTML? [Varun:] You can do that with custom elements; I'll be talking about them. Once you define a custom element, everything you put inside it is effectively shadow DOM. Let me quickly run through. Now, coming to the style encapsulation part.

Say I add a style tag within this shadow DOM: that style will be applicable only to my shadow DOM; it's encapsulated within that scope. Here I'm setting the h2 color to red, and as you can see, it applies only to the inner content; it's not reflected on my host.

There are properties with which we can change that behavior; I'll come back to them. Okay, so this is an important point. We've seen how to set a color, how to style your shadow DOM. Now, in practice, instead of defining the color directly, you'll want to build a theming capability into your shadow DOM.

That's where CSS variables help. Instead of putting in the actual color itself, we use a variable, and whoever is consuming the shadow DOM can set that variable; whatever value they set is what gets used within the shadow DOM. So you can use shadow DOM in conjunction with CSS variables.
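A sketch combining the two ideas. Note that the draft syntax of the time wrote variables with a var- prefix, where current CSS uses the double-dash form I've used here:

```js
var root = document.querySelector('#host').webkitCreateShadowRoot();
// Styles added inside the shadow root stay scoped to it.
root.innerHTML =
  '<style>h2 { color: var(--title-color, red); }</style>' + // themeable hook
  '<h2>Hello from the shadow DOM</h2>';
// The consumer themes the component from outside, e.g. in the page CSS:
//   #host { --title-color: green; }
```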

And one more thing: we had some initial content, we added a shadow DOM, and the content got entirely replaced. Instead of that, there is something called insertion points. If you don't want the complete contents to be replaced, if you want only certain parts of your original DOM to be used here, you can use the content element. It has a select attribute, and in the select attribute you can specify any CSS selector.
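A sketch of an insertion point using that era's content element (the host markup is my own; the modern equivalent is slot):

```html
<div id="host">
  <span class="first-name">Varun</span>
  <span class="last-name">Nagarajan</span>
</div>
<script>
  var root = document.querySelector('#host').webkitCreateShadowRoot();
  // <content select="..."> pulls the matching host nodes into the shadow tree.
  root.innerHTML = '<h2>Hi <content select=".first-name"></content>!</h2>';
</script>
```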

Here, this is my original host, and in the host I have something called first name, last name and so on. In the shadow DOM I use content with select set to the first name: it uses the CSS selector to pick that element from the original host. This is how it will be used in practice; it's more like a way of specifying an API into your shadow DOM. Now, let's look at observers. What we currently have is mutation events: whenever you change the DOM, whenever you modify it or add something to your DOM tree, you get notified through events. I said on an earlier slide that this is not very efficient.

That's where mutation observers come in. They're somewhat similar to events, but instead of getting notified for each and every change, when a whole bunch of DOM nodes get modified you receive everything in a single callback: one callback in which you get all the mutations. So it's very efficient compared to mutation events. We'll see that with an example.

Okay, so I built a small example today: Team India is taking on Australia, so this is the 15-member squad, and this simple thing sorts them and gives you the top 11. Initially I'm using DOM mutation events. When I click rotate, it adjusts all 15 players and rearranges them, and in that process two thousand events get fired.

We performed some DOM manipulations roughly 2,000 times while rearranging the players, and the callback got triggered two thousand times. Now let's see the same use case with mutation observers: I still get two thousand mutations, but I get all the changes in one callback.
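A sketch of the observer version of the demo; the #squad selector is hypothetical:

```js
// One callback delivers the whole batch of changes.
var list = document.querySelector('#squad');
var observer = new MutationObserver(function (mutations) {
  console.log(mutations.length + ' mutations delivered in a single callback');
});
observer.observe(list, { childList: true, subtree: true });

// Rearranging all 15 players now fires the callback once,
// instead of firing a DOMNodeInserted event for every move.
```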

That's how it's efficient. Also, since it's not an event, it doesn't need to propagate all the way up the tree; you just get all the mutations in a single callback. Another interesting thing coming up is the object observer. This is similar to a mutation observer, but for objects: if you have a JavaScript object and you want to monitor it for changes, that's now possible.

I think it's already available in Chrome Canary. This is how you use it: you call Object.observe with the object you're actually observing and an observer function, a callback. Whenever you add any new properties or change some property, it gets notified. Again, the important thing to note here is that it's not triggered for each and every change.
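A sketch of Object.observe as it was proposed at the time (it shipped behind a flag and was later withdrawn from the spec, so treat this as historical):

```js
var model = { name: 'Sachin', runs: 0 };

// The callback receives a batch of change records, not one call per change.
Object.observe(model, function (changes) {
  changes.forEach(function (change) {
    console.log(change.type, change.name, '->', model[change.name]);
  });
});

model.runs = 100;       // both of these changes are delivered together,
model.role = 'opener';  // in one asynchronous callback
```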

Say you're setting the name and then setting the Twitter handle on a particular object: the browser's JavaScript engine decides when to call you; it clubs certain operations together and calls the callback in a single go. So now we have seen how to create templates natively; with the help of shadow DOM we're able to encapsulate things, and with observers we're able to bind the model and the view.

Now that we have all these things, how do we create a custom element? Let's say you want to create something called x-tabs: there is something called the element tag for this, which is still not available natively. And I'll come back to your earlier question: you were asking whether we can declaratively define a shadow DOM.

Well, whatever content you put inside that template is effectively a shadow DOM, so when you consume it via x-tabs, you won't see any of the implementation. For example, there you see a content element whose selector picks the first h1: that is an insertion point, and you're setting it declaratively. Once you do that, when you consume the element, you can declaratively pass in an h1 title and it gets placed there. And apart from the actual template and the shadow DOM, you can also put in some scripts, where you can specify an API.
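A loose sketch of the draft syntax the talk refers to; this element form was later dropped in favor of customElements.define, and the tab markup is my own invention (details varied across drafts):

```html
<element name="x-tabs" constructor="XTabs">
  <template>
    <style>h1 { color: red; }</style>   <!-- scoped to the component -->
    <content select="h1"></content>     <!-- insertion point for the title -->
    <div class="panels"></div>          <!-- internals stay hidden -->
  </template>
  <script>
    // The named constructor lands on the global scope, so consumers can
    // also instantiate or extend the component from JavaScript.
    XTabs.prototype.selectTab = function (index) { /* ... */ };
  </script>
</element>

<!-- Consumption: implementation details are invisible to the page. -->
<x-tabs><h1>My tabs</h1></x-tabs>
```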

So whenever somebody wants to instantiate the component using JavaScript, they can still do it: whatever constructor we specify there goes on the global scope, so you can instantiate it from anywhere. To get these features today you need to enable some flags; those are the things you have to do to use them. Okay, so this is a doodle I built a while back. It's not an image: it's completely CSS and HTML.

There are no images used here; all these animations are CSS-based, and just for the mouse-following I used a JavaScript library. Now, what I have done is make this into a component. Again, today browsers are not supporting many of these standards; shadow DOM and the observers are available, but not all the things we've discussed. But you can still use them through some of the polyfills.

There is a Mozilla library called X-Tag, and there is one more polyfill, the MDV (model-driven views) framework, I think from Google. Using some of these things, you can play with these capabilities today itself. This is something I have built, and if you look at the actual DOM, the way I use it is the element x-doodle, and inside it there's just a small amount of DOM content.

If I want to instantiate one more instance of this component, all I have to do is add it to the body. Let's see: I appended it as a string there instead of the element; okay, there we go, it's given us one more instance. So now, anywhere I want to use this component, I can just use this custom element. Even today you can enable yourself with some of the polyfills, and I'm hoping that all browser vendors will soon be providing these features, which will help us organize our code better.

And we'll be able to natively achieve the things that some of the MVC frameworks have been achieving so far. These are some of the references. That's it from my side; any questions? [Audience:] What is the performance impact of these components? [Varun:] I haven't actually looked into the performance aspect of it. This is very new; not all browsers are supporting it, and only the shadow DOM and style encapsulation features are available, in Chrome Canary.

There is also an excellent library from Mozilla, the Firefox people, called X-Tag; you can use it to try some of these standards. But I haven't really done any performance analysis. [Audience:] Is there any limitation on the code we write? It seems like we have to encapsulate the JavaScript, HTML and CSS all together; is that limited to one single file or something? [Varun:] No, there is nothing like that.

All it mandates is that you follow the structure: you need to create a custom element tag and put your shadow DOM within it. That's all it mandates; the code can be spread across multiple files. There is no mandate on how you organize your source code; you can keep it in a single file or in different files.

[Audience:] Okay, so basically it's the same as what we're currently doing: when you're loading a page, you have to load all the supporting files and everything, but the way of creating the DOM, or creating the object, will be different. [Varun:] Exactly. Say we have this component and somebody else wants to consume it: I will provide a single file, and all the other person needs to do is include it with a link tag, with the rel attribute for components. Once that is done, he'll be able to use it.

This is how the consumer needs to use it: he adds a link tag pointing to the HTML file where the original definition of the custom element lives. Once you have that, you have the definition, and after that you can start consuming it.
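A sketch of that include; the very early drafts used rel="components", which later shipped in Chrome as HTML Imports with rel="import" (and has since been retired in favor of ES modules):

```html
<!-- Pull in the file that contains the element/template definition -->
<link rel="import" href="x-doodle.html">

<!-- Once imported, the element can be used declaratively anywhere -->
<x-doodle></x-doodle>
```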

Any other questions? For following this work, there is a very good Google+ page where lots of talks and information about the specifications are posted. You can follow that, or you can follow Eric Bidelman; this presentation itself was inspired by his talk. And there is a very nice article about shadow DOM on HTML5 Rocks called 'Shadow DOM 101'; it covers the basics of shadow DOM, and there are companion pieces about mutation observers and so on. [Audience:] Do you see people using this instead of templating as we do it right now?

[Varun:] I haven't seen anybody using it in production; it's not even close to that. But as the spec gets more standardized and as more browser vendors start shipping it, this will probably be the future. [Audience:] All right. It seems like a very neat way of essentially taking a single, often-repeated component and inserting it in multiple places without changing the code.

[Audience:] Now, when you do that, can you make each of them slightly different in their behavior? [Varun:] That's what the APIs are for. Apart from declaratively defining the actual component, you can also write some JavaScript that will help you change the behavior. So here, it's the same tab-controller component, but now I'm adding a constructor to it, and I can define certain methods within it.

So when you instantiate the tab controller, you can set certain attributes or properties and change the behavior. [Audience:] Okay, so you can actually run a script each time you use a component. And did I just notice a style tag in the section there, on the previous slide, a style tag which is not in the head? [Varun:] Yes; let me explain.

Basically, I define an element, so the whole HTML contains only this element declaration, and within it I put a style; this style is scoped only to this element. [Audience:] Right, so how does the style encapsulation work; what exactly does it inherit from the parent? [Varun:] It doesn't. That is what I was saying: if you want to override that behavior, there is something called resetStyleInheritance. Let me go back to that slide.

It's called resetStyleInheritance. By default, any style you put inside a shadow DOM doesn't inherit anything from the parent, and whatever styles you put in there don't leak back out to the consuming document. If you want to override that behavior, there are two properties: resetStyleInheritance and applyAuthorStyles.

The first one controls whether styles are inherited from the containing page, and the second one works in the other direction; both default to false. For example, here I have styled the h2, and this 'Style encapsulation' title, which is also an h2, is currently not affected, even though I have set the style on the content.
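For reference, both flags live on the shadow root; a sketch (these properties were later dropped from the spec in favor of CSS selectors such as :host):

```js
var root = document.querySelector('#host').webkitCreateShadowRoot();
// Old shadow DOM flags controlling the style boundary (both default to false).
root.resetStyleInheritance = true; // reset inheritable properties at the boundary
root.applyAuthorStyles = true;     // let the page's author styles apply inside
```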

Let me show you. Because shadow DOM is already available, I can demonstrate the default behavior: the style is scoped, limited to that shadow DOM. Even though I have styled this h2, it's getting applied only to this element: this is the shadow DOM which is rendered, and the style applies only to this, not to anything else I have on the page.

For example, this is the title of the page, which again is an h2 element, but it's not affected. I think that shows it. [Audience:] That's pretty cool, thanks. One more question: that style tag, I believe it was inline there; can it be an external file as well? [Varun:] It can be external as well.

[Audience:] And you said it doesn't parse the template on page load; does it also get downloaded on page load, or does it get downloaded when it's instantiated? [Varun:] Okay, I'll have to check; I'll check and get back to you on that. [Host:] Okay, we're going to take another break now. For those who still have questions, Varun is going to be around; the rest of you we'll see at four fifteen.


 


Hitchhiker’s Guide to Generative Adversarial Networks (GANs): Ramanan Balakrishnan

Today it's about the Hitchhiker's Guide to Generative Adversarial Networks. A very fancy name, again; GANs seem to be very popular this year too. But let's get started, and let's see how far we can get with the basics. Most of you might have heard of us: I'm from Semantics3, and we work with e-commerce focused companies.

We do a lot of machine learning, algorithms on data primarily, and also intelligence layers on top of them. We help cover a lot of automated tasks, like categorization, product matching and automated entity recognition for products, and of course a whole slew of offerings around that; product matching and distributed crawling are among our strong points. But I guess this is the part you might be interested in.

The rest of the talk is committed to it, don't worry. So, okay, let me give you an overview of how today is going to be structured. We're going to start with a generic overview: why do we study generative networks, what do I mean by generative networks, and what do they involve. Then there's a slightly theoretical part; don't leave, there will be a little bit of math, but it should be interesting: the fundamentals of how adversarial networks are set up, how GANs themselves are structured, an introduction to the system.

And then, this is why I didn't want you to leave: there have been huge developments in the last year, and I'd like to motivate them through applications and how they can really be visualized, so there will be a whole bunch of demos. Seeing that this is a guide, there are links everywhere in the slides, which will hopefully point you in the right direction. And finally: GANs are very promising, but they are not going to be the solution to everything that you see.

So we'll look at the issues which are commonly faced and, of course, the improvements and the path forward in the research being conducted. The mic still works, so on with the slides. Okay, let's look at generative networks: what do we mean by generative networks? Let's get everyone active here. On the left we have what I think is a volcano; the one on the right looks like hills.

Can you guess which one was drawn by a computer and which was done by an artist, a real human? Left? Right? Anyone? The artist is on the right. Yes, that's Marvin from Douglas Adams; Marvin also points out the artist is on the right. You might be suggesting that it's too abstract, 'I don't get it, I'm not very artistically inclined', but note the teapot on the right side.

Let's try something harder. Again, one of these was drawn by the bot, and honestly I don't remember which one was which. Can anyone say? Left? Right? Computer on the left. Very good, well done again. That's from a study which was conducted, and it turns out people find it quite difficult to tell them apart. So let's go to insane mode, and at this point I don't know either: both of these look very similar.

One is a very expensive painting and the other is thirty seconds on an ordinary processor. So let's look at this; it turns out it's the one on the right. If you look at most of my slides, there's a note about where the sources are hosted, on the website; just click and you'll be able to see the papers being referenced. This one came out very recently, in May this year.

I think it was Elgammal's group's study, and a lot of people voted for the wrong paintings. So that's one way generative networks get pitched: you essentially want to be able to do forgeries of high-end artwork and make money that way. More seriously, why do we want to study generative networks? We want to be able to understand and model complex information. We want to study not just classifiers, not just discriminators, not just chasing performance numbers.

We want to be able to understand whether there's an underlying structure behind the data. And most of the time there is high dimensionality in the objects being studied, so the probability distributions are no longer simple functions of one or two variables, whatever we want to model. What if the dimensions are in the hundreds or thousands? You want to be able to model that correctly, and maybe that helps us go further: as in the previous example, we don't want to just model the data.

We want to maybe generate additional samples based on some criteria. And I think this ties in to the previous talk on reinforcement learning, because one of the proposed uses of GANs is to augment the environment: say you have a reinforcement learning agent and the agent starts learning; maybe GANs can help simulate the environment in which the agent learns. So they tie in together quite nicely. Generative networks, then, are good to study; let me motivate how they are going to be structured.

I'll take a very simple example. Let's take these points, a whole bunch of points on this distribution; these are the points we want to model, so think of them as data points whose structure we're after. One way to do it, something very popular, is maximum likelihood: wherever we see the points being recorded, we adjust our probability distribution function upwards.

Later, if you look at the blue line, it models how the points are distributed in that particular dimension. This is very common, maybe the most popular way generative functions are formulated mathematically: you define a probability density function p, based on parameters theta, and the model gives a sample x conditioned on, or strongly dependent on, those parameters.
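In symbols, the standard maximum-likelihood objective for those parameters (my own rendering of the slide):

```latex
\theta^{*} \;=\; \arg\max_{\theta}\; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log p_{\text{model}}(x;\theta)\right]
```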

We want to optimize these parameters: gradient descent, backpropagation, those sorts of operations, but basically you optimize the parameters theta so that p comes out right. From p we get the distribution, and from there you get the points. So there are two approaches: p can be either explicitly defined, or defined only implicitly.

The explicit density functions are the ones on the left-hand side. There's a whole family of approaches there, and at the leaf nodes you see all the popular ones; I'll focus on just a few for now, so that you get an understanding of how everything is set up. When you look at an approach which aims to get the explicit probability function, you can either do it with a tractable method...

If you look at DeepMind in the last one or two years, they came out with WaveNet. That was where you enter text and it automatically speaks it in very realistic audio; the person speaking sounds like a real person, not like the old robotic text-to-speech voices, and the audio is built up literally sample by sample. WaveNet is based on this sort of approach: the fully visible belief nets, which model the explicit density function. The problem is that it's based on conditioning on every single previous input, like the sequential learner it is.
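The chain-rule factorization behind such fully visible belief nets, which is what forces the sequential generation:

```latex
p(x) \;=\; \prod_{i=1}^{n} p\!\left(x_i \,\middle|\, x_1, \ldots, x_{i-1}\right)
```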

Another very popular approach is the variational autoencoder, where instead of aiming to get the perfect p, we set up another function which is a lower bound on it; call it L. That variational bound is proven to be tight in some situations, though not all.
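For reference, the usual form of that lower bound (the ELBO), in standard VAE notation:

```latex
\log p_{\theta}(x) \;\ge\; \mathcal{L}(\theta,\phi;x)
\;=\; \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[\log p_{\theta}(x \mid z)\right]
\;-\; \mathrm{KL}\!\left(q_{\phi}(z \mid x)\,\|\,p(z)\right)
```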

But the idea is that you optimize that function instead, and variational autoencoders are at the stage where they tackle a lower bound on the explicit density function. That's the left part of the tree. Now, on the right-hand side, you have approaches which are very similar but do repeated sampling: the Markov chain methods. For now, FVBNs and variational autoencoders are the popular ones, because they've been giving very good results. Then there are the implicit density functions.

What do I mean? Instead of assuming a form for p, or aiming to get p, the probability density function, we just assume that it exists. We say: okay, there might be a p; I don't care about the probability density function itself, I instead just try to get the samples out of it. In that sub-segment of the tree you have the generative stochastic networks, which use Markov chains, and the direct methods; that's the aspect this talk is going to be centered on.

So we're going to look at GANs in this context. If you start from generative networks in general, GANs sit in the implicit, direct branch: they never write down the probability density function at all, yet samples just keep coming out. That's how GANs are supposed to work. Ian Goodfellow's very popular tutorial is at the link; you should check it out, it's much more than I can fit in forty minutes.

So how do the approaches compare? I mentioned WaveNet, which is a very popular model; unfortunately it takes about two minutes to generate one second of audio. You can see the issue: it gives very good results, but the computational complexity is a problem. Maximum-likelihood methods come with guaranteed asymptotic consistency; with lower bounds, as in autoencoders and VAEs, you only expect to optimize the bound, so they don't give you the full result.

Eventually you hopefully get there, but the guarantees are not a hundred percent. And Markov chains, because of their consistent resampling requirements, take a long time to converge; if you play with Markov chains, you realize you never know how many iterations you'll need. Again, descriptions are at the links below; the slides are not exhaustive. These are also based on existing results, and GANs are not going to be perfect either.

This is just an example of why GANs do not suffer from these particular problems; later on, toward the end, we'll see the problems they have of their own. So now we can jump into it: generative adversarial networks. Just to give you an idea, these are the statistics over the past year: the cumulative number of GAN names. People keep coining names, so somebody decided to plot a chart, and even into 2017 you can see it climbing.

It's literally like a zoo: so many abbreviations that people have no idea what to name the next GAN. We are running out of abbreviations for GANs, so if you can think of one, make sure to put out a preprint and claim the name. So let's look at GANs now. Of course, it was very interesting that last week was the computer vision and pattern recognition conference, and it turns out Apple started publishing their own research papers. Their first one, which also won the best paper award last week, was on GANs: refining synthetic images.

They generate eye images for gaze estimation, and it turns out these refined synthetic images are quite effective in training their own models. So things are moving very fast; I guess before I finish this talk there might be another three or four preprints. Let's look at GANs in more detail. I'd like to just talk about the structure, how the pieces fit together, and what sort of training is involved. The key, as the name says, is adversarial networks.

We have two players here; by players I mean two models, two neural networks, two systems. For now, let's just introduce them as players: that's D and G. The reason for the names will become apparent very quickly. The adversarial game can be formulated with a game-theory framing, for those of you who have an idea of the field.

It's like a Nash equilibrium between two different agents: a local equilibrium between the two, which hopefully arrives at a stable solution. And the two names stand for discriminator and generator: one is the forger, the other is the expert, and they play against each other, trying to compete. There are also formulations where you can think of it as a cooperative game, but so far, thinking of it as two agents pitted against each other, each aiming to defeat the other, gives the easier understanding.

If you keep that framework in mind, let's look at this picture again; this is pretty much how a GAN is structured. Look at the right side first, where we have this noise. You can think of the noise as a vector space; let's just call it a latent vector space and denote a sample from it by the variable z. This z is fed into a neural network, the generator, and the generator computes a function.

Call it G(z). It starts generating samples, one example after another, based on some random noise, at least initially; it just starts generating output. On the other side, over here, you have a sample dataset, and all of these are real-life images, real examples; those are the true ones, against the generator's fake ones. These real ones are where we sample our true values.

Then, what happens is that you have this network at the top; that's our second network, which we'll call the discriminator. Very simply, the discriminator is either fed real values, where D(x) is its function on a real sample, or fed output from the generator, which becomes D(G(z)). You feed one or the other into the discriminator, and then it's a binary classification problem: is it a real image or a fake image?

This keeps happening again and again, and these two systems are what we're going to train: one of them generates, the other discriminates. Let's look at the easy one first, the discriminator at the top. It wants to determine whether each sample is real or fake: when the sample is from the real set, it wants to say one; when the sample is from the fake set, it wants to output zero.

That's the intuition of how it goes about it; this is how we want the discriminator to behave, and the loss function is just a very fancy way of saying those two lines. We define a loss function J(D) for the discriminator as a function of its parameters theta, and if you just squint real hard, you can see that D(x) is the term which needs to become one, while D(G(z)) sits inside a one-minus term. The idea is that by combining these two expectation values, we have an easy way of writing down the loss function for the discriminator.
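Written out, this is the discriminator loss in Goodfellow's standard formulation (same notation as above):

```latex
J^{(D)}\!\left(\theta^{(D)},\theta^{(G)}\right)
\;=\; -\tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
\;-\; \tfrac{1}{2}\,\mathbb{E}_{z}\!\left[\log\!\left(1 - D(G(z))\right)\right]
```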

You can think of this as the final loss which gets calculated and back-propagated through the network when updating the discriminator. So we have the discriminator; it's quite straightforward: one for real, zero for fake. Then we have the generator. For the generator, the idea is to fool the discriminator, so we can use recursive thinking about it: what does the generator need? It needs to fool the discriminator, and the simplest way is to just define its loss as the negative.

You take the discriminator's loss function, put a minus sign on it, and call it the generator's. This is sort of a minimax game, where you make sure that one of them is able to defeat the other. There are a few problems with this, because think of them as two separate networks: if the discriminator becomes optimal, then the generator stops learning, because one of the losses becomes zero, and putting a negative sign on zero still gives zero.

The generator is thereby fooled into thinking that it's no longer learning. So there is a heuristic motivation for another way of defining the loss function for the generator, and one of them is this: G(z) is the generator's output, and D(G(z)) needs to go to one. This is different, because the generator wants to fool the discriminator: it wants to convince it that the fake output is real, that D(G(z)) becomes one.

So you write a non-saturating heuristic function, and it pretty much says that the expectation of the discriminator's verdict on the generator's output needs to become one; this is how the generator thinks about it. There are other ways too. As I said, this can be formulated in a maximum-likelihood type of environment: you can recast the equation with a logistic sigmoid function on top of the output.
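The two generator losses side by side, in the same notation:

```latex
\text{minimax:}\quad J^{(G)} = -\,J^{(D)}
\qquad
\text{non-saturating:}\quad J^{(G)} = -\tfrac{1}{2}\,\mathbb{E}_{z}\!\left[\log D(G(z))\right]
```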

That's just for reference, so that you understand this is actually functionally equivalent to the maximum-likelihood estimators I was talking about; it's pretty much just a way of recasting it. So you can think of these as the two popular ways of defining a loss function for the generator. Now we have two loss functions, one for the discriminator and one for the generator. What do we do? Stochastic gradient descent, the most common thing: we take two minibatches.

We take one from the real samples x, and then we run noise vectors through the generator and get another batch of output, so we have two minibatches, one real, one fake. We pass them through the discriminator; it gives us its answers, ones and zeros, and based on that we can simultaneously run backpropagation on the two networks: you calculate the loss for the generator and use it to update the generator's weights, and you calculate the loss for the discriminator and use it to update the discriminator's weights. Each network is updated with its own loss, and this sort of joint training is what holds the whole thing together.
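One simultaneous SGD step, sketched with a single shared learning rate α for brevity (implementations typically use a separate optimizer per network):

```latex
\theta^{(D)} \leftarrow \theta^{(D)} - \alpha\,\nabla_{\theta^{(D)}} J^{(D)},
\qquad
\theta^{(G)} \leftarrow \theta^{(G)} - \alpha\,\nabla_{\theta^{(G)}} J^{(G)}
```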

For a long time after that... this was back in 2014, I think, when Ian Goodfellow and his co-authors first formulated the adversarial system, and for a long time there wasn't a very successful implementation. There was something called LAPGAN, built on Laplacian pyramid operators, and using it was slightly difficult; the whole thing didn't take off until much later, when our familiar convolutional networks, deep convolutional networks, were applied to it.

This was sometime in 2015, when a group proposed that you take your layers, make them all convolutional, remove any pooling that you have, and progressively upsample in the generator. It turns out the output of this sort of network was much clearer, higher resolution, and also easier to train. Just a few pointers here: what you see here is the network of the generator.

We have two networks, and this is the generator. Look here: it takes a latent vector and applies a whole series of transposed, fractionally strided convolutions, making the representation bigger and bigger and bigger, so you can think of the final output as an image which has been generated. This image then goes into a more traditional classifier, and that's the one distinguishing whether it's a real image or a fake image.

So this is the deep convolutional GAN, the DCGAN, which was the setup that finally made the two work successfully together. So that's about it; I hope that wasn't too much technical detail. But now again we look at very interesting applications. So with that setup of how generators and discriminators work with each other, you now have an idea of the different components involved, and let's see how that translates to real-world applications. The first one is also from the same paper I was referring to, the DCGAN paper, and when I saw this I was blown away for a few minutes.

I had to sit down and think about it. So what happened was that once the generator starts generating output, people were able to model the ways in which the outputs are related to each other in the latent space. Let me give an example here. So there was a whole bunch of pictures; all of these are generated pictures.

None of them are real pictures. So all of these pictures were generated by the generator from specific positions in the latent vector space. So you have a generator: it takes some noise-type input and gives, say, these photos of men with glasses. Then for some other point in the vector space, it gives photos of men without glasses. So what do you do? You subtract one from the other, and you can see where this is headed.

When you add the vector for a woman from the latent vector space, the noise space, the generator transforms it like this, into a woman with glasses. So if you remember the word2vec approach, where king minus man plus woman gives queen: that was with words, but it's the same idea here, where you start seeing that the latent vector is able to identify the important concepts in each image. And this was something quite interesting.
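A toy sketch of that vector arithmetic. In the actual experiment each concept vector is the average of several latent codes whose generated images showed that attribute; here the codes are random placeholders and `G` stands for a trained generator:

```python
import torch

latent_dim = 100
# Placeholders for averaged "concept" codes; in practice these come
# from latent vectors whose outputs showed each attribute.
z_man_glasses = torch.randn(3, latent_dim).mean(dim=0)  # men with glasses
z_man         = torch.randn(3, latent_dim).mean(dim=0)  # men without glasses
z_woman       = torch.randn(3, latent_dim).mean(dim=0)  # women without glasses

# "man with glasses" - "man" + "woman" ~ "woman with glasses"
z_result = z_man_glasses - z_man + z_woman
# image = G(z_result.view(1, latent_dim, 1, 1))  # with a trained DCGAN generator
```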

It's not just segments of the image, and it's not just lines and edges which are being identified; it's really the concept of the image, and this seems quite interesting in terms of how the generator networks are learning. Next is another research paper. Most of it is going to be like this, so just assume the link will be in the slides. You take a very strong, very high-quality image; this is a very good painting.

Then you downsample the painting. Say it's a few thousand pixels; you make it much smaller, and then you put it into Photoshop and run bicubic interpolation or some similar simple method to scale it back up. I hope it's visible here, but this is very blurry. That's how this photo turns out: when you approximate each point by its neighbours, you get a blurry result.
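That blurry baseline is easy to reproduce with Pillow; the file name here is a placeholder:

```python
from PIL import Image

# Shrink a high-resolution image, then scale it back up with bicubic
# interpolation. The result is noticeably blurrier than the original.
img = Image.open("painting.jpg")
small = img.resize((img.width // 4, img.height // 4), Image.BICUBIC)
restored = small.resize((img.width, img.height), Image.BICUBIC)
restored.save("painting_bicubic.jpg")
```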

However, it is possible to take this sort of blurry, downsampled image and run it through something called a super-resolution GAN, an SRGAN. You run it through, and it generates much clearer images. So it starts giving high-resolution images which end up being a significant improvement over what was previously possible. Super-resolution seems to be possible now. Next: the interactive GAN.

I guess all of us went to school at some point. Remember those art classes where you draw a painting and everyone draws the same scenery: a little sun, two mountains, and if you're really fancy you draw birds and a house in front. So again, it looks like the GANs are beating us at it. So over here, what you see is actually a person conveying his intent, working over the noise vector space again, by simply indicating what colours they want where.

They're able to do things like real-time editing of images. So by simply drawing a white line, he adds snow to the mountain; by drawing green lines over here, it becomes like a field. The interesting part was that each of these updates was of the order of seconds, so we are literally able to get real-time edits of the images. So that's the interactive GAN. I think there was another impressive article where they took a photo of a lady and started adding black colour to her hair.

The hair then changed from blonde to black; again, a very impressive result. So these are rough edits, but it actually starts generating high-resolution images once the edits settle down. Next: image-to-image translation, again very popular; most of you might have already seen it. It's also referred to as pix2pix, and this is sort of what's being done: on the left-hand side we have inputs, and on the right-hand side are the expected outputs from the generator.

This is not a real picture of a street with cars on it. A person describes how it should look through segments, and then the network starts modelling it and outputs images which could fool people. Similarly, if you have aerial photographs, say from a satellite taking photographs of the earth, then you can see maps being auto-generated from them. It can go even further.

It can even start doing impressive things that are normally done in Photoshop: converting from black and white to colour, or going from one scene style to another, or, if you're an architect and you design a house, adding in where the doors and windows go. It may look impressive here, but if you zoom in there might be something very weird about it; I guess that's part of the process of how these networks are learning.

Of course, you can do things like this, where you draw a rough sketch and then it gives a realistic interpretation of how it might look in the given domain. There's a website over here; you can go and try it. They have other very impressive models, not just the sketches shown here. Next: text-to-image synthesis. A few years ago there was a very popular article where you give it a photo, and it starts describing the photo automatically.

I think Facebook even does it now: if you upload a photo to Facebook, it starts describing it, adding keywords which describe what's happening in it. We'd like to do it in reverse here. So what we do is we start writing sentences, and these sentences condition our input: when we feed the generator, we add additional input to the noise vectors. The text vectors, encoded into their own vector space, start determining how the output image is going to look. Even I couldn't draw this, right? You start by describing, say, a bird with a red breast and black feathers, and it comes out with realistic photos of a bird standing there, matching the description.
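One simple way to realize this conditioning is to concatenate the text embedding with the noise vector before generating; the dimensions and architecture below are illustrative, much simpler than what the actual papers use:

```python
import torch
import torch.nn as nn

latent_dim, text_dim = 100, 128

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + text_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
        )

    def forward(self, z, text_embedding):
        # The encoded sentence conditions the sample.
        x = torch.cat([z, text_embedding], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

G = ConditionalGenerator()
z = torch.randn(1, latent_dim)
text = torch.randn(1, text_dim)  # stands in for an encoded description
fake_image = G(z, text)          # shape (1, 3, 64, 64)
```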

That also works very impressively for flowers and some other categories. So this is literally translation, in the sense of going from text descriptions to images. Again, these are terrific examples, and it seems impressive that it even works in the first place. Next: image completion. What do I mean by this? It's about corrupted images. So there's a whole bunch of celebrities here; you might recognize a few.

I don't recognize most of them, but you cut out their noses, and then you train the network to do inpainting on these images, and it starts to identify that a nose is most often the structure that belongs there. It starts to match the colour and the shape with the rest of their faces, and again the whole adversarial system is part of it, so it starts to generate fairly realistic results, and again a generative network is at the centre of it.
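The corruption step itself is simple; a sketch of the masking setup, with sizes and random stand-in data of my own choosing:

```python
import torch

# Corrupt images with a square mask; a generator is then trained to
# fill the hole while the discriminator judges whether the completed
# face looks real.
images = torch.rand(8, 3, 64, 64)   # stand-in for a batch of face images
mask = torch.ones_like(images)
mask[:, :, 24:40, 24:40] = 0.0      # cut a square out of each face
corrupted = images * mask
```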

You start noticing a trend: most of the results are from 2016 or 2017, so just the last year or so. That's when these things started happening, and it keeps improving day by day. Next: multiple GANs, which I think is from 2017, just a few months ago, where people are combining two GAN networks. So instead of having a single generator and discriminator by themselves, you take another pair of generator and discriminator, you put them together, and you train them to identify relationships across domains.

So what happens here is: you go shopping and you have a shoe in a particular style, and you want to get a bag which matches that shoe. They were able to generate samples which are consistent in style across a whole catalogue; very impressive results. And then there's a related one: one of these was called DiscoGAN and the other was called CycleGAN. You don't have to remember the names, but yeah.
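The core trick in both is a cycle-consistency term: translating to the other domain and back should recover the original. A minimal sketch, with `G_ab` and `G_ba` standing for the two in-training generators:

```python
import torch
import torch.nn.functional as F

def cycle_loss(G_ab, G_ba, real_a):
    fake_b = G_ab(real_a)        # e.g. horse -> zebra
    recovered_a = G_ba(fake_b)   # zebra -> back to horse
    # Penalize the round trip for not returning the original image.
    return F.l1_loss(recovered_a, real_a)
```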

So there's this one as well, where they were able to take a photo, or a video, of a horse trotting around, and the network was able to convert it into a zebra. When I looked at this, I thought, okay, that looks like a zebra, but the tail seemed off. Then it turns out zebra tails are actually not striped, which I didn't know. And you can also do it the other way around, where zebras are converted into a horse.

So these sorts of GANs work in either direction. So you have GANs coupled with other GANs, and it just keeps going: GANs all the way down. Very impressive results. Of course, we have a few more minutes, so let's talk about the issues, and what the state of the art in research is; and if you want to go to NIPS this year, I think submissions have closed and papers have been accepted.

But mostly I could end up talking about things like these: stability and the other problems I just recorded. Finding equilibria is hard because, like I said, there is no single explicit loss function which is just being optimized; we are doing a sort of joint optimization, training one network together with another one. So the local equilibrium points are sometimes local minima which you end up being stuck in. Non-convergence has always been a problem, especially something called mode collapse.

So what do I mean by that? Say you have pictures of dogs, and the idea is that you want to start generating fake pictures of dogs. You feed it Labradors, poodles, golden retrievers, and you train it with all these types of dogs. But what happens is that, as was mentioned in one of the earlier talks, the network sort of learns to cheat, and it starts realizing: okay, I will just keep generating one particular type of dog, and those still pass as dogs.

Well, they are dogs, so it ends up collapsing into a diversity problem: the network starts generating very, very similar-looking examples. And of course, there's one thing which I sort of skipped because it wasn't very apparent there, which is the differentiability requirement on the loss function and the whole network. So D(x) and G(z), both of the functions, need to be differentiable; I won't go into much of the mathematics.

But the idea is that this constrains us to continuous outputs only, so working with text has been a problem in this area. Most of the outputs, you'll notice a trend, are all images. There are approaches looking at generating text, with somewhat realistic results, but nothing too convincing so far. People are getting close, but let's see. And how do we improve GANs? Stability has been a problem for a long time.

I'll just leave all of these here. So the first few are hacks for how to improve training; Soumith Chintala has a very popular repository on this: if you want to train a GAN, just follow these steps. One of the ideas concerns the latent vector space from which you start generating the output: instead of sampling it from a uniform distribution, you sample from a spherical (Gaussian) space, and that seems to give slightly better results. Then there's one-sided label smoothing, which may be applicable to other domains as well. Essentially it means that, instead of using hard 0 or 1 labels, you adjust your real label so that the target is something like 0.8 or 0.7. This still conveys the information that the sample is real, but adds some noise at that layer.
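A sketch of that one-sided smoothing; the 0.9 default below is a common choice, while the talk mentions 0.8 or 0.7:

```python
import torch
import torch.nn.functional as F

def d_loss_smoothed(d_real_logits, d_fake_logits, smooth=0.9):
    # Real targets are softened to `smooth`; fake targets stay at 0
    # (hence "one-sided").
    real_targets = torch.full_like(d_real_logits, smooth)
    fake_targets = torch.zeros_like(d_fake_logits)
    return (F.binary_cross_entropy_with_logits(d_real_logits, real_targets)
            + F.binary_cross_entropy_with_logits(d_fake_logits, fake_targets))
```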

You also have reference batch normalization and virtual batch normalization, variants where the statistics used to normalize the intermediate layers are computed slightly differently. And finally, there's the big problem with GANs which people are still working on.

It's about stability in all of these systems. When the two networks are learning against each other, you're never very sure how stable the outputs are going to be, so people are looking at different approaches, almost all of them from 2017, and most of them are still working on how to make stability a solved problem. One effective approach seems to be conditioning on how realistic the output looks.

So how do you judge the quality of a fake image, right? Everyone has their own opinion. So when you start judging the quality of fake images, people started defining it using other distance functions, and based on those distances you define other loss functions, where you make sure that the realism of the image is an important parameter. There are also other approaches, like EBGAN and BEGAN.
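As one concrete example of swapping in a different distance (the transcript doesn't make the exact reference clear; the Wasserstein formulation, WGAN, is the widely cited instance), the losses become plain score differences. Note the critic also needs a weight constraint such as clipping or a gradient penalty, omitted here:

```python
import torch

# WGAN-style losses: the critic outputs unbounded realism scores
# rather than probabilities.
def critic_loss(c_real_scores, c_fake_scores):
    return -(torch.mean(c_real_scores) - torch.mean(c_fake_scores))

def generator_loss(c_fake_scores):
    return -torch.mean(c_fake_scores)
```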

Those have actually worked; BEGAN and these related approaches sort of help set the stage for the stability problems to be solved. There's quite a lot more that could be covered, but I'll leave all of these here. So that was the guide to GANs. You can visit this link for the slides and all the references; hopefully it's useful. I just wanted to give an overview presentation, hopefully of use when you start working through GANs yourselves. Thank you very much. We'd love to offer you a few minutes for questions and answers.

If you have a question, please raise your hand and we'll get you a mic. I'm going to go right down here; I'll give you a mic. Please stand when you ask your question and tell us who you are; we are doing a live stream. Hi, thanks for the interesting talk. My question was around this: you said that the differentiability requirement is causing problems with discrete outputs, so that applies when I'm trying to go from, say, text to text.

So it can't do my homework yet. But what about other forms of continuous outputs? Images are one, but what about graphs, or, you know, time series and things like that? Have there been approaches like that? I think there have been a few approaches, especially early on, where people were trying to generate audio-based samples. And even for images, you can think of it this way.

Essentially, the pixels are numerical values which are being generated. So I would think that if you want to solve problems which are time-series or value-based, the same encoding approach with a GAN might work, though I don't remember any specific approaches offhand. But if you look at an image, when you actually write it out, it's a series of numbers, pixel values, so I think that might still work, and those are not discrete either; I'm quite sure it might work.

I don't know; I can't remember any specific examples here. Next question over here. Hi. So when you showed the equation for the discriminator, for this generative equation, you said it's only binary. What if the two sides become equal? Does it end as a draw? Yes, so I didn't talk about how the training terminates. What happens is that when you start off, initially the discriminator's output is between zero and one, and the whole structure is based on the loss being a finite value.

When the discriminator becomes confused and outputs a half, saying the probability that x is real is one half, that's the condition you ideally reach through training. At that point the generator is as good as it can get, and that's essentially the stopping condition.

So if the output of the discriminator is stuck at a half, at 0.5, then it cannot differentiate between the real and fake images, and that's the point where you stop: that's the end state of your training. Yes, yes, the equilibrium condition is exactly that: the discriminator fails to detect any difference between the two kinds of images. All right, next question, right here in the middle. Hi, yeah, thanks.

So is the discriminator also strengthening through the process? It's effectively a classifier. Sorry, I couldn't hear that quite clearly. Yeah, okay. It was very interesting. Is the discriminator, which is effectively a classifier, also strengthening through the process, so that its learnings, effectively the architecture, the way the discriminator eventually strengthens, could carry over to other classification problems? Okay, yes.

Yes, so this is actually something quite interesting: you have two networks being trained here, and most of the results focus on the output of the generator. So one thing which might be possible is to take a very good discriminator which has been trained through this process and literally start using it as a classifier for other types of problems. That definitely motivated me too, because I work mostly on classification problems, and this discriminator by itself might have significant uses in solving them.

It might not be trained explicitly on the classification problem that you are trying to solve, but again, there's definitely some amount of transfer learning, some amount of pre-trained weights, which might be useful. Off the top of my mind I'm not sure of examples, but it's definitely something we should look out for. Next question over there: most of the applications which we saw were from the generator part.

Is there any way we can use the discriminator part? Exactly; I think that's exactly what the previous question was also about. So the utility of the generator is what has been discussed so far. I also strongly believe that the discriminator could be used, and I think most of the time when you want to use the discriminator it's not in the image-generation context but maybe in a classification context, and some of those uses could be quite valuable.

We haven't seen anyone try to use the discriminator for, say, ImageNet classes or anything like that, but for the simple binary operation of deciding whether an image is of a particular class or not, I think these discriminators should still do quite well. Whether they have been used anywhere so far for any classification problems, I'm not aware of any. Okay. Sorry, I can't hear you; maybe the mic... oh okay, that's better.

No, no, like you might have come across the different, fake news generation like Obama’s speech, has been moved. Okay, yes, what we can do so how ganzar like helping and tackling this fake news challenge, or are there any other any countermeasures to handle such big news? That are getting generated using ends like screaming, for speech based input for organic, because xbase filtering not aware of any good results because of the whole problem, with discrete inputs and discrete outputs effect, but for filtering of voices.

or images, we could identify that something was generated by some network, so that fake news can be flagged in the future. Possibly; hopefully you'd get a network which does that. This would involve using the discriminator component, I think. Yeah, so with the discriminator component: if you have a generator which is aiming to model the fake news, I'm quite sure that, as long as the fake samples are generated consistently, it might be possible to use the encoding, a representation of what fake news looks like, to detect it.

Getting the representation right is the actual problem, in my opinion; that might be the hard part. We have time for one more question, and we have a question down here. Okay: earlier you talked about how you can edit images and draw things using GANs, and at the same time, when I look at the DiscoGANs, I see that the horse is changed to a zebra.

So I could actually put some wrong person into the same video or article. If such articles are relied on, say in the judicial system, that's a worry; is there any research on detection or countermeasures for this kind of manipulation? Good question. So far there have been very convincing results; I think a week ago someone made a video of Obama saying, very realistically, things which he never said.

The legal issues are a minefield; I have no idea how people are going to go forward there. There are a few organizations which are looking at it: the EFF, the Electronic Frontier Foundation, and OpenAI to an extent. All of them are looking at the ramifications of how reinforcement learning systems, GAN systems, generator networks, how all of these will fit into the broader context and what issues they might cause. We're still not sure. Some people are convinced that it's going to destroy the world.

Some people are convinced that it's merely a pet project. But we still need to reason about this together with the entire society; it's no longer just computer scientists who are going to determine these things. But yes, there's definitely scope for all of us to think about it here. Yes, yes, this is such an interesting topic, and I think we could probably have questions all day long. Would you take questions offline? Yeah, sure, I will be here for the rest of the day, for a while, and I'd be happy to meet you anywhere; my website is there as well. Fantastic.

Well, it's time for a break before you guys go out to get your chai or coffee or whatever caffeine you need.


 


Develop to Design – A guide to emergency design for front-end developers

I will be talking about develop to design a guide to emergency design for front-end developers. This talk is primarily aimed at developers working in small teams who find hiring a designer too expensive, or for people who are interested in designing the experience for their own products. The experience of using an application starts even before registration, and it has to be maintained even when the user is not using your product.

Even minor things like the way you format and send your registration mail make an impact on the user's experience of the app. Design is hard, and as developers we tend to focus more on functionality. But design is fun, and it is a problem, and problem-solving is what we thrive on. We need to understand that design is more than making things pretty, and it goes a long way towards making your users happy.

I would like to stress the importance of making conscious design decisions: avoid making design mistakes that lead to a jarring user experience by understanding a few simple concepts, while still having robust functionality. That said, there are no ironclad rules to design, just conventions. Who am I? I am an intermediate-level front-end developer who is learning about design and user experience, because at the end of the day you are responsible for what you deliver to the user. See you at Meta Refresh.