
Brave+IPFS v1, Browser Design Guidelines, WebUI updates – IPFS GUI and Browsers Weekly, 2019-10-30

If anyone wants to add something, feel free to do so. I’ve added the first two items, so I’ll quickly go over them and then we’ll spend more time on the rest. The first one is “recover from missing local gateway”. For background: we are working on improving resilience in offline or censorship-impacted environments.

So if a request for an IPFS resource fails due to an HTTP or network error, we now recover by re-requesting it from a public gateway. That landed in the latest beta, which will soon be published to the Chrome Web Store and Firefox AMO. So we already recover failed requests via the public gateway, and the question is: what happens if the local gateway goes offline? Maybe it was shut down.

Maybe it crashed. How should Companion act? In theory we could do a similar thing: if a request to the local gateway fails, we could recover and open the resource using a public gateway. That gives a seamless experience. The potential con is that it leaks information about which resource the user was trying to open, and links that information to whichever public gateway the user has configured in settings.
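As an editor’s illustration, the fallback described here could look roughly like the sketch below. The gateway addresses and the function name are assumptions for illustration, not Companion’s actual code.

```javascript
// Sketch (assumed values): recover a request that failed against the local
// gateway by rewriting it to the user's configured public gateway.
const LOCAL_GATEWAY = 'http://127.0.0.1:8080'
const PUBLIC_GATEWAY = 'https://ipfs.io'

function recoverFromMissingLocalGateway (failedUrl) {
  const url = new URL(failedUrl)
  const local = new URL(LOCAL_GATEWAY)
  // Only recover requests that were aimed at the local gateway
  if (url.host !== local.host) return null
  // Only content paths (/ipfs/... or /ipns/...) can be served by any gateway
  if (!url.pathname.startsWith('/ipfs/') && !url.pathname.startsWith('/ipns/')) return null
  // This is exactly the privacy trade-off discussed here: the requested
  // path is now revealed to the public gateway operator.
  return PUBLIC_GATEWAY + url.pathname + url.search
}
```

A privacy toggle would simply skip this helper and let the request fail instead.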

So that’s an open question. I wanted to broadcast it in case there are other considerations beyond this privacy leakage, and to ask how the UX should work: is it okay for this feature to be enabled by default, but behind a toggle, so privacy-conscious people can disable the behavior? Those are the questions I wanted to highlight. Any ad-hoc thoughts on that?

Do we have enough local gateway usage? What is the level of the threat? I guess every active IPFS Desktop user is basically using the local gateway. Right: when you install IPFS Companion, it checks if a local gateway is running. If not, on the welcome screen we explicitly ask people to install IPFS Desktop or go-ipfs, and we communicate that that’s the default mode.

You should be using IPFS Companion as a companion to IPFS Desktop, so I feel most active Companion users will be running a local gateway. But sometimes they will shut it down, forget to run it, or maybe they bookmarked a link to the local gateway and the local gateway is no more. So I’d say most Companion users will have this gateway, but we don’t have stats on how many requests to the local gateway fail, because we are not tracking user actions. Couldn’t Desktop keep a log of where requests are coming from, so that Desktop knows when it’s Companion requesting? We could, assuming those APIs do not disappear.

Before Manifest V3 lands, we could add a special header informing the local gateway that the request comes from a web browser with IPFS Companion. So we could do that; we don’t do it yet. But how would that information be used? For it to be useful, you’d have to expose it through to Companion somehow.
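For illustration only, the header idea could be sketched as below. The header name is hypothetical; Companion does not send such a header today.

```javascript
// Pure helper that tags outgoing request headers with a hypothetical
// marker so the local gateway's logs could attribute them to Companion.
function tagCompanionRequest (requestHeaders) {
  return [...requestHeaders, { name: 'X-Ipfs-Companion', value: '1' }]
}

// In the extension this would run inside a blocking
// chrome.webRequest.onBeforeSendHeaders listener scoped to the local gateway:
//
// chrome.webRequest.onBeforeSendHeaders.addListener(
//   details => ({ requestHeaders: tagCompanionRequest(details.requestHeaders) }),
//   { urls: ['http://127.0.0.1/*', 'http://localhost/*'] },
//   ['blocking', 'requestHeaders']
// )
```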

The problem is URLs; that’s pretty complex. And there’s another problem: we would only be able to track requests that actually reached the local gateway; we wouldn’t be able to log requests which failed. Those would need to be logged at the Companion level, so we are back to tracking. I think maybe focusing on what the end user is trying to do might give us the best answer here. Especially in the censorship-mitigation setting, they really do want to get to that content, and that is probably the most important thing. But some of the design work that we have around letting them know the relative risk of doing that may help the situation and mitigate the level of danger there. Because I think you’re right: there is leakage and exposure if you’re constantly sending out CIDs to the gateway that were intended to remain private, but without knowing why the user loaded the content in the first place.

That’s very difficult to determine. Yeah, it may be that it’s feasible to do this as an opt-in, because leaking this kind of information by default... unlike the Tor Project, we cannot assume people read the docs and understand them fully. I do think there’s a risk that people make certain assumptions if we say that Companion is able to recover from censorship. So I think there’s a deeper discussion around that.

Also, let’s say the connection to the local gateway works: everything’s fine, the local gateway is up, Companion makes the request to the local gateway, the local gateway talks to your desktop node, and that node broadcasts the CID to the public network at that point, right? And if it doesn’t have the CID, its default action is essentially to do the same thing. Yeah.

So the information leakage that happens at the IPFS level is something I want to box off and keep separate. We could imagine a private profile in IPFS which runs only, let’s say, a Tor transport, has an ephemeral peer ID each time it starts, and things like that; then the only remaining leakage would be in Companion. Yeah, but at that point you really need more explicit configuration and communication around that to begin with: Companion and Desktop would have to have a much stronger, negotiated communication, so that the user running Companion knows they are protected in that way. So I feel like you’re going to have to solve that problem in the UX explicitly in that case, no matter what. Yeah.

So I think you’re right: at this stage, given how our transports work and how our provider announcements advertise everything you have locally to the DHT, if we recover from a missing local gateway we effectively do not leak anything more than IPFS leaks right now. Yeah, I think the way to describe it is: we don’t leak any more than we would if IPFS were working optimally as designed.

And that design, we know, has a known problem that we need to fix. Yep, makes sense. And if we added this privacy mode at some point in the future, we would then disable this recovery behavior. All right, I think that’s enough for that one; let’s move to the next, which I’ll try to make even shorter. We publish our browser extension to both the Mozilla and Google stores, so every Chromium-based web browser is basically installing a signed package from the Chrome Web Store. Google is in the process of introducing a new set of WebExtension APIs which significantly change the capabilities of extensions, and those new APIs are not even published yet, nor do we know what to really expect.

There is no nightly Chromium build which lets us play with it yet. However, the Chrome Web Store has already started making life more difficult if your web extension uses those powerful APIs. It’s very unfortunate for us, because we have both beta and stable channels on the Chrome Web Store, and right now, due to the fact that we use those powerful APIs, every new release, both beta and stable, goes into an in-depth review. Instead of being published within an hour, it may now take multiple days or weeks before the published version is available to the public. That also impacts the Brave integration, because the toggle in Brave uses the Chrome Web Store as its means of distributing the signed package.

So that’s unfortunate, especially for the beta channel, because I released a new beta today and the problem is it’s not on the Chrome Web Store; at least it was not before this call. I released it two hours ago and maybe it is... nope, it’s still in review. So that’s the problem. On Firefox it’s available immediately because we self-host the beta; on Chrome we use the Chrome Web Store for distribution of the beta channel, and I don’t think we are able to do anything about this.

Maybe we could switch to a self-hosted beta. This is mostly a PSA, in case someone wants to try the latest beta or stable, or sees there’s a new Companion version but it’s not available for Chrome: that’s why. It might take a week or more to be published, and we’re still trying to figure out how to mitigate that. I think that’s it from me. It’s only going to get harder; this is just the beginning. I understand changes of APIs and things like that, but it’s a bit frustrating that we are being penalized while not being provided with the APIs that replace the ones being deprecated.

It’s the classic Google move: deprecate the old product before the new one has shipped, but at the API level. In this case, by design, they want to eradicate that permission. They’ve made it pretty clear they don’t want ad blockers to exist, and that permission allows you to do a lot; it’s a pretty broad permission.

So I kind of understand it. From when we were building Firefox, I know it’s the most risky permission I could give an add-on: a view into every byte that crosses the network. For people who are not familiar: we are talking about the webRequest APIs, especially the blocking version, which lets you inspect every request that goes out and every response that comes back, so you can check things like headers. Companion uses that to say “oh, this is a request for an IPFS resource” and redirect it to the local gateway, and that’s why we need this permission on every website.

So every website can use content addressing, and it’s supported out of the box. We’ll see. I’m waiting for Manifest V3 to at least land in a nightly build of Chromium, so we can see what we are able to re-implement and what functionality of Companion will be lost, because right now we are not even able to participate in the discussion: there’s no API draft, not even a document with an example of the API.
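A heavily simplified sketch of what that blocking webRequest flow enables; the gateway address is an assumed default, and real detection in Companion is more involved (headers, DNSLink, and so on).

```javascript
// Inspect a request URL and, if it is an IPFS content path, produce the
// redirect object a blocking webRequest listener would return.
const GATEWAY = 'http://127.0.0.1:8080'
const ipfsPath = /^\/(ipfs|ipns)\//

function redirectToLocalGateway (requestUrl) {
  const url = new URL(requestUrl)
  if (!ipfsPath.test(url.pathname)) return {} // leave the request untouched
  return { redirectUrl: GATEWAY + url.pathname + url.search }
}

// Returned from a blocking chrome.webRequest.onBeforeRequest listener
// registered on <all_urls>, which is why the broad permission is needed:
// any website may embed content-addressed resources.
```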

So we can’t even prototype anything on a piece of paper yet. Hacdias, do you want to talk about ipfs-cohost? Yes: I just made a PR to make ipfs-cohost compatible with the spec, and I was wondering if we should release a 2.0 version of ipfs-cohost, or address some issues first, such as testing or the bundling of js-ipfs. I have thoughts on that.

If we release it as 2.0 right now, it will still bundle js-ipfs and include it as a dependency, right? Yeah, yes. So then, if we later decide to convert this to a proper library which does not bundle js-ipfs, we need to release a 3.0 again, right? Yeah, that’s true. Let me just check which version introduced it... if it’s 1.0... yeah, it is... we could go with 1.1 if there were no breaking changes.

We only made additions... ah, no, we removed a flag, so there is a breaking change. Then I think switching to 2.0 is the better idea, because we effectively changed the entire backend and how those websites are hosted. Yeah. I feel that if we wait for this decision, or for refactoring js-ipfs out, it may take much more time to get released, so maybe just release 2.0, and then if we decide to convert it to a library, that’s 3.0; it’s just a number, right? All right. And then we could add it to the co-hosting spec repo, so people can actually play with the spec. So there’s value in releasing it: you get a more polished implementation of the co-hosting spec, and we could start playing with it in Companion and Desktop. Is it a problem for developers?

It takes more space; node_modules take more space. But ipfs-cohost depends on js-ipfs anyway, and IPFS Desktop may add support for js-ipfs at some point anyway, so for our projects it does not matter, and for other projects, PRs welcome. That would be my approach. Okay, I’ll get a release ready. Thank you. Dietrich, do you want to tell us about the browser vendor design guidelines? Yeah, sure, once I’m able to find that tab.

So, a quick introduction to a project that we’ll be doing over the next few months; a designer will be joining us temporarily to work on it. The idea is that the browser vendors we talk to have existing web standards, design, and security teams, and at that intersection of security and user interface design sit the iconography and visual communication patterns for the things we see in the browser relative to things we should be concerned about. Is your connection secure or not? The padlock is familiar to everybody, though whether they understand it or not is a whole different question. Either way, a lot of thought, a lot of practice, a lot of experience, and a lot of care goes into the visual treatments of how things are communicated when a navigation event happens in the browser.

So the idea here is that, for IPFS, we produce: a set of use cases about what IPFS provides in the browser; a well-articulated threat model around the things it needs to communicate; suggestions for UX patterns and interaction design patterns; and recommendations and experiments in iconographic treatments, i.e. what types of icons should actually be in the URL bar when IPFS is present, in a way that communicates the things users may or may not need to be concerned about when loading cryptographically verifiable, yet can-come-from-anywhere, p2p content in a browser, for the first time that’s really ever happened in a meaningful way. We can use these materials when we’re talking to browser vendors and standards bodies, when we engage the W3C and the IETF, so we have something tangible to communicate and open a discussion about.

So far we’ve seen several interactions where standards folks basically say “that’s crazy, what you guys are doing”; that’s what happens when you just ask them to come to your world. So this is more of a “let’s bring the mountain to them”: very clearly state the problems we’re trying to solve and provide a set of recommendations, almost like an instruction manual for them to follow, or a design kit that speaks the language of the design teams that would be implementing this in browsers, or at the very least reviewing it. They’re aiming to start on this work hopefully within a couple of weeks, and we’ll definitely present it here. IPFS Desktop and IPFS Companion are the user-facing artifacts of the IPFS project for the most part, at least for the internal work that we do, so we’ll probably use those as guinea pigs to test out some of this material and experiment with applying it to how we implement IPFS-related user interfaces. It should be pretty fun and useful, and something we’ll hopefully be able to leverage all through next year.

In our conversations with other partners, too. I just wanted to give you a sneak peek and a heads-up about that work, which you’ll be starting to see. Yeah, it’s super exciting, and especially in Firefox I can see how Companion could be a vessel for implementing it and seeing how it would look. Because, interesting fact: in Chrome you are not able to add both a page action and a browser action.

A browser action is the button icon in the toolbar; a page action is also an icon, but it sits on the right side of the location bar... address bar, sorry. It’s an address bar because we are not location-addressed anymore these days. Small nuances like that are really important, because that change in language signifies a pretty significant change in where the content comes from.

How many people are hosting it? Can I co-host it? Should there be an interface for co-hosting right there? Things like that. So yeah, it’s exciting. Brave, perhaps? Yeah. So, we talked about this at browsers weekly: we identified the set of changes that need to be made in order to call the Brave integration a 1.0, i.e. what “done” means for that first set of work. I figured this meeting is probably the right place.

Definitely, and this is about the right time, since we’re now one third through the quarter and we had a goal of shipping by the end of the quarter. So this is probably a good time for a quick review of where we are with regard to that work and what needs to happen next: meeting with Brave, talking about things like a joint blog post where we announce the ready-for-use launch, and, as we talked about a little before, maybe a deeper technical post that identifies the unique and innovative work that’s been done to make it happen. Yep. I can give a quick, very short overview and then we can discuss. There’s a project called “Brave” in the ipfs-companion repo which gives you an overview of related tasks, what’s ongoing, and what’s done. And there is this progress issue, the big meta-issue, which has some challenges already tackled, some not.

I need to go over this list again, because I haven’t been looking at it while working on these items, and update the project. But long story short, the main thing that remains to be implemented is local discovery for browser nodes: right now, embedded js-ipfs is able to detect go-ipfs on the local network, but it’s not able to announce itself. There’s this open problem of the mDNS discovery service port being already taken.

That may be a problem with polyfills, or it may be just a problem with Chrome’s APIs, so it’s something I need to look at. What remains to be done is to see if it can be addressed. If not, we probably need to add an additional discovery method just for browser nodes running in a browser environment, such as Brave: they would have an additional discovery method on top of all the standard ones.

So it’s not like we are introducing something custom to replace the generic one; the generic one will still be there. When a node discovers other nodes, it runs all discovery methods in parallel and gets the results from all of them. That’s the plan for addressing this, and we should plan for it. Apart from that, that’s the only real missing feature on our end.
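As a rough sketch of that plan, a browser node could simply carry one extra multiaddr for a signaling-based discovery method alongside the defaults. The signaling server address below is purely illustrative, not a real endpoint.

```javascript
// Sketch: a browser node keeps the standard discovery methods and adds one
// extra signaling-based transport/discovery entry for browser-to-browser use.
const browserNodeConfig = {
  config: {
    Addresses: {
      Swarm: [
        // additional browser-node discovery; the standard methods
        // (bootstrap list, mDNS where available) remain in place,
        // and all methods run in parallel
        '/dns4/star-signal.example.com/tcp/443/wss/p2p-webrtc-star'
      ]
    }
  }
}
```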

The rest is mostly performance improvements. There are multiple issues related to performance, but honestly most of them fall under the problem of extensive use of preload and delegate nodes. So I introduced throttling so that we don’t send more than four parallel requests; however, there are additional limits in Chromium. Not this one...

Where is it... Brave... yes, I believe it’s this one. So we are triggering anti-DDoS protection at the preload nodes, and there is also a separate one. Yes: this issue is our preload nodes throttling us, and this one is Chrome itself saying “oh, this extension is being naughty” and blocking outgoing requests before they even leave your machine. Because it turns out...

there is built-in anti-DDoS protection: if you have an extension which starts behaving badly, there’s a heuristic, and if you get multiple HTTP errors within some time window, your extension is blacklisted and is not able to send requests. And basically the underlying problem is that our APIs return HTTP error 500 if, for example, a DNSLink is not present, which is not very good HTTP semantics.

It’s technical debt from the time when the HTTP API was used only by the command-line client of go-ipfs, but now it has unexpected consequences like this. And you can see it’s mostly related to what happens when your browser node starts browsing, let’s say, Wikipedia: all those errors I just showed you come from loading a Wikipedia page. There are multiple pictures, each picture is one or more blocks, and all those blocks are requested using delegated routing queries and delegated content fetching, and that’s a lot of requests to the preload and delegate nodes.

So, long story short, we need to figure something out: either raise or remove the limits at our preload and delegate nodes, or add a heuristic on our end to stay below this Chromium threshold. That’s a mitigation; the long-term solution is to remove delegated routing and replace it with a native DHT, but that won’t happen this quarter. This quarter is basically focused on the async refactor, and if we are able to land that in js-ipfs, that’s a very good quarter. So it will probably be at least six months before we have a tested and highly functional DHT in js-ipfs.
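The client-side heuristic could be as simple as a concurrency cap. This is an editor’s sketch, with the default of four matching the throttle mentioned earlier; it is not Companion’s actual implementation.

```javascript
// Cap the number of in-flight requests to preload/delegate nodes so the
// extension stays under Chromium's error-rate heuristic.
function createLimiter (maxConcurrent = 4) {
  let active = 0
  const queue = []
  const next = () => {
    if (active >= maxConcurrent || queue.length === 0) return
    active++
    const { task, resolve, reject } = queue.shift()
    task().then(resolve, reject).finally(() => { active--; next() })
  }
  // Returns a wrapper: pass it a function that produces a promise
  // (e.g. a fetch to a preload node) and it schedules the call.
  return task => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject })
    next()
  })
}
```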

So until that happens, we will need to solve the problem with delegate requests. The Chromium-limit heuristic probably should be implemented deep in the core anyway, right? There is no circumstance under which we want to hit that limit. Yeah. So, as I understand it, one of the fixes would be to make our API stop returning an invalid error code and make it a 404.

That seems much more aligned with the intent and purpose. The problem is that it requires a change to go-ipfs and a release of go-ipfs, and then it requires that release to be deployed on our gateways, so it may be faster to address it temporarily at the level of our delegated routing modules. To me, though, it seems like we want to make that Go change as early as possible.
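A sketch of what that temporary workaround in the delegated routing module might look like; the status mapping and the message pattern are assumptions for illustration, not go-ipfs’s actual error text.

```javascript
// Map the API's misleading HTTP 500 for unresolvable names to "not found",
// so routine misses don't count as server failures toward the browser's
// error-rate heuristic.
function classifyApiError (status, body) {
  if (status === 500 && /could not resolve|not found/i.test(body)) {
    return 'not-found' // an expected outcome, not a failure
  }
  if (status >= 500) return 'server-error'
  return 'ok'
}
```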

If it’s something we know we need to do either way, and it’s going to take a while, then maybe we should get it requested and prioritized in Go. But the problem, yeah, is that this HTTP 500 is a big problem across the entire API: it’s not just this one endpoint, the entire API behaves that way, and all the HTTP clients expect that behavior.

So you can see how fast the scope of this change balloons. That’s why... I fully understand it should be fixed, but I am also realistic: before this gets fixed in Go, on our gateways, and in our libraries, we will have the DHT in js-ipfs and will no longer need to solve this problem. So that’s me just being pragmatic. All right, I understand. So I have another question, which is less about the contents of these items and more about reframing what 1.0 means. We’ve kind of assumed so far that the embedded node is part of 1.0, even though the embedded node is clearly marked experimental and actually not necessarily recommended for production use. So I wonder, as one of the options for making sure we close up this work, and it also aligns with how Opera is approaching it, which is gateway-only for their initial release:

What if we pushed the local-network-discovery bit out of 1.0 and made it part of 2.0, when we have a better understanding of what the embedded node’s capabilities are going to be anyway, and of its performance and resource consumption, and instead fixed some of these performance issues and called that 1.0? Yeah, because, just to clarify, it’s not like browsers are unable to discover each other and exchange data without this discovery.

The problem is that this discovery is facilitated by a centralized signaling server. Which is how everything else works today, yep: our bootstrap nodes are basically the same thing, right? Very true. And js-ipfs does not remember peers between restarts, and all that. So yeah, I agree; it’s something I hoped to land in v1, but now, after we broadened the scope with these browser design guidelines which will be created, we will probably also drop the address bar work from v1; that’s probably for the next iteration. Hopefully, once we have the guidelines, we will provide them to Brave and Brave can implement them. I sort of started this discussion... well, not started it, but showed the problem and a different direction in which we’ll probably move: when you remove the local gateway from the picture...

How do you present that in the address bar? The green padlock goes away; what replaces it? Things like that. I feel that will be part of the design work you described, so yep, that will probably also come out of this. Yeah, I think that makes sense. Maybe what we should do is think of it in very coarse blocks: v1 of the integration is things like Companion by default and local gateway access.

These types of things. The v2 of browser integration, that next major step, would be deeper levels of integration, either in the UI or an embedded node. And then there’s a final third layer, and I laid this out in the browsers post as well: the final third layer is when we have a native implementation. That middle one, the deeper integration level, is where things like a protocol handler would be, or an embedded node. So if you chop it up into those three coarse stages, it kind of makes sense. Yeah, and in v1 we still have the embedded node, but it’s clearly labeled as an opt-in experiment. Yep. Basically, it’s about solving the problem with delegate and preload requests. Yeah, yep, okay. So I think that means these couple of performance issues are going to be the priority for Brave for this quarter, and then in our next meetup with them...

We can let them know about this phased plan, and then talk about when we would expect these performance issues to be resolved and when they would be ready for a kind of joint announcement. I will try to write that up early. Yeah. Call status: this was actually just an idea that I had. You know that we have the project operations meeting, and now a weekly cross-functional meeting where all of the different working groups and project infrastructure efforts share a status. I look at these meeting notes and there’s usually a lot of stuff, so one of the asks I had for you all is: when you note things here, maybe just put a check mark or something next to anything you would like highlighted in that weekly status. There’s always a bunch of different things landing, and it’s hard to keep up with all of it.

But one thing that would ensure we’re providing the whole project with a view into the work you’re doing on Desktop and Companion, and even where you’re contributing to other parts of the project, is to mark those items and list them here. That’ll make it easier for us to broadcast the wins and achievements you have made to the broader project, so that everybody knows how amazing and awesome you both are.

I was typing notes throughout the section, and now my hotspot is offline... oh, now it’s online. I think in the past, with the GUI working group, we had a highlights section, so maybe we could replace this “team updates” section with “highlights” and basically copy specific links there. Yeah, exactly, just copy them up there, because I think it’s really nice to be able to see the log of things that you both thought were important, in your individual work streams as well.

I think that’s totally cool. So if there are specific things that you think people should know about, pop them up to the top. Is that the right workflow? I’ll volunteer to do it for this meeting. Also: someone sent out an email or Slack message about a system they came up with for using CryptPad with an automatic meeting-notes generator. I was looking for the link before this meeting and can’t remember where it was posted, but I thought I’d bring it up as something we could look at to reduce the overhead of producing the notes. I would love for all of our meetings to have one unified system that takes the manual aspect out of it; it’s so manual just to post notes right now. I think it was an email... yeah, I found it. Do you want me to share it, or should I send the email? I don’t see any link. Okay, it’s published; I will send you the link. All right. Yeah, I don’t think we need to make a decision or anything today, but I thought it’d be worth bringing up, looking at, and evaluating. Oh yeah, I can try using it this week for this call and give feedback on how it went.

So I remember this: it’s basically automating all those manual steps we usually need to do to publish these notes to the IPFS team-mgmt repo. Yep, exactly. Why do work when we can use robots? All right. Next: the Awesome IPFS policy. I believe it’s a follow-up to last week’s discussion. Yeah, and I forgot to edit the agenda first. So, just to remind everyone: we have this

Awesome IPFS website, which lists different projects, articles, datasets, and services built on or using IPFS. It’s backed by a repo, and the problem is the repo has a lot of open PRs and we don’t have a written policy on what is considered awesome or not awesome, what the threshold is for including something on the website. I did not look at it or write down thoughts on this policy. Did you have time to think about it, and what should we do, should we just take it up? I think: just create an issue in the awesome-ipfs repo. I’m trying to find it... there already is an issue about this from Victor, from some years ago.

I think, yeah, if you can find it and add it to the agenda, it will at least be an action item to drop some thoughts on that issue. Okay, I just did, and yeah, this is something that’s worth talking about, one of the things we should put at the top of the agenda, because it will take some time. I think that’s what we said last week. Yeah; after this call I’ll put it at the very top for next week.

Yes, just do that and dedicate 30 minutes to it. Yeah, and everyone will have time to get familiar with the historical outlook, because we need to spend the time, tackle this, and publish it, so we can include more people. Okay. Next: apparently, when IPFS Desktop checks for updates, it is supposed to connect, check the latest.yml file, check if there’s a new release, and download it, which is normal.

That part is plain HTTPS, but there is this third-party RCE-detection system I didn’t know about. I don’t know what it is, but apparently, for some reason, it shows IPFS Desktop connecting and trying to download something using git, for some weird reason. I don’t know how to check if that’s really happening; that specific screenshot doesn’t prove it. It should connect to GitHub over HTTPS for the binary, so that part makes sense, but the Node part is something I don’t know about, because Electron has Node built in, so I think it might be Node itself checking if there are updates for Node.
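For context on what the updater actually fetches: electron-builder publishes a small latest.yml manifest next to each release, and the updater compares it against the installed version. A rough illustration follows; the version, file name, size, and hashes below are invented placeholders, not values from a real IPFS Desktop release:

```yaml
# Illustrative latest.yml, the manifest electron-updater polls to decide
# whether a newer build exists. All concrete values here are placeholders.
version: 0.9.5
files:
  - url: IPFS-Desktop-Setup-0.9.5.exe
    sha512: 3k9…base64-encoded-hash…==
    size: 73400320
path: IPFS-Desktop-Setup-0.9.5.exe
sha512: 3k9…base64-encoded-hash…==
releaseDate: '2019-10-30T12:00:00.000Z'
```

Whatever host serves this file is the one the desktop app will contact on every update check, which is why the choice of domain matters in the discussion below.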

I don’t know if that can be the reason. I searched about it and couldn’t find anything, but it might be; I don’t know if Node itself checks for new releases. Yeah, it’s something we could probably look deeper into with Wireshark or tcpdump: start IPFS Desktop and record the traffic. Is it on Windows? Apparently yes, but I don’t know what that RCE tool is; it’s some system for flagging exploits, right?

No, RCE is remote code execution. Yeah, it’s a useful kind of tool, and I agree a lot of people may have this type of software running, and it may raise some eyebrows why IPFS is trying to connect to Amazon. But it makes some sense because, not on that specific screenshot, but the GitHub binaries are stored on Amazon. So that in itself is not bad.

That’s also, I think, because we are using GitHub, which is a centralized service, for publishing our artifacts instead of using our own gateway. That might be right, because if the domain here were ipfs.io, I don’t think this issue would be here, or the discussion around it would be different. The problem is you have GitHub here, which is owned by Microsoft, and the user who is really concerned about this, and then you have AWS, which is owned by Amazon.

And both are used by IPFS, which is sort of trying to replace things like AWS S3, so yeah, I agree it could raise some eyebrows. But I don’t know if we are able to solve this, assuming it’s our upgrade mechanism checking this URL, which it’s supposed to be. I’m not sure we can easily replace it, because it’s the out-of-the-box solution we use for publishing binaries, for, I believe, Windows and macOS, which auto-update, and also the Snap and AppImage.

I think, yeah, this auto-update mechanism lets us choose between either Bintray or a custom HTTP server so far. We could use IPFS for that; there’s an issue about that already, I believe. So if we are able to easily replace that with our own self-hosted solution, we should do it, just to remove those HTTP requests to GitHub and Amazon.
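To sketch how small that change could be, assuming electron-builder’s generic provider (which takes a plain HTTPS base URL serving the binaries and the update manifest): the updates.ipfs.io host below is only the hypothetical subdomain idea from this discussion, not a real endpoint.

```yaml
# Hypothetical electron-builder.yml publish section: replacing the default
# GitHub provider with a self-hosted generic HTTPS server. The URL is an
# assumption, not an existing Protocol Labs endpoint.
publish:
  provider: generic
  url: https://updates.ipfs.io/ipfs-desktop
  channel: latest
```

The build config is the easy part; the open question in this discussion is who hosts and maintains the server behind that URL.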

We could then have a dedicated subdomain for that, like updates.ipfs.io or something, so in those tools it’s a self-descriptive hostname. That removes the whole "why is this software connecting there" question. There’s value in that: if you’re running a Tor node, you want a specific hostname that says tor-exit-node-something-something, so when network security people are inspecting stuff in corporate environments, where they aggregate logs and a person in a different city or country looks at the aggregated logs and at the hostnames that keep repeating, those self-describing hostnames are super useful.

Okay, could we just reuse dist? We already have dist.ipfs.io, where we ship releases. The problem is that this Electron update mechanism probably requires custom manifests to be published, right? Yeah, so we either need to do a PR to the dist repo, or we need a separate domain. I think that’s not a topic for this call; we can take this async and decide. But personally, the way I see to address this is to basically move to a self-hosted solution, because I fully understand why people get uncomfortable with this.

Alright. I like the idea that we’re dogfooding, and also the fact that we will never stop fighting those battles: as long as the mission of our organization is to allow people self-agency in owning their own data and being able to communicate without, specifically, that list of software companies, I think we will always be fighting that battle for as long as we are relying on them.

Apart from security, the fact that you are sending a request to a third party is also pretty bad for the optics. And it could be just plain bad for that person as well; yeah, not just optics, it could be a threat to that person. Here in the US that’s a clear and present danger: that person could be targeted. The TLS certificate could, in theory, be replaced by some countries, and the green padlock would still be there, right?

Yep. I think the gist of this discussion is that we need to look at what’s needed to switch to a self-hosted solution. So if you remember anything relevant, please write it down; if not, we probably need to do this research, and also figure out anything we need from the infra team. If it’s static content, then it’s much easier, because we basically just publish it to IPFS and create a DNSLink to some hostname.
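For the static case, the DNSLink itself is just a TXT record on a _dnslink-prefixed name pointing at the current content hash. A sketch, with a placeholder hostname and CID:

```
; Hypothetical DNS zone fragment: a DNSLink TXT record pointing a release
; hostname at content pinned on IPFS. Hostname and CID are placeholders;
; publishing a new release means updating the TXT record to the new CID.
_dnslink.updates.ipfs.io.  300  IN  TXT  "dnslink=/ipfs/QmPlaceholderCid000000000000000000000000000000"
```

Any IPFS gateway or local node can then resolve the hostname to the pinned release directory, so the "dynamic" part reduces to updating one DNS record per release.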

If it’s a dynamic website where you need to run some software, then we need to work with the infra team, but it’s still something we need to self-host, because more and more people will be using IPFS Desktop and relying on the auto-update mechanism, and if that mechanism is supposed to ask a third party, it should be a Protocol Labs server, not someone else’s, right? I’m checking the electron-builder documentation.

They say they support, out of the box: GitHub releases, Amazon S3, DigitalOcean Spaces, Bintray, and generic HTTP servers. Here’s an idea: perhaps this is an opportunity to create an IPFS provider for electron-builder, to publish releases to IPFS somehow. Just an idea, because this needs to be sort of a plug-in system if they support GitHub and those other providers, right? Yeah.

I see. Maybe IPNS will be fast really soon, any moment now, and that would honestly be the best case, because we’d remove the reliance on DNS for updates. But even if not, going with DNSLink would be better than going through GitHub. Alright, I feel that’s more or less a conclusion for that, and for the agenda too; anything left? We’re down to two and a half minutes now. Another action-packed, high-value, highly productive 57 minutes.

It’s incredible how we always manage to fill the entire hour, no matter how many agenda items we have. But if we changed the meeting to half an hour, would we get the same amount of awesome things done? I think it would still take one hour, yeah, if we were to change it in the calendar. It was a good hour, though, I guess was my point; that was an hour well spent. I’m making the agenda for next week, so I think we should end this week’s one. And you are just adding awesome-ipfs.

Oh, right, so we don’t forget.


 

By Jimmy Dagger

Find out my interests on my awesome blog!
