
How to Set Up a New Python Project

My name is Stephanie. I am a member of the local Python user group, and please welcome Felix Wick; he will talk about how to set up a new Python project. — Thanks, and welcome to my talk. So, as we already heard, this will be about how to set up a new Python project. This will be quite an introductory talk, but there might also be a few interesting tweaks for maybe more advanced Python users.

So it might also be interesting for them. So, what do you do when you start a new Python project? You want to start implementing, but before you can start implementing, you have to set up all that stuff. You have all your ideas together, you know which packages you want to use and everything, but first of all you have to do the boring stuff before you can really start coding. And what do you usually need?

First, you need an environment. You might have one Python interpreter installed on your system and one site-packages on your system, but you usually have several projects, not just one. So you need some kind of tool to manage the different projects and the different dependencies which they might have; you need a, let's say, project-specific environment for each of your projects. virtualenv is a really cool tool to do that, and you can just use pip to install all your stuff there.

So, first a few words about virtualenv. As I said, this is just a tool to create an isolated Python environment, and what I mean by isolated is that you get your own Python interpreter for each of your projects, and your own site-packages for each of your projects. The only thing you need in order to use it is to have virtualenv installed for your,

let's say, global Python interpreter. So you just install it via pip or other sources, and then you can create a new virtual environment by just typing `virtualenv` and then the name of the virtual environment. It's just as easy as that, and this directly comes with setuptools and pip installed into your new virtual environment. There is then a small helper script, `activate`, which you can use to just add your new virtual environment to your PATH variable. That means that when you then type `pip install` something,

afterwards it just gets installed into your virtual environment, for example. And to get out of it, you just type `deactivate`, and then you can directly jump to another virtual environment if you work on different projects at the same time. So this is really a very good tool for that. Then a few words about pip. You probably all know it: this is just a tool to install and manage all your packages, and it offers installation from the package index,

whether local or remote, as a source or binary distribution, whatever you have. So, for example, to install the latest version of a package (in this case I just chose the package PyScaffold, which we will learn about in a few minutes) from the public PyPI, you just type `pip install pyscaffold`. You have probably done that very often, I think. But it gets more interesting if you need different versions of your packages in your different projects, for example.

Then you can use the requirement specifiers in `pip install`; for example, here you just take only versions greater than or equal to 0.7. And what's even nicer is that you can also use a whole requirements file to get all your dependencies at once: you just use the `-r` option, and you list all your requirements in a file, `requirements.txt` for example. If you afterwards type `pip freeze`, you see what you have installed.
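As a sketch, such a requirements file might look like this (the package names and version bounds here are made up for illustration, not taken from the talk):

```text
# requirements.txt: one requirement specifier per line
pyscaffold>=0.7
numpy==1.8.2
scipy>=0.13,<0.15
```

`pip install -r requirements.txt` then installs all of them at once, and `pip freeze` prints the currently installed packages in the same specifier format.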

In this case it's just PyScaffold and a few other things; you will see it later in the demonstration. Okay, so now you have your environment, so you can really start with your project. What do you need to do to organize it? The first thing is, you need, of course, a version control system; this you need all the time, and for this talk and for the demonstration later I just use git, because I think this is a good choice.

But there are a lot of other good tools, too. And after you have this set up, there is another thing which you have to do: you need a reasonable directory structure, and you need, of course, tests and documentation. These are also things you have to take care of, so that you have a good folder structure for them and the right tools and so on. So what is a good directory structure? First of all, a few words about what I mean in the following by module and package.

A module is nothing other than a file containing Python code, something with the `.py` ending, for example. And a package: this is a folder which contains an `__init__.py` file and all of the other Python modules which you want to have in this package. In this case, you have the package `my_package`, which is in a project which has the same name; it is usually good practice to just name your project after your package, or vice versa. And in this `my_package` you have this `__init__.py` file.

It is there, in principle, so that you can import all the stuff inside your directory structure, and you have your modules which do the actual stuff, so your actual implementation, in `module1` and `module2` in this case. You might also have some sub-packages if you are doing different things in your package, and then in each of your sub-packages there is another `__init__.py` file, so that you can import all that stuff properly afterwards.
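To make the layout concrete, here is a small self-contained sketch that creates the described structure in a temporary directory and imports from it; the package and module names are placeholders:

```python
import os
import sys
import tempfile

# Build the layout described above in a temporary directory:
# my_package/
#     __init__.py
#     module1.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "my_package")
os.makedirs(pkg)

# An (empty) __init__.py marks the folder as an importable package.
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "module1.py"), "w") as f:
    f.write("def run():\n    return 'hello from module1'\n")

# With the parent directory on sys.path, the package imports normally.
sys.path.insert(0, root)
from my_package import module1
print(module1.run())  # prints: hello from module1
```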

So this is the first step, and then you need the tests and the documentation. These usually should go into separate folders, which I just call here `tests` for the unit tests and `docs` for, for example, the Sphinx documentation. Inside your `tests` folder you should have another `__init__.py` file, and for each of your modules you should have a separate test module which just starts with `test_` and then the module name.
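Following that naming scheme, a minimal `tests/test_module1.py` could look like this sketch; the local `run` function is just a stand-in for the real `from my_package.module1 import run`:

```python
import unittest

def run():
    # Stand-in for the function under test in my_package/module1.py.
    return "hello"

class TestModule1(unittest.TestCase):
    """Tests for module1, kept in tests/test_module1.py."""

    def test_run(self):
        self.assertEqual(run(), "hello")
```

Such modules can then be picked up with `python -m unittest discover tests`, or, as described later, with `python setup.py test`.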

Then you have it in a really ordered way, you know what belongs to what, and you can afterwards start the tests with, for example, setuptools or whatever; you will see this in a moment. In the `docs` folder you have all your reStructuredText files, for example this `index.rst`, which is more or less the index.html of Sphinx, and you might have a `conf.py` and a Makefile,

if you have those things set up beforehand. And there is also another file, which is called `setup.py`, about which I will also talk in a few minutes. So, what is the next step? When you have implemented everything and have everything in order, then you want to give the stuff to other people; you want to distribute it in some way. How to do that? Python comes with an on-board tool for this, which is called distutils.

This is basically the `setup.py` which I showed you. But there is also something called setuptools, which you may have heard of and which comes with virtualenv, and which is more or less just an extension of distutils. Usually I would just recommend setuptools; then you have all this stuff included. There might also be distribute around.

That is another thing, something in between: it was kind of a fork of setuptools, but it's more or less an older version. So just use setuptools, or distutils if you don't want the additional features. Okay, then, if you have your package and everything in order and you want to install it into your virtualenv, you just type `python setup.py install`, and this installs everything which you have just implemented into your virtual environment. What is helpful during development is the `setup.py develop` command, because then just links are created in the site-packages of your virtual environment, and this means that if you change something in your source code,

then it directly changes the installation in the virtual environment; otherwise, you would have to type `python setup.py install` again after you have changed something. Okay, and if you want to pack it and ship it, then you can just use the `setup.py sdist` or `setup.py bdist` commands, meaning a source distribution or a binary distribution. By default this will be a zipped tarball, and the binary distribution is of course dependent on the machine.

We will also see this later in the demonstration. Okay, so now we have done all this stuff. But if you want to ship something to somebody, then you have to give it a version, because you will have a new version a few weeks later, a few months later; one was maybe 0.1, and at some time you have a 1.0 or something. So you need to update that version information all the time.

That means you need to update a version attribute of your package or module, you need the version identification in the `__init__.py` file, and you need to update an argument inside `setup.py` itself, the metadata of the setup.py, so that you have the right name for your package and also the right version in the metadata for the PyPI server, for example. And it is really cumbersome to do this manually, because if you really have to do it all the time, you will forget it, and then you will have to do it again. So you need something which just does this automatically for you, and there is a cool small tool for that.

This is the versioneer package, which manages the versions via git. It means that you can just do a git tag if you want to release, say tag version 0.1, and afterwards you directly have the right version number; and if you then do an sdist, you directly have the version of your last git tag. This is really cool and makes it really simple. Okay, so this would be the, let's say, basic behavior of the setup.py. And if you then want to run your tests and your documentation also with setup.py: the runner for the unittest package just comes on board.
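As a rough illustration of the idea only (versioneer itself is considerably more elaborate), deriving a version string from the latest git tag could be sketched like this:

```python
import subprocess

def git_version(default="0+unknown"):
    """Return `git describe` output for the current repository,
    falling back to a default when git or a tag is unavailable."""
    try:
        out = subprocess.check_output(
            ["git", "describe", "--tags", "--always"],
            stderr=subprocess.DEVNULL,
        )
        return out.decode().strip()
    except (OSError, subprocess.CalledProcessError):
        return default
```

A setup.py could then pass `version=git_version()` to `setup()`, so tagging a release is the only manual step.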

Just type `python setup.py test`, and it runs all your tests in your `tests` folder; if you have set up your setup.py correctly, it just runs them all through with unittest. But maybe you want to use py.test instead, because unittest is okay, but maybe you want to do coverage or something else. Then there is the possibility of this `cmdclass` argument in setup.py: there you can just define your own command classes, let's say, where you can use py.test

instead of unittest, for example, and just run it with `setup.py test`; or you want to do something like pyflakes or whatever. You can thus do everything you want from within setup.py. And what is quite a good thing to do, for example, is to build the documentation with it: you override this `cmdclass` thing, and then you can, for example, run `python setup.py docs` and build your documentation with setup.py itself.
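A custom command can be sketched roughly like this, under the assumption of the setuptools `Command` interface; the `docs` command name and its body are placeholders (a real one would invoke Sphinx or py.test):

```python
from setuptools import Command

class DocsCommand(Command):
    """Placeholder for a `python setup.py docs` command."""

    description = "build the documentation"
    user_options = []  # no extra command-line options in this sketch

    def initialize_options(self):
        pass  # required by the Command interface

    def finalize_options(self):
        pass  # required by the Command interface

    def run(self):
        # A real implementation would call Sphinx here, e.g. `make html`.
        print("building docs ...")
```

It is hooked in by passing `cmdclass={"docs": DocsCommand}` to `setup()`.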

Otherwise, you would always have to type `make html` or something like that. Okay, so now I have talked a lot about setup.py, so you'll see a bit of this boring code now. In this case, I just imported from setuptools the `setup` function, which is basically where everything goes, and a small helper function, `find_packages`. What is really the basis of this setup function is the name and the version.

This is also what your package will be called afterwards, so your tarball, for example, if you do an sdist. And you need to tell it, via `packages`, what has to be included in the distribution; there you can, for example, use this `find_packages`, where you include everything but just exclude the `tests` folder, for example. And this `install_requires`: this is checked beforehand, before you install your package.

If all this stuff is already there, fine; and if it's not there, it will be installed afterwards via easy_install. But, as we have already done it beforehand with our requirements file, we can also at this position just use the requirements file, for example, and then have it all in one place; you will also see this later. The `test_suite`: these are just the tests which need to be run by our `setup.py test` command, as we have already seen. And a really nice functionality is this entry points thing.

In our package `my_package`, which we had before, there was this `module1`, and inside this `module1` there might be a function `run`, and this function `run` maybe gets some arguments. If you set it up this way, you can afterwards just type `run` on the command line, with its options, and it will run this function. This is an interesting thing if you, for example, want to start a web server or whatever,

via this entry point. Okay, so to set up a Python project you need to do quite a few things: you need to think about a good project structure, you have to set up a git repository or another version control system, you have to do all the tweaks with your setup.py (there are a lot more commands than the ones I've shown you), and you have to take care of the versioning and everything.

All this stuff is pretty, let's say, tough if you are a new Python user, or at least boring if you are experienced, so you don't want to do this all the time again. This was also the case for us: when we did this over and over again for small projects, that was why we implemented a small tool, which is called PyScaffold, which just does this for you.

You can just install it via PyPI, so `pip install pyscaffold`, and the sources are available on GitHub. It only requires virtualenv and git to be installed on your system, and setting up a project is then just as simple as `putup my_project`; afterwards you will have everything. So now I'll just give you a small, or short, demonstration. Let's say we want to have a virtual environment first: already done. So now we have a virtual environment in our directory.

We still have to activate it; now it's activated, and it changed the prompt, which you also saw on the slides. So now, what do we want to do? We want to just install PyScaffold, and if the network works, this will do it. Okay, done. Let's check what we have inside our virtual environment: PyScaffold is there, in version 2.7 at the moment. Now we need a new project, so just `putup`: already done. Then we have our project folder, my_project.

Inside there we have now, for example, this setup.py; we have the package folder my_project; we have our tests folder and our docs folder. If we look into tests, for example, there is only the `__init__.py`. Also in my_project there is another thing, this `_version.py`; it comes from versioneer, so I don't have to go into details about that, and you don't need to care about it.

Usually, anyway. Then we have this requirements file, for example, where you could now put all your requirements; there will be, for example, a numpy or scipy in a moment. And this MANIFEST.in is also something which you might know; this is about sdists. It is a kind of template file where you can just declare what you want to include in your source distribution. And what is good practice is to have something like a README here, which you could also use as meta information

in your setup.py, for example, to have this directly on PyPI. And now, let's see what we wanted to do: we wanted to implement everything, already done. Our version was unknown for the moment, so we might tag a version, which might be version 0.1, and we call it 'first' or something. If we then look at gitk, you can see almost nothing, but there is now a new tag in there. So now we could, for example, do `python setup.py sdist`, and now we have a folder `dist` in here, and here we have our tarball.

So this is our my_project tarball: this is now the version which versioneer gets from this git tag, plus the name which we gave in our setup.py. Let's have a short look into the setup file, so that you see what's actually happening there. We didn't need to do any of this on our own: it just took our folder and meta information, it uses this `cmdclass`, and it also has the docs, the setup docs, and the testing already configured.

Basically, what else did we want to do? We wanted to have maybe something like a bdist too, so that you see this as well. This is then another tarball, which is dependent on your machine. And if we now want to run, for example, the tests, it is as easy as that; and as we have no tests, the coverage is perfect. And if we now want to build, for example, the docs, we see this will fail, because we have no Sphinx installed up to now.

That is the only thing which you would then need to do; oops, this way. So, done, and this will then just build your documentation; it takes a moment. Then we have everything in our docs folder, for example. If we now want to have a look at how this will look: just close this and look into it. We have our project, and there is the docs folder, and then it goes into the HTML, and there you have just your index.html,

and you see your documentation, for example, which we just wrote. This is now more or less the module reference and what you could have in your index.rst. And that's basically it, so if you want to know more, you can of course ask now, or just come to our booth afterwards, and then we can discuss everything. — Thanks a lot for the talk. Are there any questions? Please come forward to the microphone, so everybody can hear. — What about wheel packaging? — Ah, wheel packaging: you could just do that with `setup.py bdist_wheel`, for example, and then you directly have the wheel there.

Usually wheel is a good thing to do, so we can just do it. It would be something like that; we need, of course, to install wheel beforehand. No, okay, not a good choice. — That's why I'm asking: it's always missing, although it's the new cool format. — So yes, you can do it: of course we need to install wheel first, and then we can do it, and then we also have it as a wheel in our dist folder.

Thank you. Are there any more questions? Okay, thanks a lot.



Rethinking packaging, development and deployment

I'm going to talk about the tool set that I've been using for the last year and a half, called Nix, and how it applies to the whole stack of packaging, development, and deployment, basically as used with Python. We all know it's quite a depressing topic, but it's getting better, and one of the main things that I really, really, really hate about

it is that we have the setup.py that is dynamic, and whatever you do, we have to run this dynamic script, at least run the `egg_info` command, to get something out of it. That's why, for example, we don't have dependency metadata on PyPI, and so on. The Node.js community, for example, has this simple package.json file that is static: you write it down, and you can easily parse packages and do stuff with them. But there is hope: there is PEP 426, Metadata 2.0, which specifies basically a JSON metadata format for packaging, and hopefully people will then generate this file, put it into a distribution together with the Python source, and we will have static metadata available. It's in draft mode, so who knows when this will land upstream, but yeah, there is hope. And the second point is, we have a lot of legacy infrastructure which is kind of tied to this setup.py.

But there is now the Python Packaging Authority group that's working on this, and I think they and all the contributors really deserve an applause for their work. And then there is this scary third part of the problem: the non-Python dependencies. This is the problem that every community tries to solve by, you know, building an infrastructure to package all the other stuff that is not just Python. We all share this goal, but maybe it's time, you know, to look around and take something else.

You know, nowadays the JavaScript stack is basically inevitable: you always have a JavaScript stack in your tool set. And we can either build all the tools in Python to process JavaScript dependencies and so on, or we can take the JavaScript tool set as is; but then we need a tool that will actually package Python and JavaScript together for our application, and there might be even other things.

So, the Nix project was basically started 11 years ago, and it was developed by Eelco Dolstra as part of his PhD. The PhD thesis talks about dependency hell and how to approach it, and it was done at a university in Utrecht, in Holland, in a functional-languages department. The idea is to take functional-language formal thinking and apply it to the packaging problem, and it turns out it really, really fits the problem. So basically Nix is two things: it's a package manager, and it's also a language, which we also call Nix expressions.

It's a very minimal language: it's basically configuration files plus lambda functions, and a little bit of other stuff. It's lazily evaluated; that's something that we're kind of not used to in the Python community that much, and you have to get used to the idea that only when something is actually touched is it actually evaluated. That gives it really a lot of power for configuration, which I will show a little bit later. It's a standalone package manager.

You can install it on any POSIX system; the official support we have is for Linux, Mac, and FreeBSD. It could work on Windows if a company would sponsor that work, but currently that support is basically discontinued. So, what is a purely functional language? I will give a very vague description in the sense of a software package: basically, a software package should be the output of a function that is deterministic, and it only depends on the function's inputs, without any side effects.

When we describe packages in Nix, the metadata that we put in those files is the only thing that should affect the package, and nothing else; we call this purity in Nix. There you see an example package: this is the prefix where the package gets installed. `/nix/store` is like a flat repository for the packages, and then you see the cryptographic hash, and then the name of the package and the version.

Every package is stored in this separate folder, and because we want the outputs to be deterministic, we want to make them immutable: the whole Nix store is mounted read-only, just to be sure that nobody will touch it, and all the timestamps are set to UNIX epoch plus one, and so on. And this hash that you see there is basically the hash of all the inputs. So if this theory works, that the result should depend only on the inputs, then if we hash the inputs, we can uniquely identify a package.

So, can you actually see this? I hope you can. This is an example nginx package, how you would package nginx; it is a little bit simplified compared to what we currently have in the repositories. At the top you see our anonymous lambda function; it gets the dependencies as arguments, so those are just other derivations passed in. And then we call `stdenv.mkDerivation`, which is the main function that does all the heavy lifting, and in there you basically see:

We call it an attribute set in Nix, but it is basically a dictionary. We pass it the name and version, we tell it where to go to download the sources, we tell it the dependencies, which are called `buildInputs`, some configure flags, and then just some description of the package. All this is basically passed to a bash script that goes through different phases and knows what to do with this metadata.

And what you see here is basically what gets hashed: these are all the inputs to build nginx, all the information we need. And of course there is a dependency graph of the packages; OpenSSL, zlib and so on are also written in Nix. So this is a quick example of how powerful Nix is. If you look at this file, we want to override things, basically the lambda function at the top and the metadata of the package.
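Reconstructed from that description, the simplified nginx expression might look roughly like this (the hash, version, and flags are placeholders, not the real nixpkgs code):

```nix
# Anonymous function taking its dependencies as arguments.
{ stdenv, fetchurl, openssl, pcre, zlib }:

stdenv.mkDerivation rec {
  name = "nginx-${version}";
  version = "1.6.2";

  src = fetchurl {
    url = "http://nginx.org/download/nginx-${version}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };

  # Dependencies; these are themselves Nix packages.
  buildInputs = [ openssl pcre zlib ];

  configureFlags = [ "--with-http_ssl_module" ];

  meta = {
    description = "A reverse proxy and lightweight web server";
  };
}
```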

We want to give a user of this distribution, or package repository, the power to change anything. The top line basically calls the lambda function with new arguments. So it overrides the lambda function, and we can say: okay, let's take another OpenSSL and feed it in, and then we get a new nginx package which has a different OpenSSL version. And we can also override the derivation itself; for example, in the bottom example,

I override the source: we can, for example, take nginx from git. This is what you can do in user space, because sometimes you have to change what upstream does. So, to install Nix on your distribution: I mean, just from a security point of view people will go crazy about this, but you can download the script, see that it's not doing that much, and run it yourself; basically this is the easiest way to install it. And because everything is stored under /nix, you can just remove /nix and you don't have your package manager anymore; you also have to remove the profile in your user home, and you're done.
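The two override mechanisms just described might be sketched like this (all names and the git details are placeholders):

```nix
# Feed a different dependency into the package function:
nginxWithOtherSsl = nginx.override { openssl = someOtherOpenssl; };

# Override attributes of the derivation itself, e.g. the source:
nginxFromGit = nginx.overrideDerivation (old: {
  src = fetchgit {
    url = "https://github.com/nginx/nginx.git";
    rev = "...";      # pin a revision
    sha256 = "...";   # placeholder hash
  };
});
```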

Okay, so this is basically where everything comes together, and there are a lot of things to explain around how Nix works. We said that we have /nix/store, and inside there are the packages; you can see a versioned Firefox there. Somehow we need to get to the file system layout that you are used to nowadays, and that is basically joined together into a user environment, which you can see on the right. This is basically your environment where all the binaries and libraries are stored, under bin, lib, and so on. And because we have this set of packages, wouldn't it be cool if we could have multiple of those, not just one on the system?

This is what we have so-called profiles for in Nix. I will talk later about NixOS, which is a distribution built on top of the package manager, and there we have a system profile, which is basically your distribution. But then each user gets a profile, and you can create profiles on the fly, per project, and each profile has its own life cycle of how you install packages into it, upgrade them, and uninstall them. And basically the profile

also has a whole history of what changed. Basically, when you install a package, you get a new user environment with that binary inside, and the profile gets another version number in its history. And basically the last step that actually does something to the package manager state is that the symlink /nix/var/nix/profiles/default is, at the end of all operations, switched to the new user environment. And because symlink replacement is atomic in POSIX,

that means we have atomic operations for installing, upgrading, and uninstalling packages. Then on the left you see that each user has their own profile, and that means that users without root access can install packages for themselves; of course, you can disable this if you want. So, one of the really cool features is that Nix is a source and binary based collection of packages.

This is very unique, and the way this actually works is that we have a so-called build farm called Hydra, and we build all the packages there. Because the hash basically uniquely identifies a package, you can ask the Hydra server: do you have a package with this hash? If it has this package, Nix will fetch the binary, and if not, it will go and compile it.

This is something that companies then use to set up Hydra on their own servers and have their own, basically, continuous integration tool for building the packages. And since Nix 1.7, I think, there is also support for SSH, so you can do the same thing over the SSH protocol, not only HTTP. So, I don't know if this is going to work, but let's try it out.

So basically, this is NixOS, which is a little bit different than if you only use Nix, but my vim binary points to the vim binary that is stored inside the Nix store. Let me show you the Nix store; you see that there is a bunch of stuff in there. This is the whole thing. And, for example, if we look at the linker information of vim, you will see that all the dynamic libraries point to precisely one package in the Nix store, and that makes it very deterministic: you know for sure that if you build this on two different machines, you will get the same result, provided you use the same source of Nix packages.

Right, let's talk about Python. Of course we also have a collection of Python packages, and we have this function called `buildPythonPackage`, which is basically a fairly thin wrapper around `mkDerivation` that knows about distutils and setuptools. This is, for example, how you would package Pillow: provide the dependencies, the source, and the metadata, and `buildPythonPackage` will know how to run `setup.py build` and then `setup.py install` at the right phases.

You can check inside the nixpkgs repository how it works; it's just about 200 lines for the whole implementation. So, when you have a lot of packages (for example, I also do Plone development, and we have like 250 packages), you don't want to do it by hand, so there are basically two tools for this. python2nix basically just goes there, grabs the tarball, gets the hash, and spits out a very generic template; if there are some non-Python dependencies, you will have to fix that on your own. And there is this cool tool called pypi2nix, which we will also be working on during the sprints (there are quite a lot of its developers here), that tries to handle all the edge cases and automatically fetch packages from PyPI and then generate these Nix packages
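A `buildPythonPackage` call for something like Pillow could be sketched as follows (URL, version, hash, and dependency names are placeholders):

```nix
{ buildPythonPackage, fetchurl, libjpeg, zlib, freetype }:

buildPythonPackage rec {
  name = "pillow-${version}";
  version = "2.3.0";

  src = fetchurl {
    url = "https://pypi.python.org/packages/source/P/Pillow/Pillow-${version}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };

  # Non-Python build dependencies, e.g. image codec libraries.
  buildInputs = [ libjpeg zlib freetype ];

  meta = {
    description = "Fork of the Python Imaging Library";
  };
}
```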

We have these tools for Node as well, and for R, and so on. Right, so that was packaging; let's move to development. Wouldn't it be cool if we had a tool like virtualenv, but on the layer of the package manager, not just for Python software? You would activate an environment and you would get gcc and all the other non-Python dependencies and tools available, and that's what nix-shell does. Basically, how nix-shell works is that it will build all the dependencies of your package.

It will source all the information it has about those and, instead of actually going and building the package, it will get you into the shell in which it would actually build the package. So you have everything available there. And there is a cool, well, not really a hack, because it's also meant to be used this way, but there is a cool feature where you can say that you're not building any package.

You set src to null, and then you just provide the buildInputs, and you say nix-shell, and you get only these dependencies available in your shell, for example. And this works on any POSIX system, so you can give this to developers and they will always get the same environment, with the same gcc and so on and so on. There is also a flag called --pure; by default,

nix-shell will inherit your current environment and you will have all your tools available, and --pure basically means that it will not do that: you will have only the tools available that you list in the buildInputs. So, let's... I'm sorry for the font size, I hope you can still see something. This is basically an activated nix-shell. I did this before on my laptop; otherwise it would go and download those packages from Hydra, but the network here is a bit flaky.
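The src-to-null trick described above can be sketched as a small shell.nix (the package set here is illustrative, not from the talk):

```nix
# shell.nix -- a derivation that is never actually built; nix-shell
# only uses it to bring the buildInputs into scope.
with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "dev-environment";
  src = null;                      # nothing to build
  buildInputs = [ git gnumake ];   # the tools you want in the shell
}
```

Running `nix-shell` then gives you git and make on top of your normal environment, while `nix-shell --pure` drops everything except the listed buildInputs.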

So basically now I have it available. Let me just show that file, yeah. So now I have git available, and if I do --pure here and then git, it will say it's not available, because it will not inherit it, and this is one way to make sure that you have all the tools in your nix-shell. The same thing goes... I used this trick to actually install MediaCore on CentOS, because I just didn't want to bother with Python there.

So I just used the whole Nix stack of packages and used nix-shell, and then I have everything available to run virtualenv and install, and that's it. The same goes if you have a Python package. So this is, for example, a Python package from one project that I did: I have GStreamer in there, D-Bus, and all kinds of things that are hard to package normally with Python. And there is this cool trick: we have a variable called inNixShell, so when you actually run the shell, this will be true and we can add the extra dependencies in that case, and if you only build the package, those dependencies will not get into the derivation.
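The inNixShell trick can be sketched like this (the project name and the extra development dependency are made up for illustration):

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.pythonPackages.buildPythonPackage {
  name = "myproject-0.1";
  src = ./.;
  buildInputs = [ pkgs.dbus ]
    # lib.inNixShell is true only when entered via nix-shell, so these
    # extra development tools never end up in the actual built package.
    ++ pkgs.lib.optionals pkgs.lib.inNixShell [ pkgs.pythonPackages.ipython ];
}
```

Building the package ignores the shell-only inputs, while `nix-shell` brings them into scope.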

So then, okay, we have this set of packages. How could we extend this idea of a functional language to the whole operating system and build a distribution on top of it? It turns out that, yes, this works really nicely. When you think of it, a configuration file is basically just one file, and a software package is a bunch of files. The only difference is that your Linux distribution will package software for you, while the configuration files are what you write yourself or change some defaults in.

Nix is basically the language that we have now, and you can use this language on both sides. So NixOS basically uses a stateless approach to configuration. For example, Puppet and Chef have a declarative configuration in front, but at the back they basically still execute step-by-step imperative actions.

They check if nginx is up, and if not, whether the network is started, and so on, and there are a lot of edge cases to cover here, so a lot of errors that you can run into. Here, basically, the way it works is: if something changes... I will show later an example of how to set up a systemd process, and if any parameter of that systemd process changes, then it will know that it has to restart or reload that process.

So it all boils down to data going through these functions in Nix, and when something changes, it will do an update. Here you can see a minimal configuration: I just configure monit, and you then say nixos-rebuild switch and it will activate and get the machine into this state. One of the things that's also good to mention here is that NixOS is basically DevOps from the beginning.
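A minimal configuration like the one on the slide might look roughly like this (the monit option name follows the usual NixOS services.&lt;name&gt;.enable pattern; the exact spelling is an assumption):

```nix
# /etc/nixos/configuration.nix -- activated with: nixos-rebuild switch
{ config, pkgs, ... }:

{
  services.monit.enable = true;
}
```

nixos-rebuild switch evaluates this, builds everything it implies, and moves the machine into the described state.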

You don't want to go changing configuration files by hand; by default you have one file to specify what you want your machine state to be, and you execute it. And then we have a tool that basically does provisioning of cloud servers and so on on top of that. So, for example, if we wanted to deploy Pyramid, which I'm using in my day job, basically we would import the default.nix file that we were using before for development.

So the project is already packaged, but then we would say which package it is, run the tests, and write the production.ini file to the Nix store. writeText is basically a function that will write a configuration file to the Nix store. Then we declaratively specify: okay, we have a systemd process that should start with Pyramid's pserve and pass the production.ini file. And, for example, if the production.ini file changes here, then the hash of this service will change, and it will know it has to reload or restart it, and so on. And then, of course, on top of that, we want to use a provisioning tool, right? And this is the minimal example of how to provision NixOS machines.
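Putting those pieces together, a sketch of such a NixOS module might look like this (the package import path, service name and ini contents are placeholders for illustration):

```nix
{ config, pkgs, ... }:

let
  myapp = import ./default.nix { inherit pkgs; };

  # writeText stores the file in the Nix store; if the contents change,
  # the store path (and therefore the service definition) changes, and
  # NixOS knows it has to restart or reload the service.
  productionIni = pkgs.writeText "production.ini" ''
    [app:main]
    use = egg:myapp
  '';
in
{
  systemd.services.myapp = {
    wantedBy = [ "multi-user.target" ];
    after = [ "network.target" ];
    serviceConfig.ExecStart = "${myapp}/bin/pserve ${productionIni}";
  };
}
```

The string interpolation of `${productionIni}` is what ties the service's hash to the ini file's contents.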

You install NixOps and you specify, for example, this trivial machine: we have a web server running Apache, serving some static files. This is the logical configuration, and then we have the physical, which is basically where we want to deploy it. We say: okay, the backend is VirtualBox, give me one gigabyte of memory. And I have a trivial example here, because NixOps supports Amazon, Hetzner and now also Google Compute Engine; it's a bit experimental and all growing.

And then you would say: create this configuration, and then deploy it, and it would actually provision the VirtualBox, and you would have Apache running in your VirtualBox. I don't really have a demo for this, because it would take a while to actually show it, but just to show you the whole stack: when I actually deploy my projects, I have three files. One is default.nix, which is for the development and the building of the project; one is the example machine file, which defines the logical configuration; and one defines the physical state of the machine.
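A sketch of such a trivial NixOps deployment (option names follow NixOps conventions; the admin address is a placeholder):

```nix
# trivial.nix -- logical and physical specification in one file
{
  webserver =
    { config, pkgs, ... }:
    {
      # logical: an Apache server serving some static files
      services.httpd.enable = true;
      services.httpd.adminAddr = "admin@example.org";

      # physical: deploy to a VirtualBox VM with 1 GB of memory
      deployment.targetEnv = "virtualbox";
      deployment.virtualbox.memorySize = 1024;
    };
}
```

`nixops create trivial.nix -d trivial` followed by `nixops deploy -d trivial` would then provision the VM and bring up Apache inside it.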

In the end that is VirtualBox, or it can also be Hetzner, so it can be four files. If we look at those files: first default.nix; it's a little bit abridged, but basically you have just buildPythonPackage, the name, the sources (the current directory) and the dependencies: pyramid. And this is, for example, the configuration of a machine that would launch Chromium full screen in kiosk mode and serve the Pyramid application.

So at the top we say: okay, import the package; then enable the X server, enable the display manager, the window manager and so on, and set the desktop manager session to a command which basically waits three seconds and then runs Chromium at localhost:8080. Lower down you see again the configuration of the systemd service for Pyramid, a very simple example, and at the bottom you see how we declaratively define a user called guest that we use for this.

That guest user is for the Chromium graphical interface. And this is basically then the whole configuration of the machine. So you can see: the package was about 10 lines, this is about, I don't know, 100 or 150 lines, and the VirtualBox spec is a few lines as well. And this is the whole packaging, development and deployment stack that you then use. Let me still try to run it, so now it tries to download the base image.

We can wait for a little while, but there is no point. So basically now it would first download the image, then all the dependencies, configure them, launch the VirtualBox, copy all those Nix store packages inside and then activate it, and you would get the full-screen Chromium with the Pyramid application running. We are having the NixOS sprint in Slovenia, in Ljubljana, where I'm from, the 22nd and 23rd of August.

So this is a great opportunity to meet the developers and talk to them. The core developer Eelco Dolstra will probably also be there. This is basically the image from last year. Just as shameless plugs: I wrote a blog post going a little bit more into detail about why NixOS tries to solve this problem in a little better way than other solutions, and I had a talk at FOSDEM about NixOS.

It was more focused on NixOS itself; there is a video on YouTube if you want to watch it. And of course, check out nixos.org, and you're welcome on freenode to stop by, say hi and meet the community. Now, questions? Yeah. So, it looks quite interesting, and I didn't know NixOS before, so it looks like my Puppet, my Vagrant and everything I can throw away, get Nix, and it all runs out of the box.

Why isn't that widely adopted so far? Can you compare: what are the differences, what are the pros and cons, and so on? So the question is: why is it not that popular yet, in comparison to Vagrant and all the established tools, which are of course very different and have different DSLs and so on? And of course, it would be awesome to have one approach to this kind of problem.

I mean, the short answer would be: we need more marketing people. The long answer would be that it's only for about a year or two now that we have had NixOps and nix-shell and so on, and now I think this stack is finally ready to be used. Two of the biggest companies using it are LogicBlox and Zalora, and they have about 100 servers provisioned with this, and the community is really growing, especially the Haskell community.

Basically, there were a few blog posts in the last few months explaining how people develop Haskell with Nix, and it went very viral, and I would love the same to happen in the Python community, if we actually want to solve these problems. And I think now is the time: we'll really see a lot of new users. You can see that on the mailing list, on IRC and everywhere, and I hope that we get there.

I hope that we get to the same point. It looks really interesting. You said that it's supported on POSIX systems; does that include Mac? Yeah, yeah, and we have quite a bunch of unhappy Homebrew OS X users that now use Nix to get their packages. But we don't have that many FreeBSD users yet. Thank you. It's more of a... just one thing that maybe was not clear: what it does, like, what are the benefits; it's not a question.

It's just something I missed from the talk: what are the benefits of actually starting to develop with Nix, using Nix with Python? What you get: in our company we got a development environment, and we kind of switched to a continuous development environment, not only for deployments; we use this thing also for continuous development, right. So each time a developer comes, in a snap of a finger you have a development environment ready, and no virtualization, right?

It's just your system, so it's quite a different way of quickly getting started with new projects; that was a major boost for us. Cool, thanks. Hi, thanks for a great talk, I had a few questions. One is that you mentioned binaries briefly. Does Nix itself provide binaries? Do you expect someone wanting to use binaries for deploying stuff with Nix to roll their own binary storage or something? Yes. So, the Hydra project: basically you can host it yourself, and then it will build the binaries of your customized packages or your projects in your company, and you can point both to the official Hydra and to yours, and it will just ask both for binaries and fetch them.

Mine is more security related. Say you find a bug in OpenSSL or something. What I'm counting on is: if I run a normal update, say with apt on Ubuntu, I get the new OpenSSL, which is ABI compatible, and the only thing I need to do is restart my services for them to end up using the newest version.

But since I saw that you link specifically to certain versions of software, how do you solve this in Nix? Does that just mean rebuilding everything? So, yeah, this is one of the problems that we basically have. If you change OpenSSL, then Hydra has to recompile all the binaries. The last time there was a hole in OpenSSL,

I think it took like one day or something, and that's of course unacceptable. But we now have an option called, I think, security updates or something in NixOS, and basically there is a hack around it, so that the hash will not change and you don't have to recompile everything. You say the original library was this OpenSSL and the new one is that OpenSSL, and it will replace the old one in everything that uses this OpenSSL library, without going and rebuilding everything that depends on it, and that way you can really, really quickly

update your server. And if you're using NixOS, it will also know which processes were using OpenSSL and go and restart those. The hash of a package depends on its inputs, which, say for a pure Python package, would be the source and Python itself, so you can rebuild it and get the same version. But is there anything that also ties it to the version of the Nix toolchain that you used to build it? Because if there was a new feature introduced in the Nix toolchain or something, is there a way of basically rebuilding a package from, like, two years ago exactly as it was at that time, if basically Nix upgrades or something? Yeah, but Nix itself is upgraded separately.

So the Nix toolchain doesn't affect the hashes; but everything else, down to GCC and glibc and so on, is basically a dependency of your Python package then. I'm not sure if that answers your question. And, um, I believe the Nix toolchain, if you build a binary, will change the rpath of some binaries etc. But if that behavior slightly changes, can you rebuild it with the specific version of the Nix toolchain that was used? Is that part of the hash, just as the dependencies are? I'm not sure I understand the question, maybe...

Yeah, come to me afterwards and we can talk. Okay, any more questions? In your slides I missed a version definition when you listed the dependencies; where are they defined? So basically, this is kind of like the Ubuntu style, where the version is tied to the name. The version is basically not important in Nix at all, because that's just metadata, and when the nixpkgs repository changes, you will just get the new package, and inside there,

of course, there is the version name, but we don't do any detection or anything about the versions. You showed this Pillow example; say I want some specific GStreamer version, and it was part of the name? Yeah. So when you have GStreamer version 1 and the version before 1, then we basically have two packages, and you can pick which one you want to use, and you can always override the source and get another version.

If you want to change the upstream default for your project or server or whatever. What is the difference between Nix and Docker? So, basically, Docker tries to isolate environments from your system, right, and provides a very nice API on top of that, while Nix basically tries to solve the packaging problem and the configuration problem. So these are not... I think those two things go together; you can use

Nix inside Docker if you want. Of course, people are also using Docker to solve the packaging problem by providing a huge binary blob, but that's another discussion, and in Nix you don't have this problem. But it's still nice to have, you know, those lightweight containers to experiment around with and so on. That's the very short answer. Cool. Okay, one more question. Yes? Do you use Nix in web development? Because you showed a lot of stuff about the OS dependencies and OS package dependencies and even Python dependencies; let's say right now it's for the backend, but in our company we have a lot of struggle with packaging and deploying services with, let's say, for example, JavaScript with Bower and so on.

So how does Nix apply to that? I know that you can declare your own sources and they can be JavaScript sources, but do you have, for example, a JavaScript repository, and how does it apply to the packages, so that, for example, Python code finds those JavaScript libraries and so on? Because this is a crucial problem for us. Debian, for example, can handle its own dependencies and it's fine; you can run your own PyPI repository and it's fine; but gluing pip and Bower together, for example, that's a struggle. How does Nix apply to that? Yeah, that's exactly where Nix shines really well. So we have a tool,

bower2nix I think it's called, to generate Nix packages from Bower upstream in the nixpkgs repository, and then you would go into your project and do the same for all the extra stuff that you want. Basically, Nix then knows all about both sets of packages and you have those available, and then you have all the Python dependencies available, and then you use mkDerivation and nix-shell to develop on that, and it will expose those packages for you to use.

It's really hard to explain this without an example, but there is a blog post; if you google around, you will see how it's used for Node packages. For Bower I don't think there is one, but it's the same thing, it's just the frontend. And this is exactly where Nix really shines: when you have to combine two stacks together.



Learning Chess from data

My name is Tom Ron, and I will also present for Rosati, who couldn't attend; you can see both of us on the slides, and the code is on GitHub. So what am I going to talk about today? Learning chess from data. Everyone wants to make the computer win; we're a bit modest and we just want to make the computer play chess. Okay, so what's on our mind: we want to know if a computer can learn chess only by looking at data of chess games.

There are many questions that can be asked in this domain. We're going to focus today on two of those questions. One is: given a board state, can we do a specific move, is it a legal move? And the other one is game over: given a board state, is it a checkmate, has the game ended? Of course, if those are possible, then the sky is the limit, and what else can we learn about other systems, maybe some physics and other things? I want to mention that this is work in progress; we're still working on it.

We have additional and further ideas, but I came here today to show you what we have done so far. Okay, so let's start with what we know about chess. First of all, there is some constant tension between features that we allow ourselves to know when doing this learning process and features or other things that we want the system to learn. But first, we know that there are two sides, two parties, who play the game.

We know that a game ends with either one winner or a tie; no two winners or other situations. We know that the board is eight by eight and doesn't change through the game. We know that there are different pieces that have different, unknown properties, such as: how can the pieces move, can they capture other pieces, what happens to them when they get captured, maybe promotion for pawns, and so on. Okay, so the dataset we worked on is given in algebraic chess notation.

If we have some time at the end, I'll show you how it looks, but the idea is that every square on the board is represented by a letter, a to h, and a number, one to eight, and a move is basically done from one square to another. Usually only the to-square is written, when there is only one piece that can do the move; if it's not clear, then both the to-square and the from-square are written. We ignored the metadata in this set, such as player ranking, location and so on. We had a bit more than one hundred thousand games with full or partial description.

There were many games that didn't end in a checkmate or a tie; they just ended in the middle. And we had a bit more than eight million moves, with some distribution between the different pieces. We used a Python package called python-chess; it allows us to parse algebraic chess notation, provides the board status, and provides methods like "is this check", "is this checkmate" and so on. And some matplotlib for plotting, and NumPy.
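The python-chess API mentioned here gives you exactly the board-status queries the talk relies on; a small sketch (the PyPI package is "python-chess", imported as "chess"):

```python
# A quick look at the python-chess API used in the talk.
import chess

board = chess.Board()        # standard starting position
board.push_san("e4")         # play a move given in algebraic notation
print(board.is_checkmate())  # False: the game has just started
print(board.is_check())      # False
```

Iterating over a game's moves this way is how the raw notation gets turned into board states and move records.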

Basically, we thought we would not have big enough data to need MapReduce. At first we were going to build it with MapReduce, but for now it was enough to do it on a single machine; maybe sometime in the future. So the first question we addressed, before game over: can we do a simple move? The most naive approach would be: okay, have we seen that move before?

By this move I mean the board status plus the move I want to do. If we have seen it: yes, good, do it; if not, try again. But maybe there's just not enough data, so I haven't seen the move, or maybe it's not legal and therefore I haven't seen it. It's not efficient in either running time or memory, and, of course, there's no learning going on here. So let's move to our second try.

For each move we made, we computed the difference between the from-square and the to-square, and we drew the diff histogram. For example, if a pawn moves two steps, as it can on its first move, then the x difference is zero and the y difference is two, and we did some adjustment between black and white so that it is side-relative. Now you can see those histograms. This is the one for a pawn: a pawn can move either one step forward, two steps forward, or one step forward and one to either side.

This is how the bishop moves; this is the rook, only straight lines; this is how a knight moves, which is kind of nice; and the king, where you can see that the king can move one step to each side, plus castling to one of the sides. Okay, so the pros of this approach: it's very good for common moves, and it's getting better as the data size grows, of course, and it's fairly time and memory efficient; we can code all this really simply. However, it doesn't take the board status into account.
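The delta-histogram idea can be sketched in a few lines of plain Python (the square encoding and the black/white mirroring convention are assumptions for illustration; the talk's actual code may differ):

```python
from collections import Counter

def move_delta(from_sq, to_sq, white_to_move=True):
    """(dx, dy) between two algebraic squares, e.g. "e2" -> "e4".
    Black's deltas are mirrored so that "forward" means the same
    thing for both sides."""
    dx = ord(to_sq[0]) - ord(from_sq[0])
    dy = int(to_sq[1]) - int(from_sq[1])
    if not white_to_move:
        dx, dy = -dx, -dy
    return dx, dy

# Aggregate deltas per piece type into a histogram.
hist = Counter()
hist[("P", move_delta("e2", "e4"))] += 1                       # white pawn
hist[("P", move_delta("e7", "e5", white_to_move=False))] += 1  # black pawn
print(hist[("P", (0, 2))])  # 2: both pawn moves collapse to the same delta
```

Plotting such a counter per piece type gives the per-piece histograms shown on the slides.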

So if there are pieces in the way, I cannot answer this question; I can only answer it weakly. So it's a necessary condition, if we have enough data, but it's not sufficient. The next take we did on this idea was that for each move we looked not only at the move delta, but also at the surrounding of each piece. You can see here that we have three possible states: one is occupied, one is free, and one is out of the board.

If we're standing at the edge of the board, then some of the squares may be out of the board. And these are some of the results we got, aggregating those histograms and doing some work on them. For example, for the queen: if the queen wants to move up and to the right at least two steps, then the square above it and to the right must be free, and that makes sense knowing the chess rules. Another thing about the queen: if she wants to move seven steps down and to the right, then this means that she's moving across the whole board.

Therefore she must be standing in the corner, and that square must be free. Okay, cool. The same for the king: if there is castling and the king moves, then the square near it must be free. Also for the pawn, if the pawn goes forward. And, surprisingly, nothing for the knight. Knowing the chess rules, we know that the knight can jump over pieces; however, not having such a rule show up doesn't tell us anything by itself, because maybe there is just not enough data.

Maybe there is nothing relevant. But that's nice for us, knowing the rules of the system, that the knight can skip over pieces. Okay. So the pros of this approach: we keep it efficient, not too much extra storage and, of course, run time, and we take the surrounding into account. We can argue whether the surrounding should be of radius one or more, but doing this also shows the trade-off.

I talked before about the external knowledge we have about the game and about the environment we are in; so again this trade-off. And the main con of these strategies is that they assume moves are independent of one another, and while we can usually say that's true, it's not true for all moves: for example castling, where a king cannot castle if it has moved before, and there are several more moves limited by this kind of restriction.
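The three-state surrounding feature can be sketched like this (representing the board as a dict of (file, rank) pairs is an assumption for illustration, not the talk's actual data structure):

```python
def surrounding(board, square):
    """Classify the eight neighbours of `square` as "occupied", "free"
    or "out" (off the board). `board` maps (file, rank) coordinates
    in 0..7 to pieces."""
    f, r = square
    states = {}
    for df in (-1, 0, 1):
        for dr in (-1, 0, 1):
            if (df, dr) == (0, 0):
                continue
            nf, nr = f + df, r + dr
            if not (0 <= nf < 8 and 0 <= nr < 8):
                states[(df, dr)] = "out"
            elif (nf, nr) in board:
                states[(df, dr)] = "occupied"
            else:
                states[(df, dr)] = "free"
    return states

# A piece in the a1 corner with a pawn on a2:
states = surrounding({(0, 1): "P"}, (0, 0))
print(states[(-1, -1)], states[(0, 1)], states[(1, 1)])  # out occupied free
```

Aggregating these per-neighbour states alongside the move deltas gives the conditional rules described above (e.g. "square above must be free").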

So, okay, this is how we're going to discuss moves today; we still have ideas to improve it, but we know that this gives roughly good results, and of course it generalizes, which I mentioned before. Okay, so now for learning checkmate. Here we ask: given the state of a board, is it a checkmate or not? We're not asking, if it is a checkmate, who won, the black or the white; we might ask that in the future. Okay, we used several datasets: 10K, 30K and 800K samples.

We used 40 percent for training and 60 percent for testing, and we had 50/50 true and false samples. Of course, in the real distribution the probability is much lower, because you have at most one checkmate in each game, maybe none. And we used an SVM classifier with a linear kernel; we probably won't use it in the future, although we had some nice results just with this naive classifier. Now, a crash course on classification for people who don't come from this domain, a really crash course.

I know I speak too fast, and I apologize; we have a lot to talk about tonight. Okay, so we start with data and then we extract features, and we'll talk about the features we used in a minute, but features can be numerical, features can be categorical, and many other kinds; maybe a combination, maybe the features depend on one another. There are models for each problem, and then there is a classification.

Some of the data is used for training, some for testing, some you predict on. We used scikit-learn for this, and scikit-learn is actually very general: we were able to reuse code that we used before for a totally different task, just replacing our feature extraction and pushing the result to the classifier we had. Another good feature of scikit-learn is that it is very easy to switch between different classifiers.
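A toy version of that fit/predict workflow with scikit-learn (the data here is made up for illustration; the real features came from the board states, and the 40/60 split matches the talk):

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Made-up 1-d "features" with a clean class boundary.
X = [[i] for i in range(10)]
y = [0] * 5 + [1] * 5

# 40% for training, 60% for testing, as described in the talk.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.4, stratify=y, random_state=0)

clf = SVC(kernel="linear")   # swapping in another classifier is one line
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

Because every scikit-learn estimator exposes the same fit/predict/score interface, replacing `SVC` with, say, a nearest-neighbours classifier requires changing only the constructor line.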

They are all estimators with fit and predict functions, so you can just play with it. Okay, so here again we had a few versions. The first version we had was simple count features. What does that mean? First, we counted the total number of pieces that were on the board. Then we counted how many white pieces and how many black pieces; for each type of piece we counted how many pieces there were, for example five white pawns and three black pawns, so we had a total of eight pawns.

We also counted the white pawns, five, and the black pawns, three, as separate features. This got us to something that is a bit better than a monkey, with an accuracy of 70 percent: for the checkmate cases we said "this is a checkmate", and for the non-checkmate cases we were able to say 59 percent of the time that it's not a checkmate, but then we had some misclassifications. Well, we want to be much better than a monkey, so we moved to the next version, which used the previous features plus features about the first-degree neighbors. In this case we didn't look out of the board; we excluded it.
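The first-version count features can be sketched as follows (the piece-list encoding and feature names are assumptions for illustration):

```python
from collections import Counter

def count_features(pieces):
    """Version-one features from the talk: total number of pieces,
    pieces per colour, and pieces per (colour, type).
    `pieces` is a list like [("w", "P"), ("b", "K"), ...]."""
    features = {"total": len(pieces)}
    for colour, n in Counter(c for c, _ in pieces).items():
        features[colour + "_total"] = n
    for (colour, piece), n in Counter(pieces).items():
        features[colour + "_" + piece] = n
    return features

# Five white pawns and three black pawns:
feats = count_features([("w", "P")] * 5 + [("b", "P")] * 3)
print(feats["total"], feats["w_P"], feats["b_P"])  # 8 5 3
```

These dictionaries can then be vectorized and fed straight into the classifier.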

But we'll do that in the next versions. So we looked at whether each square around a piece is empty, occupied by the same side as the piece we are looking at, or by the other side, and we aggregated this data for all the different pieces on the board from each party. We also built some boolean features based on this data, for example: are there more pieces around me from my side or from the other side, is it mostly empty, and such features. And we did have an improvement: you can see that on checkmate we are now able to classify well 87 percent of the time.

Remember, we had 59 percent previously, so we're doing much better now. The third version was doing the same as before, but extending the radius to two and three. This makes many more features; however, 300 features is not that much, and maybe in the next versions we can add more features, since it's not as much as it sounds. But, as I said before, it makes it less general, as we assume something bigger about the game and the board. And indeed we had an improvement.

Accuracy is 89.5 percent, and both classes improved. We can ask whether increasing the radius further, to four, five, six, eight, would improve it. I personally don't like this approach and don't want to do it, because we assume more about the board and about the game and about the system as a whole, and I would rather think about or suggest different features. So, having this benchmark, what do we suggest or want to do in the future? Okay, so: test different classifiers; we used SVM, maybe change the kernel, maybe try nearest neighbors, maybe use some deep learning, as a buzzword.

I don't know, this next one is a small change, but I think it would have an interesting effect on the results: integrate the out-of-board squares, the edges of the board, into the different counts we're doing. Okay; asking who is the winner, which I mentioned earlier: is it the black or the white? We can either approach it as a multi-class classification problem, where the white won, the black won, or it's not a checkmate, or we can phrase it just as black or white, given that we have a checkmate.

Is it black or white who won? Okay. Then asking whether a specific situation is check, not necessarily checkmate; complex move detection; okay, history: starting to use the history, or maybe we can think of other features to represent what has happened so far, for example counting how many times a specific piece moved, or something else. Of course, as I said in summary, we want to reduce the data we use.

We want to reduce the external knowledge we have about the game. Okay; more efficient parsing: we used the chess package, which is nice, but in some cases we did something like bootstrapping, where we took the data and pushed it through the chess package to produce what we wanted. Maybe we can skip that lap and just do it ourselves. Scaling: classifying the 800,000 samples was really hard for our computers and for scikit-learn.

It eventually finished, but it was hard. So maybe we need to think about distributing it, about using something like the tools that were mentioned here earlier; there are many tools we can think of. And, surprisingly, we have time for questions, so thank you for listening so far. Of course, the first question is whether such a system can really learn: can you learn chess just by looking at games, for example? Chess engines generally are stateless.
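One way to attack the scaling problem before reaching for a distributed framework is out-of-core training: stream the samples in chunks and update an incremental model (for example scikit-learn's SGDClassifier via partial_fit). The sketch below shows only the chunking, with the model call indicated in a comment so the snippet stays dependency-free; the names and batch size are assumptions, not the speakers' setup.

```python
# Hypothetical chunked-training helper for a dataset too large to fit at once.
def batches(samples, batch_size):
    """Yield the samples in fixed-size chunks; the last chunk may be shorter."""
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]

data = list(range(10))  # stand-in for the 800,000 feature vectors
chunks = list(batches(data, 4))

# Hypothetical loop with an incremental scikit-learn model:
# for chunk in batches(samples, 10_000):
#     X_chunk, y_chunk = zip(*chunk)
#     model.partial_fit(X_chunk, y_chunk, classes=all_classes)
```

An incremental linear model trades away the kernel SVM's expressiveness, but it bounds memory use and makes the 800,000-sample fit tractable on a single machine.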

We just get a position and make estimations; except for the first part of the game, which is very easy, at any stage it is just the position and nothing else. But knowing what the players can do, playing a strategy, is not something we focused on.



Interview with Carla Nemr | Global Head of Business Development

Thank you for taking the time out to talk. Well, thank you for having me here. How are you? I'm fine, good, good. Okay, so Carla, I think the first thing for us to talk about is a typical day in your role. What would a typical day look like for you? What are the kind of responsibilities that you're handling on a day-to-day basis? Yeah, so, to give you an idea of what I do: usually, on a daily basis, there are a lot of tasks and priorities that I should look into.

So I try to divide the tasks over different days, or we even mix them; the reason why is because I look after different markets globally. So, starting off, I develop individual plans with the country managers for each country or region that they take care of, and then I also evaluate the market trends and provide the appropriate support to product development. And of course I provide some training and guidance to the team, also to overcome development issues.

Let's say we do some weekly or monthly meetings to discuss what challenges each market is facing in every country or region, and we try to find solutions for that together as well. So what I'm hearing from what you're saying is that you actually go in with a tailored approach for each market; for the different country managers, you have to tailor it to the clientele there is in that country.

Yes. Okay, so there's a lot of work going in. Yes, of course, yeah. So that's why we need to always work together. You know, when you try to oversee a global market, you need to listen to and understand every individual market, how you can approach it, because every market has different traditions, a different culture, a different type of marketing tools, approach, etc.

So we need to always communicate together. So it's a really tailored approach, yeah? It is. In addition to that, I analyze the business strategies and develop improvement plans, so I can also provide the right support for the growth of the business. So, of course, once I gather all the information and the challenges or issues, I also coordinate with the management team to maintain the quality of service that we provide for every individual market, and to maintain the budgets that we need to allocate for those countries.

So it's about maintaining the consistency as well. Yes, correct. If we look at the technology side as well, I liaise with the IT department in order to inform them about the latest technology that we need to provide for our clients. It differs in every market too: if you look at Asia, they may ask about copy trading, and the same for MENA or Latin America, so every individual market has its need for certain technical tools.

So we need to always communicate with our IT department in order to provide the most innovative tools and products for our clients. Wow, so that's actually quite a huge scope of activity that you've got to cover. And what do you feel is the most important thing for your team to focus on? What do you feel your priorities are towards the traders? I believe there are a lot of key points that I try to communicate to my team.

With my team, the aim is to provide the best service that we can, because at the end of the day we are an online financial trading company. In that sense you are not offering a tangible product, so you need to focus on the service that you provide to clients, or traders. So one of the key services that we need to focus on is education. In cooperation with the marketing department, of course, we try to see what each market needs.

For each market individually, we look at what type of educational webinars they would be interested in. We also try to be offline as well, so we target different markets by going on the ground, being present there, and providing the educational seminars that they might require. We have a lot of beginner traders, for example, who need to understand more about the risks of this market and this industry before they get into it.

So we try to provide them with as much information as possible for them to make the right decision, in order to see whether it suits them or not. That's for the education part. Now we talk about services: we also need to maintain the quality of communication with our traders. We need to understand their needs and requirements in order to provide the best service for those traders, and then again, we look at each market individually.

We do not provide the same service for all markets, because every market, as I said before, has a different way of being approached. Then there is your presence as well: if you look at the technology part, you need to be present on social media, and therefore we have to look into how we present our services by providing the right information to our clients from the business development side.

We are active on social media, so traders all over the world can understand what Tickmill's vision is, and can also get in close communication with the right business development person, whether that's the country manager, the salesperson, or customer support. So we try to understand the needs of each market and be there, active on social media, as well. I mean, if you want to reach the global markets, you need to be present there, so they can see who they're dealing with.

So it's really important to be active on social media, what image you are giving and how you are representing the company. And do you think it's important that they have that accessibility to the company as well? So it's more about the transparency, so people can always be in contact, have more immediate communication? Yes, they do. Nice. So that's a very comprehensive approach that you have. So, Carla.

What I'm hearing is that a lot of emphasis is put on education and making sure the traders know what they're doing. So what do you feel are the current needs of the traders, and how do you make sure that your team caters to that? Okay, if we look at different types of traders: say, for example, we start off with the beginners, the people who want to learn more about trading; they've heard about it, they've tried it for some time, and they want to learn more. Yeah.

So what my team does, for a trader who needs to start learning more about trading, is to walk them through the platform, basically, to help them understand more about how to place a trade and what risks they might encounter. They can also ask about different features of the platform, say leverage, margin, etc., so those guys are there to help them, to walk them through from A to Z. Then the trader obviously needs to understand the costs as well; if we're talking about advanced traders, they come with more specific questions.

They ask what the costs for the spreads are, and what leverage they can be entitled to now with ESMA, so they need to understand whether they can be converted to professional clients or not, and if they are retail, what leverage they get. So the team is there to explain this to them and to walk them through the whole registration process. Every trader has different needs, and we make sure that every trader will find the right tools and services that they would be looking for.

And what about Tickmill as a whole with regards to the conditions: the spreads, the leverage, the margin requirements? How does Tickmill cater to a plethora of clients with its conditions? Nowadays there is a lot of competition in the market, and a competitive market requires you to be the broker offering the best, lowest spread, offering the leverage that the trader will look for.

Also the regulation: a lot of clients come and ask where we are regulated, whether it's an FCA-regulated company or broker, or whether it holds other licenses as well. So Tickmill tries its best to provide the most innovative products as well. The more variety of products, the better for the traders: they have more options to see which products to look into and trade. They try as well to have the best execution.

With this competition, you need to always offer the best execution, in addition to the low cost of the spreads and the leverage that they would be looking for, not to mention the segregated accounts, so that clients feel safer once they have funded their money in order to place trades. Those are the points Tickmill tries to focus on in order to be competitive in the market. So it's the fundamental things that you're addressing in that case: the conditions that will allow them to excel, and also putting in place the procedures that will allow them to feel safe while they're trading, covering all the bases. Very good, very good.

So, with regards to the team that you work with directly, how is the team organized, and could you also talk us through the process of onboarding a client? Yeah, so my team actually consists of different departments. The reason why I have different departments is to give a customized service for every individual type of client. So we have, for example, the activation department.

We have partnership departments, and we have country managers and customer support, not to mention the technical analysts as well, who provide educational webinars and seminars to the traders. Once the client gets onboarded, we have an activation team that helps the client go through the registration process, starting from filling in the form online and sending the KYC documents, the right documents for every market, until they get approved and are ready to start. Just for the traders to understand KYC: what does KYC mean?

It means the right documents for "know your client"; KYC is basically an abbreviation of "know your client": the right documents for every trader in order for them to get approval. Once they get the approval, they get their account activated. So they need to be approved from the compliance point of view: they have a detailed registration form to fill in, to provide the right information about their profile, then the documents, and once the documents are correctly submitted, the account is ready to be approved.

And after that, they are ready to fund the account and start trading. Nice. Okay, regarding the other departments: we also have the partnership department. The partnership department focuses on, maybe, retail introducers, IBs, or partners. The partners usually look into providing clients, maybe referring friends, or introducing business; they can either trade as well, or maybe just focus on growing their own businesses.

So those are people who have some connections and would like to introduce them to Tickmill, so they are called partners, or IBs, introducing brokers. The partnership department makes sure that every IB is providing the right information to the clients before they get onboarded; the IBs also get some rewards from Tickmill once they introduce their clients to us.

The partnership department also ensures that the IBs get the right material, so they can provide it to their clients. Right, okay. We also have the country managers; the country managers mainly focus on the individual market that each is assigned to. So they do the business development for that market: they study the market and see what the right approach is for Tickmill to expand the business in that market, so they do the analysis.

They look at the competition there and at what the right products, services, and marketing budget are that we need for that market, and they come forward, so together we can build the business in the countries or regions they're taking care of. Wow, okay. We try to always provide, and also listen to, what exactly the traders need; I mean, we try to provide what the client is asking for.

Yeah, I mean there's no point in just focusing on certain things that the trader would never take into consideration while their focus is somewhere else. So you need to have very efficient communication with your traders, especially if they're advanced, experienced traders; they are the best people to hear feedback from in order to keep improving the services you provide to the client. And social media gives you those open channels of communication? Yes, it does. And I'm guessing that must trickle down into your team, so you're encouraging your team to actually use social media to engage with their clients and possibly open up more markets. Yeah.

My team is quite active on social media. They each try to always present the right image. We also have some information posted about the profits and the trading volume of the company every now and then, because it's important for traders to be able to compare Tickmill with other brokers as well, so they can make the right decision about where they would like to invest. Nice.

So could you tell me a little bit more about your team? We've been talking about the tools that you're using to engage with your clients, but how specifically do you organize and structure the team, and what is the onboarding process like, from when someone first comes to a member of your team all the way through to when they're actually trading with you as a client? So, my team consists of different departments.

The reason why we have divided it into different departments with different tasks is to cater to the needs of each group of clients. So, for example, we have the activation department: they are the first department that will be in contact with the client. When the client shows interest in opening an account with us, or is registering with the company, they get in contact with him and try to accommodate his needs from A to Zed, starting off with the registration form and then, later, the submission of his documents as well.

So they walk the client through the whole process until he gets approved and is ready to open the account, and they find the right account for him as well, because we have different types of accounts. He would be in contact with the account manager, who gives him the right information to decide which type of account; and once the client is approved, he can start trading.

Okay, the other department would be the partnership department. The partnership department takes care of all our partners and IBs. We also have a department called the country managers department; those country managers get involved more in developing the business in every individual market. So they look into, for example, Asia.

They look into Africa, Latin America, maybe the MENA region. So they try to study the market, analyze what the market needs, and then we communicate together, so we can see what the right approach is in order to expand the business in each individual country. Wow, okay. Well, I think we've covered quite a lot today, and that's probably all we've got time for, so I'd like to thank Carla for coming to talk to us today.

It's been a wonderful day, and you'll be hearing from some more of the Tickmill team very, very soon. Have a nice day. Bye.
