On Linux, we’ll run it in Ubuntu, even though that isn’t its native Linux distribution, because I’ve run that one a lot in the past and I can configure it pretty easily. We’ll run timeline performance tests as well as some renders, and I think you might be surprised by the results. Let’s get to it.
As with all our timeline benchmarks, we’ll start by making sure that we delete our optimized media, uncheck all of our proxy options, and ensure that we’ve got no render cache and no Fusion cache. This gives us a realistic view of what it looks like when the processor is working and the graphics card is rendering in the timeline. The next thing you can see is that, indeed, I’ve installed the proprietary Nvidia drivers (don’t kill me), and the 2080 Ti is cranking. We’ll zoom in so we can see the timeline performance as it plays back.
You’ll notice it sticks to the frame rate; that’s the intended frame rate here, 29.97. This is a GH5 shooting 8-bit H.264, with the f/1.7 20mm prime. I’ve got a little bit of noise reduction on these clips, which is why it surprises me that it sticks as well as it does to 29.97. In fact, the only time we really see any challenge with this footage as we’re playing it back on the Linux system is when we throw in the Fusion clips, and that work tends to jump over to the processor, the CPU, creating a little bit of a bottleneck because it has to render and then push out those titles. Even there, though, you saw it was pretty smooth and pretty quick.
Let’s check out the system statistics as it’s running. We’ll pop up the system monitor here, and what you’re seeing is the CPU threads as they execute. Let’s keep in mind that we’re running OBS; of course, we do that when we do our benchmarks in Windows and display them as well, so I think it’s fair. Watching these threads as Fusion does its work to put title screens up, you see that a few of them run off by themselves, and that’s a 32-thread workstation processor here.
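As an aside, you don’t need a GUI system monitor to spot this pattern. A small script can sample per-core load from `/proc/stat` while Fusion renders; this is just a sketch (Linux-only, standard library, not part of the video’s actual workflow), but if one core pegs while the rest idle, you’re looking at the same single-threaded behavior described above:

```python
# Sketch: sample per-core CPU usage from /proc/stat (Linux only).
# If one core is busy while the others idle, the workload is
# effectively single-threaded, as observed with Fusion titles.
import time

def read_cpu_times():
    """Return {core_name: (busy_jiffies, total_jiffies)} from /proc/stat."""
    out = {}
    with open("/proc/stat") as f:
        for line in f:
            parts = line.split()
            # Per-core lines look like "cpu0", "cpu1", ...; skip the aggregate "cpu" line.
            if parts[0].startswith("cpu") and parts[0] != "cpu":
                fields = [int(x) for x in parts[1:]]
                idle = fields[3] + fields[4]  # idle + iowait
                total = sum(fields)
                out[parts[0]] = (total - idle, total)
    return out

def per_core_usage(interval=1.0):
    """Percent busy per core, measured over `interval` seconds."""
    before = read_cpu_times()
    time.sleep(interval)
    after = read_cpu_times()
    usage = {}
    for core in before:
        busy = after[core][0] - before[core][0]
        total = after[core][1] - before[core][1]
        usage[core] = 100.0 * busy / total if total else 0.0
    return usage

if __name__ == "__main__":
    for core, pct in sorted(per_core_usage().items()):
        print(f"{core}: {pct:5.1f}%")
```

Run it in a terminal while the render is going to get a snapshot without the overhead of the full system monitor.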
But when you’ve got one thread that’s primarily doing all the work, it tells you that the application is not multi-threaded in a way that can leverage the CPU architecture. This backs up exactly what I’ve seen in Windows as Fusion does its job. Because I’m too lazy to format one of my NVMe drives and set it up with Linux, I put in a standard 120-gigabyte Kingston SSD, and that’s the only difference between the Windows and Linux machine configurations. On to the results of our render test, which, as always, ran each test three times and took the average.
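The run-three-times-and-average methodology is easy to reproduce yourself. Here’s a hypothetical harness (the command is a placeholder; swap in whatever render job you’re actually timing):

```python
# Sketch of a "run each test three times and average" benchmark harness.
# The command passed in is a placeholder for your actual render job.
import statistics
import subprocess
import time

def time_command(cmd, runs=3):
    """Run `cmd` `runs` times; return (mean_seconds, list_of_timings)."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)  # raises if the job fails
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), timings
```

Averaging over three runs smooths out one-off hiccups like OS caching or background tasks skewing a single run.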
You can see right off the bat that in Linux, the H.264 blue bar is significantly shorter than the one over on the Windows side. Now, this was weird, because I used the hardware encoder on the 2080 Ti in both instances. There’s a question as to whether or not the encoder got more resources in Linux, but it achieves a 15 percent improvement over the Windows encode. Crazy. As you can see, that is not what I expected.
I thought that with the hardware encoder in the graphics card running both sets of tests, we’d end up with effectively the same thing. I don’t know if more memory was available to the graphics card in Linux because the GNOME desktop is less heavy than Windows. Who knows, but it’s cool data to have, and I hope it might help you. Thanks for watching. I do respond to all the comments, so if you have questions or something weird you want researched, let me know; I love doing that stuff. And subscribe and like if you would. Thanks again, and have a great day.