AI Gigapixel 1.1 Speed Test

I just updated to version 1.1 and ran a test with a 16 MB raw file at 400% enlargement to compare GPU (GTX 980 SC ACX @ 1442 MHz) and CPU (i7-7700K @ 4.6 GHz) image processing speeds. The CPU took 992 seconds to process, and the GPU took 388 seconds. That puts the difference at 2.56x, or a 61% reduction in time by using the GPU versus the CPU. Pretty impressive difference even with a fast 4-core / 8-thread CPU. I did notice that the GPU was at about 40% load most of the time with only momentary spikes in peak demand, and the temperature never broke 45C. By comparison, in my flight simulator it stays at a constant 99% load and reaches 65C, so it looks like I have plenty of excess capacity for running AI Gigapixel even with this older GPU.
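
If anyone wants to double-check the arithmetic, here is a quick Python sketch using just the two timings above:

```python
# Quick check of the figures above: 992 s on the CPU vs 388 s on the GPU.
cpu_seconds = 992
gpu_seconds = 388

speedup = cpu_seconds / gpu_seconds                      # ~2.56x
time_saved = (cpu_seconds - gpu_seconds) / cpu_seconds   # ~61% reduction

print(f"Speedup: {speedup:.2f}x, time reduction: {time_saved:.0%}")
```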


Seems like there is a lot of room to process at least 2 images at a time if the software were optimized for it. I have a GTX 1080 Ti and I got an average GPU load of 30-35%.

In a recent Two Minute Papers video, a “curriculum GAN”-based method was shown doing similar enlargements in 4 seconds.

It is probably a good time to invest in storage technologies - unless, of course, 800% enlargements in real time become possible.

Interesting video, but there was no information as to what kind of computing machinery they were using to get the speeds they talked about. A 4-second up-size seems very, very fast, but it would be less impressive if they were using a Cray than if they were using a MacBook Pro.

I also suppose that they were using massive parallel processing but, again, there was no information. Still, it is something to keep in mind.

After reading the above posts I ran a few tests of my own, and I am perplexed and disappointed. My computer uses a 6-core Ryzen 2600X with 16 GB of 3000 MHz RAM. I bought a new Gigabyte Radeon RX 580 4GB video card. I’m using Gigapixel v1.1.
Using a Sony 16GB raw file and upscaling 400% with the GPU took 986 seconds. When I converted the same file to a JPEG, saved it at the highest quality setting, and then used Gigapixel to upscale 400%, it took 1227 seconds. I thought it would be faster with a JPEG file, but it was much slower. Task Manager showed the GPU working at around 100% about 80% of the time and using 1.9 GB of video memory.

Finally, I took a small JPEG file, 496 x 738 pixels, and upscaled 400% (all tests used “none” for the reduce-noise setting): with the GPU setting it took 42 seconds, and with the CPU setting 40 seconds. How can the CPU be faster than the GPU? The RX 580 is running at a 1380 MHz clock and 7500 MHz memory.

This is a crazy result and slow. I would like Topaz to look into this issue.
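
One possible explanation (just my assumption, not anything Topaz has confirmed) is that each GPU run carries a roughly fixed setup cost - loading the model and shuttling data to the card - which dominates on tiny images but becomes negligible on big ones. A toy model in Python, with made-up constants, shows the shape of the effect:

```python
# Toy model (illustrative only, made-up constants): total time is a fixed
# setup cost plus a per-megapixel processing cost.
def total_seconds(megapixels, setup_s, sec_per_mp):
    return setup_s + megapixels * sec_per_mp

small = 496 * 738 / 1e6   # ~0.37 MP, the small test image
large = 16.0              # ~16 MP file

for name, mp in [("small", small), ("large", large)]:
    gpu = total_seconds(mp, setup_s=30, sec_per_mp=25)   # hypothetical GPU
    cpu = total_seconds(mp, setup_s=1, sec_per_mp=60)    # hypothetical CPU
    print(f"{name}: GPU {gpu:.0f} s vs CPU {cpu:.0f} s")
```

With numbers like these the CPU can win on a tiny crop even though the GPU is far faster per pixel, but only Topaz could say whether that is what is really happening.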

From the paper: http://igl.ethz.ch/projects/prosr/prosr-cvprw-2018-wang-et-al.pdf

“The asymmetric pyramid architecture contributes to faster runtime compared to other approaches that have similar reconstruction accuracy. In our test environment with NVIDIA TITAN XP and cudnn6.0, ProSRℓ takes on average 0.8s, 2.1s and 4.4s to upsample a 520 × 520 image by 2×, 4× and 8×. In the NTIRE challenge, we reported the runtime including geometric ensemble, which requires 8 forward passes for each transformed version of the input image. Nonetheless, our runtime is still 5 times faster than the top-ranking team.”
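
For a rough sense of what those quoted timings mean in throughput (just back-of-the-envelope arithmetic on the paper’s own numbers, on their Titan XP hardware):

```python
# Output megapixels per second implied by the quoted ProSR timings
# for a 520 x 520 input on a Titan XP.
input_pixels = 520 * 520

for scale, seconds in [(2, 0.8), (4, 2.1), (8, 4.4)]:
    output_mp = input_pixels * scale ** 2 / 1e6
    print(f"{scale}x: {output_mp:.1f} MP output in {seconds} s "
          f"({output_mp / seconds:.1f} MP/s)")
```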

I suspect an Nvidia card would work better, since they have CUDA, which helps applications use the GPU for calculations and such. Not sure if AI GP uses it, though.

Yours is a decent graphics card, but we would need to hear from Topaz to know whether it’s only OpenGL that’s being used, or also CUDA on Nvidia cards etc. (Actually it could probably use cuDNN, their neural-network acceleration library, to work even faster on Nvidia cards as well.)

At the same time, being slow is somewhat relative… these results and this tech are quite different from your usual upscaling. There’s nothing you can directly compare it to in order to call it slow.

I was comparing it to the time in the first post (andymagee), whose GPU took only 388 seconds for a 16GB file. Also, the JPEG file took much longer than the raw file, as I mentioned. Wouldn’t anyone expect the GPU to be faster than the CPU, since that is how Topaz designed Gigapixel? That is why I upgraded my video card. I plan to revert my video driver to the previous version to test, but haven’t done it yet.

[quote=“Artisan-West, post:8, topic:7543”]
I was comparing it to the time in the first post (andymagee), whose GPU took only 388 seconds for a 16GB file.[/quote]

388 seconds for a 16 GB file would be absolutely amazing, but I don’t think technology is quite there yet :slight_smile:

Anyhow, for a realistic comparison we would need to operate on the exact same file, so if you’d like to share an image of yours, along with the parameters and how long it took to enlarge, I can try it on my system (and perhaps others can as well). I have an Nvidia GTX 1060 card that, according to benchmarks, seems quite close to your Radeon RX 580. That would be the closest to a direct comparison we can get (basically using a single file as a benchmarking tool).

I suspect, once again, that with neural networks it isn’t as cut and dried as “process 16 million pixels, one by one”; rather, some recursion takes place depending on what’s happening in the image etc. But only the developers could elaborate (and I wish they did - for someone who has been working with more demanding graphics applications and following some of this tech, the rendering time isn’t that surprising, but it seems many users are caught off guard when an older machine cannot keep up).

As stated in the original post, it was a 16 MB file. I agree that the file content and type will make a difference, and comparison testing between both enlargement methods and computer systems should be done with a standard file.

My original intent here was to validate the relative performance of my GPU and CPU with the latest version of Gigapixel.

The significance here is that the performance depends on the size, i.e. the pixel dimensions of the image, and not the file type or size in bytes.

The GPU will always be faster as, to quote NVIDIA:

“Architecturally, the CPU is composed of just few cores with lots of cache memory that can handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously. The ability of a GPU with 100+ cores to process thousands of threads can accelerate some software by 100x over a CPU alone. What’s more, the GPU achieves this acceleration while being more power- and cost-efficient than a CPU.”

Essentially GPUs are optimized for taking huge batches of data and performing the same operation over and over very quickly, unlike PC microprocessors, which tend to skip all over the place because of I/O operations, hardware operations and application usage etc.
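
A crude way to see that “same operation over a huge batch” idea in action (this is only a CPU-side analogy using NumPy vectorization versus a row-by-row loop, not how AI Gigapixel itself works):

```python
import time
import numpy as np

# ~16 MP of fake pixel data
pixels = np.random.rand(4000, 4000).astype(np.float32)

t0 = time.perf_counter()
batched = pixels * 1.1 + 0.05              # one operation over the whole batch
t1 = time.perf_counter()

looped = np.empty_like(pixels)
for i in range(pixels.shape[0]):           # serial, row-by-row processing
    looped[i] = pixels[i] * 1.1 + 0.05
t2 = time.perf_counter()

print(f"batched: {t1 - t0:.3f} s, looped: {t2 - t1:.3f} s")
```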

Also of great importance are the dedicated memory available to your GPU and the available memory bandwidth. For example, I have a notebook with a GTX 1050 and 4GB of memory, but your GTX 980 (also with 4GB of memory) will operate nearly 3 times faster than mine: although it has a lower core speed, it has a wider memory bus, greater memory bandwidth, a larger memory cache, and 2048 processing units compared with 640 on mine.

It isn’t a simple linear calculation based on the number of processing units but a supported GPU will be faster than a CPU.

My Radeon RX 580 video card has a 256-bit memory bus and 2304 stream processors. It should easily outperform my CPU. I reverted the video driver from V24 back to V22 and Gigapixel runs much faster. I just performed the test using the same 16-megapixel JPEG picture previously mentioned, and the time was 490 seconds instead of the previous 1227 seconds (about 2.5x faster). It is easy to see that there is a problem with the Gigabyte (and probably all brands) Radeon V24 video driver. Topaz should check this and pass the info on to AMD/Gigabyte.

Edit: the file is 16 megapixel not Giga, sorry.

ydobemos, I would share the image for you to test, but it’s 18 MB (JPEG), so I will see if I can attach it to a message.

If someone else has this problem, you can check your video driver version in Device Manager.
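
If you prefer the command line, something like this (assuming Windows with the built-in wmic tool and Python installed) prints the GPU name and driver version without opening Device Manager:

```python
# List GPU names and driver versions on Windows via the WMI command-line tool.
import subprocess

result = subprocess.run(
    ["wmic", "path", "win32_VideoController", "get", "Name,DriverVersion"],
    capture_output=True, text=True,
)
print(result.stdout)
```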

I received the original testing image from Artisan-West and tested the same settings on my Nvidia GTX 1060, but the result was 18 minutes.

During the process the GPU load never exceeded 55% or so, though. Might be Nvidia’s “desktop” settings, might be the way it works or drivers. Will need to do some more testing.

If anyone wants to test Gigapixel using the 16-megapixel file I uploaded, here is the link, and the settings should be as shown here to be consistent. The file is only available for 2 weeks. The file should be 3280 x 4928 pixels.

In Windows you can use the clock above the calendar: start the process when the seconds reach 00, and write down the start and stop times (the stop is when the Process info turns green). The file suffix doesn’t matter. It would be interesting if you post your video card type and the processing time. No need for pictures.
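
If watching the Windows clock gets tedious, a tiny stopwatch script works too (assuming you have Python installed; press Enter when you click Process and again when the Process info turns green):

```python
# Minimal stopwatch for timing a Gigapixel run by hand.
import time

input("Press Enter the moment you click Process... ")
start = time.perf_counter()
input("Press Enter when the Process info turns green... ")
print(f"Elapsed: {time.perf_counter() - start:.1f} seconds")
```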

Gigapixel settings:

The driver you are showing there is a year out of date; check the current version on the AMD web site. The driver listed there is the one that is verified by your hardware supplier, not AMD. The latest drivers are dated 27/08/2018.

Instructions are here:

https://support.topazlabs.com/hc/en-us/articles/201864478-Updating-your-display-drivers-in-Windows

Why would you select ‘Convert all files to format’ for this sRGB HQ file, as the output is HQ anyway?

The latest driver, version 24, is what I had installed, and it slowed Gigapixel by about 2.5x compared with version 22, which is what I have now. I bought the new RX 580, installed it, and ran Gigapixel; then a few days later I updated the driver, which caused the slowdown. I’m warning people about that. From what I understand, the drivers are usually done by the chip maker, not the video card makers. I don’t know if this is always the case, but you can go to the AMD website and get drivers for Radeon. https://www.amd.com/en/support Topaz should look into why the latest driver slowed Gigapixel by about 2.5x.

Well, both your figures and the ones posted by @ydobemos are unusually slow, because with my GTX 1050 it took 10:30 minutes.

You can click on my icon to see my PC configuration, note only SSDs on the PC. To save you having to do that:

Win 10 x64 System with Intuos Pen & Touch
Sys : Intel® Core™ i7-7700HQ CPU @ 2.80GHz (8 CPUs), 16GB RAM
GPU 1: Intel HD Graphics 630, 1GB, OpenGL v4.5
GPU 2: NVIDIA GTX 1050, 4GB, OpenGL v4.6, OpenCL v1.2

The 16MB sample file you posted, at the settings that were specified, took 377 seconds with the GTX 980, and 960 seconds with the i7-7700K. So the results are essentially the same as what I saw with my raw file of a similar size to this jpg.

The earlier test was with Nvidia’s Win 10 64-bit display drivers 399.07, and these used 399.24.

I think his initial figure of 490 seconds (about 8 minutes) is about right, as is yours, and I now did a test on another system with a GTX 970 and it took about 10:45 minutes, which correlates well with the other results.

I did notice that on the initial machine Windows was doing some updates in the background; although those have nothing to do with the GPU, who knows how AI GP works. As we speak the system is updating, and I also did some cleanup here and there; I will do a test later to see if there’s an improvement.

Edit: It would be great if Topaz Labs could integrate the progress bar into Windows (well, whatever that process is) so it shows in the taskbar and you don’t have to check it every now and then. Also, a processing-time display once it is done wouldn’t hurt.