The significant point here is that performance depends on the dimensions of the image (its width and height in pixels), not on its file type or its size in bytes.
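To make that concrete, here is a minimal CUDA sketch, assuming the image has already been decoded into an 8-bit grayscale buffer in device memory (the kernel and function names are just illustrative). The launch grid, and therefore the total amount of work, is derived directly from the pixel dimensions, so a 1 MB JPEG and a 12 MB BMP with the same dimensions cost the same here:

```cuda
#include <cuda_runtime.h>

// One thread per pixel: invert an 8-bit grayscale image in place.
__global__ void invert(unsigned char* img, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        img[y * width + x] = 255 - img[y * width + x];
}

void invertOnGpu(unsigned char* d_img, int width, int height)
{
    dim3 block(16, 16);
    // The grid -- i.e. the total work -- is sized from width x height,
    // not from how many bytes the file occupied on disk.
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    invert<<<grid, block>>>(d_img, width, height);
}
```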
For this kind of workload the GPU will generally be faster. To quote NVIDIA:
"Architecturally, the CPU is composed of just few cores with lots of cache memory that can handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously. The ability of a GPU with 100+ cores to process thousands of threads can accelerate some software by 100x over a CPU alone. What’s more, the GPU achieves this acceleration while being more power- and cost-efficient than a CPU."
Essentially, GPUs are optimized for taking huge batches of data and performing the same operation on all of it very quickly, whereas CPUs have to jump between unrelated tasks: I/O, hardware interrupts, other applications, and so on.
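As a rough illustration (a sketch, not a benchmark; the names are hypothetical), compare the two models for one trivial operation, adding a constant to every pixel. The loop body and the kernel body are identical; the difference is that the CPU walks the buffer serially while the GPU maps one lightweight thread to each element and runs them in large batches across its cores:

```cuda
#include <cuda_runtime.h>

// CPU model: one core walks the buffer element by element.
void brightenCPU(unsigned char* img, int n, int delta)
{
    for (int i = 0; i < n; ++i) {
        int v = img[i] + delta;
        img[i] = (unsigned char)(v > 255 ? 255 : v);
    }
}

// GPU model: the same operation, but one thread per element,
// scheduled in batches across hundreds of cores.
__global__ void brightenGPU(unsigned char* img, int n, int delta)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int v = img[i] + delta;
        img[i] = (unsigned char)(v > 255 ? 255 : v);
    }
}
```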
Also of great importance are the dedicated memory available to the GPU and its memory bandwidth. For example, I have a notebook with a GTX 1050 and 4GB of memory, but your GTX 980 (also with 4GB) will run nearly three times faster than mine: although it has a lower core clock, it has a wider memory bus, greater memory bandwidth, a larger memory cache, and 2048 processing units compared with my 640.
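If you want to compare two cards yourself, the CUDA runtime exposes these attributes through cudaGetDeviceProperties. A minimal sketch (the factor of 2 in the bandwidth estimate assumes double-data-rate memory, the same convention NVIDIA's deviceQuery sample uses):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0
    // memoryClockRate is in kHz, memoryBusWidth in bits.
    double bandwidthGBs = 2.0 * prop.memoryClockRate * 1e3
                        * (prop.memoryBusWidth / 8.0) / 1e9;
    printf("device:            %s\n", prop.name);
    printf("multiprocessors:   %d\n", prop.multiProcessorCount);
    printf("core clock:        %d MHz\n", prop.clockRate / 1000);
    printf("memory bus width:  %d bits\n", prop.memoryBusWidth);
    printf("est. bandwidth:    %.0f GB/s\n", bandwidthGBs);
    printf("L2 cache:          %d KB\n", prop.l2CacheSize / 1024);
    return 0;
}
```

On a GTX 980 this estimate works out to roughly 224 GB/s, which is where most of its advantage over my 1050 comes from.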
It isn't a simple linear calculation based on the number of processing units, but for this kind of parallel workload a supported GPU will generally be faster than a CPU.