AI Gigapixel batch processing

Are you referring to the same upscaling algorithm that was released back in 2013 and only updated again in 2017?

I’m sure the Topaz team is working on fixing the 2,000-frame crash limit and the limited batch-processing capabilities.

Video capabilities probably by next year?

1 Like

Yeah, that’s the one. I found that although AIGP was great in some areas, such as retaining edge detail, other larger areas lost a lot of small texture detail. This could be down to my choice of enhancement, but I did run tests to pick the one I thought was best, so I’m not sure. It took too long to do another test, though, so I’ll stick to stills until perhaps a video version comes along.

1 Like

Has anyone tested the new Gigapixel 3.0 for batch processing more than 2,000 images at a time?

1 Like

Hi, could you let me know if you use After Effects at all?

1 Like

I have it as part of the Adobe subscription, but I rarely use it.

I’ll gladly use it if it can be used with Gigapixel to process videos/large image batches.

1 Like

I just tested v3.1.1 with 4,500 images, and it simply hangs, unresponsive (doing whatever it is doing in the background).

So it’s still a no-go for batch processing such large numbers of images.

1 Like

Hey all, please check out cloudai.topazlabs.com to see our new cloud service for Gigapixel AI.

Are there any developers here?
This cloud service comes with an API that can be used to automate your Gigapixel upsampling tasks.

There is a contact form at the bottom of the page where you can request your API key.

If you are not a developer, don’t worry, you can still upload images through the website and have your processed results sent back to you.
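
Roughly speaking, automating it from a script looks something like the sketch below. The endpoint URL, header scheme, and field names here are illustrative placeholders only; the real ones come with the documentation you receive alongside your key.

```python
import requests

# Placeholder values for illustration only -- the real endpoint, auth
# header, and request fields come from the API documentation that is
# provided along with your key.
API_URL = "https://cloudai.topazlabs.com/api/v1/upscale"  # hypothetical
API_KEY = "your-api-key-here"

def upscale(path, scale=4):
    """Upload one image and return the processed result as bytes."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},  # hypothetical scheme
            files={"image": f},
            data={"scale": scale},
        )
    resp.raise_for_status()
    return resp.content

# Example: upscale a single image and save the result.
with open("photo_4x.png", "wb") as out:
    out.write(upscale("photo.png"))
```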

Please let me know if you have any questions, guys. Thanks :slight_smile:

4 Likes

I am a noncommercial user of the desktop version of Gigapixel AI, and I want to use the software to batch process 65,000+ video frames without having to interact with the software GUI.

The new web portal looks to be an interesting and useful service for companies. However, it does not address the lack of batch processing for me: I don’t want to pay additional fees for API access, and the internet bandwidth required to batch process these images would be prohibitive.

To the developers: please provide a method of batch processing images through some sort of scripting. While the new web service and API are impressive, I don’t want this issue to be considered solved just because they exist.

2 Likes

Agreed, batch processing is needed. I am genuinely surprised this high-tech software doesn’t have it; it seems more or less intentional to avoid implementing proper batch processing in order to upsell the cloud service, which is a pay-per-month type thing, making the software product unusable for me.

1 Like

It is not a pay-per-month model but pay-per-transaction.

I don’t get the issue. You can select all the images you want processed and then let Gigapixel work. Getting the initial images loaded into Gigapixel takes a long time, but according to the developers this will be made faster in the future. I personally wish I could open more working instances, so Gigapixel could work on some images via the GPU and on others via the CPU at the same time.

Many professionals use batch processing at the CLI to orchestrate the steps of a workflow. If you always require some pre- or post-processing, you can just drop the files in a folder and they are processed automatically; see the sketch below. A GUI is great, but it is cumbersome for these use cases.
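
A minimal sketch of that drop-folder idea in Python (the `upscaler` command is a made-up placeholder for whatever CLI would actually do the work; the point is just the watch loop):

```python
import shutil
import subprocess
import time
from pathlib import Path

INBOX = Path("inbox")   # drop images here
DONE = Path("done")     # processed originals are moved here
INBOX.mkdir(exist_ok=True)
DONE.mkdir(exist_ok=True)

def process(path):
    # "upscaler" is a hypothetical placeholder command; substitute the
    # real pre-processing, upscaling, and post-processing steps here.
    subprocess.run(["upscaler", "--input", str(path)], check=True)

while True:
    for img in sorted(INBOX.glob("*.png")):
        process(img)
        shutil.move(str(img), str(DONE / img.name))  # avoid reprocessing
    time.sleep(5)  # poll the drop folder every few seconds
```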

In my opinion it’s not a matter of GUI or command line; the code behind them is the same. It needs to be changed to something queued, like “load the first 50 images and allocate memory, then load the next 50 and free the RAM from the previous batch”; I think you get what I mean. As a retired programmer, I very often see a habit of allocating memory and never freeing it properly, because RAM isn’t an issue these days. Back in the day we spent much more time after coding optimizing the code and squeezing it to work with limited RAM.

1 Like

Honestly, I would LOVE to see a Python module for Gigapixel. Then I could load the images in increments of 50 and just let it run (see the sketch below). But I agree with @martin_kaiser1-52260: it would be nice if it just behaved this way to begin with.
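
For illustration, a queued loop like this is all it would take on the caller’s side; `process_chunk` is a hypothetical stand-in, since no such Gigapixel module actually exists today:

```python
from pathlib import Path

CHUNK_SIZE = 50  # images held in memory at any one time

def chunked(paths, size):
    """Yield successive fixed-size batches so memory use stays bounded."""
    batch = []
    for p in paths:
        batch.append(p)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

def process_chunk(batch):
    # Hypothetical stand-in: hand one batch to the upscaler, then let
    # the loaded image data go out of scope before the next batch.
    ...

for batch in chunked(sorted(Path("frames").glob("*.png")), CHUNK_SIZE):
    process_chunk(batch)
```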

My current workflow for increasing the resolution of a video is to extract the frames and put them in separate folders grouped by 10,000 images (beyond that I’ve had limited success with loading). It takes a few hours to run each group, but it’s been reliable for me so far, and one episode of a show is between 60,000 and 70,000 frames (45-minute runtime). Then I compile the frames into an MP4, finally adding the audio and subtitles from the original video.

I would just love to have a script that did all of this for me. Currently I can only script the frame extraction and the creation of the final video; loading the frames (even in groups of 10,000) is quite a tedious effort. The scriptable parts look roughly like the sketch below.
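
This is a sketch of the extraction and regrouping steps only, assuming ffmpeg is on the PATH; the input file name is a placeholder:

```python
import shutil
import subprocess
from pathlib import Path

SOURCE = "episode.mkv"    # placeholder input file
FRAMES = Path("frames")
GROUP_SIZE = 10_000       # folder size that loads reliably for me
FRAMES.mkdir(exist_ok=True)

# 1. Extract every frame as a numbered PNG.
subprocess.run(
    ["ffmpeg", "-i", SOURCE, str(FRAMES / "frame_%06d.png")],
    check=True,
)

# 2. Regroup the frames into folders of 10,000 for manual batch loading.
for i, frame in enumerate(sorted(FRAMES.glob("frame_*.png"))):
    group = FRAMES / f"group_{i // GROUP_SIZE:03d}"
    group.mkdir(exist_ok=True)
    shutil.move(str(frame), str(group / frame.name))
```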

Have you had a look at Video Enhance AI?

That looks like exactly what I’ve been wanting. Steep price tag, but I’ll give the trial a try here soon. This would save me sooooooo much time; I’m stupid excited to give this a try. Thank you so much!

1 Like

Video Enhance AI does a great job. It also processes videos faster than running Gigapixel frame by frame, and the quality of the results looks more consistent. :smiley:

2 Likes

So far it’s doing a good job. As a test I did the “Toxic” music video by Britney Spears, and that came out great. I’m slowly working my way through my DS9 DVD rip; it’ll be fun to finally watch DS9 in HD and surround sound. Well worth the $200 so far. It’s not perfect, but it’s so much faster. I noticed that the CG engine has had the best results, though two of the engines (Artemis?) don’t seem to do anything, which I found kind of weird.

I’m looking forward to upscaling more of my collection. Unfortunately it does take 20 hours per episode, but that’s still faster than extracting, queuing, and bringing the frames back together.

I am not totally sure, but I think someone mentioned recently that the sound quality saved in the processed video file does not match the quality of the original file.

I found that exporting to a .tiff sequence, bringing everything together with FFmpeg, and then using MergeMKV to replace the video track of the original file with the newly created one and save the result as a new MKV works pretty well. I wrote a little Python app that stitches everything together for me; it only takes about an hour to run on a 45-minute video clip. Video Enhance AI still takes between 15 and 20 hours to upscale the video into a TIFF sequence, though. The stitching step looks roughly like the sketch below.
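
The sketch uses mkvmerge from MKVToolNix for the remux step as a scriptable stand-in for MergeMKV; the file names and frame rate are placeholders:

```python
import subprocess

FPS = "23.976"                             # placeholder: match the source frame rate
TIFF_PATTERN = "upscaled/frame_%06d.tiff"  # upscaler output sequence
ORIGINAL = "episode_original.mkv"

# 1. Stitch the upscaled TIFF sequence into a video-only file.
subprocess.run(
    ["ffmpeg", "-framerate", FPS, "-i", TIFF_PATTERN,
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "upscaled.mp4"],
    check=True,
)

# 2. Remux: take the new video track, plus everything except the video
#    (audio, subtitles, chapters) from the original file.
subprocess.run(
    ["mkvmerge", "-o", "episode_upscaled.mkv",
     "upscaled.mp4", "--no-video", ORIGINAL],
    check=True,
)
```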

I’m starting to poke around for a command-line utility for Video Enhance AI. In the meantime, I’m playing with a new script I wrote that uses the “Gigapixel for Video” beta’s command-line operations, to see how well a video upscales with that utility. Right off the bat I can tell it’s far more time consuming than Video Enhance AI: Gigapixel takes about 15-20 seconds per frame, whereas Video Enhance AI does roughly 1 frame per second (give or take 0.5 seconds, in my experience).

What I’m really trying to do is create a little one-click app that triggers the whole automation: an AI upscaler, FFmpeg, and MergeMKV (as needed).

Overall, though, Video Enhance AI has been amazing. I’ve already upscaled about seven episodes of Star Trek: DS9 and VOY, and I’m looking to upscale some old classics (that’ll never see a Blu-ray release) like The Gods Must Be Crazy.

I’ve seen some folks upscaling to 4K, but I’ve been restricting my upscales to 1080p, because I feel the video quality has diminishing returns beyond a 250% upscale.