I’ve automated my video timelapse production. It’s as simple as can be, so I’m making one per panel I draw for my webtoon. The thing is, my video editing is always the same, so I can script it to death. It’s not meant to make original edits and super dynamic cuts.
It’s just a shell script that takes some image, sound and video globs as input and slugs everything together. It adapts the video speed to match the sound duration, and makes the transitions and panoramics with ffmpeg…
There are a couple of options to allow keeping and resuming from temp files for fast iteration. I mainly used them for debugging, but I guess no one cares at this point!
Actually I’m on macOS by a half-hearted choice. So yes, *nix for the win!
The video.sh is a (z)shell script. I love how raw and concise it can feel compared to everyday Java…
I do have a few Python scripts for Scribus and Krita I couldn’t live without (especially the Krita one, but Wolthera wrote 80% of it), but mostly I have some old things I would be ashamed of!
Any chance you can share the script? Hopefully with comments?
I’m really interested in knowing more about how you use ffmpeg to automate all this. Are there tutorials out there for this stuff? This would be a game changer for me.
The script needs to be made portable, but that’s a next step. For now it’s only used with zsh on macOS + some hardcoded values (like the fps, which for me is fixed at 60).
For now there is only one comment, and it’s in French.
About the use of ffmpeg, here is how I:
- slug all the soundtracks
- get the audio duration
- slug all the video input files (just the recording videos, not the intro and panoramics)

$TEMP_SOUNDS and $TEMP_VIDEOS are just constants… I think this is not the way we do it in shell…
```shell
ffmpeg -f concat -safe 0 -i <(for f in $sounds; do echo file "'$f'"; done) -c:a copy $TEMP_SOUNDS
audioDuration=$(ffprobe -v error $TEMP_SOUNDS -select_streams a:0 -show_entries stream=duration | grep -v '\[' | cut -d '=' -f 2)
ffmpeg -f concat -safe 0 -i <(for f in $videos; do echo file "'$f'"; done) -c:v copy $TEMP_VIDEOS
```
This is full of my custom logic etc., so I’m not sure it’s the best way to learn about ffmpeg — I’m just a manual-exploring noob. It would be clearer with simple examples not tainted with my use case (there is a difference if the image is portrait or landscape, I have output override confirmation/forcing, backup and resume logic done in my naïve way…).
So I’ll just enumerate what comes next:
- probe the main video length (the slug)
- probe the extra videos’ lengths (the intro, the panoramics, etc.)
- speed up the main video so it fits the sound length (re-encoding needed here, as I drop most frames)

Then I slice the main video in 3 parts [1st]-[2nnnnnnnnnnnnnnnnnnnnnd]-[3rd], so I can fade in and out [1st] and [3rd] (10 seconds each) without re-encoding all of [2nd] (several minutes).
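A minimal sketch of those speed-up and slicing steps. All the names and durations here are made up (the real script probes them with ffprobe), and the ffmpeg calls are shown as comments since they need real input files; the idea is that the setpts factor is just the ratio of audio duration to video duration:

```shell
#!/bin/sh
# Hypothetical durations in seconds (normally probed with ffprobe):
videoDuration=7200   # e.g. a 2-hour recording
audioDuration=240    # e.g. a 4-minute soundtrack

# setpts multiplies every timestamp, so a factor < 1 speeds the video up:
factor=$(awk -v a="$audioDuration" -v v="$videoDuration" 'BEGIN { printf "%.6f", a / v }')
echo "setpts factor: $factor"    # prints "setpts factor: 0.033333"

# Re-encode once, dropping frames down to a 60 fps output:
#   ffmpeg -i slug.mp4 -filter:v "setpts=${factor}*PTS" -r 60 -an sped.mp4
# Slice so only the 10 s ends get re-encoded for the fades, not the long middle:
#   ffmpeg -i sped.mp4 -t 10 -filter:v "fade=t=in:d=10" first.mp4
#   ffmpeg -ss 10 -t $((audioDuration - 20)) -i sped.mp4 -c copy middle.mp4
#   ffmpeg -sseof -10 -i sped.mp4 -filter:v "fade=t=out:d=10" last.mp4
```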
Yes, it’s a shell script, so all I do is run commands like the ones in the first post to indicate where the final image is, the soundtrack globs, the video globs, the output…
This temp video is just a slug of all the videos. It’s like appending strings, if you will.
This is the most basic thing you do in ffmpeg, so you will find lots of resources about this.
It’s cheap so you can concat an X hours video in a couple seconds. Then I can proceed with speeding up the video.
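For reference, here is the minimal form of that concat, using an explicit list file instead of process substitution (filenames made up; the ffmpeg line is shown as a comment since it needs real clips):

```shell
#!/bin/sh
# Build the list file the concat demuxer reads: one "file '...'" line per clip.
cat > list.txt <<'EOF'
file 'clip1.mp4'
file 'clip2.mp4'
EOF
cat list.txt

# -c copy stream-copies (no re-encoding), which is why hours of
# video concatenate in a couple of seconds:
#   ffmpeg -f concat -safe 0 -i list.txt -c copy slug.mp4
```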
Sorry, I didn’t understand?
I guess it’s as fast as can be, depending on your parameters, as it’s what everybody uses under the hood anyway (Google, Blender, whatever…).
Re-encoding induces quality loss and heavy computing. In my script I only do it once, as I drop frames to go from X hours down to X minutes of duration.
Here is a valuable resource for the command you are interested in. It’s always worth reading the docs without the oversimplifications and misunderstandings of some random YouTuber/intermediary: https://trac.ffmpeg.org/wiki/Concatenate
And here are the docs. I must admit it’s gigantic, and ffmpeg is far from being the easiest CLI tool, but it’s super powerful and you’ll need that to handle it: https://ffmpeg.org/documentation.html
Oh OK! Me, I surround code with triple backticks (```), not single quotes (’).
From what I can see in the very first command, escaped spaces in string args are not handled well.
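For what it’s worth, a sketch of one way to quote awkward names for the concat demuxer (the filename here is made up): spaces are fine once the name is single-quoted, but an embedded single quote has to be escaped as '\'' inside the list file.

```shell
#!/bin/sh
f="my clip's take 1.mp4"                       # hypothetical name: a space and a quote
esc=$(printf '%s' "$f" | sed "s/'/'\\\\''/g")  # turn each ' into '\''
printf "file '%s'\n" "$esc"
# prints: file 'my clip'\''s take 1.mp4'
```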
The “preformatted text” option will use single backticks for a single line, but triple backticks for multiple lines.
You can type a language name after the first set of backticks to get specific syntax highlighting, otherwise it will guess. For example with “```zshell”:
```zshell
ffmpeg -f concat -safe 0 -i <(for f in $videos; do echo file "'$f'"; done) -c:v copy $TEMP_VIDEOS
```
Is this the actual code that would merge all the videos? Every single time I have used the concat command, I would always have to write a text file that would list all my videos. Concat would have to use that file in order to merge all the videos. I don’t see anywhere in your command that concat is reading a text file (unless I’m misreading it). Do you skip this step?
The ffmpeg call here has the -safe 0 option to accept absolute paths, because $videos is an array containing absolute file paths. (The files are not located relative to the “file descriptor”, which is /dev/fd/11 here.) So it’s more like:
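Concretely, the process substitution generates that text-file content on the fly, and ffmpeg reads it through /dev/fd/NN as if it were a list file on disk. With made-up absolute paths, the loop inside <( … ) produces something like:

```shell
#!/bin/sh
# Hypothetical absolute paths (this stands in for the $videos array):
set -- /Users/me/capture/part1.mp4 /Users/me/capture/part2.mp4

# This is exactly the loop that runs inside <( ... ):
for f in "$@"; do echo "file '$f'"; done
# prints:
#   file '/Users/me/capture/part1.mp4'
#   file '/Users/me/capture/part2.mp4'
```

So the "write a text file" step isn’t skipped — it’s just done in-memory, per invocation.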
Last question: from my research online, it’s been mentioned several times that concat only works if the files all have the same resolution, framerate, file format, etc. But what would happen if the video files are not all the same? I could see concat giving an error if the file format is wrong, but what if the file formats are right and the resolution is wrong? Or the framerate is different, etc.?