Extend Running Shell Commands: Dynamic Chaining and Task Queuing
Ever found yourself kicking off a long-running shell command, only to realize moments later you forgot a crucial follow-up action? Adding `&& foo || bar` after the fact is impossible. This common predicament leads to a dilemma: either wait manually for the initial process to finish or open a new terminal, neither of which is ideal. Fortunately, there are powerful techniques to chain commands dynamically, even after the first one has already begun, enhancing your command-line productivity.
Dynamic Command Chaining with Job Control
The most direct solution for extending an already running foreground process leverages your shell's built-in job control features. This method allows you to inject follow-up logic without restarting the original command:
- **Suspend the foreground process:** Press `Ctrl+Z`. This temporarily pauses the running command and returns control to your shell prompt. You'll typically see a message indicating the job has been stopped, for example `[1]+ Stopped ( sleep 10; false )`.
- **Send it to the background:** Type `bg` and press Enter. The suspended job resumes execution in the background, freeing your terminal. The shell usually prints the job ID and command, like `[1]+ ( sleep 10; false ) &`.
- **Wait for completion and get the exit code:** Now you can queue your follow-up commands. To make them run after the background process finishes and react to its success or failure, use the `wait` command. While `wait` by itself waits for all jobs, `wait %%` tells the shell to wait specifically for the most recently backgrounded job (identified by `%%`). When given a specific job ID, `wait` returns the exit code of the waited-on process, which is critical for conditional execution.
- You can then use `&&` for commands that should execute only upon success, or `||` for commands that should execute upon failure:

```bash
wait %% && echo "Process succeeded, doing next step..." || echo "Process failed, handling error..."
```
This allows you to react to the outcome of your original command even though you added the follow-up logic mid-execution.
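The steps above can be sketched non-interactively (assuming bash; in a real session you would press `Ctrl+Z` and type `bg` instead of launching the job with `&`, and `( sleep 1; false )` is just a stand-in for a long command that ends up failing):

```bash
#!/usr/bin/env bash
# Stand-in for the long-running command; starting it with &
# plays the role of Ctrl+Z followed by `bg`.
( sleep 1; false ) &

# Wait for the most recently backgrounded job and branch on its exit code.
wait %% \
  && echo "Process succeeded, doing next step..." \
  || echo "Process failed, handling error..."
```

Since `( sleep 1; false )` exits non-zero, the `||` branch fires here; swap `false` for `true` to see the success path.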
Proactive Task Queueing with task-spooler
For situations where you anticipate a chain of commands or want to manage a queue of long-running tasks, task-spooler (often available via package managers as tsp on Debian/Ubuntu and similar systems) is an excellent utility.
With task-spooler, you can:
- **Queue commands immediately:** Simply prepend `tsp` to your command, e.g., `tsp yt-dlp <some URL>`. The `tsp` command returns immediately, and your task is added to a background queue.
- **Chain subsequent tasks:** You can then queue another command that depends on the previous one's output or existence, even if the first hasn't started yet: `tsp ffmpeg -i <file that will be downloaded by yt-dlp> <other params>`. `task-spooler` runs queued tasks in order, so the second command starts only once the first has finished.
- **Monitor status:** Check the state of queued and running tasks at any time by running `tsp` with no arguments, or follow a task's output with `tsp -t`, giving you full visibility into your background operations.
- **Run in parallel:** While `task-spooler` runs tasks sequentially by default, it can be configured to run tasks in parallel, offering flexibility for different workflows.
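A minimal `tsp` session illustrating the queueing model (this assumes `task-spooler` is installed with its binary named `tsp`; the `sleep` and `echo` commands stand in for real long-running jobs):

```bash
#!/usr/bin/env bash
# Skip gracefully on systems without task-spooler installed.
command -v tsp >/dev/null 2>&1 || { echo "tsp not installed; skipping"; exit 0; }

tsp sleep 2            # queue job 1; tsp prints its job ID and returns at once
tsp echo "follow-up"   # queue job 2; it runs only after job 1 finishes
tsp                    # list the queue: state, exit code, and command per job
tsp -w                 # block until the most recently queued job completes
echo "queue drained"
```

Running `tsp -S 2` would open a second slot so two queued jobs can execute simultaneously, which is the parallel mode mentioned above.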
This approach provides a robust, persistent system for managing multiple long-running operations without blocking your primary shell session, significantly boosting your command-line efficiency.
Other Considerations
A less ideal workaround, which came up in discussion, involves running the initial command in the background from the start (`long-running-command &`) and then using `tail --pid=$PID -f /dev/null && forgotten-command`. This works because `tail` monitors the process ID and exits once the process disappears, triggering the follow-up command. However, it has significant drawbacks: it requires the forethought to add the `&` in the first place, and it cannot retrieve the original command's exit code, limiting its utility for conditional follow-up actions. That makes the job control method generally superior for dynamic post-execution logic.
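For completeness, the workaround can be sketched like this (using `sleep 1` as a stand-in for the long-running command; `tail --pid` is a GNU coreutils extension):

```bash
#!/usr/bin/env bash
# Start the long-running command in the background and capture its PID.
sleep 1 &
pid=$!

# tail polls the PID and exits once the process disappears; only then
# does the follow-up run. Note that tail's exit status, not the original
# command's, decides the && branch: the original exit code is lost.
tail --pid="$pid" -f /dev/null && echo "forgotten-command runs here"
```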