Last modified: February 19, 2026, by Alexander Williams
Python subprocess check_output Guide
Python is a powerful language for automation, and automation often means running system commands. The subprocess module is the standard tool for this. This article focuses on subprocess.check_output, a convenient function for running a command and capturing its output.
What is subprocess.check_output?
The subprocess.check_output function runs a command. It waits for the command to finish. Then, it returns the command's captured output. By default, this output is bytes. It is perfect for getting results from shell tools.
This function is part of the subprocess module. You must import it first. It simplifies a common task. You run a command and get its text result.
Basic Syntax and Parameters
Here is the basic syntax for the function.
import subprocess
# Basic syntax
output = subprocess.check_output(args, *, stdin=None, stderr=None, shell=False, cwd=None, encoding=None, errors=None, universal_newlines=None, timeout=None, text=None, **other_popen_kwargs)
The args parameter is most important. It can be a string or a list. Using a list is safer. It avoids shell injection risks.
Other key parameters include shell, cwd, and encoding. We will explore them with examples.
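The encoding parameter changes the return type from bytes to str. A minimal sketch, using sys.executable as the command so it runs the same on any platform:

```python
import subprocess
import sys

# Without encoding (or text=True), check_output returns bytes.
raw = subprocess.check_output([sys.executable, "-c", "print('hello')"])

# With encoding set, the output is decoded to str for you.
decoded = subprocess.check_output(
    [sys.executable, "-c", "print('hello')"],
    encoding="utf-8",
)

print(type(raw).__name__)      # bytes
print(type(decoded).__name__)  # str
```

Passing text=True is equivalent to using the platform's default encoding.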
A Simple Example: Getting Directory List
Let's start with a simple example. We will list files in a directory.
import subprocess
# Run the 'ls' command on Unix/macOS or 'dir' on Windows
try:
    # For Linux/macOS
    output = subprocess.check_output(["ls", "-l"])
    print("Command output:", output.decode('utf-8'))
except subprocess.CalledProcessError as e:
    print(f"Command failed with return code {e.returncode}")
Command output: total 24
-rw-r--r-- 1 user staff 1234 Jan 10 10:00 script.py
-rw-r--r-- 1 user staff 567 Jan 9 09:00 data.txt
The command ls -l runs. The output is captured as bytes. We decode it to a string for printing. The try-except block catches errors.
Handling Command Errors
Commands can fail. check_output raises an exception on failure. It is a CalledProcessError. You must handle it.
import subprocess
try:
    # This command will likely fail (file doesn't exist);
    # stderr=subprocess.STDOUT captures the error message too
    output = subprocess.check_output(
        ["cat", "non_existent_file.txt"],
        stderr=subprocess.STDOUT
    )
except subprocess.CalledProcessError as error:
    print(f"Error! Command returned code: {error.returncode}")
    # With stderr redirected, the message is in error.output
    if error.output:
        print(f"Error message: {error.output.decode()}")
Error! Command returned code: 1
Error message: cat: non_existent_file.txt: No such file or directory
This is a critical feature. It makes your script robust. Always wrap calls in try-except.
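The exception object carries useful details beyond the return code. A small sketch that forces a nonzero exit status, using the Python interpreter itself so it is portable:

```python
import subprocess
import sys

returncode = None
try:
    subprocess.check_output([sys.executable, "-c", "import sys; sys.exit(3)"])
except subprocess.CalledProcessError as exc:
    # The exception records the exit status and the command that failed
    returncode = exc.returncode
    print(f"Command {exc.cmd} exited with {exc.returncode}")
```

Here exc.cmd is the argument list you passed, and exc.returncode is the child's exit status.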
Using the shell=True Argument
Sometimes you need shell features. Use shell=True. But be very careful. It can be a security risk.
import subprocess
# Using shell=True to run a pipeline
command = "ls -l | grep .py"
output = subprocess.check_output(command, shell=True, text=True)
print("Python files:", output)
Python files: -rw-r--r-- 1 user staff 1234 Jan 10 10:00 script.py
Notice the text=True parameter. It automatically decodes output to a string. It's cleaner than manual .decode().
Warning: Avoid shell=True with user input. It can lead to shell injection attacks.
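To see why list arguments are safer, consider a hypothetical user-supplied value. Passed as a list element, it stays plain data; shlex.quote can escape it when a shell string is truly unavoidable. A sketch:

```python
import shlex
import subprocess
import sys

# Hypothetical user input; with shell=True this could smuggle in
# extra commands (e.g. "file.txt; rm -rf ~").
user_input = "hello; echo INJECTED"

# Safe: as a list element, user_input is one literal argument.
safe = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", user_input],
    text=True,
)
print(safe.strip())  # hello; echo INJECTED  (treated as plain data)

# If you must build a shell string, quote untrusted parts first.
quoted = shlex.quote(user_input)
print(quoted)
```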
Setting Timeouts for Commands
Commands can hang. Use the timeout parameter. If the command runs longer than the given number of seconds, the child process is killed and a TimeoutExpired exception is raised.
import subprocess
try:
    # This 'sleep' command would take 10 seconds
    output = subprocess.check_output(["sleep", "10"], timeout=2)
except subprocess.TimeoutExpired:
    print("Command took too long and was terminated!")
Command took too long and was terminated!
This prevents your script from freezing. It is essential for reliable automation.
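The sleep command above is Unix-only. A portable sketch that uses the Python interpreter as the slow child process and inspects the exception:

```python
import subprocess
import sys

timed_out = False
try:
    subprocess.check_output(
        [sys.executable, "-c", "import time; time.sleep(10)"],
        timeout=1,
    )
except subprocess.TimeoutExpired as exc:
    timed_out = True
    # exc.cmd and exc.timeout describe what was killed and why
    print(f"Killed after {exc.timeout} second(s)")
```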
Changing the Working Directory
Run a command in a specific folder. Use the cwd parameter.
import subprocess
import os

# Create a path to a specific directory
target_dir = "/path/to/your/project"

if os.path.exists(target_dir):
    output = subprocess.check_output(["ls"], cwd=target_dir, text=True)
    print(f"Files in {target_dir}:\n{output}")
This is very useful. It lets you control where the command executes.
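Here is a self-contained variant. It builds a temporary directory first, so the path is guaranteed to exist, and lists it from the child process:

```python
import os
import subprocess
import sys
import tempfile

# Create a scratch directory containing one file, then run a child
# process with cwd set to it and list the current directory.
with tempfile.TemporaryDirectory() as tmp:
    open(os.path.join(tmp, "hello.txt"), "w").close()
    output = subprocess.check_output(
        [sys.executable, "-c", "import os; print(sorted(os.listdir()))"],
        cwd=tmp,
        text=True,
    )

print(output.strip())  # ['hello.txt']
```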
Capturing Standard Error (stderr)
By default, check_output captures standard output only. Error messages go to stderr. To capture them, redirect stderr.
import subprocess

try:
    # A failing command raises; the merged output lives on the exception
    subprocess.check_output(
        ["ls", "non_existent_file.txt"],
        stderr=subprocess.STDOUT,  # Redirect stderr to the same pipe
        text=True
    )
except subprocess.CalledProcessError as e:
    print("Output (including errors):", e.output)
Output (including errors): ls: non_existent_file.txt: No such file or directory
Use stderr=subprocess.STDOUT to merge streams. Now you can log all messages.
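The opposite is also possible. Pass stderr=subprocess.DEVNULL to discard error messages entirely. A sketch with a child that writes to both streams:

```python
import subprocess
import sys

# A child process that writes to both stdout and stderr.
code = "import sys; print('out'); print('err', file=sys.stderr)"

# DEVNULL silently discards everything the child writes to stderr.
output = subprocess.check_output(
    [sys.executable, "-c", code],
    stderr=subprocess.DEVNULL,
    text=True,
)
print(output.strip())  # out
```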
Practical Example: Checking Disk Usage
Let's build a useful script. It checks disk usage and alerts if it's high.
import subprocess
def check_disk_usage(path="/"):
    """Check disk usage percentage for a given path."""
    try:
        # Run the 'df' command
        output = subprocess.check_output(
            ["df", "-h", path],
            text=True,
            stderr=subprocess.STDOUT
        )
        lines = output.strip().split('\n')
        # The second line contains the data for our path
        if len(lines) > 1:
            data_line = lines[1]
            # Split by whitespace and get the usage percentage (without %)
            usage_percent = int(data_line.split()[4].replace('%', ''))
            return usage_percent
        else:
            return None
    except subprocess.CalledProcessError as e:
        print(f"Failed to check disk usage: {e.output}")
        return None

# Use the function
usage = check_disk_usage()

if usage is not None:
    print(f"Disk usage is at {usage}%")
    if usage > 90:
        print("Warning: Disk is almost full!")
Disk usage is at 75%
This shows a real-world application. You can run system diagnostics with Python.
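The parsing step can be exercised without running df at all. Split a sample data line (hypothetical values) and extract the fifth whitespace-separated field:

```python
# A sample 'df -h' data line (hypothetical values).
data_line = "/dev/disk1s1  466Gi  349Gi  112Gi  76% /"

# Field index 4 holds the usage percentage; strip the '%' sign.
usage_percent = int(data_line.split()[4].replace('%', ''))
print(usage_percent)  # 76
```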
Conclusion
subprocess.check_output is a powerful tool. It bridges Python and the system shell. Remember to use list arguments for safety. Always handle exceptions with try-except. Use timeouts for long-running commands.
This function is a cornerstone of system automation in Python. With it, you can integrate any command-line tool into your scripts. Start using it to make your Python programs more powerful and connected to the operating system.