Mass Data Processing with SurfQuake
⚠️ Note: This feature is under active development. The expected stable release is SurfQuake 0.1.0 (July 2025).
Welcome to the Signal Processing Module of SurfQuake — a powerful and scalable command-line tool for applying signal processing pipelines to large collections of seismic waveform data.
Whether you're analyzing earthquake catalogs or preparing data for machine learning models, this module allows you to define, control, and automate your signal processing workflow with precision.
📘 Learn more in our Signal Processing Tutorial
The macro configuration (provided via a YAML file) defines your signal processing pipeline in a structured, step-by-step format. Each process entry specifies a method, its parameters, and the order in which it is applied to each trace. You can process either full daily waveform files or extract and process specific segments based on an event file. An interactive plotting tool is also available to help you visually inspect and validate the processing steps.
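As an illustration of this step-by-step layout, a pipeline file might look like the sketch below. The section name, method names, and parameter keys shown here are assumptions for illustration, not the authoritative SurfQuake schema:

```yaml
# Illustrative sketch only — the exact keys SurfQuake accepts may differ.
Analysis:
  - name: rmean          # step 1: remove the mean offset
    method: simple
  - name: taper          # step 2: taper trace edges before filtering
    method: cosine
    max_percentage: 0.05
  - name: filter         # step 3: bandpass filter
    method: bandpass
    freqmin: 0.5
    freqmax: 10.0
```

Steps are applied to each trace in the order they appear in the file.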
Running the Process
This tool runs from the terminal. Start by asking for help: the available commands are quick, processing, and processing_daily.
surfquake quick
Let's start with the quick command:
Overview: Process seismic traces. You can:
- Apply processing steps (filtering, normalization, etc.)
- Optionally visualize the waveforms (interactive mode)
- Apply a user-defined post-processing script before or after plotting
Modes:
Default : Interactive mode (plotting + user prompts)
--auto : Non-interactive mode (no plots, no prompts, outputs written automatically)
Post-Script Logic:
Use `--post_script` to apply a custom script to each event stream.
Use `--post_script_stage` to control **when** it runs:
• before : script runs before plotting (good for filtering, editing headers)
• after : script runs after plotting (good if picks or metadata are added)
Examples:
Process waveform files with a config and plot interactively:
surfquake quick -w "./data/*.mseed" -c ./config.yaml -i ./inventory.xml --plot_config plot.yaml
Key Arguments:
-w, --wave_files [REQUIRED] Glob pattern or path to waveform files
-c, --config_file [OPTIONAL] YAML config defining processing steps
-i, --inventory_file [OPTIONAL] Station metadata (StationXML or RESP)
-o, --output_folder [OPTIONAL] Directory to save processed traces
-a, --auto [OPTIONAL] Run in automatic (non-interactive) mode
--plot_config [OPTIONAL] Plotting settings YAML
--post_script [OPTIONAL] Python script to apply to each stream
--post_script_stage When to run post-script: 'before' or 'after' (default: after)
surfquake processing
Overview:
Process or cut waveforms associated with seismic events.
You can:
- Cut traces using event times and headers
- Apply processing steps (filtering, normalization, etc.)
- Optionally visualize the waveforms (interactive mode)
- Apply a user-defined post-processing script before or after plotting
Modes:
Default : Interactive mode (plotting + user prompts)
--auto : Non-interactive mode (no plots, no prompts, outputs written automatically)
Post-Script Logic:
Use `--post_script` to apply a custom script to each event stream.
Use `--post_script_stage` to control **when** it runs:
• before : script runs before plotting (good for filtering, editing headers)
• after : script runs after plotting (good if picks or metadata are added)
Usage Example:
surfquake processing -p "./project.pkl" -i inventory.xml -e events.xml -c config.yaml -o ./output_folder --phases P,S --plot_config plot_settings.yaml --post_script custom_postproc.py
Key Arguments:
-p, --project_file [OPTIONAL] Path to an existing project file
-w, --wave_files [OPTIONAL] Path or glob pattern to waveform files
-i, --inventory_file [OPTIONAL] Station metadata file (XML, RESP)
-e, --event_file [OPTIONAL] Event catalog in QuakeML format
-c, --config_file [OPTIONAL] Processing configuration file (YAML)
-l, --plots [OPTIONAL] Enable interactive seismogram plotting
-o, --output_folder [OPTIONAL] Folder where processed files are saved
-n, --net [OPTIONAL] Project network filter. Example: NET1,NET2,NET3
-s, --station [OPTIONAL] Project station filter. Example: STA1,STA2,STA3
-ch, --channel [OPTIONAL] Project channel filter. Example: HHZ,BHN
-t, --cut_time [OPTIONAL] Pre- and post-first-arrival window in seconds (symmetric)
-cs, --cut_start_time [OPTIONAL] Cut window before the first arrival, in seconds
-ce, --cut_end_time [OPTIONAL] Cut window after the first arrival, in seconds
-r, --reference [OPTIONAL] Cut reference; if set to |event_time|, the event origin time is used as the cut reference
--phases [OPTIONAL] Comma-separated list of phases for arrival estimation (e.g., P,S)
--plot_config [OPTIONAL] Optional plot configuration file (YAML)
--post_script [OPTIONAL] Python script to apply per event stream
--post_script_stage [OPTIONAL] When to apply the post-script: before | after (default: before)
Notes:
Set -w to process individual waveform files, or -p to load an existing project file.
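Combining the cut options above with --auto gives a fully non-interactive batch run; the file paths below are placeholders:

```shell
surfquake processing -p ./project.pkl -i ./inventory.xml -e ./events.xml \
    -c ./config.yaml -o ./output_folder -cs 10 -ce 120 --auto
```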
Example Plotting File Format (YAML)
plotting:
  traces_per_fig: 6
  sort_by: False          # options: distance, backazimuth, None
  vspace: 0.05
  title_fontsize: 9
  show_legend: True
  plot_type: "standard"   # 'record' for a record section, 'overlay' for all traces in one plot
  pick_output_file: "./picks.csv"
  sharey: False
  show_crosshair: False
  show_arrivals: False    # show theoretical arrival times in a record section
Example Event File Format
date;hour;latitude;longitude;depth;magnitude
2022-02-02;23:35:29.7;42.5089;1.4293;20.7;1.71
2022-02-03;12:01:21.6;42.3047;2.2741;0.0;1.65
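The event file is a plain semicolon-separated table, so it is easy to inspect before a run. For example, with pandas (this snippet is an illustration only, not part of SurfQuake):

```python
import io

import pandas as pd

# Sample rows in the documented format:
# date;hour;latitude;longitude;depth;magnitude
text = """date;hour;latitude;longitude;depth;magnitude
2022-02-02;23:35:29.7;42.5089;1.4293;20.7;1.71
2022-02-03;12:01:21.6;42.3047;2.2741;0.0;1.65
"""

events = pd.read_csv(io.StringIO(text), sep=";")
# Combine date and hour into a single origin-time column for sorting/filtering.
events["origin_time"] = pd.to_datetime(events["date"] + " " + events["hour"],
                                       format="%Y-%m-%d %H:%M:%S.%f")
print(events[["origin_time", "latitude", "longitude", "depth", "magnitude"]])
```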