spark-submit

Usage

The spark-submit command executes a PySpark-based Prism project. For a pure Python-based Prism project, use the run command instead.

In order to use the spark-submit command, you must have a PySpark profile specified in profile.yml.
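A PySpark profile might look like the following. This is an illustrative sketch only — the profile name, adapter keys, and alias shown here are placeholders, and the exact schema depends on your Prism version, so check the profile documentation for the authoritative structure:

```yaml
# profile.yml — illustrative sketch; adapter keys may differ across Prism versions
default:
  adapters:
    pyspark:
      alias: spark                 # placeholder name for the SparkSession exposed to tasks
      config:
        spark.driver.cores: 2      # standard Spark configuration properties
        spark.executor.memory: 4g
```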

Usage: prism spark-submit [OPTIONS]                                                                                                 
                                                                                                                                     
 Execute your Prism project as a PySpark job.                                                                                        
                                                                                                                                     
 Examples:                                                                                                                           
                                                                                                                                     
 • prism spark-submit
 • prism spark-submit -m module01.py -m module02.py
 • prism spark-submit -m module01 --all-downstream
 • prism spark-submit -v VAR1=VALUE1 -v VAR2=VALUE2
 • prism spark-submit --context '{"hi": 1}'
                                                                                                                                     
╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --module          -m  TEXT                     Modules to execute. You can specify multiple modules as follows: -m               │
│                                                <your_first_module> -m <your_second_module>.                                      │
│ --all-downstream                               Execute all tasks downstream of modules specified with --module.                  │
│ --all-upstream                                 Execute all tasks upstream of modules specified with --module.                    │
│ --log-level       -l  [info|warn|error|debug]  Set the log level.                                                                │
│ --full-tb                                      Show the full traceback when an error occurs.                                     │
│ --vars            -v  TEXT                     Variables as key-value pairs. These overwrite variables in prism_project.py. All  │
│                                                values are interpreted as strings.                                                │
│ --context             TEXT                     Context as a dictionary. Must be valid JSON. These overwrite variables in         │
│                                                prism_project.py.                                                                 │
│ --help                                         Show this message and exit.                                                       │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
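Note the practical difference between --vars and --context: --vars values always arrive as strings, while --context is parsed as JSON and therefore preserves types. A small sketch of that distinction (the parse_vars helper here is hypothetical, for illustration only — it is not part of Prism):

```python
import json

def parse_vars(pairs):
    # Hypothetical helper: split each "KEY=VALUE" pair on the first "=".
    # Everything to the right of "=" stays a string, mirroring --vars behavior.
    return dict(pair.split("=", 1) for pair in pairs)

vars_ = parse_vars(["VAR1=1", "VAR2=hello"])
print(vars_["VAR1"])    # "1" — a string, even though it looks numeric

context = json.loads('{"hi": 1}')
print(context["hi"])    # 1 — JSON parsing preserves the integer type
```

So if a task needs a typed value (an integer, a list, a nested dictionary), pass it through --context; use --vars for simple string overrides.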

Important: this command behaves identically to prism run, except that it should only be used to submit PySpark jobs.

Here's what the output looks like in Terminal:

$ prism spark-submit
--------------------------------------------------------------------------------
<HH:MM:SS> | INFO  | Running with prism v0.2.1...
<HH:MM:SS> | INFO  | Found project directory at /my_first_project
 
<HH:MM:SS> | INFO  | RUNNING 'parsing prism_project.py'.............................................. [RUN]
<YY/MM/DD> <HH:MM:SS> WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
<HH:MM:SS> | INFO  | FINISHED 'parsing prism_project.py'............................................. [DONE in 0.03s]
<HH:MM:SS> | INFO  | RUNNING 'task DAG'.............................................................. [RUN]
<HH:MM:SS> | INFO  | FINISHED 'task DAG'............................................................. [DONE in 0.01s]
<HH:MM:SS> | INFO  | RUNNING 'creating pipeline, DAG executor'....................................... [RUN]
<HH:MM:SS> | INFO  | FINISHED 'creating pipeline, DAG executor'...................................... [DONE in 0.01s]
 
<HH:MM:SS> | INFO  | ===================== tasks (vermilion-hornet-Gyycw4kRWG) =====================
<HH:MM:SS> | INFO  | 1 of 2 RUNNING EVENT 'decorated_task.example_task'.............................. [RUN]
<HH:MM:SS> | INFO  | 1 of 2 FINISHED EVENT 'decorated_task.example_task'............................. [DONE in 0.02s]
<HH:MM:SS> | INFO  | 2 of 2 RUNNING EVENT 'class_task.ExampleTask'................................... [RUN]
<HH:MM:SS> | INFO  | 2 of 2 FINISHED EVENT 'class_task.ExampleTask'.................................. [DONE in 0.01s]
 
<HH:MM:SS> | INFO  | Done!
--------------------------------------------------------------------------------
