spark-submit

Usage

The spark-submit command is used to execute a PySpark-based Prism project. This is distinct from a plain Python-based Prism project, which you would execute with the prism run command.
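
For context, a PySpark-based project is one whose tasks use the SparkSession exposed through hooks.spark. A minimal task might look like the sketch below; this is illustrative only (the class, module, and column names are placeholders), and the full task interface is covered in the API Reference:

# modules/module01.py -- illustrative sketch; names are placeholders
import prism.task

class Module01(prism.task.PrismTask):

    # `tasks` exposes tasks.ref(...) for upstream task outputs, and
    # `hooks.spark` is the SparkSession built from your PySpark profile.
    def run(self, tasks, hooks):
        df = hooks.spark.createDataFrame(
            [("a", 1), ("b", 2)], ["letter", "number"]
        )
        return df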

To use the spark-submit command, you must have a PySpark profile specified in profile.yml.
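
For reference, such a profile looks roughly like the following. This is a sketch: the profile and adapter names are placeholders, and the exact schema is documented on the Profile YML page.

# profile.yml -- illustrative sketch; see the Profile YML page for the full schema
my_profile:
  adapters:
    my_spark_adapter:
      type: pyspark
      alias: spark              # variable name for the SparkSession inside tasks
      config:
        spark.driver.cores: 2   # Spark configuration keys go here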

Usage: prism spark-submit [-h] [--modules [X.py Y.py ...]] [--all-upstream] [--all-downstream] [--full-tb] [-l] [--vars [A=a B=b ...]] [--context '{}']

Execute your Prism project as a PySpark job

Options:
  -h, --help            show this help message and exit

Command Options:
  --modules [X.py Y.py ...]
                        Path to script(s) that you want to run; if not specified, all modules in project are run
  --all-upstream        Run all modules upstream of --modules
  --all-downstream      Run all modules downstream of --modules

General Options:
  --full-tb             Display the full traceback for errors in the project; default is False
  -l, --log-level       Log level, must be one of `info`, `warn`, `error`, or `debug`. Default is `info`
  --vars [A=a B=b ...]  Prism variables as key-value pairs `key=value`. These overwrite any variable definitions in `prism_project.py`. All values are read as strings.
  --context '{}'        Prism variables as JSON. Cannot co-exist with --vars. These overwrite any variable definitions in `prism_project.py`.
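
For example, the options above can be combined as follows; the module name and variable values are illustrative:

# Run module01.py and everything downstream of it
$ prism spark-submit --modules module01.py --all-downstream

# Override project variables; all values are read as strings
$ prism spark-submit --vars run_date=2023-01-01

# The same override passed as JSON (cannot be combined with --vars)
$ prism spark-submit --context '{"run_date": "2023-01-01"}'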

Important: this command behaves identically to prism run, except that it should only be used to submit PySpark jobs.

Here's what the output looks like in Terminal:

$ prism spark-submit
--------------------------------------------------------------------------------
<HH:MM:SS> | INFO  | Running with prism v0.1.9rc2...
<HH:MM:SS> | INFO  | Found project directory at /Users/my_first_project

<HH:MM:SS> | INFO  | RUNNING EVENT 'parsing prism_project.py'................................................ [RUN]
22/06/28 21:01:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
<HH:MM:SS> | INFO  | FINISHED EVENT 'parsing prism_project.py'............................................... [DONE in 0.03s]
<HH:MM:SS> | INFO  | RUNNING EVENT 'module DAG'.............................................................. [RUN]
<HH:MM:SS> | INFO  | FINISHED EVENT 'module DAG'............................................................. [DONE in 0.01s]
<HH:MM:SS> | INFO  | RUNNING EVENT 'creating pipeline, DAG executor'......................................... [RUN]
<HH:MM:SS> | INFO  | FINISHED EVENT 'creating pipeline, DAG executor'........................................ [DONE in 0.01s]

<HH:MM:SS> | INFO  | ===================== tasks 'vermilion-hornet-Gyycw4kRWG' =====================
<HH:MM:SS> | INFO  | 1 of 1 RUNNING EVENT 'module01.py'...................................................... [RUN]
<HH:MM:SS> | INFO  | 1 of 1 FINISHED EVENT 'module01.py'..................................................... [DONE in 0.01s]

Done!
--------------------------------------------------------------------------------