Tasks

In its most basic form, any data pipeline can be thought of as a series of discrete steps that run in some sort of sequence. For example, ETL pipelines generally have three steps: extract --> transform --> load.

Prism projects are no different. A Prism project is composed of a set of tasks, and these tasks contain the bulk of the project's core logic.

What are tasks?

Tasks are classes that inherit an abstract class called PrismTask. There are two requirements to which all tasks must adhere:

  1. Each task must have a method called run. This method should contain all the business logic for the task, and it should return a non-null output.

  2. Tasks must live in their own *.py file.

Important: the output of a task's run function is what's used by downstream tasks in your pipeline. The return value can be anything – a Pandas or Spark DataFrame, a NumPy array, a string, a dictionary, whatever – but it cannot be null. Prism will throw an error if it is.
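For instance, a task like the following would fail at runtime, because its run method implicitly returns None (the module name bad_task.py is purely illustrative):

# modules/bad_task.py

from prism.task import PrismTask

class BadTask(PrismTask):

    def run(self, tasks, hooks):
        print("Doing some work...")
        # No return statement, so run returns None.
        # Prism will throw an error when this task executes.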

Apart from these two conditions, feel free to structure and define your tasks however you'd like, e.g., add other class methods, class attributes, etc.

What does a task look like?

Here's a simple task that produces the string "Hello, world!":

# modules/hello_world.py

from prism.task import PrismTask

class HelloWorld(PrismTask):
    
    def some_other_fn(self):
        return "This is a different class method"
    
    def run(self, tasks, hooks):
        test_str = "Hello, world!"
        return test_str

The HelloWorld task is defined in its own *.py file in the modules folder. It inherits the PrismTask class, and it contains a run function that returns a non-null string.

Critical: The run function has two mandatory parameters: tasks and hooks. Both are required, and Prism will throw an error if it finds a run function without these two parameters.
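The tasks parameter is how a task reads the outputs of its upstream dependencies. Here is a minimal sketch of a downstream task that consumes the HelloWorld output above. It assumes tasks.ref('<module>.py') is the call for referencing an upstream module's return value (see Prism's documentation on referencing other tasks for the exact API), and the module name goodbye_world.py is hypothetical:

# modules/goodbye_world.py

from prism.task import PrismTask

class GoodbyeWorld(PrismTask):

    def run(self, tasks, hooks):
        # tasks.ref pulls in the return value of an upstream module's run method
        upstream_str = tasks.ref("hello_world.py")  # "Hello, world!"
        return upstream_str.replace("Hello", "Goodbye")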

Each task must live in its own *.py file in the modules folder; defining multiple tasks in a single module will throw an error.

Good to know: Although user-defined tasks can be arbitrarily long or complex, it is helpful to think of them as discrete steps or objectives in your pipeline. For example, if you are creating an ETL pipeline, then you may want to split your code into three tasks: an extract task, a transform task, and a load task.
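As a sketch of what that split might look like, here are three minimal modules. The file names, the pandas-based logic, and the data paths are all illustrative, and tasks.ref is assumed to be the referencing mechanism, as in the sketch above:

# modules/extract.py

import pandas as pd
from prism.task import PrismTask

class Extract(PrismTask):

    def run(self, tasks, hooks):
        # Pull raw data from a source system (a CSV here, for illustration)
        return pd.read_csv("data/raw.csv")

# modules/transform.py

from prism.task import PrismTask

class Transform(PrismTask):

    def run(self, tasks, hooks):
        # Reference the Extract task's output via the tasks argument
        df = tasks.ref("extract.py")
        return df.dropna()

# modules/load.py

from prism.task import PrismTask

class Load(PrismTask):

    def run(self, tasks, hooks):
        df = tasks.ref("transform.py")
        df.to_parquet("data/clean.parquet")
        # run must still return a non-null value, even for a terminal task
        return "load complete"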

And that's it! Create a class that inherits the PrismTask class and implement the run method. Prism will take care of the rest.

Why do tasks live in their own modules?

Other orchestration platforms, like Airflow, leave task and module organization to the user. So why do we require each task to live in its own module?

The answer is pretty simple: it improves readability and ensures that all members of a data team are speaking the same language.

Different developers have different coding styles and intuitions, which can make it difficult to maintain consistency across a team. However, when you open a Prism project, you know exactly what to expect, no matter the author. You know that prism_project.py will contain all the configurations of the project. You know that the tasks will all live in modules, and you know that, for each task, the core logic will be contained in the run function.

Prism's common project structure helps to keep the code organized and makes it easier to locate specific files and functionality. This can help to prevent issues such as code duplication and can improve the overall quality and reliability of the code.
