07-11-2016, 12:56 PM
Principles
• Dynamic: Airflow pipelines are configuration as code (Python), which allows for dynamic pipeline generation: you can write code that instantiates pipelines on the fly (see the sketch after this list).
• Extensible: Easily define your own operators, executors and extend the library so that it fits the level of abstraction
that suits your environment.
• Elegant: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow
using the powerful Jinja templating engine.
• Scalable: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of
workers. Airflow is ready to scale to infinity.
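To make the "Dynamic" and "Elegant" points concrete, here is a minimal sketch of a DAG file that generates its tasks in an ordinary Python loop and parameterizes the shell command with Jinja. The DAG id, task ids, and dates are placeholders I chose for illustration; only `DAG`, `BashOperator`, and the built-in `{{ ds }}` template variable come from Airflow itself (classic 1.x import paths assumed):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

dag = DAG(
    dag_id='example_dynamic',          # placeholder name
    start_date=datetime(2016, 1, 1),
    schedule_interval='@daily',
)

# Because the DAG file is plain Python, an ordinary loop can instantiate
# any number of tasks: the pipeline's shape is computed, not hand-written.
for i in range(3):
    BashOperator(
        task_id='echo_{}'.format(i),
        # {{ ds }} is Airflow's built-in Jinja variable for the execution
        # date; it is rendered by the templating engine at run time.
        bash_command='echo "partition {} for {{{{ ds }}}}"'.format(i),
        dag=dag,
    )
```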
Beyond the Horizon
Airflow is not a data streaming solution. Tasks do not move data from one to the other (though tasks
can exchange metadata!). Airflow is not in the [Spark Streaming](http://spark.apache.org/streaming/) or
[Storm](https://storm.apache.org/) space; it is more comparable to [Oozie](http://oozie.apache.org/) or [Azkaban](http://data.linkedin.com/opensource/azkaban).
Workflows are expected to be mostly static or slowly changing. You can think of the structure of the tasks in your
workflow as slightly more dynamic than a database structure would be. Airflow workflows are expected to look similar
from one run to the next; this allows for clarity around the unit of work and continuity.
3.2.2 It’s a DAG definition file
One thing to wrap your head around (it may not be intuitive for everyone at first) is that this Airflow Python
script is really just a configuration file specifying the DAG's structure as code. The actual tasks defined here will run
in a different context from the context of this script. Different tasks run on different workers at different points in time,
which means that this script cannot be used to cross-communicate between tasks. Note that for this purpose
we have a more advanced feature called XCom.
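As a rough sketch of what XCom looks like in practice (the DAG id and task ids below are invented for the example; XCom itself, `xcom_pull`, and the automatic push of a callable's return value are real Airflow behavior), one task can hand a small piece of metadata to another:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

dag = DAG('example_xcom', start_date=datetime(2016, 1, 1),
          schedule_interval='@daily')

def push(**context):
    # The return value of a PythonOperator callable is automatically
    # pushed to XCom under the key 'return_value'.
    return {'row_count': 42}

def pull(**context):
    # Pull the small piece of metadata the upstream task pushed.
    value = context['ti'].xcom_pull(task_ids='push_metadata')
    print('upstream reported: %s' % value)

push_task = PythonOperator(task_id='push_metadata', python_callable=push,
                           provide_context=True, dag=dag)
pull_task = PythonOperator(task_id='pull_metadata', python_callable=pull,
                           provide_context=True, dag=dag)
push_task.set_downstream(pull_task)
```

Note that XCom is meant for metadata, not payloads; it is stored in Airflow's metadata database, so large data should still move through external systems.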
People sometimes think of the DAG definition file as a place where they can do some actual data processing; that is
not the case at all! The script's purpose is to define a DAG object. It needs to evaluate quickly (seconds, not minutes)
since the scheduler will execute it periodically to reflect any changes.
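To illustrate that point, a minimal sketch of how such a file might be laid out (names are again placeholders): the top level does cheap object construction only, and the actual work lives inside a task callable that runs on a worker.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def heavy_processing():
    # Real work belongs here: this function only runs on a worker,
    # and only when the task is actually scheduled.
    pass

# Top level of the file: cheap definitions only. No I/O, no queries,
# no computation, so the scheduler can re-parse it in seconds.
dag = DAG('example_definition', start_date=datetime(2016, 1, 1),
          schedule_interval='@daily')

PythonOperator(task_id='process', python_callable=heavy_processing, dag=dag)
```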