status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | apache/airflow | https://github.com/apache/airflow | 22,626 | ["airflow/jobs/backfill_job.py", "airflow/models/taskmixin.py", "tests/dags/test_mapped_classic.py", "tests/dags/test_mapped_taskflow.py"] | The command 'airflow dags test' is deadlocking tasks using AIP-42 features with the PythonVirtualenvOperator | ### Apache Airflow version
main (development)
### What happened
When using the command `airflow dags test`, tasks are put into a deadlocked state.
### What you think should happen instead
I think `airflow dags test` shouldn't deadlock tasks.
### How to reproduce
```python
from datetime import datetime, timedelta

from airflow.models import DAG
from airflow.operators.python import PythonVirtualenvOperator

the_imports = ["flask", "wheel", "click", "cryptography", "packaging"]


def dynamic_imports(imports: list):
    from random import choices

    return [choices(imports, k=3) for _ in range(7)]


def dynamic_importer(*args):
    import importlib

    for mod in args[0]:
        print(mod)
        importlib.import_module(mod)


with DAG(
    dag_id="sys_site_packages_true",
    schedule_interval=timedelta(days=365),
    start_date=datetime(2001, 1, 1),
    tags=["core", "extended_tags", "venv_op"],
) as dag:
    pv0 = PythonVirtualenvOperator.partial(
        task_id="dont_install_packages_already_in_outer_environment",
        python_callable=dynamic_importer,
        python_version=3.8,
        system_site_packages=True,
        requirements=the_imports,
    ).expand(op_args=dynamic_imports(the_imports))
```

Scheduler debug log from running the DAG with `airflow dags test`:

```
[2022-03-30 17:13:36,447] {taskinstance.py:850} DEBUG - Setting task state for <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [None]> to scheduled
[2022-03-30 17:13:36,455] {backfill_job.py:405} DEBUG - *** Clearing out not_ready list ***
[2022-03-30 17:13:36,458] {taskinstance.py:759} DEBUG - Refreshing TaskInstance <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> from DB
[2022-03-30 17:13:36,461] {backfill_job.py:418} DEBUG - Task instance to run <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> state scheduled
[2022-03-30 17:13:36,461] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Previous Dagrun State' PASSED: True, The context specified that the state of past DAGs could be ignored.
[2022-03-30 17:13:36,461] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2022-03-30 17:13:36,462] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Task has been mapped' PASSED: False, The task has yet to be mapped!
[2022-03-30 17:13:36,462] {taskinstance.py:1041} DEBUG - Dependencies not met for <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]>, dependency 'Task has been mapped' FAILED: The task has yet to be mapped!
[2022-03-30 17:13:36,462] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Task Instance Not Running' PASSED: True, Task is not in running state.
[2022-03-30 17:13:36,462] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2022-03-30 17:13:36,462] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Task Instance State' PASSED: True, Task state scheduled was valid.
[2022-03-30 17:13:36,463] {backfill_job.py:536} DEBUG - Adding <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> to not_ready
[2022-03-30 17:13:41,428] {base_job.py:226} DEBUG - [heartbeat]
[2022-03-30 17:13:41,428] {base_executor.py:156} DEBUG - 0 running task instances
[2022-03-30 17:13:41,429] {base_executor.py:157} DEBUG - 0 in queue
[2022-03-30 17:13:41,429] {base_executor.py:158} DEBUG - 32 open slots
[2022-03-30 17:13:41,429] {base_executor.py:167} DEBUG - Calling the <class 'airflow.executors.debug_executor.DebugExecutor'> sync method
[2022-03-30 17:13:41,429] {backfill_job.py:596} WARNING - Deadlock discovered for ti_status.to_run=dict_values([<TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]>])
[2022-03-30 17:13:41,433] {dagrun.py:628} DEBUG - number of tis tasks for <DagRun sys_site_packages_true @ 2022-03-30T17:13:35+00:00: backfill__2022-03-30T17:13:35+00:00, externally triggered: False>: 1 task(s)
[2022-03-30 17:13:41,433] {dagrun.py:644} DEBUG - number of scheduleable tasks for <DagRun sys_site_packages_true @ 2022-03-30T17:13:35+00:00: backfill__2022-03-30T17:13:35+00:00, externally triggered: False>: 0 task(s)
[2022-03-30 17:13:41,434] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Not In Retry Period' PASSED: True, The context specified that being in a retry period was permitted.
[2022-03-30 17:13:41,434] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2022-03-30 17:13:41,434] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2022-03-30 17:13:41,434] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Task has been mapped' PASSED: False, The task has yet to be mapped!
[2022-03-30 17:13:41,434] {taskinstance.py:1041} DEBUG - Dependencies not met for <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]>, dependency 'Task has been mapped' FAILED: The task has yet to be mapped!
[2022-03-30 17:13:41,434] {dagrun.py:570} ERROR - Deadlock; marking run <DagRun sys_site_packages_true @ 2022-03-30T17:13:35+00:00: backfill__2022-03-30T17:13:35+00:00, externally triggered: False> failed
[2022-03-30 17:13:41,434] {dagrun.py:594} INFO - DagRun Finished: dag_id=sys_site_packages_true, execution_date=2022-03-30T17:13:35+00:00, run_id=backfill__2022-03-30T17:13:35+00:00, run_start_date=2022-03-30 23:13:36.434073+00:00, run_end_date=2022-03-30 23:13:41.434824+00:00, run_duration=5.000751, state=failed, external_trigger=False, run_type=backfill, data_interval_start=2022-03-30T17:13:35+00:00, data_interval_end=2023-03-30T17:13:35+00:00, dag_hash=None
[2022-03-30 17:13:41,435] {backfill_job.py:362} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 0 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 1 | not ready: 1
[2022-03-30 17:13:41,435] {backfill_job.py:376} DEBUG - Finished dag run loop iteration. Remaining tasks dict_values([])
[2022-03-30 17:13:41,438] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2022-03-30 17:13:41,438] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2022-03-30 17:13:41,438] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2022-03-30 17:13:41,438] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Task has been mapped' PASSED: False, The task has yet to be mapped!
[2022-03-30 17:13:41,438] {taskinstance.py:1041} DEBUG - Dependencies not met for <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]>, dependency 'Task has been mapped' FAILED: The task has yet to be mapped!
[2022-03-30 17:13:41,440] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2022-03-30 17:13:41,440] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Previous Dagrun State' PASSED: True, The context specified that the state of past DAGs could be ignored.
[2022-03-30 17:13:41,440] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2022-03-30 17:13:41,440] {taskinstance.py:1061} DEBUG - <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]> dependency 'Task has been mapped' PASSED: False, The task has yet to be mapped!
[2022-03-30 17:13:41,440] {taskinstance.py:1041} DEBUG - Dependencies not met for <TaskInstance: sys_site_packages_true.dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 [scheduled]>, dependency 'Task has been mapped' FAILED: The task has yet to be mapped!
BackfillJob is deadlocked.
These tasks have succeeded:
DAG ID Task ID Run ID Try number
-------- --------- -------- ------------
These tasks are running:
DAG ID Task ID Run ID Try number
-------- --------- -------- ------------
These tasks have failed:
DAG ID Task ID Run ID Try number
-------- --------- -------- ------------
These tasks are skipped:
DAG ID Task ID Run ID Try number
-------- --------- -------- ------------
These tasks are deadlocked:
DAG ID Task ID Run ID Try number
---------------------- -------------------------------------------------- ----------------------------------- ------------
sys_site_packages_true dont_install_packages_already_in_outer_environment backfill__2022-03-30T17:13:35+00:00 1
[2022-03-30 17:13:41,446] {cli_action_loggers.py:84} DEBUG - Calling callbacks: []
[2022-03-30 17:13:41,447] {settings.py:383} DEBUG - Disposing DB connection pool (PID 816091)
```
### Operating System
Ubuntu Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Tested with Airflow Breeze and a local virtualenv with Airflow 2.3.0 installed.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22626 | https://github.com/apache/airflow/pull/22904 | 5d4a387c996559d2b49a528fdbceea59272b2028 | 30ac99773c8577718c87703a310ffc454316cfce | "2022-03-30T17:08:21Z" | python | "2022-04-12T08:38:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,606 | ["airflow/providers/jenkins/operators/jenkins_job_trigger.py"] | Jenkins JobTriggerOperator bug when polling for new build | ### Apache Airflow Provider(s)
jenkins
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.3
### Operating System
macOs Monterey 12.0.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
No specific details
### What happened
The JenkinsJobTriggerOperator polls the triggered job's queue item in order to return the newly created build number, and it retries a few times if polling fails. However, the polled response may not yet contain the information we are looking for: the code that inspects the `executable` key in the body of the Jenkins response iterates over that key's value, which can be `None`, and then fails with a `TypeError`.
This faulty code can also lead to multiple builds being created, since a failure here causes the operator to be retried after the job has already been triggered.
```
[2022-03-29, 11:51:40 PDT] {taskinstance.py:1259} INFO - Executing <Task(JenkinsJobTriggerOperator): trigger_downstream_jenkins> on 2022-03-29 17:25:00+00:00
[2022-03-29, 11:51:40 PDT] {standard_task_runner.py:52} INFO - Started process 25 to run task
[2022-03-29, 11:51:40 PDT] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'dag_code', 'trigger_downstream_jenkins', 'scheduled__2022-03-29T17:25:00+00:00', '--job-id', '302242', '--raw', '--subdir', 'DAGS_FOLDER/git_dags/somecode.py', '--cfg-path', '/tmp/tmp84qyueun', '--error-file', '/tmp/tmp732pjeg5']
[2022-03-29, 11:51:40 PDT] {standard_task_runner.py:77} INFO - Job 302242: Subtask trigger_downstream_jenkins
[2022-03-29, 11:51:40 PDT] {logging_mixin.py:109} INFO - Running <TaskInstance: code_v1trigger_downstream_jenkins scheduled__2022-03-29T17:25:00+00:00 [running]> on host codetriggerdownstreamjenkins.79586cc902e641be9e9
[2022-03-29, 11:51:40 PDT] {taskinstance.py:1424} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=data
AIRFLOW_CTX_DAG_ID=Code_v1
AIRFLOW_CTX_TASK_ID=trigger_downstream_jenkins
AIRFLOW_CTX_EXECUTION_DATE=2022-03-29T17:25:00+00:00
AIRFLOW_CTX_DAG_RUN_ID=scheduled__2022-03-29T17:25:00+00:00
[2022-03-29, 11:51:40 PDT] {jenkins_job_trigger.py:182} INFO - Triggering the job/Production/Downstream Trigger - Production on the jenkins : JENKINS with the parameters : None
[2022-03-29, 11:51:40 PDT] {base.py:70} INFO - Using connection to: id: JENKINS. Host: server.com, Port: 443, Schema: , Login: user, Password: ***, extra: True
[2022-03-29, 11:51:40 PDT] {jenkins.py:43} INFO - Trying to connect to [https://server.com:443](https://server.com/)
[2022-03-29, 11:51:40 PDT] {kerberos_.py:325} ERROR - handle_other(): Mutual authentication unavailable on 403 response
[2022-03-29, 11:51:40 PDT] {jenkins_job_trigger.py:160} INFO - Polling jenkins queue at the url https://servercom/queue/item/5831525//api/json
[2022-03-29, 11:51:40 PDT] {taskinstance.py:1700} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1455, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1511, in _execute_task
result = execute_callable(context=context)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/jenkins/operators/jenkins_job_trigger.py", line 191, in execute
build_number = self.poll_job_in_queue(jenkins_response['headers']['Location'], jenkins_server)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/jenkins/operators/jenkins_job_trigger.py", line 167, in poll_job_in_queue
if 'executable' in json_response and 'number' in json_response['executable']:
TypeError: argument of type 'NoneType' is not iterable
[2022-03-29, 11:51:40 PDT] {taskinstance.py:1267} INFO - Marking task as UP_FOR_RETRY. dag_id=code_v1 task_id=trigger_downstream_jenkins, execution_date=20220329T172500, start_date=20220329T185140, end_date=20220329T185140
[2022-03-29, 11:51:40 PDT] {standard_task_runner.py:89} ERROR - Failed to execute job 302242 for task trigger_downstream_jenkins
Traceback (most recent call last):
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 298, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1455, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1511, in _execute_task
result = execute_callable(context=context)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/jenkins/operators/jenkins_job_trigger.py", line 191, in execute
build_number = self.poll_job_in_queue(jenkins_response['headers']['Location'], jenkins_server)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/jenkins/operators/jenkins_job_trigger.py", line 167, in poll_job_in_queue
if 'executable' in json_response and 'number' in json_response['executable']:
TypeError: argument of type 'NoneType' is not iterable
[2022-03-29, 11:51:40 PDT] {local_task_job.py:154} INFO - Task exited with return code 1
[2022-03-29, 11:51:40 PDT] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
There should be an extra check to prevent iterating over a possible `None` value returned under the `executable` key in the body of the response to the Jenkins poll request.
This is the current code:
https://github.com/apache/airflow/blob/b0b69f3ea7186e76a04b733022b437b57a087a2e/airflow/providers/jenkins/operators/jenkins_job_trigger.py#L161
It should be updated to this code:
```python
if 'executable' in json_response and json_response['executable'] is not None and 'number' in json_response['executable']:
```
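For illustration, here is a minimal sketch of a None-safe polling loop built around that check; the function and parameter names are hypothetical and do not mirror the provider's actual implementation.

```python
import time


def poll_for_build_number(get_queue_item, max_polls=10, sleep_seconds=2):
    """Poll a Jenkins-style queue item until it exposes a build number.

    ``get_queue_item`` is any callable returning the parsed JSON body of the
    queue item; ``executable`` can be missing or explicitly ``null`` while the
    build has not started yet, so both cases must be tolerated.
    """
    for _ in range(max_polls):
        json_response = get_queue_item()
        executable = json_response.get("executable")  # None until the build starts
        if executable is not None and "number" in executable:
            return executable["number"]
        time.sleep(sleep_seconds)
    raise RuntimeError("Jenkins build did not leave the queue in time")
```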
### How to reproduce
This happens randomly during polling calls to Jenkins; it might not happen at all. It also depends on Jenkins performance: if jobs start building quickly enough, the `executable` key is already populated and the error never occurs.
### Anything else
No further info.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22606 | https://github.com/apache/airflow/pull/22608 | 5247445ff13e4b9cf73c26f902af03791f48f04d | c30ab6945ea0715889d32e38e943c899a32d5862 | "2022-03-29T21:52:20Z" | python | "2022-04-04T12:01:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,576 | ["airflow/providers/ssh/hooks/ssh.py", "tests/providers/ssh/hooks/test_ssh.py"] | SFTP connection hook not working when using inline Ed25519 key from Airflow connection | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I am trying to create an SFTP connection whose extra parameters include `private_key` set to the text output of my private key, i.e.: `{"look_for_keys": "false", "no_host_key_check": "true", "private_key": "-----BEGIN OPENSSH PRIVATE KEY-----\nkeygoeshere==\n-----END OPENSSH PRIVATE KEY-----"}`
When I test the connection I get the error `expected str, bytes or os.PathLike object, not Ed25519Key`
When I try to use this connection I get the following error:
```
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/sftp/hooks/sftp.py", line 208, in list_directory
conn = self.get_conn()
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 324, in wrapped_f
return self(f, *args, **kw)
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 404, in __call__
do = self.iter(retry_state=retry_state)
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 407, in __call__
result = fn(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/sftp/hooks/sftp.py", line 172, in get_conn
self.conn = pysftp.Connection(**conn_params)
File "/home/airflow/.local/lib/python3.7/site-packages/pysftp/__init__.py", line 142, in __init__
self._set_authentication(password, private_key, private_key_pass)
File "/home/airflow/.local/lib/python3.7/site-packages/pysftp/__init__.py", line 164, in _set_authentication
private_key_file = os.path.expanduser(private_key)
File "/usr/local/lib/python3.7/posixpath.py", line 235, in expanduser
path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not Ed25519Key
```
This only seems to happen for Ed25519 keys. RSA worked fine!
### What you think should happen instead
It should work. I don't specify this as an `Ed25519Key`; I think the connection manager code is saving it as a paramiko key object, but when the connection is tested or used in a DAG the code expects a string.
I don't see why you can't save it as a paramiko key and use it in the connection.
Also it seems to work fine when using RSA keys, but super short keys are cooler!
### How to reproduce
Create a new Ed25519 ssh key and a new SFTP connection and copy the following into the extra field:
{"look_for_keys": "false", "no_host_key_check": "true", "private_key": "-----BEGIN RSA PRIVATE KEY----- Ed25519_key_goes_here -----END RSA PRIVATE KEY-----"}
Test should yield the failure `TypeError: expected str, bytes or os.PathLike object, not Ed25519Key`
### Operating System
RHEL 7.9 on host OS and Docker image for the rest.
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==3.0.2
apache-airflow-providers-docker==2.4.1
apache-airflow-providers-elasticsearch==2.2.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.4.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-microsoft-azure==3.6.0
apache-airflow-providers-mysql==2.2.0
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.4.1
apache-airflow-providers-slack==4.2.0
apache-airflow-providers-sqlite==2.1.0
apache-airflow-providers-ssh==2.4.0
### Deployment
Other Docker-based deployment
### Deployment details
Docker image of the 2.2.4 release with very minimal changes (wget, curl, etc. added).
### Anything else
RSA seems to work fine... only after a few hours of troubleshooting and writing this ticket did I learn that. 😿
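As a hedged workaround sketch (not the provider's eventual fix): materialize the inline key into a temporary file and hand the connection a path string, which is what the failing `os.path.expanduser` call expects. The helper name and usage below are illustrative assumptions.

```python
import os
import stat
import tempfile


def key_text_to_path(key_text: str) -> str:
    """Write an inline private key to a temporary file and return its path."""
    fd, path = tempfile.mkstemp(prefix="airflow_sftp_key_")
    with os.fdopen(fd, "w") as handle:
        handle.write(key_text if key_text.endswith("\n") else key_text + "\n")
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600, as SSH tooling expects
    return path


# Usage sketch: point the SFTP connection at the returned path instead of the raw key text.
# private_key_path = key_text_to_path(extra["private_key"])
```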
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22576 | https://github.com/apache/airflow/pull/23043 | d7b85d9a0a09fd7b287ec928d3b68c38481b0225 | e63dbdc431c2fa973e9a4c0b48ec6230731c38d1 | "2022-03-28T20:06:31Z" | python | "2022-05-09T22:49:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,561 | ["airflow/models/dagrun.py", "airflow/models/mappedoperator.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Mapped task is not expanded when mapped against a literal | ### Apache Airflow version
main (development)
### What happened
Here's a DAG:
```python
from datetime import datetime, timedelta

from airflow.decorators import task
from airflow.models import DAG, TaskInstance


@task
def mynameis(arg, **context):
    ti: TaskInstance = context["ti"]
    print(ti.task_id)
    print(arg)


with DAG(
    dag_id="my_name_is",
    start_date=datetime(1970, 1, 1),
    schedule_interval=timedelta(days=30 * 365),
) as dag:
    mynameis.expand(arg=["slim", "shady"])
```
<details>
<summary>Outdated report here; no longer relevant</summary>
When I unpause it, the scheduler crashes like so:
```
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2022-03-27 21:48:45,682] {scheduler_job.py:712} INFO - Starting the scheduler
[2022-03-27 21:48:45,683] {scheduler_job.py:717} INFO - Processing each file at most -1 times
[2022-03-27 21:48:45,686] {executor_loader.py:106} INFO - Loaded executor: SequentialExecutor
[2022-03-27 21:48:45,692] {manager.py:156} INFO - Launched DagFileProcessorManager with pid: 36986
[2022-03-27 21:48:45,694] {scheduler_job.py:1242} INFO - Resetting orphaned tasks for active dag runs
[2022-03-27 21:48:46,269] {settings.py:55} INFO - Configured default timezone Timezone('UTC')
[2022-03-27 21:48:46,276] {manager.py:398} WARNING - Because we cannot use more than 1 thread (parsing_processes = 2) when using sqlite. So we set parallelism to 1.
[2022-03-27 21:48:56,576] {dag.py:2931} INFO - Setting next_dagrun for my_name_is to 1999-12-25T00:00:00+00:00, run_after=2029-12-17T00:00:00+00:00
[2022-03-27 21:48:56,618] {scheduler_job.py:354} INFO - 1 tasks up for execution:
<TaskInstance: my_name_is.mynameis scheduled__1970-01-01T00:00:00+00:00 [scheduled]>
[2022-03-27 21:48:56,618] {scheduler_job.py:376} INFO - Figuring out tasks to run in Pool(name=default_pool) with 128 open slots and 1 task instances ready to be queued
[2022-03-27 21:48:56,619] {scheduler_job.py:434} INFO - DAG my_name_is has 0/16 running and queued tasks
[2022-03-27 21:48:56,619] {scheduler_job.py:520} INFO - Setting the following tasks to queued state:
<TaskInstance: my_name_is.mynameis scheduled__1970-01-01T00:00:00+00:00 [scheduled]>
[2022-03-27 21:48:56,621] {scheduler_job.py:562} INFO - Sending TaskInstanceKey(dag_id='my_name_is', task_id='mynameis', run_id='scheduled__1970-01-01T00:00:00+00:00', try_number=1, map_index=-1) to executor with priority 1 and queue default
[2022-03-27 21:48:56,622] {base_executor.py:88} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'my_name_is', 'mynameis', 'scheduled__1970-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/my_name_is.py']
[2022-03-27 21:48:56,623] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'my_name_is', 'mynameis', 'scheduled__1970-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/my_name_is.py']
[2022-03-27 21:48:57,229] {dagbag.py:506} INFO - Filling up the DagBag from /Users/matt/2022/03/27/dags/my_name_is.py
[2022-03-27 21:48:57,405] {sequential_executor.py:66} ERROR - Failed to execute task Command '['airflow', 'tasks', 'run', 'my_name_is', 'mynameis', 'scheduled__1970-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/my_name_is.py']' returned non-zero exit status 1..
[2022-03-27 21:48:57,407] {scheduler_job.py:615} INFO - Executor reports execution of my_name_is.mynameis run_id=scheduled__1970-01-01T00:00:00+00:00 exited with status failed for try_number 1
[2022-03-27 21:48:57,420] {scheduler_job.py:659} INFO - TaskInstance Finished: dag_id=my_name_is, task_id=mynameis, run_id=scheduled__1970-01-01T00:00:00+00:00, map_index=-1, run_start_date=None, run_end_date=None, run_duration=None, state=queued, executor_state=failed, try_number=1, max_tries=0, job_id=None, pool=default_pool, queue=default, priority_weight=1, operator=_PythonDecoratedOperator, queued_dttm=2022-03-28 03:48:56.619831+00:00, queued_by_job_id=1, pid=None
[2022-03-27 21:48:57,420] {scheduler_job.py:688} ERROR - Executor reports task instance <TaskInstance: my_name_is.mynameis scheduled__1970-01-01T00:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2022-03-27 21:48:57,421] {taskinstance.py:1785} ERROR - Executor reports task instance <TaskInstance: my_name_is.mynameis scheduled__1970-01-01T00:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2022-03-27 21:48:57,428] {scheduler_job.py:769} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/Users/matt/2022/03/27/venv/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 753, in _execute
self._run_scheduler_loop()
File "/Users/matt/2022/03/27/venv/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 843, in _run_scheduler_loop
num_finished_events = self._process_executor_events(session=session)
File "/Users/matt/2022/03/27/venv/lib/python3.10/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/Users/matt/2022/03/27/venv/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 707, in _process_executor_events
ti.handle_failure(error=msg % (ti, state, ti.state, info), session=session)
File "/Users/matt/2022/03/27/venv/lib/python3.10/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/Users/matt/2022/03/27/venv/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 1793, in handle_failure
task = self.task.unmap()
File "/Users/matt/2022/03/27/venv/lib/python3.10/site-packages/airflow/models/mappedoperator.py", line 456, in unmap
raise RuntimeError("Cannot unmap a deserialized operator")
RuntimeError: Cannot unmap a deserialized operator
[2022-03-27 21:48:58,450] {process_utils.py:125} INFO - Sending Signals.SIGTERM to group 36986. PIDs of all processes in the group: [36986]
[2022-03-27 21:48:58,450] {process_utils.py:80} INFO - Sending the signal Signals.SIGTERM to group 36986
[2022-03-27 21:48:58,683] {process_utils.py:75} INFO - Process psutil.Process(pid=36986, status='terminated', exitcode=0, started='21:48:45') (36986) terminated with exit code 0
[2022-03-27 21:48:58,684] {scheduler_job.py:780} INFO - Exited execute loop
```
</details>
After some updates, this now fails with the following scheduler logs:
```
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2022-03-28 08:40:44,195] {scheduler_job.py:712} INFO - Starting the scheduler
[2022-03-28 08:40:44,196] {scheduler_job.py:717} INFO - Processing each file at most -1 times
[2022-03-28 08:40:44,199] {executor_loader.py:106} INFO - Loaded executor: SequentialExecutor
[2022-03-28 08:40:44,204] {manager.py:156} INFO - Launched DagFileProcessorManager with pid: 47735
[2022-03-28 08:40:44,206] {scheduler_job.py:1242} INFO - Resetting orphaned tasks for active dag runs
[2022-03-28 08:40:44,769] {settings.py:55} INFO - Configured default timezone Timezone('UTC')
[2022-03-28 08:40:44,774] {manager.py:398} WARNING - Because we cannot use more than 1 thread (parsing_processes = 2) when using sqlite. So we set parallelism to 1.
[2022-03-28 08:40:45 -0600] [47734] [INFO] Starting gunicorn 20.1.0
[2022-03-28 08:40:45 -0600] [47734] [INFO] Listening at: http://0.0.0.0:8793 (47734)
[2022-03-28 08:40:45 -0600] [47734] [INFO] Using worker: sync
[2022-03-28 08:40:45 -0600] [47740] [INFO] Booting worker with pid: 47740
[2022-03-28 08:40:45 -0600] [47741] [INFO] Booting worker with pid: 47741
[2022-03-28 08:40:53,363] {dag.py:2930} INFO - Setting next_dagrun for my_name_is to 1999-12-25T00:00:00+00:00, run_after=2029-12-17T00:00:00+00:00
[2022-03-28 08:40:53,397] {scheduler_job.py:354} INFO - 1 tasks up for execution:
<TaskInstance: my_name_is.mynameis scheduled__1970-01-01T00:00:00+00:00 [scheduled]>
[2022-03-28 08:40:53,397] {scheduler_job.py:376} INFO - Figuring out tasks to run in Pool(name=default_pool) with 128
open slots and 1 task instances ready to be queued
[2022-03-28 08:40:53,397] {scheduler_job.py:434} INFO - DAG my_name_is has 0/16 running and queued tasks
[2022-03-28 08:40:53,398] {scheduler_job.py:520} INFO - Setting the following tasks to queued state:
<TaskInstance: my_name_is.mynameis scheduled__1970-01-01T00:00:00+00:00 [scheduled]>
[2022-03-28 08:40:53,400] {scheduler_job.py:562} INFO - Sending TaskInstanceKey(dag_id='my_name_is', task_id='mynameis', run_id='scheduled__1970-01-01T00:00:00+00:00', try_number=1, map_index=-1) to executor with priority 1 and queue default
[2022-03-28 08:40:53,401] {base_executor.py:88} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'my_name_is', 'mynameis', 'scheduled__1970-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/my_name_is.py']
[2022-03-28 08:40:53,403] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'my_name_is', 'mynameis', 'scheduled__1970-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/my_name_is.py']
[2022-03-28 08:40:54,061] {dagbag.py:506} INFO - Filling up the DagBag from /Users/matt/2022/03/27/dags/my_name_is.py
Traceback (most recent call last):
File "/Users/matt/2022/03/27/venv2/bin/airflow", line 33, in <module>
sys.exit(load_entry_point('apache-airflow', 'console_scripts', 'airflow')())
File "/Users/matt/src/astronomer-airflow/airflow/__main__.py", line 38, in main
args.func(args)
File "/Users/matt/src/astronomer-airflow/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/Users/matt/src/astronomer-airflow/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/Users/matt/src/astronomer-airflow/airflow/cli/commands/task_command.py", line 362, in task_run
ti, _ = _get_ti(task, args.execution_date_or_run_id, args.map_index)
File "/Users/matt/src/astronomer-airflow/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/Users/matt/src/astronomer-airflow/airflow/cli/commands/task_command.py", line 144, in _get_ti
raise RuntimeError("No map_index passed to mapped task")
RuntimeError: No map_index passed to mapped task
[2022-03-28 08:40:54,233] {sequential_executor.py:66} ERROR - Failed to execute task Command '['airflow', 'tasks', 'run', 'my_name_is', 'mynameis', 'scheduled__1970-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/my_name_is.py']' returned non-zero exit status 1..
[2022-03-28 08:40:54,235] {scheduler_job.py:615} INFO - Executor reports execution of my_name_is.mynameis run_id=scheduled__1970-01-01T00:00:00+00:00 exited with status failed for try_number 1
[2022-03-28 08:40:54,245] {scheduler_job.py:659} INFO - TaskInstance Finished: dag_id=my_name_is, task_id=mynameis, run_id=scheduled__1970-01-01T00:00:00+00:00, map_index=-1, run_start_date=None, run_end_date=None, run_duration=None, state=queued, executor_state=failed, try_number=1, max_tries=0, job_id=None, pool=default_pool, queue=default, priority_weight=1, operator=_PythonDecoratedOperator, queued_dttm=2022-03-28 14:40:53.398693+00:00, queued_by_job_id=1, pid=None
[2022-03-28 08:40:54,246] {scheduler_job.py:688} ERROR - Executor reports task instance <TaskInstance: my_name_is.mynameis scheduled__1970-01-01T00:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2022-03-28 08:40:54,247] {taskinstance.py:1785} ERROR - Executor reports task instance <TaskInstance: my_name_is.mynameis scheduled__1970-01-01T00:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2022-03-28 08:40:54,255] {taskinstance.py:1277} INFO - Marking task as FAILED. dag_id=my_name_is, task_id=mynameis, execution_date=19700101T000000, start_date=, end_date=20220328T144054
[2022-03-28 08:40:54,322] {dagrun.py:534} ERROR - Marking run <DagRun my_name_is @ 1970-01-01 00:00:00+00:00: scheduled__1970-01-01T00:00:00+00:00, externally triggered: False> failed
[2022-03-28 08:40:54,323] {dagrun.py:594} INFO - DagRun Finished: dag_id=my_name_is, execution_date=1970-01-01 00:00:00+00:00, run_id=scheduled__1970-01-01T00:00:00+00:00, run_start_date=2022-03-28 14:40:53.370452+00:00, run_end_date=2022-03-28 14:40:54.323434+00:00, run_duration=0.952982, state=failed, external_trigger=False, run_type=scheduled, data_interval_start=1970-01-01 00:00:00+00:00, data_interval_end=1999-12-25 00:00:00+00:00, dag_hash=fb9f4777fcb737e19d340199f9950b05
[2022-03-28 08:40:54,325] {dagrun.py:774} WARNING - Failed to record first_task_scheduling_delay metric:
list index out of range
[2022-03-28 08:40:54,326] {dag.py:2930} INFO - Setting next_dagrun for my_name_is to 1999-12-25T00:00:00+00:00, run_after=2029-12-17T00:00:00+00:00
``` | https://github.com/apache/airflow/issues/22561 | https://github.com/apache/airflow/pull/22679 | 0592bfd85631ed3109d68c8ec9aa57f0465d90b3 | 91832a42d8124b040073481fd93c54e9e64c2609 | "2022-03-28T03:56:03Z" | python | "2022-04-07T08:27:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,551 | ["docker_tests/test_prod_image.py", "docs/apache-airflow-providers-microsoft-azure/index.rst", "setup.py"] | Consider depending on `azure-keyvault-secrets` instead of `azure-keyvault` metapackage | ### Description
It appears that the `microsoft-azure` provider only depends on `azure-keyvault-secrets`:
https://github.com/apache/airflow/blob/388723950de9ca519108e0a8f6818f0fc0dd91d4/airflow/providers/microsoft/azure/secrets/key_vault.py#L24
and not the other 2 packages in the `azure-keyvault` metapackage.
### Use case/motivation
I am the maintainer of the `apache-airflow-providers-*` packages on `conda-forge` and I'm running into small issues with the way `azure-keyvault` is maintained as a metapackage on `conda-forge`. I think depending on `azure-keyvault-secrets` explicitly would solve my problem and also provide better clarity for the `microsoft-azure` provider in general.
### Related issues
https://github.com/conda-forge/azure-keyvault-feedstock/issues/6
https://github.com/conda-forge/apache-airflow-providers-microsoft-azure-feedstock/pull/13
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22551 | https://github.com/apache/airflow/pull/22557 | a6609d5268ebe55bcb150a828d249153582aa936 | 77d4e725c639efa68748e0ae51ddf1e11b2fd163 | "2022-03-27T12:24:12Z" | python | "2022-03-29T13:44:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,487 | ["airflow/cli/commands/task_command.py"] | "Running <TaskInstance: *.* * [queued]> on host *" written with WARNING level | ### Apache Airflow version
2.2.3
### What happened
"Running <TaskInstance: *.* * [queued]> on host *" written with WARNING level
### What you think should happen instead
This message should be written with INFO level
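A minimal sketch of the shape of the suggested change (the real call site lives in Airflow's task-running CLI code; this is not the actual source, just the intended logging level):

```python
import logging

log = logging.getLogger(__name__)


def announce_running(ti, hostname):
    # Routine progress information belongs at INFO, not WARNING.
    log.info("Running %s on host %s", ti, hostname)
```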
### How to reproduce
_No response_
### Operating System
Composer
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22487 | https://github.com/apache/airflow/pull/22488 | 388f4e8b032fe71ccc9a16d84d7c2064c80575b3 | acb1a100bbf889dddef1702c95bd7262a578dfcc | "2022-03-23T13:28:26Z" | python | "2022-03-25T09:40:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,474 | ["airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | CLI command "airflow dags next-execution" give unexpected results with paused DAG and catchup=False | ### Apache Airflow version
2.2.2
### What happened
Current time 16:54 UTC
Execution Schedule: * * * * *
Last Run: 16:19 UTC
DAG Paused
Catchup=False
`airflow dags next-execution sample_dag`
returns
```
[INFO] Please be reminded this DAG is PAUSED now.
2022-03-22T16:20:00+00:00
```
### What you think should happen instead
I would expect
```
[INFO] Please be reminded this DAG is PAUSED now.
2022-03-22T16:53:00+00:00
```
to be returned, since that is the run that will execute next once the DAG is unpaused.
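To make the two candidate answers concrete, here is a rough, hedged sketch using plain `datetime` arithmetic; it ignores timetable and data-interval edge cases, and the times are taken from the report above:

```python
from datetime import datetime, timedelta, timezone

interval = timedelta(minutes=1)  # "* * * * *"
now = datetime(2022, 3, 22, 16, 54, 10, tzinfo=timezone.utc)
last_run = datetime(2022, 3, 22, 16, 19, tzinfo=timezone.utc)

# What the CLI reports today: the tick immediately after the last run.
reported = last_run + interval  # 16:20

# What the report expects with catchup=False: the start of the most recent
# complete interval before "now", i.e. the run the scheduler would actually
# create once the DAG is unpaused.
expected = now.replace(second=0, microsecond=0) - interval  # 16:53

print(reported.isoformat(), expected.isoformat())
```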
### How to reproduce
Create a simple sample dag with a schedule of * * * * * and pause with catchup=False and wait a few minutes, then run
`airflow dags next-execution sample_dag`
### Operating System
Debian
### Versions of Apache Airflow Providers
Airflow 2.2.2
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22474 | https://github.com/apache/airflow/pull/30117 | 1f2b0c21d5ebefc404d12c123674e6ac45873646 | c63836ccb763fd078e0665c7ef3353146b1afe96 | "2022-03-22T17:06:41Z" | python | "2023-03-22T14:22:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,473 | ["airflow/secrets/local_filesystem.py", "tests/cli/commands/test_connection_command.py", "tests/secrets/test_local_filesystem.py"] | Connections import and export should also support ".yml" file extensions | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Trying to export or import a yaml formatted connections file with ".yml" extension fails.
### What you think should happen instead
While the "official recommended extension" for YAML files is .yaml, many pipeline are built around using the .yml file extension. Importing and exporting of .yml files should also be supported.
### How to reproduce
Running airflow connections import or export with a file having a .yml file extension errors with:
`Unsupported file format. The file must have the extension .env or .json or .yaml`
### Operating System
debian 10 buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22473 | https://github.com/apache/airflow/pull/22872 | 1eab1ec74c426197af627c09817b76081c5c4416 | 3c0ad4af310483cd051e94550a7d857653dcee6d | "2022-03-22T15:36:21Z" | python | "2022-04-13T16:52:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,471 | ["tests/system/providers/zendesk/example_zendesk_custom_get.py"] | Migrate Zendesk example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `Zendesk` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/zendesk/example_dags/example_zendesk_custom_get.py | https://github.com/apache/airflow/issues/22471 | https://github.com/apache/airflow/pull/24053 | 3c868b65efaf22e87685c9d8302a088d0ec7d75b | 2e8bd9d2f98decf731ed40a50e9d960ba08f3441 | "2022-03-22T15:14:44Z" | python | "2022-06-12T09:57:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,470 | ["airflow/providers/yandex/example_dags/__init__.py", "docs/apache-airflow-providers-yandex/operators.rst", "tests/system/providers/yandex/example_yandexcloud_dataproc.py"] | Migrate Yandex example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `Yandex` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/yandex/example_dags/example_yandexcloud_dataproc.py | https://github.com/apache/airflow/issues/22470 | https://github.com/apache/airflow/pull/24082 | fb3e84f6908ac3c721128c20c563d5b2b482c71d | 65ad2aed26f7572ba0d3b04a33f9144989ac7117 | "2022-03-22T15:14:41Z" | python | "2022-06-01T19:24:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,469 | ["airflow/providers/trino/example_dags/__init__.py", "docs/apache-airflow-providers-trino/operators/transfer/gcs_to_trino.rst", "tests/system/providers/trino/example_gcs_to_trino.py"] | Migrate Trino example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `Trino` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/trino/example_dags/example_gcs_to_trino.py | https://github.com/apache/airflow/issues/22469 | https://github.com/apache/airflow/pull/24118 | d0a295c5f49f0a2bc2ef7c47dcbaa1bee40c61bd | 7489962e75a23071620a30c1e070fb7c9e107179 | "2022-03-22T15:14:38Z" | python | "2022-06-02T21:08:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,460 | ["airflow/providers/qubole/example_dags/__init__.py", "docs/apache-airflow-providers-qubole/index.rst", "docs/apache-airflow-providers-qubole/operators/index.rst", "docs/apache-airflow-providers-qubole/operators/qubole.rst", "tests/system/providers/qubole/example_qubole.py", "tests/system/providers/qubole/example_qubole_sensors.py"] | Migrate Qubole example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `Qubole` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/qubole/example_dags/example_qubole.py | https://github.com/apache/airflow/issues/22460 | https://github.com/apache/airflow/pull/24149 | c2f10a4ee9c2404e545d78281bf742a199895817 | e3824ce52181089779a409e5ff64fbf9677cccfc | "2022-03-22T14:39:42Z" | python | "2022-06-03T16:09:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,458 | ["airflow/providers/postgres/example_dags/__init__.py", "docs/apache-airflow-providers-postgres/index.rst", "docs/apache-airflow-providers-postgres/operators/postgres_operator_howto_guide.rst", "tests/system/providers/postgres/example_postgres.py"] | Migrate Postgres example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `Postgres` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/postgres/example_dags/example_postgres.py | https://github.com/apache/airflow/issues/22458 | https://github.com/apache/airflow/pull/24148 | fb1187dbec19377d2a8b7dbc35813b2aaa56506f | c60bb9edc0c9b55a2824eae879af8a4a90ccdd2d | "2022-03-22T14:39:37Z" | python | "2022-06-03T16:07:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,456 | ["docs/apache-airflow-providers-papermill/index.rst", "docs/apache-airflow-providers-papermill/operators.rst", "tests/system/__init__.py", "tests/system/providers/__init__.py", "tests/system/providers/papermill/__init__.py", "tests/system/providers/papermill/example_papermill.py", "tests/system/providers/papermill/example_papermill_verify.py", "tests/system/providers/papermill/input_notebook.ipynb", "tests/www/api/experimental/test_endpoints.py"] | Migrate Papermill example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `Papermill` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/papermill/example_dags/example_papermill.py
| https://github.com/apache/airflow/issues/22456 | https://github.com/apache/airflow/pull/24146 | dbe80c89b2a99d6ab737f2c4146bf8f918034f0f | b4d50d3be1c9917182f231135b8312eb284f0f7f | "2022-03-22T14:39:31Z" | python | "2022-06-05T15:33:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,453 | ["airflow/providers/mysql/example_dags/__init__.py", "docs/apache-airflow-providers-mysql/index.rst", "docs/apache-airflow-providers-mysql/operators.rst", "tests/system/providers/mysql/example_mysql.py"] | Migrate MySQL example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `MySQL` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/mysql/example_dags/example_mysql.py | https://github.com/apache/airflow/issues/22453 | https://github.com/apache/airflow/pull/24142 | 478459f01eff42d3fc63949614f5ffe173c67006 | 3df8ff7407f76b8c944d9e353744e6e79ed6277d | "2022-03-22T14:38:09Z" | python | "2022-06-03T10:40:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,448 | ["airflow/providers/http/example_dags/__init__.py", "docs/apache-airflow-providers-http/operators.rst", "tests/providers/http/operators/test_http_system.py", "tests/system/providers/http/example_http.py"] | Migrate HTTP example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `HTTP` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/http/example_dags/example_http.py | https://github.com/apache/airflow/issues/22448 | https://github.com/apache/airflow/pull/23991 | 3dd7b1ddbaa3170fbda30a8323286abf075f30ba | 9398586a7cf66d9cf078c40ab0d939b3fcc58c2d | "2022-03-22T14:37:54Z" | python | "2022-06-01T20:12:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,445 | ["docs/apache-airflow-providers-elasticsearch/connections/elasticsearch.rst", "tests/system/providers/elasticsearch/example_elasticsearch_query.py", "tests/system/utils/__init__.py"] | Migrate Elasticsearch example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `Elasticsearch` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/elasticsearch/example_dags/example_elasticsearch_query.py | https://github.com/apache/airflow/issues/22445 | https://github.com/apache/airflow/pull/22811 | 0cbf2c752d7af9ca1c378b013b8f77dd3d858dd9 | a801ea3927b8bf3ca154fea3774ebf2d90e74e50 | "2022-03-22T14:37:44Z" | python | "2022-04-13T18:10:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,444 | ["airflow/providers/docker/example_dags/__init__.py", "airflow/providers/docker/example_dags/example_docker.py", "airflow/providers/docker/example_dags/example_docker_copy_data.py", "docs/apache-airflow-providers-docker/index.rst", "docs/apache-airflow/tutorial_taskflow_api.rst", "tests/system/providers/docker/example_docker.py", "tests/system/providers/docker/example_docker_copy_data.py", "tests/system/providers/docker/example_docker_swarm.py", "tests/system/providers/docker/example_taskflow_api_etl_docker_virtualenv.py"] | Migrate Docker example DAGs to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current example dags need to be migrated and converted into system tests, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all example DAGs related to `Docker` provider. It is created to track progress of their migration.
List of paths to example DAGs:
- [x] airflow/providers/docker/example_dags/example_docker_swarm.py
- [x] airflow/providers/docker/example_dags/example_docker.py
- [x] airflow/providers/docker/example_dags/example_docker_copy_data.py | https://github.com/apache/airflow/issues/22444 | https://github.com/apache/airflow/pull/23167 | 550d1a5059c10b84dc40d7a66c203cc6514e6c63 | 06856337a51139d66b1a39544e276e477c6b5ea1 | "2022-03-22T14:37:42Z" | python | "2022-06-06T15:20:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,434 | ["airflow/providers/snowflake/example_dags/__init__.py", "docs/apache-airflow-providers-snowflake/index.rst", "docs/apache-airflow-providers-snowflake/operators/s3_to_snowflake.rst", "docs/apache-airflow-providers-snowflake/operators/snowflake.rst", "docs/apache-airflow-providers-snowflake/operators/snowflake_to_slack.rst", "tests/system/providers/snowflake/example_snowflake.py"] | Migrate Snowflake system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Snowflake` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/snowflake/operators/test_snowflake_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Snowflake example DAGs to new design`
| https://github.com/apache/airflow/issues/22434 | https://github.com/apache/airflow/pull/24151 | c60bb9edc0c9b55a2824eae879af8a4a90ccdd2d | c2f10a4ee9c2404e545d78281bf742a199895817 | "2022-03-22T13:48:43Z" | python | "2022-06-03T16:09:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,433 | ["tests/providers/postgres/operators/test_postgres_system.py"] | Migrate Postgres system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Postgres` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/postgres/operators/test_postgres_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Postgres example DAGs to new design`
| https://github.com/apache/airflow/issues/22433 | https://github.com/apache/airflow/pull/24223 | 487e229206396f8eaf7c933be996e6c0648ab078 | 95ab664bb6a8c94509b34ceb8c189f67db00c71a | "2022-03-22T13:48:42Z" | python | "2022-06-05T15:45:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,432 | ["tests/providers/microsoft/azure/hooks/test_fileshare_system.py", "tests/providers/microsoft/azure/operators/test_adls_delete_system.py", "tests/providers/microsoft/azure/transfers/test_local_to_adls_system.py", "tests/providers/microsoft/azure/transfers/test_local_to_wasb_system.py", "tests/providers/microsoft/azure/transfers/test_sftp_to_wasb_system.py"] | Migrate Microsoft system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Microsoft` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/microsoft/azure/operators/test_adls_delete_system.py (1)
- [x] tests/providers/microsoft/azure/transfers/test_sftp_to_wasb_system.py (1)
- [x] tests/providers/microsoft/azure/transfers/test_local_to_adls_system.py (1)
- [x] tests/providers/microsoft/azure/transfers/test_local_to_wasb_system.py (1)
- [x] tests/providers/microsoft/azure/hooks/test_fileshare_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Microsoft example DAGs to new design`
| https://github.com/apache/airflow/issues/22432 | https://github.com/apache/airflow/pull/24225 | 9d4da34c3b1d94369c2393df3d40c09963757601 | d71787e0d7423e8a116811e86edf76588b3c7017 | "2022-03-22T13:48:42Z" | python | "2022-06-05T15:41:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,431 | ["airflow/providers/http/example_dags/__init__.py", "docs/apache-airflow-providers-http/operators.rst", "tests/providers/http/operators/test_http_system.py", "tests/system/providers/http/example_http.py"] | Migrate HTTP system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `HTTP` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/http/operators/test_http_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate HTTP example DAGs to new design`
| https://github.com/apache/airflow/issues/22431 | https://github.com/apache/airflow/pull/23991 | 3dd7b1ddbaa3170fbda30a8323286abf075f30ba | 9398586a7cf66d9cf078c40ab0d939b3fcc58c2d | "2022-03-22T13:48:41Z" | python | "2022-06-01T20:12:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,429 | ["tests/providers/cncf/kubernetes/operators/test_spark_kubernetes_system.py"] | Migrate CNCF system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `CNCF` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/cncf/kubernetes/operators/test_spark_kubernetes_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate CNCF example DAGs to new design`
| https://github.com/apache/airflow/issues/22429 | https://github.com/apache/airflow/pull/24224 | d71787e0d7423e8a116811e86edf76588b3c7017 | 487e229206396f8eaf7c933be996e6c0648ab078 | "2022-03-22T13:48:39Z" | python | "2022-06-05T15:44:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,428 | ["tests/providers/asana/operators/test_asana_system.py"] | Migrate Asana system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Asana` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/asana/operators/test_asana_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Asana example DAGs to new design`
| https://github.com/apache/airflow/issues/22428 | https://github.com/apache/airflow/pull/24226 | b4d50d3be1c9917182f231135b8312eb284f0f7f | 9d4da34c3b1d94369c2393df3d40c09963757601 | "2022-03-22T13:48:38Z" | python | "2022-06-05T15:34:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,427 | ["tests/providers/apache/beam/operators/test_beam_system.py"] | Migrate Apache Beam system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Apache Beam` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/apache/beam/operators/test_beam_system.py (8)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Apache Beam example DAGs to new design`
| https://github.com/apache/airflow/issues/22427 | https://github.com/apache/airflow/pull/24256 | 42abbf0d61f94ec50026af0c0f95eb378e403042 | a01a94147c2db66a14101768b4bcbf3fad2a9cf0 | "2022-03-22T13:48:37Z" | python | "2022-06-06T21:06:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,426 | ["tests/providers/amazon/aws/hooks/test_base_aws_system.py", "tests/providers/amazon/aws/operators/test_eks_system.py", "tests/providers/amazon/aws/operators/test_emr_system.py", "tests/providers/amazon/aws/operators/test_glacier_system.py", "tests/providers/amazon/aws/operators/test_s3_bucket_tagging_system.py", "tests/providers/amazon/aws/transfers/test_google_api_to_s3_system.py", "tests/providers/amazon/aws/transfers/test_imap_attachment_to_s3_system.py", "tests/providers/amazon/aws/transfers/test_s3_to_redshift_system.py"] | Migrate Amazon system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Amazon` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/amazon/aws/operators/test_ecs_system.py (1)
- [x] tests/providers/amazon/aws/operators/test_eks_system.py (4)
- [x] tests/providers/amazon/aws/operators/test_glacier_system.py (1)
- [x] tests/providers/amazon/aws/operators/test_s3_bucket_tagging_system.py (1)
- [x] tests/providers/amazon/aws/operators/test_emr_system.py (2)
- [x] tests/providers/amazon/aws/transfers/test_imap_attachment_to_s3_system.py (1)
- [x] tests/providers/amazon/aws/transfers/test_google_api_to_s3_system.py (2)
- [x] tests/providers/amazon/aws/transfers/test_s3_to_redshift_system.py (1)
- [x] tests/providers/amazon/aws/hooks/test_base_aws_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Amazon example DAGs to new design`
| https://github.com/apache/airflow/issues/22426 | https://github.com/apache/airflow/pull/25655 | 6e41c7eb33a68ea3ccd6b67fb169ea2cf1ecc162 | bc46477d20802242ec9596279933742c1743b2f1 | "2022-03-22T13:48:35Z" | python | "2022-08-16T13:47:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,418 | ["airflow/www/static/css/main.css", "airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py"] | auto refresh Dags home page | ### Description
Similar to the auto refresh on the DAG page, it would be nice to have this option on the home page as well.
![image](https://user-images.githubusercontent.com/7373236/159442263-60bbcd58-50e5-4a3d-8d6f-d31a65a6ff81.png)
### Use case/motivation
Having auto refresh on the home page will give users a live view of running DAGs and tasks.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22418 | https://github.com/apache/airflow/pull/22900 | d6141c6594da86653b15d67eaa99511e8fe37a26 | cd70afdad92ee72d96edcc0448f2eb9b44c8597e | "2022-03-22T08:50:02Z" | python | "2022-05-01T10:59:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,417 | ["airflow/providers/jenkins/hooks/jenkins.py", "airflow/providers/jenkins/provider.yaml", "airflow/providers/jenkins/sensors/__init__.py", "airflow/providers/jenkins/sensors/jenkins.py", "tests/providers/jenkins/hooks/test_jenkins.py", "tests/providers/jenkins/sensors/__init__.py", "tests/providers/jenkins/sensors/test_jenkins.py"] | Jenkins Sensor to monitor a jenkins job finish | ### Description
A sensor for Jenkins jobs in Airflow. There are cases in which we need to monitor the state of a build in Jenkins and pause the DAG until the build finishes.
### Use case/motivation
I am trying to find a way of pausing the DAG until a given build, or the last build, in a Jenkins job finishes.
This could be done in different ways, but the cleanest is to have a dedicated Jenkins sensor in Airflow that uses the Jenkins hook and connection.
There are two cases for monitoring a job in Jenkins:
1. Specify the build number to monitor
2. Get the last build automatically and check whether it is still running or not.
Technically, the only important thing from the sensor's perspective is to check whether the build is ongoing or finished. Monitoring for a specific status or result doesn't make sense here; this use case only concerns whether there is an ongoing build in the job or not. If a build is ongoing, the sensor should wait for it to finish.
If the build number is not specified, the sensor should query for the latest build number and check whether it is running or not.
If the build number is specified, it should check the run state of that specific build.
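For illustration only, here is a minimal sketch of what such a sensor's `poke` could look like, assuming the existing `JenkinsHook` and the `python-jenkins` client it wraps (class and parameter names are placeholders, not a final API):
```python
from airflow.providers.jenkins.hooks.jenkins import JenkinsHook
from airflow.sensors.base import BaseSensorOperator

class JenkinsBuildSensor(BaseSensorOperator):
    def __init__(self, *, jenkins_connection_id, job_name, build_number=None, **kwargs):
        super().__init__(**kwargs)
        self.jenkins_connection_id = jenkins_connection_id
        self.job_name = job_name
        self.build_number = build_number

    def poke(self, context):
        server = JenkinsHook(self.jenkins_connection_id).get_jenkins_server()
        # Fall back to the latest build when no explicit build number is given.
        build_number = self.build_number or server.get_job_info(self.job_name)["lastBuild"]["number"]
        is_building = server.get_build_info(self.job_name, build_number)["building"]
        # The sensor succeeds only once the build is no longer running.
        return not is_building
```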
### Related issues
There are no related issues.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22417 | https://github.com/apache/airflow/pull/22421 | ac400ebdf3edc1e08debf3b834ade3809519b819 | 4e24b22379e131fe1007e911b93f52e1b6afcf3f | "2022-03-22T07:57:54Z" | python | "2022-03-24T08:01:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,413 | ["chart/templates/flower/flower-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_flower.py"] | Flower is missing extraVolumeMounts | ### Official Helm Chart version
1.5.0 (latest released)
### Apache Airflow version
2.2.4 (latest released)
### Kubernetes Version
1.19
### Helm Chart configuration
```
flower:
extraContainers:
- image: foo
imagePullPolicy: IfNotPresent
name: foo
volumeMounts:
- mountPath: /var/log/foo
name: foo
readOnly: false
extraVolumeMounts:
- mountPath: /var/log/foo
name: foo
extraVolumes:
- emptyDir: {}
name: foo
```
### Docker Image customisations
_No response_
### What happened
```
Error: values don't meet the specifications of the schema(s) in the following chart(s):
airflow:
- flower: Additional property extraVolumeMounts is not allowed
```
### What you think should happen instead
The flower pod should support the same extraVolumeMounts that other pods support.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22413 | https://github.com/apache/airflow/pull/22414 | 7667d94091b663f9d9caecf7afe1b018bcad7eda | f3bd2a35e6f7b9676a79047877dfc61e5294aff8 | "2022-03-21T22:58:02Z" | python | "2022-03-22T11:17:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,404 | ["airflow/task/task_runner/standard_task_runner.py"] | tempfile.TemporaryDirectory does not get deleted after task failure | ### Discussed in https://github.com/apache/airflow/discussions/22403
<div type='discussions-op-text'>
<sup>Originally posted by **m1racoli** March 18, 2022</sup>
### Apache Airflow version
2.2.4 (latest released)
### What happened
When creating a temporary directory with `tempfile.TemporaryDirectory()` and then failing a task, the corresponding directory does not get deleted.
This happens in Airflow on Astronomer as well as locally in `astro dev` setups, for both LocalExecutor and CeleryExecutor.
### What you think should happen instead
As in normal Python environments, the directory should get cleaned up, even in the case of a raised exception.
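As an aside, a possible workaround (a hedged sketch only, not the fix tracked by this issue) is to manage the directory with a context manager so cleanup happens deterministically when the exception propagates, instead of relying on the object's finalizer:
```python
import tempfile

def run_with_deterministic_cleanup():
    # The with-block removes the directory on exit, even when an exception is raised.
    with tempfile.TemporaryDirectory() as tmpdir:
        print(f"directory {tmpdir} created")
        raise RuntimeError("error!")
```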
### How to reproduce
Running this DAG will leave a temporary directory in the corresponding location:
```python
import os
import tempfile
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago
class MyException(Exception):
pass
@task
def run():
tmpdir = tempfile.TemporaryDirectory()
print(f"directory {tmpdir.name} created")
assert os.path.exists(tmpdir.name)
raise MyException("error!")
@dag(start_date=days_ago(1))
def tempfile_test():
run()
_ = tempfile_test()
```
### Operating System
Debian (Astronomer Airflow Docker image)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!3.0.0
apache-airflow-providers-cncf-kubernetes==1!3.0.2
apache-airflow-providers-elasticsearch==1!2.2.0
apache-airflow-providers-ftp==1!2.0.1
apache-airflow-providers-google==1!6.4.0
apache-airflow-providers-http==1!2.0.3
apache-airflow-providers-imap==1!2.2.0
apache-airflow-providers-microsoft-azure==1!3.6.0
apache-airflow-providers-mysql==1!2.2.0
apache-airflow-providers-postgres==1!3.0.0
apache-airflow-providers-redis==1!2.0.1
apache-airflow-providers-slack==1!4.2.0
apache-airflow-providers-sqlite==1!2.1.0
apache-airflow-providers-ssh==1!2.4.0
```
### Deployment
Astronomer
### Deployment details
GKE, vanilla `astro dev`, LocalExecutor and CeleryExecutor.
### Anything else
Always
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/22404 | https://github.com/apache/airflow/pull/22475 | 202a3a10e553a8a725a0edb6408de605cb79e842 | b0604160cf95f76ed75b4c4ab42b9c7902c945ed | "2022-03-21T16:16:30Z" | python | "2022-03-24T21:23:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,398 | ["airflow/providers/google/cloud/hooks/cloud_build.py", "tests/providers/google/cloud/hooks/test_cloud_build.py"] | CloudBuildRunBuildTriggerOPerator: 'property' object has no attribute 'build' | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.6.0
### Apache Airflow version
2.2.4 (latest released)
### Operating System
GCP Cloud Composer 2
### Deployment
Composer
### Deployment details
We're currently using the default set up of cloud composer 2 on GCP.
### What happened
When trying to run a Cloud Build trigger using the `CloudBuildRunBuildTriggerOperator`, we receive the following error:
```
[2022-03-21, 12:28:57 UTC] {credentials_provider.py:312} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook.
[2022-03-21, 12:28:58 UTC] {cloud_build.py:503} INFO - Start running build trigger: <TRIGGER ID>.
[2022-03-21, 12:29:00 UTC] {taskinstance.py:1702} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 75, in _get_build_id_from_operation
return operation.metadata.build.id
AttributeError: 'property' object has no attribute 'build'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1330, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1457, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1513, in _execute_task
result = execute_callable(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/cloud_build.py", line 739, in execute
result = hook.run_build_trigger(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 433, in inner_wrapper
return func(self, *args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 512, in run_build_trigger
id_ = self._get_build_id_from_operation(Operation)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 77, in _get_build_id_from_operation
raise AirflowException("Could not retrieve Build ID from Operation.")
airflow.exceptions.AirflowException: Could not retrieve Build ID from Operation.
[2022-03-21, 12:29:00 UTC] {taskinstance.py:1268} INFO - Marking task as FAILED. dag_id=deploy_index, task_id=trigger_build, execution_date=20220321T122848, start_date=20220321T122856, end_date=20220321T122900
[2022-03-21, 12:29:00 UTC] {standard_task_runner.py:89} ERROR - Failed to execute job 1003 for task trigger_build
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 75, in _get_build_id_from_operation
return operation.metadata.build.id
AttributeError: 'property' object has no attribute 'build'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 94, in wrapper
return f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 302, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1330, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1457, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1513, in _execute_task
result = execute_callable(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/cloud_build.py", line 739, in execute
result = hook.run_build_trigger(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 433, in inner_wrapper
return func(self, *args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 512, in run_build_trigger
id_ = self._get_build_id_from_operation(Operation)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 77, in _get_build_id_from_operation
raise AirflowException("Could not retrieve Build ID from Operation.")
airflow.exceptions.AirflowException: Could not retrieve Build ID from Operation.
[2022-03-21, 12:29:00 UTC] {local_task_job.py:154} INFO - Task exited with return code 1
[2022-03-21, 12:29:01 UTC] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
Below is the code snippet that's causing the above error.
```
trigger_deploy = CloudBuildRunBuildTriggerOperator(
task_id="trigger_deploy",
trigger_id="TRIGGER_ID",
project_id="PROEJCT_ID",
source=RepoSource({"project_id": "PROJECT_ID",
"repo_name": "REPO_NAME",
"branch_name": "BRANCH",
}),
wait=True,
do_xcom_push=True
)
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
We reckon the source of the bug is [here](https://github.com/apache/airflow/blob/71c980a8ffb3563bf16d8a23a58de54c9e8cf556/airflow/providers/google/cloud/hooks/cloud_build.py#L165).
``` python
id_ = self._get_build_id_from_operation(Operation)
```
The function signature expects an *instance* of `Operation`, but the `Operation` class itself is being passed in.
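For illustration, a hedged sketch of what the corrected hook code could look like - the key point being that the `Operation` instance returned by the client call is what gets passed on (surrounding code is elided and variable names such as `client` are assumptions, not the exact provider implementation):
```python
# Inside CloudBuildHook.run_build_trigger, `client` being the Cloud Build client used by the hook:
operation = client.run_build_trigger(project_id=project_id, trigger_id=trigger_id, source=source)
id_ = self._get_build_id_from_operation(operation)  # pass the instance, not the Operation class
```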
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22398 | https://github.com/apache/airflow/pull/22419 | f51a674dd907a00e4bd9b4b44fb036d28762b5cc | 0f0a1a7d22dffab4487c35d3598b3b6aaf24c4c6 | "2022-03-21T13:16:58Z" | python | "2022-03-23T13:37:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,392 | ["airflow/cli/commands/connection_command.py", "tests/cli/commands/test_connection_command.py"] | Unknown connection types fail in cryptic ways | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I created a connection like:
```
airflow connections add fsconn --conn-host /tmp --conn-type File
```
When I really should have created it like:
```
airflow connections add fsconn --conn-host /tmp --conn-type fs
```
While using this connection, I found that FileSensor would only work if I provided absolute paths. Relative paths would cause the sensor to time out because it couldn't find the file. Using `fs` instead of `File` made the FileSensor start working as I expected.
### What you think should happen instead
Ideally I'd have gotten an error when I tried to create the connection with an invalid type.
Or if that's not practical, then I should have gotten an error in the task logs when I tried to use the FileSensor with a connection of the wrong type.
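A hedged sketch of the kind of check that could back the first option, validating `--conn-type` against the hook types registered by installed providers (details here are assumptions, not a shipped implementation):
```python
from airflow.providers_manager import ProvidersManager

def warn_on_unknown_conn_type(conn_type: str) -> None:
    known_conn_types = set(ProvidersManager().hooks)
    if conn_type not in known_conn_types:
        print(
            f"Warning: connection type {conn_type!r} is not registered by any installed "
            "provider, so operators that resolve connections by type (e.g. FileSensor's 'fs') may misbehave."
        )
```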
### How to reproduce
_No response_
### Operating System
debian (in docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start`
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22392 | https://github.com/apache/airflow/pull/22688 | 9a623e94cb3e4f02cbe566e02f75f4a894edc60d | d7993dca2f182c1d0f281f06ac04b47935016cf1 | "2022-03-21T04:36:56Z" | python | "2022-04-13T19:45:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,381 | ["airflow/providers/amazon/aws/hooks/athena.py", "airflow/providers/amazon/aws/hooks/emr.py", "airflow/providers/amazon/aws/operators/athena.py", "airflow/providers/amazon/aws/operators/emr.py"] | AthenaOperator retries max_tries mix-up | ### Apache Airflow version
2.2.4 (latest released)
### What happened
After a recent upgrade from 1.10.9 to 2.2.4, we observe an odd behavior where the aforementioned attributes (`retries` and `max_tries`) are wrongly coupled.
An example to showcase the issue:
```
AthenaOperator(
...
retries=3,
max_tries=30,
...)
```
Related Documentation states:
* retries: Number of retries that should be performed before failing the task
* max_tries: Number of times to poll for query state before function exits
Despite the above specifying `max_tries=30`, inspection of the related _Task Instance Details_ shows that the value of both attributes is **3**.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
Imagine a Query, executed on an hourly basis, with a varying scope, causing it to 'organically' execute for anywhere between 5 - 10 minutes. This Query Task should Fail after 3 execution attempts.
In such cases, we would like to poll the state of the Query frequently (every 15 seconds), in order to avoid redundant idle time for downstream Tasks.
A configuration matching the above description:
```
AthenaOperator(
...
retry_delay=15,
retries=3,
max_tries=40, # 40 polls * 15 seconds delay between polls = 10 minutes
...)
```
When deployed, `retries == max_tries == 3`, thus causing the Task to terminate after 45 seconds
In order to quickly avert this situation where our ETL breaks, we are using the following configuration:
```
AthenaOperator(
...
retry_delay=15,
retries=40,
max_tries=40,
...)
```
With the last configuration, our task does not terminate preemptively but will retry **40 times** before failing - which causes an issue with downstream tasks' SLAs, at the very least (that is, before weighing in the wasted time and operational costs).
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22381 | https://github.com/apache/airflow/pull/25971 | 18386026c28939fa6d91d198c5489c295a05dcd2 | d5820a77e896a1a3ceb671eddddb9c8e3bcfb649 | "2022-03-20T08:50:45Z" | python | "2022-09-11T23:25:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,380 | ["dev/provider_packages/SETUP_TEMPLATE.cfg.jinja2"] | Newest providers incorrectly include `gitpython` and `wheel` in `install_requires` | ### Apache Airflow Provider(s)
ftp, openfaas, sqlite
### Versions of Apache Airflow Providers
I am the maintainer of the Airflow Providers on conda-forge. The providers I listed above are the first 3 I have looked at but I believe all are affected. These are the new releases (as of yesterday) of all providers.
### Apache Airflow version
2.2.4 (latest released)
### Operating System
Linux (Azure CI)
### Deployment
Other Docker-based deployment
### Deployment details
This is on conda-forge Azure CI.
### What happened
All providers I have looked at (and I suspect all providers) now have `gitpython` and `wheel` in their `install_requires`:
From `apache-airflow-providers-ftp-2.1.1.tar.gz`:
```
install_requires =
gitpython
wheel
```
I believe these requirements are incorrect (neither should be needed at install time) and this will make maintaining these packages on conda-forge an absolute nightmare! (It's already a serious challenge because I get a PR to update each time each provider gets updated.)
### What you think should happen instead
These install requirements should be removed.
### How to reproduce
Open any of the newly released providers from pypi and look at `setup.cfg`.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22380 | https://github.com/apache/airflow/pull/22382 | 172df9ee247af62e9417cebb2e2a3bc2c261a204 | ab4ba6f1b770a95bf56965f3396f62fa8130f9e9 | "2022-03-20T07:48:57Z" | python | "2022-03-20T12:15:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,358 | ["airflow/api_connexion/openapi/v1.yaml"] | ScheduleInterval schema in OpenAPI specs should have "nullable: true" otherwise generated OpenAPI client will throw an error in case of nullable "schedule_interval" | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Currently we have this schema definition in the OpenAPI specs:
```
ScheduleInterval:
description: |
Schedule interval. Defines how often DAG runs, this object gets added to your latest task instance's
execution_date to figure out the next schedule.
readOnly: true
oneOf:
- $ref: '#/components/schemas/TimeDelta'
- $ref: '#/components/schemas/RelativeDelta'
- $ref: '#/components/schemas/CronExpression'
discriminator:
propertyName: __type
```
The issue with the above is that, when using an OpenAPI generator for Java for example (I think it is the same for other languages as well), it will treat `ScheduleInterval` as a **non-nullable** property, although what is returned under `/dags/{dag_id}` or `/dags/{dag_id}/details` in case of a `None` `schedule_interval` is `null` for `schedule_interval`.
### What you think should happen instead
We should have `nullable: true` in the `ScheduleInterval` schema, which will allow `schedule_interval` to be parsed as `null`.
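For illustration, the adjusted schema could look like this (the snippet quoted above with one added line; purely a sketch of the proposal, not a final patch):
```yaml
ScheduleInterval:
  description: |
    Schedule interval. Defines how often DAG runs, this object gets added to your latest task instance's
    execution_date to figure out the next schedule.
  nullable: true
  readOnly: true
  oneOf:
    - $ref: '#/components/schemas/TimeDelta'
    - $ref: '#/components/schemas/RelativeDelta'
    - $ref: '#/components/schemas/CronExpression'
  discriminator:
    propertyName: __type
```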
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
If the maintainers think this is a valid bug, I will be more than happy to submit a PR :)
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22358 | https://github.com/apache/airflow/pull/24253 | b88ce951881914e51058ad71858874fdc00a3cbe | 7e56bf662915cd58849626d7a029a4ba70cdda4d | "2022-03-18T09:13:24Z" | python | "2022-06-07T11:21:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,340 | ["airflow/decorators/base.py", "airflow/models/baseoperator.py", "airflow/models/mappedoperator.py", "airflow/serialization/serialized_objects.py", "tests/models/test_taskinstance.py", "tests/serialization/test_dag_serialization.py"] | Expanding operators inside of task groups causes KeyError | ### Apache Airflow version
main (development)
### What happened
Given this DAG:
```python3
from datetime import datetime
from airflow.decorators import task
from airflow.models import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.task_group import TaskGroup
foo_var = {"VAR1": "FOO"}
bar_var = {"VAR1": "BAR"}
hi_cmd = 'echo "hello $VAR1"'
bye_cmd = 'echo "goodbye $VAR1"'
@task
def envs():
return [foo_var, bar_var]
@task
def cmds():
return [hi_cmd, bye_cmd]
with DAG(dag_id="mapped_bash", start_date=datetime(1970, 1, 1)) as dag:
with TaskGroup(group_id="dynamic"):
dynamic = BashOperator.partial(task_id="bash").expand(
env=envs(), bash_command=cmds()
)
```
I ran `airflow dags test mapped_bash $(date +%Y-%m-%dT%H:%M:%SZ)`
Got this output:
```
[2022-03-17 09:21:24,590] {taskinstance.py:1451} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=mapped_bash
AIRFLOW_CTX_TASK_ID=dynamic.bash
AIRFLOW_CTX_EXECUTION_DATE=2022-03-17T09:21:12+00:00
AIRFLOW_CTX_DAG_RUN_ID=backfill__2022-03-17T09:21:12+00:00
[2022-03-17 09:21:24,590] {subprocess.py:62} INFO - Tmp dir root location:
/var/folders/5m/nvs9yfcs6mlfm_63gnk6__3r0000gn/T
[2022-03-17 09:21:24,591] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'echo "hello $VAR1"']
[2022-03-17 09:21:24,597] {subprocess.py:85} INFO - Output:
[2022-03-17 09:21:24,601] {subprocess.py:92} INFO - hello FOO
[2022-03-17 09:21:24,601] {subprocess.py:96} INFO - Command exited with return code 0
[2022-03-17 09:21:24,616] {taskinstance.py:1277} INFO - Marking task as SUCCESS. dag_id=mapped_bash, task_id=dynamic.bash, execution_date=20220317T092112, start_date=20220317T152114, end_date=20220317T152124
[2022-03-17 09:21:24,638] {taskinstance.py:1752} WARNING - We expected to get frame set in local storage but it was not. Please report this as an issue with full logs at https://github.com/apache/airflow/issues/new
Traceback (most recent call last):
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 1335, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 1437, in _execute_task_with_callbacks
self.render_templates(context=context)
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 2091, in render_templates
task = self.task.render_template_fields(context)
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 602, in render_template_fields
unmapped_task = self.unmap()
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 454, in unmap
dag._remove_task(self.task_id)
File "/Users/matt/src/airflow/airflow/models/dag.py", line 2188, in _remove_task
task = self.task_dict.pop(task_id)
KeyError: 'dynamic.bash'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 1750, in get_truncated_error_traceback
execution_frame = _TASK_EXECUTION_FRAME_LOCAL_STORAGE.frame
AttributeError: '_thread._local' object has no attribute 'frame'
```
### What you think should happen instead
No error
### How to reproduce
Trigger the dag above
### Operating System
Mac OS 11.6
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
```
❯ cd ~/src/airflow
❯ git rev-parse --short HEAD
df6058c86
❯ airflow info
Apache Airflow
version | 2.3.0.dev0
executor | SequentialExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
sql_alchemy_conn | sqlite:////Users/matt/2022/03/16/airflow.db
dags_folder | /Users/matt/2022/03/16/dags
plugins_folder | /Users/matt/2022/03/16/plugins
base_log_folder | /Users/matt/2022/03/16/logs
remote_base_log_folder |
System info
OS | Mac OS
architecture | x86_64
uname | uname_result(system='Darwin', node='LIGO', release='20.6.0', version='Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64', machine='x86_64')
locale | ('en_US', 'UTF-8')
python_version | 3.9.10 (main, Jan 15 2022, 11:48:00) [Clang 13.0.0 (clang-1300.0.29.3)]
python_location | /Users/matt/src/qa-scenario-dags/venv/bin/python3.9
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22340 | https://github.com/apache/airflow/pull/22355 | 5eb63357426598f99ed50b002b72aebdf8790f73 | 87d363e217bf70028e512fd8ded09a01ffae0162 | "2022-03-17T15:30:06Z" | python | "2022-03-20T02:05:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,328 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | bigquery provider's - BigQueryCursor missing implementation for description property. | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When trying to run the following code:
```
import pandas as pd
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook
# using the default connection
hook = BigQueryHook()
df = pd.read_sql(
"SELECT * FROM table_name", con=hook.get_conn()
)
```
I run into the following issue:
```
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<string>", line 1, in <module>
File "/Users/utkarsharma/sandbox/astronomer/astro/.nox/dev/lib/python3.8/site-packages/pandas/io/sql.py", line 602, in read_sql
return pandas_sql.read_query(
File "/Users/utkarsharma/sandbox/astronomer/astro/.nox/dev/lib/python3.8/site-packages/pandas/io/sql.py", line 2117, in read_query
columns = [col_desc[0] for col_desc in cursor.description]
File "/Users/utkarsharma/sandbox/astronomer/astro/.nox/dev/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2599, in description
raise NotImplementedError
NotImplementedError
```
### What you think should happen instead
The property should be implemented in a similar manner as [postgres_to_gcs.py](https://github.com/apache/airflow/blob/7bd165fbe2cbbfa8208803ec352c5d16ca2bd3ec/airflow/providers/google/cloud/transfers/postgres_to_gcs.py#L58)
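For context on what callers need: as the traceback above shows, `pandas.read_sql` only uses the first element (the column name) of each 7-item DB-API description tuple. A purely illustrative example of the expected shape, with made-up column names:
```python
description = [
    ("name", "STRING", None, None, None, None, True),
    ("value", "INTEGER", None, None, None, None, True),
]
columns = [col_desc[0] for col_desc in description]  # what pandas.io.sql does internally
print(columns)  # ['name', 'value']
```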
### How to reproduce
_No response_
### Operating System
macOS
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.5.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22328 | https://github.com/apache/airflow/pull/25366 | e84d753015e5606c29537741cdbe8ae08012c3b6 | 7d2c2ee879656faf47829d1ad89fc4441e19a66e | "2022-03-17T00:12:26Z" | python | "2022-08-04T14:48:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,325 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_schema.py", "airflow/models/dag.py", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"] | ReST API : get_dag should return more than a simplified view of the dag | ### Description
The current response payload from https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_dag is a useful but simple view of the state of a given DAG. However, it is missing some additional attributes that I feel would be useful for individuals/groups who choose to interact with Airflow primarily through the ReST interface.
### Use case/motivation
As part of a testing workflow we upload DAGs to a running airflow instance and want to trigger an execution of the DAG after we know that the scheduler has updated it. We're currently automating this process through the ReST API, but the `last_updated` is not exposed.
This should be implemented from the dag_source endpoint.
https://github.com/apache/airflow/blob/main/airflow/api_connexion/endpoints/dag_source_endpoint.py
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22325 | https://github.com/apache/airflow/pull/22637 | 55ee62e28a0209349bf3e49a25565e7719324500 | 9798c8cad1c2fe7e674f8518cbe5151e91f1ca7e | "2022-03-16T20:49:07Z" | python | "2022-03-31T10:40:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,320 | ["airflow/www/templates/airflow/dag.html"] | Copying DAG ID from UI and pasting in Slack includes schedule | ### Apache Airflow version
2.2.3
### What happened
(Yes, I know the title says Slack and it might not seem like an Airflow issue, but so far this is the only application I noticed this on. There might be others.)
PR https://github.com/apache/airflow/pull/11503 was a fix to issue https://github.com/apache/airflow/issues/11500 to prevent text-selection of the schedule interval when selecting the DAG ID. However, it does not fix pasting the text into certain applications (such as Slack), at least on a Mac.
@ryanahamilton thanks for the fix - it works in the visible sense (double-clicking the DAG ID to select it will no longer show the schedule interval and next run as selected in the UI). However, if you copy what is selected, for some reason it still includes the schedule interval and next run when pasted into certain applications.
I can't be sure why this is happening, but in certain places, such as Google Chrome, TextEdit, or Visual Studio Code, pasting will only include the DAG ID and a newline, while in other applications such as Slack (so far the only one I can tell) it includes the schedule interval and next run, as you can see below:
- Schedule interval and next run **not shown as selected** on the DAG page:
![Screen Shot 2022-03-16 at 11 04 21 AM](https://user-images.githubusercontent.com/45696489/158659392-2df1f428-61e9-4785-be21-cdb1eda9ff6e.png)
- Schedule interval and next run **not pasted** in Google Chrome and TextEdit:
![Screen Shot 2022-03-16 at 11 05 10 AM](https://user-images.githubusercontent.com/45696489/158659521-adc2be64-1b31-403f-8630-b36b40900b42.png)
![Screen Shot 2022-03-16 at 11 15 14 AM](https://user-images.githubusercontent.com/45696489/158659539-0c76c079-3b44-4846-b41e-9038689bb33d.png)
- Schedule interval and next run **_pasted and visible_** in Slack:
![Screen Shot 2022-03-16 at 11 05 40 AM](https://user-images.githubusercontent.com/45696489/158659837-a57b0a57-306e-4ea2-9648-a4922d41c403.png)
### What you think should happen instead
When you select the DAG ID on the DAG page, copy what is selected, and then paste into a Slack message, only the DAG ID should be pasted.
### How to reproduce
Select the DAG ID on the DAG page (such as double-clicking the DAG ID), copy what is selected, and then paste into a Slack message.
### Operating System
macOS 1.15.7 (Catalina)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
This is something that possibly could be a Slack bug (one could say that Slack should strip out anything that is `user-select: none`), however it should be possible to fix the HTML layout so `user-select: none` is not even needed to prevent selection. It is sort of a band-aid fix.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22320 | https://github.com/apache/airflow/pull/28643 | 1da17be37627385fed7fc06584d72e0abda6a1b5 | 9aea857343c231319df4c5f47e8b4d9c8c3975e6 | "2022-03-16T18:31:26Z" | python | "2023-01-04T21:19:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,318 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py"] | KubernetesPodOperator xcom sidecar stuck in running | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When the main container errors out and fails to write a return.json file, the xcom sidecar hangs instead of exiting properly with an empty return.json.
This is a problem because we want to suppress the following error, as the reason the pod failed should not be that xcom failed.
```
[2022-03-16, 17:08:07 UTC] {pod_manager.py:342} INFO - Running command... cat /airflow/xcom/return.json
[2022-03-16, 17:08:07 UTC] {pod_manager.py:349} INFO - stderr from command: cat: can't open '/airflow/xcom/return.json': No such file or directory
[2022-03-16, 17:08:07 UTC] {pod_manager.py:342} INFO - Running command... kill -s SIGINT 1
[2022-03-16, 17:08:08 UTC] {kubernetes_pod.py:417} INFO - Deleting pod: test.20882a4c607d418d94e87231214d34c0
[2022-03-16, 17:08:08 UTC] {taskinstance.py:1718} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 385, in execute
result = self.extract_xcom(pod=self.pod)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 360, in extract_xcom
result = self.pod_manager.extract_xcom(pod)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 337, in extract_xcom
raise AirflowException(f'Failed to extract xcom from pod: {pod.metadata.name}')
airflow.exceptions.AirflowException: Failed to extract xcom from pod: test.20882a4c607d418d94e87231214d34c0
```
and have the KubernetesPodOperator exit gracefully
### What you think should happen instead
sidecar should exit with an empty xcom return value
### How to reproduce
KubernetesPodOperator with command `mkdir -p /airflow/xcom;touch /airflow/xcom/return.json; cat a >> /airflow/xcom/return.json`
### Operating System
-
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22318 | https://github.com/apache/airflow/pull/24993 | d872edacfe3cec65a9179eff52bf219c12361fef | f05a06537be4d12276862eae1960515c76aa11d1 | "2022-03-16T17:12:04Z" | python | "2022-07-16T20:37:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,298 | ["airflow/kubernetes/pod_generator.py", "tests/kubernetes/test_pod_generator.py"] | pod_override ignores namespace configuration | ### Apache Airflow version
2.0.2
### What happened
I've attempted to use the pod_override as per the documentation [here](https://airflow.apache.org/docs/apache-airflow/2.0.2/executor/kubernetes.html#pod-override). Labels, annotations, and service accounts work. However, attempting to override the namespace does not: I would expect the pod to be created in the namespace that I've set `pod_override` to, but it does not.
My speculation is that the `pod_generator.py` code is incorrect [here](https://github.com/apache/airflow/blob/2.0.2/airflow/kubernetes/pod_generator.py#L405). The order of reconciliation goes:
```
# Reconcile the pods starting with the first chronologically,
# Pod from the pod_template_File -> Pod from executor_config arg -> Pod from the K8s executor
pod_list = [base_worker_pod, pod_override_object, dynamic_pod]
```
Note that dynamic pod takes precedence. Note that `dynamic_pod` has the namespace from whatever namespace is [passed](https://github.com/apache/airflow/blob/2.0.2/airflow/kubernetes/pod_generator.py#L373) in. It is initialized from `self.kube_config.kube_namespace` [here](https://github.com/apache/airflow/blob/ac77c89018604a96ea4f5fba938f2fbd7c582793/airflow/executors/kubernetes_executor.py#L245). Therefore, the kube config's namespace takes precedence and will override the namespace of the `pod_override`, because only one namespace can exist (unlike labels and annotations, which can be additive). This unfortunately means pods created by the KubernetesExecutor will run in the config's namespace. For what it's worth, I set the namespace via an env var:
```
AIRFLOW__KUBERNETES__NAMESPACE = "foobar"
```
This code flow remains the same in the latest version of Airflow -- I speculate that the bug might still be present.
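To make the precedence described above concrete, here is a hedged, simplified illustration (not the real `PodGenerator` code) of why the last pod in that list wins for a scalar field like the namespace:
```python
from functools import reduce

def reconcile_namespace(base, override):
    # Simplified stand-in for the pairwise reconcile: the later value wins when set.
    return override if override is not None else base

namespaces = ["template-ns", "pod-override-ns", "kube-config-ns"]  # base, pod_override, dynamic
print(reduce(reconcile_namespace, namespaces))  # -> 'kube-config-ns': the kube config always wins
```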
### What you think should happen instead
If we use pod_override with a namespace, the namespace where the pod is run should take the namespace desired in pod_override.
### How to reproduce
Create a DAG using a PythonOperator or any other operator. Use the pod_override:
```
"pod_override": k8s.V1Pod(
metadata=k8s.V1ObjectMeta(namespace="fakenamespace",
annotations={"test": "annotation"},
labels={'foo': 'barBAZ'}),
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base"
)
],
service_account_name="foobar"
)
)
```
Look at the k8s spec file or use `kubectl` to verify where this pod attempts to start running. It should be `fakenamespace`, but it will likely instead be in the same namespace as whatever is set in the config file.
### Operating System
Amazon EKS 1.19
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22298 | https://github.com/apache/airflow/pull/24342 | 6476afda208eb6aabd58cc00db8328451c684200 | 1fe07e5cebac5e8a0b3fe7e88c65f6d2b0c2134d | "2022-03-16T00:59:19Z" | python | "2022-07-06T11:56:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,250 | ["Dockerfile", "docs/docker-stack/build.rst", "scripts/docker/pip"] | Fail installing `pip` packages in Dockerfile extension when attempting to install as root | ### Description
We should fail attempts to install packages with `pip` after switching to the root user in the PROD Airflow image.
We should provide the user with a good error message containing information on how to install `pip` packages properly (i.e. switching to the `airflow` user first).
### Use case/motivation
When extending the Airflow image, all packages should be installed as the "airflow" user. Some users attempt to run `pip install` right after adding some `apt` packages - which requires switching to root.
For example, this is wrong:
```
FROM apache/airflow:2.1.2-python3.8
USER root
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
vim \
awscli \
&& apt-get autoremove -yqq --purge \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN pip install -Iv --no-cache-dir apache-airflow-providers-amazon==3.0.0
USER airflow
```
And should be:
```
FROM apache/airflow:2.1.2-python3.8
USER root
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
vim \
awscli \
&& apt-get autoremove -yqq --purge \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
USER airflow
RUN pip install -Iv --no-cache-dir apache-airflow-providers-amazon==3.0.0
```
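One way this could work (a hedged sketch, not necessarily the final approach) is a small wrapper that shadows `pip` on root's `PATH` and refuses to run, pointing the user at the correct pattern. The path to the real `pip` below is an assumption:
```bash
#!/usr/bin/env bash
# Hypothetical guard script placed ahead of the real pip for the root user.
if [[ $(id -u) == "0" ]]; then
    echo >&2 "ERROR: You are running pip as root. Installing packages as root will not work."
    echo >&2 "Switch back first, e.g. add 'USER airflow' before 'RUN pip install ...'."
    exit 1
fi
exec /usr/local/bin/pip "${@}"
```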
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22250 | https://github.com/apache/airflow/pull/22292 | e07bc63ec0e5b679c87de8e8d4cdff1cf4671146 | b00fc786723c4356de93792c32c85f62b2e36ed9 | "2022-03-14T15:38:20Z" | python | "2022-03-15T18:15:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,248 | ["airflow/utils/docs.py", "docs/apache-airflow-providers/index.rst"] | Allow custom redirect for provider information in /provider | ### Description
`/provider` enables users to get amazing information via the UI; however, if you've written a custom provider, the documentation redirect defaults to `https://airflow.apache.org/docs/airflow-provider-{rest_of_name}/{version}/`, which isn't useful for custom operators. (If this feature already exists then I must've missed the documentation on it, sorry!)
### Use case/motivation
As an airflow developer I've written a custom provider package and would like to link to my internal documentation as well as my private github repo via the `/provider` entry for my custom provider.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
As this is a UI change + more, I am willing to submit a PR, but would likely need help.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22248 | https://github.com/apache/airflow/pull/23012 | 3b2ef88f877fc5e4fcbe8038f0a9251263eaafbc | 7064a95a648286a4190a452425626c159e467d6e | "2022-03-14T15:27:23Z" | python | "2022-04-22T13:21:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,220 | ["airflow/providers/databricks/provider.yaml", "docs/apache-airflow-providers-databricks/index.rst", "setup.py", "tests/providers/databricks/operators/test_databricks_sql.py"] | Databricks SQL fails on Python 3.10 | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
The Databricks SQL provider does not work on Python 3.10 due to `from collections import Iterable` in the `databricks-sql-connector` library:
* https://pypi.org/project/databricks-sql-connector/
Details of this issue are discussed in https://github.com/apache/airflow/pull/22050
For now we will likely just exclude the tests (and mark the Databricks provider as not compatible with Python 3.10). But once this is fixed (in either the 1.0.2 or the upcoming 2.0.0 version of the library), we will restore it.
### Apache Airflow version
main (development)
### Operating System
All
### Deployment
Other
### Deployment details
Just Breeze with Python 3.10
### What happened
The tests are failing:
```
self = <databricks.sql.common.ParamEscaper object at 0x7fe81c6dd6c0>
item = ['file1', 'file2', 'file3']
def escape_item(self, item):
if item is None:
return 'NULL'
elif isinstance(item, (int, float)):
return self.escape_number(item)
elif isinstance(item, basestring):
return self.escape_string(item)
> elif isinstance(item, collections.Iterable):
E AttributeError: module 'collections' has no attribute 'Iterable'
```
https://github.com/apache/airflow/runs/5523057543?check_suite_focus=true#step:8:16781
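For reference, a minimal sketch of the underlying incompatibility (illustrative only, not the provider's actual code): the `collections.Iterable` alias was removed in Python 3.10, so the check has to go through `collections.abc` instead:
```python
import collections.abc


def escape_item(item):
    # On Python 3.10+ the Iterable ABC is only available via collections.abc.
    if isinstance(item, collections.abc.Iterable) and not isinstance(item, (str, bytes)):
        return ",".join(str(i) for i in item)
    return str(item)


print(escape_item(["file1", "file2", "file3"]))  # file1,file2,file3
```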
### What you expected to happen
Tests succeed :)
### How to reproduce
Run `TestDatabricksSqlCopyIntoOperator` in Python 3.10 environment.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22220 | https://github.com/apache/airflow/pull/22886 | aa8c08db383ebfabf30a7c2b2debb64c0968df48 | 7be57eb2566651de89048798766f0ad5f267cdc2 | "2022-03-13T14:55:30Z" | python | "2022-04-10T18:32:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,191 | ["airflow/dag_processing/manager.py", "airflow/dag_processing/processor.py", "tests/dag_processing/test_manager.py"] | dag_processing code needs to handle OSError("handle is closed") in poll() and recv() calls | ### Apache Airflow version
2.1.4
### What happened
The problem also exists in the latest version of the Airflow code, but I experienced it in 2.1.4.
This is the root cause of problems experienced in [issue#13542](https://github.com/apache/airflow/issues/13542).
I'll provide a stack trace below. The problem is in the code of airflow/dag_processing/processor.py (and manager.py): all poll() and recv() calls on the multiprocessing communication channels need to be wrapped in exception handlers that handle OSError("handle is closed") exceptions. If one looks at the Python multiprocessing source code, it throws this exception when the channel's handle has been closed.
This occurs in Airflow when a DAG File Processor has been killed or terminated; the Airflow code closes the communication channel when it is killing or terminating a DAG File Processor process (for example, when a dag_file_processor_timeout occurs). This killing or terminating happens asynchronously (in another process) from the process calling poll() or recv() on the communication channel, which is why an exception needs to be handled. A pre-check that the handle is open is not good enough, because the other process doing the kill or terminate may close the handle between your pre-check and the actual poll() or recv() call (a race condition).
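A minimal sketch of the kind of guard I mean (illustrative only, not an actual patch; the real fix would also need to mark the processor as done/failed):
```python
def poll_channel_safely(parent_channel):
    """Poll a multiprocessing channel while tolerating a concurrently closed handle."""
    try:
        return parent_channel.poll()
    except OSError:
        # The handle was closed by another process (e.g. the DAG file processor
        # was killed on timeout), so there is nothing left to read on this channel.
        return False
```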
### What you expected to happen
Here is the stack trace of the occurence I saw:
```
[2022-03-08 17:41:06,101] {taskinstance.py:914} DEBUG - <TaskInstance: staq_report_daily.gs.wait_staq_csv_file 2022-03-06 17:15:00+00:00 [running]> dependency 'Not In Retry Period' PASSED: True, The context specified that being in a retry period was permitted.
[2022-03-08 17:41:06,101] {taskinstance.py:904} DEBUG - Dependencies all met for <TaskInstance: staq_report_daily.gs.wait_staq_csv_file 2022-03-06 17:15:00+00:00 [running]>
[2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: gdai_gcs_sync> because no tasks in DAG have SLAs
[2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: unity_creative_import_process> because no tasks in DAG have SLAs
[2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: sales_dm_to_bq> because no tasks in DAG have SLAs
[2022-03-08 17:44:50,454] {settings.py:302} DEBUG - Disposing DB connection pool (PID 1902)
Process ForkProcess-1:
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/python3.8/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 370, in _run_processor_manager
processor_manager.start()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 610, in start
return self._run_parsing_loop()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 671, in _run_parsing_loop
self._collect_results_from_processor(processor)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 981, in _collect_results_from_processor
if processor.result is not None:
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/processor.py", line 321, in result
if not self.done:
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/processor.py", line 286, in done
if self._parent_channel.poll():
File "/opt/python3.8/lib/python3.8/multiprocessing/connection.py", line 255, in poll
self._check_closed()
File "/opt/python3.8/lib/python3.8/multiprocessing/connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
```
This corresponded in time to the following log entries:
```
% kubectl logs airflow-scheduler-58c997dd98-n8xr8 -c airflow-scheduler --previous | egrep 'Ran scheduling loop in|[[]heartbeat[]]'
[2022-03-08 17:40:47,586] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.56 seconds
[2022-03-08 17:40:49,146] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.56 seconds
[2022-03-08 17:40:50,675] {base_job.py:227} DEBUG - [heartbeat]
[2022-03-08 17:40:50,687] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.54 seconds
[2022-03-08 17:40:52,144] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.46 seconds
[2022-03-08 17:40:53,620] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.47 seconds
[2022-03-08 17:40:55,085] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.46 seconds
[2022-03-08 17:40:56,169] {base_job.py:227} DEBUG - [heartbeat]
[2022-03-08 17:40:56,180] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.49 seconds
[2022-03-08 17:40:57,667] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.49 seconds
[2022-03-08 17:40:59,148] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.48 seconds
[2022-03-08 17:41:00,618] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.47 seconds
[2022-03-08 17:41:01,742] {base_job.py:227} DEBUG - [heartbeat]
[2022-03-08 17:41:01,757] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.58 seconds
[2022-03-08 17:41:03,133] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.55 seconds
[2022-03-08 17:41:04,664] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.53 seconds
[2022-03-08 17:44:50,649] {base_job.py:227} DEBUG - [heartbeat]
[2022-03-08 17:44:50,814] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 225.15 seconds
```
You can see that when this exception occurred, there was a hang in the scheduler for almost 4 minutes, no scheduling loops, and no scheduler_job heartbeats.
This hang probably also caused stuck queued jobs as issue#13542 describes.
### How to reproduce
This is hard to reproduce because it is a race condition. But you might be able to reproduce it by putting top-level code in a dagfile that calls sleep, so that parsing takes longer than the core dag_file_processor_timeout setting. That would cause the parsing processes to be terminated, creating the conditions for this bug to occur.
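A hypothetical dagfile along those lines (it assumes dag_file_processor_timeout is set lower than the sleep):
```python
import time
from datetime import datetime

from airflow import DAG

time.sleep(120)  # top-level sleep: parsing exceeds the processor timeout and the processor gets killed

with DAG(dag_id="slow_to_parse", start_date=datetime(2022, 1, 1), schedule_interval=None):
    pass
```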
### Operating System
NAME="Ubuntu" VERSION="18.04.6 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.6 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic
### Versions of Apache Airflow Providers
Not relevant, this is a core dag_processing issue.
### Deployment
Composer
### Deployment details
"composer-1.17.6-airflow-2.1.4"
In order to isolate the scheduler on a separate machine, so as to avoid interference from other processes such as airflow-workers running on the same machine, we created an additional node pool for the scheduler and ran these k8s patches to move the scheduler onto it.
New node pool definition:
```HCL
{
name = "scheduler-pool"
machine_type = "n1-highcpu-8"
autoscaling = false
node_count = 1
disk_type = "pd-balanced"
disk_size = 64
image_type = "COS"
auto_repair = true
auto_upgrade = true
max_pods_per_node = 32
},
```
patch.sh
```sh
#!/bin/bash
if [ $# -lt 1 ]; then
echo "Usage: $0 namespace"
echo "Description: Isolate airflow-scheduler onto it's own node-pool (scheduler-pool)."
echo "Options:"
echo " namespace: kubernetes namespace used by Composer"
exit 1
fi
namespace=$1
set -eu
set -o pipefail
scheduler_patch="$(cat airflow-scheduler-patch.yaml)"
fluentd_patch="$(cat composer-fluentd-daemon-patch.yaml)"
set -x
kubectl -n default patch daemonset composer-fluentd-daemon -p "${fluentd_patch}"
kubectl -n ${namespace} patch deployment airflow-scheduler -p "${scheduler_patch}"
```
composer-fluentd-daemon-patch.yaml
```yaml
spec:
template:
spec:
nodeSelector: null
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cloud.google.com/gke-nodepool
operator: In
values:
- default-pool
- scheduler-pool
```
airflow-scheduler-patch.yaml
```yaml
spec:
template:
spec:
nodeSelector:
cloud.google.com/gke-nodepool: scheduler-pool
containers:
- name: gcs-syncd
resources:
limits:
memory: 2Gi
```
### Anything else
On the below checkbox about submitting a PR: I could submit one, but it would be untested code, as I don't really have the environment set up to test the patch.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22191 | https://github.com/apache/airflow/pull/22685 | c0c08b2bf23a54115f1ba5ac6bc8299f5aa54286 | 4a06f895bb2982ba9698b9f0cfeb26d1ff307fc3 | "2022-03-11T20:02:15Z" | python | "2022-04-06T20:42:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,174 | ["airflow/www/static/js/ti_log.js", "airflow/www/templates/airflow/ti_log.html"] | Support log download in task log view | ### Description
Support log downloading from the task log view by adding a download button in the UI.
### Use case/motivation
In the current version of Airflow, when we want to download a task try's log, we can click on the task node in Tree View or Graph view, and use the "Download" button in the task action modal, as in this screenshot:
<img width="752" alt="Screen Shot 2022-03-10 at 5 59 23 PM" src="https://user-images.githubusercontent.com/815701/157787811-feb7bdd4-4e32-4b85-b822-2d68662482e9.png">
It would make log downloading more convenient if we added a Download button in the task log view. This is a screenshot of the task log view; we could add a button to the right of the "Toggle Wrap" button.
<img width="1214" alt="Screen Shot 2022-03-10 at 5 55 53 PM" src="https://user-images.githubusercontent.com/815701/157788262-a4cb8ff7-b813-4140-b8a1-41a5d0630e1f.png">
I work on Airflow at Pinterest; internally we get such feature requests from our users. I'd like to get your thoughts about adding this feature before I create a PR for it. Thanks.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22174 | https://github.com/apache/airflow/pull/22804 | 6aa65a38e0be3fee18ae9c1541e6091a47ab1f76 | b29cbbdc1bbc290d67e64aa3a531caf2b9f6846b | "2022-03-11T02:08:11Z" | python | "2022-04-08T14:55:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,152 | ["airflow/models/abstractoperator.py", "airflow/models/dag.py", "airflow/models/taskinstance.py", "tests/models/test_dag.py", "tests/models/test_taskinstance.py"] | render_template_as_native_obj=True in DAG constructor prevents failure e-mail from sending | ### Apache Airflow version
2.2.4 (latest released)
### What happened
A DAG constructed with render_template_as_native_obj=True does not send an e-mail notification on task failure.
DAGs constructed without render_template_as_native_obj send e-mail notification as expected.
default_email_on_failure is set to True in airflow.cfg.
### What you expected to happen
I expect DAGs to send an e-mail alert on task failure.
Logs for failed tasks show this:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1767, in handle_failure
self.email_alert(error)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2101, in email_alert
subject, html_content, html_content_err = self.get_email_subject_content(exception)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2093, in get_email_subject_content
subject = render('subject_template', default_subject)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2091, in render
return render_template_to_string(jinja_env.from_string(content), jinja_context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 268, in render_template_to_string
return render_template(template, context, native=False)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 263, in render_template
return "".join(nodes)
TypeError: sequence item 1: expected str instance, TaskInstance found
```
### How to reproduce
1. Construct a DAG with render_template_as_native_obj=True with 'email_on_failure':True.
2. Cause an error in a task. I used a PythonOperator with `assert False`.
3. Task will fail, but no alert e-mail will be sent.
4. Remove render_template_as_native_obj=True from DAG constructor.
5. Re-run DAG
6. Task will fail and alert e-mail will be sent.
I used the following for testing:
```
import datetime
from airflow.operators.python_operator import PythonOperator
from airflow.models import DAG
default_args = {
'owner': 'me',
'start_date': datetime.datetime(2022,3,9),
'email_on_failure':True,
'email':'myemail@email.com'
}
dag = DAG(dag_id = 'dagname',
schedule_interval = '@once',
default_args = default_args,
render_template_as_native_obj = True, #comment this out to test
)
def testfunc(**kwargs):
#intentional error
assert False
task_testfunc = PythonOperator(
task_id = "task_testfunc",
python_callable=testfunc,
dag=dag)
task_testfunc
```
### Operating System
Red Hat Enterprise Linux Server 7.9 (Maipo)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22152 | https://github.com/apache/airflow/pull/22770 | 91832a42d8124b040073481fd93c54e9e64c2609 | d80d52acf14034b0adf00e45b0fbac6ac03ab593 | "2022-03-10T16:13:04Z" | python | "2022-04-07T08:48:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,141 | ["airflow/cli/commands/scheduler_command.py", "airflow/utils/cli.py", "docs/apache-airflow/howto/set-config.rst", "tests/cli/commands/test_scheduler_command.py"] | Dump configurations in airflow scheduler logs based on the config it reads | ### Description
We don't have any way to cross-verify the configs that the Airflow scheduler uses. It would be good to have them logged somewhere so that users can cross-verify them.
### Use case/motivation
How do you know for sure that the configs in airflow.cfg are being correctly parsed by the Airflow scheduler?
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22141 | https://github.com/apache/airflow/pull/22588 | c30ab6945ea0715889d32e38e943c899a32d5862 | 78586b45a0f6007ab6b94c35b33790a944856e5e | "2022-03-10T09:56:08Z" | python | "2022-04-04T12:05:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,129 | ["airflow/providers/google/cloud/operators/bigquery.py", "docs/apache-airflow-providers-google/operators/cloud/bigquery.rst", "tests/providers/google/cloud/operators/test_bigquery.py"] | Add autodetect arg in BQCreateExternalTable Operator | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.4 (latest released)
### Operating System
macOS Monterey
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
The autodetect parameter is missing from the create_external_table call in BigQueryCreateExternalTableOperator, because of which one cannot create external tables if schema files are missing.
See function on line 1140 in this [file](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py)
### What you expected to happen
If the autodetect argument is passed to the create_external_table function, then one can create external tables without specifying a schema for a CSV file, leveraging the automatic schema detection functionality provided by BigQuery.
### How to reproduce
Simply call BigQueryCreateExternalTableOperator in a DAG without specifying schema_fields or schema_object.
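A sketch of such a task (the operator arguments below are illustrative placeholders, not taken from a real setup):
```python
from airflow.providers.google.cloud.operators.bigquery import BigQueryCreateExternalTableOperator

create_external_table = BigQueryCreateExternalTableOperator(
    task_id="create_external_table",
    bucket="my-bucket",
    source_objects=["data/sample.csv"],
    destination_project_dataset_table="my-project.my_dataset.my_table",
    # No schema_fields / schema_object provided: this is where autodetect=True would be needed.
)
```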
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22129 | https://github.com/apache/airflow/pull/22710 | 215993b75d0b3a568b01a29e063e5dcdb3b963e1 | f9e18472c0c228fc3de7c883c7c3d26d7ee49e81 | "2022-03-09T18:12:30Z" | python | "2022-04-04T13:32:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,115 | ["airflow/providers/docker/hooks/docker.py", "airflow/providers/docker/operators/docker.py", "tests/providers/docker/hooks/test_docker.py", "tests/providers/docker/operators/test_docker.py", "tests/providers/docker/operators/test_docker_swarm.py"] | add timeout to DockerOperator | ### Body
`APIClient` has a [timeout](https://github.com/docker/docker-py/blob/b27faa62e792c141a5d20c4acdd240fdac7d282f/docker/api/client.py#L84) param which sets the default timeout for API calls. The package [default](https://github.com/docker/docker-py/blob/7779b84e87bea3bac3a32b3ec1511bc1bfaa44f1/docker/constants.py#L6) is 60 seconds, which may not be enough for some processes.
The needed fix is to allow setting `timeout` on the APIClient constructor in DockerHook and DockerOperator:
for example in:
https://github.com/apache/airflow/blob/05f3a309668288e03988fc4774f9c801974b63d0/airflow/providers/docker/hooks/docker.py#L89
Note that in some cases DockerOperator uses APIClient directly, for example in:
https://github.com/apache/airflow/blob/05f3a309668288e03988fc4774f9c801974b63d0/airflow/providers/docker/operators/docker.py#L390
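For reference, a minimal sketch of how docker-py itself accepts the timeout (this shows the underlying library API, not Airflow code):
```python
from docker import APIClient

# A longer timeout than docker-py's 60-second default, e.g. for slow pulls or long-running waits.
client = APIClient(base_url="unix://var/run/docker.sock", version="auto", timeout=300)
```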
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/22115 | https://github.com/apache/airflow/pull/22502 | 0d64d66ceab1c5da09b56bae5da339e2f608a2c4 | e1a42c4fc8a634852dd5ac5b16cade620851477f | "2022-03-09T12:19:41Z" | python | "2022-03-28T19:35:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,111 | ["Dockerfile", "Dockerfile.ci", "airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/ads/hooks/ads.py", "docs/apache-airflow-providers-google/index.rst", "setup.cfg", "setup.py", "tests/providers/google/ads/operators/test_ads.py"] | apache-airflow-providers-google uses deprecated Google Ads API V8 | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google = 6.4.0
### Apache Airflow version
2.1.3
### Operating System
Debian Buster
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
`apache-airflow-providers-google 6.4.0` has the requirement `google-ads >=12.0.0,<14.0.1`
The latest version of the Google Ads API supported by this is V8 - this was deprecated in November 2021, and is due to be disabled in April / May (see https://developers.google.com/google-ads/api/docs/sunset-dates)
### What you expected to happen
Update the requirements so that the provider uses a version of the Google Ads API which hasn't been deprecated.
At the moment, the only non-deprecated version is V10 - support for this was added in `google-ads==15.0.0`
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22111 | https://github.com/apache/airflow/pull/22965 | c92954418a21dcafcb0b87864ffcb77a67a707bb | c36bcc4c06c93dce11e2306a4aff66432bffd5a5 | "2022-03-09T10:05:51Z" | python | "2022-04-15T10:20:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,072 | ["airflow/decorators/python.py", "airflow/example_dags/example_python_operator.py", "airflow/example_dags/sql/sample.sql", "docs/apache-airflow/howto/operator/python.rst", "tests/api_connexion/endpoints/test_task_instance_endpoint.py", "tests/serialization/test_dag_serialization.py"] | @task decorator does not handle "templates_exts" argument correctly (files are not rendered) | ### Apache Airflow version
2.2.2
### What happened
When I use TaskFlow, and especially the `@task` decorator, to create a task, the template files are not correctly rendered: I still have the filename inside `templates_dict` instead of the templated file content, even though I supplied a valid `templates_exts` argument (at least one which works with `PythonOperator`).
I do not encounter the same issue using the old `PythonOperator` syntax.
### What you expected to happen
I expect the behavior of the `@task` decorator to be the same as that of the `PythonOperator` class, or at least to have a workaround explained in the documentation, such as other arguments I would need to provide.
### How to reproduce
Sample code below :
```python
import logging

from airflow.decorators import task
@task(templates_dict={"query": "sql/sample.sql"}, templates_exts=[".sql"])
def aggregate_logs(**kwargs):
logging.info("query: %s", str(kwargs["templates_dict"]["query"]))
```
Which returns `INFO - query: sql/sample.sql`.
However, if I use a `PythonOperator` with the old syntax it works :
```python
import logging

from airflow.operators.python import PythonOperator
def aggregate_logs(**kwargs):
logging.info("query: %s", str(kwargs["templates_dict"]["query"]))
aggregate_task = PythonOperator(
python_callable=aggregate_logs,
templates_dict={"query": "sql/sample.sql"},
templates_exts=[".sql"],
)
```
I get the entire templated file inside `kwargs["templates_dict"]["query"]`, not just its name.
### Operating System
AWS MWAA
### Versions of Apache Airflow Providers
Not sure if relevant here since I only use Airflow packages in the example.
### Deployment
MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22072 | https://github.com/apache/airflow/pull/26390 | 24d88e8feedcb11edc316f0d3f20f4ea54dc23b8 | 4bf0cb98724a2cf04aab6359881a87aeb9cec0ce | "2022-03-08T07:52:17Z" | python | "2022-09-19T20:37:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,065 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_mssql_to_gcs.py", "tests/providers/google/cloud/transfers/test_mysql_to_gcs.py", "tests/providers/google/cloud/transfers/test_oracle_to_gcs.py", "tests/providers/google/cloud/transfers/test_postgres_to_gcs.py", "tests/providers/google/cloud/transfers/test_presto_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_trino_to_gcs.py"] | DB To GCS Operations Should Return/Save Row Count | ### Description
All DB to GCS Operators should track the per-file and total row counts written, for metadata and validation purposes.
- Optionally, based on param, include the row count metadata as GCS file upload metadata.
- Always return row count data through XCom. Currently this operator has no return value.
### Use case/motivation
Currently, there is no way to check an uploaded file's row count without opening the file. Downstream operations should have access to this information, and allowing it to be saved as GCS metadata and returned through XCom makes it readily available for other uses.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22065 | https://github.com/apache/airflow/pull/24382 | 8e0bddaea69db4d175f03fa99951f6d82acee84d | 94257f48f4a3f123918b0d55c34753c7c413eb74 | "2022-03-07T23:36:35Z" | python | "2022-06-13T06:55:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,034 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py"] | BigQueryToGCSOperator: Invalid dataset ID error | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google==6.3.0`
### Apache Airflow version
2.2.3
### Operating System
Linux
### Deployment
Composer
### Deployment details
- Composer Environment version: `composer-2.0.3-airflow-2.2.3`
### What happened
When I used BigQueryToGCSOperator, I got the following error.
```
Invalid dataset ID "MY_PROJECT:MY_DATASET". Dataset IDs must be alphanumeric (plus underscores and dashes) and must be at most 1024 characters long.
```
### What you expected to happen
I guess that it is because I used a colon (`:`) as the separator between project_id and dataset_id in `source_project_dataset_table`.
I tried using a dot (`.`) as the separator and it worked.
However, the [documentation of BigQueryToGCSOperator](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/bigquery_to_gcs/index.html) states that it is possible to use a colon as the separator between project_id and dataset_id. In fact, at least until Airflow 1.10.15, it also worked with a colon separator.
In Airflow 1.10.*, the BigQuery hook split out project_id and dataset_id at the colon, but `apache-airflow-providers-google==6.3.0` doesn't have this logic.
https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/contrib/hooks/bigquery_hook.py#L2186-L2247
### How to reproduce
You can reproduce it with the following steps.
- Create a test DAG that executes BigQueryToGCSOperator in a Composer environment (`composer-2.0.3-airflow-2.2.3`).
- Give the `source_project_dataset_table` arg a source BigQuery table path in the format shown below.
- Trigger DAG.
```
source_project_dataset_table = 'PROJECT_ID:DATASET_ID.TABLE_ID'
```
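An illustrative task hitting this code path (identifiers are placeholders):
```python
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator

export_to_gcs = BigQueryToGCSOperator(
    task_id="bq_to_gcs",
    source_project_dataset_table="PROJECT_ID:DATASET_ID.TABLE_ID",  # colon separator triggers the error
    destination_cloud_storage_uris=["gs://my-bucket/export/part-*.csv"],
)
```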
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22034 | https://github.com/apache/airflow/pull/22506 | 02526b3f64d090e812ebaf3c37a23da2a3e3084e | 02976bef885a5da29a8be59b32af51edbf94466c | "2022-03-07T05:00:21Z" | python | "2022-03-27T20:21:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,015 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Allow showing more than 25 last DAG runs in the task duration view | ### Apache Airflow version
2.1.2
### What happened
The task duration view for triggered dags shows all dag runs instead of the last n. Changing the number of runs in the `Runs` drop-down menu doesn't change the view. Additionally, the chart loads slowly, since it shows all dag runs.
![screenshot_2022-03-05_170203](https://user-images.githubusercontent.com/7863204/156891332-5e461661-970d-49bf-82a8-5dd1da57bb02.png)
### What you expected to happen
The number of shown dag runs is 25 (like for scheduled dags), and the last runs are shown. The number-of-runs button should allow increasing/decreasing the number of shown dag runs (and correspondingly the task times of the dag runs).
### How to reproduce
Trigger a dag multiple (> 25) times. Look at the "Task Duration" chart.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22015 | https://github.com/apache/airflow/pull/29195 | de2889c2e9779177363d6b87dc9020bf210fdd72 | 8b8552f5c4111fe0732067d7af06aa5285498a79 | "2022-03-05T16:16:48Z" | python | "2023-02-25T21:50:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,007 | ["airflow/api_connexion/endpoints/variable_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/variable_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_variable_endpoint.py"] | Add Variable, Connection "description" fields available in the API | ### Description
I'd like to get the "description" field from the variable, and connection table available through the REST API for the calls:
1. /variables/{key}
2. /connections/{conn_id}
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22007 | https://github.com/apache/airflow/pull/25370 | faf3c4fe474733965ab301465f695e3cc311169c | 98f16aa7f3b577022791494e13b6aa7057afde9d | "2022-03-04T22:55:04Z" | python | "2022-08-02T21:05:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,996 | ["airflow/providers/ftp/hooks/ftp.py", "airflow/providers/sftp/hooks/sftp.py", "tests/providers/ftp/hooks/test_ftp.py"] | Add test_connection to FTP Hook | ### Description
I would like to test whether FTP connections are properly set up.
### Use case/motivation
To test FTP connections via the Airflow UI, similarly to https://github.com/apache/airflow/pull/19609
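A rough sketch of what such a method on `FTPHook` could look like (an assumed shape, following the `(success, message)` convention used by other hooks' `test_connection`):
```python
def test_connection(self):
    """Test the FTP connection by issuing a cheap command on the server."""
    try:
        conn = self.get_conn()
        conn.pwd()  # harmless call that fails if the login/connection is broken
        return True, "Connection successfully tested"
    except Exception as e:
        return False, str(e)
```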
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21996 | https://github.com/apache/airflow/pull/21997 | a9b7dd69008710f1e5b188e4f8bc2d09a5136776 | 26e8d6d7664bbaae717438bdb41766550ff57e4f | "2022-03-04T15:09:39Z" | python | "2022-03-06T10:16:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,989 | ["airflow/providers/google/cloud/operators/dataflow.py", "tests/providers/google/cloud/operators/test_dataflow.py"] | Potential Bug in DataFlowCreateJavaJobOperator | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.3
### Operating System
mac
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
Passing anything other than a GCS bucket path to the `jar` argument results in the job never being started on Dataflow.
### What you expected to happen
Passing in a local path to a jar should result in a job starting on Dataflow.
### How to reproduce
Create a task using the DataFlowCreateJavaJobOperator and pass in a non-GCS path to the `jar` argument.
### Anything else
It's probably an indentation error in this [file](https://github.com/apache/airflow/blob/17d3e78e1b4011267e81846b5d496769934a5bcc/airflow/providers/google/cloud/operators/dataflow.py#L413) starting on line 413. The code for starting the job is over-indented, which causes any non-GCS path for the `jar` to be effectively ignored.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21989 | https://github.com/apache/airflow/pull/22302 | 43dfec31521dcfcb45b95e2927d7c5eb5efa2c67 | a3ffbee7c9b5cd8cc5b7b246116f0254f1daa505 | "2022-03-04T10:31:02Z" | python | "2022-03-20T11:12:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,987 | ["airflow/providers/amazon/aws/hooks/s3.py"] | Airflow S3 connection name | ### Description
Hi,
I took a look at some issues and PRs and noticed that the `Elastic MapReduce` connection name was changed to `Amazon Elastic MapReduce` lately. [#20746](https://github.com/apache/airflow/issues/20746)
I think it would be much more intuitive if the connection name `S3` were changed to `Amazon S3`, and it would look better in the connection list in the web UI (also, it is the official name of S3).
Finally, AWS connections would be the followings:
```
Amazon Web Services
Amazon Redshift
Amazon Elastic MapReduce
Amazon S3
```
Would you mind assigning me the PR to change it to `Amazon S3`?
It would be a great start for my Airflow contribution journey.
Thank you!
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21987 | https://github.com/apache/airflow/pull/21988 | 2b4d14696b3f32bc5ab71884a6e434887755e5a3 | 9ce45ff756fa825bd363a5a00c2333d91c60c012 | "2022-03-04T07:38:44Z" | python | "2022-03-04T17:25:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,978 | ["airflow/providers/google/cloud/hooks/gcs.py", "tests/providers/google/cloud/hooks/test_gcs.py"] | Add Metadata Upload Support to GCSHook Upload Method | ### Description
When uploading a file using the GCSHook upload method, allow optional metadata to be uploaded with the file. This metadata would then be visible in the blob properties in GCS.
### Use case/motivation
Being able to associate metadata with a GCS blob is very useful for tracking information about the data stored in the GCS blob.
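A sketch of what the desired call could look like (the `metadata` argument is the proposed addition and does not exist on the hook yet; the other values are placeholders):
```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")
hook.upload(
    bucket_name="my-bucket",
    object_name="data/report.csv",
    filename="/tmp/report.csv",
    metadata={"source": "daily-export", "row_count": "1234"},  # proposed parameter
)
```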
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21978 | https://github.com/apache/airflow/pull/22058 | c1faaf3745dd631d4491202ed245cf8190f35697 | a11d831e3f978826d75e62bd70304c5277a8a1ea | "2022-03-03T21:08:50Z" | python | "2022-03-07T22:28:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,970 | ["docs/helm-chart/manage-dags-files.rst"] | helm chart - mounting-dags-from-a-private-github-repo-using-git-sync-sidecar | ### Describe the issue with documentation
doc link: https://airflow.apache.org/docs/helm-chart/stable/manage-dags-files.html#mounting-dags-from-a-private-github-repo-using-git-sync-sidecar
doc location:
"""
[...]
repo: ssh://git@github.com/<username>/<private-repo-name>.git
[...]
"""
I literally spent one working day making the helm deployment work with the git sync feature.
I prefixed my ssh git repo url with "ssh://" as written in the doc. This resulted in the git-sync container being stuck in a CrashLoopBackOff.
### How to solve the problem
Only when I removed the prefix did it work correctly.
### Anything else
chart version: 1.4.0
git-sync image tag: v3.1.6 (default v3.3.0)
Maybe the reason for the issue is the changed image tag. However, I want to share my experience: maybe the doc is misleading. For me it was.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21970 | https://github.com/apache/airflow/pull/26632 | 5560a46bfe8a14205c5e8a14f0b5c2ae74ee100c | 05d351b9694c3e25843a8b0e548b07a70a673288 | "2022-03-03T16:10:41Z" | python | "2022-09-27T13:05:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,941 | ["airflow/providers/amazon/aws/hooks/sagemaker.py", "airflow/providers/amazon/aws/operators/sagemaker.py", "tests/providers/amazon/aws/operators/test_sagemaker_transform.py"] | Sagemaker Transform Job fails if there are job with Same name | ### Description
A SageMaker Transform job fails if a job with the same name already exists. Let's say I create a job named 'transform-2021-01-01T00-30-00'. If I clear the Airflow task instance so that the operator re-triggers, the SageMaker job creation fails because a job with the same name exists. So can we add an `action_if_job_exists` flag defining the behaviour if the job name already exists, with possible options "increment" (default) and "fail"?
### Use case/motivation
In a production environment failures are inevitable, and with SageMaker jobs we have to ensure a unique name for each run of the job. Like the SageMaker Processing and Training operators, we should have an option to increment a job name by appending a count: if I run the same job twice, the job name becomes 'transform-2021-01-01T00-30-00-1', where 1 is appended at the end with the help of 'action_if_job_exists ([str](https://docs.python.org/3/library/stdtypes.html#str)) -- Behaviour if the job name already exists. Possible options are "increment" (default) and "fail".'
I have faced this issue personally on one of the tasks I am working on, and I think this would save time and cost: instead of running the entire workflow again just to get unique job names when there are other dependent tasks, one could simply clear the failed task instance after fixing the failure (in the SageMaker code, Docker image, input, etc.) and continue from where it failed.
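A sketch of the requested usage (the `action_if_job_exists` argument does not exist on `SageMakerTransformOperator` yet, the import path assumes a recent Amazon provider, and the config dict is a truncated placeholder):
```python
from airflow.providers.amazon.aws.operators.sagemaker import SageMakerTransformOperator

transform_config = {"TransformJobName": "transform-2021-01-01T00-30-00"}  # placeholder config

transform = SageMakerTransformOperator(
    task_id="transform",
    config=transform_config,
    aws_conn_id="aws_default",
    action_if_job_exists="increment",  # proposed: append -1, -2, ... on a name clash
)
```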
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21941 | https://github.com/apache/airflow/pull/25263 | 1fd702e5e55cabb40fe7e480bc47e70d9a036944 | 007b1920ddcee1d78f871d039a6ed8f4d0d4089d | "2022-03-02T13:52:31Z" | python | "2022-08-02T18:20:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,929 | ["airflow/providers/elasticsearch/hooks/elasticsearch.py", "docs/apache-airflow-providers-elasticsearch/hooks/elasticsearch_python_hook.rst", "docs/apache-airflow-providers-elasticsearch/hooks/elasticsearch_sql_hook.rst", "docs/apache-airflow-providers-elasticsearch/hooks/index.rst", "docs/apache-airflow-providers-elasticsearch/index.rst", "tests/always/test_project_structure.py", "tests/providers/elasticsearch/hooks/test_elasticsearch.py", "tests/system/providers/elasticsearch/example_elasticsearch_query.py"] | Elasticsearch hook support DSL | ### Description
The current Elasticsearch provider hook does not support querying with the DSL. Can we implement some methods that accept user-supplied JSON and return query results?
BTW, why is the current `ElasticsearchHook`'s parent class `DbApiHook`? I thought `DbApiHook` was for relational databases that support `sqlalchemy`.
### Use case/motivation
I think the Elasticsearch provider hook should be like `MongoHook`: inherit from `BaseHook` and provide more useful methods that work out of the box.
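A rough sketch of the kind of method being requested (not an existing Airflow API; it simply wraps the `elasticsearch` client's `search` call):
```python
from elasticsearch import Elasticsearch


def search_with_dsl(hosts, index, query):
    """Run a raw DSL (JSON) query and return the matching hits."""
    es = Elasticsearch(hosts)
    response = es.search(index=index, body=query)
    return response["hits"]["hits"]


# e.g. search_with_dsl(["http://localhost:9200"], "logs-*",
#                      {"query": {"match": {"level": "ERROR"}}})
```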
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21929 | https://github.com/apache/airflow/pull/24895 | 33b2cd8784dcbc626f79e2df432ad979727c9a08 | 2ddc1004050464c112c18fee81b03f87a7a11610 | "2022-03-02T08:38:07Z" | python | "2022-07-08T21:51:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,923 | ["airflow/api/common/trigger_dag.py", "airflow/jobs/scheduler_job.py", "airflow/models/dag.py", "airflow/models/dagrun.py", "airflow/timetables/base.py", "airflow/utils/types.py", "docs/apache-airflow/howto/timetable.rst", "tests/models/test_dag.py"] | Programmatic customization of run_id for scheduled DagRuns | ### Description
Allow DAG authors to control how `run_id`s are generated for created DagRuns. Currently the only way to specify a DagRun's `run_id` is through the manual trigger workflow, either through the CLI or the API, by passing in `run_id`. It would be great if DAG authors were able to write custom logic to generate `run_id`s from scheduled `DagRunInterval`s.
### Use case/motivation
In Airflow 1.x, the semantics of `execution_date` were burdensome enough for users that DAG authors would subclass DAG to override `create_dagrun` so that when new DagRuns were created, they were created with `run_id`s that provided context about the semantics of the DagRun. For example,
```
def create_dagrun(self, **kwargs):
kwargs['run_id'] = kwargs['execution_date'] + self.following_schedule(kwargs['execution_date']).date()
return super().create_dagrun(kwargs)
```
would result in the UI DagRun dropdown to display the weekday of when the Dag actually ran.
<img width="528" alt="image001" src="https://user-images.githubusercontent.com/9851473/156280393-e261d7fa-dfe0-41db-9887-941510f4070f.png">
After upgrading to Airflow 2.0 and with Dag serialization in the scheduler overridden methods are no longer there in the SerializedDAG, so we are back to having `scheduled__<execution_date>` values in the UI dropdown. It would be great if some functionality could be exposed either through the DAG or just in the UI to display meaningful values in the DagRun dropdown.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21923 | https://github.com/apache/airflow/pull/25795 | 5c48ed19bd3b554f9c3e881a4d9eb61eeba4295b | 0254f30a5a90f0c3104782525fabdcfdc6d3b7df | "2022-03-02T02:02:30Z" | python | "2022-08-19T13:15:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,908 | ["airflow/api/common/mark_tasks.py", "airflow/www/views.py", "tests/api/common/test_mark_tasks.py"] | DAG will be set back to Running state after being marked Failed if there are scheduled tasks | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I was running a large DAG with a limited concurrency and wanted to cancel the current run. I marked the run as `Failed` via the UI which terminated all running tasks and marked them as `Failed`.
However, a few seconds later the run was set back to Running and other tasks started to execute.
<img width="1235" alt="image" src="https://user-images.githubusercontent.com/16950874/156228662-e06dd71a-e8ef-4cdd-b958-5ddefa1d5328.png">
I think that this is because of two things happening:
1) Marking a run as failed will only stop the currently running tasks and mark them as failed; it does nothing to tasks in the `scheduled` state:
https://github.com/apache/airflow/blob/0cd3b11f3a5c406fbbd4433d8e44d326086db634/airflow/api/common/mark_tasks.py#L455-L462
2) During scheduling, a DAG with tasks in non-finished states will be marked as `Running`:
https://github.com/apache/airflow/blob/feea143af9b1db3b1f8cd8d29677f0b2b2ab757a/airflow/models/dagrun.py#L583-L585
I'm assuming that this is unintended behaviour; is that correct?
### What you expected to happen
I think that marking a DAG as failed should cause the run to stop (and not be resumed) regardless of the state of its tasks.
When a DAGRun is marked failed, we should:
- mark all running tasks failed
- **mark all non-finished tasks as skipped**
- mark the DagRun as `failed`
This is consistent with the behaviour from a DagRun time out.
### How to reproduce
Run this DAG:
```python
from datetime import timedelta
from airflow.models import DAG
from airflow.operators.bash_operator import BashOperator
from airflow import utils
dag = DAG(
'cant-be-stopped',
start_date=utils.dates.days_ago(1),
max_active_runs=1,
dagrun_timeout=timedelta(minutes=60),
schedule_interval=None,
concurrency=1
)
for i in range(5):
task = BashOperator(
task_id=f'task_{i}',
bash_command='sleep 300',
retries=0,
dag=dag,
)
```
And once the first task is running, mark the run as failed. After the next scheduler loop the run will be set back to running and a different task will be started.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
Noticed this in Airflow 2.2.2 but replicated in a Breeze environment on the latest main.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21908 | https://github.com/apache/airflow/pull/22410 | e97953ad871dc0078753c668680cce8096a31e32 | becbb4ab443995b21d783cadfba7fbfdf3b1530d | "2022-03-01T18:39:28Z" | python | "2022-03-31T17:09:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,897 | ["docs/apache-airflow/logging-monitoring/metrics.rst"] | Metrics documentation incorrectly lists dag_processing.processor_timeouts as a gauge | ### Describe the issue with documentation
According to the [documentation](https://airflow.apache.org/docs/apache-airflow/2.2.4/logging-monitoring/metrics.html), `dag_processing.processor_timeouts` is a gauge.
However, checking the code, it appears to be a counter: https://github.com/apache/airflow/blob/3035d3ab1629d56f3c1084283bed5a9c43258e90/airflow/dag_processing/manager.py#L1004
### How to solve the problem
Move the metric to the counter section.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21897 | https://github.com/apache/airflow/pull/23393 | 82c244f9c7f24735ee952951bcb5add45422d186 | fcfaa8307ac410283f1270a0df9e557570e5ffd3 | "2022-03-01T13:40:37Z" | python | "2022-05-08T21:11:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,891 | ["airflow/providers/apache/hive/provider.yaml", "setup.py", "tests/providers/apache/hive/hooks/test_hive.py", "tests/providers/apache/hive/transfers/test_hive_to_mysql.py", "tests/providers/apache/hive/transfers/test_hive_to_samba.py", "tests/providers/apache/hive/transfers/test_mssql_to_hive.py", "tests/providers/apache/hive/transfers/test_mysql_to_hive.py"] | hive provider support for python 3.9 | ### Apache Airflow Provider(s)
apache-hive
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.4 (latest released)
### Operating System
Debian “buster”
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
The Hive provider cannot be used in the Python 3.9 Airflow image without also manually installing the `PyHive`, `sasl`, and `thrift-sasl` Python libraries.
### What you expected to happen
The Hive provider can be used in the Python 3.9 Airflow image after installing only the Hive provider.
### How to reproduce
_No response_
### Anything else
It looks like Python 3.9 support was removed from the Hive provider in https://github.com/apache/airflow/pull/15515#issuecomment-860264240, because the `sasl` library did not support Python 3.9. However, Python 3.9 is now supported in `sasl`: https://github.com/cloudera/python-sasl/issues/21#issuecomment-865914647.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21891 | https://github.com/apache/airflow/pull/21893 | 76899696fa00c9f267316f27e088852556ebcccf | 563ecfa0539f5cbd42a715de0e25e563bd62c179 | "2022-03-01T10:27:55Z" | python | "2022-03-01T22:16:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,832 | ["airflow/decorators/__init__.pyi", "airflow/providers/docker/decorators/docker.py", "airflow/providers/docker/example_dags/example_docker.py"] | Unmarshalling of function with '@task.docker()' decorator fails if 'python' alias is not defined in image | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I am using the Airflow 2.2.4 Docker image to run a DAG, `test_dag.py`, defined as follows:
```
from airflow.decorators import dag, task
from airflow.utils import dates
@dag(schedule_interval=None,
start_date=dates.days_ago(1),
catchup=False)
def test_dag():
@task.docker(image='company/my-repo',
api_version='auto',
docker_url='tcp://docker-socket-proxy:2375/',
auto_remove=True)
def docker_task(inp):
print(inp)
return inp+1
@task.python()
def python_task(inp):
print(inp)
out = docker_task(10)
python_task(out)
_ = test_dag()
```
The Dockerfile for 'company/my-repo' is as follows:
```
FROM nvidia/cuda:11.2.2-runtime-ubuntu20.04
USER root
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python3 python3-pip
```
### What you expected to happen
I expected the DAG logs for `docker_task()` and `python_task()` to have 10 and 11 as output respectively.
Instead, the internal Airflow unmarshaller that is supposed to unpickle the function definition of `docker_task()` inside the container of image `company/my-repo` (passed via the `__PYTHON_SCRIPT` environment variable) and run it makes an **incorrect assumption** that the symbol `python` is defined as an alias for either `/usr/bin/python2` or `/usr/bin/python3`. Most Linux Python installations require that users explicitly specify either `python2` or `python3` when running their scripts, and `python` is NOT defined even when `python3` is installed via the aptitude package manager.
This error can be resolved for now by adding the following to the `Dockerfile` after the python3 package installation:
`RUN apt-get install -y python-is-python3`
But this should NOT be a requirement.
`Dockerfile`s using base python images do not suffer from this problem as they have the alias `python` defined.
The error logged is:
```
[2022-02-26, 11:30:47 UTC] {docker.py:258} INFO - Starting docker container from image company/my-repo
[2022-02-26, 11:30:48 UTC] {docker.py:320} INFO - + python -c 'import base64, os;x = base64.b64decode(os.environ["__PYTHON_SCRIPT"]);f = open("/tmp/script.py", "wb"); f.write(x);'
[2022-02-26, 11:30:48 UTC] {docker.py:320} INFO - bash: python: command not found
[2022-02-26, 11:30:48 UTC] {taskinstance.py:1700} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1455, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1511, in _execute_task
result = execute_callable(context=context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/docker/decorators/docker.py", line 117, in execute
return super().execute(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/decorators/base.py", line 134, in execute
return_value = super().execute(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/docker/operators/docker.py", line 390, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/docker/operators/docker.py", line 265, in _run_image
return self._run_image_with_mounts(self.mounts + [tmp_mount], add_tmp_variable=True)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/docker/operators/docker.py", line 324, in _run_image_with_mounts
raise AirflowException('docker container failed: ' + repr(result) + f"lines {res_lines}")
airflow.exceptions.AirflowException: docker container failed: {'Error': None, 'StatusCode': 127}lines + python -c 'import base64, os;x = base64.b64decode(os.environ["__PYTHON_SCRIPT"]);f = open("/tmp/script.py", "wb"); f.write(x);'
bash: python: command not found
```
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04 WSL 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21832 | https://github.com/apache/airflow/pull/21973 | 73c6bf08780780ca5a318e74902cb05ba006e3ba | 188ac519964c6b6acf9d6ab144e7ff7e5538547c | "2022-02-26T12:18:44Z" | python | "2022-03-07T01:45:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,808 | ["airflow/providers/amazon/aws/operators/sagemaker.py", "tests/providers/amazon/aws/operators/test_sagemaker_base.py"] | Add default 'aws_conn_id' to SageMaker Operators | The SageMaker Operators not having a default value for `aws_conn_id` is a pain; we should fix that. See the EKS operators for an example: https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/eks.py
_Originally posted by @ferruzzi in https://github.com/apache/airflow/pull/21673#discussion_r813414043_ | https://github.com/apache/airflow/issues/21808 | https://github.com/apache/airflow/pull/23515 | 828016747ac06f6fb2564c07bb8be92246f42567 | 5d1e6ff19ab4a63259a2c5aed02b601ca055a289 | "2022-02-24T22:58:10Z" | python | "2022-05-09T17:36:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,807 | ["UPDATING.md", "airflow/exceptions.py", "airflow/models/mappedoperator.py", "airflow/sensors/base.py", "airflow/ti_deps/deps/ready_to_reschedule.py", "tests/sensors/test_base.py", "tests/serialization/test_dag_serialization.py"] | Dynamically mapped sensors throw TypeError at DAG parse time | ### Apache Airflow version
main (development)
### What happened
Here's a DAG:
```python3
from datetime import datetime
from airflow import DAG
from airflow.sensors.date_time import DateTimeSensor
template = "{{{{ ti.start_date + macros.timedelta(seconds={}) }}}}"
with DAG(
    dag_id="datetime_mapped",
    start_date=datetime(1970, 1, 1),
) as dag:

    @dag.task
    def get_sleeps():
        return [30, 60, 90]

    @dag.task
    def dt_templates(sleeps):
        return [template.format(s) for s in sleeps]

    templates_xcomarg = dt_templates(get_sleeps())

    DateTimeSensor.partial(task_id="sleep", mode="reschedule").apply(
        target_time=templates_xcomarg
    )
```
I wanted to see if it would parse, so I ran:
```
$ python dags/the_dag.py
```
And I got this error:
```
Traceback (most recent call last):
File "/Users/matt/2022/02/22/dags/datetime_mapped.py", line 23, in <module>
DateTimeSensor.partial(task_id="sleep", mode="reschedule").apply(
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 203, in apply
deps=MappedOperator.deps_for(self.operator_class),
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 287, in deps_for
return operator_class.deps | {MappedTaskIsExpanded()}
TypeError: unsupported operand type(s) for |: 'property' and 'set'
Exception ignored in: <function OperatorPartial.__del__ at 0x10ed90160>
Traceback (most recent call last):
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 182, in __del__
warnings.warn(f"{self!r} was never mapped!")
File "/usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/warnings.py", line 109, in _showwarnmsg
sw(msg.message, msg.category, msg.filename, msg.lineno,
File "/Users/matt/src/airflow/airflow/settings.py", line 115, in custom_show_warning
from rich.markup import escape
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 982, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 925, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1414, in find_spec
File "<frozen importlib._bootstrap_external>", line 1380, in _get_spec
TypeError: 'NoneType' object is not iterable
```
### What you expected to happen
No errors. Instead a dag with three parallel sensors.
### How to reproduce
Try to use the DAG shown above.
### Operating System
macOS Big Sur
### Versions of Apache Airflow Providers
N/A
### Deployment
Virtualenv installation
### Deployment details
version: 2.3.0.dev0
cloned at: [8ee8f2b34](https://github.com/apache/airflow/commit/8ee8f2b34b8df168a4d3f2664a9f418469079723)
### Anything else
A comment from @ashb about this
> We're assuming we can call deps on the class. Which we can for everything but a sensor.
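To illustrate that comment with a heavily simplified sketch (not the real class bodies): `deps` is a plain class-level set on the base operator, but a property on the sensor base, so accessing it on the *class* during mapping yields a `property` object and the `|` union fails:
```python
class FakeBaseOperator:
    deps = frozenset({"trigger_rule_dep"})

class FakeBaseSensorOperator(FakeBaseOperator):
    @property
    def deps(self):
        # reschedule mode adds an extra dep, but only at *instance* level
        return FakeBaseOperator.deps | {"ready_to_reschedule_dep"}

print(type(FakeBaseOperator.deps))        # <class 'frozenset'> -> `deps | {...}` works
print(type(FakeBaseSensorOperator.deps))  # <class 'property'>  -> TypeError on `|`
```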
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21807 | https://github.com/apache/airflow/pull/21815 | 2c57ad4ff9ddde8102c62f2e25c2a2e82cceb3e7 | 8b276c6fc191254d96451958609faf81db994b94 | "2022-02-24T22:29:12Z" | python | "2022-03-02T14:19:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,801 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | Error in creating external table using GCSToBigQueryOperator when autodetect=True | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I was trying to create an external table for a CSV file in GCS using the GCSToBigQueryOperator with autodetect=True but ran into some issues. The error stated that either schema field or schema object must be mentioned for creating an external table configuration. On close inspection of the code, I found out that the operator cannot autodetect the schema of the file.
In the [file](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py), a piece of code seems to be missing when calling the create_external_table function at line 262.
This must be an oversight but it **prevents the creation of an external table with an automatically deduced schema.**
The **solution** is to pass autodetect=self.autodetect when calling the create_external_table function as mentioned below:
    if self.external_table:
        [...]
        autodetect=self.autodetect,
        [...]
### What you expected to happen
The operator should have autodetected the schema of the CSV file and created an external table, but it threw an error stating that either a schema field or a schema object must be mentioned for creating the external table configuration.
This error is due to the fact that the value of autodetect is not being passed when calling the create_external_table function in this [file](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py) at line 262. Also, the default value of autodetect is False in create_external_table, so the error occurs because the function receives neither an autodetect, schema_fields, nor schema_object value.
### How to reproduce
The above issue can be reproduced by calling the GCSToBigQueryOperator with the following parameters as follows:

    create_external_table = GCSToBigQueryOperator(
        task_id = <task_id>,
        bucket = <bucket_name>,
        source_objects = [<gcs path excluding bucket name to csv file>],
        destination_project_dataset_table = <project_id>.<dataset_name>.<table_name>,
        schema_fields=None,
        schema_object=None,
        source_format='CSV',
        autodetect = True,
        external_table=True,
        dag = dag
    )

    create_external_table
### Operating System
macOS Monterey 12.2.1
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21801 | https://github.com/apache/airflow/pull/21944 | 26e8d6d7664bbaae717438bdb41766550ff57e4f | 9020b3a89d4572572c50d6ac0f1724e09092e0b5 | "2022-02-24T18:07:25Z" | python | "2022-03-06T10:25:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,768 | ["airflow/models/baseoperator.py", "airflow/models/dag.py"] | raise TypeError when default_args not a dictionary | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When triggering this dag below it runs when it should fail. A set is being passed to default_args instead of what should be a dictionary yet the dag still succeeds.
### What you expected to happen
I expected the dag to fail as the default_args parameter should only be a dictionary.
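A minimal sketch (not an actual patch, helper name is a placeholder) of the kind of up-front check I have in mind:
```python
def validate_default_args(default_args):
    """Sketch only: reject non-dict default_args instead of silently accepting a set."""
    if default_args is not None and not isinstance(default_args, dict):
        raise TypeError(f"default_args must be a dict, got {type(default_args).__name__}")
    return default_args

validate_default_args({"owner": "airflow"})  # ok
validate_default_args({"owner: airflow"})    # raises TypeError (this literal is a set)
```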
### How to reproduce
```
from airflow.models import DAG
from airflow.operators.python import PythonVirtualenvOperator, PythonOperator
from airflow.utils.dates import days_ago
def callable1():
    pass

with DAG(
    dag_id="virtualenv_python_operator",
    default_args={"owner: airflow"},
    schedule_interval=None,
    start_date=days_ago(2),
    tags=["core"],
) as dag:
    task = PythonOperator(
        task_id="check_errors",
        python_callable=callable1,
    )
```
### Operating System
Docker (debian:buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro CLI with images:
- quay.io/astronomer/ap-airflow-dev:2.2.4-1-onbuild
- quay.io/astronomer/ap-airflow-dev:2.2.3-2
- quay.io/astronomer/ap-airflow-dev:2.2.0-5-buster-onbuild
### Anything else
Bug happens every time.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21768 | https://github.com/apache/airflow/pull/21809 | 7724a5a2ec9531f03497a259c4cd7823cdea5e0c | 7be204190d6079e49281247d3e2c644535932925 | "2022-02-23T18:50:03Z" | python | "2022-03-07T00:18:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,752 | ["airflow/cli/cli_parser.py"] | triggerer `--capacity` parameter does not work | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When you run `airflow triggerer --capacity 1000`, you get the following error:
```
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/bin/airflow", line 8, in <module>
sys.exit(main())
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/cli/commands/triggerer_command.py", line 34, in triggerer
job = TriggererJob(capacity=args.capacity)
File "<string>", line 4, in __init__
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/sqlalchemy/orm/state.py", line 433, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.raise_(
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/sqlalchemy/orm/state.py", line 430, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 63, in __init__
raise ValueError(f"Capacity number {capacity} is invalid")
ValueError: Capacity number 1000 is invalid
```
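For what it's worth, a simplified sketch (only a rough mirror of the real check) of why a perfectly valid number is rejected: argparse hands `--capacity` through as a *string* unless the argument declares a `type=` converter, and the triggerer job only accepts real ints:
```python
capacity = "1000"  # what argparse delivers when the CLI arg has no type=int converter

if capacity is None:
    capacity = 1000
elif isinstance(capacity, int) and capacity > 0:
    pass
else:
    raise ValueError(f"Capacity number {capacity} is invalid")  # hit for the string "1000"
```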
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Operating System
Linux / Mac
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21752 | https://github.com/apache/airflow/pull/21753 | 169b196d189242c4f7d26bf1fa4dd5a8b5da12d4 | 9076b67c05cdba23e8fa51ebe5ad7f7d53e1c2ba | "2022-02-23T07:02:13Z" | python | "2022-02-23T10:20:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,711 | [".github/actions/configure-aws-credentials", "airflow/providers/amazon/aws/hooks/sagemaker.py", "airflow/providers/amazon/aws/operators/sagemaker.py", "tests/providers/amazon/aws/hooks/test_sagemaker.py", "tests/providers/amazon/aws/operators/test_sagemaker_processing.py"] | SageMakerProcessingOperator does not honor action_if_job_exists | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon | 2.4.0
### Apache Airflow version
2.2.3 (latest released)
### Operating System
Amazon Linux 2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
Sagemaker Processing Operator no longer honors the action_if_job_exists param and always fails creation of a new processing job if a job with the name already exists.
This happens because in a recent change, the function responsible for executing the job no longer honors the increment setting:
Change that breaks the increment: https://github.com/apache/airflow/commit/96dd70348ad7e31cfeae6d21af70671b41551fe9
New code: https://github.com/apache/airflow/blob/6734eb1d09a99dc519e89a59e2086cef09a87098/airflow/providers/amazon/aws/operators/sagemaker.py#L167
### What you expected to happen
When Sagemaker Processing operator is called with a job-name that already exists, the job creation should succeed with a name that is incremented by 1.
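For illustration, a sketch (not the provider's actual code) of the "increment" behaviour I'd expect before the job is submitted:
```python
def unique_job_name(base_name: str, existing_names: set) -> str:
    """Sketch only: append a numeric suffix when the requested name is already taken."""
    if base_name not in existing_names:
        return base_name
    suffix = 1
    while f"{base_name}-{suffix}" in existing_names:
        suffix += 1
    return f"{base_name}-{suffix}"

print(unique_job_name("my-processing-job", {"my-processing-job"}))  # my-processing-job-1
```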
### How to reproduce
invoke SageMakerProcessingOperator twice with the same job name while keeping action_if_job_exists as 'increment'.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21711 | https://github.com/apache/airflow/pull/27456 | e8ab8ccc0e7b82efc0dbf8bd31e0bbf57b1d5637 | 9f9ab3021800b5cebbf9c7190716ab753a020dbe | "2022-02-21T10:11:39Z" | python | "2022-11-11T00:28:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,672 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "docs/apache-airflow-providers-amazon/connections/aws.rst", "tests/providers/amazon/aws/hooks/test_base_aws.py"] | [AWS] Configurable AWS SessionFactory for AwsBaseHook | ### Description
Add support for custom federated AWS access through a configurable [SessionFactory](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/base_aws.py#L424).
Users will have the option to use their own `SessionFactory` implementation that can provide AWS credentials through an external process.
### Use case/motivation
Some companies use custom [federated AWS access](https://aws.amazon.com/identity/federation/) to AWS services. It corresponds to the [credential_process](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html) option in the AWS configuration.
While the hook currently supports the AWS profile option, I think it would be great if we could add this support on the hook directly, without involving any AWS configuration files on worker nodes.
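As a rough sketch of what I mean (the hook-side wiring and the helper command are assumptions, not existing APIs), the hook would delegate session creation to something following the same contract as `credential_process`:
```python
import json
import subprocess

import boto3

def federated_session(region_name: str) -> boto3.session.Session:
    # "my-credential-helper" is a placeholder for an external process that prints
    # the same JSON shape credential_process expects on stdout.
    raw = subprocess.run(["my-credential-helper"], capture_output=True, check=True).stdout
    creds = json.loads(raw)
    return boto3.session.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds.get("SessionToken"),
        region_name=region_name,
    )
```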
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21672 | https://github.com/apache/airflow/pull/21778 | f0bbb9d1079e2660b4aa6e57c53faac84b23ce3d | c819b4f8d0719037ce73d845c4ff9f1e4cb6cc38 | "2022-02-18T18:25:12Z" | python | "2022-02-28T09:29:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,671 | ["airflow/providers/amazon/aws/utils/emailer.py", "docs/apache-airflow/howto/email-config.rst", "tests/providers/amazon/aws/utils/test_emailer.py"] | Amazon Airflow Provider | Broken AWS SES as backend for Email | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
```
apache-airflow==2.2.2
apache-airflow-providers-amazon==2.4.0
```
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux 2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
As part of this PR https://github.com/apache/airflow/pull/18042 the signature of the function `airflow.providers.amazon.aws.utils.emailer.send_email` is no longer compatible with how `airflow.utils.email.send_email` invokes the function. Essentially, the functionality of using SES as the Email Backend is broken.
### What you expected to happen
This behavior is erroneous because the signature of `airflow.providers.amazon.aws.utils.emailer.send_email` should be compatible with how we call the backend function in `airflow.utils.email.send_email`:
```
return backend(
    to_comma_separated,
    subject,
    html_content,
    files=files,
    dryrun=dryrun,
    cc=cc,
    bcc=bcc,
    mime_subtype=mime_subtype,
    mime_charset=mime_charset,
    conn_id=backend_conn_id,
    **kwargs,
)
```
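In other words, the SES emailer needs a signature that accepts this call shape positionally; a sketch (parameter defaults are assumptions) would be:
```python
def send_email(to, subject, html_content, files=None, dryrun=False, cc=None, bcc=None,
               mime_subtype="mixed", mime_charset="utf-8", conn_id=None, **kwargs):
    """Sketch only: accept the first three arguments positionally like other backends."""
    ...
```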
### How to reproduce
## Use AWS SES as Email Backend
```
[email]
email_backend = airflow.providers.amazon.aws.utils.emailer.send_email
email_conn_id = aws_default
```
## Try sending an Email
```
from airflow.utils.email import send_email
def email_callback(**kwargs):
    send_email(to=['test@hello.io'], subject='test', html_content='content')

email_task = PythonOperator(
    task_id='email_task',
    python_callable=email_callback,
)
```
## The bug shows up
```
File "/usr/local/airflow/dags/environment_check.py", line 46, in email_callback
send_email(to=['test@hello.io'], subject='test', html_content='content')
File "/usr/local/lib/python3.7/site-packages/airflow/utils/email.py", line 66, in send_email
**kwargs,
TypeError: send_email() missing 1 required positional argument: 'html_content'
```
### Anything else
Every time.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21671 | https://github.com/apache/airflow/pull/21681 | b48dc4da9ec529745e689d101441a05a5579ef46 | b28f4c578c0b598f98731350a93ee87956d866ae | "2022-02-18T18:16:17Z" | python | "2022-02-19T09:34:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,656 | ["airflow/models/baseoperator.py"] | Airflow >= 2.2.0 execution date change is failing TaskInstance get_task_instances method and possibly others | ### Apache Airflow version
2.2.3 (latest released)
### What happened
This is my first time reporting or posting on this forum. Please let me know if I need to provide any more information. Thanks for looking at this!
I have a Python Operator that uses the BaseOperator get_task_instances method and during the execution of this method, I encounter the following error:
<img width="1069" alt="Screen Shot 2022-02-17 at 2 28 48 PM" src="https://user-images.githubusercontent.com/18559784/154581673-718bc199-8390-49cf-a3fe-8972b6f39f81.png">
This error is from doing an upgrade from airflow 1.10.15 -> 2.2.3.
I am using SQLAlchemy version 1.2.24 but I also tried with version 1.2.23 and encountered the same error. However, I do not think this is a SQLAlchemy issue.
The issue seems to have been introduced with Airflow 2.2.0 (pr: https://github.com/apache/airflow/pull/17719/files), where TaskInstance.execution_date changed from being a column to this association_proxy. I do not have deep knowledge of SQLAlchemy so I am not sure why this change was made, but it results in the error I'm getting.
2.2.0+
<img width="536" alt="Screen Shot 2022-02-17 at 2 41 00 PM" src="https://user-images.githubusercontent.com/18559784/154583252-4729b44d-40e2-4a89-9018-95b09ef4da76.png">
1.10.15
<img width="428" alt="Screen Shot 2022-02-17 at 2 56 15 PM" src="https://user-images.githubusercontent.com/18559784/154585325-4546309c-66b6-4e69-aba2-9b6979762a1b.png">
If you follow the stack trace you will get to this chunk of code, which leads to the error: the association_proxy has a `__clause_element__` attribute, but calling that attribute raises the exception shown in the error.
<img width="465" alt="Screen Shot 2022-02-17 at 2 43 51 PM" src="https://user-images.githubusercontent.com/18559784/154583639-a7957209-b19e-4134-a5c2-88d53176709c.png">
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Operating System
Linux from the official airflow helm chart docker image python version 3.7
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 2.4.0
apache-airflow-providers-celery 2.1.0
apache-airflow-providers-cncf-kubernetes 2.2.0
apache-airflow-providers-databricks 2.2.0
apache-airflow-providers-docker 2.3.0
apache-airflow-providers-elasticsearch 2.1.0
apache-airflow-providers-ftp 2.0.1
apache-airflow-providers-google 6.2.0
apache-airflow-providers-grpc 2.0.1
apache-airflow-providers-hashicorp 2.1.1
apache-airflow-providers-http 2.0.1
apache-airflow-providers-imap 2.0.1
apache-airflow-providers-microsoft-azure 3.4.0
apache-airflow-providers-mysql 2.1.1
apache-airflow-providers-odbc 2.0.1
apache-airflow-providers-postgres 2.4.0
apache-airflow-providers-redis 2.0.1
apache-airflow-providers-sendgrid 2.0.1
apache-airflow-providers-sftp 2.3.0
apache-airflow-providers-slack 4.1.0
apache-airflow-providers-sqlite 2.0.1
apache-airflow-providers-ssh 2.3.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
The only extra dependency I am using is awscli==1.20.65. I have changed very little with the deployment besides a few environments variables and some pod annotations.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21656 | https://github.com/apache/airflow/pull/21705 | b2c0a921c155e82d1140029e6495594061945025 | bb577a98494369b22ae252ac8d23fb8e95508a1c | "2022-02-17T22:53:28Z" | python | "2022-02-22T20:12:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,647 | ["docs/apache-airflow-providers-jenkins/connections.rst", "docs/apache-airflow-providers-jenkins/index.rst"] | Jenkins Connection Example | ### Describe the issue with documentation
I need to configure a connection to our Jenkins and I can't find an example anywhere.
I suppose that I need to define an HTTP connection with the format:
`http://username:password@jenkins_url`
However, I don't know whether I need to add `/api`, so that the URL would be:
`http://username:password@jenkins_url/api`
### How to solve the problem
Is it possible to include at least a jenkins connection example in the documentation?
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21647 | https://github.com/apache/airflow/pull/22682 | 3849b4e709acfc9e85496aa2dededb2dae117fc7 | cb41d5c02e3c53a24f9dc148e45e696891c347c2 | "2022-02-17T16:40:43Z" | python | "2022-04-02T20:04:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,638 | ["airflow/models/connection.py", "tests/models/test_connection.py"] | Spark Connection with k8s in URL not mapped correctly | ### Official Helm Chart version
1.2.0
### Apache Airflow version
2.1.4
### Kubernetes Version
v1.21.6+bb8d50a
### Helm Chart configuration
I defined a new connection string for AIRFLOW_CONN_SPARK_DEFAULT in values.yaml as in the following section (base64 encoded; the decoded string is spark://k8s://100.68.0.1:443?deploy-mode=cluster):
```
extraSecrets:
  '{{ .Release.Name }}-airflow-connections':
    data: |
      AIRFLOW_CONN_SPARK_DEFAULT: 'c3Bhcms6Ly9rOHM6Ly8xMDAuNjguMC4xOjQ0Mz9kZXBsb3ktbW9kZT1jbHVzdGVy'
```
In the extraEnvFrom section I defined the following:
```
extraEnvFrom: |
  - secretRef:
      name: '{{ .Release.Name }}-airflow-connections'
```
### Docker Image customisations
added apache-airflow-providers-apache-spark to base Image
### What happened
The Airflow connection is mapped incorrectly because of the k8s:// within the URL. If I ask for the connection with the command "airflow connections get spark_default", then host=k8s and schema=/100.60.0.1:443, which is wrong.
### What you expected to happen
The Spark connection based on k8s (spark://k8s://100.68.0.1:443?deploy-mode=cluster) should be parsed correctly.
### How to reproduce
Define the following in values.yaml:
```
extraSecrets:
  '{{ .Release.Name }}-airflow-connections':
    data: |
      AIRFLOW_CONN_SPARK_DEFAULT: 'c3Bhcms6Ly9rOHM6Ly8xMDAuNjguMC4xOjQ0Mz9kZXBsb3ktbW9kZT1jbHVzdGVy'

extraEnvFrom: |
  - secretRef:
      name: '{{ .Release.Name }}-airflow-connections'
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21638 | https://github.com/apache/airflow/pull/31465 | 232771869030d708c57f840aea735b18bd4bffb2 | 0560881f0eaef9c583b11e937bf1f79d13e5ac7c | "2022-02-17T09:39:46Z" | python | "2023-06-19T09:32:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,616 | ["airflow/models/trigger.py", "tests/models/test_trigger.py"] | Deferred tasks being ran on every triggerer instance | ### Apache Airflow version
2.2.3 (latest released)
### What happened
I'm currently using airflow triggers, but I've noticed that deferred tasks are being run on every triggerer instance right away:
```
host 1 -> 2/10/2022 1:06:08 PM Job a702656f-01ce-4e7a-893a-5b42cdaa38e2 progressed from Unknown to RUNNABLE.
host 2 -> 2/10/2022 1:06:06 PM Job a702656f-01ce-4e7a-893a-5b42cdaa38e2 progressed from Unknown to RUNNABLE.
```
within 2 seconds, the job was issued on both triggerer instances.
### What you expected to happen
The deferred task is only scheduled on 1 triggerer instance.
### How to reproduce
Create many jobs that have to be deferred, and start multiple triggerers.
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
2.2.3
### Deployment
Other Docker-based deployment
### Deployment details
A docker setup w/ multiple triggerer instances running.
### Anything else
I believe this issue is due to a race condition here: https://github.com/apache/airflow/blob/84a59a6d74510aff4059a3ca2da793d996e86fa1/airflow/models/trigger.py#L175. If multiple instances start at the same time, each instance will get the same tasks in their alive_job_ids.
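A heavily simplified sketch of that race (names are illustrative, not the real model code): each triggerer adopts every trigger not owned by a job it currently considers alive, and two triggerers that start within the same heartbeat window each miss the other:
```python
def triggers_to_adopt(all_triggers, alive_job_ids):
    # Sketch only: without row-level locking, two freshly started triggerers compute
    # overlapping lists here, because neither has heartbeaten into alive_job_ids yet.
    return [t for t in all_triggers if t.triggerer_id not in alive_job_ids]
```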
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21616 | https://github.com/apache/airflow/pull/21770 | c8d64c916720f0be67a4f2ffd26af0d4c56005ff | b26d4d8a290ce0104992ba28850113490c1ca445 | "2022-02-16T14:23:07Z" | python | "2022-02-26T19:25:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,615 | ["chart/tests/test_create_user_job.py"] | ArgoCD deployment: Cannot synchronize after updating values | ### Official Helm Chart version
1.4.0 (latest released)
### Apache Airflow version
2.2.3 (latest released)
### Kubernetes Version
v1.20.12-gke.1500
### Helm Chart configuration
defaultAirflowTag: "2.2.3-python3.9"
airflowVersion: "2.2.3"
createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false
images:
  migrationsWaitTimeout: 300
executor: "KubernetesExecutor"
### Docker Image customisations
_No response_
### What happened
I was able to configure the synchronization properly when I added the application to _ArgoCD_ the first time, but after updating an environment value, it is set properly (the scheduler is restarted and works fine), but _ArgoCD_ cannot synchronize the jobs (_airflow-run-airflow-migrations_ and _airflow-create-user_), so it shows that the application is not synchronized.
Since I deploy _Airflow_ with _ArgoCD_ and I disable the _Helm's_ hooks, these jobs are not deleted when finished and remain as completed.
The workaround I am doing is to delete these jobs manually, but I have to repeat this after an update.
Should the attribute `ttlSecondsAfterFinished: 0` be included below this line when the _Helm's_ hooks are disabled in the jobs templates?
https://github.com/apache/airflow/blob/af2c047320c5f0742f466943c171ec761d275bab/chart/templates/jobs/migrate-database-job.yaml#L48
P.S. I created a custom chart in order to synchronize my values files with _ArgoCD_. This chart only includes a dependency on the _Airflow_ chart and my values files (I use one per environment), and the _Helm_ configuration I put in the section _Helm Chart configuration_ is under an _airflow_ block in my values files.
This is my _Chart.yaml_:
```yaml
apiVersion: v2
name: my-airflow
version: 1.0.0
description: Airflow Chart with my values
appVersion: "2.2.3"
dependencies:
  - name: airflow
    version: 1.4.0
    repository: https://airflow.apache.org
```
### What you expected to happen
I expect that _ArgoCD_ synchronizes after changing an environment variable in my values file.
### How to reproduce
- Deploy the chart as an _ArgoCD_ application.
- Change an environment variable in the values file.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21615 | https://github.com/apache/airflow/pull/21776 | dade6e075f5229f15b8b0898393c529e0e9851bc | 608b8c4879c188881e057e6318a0a15f54c55c7b | "2022-02-16T13:19:19Z" | python | "2022-02-25T01:46:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,597 | ["airflow/providers/presto/hooks/presto.py", "airflow/providers/trino/hooks/trino.py"] | replace `hql` references to `sql` in `TrinoHook` and `PrestoHook` | ### Body
Both:
https://github.com/apache/airflow/blob/main/airflow/providers/presto/hooks/presto.py
https://github.com/apache/airflow/blob/main/airflow/providers/trino/hooks/trino.py
use the terminology `hql`; we should change it to `sql`.
The change needs to be backwards compatible, e.g. deprecating `hql` with a warning.
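A small sketch (illustrative only, not the hooks' actual method signatures) of the backwards-compatible shape:
```python
import warnings

def resolve_sql(sql=None, hql=None):
    """Sketch only: keep accepting the deprecated `hql` argument while steering callers to `sql`."""
    if hql is not None:
        warnings.warn("'hql' is deprecated; use 'sql' instead", DeprecationWarning, stacklevel=2)
        return hql
    return sql
```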
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/21597 | https://github.com/apache/airflow/pull/21630 | 4e959358ac4ef9554ff5d82cdc85ab7dc142a639 | 2807193594ed4e1f3acbe8da7dd372fe1c2fff94 | "2022-02-15T20:40:22Z" | python | "2022-02-22T09:07:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,573 | ["airflow/providers/amazon/aws/operators/ecs.py"] | ECS Operator doesn't get logs from cloudwatch when ecs task has finished within 30 seconds. | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.6.0
### Apache Airflow version
2.2.3 (latest released)
### Operating System
Amazon Linux2
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
ECS Operator doesn't print ECS task logs stored in CloudWatchLogs when ECS task has finished within 30 seconds.
### What you expected to happen
I expected to see ECS task logs in the Airflow logs view.
### How to reproduce
Create a simple ECS task that only executes "echo hello world" and run it with the ECS Operator: the operator doesn't print "hello world" in the Airflow logs view, but we can see it in the XCom result.
### Anything else
I think EcsTaskLogFetcher needs to get log events after its sleep. When the EcsTaskLogFetcher thread receives a stop signal, it stops its run method without fetching the log events that happened during the sleep period.
https://github.com/apache/airflow/blob/8155e8ac0abcaf3bb02b164fd7552e20fa702260/airflow/providers/amazon/aws/operators/ecs.py#L122
I think maybe #19426 is the same problem.
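A simplified sketch (not the provider's actual thread) of the behaviour I'm suggesting — one final fetch after the stop signal, so events from the last sleep interval are not lost:
```python
import threading

class LogFetcherSketch(threading.Thread):
    def __init__(self, fetch_logs, interval_seconds=30):
        super().__init__()
        self._stop_event = threading.Event()
        self._fetch_logs = fetch_logs
        self._interval = interval_seconds

    def run(self):
        while not self._stop_event.is_set():
            self._stop_event.wait(self._interval)
            # fetch even on the final iteration, after stop() was requested during the wait
            self._fetch_logs()

    def stop(self):
        self._stop_event.set()
```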
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21573 | https://github.com/apache/airflow/pull/21574 | 2d6282d6b7d8c7603e96f0f28ebe0180855687f3 | 21a90c5b7e2f236229812f9017582d67d3d7c3f0 | "2022-02-15T07:10:13Z" | python | "2022-02-15T09:48:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,566 | ["setup.cfg"] | typing_extensions package isn't installed with apache-airflow-providers-amazon causing an issue for SqlToS3Operator | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
3.0.0rc2
### Apache Airflow version
2.2.3 (latest released)
### Python version
Python 3.9.7 (default, Oct 12 2021, 02:43:43)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
I was working on adding this operator to a DAG and it failed to import due to a lack of a required file
### What you expected to happen
_No response_
### How to reproduce
Add
```
from airflow.providers.amazon.aws.transfers.sql_to_s3 import SqlToS3Operator
```
to a dag
### Anything else
This can be resolved by adding `typing-extensions==4.1.1` to `requirements.txt` when building the project (locally)
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21566 | https://github.com/apache/airflow/pull/21567 | 9407f11c814413064afe09c650a79edc45807965 | e4ead2b10dccdbe446f137f5624255aa2ff2a99a | "2022-02-14T20:21:15Z" | python | "2022-02-25T21:26:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,559 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/hooks/databricks_base.py", "airflow/providers/databricks/operators/databricks.py", "docs/apache-airflow-providers-databricks/operators/run_now.rst", "docs/apache-airflow-providers-databricks/operators/submit_run.rst", "tests/providers/databricks/hooks/test_databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Databricks hook: Retry also on HTTP Status 429 - rate limit exceeded | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
2.2.0
### Apache Airflow version
2.2.3 (latest released)
### Operating System
Any
### Deployment
Other
### Deployment details
_No response_
### What happened
Operations aren't retried when the Databricks API returns HTTP Status 429 (rate limit exceeded).
### What you expected to happen
The operation should be retried.
### How to reproduce
This happens when there are multiple calls to the API, especially when some of them happen outside of Airflow.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21559 | https://github.com/apache/airflow/pull/21852 | c108f264abde68e8f458a401296a53ccbe7a47f6 | 12e9e2c695f9ebb9d3dde9c0f7dfaa112654f0d6 | "2022-02-14T10:08:01Z" | python | "2022-03-13T23:19:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,545 | ["airflow/providers/apache/beam/hooks/beam.py", "docs/docker-stack/docker-images-recipes/go-beam.Dockerfile", "docs/docker-stack/recipes.rst", "tests/providers/apache/beam/hooks/test_beam.py"] | Add Go to docker images | ### Description
Following https://github.com/apache/airflow/pull/20386 we are now supporting execution of Beam Pipeline written in Go.
We might want to add Go to the images.
The Beam Go SDK's first stable release is `v2.33.0`, and it requires `Go v1.16` at minimum.
### Use case/motivation
This way people running airflow from docker can build/run their go pipelines.
### Related issues
Issue:
https://github.com/apache/airflow/issues/20283
PR:
https://github.com/apache/airflow/pull/20386
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21545 | https://github.com/apache/airflow/pull/22296 | 7bd165fbe2cbbfa8208803ec352c5d16ca2bd3ec | 4a1503b39b0aaf50940c29ac886c6eeda35a79ff | "2022-02-13T11:38:59Z" | python | "2022-03-17T03:57:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,537 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | add partition option for parquet files by columns in BaseSQLToGCSOperator | ### Description
Add the ability to partition parquet files by columns. Right now you can partition files only by size.
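For illustration (not the operator's API; `df` stands in for a chunk produced by the SQL query), column-based partitioning is roughly a `pyarrow` call like:
```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame({"country": ["US", "DE"], "amount": [1, 2]})  # placeholder query result
table = pa.Table.from_pandas(df)
pq.write_to_dataset(table, root_path="/tmp/output", partition_cols=["country"])
```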
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21537 | https://github.com/apache/airflow/pull/28677 | 07a17bafa1c3de86a993ee035f91b3bbd284e83b | 35a8ffc55af220b16ea345d770f80f698dcae3fb | "2022-02-12T10:56:36Z" | python | "2023-01-10T05:55:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,486 | ["airflow/providers/postgres/example_dags/example_postgres.py", "airflow/providers/postgres/operators/postgres.py", "docs/apache-airflow-providers-postgres/operators/postgres_operator_howto_guide.rst", "tests/providers/postgres/operators/test_postgres.py"] | Allow to set statement behavior for PostgresOperator | ### Body
Add the ability to pass parameters like `statement_timeout` from PostgresOperator.
https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-STATEMENT-TIMEOUT
The goal is to allow control over a specific query rather than setting the parameters at the connection level.
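For illustration, the usage could look something like this (the parameter name is an assumption, not a decided API):
```python
from airflow.providers.postgres.operators.postgres import PostgresOperator

run_report = PostgresOperator(
    task_id="run_report",
    postgres_conn_id="postgres_default",
    sql="SELECT * FROM very_large_table",
    runtime_parameters={"statement_timeout": "3000ms"},  # assumed parameter name
)
```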
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/21486 | https://github.com/apache/airflow/pull/21551 | ecc5b74528ed7e4ecf05c526feb2c0c85f463429 | 0ec56775df66063cab807d886e412ebf88c572bf | "2022-02-10T10:08:32Z" | python | "2022-03-18T15:09:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,469 | ["chart/templates/cleanup/cleanup-cronjob.yaml", "chart/tests/test_airflow_common.py", "chart/tests/test_annotations.py", "chart/values.schema.json", "chart/values.yaml"] | No way to supply custom annotations for cleanup cron job pods | ### Official Helm Chart version
1.4.0 (latest released)
### Apache Airflow version
2.2.3 (latest released)
### Kubernetes Version
v1.21.5
### Helm Chart configuration
```yaml
cleanup:
  enabled: true

airflowPodAnnotations:
  vault.hashicorp.com/agent-init-first: "true"
  vault.hashicorp.com/agent-inject: "false"
  vault.hashicorp.com/agent-run-as-user: "50000"
  vault.hashicorp.com/agent-pre-populate-only: "true"
  vault.hashicorp.com/agent-inject-status: "update"
```
### Docker Image customisations
We have customized the `ENTRYPOINT` for exporting some environment variables that get loaded from Hashicorp's vault.
The entrypoint line in the Dockerfile:
```Dockerfile
ENTRYPOINT ["/usr/bin/dumb-init", "--", "/opt/airflow/entrypoint.sh"]
```
The last line in the `/opt/airflow/entrypoint.sh` script:
```bash
# Call Airflow's default entrypoint after we source the vault secrets
exec /entrypoint "${@}"
```
### What happened
Install was successful and the `webserver` and `scheduler` pods are working as expected. The `cleanup` pods launched from the `cleanup` cronjob fail:
```
No vault secrets detected
....................
ERROR! Maximum number of retries (20) reached.
Last check result:
$ airflow db check
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 1129, in <module>
conf.validate()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 224, in validate
self._validate_config_dependencies()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 267, in _validate_config_dependencies
raise AirflowConfigException(f"error: cannot use sqlite with the {self.get('core', 'executor')}")
airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the KubernetesExecutor
```
### What you expected to happen
It looks like the annotations on the `cleanup` cronjob are static and only contain an istio annotation
https://github.com/apache/airflow/blob/c28c255e52255ea2060c1a802ec34f9e09cc4f52/chart/templates/cleanup/cleanup-cronjob.yaml#L56-L60
From the documentation in values.yaml, I would expect the `cleanup` cronjob to have these annotations:
https://github.com/apache/airflow/blob/c28c255e52255ea2060c1a802ec34f9e09cc4f52/chart/values.yaml#L187-L189
### How to reproduce
From the root of the airflow repository:
```bash
cd chart
helm dep build
helm template . --set cleanup.enabled=true --set airflowPodAnnotations."my\.test"="somevalue" -s templates/cleanup/cleanup-cronjob.yaml
```
If you look at the annotations section of the output, you will only see the static `istio` annotation that is hard coded.
### Anything else
This could be a potentially breaking change even though the documentation says they should get applied to all Airflow pods. Another option would be to add `cleanup.podAnnotations` section for supplying them if fixing it by adding the global annotations would not work.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21469 | https://github.com/apache/airflow/pull/21484 | c25534be56cee39b6be38a9928cd5b2e107a32be | 8c1512b7094e092369b742c37857b7946b4033f4 | "2022-02-09T16:26:12Z" | python | "2022-02-11T23:12:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,450 | ["airflow/cli/commands/dag_command.py"] | `airflow dags status` fails if parse time is near `dagbag_import_timeout` | ### Apache Airflow version
2.2.3 (latest released)
### What happened
I had just kicked off a DAG and I was periodically running `airflow dags status ...` to see if it was done yet. At first it seemed to work, but later it failed with this error:
```
$ airflow dags state load_13 '2022-02-09T05:25:28+00:00'
[2022-02-09 05:26:56,493] {dagbag.py:500} INFO - Filling up the DagBag from /usr/local/airflow/dags
queued
$ airflow dags state load_13 '2022-02-09T05:25:28+00:00'
[2022-02-09 05:27:29,096] {dagbag.py:500} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2022-02-09 05:27:59,084] {timeout.py:36} ERROR - Process timed out, PID: 759
[2022-02-09 05:27:59,088] {dagbag.py:334} ERROR - Failed to import: /usr/local/airflow/dags/many_tasks.py
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 331, in _load_modules_from_file
loader.exec_module(new_module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/local/airflow/dags/many_tasks.py", line 61, in <module>
globals()["dag_{:02d}".format(i)] = parameterized_load(i, step)
File "/usr/local/airflow/dags/many_tasks.py", line 50, in parameterized_load
return load()
File "/usr/local/lib/python3.9/site-packages/airflow/models/dag.py", line 2984, in factory
f(**f_kwargs)
File "/usr/local/airflow/dags/many_tasks.py", line 48, in load
[worker_factory(i) for i in range(1, size**2 + 1)]
File "/usr/local/airflow/dags/many_tasks.py", line 48, in <listcomp>
[worker_factory(i) for i in range(1, size**2 + 1)]
File "/usr/local/airflow/dags/many_tasks.py", line 37, in worker_factory
return worker(num)
File "/usr/local/lib/python3.9/site-packages/airflow/decorators/base.py", line 219, in factory
op = decorated_operator_class(
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 188, in apply_defaults
result = func(self, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/decorators/python.py", line 59, in __init__
super().__init__(kwargs_to_upstream=kwargs_to_upstream, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 152, in apply_defaults
dag_params = copy.deepcopy(dag.params) or {}
File "/usr/local/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/lib/python3.9/copy.py", line 264, in _reconstruct
y = func(*args)
File "/usr/local/lib/python3.9/copy.py", line 263, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/timeout.py", line 37, in handle_timeout
raise AirflowTaskTimeout(self.error_message)
airflow.exceptions.AirflowTaskTimeout: DagBag import timeout for /usr/local/airflow/dags/many_tasks.py after 30.0s.
Please take a look at these docs to improve your DAG import time:
* http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/best-practices.html#top-level-python-code
* http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/best-practices.html#reducing-dag-complexity, PID: 759
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py", line 241, in dag_state
dag = get_dag(args.subdir, args.dag_id)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/cli.py", line 192, in get_dag
raise AirflowException(
airflow.exceptions.AirflowException: Dag 'load_13' could not be found; either it does not exist or it failed to parse.
```
### What you expected to happen
If we were able to parse the DAG in the first place, I expect that downstream actions (like querying for status) would not fail due to a dag parsing timeout.
Also, is parsing the dag necessary for this action?
### How to reproduce
1. start with the dag shown here: https://gist.github.com/MatrixManAtYrService/842266aac42390aadee75fe014cd372e
2. increase "scale" until `airflow dags list` stop showing the load dags
3. decrease by one and check that they start showing back up
4. trigger a dag run
5. check its status (periodically), eventually the status check will fail
I initially discovered this using the `CeleryExecutor` and a much messier dag, but once I understood what I was looking for I was able to recreate it using the dag linked above and `astro dev start`
### Operating System
docker/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
```
FROM quay.io/astronomer/ap-airflow:2.2.3-onbuild
```
### Anything else
When I was running this via the CeleryExecutor (deployed via helm on a single-node k8s cluster), I noticed similar dag-parsing timeouts showing up in the worker logs. I failed to capture them because I didn't yet know what I was looking for, but if they would be helpful I can recreate that scenario and post them here.
----
I tried to work around this error by doubling the following configs:
- [dagbag_import_timeout](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dagbag-import-timeout)
- [dag_file_processor_timeout](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-file-processor-timeout)
This "worked", as in the status started showing up without error, but it seemed like making the dag __longer__ had also made it __slower__. As if whatever re-parsing steps were occurring along the way were also slowing it down. It used to take 1h to complete, but when I checked on it after 1h it was only 30% complete (the new tasks hadn't even started yet).
Should I expect that splitting my large dag into smaller dags will fix this? Or is the overall parsing workload going to eat into my runtime regardless of how it is sliced?
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21450 | https://github.com/apache/airflow/pull/21793 | 7e0c6e4fc7fcaccfa6c49efddea3aaae96c9260c | dade6e075f5229f15b8b0898393c529e0e9851bc | "2022-02-09T05:48:01Z" | python | "2022-02-24T21:45:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,421 | ["airflow/providers/amazon/aws/hooks/eks.py", "tests/providers/amazon/aws/hooks/test_eks.py"] | Unable to use EKSPodOperator with aws_conn_id parameter | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
### Apache Airflow version
2.2.2
### Operating System
AmazonLinux
### Deployment
MWAA
### Deployment details
Out of box deployment of MWAA with Airflow 2.2.2
### What happened
I tried to launch a K8s pod using [EKSPodOperator](https://airflow.apache.org/docs/apache-airflow-providers-amazon/2.4.0/_api/airflow/providers/amazon/aws/operators/eks/index.html#airflow.providers.amazon.aws.operators.eks.EKSPodOperator) with the **aws_conn_id** parameter in order to authenticate to the EKS cluster through the IAM / OIDC provider.
The pod does not start and I get the following error in my Airflow task log:
```
[2022-02-08, 10:55:30 CET] {{refresh_config.py:71}} ERROR - exec: process returned 1. Traceback (most recent call last):
File "/usr/lib64/python3.7/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib64/python3.7/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/usr/local/lib/python3.7/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/usr/local/lib/python3.7/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/usr/local/lib/python3.7/site-packages/airflow/configuration.py", line 1129, in <module>
conf.validate()
File "/usr/local/lib/python3.7/site-packages/airflow/configuration.py", line 226, in validate
self._validate_enums()
File "/usr/local/lib/python3.7/site-packages/airflow/configuration.py", line 253, in _validate_enums
+ f"{value!r}. Possible values: {', '.join(enum_options)}."
airflow.exceptions.AirflowConfigException: `[logging] logging_level` should not be 'fatal'. Possible values: CRITICAL, FATAL, ERROR, WARN, WARNING, INFO, DEBUG.
```
Indeed, the [EKSHook](https://github.com/apache/airflow/blob/providers-amazon/2.4.0/airflow/providers/amazon/aws/hooks/eks.py#L596) is setting **AIRFLOW__LOGGING__LOGGING_LEVEL** to **fatal** and [airflow core](https://github.com/apache/airflow/blob/2.2.2/airflow/configuration.py#L198) is checking that logging level is **FATAL**.
It seems we have a case-sensitivity problem.
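A tiny illustration of the mismatch:
```python
exported = "fatal"  # what the hook injects via AIRFLOW__LOGGING__LOGGING_LEVEL
allowed = {"CRITICAL", "FATAL", "ERROR", "WARN", "WARNING", "INFO", "DEBUG"}

print(exported in allowed)          # False -> AirflowConfigException at import time
print(exported.upper() in allowed)  # True  -> uppercasing (or exporting FATAL) would fix it
```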
### What you expected to happen
I expected my pod to start using the IAM / OIDC provider auth on EKS.
### How to reproduce
```python
start_pod = EKSPodOperator(
    task_id="run_pod",
    cluster_name="<your_eks_cluster_name>",
    pod_name="run_pod",
    image="amazon/aws-cli:latest",
    cmds=["sh", "-c", "ls"],
    labels={"demo": "hello_world"},
    get_logs=True,
    # Delete the pod when it reaches its final state, or the execution is interrupted.
    is_delete_operator_pod=True,
    aws_conn_id="<your_aws_airflow_conn_id>"
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21421 | https://github.com/apache/airflow/pull/21427 | 8fe9783fcd813dced8de849c8130d0eb7f90bac3 | 33ca0f26544a4d280f2f56843e97deac7f33cea5 | "2022-02-08T10:08:38Z" | python | "2022-02-08T20:51:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,412 | ["airflow/providers/microsoft/azure/hooks/cosmos.py", "tests/providers/microsoft/azure/hooks/test_azure_cosmos.py", "tests/providers/microsoft/azure/operators/test_azure_cosmos.py"] | v3.5.0 airflow.providers.microsoft.azure.operators.cosmos not running | ### Apache Airflow version
2.2.3 (latest released)
### What happened
Submitting this on advice from the community Slack: Attempting to use the v3.5.0 `AzureCosmosInsertDocumentOperator` operator fails with an attribute error: `AttributeError: 'CosmosClient' object has no attribute 'QueryDatabases'`
### What you expected to happen
Expected behaviour is that the document is upserted correctly. I've traced through the source and `does_database_exist()` seems to call `QueryDatabases()` on the result of `self.get_conn()`. Thing is `get_conn()` (AFAICT) returns an actual MS/AZ `CosmosClient` which definitely does not have a `QueryDatabases()` method (it's `query_databases()`)
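For reference, a sketch of the 4.x-style call the hook would need instead (account URL and key are placeholders):
```python
from azure.cosmos import CosmosClient

client = CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")
databases = list(client.query_databases(
    query="SELECT * FROM r WHERE r.id=@id",
    parameters=[{"name": "@id", "value": "my_database"}],
))
print(bool(databases))  # roughly the check does_database_exist() needs to make
```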
### How to reproduce
From what I can see, any attempt to use this operator on airflow 2.2.3 will fail in this way
### Operating System
Ubuntu 18.04.5 LTS
### Versions of Apache Airflow Providers
azure-batch==12.0.0
azure-common==1.1.28
azure-core==1.22.0
azure-cosmos==4.2.0
azure-datalake-store==0.0.52
azure-identity==1.7.1
azure-keyvault==4.1.0
azure-keyvault-certificates==4.3.0
azure-keyvault-keys==4.4.0
azure-keyvault-secrets==4.3.0
azure-kusto-data==0.0.45
azure-mgmt-containerinstance==1.5.0
azure-mgmt-core==1.3.0
azure-mgmt-datafactory==1.1.0
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-resource==20.1.0
azure-nspkg==3.0.2
azure-storage-blob==12.8.1
azure-storage-common==2.1.0
azure-storage-file==2.1.0
msrestazure==0.6.4
### Deployment
Virtualenv installation
### Deployment details
Clean standalone install I am using for evaluating airflow for our environment
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21412 | https://github.com/apache/airflow/pull/21514 | de41ccc922b3d1f407719744168bb6822bde9a58 | 3c4524b4ec2b42a8af0a8c7b9d8f1d065b2bfc83 | "2022-02-08T05:53:54Z" | python | "2022-02-23T16:39:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,388 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | Optionally raise an error if source file does not exist in GCSToGCSOperator | ### Description
Right now when using GCSToGCSOperator to copy a file from one bucket to another, if the source file does not exist, nothing happens and the task is considered successful. This could be good for some use cases, for example, when you want to copy all the files from a directory or that match a specific pattern.
But for some other cases, like when you only want to copy one specific blob, it might be useful to raise an exception if the source file can't be found. Otherwise, the task would be failing silently.
My proposal is to add a new flag to GCSToGCSOperator to enable this feature. By default, for backward compatibility, the behavior would be the current one. But it would be possible to force the source file to be required and mark the task as failed if it doesn't exist.
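For illustration, usage could look like this (the flag name and example values are placeholders, not a decided API):
```python
from airflow.providers.google.cloud.transfers.gcs_to_gcs import GCSToGCSOperator

copy_single_file = GCSToGCSOperator(
    task_id="copy_single_file",
    source_bucket="my-source-bucket",
    source_object="exports/report.csv",
    destination_bucket="my-destination-bucket",
    destination_object="archive/report.csv",
    source_object_required=True,  # placeholder flag name: fail if the blob is missing
)
```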
### Use case/motivation
Task would fail if the source file to copy does not exist, but only in the case you enable it.
### Related issues
If you want to be sure that the source file exists and it will be copied on every execution, currently the operator does not allow you to make the task fail. If the status is successful but nothing is written in the destination, it would be failing silently.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21388 | https://github.com/apache/airflow/pull/21391 | a2abf663157aea14525e1a55eb9735ba659ae8d6 | 51aff276ca4a33ee70326dd9eea6fba59f1463a3 | "2022-02-07T12:15:28Z" | python | "2022-02-10T19:30:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,380 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/hooks/test_databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Databricks: support for triggering jobs by name | ### Description
The DatabricksRunNowOperator supports triggering job runs by job ID. We would like to extend the operator to also support triggering jobs by name. This will likely require first making an API call to list jobs in order to find the appropriate job id.
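A hedged sketch of that lookup step is below; the endpoint and response shape follow the public Databricks Jobs API documentation and should be verified against the API version you target (pagination is omitted for brevity).
```
# Sketch: resolve a job name to a job_id via the Jobs API list endpoint.
# Host/token handling and pagination are simplified for illustration.
import requests


def find_job_id_by_name(host: str, token: str, job_name: str) -> int:
    response = requests.get(
        f"https://{host}/api/2.1/jobs/list",
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    matches = [
        job["job_id"]
        for job in response.json().get("jobs", [])
        if job.get("settings", {}).get("name") == job_name
    ]
    if len(matches) != 1:
        raise ValueError(f"Expected exactly one job named {job_name!r}, found {len(matches)}")
    return matches[0]
```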
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21380 | https://github.com/apache/airflow/pull/21663 | 537c24433014d3d991713202df9c907e0f114d5d | a1845c68f9a04e61dd99ccc0a23d17a277babf57 | "2022-02-07T10:23:18Z" | python | "2022-02-26T21:55:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,348 | ["airflow/providers/amazon/aws/operators/glue.py"] | Status of testing Providers that were prepared on February 05, 2022 | ### Body
I have a kind request for all the contributors to the latest provider packages release.
Could you please help us to test the RC versions of the providers?
Let us know in a comment whether the issue is addressed.
These are the providers that require testing, as some substantial changes were introduced:
## Provider [amazon: 3.0.0rc1](https://pypi.org/project/apache-airflow-providers-amazon/3.0.0rc1)
- [ ] [Rename params to cloudformation_parameter in CloudFormation operators. (#20989)](https://github.com/apache/airflow/pull/20989): @potiuk
- [ ] [[SQSSensor] Add opt-in to disable auto-delete messages (#21159)](https://github.com/apache/airflow/pull/21159): @LaPetiteSouris
- [x] [Create a generic operator SqlToS3Operator and deprecate the MySqlToS3Operator. (#20807)](https://github.com/apache/airflow/pull/20807): @mariotaddeucci
- [ ] [Move some base_aws logging from info to debug level (#20858)](https://github.com/apache/airflow/pull/20858): @o-nikolas
- [ ] [Adds support for optional kwargs in the EKS Operators (#20819)](https://github.com/apache/airflow/pull/20819): @ferruzzi
- [ ] [AwsAthenaOperator: do not generate client_request_token if not provided (#20854)](https://github.com/apache/airflow/pull/20854): @XD-DENG
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
- [ ] [fix: cloudwatch logs fetch logic (#20814)](https://github.com/apache/airflow/pull/20814): @ayushchauhan0811
- [ ] [Alleviate import warning for `EmrClusterLink` in deprecated AWS module (#21195)](https://github.com/apache/airflow/pull/21195): @josh-fell
- [ ] [Rename amazon EMR hook name (#20767)](https://github.com/apache/airflow/pull/20767): @vinitpayal
- [ ] [Standardize AWS SQS classes names (#20732)](https://github.com/apache/airflow/pull/20732): @eladkal
- [ ] [Standardize AWS Batch naming (#20369)](https://github.com/apache/airflow/pull/20369): @ferruzzi
- [ ] [Standardize AWS Redshift naming (#20374)](https://github.com/apache/airflow/pull/20374): @ferruzzi
- [ ] [Standardize DynamoDB naming (#20360)](https://github.com/apache/airflow/pull/20360): @ferruzzi
- [ ] [Standardize AWS ECS naming (#20332)](https://github.com/apache/airflow/pull/20332): @ferruzzi
- [ ] [Refactor operator links to not create ad hoc TaskInstances (#21285)](https://github.com/apache/airflow/pull/21285): @josh-fell
## Provider [apache.druid: 2.3.0rc1](https://pypi.org/project/apache-airflow-providers-apache-druid/2.3.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
## Provider [apache.hive: 2.2.0rc1](https://pypi.org/project/apache-airflow-providers-apache-hive/2.2.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
## Provider [apache.spark: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-apache-spark/2.1.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
## Provider [apache.sqoop: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-apache-sqoop/2.1.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
## Provider [cncf.kubernetes: 3.0.2rc1](https://pypi.org/project/apache-airflow-providers-cncf-kubernetes/3.0.2rc1)
- [ ] [Add missed deprecations for cncf (#20031)](https://github.com/apache/airflow/pull/20031): @dimon222
## Provider [docker: 2.4.1rc1](https://pypi.org/project/apache-airflow-providers-docker/2.4.1rc1)
- [ ] [Fixes Docker xcom functionality (#21175)](https://github.com/apache/airflow/pull/21175): @ferruzzi
## Provider [exasol: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-exasol/2.1.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
## Provider [google: 6.4.0rc1](https://pypi.org/project/apache-airflow-providers-google/6.4.0rc1)
- [ ] [[Part 1]: Add hook for integrating with Google Calendar (#20542)](https://github.com/apache/airflow/pull/20542): @rsg17
- [ ] [Add encoding parameter to `GCSToLocalFilesystemOperator` to fix #20901 (#20919)](https://github.com/apache/airflow/pull/20919): @danneaves-ee
- [ ] [batch as templated field in DataprocCreateBatchOperator (#20905)](https://github.com/apache/airflow/pull/20905): @wsmolak
- [ ] [Make timeout Optional for wait_for_operation (#20981)](https://github.com/apache/airflow/pull/20981): @MaksYermak
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
- [ ] [Cloudsql import links fix. (#21199)](https://github.com/apache/airflow/pull/21199): @subkanthi
- [ ] [Refactor operator links to not create ad hoc TaskInstances (#21285)](https://github.com/apache/airflow/pull/21285): @josh-fell
## Provider [http: 2.0.3rc1](https://pypi.org/project/apache-airflow-providers-http/2.0.3rc1)
- [ ] [Split out confusing path combination logic to separate method (#21247)](https://github.com/apache/airflow/pull/21247): @malthe
## Provider [imap: 2.2.0rc1](https://pypi.org/project/apache-airflow-providers-imap/2.2.0rc1)
- [ ] [Add "use_ssl" option to IMAP connection (#20441)](https://github.com/apache/airflow/pull/20441): @feluelle
## Provider [jdbc: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-jdbc/2.1.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
## Provider [microsoft.azure: 3.6.0rc1](https://pypi.org/project/apache-airflow-providers-microsoft-azure/3.6.0rc1)
- [ ] [Refactor operator links to not create ad hoc TaskInstances (#21285)](https://github.com/apache/airflow/pull/21285): @josh-fell
## Provider [microsoft.mssql: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-microsoft-mssql/2.1.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
## Provider [microsoft.psrp: 1.1.0rc1](https://pypi.org/project/apache-airflow-providers-microsoft-psrp/1.1.0rc1)
- [x] [PSRP improvements (#19806)](https://github.com/apache/airflow/pull/19806): @malthe
## Provider [mysql: 2.2.0rc1](https://pypi.org/project/apache-airflow-providers-mysql/2.2.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
- [ ] [#20618](https://github.com/apache/airflow/pull/20618): @potiuk
## Provider [oracle: 2.2.0rc1](https://pypi.org/project/apache-airflow-providers-oracle/2.2.0rc1)
- [x] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
- [x] [Fix handling of Oracle bindvars in stored procedure call when parameters are not provided (#20720)](https://github.com/apache/airflow/pull/20720): @malthe
## Provider [postgres: 3.0.0rc1](https://pypi.org/project/apache-airflow-providers-postgres/3.0.0rc1)
- [ ] [Replaces the usage of postgres:// with postgresql:// (#21205)](https://github.com/apache/airflow/pull/21205): @potiuk
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
- [ ] [Remove `:type` lines now sphinx-autoapi supports typehints (#20951)](https://github.com/apache/airflow/pull/20951): @ashb
- [ ] [19489 - Pass client_encoding for postgres connections (#19827)](https://github.com/apache/airflow/pull/19827): @subkanthi
- [ ] [Amazon provider remove deprecation, second try (#19815)](https://github.com/apache/airflow/pull/19815): @uranusjr
## Provider [qubole: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-qubole/2.1.0rc1)
- [x] [Add Qubole how to documentation (#20058)](https://github.com/apache/airflow/pull/20058): @kazanzhy
## Provider [slack: 4.2.0rc1](https://pypi.org/project/apache-airflow-providers-slack/4.2.0rc1)
- [ ] [Return slack api call response in slack_hook (#21107)](https://github.com/apache/airflow/pull/21107): @pingzh
- [ ] [#20571](https://github.com/apache/airflow/pull/20571): @potiuk
## Provider [snowflake: 2.5.0rc1](https://pypi.org/project/apache-airflow-providers-snowflake/2.5.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
- [ ] [Fix #21096: Support boolean in extra__snowflake__insecure_mode (#21155)](https://github.com/apache/airflow/pull/21155): @mik-laj
## Provider [sqlite: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-sqlite/2.1.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
## Provider [ssh: 2.4.0rc1](https://pypi.org/project/apache-airflow-providers-ssh/2.4.0rc1)
- [ ] [Add a retry with wait interval for SSH operator (#14489)](https://github.com/apache/airflow/issues/14489): @Gaurang033
- [ ] [Add banner_timeout feature to SSH Hook/Operator (#21262)](https://github.com/apache/airflow/pull/21262): @potiuk
## Provider [tableau: 2.1.4rc1](https://pypi.org/project/apache-airflow-providers-tableau/2.1.4rc1)
- [ ] [Squelch more deprecation warnings (#21003)](https://github.com/apache/airflow/pull/21003): @uranusjr
## Provider [vertica: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-vertica/2.1.0rc1)
- [ ] [Add more SQL template fields renderers (#21237)](https://github.com/apache/airflow/pull/21237): @josh-fell
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/21348 | https://github.com/apache/airflow/pull/21353 | 8da7af2bc0f27e6d926071439900ddb27f3ae6c1 | d1150182cb1f699e9877fc543322f3160ca80780 | "2022-02-05T20:59:27Z" | python | "2022-02-06T21:25:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,336 | ["airflow/www/templates/airflow/trigger.html", "airflow/www/views.py", "tests/www/views/test_views_trigger_dag.py"] | Override the dag run_id from within the ui | ### Description
It would be great to have the ability to override generated run_ids like `scheduled__2022-01-27T14:00:00+00:00` so that it is easier to find specific DAG runs in the UI. I know the REST API allows you to specify a run_id, but it would be great if UI users could also specify a run_id, for example via the dag_run conf.
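For comparison, here is a hedged sketch of the existing REST API route that already accepts a custom run_id; it assumes the basic-auth API backend is enabled and uses placeholder credentials and DAG id.
```
# Trigger a DAG run with a custom, human-readable run_id via the stable REST API.
# Host, credentials and dag_id are placeholders.
import requests

response = requests.post(
    "http://localhost:8080/api/v1/dags/example_dag/dagRuns",
    auth=("admin", "admin"),
    json={
        "dag_run_id": "manual__rerun_for_ticket_1234",
        "conf": {"reason": "ad-hoc rerun"},
    },
)
response.raise_for_status()
print(response.json()["dag_run_id"])
```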
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21336 | https://github.com/apache/airflow/pull/21851 | 340180423a687d8171413c0c305f2060f9722177 | 14a2d9d0078569988671116473b43f86aba1161b | "2022-02-04T21:10:04Z" | python | "2022-03-16T08:12:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,325 | ["airflow/providers/cncf/kubernetes/hooks/kubernetes.py", "airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", "tests/providers/apache/flink/operators/test_flink_kubernetes.py", "tests/providers/cncf/kubernetes/hooks/test_kubernetes_pod.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py"] | on_kill method for SparkKubernetesOperator | ### Description
In some cases Airflow sends `SIGTERM` to the task, here the `SparkKubernetesOperator`. When that happens, the operator also needs to send a termination signal to (i.e. clean up) the corresponding pods/jobs.
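A hedged sketch of what such an `on_kill` could do is below: it deletes the SparkApplication custom resource via the Kubernetes API so that the driver and executor pods are cleaned up. The CRD group/version/plural follow the Spark operator's documentation; the helper and its arguments are illustrative only.
```
# Illustrative helper: delete the SparkApplication custom resource so its
# driver/executor pods are cleaned up when Airflow kills the task.
from kubernetes import client

from airflow.providers.cncf.kubernetes.hooks.kubernetes import KubernetesHook


def delete_spark_application(kubernetes_conn_id: str, namespace: str, name: str) -> None:
    api_client = KubernetesHook(conn_id=kubernetes_conn_id).get_conn()
    client.CustomObjectsApi(api_client).delete_namespaced_custom_object(
        group="sparkoperator.k8s.io",
        version="v1beta2",
        namespace=namespace,
        plural="sparkapplications",
        name=name,
    )
```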
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21325 | https://github.com/apache/airflow/pull/29977 | feab21362e2fee309990a89aea39031d94c5f5bd | 9a4f6748521c9c3b66d96598036be08fd94ccf89 | "2022-02-04T13:31:39Z" | python | "2023-03-14T22:31:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,321 | ["airflow/providers/amazon/aws/example_dags/example_ecs_ec2.py", "airflow/providers/amazon/aws/example_dags/example_ecs_fargate.py", "airflow/providers/amazon/aws/operators/ecs.py", "docs/apache-airflow-providers-amazon/operators/ecs.rst", "tests/providers/amazon/aws/operators/test_ecs.py"] | ECS Operator does not support launch type "EXTERNAL" | ### Description
You can run ECS tasks either on EC2 instances or via AWS Fargate, and these will run in AWS. With ECS Anywhere, you are now able to run the same ECS tasks on any host that has the ECS agent - on prem, in another cloud provider, etc. The control plane resides in ECS, but the execution of the task is managed by the ECS agent.
To launch tasks on hosts that are managed by ECS Anywhere, you need to specify a launch type of EXTERNAL. This is currently not supported by the ECS Operator. When you attempt to do this, you get an unsupported launch type error.
The current workaround is to use boto3 to create a task and then run it using the correct parameters, as sketched below.
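A hedged sketch of that workaround; cluster and task definition names are placeholders.
```
# boto3 workaround: run_task accepts launchType="EXTERNAL" for ECS Anywhere
# even though the ECS Operator currently rejects it.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")
response = ecs.run_task(
    cluster="my-anywhere-cluster",      # placeholder
    taskDefinition="my-task-def:1",     # placeholder
    launchType="EXTERNAL",
    count=1,
)
print(response["tasks"][0]["taskArn"])
```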
### Use case/motivation
The ability to run your task code to support hybrid and multi-cloud orchestration scenarios.
### Related issues
_No response_
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21321 | https://github.com/apache/airflow/pull/22093 | 33ecca1b9ab99d9d15006df77757825c81c24f84 | e63f6e36d14a8cd2462e80f26fb4809ab8698380 | "2022-02-04T11:23:52Z" | python | "2022-03-11T07:25:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,302 | ["airflow/www/package.json", "airflow/www/static/js/graph.js", "airflow/www/static/js/tree/Tree.jsx", "airflow/www/static/js/tree/dagRuns/index.test.jsx", "airflow/www/static/js/tree/index.jsx", "airflow/www/static/js/tree/renderTaskRows.jsx", "airflow/www/static/js/tree/renderTaskRows.test.jsx", "airflow/www/static/js/tree/useTreeData.js", "airflow/www/static/js/tree/useTreeData.test.jsx", "airflow/www/yarn.lock"] | Pause auto-refresh when the document becomes hidden | ### Description
When running Airflow, it is common to leave some Airflow tabs open but not active. I believe (but am not 100% sure; if I am wrong I can close this issue) that Airflow's auto-refresh keeps refreshing when the document becomes hidden (for example, when you switch to another browser tab).
This is not desirable when you are running the Airflow services on the same machine and have a long-running DAG (taking hours to run). This can cause CPU utilization to ramp up in the following scenario (which can be quite common for users, myself included):
1. You are running the Airflow services on your same machine
2. Your machine is not that powerful
3. You have a long-running DAG (taking hours to run)
4. You leave auto-refreshing page(s) of that DAG (such as tree or graph) open for a long time in hidden (or non-focused) tabs of your browser
- What makes this even worse is having multiple tabs like this open, since the extra processing needed to refresh the pages at a short interval is multiplied
5. You have not increased the default `auto_refresh_interval` of 3
### Use case/motivation
I am proposing the following improvements to the auto-refresh behavior to address this situation:
1. When you change tabs in your browser, there is a JavaScript feature in modern browsers called the "Page Visibility API". It allows listeners on the `visibilitychange` event to detect when a document becomes visible or hidden. This can be used to pause auto-refresh when the document becomes hidden.
- Discussion on Stack Overflow: https://stackoverflow.com/questions/1060008/is-there-a-way-to-detect-if-a-browser-window-is-not-currently-active
- MDN: https://developer.mozilla.org/en-US/docs/Web/API/Page_Visibility_API
- W3C: https://www.w3.org/TR/page-visibility/
2. We should provide a message in the UI to alert the user that the auto-refreshing is paused until the page regains focus.
3. Lastly, the option to only auto-refresh when the document is visible should be a configurable setting.
Additionally, the older `onblur` and `onfocus` listeners on the entire document could be used too. That way, if a user switches to a different window while the page is still visible, the auto-refresh can pause (although this might not be desirable if you want Airflow open side-by-side with something else, so maybe this would be going overboard)
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21302 | https://github.com/apache/airflow/pull/21904 | dfd9805a23b2d366f5c332f4cb4131462c5ba82e | 635fe533700f284da9aa04a38a5dae9ad6485454 | "2022-02-03T19:57:20Z" | python | "2022-03-08T18:31:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,201 | ["airflow/www/static/js/gantt.js", "airflow/www/static/js/graph.js", "airflow/www/static/js/task_instances.js", "airflow/www/views.py"] | Add Trigger Rule Display to Graph View | ### Description
This feature would introduce some visual addition(s) (e.g. tooltip) to the Graph View to display the trigger rule between tasks.
### Use case/motivation
This would add more detail to the Graph View, providing more visual information about the relationships between upstream and downstream tasks.
### Related issues
https://github.com/apache/airflow/issues/19939
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21201 | https://github.com/apache/airflow/pull/26043 | bdc3d4da3e0fb11661cede149f2768acb2080d25 | f94176bc7b28b496c34974b6e2a21781a9afa221 | "2022-01-28T23:17:52Z" | python | "2022-08-31T19:51:43Z" |