status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 23,005 | ["BREEZE.rst"] | Breeze: Add uninstallation instructions for Breeze | We should have information on how to uninstall Breeze:
* in the cheatsheet
* in BREEZE.rst | https://github.com/apache/airflow/issues/23005 | https://github.com/apache/airflow/pull/23045 | 2597ea47944488f3756a84bd917fa780ff5594da | 2722c42659100474b21aae3504ee4cbe24f72ab4 | "2022-04-14T09:02:52Z" | python | "2022-04-25T12:33:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,969 | ["airflow/www/views.py", "tests/www/views/test_views.py"] | Invalid execution_date crashes pages accepting the query parameter | ### Apache Airflow version
2.2.5 (latest released)
### What happened
An invalid execution_date query parameter crashes the durations page, because the pendulum parsing exception is not handled in several views.
### What you think should happen instead
On a `ParserError` the page should fall back to some default value, as the grid page does, or show an error flash message instead of crashing.
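A minimal sketch of the kind of guard a view could add, assuming it keeps using `airflow.utils.timezone.parse` as shown in the traceback below; the exact fallback value (current time vs. latest run) is an assumption:
```python
from flask import flash, request
from pendulum.parsing.exceptions import ParserError

from airflow.utils import timezone

base_date_arg = request.args.get("base_date")
try:
    # raises ParserError for garbage like "2022-04-12 16:29:21+05:30er"
    base_date = timezone.parse(base_date_arg) if base_date_arg else timezone.utcnow()
except (TypeError, ParserError):
    flash(f"Invalid base_date {base_date_arg!r}, falling back to the current time.", "error")
    base_date = timezone.utcnow()
```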
### How to reproduce
1. Visit a dag duration page with invalid date in URL : http://localhost:8080/dags/raise_exception/duration?days=30&root=&num_runs=25&base_date=2022-04-12+16%3A29%3A21%2B05%3A30er
2. Stacktrace
```python
Python version: 3.10.4
Airflow version: 2.3.0.dev0
Node: laptop
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parsing/__init__.py", line 131, in _parse
dt = parser.parse(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/dateutil/parser/_parser.py", line 1368, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/dateutil/parser/_parser.py", line 643, in parse
raise ParserError("Unknown string format: %s", timestr)
dateutil.parser._parser.ParserError: Unknown string format: 2022-04-12 16:29:21+05:30er
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/karthikeyan/stuff/python/airflow/airflow/www/auth.py", line 40, in decorated
return func(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/airflow/www/decorators.py", line 80, in wrapper
return f(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/airflow/www/views.py", line 2870, in duration
base_date = timezone.parse(base_date)
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/timezone.py", line 205, in parse
return pendulum.parse(string, tz=timezone or TIMEZONE, strict=False) # type: ignore
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parser.py", line 29, in parse
return _parse(text, **options)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parser.py", line 45, in _parse
parsed = base_parse(text, **options)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parsing/__init__.py", line 74, in parse
return _normalize(_parse(text, **_options), **_options)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parsing/__init__.py", line 135, in _parse
raise ParserError("Invalid date string: {}".format(text))
pendulum.parsing.exceptions.ParserError: Invalid date string: 2022-04-12 16:29:21+05:30er
```
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22969 | https://github.com/apache/airflow/pull/23161 | 6f82fc70ec91b493924249f062306330ee929728 | 9e25bc211f6f7bba1aff133d21fe3865dabda53d | "2022-04-13T07:20:19Z" | python | "2022-05-16T19:15:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,947 | ["airflow/hooks/dbapi.py"] | closing connection chunks in DbApiHook.get_pandas_df | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Hi all,
Please be patient with me, it's my first bug report on GitHub :)
**Affected function:** DbApiHook.get_pandas_df
**Short description**: If I use DbApiHook.get_pandas_df with parameter "chunksize" the connection is lost
**Error description**
I tried using the DbApiHook.get_pandas_df function instead of pandas.read_sql. Without the parameter "chunksize" both functions work the same. But as soon as I add the parameter chunksize to get_pandas_df, I lose the connection in the first iteration. This happens both when querying Oracle and Mysql (Mariadb) databases.
During my research I found a comment on a closed issue that describes the same -> [#8468](https://github.com/apache/airflow/issues/8468)
My Airflow version: 2.2.5
I think it has something to do with the `with closing(...)` wrapper, because when I remove it, the chunksize argument works.
```
def get_pandas_df(self, sql, parameters=None, **kwargs):
    """
    Executes the sql and returns a pandas dataframe

    :param sql: the sql statement to be executed (str) or a list of
        sql statements to execute
    :param parameters: The parameters to render the SQL query with.
    :param kwargs: (optional) passed into pandas.io.sql.read_sql method
    """
    try:
        from pandas.io import sql as psql
    except ImportError:
        raise Exception("pandas library not installed, run: pip install 'apache-airflow[pandas]'.")

    # Not working
    with closing(self.get_conn()) as conn:
        return psql.read_sql(sql, con=conn, params=parameters, **kwargs)
    # would work (no premature close)
    # return psql.read_sql(sql, con=conn, params=parameters, **kwargs)
```
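A possible shape of a fix, keeping the connection open until the chunk iterator is exhausted. This is only a sketch based on the snippet above, not the actual provider change:
```python
from contextlib import closing


def get_pandas_df(self, sql, parameters=None, **kwargs):
    from pandas.io import sql as psql

    conn = self.get_conn()
    if "chunksize" not in kwargs:
        # no iterator involved, so the connection can be closed immediately
        with closing(conn):
            return psql.read_sql(sql, con=conn, params=parameters, **kwargs)

    def _read_chunks():
        # close the connection only after the caller has consumed every chunk
        with closing(conn):
            yield from psql.read_sql(sql, con=conn, params=parameters, **kwargs)

    return _read_chunks()
```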
### What you think should happen instead
It should give me a chunk of DataFrame
### How to reproduce
**not working**
```
src_hook = OracleHook(oracle_conn_id='oracle_source_conn_id')
query = "select * from example_table"
for chunk in src_hook.get_pandas_df(query, chunksize=2):
    print(chunk.head())
```
**works**
```
for chunk in src_hook.get_pandas_df(query):
    print(chunk.head())
```
**works**
```
for chunk in pandas.read_sql(query, src_hook.get_conn(), chunksize=2):
    print(chunk.head())
```
### Operating System
macOS Monterey
### Versions of Apache Airflow Providers
apache-airflow 2.2.5
apache-airflow-providers-ftp 2.1.2
apache-airflow-providers-http 2.1.2
apache-airflow-providers-imap 2.2.3
apache-airflow-providers-microsoft-mssql 2.1.3
apache-airflow-providers-mongo 2.3.3
apache-airflow-providers-mysql 2.2.3
apache-airflow-providers-oracle 2.2.3
apache-airflow-providers-salesforce 3.4.3
apache-airflow-providers-sftp 2.5.2
apache-airflow-providers-sqlite 2.1.3
apache-airflow-providers-ssh 2.4.3
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22947 | https://github.com/apache/airflow/pull/23452 | 41e94b475e06f63db39b0943c9d9a7476367083c | ab1f637e463011a34d950c306583400b7a2fceb3 | "2022-04-12T11:41:24Z" | python | "2022-05-31T10:39:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,942 | ["airflow/models/taskinstance.py", "tests/models/test_trigger.py"] | Deferrable operator trigger event payload is not persisted in db and not passed to completion method | ### Apache Airflow version
2.2.5 (latest released)
### What happened
When the trigger fires, the event payload is added to next_kwargs under the 'event' key.
This gets persisted in the db when the operator does not provide next_kwargs, but when next_kwargs are present, the existing dict is modified in place and the change is not persisted in the db.
### What you think should happen instead
It should persist the trigger event payload in the db even when next_kwargs are provided.
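An illustration of the in-place-mutation pitfall described above; names follow the task instance model and `TriggerEvent.payload`, but treat this as a hypothetical sketch rather than the actual fix:
```python
# buggy pattern: mutating the already-stored dict in place may never be flushed
# task_instance.next_kwargs["event"] = event.payload

# safer pattern: re-assign the attribute with a fresh dict so the change is persisted
next_kwargs = task_instance.next_kwargs or {}
task_instance.next_kwargs = {**next_kwargs, "event": event.payload}
```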
### How to reproduce
_No response_
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22942 | https://github.com/apache/airflow/pull/22944 | a801ea3927b8bf3ca154fea3774ebf2d90e74e50 | bab740c0a49b828401a8baf04eb297d083605ae8 | "2022-04-12T10:00:48Z" | python | "2022-04-13T18:26:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,931 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | XCom is cleared when a task resumes from deferral. | ### Apache Airflow version
2.2.5 (latest released)
### What happened
A task's XCom value is cleared when a task is rescheduled after being deferred.
### What you think should happen instead
XCom should not be cleared in this case, as it is still the same task run.
### How to reproduce
```
from datetime import datetime, timedelta

from airflow import DAG
from airflow.models import BaseOperator
from airflow.triggers.temporal import TimeDeltaTrigger


class XComPushDeferOperator(BaseOperator):
    def execute(self, context):
        context["ti"].xcom_push("test", "test_value")
        self.defer(
            trigger=TimeDeltaTrigger(delta=timedelta(seconds=10)),
            method_name="next",
        )

    def next(self, context, event=None):
        pass


with DAG(
    "xcom_clear", schedule_interval=None, start_date=datetime(2022, 4, 11),
) as dag:
    XComPushDeferOperator(task_id="xcom_push")
```
### Operating System
macOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22931 | https://github.com/apache/airflow/pull/22932 | 4291de218e0738f32f516afe0f9d6adce7f3220d | 8b687ec82a7047fc35410f5c5bb0726de434e749 | "2022-04-12T00:34:38Z" | python | "2022-04-12T06:12:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,912 | ["airflow/www/static/css/main.css"] | Text wrap for task group tooltips | ### Description
Improve the readability of task group tooltips by wrapping the text after a certain number of characters.
### Use case/motivation
When tooltips have a lot of words in them, and your computer monitor is fairly large, Airflow will display the task group tooltip on one very long line. This can be difficult to read. It would be nice if after, say, 60 characters, additional tooltip text would be displayed on a new line.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22912 | https://github.com/apache/airflow/pull/22978 | 0cd8833df74f4b0498026c4103bab130e1fc1068 | 2f051e303fd433e64619f931eab2180db44bba23 | "2022-04-11T15:46:34Z" | python | "2022-04-13T13:57:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,897 | ["airflow/www/views.py", "tests/www/views/test_views_log.py"] | Invalid JSON metadata in get_logs_with_metadata causes server error. | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Invalid JSON metadata passed to get_logs_with_metadata causes a server error. The `json.loads` exception is not handled, unlike the validation done in other endpoints.
http://127.0.0.1:8080/get_logs_with_metadata?execution_date=2015-11-16T14:34:15+00:00&metadata=invalid
### What you think should happen instead
A proper error message should be returned instead of a server error.
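A minimal sketch of the kind of validation the view could do; variable names here are illustrative, not the actual Airflow code:
```python
import json

from flask import Response, request

# inside the get_logs_with_metadata view:
metadata_arg = request.args.get("metadata", "null")
try:
    metadata = json.loads(metadata_arg)
except json.JSONDecodeError:
    error_message = "Invalid JSON metadata"
    return Response(json.dumps({"error": error_message}), 400, content_type="application/json")
```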
### How to reproduce
Access the endpoint below with an invalid metadata payload:
http://127.0.0.1:8080/get_logs_with_metadata?execution_date=2015-11-16T14:34:15+00:00&metadata=invalid
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22897 | https://github.com/apache/airflow/pull/22898 | 8af77127f1aa332c6e976c14c8b98b28c8a4cd26 | a3dd8473e4c5bbea214ebc8d5545b75281166428 | "2022-04-11T08:03:51Z" | python | "2022-04-11T10:48:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,878 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | ECS operator throws an error on attempting to reattach to ECS tasks | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 3.2.0
### Apache Airflow version
2.2.5 (latest released)
### Operating System
Linux / ECS
### Deployment
Other Docker-based deployment
### Deployment details
We are running Docker on Open Shift 4
### What happened
There seems to be a bug in the code for ECS operator, during the "reattach" flow. We are running into some instability issues that cause our Airflow scheduler to restart. When the scheduler restarts while a task is running using ECS, the ECS operator will try to reattach to the ECS task once the Airflow scheduler restarts. The code works fine finding the ECS task and attaching to it, but then when it tries to fetch the logs, it throws the following error:
`Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1334, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1460, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1516, in _execute_task
result = execute_callable(context=context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/ecs.py", line 295, in execute
self.task_log_fetcher = self._get_task_log_fetcher()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/ecs.py", line 417, in _get_task_log_fetcher
log_stream_name = f"{self.awslogs_stream_prefix}/{self.ecs_task_id}"
AttributeError: 'EcsOperator' object has no attribute 'ecs_task_id'`
At this point, the operator will fail and the task will be marked for retries and eventually gets marked as failed, while on the ECS side, the ECS task is running fine. The manual way to fix this would be to wait for the ECS task to complete, then mark the task as successful and trigger downstream tasks. This is not very practical, since the task can take a long time (in our case the task can take hours)
### What you think should happen instead
I expect that the ECS operator should be able to reattach and pull the logs as normal.
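A guess at the missing step, based only on the traceback above (so treat names like `running_task_arn` as hypothetical): when reattaching, the operator needs to repopulate `self.ecs_task_id` from the task ARN before building the log fetcher, because the log stream name is derived from it.
```python
# ECS task ARNs end with the task id, e.g. arn:aws:ecs:eu-west-1:123456789012:task/cluster/abcdef1234567890
self.arn = running_task_arn  # ARN of the already-running task found during the reattach lookup
self.ecs_task_id = self.arn.split("/")[-1]  # used by f"{self.awslogs_stream_prefix}/{self.ecs_task_id}"
```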
### How to reproduce
Configure a task that would run using the ECS operator, and make sure it takes a very long time. Start the task, and once the logs starts flowing to Airflow, restart the Airflow scheduler. Wait for the scheduler to restart and check that upon retry, the task would be able to attach and fetch the logs.
### Anything else
When restarting Airflow, it tries to kill the task at hand. In our case, we didn't give the permission to the AWS role to kill the running ECS tasks, and therefore the ECS tasks keep running during the restart of Airflow. Others might not have this setup, and therefore they won't run into the "reattach" flow, and they won't encounter the issue reported here. This is not a good option for us, since our tasks can take hours to complete, and we don't want to interfere with their execution.
We also need to improve the stability of the Open Shift infrastructure where Airflow is running, so that the scheduler doesn't restart so often, but that is a different story.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22878 | https://github.com/apache/airflow/pull/23370 | 3f6d5eef427f3ea33d0cd342143983f54226bf05 | d6141c6594da86653b15d67eaa99511e8fe37a26 | "2022-04-09T17:25:06Z" | python | "2022-05-01T10:58:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,843 | ["airflow/models/dag.py", "airflow/models/param.py", "tests/models/test_dag.py"] | When passing the 'False' value to the parameters of a decorated dag function I get this traceback | ### Apache Airflow version
2.2.3
### What happened
When passing the `False` value to a decorated dag function I get this traceback below. Also the default value is not shown when clicking 'trigger dag w/ config'.
```[2022-04-07, 20:08:57 UTC] {taskinstance.py:1259} INFO - Executing <Task(_PythonDecoratedOperator): value_consumer> on 2022-04-07 20:08:56.914410+00:00
[2022-04-07, 20:08:57 UTC] {standard_task_runner.py:52} INFO - Started process 2170 to run task
[2022-04-07, 20:08:57 UTC] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'check_ui_config', 'value_consumer', 'manual__2022-04-07T20:08:56.914410+00:00', '--job-id', '24', '--raw', '--subdir', 'DAGS_FOLDER/check_ui_config.py', '--cfg-path', '/tmp/tmpww9euksv', '--error-file', '/tmp/tmp7kjdfks5']
[2022-04-07, 20:08:57 UTC] {standard_task_runner.py:77} INFO - Job 24: Subtask value_consumer
[2022-04-07, 20:08:57 UTC] {logging_mixin.py:109} INFO - Running <TaskInstance: check_ui_config.value_consumer manual__2022-04-07T20:08:56.914410+00:00 [running]> on host a643f8828615
[2022-04-07, 20:08:57 UTC] {taskinstance.py:1700} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1418, in _execute_task_with_callbacks
self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1992, in render_templates
self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1061, in render_template_fields
self._do_render_template_fields(self, self.template_fields, context, jinja_env, set())
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1074, in _do_render_template_fields
rendered_content = self.render_template(content, context, jinja_env, seen_oids)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1125, in render_template
return tuple(self.render_template(element, context, jinja_env) for element in content)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1125, in <genexpr>
return tuple(self.render_template(element, context, jinja_env) for element in content)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1116, in render_template
return content.resolve(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/param.py", line 226, in resolve
raise AirflowException(f'No value could be resolved for parameter {self._name}')
airflow.exceptions.AirflowException: No value could be resolved for parameter test
[2022-04-07, 20:08:57 UTC] {taskinstance.py:1267} INFO - Marking task as FAILED. dag_id=check_ui_config, task_id=value_consumer, execution_date=20220407T200856, start_date=20220407T200857, end_date=20220407T200857
[2022-04-07, 20:08:57 UTC] {standard_task_runner.py:89} ERROR - Failed to execute job 24 for task value_consumer
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 298, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1418, in _execute_task_with_callbacks
self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1992, in render_templates
self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1061, in render_template_fields
self._do_render_template_fields(self, self.template_fields, context, jinja_env, set())
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1074, in _do_render_template_fields
rendered_content = self.render_template(content, context, jinja_env, seen_oids)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1125, in render_template
return tuple(self.render_template(element, context, jinja_env) for element in content)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1125, in <genexpr>
return tuple(self.render_template(element, context, jinja_env) for element in content)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1116, in render_template
return content.resolve(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/param.py", line 226, in resolve
raise AirflowException(f'No value could be resolved for parameter {self._name}')
airflow.exceptions.AirflowException: No value could be resolved for parameter test
```
### What you think should happen instead
I think airflow should be able to handle the False value when passing it as a dag param.
### How to reproduce
```
from airflow.decorators import dag, task
from airflow.models.param import Param
from datetime import datetime, timedelta
@task
def value_consumer(val):
print(val)
@dag(
start_date=datetime(2021, 1, 1),
schedule_interval=timedelta(days=365, hours=6)
)
def check_ui_config(test):
value_consumer(test)
the_dag = check_ui_config(False)
```
### Operating System
Docker (debian:buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro cli with this image:
quay.io/astronomer/ap-airflow-dev:2.2.3-2
### Anything else
![Screenshot from 2022-04-07 14-13-43](https://user-images.githubusercontent.com/102494105/162288264-bb6c6ca6-977f-4ff7-a0cc-9616c0ce8ac8.png)
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22843 | https://github.com/apache/airflow/pull/22964 | e09b4f144d1edefad50a58ebef56bd40df4eb39c | a0f7e61497d547b82edc1154d39535d79aaedff3 | "2022-04-07T20:14:46Z" | python | "2022-04-13T07:48:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,833 | ["airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Allow mapped Task as input to another mapped task | This dag
```python
with DAG(dag_id="simple_mapping", start_date=pendulum.DateTime(2022, 4, 6), catchup=True) as d3:
@task(email='a@b.com')
def add_one(x: int):
return x + 1
two_three_four = add_one.expand(x=[1, 2, 3])
three_four_five = add_one.expand(x=two_three_four)
```
Fails with this error:
```
File "/home/ash/code/airflow/airflow/airflow/models/taskinstance.py", line 2239, in _record_task_map_for_downstreams
raise UnmappableXComTypePushed(value)
airflow.exceptions.UnmappableXComTypePushed: unmappable return type 'int'
``` | https://github.com/apache/airflow/issues/22833 | https://github.com/apache/airflow/pull/22849 | 1a8b8f521c887716d7e0c987a58e8e5c3b62bdaa | 8af77127f1aa332c6e976c14c8b98b28c8a4cd26 | "2022-04-07T14:21:14Z" | python | "2022-04-11T09:29:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,810 | ["airflow/providers/jira/sensors/jira.py"] | JiraTicketSensor duplicates TaskId | ### Apache Airflow Provider(s)
jira
### Versions of Apache Airflow Providers
apache-airflow-providers-jira==2.0.1
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux 2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
I've been trying to use the Jira Operator to create a Ticket from Airflow and use the JiraTicketSensor to check if the ticket was resolved. Creating the task works fine, but I can't get the Sensor to work.
If I don't provide the method_name I get an error that it is required; if I provide it as None, I get an error saying the task id has already been added to the DAG.
```text
Broken DAG: [/usr/local/airflow/dags/jira_ticket_sensor.py] Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 553, in __init__
task_group.add(self)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/task_group.py", line 175, in add
raise DuplicateTaskIdFound(f"Task id '{key}' has already been added to the DAG")
airflow.exceptions.DuplicateTaskIdFound: Task id 'jira_sensor' has already been added to the DAG
```
### What you think should happen instead
_No response_
### How to reproduce
use this dag
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.jira.sensors.jira import JiraTicketSensor

with DAG(
    dag_id='jira_ticket_sensor',
    schedule_interval=None,
    start_date=datetime(2021, 1, 1),
    catchup=False
) as dag:
    jira_sensor = JiraTicketSensor(
        task_id='jira_sensor',
        jira_conn_id='jira_default',
        ticket_id='TEST-1',
        field='status',
        expected_value='Completed',
        method_name='issue',
        poke_interval=600
    )
```
### Anything else
This error occurs every time
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22810 | https://github.com/apache/airflow/pull/23046 | e82a2fdf841dd571f3b8f456c4d054cf3a94fc03 | bf10545d8358bcdb9ca5dacba101482296251cab | "2022-04-07T10:43:06Z" | python | "2022-04-25T11:16:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,790 | ["chart/templates/secrets/metadata-connection-secret.yaml", "tests/charts/test_basic_helm_chart.py"] | Helm deployment fails when postgresql.nameOverride is used | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Helm installation fails with the following config:
```
postgresql:
  enabled: true
  nameOverride: overridename
```
The problem manifests in the `-airflow-metadata` secret, where the connection string is generated without respecting the `nameOverride`.
With the example config the generated string should be:
`postgresql://postgres:postgres@myrelease-overridename:5432/postgres?sslmode=disable`
but the actual string generated is:
`postgresql://postgres:postgres@myrelease-overridename.namespace:5432/postgres?sslmode=disable`
### What you think should happen instead
Installation should succeed with correctly generated metadata connection string
### How to reproduce
To reproduce just set the following in values.yaml and attempt `helm install`
```
postgresql:
  enabled: true
  nameOverride: overridename
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
using helm with kind cluster
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22790 | https://github.com/apache/airflow/pull/29214 | 338a633fc9faab54e72c408e8a47eeadb3ad55f5 | 56175e4afae00bf7ccea4116ecc09d987a6213c3 | "2022-04-06T16:28:38Z" | python | "2023-02-02T17:00:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,738 | ["airflow/models/taskinstance.py", "airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | Webserver doesn't mask rendered fields for pending tasks | ### Apache Airflow version
2.2.5 (latest released)
### What happened
When triggering a new dagrun, the webserver does not mask secrets in the rendered fields for that dagrun's tasks that have not started yet.
Tasks that have completed or are currently running are not affected by this.
### What you think should happen instead
The webserver should mask all secrets for tasks regardless of whether they have started yet.
<img width="628" alt="Screenshot 2022-04-04 at 15 36 29" src="https://user-images.githubusercontent.com/7921017/161628806-c2c579e2-faea-40cc-835c-ac6802d15dc1.png">
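Airflow already ships a redaction helper in `airflow.utils.log.secrets_masker`; a rough sketch of applying it to rendered fields before they are displayed follows (where exactly this belongs for not-yet-started tasks is the open question of this report):
```python
from airflow.utils.log.secrets_masker import redact

# rendered_fields: mapping of template field name -> rendered value for the task instance
masked_fields = {name: redact(value, name) for name, value in rendered_fields.items()}
```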
### How to reproduce
Create a variable `my_secret` and run this DAG
```python
from datetime import timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.sensors.time_delta import TimeDeltaSensor
from airflow.utils.dates import days_ago

with DAG(
    "secrets",
    start_date=days_ago(1),
    schedule_interval=None,
) as dag:
    wait = TimeDeltaSensor(
        task_id="wait",
        delta=timedelta(minutes=1),
    )

    task = wait >> BashOperator(
        task_id="secret_task",
        bash_command="echo '{{ var.value.my_secret }}'",
    )
```
While the first task `wait` is running, displaying rendered fields for the second task `secret_task` will show the unmasked secret variable.
<img width="1221" alt="Screenshot 2022-04-04 at 15 33 43" src="https://user-images.githubusercontent.com/7921017/161628734-b7b13190-a3fe-4898-8fa9-ff7537245c1c.png">
### Operating System
Debian (Astronomer Airflow Docker image)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!3.2.0
apache-airflow-providers-cncf-kubernetes==1!3.0.0
apache-airflow-providers-elasticsearch==1!3.0.2
apache-airflow-providers-ftp==1!2.1.2
apache-airflow-providers-google==1!6.7.0
apache-airflow-providers-http==1!2.1.2
apache-airflow-providers-imap==1!2.2.3
apache-airflow-providers-microsoft-azure==1!3.7.2
apache-airflow-providers-mysql==1!2.2.3
apache-airflow-providers-postgres==1!4.1.0
apache-airflow-providers-redis==1!2.0.4
apache-airflow-providers-slack==1!4.2.3
apache-airflow-providers-sqlite==1!2.1.3
apache-airflow-providers-ssh==1!2.4.3
```
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
We have seen this issue also in Airflow 2.2.3.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22738 | https://github.com/apache/airflow/pull/23807 | 10a0d8e7085f018b7328533030de76b48de747e2 | 2dc806367c3dc27df5db4b955d151e789fbc78b0 | "2022-04-04T20:47:44Z" | python | "2022-05-21T15:36:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,731 | ["airflow/models/dag.py", "airflow/models/taskmixin.py", "airflow/serialization/serialized_objects.py", "airflow/utils/task_group.py", "airflow/www/views.py", "tests/models/test_dag.py", "tests/serialization/test_dag_serialization.py", "tests/utils/test_task_group.py"] | Fix the order that tasks are displayed in Grid view | The order that tasks are displayed in Grid view do not correlate with the order that the tasks would be expected to execute in the DAG. See `example_bash_operator` below:
<img width="335" alt="Screen Shot 2022-04-04 at 11 47 31 AM" src="https://user-images.githubusercontent.com/4600967/161582603-dffea697-68d9-4145-909d-3240f3a65ad2.png">
<img width="426" alt="Screen Shot 2022-04-04 at 11 47 36 AM" src="https://user-images.githubusercontent.com/4600967/161582604-d59885cc-2c71-4a7d-b332-e439115d8c4c.png">
We should update the [task_group_to_tree](https://github.com/apache/airflow/blob/main/airflow/www/views.py#L232) function in views.py to better approximate the order that tasks would be run.
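One way to approximate execution order is a topological sort of each group's children, so upstream tasks are listed before their downstream tasks. A rough sketch of that idea using the standard library (not the actual implementation):
```python
from graphlib import TopologicalSorter  # Python 3.9+


def topological_order(tasks):
    """Return tasks ordered so that every task appears after its upstream tasks."""
    by_id = {t.task_id: t for t in tasks}
    graph = {t.task_id: {u.task_id for u in t.upstream_list if u.task_id in by_id} for t in tasks}
    return [by_id[task_id] for task_id in TopologicalSorter(graph).static_order()]
```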
| https://github.com/apache/airflow/issues/22731 | https://github.com/apache/airflow/pull/22741 | e9df0f2de95bb69490d9530d5a27d7b05b71c32e | 34154803ac73d62d3e969e480405df3073032622 | "2022-04-04T15:49:06Z" | python | "2022-04-05T12:59:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,730 | ["airflow/providers/dbt/cloud/hooks/dbt.py", "tests/providers/dbt/cloud/hooks/test_dbt_cloud.py", "tests/providers/dbt/cloud/sensors/test_dbt_cloud.py"] | dbt Cloud Provider only works for Multi-tenant instances | ### Apache Airflow Provider(s)
dbt-cloud
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.5 (latest released)
### Operating System
any
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
Some dbt Cloud deployments require a different base URL (e.g. X.getdbt.com or cloud.X.getdbt.com).
Relevant line: https://github.com/apache/airflow/blame/436c17c655494eff5724df98d1a231ffa2142253/airflow/providers/dbt/cloud/hooks/dbt.py#L154
self.base_url = "https://cloud.getdbt.com/api/v2/accounts/"
### What you think should happen instead
A runtime parameter that defaults to cloud.getdbt.com.
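A hypothetical illustration of what such a parameter could look like (the `tenant` name and its wiring into the hook are assumptions, not the provider's actual API):
```python
def dbt_cloud_base_url(tenant: str = "cloud") -> str:
    # "cloud" preserves the current multi-tenant default; single-tenant
    # deployments could pass their own subdomain, e.g. "my-company"
    return f"https://{tenant}.getdbt.com/api/v2/accounts/"
```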
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22730 | https://github.com/apache/airflow/pull/24264 | 98b4e48fbc1262f1381e7a4ca6cce31d96e6f5e9 | 7498fba826ec477b02a40a2e23e1c685f148e20f | "2022-04-04T15:43:54Z" | python | "2022-06-06T23:32:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,705 | ["airflow/providers/google/cloud/transfers/local_to_gcs.py", "tests/providers/google/cloud/transfers/test_local_to_gcs.py"] | LocalFileSystemToGCSOperator give false positive while copying file from src to dest, even when src has no file | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.4.0
### Apache Airflow version
2.1.4
### Operating System
Debian GNU/Linux 10 (buster)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When you run LocalFilesystemToGCSOperator with the params for src and dest, the operator reports a false positive (success) when there are no files present under the specified src directory. I expected it to fail, stating that the specified directory doesn't contain any files.
[2022-03-15 14:26:15,475] {taskinstance.py:1107} INFO - Executing <Task(LocalFilesystemToGCSOperator): upload_files_to_GCS> on 2022-03-15T14:25:59.554459+00:00
[2022-03-15 14:26:15,484] {standard_task_runner.py:52} INFO - Started process 709 to run task
[2022-03-15 14:26:15,492] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'dag', 'upload_files_to_GCS', '2022-03-15T14:25:59.554459+00:00', '--job-id', '1562', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/dag.py', '--cfg-path', '/tmp/tmp_e9t7pl9', '--error-file', '/tmp/tmpyij6m4er']
[2022-03-15 14:26:15,493] {standard_task_runner.py:77} INFO - Job 1562: Subtask upload_files_to_GCS
[2022-03-15 14:26:15,590] {logging_mixin.py:104} INFO - Running <TaskInstance: dag.upload_files_to_GCS 2022-03-15T14:25:59.554459+00:00 [running]> on host 653e566fd372
[2022-03-15 14:26:15,752] {taskinstance.py:1300} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=jet2
AIRFLOW_CTX_DAG_ID=dag
AIRFLOW_CTX_TASK_ID=upload_files_to_GCS
AIRFLOW_CTX_EXECUTION_DATE=2022-03-15T14:25:59.554459+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-03-15T14:25:59.554459+00:00
[2022-03-15 14:26:19,357] {taskinstance.py:1204} INFO - Marking task as SUCCESS. gag, task_id=upload_files_to_GCS, execution_date=20220315T142559, start_date=20220315T142615, end_date=20220315T142619
[2022-03-15 14:26:19,422] {taskinstance.py:1265} INFO - 1 downstream tasks scheduled from follow-on schedule check
[2022-03-15 14:26:19,458] {local_task_job.py:149} INFO - Task exited with return code 0
### What you think should happen instead
The operator should at least log that no files were copied, rather than just marking the task successful.
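A rough sketch of the kind of guard the operator could add (assuming, as the provider does, that a non-list `src` is resolved with `glob`); whether to warn or raise is a design choice:
```python
from glob import glob

filepaths = self.src if isinstance(self.src, list) else glob(self.src)
if not filepaths:
    # alternatively raise AirflowException to fail the task outright
    self.log.warning("No files found matching %s; nothing will be uploaded to GCS.", self.src)
```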
### How to reproduce
- create a Dag with LocalFilesSystemToGCSOperator
- specify an empty directory as src and a gcp bucket as bucket_name, dest param(can be blank).
- run the dag
### Anything else
No
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22705 | https://github.com/apache/airflow/pull/22772 | 921ccedf7f90f15e8d18c27a77b29d232be3c8cb | 838cf401b9a424ad0fbccd5fb8d3040a8f4a7f44 | "2022-04-02T11:30:11Z" | python | "2022-04-06T19:22:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,693 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator failure email alert with actual error log from command executed | ### Description
When a command executed using KubernetesPodOperator fails, the alert email only says:
`Exception: Pod Launching failed: Pod pod_name_xyz returned a failure`
along with other parameters supplied to the operator, but it doesn't contain the actual error message thrown by the command.
~~I am thinking similar to how xcom works with KubernetesPodOperator, if the command could write the error log in sidecar container in /airflow/log/error.log and airflow picks that up, then it could be included in the alert email (probably at the top). It can use same sidecar as for xcom (if that is easier to maintain) but write in different folder.~~
Looks like kubernetes has a way to send termination message.
https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/
Just need to pull that from container status message and include it in failure message at the top.
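A hedged sketch of reading that termination message with the official Kubernetes Python client (`remote_pod` is a `V1Pod`, e.g. from `read_namespaced_pod`); how it gets merged into the operator's failure message is left open:
```python
def get_termination_message(remote_pod, container_name="base"):
    """Return the terminationMessage of the given container, if the pod recorded one."""
    for status in remote_pod.status.container_statuses or []:
        terminated = status.state.terminated if status.state else None
        if status.name == container_name and terminated:
            return terminated.message
    return None
```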
### Use case/motivation
Similar to how the failure email for most other operators includes the key error message right there, without having to log in to Airflow to see the logs, I expect similar functionality from the KubernetesPodOperator too.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22693 | https://github.com/apache/airflow/pull/22871 | ddb5d9b4a2b4e6605f66f82a6bec30393f096c05 | d81703c5778e13470fcd267578697158776b8318 | "2022-04-01T17:07:52Z" | python | "2022-04-14T00:16:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,689 | ["docs/apache-airflow-providers-apache-hdfs/index.rst"] | HDFS provider causes TypeError: __init__() got an unexpected keyword argument 'encoding' | ### Discussed in https://github.com/apache/airflow/discussions/22301
<div type='discussions-op-text'>
<sup>Originally posted by **frankie1211** March 16, 2022</sup>
I built a custom container image; below is my Dockerfile.
```dockerfile
FROM apache/airflow:2.2.4-python3.9

USER root
RUN apt-get update \
    && apt-get install -y gcc g++ vim libkrb5-dev build-essential libsasl2-dev \
    && apt-get autoremove -yqq --purge \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

USER airflow
RUN pip install --upgrade pip
RUN pip install apache-airflow-providers-apache-spark --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.4/constraints-3.9.txt"
RUN pip install apache-airflow-providers-apache-hdfs --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.4/constraints-3.9.txt"
```
But I got this error when I run the container:
```
airflow-init_1 | The container is run as root user. For security, consider using a regular user account.
airflow-init_1 | ....................
airflow-init_1 | ERROR! Maximum number of retries (20) reached.
airflow-init_1 |
airflow-init_1 | Last check result:
airflow-init_1 | $ airflow db check
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 5, in <module>
airflow-init_1 | from airflow.__main__ import main
airflow-init_1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 28, in <module>
airflow-init_1 | from airflow.cli import cli_parser
airflow-init_1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 621, in <module>
airflow-init_1 | type=argparse.FileType('w', encoding='UTF-8'),
airflow-init_1 | TypeError: __init__() got an unexpected keyword argument 'encoding'
airflow-init_1 |
airflow_airflow-init_1 exited with code 1
```
</div> | https://github.com/apache/airflow/issues/22689 | https://github.com/apache/airflow/pull/29614 | 79c07e3fc5d580aea271ff3f0887291ae9e4473f | 0a4184e34c1d83ad25c61adc23b838e994fc43f1 | "2022-04-01T14:05:22Z" | python | "2023-02-19T20:37:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,675 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | GCSToGCSOperator cannot copy a single file/folder without copying other files/folders with that prefix | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.4 (latest released)
### Operating System
MacOS 12.2.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
I have file "hourse.jpeg" and "hourse.jpeg.copy" and a folder "hourse.jpeg.folder" in source bucket.
I use the following code to try to copy only "hourse.jpeg" to another bucket.
gcs_to_gcs_op = GCSToGCSOperator(
    task_id="gcs_to_gcs",
    source_bucket=my_source_bucket,
    source_object="hourse.jpeg",
    destination_bucket=my_destination_bucket
)
The result is that the two files and the folder mentioned above are all copied.
From the source code it seems there is no way to do what I want.
### What you think should happen instead
Only the file specified should be copied; that means we should treat source_object as an exact match instead of a prefix.
To accomplish the current prefix behavior, the user can/should use a wildcard:
source_object="hourse.jpeg*"
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22675 | https://github.com/apache/airflow/pull/24039 | 5e6997ed45be0972bf5ea7dc06e4e1cef73b735a | ec84ffe71cfa8246155b9b4cb10bf2167e75adcf | "2022-04-01T06:25:57Z" | python | "2022-06-06T12:17:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,665 | ["airflow/models/mappedoperator.py"] | Superfluous TypeError when passing not-iterables to `expand()` | ### Apache Airflow version
main (development)
### What happened
Here's a problematic dag. `False` is invalid here.
```python3
from airflow.models import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta

with DAG(
    dag_id="singleton_expanded",
    schedule_interval=timedelta(days=365),
    start_date=datetime(2001, 1, 1),
) as dag:
    # has problem
    PythonOperator.partial(
        task_id="foo",
        python_callable=lambda x: "hi" if x else "bye",
    ).expand(op_args=False)
```
When I check for errors like `python dags/the_dag.py` I get the following error:
```
Traceback (most recent call last):
File "/Users/matt/2022/03/30/dags/the_dag.py", line 13, in <module>
PythonOperator.partial(
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 187, in expand
validate_mapping_kwargs(self.operator_class, "expand", mapped_kwargs)
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 116, in validate_mapping_kwargs
raise ValueError(error)
ValueError: PythonOperator.expand() got an unexpected type 'bool' for keyword argument op_args
Exception ignored in: <function OperatorPartial.__del__ at 0x10c63b1f0>
Traceback (most recent call last):
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 182, in __del__
warnings.warn(f"{self!r} was never mapped!")
File "/usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/warnings.py", line 109, in _showwarnmsg
sw(msg.message, msg.category, msg.filename, msg.lineno,
File "/Users/matt/src/airflow/airflow/settings.py", line 115, in custom_show_warning
from rich.markup import escape
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 982, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 925, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1414, in find_spec
File "<frozen importlib._bootstrap_external>", line 1380, in _get_spec
TypeError: 'NoneType' object is not iterable
```
### What you think should happen instead
I'm not sure what's up with that type error, the ValueError is what I needed to see. So I expected this:
```
Traceback (most recent call last):
File "/Users/matt/2022/03/30/dags/the_dag.py", line 13, in <module>
PythonOperator.partial(
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 187, in expand
validate_mapping_kwargs(self.operator_class, "expand", mapped_kwargs)
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 116, in validate_mapping_kwargs
raise ValueError(error)
ValueError: PythonOperator.expand() got an unexpected type 'bool' for keyword argument op_args
```
### How to reproduce
_No response_
### Operating System
Mac OS
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
- cloned main at 327eab3e2
- created fresh venv and used pip to install
- `airflow info`
- `airflow db init`
- add the dag
- `python dags/the_dag.py`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22665 | https://github.com/apache/airflow/pull/22678 | 9583c1cab65d28146e73aab0993304886c724bf3 | 17cf6367469c059c82bb7fa4289645682ef22dda | "2022-03-31T19:16:46Z" | python | "2022-04-01T10:14:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,657 | ["chart/templates/flower/flower-ingress.yaml", "chart/templates/webserver/webserver-ingress.yaml"] | Wrong apiVersion Detected During Ingress Creation | ### Official Helm Chart version
1.5.0 (latest released)
### Apache Airflow version
2.2.4 (latest released)
### Kubernetes Version
microk8s 1.23/stable
### Helm Chart configuration
```
executor: KubernetesExecutor

ingress:
  enabled: true

  ## airflow webserver ingress configs
  web:
    annotations:
      kubernetes.io/ingress.class: public
    hosts:
      - name: "example.com"
        path: "/airflow"

## Disabled due to using KubernetesExecutor as recommended in the documentation
flower:
  enabled: false

## Disabled due to using KubernetesExecutor as recommended in the documentation
redis:
  enabled: false
```
### Docker Image customisations
No customization required to recreate, the default image has the same behavior.
### What happened
As shown in the installation notes below, the install fails because the web ingress template uses a semverCompare to check that the Kubernetes version is at least 1.19 and, if it's not, falls back to the v1beta1 networking API version. The microk8s install exceeds this version, so I would expect the webserver Ingress to use "networking.k8s.io/v1" instead of the beta version.
Airflow installation
```
$: helm install airflow apache-airflow/airflow --namespace airflow --values ./custom-values.yaml
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
```
microk8s installation
```
$: kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.5-2+c812603a312d2b", GitCommit:"c812603a312d2b0c59687a1be1ae17c0878104cc", GitTreeState:"clean", BuildDate:"2022-03-17T16:14:08Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.5-2+c812603a312d2b", GitCommit:"c812603a312d2b0c59687a1be1ae17c0878104cc", GitTreeState:"clean", BuildDate:"2022-03-17T16:11:06Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
```
### What you think should happen instead
The Webserver Ingress chart should detect that the kube version is greater than 1.19 and utilize the version ```networking.k8s.io/v1```.
### How to reproduce
On Ubuntu 18.04, run:
1. ```sudo snap install microk8s --classic```
2. ```microk8s status --wait-ready```
3. ```microk8s enable dns ha-cluster helm3 ingress metrics-server storage```
4. ```microk8s helm3 repo add apache-airflow https://airflow.apache.org```
5. ```microk8s kubectl create namespace airflow```
6. ```touch ./custom-values.yaml```
7. ```vi ./custom-values.yaml``` and insert the values.yaml contents from above
8. ```microk8s helm3 install airflow apache-airflow/airflow --namespace airflow --values ./custom-values.yaml```
### Anything else
This problem can be reproduced consistently.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22657 | https://github.com/apache/airflow/pull/28461 | e377e869da9f0e42ac1e0a615347cf7cd6565d54 | 5c94ef0a77358dbee8ad8735a132b42d78843df7 | "2022-03-31T16:19:33Z" | python | "2022-12-19T15:03:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,647 | ["airflow/utils/sqlalchemy.py"] | SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Error
```
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/xcom.py:437: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
return query.delete()
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py:2214: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
for result in query.with_entities(XCom.task_id, XCom.value)
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:126: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
session.merge(self)
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:162: SAWarning: Coercing Subquery object into a select() for use in IN(); please pass a select() construct explicitly
tuple_(cls.dag_id, cls.task_id, cls.execution_date).notin_(subq1),
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:163: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
).delete(synchronize_session=False)
```
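The warning itself names the change SQLAlchemy 1.4 expects. A minimal sketch of what that could look like for the `UtcDateTime` decorator in `airflow/utils/sqlalchemy.py` (assuming the type keeps no per-instance state that affects generated SQL; the real fix may differ):
```python
from sqlalchemy.types import DateTime, TypeDecorator


class UtcDateTime(TypeDecorator):
    impl = DateTime(timezone=True)
    cache_ok = True  # tells SQLAlchemy 1.4 this type is safe to use in statement cache keys
```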
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com/"
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sqlite==2.1.0
### Deployment
Other
### Deployment details
Pip package
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22647 | https://github.com/apache/airflow/pull/24499 | cc6a44bdc396a305fd53c7236427c578e9d4d0b7 | d9694733cafd9a3d637eb37d5154f0e1e92aadd4 | "2022-03-31T12:23:17Z" | python | "2022-07-05T12:50:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,606 | ["airflow/providers/jenkins/operators/jenkins_job_trigger.py"] | Jenkins JobTriggerOperator bug when polling for new build | ### Apache Airflow Provider(s)
jenkins
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.3
### Operating System
macOs Monterey 12.0.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
No specific details
### What happened
The JenkinsJobTriggerOperator polls the Jenkins queue to resolve the newly created build number, and retries the poll a few times if it fails. However, the queue-item response may not yet contain the information we are looking for: the `executable` key in the JSON body can be `None`, and the code that checks its details iterates over that value, which then raises a `TypeError`.
This is a bug, and because the operator task then fails and is retried, it can result in the same Jenkins job being triggered multiple times.
```
[2022-03-29, 11:51:40 PDT] {taskinstance.py:1259} INFO - Executing <Task(JenkinsJobTriggerOperator): trigger_downstream_jenkins> on 2022-03-29 17:25:00+00:00
[2022-03-29, 11:51:40 PDT] {standard_task_runner.py:52} INFO - Started process 25 to run task
[2022-03-29, 11:51:40 PDT] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'dag_code', 'trigger_downstream_jenkins', 'scheduled__2022-03-29T17:25:00+00:00', '--job-id', '302242', '--raw', '--subdir', 'DAGS_FOLDER/git_dags/somecode.py', '--cfg-path', '/tmp/tmp84qyueun', '--error-file', '/tmp/tmp732pjeg5']
[2022-03-29, 11:51:40 PDT] {standard_task_runner.py:77} INFO - Job 302242: Subtask trigger_downstream_jenkins
[2022-03-29, 11:51:40 PDT] {logging_mixin.py:109} INFO - Running <TaskInstance: code_v1trigger_downstream_jenkins scheduled__2022-03-29T17:25:00+00:00 [running]> on host codetriggerdownstreamjenkins.79586cc902e641be9e9
[2022-03-29, 11:51:40 PDT] {taskinstance.py:1424} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=data
AIRFLOW_CTX_DAG_ID=Code_v1
AIRFLOW_CTX_TASK_ID=trigger_downstream_jenkins
AIRFLOW_CTX_EXECUTION_DATE=2022-03-29T17:25:00+00:00
AIRFLOW_CTX_DAG_RUN_ID=scheduled__2022-03-29T17:25:00+00:00
[2022-03-29, 11:51:40 PDT] {jenkins_job_trigger.py:182} INFO - Triggering the job/Production/Downstream Trigger - Production on the jenkins : JENKINS with the parameters : None
[2022-03-29, 11:51:40 PDT] {base.py:70} INFO - Using connection to: id: JENKINS. Host: server.com, Port: 443, Schema: , Login: user, Password: ***, extra: True
[2022-03-29, 11:51:40 PDT] {jenkins.py:43} INFO - Trying to connect to [https://server.com:443](https://server.com/)
[2022-03-29, 11:51:40 PDT] {kerberos_.py:325} ERROR - handle_other(): Mutual authentication unavailable on 403 response
[2022-03-29, 11:51:40 PDT] {jenkins_job_trigger.py:160} INFO - Polling jenkins queue at the url https://servercom/queue/item/5831525//api/json
[2022-03-29, 11:51:40 PDT] {taskinstance.py:1700} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1455, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1511, in _execute_task
result = execute_callable(context=context)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/jenkins/operators/jenkins_job_trigger.py", line 191, in execute
build_number = self.poll_job_in_queue(jenkins_response['headers']['Location'], jenkins_server)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/jenkins/operators/jenkins_job_trigger.py", line 167, in poll_job_in_queue
if 'executable' in json_response and 'number' in json_response['executable']:
TypeError: argument of type 'NoneType' is not iterable
[2022-03-29, 11:51:40 PDT] {taskinstance.py:1267} INFO - Marking task as UP_FOR_RETRY. dag_id=code_v1 task_id=trigger_downstream_jenkins, execution_date=20220329T172500, start_date=20220329T185140, end_date=20220329T185140
[2022-03-29, 11:51:40 PDT] {standard_task_runner.py:89} ERROR - Failed to execute job 302242 for task trigger_downstream_jenkins
Traceback (most recent call last):
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 298, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1455, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1511, in _execute_task
result = execute_callable(context=context)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/jenkins/operators/jenkins_job_trigger.py", line 191, in execute
build_number = self.poll_job_in_queue(jenkins_response['headers']['Location'], jenkins_server)
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/jenkins/operators/jenkins_job_trigger.py", line 167, in poll_job_in_queue
if 'executable' in json_response and 'number' in json_response['executable']:
TypeError: argument of type 'NoneType' is not iterable
[2022-03-29, 11:51:40 PDT] {local_task_job.py:154} INFO - Task exited with return code 1
[2022-03-29, 11:51:40 PDT] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
There should be an extra check so the code does not iterate over the `executable` key when the Jenkins queue-poll response returns it as `None`.
This is the current code:
https://github.com/apache/airflow/blob/b0b69f3ea7186e76a04b733022b437b57a087a2e/airflow/providers/jenkins/operators/jenkins_job_trigger.py#L161
It should be updated to this code:
```python
if 'executable' in json_response and json_response['executable'] is not None and 'number' in json_response['executable']:
```
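An equivalent, slightly more defensive way to express the same guard (just a sketch, not the provider's actual code) is to normalise the possibly-missing value first:
```python
# Sketch: treat a missing or null "executable" entry as an empty dict,
# so the membership test never iterates over None.
executable = json_response.get('executable') or {}
if 'number' in executable:
    build_number = executable['number']
```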
### How to reproduce
It happens randomly during polling calls to Jenkins and might not happen at all. It also depends on Jenkins performance: if builds start quickly enough, the problem may never surface.
### Anything else
No further info.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22606 | https://github.com/apache/airflow/pull/22608 | 5247445ff13e4b9cf73c26f902af03791f48f04d | c30ab6945ea0715889d32e38e943c899a32d5862 | "2022-03-29T21:52:20Z" | python | "2022-04-04T12:01:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,576 | ["airflow/providers/ssh/hooks/ssh.py", "tests/providers/ssh/hooks/test_ssh.py"] | SFTP connection hook not working when using inline Ed25519 key from Airflow connection | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I am trying to create an SFTP connection whose extra params include `private_key` containing the text of my private key, i.e.: `{"look_for_keys": "false", "no_host_key_check": "true", "private_key": "-----BEGIN OPENSSH PRIVATE KEY-----\nkeygoeshere==\n-----END OPENSSH PRIVATE KEY-----"}`
When I test the connection I get the error `expected str, bytes or os.PathLike object, not Ed25519Key`
When I try and use this connection I get the following error:
```
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/sftp/hooks/sftp.py", line 208, in list_directory
conn = self.get_conn()
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 324, in wrapped_f
return self(f, *args, **kw)
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 404, in __call__
do = self.iter(retry_state=retry_state)
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 407, in __call__
result = fn(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/sftp/hooks/sftp.py", line 172, in get_conn
self.conn = pysftp.Connection(**conn_params)
File "/home/airflow/.local/lib/python3.7/site-packages/pysftp/__init__.py", line 142, in __init__
self._set_authentication(password, private_key, private_key_pass)
File "/home/airflow/.local/lib/python3.7/site-packages/pysftp/__init__.py", line 164, in _set_authentication
private_key_file = os.path.expanduser(private_key)
File "/usr/local/lib/python3.7/posixpath.py", line 235, in expanduser
path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not Ed25519Key
```
This only seems to happen for Ed25519 keys. RSA worked fine!
### What you think should happen instead
It should work. I don't specify this as an `Ed25519Key`; I think the connection handling code parses it into a paramiko key object, but when testing or using the connection in a DAG the code expects a string.
I don't see why it can't be stored as a paramiko key and still be usable in the connection.
It also seems to work fine with RSA keys, but super short keys are cooler!
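For what it's worth, paramiko itself handles inline Ed25519 keys fine; the mismatch only shows up when something downstream expects a filesystem path. A rough illustration (the key text is obviously a placeholder):
```python
from io import StringIO

import paramiko

# Inline key text, e.g. taken from the connection's "private_key" extra.
key_text = "-----BEGIN OPENSSH PRIVATE KEY-----\n...\n-----END OPENSSH PRIVATE KEY-----"

# paramiko happily turns the text into a key object...
pkey = paramiko.Ed25519Key.from_private_key(StringIO(key_text))

# ...but anything that then treats the key as a path, e.g. pysftp calling
# os.path.expanduser(private_key), fails with the TypeError shown above.
```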
### How to reproduce
Create a new Ed25519 ssh key and a new SFTP connection and copy the following into the extra field:
{"look_for_keys": "false", "no_host_key_check": "true", "private_key": "-----BEGIN RSA PRIVATE KEY----- Ed25519_key_goes_here -----END RSA PRIVATE KEY-----"}
Test should yield the failure `TypeError: expected str, bytes or os.PathLike object, not Ed25519Key`
### Operating System
RHEL 7.9 on host OS and Docker image for the rest.
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==3.0.2
apache-airflow-providers-docker==2.4.1
apache-airflow-providers-elasticsearch==2.2.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.4.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-microsoft-azure==3.6.0
apache-airflow-providers-mysql==2.2.0
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.4.1
apache-airflow-providers-slack==4.2.0
apache-airflow-providers-sqlite==2.1.0
apache-airflow-providers-ssh==2.4.0
### Deployment
Other Docker-based deployment
### Deployment details
Docker image of 2.2.4 release with VERY minimal changes. (wget, curl, etc added)
### Anything else
RSA seems to work fine... only after a few hours of troubleshooting and writing this ticket did I learn that. 😿
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22576 | https://github.com/apache/airflow/pull/23043 | d7b85d9a0a09fd7b287ec928d3b68c38481b0225 | e63dbdc431c2fa973e9a4c0b48ec6230731c38d1 | "2022-03-28T20:06:31Z" | python | "2022-05-09T22:49:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,551 | ["docker_tests/test_prod_image.py", "docs/apache-airflow-providers-microsoft-azure/index.rst", "setup.py"] | Consider depending on `azure-keyvault-secrets` instead of `azure-keyvault` metapackage | ### Description
It appears that the `microsoft-azure` provider only depends on `azure-keyvault-secrets`:
https://github.com/apache/airflow/blob/388723950de9ca519108e0a8f6818f0fc0dd91d4/airflow/providers/microsoft/azure/secrets/key_vault.py#L24
and not the other 2 packages in the `azure-keyvault` metapackage.
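Concretely, this would just mean pinning the provider extra to the sub-package. A rough sketch of what the dependency entry in Airflow's `setup.py` could look like (the exact version bounds here are my assumption, not the current pins):
```python
# Hypothetical sketch of the microsoft-azure provider dependency list.
azure = [
    "azure-keyvault-secrets>=4.1.0,<5.0",  # instead of the azure-keyvault metapackage
    # ... the other azure-* dependencies stay unchanged ...
]
```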
### Use case/motivation
I am the maintainer of the `apache-airflow-providers-*` packages on `conda-forge` and I'm running into small issues with the way `azure-keyvault` is maintained as a metapackage on `conda-forge`. I think depending on `azure-keyvault-secrets` explicitly would solve my problem and also provide better clarity for the `microsoft-azure` provider in general.
### Related issues
https://github.com/conda-forge/azure-keyvault-feedstock/issues/6
https://github.com/conda-forge/apache-airflow-providers-microsoft-azure-feedstock/pull/13
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22551 | https://github.com/apache/airflow/pull/22557 | a6609d5268ebe55bcb150a828d249153582aa936 | 77d4e725c639efa68748e0ae51ddf1e11b2fd163 | "2022-03-27T12:24:12Z" | python | "2022-03-29T13:44:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,487 | ["airflow/cli/commands/task_command.py"] | "Running <TaskInstance: *.* * [queued]> on host *" written with WARNING level | ### Apache Airflow version
2.2.3
### What happened
"Running <TaskInstance: *.* * [queued]> on host *" written with WARNING level
### What you think should happen instead
This message should be written with INFO level
### How to reproduce
_No response_
### Operating System
Composer
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22487 | https://github.com/apache/airflow/pull/22488 | 388f4e8b032fe71ccc9a16d84d7c2064c80575b3 | acb1a100bbf889dddef1702c95bd7262a578dfcc | "2022-03-23T13:28:26Z" | python | "2022-03-25T09:40:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,474 | ["airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | CLI command "airflow dags next-execution" give unexpected results with paused DAG and catchup=False | ### Apache Airflow version
2.2.2
### What happened
- Current time: 16:54 UTC
- Execution schedule: `* * * * *`
- Last run: 16:19 UTC
- DAG: paused
- catchup=False
`airflow dags next-execution sample_dag`
returns
```
[INFO] Please be reminded this DAG is PAUSED now.
2022-03-22T16:20:00+00:00
```
### What you think should happen instead
I would expect
```
[INFO] Please be reminded this DAG is PAUSED now.
2022-03-22T16:53:00+00:00
```
to be returned, since that is the run that would execute next once the DAG is unpaused.
### How to reproduce
Create a simple sample dag with a schedule of * * * * * and pause with catchup=False and wait a few minutes, then run
`airflow dags next-execution sample_dag`
### Operating System
Debian
### Versions of Apache Airflow Providers
Airflow 2.2.2
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22474 | https://github.com/apache/airflow/pull/30117 | 1f2b0c21d5ebefc404d12c123674e6ac45873646 | c63836ccb763fd078e0665c7ef3353146b1afe96 | "2022-03-22T17:06:41Z" | python | "2023-03-22T14:22:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,473 | ["airflow/secrets/local_filesystem.py", "tests/cli/commands/test_connection_command.py", "tests/secrets/test_local_filesystem.py"] | Connections import and export should also support ".yml" file extensions | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Trying to export or import a yaml formatted connections file with ".yml" extension fails.
### What you think should happen instead
While the "official recommended extension" for YAML files is .yaml, many pipeline are built around using the .yml file extension. Importing and exporting of .yml files should also be supported.
### How to reproduce
Running airflow connections import or export with a file having a .yml file extension errors with:
`Unsupported file format. The file must have the extension .env or .json or .yaml`
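Supporting this looks like a small change wherever the extension is dispatched on. A minimal sketch of the idea (the function name and structure below are hypothetical, not the actual code in `airflow/secrets/local_filesystem.py`):
```python
import os

SUPPORTED_FORMATS = {"env", "json", "yaml"}


def _normalize_extension(file_path: str) -> str:
    """Map a file extension to a parser key, treating .yml as an alias of .yaml."""
    ext = os.path.splitext(file_path)[-1].lower().lstrip(".")
    if ext == "yml":
        ext = "yaml"
    if ext not in SUPPORTED_FORMATS:
        raise ValueError(
            "Unsupported file format. The file must have the extension .env, .json, .yaml or .yml"
        )
    return ext
```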
### Operating System
debian 10 buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22473 | https://github.com/apache/airflow/pull/22872 | 1eab1ec74c426197af627c09817b76081c5c4416 | 3c0ad4af310483cd051e94550a7d857653dcee6d | "2022-03-22T15:36:21Z" | python | "2022-04-13T16:52:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,434 | ["airflow/providers/snowflake/example_dags/__init__.py", "docs/apache-airflow-providers-snowflake/index.rst", "docs/apache-airflow-providers-snowflake/operators/s3_to_snowflake.rst", "docs/apache-airflow-providers-snowflake/operators/snowflake.rst", "docs/apache-airflow-providers-snowflake/operators/snowflake_to_slack.rst", "tests/system/providers/snowflake/example_snowflake.py"] | Migrate Snowflake system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Snowflake` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/snowflake/operators/test_snowflake_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Snowflake example DAGs to new design`
| https://github.com/apache/airflow/issues/22434 | https://github.com/apache/airflow/pull/24151 | c60bb9edc0c9b55a2824eae879af8a4a90ccdd2d | c2f10a4ee9c2404e545d78281bf742a199895817 | "2022-03-22T13:48:43Z" | python | "2022-06-03T16:09:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,433 | ["tests/providers/postgres/operators/test_postgres_system.py"] | Migrate Postgres system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Postgres` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/postgres/operators/test_postgres_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Postgres example DAGs to new design`
| https://github.com/apache/airflow/issues/22433 | https://github.com/apache/airflow/pull/24223 | 487e229206396f8eaf7c933be996e6c0648ab078 | 95ab664bb6a8c94509b34ceb8c189f67db00c71a | "2022-03-22T13:48:42Z" | python | "2022-06-05T15:45:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,432 | ["tests/providers/microsoft/azure/hooks/test_fileshare_system.py", "tests/providers/microsoft/azure/operators/test_adls_delete_system.py", "tests/providers/microsoft/azure/transfers/test_local_to_adls_system.py", "tests/providers/microsoft/azure/transfers/test_local_to_wasb_system.py", "tests/providers/microsoft/azure/transfers/test_sftp_to_wasb_system.py"] | Migrate Microsoft system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Microsoft` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/microsoft/azure/operators/test_adls_delete_system.py (1)
- [x] tests/providers/microsoft/azure/transfers/test_sftp_to_wasb_system.py (1)
- [x] tests/providers/microsoft/azure/transfers/test_local_to_adls_system.py (1)
- [x] tests/providers/microsoft/azure/transfers/test_local_to_wasb_system.py (1)
- [x] tests/providers/microsoft/azure/hooks/test_fileshare_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Microsoft example DAGs to new design`
| https://github.com/apache/airflow/issues/22432 | https://github.com/apache/airflow/pull/24225 | 9d4da34c3b1d94369c2393df3d40c09963757601 | d71787e0d7423e8a116811e86edf76588b3c7017 | "2022-03-22T13:48:42Z" | python | "2022-06-05T15:41:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,431 | ["airflow/providers/http/example_dags/__init__.py", "docs/apache-airflow-providers-http/operators.rst", "tests/providers/http/operators/test_http_system.py", "tests/system/providers/http/example_http.py"] | Migrate HTTP system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `HTTP` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/http/operators/test_http_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate HTTP example DAGs to new design`
| https://github.com/apache/airflow/issues/22431 | https://github.com/apache/airflow/pull/23991 | 3dd7b1ddbaa3170fbda30a8323286abf075f30ba | 9398586a7cf66d9cf078c40ab0d939b3fcc58c2d | "2022-03-22T13:48:41Z" | python | "2022-06-01T20:12:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,429 | ["tests/providers/cncf/kubernetes/operators/test_spark_kubernetes_system.py"] | Migrate CNCF system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `CNCF` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/cncf/kubernetes/operators/test_spark_kubernetes_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate CNCF example DAGs to new design`
| https://github.com/apache/airflow/issues/22429 | https://github.com/apache/airflow/pull/24224 | d71787e0d7423e8a116811e86edf76588b3c7017 | 487e229206396f8eaf7c933be996e6c0648ab078 | "2022-03-22T13:48:39Z" | python | "2022-06-05T15:44:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,428 | ["tests/providers/asana/operators/test_asana_system.py"] | Migrate Asana system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Asana` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/asana/operators/test_asana_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Asana example DAGs to new design`
| https://github.com/apache/airflow/issues/22428 | https://github.com/apache/airflow/pull/24226 | b4d50d3be1c9917182f231135b8312eb284f0f7f | 9d4da34c3b1d94369c2393df3d40c09963757601 | "2022-03-22T13:48:38Z" | python | "2022-06-05T15:34:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,427 | ["tests/providers/apache/beam/operators/test_beam_system.py"] | Migrate Apache Beam system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Apache Beam` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/apache/beam/operators/test_beam_system.py (8)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Apache Beam example DAGs to new design`
| https://github.com/apache/airflow/issues/22427 | https://github.com/apache/airflow/pull/24256 | 42abbf0d61f94ec50026af0c0f95eb378e403042 | a01a94147c2db66a14101768b4bcbf3fad2a9cf0 | "2022-03-22T13:48:37Z" | python | "2022-06-06T21:06:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,426 | ["tests/providers/amazon/aws/hooks/test_base_aws_system.py", "tests/providers/amazon/aws/operators/test_eks_system.py", "tests/providers/amazon/aws/operators/test_emr_system.py", "tests/providers/amazon/aws/operators/test_glacier_system.py", "tests/providers/amazon/aws/operators/test_s3_bucket_tagging_system.py", "tests/providers/amazon/aws/transfers/test_google_api_to_s3_system.py", "tests/providers/amazon/aws/transfers/test_imap_attachment_to_s3_system.py", "tests/providers/amazon/aws/transfers/test_s3_to_redshift_system.py"] | Migrate Amazon system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Amazon` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/amazon/aws/operators/test_ecs_system.py (1)
- [x] tests/providers/amazon/aws/operators/test_eks_system.py (4)
- [x] tests/providers/amazon/aws/operators/test_glacier_system.py (1)
- [x] tests/providers/amazon/aws/operators/test_s3_bucket_tagging_system.py (1)
- [x] tests/providers/amazon/aws/operators/test_emr_system.py (2)
- [x] tests/providers/amazon/aws/transfers/test_imap_attachment_to_s3_system.py (1)
- [x] tests/providers/amazon/aws/transfers/test_google_api_to_s3_system.py (2)
- [x] tests/providers/amazon/aws/transfers/test_s3_to_redshift_system.py (1)
- [x] tests/providers/amazon/aws/hooks/test_base_aws_system.py (1)
For anyone involved in working with this issue - please, make sure to also check if all example DAGs are migrated. The issue for them is stored separately. Search for `Migrate Amazon example DAGs to new design`
| https://github.com/apache/airflow/issues/22426 | https://github.com/apache/airflow/pull/25655 | 6e41c7eb33a68ea3ccd6b67fb169ea2cf1ecc162 | bc46477d20802242ec9596279933742c1743b2f1 | "2022-03-22T13:48:35Z" | python | "2022-08-16T13:47:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,418 | ["airflow/www/static/css/main.css", "airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py"] | auto refresh Dags home page | ### Description
Similar to auto refresh in the DAG page, it would be nice to have this option in the home page as well.
![image](https://user-images.githubusercontent.com/7373236/159442263-60bbcd58-50e5-4a3d-8d6f-d31a65a6ff81.png)
### Use case/motivation
Having auto refresh on the home page would let users have a live view of running DAGs and tasks.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22418 | https://github.com/apache/airflow/pull/22900 | d6141c6594da86653b15d67eaa99511e8fe37a26 | cd70afdad92ee72d96edcc0448f2eb9b44c8597e | "2022-03-22T08:50:02Z" | python | "2022-05-01T10:59:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,417 | ["airflow/providers/jenkins/hooks/jenkins.py", "airflow/providers/jenkins/provider.yaml", "airflow/providers/jenkins/sensors/__init__.py", "airflow/providers/jenkins/sensors/jenkins.py", "tests/providers/jenkins/hooks/test_jenkins.py", "tests/providers/jenkins/sensors/__init__.py", "tests/providers/jenkins/sensors/test_jenkins.py"] | Jenkins Sensor to monitor a jenkins job finish | ### Description
A sensor for Jenkins jobs in Airflow. There are cases in which we need to monitor the state of a build in Jenkins and pause the DAG until the build finishes.
### Use case/motivation
I am trying to achieve a way of pausing the DAG until a build or the last build in a jenkins job finishes.
This could be done in different ways, but the cleanest is to have a dedicated Jenkins sensor in Airflow that reuses the Jenkins hook and connection.
There are two cases to monitor a job in jenkins
1. Specify the build number to monitor
2. Get the last build automatically and check whether it is still running or not.
Technically, the only thing that matters from the sensor's perspective is whether the build is ongoing or finished; monitoring for a specific status or result doesn't make sense here. The use case is only about whether a job currently has a build running, and if so, waiting for it to finish.
If no build number is specified, the sensor should query for the latest build number and check whether it is running.
If a build number is specified, it should check the run state of that specific build.
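To make the idea concrete, here is a rough sketch of what such a sensor's `poke` could look like on top of the existing Jenkins hook and the python-jenkins client it wraps (the class and parameter names are only suggestions):
```python
from typing import Optional

from airflow.sensors.base import BaseSensorOperator


class JenkinsBuildSensor(BaseSensorOperator):
    """Sketch: wait until a Jenkins build (a specific one, or the latest) has finished."""

    def __init__(self, *, jenkins_connection_id: str, job_name: str,
                 build_number: Optional[int] = None, **kwargs):
        super().__init__(**kwargs)
        self.jenkins_connection_id = jenkins_connection_id
        self.job_name = job_name
        self.build_number = build_number

    def poke(self, context) -> bool:
        from airflow.providers.jenkins.hooks.jenkins import JenkinsHook

        jenkins = JenkinsHook(self.jenkins_connection_id).get_jenkins_server()
        build_number = self.build_number or jenkins.get_job_info(self.job_name)["lastBuild"]["number"]
        # python-jenkins reports "building": True while a build is still running.
        still_building = jenkins.get_build_info(self.job_name, build_number)["building"]
        return not still_building
```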
### Related issues
There are no related issues.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22417 | https://github.com/apache/airflow/pull/22421 | ac400ebdf3edc1e08debf3b834ade3809519b819 | 4e24b22379e131fe1007e911b93f52e1b6afcf3f | "2022-03-22T07:57:54Z" | python | "2022-03-24T08:01:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,413 | ["chart/templates/flower/flower-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_flower.py"] | Flower is missing extraVolumeMounts | ### Official Helm Chart version
1.5.0 (latest released)
### Apache Airflow version
2.2.4 (latest released)
### Kubernetes Version
1.19
### Helm Chart configuration
```
flower:
extraContainers:
- image: foo
imagePullPolicy: IfNotPresent
name: foo
volumeMounts:
- mountPath: /var/log/foo
name: foo
readOnly: false
extraVolumeMounts:
- mountPath: /var/log/foo
name: foo
extraVolumes:
- emptyDir: {}
name: foo
```
### Docker Image customisations
_No response_
### What happened
```
Error: values don't meet the specifications of the schema(s) in the following chart(s):
airflow:
- flower: Additional property extraVolumeMounts is not allowed
```
### What you think should happen instead
The flower pod should support the same extraVolumeMounts that other pods support.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22413 | https://github.com/apache/airflow/pull/22414 | 7667d94091b663f9d9caecf7afe1b018bcad7eda | f3bd2a35e6f7b9676a79047877dfc61e5294aff8 | "2022-03-21T22:58:02Z" | python | "2022-03-22T11:17:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,404 | ["airflow/task/task_runner/standard_task_runner.py"] | tempfile.TemporaryDirectory does not get deleted after task failure | ### Discussed in https://github.com/apache/airflow/discussions/22403
<div type='discussions-op-text'>
<sup>Originally posted by **m1racoli** March 18, 2022</sup>
### Apache Airflow version
2.2.4 (latest released)
### What happened
When creating a temporary directory with `tempfile.TemporaryDirectory()` and then failing a task, the corresponding directory does not get deleted.
This happens in Airflow on Astronomer as well as locally in `astro dev` setups, for both LocalExecutor and CeleryExecutor.
### What you think should happen instead
As in normal Python environments, the directory should get cleaned up, even in the case of a raised exception.
### How to reproduce
Running this DAG will leave a temporary directory in the corresponding location:
```python
import os
import tempfile
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago
class MyException(Exception):
pass
@task
def run():
tmpdir = tempfile.TemporaryDirectory()
print(f"directory {tmpdir.name} created")
assert os.path.exists(tmpdir.name)
raise MyException("error!")
@dag(start_date=days_ago(1))
def tempfile_test():
run()
_ = tempfile_test()
```
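As a side note, and only as a workaround sketch (it does not explain the root cause), the directory is cleaned up even when the task fails if the cleanup is made explicit with the context-manager form instead of relying on the object's finalizer:
```python
import tempfile

from airflow.decorators import task


@task
def run():
    # The context manager's __exit__ runs while the exception propagates,
    # inside the task process, so the directory is removed regardless of
    # how that process exits afterwards.
    with tempfile.TemporaryDirectory() as tmpdir:
        print(f"directory {tmpdir} created")
        raise RuntimeError("error!")  # stand-in for MyException from the DAG above
```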
### Operating System
Debian (Astronomer Airflow Docker image)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!3.0.0
apache-airflow-providers-cncf-kubernetes==1!3.0.2
apache-airflow-providers-elasticsearch==1!2.2.0
apache-airflow-providers-ftp==1!2.0.1
apache-airflow-providers-google==1!6.4.0
apache-airflow-providers-http==1!2.0.3
apache-airflow-providers-imap==1!2.2.0
apache-airflow-providers-microsoft-azure==1!3.6.0
apache-airflow-providers-mysql==1!2.2.0
apache-airflow-providers-postgres==1!3.0.0
apache-airflow-providers-redis==1!2.0.1
apache-airflow-providers-slack==1!4.2.0
apache-airflow-providers-sqlite==1!2.1.0
apache-airflow-providers-ssh==1!2.4.0
```
### Deployment
Astronomer
### Deployment details
GKE, vanilla `astro dev`, LocalExecutor and CeleryExecutor.
### Anything else
Always
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/22404 | https://github.com/apache/airflow/pull/22475 | 202a3a10e553a8a725a0edb6408de605cb79e842 | b0604160cf95f76ed75b4c4ab42b9c7902c945ed | "2022-03-21T16:16:30Z" | python | "2022-03-24T21:23:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,398 | ["airflow/providers/google/cloud/hooks/cloud_build.py", "tests/providers/google/cloud/hooks/test_cloud_build.py"] | CloudBuildRunBuildTriggerOPerator: 'property' object has no attribute 'build' | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.6.0
### Apache Airflow version
2.2.4 (latest released)
### Operating System
GCP Cloud Composer 2
### Deployment
Composer
### Deployment details
We're currently using the default set up of cloud composer 2 on GCP.
### What happened
When trying to run a Cloud Build trigger using the `CloudBuildRunBuildTriggerOperator`, we receive the following error:
```
[2022-03-21, 12:28:57 UTC] {credentials_provider.py:312} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook.
[2022-03-21, 12:28:58 UTC] {cloud_build.py:503} INFO - Start running build trigger: <TRIGGER ID>.
[2022-03-21, 12:29:00 UTC] {taskinstance.py:1702} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 75, in _get_build_id_from_operation
return operation.metadata.build.id
AttributeError: 'property' object has no attribute 'build'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1330, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1457, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1513, in _execute_task
result = execute_callable(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/cloud_build.py", line 739, in execute
result = hook.run_build_trigger(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 433, in inner_wrapper
return func(self, *args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 512, in run_build_trigger
id_ = self._get_build_id_from_operation(Operation)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 77, in _get_build_id_from_operation
raise AirflowException("Could not retrieve Build ID from Operation.")
airflow.exceptions.AirflowException: Could not retrieve Build ID from Operation.
[2022-03-21, 12:29:00 UTC] {taskinstance.py:1268} INFO - Marking task as FAILED. dag_id=deploy_index, task_id=trigger_build, execution_date=20220321T122848, start_date=20220321T122856, end_date=20220321T122900
[2022-03-21, 12:29:00 UTC] {standard_task_runner.py:89} ERROR - Failed to execute job 1003 for task trigger_build
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 75, in _get_build_id_from_operation
return operation.metadata.build.id
AttributeError: 'property' object has no attribute 'build'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 94, in wrapper
return f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 302, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1330, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1457, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1513, in _execute_task
result = execute_callable(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/cloud_build.py", line 739, in execute
result = hook.run_build_trigger(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 433, in inner_wrapper
return func(self, *args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 512, in run_build_trigger
id_ = self._get_build_id_from_operation(Operation)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_build.py", line 77, in _get_build_id_from_operation
raise AirflowException("Could not retrieve Build ID from Operation.")
airflow.exceptions.AirflowException: Could not retrieve Build ID from Operation.
[2022-03-21, 12:29:00 UTC] {local_task_job.py:154} INFO - Task exited with return code 1
[2022-03-21, 12:29:01 UTC] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
Below is the code snippet that's causing the above error.
```
trigger_deploy = CloudBuildRunBuildTriggerOperator(
task_id="trigger_deploy",
trigger_id="TRIGGER_ID",
project_id="PROEJCT_ID",
source=RepoSource({"project_id": "PROJECT_ID",
"repo_name": "REPO_NAME",
"branch_name": "BRANCH",
}),
wait=True,
do_xcom_push=True
)
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
We reckon the source of the bug is [here](https://github.com/apache/airflow/blob/71c980a8ffb3563bf16d8a23a58de54c9e8cf556/airflow/providers/google/cloud/hooks/cloud_build.py#L165).
``` python
id_ = self._get_build_id_from_operation(Operation)
```
The call passes the `Operation` class itself, while the function signature expects an *instance* of `Operation`.
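If that reading is correct, the hook should be passing the operation *returned by the client call* into `_get_build_id_from_operation`, not the `Operation` class. A standalone sketch of the distinction (project, trigger and repo names are made up):
```python
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()

# run_build_trigger returns a long-running operation object; it is that
# *instance* that carries .metadata.build.id, not the Operation class itself.
operation = client.run_build_trigger(
    project_id="my-project",
    trigger_id="my-trigger-id",
    source=cloudbuild_v1.RepoSource(repo_name="my-repo", branch_name="main"),
)
build_id = operation.metadata.build.id
```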
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22398 | https://github.com/apache/airflow/pull/22419 | f51a674dd907a00e4bd9b4b44fb036d28762b5cc | 0f0a1a7d22dffab4487c35d3598b3b6aaf24c4c6 | "2022-03-21T13:16:58Z" | python | "2022-03-23T13:37:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,392 | ["airflow/cli/commands/connection_command.py", "tests/cli/commands/test_connection_command.py"] | Unknown connection types fail in cryptic ways | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I created a connection like:
```
airflow connections add fsconn --conn-host /tmp --conn-type File
```
When I really should have created it like:
```
airflow connections add fsconn --conn-host /tmp --conn-type fs
```
While using this connection, I found that FileSensor would only work if I provided absolute paths. Relative paths would cause the sensor to timeout because it couldn't find the file. Using `fs` instead of `File` caused the FileSensor to start working like I expected.
### What you think should happen instead
Ideally I'd have gotten an error when I tried to create the connection with an invalid type.
Or if that's not practical, then I should have gotten an error in the task logs when I tried to use the FileSensor with a connection of the wrong type.
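A cheap way to get the first behaviour would be for the CLI to warn (or fail) when the given type is not registered by any installed provider. A rough sketch of the idea using `ProvidersManager` (the exact wording and placement would differ in a real patch):
```python
import warnings

from airflow.providers_manager import ProvidersManager


def warn_on_unknown_conn_type(conn_type: str) -> None:
    """Sketch: warn when a connection type is not provided by any installed provider."""
    known_types = ProvidersManager().hooks.keys()
    if conn_type not in known_types:
        warnings.warn(
            f"The connection type {conn_type!r} is not known to any installed provider; "
            "operators that resolve a hook from this connection may misbehave."
        )
```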
### How to reproduce
_No response_
### Operating System
debian (in docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start`
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22392 | https://github.com/apache/airflow/pull/22688 | 9a623e94cb3e4f02cbe566e02f75f4a894edc60d | d7993dca2f182c1d0f281f06ac04b47935016cf1 | "2022-03-21T04:36:56Z" | python | "2022-04-13T19:45:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,381 | ["airflow/providers/amazon/aws/hooks/athena.py", "airflow/providers/amazon/aws/hooks/emr.py", "airflow/providers/amazon/aws/operators/athena.py", "airflow/providers/amazon/aws/operators/emr.py"] | AthenaOperator retries max_tries mix-up | ### Apache Airflow version
2.2.4 (latest released)
### What happened
After a recent upgrade from 1.10.9 to 2.2.4, we observe an odd behavior where the aforementioned attributes (`retries` and `max_tries`) are wrongly coupled.
An example to showcase the issue:
```
AthenaOperator(
...
retries=3,
max_tries=30,
...)
```
Related Documentation states:
* retries: Number of retries that should be performed before failing the task
* max_tries: Number of times to poll for query state before function exits
Regardless of the above specification (`max_tries=30`), inspection of the related _Task Instance Details_ shows that the value of both attributes is **3**
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
Imagine a query executed on an hourly basis with a varying scope, causing it to 'organically' run for anywhere between 5 and 10 minutes. The query task should fail after 3 execution attempts.
In such cases, we would like to poll the state of the query frequently (every 15 seconds) to avoid redundant idle time for downstream tasks.
A configuration matching the above description:
```
AthenaOperator(
...
retry_delay=15,
retries=3,
max_tries=40, # 40 polls * 15 seconds delay between polls = 10 minutes
...)
```
When deployed, `retries == max_tries == 3`, thus causing the Task to terminate after 45 seconds
In order to quickly avert this situation where our ETL breaks, we are using the following configuration:
```
AthenaOperator(
...
retry_delay=15,
retries=40,
max_tries=40,
...)
```
With the last configuration, our task does not terminate prematurely, but it will retry **40 times** before failing, which at the very least causes an issue with downstream tasks' SLAs (and that is before weighing in the wasted time and operational costs).
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22381 | https://github.com/apache/airflow/pull/25971 | 18386026c28939fa6d91d198c5489c295a05dcd2 | d5820a77e896a1a3ceb671eddddb9c8e3bcfb649 | "2022-03-20T08:50:45Z" | python | "2022-09-11T23:25:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,380 | ["dev/provider_packages/SETUP_TEMPLATE.cfg.jinja2"] | Newest providers incorrectly include `gitpython` and `wheel` in `install_requires` | ### Apache Airflow Provider(s)
ftp, openfaas, sqlite
### Versions of Apache Airflow Providers
I am the maintainer of the Airflow Providers on conda-forge. The providers I listed above are the first 3 I have looked at but I believe all are affected. These are the new releases (as of yesterday) of all providers.
### Apache Airflow version
2.2.4 (latest released)
### Operating System
Linux (Azure CI)
### Deployment
Other Docker-based deployment
### Deployment details
This is on conda-forge Azure CI.
### What happened
All providers I have looked at (and I suspect all providers) now have `gitpython` and `wheel` in their `install_requires`:
From `apache-airflow-providers-ftp-2.1.1.tar.gz`:
```
install_requires =
gitpython
wheel
```
I believe these requirements are incorrect (neither should be needed at install time) and this will make maintaining these packages on conda-forge an absolute nightmare! (It's already a serious challenge because I get a PR to update each time each provider gets updated.)
### What you think should happen instead
These install requirements should be removed.
### How to reproduce
Open any of the newly released providers from pypi and look at `setup.cfg`.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22380 | https://github.com/apache/airflow/pull/22382 | 172df9ee247af62e9417cebb2e2a3bc2c261a204 | ab4ba6f1b770a95bf56965f3396f62fa8130f9e9 | "2022-03-20T07:48:57Z" | python | "2022-03-20T12:15:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,358 | ["airflow/api_connexion/openapi/v1.yaml"] | ScheduleInterval schema in OpenAPI specs should have "nullable: true" otherwise generated OpenAPI client will throw an error in case of nullable "schedule_interval" | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Currently we have this schema definition in the OpenAPI specs:
```
ScheduleInterval:
description: |
Schedule interval. Defines how often DAG runs, this object gets added to your latest task instance's
execution_date to figure out the next schedule.
readOnly: true
oneOf:
- $ref: '#/components/schemas/TimeDelta'
- $ref: '#/components/schemas/RelativeDelta'
- $ref: '#/components/schemas/CronExpression'
discriminator:
propertyName: __type
```
The issue with the above is that an OpenAPI generator (for Java, for example; I think the same holds for other languages) will treat `ScheduleInterval` as a **non-nullable** property, although `/dags/{dag_id}` and `/dags/{dag_id}/details` return `null` for `schedule_interval` when the DAG has no schedule.
### What you think should happen instead
We should have `nullable: true` in `ScheduleInterval` schema which will allow `schedule_interval` to be parsed as `null`.
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
If the maintainers think this is a valid bug, I will be more than happy to submit a PR :)
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22358 | https://github.com/apache/airflow/pull/24253 | b88ce951881914e51058ad71858874fdc00a3cbe | 7e56bf662915cd58849626d7a029a4ba70cdda4d | "2022-03-18T09:13:24Z" | python | "2022-06-07T11:21:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,340 | ["airflow/decorators/base.py", "airflow/models/baseoperator.py", "airflow/models/mappedoperator.py", "airflow/serialization/serialized_objects.py", "tests/models/test_taskinstance.py", "tests/serialization/test_dag_serialization.py"] | Expanding operators inside of task groups causes KeyError | ### Apache Airflow version
main (development)
### What happened
Given this DAG:
```python3
from datetime import datetime

from airflow import DAG
from airflow.decorators import task
from airflow.operators.bash import BashOperator
from airflow.utils.task_group import TaskGroup

foo_var = {"VAR1": "FOO"}
bar_var = {"VAR1": "BAR"}
hi_cmd = 'echo "hello $VAR1"'
bye_cmd = 'echo "goodbye $VAR1"'
@task
def envs():
return [foo_var, bar_var]
@task
def cmds():
return [hi_cmd, bye_cmd]
with DAG(dag_id="mapped_bash", start_date=datetime(1970, 1, 1)) as dag:
with TaskGroup(group_id="dynamic"):
dynamic = BashOperator.partial(task_id="bash").expand(
env=envs(), bash_command=cmds()
)
```
I ran `airflow dags test mapped_bash $(date +%Y-%m-%dT%H:%M:%SZ)`
Got this output:
```
[2022-03-17 09:21:24,590] {taskinstance.py:1451} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=mapped_bash
AIRFLOW_CTX_TASK_ID=dynamic.bash
AIRFLOW_CTX_EXECUTION_DATE=2022-03-17T09:21:12+00:00
AIRFLOW_CTX_DAG_RUN_ID=backfill__2022-03-17T09:21:12+00:00
[2022-03-17 09:21:24,590] {subprocess.py:62} INFO - Tmp dir root location:
/var/folders/5m/nvs9yfcs6mlfm_63gnk6__3r0000gn/T
[2022-03-17 09:21:24,591] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'echo "hello $VAR1"']
[2022-03-17 09:21:24,597] {subprocess.py:85} INFO - Output:
[2022-03-17 09:21:24,601] {subprocess.py:92} INFO - hello FOO
[2022-03-17 09:21:24,601] {subprocess.py:96} INFO - Command exited with return code 0
[2022-03-17 09:21:24,616] {taskinstance.py:1277} INFO - Marking task as SUCCESS. dag_id=mapped_bash, task_id=dynamic.bash, execution_date=20220317T092112, start_date=20220317T152114, end_date=20220317T152124
[2022-03-17 09:21:24,638] {taskinstance.py:1752} WARNING - We expected to get frame set in local storage but it was not. Please report this as an issue with full logs at https://github.com/apache/airflow/issues/new
Traceback (most recent call last):
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 1335, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 1437, in _execute_task_with_callbacks
self.render_templates(context=context)
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 2091, in render_templates
task = self.task.render_template_fields(context)
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 602, in render_template_fields
unmapped_task = self.unmap()
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 454, in unmap
dag._remove_task(self.task_id)
File "/Users/matt/src/airflow/airflow/models/dag.py", line 2188, in _remove_task
task = self.task_dict.pop(task_id)
KeyError: 'dynamic.bash'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/matt/src/airflow/airflow/models/taskinstance.py", line 1750, in get_truncated_error_traceback
execution_frame = _TASK_EXECUTION_FRAME_LOCAL_STORAGE.frame
AttributeError: '_thread._local' object has no attribute 'frame'
```
### What you think should happen instead
No error
### How to reproduce
Trigger the dag above
### Operating System
Mac OS 11.6
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
```
❯ cd ~/src/airflow
❯ git rev-parse --short HEAD
df6058c86
❯ airflow info
Apache Airflow
version | 2.3.0.dev0
executor | SequentialExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
sql_alchemy_conn | sqlite:////Users/matt/2022/03/16/airflow.db
dags_folder | /Users/matt/2022/03/16/dags
plugins_folder | /Users/matt/2022/03/16/plugins
base_log_folder | /Users/matt/2022/03/16/logs
remote_base_log_folder |
System info
OS | Mac OS
architecture | x86_64
uname | uname_result(system='Darwin', node='LIGO', release='20.6.0', version='Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64', machine='x86_64')
locale | ('en_US', 'UTF-8')
python_version | 3.9.10 (main, Jan 15 2022, 11:48:00) [Clang 13.0.0 (clang-1300.0.29.3)]
python_location | /Users/matt/src/qa-scenario-dags/venv/bin/python3.9
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22340 | https://github.com/apache/airflow/pull/22355 | 5eb63357426598f99ed50b002b72aebdf8790f73 | 87d363e217bf70028e512fd8ded09a01ffae0162 | "2022-03-17T15:30:06Z" | python | "2022-03-20T02:05:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,328 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | bigquery provider's - BigQueryCursor missing implementation for description property. | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When trying to run the following code:
```
import pandas as pd
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook

# using the default connection
hook = BigQueryHook()
df = pd.read_sql(
"SELECT * FROM table_name", con=hook.get_conn()
)
```
Running into following issue:
```
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<string>", line 1, in <module>
File "/Users/utkarsharma/sandbox/astronomer/astro/.nox/dev/lib/python3.8/site-packages/pandas/io/sql.py", line 602, in read_sql
return pandas_sql.read_query(
File "/Users/utkarsharma/sandbox/astronomer/astro/.nox/dev/lib/python3.8/site-packages/pandas/io/sql.py", line 2117, in read_query
columns = [col_desc[0] for col_desc in cursor.description]
File "/Users/utkarsharma/sandbox/astronomer/astro/.nox/dev/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2599, in description
raise NotImplementedError
NotImplementedError
```
### What you think should happen instead
The property should be implemented in a similar manner as [postgres_to_gcs.py](https://github.com/apache/airflow/blob/7bd165fbe2cbbfa8208803ec352c5d16ca2bd3ec/airflow/providers/google/cloud/transfers/postgres_to_gcs.py#L58)
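For illustration, a DB-API 2.0 `description` property returns one 7-tuple per result column, so a fix could look roughly like the sketch below. This is only a sketch: `_get_query_schema()` is a hypothetical helper standing in for whatever call fetches the completed job's schema from the BigQuery API; it is not an existing method on the cursor.

```python
@property
def description(self) -> list:
    """DB-API style column metadata: one 7-tuple per column of the last query."""
    fields = self._get_query_schema()  # hypothetical helper returning the schema "fields" list
    return [
        (f["name"], f.get("type"), None, None, None, None, f.get("mode") == "NULLABLE")
        for f in fields
    ]
```

With such a property in place, `pandas.read_sql` can build its column list from `cursor.description` instead of hitting `NotImplementedError`.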
### How to reproduce
_No response_
### Operating System
macOS
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.5.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22328 | https://github.com/apache/airflow/pull/25366 | e84d753015e5606c29537741cdbe8ae08012c3b6 | 7d2c2ee879656faf47829d1ad89fc4441e19a66e | "2022-03-17T00:12:26Z" | python | "2022-08-04T14:48:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,325 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_schema.py", "airflow/models/dag.py", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"] | ReST API : get_dag should return more than a simplified view of the dag | ### Description
The current response payload from https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_dag is a useful but simple view of the state of a given DAG. However, it is missing some additional attributes that I feel would be useful for individuals/groups who are choosing to interact with Airflow primarily through the ReST interface.
### Use case/motivation
As part of a testing workflow we upload DAGs to a running airflow instance and want to trigger an execution of the DAG after we know that the scheduler has updated it. We're currently automating this process through the ReST API, but the `last_updated` is not exposed.
This should be implemented from the dag_source endpoint.
https://github.com/apache/airflow/blob/main/airflow/api_connexion/endpoints/dag_source_endpoint.py
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22325 | https://github.com/apache/airflow/pull/22637 | 55ee62e28a0209349bf3e49a25565e7719324500 | 9798c8cad1c2fe7e674f8518cbe5151e91f1ca7e | "2022-03-16T20:49:07Z" | python | "2022-03-31T10:40:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,320 | ["airflow/www/templates/airflow/dag.html"] | Copying DAG ID from UI and pasting in Slack includes schedule | ### Apache Airflow version
2.2.3
### What happened
(Yes, I know the title says Slack and it might not seem like an Airflow issue, but so far this is the only application I noticed this on. There might be others.)
PR https://github.com/apache/airflow/pull/11503 was a fix to issue https://github.com/apache/airflow/issues/11500 to prevent text selection of the schedule interval when selecting the DAG ID. However, it does not fix pasting the text into certain applications (such as Slack), at least on a Mac.
@ryanahamilton thanks for the fix, but this is only fixed in the visible sense (double-clicking the DAG ID will no longer show the schedule interval and next run as selected in the UI); however, if you copy what is selected, for some reason it still includes the schedule interval and next run when pasted into certain applications.
I can't be sure why this is happening, but when pasting into certain applications such as Google Chrome, TextEdit, or Visual Studio Code, only the DAG ID and a newline are included. In other applications such as Slack (the only one I can tell so far), the schedule interval and next run are included as well, as you can see below:
- Schedule interval and next run **not shown as selected** on the DAG page:
![Screen Shot 2022-03-16 at 11 04 21 AM](https://user-images.githubusercontent.com/45696489/158659392-2df1f428-61e9-4785-be21-cdb1eda9ff6e.png)
- Schedule interval and next run **not pasted** in Google Chrome and TextEdit:
![Screen Shot 2022-03-16 at 11 05 10 AM](https://user-images.githubusercontent.com/45696489/158659521-adc2be64-1b31-403f-8630-b36b40900b42.png)
![Screen Shot 2022-03-16 at 11 15 14 AM](https://user-images.githubusercontent.com/45696489/158659539-0c76c079-3b44-4846-b41e-9038689bb33d.png)
- Schedule interval and next run **_pasted and visible_** in Slack:
![Screen Shot 2022-03-16 at 11 05 40 AM](https://user-images.githubusercontent.com/45696489/158659837-a57b0a57-306e-4ea2-9648-a4922d41c403.png)
### What you think should happen instead
When you select the DAG ID on the DAG page, copy what is selected, and then paste into a Slack message, only the DAG ID should be pasted.
### How to reproduce
Select the DAG ID on the DAG page (such as double-clicking the DAG ID), copy what is selected, and then paste into a Slack message.
### Operating System
macOS 10.15.7 (Catalina)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
This could possibly be considered a Slack bug (one could say that Slack should strip out anything that is `user-select: none`); however, it should be possible to fix the HTML layout so that `user-select: none` is not needed to prevent selection at all. The current approach is something of a band-aid fix.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22320 | https://github.com/apache/airflow/pull/28643 | 1da17be37627385fed7fc06584d72e0abda6a1b5 | 9aea857343c231319df4c5f47e8b4d9c8c3975e6 | "2022-03-16T18:31:26Z" | python | "2023-01-04T21:19:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,318 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py"] | KubernetesPodOperator xcom sidecar stuck in running | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When the main container errors and fails to write a return.json file, the xcom sidecar hangs instead of exiting cleanly with an empty return.json.
This is a problem because we want to suppress the following error; the reason the pod failed should not be that the xcom extraction failed:
```
[2022-03-16, 17:08:07 UTC] {pod_manager.py:342} INFO - Running command... cat /airflow/xcom/return.json
[2022-03-16, 17:08:07 UTC] {pod_manager.py:349} INFO - stderr from command: cat: can't open '/airflow/xcom/return.json': No such file or directory
[2022-03-16, 17:08:07 UTC] {pod_manager.py:342} INFO - Running command... kill -s SIGINT 1
[2022-03-16, 17:08:08 UTC] {kubernetes_pod.py:417} INFO - Deleting pod: test.20882a4c607d418d94e87231214d34c0
[2022-03-16, 17:08:08 UTC] {taskinstance.py:1718} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 385, in execute
result = self.extract_xcom(pod=self.pod)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 360, in extract_xcom
result = self.pod_manager.extract_xcom(pod)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 337, in extract_xcom
raise AirflowException(f'Failed to extract xcom from pod: {pod.metadata.name}')
airflow.exceptions.AirflowException: Failed to extract xcom from pod: test.20882a4c607d418d94e87231214d34c0
```
and have the KubernetesPodOperator exit gracefully
### What you think should happen instead
The sidecar should exit with an empty xcom return value.
### How to reproduce
KubernetesPodOperator with command `mkdir -p /airflow/xcom;touch /airflow/xcom/return.json; cat a >> /airflow/xcom/return.json`
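For completeness, a task-level reproduction could look roughly like this (a sketch only; the image, namespace and cluster details are placeholder assumptions):

```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

failing_xcom_task = KubernetesPodOperator(
    task_id="failing_xcom_task",
    name="failing-xcom-task",
    namespace="default",   # placeholder namespace
    image="alpine:3.15",   # any small image works
    cmds=["sh", "-c"],
    arguments=["mkdir -p /airflow/xcom; touch /airflow/xcom/return.json; cat a >> /airflow/xcom/return.json"],
    do_xcom_push=True,     # starts the xcom sidecar that ends up stuck
)
```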
### Operating System
-
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22318 | https://github.com/apache/airflow/pull/24993 | d872edacfe3cec65a9179eff52bf219c12361fef | f05a06537be4d12276862eae1960515c76aa11d1 | "2022-03-16T17:12:04Z" | python | "2022-07-16T20:37:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,298 | ["airflow/kubernetes/pod_generator.py", "tests/kubernetes/test_pod_generator.py"] | pod_override ignores namespace configuration | ### Apache Airflow version
2.0.2
### What happened
I've attempted to use pod_override as per the documentation [here](https://airflow.apache.org/docs/apache-airflow/2.0.2/executor/kubernetes.html#pod-override). Labels, annotations, and service accounts work. However, overriding the namespace does not: I would expect the pod to be created in the namespace set in `pod_override`, but it is not.
My speculation is that the `pod_generator.py` code is incorrect [here](https://github.com/apache/airflow/blob/2.0.2/airflow/kubernetes/pod_generator.py#L405). The order of reconciliation goes:
```
# Reconcile the pods starting with the first chronologically,
# Pod from the pod_template_File -> Pod from executor_config arg -> Pod from the K8s executor
pod_list = [base_worker_pod, pod_override_object, dynamic_pod]
```
Note that dynamic pod takes precedence. Note that `dynamic_pod` has the namespace from whatever namespace is [passed](https://github.com/apache/airflow/blob/2.0.2/airflow/kubernetes/pod_generator.py#L373) in. It is initialized from `self.kube_config.kube_namespace` [here](https://github.com/apache/airflow/blob/ac77c89018604a96ea4f5fba938f2fbd7c582793/airflow/executors/kubernetes_executor.py#L245). Therefore, the kube config's namespace takes precedence and will override the namespace of the `pod_override`, because only one namespace can exist (unlike labels and annotations, which can be additive). This unfortunately means pods created by the KubernetesExecutor will run in the config's namespace. For what it's worth, I set the namespace via an env var:
```
AIRFLOW__KUBERNETES__NAMESPACE = "foobar"
```
This code flow remains the same on the latest version of Airflow -- I suspect the bug is still present.
### What you think should happen instead
If we use pod_override with a namespace, the pod should run in the namespace specified in pod_override.
### How to reproduce
Create a DAG using a PythonOperator or any other operator. Use the pod_override:
```
"pod_override": k8s.V1Pod(
metadata=k8s.V1ObjectMeta(namespace="fakenamespace",
annotations={"test": "annotation"},
labels={'foo': 'barBAZ'}),
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base"
)
],
service_account_name="foobar"
)
)
```
Look at the k8s spec file or use `kubectl` to verify where this pod is attempted to start running. It should be `fakenamespace`, but instead will likely be in the same namespace as whatever is set in the config file.
### Operating System
Amazon EKS 1.19
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22298 | https://github.com/apache/airflow/pull/24342 | 6476afda208eb6aabd58cc00db8328451c684200 | 1fe07e5cebac5e8a0b3fe7e88c65f6d2b0c2134d | "2022-03-16T00:59:19Z" | python | "2022-07-06T11:56:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,248 | ["airflow/utils/docs.py", "docs/apache-airflow-providers/index.rst"] | Allow custom redirect for provider information in /provider | ### Description
`/provider` enables users to get amazing information via the UI; however, if you've written a custom provider, the documentation redirect defaults to `https://airflow.apache.org/docs/airflow-provider-{rest_of_name}/{version}/`, which isn't useful for custom operators. (If this feature exists then I must've missed the documentation on it, sorry!)
### Use case/motivation
As an airflow developer I've written a custom provider package and would like to link to my internal documentation as well as my private github repo via the `/provider` entry for my custom provider.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
As this is a UI change + more, I am willing to submit a PR, but would likely need help.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22248 | https://github.com/apache/airflow/pull/23012 | 3b2ef88f877fc5e4fcbe8038f0a9251263eaafbc | 7064a95a648286a4190a452425626c159e467d6e | "2022-03-14T15:27:23Z" | python | "2022-04-22T13:21:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,220 | ["airflow/providers/databricks/provider.yaml", "docs/apache-airflow-providers-databricks/index.rst", "setup.py", "tests/providers/databricks/operators/test_databricks_sql.py"] | Databricks SQL fails on Python 3.10 | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
The Databricks SQL provider does not work on Python 3.10 due to the use of `from collections import Iterable` in `databricks-sql-connector`:
* https://pypi.org/project/databricks-sql-connector/
Details of this issue are discussed in https://github.com/apache/airflow/pull/22050
For now we will likely just exclude the tests (and mark the Databricks provider as not compatible with Python 3.10). But once this is fixed (in either 1.0.2 or the upcoming 2.0.0 version of the library), we will restore it.
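For context, the underlying incompatibility is plain Python behaviour and can be shown outside Airflow (an illustration only, no Airflow code involved):

```python
import collections
import collections.abc

print(isinstance([], collections.abc.Iterable))  # works on all supported Python versions
print(isinstance([], collections.Iterable))      # AttributeError on Python 3.10+: the alias was removed
```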
### Apache Airflow version
main (development)
### Operating System
All
### Deployment
Other
### Deployment details
Just Breeze with Python 3.10
### What happened
The tests are failing:
```
self = <databricks.sql.common.ParamEscaper object at 0x7fe81c6dd6c0>
item = ['file1', 'file2', 'file3']
def escape_item(self, item):
if item is None:
return 'NULL'
elif isinstance(item, (int, float)):
return self.escape_number(item)
elif isinstance(item, basestring):
return self.escape_string(item)
> elif isinstance(item, collections.Iterable):
E AttributeError: module 'collections' has no attribute 'Iterable'
```
https://github.com/apache/airflow/runs/5523057543?check_suite_focus=true#step:8:16781
### What you expected to happen
Tests succeed :)
### How to reproduce
Run `TestDatabricksSqlCopyIntoOperator` in a Python 3.10 environment.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22220 | https://github.com/apache/airflow/pull/22886 | aa8c08db383ebfabf30a7c2b2debb64c0968df48 | 7be57eb2566651de89048798766f0ad5f267cdc2 | "2022-03-13T14:55:30Z" | python | "2022-04-10T18:32:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,191 | ["airflow/dag_processing/manager.py", "airflow/dag_processing/processor.py", "tests/dag_processing/test_manager.py"] | dag_processing code needs to handle OSError("handle is closed") in poll() and recv() calls | ### Apache Airflow version
2.1.4
### What happened
The problem also exists in the latest version of the Airflow code, but I experienced it in 2.1.4.
This is the root cause of problems experienced in [issue#13542](https://github.com/apache/airflow/issues/13542).
I'll provide a stack trace below. The problem is in the code of airflow/dag_processing/processor.py (and manager.py): all poll() and recv() calls on the multiprocessing communication channels need to be wrapped in exception handlers that handle OSError("handle is closed") exceptions. If one looks at the Python multiprocessing source code, it throws this exception when the channel's handle has been closed.
This occurs in Airflow when a DAG File Processor has been killed or terminated; the Airflow code closes the communication channel when it is killing or terminating a DAG File Processor process (for example, when a dag_file_processor_timeout occurs). This killing or terminating happens asynchronously (in another process) from the process calling poll() or recv() on the communication channel. This is why an exception needs to be handled. A pre-check of the handle being open is not good enough, because the other process doing the kill or terminate may close the handle in between your pre-check and actually calling poll() or recv() (a race condition).
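To illustrate the shape of the fix being proposed (a simplified sketch, not the actual patch; `channel` stands in for the processor's `_parent_channel`):

```python
def safe_poll(channel) -> bool:
    """Poll a multiprocessing channel that another process may have already closed."""
    try:
        return channel.poll()
    except OSError:
        # Raised with "handle is closed" when the channel was closed concurrently,
        # e.g. after the DAG file processor was killed or terminated.
        return False
```

The same kind of guard would be needed around the recv() calls.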
### What you expected to happen
Here is the stack trace of the occurrence I saw:
```
[2022-03-08 17:41:06,101] {taskinstance.py:914} DEBUG - <TaskInstance: staq_report_daily.gs.wait_staq_csv_file 2022-03-06 17:15:00+00:00 [running]> dependency 'Not In Retry Period' PASSED: True, The context specified that being in a retry p
eriod was permitted.
[2022-03-08 17:41:06,101] {taskinstance.py:904} DEBUG - Dependencies all met for <TaskInstance: staq_report_daily.gs.wait_staq_csv_file 2022-03-06 17:15:00+00:00 [running]>
[2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: gdai_gcs_sync> because no tasks in DAG have SLAs
[2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: unity_creative_import_process> because no tasks in DAG have SLAs
[2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: sales_dm_to_bq> because no tasks in DAG have SLAs
[2022-03-08 17:44:50,454] {settings.py:302} DEBUG - Disposing DB connection pool (PID 1902)
Process ForkProcess-1:
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/python3.8/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 370, in _run_processor_manager
processor_manager.start()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 610, in start
return self._run_parsing_loop()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 671, in _run_parsing_loop
self._collect_results_from_processor(processor)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 981, in _collect_results_from_processor
if processor.result is not None:
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/processor.py", line 321, in result
if not self.done:
File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/processor.py", line 286, in done
if self._parent_channel.poll():
File "/opt/python3.8/lib/python3.8/multiprocessing/connection.py", line 255, in poll
self._check_closed()
File "/opt/python3.8/lib/python3.8/multiprocessing/connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
```
This corresponded in time to the following log entries:
```
% kubectl logs airflow-scheduler-58c997dd98-n8xr8 -c airflow-scheduler --previous | egrep 'Ran scheduling loop in|[[]heartbeat[]]'
[2022-03-08 17:40:47,586] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.56 seconds
[2022-03-08 17:40:49,146] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.56 seconds
[2022-03-08 17:40:50,675] {base_job.py:227} DEBUG - [heartbeat]
[2022-03-08 17:40:50,687] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.54 seconds
[2022-03-08 17:40:52,144] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.46 seconds
[2022-03-08 17:40:53,620] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.47 seconds
[2022-03-08 17:40:55,085] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.46 seconds
[2022-03-08 17:40:56,169] {base_job.py:227} DEBUG - [heartbeat]
[2022-03-08 17:40:56,180] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.49 seconds
[2022-03-08 17:40:57,667] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.49 seconds
[2022-03-08 17:40:59,148] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.48 seconds
[2022-03-08 17:41:00,618] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.47 seconds
[2022-03-08 17:41:01,742] {base_job.py:227} DEBUG - [heartbeat]
[2022-03-08 17:41:01,757] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.58 seconds
[2022-03-08 17:41:03,133] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.55 seconds
[2022-03-08 17:41:04,664] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.53 seconds
[2022-03-08 17:44:50,649] {base_job.py:227} DEBUG - [heartbeat]
[2022-03-08 17:44:50,814] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 225.15 seconds
```
You can see that when this exception occurred, there was a hang in the scheduler for almost 4 minutes, no scheduling loops, and no scheduler_job heartbeats.
This hang probably also caused stuck queued jobs as issue#13542 describes.
### How to reproduce
This is hard to reproduce because it is a race condition. But you might be able to reproduce it by putting top-level code that calls sleep in a dagfile, so that it takes longer to parse than the core dag_file_processor_timeout setting. That would cause the parsing processes to be terminated, creating the conditions for this bug to occur.
### Operating System
NAME="Ubuntu" VERSION="18.04.6 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.6 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic
### Versions of Apache Airflow Providers
Not relevant, this is a core dag_processing issue.
### Deployment
Composer
### Deployment details
"composer-1.17.6-airflow-2.1.4"
In order to isolate the scheduler to a separate machine, so as to not have interference from other processes such as airflow-workers running on the same machine, we created an additional node-pool for the scheduler, and ran these k8s patches to move the scheduler to a separate machine.
New node pool definition:
```HCL
{
name = "scheduler-pool"
machine_type = "n1-highcpu-8"
autoscaling = false
node_count = 1
disk_type = "pd-balanced"
disk_size = 64
image_type = "COS"
auto_repair = true
auto_upgrade = true
max_pods_per_node = 32
},
```
patch.sh
```sh
#!/bin/bash
if [ $# -lt 1 ]; then
echo "Usage: $0 namespace"
echo "Description: Isolate airflow-scheduler onto it's own node-pool (scheduler-pool)."
echo "Options:"
echo " namespace: kubernetes namespace used by Composer"
exit 1
fi
namespace=$1
set -eu
set -o pipefail
scheduler_patch="$(cat airflow-scheduler-patch.yaml)"
fluentd_patch="$(cat composer-fluentd-daemon-patch.yaml)"
set -x
kubectl -n default patch daemonset composer-fluentd-daemon -p "${fluentd_patch}"
kubectl -n ${namespace} patch deployment airflow-scheduler -p "${scheduler_patch}"
```
composer-fluentd-daemon-patch.yaml
```yaml
spec:
template:
spec:
nodeSelector: null
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cloud.google.com/gke-nodepool
operator: In
values:
- default-pool
- scheduler-pool
```
airflow-scheduler-patch.yaml
```yaml
spec:
template:
spec:
nodeSelector:
cloud.google.com/gke-nodepool: scheduler-pool
containers:
- name: gcs-syncd
resources:
limits:
memory: 2Gi
```
### Anything else
Regarding the checkbox below about submitting a PR: I could submit one, but it would be untested code; I don't really have the environment set up to test the patch.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22191 | https://github.com/apache/airflow/pull/22685 | c0c08b2bf23a54115f1ba5ac6bc8299f5aa54286 | 4a06f895bb2982ba9698b9f0cfeb26d1ff307fc3 | "2022-03-11T20:02:15Z" | python | "2022-04-06T20:42:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,174 | ["airflow/www/static/js/ti_log.js", "airflow/www/templates/airflow/ti_log.html"] | Support log download in task log view | ### Description
Support log downloading from the task log view by adding a download button in the UI.
### Use case/motivation
In the current version of Airflow, when we want to download a task try's log, we can click on the task node in Tree View or Graph view, and use the "Download" button in the task action modal, as in this screenshot:
<img width="752" alt="Screen Shot 2022-03-10 at 5 59 23 PM" src="https://user-images.githubusercontent.com/815701/157787811-feb7bdd4-4e32-4b85-b822-2d68662482e9.png">
It would make log downloading more convenient if we added a Download button to the task log view. This is a screenshot of the task log view; we could add a button to the right of the "Toggle Wrap" button.
<img width="1214" alt="Screen Shot 2022-03-10 at 5 55 53 PM" src="https://user-images.githubusercontent.com/815701/157788262-a4cb8ff7-b813-4140-b8a1-41a5d0630e1f.png">
I work on Airflow at Pinterest, and internally we have received this feature request from our users. I'd like to get your thoughts about adding this feature before I create a PR for it. Thanks.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22174 | https://github.com/apache/airflow/pull/22804 | 6aa65a38e0be3fee18ae9c1541e6091a47ab1f76 | b29cbbdc1bbc290d67e64aa3a531caf2b9f6846b | "2022-03-11T02:08:11Z" | python | "2022-04-08T14:55:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,152 | ["airflow/models/abstractoperator.py", "airflow/models/dag.py", "airflow/models/taskinstance.py", "tests/models/test_dag.py", "tests/models/test_taskinstance.py"] | render_template_as_native_obj=True in DAG constructor prevents failure e-mail from sending | ### Apache Airflow version
2.2.4 (latest released)
### What happened
A DAG constructed with render_template_as_native_obj=True does not send an e-mail notification on task failure.
DAGs constructed without render_template_as_native_obj send e-mail notification as expected.
default_email_on_failure is set to True in airflow.cfg.
### What you expected to happen
I expect DAGs to send an e-mail alert on task failure.
Logs for failed tasks show this:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1767, in handle_failure
self.email_alert(error)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2101, in email_alert
subject, html_content, html_content_err = self.get_email_subject_content(exception)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2093, in get_email_subject_content
subject = render('subject_template', default_subject)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2091, in render
return render_template_to_string(jinja_env.from_string(content), jinja_context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 268, in render_template_to_string
return render_template(template, context, native=False)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 263, in render_template
return "".join(nodes)
TypeError: sequence item 1: expected str instance, TaskInstance found
```
### How to reproduce
1. Construct a DAG with render_template_as_native_obj=True with 'email_on_failure':True.
2. Cause an error in a task. I used a PythonOperator with `assert False`.
3. Task will fail, but no alert e-mail will be sent.
4. Remove render_template_as_native_obj=True from DAG constructor.
5. Re-run DAG
6. Task will fail and alert e-mail will be sent.
I used the following for testing:
```
import datetime

from airflow.operators.python_operator import PythonOperator
from airflow.models import DAG

default_args = {
    'owner': 'me',
    'start_date': datetime.datetime(2022, 3, 9),
    'email_on_failure': True,
    'email': 'myemail@email.com'
}

dag = DAG(dag_id='dagname',
          schedule_interval='@once',
          default_args=default_args,
          render_template_as_native_obj=True,  # comment this out to test
          )


def testfunc(**kwargs):
    # intentional error
    assert False


task_testfunc = PythonOperator(
    task_id="task_testfunc",
    python_callable=testfunc,
    dag=dag)

task_testfunc
```
### Operating System
Red Hat Enterprise Linux Server 7.9 (Maipo)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22152 | https://github.com/apache/airflow/pull/22770 | 91832a42d8124b040073481fd93c54e9e64c2609 | d80d52acf14034b0adf00e45b0fbac6ac03ab593 | "2022-03-10T16:13:04Z" | python | "2022-04-07T08:48:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,141 | ["airflow/cli/commands/scheduler_command.py", "airflow/utils/cli.py", "docs/apache-airflow/howto/set-config.rst", "tests/cli/commands/test_scheduler_command.py"] | Dump configurations in airflow scheduler logs based on the config it reads | ### Description
We don't have any way to cross-verify the configs that the airflow scheduler uses. It would be good to have them logged somewhere so that users can cross-verify them.
### Use case/motivation
How do you know for sure that the configs in airflow.cfg are being correctly parsed by airflow scheduler?
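For reference, the effective configuration can already be dumped programmatically, which is roughly the information the scheduler could log at startup (a sketch of the idea, not the implemented behaviour):

```python
from airflow.configuration import conf

# display_source shows where each value came from (airflow.cfg, env var, default, ...);
# display_sensitive=False keeps secrets out of the logs.
effective_config = conf.as_dict(display_source=True, display_sensitive=False)
for section, options in effective_config.items():
    print(section, options)
```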
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22141 | https://github.com/apache/airflow/pull/22588 | c30ab6945ea0715889d32e38e943c899a32d5862 | 78586b45a0f6007ab6b94c35b33790a944856e5e | "2022-03-10T09:56:08Z" | python | "2022-04-04T12:05:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,129 | ["airflow/providers/google/cloud/operators/bigquery.py", "docs/apache-airflow-providers-google/operators/cloud/bigquery.rst", "tests/providers/google/cloud/operators/test_bigquery.py"] | Add autodetect arg in BQCreateExternalTable Operator | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.4 (latest released)
### Operating System
macOS Monterey
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
The autodetect parameter is missing from the create_external_table call in BigQueryCreateExternalTableOperator, which means external tables cannot be created if schema files are missing.
See function on line 1140 in this [file](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py)
### What you expected to happen
If an autodetect argument is passed to the create_external_table function, then external tables can be created without specifying a schema for a CSV file, leveraging the automatic schema detection functionality provided by BigQuery.
### How to reproduce
Simply call BigQueryCreateExternalTableOperator in a DAG without providing schema_fields or schema_object, for example:
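A minimal sketch of such a task (the bucket, object, and table names are placeholders):

```python
from airflow.providers.google.cloud.operators.bigquery import BigQueryCreateExternalTableOperator

create_external_table = BigQueryCreateExternalTableOperator(
    task_id="create_external_table",
    bucket="my-bucket",
    source_objects=["data/my_file.csv"],
    destination_project_dataset_table="my-project.my_dataset.my_external_table",
    # No schema_fields or schema_object: without an autodetect option there is
    # no way to let BigQuery infer the schema of the CSV file.
)
```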
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22129 | https://github.com/apache/airflow/pull/22710 | 215993b75d0b3a568b01a29e063e5dcdb3b963e1 | f9e18472c0c228fc3de7c883c7c3d26d7ee49e81 | "2022-03-09T18:12:30Z" | python | "2022-04-04T13:32:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,115 | ["airflow/providers/docker/hooks/docker.py", "airflow/providers/docker/operators/docker.py", "tests/providers/docker/hooks/test_docker.py", "tests/providers/docker/operators/test_docker.py", "tests/providers/docker/operators/test_docker_swarm.py"] | add timeout to DockerOperator | ### Body
`APIClient` has a [timeout](https://github.com/docker/docker-py/blob/b27faa62e792c141a5d20c4acdd240fdac7d282f/docker/api/client.py#L84) param which sets the default timeout for API calls. The package [default](https://github.com/docker/docker-py/blob/7779b84e87bea3bac3a32b3ec1511bc1bfaa44f1/docker/constants.py#L6) is 60 seconds, which may not be enough for some processes.
The needed fix is to allow setting `timeout` on the `APIClient` constructor in DockerHook and DockerOperator:
for example in:
https://github.com/apache/airflow/blob/05f3a309668288e03988fc4774f9c801974b63d0/airflow/providers/docker/hooks/docker.py#L89
Note that in some cases DockerOperator uses APIClient directly, for example in:
https://github.com/apache/airflow/blob/05f3a309668288e03988fc4774f9c801974b63d0/airflow/providers/docker/operators/docker.py#L390
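A rough sketch of the proposed change (the parameter name and wiring are assumptions, not the final API; the `APIClient` keyword arguments are existing docker-py API):

```python
from airflow.models import BaseOperator
from docker import APIClient


class DockerOperatorWithTimeout(BaseOperator):
    """Illustrative stand-in for DockerOperator with a configurable API timeout."""

    def __init__(self, *, docker_url: str = "unix://var/run/docker.sock",
                 api_version: str = "auto", timeout: int = 60, **kwargs) -> None:
        super().__init__(**kwargs)
        self.docker_url = docker_url
        self.api_version = api_version
        self.timeout = timeout  # forwarded to every APIClient the operator creates

    def _get_cli(self) -> APIClient:
        return APIClient(base_url=self.docker_url, version=self.api_version, timeout=self.timeout)
```

The same `timeout` would also need to be forwarded wherever the operator builds an `APIClient` directly, as in the link above.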
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/22115 | https://github.com/apache/airflow/pull/22502 | 0d64d66ceab1c5da09b56bae5da339e2f608a2c4 | e1a42c4fc8a634852dd5ac5b16cade620851477f | "2022-03-09T12:19:41Z" | python | "2022-03-28T19:35:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,111 | ["Dockerfile", "Dockerfile.ci", "airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/ads/hooks/ads.py", "docs/apache-airflow-providers-google/index.rst", "setup.cfg", "setup.py", "tests/providers/google/ads/operators/test_ads.py"] | apache-airflow-providers-google uses deprecated Google Ads API V8 | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google = 6.4.0
### Apache Airflow version
2.1.3
### Operating System
Debian Buster
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
`apache-airflow-providers-google 6.4.0` has the requirement `google-ads >=12.0.0,<14.0.1`
The latest version of the Google Ads API supported by this is V8 - this was deprecated in November 2021, and is due to be disabled in April / May (see https://developers.google.com/google-ads/api/docs/sunset-dates)
### What you expected to happen
Update the requirements so that the provider uses a version of the Google Ads API which hasn't been deprecated.
At the moment, the only non-deprecated version is V10 - support for this was added in `google-ads=15.0.0`
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22111 | https://github.com/apache/airflow/pull/22965 | c92954418a21dcafcb0b87864ffcb77a67a707bb | c36bcc4c06c93dce11e2306a4aff66432bffd5a5 | "2022-03-09T10:05:51Z" | python | "2022-04-15T10:20:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,065 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_mssql_to_gcs.py", "tests/providers/google/cloud/transfers/test_mysql_to_gcs.py", "tests/providers/google/cloud/transfers/test_oracle_to_gcs.py", "tests/providers/google/cloud/transfers/test_postgres_to_gcs.py", "tests/providers/google/cloud/transfers/test_presto_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_trino_to_gcs.py"] | DB To GCS Operations Should Return/Save Row Count | ### Description
All DB to GCS operators should track the per-file and total row counts written, for metadata and validation purposes.
- Optionally, based on param, include the row count metadata as GCS file upload metadata.
- Always return row count data through XCom. Currently this operator has no return value.
### Use case/motivation
Currently, there is no way to check an uploaded file's row count without opening the file. Downstream operations should have access to this information, and allowing it to be saved as GCS metadata and returned through XCom makes it readily available for other uses.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22065 | https://github.com/apache/airflow/pull/24382 | 8e0bddaea69db4d175f03fa99951f6d82acee84d | 94257f48f4a3f123918b0d55c34753c7c413eb74 | "2022-03-07T23:36:35Z" | python | "2022-06-13T06:55:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,034 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py"] | BigQueryToGCSOperator: Invalid dataset ID error | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google==6.3.0`
### Apache Airflow version
2.2.3
### Operating System
Linux
### Deployment
Composer
### Deployment details
- Composer Environment version: `composer-2.0.3-airflow-2.2.3`
### What happened
When I use BigQueryToGCSOperator, I get the following error:
```
Invalid dataset ID "MY_PROJECT:MY_DATASET". Dataset IDs must be alphanumeric (plus underscores and dashes) and must be at most 1024 characters long.
```
### What you expected to happen
I guess that this is because I use a colon (`:`) as the separator between project_id and dataset_id in `source_project_dataset_table`.
I tried using a dot (`.`) as the separator and it worked.
However, the [document of BigQueryToGCSOperator](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/bigquery_to_gcs/index.html) states that it is possible to use a colon as the separator between project_id and dataset_id. In fact, at least until Airflow 1.10.15, it also worked with a colon separator.
In Airflow 1.10.*, the BigQuery hook separated and extracted project_id and dataset_id on the colon, but `apache-airflow-providers-google==6.3.0` doesn't have this logic.
https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/contrib/hooks/bigquery_hook.py#L2186-L2247
### How to reproduce
You can reproduce it with the following steps.
- Create a test DAG that executes BigQueryToGCSOperator in a Composer environment (`composer-2.0.3-airflow-2.2.3`).
- Give the `source_project_dataset_table` arg a source BigQuery table path in the following format.
- Trigger the DAG.
```
source_project_dataset_table = 'PROJECT_ID:DATASET_ID.TABLE_ID'
```
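A minimal reproduction task could look like this (a sketch; the project, dataset, table, and bucket are placeholders):

```python
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator

export_table = BigQueryToGCSOperator(
    task_id="export_table",
    # Colon separator between project and dataset, as the operator docs describe:
    source_project_dataset_table="MY_PROJECT:MY_DATASET.MY_TABLE",
    destination_cloud_storage_uris=["gs://my-bucket/export/part-*.csv"],
)
```

Switching the colon to a dot in `source_project_dataset_table` avoids the error.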
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22034 | https://github.com/apache/airflow/pull/22506 | 02526b3f64d090e812ebaf3c37a23da2a3e3084e | 02976bef885a5da29a8be59b32af51edbf94466c | "2022-03-07T05:00:21Z" | python | "2022-03-27T20:21:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,015 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Allow showing more than 25 last DAG runs in the task duration view | ### Apache Airflow version
2.1.2
### What happened
The task duration view for triggered dags shows all dag runs instead of the last n. Changing the number of runs in the `Runs` drop-down menu doesn't change the view. Additionally, the chart loads slowly because it shows all dag runs.
![screenshot_2022-03-05_170203](https://user-images.githubusercontent.com/7863204/156891332-5e461661-970d-49bf-82a8-5dd1da57bb02.png)
### What you expected to happen
The number of shown dag runs should be 25 (like for scheduled dags), and the most recent runs should be shown. The number-of-runs button should allow increasing/decreasing the number of shown dag runs (and correspondingly the task durations of those dag runs).
### How to reproduce
Trigger a dag multiple (> 25) times. Look at the "Task Duration" chart.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22015 | https://github.com/apache/airflow/pull/29195 | de2889c2e9779177363d6b87dc9020bf210fdd72 | 8b8552f5c4111fe0732067d7af06aa5285498a79 | "2022-03-05T16:16:48Z" | python | "2023-02-25T21:50:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,007 | ["airflow/api_connexion/endpoints/variable_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/variable_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_variable_endpoint.py"] | Add Variable, Connection "description" fields available in the API | ### Description
I'd like to get the "description" field from the variable and connection tables available through the REST API for the calls:
1. /variables/{key}
2. /connections/{conn_id}
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22007 | https://github.com/apache/airflow/pull/25370 | faf3c4fe474733965ab301465f695e3cc311169c | 98f16aa7f3b577022791494e13b6aa7057afde9d | "2022-03-04T22:55:04Z" | python | "2022-08-02T21:05:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,996 | ["airflow/providers/ftp/hooks/ftp.py", "airflow/providers/sftp/hooks/sftp.py", "tests/providers/ftp/hooks/test_ftp.py"] | Add test_connection to FTP Hook | ### Description
I would like to test whether FTP connections are properly set up.
### Use case/motivation
To test FTP connections via the Airflow UI similar to https://github.com/apache/airflow/pull/19609
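One possible shape for the hook method behind the UI "Test" button is sketched below (an illustration of the idea, not the merged implementation):

```python
def test_connection(self) -> tuple:
    """Return (success, message) for the connection test button."""
    try:
        conn = self.get_conn()  # ftplib.FTP client held by the hook
        conn.pwd()              # cheap call that fails fast if the connection is broken
        return True, "Connection successfully tested"
    except Exception as e:
        return False, str(e)
```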
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21996 | https://github.com/apache/airflow/pull/21997 | a9b7dd69008710f1e5b188e4f8bc2d09a5136776 | 26e8d6d7664bbaae717438bdb41766550ff57e4f | "2022-03-04T15:09:39Z" | python | "2022-03-06T10:16:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,989 | ["airflow/providers/google/cloud/operators/dataflow.py", "tests/providers/google/cloud/operators/test_dataflow.py"] | Potential Bug in DataFlowCreateJavaJobOperator | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.3
### Operating System
mac
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
Passing anything other than a GCS bucket path to the `jar` argument results in the job never being started on DataFlow.
### What you expected to happen
Passing in a local path to a jar should result in a job starting on Data Flow.
### How to reproduce
Create a task using the DataFlowCreateJavaJobOperator and pass in a non GCS path to the `jar` argument.
### Anything else
It's probably an indentation error in this [file](https://github.com/apache/airflow/blob/17d3e78e1b4011267e81846b5d496769934a5bcc/airflow/providers/google/cloud/operators/dataflow.py#L413) starting on line 413. The code for starting the job is over-indented, which causes any non-GCS path for the `jar` to be effectively ignored.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21989 | https://github.com/apache/airflow/pull/22302 | 43dfec31521dcfcb45b95e2927d7c5eb5efa2c67 | a3ffbee7c9b5cd8cc5b7b246116f0254f1daa505 | "2022-03-04T10:31:02Z" | python | "2022-03-20T11:12:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,987 | ["airflow/providers/amazon/aws/hooks/s3.py"] | Airflow S3 connection name | ### Description
Hi,
I took a look at some issues and PRs and noticed that the `Elastic MapReduce` connection name was recently changed to `Amazon Elastic MapReduce`. [#20746](https://github.com/apache/airflow/issues/20746)
I think it would be much more intuitive if the connection name `S3` were changed to `Amazon S3`, and it would look better in the connection list in the web UI (also, it is the official name of S3).
Finally, the AWS connections would be the following:
```
Amazon Web Services
Amazon Redshift
Amazon Elastic MapReduce
Amazon S3
```
Would you mind assigning me to this so I can submit a PR changing it to `Amazon S3`?
It would be a great start for my Airflow contribution journey.
Thank you!
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21987 | https://github.com/apache/airflow/pull/21988 | 2b4d14696b3f32bc5ab71884a6e434887755e5a3 | 9ce45ff756fa825bd363a5a00c2333d91c60c012 | "2022-03-04T07:38:44Z" | python | "2022-03-04T17:25:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,978 | ["airflow/providers/google/cloud/hooks/gcs.py", "tests/providers/google/cloud/hooks/test_gcs.py"] | Add Metadata Upload Support to GCSHook Upload Method | ### Description
When uploading a file using the GCSHook Upload method, allow for optional metadata to be uploaded with the file. This metadata would then be visible on the blob properties in GCS.
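A rough sketch of the idea using the google-cloud-storage client directly (the `metadata` argument on the hook itself is the proposal; only the client-level part below is existing API):

```python
from google.cloud import storage


def upload_with_metadata(bucket_name: str, object_name: str, filename: str, metadata: dict) -> None:
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    blob.metadata = metadata  # appears under the object's custom metadata in GCS
    blob.upload_from_filename(filename)
```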
### Use case/motivation
Being able to associate metadata with a GCS blob is very useful for tracking information about the data stored in the GCS blob.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21978 | https://github.com/apache/airflow/pull/22058 | c1faaf3745dd631d4491202ed245cf8190f35697 | a11d831e3f978826d75e62bd70304c5277a8a1ea | "2022-03-03T21:08:50Z" | python | "2022-03-07T22:28:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,970 | ["docs/helm-chart/manage-dags-files.rst"] | helm chart - mounting-dags-from-a-private-github-repo-using-git-sync-sidecar | ### Describe the issue with documentation
doc link: https://airflow.apache.org/docs/helm-chart/stable/manage-dags-files.html#mounting-dags-from-a-private-github-repo-using-git-sync-sidecar
doc location:
"""
[...]
repo: ssh://git@github.com/<username>/<private-repo-name>.git
[...]
"""
I literally spent one working day making the helm deployment work with the git sync feature.
I prefixed my ssh git repo url with "ssh://" as written in the doc. This resulted in the git-sync container being stuck in a CrashLoopBackOff.
### How to solve the problem
It worked correctly only when I removed the prefix.
### Anything else
chart version: 1.4.0
git-sync image tag: v3.1.6 (default v3.3.0)
Maybe the reason for the issue is the changed image tag. However, I want to share my experience: the doc may be misleading. For me it was.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21970 | https://github.com/apache/airflow/pull/26632 | 5560a46bfe8a14205c5e8a14f0b5c2ae74ee100c | 05d351b9694c3e25843a8b0e548b07a70a673288 | "2022-03-03T16:10:41Z" | python | "2022-09-27T13:05:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,941 | ["airflow/providers/amazon/aws/hooks/sagemaker.py", "airflow/providers/amazon/aws/operators/sagemaker.py", "tests/providers/amazon/aws/operators/test_sagemaker_transform.py"] | Sagemaker Transform Job fails if there are job with Same name | ### Description
A SageMaker Transform job fails if a job with the same name already exists. Say I create a job named 'transform-2021-01-01T00-30-00'. If I clear the Airflow task instance so that the operator re-triggers, the SageMaker job creation fails because a job with the same name exists. So can we add an 'action_if_job_exists' flag controlling the behaviour if the job name already exists, with possible options "increment" (default) and "fail"?
### Use case/motivation
In a production environment failures are inevitable, and with SageMaker jobs we have to ensure a unique name for each run of the job. Like the SageMaker Processing and Training operators, we should have an option to increment a job name by appending a count: if I run the same job twice, the second job name becomes 'transform-2021-01-01T00-30-00-1', with the 1 appended at the end, controlled by 'action_if_job_exists ([str](https://docs.python.org/3/library/stdtypes.html#str)) -- Behaviour if the job name already exists. Possible options are "increment" (default) and "fail".'
I have faced this issue personally on one of the tasks I am working on. Rather than re-running the entire workflow just to get unique job names when there are other dependent tasks, it would save time and cost to simply clear the failed task after fixing the failure (SageMaker code, Docker image, inputs, etc.) and continue from where it failed.
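To make the requested "increment" behaviour concrete, here is a sketch of the name-resolution logic (the function name and wiring are illustrative only):

```python
def resolve_job_name(base_name: str, existing_names: set) -> str:
    """Return base_name, or base_name-N with the smallest N not already taken."""
    if base_name not in existing_names:
        return base_name
    suffix = 1
    while f"{base_name}-{suffix}" in existing_names:
        suffix += 1
    return f"{base_name}-{suffix}"
```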
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21941 | https://github.com/apache/airflow/pull/25263 | 1fd702e5e55cabb40fe7e480bc47e70d9a036944 | 007b1920ddcee1d78f871d039a6ed8f4d0d4089d | "2022-03-02T13:52:31Z" | python | "2022-08-02T18:20:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,929 | ["airflow/providers/elasticsearch/hooks/elasticsearch.py", "docs/apache-airflow-providers-elasticsearch/hooks/elasticsearch_python_hook.rst", "docs/apache-airflow-providers-elasticsearch/hooks/elasticsearch_sql_hook.rst", "docs/apache-airflow-providers-elasticsearch/hooks/index.rst", "docs/apache-airflow-providers-elasticsearch/index.rst", "tests/always/test_project_structure.py", "tests/providers/elasticsearch/hooks/test_elasticsearch.py", "tests/system/providers/elasticsearch/example_elasticsearch_query.py"] | Elasticsearch hook support DSL | ### Description
The current Elasticsearch provider hook does not support querying with the DSL. Can we implement some methods that accept a user-supplied JSON query and return the results?
By the way, why is the current `ElasticsearchHook`'s parent class `DbApiHook`? I thought DbApiHook was for relational databases that support `sqlalchemy`.
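To sketch what such a DSL query method could look like (the method name and hook wiring are assumptions, not the provider's current API; the underlying `elasticsearch-py` call is real):

```python
from elasticsearch import Elasticsearch


def search(hosts: list, index: str, query: dict) -> dict:
    """Run a raw DSL query and return the response body."""
    es = Elasticsearch(hosts)
    return es.search(index=index, body=query)
```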
### Use case/motivation
I think the Elasticsearch provider hook should be like `MongoHook`: inherit from `BaseHook` and provide more useful methods that work out of the box.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21929 | https://github.com/apache/airflow/pull/24895 | 33b2cd8784dcbc626f79e2df432ad979727c9a08 | 2ddc1004050464c112c18fee81b03f87a7a11610 | "2022-03-02T08:38:07Z" | python | "2022-07-08T21:51:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,923 | ["airflow/api/common/trigger_dag.py", "airflow/jobs/scheduler_job.py", "airflow/models/dag.py", "airflow/models/dagrun.py", "airflow/timetables/base.py", "airflow/utils/types.py", "docs/apache-airflow/howto/timetable.rst", "tests/models/test_dag.py"] | Programmatic customization of run_id for scheduled DagRuns | ### Description
Allow DAG authors to control how `run_id`s are generated for created DagRuns. Currently the only way to specify a DagRun's `run_id` is through the manual trigger workflow, either via the CLI or the API, by passing in `run_id`. It would be great if DAG authors were able to write custom logic to generate `run_id`s from scheduled `DagRunInterval`s.
### Use case/motivation
In Airflow 1.x, the semantics of `execution_date` were burdensome enough for users that DAG authors would subclass DAG and override `create_dagrun`, so that new DagRuns were created with `run_id`s that provided context about the run's semantics. For example,
```
def create_dagrun(self, **kwargs):
    # Concatenate as strings; adding a datetime and a date directly would raise TypeError
    kwargs['run_id'] = f"{kwargs['execution_date']} {self.following_schedule(kwargs['execution_date']).date()}"
    return super().create_dagrun(**kwargs)
```
would make the UI DagRun dropdown display the weekday on which the DAG actually ran.
<img width="528" alt="image001" src="https://user-images.githubusercontent.com/9851473/156280393-e261d7fa-dfe0-41db-9887-941510f4070f.png">
After upgrading to Airflow 2.0, with DAG serialization in the scheduler, overridden methods are no longer present on the SerializedDAG, so we are back to `scheduled__<execution_date>` values in the UI dropdown. It would be great if some functionality could be exposed, either on the DAG or just in the UI, to display meaningful values in the DagRun dropdown.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21923 | https://github.com/apache/airflow/pull/25795 | 5c48ed19bd3b554f9c3e881a4d9eb61eeba4295b | 0254f30a5a90f0c3104782525fabdcfdc6d3b7df | "2022-03-02T02:02:30Z" | python | "2022-08-19T13:15:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,908 | ["airflow/api/common/mark_tasks.py", "airflow/www/views.py", "tests/api/common/test_mark_tasks.py"] | DAG will be set back to Running state after being marked Failed if there are scheduled tasks | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I was running a large DAG with a limited concurrency and wanted to cancel the current run. I marked the run as `Failed` via the UI which terminated all running tasks and marked them as `Failed`.
However, a few seconds later the run was set back to Running and other tasks started to execute.
<img width="1235" alt="image" src="https://user-images.githubusercontent.com/16950874/156228662-e06dd71a-e8ef-4cdd-b958-5ddefa1d5328.png">
I think that this is because of two things happening:
1) Marking a run as failed will only stop the currently running tasks and mark them as failed; it does nothing to tasks in `scheduled` state:
https://github.com/apache/airflow/blob/0cd3b11f3a5c406fbbd4433d8e44d326086db634/airflow/api/common/mark_tasks.py#L455-L462
2) During scheduling, a DAG with tasks in non-finished states will be marked as `Running`:
https://github.com/apache/airflow/blob/feea143af9b1db3b1f8cd8d29677f0b2b2ab757a/airflow/models/dagrun.py#L583-L585
I'm assuming that this is unintended behaviour, is that correct?
### What you expected to happen
I think that marking a DAG as failed should cause the run to stop (and not be resumed) regardless of the state of its tasks.
When a DAGRun is marked failed, we should:
- mark all running tasks failed
- **mark all non-finished tasks as skipped**
- mark the DagRun as `failed`
This is consistent with the behaviour of a DagRun timeout. A sketch of the combined behaviour is shown below.
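A minimal sketch of the behaviour described above, written in the style of `airflow.api.common.mark_tasks` (state constants and session handling are assumed to match Airflow 2.2; this is not the actual implementation):

```python
from airflow.utils.state import State


def mark_dag_run_failed_sketch(dag_run, session):
    for ti in dag_run.get_task_instances(session=session):
        if ti.state == State.RUNNING:
            ti.set_state(State.FAILED, session=session)   # what already happens today
        elif ti.state not in State.finished:
            ti.set_state(State.SKIPPED, session=session)  # proposed: skip anything not yet finished
    dag_run.set_state(State.FAILED)
```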
### How to reproduce
Run this DAG:
```python
from datetime import timedelta
from airflow.models import DAG
from airflow.operators.bash_operator import BashOperator
from airflow import utils
dag = DAG(
'cant-be-stopped',
start_date=utils.dates.days_ago(1),
max_active_runs=1,
dagrun_timeout=timedelta(minutes=60),
schedule_interval=None,
concurrency=1
)
for i in range(5):
task = BashOperator(
task_id=f'task_{i}',
bash_command='sleep 300',
retries=0,
dag=dag,
)
```
And once the first task is running, mark the run as failed. After the next scheduler loop the run will be set back to running and a different task will be started.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
Noticed this in Airflow 2.2.2 but replicated in a Breeze environment on the latest main.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21908 | https://github.com/apache/airflow/pull/22410 | e97953ad871dc0078753c668680cce8096a31e32 | becbb4ab443995b21d783cadfba7fbfdf3b1530d | "2022-03-01T18:39:28Z" | python | "2022-03-31T17:09:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,897 | ["docs/apache-airflow/logging-monitoring/metrics.rst"] | Metrics documentation incorrectly lists dag_processing.processor_timeouts as a gauge | ### Describe the issue with documentation
According to the [documentation](https://airflow.apache.org/docs/apache-airflow/2.2.4/logging-monitoring/metrics.html), `dag_processing.processor_timeouts` is a gauge.
However, checking the code, it appears to be a counter: https://github.com/apache/airflow/blob/3035d3ab1629d56f3c1084283bed5a9c43258e90/airflow/dag_processing/manager.py#L1004
### How to solve the problem
Move the metric to the counter section.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21897 | https://github.com/apache/airflow/pull/23393 | 82c244f9c7f24735ee952951bcb5add45422d186 | fcfaa8307ac410283f1270a0df9e557570e5ffd3 | "2022-03-01T13:40:37Z" | python | "2022-05-08T21:11:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,891 | ["airflow/providers/apache/hive/provider.yaml", "setup.py", "tests/providers/apache/hive/hooks/test_hive.py", "tests/providers/apache/hive/transfers/test_hive_to_mysql.py", "tests/providers/apache/hive/transfers/test_hive_to_samba.py", "tests/providers/apache/hive/transfers/test_mssql_to_hive.py", "tests/providers/apache/hive/transfers/test_mysql_to_hive.py"] | hive provider support for python 3.9 | ### Apache Airflow Provider(s)
apache-hive
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.4 (latest released)
### Operating System
Debian "buster"
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
The Hive provider cannot be used in the Python 3.9 Airflow image without also manually installing the `PyHive`, `sasl`, and `thrift-sasl` Python libraries.
### What you expected to happen
The Hive provider can be used in the Python 3.9 Airflow image after installing only the Hive provider.
### How to reproduce
_No response_
### Anything else
It looks like Hive provider support for Python 3.9 was removed in https://github.com/apache/airflow/pull/15515#issuecomment-860264240 because the `sasl` library did not support Python 3.9. However, Python 3.9 is now supported in `sasl`: https://github.com/cloudera/python-sasl/issues/21#issuecomment-865914647.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21891 | https://github.com/apache/airflow/pull/21893 | 76899696fa00c9f267316f27e088852556ebcccf | 563ecfa0539f5cbd42a715de0e25e563bd62c179 | "2022-03-01T10:27:55Z" | python | "2022-03-01T22:16:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,832 | ["airflow/decorators/__init__.pyi", "airflow/providers/docker/decorators/docker.py", "airflow/providers/docker/example_dags/example_docker.py"] | Unmarshalling of function with '@task.docker()' decorator fails if 'python' alias is not defined in image | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I am using the Airflow 2.2.4 Docker image to run a DAG, `test_dag.py`, defined as follows:
```
from airflow.decorators import dag, task
from airflow.utils import dates
@dag(schedule_interval=None,
start_date=dates.days_ago(1),
catchup=False)
def test_dag():
@task.docker(image='company/my-repo',
api_version='auto',
docker_url='tcp://docker-socket-proxy:2375/',
auto_remove=True)
def docker_task(inp):
print(inp)
return inp+1
@task.python()
def python_task(inp):
print(inp)
out = docker_task(10)
python_task(out)
_ = test_dag()
```
The Dockerfile for 'company/my-repo' is as follows:
```
FROM nvidia/cuda:11.2.2-runtime-ubuntu20.04
USER root
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python3 python3-pip
```
### What you expected to happen
I expected the DAG logs for `docker_task()` and `python_task()` to have 10 and 11 as output respectively.
Instead, the internal Airflow unmarshaller that is supposed to unpickle the function definition of `docker_task()` inside the `company/my-repo` container (passed in via the `__PYTHON_SCRIPT` environment variable) and run it makes an **incorrect assumption**: that the symbol `python` is defined as an alias for either `/usr/bin/python2` or `/usr/bin/python3`. Many Linux Python installations require users to explicitly invoke either `python2` or `python3`, and `python` is NOT defined even when `python3` is installed via the apt package manager.
This error can be resolved for now by adding the following to `Dockerfile` after python3 package installation:
`RUN apt-get install -y python-is-python3`
But this should NOT be a requirement.
`Dockerfile`s using base python images do not suffer from this problem as they have the alias `python` defined.
The error logged is:
```
[2022-02-26, 11:30:47 UTC] {docker.py:258} INFO - Starting docker container from image company/my-repo
[2022-02-26, 11:30:48 UTC] {docker.py:320} INFO - + python -c 'import base64, os;x = base64.b64decode(os.environ["__PYTHON_SCRIPT"]);f = open("/tmp/script.py", "wb"); f.write(x);'
[2022-02-26, 11:30:48 UTC] {docker.py:320} INFO - bash: python: command not found
[2022-02-26, 11:30:48 UTC] {taskinstance.py:1700} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1455, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1511, in _execute_task
result = execute_callable(context=context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/docker/decorators/docker.py", line 117, in execute
return super().execute(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/decorators/base.py", line 134, in execute
return_value = super().execute(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/docker/operators/docker.py", line 390, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/docker/operators/docker.py", line 265, in _run_image
return self._run_image_with_mounts(self.mounts + [tmp_mount], add_tmp_variable=True)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/docker/operators/docker.py", line 324, in _run_image_with_mounts
raise AirflowException('docker container failed: ' + repr(result) + f"lines {res_lines}")
airflow.exceptions.AirflowException: docker container failed: {'Error': None, 'StatusCode': 127}lines + python -c 'import base64, os;x = base64.b64decode(os.environ["__PYTHON_SCRIPT"]);f = open("/tmp/script.py", "wb"); f.write(x);'
bash: python: command not found
```
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04 WSL 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21832 | https://github.com/apache/airflow/pull/21973 | 73c6bf08780780ca5a318e74902cb05ba006e3ba | 188ac519964c6b6acf9d6ab144e7ff7e5538547c | "2022-02-26T12:18:44Z" | python | "2022-03-07T01:45:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,808 | ["airflow/providers/amazon/aws/operators/sagemaker.py", "tests/providers/amazon/aws/operators/test_sagemaker_base.py"] | Add default 'aws_conn_id' to SageMaker Operators | The SageMaker Operators not having a default value for `aws_conn_id` is a pain, we should fix that. See EKS operators for an example: https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/eks.py
_Originally posted by @ferruzzi in https://github.com/apache/airflow/pull/21673#discussion_r813414043_ | https://github.com/apache/airflow/issues/21808 | https://github.com/apache/airflow/pull/23515 | 828016747ac06f6fb2564c07bb8be92246f42567 | 5d1e6ff19ab4a63259a2c5aed02b601ca055a289 | "2022-02-24T22:58:10Z" | python | "2022-05-09T17:36:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,807 | ["UPDATING.md", "airflow/exceptions.py", "airflow/models/mappedoperator.py", "airflow/sensors/base.py", "airflow/ti_deps/deps/ready_to_reschedule.py", "tests/sensors/test_base.py", "tests/serialization/test_dag_serialization.py"] | Dynamically mapped sensors throw TypeError at DAG parse time | ### Apache Airflow version
main (development)
### What happened
Here's a DAG:
```python3
from datetime import datetime
from airflow import DAG
from airflow.sensors.date_time import DateTimeSensor
template = "{{{{ ti.start_date + macros.timedelta(seconds={}) }}}}"
with DAG(
dag_id="datetime_mapped",
start_date=datetime(1970, 1, 1),
) as dag:
@dag.task
def get_sleeps():
return [30, 60, 90]
@dag.task
def dt_templates(sleeps):
return [template.format(s) for s in sleeps]
templates_xcomarg = dt_templates(get_sleeps())
DateTimeSensor.partial(task_id="sleep", mode="reschedule").apply(
target_time=templates_xcomarg
)
```
I wanted to see if it would parse, so I ran:
```
$ python dags/the_dag.py
```
And I got this error:
```
Traceback (most recent call last):
File "/Users/matt/2022/02/22/dags/datetime_mapped.py", line 23, in <module>
DateTimeSensor.partial(task_id="sleep", mode="reschedule").apply(
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 203, in apply
deps=MappedOperator.deps_for(self.operator_class),
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 287, in deps_for
return operator_class.deps | {MappedTaskIsExpanded()}
TypeError: unsupported operand type(s) for |: 'property' and 'set'
Exception ignored in: <function OperatorPartial.__del__ at 0x10ed90160>
Traceback (most recent call last):
File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 182, in __del__
warnings.warn(f"{self!r} was never mapped!")
File "/usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/warnings.py", line 109, in _showwarnmsg
sw(msg.message, msg.category, msg.filename, msg.lineno,
File "/Users/matt/src/airflow/airflow/settings.py", line 115, in custom_show_warning
from rich.markup import escape
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 982, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 925, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1414, in find_spec
File "<frozen importlib._bootstrap_external>", line 1380, in _get_spec
TypeError: 'NoneType' object is not iterable
```
### What you expected to happen
No errors. Instead a dag with three parallel sensors.
### How to reproduce
Try to use the DAG shown above.
### Operating System
macOS Big Sur
### Versions of Apache Airflow Providers
N/A
### Deployment
Virtualenv installation
### Deployment details
version: 2.3.0.dev0
cloned at: [8ee8f2b34](https://github.com/apache/airflow/commit/8ee8f2b34b8df168a4d3f2664a9f418469079723)
### Anything else
A comment from @ashb about this
> We're assuming we can call deps on the class. Which we can for everything but a sensor.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21807 | https://github.com/apache/airflow/pull/21815 | 2c57ad4ff9ddde8102c62f2e25c2a2e82cceb3e7 | 8b276c6fc191254d96451958609faf81db994b94 | "2022-02-24T22:29:12Z" | python | "2022-03-02T14:19:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,801 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | Error in creating external table using GCSToBigQueryOperator when autodetect=True | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I was trying to create an external table for a CSV file in GCS using the GCSToBigQueryOperator with autodetect=True, but ran into some issues. The error stated that either a schema field or a schema object must be provided when creating an external table configuration. On closer inspection of the code, I found that the operator cannot autodetect the schema of the file.
In the [file](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py), a piece of code seems to be missing when calling the create_external_table function at line 262.
This must be an oversight but it **prevents the creation of an external table with an automatically deduced schema.**
The **solution** is to pass `autodetect=self.autodetect` when calling the `create_external_table` function, as shown below:

    if self.external_table:
        [...]
        autodetect=self.autodetect,
        [...]
### What you expected to happen
The operator should have autodetected the schema of the CSV file and created an external table, but it threw an error stating that either a schema field or a schema object must be provided when creating an external table configuration.
This error occurs because the value of `autodetect` is not passed when calling the `create_external_table` function in this [file](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py) at line 262. Also, the default value of `autodetect` is `False` in `create_external_table`, so one gets the error because the function receives neither an `autodetect`, `schema_fields`, nor `schema_object` value.
### How to reproduce
The above issue can be reproduced by calling the GCSToBigQueryOperator with the following parameters:

    create_external_table = GCSToBigQueryOperator(
        task_id=<task_id>,
        bucket=<bucket_name>,
        source_objects=[<gcs path excluding bucket name to csv file>],
        destination_project_dataset_table=<project_id>.<dataset_name>.<table_name>,
        schema_fields=None,
        schema_object=None,
        source_format='CSV',
        autodetect=True,
        external_table=True,
        dag=dag,
    )

    create_external_table
### Operating System
macOS Monterey 12.2.1
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21801 | https://github.com/apache/airflow/pull/21944 | 26e8d6d7664bbaae717438bdb41766550ff57e4f | 9020b3a89d4572572c50d6ac0f1724e09092e0b5 | "2022-02-24T18:07:25Z" | python | "2022-03-06T10:25:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,768 | ["airflow/models/baseoperator.py", "airflow/models/dag.py"] | raise TypeError when default_args not a dictionary | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When triggering the DAG below, it runs when it should fail. A set is being passed to `default_args` instead of a dictionary, yet the DAG still succeeds.
### What you expected to happen
I expected the DAG to fail, as the `default_args` parameter should only accept a dictionary.
### How to reproduce
```
from airflow.models import DAG
from airflow.operators.python import PythonVirtualenvOperator, PythonOperator
from airflow.utils.dates import days_ago
def callable1():
pass
with DAG(
dag_id="virtualenv_python_operator",
default_args={"owner: airflow"},
schedule_interval=None,
start_date=days_ago(2),
tags=["core"],
) as dag:
task = PythonOperator(
task_id="check_errors",
python_callable=callable1,
)
```
### Operating System
Docker (debian:buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro CLI with images:
- quay.io/astronomer/ap-airflow-dev:2.2.4-1-onbuild
- quay.io/astronomer/ap-airflow-dev:2.2.3-2
- quay.io/astronomer/ap-airflow-dev:2.2.0-5-buster-onbuild
### Anything else
Bug happens every time.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21768 | https://github.com/apache/airflow/pull/21809 | 7724a5a2ec9531f03497a259c4cd7823cdea5e0c | 7be204190d6079e49281247d3e2c644535932925 | "2022-02-23T18:50:03Z" | python | "2022-03-07T00:18:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,752 | ["airflow/cli/cli_parser.py"] | triggerer `--capacity` parameter does not work | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When you run `airflow triggerer --capacity 1000`, you get the following error:
```
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/bin/airflow", line 8, in <module>
sys.exit(main())
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/cli/commands/triggerer_command.py", line 34, in triggerer
job = TriggererJob(capacity=args.capacity)
File "<string>", line 4, in __init__
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/sqlalchemy/orm/state.py", line 433, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.raise_(
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/sqlalchemy/orm/state.py", line 430, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/airflow2/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 63, in __init__
raise ValueError(f"Capacity number {capacity} is invalid")
ValueError: Capacity number 1000 is invalid
```
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Operating System
Linux / Mac
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21752 | https://github.com/apache/airflow/pull/21753 | 169b196d189242c4f7d26bf1fa4dd5a8b5da12d4 | 9076b67c05cdba23e8fa51ebe5ad7f7d53e1c2ba | "2022-02-23T07:02:13Z" | python | "2022-02-23T10:20:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,672 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "docs/apache-airflow-providers-amazon/connections/aws.rst", "tests/providers/amazon/aws/hooks/test_base_aws.py"] | [AWS] Configurable AWS SessionFactory for AwsBaseHook | ### Description
Add support for custom federated AWS access through a configurable [SessionFactory](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/base_aws.py#L424).
Users would have the option to plug in their own `SessionFactory` implementation that provides AWS credentials through an external process.
### Use case/motivation
Some companies use custom [federated AWS access](https://aws.amazon.com/identity/federation/) to AWS services. It corresponds to the [credential_process](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html) option in the AWS configuration.
While the hook currently supports the AWS profile option, I think it would be great if we could add this support to the hook directly, without involving any AWS configuration files on worker nodes. A rough sketch of the kind of factory envisioned is shown below.
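A minimal, purely illustrative sketch of such a pluggable factory; the class name, constructor arguments, and the external helper command are assumptions, not the provider's actual API:

```python
import json
import subprocess

import boto3


class ExternalProcessSessionFactory:
    """Illustrative factory that sources temporary credentials from an external command."""

    def __init__(self, conn, region_name=None, config=None):
        self.conn = conn
        self.region_name = region_name
        self.config = config

    def create_session(self) -> boto3.session.Session:
        # The external command (hypothetical) prints JSON with the usual credential keys
        creds = json.loads(subprocess.check_output(["my-company-credential-helper"]))
        return boto3.session.Session(
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds.get("SessionToken"),
            region_name=self.region_name,
        )
```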
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21672 | https://github.com/apache/airflow/pull/21778 | f0bbb9d1079e2660b4aa6e57c53faac84b23ce3d | c819b4f8d0719037ce73d845c4ff9f1e4cb6cc38 | "2022-02-18T18:25:12Z" | python | "2022-02-28T09:29:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,671 | ["airflow/providers/amazon/aws/utils/emailer.py", "docs/apache-airflow/howto/email-config.rst", "tests/providers/amazon/aws/utils/test_emailer.py"] | Amazon Airflow Provider | Broken AWS SES as backend for Email | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
```
apache-airflow==2.2.2
apache-airflow-providers-amazon==2.4.0
```
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux 2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
As part of PR https://github.com/apache/airflow/pull/18042, the signature of the function `airflow.providers.amazon.aws.utils.emailer.send_email` is no longer compatible with how `airflow.utils.email.send_email` invokes the backend. Essentially, the functionality of using SES as an email backend is broken.
### What you expected to happen
This behavior is erroneous because the signature of `airflow.providers.amazon.aws.utils.emailer.send_email` should be compatible with how we call the backend function in `airflow.utils.email.send_email`:
```
return backend(
to_comma_separated,
subject,
html_content,
files=files,
dryrun=dryrun,
cc=cc,
bcc=bcc,
mime_subtype=mime_subtype,
mime_charset=mime_charset,
conn_id=backend_conn_id,
**kwargs,
)
```
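Based on that call site, a compatible backend signature would need to accept roughly the following parameters (a sketch derived from the keyword arguments in the invocation above, not the provider's actual code):

```python
def send_email(
    to,
    subject,
    html_content,
    files=None,
    dryrun=False,
    cc=None,
    bcc=None,
    mime_subtype="mixed",
    mime_charset="utf-8",
    conn_id=None,
    **kwargs,
):
    ...
```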
### How to reproduce
## Use AWS SES as Email Backend
```
[email]
email_backend = airflow.providers.amazon.aws.utils.emailer.send_email
email_conn_id = aws_default
```
## Try sending an Email
```
from airflow.utils.email import send_email
def email_callback(**kwargs):
send_email(to=['test@hello.io'], subject='test', html_content='content')
email_task = PythonOperator(
task_id='email_task',
python_callable=email_callback,
)
```
## The bug shows up
```
File "/usr/local/airflow/dags/environment_check.py", line 46, in email_callback
send_email(to=['test@hello.io'], subject='test', html_content='content')
File "/usr/local/lib/python3.7/site-packages/airflow/utils/email.py", line 66, in send_email
**kwargs,
TypeError: send_email() missing 1 required positional argument: 'html_content'
```
### Anything else
Every time.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21671 | https://github.com/apache/airflow/pull/21681 | b48dc4da9ec529745e689d101441a05a5579ef46 | b28f4c578c0b598f98731350a93ee87956d866ae | "2022-02-18T18:16:17Z" | python | "2022-02-19T09:34:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,656 | ["airflow/models/baseoperator.py"] | Airflow >= 2.2.0 execution date change is failing TaskInstance get_task_instances method and possibly others | ### Apache Airflow version
2.2.3 (latest released)
### What happened
This is my first time reporting or posting on this forum. Please let me know if I need to provide any more information. Thanks for looking at this!
I have a PythonOperator that uses the BaseOperator `get_task_instances` method, and during the execution of this method I encounter the following error:
<img width="1069" alt="Screen Shot 2022-02-17 at 2 28 48 PM" src="https://user-images.githubusercontent.com/18559784/154581673-718bc199-8390-49cf-a3fe-8972b6f39f81.png">
This error is from doing an upgrade from airflow 1.10.15 -> 2.2.3.
I am using SQLAlchemy version 1.2.24, but I also tried version 1.2.23 and encountered the same error. However, I do not think this is a SQLAlchemy issue.
The issue seems to have been introduced with Airflow 2.2.0 (PR: https://github.com/apache/airflow/pull/17719/files), where `TaskInstance.execution_date` changed from being a column to an association proxy. I do not have deep knowledge of SQLAlchemy, so I am not sure why this change was made, but it results in the error I'm getting.
2.2.0+
<img width="536" alt="Screen Shot 2022-02-17 at 2 41 00 PM" src="https://user-images.githubusercontent.com/18559784/154583252-4729b44d-40e2-4a89-9018-95b09ef4da76.png">
1.10.15
<img width="428" alt="Screen Shot 2022-02-17 at 2 56 15 PM" src="https://user-images.githubusercontent.com/18559784/154585325-4546309c-66b6-4e69-aba2-9b6979762a1b.png">
If you follow the stack trace you will get to this chunk of code, which leads to the error: the association proxy has a `__clause_element__` attribute, but calling that attribute raises the exception shown in the error. A possible workaround is sketched below.
<img width="465" alt="Screen Shot 2022-02-17 at 2 43 51 PM" src="https://user-images.githubusercontent.com/18559784/154583639-a7957209-b19e-4134-a5c2-88d53176709c.png">
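A possible workaround (a sketch only, assuming the `TaskInstance.dag_run` relationship present in Airflow 2.2 and an open `session`; `start_date`/`end_date` are placeholders) is to filter on `DagRun.execution_date` through a join instead of comparing against the proxy:

```python
from airflow.models import DagRun, TaskInstance

task_instances = (
    session.query(TaskInstance)
    .join(TaskInstance.dag_run)
    .filter(
        TaskInstance.dag_id == "my_dag",      # illustrative dag id
        DagRun.execution_date >= start_date,  # placeholder bounds
        DagRun.execution_date <= end_date,
    )
    .all()
)
```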
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Operating System
Linux, from the official Airflow Helm chart Docker image, Python version 3.7
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 2.4.0
apache-airflow-providers-celery 2.1.0
apache-airflow-providers-cncf-kubernetes 2.2.0
apache-airflow-providers-databricks 2.2.0
apache-airflow-providers-docker 2.3.0
apache-airflow-providers-elasticsearch 2.1.0
apache-airflow-providers-ftp 2.0.1
apache-airflow-providers-google 6.2.0
apache-airflow-providers-grpc 2.0.1
apache-airflow-providers-hashicorp 2.1.1
apache-airflow-providers-http 2.0.1
apache-airflow-providers-imap 2.0.1
apache-airflow-providers-microsoft-azure 3.4.0
apache-airflow-providers-mysql 2.1.1
apache-airflow-providers-odbc 2.0.1
apache-airflow-providers-postgres 2.4.0
apache-airflow-providers-redis 2.0.1
apache-airflow-providers-sendgrid 2.0.1
apache-airflow-providers-sftp 2.3.0
apache-airflow-providers-slack 4.1.0
apache-airflow-providers-sqlite 2.0.1
apache-airflow-providers-ssh 2.3.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
The only extra dependency I am using is awscli==1.20.65. I have changed very little in the deployment besides a few environment variables and some pod annotations.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21656 | https://github.com/apache/airflow/pull/21705 | b2c0a921c155e82d1140029e6495594061945025 | bb577a98494369b22ae252ac8d23fb8e95508a1c | "2022-02-17T22:53:28Z" | python | "2022-02-22T20:12:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,647 | ["docs/apache-airflow-providers-jenkins/connections.rst", "docs/apache-airflow-providers-jenkins/index.rst"] | Jenkins Connection Example | ### Describe the issue with documentation
I need to configure a connection to our Jenkins and I can't find an example anywhere.
I suppose that I need to define a http connection with the format:
`http://usename:password@jenkins_url`
However don't have any idea about adding `/api` so that the url would be:
`http://usename:password@jenkins_url/api`
### How to solve the problem
Is it possible to include at least a jenkins connection example in the documentation?
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21647 | https://github.com/apache/airflow/pull/22682 | 3849b4e709acfc9e85496aa2dededb2dae117fc7 | cb41d5c02e3c53a24f9dc148e45e696891c347c2 | "2022-02-17T16:40:43Z" | python | "2022-04-02T20:04:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,638 | ["airflow/models/connection.py", "tests/models/test_connection.py"] | Spark Connection with k8s in URL not mapped correctly | ### Official Helm Chart version
1.2.0
### Apache Airflow version
2.1.4
### Kubernetes Version
v1.21.6+bb8d50a
### Helm Chart configuration
I defined a new connection string for AIRFLOW_CONN_SPARK_DEFAULT in values.yaml as in the following section (base64 encoded; the decoded value is `spark://k8s://100.68.0.1:443?deploy-mode=cluster`):
```
extraSecrets:
'{{ .Release.Name }}-airflow-connections':
data: |
AIRFLOW_CONN_SPARK_DEFAULT: 'c3Bhcms6Ly9rOHM6Ly8xMDAuNjguMC4xOjQ0Mz9kZXBsb3ktbW9kZT1jbHVzdGVy'
```
In the extraEnvFrom section I defined the following:
```
extraEnvFrom: |
- secretRef:
name: '{{ .Release.Name }}-airflow-connections'
```
### Docker Image customisations
added apache-airflow-providers-apache-spark to base Image
### What happened
The Airflow connection is mapped incorrectly because of the `k8s://` within the URL. If I query the connection with `airflow connections get spark_default`, then host=k8s and schema=/100.60.0.1:443, which is wrong.
### What you expected to happen
The Spark connection based on k8s (`spark://k8s://100.68.0.1:443?deploy-mode=cluster`) should be parsed correctly.
### How to reproduce
define in values.yaml
```
extraSecrets:
'{{ .Release.Name }}-airflow-connections':
data: |
AIRFLOW_CONN_SPARK_DEFAULT: 'c3Bhcms6Ly9rOHM6Ly8xMDAuNjguMC4xOjQ0Mz9kZXBsb3ktbW9kZT1jbHVzdGVy'
extraEnvFrom: |
- secretRef:
name: '{{ .Release.Name }}-airflow-connections'
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21638 | https://github.com/apache/airflow/pull/31465 | 232771869030d708c57f840aea735b18bd4bffb2 | 0560881f0eaef9c583b11e937bf1f79d13e5ac7c | "2022-02-17T09:39:46Z" | python | "2023-06-19T09:32:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,616 | ["airflow/models/trigger.py", "tests/models/test_trigger.py"] | Deferred tasks being ran on every triggerer instance | ### Apache Airflow version
2.2.3 (latest released)
### What happened
I'm currently using Airflow triggers, but I've noticed that deferred tasks are being run on every triggerer instance right away:
```
host 1 -> 2/10/2022 1:06:08 PM Job a702656f-01ce-4e7a-893a-5b42cdaa38e2 progressed from Unknown to RUNNABLE.
host 2 -> 2/10/2022 1:06:06 PM Job a702656f-01ce-4e7a-893a-5b42cdaa38e2 progressed from Unknown to RUNNABLE.
```
within 2 seconds, the job was issued on both triggerer instances.
### What you expected to happen
The deferred task is only scheduled on 1 triggerer instance.
### How to reproduce
Create many jobs that have to be deferred, and start multiple triggerers.
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
2.2.3
### Deployment
Other Docker-based deployment
### Deployment details
A docker setup w/ multiple triggerer instances running.
### Anything else
I believe this issue is due to a race condition here: https://github.com/apache/airflow/blob/84a59a6d74510aff4059a3ca2da793d996e86fa1/airflow/models/trigger.py#L175. If multiple instances start at the same time, each instance will get the same tasks in their alive_job_ids.
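One possible remedy (purely a sketch of the idea, not the triggerer's actual code) is to perform the assignment under row locks, so that two triggerers starting at the same time cannot claim the same trigger rows:

```python
from airflow.models import Trigger

# Assumes an open `session` and the current triggerer's job id in `triggerer_job_id`.
unassigned = (
    session.query(Trigger)
    .filter(Trigger.triggerer_id.is_(None))
    .with_for_update(skip_locked=True)  # SELECT ... FOR UPDATE SKIP LOCKED
    .all()
)
for trigger in unassigned:
    trigger.triggerer_id = triggerer_job_id
session.commit()
```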
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21616 | https://github.com/apache/airflow/pull/21770 | c8d64c916720f0be67a4f2ffd26af0d4c56005ff | b26d4d8a290ce0104992ba28850113490c1ca445 | "2022-02-16T14:23:07Z" | python | "2022-02-26T19:25:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,615 | ["chart/tests/test_create_user_job.py"] | ArgoCD deployment: Cannot synchronize after updating values | ### Official Helm Chart version
1.4.0 (latest released)
### Apache Airflow version
2.2.3 (latest released)
### Kubernetes Version
v1.20.12-gke.1500
### Helm Chart configuration
defaultAirflowTag: "2.2.3-python3.9"
airflowVersion: "2.2.3"
createUserJob:
useHelmHooks: false
migrateDatabaseJob:
useHelmHooks: false
images:
migrationsWaitTimeout: 300
executor: "KubernetesExecutor"
### Docker Image customisations
_No response_
### What happened
I was able to configure the synchronization properly when I first added the application to _ArgoCD_, but after updating an environment value, the value is applied properly (the scheduler is restarted and works fine) while _ArgoCD_ cannot synchronize the jobs (_airflow-run-airflow-migrations_ and _airflow-create-user_), so it shows the application as not synchronized.
Since I deploy _Airflow_ with _ArgoCD_ and I disable the Helm hooks, these jobs are not deleted when finished and remain as completed.
The workaround I am doing is to delete these jobs manually, but I have to repeat this after an update.
Should the attribute `ttlSecondsAfterFinished: 0` be included below this line in the job templates when the Helm hooks are disabled?
https://github.com/apache/airflow/blob/af2c047320c5f0742f466943c171ec761d275bab/chart/templates/jobs/migrate-database-job.yaml#L48
P.S.: I created a custom chart in order to synchronize my values files with _ArgoCD_. This chart only includes a dependency on the _Airflow_ chart plus my values files (one per environment), and the _Helm_ configuration shown in the _Helm Chart configuration_ section above sits under an _airflow_ block in my values files.
This is my _Chart.yaml_:
```yaml
apiVersion: v2
name: my-airflow
version: 1.0.0
description: Airflow Chart with my values
appVersion: "2.2.3"
dependencies:
- name: airflow
version: 1.4.0
repository: https://airflow.apache.org
```
### What you expected to happen
I expect that _ArgoCD_ synchronizes after changing an environment variable in my values file.
### How to reproduce
- Deploy the chart as an _ArgoCD_ application.
- Change an environment variable in the values file.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21615 | https://github.com/apache/airflow/pull/21776 | dade6e075f5229f15b8b0898393c529e0e9851bc | 608b8c4879c188881e057e6318a0a15f54c55c7b | "2022-02-16T13:19:19Z" | python | "2022-02-25T01:46:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,597 | ["airflow/providers/presto/hooks/presto.py", "airflow/providers/trino/hooks/trino.py"] | replace `hql` references to `sql` in `TrinoHook` and `PrestoHook` | ### Body
Both:
https://github.com/apache/airflow/blob/main/airflow/providers/presto/hooks/presto.py
https://github.com/apache/airflow/blob/main/airflow/providers/trino/hooks/trino.py
use the `hql` terminology; we should change it to `sql`.
The change needs to be backwards compatible, e.g. by deprecating `hql` with a warning. A sketch of the deprecation pattern is shown below.
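A minimal sketch of a backwards-compatible rename (the method shape is illustrative, not the hooks' actual signatures):

```python
import warnings


def get_records(self, sql=None, parameters=None, hql=None):
    """Run a statement; ``hql`` is accepted only for backwards compatibility."""
    if hql is not None:
        warnings.warn(
            "The 'hql' parameter is deprecated; use 'sql' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        sql = hql
    ...
```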
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/21597 | https://github.com/apache/airflow/pull/21630 | 4e959358ac4ef9554ff5d82cdc85ab7dc142a639 | 2807193594ed4e1f3acbe8da7dd372fe1c2fff94 | "2022-02-15T20:40:22Z" | python | "2022-02-22T09:07:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,573 | ["airflow/providers/amazon/aws/operators/ecs.py"] | ECS Operator doesn't get logs from cloudwatch when ecs task has finished within 30 seconds. | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.6.0
### Apache Airflow version
2.2.3 (latest released)
### Operating System
Amazon Linux2
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
The ECS Operator doesn't print ECS task logs stored in CloudWatch Logs when the ECS task finishes within 30 seconds.
### What you expected to happen
I expected to see ECS task logs in the Airflow logs view.
### How to reproduce
Create a simple ECS task that only executes "echo hello world" and run it with the ECS Operator: the operator doesn't print "hello world" in the Airflow logs view, but we can see it in the XCom result.
### Anything else
I think EcsTaskLogFetcher needs to fetch log events after the sleep. When the EcsTaskLogFetcher thread receives a stop signal, it exits its run method without fetching the log events that happened during the last sleep period, as sketched below.
https://github.com/apache/airflow/blob/8155e8ac0abcaf3bb02b164fd7552e20fa702260/airflow/providers/amazon/aws/operators/ecs.py#L122
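For illustration, a run loop of the shape being suggested, which drains logs once more per interval before exiting (a generic threading sketch, not the provider's actual class):

```python
import threading
from datetime import timedelta


class LogFetcherSketch(threading.Thread):
    """Publishes new log events after each sleep, so the final interval is not lost."""

    def __init__(self, fetch_interval: timedelta):
        super().__init__()
        self.fetch_interval = fetch_interval
        self._stop_event = threading.Event()

    def run(self) -> None:
        while not self._stop_event.is_set():
            # Sleep first, then publish: events emitted during the final interval
            # (e.g. a task that finishes in under 30 seconds) are still fetched.
            self._stop_event.wait(self.fetch_interval.total_seconds())
            self._publish_new_events()

    def _publish_new_events(self) -> None:
        ...  # read new events from CloudWatch Logs and write them to the task log

    def stop(self) -> None:
        self._stop_event.set()
```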
I think maybe #19426 is the same problem.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21573 | https://github.com/apache/airflow/pull/21574 | 2d6282d6b7d8c7603e96f0f28ebe0180855687f3 | 21a90c5b7e2f236229812f9017582d67d3d7c3f0 | "2022-02-15T07:10:13Z" | python | "2022-02-15T09:48:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,566 | ["setup.cfg"] | typing_extensions package isn't installed with apache-airflow-providers-amazon causing an issue for SqlToS3Operator | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
3.0.0rc2
### Apache Airflow version
2.2.3 (latest released)
### Python version
Python 3.9.7 (default, Oct 12 2021, 02:43:43)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
I was working on adding this operator to a DAG and it failed to import because a required module was missing.
### What you expected to happen
_No response_
### How to reproduce
Add
```
from airflow.providers.amazon.aws.transfers.sql_to_s3 import SqlToS3Operator
```
to a dag
### Anything else
This can be resolved by adding `typing-extensions==4.1.1` to `requirements.txt` when building the project (locally)
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21566 | https://github.com/apache/airflow/pull/21567 | 9407f11c814413064afe09c650a79edc45807965 | e4ead2b10dccdbe446f137f5624255aa2ff2a99a | "2022-02-14T20:21:15Z" | python | "2022-02-25T21:26:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,559 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/hooks/databricks_base.py", "airflow/providers/databricks/operators/databricks.py", "docs/apache-airflow-providers-databricks/operators/run_now.rst", "docs/apache-airflow-providers-databricks/operators/submit_run.rst", "tests/providers/databricks/hooks/test_databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Databricks hook: Retry also on HTTP Status 429 - rate limit exceeded | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
2.2.0
### Apache Airflow version
2.2.3 (latest released)
### Operating System
Any
### Deployment
Other
### Deployment details
_No response_
### What happened
Operations aren't retried when Databricks API returns HTTP Status 429 - rate limit exceeded
### What you expected to happen
the operation should retry
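For illustration, a retry predicate that also treats HTTP 429 as retryable might look like the following (a generic requests + tenacity sketch, not the Databricks hook's actual retry code):

```python
import requests
from tenacity import retry, retry_if_exception, stop_after_attempt, wait_exponential


def _is_retryable(exc: BaseException) -> bool:
    return (
        isinstance(exc, requests.exceptions.HTTPError)
        and exc.response is not None
        and (exc.response.status_code == 429 or exc.response.status_code >= 500)
    )


@retry(
    retry=retry_if_exception(_is_retryable),
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, max=60),
)
def _do_api_call(url: str, payload: dict) -> dict:
    response = requests.post(url, json=payload)
    response.raise_for_status()  # raises HTTPError on 429/5xx, triggering a retry
    return response.json()
```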
### How to reproduce
This happens when there are many calls to the API, especially when some of them happen outside of Airflow.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21559 | https://github.com/apache/airflow/pull/21852 | c108f264abde68e8f458a401296a53ccbe7a47f6 | 12e9e2c695f9ebb9d3dde9c0f7dfaa112654f0d6 | "2022-02-14T10:08:01Z" | python | "2022-03-13T23:19:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,545 | ["airflow/providers/apache/beam/hooks/beam.py", "docs/docker-stack/docker-images-recipes/go-beam.Dockerfile", "docs/docker-stack/recipes.rst", "tests/providers/apache/beam/hooks/test_beam.py"] | Add Go to docker images | ### Description
Following https://github.com/apache/airflow/pull/20386 we now support executing Beam pipelines written in Go.
We might want to add Go to the images.
The first stable release of the Beam Go SDK is `v2.33.0`, and it requires `Go v1.16` at minimum.
### Use case/motivation
This way people running airflow from docker can build/run their go pipelines.
### Related issues
Issue:
https://github.com/apache/airflow/issues/20283
PR:
https://github.com/apache/airflow/pull/20386
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21545 | https://github.com/apache/airflow/pull/22296 | 7bd165fbe2cbbfa8208803ec352c5d16ca2bd3ec | 4a1503b39b0aaf50940c29ac886c6eeda35a79ff | "2022-02-13T11:38:59Z" | python | "2022-03-17T03:57:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,537 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | add partition option for parquet files by columns in BaseSQLToGCSOperator | ### Description
Add the ability to partition parquet files by columns. Right now you can partition files only by size.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21537 | https://github.com/apache/airflow/pull/28677 | 07a17bafa1c3de86a993ee035f91b3bbd284e83b | 35a8ffc55af220b16ea345d770f80f698dcae3fb | "2022-02-12T10:56:36Z" | python | "2023-01-10T05:55:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,486 | ["airflow/providers/postgres/example_dags/example_postgres.py", "airflow/providers/postgres/operators/postgres.py", "docs/apache-airflow-providers-postgres/operators/postgres_operator_howto_guide.rst", "tests/providers/postgres/operators/test_postgres.py"] | Allow to set statement behavior for PostgresOperator | ### Body
Add the ability to pass parameters like `statement_timeout` from PostgresOperator.
https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-STATEMENT-TIMEOUT
The goal is to allow control over a specific query, rather than setting the parameter at the connection level. A sketch of the intended usage is shown below.
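For illustration, per-task usage could look something like this; the `runtime_parameters` argument name is hypothetical here and only shows the intent of the request:

```python
from airflow.providers.postgres.operators.postgres import PostgresOperator

short_query = PostgresOperator(
    task_id="short_query",
    postgres_conn_id="postgres_default",
    sql="SELECT pg_sleep(10);",
    # Hypothetical argument: a per-statement timeout applied only to this task's query
    runtime_parameters={"statement_timeout": "3000ms"},
)
```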
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/21486 | https://github.com/apache/airflow/pull/21551 | ecc5b74528ed7e4ecf05c526feb2c0c85f463429 | 0ec56775df66063cab807d886e412ebf88c572bf | "2022-02-10T10:08:32Z" | python | "2022-03-18T15:09:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,469 | ["chart/templates/cleanup/cleanup-cronjob.yaml", "chart/tests/test_airflow_common.py", "chart/tests/test_annotations.py", "chart/values.schema.json", "chart/values.yaml"] | No way to supply custom annotations for cleanup cron job pods | ### Official Helm Chart version
1.4.0 (latest released)
### Apache Airflow version
2.2.3 (latest released)
### Kubernetes Version
v1.21.5
### Helm Chart configuration
```yaml
cleanup:
enabled: true
airflowPodAnnotations:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "false"
vault.hashicorp.com/agent-run-as-user: "50000"
vault.hashicorp.com/agent-pre-populate-only: "true"
vault.hashicorp.com/agent-inject-status: "update"
```
### Docker Image customisations
We have customized the `ENTRYPOINT` for exporting some environment variables that get loaded from Hashicorp's vault.
The entrypoint line in the Dockerfile:
```Dockerfile
ENTRYPOINT ["/usr/bin/dumb-init", "--", "/opt/airflow/entrypoint.sh"]
```
The last line in the `/opt/airflow/entrypoint.sh` script:
```bash
# Call Airflow's default entrypoint after we source the vault secrets
exec /entrypoint "${@}"
```
### What happened
Install was successful and the `webserver` and `scheduler` pods are working as expected. The `cleanup` pods launched from the `cleanup` cronjob fail:
```
No vault secrets detected
....................
ERROR! Maximum number of retries (20) reached.
Last check result:
$ airflow db check
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 1129, in <module>
conf.validate()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 224, in validate
self._validate_config_dependencies()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 267, in _validate_config_dependencies
raise AirflowConfigException(f"error: cannot use sqlite with the {self.get('core', 'executor')}")
airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the KubernetesExecutor
```
### What you expected to happen
It looks like the annotations on the `cleanup` cronjob are static and only contain an istio annotation
https://github.com/apache/airflow/blob/c28c255e52255ea2060c1a802ec34f9e09cc4f52/chart/templates/cleanup/cleanup-cronjob.yaml#L56-L60
From the documentation in values.yaml, I would expect the `cleanup` cronjob to have these annotations:
https://github.com/apache/airflow/blob/c28c255e52255ea2060c1a802ec34f9e09cc4f52/chart/values.yaml#L187-L189
### How to reproduce
From the root of the airflow repository:
```bash
cd chart
helm dep build
helm template . --set cleanup.enabled=true --set airflowPodAnnotations."my\.test"="somevalue" -s templates/cleanup/cleanup-cronjob.yaml
```
If you look at the annotations section of the output, you will only see the static `istio` annotation that is hard coded.
### Anything else
This could potentially be a breaking change, even though the documentation says the annotations should be applied to all Airflow pods. Another option would be to add a `cleanup.podAnnotations` section for supplying them, if fixing it by applying the global annotations would not work.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21469 | https://github.com/apache/airflow/pull/21484 | c25534be56cee39b6be38a9928cd5b2e107a32be | 8c1512b7094e092369b742c37857b7946b4033f4 | "2022-02-09T16:26:12Z" | python | "2022-02-11T23:12:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,450 | ["airflow/cli/commands/dag_command.py"] | `airflow dags status` fails if parse time is near `dagbag_import_timeout` | ### Apache Airflow version
2.2.3 (latest released)
### What happened
I had just kicked off a DAG and I was periodically running `airflow dags status ...` to see if it was done yet. At first it seemed to work, but later it failed with this error:
```
$ airflow dags state load_13 '2022-02-09T05:25:28+00:00'
[2022-02-09 05:26:56,493] {dagbag.py:500} INFO - Filling up the DagBag from /usr/local/airflow/dags
queued
$ airflow dags state load_13 '2022-02-09T05:25:28+00:00'
[2022-02-09 05:27:29,096] {dagbag.py:500} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2022-02-09 05:27:59,084] {timeout.py:36} ERROR - Process timed out, PID: 759
[2022-02-09 05:27:59,088] {dagbag.py:334} ERROR - Failed to import: /usr/local/airflow/dags/many_tasks.py
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 331, in _load_modules_from_file
loader.exec_module(new_module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/local/airflow/dags/many_tasks.py", line 61, in <module>
globals()["dag_{:02d}".format(i)] = parameterized_load(i, step)
File "/usr/local/airflow/dags/many_tasks.py", line 50, in parameterized_load
return load()
File "/usr/local/lib/python3.9/site-packages/airflow/models/dag.py", line 2984, in factory
f(**f_kwargs)
File "/usr/local/airflow/dags/many_tasks.py", line 48, in load
[worker_factory(i) for i in range(1, size**2 + 1)]
File "/usr/local/airflow/dags/many_tasks.py", line 48, in <listcomp>
[worker_factory(i) for i in range(1, size**2 + 1)]
File "/usr/local/airflow/dags/many_tasks.py", line 37, in worker_factory
return worker(num)
File "/usr/local/lib/python3.9/site-packages/airflow/decorators/base.py", line 219, in factory
op = decorated_operator_class(
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 188, in apply_defaults
result = func(self, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/decorators/python.py", line 59, in __init__
super().__init__(kwargs_to_upstream=kwargs_to_upstream, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 152, in apply_defaults
dag_params = copy.deepcopy(dag.params) or {}
File "/usr/local/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/lib/python3.9/copy.py", line 264, in _reconstruct
y = func(*args)
File "/usr/local/lib/python3.9/copy.py", line 263, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/timeout.py", line 37, in handle_timeout
raise AirflowTaskTimeout(self.error_message)
airflow.exceptions.AirflowTaskTimeout: DagBag import timeout for /usr/local/airflow/dags/many_tasks.py after 30.0s.
Please take a look at these docs to improve your DAG import time:
* http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/best-practices.html#top-level-python-code
* http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/best-practices.html#reducing-dag-complexity, PID: 759
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py", line 241, in dag_state
dag = get_dag(args.subdir, args.dag_id)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/cli.py", line 192, in get_dag
raise AirflowException(
airflow.exceptions.AirflowException: Dag 'load_13' could not be found; either it does not exist or it failed to parse.
```
### What you expected to happen
If we were able to parse the DAG in the first place, I expect that downstream actions (like querying for status) would not fail due to a dag parsing timeout.
Also, is parsing the dag necessary for this action?
### How to reproduce
1. start with the dag shown here: https://gist.github.com/MatrixManAtYrService/842266aac42390aadee75fe014cd372e
2. increase "scale" until `airflow dags list` stops showing the load dags
3. decrease it by one and check that they show up again
4. trigger a dag run
5. check its state periodically; eventually the check will fail
I initially discovered this using the `CeleryExecutor` and a much messier dag, but once I understood what I was looking for I was able to recreate it using the dag linked above and `astro dev start`.
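For reference, a minimal sketch of the kind of parameterized DAG factory the gist describes is shown below; the names and the `SCALE` constant are illustrative rather than the gist's exact contents, but the parse cost grows the same way (quadratically with the fan-out size):
```python
# Illustrative sketch only; mirrors the structure of the gist, not its exact code.
import pendulum
from airflow.decorators import dag, task

SCALE = 13  # raising this inflates the task count and hence the parse time


def build_load_dag(num: int, size: int):
    @dag(
        dag_id=f"load_{num:02d}",
        schedule_interval=None,
        start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
        catchup=False,
    )
    def load():
        @task
        def worker(i: int) -> int:
            return i * i

        # fan out into size**2 independent tasks; repeated calls get auto-suffixed task ids
        [worker(i) for i in range(1, size**2 + 1)]

    return load()


globals()[f"load_{SCALE:02d}"] = build_load_dag(SCALE, SCALE)
```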
### Operating System
docker/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
```
FROM quay.io/astronomer/ap-airflow:2.2.3-onbuild
```
### Anything else
When I was running this via the CeleryExecutor (deployed via helm on a single-node k8s cluster), I noticed similar dag-parsing timeouts showing up in the worker logs. I failed to capture them because I didn't yet know what I was looking for, but if they would be helpful I can recreate that scenario and post them here.
----
I tried to work around this error by doubling the following configs:
- [dagbag_import_timeout](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dagbag-import-timeout)
- [dag_file_processor_timeout](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-file-processor-timeout)
This "worked", as in the status started showing up without error, but it seemed like making the dag __longer__ had also made it __slower__. As if whatever re-parsing steps were occurring along the way were also slowing it down. It used to take 1h to complete, but when I checked on it after 1h it was only 30% complete (the new tasks hadn't even started yet).
Should I expect that splitting my large dag into smaller dags will fix this? Or is the overall parsing workload going to eat into my runtime regardless of how it is sliced?
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21450 | https://github.com/apache/airflow/pull/21793 | 7e0c6e4fc7fcaccfa6c49efddea3aaae96c9260c | dade6e075f5229f15b8b0898393c529e0e9851bc | "2022-02-09T05:48:01Z" | python | "2022-02-24T21:45:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,421 | ["airflow/providers/amazon/aws/hooks/eks.py", "tests/providers/amazon/aws/hooks/test_eks.py"] | Unable to use EKSPodOperator with aws_conn_id parameter | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
### Apache Airflow version
2.2.2
### Operating System
AmazonLinux
### Deployment
MWAA
### Deployment details
Out of box deployment of MWAA with Airflow 2.2.2
### What happened
I tried to launch a Kubernetes pod using [EKSPodOperator](https://airflow.apache.org/docs/apache-airflow-providers-amazon/2.4.0/_api/airflow/providers/amazon/aws/operators/eks/index.html#airflow.providers.amazon.aws.operators.eks.EKSPodOperator) with the **aws_conn_id** parameter in order to authenticate to the EKS cluster through the IAM / OIDC provider.
The pod does not start and I get the following error in my Airflow task log:
```
[2022-02-08, 10:55:30 CET] {{refresh_config.py:71}} ERROR - exec: process returned 1. Traceback (most recent call last):
File "/usr/lib64/python3.7/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib64/python3.7/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/usr/local/lib/python3.7/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/usr/local/lib/python3.7/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/usr/local/lib/python3.7/site-packages/airflow/configuration.py", line 1129, in <module>
conf.validate()
File "/usr/local/lib/python3.7/site-packages/airflow/configuration.py", line 226, in validate
self._validate_enums()
File "/usr/local/lib/python3.7/site-packages/airflow/configuration.py", line 253, in _validate_enums
+ f"{value!r}. Possible values: {', '.join(enum_options)}."
airflow.exceptions.AirflowConfigException: `[logging] logging_level` should not be 'fatal'. Possible values: CRITICAL, FATAL, ERROR, WARN, WARNING, INFO, DEBUG.
```
Indeed, the [EKSHook](https://github.com/apache/airflow/blob/providers-amazon/2.4.0/airflow/providers/amazon/aws/hooks/eks.py#L596) sets **AIRFLOW__LOGGING__LOGGING_LEVEL** to **fatal**, while [airflow core](https://github.com/apache/airflow/blob/2.2.2/airflow/configuration.py#L198) checks that the logging level is one of the uppercase values such as **FATAL**.
It seems we have a case-sensitivity problem.
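To make the mismatch concrete, here is a small standalone illustration of why the check rejects the lowercase value (this is not the actual hook or configuration code):
```python
# Standalone sketch of the case-sensitive enum check described in the traceback above.
allowed_levels = {"CRITICAL", "FATAL", "ERROR", "WARN", "WARNING", "INFO", "DEBUG"}

exported_by_hook = "fatal"  # value the EKSHook puts into AIRFLOW__LOGGING__LOGGING_LEVEL

print(exported_by_hook in allowed_levels)          # False -> AirflowConfigException at startup
print(exported_by_hook.upper() in allowed_levels)  # True  -> exporting "FATAL" would pass the check
```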
### What you expected to happen
I expected my pod to start, using the IAM / OIDC provider authentication on EKS.
### How to reproduce
```python
from airflow.providers.amazon.aws.operators.eks import EKSPodOperator

start_pod = EKSPodOperator(
task_id="run_pod",
cluster_name="<your_eks_cluster_name>",
pod_name="run_pod",
image="amazon/aws-cli:latest",
cmds=["sh", "-c", "ls"],
labels={"demo": "hello_world"},
get_logs=True,
# Delete the pod when it reaches its final state, or the execution is interrupted.
is_delete_operator_pod=True,
aws_conn_id="<your_aws_airflow_conn_id>"
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21421 | https://github.com/apache/airflow/pull/21427 | 8fe9783fcd813dced8de849c8130d0eb7f90bac3 | 33ca0f26544a4d280f2f56843e97deac7f33cea5 | "2022-02-08T10:08:38Z" | python | "2022-02-08T20:51:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,412 | ["airflow/providers/microsoft/azure/hooks/cosmos.py", "tests/providers/microsoft/azure/hooks/test_azure_cosmos.py", "tests/providers/microsoft/azure/operators/test_azure_cosmos.py"] | v3.5.0 airflow.providers.microsoft.azure.operators.cosmos not running | ### Apache Airflow version
2.2.3 (latest released)
### What happened
Submitting this on advice from the community Slack: attempting to use the v3.5.0 `AzureCosmosInsertDocumentOperator` fails with an attribute error: `AttributeError: 'CosmosClient' object has no attribute 'QueryDatabases'`
### What you expected to happen
The expected behaviour is that the document is upserted correctly. I traced through the source, and `does_database_exist()` seems to call `QueryDatabases()` on the result of `self.get_conn()`. The problem is that `get_conn()` (as far as I can tell) returns an actual Azure `CosmosClient`, which definitely does not have a `QueryDatabases()` method; in the v4 SDK it is `query_databases()`.
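For illustration, a minimal sketch of the API mismatch, assuming the azure-cosmos 4.x SDK listed below (the endpoint and key are placeholders):
```python
from azure.cosmos import CosmosClient

client = CosmosClient(url="https://example.documents.azure.com:443/", credential="<master-key>")

# What the hook effectively attempts (v3-style, PascalCase):
# client.QueryDatabases(...)   # AttributeError on an azure-cosmos 4.x client

# What the 4.x SDK actually exposes (snake_case):
databases = list(client.query_databases(query="SELECT * FROM r WHERE r.id = 'mydb'"))
```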
### How to reproduce
From what I can see, any attempt to use this operator on Airflow 2.2.3 will fail in this way.
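For example, a minimal task definition like the following hits the error as soon as the operator executes (the database, collection, and connection id are placeholders):
```python
from airflow.providers.microsoft.azure.operators.cosmos import AzureCosmosInsertDocumentOperator

insert_doc = AzureCosmosInsertDocumentOperator(
    task_id="insert_doc",
    database_name="my-database",
    collection_name="my-collection",
    document={"id": "doc-1", "value": 42},
    azure_cosmos_conn_id="azure_cosmos_default",
)
```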
### Operating System
Ubuntu 18.04.5 LTS
### Versions of Apache Airflow Providers
azure-batch==12.0.0
azure-common==1.1.28
azure-core==1.22.0
azure-cosmos==4.2.0
azure-datalake-store==0.0.52
azure-identity==1.7.1
azure-keyvault==4.1.0
azure-keyvault-certificates==4.3.0
azure-keyvault-keys==4.4.0
azure-keyvault-secrets==4.3.0
azure-kusto-data==0.0.45
azure-mgmt-containerinstance==1.5.0
azure-mgmt-core==1.3.0
azure-mgmt-datafactory==1.1.0
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-resource==20.1.0
azure-nspkg==3.0.2
azure-storage-blob==12.8.1
azure-storage-common==2.1.0
azure-storage-file==2.1.0
msrestazure==0.6.4
### Deployment
Virtualenv installation
### Deployment details
Clean standalone install I am using for evaluating airflow for our environment
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21412 | https://github.com/apache/airflow/pull/21514 | de41ccc922b3d1f407719744168bb6822bde9a58 | 3c4524b4ec2b42a8af0a8c7b9d8f1d065b2bfc83 | "2022-02-08T05:53:54Z" | python | "2022-02-23T16:39:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,388 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | Optionally raise an error if source file does not exist in GCSToGCSOperator | ### Description
Right now, when using GCSToGCSOperator to copy a file from one bucket to another, if the source file does not exist, nothing happens and the task is considered successful. This can be desirable for some use cases, for example when you want to copy all the files from a directory or all the files that match a specific pattern.
But in other cases, such as when you only want to copy one specific blob, it might be useful to raise an exception if the source file can't be found; otherwise the task succeeds while the copy silently does nothing.
My proposal is to add a new flag to GCSToGCSOperator to enable this behavior. By default, for backward compatibility, the behavior would remain the current one, but it would be possible to require the source file to exist and mark the task as failed if it doesn't.
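To illustrate, usage could look roughly like the sketch below; `require_source_exists` is a hypothetical parameter name used only to convey the idea, not an existing option:
```python
from airflow.providers.google.cloud.transfers.gcs_to_gcs import GCSToGCSOperator

copy_report = GCSToGCSOperator(
    task_id="copy_report",
    source_bucket="data-source-bucket",
    source_object="exports/2022-02-07/report.csv",
    destination_bucket="data-destination-bucket",
    destination_object="backups/report.csv",
    # hypothetical flag: fail the task instead of silently succeeding when the object is missing
    require_source_exists=True,
)
```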
### Use case/motivation
The task would fail if the source file to copy does not exist, but only when this behavior is explicitly enabled.
### Related issues
If you want to be sure that the source file exists and is copied on every execution, the operator currently gives you no way to make the task fail. If the status is successful but nothing is written to the destination, the copy fails silently.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21388 | https://github.com/apache/airflow/pull/21391 | a2abf663157aea14525e1a55eb9735ba659ae8d6 | 51aff276ca4a33ee70326dd9eea6fba59f1463a3 | "2022-02-07T12:15:28Z" | python | "2022-02-10T19:30:03Z" |