status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | apache/airflow | https://github.com/apache/airflow | 26,567 | ["airflow/providers/amazon/aws/transfers/sql_to_s3.py", "tests/providers/amazon/aws/transfers/test_sql_to_s3.py"] | Changes to SqlToS3Operator Breaking CSV formats | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
`apache-airflow-providers-amazon==5.1.0`
### Apache Airflow version
2.3.4
### Operating System
Linux
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
Once https://github.com/apache/airflow/pull/25083 was merged, when using CSV as the output format on the `SqlToS3Operator`, null strings started appearing as `"None"` in the actual CSV export. This will cause unintended behavior in most use cases for reading the CSV including uploading to databases.
Certain databases such as Snowflake allow for things like `NULL_IF` on import; however, there are times when you would want the actual string "None" in the field, and at that point there would be no way to distinguish the two.
Before:
![Screen Shot 2022-09-21 at 11 36 00 AM](https://user-images.githubusercontent.com/30101670/191572950-f2abed8b-55bf-43f8-b166-acf81cb52f06.png)
After:
![Screen Shot 2022-09-21 at 11 35 52 AM](https://user-images.githubusercontent.com/30101670/191572967-bc61f563-b92b-4678-b22e-befa5511cca8.png)
### What you think should happen instead
The strings should be empty as they did previously. I understand the implementation of the recent PR for parquet and propose that we add an additional condition to line 138 of the `sql_to_s3.py` file restricting that to only if the chosen output is parquet.
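For illustration, here is a standalone pandas sketch of the symptom. This is not the operator's actual code; the real `sql_to_s3.py` logic and line numbers differ, and the cast shown is only a stand-in for the dtype conversion introduced for parquet:
```python
import pandas as pd

df = pd.DataFrame({"name": ["alice", None, "carol"]})

# Default behavior: nulls are written as empty fields in the CSV.
print(df.to_csv(index=False))

# If the column is cast to str first (a conversion only parquet needs),
# the null becomes the literal string "None" in the CSV output.
print(df.astype(str).to_csv(index=False))
```
The proposal above amounts to applying the parquet-oriented conversion only when the chosen output format is parquet, so the CSV path keeps the first behavior.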
### How to reproduce
Run the `SqlToS3Operator` with the default output format of `CSV` on any query that selects a column of type string that allows null. Look at the outputted CSV in S3.
### Anything else
This happens every time we select a nullable column with the `SqlToS3Operator`.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26567 | https://github.com/apache/airflow/pull/26676 | fa0cb363b860b553af2ef9530ea2de706bd16e5d | 9c59312fbcf113d56ee0a61e018dfd7cef725af7 | "2022-09-21T17:39:05Z" | python | "2022-10-02T01:12:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,566 | ["docs/apache-airflow/concepts/tasks.rst"] | Have SLA docs reflect reality | ### What do you see as an issue?
The [SLA documentation](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html#slas) currently states the following:
> An SLA, or a Service Level Agreement, is an expectation for the maximum time a Task should take. If a task takes longer than this to run...
However, this is not how SLAs currently work in Airflow: the SLA time is calculated from the start of the DAG run, not from the start of the task.
For example, with a DAG like the one below, the SLA will always trigger 5 minutes after the DAG run starts, even though the task itself never takes 5 minutes to run:
```python
import datetime
from airflow import DAG
from airflow.sensors.time_sensor import TimeSensor
from airflow.operators.python import PythonOperator
with DAG(dag_id="my_dag", start_date=datetime.datetime(2022, 1, 1), schedule_interval="0 0 * * *") as dag:
    wait_time_mins = TimeSensor(task_id="wait_time_mins", target_time=datetime.time(minute=10))
    run_fast = PythonOperator(
        task_id="run_fast",  # task_id is required; added so the example actually runs
        python_callable=lambda *a, **kw: True,
        sla=datetime.timedelta(minutes=5),
    )
    run_fast.set_upstream(wait_time_mins)
```
### Solving the problem
Update the docs to explain how SLAs work in reality.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26566 | https://github.com/apache/airflow/pull/27111 | 671029bebc33a52d96f9513ae997e398bd0945c1 | 639210a7e0bfc3f04f28c7d7278292d2cae7234b | "2022-09-21T16:00:36Z" | python | "2022-10-27T14:34:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,565 | ["docs/apache-airflow/core-concepts/executor/local.rst"] | Documentation unclear about multiple LocalExecutors on HA Scheduler deployment | ### What do you see as an issue?
According to Airflow documentation, it's now possible to run multiple Airflow Schedulers starting with Airflow 2.x.
What's not clear from the documentation is what happens if each of the machines running the Scheduler has executor = LocalExecutor in the [core] section of airflow.cfg. In this context, if I have Airflow Scheduler running on 3 machines, does this mean that there will also be 3 LocalExecutors processing tasks in a distributed fashion?
### Solving the problem
Enhancing documentation to clarify the details about multiple LocalExecutors on HA Scheduler deployment
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26565 | https://github.com/apache/airflow/pull/32310 | 61f33304d587b3b0a48a876d3bfedab82e42bacc | e53320d62030a53c6ffe896434bcf0fc85803f31 | "2022-09-21T15:53:02Z" | python | "2023-07-05T09:22:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,555 | ["airflow/cli/commands/task_command.py", "tests/cli/commands/test_task_command.py"] | "airflow tasks render/state" cli commands do not work for mapped task instances | ### Apache Airflow version
Other Airflow 2 version
### What happened
Running the following CLI command:
```
airflow tasks render test-dynamic-mapping consumer scheduled__2022-09-18T15:14:15.107780+00:00 --map-index
```
fails with exception:
```
Traceback (most recent call last):
File "/opt/python3.8/bin/airflow", line 8, in <module>
sys.exit(main())
File "/opt/python3.8/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 101, in wrapper
return f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 337, in _wrapper
f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 576, in task_render
for attr in task.__class__.template_fields:
TypeError: 'member_descriptor' object is not iterable
```
Running the following CLI command:
```
airflow tasks state test-dynamic-mapping consumer scheduled__2022-09-18T15:14:15.107780+00:00 --map-index
```
fails with exception:
```
Traceback (most recent call last):
File "/opt/python3.8/bin/airflow", line 8, in <module>
sys.exit(main())
File "/opt/python3.8/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 101, in wrapper
return f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 337, in _wrapper
f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 422, in task_state
print(ti.current_state())
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 849, in current_state
session.query(TaskInstance.state)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2879, in scalar
ret = self.one()
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2856, in one
return self._iter().one()
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 1190, in one
return self._only_one_row(
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 613, in _only_one_row
raise exc.MultipleResultsFound(
sqlalchemy.exc.MultipleResultsFound: Multiple rows were found when exactly one was required
```
### What you think should happen instead
The command should execute successfully.
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26555 | https://github.com/apache/airflow/pull/28698 | a7e1cb2fbfc684508f4b832527ae2371f99ad37d | 1da17be37627385fed7fc06584d72e0abda6a1b5 | "2022-09-21T13:56:19Z" | python | "2023-01-04T20:43:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,548 | ["airflow/models/renderedtifields.py", "airflow/utils/sqlalchemy.py"] | Resolve warning about renderedtifields query | ### Body
This warning is emitted when running a task instance, at least on mysql:
```
[2022-09-21, 05:22:56 UTC] {logging_mixin.py:117} WARNING -
/home/airflow/.local/lib/python3.8/site-packages/airflow/models/renderedtifields.py:258
SAWarning: Coercing Subquery object into a select() for use in IN();
please pass a select() construct explicitly
```
Need to resolve.
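For context, this `SAWarning` is what SQLAlchemy 1.4 emits when a `Subquery` object is passed straight to `in_()`. Below is a self-contained sketch of the pattern and the usual fix, using a toy model rather than the actual `renderedtifields.py` code:
```python
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class RTIF(Base):  # toy stand-in for RenderedTaskInstanceFields
    __tablename__ = "rtif"
    id = Column(Integer, primary_key=True)
    dag_id = Column(String(250))


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    subq = session.query(RTIF.id).filter(RTIF.dag_id == "some_dag").subquery()

    # Emits "Coercing Subquery object into a select() for use in IN()":
    session.query(RTIF).filter(RTIF.id.in_(subq)).all()

    # Warning-free: pass an explicit select() built from the subquery.
    session.query(RTIF).filter(RTIF.id.in_(select(subq.c.id))).all()
```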
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26548 | https://github.com/apache/airflow/pull/26667 | 22d52c00f6397fde8d97cf2479c0614671f5b5ba | 0e79dd0b1722a610c898da0ba8557b8a94da568c | "2022-09-21T05:26:52Z" | python | "2022-09-26T13:49:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,544 | ["airflow/utils/db.py"] | Choose setting for sqlalchemy SQLALCHEMY_TRACK_MODIFICATIONS | ### Body
We need to determine what to do about this warning:
```
/Users/dstandish/.virtualenvs/2.4.0/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py:872 FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
```
Should we set to true or false?
@ashb @potiuk @jedcunningham @uranusjr
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26544 | https://github.com/apache/airflow/pull/26617 | 3396d1f822caac7cbeb14e1e67679b8378a84a6c | 051ba159e54b992ca0111107df86b8abfd8b7279 | "2022-09-21T00:57:27Z" | python | "2022-09-23T07:18:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,529 | ["airflow/serialization/serialized_objects.py", "docs/apache-airflow/best-practices.rst", "docs/apache-airflow/concepts/timetable.rst", "tests/serialization/test_dag_serialization.py"] | Variable.get inside of a custom Timetable breaks the Scheduler | ### Apache Airflow version
2.3.4
### What happened
If you try to use `Variable.get` from inside of a custom Timetable, the Scheduler will break with errors like:
```
scheduler | [2022-09-20 10:19:36,104] {variable.py:269} ERROR - Unable to retrieve variable from secrets backend (MetastoreBackend). Checking subsequent secrets backend.
scheduler | Traceback (most recent call last):
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/variable.py", line 265, in get_variable_from_secrets
scheduler | var_val = secrets_backend.get_variable(key=key)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
scheduler | return func(*args, session=session, **kwargs)
scheduler | File "/opt/conda/envs/production/lib/python3.9/contextlib.py", line 126, in __exit__
scheduler | next(self.gen)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 33, in create_session
scheduler | session.commit()
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1435, in commit
scheduler | self._transaction.commit(_to_root=self.future)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 829, in commit
scheduler | self._prepare_impl()
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 797, in _prepare_impl
scheduler | self.session.dispatch.before_commit(self.session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/sqlalchemy/event/attr.py", line 343, in __call__
scheduler | fn(*args, **kw)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/sqlalchemy.py", line 341, in _validate_commit
scheduler | raise RuntimeError("UNEXPECTED COMMIT - THIS WILL BREAK HA LOCKS!")
scheduler | RuntimeError: UNEXPECTED COMMIT - THIS WILL BREAK HA LOCKS!
scheduler | [2022-09-20 10:19:36,105] {plugins_manager.py:264} ERROR - Failed to import plugin /home/tsanders/airflow_standalone_sqlite/plugins/custom_timetable.py
scheduler | Traceback (most recent call last):
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/plugins_manager.py", line 256, in load_plugins_from_plugin_directory
scheduler | loader.exec_module(mod)
scheduler | File "<frozen importlib._bootstrap_external>", line 850, in exec_module
scheduler | File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
scheduler | File "/home/tsanders/airflow_standalone_sqlite/plugins/custom_timetable.py", line 9, in <module>
scheduler | class CustomTimetable(CronDataIntervalTimetable):
scheduler | File "/home/tsanders/airflow_standalone_sqlite/plugins/custom_timetable.py", line 10, in CustomTimetable
scheduler | def __init__(self, *args, something=Variable.get('something'), **kwargs):
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/variable.py", line 138, in get
scheduler | raise KeyError(f'Variable {key} does not exist')
scheduler | KeyError: 'Variable something does not exist'
scheduler | [2022-09-20 10:19:36,179] {scheduler_job.py:769} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
scheduler | Traceback (most recent call last):
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 752, in _execute
scheduler | self._run_scheduler_loop()
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 840, in _run_scheduler_loop
scheduler | num_queued_tis = self._do_scheduling(session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 914, in _do_scheduling
scheduler | self._start_queued_dagruns(session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1086, in _start_queued_dagruns
scheduler | dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/dagbag.py", line 179, in get_dag
scheduler | self._add_dag_from_db(dag_id=dag_id, session=session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/dagbag.py", line 254, in _add_dag_from_db
scheduler | dag = row.dag
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/serialized_dag.py", line 209, in dag
scheduler | dag = SerializedDAG.from_dict(self.data) # type: Any
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1099, in from_dict
scheduler | return cls.deserialize_dag(serialized_obj['dag'])
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1021, in deserialize_dag
scheduler | v = _decode_timetable(v)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 189, in _decode_timetable
scheduler | raise _TimetableNotRegistered(importable_string)
scheduler | airflow.serialization.serialized_objects._TimetableNotRegistered: Timetable class 'custom_timetable.CustomTimetable' is not registered
```
Note that in this case, the Variable in question *does* exist, and the `KeyError` is a red herring.
If you add a `default_var`, things seem to work, though I wouldn't trust it since there is clearly some context where it will fail to load the Variable and will always fall back to the default. Additionally, this still raises the `UNEXPECTED COMMIT - THIS WILL BREAK HA LOCKS!` error, which I assume is a bad thing.
### What you think should happen instead
I'm not sure whether or not this should be allowed. In my case, I was able to work around the error by making all Timetable initializer args required (no default values) and pulling the `Variable.get` out into a wrapper function.
### How to reproduce
`custom_timetable.py`
```
#!/usr/bin/env python3
from __future__ import annotations
from airflow.models.variable import Variable
from airflow.plugins_manager import AirflowPlugin
from airflow.timetables.interval import CronDataIntervalTimetable
class CustomTimetable(CronDataIntervalTimetable):
def __init__(self, *args, something=Variable.get('something'), **kwargs):
self._something = something
super().__init__(*args, **kwargs)
class CustomTimetablePlugin(AirflowPlugin):
name = 'custom_timetable_plugin'
timetables = [CustomTimetable]
```
`test_custom_timetable.py`
```
#!/usr/bin/env python3
import datetime
import pendulum
from airflow.decorators import dag, task
from custom_timetable import CustomTimetable
@dag(
start_date=datetime.datetime(2022, 9, 19),
timetable=CustomTimetable(cron='0 0 * * *', timezone=pendulum.UTC),
)
def test_custom_timetable():
@task
def a_task():
print('hello')
a_task()
dag = test_custom_timetable()
if __name__ == '__main__':
dag.cli()
```
```
airflow variables set something foo
airflow dags trigger test_custom_timetable
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
I was able to reproduce this with:
* Standalone mode, SQLite DB, SequentialExecutor
* Self-hosted deployment, Postgres DB, CeleryExecutor
### Anything else
Related: #21895
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26529 | https://github.com/apache/airflow/pull/26649 | 26f94c5370587f73ebd935cecf208c6a36bdf9b6 | 37c0cb6d3240062106388449cf8eed9c948fb539 | "2022-09-20T16:02:09Z" | python | "2022-09-26T22:01:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,527 | ["airflow/utils/json.py", "airflow/www/app.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_app.py"] | UI error when clicking on graph view when a task has pod overrides | ### Apache Airflow version
2.4.0
### What happened
When I click on the graph view or the Gantt view for a DAG that has a task with pod_overrides, I get
```
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find
a solution to your problem.
Consider following these steps:
* gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
* find similar issues using:
* [GitHub Discussions](https://github.com/apache/airflow/discussions)
* [GitHub Issues](https://github.com/apache/airflow/issues)
* [Stack Overflow](https://stackoverflow.com/questions/tagged/airflow)
* the usual search engine you use on a daily basis
* if you run Airflow on a Managed Service, consider opening an issue using the service support channels
* if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a [bug report](https://github.com/apache/airflow/issues/new/choose).
Make sure however, to include all relevant details and results of your investigation so far.
Python version: 3.8.14
Airflow version: 2.4.0
Node: airflow-webserver-6d4d7d5ccd-qc2x5
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/auth.py", line 47, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 118, in view_func
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 81, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 2810, in graph
return self.render_template(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 541, in render_template
return super().render_template(
File "/home/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 322, in render_template
return render_template(
File "/home/airflow/.local/lib/python3.8/site-packages/flask/templating.py", line 147, in render_template
return _render(app, template, context)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/templating.py", line 130, in _render
rv = template.render(context)
File "/home/airflow/.local/lib/python3.8/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/home/airflow/.local/lib/python3.8/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/templates/airflow/graph.html", line 21, in top-level template code
{% from 'appbuilder/loading_dots.html' import loading_dots %}
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/templates/airflow/dag.html", line 37, in top-level template code
{% set execution_date_arg = request.args.get('execution_date') %}
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/templates/airflow/main.html", line 21, in top-level template code
{% from 'airflow/_messages.html' import show_message %}
File "/home/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
{% import 'appbuilder/baselib.html' as baselib %}
File "/home/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 50, in top-level template code
{% block tail %}
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/templates/airflow/graph.html", line 137, in block 'tail'
let taskInstances = {{ task_instances|tojson }};
File "/home/airflow/.local/lib/python3.8/site-packages/jinja2/filters.py", line 1688, in do_tojson
return htmlsafe_json_dumps(value, dumps=dumps, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/jinja2/utils.py", line 658, in htmlsafe_json_dumps
dumps(obj, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/json/provider.py", line 230, in dumps
return json.dumps(obj, **kwargs)
File "/usr/local/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/usr/local/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/json/provider.py", line 122, in _default
raise TypeError(f"Object of type {type(o).__name__} is not JSON serializable")
TypeError: Object of type V1Pod is not JSON serializable
```
### What you think should happen instead
The UI should render the dag visualization.
### How to reproduce
* Add a `pod_override` to a task
* Run the task
* click on the graph view
### Operating System
Debian GNU/Linux 11 (bullseye) docker image
### Versions of Apache Airflow Providers
apache-airflow-providers-airbyte==3.1.0
apache-airflow-providers-amazon==5.1.0
apache-airflow-providers-apache-spark==3.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.3.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-docker==3.1.0
apache-airflow-providers-elasticsearch==4.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.3.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-jira==3.0.1
apache-airflow-providers-microsoft-azure==4.2.0
apache-airflow-providers-odbc==3.1.1
apache-airflow-providers-pagerduty==3.0.0
apache-airflow-providers-postgres==5.2.1
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-salesforce==5.1.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.0.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.1.0
apache-airflow-providers-tableau==3.0.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
official helm
### Anything else
every time after the dag is run
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26527 | https://github.com/apache/airflow/pull/26554 | e61d823f18238a82570203b62fe986bd0bc91b51 | 378dfbe2fe266f17859dbabd34b9bc8cd5c904ab | "2022-09-20T15:05:42Z" | python | "2022-09-21T21:12:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,499 | ["airflow/models/xcom_arg.py"] | Dynamic task mapping zip() iterates unexpected number of times | ### Apache Airflow version
2.4.0
### What happened
When running `zip()` with different-length lists, I get an unexpected result:
```python
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
with DAG(
dag_id="demo_dynamic_task_mapping_zip",
start_date=datetime(2022, 1, 1),
schedule=None,
):
@task
def push_letters():
return ["a", "b", "c"]
@task
def push_numbers():
return [1, 2, 3, 4]
@task
def pull(value):
print(value)
pull.expand(value=push_letters().zip(push_numbers()))
```
Iterates over `[("a", 1), ("b", 2), ("c", 3), ("a", 1)]`, so it iterates for the length of the longest collection, but restarts iterating elements when reaching the length of the shortest collection.
I would expect it to behave like Python's builtin `zip` and iterate for the length of the shortest collection, so 3x in the example above, i.e. `[("a", 1), ("b", 2), ("c", 3)]`.
Additionally, I went digging in the source code and found the `fillvalue` argument which works as expected:
```python
pull.expand(value=push_letters().zip(push_numbers(), fillvalue="foo"))
```
Iterates over `[("a", 1), ("b", 2), ("c", 3), ("foo", 4)]`.
However, with `fillvalue` not set, I would expect it to iterate only for the length of the shortest collection.
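For comparison, the builtin semantics the report refers to, with no Airflow involved:
```python
from itertools import zip_longest

letters = ["a", "b", "c"]
numbers = [1, 2, 3, 4]

print(list(zip(letters, numbers)))                           # [('a', 1), ('b', 2), ('c', 3)]
print(list(zip_longest(letters, numbers, fillvalue="foo")))  # [..., ('foo', 4)]
```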
### What you think should happen instead
I expect `zip()` to iterate over the number of elements of the shortest collection (without `fillvalue` set).
### How to reproduce
See above.
### Operating System
MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
OSS Airflow
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26499 | https://github.com/apache/airflow/pull/26636 | df3bfe3219da340c566afc9602278e2751889c70 | f219bfbe22e662a8747af19d688bbe843e1a953d | "2022-09-19T18:51:49Z" | python | "2022-09-26T09:02:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,497 | ["airflow/migrations/env.py", "airflow/migrations/versions/0118_2_4_2_add_missing_autoinc_fab.py", "airflow/migrations/versions/0119_2_5_0_add_updated_at_to_dagrun_and_ti.py", "airflow/settings.py", "airflow/utils/db.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/migrations-ref.rst"] | Upgrading to airflow 2.4.0 from 2.3.4 causes NotNullViolation error | ### Apache Airflow version
2.4.0
### What happened
Stopped existing processes, upgraded from airflow 2.3.4 to 2.4.0, and ran airflow db upgrade successfully. Upon restarting the services, I'm not seeing any dag runs from the past 10 days. I kick off a new job, and I don't see it show up in the grid view. Upon checking the systemd logs, I see that there are a lot of Postgres errors from the webserver. Below is a sample of such errors.
```
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'Datasets'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,183] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 13, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 13, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,209] {manager.py:420} ERROR - Add View Menu Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, Datasets).
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'Datasets'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,212] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 17, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 17, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,229] {manager.py:420} ERROR - Add View Menu Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, DAG Warnings).
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'DAG Warnings'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,232] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 17, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 17, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,250] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 13, 23).
```
I tried running airflow db check, init, check-migration, upgrade without any errors, but the errors still remain.
Please let me know if I missed any steps during the upgrade, or if this is a known issue with a workaround.
### What you think should happen instead
All dag runs should be visible
### How to reproduce
upgrade airflow, upgrade db, restart the services
### Operating System
Ubuntu 18.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26497 | https://github.com/apache/airflow/pull/26885 | 2f326a6c03efed8788fe0263df96b68abb801088 | 7efdeed5eccbf5cb709af40c8c66757e59c957ed | "2022-09-19T18:13:02Z" | python | "2022-10-07T16:37:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,492 | ["airflow/utils/log/file_task_handler.py"] | Cannot fetch log from Celery worker | ### Discussed in https://github.com/apache/airflow/discussions/26490
_Originally posted by **emredjan**, September 19, 2022_
### Apache Airflow version
2.4.0
### What happened
When running tasks on a remote Celery worker, the webserver fails to fetch logs from the worker machine, giving a '403 - Forbidden' error on version 2.4.0. This behavior does not happen on 2.3.3, where the remote logs are retrieved and displayed successfully.
The `webserver / secret_key` configuration is the same in all nodes (the config files are synced), and their time is synchronized using a central NTP server, making the solution in the warning message not applicable.
My limited analysis pointed to the `serve_logs.py` file, and the flask request object that's passed to it, but couldn't find the root cause.
### What you think should happen instead
It should fetch and show remote celery worker logs on the webserver UI correctly, as it did in previous versions.
### How to reproduce
Use airflow version 2.4.0
Use CeleryExecutor with RabbitMQ
Use a separate Celery worker machine
Run a dag/task on the remote worker
Try to display task log on the web UI
### Operating System
Red Hat Enterprise Linux 8.6 (Ootpa)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-hashicorp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-mssql==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-odbc==3.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
```
### Deployment
Virtualenv installation
### Deployment details
Using CeleryExecutor / rabbitmq with 2 servers
### Anything else
All remote task executions have the same problem.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26492 | https://github.com/apache/airflow/pull/26493 | b9c4e98d8f8bcc129cbb4079548bd521cd3981b9 | 52560b87c991c9739791ca8419219b0d86debacd | "2022-09-19T14:10:25Z" | python | "2022-09-19T16:37:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,425 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | get_dags does not fetch more than 100 dags. | Hi,
The function does not return more than 100 DAGs even when the limit is set higher, so `get_dags(limit=500)` will only return a maximum of 100 DAGs.
I have to use the following hack to mitigate this problem.
```
def _get_dags(self, max_dags: int = 500):
i = 0
responses = []
while i <= max_dags:
response = self._api.get_dags(offset=i)
responses += response['dags']
i = i + 100
return [dag['dag_id'] for dag in responses]
```
Versions I am using are:
```
apache-airflow==2.3.2
apache-airflow-client==2.3.0
```
and
```
apache-airflow==2.2.2
apache-airflow-client==2.1.0
```
Best,
Hamid | https://github.com/apache/airflow/issues/27425 | https://github.com/apache/airflow/pull/29773 | a0e13370053452e992d45e7956ff33290563b3a0 | 228d79c1b3e11ecfbff5a27c900f9d49a84ad365 | "2022-09-16T22:11:08Z" | python | "2023-02-26T16:19:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,427 | ["airflow/www/static/js/main.js", "airflow/www/utils.py"] | Can not get task which status is null | ### Apache Airflow version
Other Airflow 2 version
### What happened
In the List Task Instance view of the Airflow web UI, when we search for tasks whose state is null, the result is: no records found.
### What you think should happen instead
It should list the tasks whose state is null.
### How to reproduce
Use the Airflow web UI
Open the List Task Instance view
Add a filter: state equal to null
### Operating System
oracle linux
### Versions of Apache Airflow Providers
2.2.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26427 | https://github.com/apache/airflow/pull/26584 | 64622929a043436b235b9fb61fb076c5d2e02124 | 8e2e80a0ce0e1819874e183fb1662e879cdd8a08 | "2022-09-16T06:41:55Z" | python | "2022-10-11T19:31:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,424 | ["airflow/www/extensions/init_views.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | `POST /taskInstances/list` with wildcards returns unhelpful error | ### Apache Airflow version
2.3.4
### What happened
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_task_instances_batch
fails with an error with wildcards while
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_task_instances
succeeds with wildcards
Error:
```
400
"None is not of type 'object'"
```
### What you think should happen instead
_No response_
### How to reproduce
1) `astro dev init`
2) `astro dev start`
3) `test1.py` and `python test1.py`
```
import requests
host = "http://localhost:8080/api/v1"
kwargs = {
'auth': ('admin', 'admin'),
'headers': {'content-type': 'application/json'}
}
r = requests.post(f'{host}/dags/~/dagRuns/~/taskInstances/list', **kwargs, timeout=10)
print(r.url, r.text)
```
output
```
http://localhost:8080/api/v1/dags/~/dagRuns/~/taskInstances/list
{
"detail": "None is not of type 'object'",
"status": 400,
"title": "Bad Request",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
3) `test2.py` and `python test2.py`
```
import requests
host = "http://localhost:8080/api/v1"
kwargs = {
'auth': ('admin', 'admin'),
'headers': {'content-type': 'application/json'}
}
r = requests.get(f'{host}/dags/~/dagRuns/~/taskInstances', **kwargs, timeout=10) # change here
print(r.url, r.text)
```
```
<correct output>
```
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26424 | https://github.com/apache/airflow/pull/30596 | c2679c57aa0281dd455c6a01aba0e8cfbb6a0e1c | e89a7eeea6a7a5a5a30a3f3cf86dfabf7c343412 | "2022-09-15T22:52:20Z" | python | "2023-04-12T12:40:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,399 | ["airflow/providers/google/cloud/hooks/kubernetes_engine.py", "tests/providers/google/cloud/hooks/test_kubernetes_engine.py"] | GKEHook.create_cluster is not wait_for_operation using the input project_id parameter | ### Apache Airflow version
main (development)
### What happened
In the GKEHook, the `create_cluster` method creates a GKE cluster in the project_id specified by the input, but `wait_for_operation` waits for the operation to appear in the default project_id (because no project_id is explicitly provided):
https://github.com/apache/airflow/blob/f6c579c1c0efb8cdd2eaf905909cda7bc7314f88/airflow/providers/google/cloud/hooks/kubernetes_engine.py#L231-L237
This causes a bug when we try to create clusters under a different project_id (compared to the default project_id).
### What you think should happen instead
instead it should be
```python
resource = self.wait_for_operation(resource, project_id)
```
so that we won't get errors when trying to create a cluster under a different project_id
### How to reproduce
```python
create_cluster = GKECreateClusterOperator(
task_id="create_cluster",
project_id=GCP_PROJECT_ID,
location=GCP_LOCATION,
body=CLUSTER,
)
```
Make sure the GCP_PROJECT_ID is not the same as the default project_id used by the default Google service account.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26399 | https://github.com/apache/airflow/pull/26418 | 0bca962cd2c9671adbe68923e17ebecf66a0c6be | e31590039634ff722ad005fe9f1fc02e5a669699 | "2022-09-14T17:15:25Z" | python | "2022-09-20T07:46:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,380 | ["airflow/datasets/__init__.py", "tests/datasets/test_dataset.py", "tests/models/test_dataset.py"] | UI doesn't handle whitespace/empty dataset URI's well | ### Apache Airflow version
main (development)
### What happened
Here are some poor choices for dataset URIs:
```python3
from airflow import Dataset

empty = Dataset("")
colons = Dataset("::::::")
whitespace = Dataset("\t\n")
emoji = Dataset("😊")
long = Dataset(5000 * "x")
injection = Dataset("105'; DROP TABLE 'dag")
```
And a dag file which replicates the problems mentioned below: https://gist.github.com/MatrixManAtYrService/a32bba5d382cd9a925da72571772b060 (full tracebacks included as comments)
Here's how they did:
|dataset|behavior|
|:-:|:--|
|empty| dag triggered with no trouble, not selectable in the datasets UI|
|emoji| `airflow dags reserialize`: `UnicodeEncodeError: 'ascii' codec can't encode character '\U0001f60a' in position 0: ordinal not in range(128)`|
|colons| no trouble|
|whitespace| dag triggered with no trouble, selectable in the datasets UI, but shows no history|
|long|sqlalchemy error during serialization|
|injection| no trouble|
Finally, here's a screenshot:
<img width="1431" alt="Screen Shot 2022-09-13 at 11 29 02 PM" src="https://user-images.githubusercontent.com/5834582/190069341-dc17c66a-f941-424d-a455-cd531580543a.png">
Notice that there are two empty rows in the datasets list, one for `empty`, the other for `whitespace`. Only `whitespace` is selectable, both look weird.
### What you think should happen instead
I propose that we add a URI sanity check during serialization and just reject dataset URIs that are:
- only whitespace
- empty
- long enough that they're going to cause a database problem
The `emoji` case failed in a nice way. Ideally `whitespace`, `long` and `empty` can fail in the same way. If implemented, this would prevent any of the weird cases above from making it to the UI in the first place.
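A hedged sketch of what such a serialization-time check could look like; the function name and length limit are made up for illustration and are not Airflow's actual behavior:
```python
def validate_dataset_uri(uri: str, max_length: int = 3000) -> str:
    """Reject URIs that would render as blank rows or overflow a DB column."""
    if not uri or not uri.strip():
        raise ValueError("Dataset URI must not be empty or whitespace-only")
    if len(uri) > max_length:
        raise ValueError(f"Dataset URI is longer than {max_length} characters")
    return uri
```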
### How to reproduce
Unpause the above dags
### Operating System
Docker/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26380 | https://github.com/apache/airflow/pull/26389 | af39faafb7fdd53adbe37964ba88a3814f431cd8 | bd181daced707680ed22f5fd74e1e13094f6b164 | "2022-09-14T05:53:23Z" | python | "2022-09-14T16:11:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,375 | ["airflow/www/extensions/init_views.py", "airflow/www/templates/airflow/error.html", "airflow/www/views.py", "tests/api_connexion/test_error_handling.py"] | Airflow Webserver returns incorrect HTTP Error Response for custom REST API endpoints | ### Apache Airflow version
Other Airflow 2 version
### What happened
We are using Airflow version 2.3.1. Apart from the Airflow-provided REST endpoints, we are also using the Airflow webserver to host our own application REST API endpoints. We do this by loading our own blueprints and registering Flask Blueprint routes within an Airflow plugin.
Issue: Our custom REST API endpoints return an incorrect HTTP error response code of 404 when 405 is expected (e.g., invoke the REST API endpoint with an incorrect HTTP method, say POST instead of PUT). This was working in Airflow 1.x but is giving an issue with Airflow 2.x.
Here is a sample Airflow plugin code. If the '/sample-app/v1' API below is invoked with the POST method, I would expect a 405 response. However, it returns a 404.
I tried registering a blueprint error handler for 405 inside the plugin, but that did not work.
```
import flask

from airflow import plugins_manager

test_bp = flask.Blueprint('test_plugin', __name__)
@test_bp.route(
'/sample-app/v1/tags/<tag>', methods=['PUT'])
def initialize_deployment(tag):
"""
Initialize the deployment of the metadata tag
:rtype: flask.Response
"""
return 'Hello, World'
class TestPlugin(plugins_manager.AirflowPlugin):
name = 'test_plugin'
flask_blueprints = [test_bp]
```
### What you think should happen instead
The correct HTTP error response code (405 Method Not Allowed) should be returned.
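For comparison, a plain Flask app outside Airflow returns the expected status for this exact route shape; this is only a baseline sketch, not the Airflow webserver's routing:
```python
import flask

app = flask.Flask(__name__)


@app.route("/sample-app/v1/tags/<tag>", methods=["PUT"])
def initialize_deployment(tag):
    return "Hello, World"


with app.test_client() as client:
    print(client.post("/sample-app/v1/tags/abcd").status_code)  # 405, not 404
```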
### How to reproduce
Issue the following curl request after loading the plugin -
`curl -X POST "http://localhost:8080/sample-app/v1/tags/abcd" -d ''`
The response will be 404 instead of 405.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26375 | https://github.com/apache/airflow/pull/26880 | ea55626d79fdbd96b6d5f371883ac1df2a6313ec | 8efb678e771c8b7e351220a1eb7eb246ae8ed97f | "2022-09-13T21:56:54Z" | python | "2022-10-18T12:50:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,367 | ["airflow/providers/google/cloud/operators/bigquery.py", "docs/apache-airflow-providers-google/operators/cloud/bigquery.rst", "tests/system/providers/google/cloud/bigquery/example_bigquery_queries.py"] | Add SQLColumnCheck and SQLTableCheck Operators for BigQuery | ### Description
New operators under the Google provider for table and column data quality checking that is integrated with OpenLineage.
### Use case/motivation
Allow OpenLineage support for BigQuery when using column and table check operators.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26367 | https://github.com/apache/airflow/pull/26368 | 3cd4df16d4f383c27f7fc6bd932bca1f83ab9977 | c4256ca1a029240299b83841bdd034385665cdda | "2022-09-13T15:21:52Z" | python | "2022-09-21T08:49:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,360 | ["airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | dynamic dataset ref breaks when viewed in UI or when triggered (dagbag.py:_add_dag_from_db) | ### Apache Airflow version
2.4.0b1
### What happened
Here's a file which defines three dags. "source" uses `Operator.partial` to reference either "sink". I'm not sure if it's supported to do so, but airlflow should at least fail more gracefully than it does.
```python3
from datetime import datetime, timedelta
from time import sleep
from airflow import Dataset
from airflow.decorators import dag
from airflow.operators.dummy import DummyOperator
from airflow.operators.python import PythonOperator
ps = Dataset("partial_static")
p1 = Dataset("partial_dynamic_1")
p2 = Dataset("partial_dynamic_2")
p3 = Dataset("partial_dynamic_3")
def sleep_n(n):
sleep(n)
@dag(start_date=datetime(1970, 1, 1), schedule=timedelta(days=365 * 30))
def two_kinds_dynamic_source():
# dataset ref is not dynamic
PythonOperator.partial(
task_id="partial_static", python_callable=sleep_n, outlets=[ps]
).expand(op_args=[[1], [20], [40]])
# dataset ref is dynamic
PythonOperator.partial(
task_id="partial_dynamic", python_callable=sleep_n
).expand_kwargs(
[
{"op_args": [1], "outlets": [p1]},
{"op_args": [20], "outlets": [p2]},
{"op_args": [40], "outlets": [p3]},
]
)
two_kinds_dynamic_source()
@dag(schedule=[ps], start_date=datetime(1970, 1, 1))
def two_kinds_static_sink():
DummyOperator(task_id="dummy")
two_kinds_static_sink()
@dag(schedule=[p1, p2, p3], start_date=datetime(1970, 1, 1))
def two_kinds_dynamic_sink():
DummyOperator(task_id="dummy")
two_kinds_dynamic_sink()
```
Tried to trigger the dag in the browser, saw this traceback instead:
```
Python version: 3.9.13
Airflow version: 2.4.0.dev1640+astro.1
Node: airflow-webserver-6b969cbd87-4q5kh
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/usr/local/lib/python3.9/site-packages/airflow/www/auth.py", line 46, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/www/decorators.py", line 117, in view_func
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/www/decorators.py", line 80, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/www/views.py", line 2532, in grid
dag = get_airflow_app().dag_bag.get_dag(dag_id, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 176, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 251, in _add_dag_from_db
dag = row.dag
File "/usr/local/lib/python3.9/site-packages/airflow/models/serialized_dag.py", line 223, in dag
dag = SerializedDAG.from_dict(self.data) # type: Any
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1220, in from_dict
return cls.deserialize_dag(serialized_obj['dag'])
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1197, in deserialize_dag
setattr(task, k, kwargs_ref.deref(dag))
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 224, in deref
value = {k: v.deref(dag) if isinstance(v, _XComRef) else v for k, v in self.value.items()}
AttributeError: 'list' object has no attribute 'items'
```
I can also summon a similar traceback by just trying to view the dag in the grid view, or when running `airflow dags trigger`
### What you think should happen instead
If there's something invalid about this dag, it should fail to parse--rather than successfully parsing and then breaking the UI.
I'm a bit uncertain about what should happen in the dag dependency graph when the source dag runs. The dynamic outlets are not known until runtime, so it's reasonable that they don't show up in the graph. But what about after the dag runs?
- do they still trigger the "sink" dag even though we didn't know about the dependency up front?
- do we update the dependency graph now that we know about the dynamic dependency?
Because of this error, we don't get far enough to find out.
### How to reproduce
Include the dag above, try to display it in the grid view.
### Operating System
kubernetes-in-docker on MacOS via helm
### Versions of Apache Airflow Providers
n/a
### Deployment
Other 3rd-party Helm chart
### Deployment details
Deployed using [the astronomer helm chart ](https://github.com/astronomer/airflow-chart)and these values:
```yaml
airflow:
airflowHome: /usr/local/airflow
airflowVersion: $VERSION
defaultAirflowRepository: img
defaultAirflowTag: $TAG
executor: KubernetesExecutor
gid: 50000
images:
airflow:
repository: img
logs:
persistence:
enabled: true
size: 2Gi
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26360 | https://github.com/apache/airflow/pull/26369 | 5e9589c685bcec769041e0a1692035778869f718 | b816a6b243d16da87ca00e443619c75e9f6f5816 | "2022-09-13T06:54:16Z" | python | "2022-09-14T10:01:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,283 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator max_id_key Not Written to XCOM | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.3.0
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
`max_id` is not returned through XCOM when `max_id_key` is set.
### What you think should happen instead
When `max_id_key` is set, the `max_id` value should be returned as the default XCOM value.
This is based off of the parameter description:
```
The results will be returned by the execute() command, which in turn gets stored in XCom for future operators to use.
```
### How to reproduce
Execute the `GCSToBigQueryOperator` operator with a `max_id_key` parameter set. No XCOM value is set.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26283 | https://github.com/apache/airflow/pull/26285 | b4f8a069f07b18ce98c9b1286da5a5fcde2bff9f | 07fe356de0743ca64d936738b78704f7c05774d1 | "2022-09-09T20:01:59Z" | python | "2022-09-18T20:12:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,279 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator `max_id_key` Feature Throws Error | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
8.3.0
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using the `max_id_key` feature of `GCSToBigQueryOperator` it fails with the error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 312, in execute
row = list(bq_hook.get_job(job_id).result())
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/common/hooks/base_google.py", line 444, in inner_wrapper
raise AirflowException(
airflow.exceptions.AirflowException: You must use keyword arguments in this methods rather than positional
```
### What you think should happen instead
The max id value for the key should be returned.
### How to reproduce
Any use of `max_id_key` fails, since the error is raised while retrieving the job result: the `bq_hook.get_job(job_id)` call shown in the traceback passes `job_id` positionally, while the hook method requires keyword arguments (per the exception above).
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26279 | https://github.com/apache/airflow/pull/26285 | b4f8a069f07b18ce98c9b1286da5a5fcde2bff9f | 07fe356de0743ca64d936738b78704f7c05774d1 | "2022-09-09T17:47:29Z" | python | "2022-09-18T20:12:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,273 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py"] | SQLToGCSOperators Add Support for Dumping JSON | ### Description
If your output format for a SQLToGCSOperator is `json`, then any "dict" type object returned from a database, for example a Postgres JSON column, is not dumped to a string and is kept as a nested JSON object.
Add an option to dump `dict` objects to a string in the JSON exporter.
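A minimal sketch of the requested behaviour (a hypothetical helper, not the provider's actual code):
```python
import json


def _maybe_stringify(value):
    # Hypothetical option: serialize dict values (e.g. Postgres JSON columns)
    # to a JSON string instead of leaving them as nested objects in the export.
    if isinstance(value, dict):
        return json.dumps(value)
    return value
```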
### Use case/motivation
Currently JSON type columns are hard to ingest into BQ since a JSON field in a source database does not enforce a schema, and we can't reliably generate a `RECORD` schema for the column.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26273 | https://github.com/apache/airflow/pull/26277 | 706a618014a6f94d5ead0476f26f79d9714bf93d | b4f8a069f07b18ce98c9b1286da5a5fcde2bff9f | "2022-09-09T15:25:54Z" | python | "2022-09-18T20:11:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,262 | ["docs/helm-chart/manage-dags-files.rst"] | helm chart doc Manage DAGs files recommended Bake DAGs in Docker image need improvement. | ### What do you see as an issue?
https://airflow.apache.org/docs/helm-chart/1.6.0/manage-dags-files.html#bake-dags-in-docker-image
> The recommended way to update your DAGs with this chart is to build a new Docker image with the latest DAG code:
In this doc, the recommended way for users to manage DAGs is to bake them into the image.
But see this issue:
https://github.com/airflow-helm/charts/issues/211#issuecomment-859678503
> but having the scheduler being restarted and not scheduling any task each time you do a change that is not even scheduler related (just to deploy a new DAG!!)
> Helm Chart should be used to deploy "application" not to deploy another version of DAGs.
So I think baking DAGs into the Docker image should not be the most recommended way.
At the very least, the docs should mention this approach's weakness (all components are restarted just to deploy a new DAG!).
### Solving the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26262 | https://github.com/apache/airflow/pull/26401 | 2382c12cc3aa5d819fd089c73e62f8849a567a0a | 11f8be879ba2dd091adc46867814bcabe5451540 | "2022-09-09T08:11:29Z" | python | "2022-09-15T21:09:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,259 | ["airflow/models/dag.py", "airflow/models/dagrun.py", "airflow/www/views.py", "tests/models/test_dag.py"] | should we limit max queued dag runs for dataset-triggered dags | if a dataset-triggered dag is running, and upstreams are updated multiple times, many dag runs will be queued up because the scheduler checks frequently for new dag runs needed.
you can easily limit max active dag runs but cannot easily limit max queued dag runs. in the dataset case this represents a meaningful difference in behavior and seems undesirable.
i think it may make sense to limit max queued dag runs (for datasets) to 1. cc @ash @jedcunningham @uranusjr @blag @norm
the graph below illustrates what happens in this scenario. you can reproduce with the example datasets dag file. change consumes 1 to be `sleep 60` , produces 1 to be `sleep 1`, then trigger producer repeatedly.
![image](https://user-images.githubusercontent.com/15932138/189264897-bbb6abba-9cea-4307-b17b-554599a03821.png)
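for reference, a stripped-down producer/consumer pair of the kind described above (an assumed shape, not the exact example_datasets file):
```python
import pendulum
from airflow import DAG
from airflow.datasets import Dataset
from airflow.operators.bash import BashOperator

dataset1 = Dataset("s3://example/dataset1")

with DAG(
    dag_id="dataset_produces_1",
    start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
    schedule=None,  # triggered manually, repeatedly
) as producer:
    BashOperator(task_id="produce", bash_command="sleep 1", outlets=[dataset1])

with DAG(
    dag_id="dataset_consumes_1",
    start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
    schedule=[dataset1],  # dataset-triggered
) as consumer:
    BashOperator(task_id="consume", bash_command="sleep 60")
```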
| https://github.com/apache/airflow/issues/26259 | https://github.com/apache/airflow/pull/26348 | 9444d9789bc88e1063d81d28e219446b2251c0e1 | b99d1cd5d32aea5721c512d6052b6b7b3e0dfefb | "2022-09-09T03:15:54Z" | python | "2022-09-14T12:28:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,256 | ["airflow/datasets/manager.py", "airflow/jobs/scheduler_job.py", "tests/models/test_taskinstance.py"] | "triggered runs" dataset counter doesn't update until *next* run and never goes above 1 | ### Apache Airflow version
2.4.0b1
### What happened
I have [this test dag](https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58) which I created to report [this issue](https://github.com/apache/airflow/issues/25210). The idea is that if you unpause "sink" and all of the "sources" then the sources will wait until the clock is like \*:\*:00 and they'll terminate at the same time.
Since each source triggers the sink with a dataset called "counter", the "sink" dag will run just once, and it will have output like: `INFO - [(16, 1)]`, that's 16 sources and 1 sink that ran.
At this point, you can look at the dataset history for "counter" and you'll see this:
<img width="524" alt="Screen Shot 2022-09-08 at 6 07 44 PM" src="https://user-images.githubusercontent.com/5834582/189248999-d31141a4-2d0b-4ec2-9ea5-c4c3536b3a28.png">
So we've got a timestamp, but the "triggered runs" count is empty. That's weird. One run was triggered (and it finished by the time the screenshot was taken), so why doesn't it say `1`?
So I redeploy and try it again, except this time I wait several seconds between each "unpause" click, the idea being that maybe some of them fire at 07:16:00 and the others fire at 07:17:00. I end up with this:
<img width="699" alt="Screen Shot 2022-09-08 at 6 19 12 PM" src="https://user-images.githubusercontent.com/5834582/189252116-69067189-751d-40e7-89c5-8d1da1720237.png">
So fifteen of them finished at once and caused the dataset to update, and then just one straggler (number 9) is waiting for an additional minute. I wait for the straggler to complete and go back to the dataset view:
<img width="496" alt="Screen Shot 2022-09-08 at 6 20 41 PM" src="https://user-images.githubusercontent.com/5834582/189253874-87bb3eb3-2237-42a1-bc3f-9fc210419f1a.png">
Now it's the straggler that is blank, but the rest of them are populated. Continuing to manually run these, I find that whichever one I have run most recently is blank, and all of the others are 1, even if this is the second or third time I've run them
### What you think should happen instead
- The triggered runs counter should increment beyond 1
- It should increment immediately after the dag was triggered, not wait until after the *next* dag gets triggered.
### How to reproduce
See dags in this gist: https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58
1. unpause "sink"
2. unpause half of sources
3. wait one minute
4. unpause the other half of the sources
5. wait for "sink" to run a second time
6. view the dataset history for "counter"
7. ask why only half of the counts are populated
8. manually trigger some sources, wait for them to trigger sink
9. view the dataset history again
10. ask why none of them show more than 1 dagrun triggered
### Operating System
Kubernetes in Docker, deployed via helm
### Versions of Apache Airflow Providers
n/a
### Deployment
Other 3rd-party Helm chart
### Deployment details
see "deploy.sh" in the gist:
https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58
It's just a fresh install into a k8s cluster
### Anything else
n/a
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26256 | https://github.com/apache/airflow/pull/26276 | eb03959e437e11891b8c3696b76f664a991a37a4 | 954349a952d929dc82087e4bb20d19736f84d381 | "2022-09-09T01:45:19Z" | python | "2022-09-09T20:15:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,238 | ["airflow/__init__.py"] | [BUG] 2.4.0b1 - google-provider - AttributeError: 'str' object has no attribute 'version' | ### Apache Airflow version
main (development)
### What happened
When I start Airflow 2.4.0b1 with `apache-airflow-providers-google==8.3.0`,
the webserver log gives:
```log
[2022-09-08 14:39:53,158] {webserver_command.py:251} ERROR - [0 / 0] Some workers seem to have died and gunicorn did not restart them as expected
[2022-09-08 14:39:53,275] {providers_manager.py:228} WARNING - Exception when importing 'airflow.providers.google.common.hooks.base_google.GoogleBaseHook' from 'apache-airflow-providers-google' package
2022-09-08T14:39:53.276959961Z Traceback (most recent call last):
2022-09-08T14:39:53.276965441Z File "/usr/local/lib/python3.8/site-packages/airflow/providers_manager.py", line 260, in _sanity_check
2022-09-08T14:39:53.276969533Z imported_class = import_string(class_name)
2022-09-08T14:39:53.276973476Z File "/usr/local/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
2022-09-08T14:39:53.276977496Z module = import_module(module_path)
2022-09-08T14:39:53.276981203Z File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
2022-09-08T14:39:53.276985012Z return _bootstrap._gcd_import(name[level:], package, level)
2022-09-08T14:39:53.277005418Z File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-09-08T14:39:53.277011581Z File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-09-08T14:39:53.277016414Z File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
2022-09-08T14:39:53.277020883Z File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
2022-09-08T14:39:53.277025840Z File "<frozen importlib._bootstrap_external>", line 843, in exec_module
2022-09-08T14:39:53.277029603Z File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2022-09-08T14:39:53.277032868Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 49, in <module>
2022-09-08T14:39:53.277036076Z from airflow.providers.google.cloud.utils.credentials_provider import (
2022-09-08T14:39:53.277038762Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/cloud/utils/credentials_provider.py", line 36, in <module>
2022-09-08T14:39:53.277041651Z from airflow.providers.google.cloud._internal_client.secret_manager_client import _SecretManagerClient
2022-09-08T14:39:53.277044383Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/cloud/_internal_client/secret_manager_client.py", line 26, in <module>
2022-09-08T14:39:53.277047248Z from airflow.providers.google.common.consts import CLIENT_INFO
2022-09-08T14:39:53.277050101Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/common/consts.py", line 23, in <module>
2022-09-08T14:39:53.277052974Z CLIENT_INFO = ClientInfo(client_library_version='airflow_v' + version.version)
2022-09-08T14:39:53.277055720Z AttributeError: 'str' object has no attribute 'version'
[2022-09-08 14:39:53,299] {providers_manager.py:228} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.cloud_sql.CloudSQLHook' from 'apache-airflow-providers-google' package
2022-09-08T14:39:53.300816697Z Traceback (most recent call last):
2022-09-08T14:39:53.300822358Z File "/usr/local/lib/python3.8/site-packages/airflow/providers_manager.py", line 260, in _sanity_check
2022-09-08T14:39:53.300827098Z imported_class = import_string(class_name)
2022-09-08T14:39:53.300831757Z File "/usr/local/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
2022-09-08T14:39:53.300836033Z module = import_module(module_path)
2022-09-08T14:39:53.300840058Z File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
2022-09-08T14:39:53.300844580Z return _bootstrap._gcd_import(name[level:], package, level)
2022-09-08T14:39:53.300862499Z File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-09-08T14:39:53.300867522Z File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-09-08T14:39:53.300871975Z File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
2022-09-08T14:39:53.300876819Z File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
2022-09-08T14:39:53.300880682Z File "<frozen importlib._bootstrap_external>", line 843, in exec_module
2022-09-08T14:39:53.300885112Z File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2022-09-08T14:39:53.300889697Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 51, in <module>
2022-09-08T14:39:53.300893842Z from airflow.providers.google.common.hooks.base_google import GoogleBaseHook
2022-09-08T14:39:53.300898141Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 49, in <module>
2022-09-08T14:39:53.300903254Z from airflow.providers.google.cloud.utils.credentials_provider import (
2022-09-08T14:39:53.300906904Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/cloud/utils/credentials_provider.py", line 36, in <module>
2022-09-08T14:39:53.300911707Z from airflow.providers.google.cloud._internal_client.secret_manager_client import _SecretManagerClient
2022-09-08T14:39:53.300916818Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/cloud/_internal_client/secret_manager_client.py", line 26, in <module>
2022-09-08T14:39:53.300920595Z from airflow.providers.google.common.consts import CLIENT_INFO
2022-09-08T14:39:53.300926003Z File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/common/consts.py", line 23, in <module>
2022-09-08T14:39:53.300931078Z CLIENT_INFO = ClientInfo(client_library_version='airflow_v' + version.version)
2022-09-08T14:39:53.300934596Z AttributeError: 'str' object has no attribute 'version'
....
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
ubuntu 22.04.1
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26238 | https://github.com/apache/airflow/pull/26239 | a45ab47d7afa97ba6b03471b1dd8816a48cb9689 | b7a603cf89728e02a187409c83983d58cc554457 | "2022-09-08T14:46:59Z" | python | "2022-09-09T08:41:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,215 | ["airflow/example_dags/example_params_ui_tutorial.py", "airflow/www/static/js/trigger.js"] | Trigger DAG UI Extension w/ Flexible User Form Concept | ### Description
Proposal for Contribution for an extensible Trigger UI feature in Airflow.
## Design proposal (Feedback welcome)
### Part 1) Specifying Trigger UI on DAG Level
We propose to extend the DAG class with an additional attribute so that UI(s) (one or multiple per DAG) can be specified in the DAG.
* Attribute name proposal: `trigger_ui`
* Type proposal: `Union[TriggerUIBase, List[TriggerUIBase]]` (one UI definition, or a list of them, inherited from an abstract UI class which implements the trigger UI)
* Default value proposal: `[TriggerNoUI(), TriggerJsonUI()]` (Means the current/today's state, user can pick to trigger with or without parameters)
With this extension the current behavior is continued and users can specify if a specific or multiple UIs are offered for the Trigger DAG option.
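For illustration, DAG authors would then write something like this (a sketch of the proposal; neither the `trigger_ui` argument nor the classes exist in Airflow yet):
```python
with DAG(
    dag_id="my_dag",
    trigger_ui=[TriggerNoUI(), TriggerJsonUI()],  # proposed keyword argument
) as dag:
    ...
```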
### Part 2) UI Changes for Trigger Button
The function of the trigger DAG button in DAG overview landing ("Home" / `templates/airflow/dags.html`) as well as DAG detail pages (grid, graph, ... view / `templates/airflow/dag.html`) is adjusted so that:
1) If there is a single Trigger UI specified for the DAG, the button directly opens the form on click
2) If a list of Trigger UIs is defined for the DAG, then a list of UI's is presented, similar like today's drop-down with the today's two options (with and without parameters).
Menu names for (2) and URLs are determined by the UI class members linked to the DAG.
### Part 3) Standard implementations for TriggerNoUI, TriggerJsonUI
Two implementations for triggering w/o UI and parameters and the current JSON entry form will be migrated to the new UI structure, so that users can define that one, the other or both can be used for DAGs.
Name proposals:
0) TriggerUIBase: Base class for any Trigger UI, defines the base parameters and defaults which every Trigger UI is expected to provide:
* `url_template`: URL template (into which the DAG name is inserted to direct the user to)
* `name`: Name of the trigger UI to display in the drop-down
* `description`: Optional descriptive text to supply as a hover-over/tool-tip
1) TriggerNoUI (inherits TriggerUIBase): Skips a user confirmation and entry form and upon call runs the DAG w/o parameters (`DagRun.conf = {}`)
2) TriggerJsonUI (inherits TriggerUIBase): Same like the current UI to enter a JSON into a text box and trigger the DAG. Any valid JSON accepted.
### Part 4) Standard Implementation for Simple Forms (Actually core new feature)
Implement/Contribute a user-definable key/value entry form named `TriggerFormUI` (inherits TriggerUIBase) which allows the user to easily enter parameters for triggering a DAG. Form could look like:
```
Parameter 1: <HTML input box for entering a value>
(Optional Description and hints)
Parameter 2: <HTML Select box of options>
(Optional Description and hints)
Parameter 3: <HTML Checkbox on/off>
(Optional Description and hints)
<Trigger DAG Button>
```
The resulting JSON would use the parameter keys and values and render the following `DagRun.conf` and trigger the DAG:
```
{
"parameter_1": "user input",
"parameter_2": "user selection",
"parameter_3": true/false value
}
```
The number of form values, parameter names, parameter types, options, order and descriptions should be freely configurable in the DAG definition.
The trigger form should provide the following general parameters (at least):
* `name`: The name of the form to be used in pick lists and in the headline
* `description`: Descriptive text which is printed in the hover-over of menus and which will be rendered as a description between the headline and the form start
* (Implicitly the DAG to which the form is linked to which will be triggered)
The trigger form elements (list of elements can be picked freely):
* General options of each form element (Base class `TriggerFormUIElement`):
* `name` (str): Name of the parameter, used as technical key in the JSON, must be unique per form (e.g. "param1")
* `display` (str): Label which is displayed on left side of entry field (e.g. "Parameter 1")
* `help` (Optional[str]=Null): Descriptive help text which is optionally rendered below the form element, might contain HTML formatting code
* `required` (Optional[bool]=False): Flag if the user is required to enter/pick a value before submission is possible
* `default` (Optional[str]=Null): Default value to present when the user opens the form
* Element types provided in the base implementation
* `TriggerFormUIString` (inherits `TriggerFormUIElement`): Provides a simple HTML string input box.
* `TriggerFormUISelect` (inherits `TriggerFormUIElement`): Provides a HTML select box with a list of pre-defined string options. Options are provided static as array of strings.
* `TriggerFormUIArray` (inherits `TriggerFormUIElement`): Provides a simple HTML text area allowing to enter multiple lines of text. Each line entered will be converted to a string and the strings will be used as value array.
* `TriggerFormUICheckbox` (inherits `TriggerFormUIElement`): Provides a HTML Checkbox to select on/off, will be converted to true/false as value
* Other element types (optional, might be added later?) for further cool features - depending on how much energy is left
* `TriggerFormUIHelp` (inherits `TriggerFormUIElement`): Provides no actual parameter value but allows to add a HTML block of help
* `TriggerFormUIBreak` (inherits `TriggerFormUIElement`): Provides no actual parameter value but adds a horizontal splitter
* Adding the options to validate string values e.g. with a RegEx
* Allowing to provide int values (besides just strings)
* Allowing to have an "advanced" section for more options which the user might not need in all cases
* Allowing to view the generated `DagRun.conf` so that a user can copy/paste as well
* Allowing the user to extend the form elements...
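Putting the element types above together, a DAG author might declare a form roughly like this (all class names are the ones proposed here; the `elements` and `options` argument names are only illustrative):
```python
standard_run_form = TriggerFormUI(
    name="Standard run",
    description="Trigger the DAG with a small parameter form",
    elements=[
        TriggerFormUIString(name="parameter_1", display="Parameter 1", required=True),
        TriggerFormUISelect(name="parameter_2", display="Parameter 2", options=["a", "b"]),
        TriggerFormUICheckbox(name="parameter_3", display="Parameter 3", default="false"),
    ],
)
```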
### Part 5) (Optional) Extended for Templated Form based on the Simple form but uses fields to run a template through Jinja
Implement (optionally, might be future extension as well?) a `TriggerTemplateFormUI` (inherits TriggerFormUI) which adds a Jinja2 JSON template which will be templated with the collected form fields so that more complex `DagRun.conf` parameter structures can be created on top of just key/value
### Part 6) Examples
Provide 1-2 example DAGs which show how the trigger forms can be used. Adjust existing examples as needed.
### Part 7) Documentation
Provide the needed documentation to describe the feature and options. This would include a description of how to add custom forms beyond the standard ones via Airflow Plugins and custom Python code.
### Use case/motivation
As user of Airflow for our custom workflows we often use `DagRun.conf` attributes to control content and flow. Current UI allows (only) to launch via REST API with given parameters or using a JSON structure in the UI to trigger with parameters. This is technically feasible but not user friendly. A user needs to model, check and understand the JSON and enter parameters manually without the option to validate before trigger.
Similar to Jenkins or GitHub/Azure pipelines, we desire a UI option to trigger a DAG and specify parameters. We'd like to have a similar capability in Airflow.
Current workarounds used in multiple places are:
1) Implementing a custom (additional) web UI which implements the required forms outside/on top of Airflow. This UI accepts user input and in the back-end triggers Airflow via the REST API. This is flexible but duplicates the effort for operation, deployment and releases, and redundantly needs to implement access control, logging etc.
2) Implementing a custom Airflow Plugin which hosts additional launch/trigger UIs inside Airflow. We are using this, but it is somewhat redundant with the other trigger options and is only 50% user friendly.
I/we propose this as a feature and would like to contribute this with a following PR - would this be supported if we contribute this feature to be merged?
### Related issues
Note: This proposal is similar and/or related to #11054 but a bit more detailed and concrete. Might be also related to #22408 and contribute to AIP-38 (https://github.com/apache/airflow/projects/9)?
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26215 | https://github.com/apache/airflow/pull/29376 | 7ee1a5624497fc457af239e93e4c1af94972bbe6 | 9c6f83bb6f3e3b57ae0abbe9eb0582fcde265702 | "2022-09-07T14:36:30Z" | python | "2023-02-11T14:38:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,194 | ["airflow/www/static/js/dag/details/taskInstance/Logs/index.test.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx"] | Extra entry for logs generated with 0 try number when clearing any task instances | ### Apache Airflow version
main (development)
### What happened
When clearing any task instances an extra logs entry generated with Zero try number.
<img width="1344" alt="Screenshot 2022-09-07 at 1 06 54 PM" src="https://user-images.githubusercontent.com/88504849/188819289-13dd4936-cd03-48b6-8406-02ee5fbf293f.png">
### What you think should happen instead
It should not create a entry with zero try number
### How to reproduce
Clear a task instance by hitting clear button on UI and then observe the entry for logs in logs tab
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26194 | https://github.com/apache/airflow/pull/26556 | 6f1ab37d2091e26e67717d4921044029a01d6a22 | 6a69ad033fdc224aee14b8c83fdc1b672d17ac20 | "2022-09-07T07:43:59Z" | python | "2022-09-22T19:39:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,189 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py"] | GCSToBigQueryOperator Schema in Alternate GCS Bucket | ### Description
Currently the `GCSToBigQueryOperator` requires that a Schema object located in GCS be located in the same bucket as the Source Object(s). I'd like an option to have it located in a different bucket.
### Use case/motivation
I have a GCS bucket where I store files with a 90 day auto-expiration on the whole bucket. I want to be able to store a fixed schema in GCS, but since this bucket has an auto-expiration of 90 days the schema is auto deleted at that time.
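The desired usage would look roughly like this (`schema_object_bucket` is only a suggested name for the new option; bucket, object and table names are placeholders):
```python
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load_events = GCSToBigQueryOperator(
    task_id="load_events",
    bucket="expiring-data-bucket",                    # source files, 90-day expiry
    source_objects=["events/*.csv"],
    schema_object="schemas/events.json",
    schema_object_bucket="permanent-schema-bucket",   # proposed: schema kept elsewhere
    destination_project_dataset_table="my_project.my_dataset.events",
)
```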
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26189 | https://github.com/apache/airflow/pull/26190 | 63562d7023a9d56783f493b7ea13accb2081121a | 8cac96918becf19a4a04eef1e5bcf175f815f204 | "2022-09-07T01:50:01Z" | python | "2022-09-07T20:26:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,185 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Webserver fails to pull secrets from Hashicorp Vault on start up | ### Apache Airflow version
2.3.4
### What happened
Since upgrading to Airflow 2.3.4 our webserver fails on start-up to pull secrets from our Vault instance. Setting AIRFLOW__WEBSERVER__WORKERS = 1 allowed the webserver to start up successfully, but reverting the change added in https://github.com/apache/airflow/pull/25556 was the only way we found to fix the issue without adjusting the webserver's worker count.
### What you think should happen instead
The Airflow webserver should be able to successfully read from Vault with AIRFLOW__WEBSERVER__WORKERS > 1.
### How to reproduce
Start a webserver instance set to authenticate with Vault using the approle method and AIRFLOW__DATABASE__SQL_ALCHEMY_CONN_SECRET and AIRFLOW__WEBSERVER__SECRET_KEY_SECRET set. The webserver should fail to initialize all of the gunicorn workers and exit.
### Operating System
Fedora 29
### Versions of Apache Airflow Providers
apache-airflow-providers-hashicorp==3.1.0
### Deployment
Docker-Compose
### Deployment details
Python 3.9.13
Vault 1.9.4
### Anything else
None
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26185 | https://github.com/apache/airflow/pull/26223 | ebef9ed3fa4a9a1e69b4405945e7cd939f499ee5 | c63834cb24c6179c031ce0d95385f3fa150f442e | "2022-09-06T21:36:02Z" | python | "2022-09-08T00:35:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,174 | ["airflow/api_connexion/endpoints/xcom_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_xcom_endpoint.py"] | API Endpoints - /xcomEntries/{xcom_key} cannot deserialize customized xcom backend | ### Description
We use S3 as our xcom backend database and write serialize/deserialize method for xcoms.
However, when we want to access xcom through REST API, it returns the s3 file url instead of the deserialized value. Could you please add the feature to support customized xcom backend for REST API access?
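For context, a backend of this general shape reproduces the situation (a sketch, not our exact code; bucket name and key layout are placeholders and error handling is omitted):
```python
import json
import uuid

from airflow.models.xcom import BaseXCom
from airflow.providers.amazon.aws.hooks.s3 import S3Hook


class S3XComBackend(BaseXCom):
    PREFIX = "s3-xcom://"
    BUCKET = "my-xcom-bucket"  # placeholder

    @staticmethod
    def serialize_value(value, **kwargs):
        key = f"xcom/{uuid.uuid4()}.json"
        S3Hook().load_string(json.dumps(value), key=key, bucket_name=S3XComBackend.BUCKET)
        # Only this reference string is stored in the metadata database, which is
        # why the REST API currently returns the reference instead of the value.
        return BaseXCom.serialize_value(f"{S3XComBackend.PREFIX}{key}")

    @staticmethod
    def deserialize_value(result):
        ref = BaseXCom.deserialize_value(result)
        if isinstance(ref, str) and ref.startswith(S3XComBackend.PREFIX):
            key = ref[len(S3XComBackend.PREFIX):]
            return json.loads(S3Hook().read_key(key=key, bucket_name=S3XComBackend.BUCKET))
        return ref
```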
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26174 | https://github.com/apache/airflow/pull/26343 | 3c9c0f940b67c25285259541478ebb413b94a73a | ffee6bceb32eba159a7a25a4613d573884a6a58d | "2022-09-06T09:35:30Z" | python | "2022-09-12T21:05:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,155 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/role_command.py", "tests/cli/commands/test_role_command.py"] | Add CLI to add/remove permissions from existed role | ### Body
Followup on https://github.com/apache/airflow/pull/25854
The [Roles CLI](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#roles) currently supports create, delete, export, import and list.
It can be useful to have the ability to add/remove permissions from an existing role.
This has also been asked in https://github.com/apache/airflow/issues/15318#issuecomment-872496184
cc @chenglongyan
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26155 | https://github.com/apache/airflow/pull/26338 | e31590039634ff722ad005fe9f1fc02e5a669699 | 94691659bd73381540508c3c7c8489d60efb2367 | "2022-09-05T08:01:19Z" | python | "2022-09-20T08:18:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,130 | ["Dockerfile.ci", "airflow/serialization/serialized_objects.py", "setup.cfg"] | Remove `cattrs` from project | Cattrs is currently only used in two places: Serialization for operator extra links, and for Lineage.
However cattrs is not a well maintained project and doesn't support many features that attrs itself does; in short, it's not worth the brain cycles to keep cattrs. | https://github.com/apache/airflow/issues/26130 | https://github.com/apache/airflow/pull/34672 | 0c8e30e43b70e9d033e1686b327eb00aab82479c | e5238c23b30dfe3556fb458fa66f28e621e160ae | "2022-09-02T12:15:18Z" | python | "2023-10-05T07:34:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,101 | ["airflow/utils/sqlalchemy.py", "tests/utils/test_sqlalchemy.py"] | Kubernetes Invalid executor_config, pod_override filled with Encoding.VAR | ### Apache Airflow version
2.3.4
### What happened
Trying to start Kubernetes tasks using a `pod_override` results in pods not starting after upgrading from 2.3.2 to 2.3.4.
The pod_override looks very odd, filled with many Encoding.VAR objects, see the following scheduler log:
```
{kubernetes_executor.py:550} INFO - Add task TaskInstanceKey(dag_id='commit_check', task_id='sync_and_build', run_id='5776-2-1662037155', try_number=1, map_index=-1) with command ['airflow', 'tasks', 'run', 'commit_check', 'sync_and_build', '5776-2-1662037155', '--local', '--subdir', 'DAGS_FOLDER/dag_on_commit.py'] with executor_config {'pod_override': {'Encoding.VAR': {'Encoding.VAR': {'Encoding.VAR': {'metadata': {'Encoding.VAR': {'annotations': {'Encoding.VAR': {}, 'Encoding.TYPE': 'dict'}}, 'Encoding.TYPE': 'dict'}, 'spec': {'Encoding.VAR': {'containers': REDACTED 'Encoding.TYPE': 'k8s.V1Pod'}, 'Encoding.TYPE': 'dict'}}
{kubernetes_executor.py:554} ERROR - Invalid executor_config for TaskInstanceKey(dag_id='commit_check', task_id='sync_and_build', run_id='5776-2-1662037155', try_number=1, map_index=-1)
```
Looking in the UI, the task gets stuck in the scheduled state forever. Clicking instance details shows a similar state of the pod_override with many Encoding.VAR entries.
This appears like a recent addition, in 2.3.4 via https://github.com/apache/airflow/pull/24356.
@dstandish do you understand if this is connected?
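A typical task with a `pod_override` of the kind involved looks like this (a sketch; the command and resource values are placeholders):
```python
from kubernetes.client import models as k8s
from airflow.operators.bash import BashOperator

sync_and_build = BashOperator(
    task_id="sync_and_build",
    bash_command="make build",
    executor_config={
        "pod_override": k8s.V1Pod(
            spec=k8s.V1PodSpec(
                containers=[
                    k8s.V1Container(
                        name="base",
                        resources=k8s.V1ResourceRequirements(
                            requests={"cpu": "2", "memory": "4Gi"},
                        ),
                    )
                ]
            )
        )
    },
)
```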
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.3.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-docker==3.1.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.2.0
apache-airflow-providers-sqlite==3.2.0
kubernetes==23.6.0
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26101 | https://github.com/apache/airflow/pull/26191 | af3a07427023d7089f3bc74a708723d13ce3cf73 | 87108d7b62a5c79ab184a50d733420c0930fdd93 | "2022-09-01T13:26:56Z" | python | "2022-09-07T22:44:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,099 | ["airflow/models/baseoperator.py", "airflow/ti_deps/deps/trigger_rule_dep.py", "airflow/utils/trigger_rule.py", "docs/apache-airflow/concepts/dags.rst", "tests/ti_deps/deps/test_trigger_rule_dep.py", "tests/utils/test_trigger_rule.py"] | Add one_done trigger rule | ### Body
Action: trigger as soon as 1 upstream task is in success or failure.
This has been requested in https://stackoverflow.com/questions/73501232/how-to-implement-the-one-done-trigger-rule-for-airflow
I think this can be useful for the community.
**The Task:**
Add support for new trigger rule `one_done`
You can use as reference previous PRs that added other trigger rules for example: https://github.com/apache/airflow/pull/21662
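A sketch of how the new rule would be used once it exists (`one_done` is the rule requested here and is not in Airflow yet; the dag is illustrative):
```python
import pendulum
from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(dag_id="one_done_example", start_date=pendulum.datetime(2022, 1, 1, tz="UTC")) as dag:
    a = EmptyOperator(task_id="a")
    b = EmptyOperator(task_id="b")
    join = EmptyOperator(task_id="join", trigger_rule="one_done")  # proposed value
    [a, b] >> join
```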
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26099 | https://github.com/apache/airflow/pull/26146 | 55d11464c047d2e74f34cdde75d90b633a231df2 | baaea097123ed22f62c781c261a1d9c416570565 | "2022-09-01T07:27:12Z" | python | "2022-09-23T17:05:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,097 | ["airflow/providers/microsoft/azure/operators/container_instances.py"] | Add the parameter `network_profile` in `AzureContainerInstancesOperator` | ### Description
[apache-airflow-providers-microsoft-azure](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/index.html) uses `azure-mgmt-containerinstance>=1.5.0,<2.0`.
In `azure-mgmt-containerinstance==1.5.0`, [ContainerGroup](https://github.com/Azure/azure-sdk-for-python/blob/azure-mgmt-containerinstance_1.5.0/sdk/containerinstance/azure-mgmt-containerinstance/azure/mgmt/containerinstance/models/container_group_py3.py) accepts a parameter called `network_profile`, which is expecting a [ContainerGroupNetworkProfile](https://github.com/Azure/azure-sdk-for-python/blob/azure-mgmt-containerinstance_1.5.0/sdk/containerinstance/azure-mgmt-containerinstance/azure/mgmt/containerinstance/models/container_group_network_profile_py3.py).
### Use case/motivation
I received the following error when I provide a value for `IpAddress` in the `AzureContainerInstancesOperator`.
```
msrestazure.azure_exceptions.CloudError: Azure Error: PrivateIPAddressNotSupported
Message: IP Address type in container group 'data-quality-test' is invalid. Private IP address is only supported when network profile is defined.
```
I would like to pass a ContainerGroupNetworkProfile object through so AzureContainerInstancesOperator can use it in the [Container Group instantiation](https://github.com/apache/airflow/blob/main/airflow/providers/microsoft/azure/operators/container_instances.py#L243-L254).
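The requested usage would look roughly like this (`network_profile` is the proposed argument; connection ids, names and the resource ID are placeholders):
```python
from azure.mgmt.containerinstance.models import ContainerGroupNetworkProfile
from airflow.providers.microsoft.azure.operators.container_instances import (
    AzureContainerInstancesOperator,
)

network_profile = ContainerGroupNetworkProfile(
    id="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/networkProfiles/<name>"
)

run_container = AzureContainerInstancesOperator(
    task_id="run_container",
    ci_conn_id="azure_container_instances_default",
    registry_conn_id=None,
    resource_group="my-rg",
    name="data-quality-test",
    image="myregistry.azurecr.io/my-image:latest",
    region="westeurope",
    network_profile=network_profile,  # the proposed parameter
)
```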
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26097 | https://github.com/apache/airflow/pull/26117 | dd6b2e4e6cb89d9eea2f3db790cb003a2e89aeff | 5060785988f69d01ee2513b1e3bba73fbbc0f310 | "2022-08-31T23:41:27Z" | python | "2022-09-09T02:50:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,095 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | Creative use of BigQuery Hook Leads to Exception | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
8.3.0
### Apache Airflow version
2.3.4
### Operating System
Debian
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When executing a query through a BigQuery Hook Cursor that does not have a schema, an exception is thrown.
### What you think should happen instead
If a cursor does not contain a schema, revert to a `self.description` of None, like before the update.
### How to reproduce
Execute an `UPDATE` sql statement using a cursor.
```
conn = bigquery_hook.get_conn()
cursor = conn.cursor()
cursor.execute(sql)
```
### Anything else
I'll be the first to admit that my users are slightly abusing cursors in BigQuery by running all statement types through them, but BigQuery doesn't care and lets you.
Ref: https://github.com/apache/airflow/issues/22328
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26095 | https://github.com/apache/airflow/pull/26096 | b7969d4a404f8b441efda39ce5c2ade3e8e109dc | 12cbc0f1ddd9e8a66c5debe7f97b55a2c8001502 | "2022-08-31T21:43:47Z" | python | "2022-09-07T15:56:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,071 | ["airflow/example_dags/example_branch_day_of_week_operator.py", "airflow/operators/weekday.py", "airflow/sensors/weekday.py"] | BranchDayOfWeekOperator documentation don't mention how to use parameter use_taks_execution_day or how to use WeekDay | ### What do you see as an issue?
The constructor snippet clearly shows that there's a keyword parameter `use_task_execution_day=False`, but the doc does not explain how to use it. It also has `{WeekDay.TUESDAY}, {WeekDay.SATURDAY, WeekDay.SUNDAY}` as options for `week_day` but does not clarify how to import `WeekDay`. The tutorial is also very basic and only shows one use case. The sensor has the same issues.
### Solving the problem
I think docs should be added for `use_task_execution_day`, and there should be a mention of how one uses the `WeekDay` class and where to import it from. The tutorial is also incomplete there. I would like to see examples for, say, multiple different weekday branches and/or a graph of the resulting dags.
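For example, something along these lines would already answer both questions (a sketch; dag and task names are illustrative):
```python
import pendulum
from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.weekday import BranchDayOfWeekOperator
from airflow.utils.weekday import WeekDay  # the import the docs never show

with DAG(
    dag_id="weekday_branch_example",
    start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
    schedule_interval="@daily",
) as dag:
    branch = BranchDayOfWeekOperator(
        task_id="branch_on_weekend",
        week_day={WeekDay.SATURDAY, WeekDay.SUNDAY},
        use_task_execution_day=True,  # compare against the logical date, not "today"
        follow_task_ids_if_true="weekend_task",
        follow_task_ids_if_false="weekday_task",
    )
    branch >> [EmptyOperator(task_id="weekend_task"), EmptyOperator(task_id="weekday_task")]
```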
### Anything else
I feel like BranchDayOfWeekOperator is tragically underrepresented and hard to find, and I hope that improving docs would help make its use more common
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26071 | https://github.com/apache/airflow/pull/26098 | 4b26c8c541a720044fa96475620fc70f3ac6ccab | dd6b2e4e6cb89d9eea2f3db790cb003a2e89aeff | "2022-08-30T16:30:15Z" | python | "2022-09-09T02:05:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,067 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Include external_executor_id in zombie detection method | ### Description
Adjust the SimpleTaskInstance to include the external_executor_id so that it shows up when the zombie detection method prints the SimpleTaskInstance to logs.
### Use case/motivation
Since the zombie detection message originates in the dag file processor, further troubleshooting of the zombie task requires figuring out which worker was actually responsible for the task. Printing the external_executor_id makes it easier to find the task in a log aggregator like Kibana or Splunk than it is when using the combination of dag_id, task_id, logical_date, and map_index, at least for executors like Celery.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26067 | https://github.com/apache/airflow/pull/26141 | b6ba11ebece2c3aaf418738cb157174491a1547c | ef0b97914a6d917ca596200c19faed2f48dca88a | "2022-08-30T13:27:51Z" | python | "2022-09-03T13:23:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,059 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | [Graph view] After clearing the task (and its downstream tasks) in a task group the task group becomes disconnected from the dag | ### Apache Airflow version
2.3.4
### What happened
In the graph view of the dag, after clearing a task (and its downstream tasks) in a task group and refreshing the page in the browser, the task group becomes disconnected from the dag. See the attached gif.
![airflow_2_3_4_task_group_bug](https://user-images.githubusercontent.com/6542519/187409008-767e13e6-ab91-4875-9f3e-bd261b346d0f.gif)
The issue is neither persistent nor consistent: the graph view only becomes disconnected from time to time, as you can see in the attached gif.
### What you think should happen instead
The graph should be rendered properly and consistently.
### How to reproduce
1. Add the following dag to the dag folder:
```
import logging
import time
from typing import List
import pendulum
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.task_group import TaskGroup
def log_function(message: str, **kwargs):
logging.info(message)
time.sleep(3)
def create_file_handling_task_group(supplier):
with TaskGroup(group_id=f"file_handlig_task_group_{supplier}", ui_color='#666666') as file_handlig_task_group:
entry = PythonOperator(
task_id='entry',
python_callable=log_function,
op_kwargs={'message': 'create_file_handlig_task_group-Entry-task'}
)
with TaskGroup(group_id=f"file_handling_task_sub_group-{supplier}",
ui_color='#666666') as file_handlig_task_sub_group:
sub_group_submit = PythonOperator(
task_id='sub_group_submit',
python_callable=log_function,
op_kwargs={'message': 'create_file_handlig_sub_group_submit'}
)
sub_group_monitor = PythonOperator(
task_id='sub_group_monitor',
python_callable=log_function,
op_kwargs={'message': 'create_file_handlig_sub_group_monitor'}
)
sub_group_submit >> sub_group_monitor
entry >> file_handlig_task_sub_group
return file_handlig_task_group
def get_stage_1_taskgroups(supplierlist: List) -> List[TaskGroup]:
return [create_file_handling_task_group(supplier) for supplier in supplierlist]
def connect_stage1_to_stage2(self, stage1_tasks: List[TaskGroup], stage2_tasks: List[TaskGroup]) -> None:
if stage2_tasks:
for stage1_task in stage1_tasks:
supplier_code: str = self.get_supplier_code(stage1_task)
stage2_task = self.get_suppliers_tasks(supplier_code, stage2_tasks)
stage1_task >> stage2_task
def get_stage_2_taskgroup(taskgroup_id: str):
with TaskGroup(group_id=taskgroup_id, ui_color='#666666') as stage_2_taskgroup:
sub_group_submit = PythonOperator(
task_id='sub_group_submit',
python_callable=log_function,
op_kwargs={'message': 'create_file_handlig_sub_group_submit'}
)
sub_group_monitor = PythonOperator(
task_id='sub_group_monitor',
python_callable=log_function,
op_kwargs={'message': 'create_file_handlig_sub_group_monitor'}
)
sub_group_submit >> sub_group_monitor
return stage_2_taskgroup
def create_dag():
with DAG(
dag_id="horizon-task-group-bug",
start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
catchup=False,
description="description"
) as dag:
start = PythonOperator(
task_id='start_main',
python_callable=log_function,
op_kwargs={'message': 'Entry-task'}
)
end = PythonOperator(
task_id='end_main',
python_callable=log_function,
op_kwargs={'message': 'End-task'}
)
with TaskGroup(group_id=f"main_file_task_group", ui_color='#666666') as main_file_task_group:
end_main_file_task_stage_1 = PythonOperator(
task_id='end_main_file_task_stage_1',
python_callable=log_function,
op_kwargs={'message': 'end_main_file_task_stage_1'}
)
first_stage = get_stage_1_taskgroups(['9001', '9002'])
first_stage >> get_stage_2_taskgroup("stage_2_1_taskgroup")
first_stage >> get_stage_2_taskgroup("stage_2_2_taskgroup")
first_stage >> end_main_file_task_stage_1
start >> main_file_task_group >> end
return dag
dag = create_dag()
```
2. Go to the graph view of the dag.
3. Run the dag.
4. After the dag run has finished, clear the "sub_group_submit" task within the "stage_2_1_taskgroup" together with its downstream tasks.
5. Refresh the page multiple times and notice how from time to time the "stage_2_1_taskgroup" becomes disconnected from the dag.
6. Clear the "sub_group_submit" task within the "stage_2_2_taskgroup" with downstream tasks.
7. Refresh the page multiple times and notice how from time to time the "stage_2_2_taskgroup" becomes disconnected from the dag.
### Operating System
Mac OS, Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Custom docker image based on apache/airflow:2.3.4-python3.10
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26059 | https://github.com/apache/airflow/pull/30129 | 4dde8ececf125abcded5910817caad92fcc82166 | 76a884c552a78bfb273fe8b65def58125fc7961a | "2022-08-30T10:12:04Z" | python | "2023-03-15T20:05:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,046 | ["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"] | `isinstance()` check in `_hook()` breaking provider hook usage | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
Using `apache-airflow-providers-common-sql==1.1.0`
### Apache Airflow version
2.3.2
### Operating System
Debian GNU/Linux 11 bullseye
### Deployment
Astronomer
### Deployment details
astro-runtime:5.0.5
### What happened
The `isinstance()` method to check that the hook is a `DbApiHook` is breaking when a snowflake connection is passed to an operator's `conn_id` parameter, as the check finds an instance of `SnowflakeHook` and not `DbApiHook`.
### What you think should happen instead
There should not be an error when subclasses of `DbApiHook` are used. This can be fixed by replacing `isinstance()` with something that checks the inheritance hierarchy.
### How to reproduce
Run an operator from the common-sql provider with a Snowflake connection passed to `conn_id`.
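For example, a task of this shape triggers it (connection id, table and checks are placeholders, loosely named after the log below):
```python
from airflow.providers.common.sql.operators.sql import SQLColumnCheckOperator

forestfire_column_checks = SQLColumnCheckOperator(
    task_id="forestfire_column_checks",
    conn_id="snowflake_default",  # a Snowflake connection
    table="forestfire",
    column_mapping={"id": {"null_check": {"equal_to": 0}}},
)
```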
### Anything else
Occurs every time.
Log:
```
[2022-08-29, 19:10:42 UTC] {manager.py:49} ERROR - Failed to extract metadata The connection type is not supported by SQLColumnCheckOperator. The associated hook should be a subclass of `DbApiHook`. Got SnowflakeHook task_type=SQLColumnCheckOperator airflow_dag_id=complex_snowflake_transform task_id=quality_check_group_forestfire.forestfire_column_checks airflow_run_id=manual__2022-08-29T19:04:54.998289+00:00
Traceback (most recent call last):
File "/usr/local/airflow/include/openlineage/airflow/extractors/manager.py", line 38, in extract_metadata
task_metadata = extractor.extract_on_complete(task_instance)
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_check_extractors.py", line 26, in extract_on_complete
return super().extract()
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_extractor.py", line 50, in extract
authority=self._get_authority(),
File "/usr/local/airflow/include/openlineage/airflow/extractors/snowflake_extractor.py", line 57, in _get_authority
return self.conn.extra_dejson.get(
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_extractor.py", line 102, in conn
self._conn = get_connection(self._conn_id())
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_extractor.py", line 91, in _conn_id
return getattr(self.hook, self.hook.conn_name_attr)
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_extractor.py", line 96, in hook
self._hook = self._get_hook()
File "/usr/local/airflow/include/openlineage/airflow/extractors/snowflake_extractor.py", line 63, in _get_hook
return self.operator.get_db_hook()
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 112, in get_db_hook
return self._hook
File "/usr/local/lib/python3.9/functools.py", line 969, in __get__
val = self.func(instance)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 95, in _hook
raise AirflowException(
airflow.exceptions.AirflowException: The connection type is not supported by SQLColumnCheckOperator. The associated hook should be a subclass of `DbApiHook`. Got SnowflakeHook
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26046 | https://github.com/apache/airflow/pull/26051 | d356560baa5a41d4bda87e4010ea6d90855d25f3 | 27e2101f6ee5567b2843cbccf1dca0b0e7c96186 | "2022-08-29T19:58:59Z" | python | "2022-08-30T17:05:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,044 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | Backfill dagrun mistakenly evaluated as deadlocked | ### Apache Airflow version
Other Airflow 2 version
### What happened
I used a bash operator to run a backfill command. The dagrun was marked as failed and was alerted for a deadlock even though the task instances themselves were still ran normally. This happens occasionally.
```
[2022-08-23, 10:54:59 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:59 UTC] {dagrun.py:585} ERROR - Deadlock; marking run <DagRun load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental @ 2022-08-15 08:00:00+00:00: backfill__2022-08-15T08:00:00+00:00, externally triggered: False> failed
...
[2022-08-23, 10:55:19 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:19 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 4 | running: 0 | failed: 0 | skipped: 5 | deadlocked: 0 | not ready: 0
[2022-08-23, 10:55:19 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:19 UTC] {local_executor.py:390} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait.
[2022-08-23, 10:55:19 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:19 UTC] {backfill_job.py:879} INFO - Backfill done. Exiting.
```
Here is full backfill log.
```
[2022-08-23, 10:54:00 UTC] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'cd $AIRFLOW_HOME && airflow dags backfill -s "2022-08-15 00:00:00" -e "2022-08-16 00:00:00" -c \'{"start_val":"1","end_val":"4"}\' --rerun-failed-tasks --reset-dagruns --yes load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental']
...
[2022-08-23, 10:54:21 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:21 UTC] {task_command.py:371} INFO - Running <TaskInstance: load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental.source.extract_withdrawals_venmo_withdrawal_aud_incremental_load backfill__2022-08-15T08:00:00+00:00 [queued]> on host a8870cb5a3e0
[2022-08-23, 10:54:19 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:19 UTC] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental', 'source.extract_withdrawals_venmo_withdrawal_aud_incremental_load', 'backfill__2022-08-15T08:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/withdrawals_venmo_withdrawal_aud_jdbc_to_redshift_incremental_load.py', '--cfg-path', '/tmp/tmp92x61y3k']
[2022-08-23, 10:54:24 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:24 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 8 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 8
[2022-08-23, 10:54:29 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:29 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 8 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 8
[2022-08-23, 10:54:34 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:34 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 8 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 8
[2022-08-23, 10:54:39 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:39 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 8 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 8
[2022-08-23, 10:54:44 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:44 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 8 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 8
[2022-08-23, 10:54:49 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:49 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 8 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 8
[2022-08-23, 10:54:54 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:54 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 8 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 8
[2022-08-23, 10:54:59 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:59 UTC] {dagrun.py:585} ERROR - Deadlock; marking run <DagRun load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental @ 2022-08-15 08:00:00+00:00: backfill__2022-08-15T08:00:00+00:00, externally triggered: False> failed
[2022-08-23, 10:54:59 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:59 UTC] {dagrun.py:609} INFO - DagRun Finished: dag_id=load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental, execution_date=2022-08-15 08:00:00+00:00, run_id=backfill__2022-08-15T08:00:00+00:00, run_start_date=None, run_end_date=2022-08-23 10:54:59.121952+00:00, run_duration=None, state=failed, external_trigger=False, run_type=backfill, data_interval_start=2022-08-15 08:00:00+00:00, data_interval_end=2022-08-16 08:00:00+00:00, dag_hash=None
[2022-08-23, 10:54:59 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:59 UTC] {dagrun.py:795} WARNING - Failed to record duration of <DagRun load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental @ 2022-08-15 08:00:00+00:00: backfill__2022-08-15T08:00:00+00:00, externally triggered: False>: start_date is not set.
[2022-08-23, 10:54:59 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:59 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 8 | succeeded: 1 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 8
[2022-08-23, 10:54:59 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:59 UTC] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental', 'destination.post_marker_staging_withdrawals_venmo_withdrawal_aud', 'backfill__2022-08-15T08:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/withdrawals_venmo_withdrawal_aud_jdbc_to_redshift_incremental_load.py', '--cfg-path', '/tmp/tmpd1nq6xe2']
[2022-08-23, 10:54:59 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:54:59 UTC] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental', 'destination.post_marker_fdg_pii_fact_aw_venmo_withdrawal_aud', 'backfill__2022-08-15T08:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/withdrawals_venmo_withdrawal_aud_jdbc_to_redshift_incremental_load.py', '--cfg-path', '/tmp/tmps6ah6zww']
[2022-08-23, 10:55:04 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:04 UTC] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental', 'destination.post_marker_fdg_pii_fact_aw_venmo_withdrawal_aud', 'backfill__2022-08-15T08:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/withdrawals_venmo_withdrawal_aud_jdbc_to_redshift_incremental_load.py', '--cfg-path', '/tmp/tmps6ah6zww']
[2022-08-23, 10:55:04 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:04 UTC] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental', 'destination.post_marker_staging_withdrawals_venmo_withdrawal_aud', 'backfill__2022-08-15T08:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/withdrawals_venmo_withdrawal_aud_jdbc_to_redshift_incremental_load.py', '--cfg-path', '/tmp/tmpd1nq6xe2']
[2022-08-23, 10:55:04 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:04 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 3 | succeeded: 1 | running: 2 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 3
[2022-08-23, 10:55:06 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:06 UTC] {task_command.py:371} INFO - Running <TaskInstance: load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental.destination.post_marker_fdg_pii_fact_aw_venmo_withdrawal_aud backfill__2022-08-15T08:00:00+00:00 [queued]> on host a8870cb5a3e0
[2022-08-23, 10:55:06 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:06 UTC] {task_command.py:371} INFO - Running <TaskInstance: load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental.destination.post_marker_staging_withdrawals_venmo_withdrawal_aud backfill__2022-08-15T08:00:00+00:00 [queued]> on host a8870cb5a3e0
[2022-08-23, 10:55:09 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:09 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 1 | succeeded: 3 | running: 0 | failed: 0 | skipped: 5 | deadlocked: 0 | not ready: 1
[2022-08-23, 10:55:09 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:09 UTC] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental', 'post_execution.rotate_checkpoint_withdrawals_venmo_withdrawal_aud', 'backfill__2022-08-15T08:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/withdrawals_venmo_withdrawal_aud_jdbc_to_redshift_incremental_load.py', '--cfg-path', '/tmp/tmpkve4mv_q']
[2022-08-23, 10:55:14 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:14 UTC] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental', 'post_execution.rotate_checkpoint_withdrawals_venmo_withdrawal_aud', 'backfill__2022-08-15T08:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/withdrawals_venmo_withdrawal_aud_jdbc_to_redshift_incremental_load.py', '--cfg-path', '/tmp/tmpkve4mv_q']
[2022-08-23, 10:55:14 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:14 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 3 | running: 1 | failed: 0 | skipped: 5 | deadlocked: 0 | not ready: 0
[2022-08-23, 10:55:15 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:15 UTC] {task_command.py:371} INFO - Running <TaskInstance: load_withdrawals_venmo_withdrawal_aud_to_redshift_withdrawals_venmo_withdrawal_aud_incremental.post_execution.rotate_checkpoint_withdrawals_venmo_withdrawal_aud backfill__2022-08-15T08:00:00+00:00 [queued]> on host a8870cb5a3e0
[2022-08-23, 10:55:19 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:19 UTC] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 4 | running: 0 | failed: 0 | skipped: 5 | deadlocked: 0 | not ready: 0
[2022-08-23, 10:55:19 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:19 UTC] {local_executor.py:390} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait.
[2022-08-23, 10:55:19 UTC] {subprocess.py:92} INFO - [2022-08-23, 10:55:19 UTC] {backfill_job.py:879} INFO - Backfill done. Exiting.
```
### What you think should happen instead
The DAG is not deadlocked, but it was still somehow [evaluated as deadlocked](https://github.com/apache/airflow/blob/main/airflow/models/dagrun.py#L581-L589).
### How to reproduce
_No response_
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26044 | https://github.com/apache/airflow/pull/26161 | 5b216e9480e965c7c1919cb241668beca53ab521 | 6931fbf8f7c0e3dfe96ce51ef03f2b1502baef07 | "2022-08-29T18:41:15Z" | python | "2022-09-06T09:43:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,019 | ["dev/breeze/src/airflow_breeze/commands/release_management_commands.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output_release-management_generate-constraints.svg", "scripts/in_container/_in_container_script_init.sh", "scripts/in_container/_in_container_utils.sh", "scripts/in_container/in_container_utils.py", "scripts/in_container/install_airflow_and_providers.py", "scripts/in_container/run_generate_constraints.py", "scripts/in_container/run_generate_constraints.sh", "scripts/in_container/run_system_tests.sh"] | Rewrite the in-container scripts in Python | We have a number of "in_container" scripts written in Bash. They do a number of housekeeping tasks, but since we already have Python 3.7+ inside the CI image, we could modularise them more, make them runnable externally, and simplify entrypoint_ci (for example a separate script for tests). | https://github.com/apache/airflow/issues/26019 | https://github.com/apache/airflow/pull/36158 | 36010f6d0e3231081dbae095baff5a5b5c5b34eb | f39cdcceff4fa64debcaaef6e30f345b7b21696e | "2022-08-28T09:23:08Z" | python | "2023-12-11T07:02:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,013 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | schedule_interval is not respecting the value assigned to that either it's one day or none | ### Apache Airflow version
main (development)
### What happened
The stored `schedule_interval` is `none` even when it is set to `timedelta(days=365, hours=6)`; it also shows up as 1 day for `schedule_interval=None` and `schedule_interval=timedelta(days=3)`.
### What you think should happen instead
It should respect the value assigned to it.
### How to reproduce
Create a dag with `schedule_interval=None` or `schedule_interval=timedelta(days=5)` and observe the behaviour.
![2022-08-27 17 42 07](https://user-images.githubusercontent.com/88504849/187039335-90de6855-b674-47ba-9c03-3c437722bae5.gif)
**DAG-**
```
from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.utils.dates import days_ago

docs = "placeholder doc_md text (not included in the original snippet)"

with DAG(
    dag_id="branch_python_operator",
    start_date=days_ago(1),
    schedule_interval="* * * * *",
    doc_md=docs,
    tags=['core'],
) as dag:
    EmptyOperator(task_id="noop")  # placeholder task so the snippet runs
```
**DB Results-**
```
postgres=# select schedule_interval from dag where dag_id='branch_python_operator';
schedule_interval
------------------------------------------------------------------------------
{"type": "timedelta", "attrs": {"days": 1, "seconds": 0, "microseconds": 0}}
(1 row)
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26013 | https://github.com/apache/airflow/pull/26082 | d4db9aecc3e534630c76e59c54d90329ed20a6ab | c982080ca1c824dd26c452bcb420df0f3da1afa8 | "2022-08-27T16:35:56Z" | python | "2022-08-31T09:09:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,000 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | `start_date` for an existing dagrun is not set when ran with backfill flags ` --reset-dagruns --yes` | ### Apache Airflow version
2.3.4
### What happened
When the dagrun already exists and is backfilled with the flags `--reset-dagruns --yes`, the dag run will not have a start date. This is because reset_dagruns calls [clear_task_instances](https://github.com/apache/airflow/blob/main/airflow/models/dag.py#L2020) which [sets the dagrun start date to None](https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L286-L291).
Since the dagrun goes into running via BackfillJob rather than the SchedulerJob, the start date is not set. This doesn't happen to a new dagrun created by a BackfillJob because the [start date is determined at creation](https://github.com/apache/airflow/blob/main/airflow/jobs/backfill_job.py#L310-L320).
Here is a recreation of the behaviour.
First run of the backfill dagrun. No odd warnings and start date exists for Airflow to calculate the duration.
```
astro@75512ab5e882:/usr/local/airflow$ airflow dags backfill -s 2021-12-01 -e 2021-12-01 test_module
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528: DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
option = self._get_environment_variables(deprecated_key, deprecated_section, key, section)
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
/usr/local/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py:57 PendingDeprecationWarning: --ignore-first-depends-on-past is deprecated as the value is always set to True
[2022-08-25 21:29:55,574] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
Nothing to clear.
[2022-08-25 21:29:55,650] {executor_loader.py:105} INFO - Loaded executor: LocalExecutor
[2022-08-25 21:29:55,896] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'test_module', 'run_python', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp_nuoic9m']
[2022-08-25 21:30:00,665] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'test_module', 'run_python', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp_nuoic9m']
[2022-08-25 21:30:00,679] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
[2022-08-25 21:30:00,695] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags/test_module.py
[2022-08-25 21:30:00,759] {task_command.py:371} INFO - Running <TaskInstance: test_module.run_python backfill__2021-12-01T00:00:00+00:00 [queued]> on host 75512ab5e882
[2022-08-25 21:30:05,686] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 1 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
[2022-08-25 21:30:05,709] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'test_module', 'test', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp3w9pm1jj']
[2022-08-25 21:30:10,659] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'test_module', 'test', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp3w9pm1jj']
[2022-08-25 21:30:10,668] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
[2022-08-25 21:30:10,693] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags/test_module.py
[2022-08-25 21:30:10,765] {task_command.py:371} INFO - Running <TaskInstance: test_module.test backfill__2021-12-01T00:00:00+00:00 [queued]> on host 75512ab5e882
[2022-08-25 21:30:15,678] {dagrun.py:564} INFO - Marking run <DagRun test_module @ 2021-12-01T00:00:00+00:00: backfill__2021-12-01T00:00:00+00:00, externally triggered: False> successful
[2022-08-25 21:30:15,679] {dagrun.py:609} INFO - DagRun Finished: dag_id=test_module, execution_date=2021-12-01T00:00:00+00:00, run_id=backfill__2021-12-01T00:00:00+00:00, run_start_date=2022-08-25 21:29:55.815199+00:00, run_end_date=2022-08-25 21:30:15.679256+00:00, run_duration=19.864057, state=success, external_trigger=False, run_type=backfill, data_interval_start=2021-12-01T00:00:00+00:00, data_interval_end=2021-12-02T00:00:00+00:00, dag_hash=None
[2022-08-25 21:30:15,680] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 2 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
[2022-08-25 21:30:15,684] {local_executor.py:390} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait.
[2022-08-25 21:30:15,829] {backfill_job.py:879} INFO - Backfill done. Exiting.
```
Second run of the backfill dagrun with the flags `--reset-dagruns --yes`. There is a warning about start_date is not set.
```
astro@75512ab5e882:/usr/local/airflow$ airflow dags backfill -s 2021-12-01 -e 2021-12-01 --reset-dagruns --yes test_module
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528: DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
option = self._get_environment_variables(deprecated_key, deprecated_section, key, section)
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
/usr/local/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py:57 PendingDeprecationWarning: --ignore-first-depends-on-past is deprecated as the value is always set to True
[2022-08-25 21:30:46,895] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dag
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
[2022-08-25 21:30:46,996] {executor_loader.py:105} INFO - Loaded executor: LocalExecutor
[2022-08-25 21:30:47,275] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'test_module', 'run_python', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp3s_3bn80']
[2022-08-25 21:30:52,010] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'test_module', 'run_python', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp3s_3bn80']
[2022-08-25 21:30:52,029] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
[2022-08-25 21:30:52,045] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags/test_module.py
[2022-08-25 21:30:52,140] {task_command.py:371} INFO - Running <TaskInstance: test_module.run_python backfill__2021-12-01T00:00:00+00:00 [queued]> on host 75512ab5e882
[2022-08-25 21:30:57,028] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 1 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
[2022-08-25 21:30:57,048] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'test_module', 'test', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmprxg7g5o8']
[2022-08-25 21:31:02,024] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'test_module', 'test', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmprxg7g5o8']
[2022-08-25 21:31:02,032] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
[2022-08-25 21:31:02,085] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags/test_module.py
[2022-08-25 21:31:02,178] {task_command.py:371} INFO - Running <TaskInstance: test_module.test backfill__2021-12-01T00:00:00+00:00 [queued]> on host 75512ab5e882
[2022-08-25 21:31:07,039] {dagrun.py:564} INFO - Marking run <DagRun test_module @ 2021-12-01 00:00:00+00:00: backfill__2021-12-01T00:00:00+00:00, externally triggered: False> successful
[2022-08-25 21:31:07,039] {dagrun.py:609} INFO - DagRun Finished: dag_id=test_module, execution_date=2021-12-01 00:00:00+00:00, run_id=backfill__2021-12-01T00:00:00+00:00, run_start_date=None, run_end_date=2022-08-25 21:31:07.039737+00:00, run_duration=None, state=success, external_trigger=False, run_type=backfill, data_interval_start=2021-12-01 00:00:00+00:00, data_interval_end=2021-12-02 00:00:00+00:00, dag_hash=None
[2022-08-25 21:31:07,040] {dagrun.py:795} WARNING - Failed to record duration of <DagRun test_module @ 2021-12-01 00:00:00+00:00: backfill__2021-12-01T00:00:00+00:00, externally triggered: False>: start_date is not set.
[2022-08-25 21:31:07,040] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 2 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
[2022-08-25 21:31:07,043] {local_executor.py:390} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait.
[2022-08-25 21:31:07,177] {backfill_job.py:879} INFO - Backfill done. Exiting.
```
### What you think should happen instead
When the BackfillJob fetches the dagrun, it will also need to set the start date.
It can be done right after setting the run variable. ([source](https://github.com/apache/airflow/blob/main/airflow/jobs/backfill_job.py#L310-L320))
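A minimal sketch of what I mean (a helper the backfill job could call right after `run` is set; the function and its placement are my suggestion, not existing code):

```python
from airflow.utils import timezone

def ensure_dagrun_start_date(run) -> None:
    # --reset-dagruns clears start_date to None on the existing run,
    # so the backfill job should set it again before marking it running.
    if run.start_date is None:
        run.start_date = timezone.utcnow()
```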
### How to reproduce
Run the backfill command first without `--reset-dagruns --yes` flags.
```
airflow dags backfill -s 2021-12-01 -e 2021-12-01 test_module
```
Run the backfill command with the `--reset-dagruns --yes` flags.
```
airflow dags backfill -s 2021-12-01 -e 2021-12-01 --reset-dagruns --yes test_module
```
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26000 | https://github.com/apache/airflow/pull/26135 | 4644a504f2b64754efb40f4c61f8d050f3e7b1b7 | 2d031ee47bc7af347040069a3162273de308aef6 | "2022-08-27T01:00:57Z" | python | "2022-09-02T16:14:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,976 | ["airflow/api_connexion/schemas/pool_schema.py", "airflow/models/pool.py", "airflow/www/views.py", "tests/api_connexion/endpoints/test_pool_endpoint.py", "tests/api_connexion/schemas/test_pool_schemas.py", "tests/api_connexion/test_auth.py", "tests/www/views/test_views_pool.py"] | Include "Scheduled slots" column in Pools view | ### Description
It would be nice to have a "Scheduled slots" column to see how many slots want to enter each pool. Currently we are only displaying the running and queued slots.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25976 | https://github.com/apache/airflow/pull/26006 | 1c73304bdf26b19d573902bcdfefc8ca5160511c | bcdc25dd3fbda568b5ff2c04701623d6bf11a61f | "2022-08-26T07:53:27Z" | python | "2022-08-29T06:31:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,968 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Unable to configure Google Secrets Manager in 2.3.4 | ### Apache Airflow version
2.3.4
### What happened
I am attempting to configure a Google Secrets Manager secrets backend using the `gcp_keyfile_dict` param in a `.env` file with the following ENV Vars:
```
AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
AIRFLOW__SECRETS__BACKEND_KWARGS='{"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "gcp_keyfile_dict": <json-keyfile>}'
```
In previous versions, including 2.3.3, this worked without issue.
After upgrading to Astro Runtime 5.0.8 I get the following error, taken from the scheduler container logs. The scheduler, webserver, and triggerer are continually restarting:
```
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/usr/local/lib/python3.9/site-packages/airflow/__init__.py", line 35, in <module>
from airflow import settings
File "/usr/local/lib/python3.9/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/usr/local/lib/python3.9/site-packages/airflow/configuration.py", line 1618, in <module>
secrets_backend_list = initialize_secrets_backends()
File "/usr/local/lib/python3.9/site-packages/airflow/configuration.py", line 1540, in initialize_secrets_backends
custom_secret_backend = get_custom_secret_backend()
File "/usr/local/lib/python3.9/site-packages/airflow/configuration.py", line 1523, in get_custom_secret_backend
return _custom_secrets_backend(secrets_backend_cls, **alternative_secrets_config_dict)
TypeError: unhashable type: 'dict'
```
### What you think should happen instead
Containers should remain healthy and the secrets backend should successfully be added
### How to reproduce
`astro dev init` a fresh project
Dockerfile:
`FROM quay.io/astronomer/astro-runtime:5.0.8`
`.env` file:
```
AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
AIRFLOW__SECRETS__BACKEND_KWARGS='{"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "gcp_keyfile_dict": <service-acct-json-keyfile>}'
```
`astro dev start`
### Operating System
macOS 11.6.8
### Versions of Apache Airflow Providers
[apache-airflow-providers-google](https://airflow.apache.org/docs/apache-airflow-providers-google/8.1.0/) 8.1.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25968 | https://github.com/apache/airflow/pull/25970 | 876536ea3c45d5f15fcfbe81eda3ee01a101faa3 | aa877637f40ddbf3b74f99847606b52eb26a92d9 | "2022-08-25T22:01:21Z" | python | "2022-08-26T09:24:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,963 | ["airflow/providers/amazon/aws/operators/ecs.py", "airflow/providers/amazon/aws/sensors/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | Invalid arguments were passed to EcsRunTaskOperator (aws_conn_id) | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==5.0.0
### Apache Airflow version
2.4.4
### Operating System
linux
### Deployment
Docker-Compose
### Deployment details
Custom built docker image based on the official one.
### What happened
When I was migrating legacy EcsOperator to EcsRunTaskOperator I received this error:
```
airflow.exceptions.AirflowException: Invalid arguments were passed to EcsRunTaskOperator. Invalid arguments were:
**kwargs: {'aws_conn_id': 'aws_connection'}
```
From the source code and its documentation it appears that `aws_conn_id` is a valid argument, but the error gets thrown nevertheless.
### What you think should happen instead
EcsRunTaskOperator should work with the provided `aws_conn_id` argument.
### How to reproduce
Create an instance of EcsRunTaskOperator and provide valid `aws_conn_id` argument.
### Anything else
During my investigation I compared the current version of the ecs module with the previous one. From that comparison it's clear that the `aws_conn_id` argument was removed from the keyword arguments before they were passed to the parent classes in the legacy version, but now it is not removed. In the end this error is caused by Airflow's BaseOperator receiving the unknown argument `aws_conn_id`.
[ecs.py L49](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/ecs.py#L49)
```
class EcsBaseOperator(BaseOperator):
"""This is the base operator for all Elastic Container Service operators."""
def __init__(self, **kwargs):
self.aws_conn_id = kwargs.get('aws_conn_id', DEFAULT_CONN_ID)
self.region = kwargs.get('region')
super().__init__(**kwargs)
```
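A possible fix, mirroring what the legacy operator did (a hedged sketch based on my comparison above, not the actual provider code): pop the AWS-specific kwargs before handing the rest to `BaseOperator`.

```python
from airflow.models import BaseOperator

DEFAULT_CONN_ID = 'aws_default'  # assumed to match the module-level default in ecs.py

class EcsBaseOperator(BaseOperator):
    """Base operator for all Elastic Container Service operators."""

    def __init__(self, **kwargs):
        # pop() instead of get() so BaseOperator never sees these kwargs
        self.aws_conn_id = kwargs.pop('aws_conn_id', DEFAULT_CONN_ID)
        self.region = kwargs.pop('region', None)
        super().__init__(**kwargs)
```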
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25963 | https://github.com/apache/airflow/pull/25989 | c9c89e5c3be37dd2475abf4214d5efdd2ad48c2a | dbfa6487b820e6c94770404b3ba29ab11ae2a05e | "2022-08-25T19:36:24Z" | python | "2022-08-27T02:15:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,952 | ["airflow/providers/amazon/aws/operators/rds.py", "airflow/providers/amazon/aws/sensors/rds.py", "docs/apache-airflow-providers-amazon/operators/rds.rst", "tests/providers/amazon/aws/operators/test_rds.py", "tests/system/providers/amazon/aws/example_rds_instance.py"] | Add RDS operators/sensors | ### Description
I think adding the following operators/sensors would benefit companies that need to start/stop RDS instances programmatically.
Name | Description | PR
:- | :- | :-
`RdsStartDbOperator` | Start an instance, and optionally wait for it to enter the "available" state | #27076
`RdsStopDbOperator` | Stop an instance, and optionally wait for it to enter the "stopped" state | #27076
`RdsDbSensor` | Wait for the requested status (eg. available, stopped) | #26003
Is this something that would be accepted into the codebase?
Please let me know.
### Use case/motivation
#### 1. Saving money
RDS is expensive. To save money, a company keeps test/dev environment relational databases shut down until it needs to use them. With Airflow, they can start a database instance before running a workload, then turn it off after the workload finishes (or errors).
#### 2. Force RDS to stay shut down
RDS automatically starts a database after 1 week of downtime. A company does not need this feature. They can create a DAG to continuously run the shutdown command on a list of database instance ids stored in a `Variable`. The alternative is to create a shell script or log in to the console and manually shut down each database every week.
#### 3. Making sure a database is running before scheduling workload
A company programmatically starts/stops its RDS instances. Before they run a workload, they want to make sure it's running. They can use a sensor to make sure a database is available before attempting to run any jobs that require access.
Also, during maintenance windows, RDS instances may be taken offline. Rather than tuning each DAG schedule to run outside of this window, a company can use a sensor to wait until the instance is available. (Yes, the availability check could also take place immediately before the maintenance window.)
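For context, a rough sketch of the boto3 calls such operators and sensors would wrap (standalone illustration using default credentials; not proposed provider code):

```python
import boto3

rds = boto3.client("rds")

def start_instance(db_instance_id: str) -> None:
    """Start an RDS instance and block until it reports 'available'."""
    rds.start_db_instance(DBInstanceIdentifier=db_instance_id)
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=db_instance_id)

def stop_instance(db_instance_id: str) -> None:
    """Stop an RDS instance; its status can then be polled via describe_db_instances."""
    rds.stop_db_instance(DBInstanceIdentifier=db_instance_id)
```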
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25952 | https://github.com/apache/airflow/pull/27076 | d4bfccb3c90d889863bb1d1500ad3158fc833aae | a2413cf6ca8b93e491a48af11d769cd13bce8884 | "2022-08-25T08:51:53Z" | python | "2022-10-19T05:36:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,949 | ["airflow/www/static/js/api/useGridData.test.js", "airflow/www/static/js/api/useGridData.ts"] | Auto-refresh is broken in 2.3.4 | ### Apache Airflow version
2.3.4
### What happened
In PR #25042 a bug was introduced that prevents auto-refresh from working when tasks of type `scheduled` are running.
### What you think should happen instead
Auto-refresh should work for any running or queued task, rather than only manually-scheduled tasks.
### How to reproduce
_No response_
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25949 | https://github.com/apache/airflow/pull/25950 | e996a88c7b19a1d30c529f5dd126d0a8871f5ce0 | 37ec752c818d4c42cba6e7fdb2e11cddc198e810 | "2022-08-25T03:39:42Z" | python | "2022-08-25T11:46:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,937 | ["airflow/providers/common/sql/hooks/sql.py", "airflow/providers/common/sql/provider.yaml", "airflow/providers/presto/hooks/presto.py", "airflow/providers/presto/provider.yaml", "airflow/providers/sqlite/hooks/sqlite.py", "airflow/providers/sqlite/provider.yaml", "airflow/providers/trino/hooks/trino.py", "airflow/providers/trino/provider.yaml", "generated/provider_dependencies.json"] | TrinoHook uses wrong parameter representation when inserting rows | ### Apache Airflow Provider(s)
trino
### Versions of Apache Airflow Providers
apache-airflow-providers-trino==4.0.0
### Apache Airflow version
2.3.3
### Operating System
macOS 12.5.1 (21G83)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
`TrinoHook.insert_rows()` throws a syntax error due to the underlying prepared statement using "%s" as representation for parameters, instead of "?" [which Trino uses](https://trino.io/docs/current/sql/prepare.html#description).
### What you think should happen instead
`TrinoHook.insert_rows()` should insert rows using Trino-compatible SQL statements.
The following exception is raised currently:
`trino.exceptions.TrinoUserError: TrinoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:88: mismatched input '%'. Expecting: ')', <expression>, <query>", query_id=xxx)`
### How to reproduce
Instantiate an `airflow.providers.trino.hooks.trino.TrinoHook` and use its `insert_rows()` method.
Operators using this method internally are also broken: e.g. `airflow.providers.trino.transfers.gcs_to_trino.GCSToTrinoOperator`
### Anything else
The issue seems to come from `TrinoHook.insert_rows()` relying on `DbApiHook.insert_rows()`, which uses "%s" to represent query parameters.
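A hedged sketch of the kind of change needed (the function name and shape are mine, not the actual hook code): build the prepared statement with Trino's `?` placeholder instead of `%s`.

```python
def build_insert(table, row, target_fields=None):
    """Build an INSERT statement using Trino-style '?' placeholders."""
    placeholders = ", ".join("?" for _ in row)
    columns = f" ({', '.join(target_fields)})" if target_fields else ""
    return f"INSERT INTO {table}{columns} VALUES ({placeholders})"

# build_insert("memory.default.t", ("a", 1), ["name", "value"])
# -> "INSERT INTO memory.default.t (name, value) VALUES (?, ?)"
```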
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25937 | https://github.com/apache/airflow/pull/25939 | 4c3fb1ff2b789320cc2f19bd921ac0335fc8fdf1 | a74d9349919b340638f0db01bc3abb86f71c6093 | "2022-08-24T14:02:00Z" | python | "2022-08-27T01:15:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,926 | ["docs/apache-airflow-providers-docker/decorators/docker.rst", "docs/apache-airflow-providers-docker/index.rst"] | How to guide for @task.docker decorator | ### Body
Hi.
[The documentation for apache-airflow-providers-docker](https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/index.html) does not provide information on how to use the `@task.docker` decorator. We have this decorator described only in the API reference for this provider and in the documentation for the apache-airflow package.
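For reference, the kind of minimal example such a how-to guide could show (the image and function are illustrative, and this assumes a reachable Docker daemon):

```python
import pendulum
from airflow.decorators import dag, task

@dag(start_date=pendulum.datetime(2022, 8, 1, tz="UTC"), schedule_interval=None)
def docker_decorator_example():
    @task.docker(image="python:3.9-slim")
    def hello():
        # Runs inside a container started from the image above.
        print("hello from inside the container")

    hello()

docker_decorator_example()
```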
Best regards,
Kamil
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/25926 | https://github.com/apache/airflow/pull/28251 | fd5846d256b6d269b160deb8df67cd3d914188e0 | 74b69030efbb87e44c411b3563989d722fa20336 | "2022-08-24T04:39:14Z" | python | "2022-12-14T08:48:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,851 | ["airflow/providers/common/sql/hooks/sql.py", "tests/providers/common/sql/hooks/test_sqlparse.py", "tests/providers/databricks/hooks/test_databricks_sql.py", "tests/providers/oracle/hooks/test_oracle.py"] | PL/SQL statement stop working after upgrade common-sql to 1.1.0 | ### Apache Airflow Provider(s)
common-sql, oracle
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-oracle==3.3.0
### Apache Airflow version
2.3.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
After upgrading the common-sql provider from 1.0.0 to 1.1.0, SQL statements containing DECLARE stop working.
Using OracleProvider 3.2.0 with common-sql 1.0.0:
```
[2022-08-19, 13:16:46 -04] {oracle.py:66} INFO - Executing: DECLARE
v_sql LONG;
BEGIN
v_sql := '
create table usr_bi_cgj.dim_tarefa
(
id_tarefa NUMBER(22) not null primary key,
ds_tarefa VARCHAR2(4000) not NULL
);
';
EXECUTE IMMEDIATE v_sql;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN EXECUTE IMMEDIATE 'TRUNCATE TABLE usr_bi_cgj.dim_tarefa';
COMMIT;
END;
[2022-08-19, 13:16:46 -04] {base.py:68} INFO - Using connection ID 'bitjro' for task execution.
[2022-08-19, 13:16:46 -04] {sql.py:255} INFO - Running statement: DECLARE
v_sql LONG;
BEGIN
v_sql := '
create table usr_bi_cgj.dim_tarefa
(
id_tarefa NUMBER(22) not null primary key,
ds_tarefa VARCHAR2(4000) not NULL
);
';
EXECUTE IMMEDIATE v_sql;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN EXECUTE IMMEDIATE 'TRUNCATE TABLE usr_bi_cgj.dim_tarefa';
COMMIT;
END;, parameters: None
[2022-08-19, 13:16:46 -04] {sql.py:264} INFO - Rows affected: 0
[2022-08-19, 13:16:46 -04] {taskinstance.py:1420} INFO - Marking task as SUCCESS. dag_id=caixa_tarefa_pje, task_id=cria_temp_dim_tarefa, execution_date=20220819T080000, start_date=20220819T171646, end_date=20220819T171646
[2022-08-19, 13:16:46 -04] {local_task_job.py:156} INFO - Task exited with return code 0
```
![image](https://user-images.githubusercontent.com/226773/185792377-2c0f9190-e315-4b9c-9731-c8e57aea282c.png)
After upgrading the Oracle provider to 3.3.0 with common-sql 1.1.0, the same statement now throws an exception:
```
[2022-08-20, 14:58:14 ] {sql.py:315} INFO - Running statement: DECLARE
v_sql LONG;
BEGIN
v_sql := '
create table usr_bi_cgj.dim_tarefa
(
id_tarefa NUMBER(22) not null primary key,
ds_tarefa VARCHAR2(4000) not NULL
);
';
EXECUTE IMMEDIATE v_sql;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN EXECUTE IMMEDIATE 'TRUNCATE TABLE usr_bi_cgj.dim_tarefa';
COMMIT;
END, parameters: None
[2022-08-20, 14:58:14 ] {taskinstance.py:1909} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/oracle/operators/oracle.py", line 69, in execute
hook.run(self.sql, autocommit=self.autocommit, parameters=self.parameters)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/common/sql/hooks/sql.py", line 295, in run
self._run_command(cur, sql_statement, parameters)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/common/sql/hooks/sql.py", line 320, in _run_command
cur.execute(sql_statement)
File "/home/airflow/.local/lib/python3.7/site-packages/oracledb/cursor.py", line 378, in execute
impl.execute(self)
File "src/oracledb/impl/thin/cursor.pyx", line 121, in oracledb.thin_impl.ThinCursorImpl.execute
File "src/oracledb/impl/thin/protocol.pyx", line 375, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 376, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 369, in oracledb.thin_impl.Protocol._process_message
oracledb.exceptions.DatabaseError: ORA-06550: line 17, column 3:
PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following:
; <an identifier>
<a double-quoted delimited-identifier>
The symbol ";" was substituted for "end-of-file" to continue.
```
![image](https://user-images.githubusercontent.com/226773/185762143-4f96e425-7eda-4140-a281-e096cc7d3148.png)
### What you think should happen instead
I think stripping the trailing `;` from the statement is causing this error.
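A small standalone illustration of what I mean (my own code, not the provider's): dropping the trailing `;` is harmless for plain SQL but breaks an anonymous PL/SQL block, because Oracle needs the terminator after `END`.

```python
def naive_strip(statement: str) -> str:
    return statement.strip().rstrip(";")

print(naive_strip("SELECT 1 FROM dual;"))   # fine: SELECT 1 FROM dual
print(naive_strip("BEGIN NULL; END;"))      # broken: BEGIN NULL; END  -> PLS-00103

def safer_strip(statement: str) -> str:
    # One possible guard: keep the terminator for anonymous PL/SQL blocks.
    s = statement.strip()
    if s.upper().startswith(("DECLARE", "BEGIN")):
        return s
    return s.rstrip(";")
```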
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25851 | https://github.com/apache/airflow/pull/25855 | ccdd73ec50ab9fb9d18d1cce7a19a95fdedcf9b9 | 874a95cc17c3578a0d81c5e034cb6590a92ea310 | "2022-08-21T13:19:59Z" | python | "2022-08-21T23:51:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,836 | ["airflow/api/client/api_client.py", "airflow/api/client/json_client.py", "airflow/api/client/local_client.py", "airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | Support overriding `replace_microseconds` parameter for `airflow dags trigger` CLI command | ### Description
The `airflow dags trigger` CLI command always defaults to `replace_microseconds=True` because of the default value in the API. It would be very nice to be able to control this flag from the CLI.
### Use case/motivation
We use AWS MWAA. The exposed interface is Airflow CLI (yes, we could also ask to get a different interface from AWS MWAA, but I think this is something that was just overlooked for the CLI?), which does not support overriding `replace_microseconds` parameter when calling `airflow dags trigger` CLI command.
For the most part, our dag runs for a given dag do not happen anywhere near the same time. However, based on user behavior, they are sometimes triggered within the same second (albeit not the same microsecond). The first dag run is successfully triggered, but the second dag run fails because the `replace_microseconds` parameter wipes out the microseconds that we pass. Thus, DagRun.find_duplicates returns True for the second dag run that we're trying to trigger, and this raises the `DagRunAlreadyExists` exception.
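For reference, the knob already exists in the Python API; a sketch of the call the CLI could expose (based on my reading of the `trigger_dag` signature, and not something we can use on MWAA, which is the point of this request):

```python
from airflow.api.common.trigger_dag import trigger_dag

trigger_dag(
    dag_id="my_dag",            # illustrative dag id
    run_id=None,
    conf=None,
    execution_date=None,        # defaults to "now", microseconds included
    replace_microseconds=False, # the flag the CLI currently cannot override
)
```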
### Related issues
Not quite - they all seem to be around the experimental api and not directly related to the CLI.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25836 | https://github.com/apache/airflow/pull/27640 | c30c0b5714e4ee217735649b9405f0f79af63059 | b6013c0b8f1064c523af2d905c3f32ff1cbec421 | "2022-08-19T17:04:24Z" | python | "2022-11-26T00:07:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,833 | ["airflow/providers/amazon/aws/hooks/s3.py", "tests/providers/amazon/aws/hooks/test_s3.py"] | Airflow Amazon provider S3Hook().download_file() fail when needs encryption arguments (SSECustomerKey etc..) | ### Apache Airflow version
2.3.3
### What happened
Bug when trying to use the S3Hook to download a file from S3 with extra parameters for security like an SSECustomerKey.
The function [download_file](https://github.com/apache/airflow/blob/dd72e67524c99e34ba4c62bfb554e4caf877d5ec/airflow/providers/amazon/aws/hooks/s3.py#L854) fetches the `extra_args` from `self`, where we can specify the encryption-related security parameters as a `dict`.
But [download_file](https://github.com/apache/airflow/blob/dd72e67524c99e34ba4c62bfb554e4caf877d5ec/airflow/providers/amazon/aws/hooks/s3.py#L854) calls [get_key()](https://github.com/apache/airflow/blob/dd72e67524c99e34ba4c62bfb554e4caf877d5ec/airflow/providers/amazon/aws/hooks/s3.py#L870), which does not use these `extra_args` when calling the [load() method here](https://github.com/apache/airflow/blob/dd72e67524c99e34ba4c62bfb554e4caf877d5ec/airflow/providers/amazon/aws/hooks/s3.py#L472); this results in a `botocore.exceptions.ClientError: An error occurred (400) when calling the HeadObject operation: Bad Request` error.
This could be fixed like this:
As the [boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Object.load) says, `load()` calls [S3.Client.head_object()](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.head_object), which can handle `**kwargs` and accepts all the arguments below:
```
response = client.head_object(
Bucket='string',
IfMatch='string',
IfModifiedSince=datetime(2015, 1, 1),
IfNoneMatch='string',
IfUnmodifiedSince=datetime(2015, 1, 1),
Key='string',
Range='string',
VersionId='string',
SSECustomerAlgorithm='string',
SSECustomerKey='string',
RequestPayer='requester',
PartNumber=123,
ExpectedBucketOwner='string',
ChecksumMode='ENABLED'
)
```
An easy fix would be to pass the `extra_args` down to `get_key()` and then on to `load(**self.extra_args)`.
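A standalone sketch of what that existence check needs to do (plain boto3 with default credentials; not the hook's actual code):

```python
import boto3

def head_with_sse_c(bucket_name: str, key: str, extra_args: dict) -> dict:
    """HeadObject on an SSE-C encrypted key only succeeds when the
    SSECustomerAlgorithm/SSECustomerKey arguments are forwarded,
    which is exactly what get_key()/load() skip today."""
    client = boto3.client("s3")
    return client.head_object(Bucket=bucket_name, Key=key, **extra_args)
```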
### What you think should happen instead
The `extra_args` should be used in `get_key()` and therefore in `obj.load()`.
### How to reproduce
Try to use the S3Hook as below to download an encrypted file:
```
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
extra_args={
'SSECustomerAlgorithm': 'YOUR_ALGO',
'SSECustomerKey': YOUR_SSE_C_KEY
}
hook = S3Hook(aws_conn_id=YOUR_S3_CONNECTION, extra_args=extra_args)
hook.download_file(
key=key, bucket_name=bucket_name, local_path=local_path
)
```
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25833 | https://github.com/apache/airflow/pull/35037 | 36c5c111ec00075db30fab7c67ac1b6900e144dc | 95980a9bc50c1accd34166ba608bbe2b4ebd6d52 | "2022-08-19T16:25:16Z" | python | "2023-10-25T15:30:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,815 | ["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"] | SQLTableCheckOperator fails for Postgres | ### Apache Airflow version
2.3.3
### What happened
`SQLTableCheckOperator` fails when used with Postgres.
### What you think should happen instead
From the logs:
```
[2022-08-19, 09:28:14 UTC] {taskinstance.py:1910} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 296, in execute
records = hook.get_first(self.sql)
File "/usr/local/lib/python3.9/site-packages/airflow/hooks/dbapi.py", line 178, in get_first
cur.execute(sql)
psycopg2.errors.SyntaxError: subquery in FROM must have an alias
LINE 1: SELECT MIN(row_count_check) FROM (SELECT CASE WHEN COUNT(*) ...
^
HINT: For example, FROM (SELECT ...) [AS] foo.
```
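The generated query just needs an alias on the derived table. An illustrative form of the failing statement and its fix (table and check names taken from the DAG below, not the operator's exact generated SQL):

```sql
-- Fails on Postgres: the derived table has no alias
SELECT MIN(row_count_check) FROM (
    SELECT CASE WHEN COUNT(*) >= 3 THEN 1 ELSE 0 END AS row_count_check
    FROM employees
);

-- Works: Postgres requires "... ) AS check_table"
SELECT MIN(row_count_check) FROM (
    SELECT CASE WHEN COUNT(*) >= 3 THEN 1 ELSE 0 END AS row_count_check
    FROM employees
) AS check_table;
```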
### How to reproduce
```python
import pendulum
from datetime import timedelta
from airflow import DAG
from airflow.decorators import task
from airflow.providers.common.sql.operators.sql import SQLTableCheckOperator
from airflow.providers.postgres.operators.postgres import PostgresOperator
_POSTGRES_CONN = "postgresdb"
_TABLE_NAME = "employees"
default_args = {
"owner": "cs",
"retries": 3,
"retry_delay": timedelta(seconds=15),
}
with DAG(
dag_id="sql_data_quality",
start_date=pendulum.datetime(2022, 8, 1, tz="UTC"),
schedule_interval=None,
) as dag:
create_table = PostgresOperator(
task_id="create_table",
postgres_conn_id=_POSTGRES_CONN,
sql=f"""
CREATE TABLE IF NOT EXISTS {_TABLE_NAME} (
employee_name VARCHAR NOT NULL,
employment_year INT NOT NULL
);
"""
)
populate_data = PostgresOperator(
task_id="populate_data",
postgres_conn_id=_POSTGRES_CONN,
sql=f"""
INSERT INTO {_TABLE_NAME} VALUES ('Adam', 2021);
INSERT INTO {_TABLE_NAME} VALUES ('Chris', 2021);
INSERT INTO {_TABLE_NAME} VALUES ('Frank', 2021);
INSERT INTO {_TABLE_NAME} VALUES ('Fritz', 2021);
INSERT INTO {_TABLE_NAME} VALUES ('Magda', 2022);
INSERT INTO {_TABLE_NAME} VALUES ('Phil', 2021);
""",
)
check_row_count = SQLTableCheckOperator(
task_id="check_row_count",
conn_id=_POSTGRES_CONN,
table=_TABLE_NAME,
checks={
"row_count_check": {"check_statement": "COUNT(*) >= 3"}
},
)
drop_table = PostgresOperator(
task_id="drop_table",
trigger_rule="all_done",
postgres_conn_id=_POSTGRES_CONN,
sql=f"DROP TABLE {_TABLE_NAME};",
)
create_table >> populate_data >> check_row_count >> drop_table
```
### Operating System
macOS
### Versions of Apache Airflow Providers
`apache-airflow-providers-common-sql==1.0.0`
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25815 | https://github.com/apache/airflow/pull/25821 | b535262837994ef3faf3993da8f246cce6cfd3d2 | dd72e67524c99e34ba4c62bfb554e4caf877d5ec | "2022-08-19T09:51:42Z" | python | "2022-08-19T15:08:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,781 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | BigQueryGetDataOperator does not support passing project_id | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.3.0
### Apache Airflow version
2.3.2
### Operating System
MacOS
### Deployment
Other
### Deployment details
_No response_
### What happened
It is not possible to pass project_id as an argument when using `BigQueryGetDataOperator`; the operator internally falls back to the `default` project id.
### What you think should happen instead
The operator should let developers pass project_id when needed.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25781 | https://github.com/apache/airflow/pull/25782 | 98a7701942c683f3126f9c4f450c352b510a2734 | fc6dfa338a76d02a426e2b7f0325d37ea5e95ac3 | "2022-08-18T04:40:01Z" | python | "2022-08-20T21:14:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,775 | ["airflow/models/abstractoperator.py", "airflow/models/taskmixin.py", "tests/models/test_baseoperator.py"] | XComs from another task group fail to populate dynamic task mapping metadata | ### Apache Airflow version
2.3.3
### What happened
When a task returns a mappable Xcom within a task group, the dynamic task mapping feature (via `.expand`) causes the Airflow Scheduler to infinitely loop with a runtime error:
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 921, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1163, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 524, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 614, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 600, in _resolve_map_lengths
raise RuntimeError(f"Failed to populate all mapping metadata; missing: {keys}")
RuntimeError: Failed to populate all mapping metadata; missing: 'x'
```
### What you think should happen instead
Xcoms from different task groups should be mappable within other group scopes.
### How to reproduce
```
from airflow import DAG
from airflow.decorators import task
from airflow.utils.task_group import TaskGroup
import pendulum
@task
def enumerate(x):
return [i for i in range(x)]
@task
def addOne(x):
return x+1
with DAG(
dag_id="TaskGroupMappingBug",
schedule_interval=None,
start_date=pendulum.now().subtract(days=1),
) as dag:
with TaskGroup(group_id="enumerateNine"):
y = enumerate(9)
with TaskGroup(group_id="add"):
# airflow scheduler throws error here so this is never reached
z = addOne.expand(x=y)
```
### Operating System
linux/amd64 via Docker (apache/airflow:2.3.3-python3.9)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
docker-compose version 1.29.2, build 5becea4c
Docker Engine v20.10.14
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25775 | https://github.com/apache/airflow/pull/25793 | 6e66dd7776707936345927f8fccee3ddb7f23a2b | 5c48ed19bd3b554f9c3e881a4d9eb61eeba4295b | "2022-08-17T18:42:22Z" | python | "2022-08-19T09:55:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,765 | ["airflow/jobs/scheduler_job.py"] | Deadlock in Scheduler Loop when Updating Dag Run | ### Apache Airflow version
2.3.3
### What happened
We have been getting occasional deadlock errors in our main scheduler loop that cause the scheduler to error out of the loop and terminate. The full stack trace of the error is below:
```
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: [2022-08-13 00:01:17,377] {{scheduler_job.py:768}} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: Traceback (most recent call last):
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1800, in _execute_context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: cursor, statement, parameters, context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 193, in do_executemany
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: rowcount = cursor.executemany(statement, parameters)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 239, in executemany
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.rowcount = sum(self.execute(query, arg) for arg in args)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 239, in <genexpr>
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.rowcount = sum(self.execute(query, arg) for arg in args)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: res = self._query(query)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: db.query(q)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/connections.py", line 259, in query
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: _mysql.connection.query(self, query)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: The above exception was the direct cause of the following exception:
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: Traceback (most recent call last):
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self._run_scheduler_loop()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: num_queued_tis = self._do_scheduling(session)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 924, in _do_scheduling
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: guard.commit()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/airflow/utils/sqlalchemy.py", line 296, in commit
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.session.commit()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self._transaction.commit(_to_root=self.future)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 829, in commit
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self._prepare_impl()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.session.flush()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3383, in flush
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self._flush(objects)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3523, in _flush
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: transaction.rollback(_capture_exception=True)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: with_traceback=exc_tb,
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: raise exception
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3483, in _flush
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: flush_context.execute()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: rec.execute(self)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 633, in execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: uow,
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 242, in save_obj
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: update,
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1002, in _emit_update_statements
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: statement, multiparams, execution_options=execution_options
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: return meth(self, args_10style, kwargs_10style, execution_options)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 333, in _execute_on_connection
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self, multiparams, params, execution_options
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1508, in _execute_clauseelement
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: cache_hit=cache_hit,
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1863, in _execute_context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: e, statement, parameters, cursor, context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2044, in _handle_dbapi_exception
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: sqlalchemy_exception, with_traceback=exc_info[2], from_=e
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: raise exception
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1800, in _execute_context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: cursor, statement, parameters, context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 193, in do_executemany
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: rowcount = cursor.executemany(statement, parameters)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 239, in executemany
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.rowcount = sum(self.execute(query, arg) for arg in args)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 239, in <genexpr>
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.rowcount = sum(self.execute(query, arg) for arg in args)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: res = self._query(query)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: db.query(q)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/connections.py", line 259, in query
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: _mysql.connection.query(self, query)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: [SQL: UPDATE dag_run SET last_scheduling_decision=%s WHERE dag_run.id = %s]
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: [parameters: ((datetime.datetime(2022, 8, 13, 0, 1, 17, 280720), 9), (datetime.datetime(2022, 8, 13, 0, 1, 17, 213661), 11), (datetime.datetime(2022, 8, 13, 0, 1, 17, 40686), 12))]
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
```
It appears the issue occurs when attempting to update the `last_scheduling_decision` field of the `dag_run` table, but we are unsure why this would cause a deadlock. This issue has only been occurring since we upgraded to version 2.3.3; it was not an issue with version 2.2.4.
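For what it's worth, the failing statement is a plain bulk UPDATE issued when the scheduler commits its scheduling decisions. Below is a minimal sketch of how such an update could be wrapped in a deadlock-aware retry, assuming SQLAlchemy on MySQL; the `run_with_retry` helper, retry count, and delay are illustrative and are not Airflow's actual scheduler code:
```
import time

from sqlalchemy.exc import OperationalError

MYSQL_DEADLOCK_CODE = 1213  # "Deadlock found when trying to get lock"


def run_with_retry(session, fn, retries=3, delay=1.0):
    """Sketch only: run fn(session) and commit, retrying on MySQL deadlocks.

    This is not Airflow's scheduler code; it just illustrates the retry
    pattern for error 1213 seen in the traceback above.
    """
    for attempt in range(1, retries + 1):
        try:
            fn(session)
            session.commit()
            return
        except OperationalError as err:
            session.rollback()
            # err.orig is the underlying MySQLdb exception, e.g. (1213, '...').
            code = getattr(err.orig, "args", (None,))[0]
            if code != MYSQL_DEADLOCK_CODE or attempt == retries:
                raise
            time.sleep(delay)
```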
### What you think should happen instead
The scheduler should not hit deadlocks that cause it to exit its main loop and terminate. I would expect the scheduler loop to be running continuously, which is not the case if a deadlock occurs in this loop.
### How to reproduce
This is occurring for us when we run a `LocalExecutor` with smart sensors enabled (2 shards). We only have 3 other daily DAGs, which run at different times, and the error seems to occur right when one of those DAGs is due to start running. After we restart the scheduler following that first deadlock, it seems to run fine for the rest of the day, but the next day, when it comes time to start the DAG again, another deadlock occurs.
### Operating System
Ubuntu 18.04.6 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==4.1.0
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-mysql==3.1.0
apache-airflow-providers-sftp==4.0.0
apache-airflow-providers-sqlite==3.1.0
apache-airflow-providers-ssh==3.1.0
### Deployment
Other
### Deployment details
We deploy Airflow to 2 different EC2 instances. The scheduler lives on one EC2 instance and the webserver lives on a separate EC2 instance. We only run a single scheduler.
### Anything else
This issue occurs once a day, when the first of our daily DAGs gets triggered. When we restart the scheduler after the deadlock, it typically works fine for the rest of the day.
We use a `LocalExecutor` with a `PARALLELISM` of 32 and smart sensors enabled with 2 shards.
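In configuration terms, that setup corresponds roughly to the following airflow.cfg keys (a sketch, not a copy of our actual configuration):
```
[core]
executor = LocalExecutor
parallelism = 32

[smart_sensor]
use_smart_sensor = True
shards = 2
```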
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25765 | https://github.com/apache/airflow/pull/26347 | f977804255ca123bfea24774328ba0a9ca63688b | 0da49935000476b1d1941b63d0d66d3c58d64fea | "2022-08-17T14:04:17Z" | python | "2022-10-02T03:33:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,743 | ["airflow/config_templates/airflow_local_settings.py"] | DeprecationWarning: Passing filename_template to FileTaskHandler is deprecated and has no effect | ### Apache Airflow version
2.3.3
### What happened
After upgrading to or installing Airflow 2.3.3, remote_logging in airflow.cfg can't be set to true without creating a deprecation warning.
I'm using remote logging to an s3 bucket.
It doesn't matter which version of **apache-airflow-providers-amazon** I have installed.
When using systemd units to start the Airflow components, the webserver spams the deprecation warning every second.
Tested with Python 3.10 and 3.7.3
### What you think should happen instead
When using remote logging, Airflow should not execute a deprecated code path every second in the background.
### How to reproduce
You can quickly set up a Python virtual environment on a machine of your choice.
After that, install airflow and apache-airflow-providers-amazon via pip.
Then change the logging section in airflow.cfg:
**[logging]
remote_logging = True**
Create a testdag.py containing at least: **from airflow import DAG**
Run it with Python to see the errors:
python testdag.py
Hint: some more DeprecationWarnings will appear because the default airflow.cfg that gets created when installing Airflow is not up to date.
The deprecation warning you should see when setting remote_logging to true is:
`.../lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py:52 DeprecationWarning: Passing filename_template to FileTaskHandler is deprecated and has no effect`
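For reference, the remote-logging setup that triggers this path looks roughly like the following; the bucket name and connection id are placeholders, not values from this environment:
```
[logging]
# Ship task logs to S3 in addition to the local filesystem.
remote_logging = True
# Placeholder bucket/prefix -- replace with your own.
remote_base_log_folder = s3://my-airflow-logs/logs
# Airflow connection (Amazon Web Services type) used for writing logs.
remote_log_conn_id = aws_default
encrypt_s3_logs = False
```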
### Operating System
Debian GNU/Linux 10 (buster) and also tested Fedora release 36 (Thirty Six)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 4.0.0
### Deployment
Virtualenv installation
### Deployment details
Running a small setup: 2 virtual machines, with Airflow installed via pip inside a Python virtual environment.
### Anything else
The problem occurs on every DAG run, and it gets logged every second in the journal produced by the webserver systemd unit.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25743 | https://github.com/apache/airflow/pull/25764 | 0267a47e5abd104891e0ec6c741b5bed208eef1e | da616a1421c71c8ec228fefe78a0a90263991423 | "2022-08-16T14:26:29Z" | python | "2022-08-19T14:13:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,718 | ["airflow/providers/google/cloud/hooks/bigquery_dts.py"] | Incorrect config name generated for BigQueryDeleteDataTransferConfigOperator | ### Apache Airflow version
2.3.3
### What happened
When we try to delete a BigQuery transfer config using BigQueryDeleteDataTransferConfigOperator, we are unable to find the config, as the generated transfer config name is erroneous.
As a result, although a transfer config id (that exists) is passed to the operator, we get an error saying that the transfer config doesn't exist.
### What you think should happen instead
On further analysis, it was revealed that, in the bigquery_dts hook, the project name is incorrectly created as follows on line 171:
`project = f"/{project}/locations/{self.location}"`
That is, there's an extra / prefixed to the project.
Removing the extra / will fix this bug.
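In other words, the parent path should be built without the leading slash, so that the final resource name takes the documented form `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`. A small illustrative helper (not the hook's actual code):
```
def transfer_config_parent(project_id: str, location: str) -> str:
    """Illustrative only: parent path for location-scoped transfer configs."""
    # No stray leading slash before the project id.
    return f"projects/{project_id}/locations/{location}"


print(transfer_config_parent("my-project", "europe"))
# -> projects/my-project/locations/europe
```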
### How to reproduce
1. Create a transfer config in the BigQuery Data Transfers UI, or use the operator BigQueryCreateDataTransferOperator (in a project located in Europe).
2. Try to delete the transfer config using the BigQueryDeleteDataTransferConfigOperator by passing the location of the project along with the transfer config id. This step will throw the error.
### Operating System
Windows 11
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25718 | https://github.com/apache/airflow/pull/25719 | c6e9cdb4d013fec330deb79810dbb735d2c01482 | fa0cb363b860b553af2ef9530ea2de706bd16e5d | "2022-08-15T03:02:59Z" | python | "2022-10-02T00:56:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,712 | ["airflow/providers/postgres/provider.yaml", "generated/provider_dependencies.json"] | postgres provider: use non-binary psycopg2 (recommended for production use) | ### Apache Airflow Provider(s)
postgres
### Versions of Apache Airflow Providers
apache-airflow-providers-postgres==5.0.0
### Apache Airflow version
2.3.3
### Operating System
Debian 11 (airflow docker image)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
The psycopg2-binary package is installed.
### What you think should happen instead
The psycopg2 (non-binary, source) package is installed.
According to the [psycopg2 docs](https://www.psycopg.org/docs/install.html#psycopg-vs-psycopg-binary), (emphasis theirs) "**For production use you are advised to use the source distribution.**".
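As a manual workaround on an existing environment, the wheel can be swapped for the source distribution; a rough sketch, assuming build prerequisites (a C compiler and the libpq headers, e.g. libpq-dev) are available in the image:
```
# Remove the pre-built wheel that the provider currently pulls in.
pip uninstall -y psycopg2-binary
# Install the source distribution instead (builds against the local libpq).
pip install psycopg2
```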
### How to reproduce
Either
```
docker run -it apache/airflow:2.3.3-python3.10
pip freeze |grep -E '(postgres|psycopg2)'
```
Or
```
docker run -it apache/airflow:slim-2.3.3-python3.10
curl -O https://raw.githubusercontent.com/apache/airflow/constraints-2.3.3/constraints-3.10.txt
pip install -c constraints-3.10.txt apache-airflow-providers-postgres
pip freeze |grep -E '(postgres|psycopg2)'
```
Either way, the output is:
```
apache-airflow-providers-postgres==5.0.0
psycopg2-binary==2.9.3
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25712 | https://github.com/apache/airflow/pull/25710 | 28165eef2ac26c66525849e7bebb55553ea5a451 | 14d56a5a9e78580c53cf85db504464daccffe21c | "2022-08-14T10:23:53Z" | python | "2022-08-23T15:08:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,698 | ["airflow/models/mappedoperator.py", "tests/jobs/test_backfill_job.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Backfill mode with mapped tasks: "Failed to populate all mapping metadata" | ### Apache Airflow version
2.3.3
### What happened
I was backfilling some DAGs that use dynamic tasks when I got an exception like the following:
```
Traceback (most recent call last):
File "/opt/conda/envs/production/bin/airflow", line 11, in <module>
sys.exit(main())
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py", line 107, in dag_backfill
dag.run(
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/dag.py", line 2288, in run
job.run()
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/backfill_job.py", line 847, in _execute
self._execute_dagruns(
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/backfill_job.py", line 737, in _execute_dagruns
processed_dag_run_dates = self._process_backfill_task_instances(
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/backfill_job.py", line 612, in _process_backfill_task_instances
for node, run_id, new_mapped_tis, max_map_index in self._manage_executor_state(
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/backfill_job.py", line 270, in _manage_executor_state
new_tis, num_mapped_tis = node.expand_mapped_task(ti.run_id, session=session)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 614, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 600, in _resolve_map_lengths
raise RuntimeError(f"Failed to populate all mapping metadata; missing: {keys}")
RuntimeError: Failed to populate all mapping metadata; missing: 'x'
```
Digging further, it appears this always happens if the task used as input to an `.expand` raises an Exception. Airflow doesn't handle this exception gracefully like it does with exceptions in "normal" tasks, which can lead to other errors from deeper within Airflow. This also means that since this is not a "typical" failure case, things like `--rerun-failed-tasks` do not work as expected.
### What you think should happen instead
Airflow should fail gracefully if exceptions are raised in dynamic task generators.
### How to reproduce
```
#!/usr/bin/env python3
import datetime
import logging
from airflow.decorators import dag, task
logger = logging.getLogger(__name__)
@dag(
schedule_interval='@daily',
start_date=datetime.datetime(2022, 8, 12),
default_args={
'retries': 5,
'retry_delay': 5.0,
},
)
def test_backfill():
@task
def get_tasks(ti=None):
logger.info(f'{ti.try_number=}')
if ti.try_number < 3:
raise RuntimeError('')
return ['a', 'b', 'c']
@task
def do_stuff(x=None, ti=None):
logger.info(f'do_stuff: {x=}, {ti.try_number=}')
if ti.try_number < 3:
raise RuntimeError('')
do_stuff.expand(x=do_stuff.expand(x=get_tasks()))
do_stuff() >> do_stuff() # this works as expected
dag = test_backfill()
if __name__ == '__main__':
dag.cli()
```
```
airflow dags backfill test_backfill -s 2022-08-05 -e 2022-08-07 --rerun-failed-tasks
```
You can repeat the `backfill` command multiple times to slowly make progress through the DAG. Things will eventually succeed (assuming the exception that triggers this bug stops being raised), but obviously this is a pain when trying to backfill a non-trivial number of DAG Runs.
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
Standalone
### Anything else
I was able to reproduce this both with SQLite + `SequentialExecutor` as well as with Postgres + `LocalExecutor`.
I haven't yet been able to reproduce this outside of `backfill` mode.
Possibly related since they mention the same exception text:
* #23533
* #23642
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25698 | https://github.com/apache/airflow/pull/25757 | d51957165b2836fe0006d318c299c149fb5d35b0 | 728a3ce5c2f5abdd7aa01864a861ca18b1f27c1b | "2022-08-12T18:04:47Z" | python | "2022-08-19T09:45:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,681 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | Scheduler enters crash loop in certain cases with dynamic task mapping | ### Apache Airflow version
2.3.3
### What happened
The scheduler crashed when attempting to queue a dynamically mapped task which is directly downstream and only dependent on another dynamically mapped task.
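For illustration, the DAG shape described here is essentially one mapped task feeding another mapped task; a minimal sketch of that structure, assuming the TaskFlow API (task names and values are made up, not taken from the log below):
```
import pendulum

from airflow.decorators import dag, task


@dag(
    schedule_interval="@hourly",
    start_date=pendulum.datetime(2022, 8, 11, tz="UTC"),
    catchup=False,
)
def bug_test_sketch():
    @task
    def make_items():
        # Illustrative upstream values; the run in the log expands into 20 task instances.
        return list(range(20))

    @task
    def do_something(item):
        return item

    @task
    def do_something_else(item):
        return item

    # A mapped task directly downstream of, and only dependent on, another mapped task.
    do_something_else.expand(item=do_something.expand(item=make_items()))


dag = bug_test_sketch()
```
The second `.expand` receives the mapped output of the first, which is the situation the scheduler appears to choke on when queuing the downstream task.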
<details><summary>scheduler.log</summary>
```
scheduler | ____________ _____________
scheduler | ____ |__( )_________ __/__ /________ __
scheduler | ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
scheduler | ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
scheduler | _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
scheduler | [2022-08-11 08:41:10,922] {scheduler_job.py:708} INFO - Starting the scheduler
scheduler | [2022-08-11 08:41:10,923] {scheduler_job.py:713} INFO - Processing each file at most -1 times
scheduler | [2022-08-11 08:41:10,926] {executor_loader.py:105} INFO - Loaded executor: SequentialExecutor
scheduler | [2022-08-11 08:41:10,929] {manager.py:160} INFO - Launched DagFileProcessorManager with pid: 52386
scheduler | [2022-08-11 08:41:10,932] {scheduler_job.py:1233} INFO - Resetting orphaned tasks for active dag runs
scheduler | [2022-08-11 08:41:11 -0600] [52385] [INFO] Starting gunicorn 20.1.0
scheduler | [2022-08-11 08:41:11 -0600] [52385] [INFO] Listening at: http://0.0.0.0:8793 (52385)
scheduler | [2022-08-11 08:41:11 -0600] [52385] [INFO] Using worker: sync
scheduler | [2022-08-11 08:41:11 -0600] [52387] [INFO] Booting worker with pid: 52387
scheduler | [2022-08-11 08:41:11,656] {settings.py:55} INFO - Configured default timezone Timezone('UTC')
scheduler | [2022-08-11 08:41:11,659] {manager.py:406} WARNING - Because we cannot use more than 1 thread (parsing_processes = 2) when using sqlite. So we set parallelism to 1.
scheduler | [2022-08-11 08:41:11 -0600] [52388] [INFO] Booting worker with pid: 52388
scheduler | [2022-08-11 08:41:28,118] {dag.py:2968} INFO - Setting next_dagrun for bug_test to 2022-08-11T14:00:00+00:00, run_after=2022-08-11T15:00:00+00:00
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:353} INFO - 20 tasks up for execution:
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=0 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=1 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=2 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=3 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=4 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=5 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=6 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=7 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=8 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=9 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=10 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=11 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=12 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=13 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=14 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=15 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]>
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 0/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 1/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 2/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 3/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 4/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 5/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 6/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 7/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 8/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 9/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 10/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 11/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 12/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 13/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 14/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 15/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 16/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:425} INFO - Not executing <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]> since the number of tasks running or queued from DAG bug_test is >= to the DAG's max_active_tasks limit of 16
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 16/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:425} INFO - Not executing <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]> since the number of tasks running or queued from DAG bug_test is >= to the DAG's max_active_tasks limit of 16
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 16/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:425} INFO - Not executing <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]> since the number of tasks running or queued from DAG bug_test is >= to the DAG's max_active_tasks limit of 16
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 16/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:425} INFO - Not executing <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]> since the number of tasks running or queued from DAG bug_test is >= to the DAG's max_active_tasks limit of 16
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:504} INFO - Setting the following tasks to queued state:
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=0 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=1 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=2 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=3 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=4 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=5 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=6 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=7 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=8 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=9 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=10 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=11 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=12 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=13 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=14 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=15 [scheduled]>
scheduler | [2022-08-11 08:41:28,164] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=0) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,165] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '0']
scheduler | [2022-08-11 08:41:28,165] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=1) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,165] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '1']
scheduler | [2022-08-11 08:41:28,165] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=2) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,165] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '2']
scheduler | [2022-08-11 08:41:28,165] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=3) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,165] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '3']
scheduler | [2022-08-11 08:41:28,165] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=4) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,166] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '4']
scheduler | [2022-08-11 08:41:28,166] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=5) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,166] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '5']
scheduler | [2022-08-11 08:41:28,166] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=6) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,166] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '6']
scheduler | [2022-08-11 08:41:28,166] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=7) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,166] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '7']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=8) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '8']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=9) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '9']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=10) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '10']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=11) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '11']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=12) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '12']
scheduler | [2022-08-11 08:41:28,168] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=13) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,168] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '13']
scheduler | [2022-08-11 08:41:28,168] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=14) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,168] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '14']
scheduler | [2022-08-11 08:41:28,168] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=15) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,168] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '15']
scheduler | [2022-08-11 08:41:28,170] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '0']
scheduler | [2022-08-11 08:41:29,131] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:29,227] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=0 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:29,584] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '1']
scheduler | [2022-08-11 08:41:30,492] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:30,593] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=1 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:30,969] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '2']
scheduler | [2022-08-11 08:41:31,852] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:31,940] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=2 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:32,308] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '3']
scheduler | [2022-08-11 08:41:33,199] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:33,289] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=3 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:33,656] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '4']
scheduler | [2022-08-11 08:41:34,535] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:34,631] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=4 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:35,013] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '5']
scheduler | [2022-08-11 08:41:35,928] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:36,024] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=5 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:36,393] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '6']
scheduler | [2022-08-11 08:41:37,296] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:37,384] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=6 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:37,758] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '7']
scheduler | [2022-08-11 08:41:38,642] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:38,732] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=7 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:39,113] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '8']
scheduler | [2022-08-11 08:41:39,993] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:40,086] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=8 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:40,461] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '9']
scheduler | [2022-08-11 08:41:41,383] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:41,473] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=9 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:41,865] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '10']
scheduler | [2022-08-11 08:41:42,761] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:42,858] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=10 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:43,236] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '11']
scheduler | [2022-08-11 08:41:44,124] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:44,222] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=11 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:44,654] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '12']
scheduler | [2022-08-11 08:41:45,545] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:45,635] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=12 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:45,998] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '13']
scheduler | [2022-08-11 08:41:46,867] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:46,955] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=13 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:47,386] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '14']
scheduler | [2022-08-11 08:41:48,270] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:48,362] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=14 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:48,718] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '15']
scheduler | [2022-08-11 08:41:49,569] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:49,669] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=15 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:50,022] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,022] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,022] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,022] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=0, run_start_date=2022-08-11 14:41:29.255370+00:00, run_end_date=2022-08-11 14:41:29.390095+00:00, run_duration=0.134725, state=success, executor_state=success, try_number=1, max_tries=0, job_id=5, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52421
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=1, run_start_date=2022-08-11 14:41:30.628702+00:00, run_end_date=2022-08-11 14:41:30.768539+00:00, run_duration=0.139837, state=success, executor_state=success, try_number=1, max_tries=0, job_id=6, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52423
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=2, run_start_date=2022-08-11 14:41:31.968933+00:00, run_end_date=2022-08-11 14:41:32.112968+00:00, run_duration=0.144035, state=success, executor_state=success, try_number=1, max_tries=0, job_id=7, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52425
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=3, run_start_date=2022-08-11 14:41:33.318972+00:00, run_end_date=2022-08-11 14:41:33.458203+00:00, run_duration=0.139231, state=success, executor_state=success, try_number=1, max_tries=0, job_id=8, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52429
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=4, run_start_date=2022-08-11 14:41:34.663829+00:00, run_end_date=2022-08-11 14:41:34.811273+00:00, run_duration=0.147444, state=success, executor_state=success, try_number=1, max_tries=0, job_id=9, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52437
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=5, run_start_date=2022-08-11 14:41:36.056658+00:00, run_end_date=2022-08-11 14:41:36.203243+00:00, run_duration=0.146585, state=success, executor_state=success, try_number=1, max_tries=0, job_id=10, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52440
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=6, run_start_date=2022-08-11 14:41:37.412705+00:00, run_end_date=2022-08-11 14:41:37.550794+00:00, run_duration=0.138089, state=success, executor_state=success, try_number=1, max_tries=0, job_id=11, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52442
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=7, run_start_date=2022-08-11 14:41:38.761691+00:00, run_end_date=2022-08-11 14:41:38.897424+00:00, run_duration=0.135733, state=success, executor_state=success, try_number=1, max_tries=0, job_id=12, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52446
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=8, run_start_date=2022-08-11 14:41:40.119057+00:00, run_end_date=2022-08-11 14:41:40.262712+00:00, run_duration=0.143655, state=success, executor_state=success, try_number=1, max_tries=0, job_id=13, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52450
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=9, run_start_date=2022-08-11 14:41:41.502857+00:00, run_end_date=2022-08-11 14:41:41.641680+00:00, run_duration=0.138823, state=success, executor_state=success, try_number=1, max_tries=0, job_id=14, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52452
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=10, run_start_date=2022-08-11 14:41:42.889206+00:00, run_end_date=2022-08-11 14:41:43.030804+00:00, run_duration=0.141598, state=success, executor_state=success, try_number=1, max_tries=0, job_id=15, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52454
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=11, run_start_date=2022-08-11 14:41:44.255197+00:00, run_end_date=2022-08-11 14:41:44.413457+00:00, run_duration=0.15826, state=success, executor_state=success, try_number=1, max_tries=0, job_id=16, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52461
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=12, run_start_date=2022-08-11 14:41:45.665373+00:00, run_end_date=2022-08-11 14:41:45.803094+00:00, run_duration=0.137721, state=success, executor_state=success, try_number=1, max_tries=0, job_id=17, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52463
scheduler | [2022-08-11 08:41:50,038] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=13, run_start_date=2022-08-11 14:41:46.988348+00:00, run_end_date=2022-08-11 14:41:47.159584+00:00, run_duration=0.171236, state=success, executor_state=success, try_number=1, max_tries=0, job_id=18, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52465
scheduler | [2022-08-11 08:41:50,038] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=14, run_start_date=2022-08-11 14:41:48.393004+00:00, run_end_date=2022-08-11 14:41:48.533408+00:00, run_duration=0.140404, state=success, executor_state=success, try_number=1, max_tries=0, job_id=19, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52472
scheduler | [2022-08-11 08:41:50,038] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=15, run_start_date=2022-08-11 14:41:49.699253+00:00, run_end_date=2022-08-11 14:41:49.833084+00:00, run_duration=0.133831, state=success, executor_state=success, try_number=1, max_tries=0, job_id=20, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52476
scheduler | [2022-08-11 08:41:51,632] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=0 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=1 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=2 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=3 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=4 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=5 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=6 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=7 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=8 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=9 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=10 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=11 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=12 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=13 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=14 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=15 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]>'
scheduler | [2022-08-11 08:41:51,635] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]>'
scheduler | [2022-08-11 08:41:51,636] {dagrun.py:937} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something_else scheduled__2022-08-11T13:00:00+00:00 [None]>'
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:353} INFO - 4 tasks up for execution:
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]>
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:418} INFO - DAG bug_test has 0/16 running and queued tasks
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:418} INFO - DAG bug_test has 1/16 running and queued tasks
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:418} INFO - DAG bug_test has 2/16 running and queued tasks
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:418} INFO - DAG bug_test has 3/16 running and queued tasks
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:504} INFO - Setting the following tasks to queued state:
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]>
scheduler | [2022-08-11 08:41:51,690] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=16) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:51,690] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '16']
scheduler | [2022-08-11 08:41:51,690] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=17) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:51,690] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '17']
scheduler | [2022-08-11 08:41:51,690] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=18) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:51,690] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '18']
scheduler | [2022-08-11 08:41:51,690] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=19) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:51,690] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '19']
scheduler | [2022-08-11 08:41:51,692] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '16']
scheduler | [2022-08-11 08:41:52,532] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:52,620] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:53,037] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '17']
scheduler | [2022-08-11 08:41:53,907] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:53,996] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:54,427] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '18']
scheduler | [2022-08-11 08:41:55,305] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:55,397] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:55,816] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '19']
scheduler | [2022-08-11 08:41:56,726] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:56,824] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [queued]> on host somehost.com
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlite3.IntegrityError: UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler |
scheduler | The above exception was the direct cause of the following exception:
scheduler |
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/bin/airflow", line 8, in <module>
scheduler | sys.exit(main())
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
scheduler | args.func(args)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
scheduler | return f(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 377, in task_run
scheduler | _run_task_by_selected_method(args, dag, ti)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 183, in _run_task_by_selected_method
scheduler | _run_task_by_local_task_job(args, ti)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 241, in _run_task_by_local_task_job
scheduler | run_job.run()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
scheduler | self._execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 133, in _execute
scheduler | self.handle_task_exit(return_code)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 171, in handle_task_exit
scheduler | self._run_mini_scheduler_on_child_tasks()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
scheduler | return func(*args, session=session, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 261, in _run_mini_scheduler_on_child_tasks
scheduler | info = dag_run.task_instance_scheduling_decisions(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
scheduler | schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
scheduler | expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 683, in expand_mapped_task
scheduler | session.flush()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
scheduler | self._flush(objects)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
scheduler | transaction.rollback(_capture_exception=True)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
scheduler | compat.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
scheduler | flush_context.execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
scheduler | rec.execute(self)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
scheduler | util.preloaded.orm_persistence.save_obj(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 236, in save_obj
scheduler | _emit_update_statements(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1000, in _emit_update_statements
scheduler | c = connection._execute_20(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20
scheduler | return meth(self, args_10style, kwargs_10style, execution_options)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
scheduler | return connection._execute_clauseelement(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1481, in _execute_clauseelement
scheduler | ret = self._execute_context(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1845, in _execute_context
scheduler | self._handle_dbapi_exception(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2026, in _handle_dbapi_exception
scheduler | util.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler | [SQL: UPDATE task_instance SET map_index=? WHERE task_instance.task_id = ? AND task_instance.dag_id = ? AND task_instance.run_id = ? AND task_instance.map_index = ?]
scheduler | [parameters: (0, 'do_something_else', 'bug_test', 'scheduled__2022-08-11T13:00:00+00:00', -1)]
scheduler | (Background on this error at: https://sqlalche.me/e/14/gkpj)
scheduler | [2022-08-11 08:41:57,311] {sequential_executor.py:66} ERROR - Failed to execute task Command '['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '19']' returned non-zero exit status 1..
scheduler | [2022-08-11 08:41:57,313] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:57,313] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:57,313] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:57,313] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status failed for try_number 1
scheduler | [2022-08-11 08:41:57,321] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=16, run_start_date=2022-08-11 14:41:52.649415+00:00, run_end_date=2022-08-11 14:41:52.787286+00:00, run_duration=0.137871, state=success, executor_state=success, try_number=1, max_tries=0, job_id=21, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:51.688924+00:00, queued_by_job_id=4, pid=52479
scheduler | [2022-08-11 08:41:57,321] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=17, run_start_date=2022-08-11 14:41:54.027712+00:00, run_end_date=2022-08-11 14:41:54.170371+00:00, run_duration=0.142659, state=success, executor_state=success, try_number=1, max_tries=0, job_id=22, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:51.688924+00:00, queued_by_job_id=4, pid=52484
scheduler | [2022-08-11 08:41:57,321] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=18, run_start_date=2022-08-11 14:41:55.426712+00:00, run_end_date=2022-08-11 14:41:55.566833+00:00, run_duration=0.140121, state=success, executor_state=success, try_number=1, max_tries=0, job_id=23, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:51.688924+00:00, queued_by_job_id=4, pid=52488
scheduler | [2022-08-11 08:41:57,321] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=19, run_start_date=2022-08-11 14:41:56.859387+00:00, run_end_date=2022-08-11 14:41:57.018604+00:00, run_duration=0.159217, state=success, executor_state=failed, try_number=1, max_tries=0, job_id=24, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:51.688924+00:00, queued_by_job_id=4, pid=52490
scheduler | [2022-08-11 08:41:57,403] {scheduler_job.py:768} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlite3.IntegrityError: UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler |
scheduler | The above exception was the direct cause of the following exception:
scheduler |
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
scheduler | self._run_scheduler_loop()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
scheduler | num_queued_tis = self._do_scheduling(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 921, in _do_scheduling
scheduler | callback_to_run = self._schedule_dag_run(dag_run, session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1163, in _schedule_dag_run
scheduler | schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 524, in update_state
scheduler | info = self.task_instance_scheduling_decisions(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
scheduler | schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
scheduler | expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 683, in expand_mapped_task
scheduler | session.flush()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
scheduler | self._flush(objects)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
scheduler | transaction.rollback(_capture_exception=True)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
scheduler | compat.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
scheduler | flush_context.execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
scheduler | rec.execute(self)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
scheduler | util.preloaded.orm_persistence.save_obj(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 236, in save_obj
scheduler | _emit_update_statements(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1000, in _emit_update_statements
scheduler | c = connection._execute_20(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20
scheduler | return meth(self, args_10style, kwargs_10style, execution_options)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
scheduler | return connection._execute_clauseelement(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1481, in _execute_clauseelement
scheduler | ret = self._execute_context(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1845, in _execute_context
scheduler | self._handle_dbapi_exception(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2026, in _handle_dbapi_exception
scheduler | util.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler | [SQL: UPDATE task_instance SET map_index=? WHERE task_instance.task_id = ? AND task_instance.dag_id = ? AND task_instance.run_id = ? AND task_instance.map_index = ?]
scheduler | [parameters: (0, 'do_something_else', 'bug_test', 'scheduled__2022-08-11T13:00:00+00:00', -1)]
scheduler | (Background on this error at: https://sqlalche.me/e/14/gkpj)
scheduler | [2022-08-11 08:41:58,421] {process_utils.py:125} INFO - Sending Signals.SIGTERM to group 52386. PIDs of all processes in the group: [52386]
scheduler | [2022-08-11 08:41:58,421] {process_utils.py:80} INFO - Sending the signal Signals.SIGTERM to group 52386
scheduler | [2022-08-11 08:41:58,609] {process_utils.py:75} INFO - Process psutil.Process(pid=52386, status='terminated', exitcode=0, started='08:41:10') (52386) terminated with exit code 0
scheduler | [2022-08-11 08:41:58,609] {scheduler_job.py:780} INFO - Exited execute loop
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | [2022-08-11 08:41:58 -0600] [52385] [INFO] Handling signal: term
scheduler | cursor.execute(statement, parameters)
scheduler | sqlite3.IntegrityError: UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler |
scheduler | The above exception was the direct cause of the following exception:
scheduler |
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/bin/airflow", line 8, in <module>
scheduler | sys.exit(main())
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
scheduler | args.func(args)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
scheduler | return f(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
scheduler | _run_scheduler_job(args=args)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
scheduler | job.run()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
scheduler | self._execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
scheduler | self._run_scheduler_loop()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
scheduler | num_queued_tis = self._do_scheduling(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 921, in _do_scheduling
scheduler | [2022-08-11 08:41:58 -0600] [52387] [INFO] Worker exiting (pid: 52387)
scheduler | [2022-08-11 08:41:58 -0600] [52388] [INFO] Worker exiting (pid: 52388)
scheduler | callback_to_run = self._schedule_dag_run(dag_run, session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1163, in _schedule_dag_run
scheduler | schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 524, in update_state
scheduler | info = self.task_instance_scheduling_decisions(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
scheduler | schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
scheduler | expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 683, in expand_mapped_task
scheduler | session.flush()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
scheduler | self._flush(objects)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
scheduler | transaction.rollback(_capture_exception=True)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
scheduler | compat.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
scheduler | flush_context.execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
scheduler | rec.execute(self)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
scheduler | util.preloaded.orm_persistence.save_obj(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 236, in save_obj
scheduler | _emit_update_statements(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1000, in _emit_update_statements
scheduler | c = connection._execute_20(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20
scheduler | return meth(self, args_10style, kwargs_10style, execution_options)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
scheduler | return connection._execute_clauseelement(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1481, in _execute_clauseelement
scheduler | ret = self._execute_context(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1845, in _execute_context
scheduler | self._handle_dbapi_exception(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2026, in _handle_dbapi_exception
scheduler | util.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler | [SQL: UPDATE task_instance SET map_index=? WHERE task_instance.task_id = ? AND task_instance.dag_id = ? AND task_instance.run_id = ? AND task_instance.map_index = ?]
scheduler | [parameters: (0, 'do_something_else', 'bug_test', 'scheduled__2022-08-11T13:00:00+00:00', -1)]
scheduler | (Background on this error at: https://sqlalche.me/e/14/gkpj)
scheduler | [2022-08-11 08:41:58 -0600] [52385] [INFO] Shutting down: Master
```
</details>
### What you think should happen instead
The scheduler should not crash, and the dynamically mapped task should execute normally
### How to reproduce
### Setup
- one DAG with two tasks, one directly downstream of the other
- the DAG has a schedule (e.g. @hourly)
- both tasks use task expansion
- the second task uses the output of the first task as its expansion parameter
- the scheduler's pool size is smaller than the number of map indices in each task
### Steps to reproduce
1. enable the DAG and let it run
### Operating System
MacOS and Dockerized Linux on MacOS
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
I have tested and confirmed this bug is present in three separate deployments:
1. `airflow standalone`
2. DaskExecutor using docker compose
3. KubernetesExecutor using Docker Desktop's builtin Kubernetes cluster
All three of these deployments were executed locally on a Macbook Pro.
### 1. `airflow standalone`
I created a new Python 3.9 virtual environment, installed Airflow 2.3.3, configured a few environment variables, and executed `airflow standalone`. Here is a bash script that completes all of these tasks:
<details><summary>airflow_standalone.sh</summary>
```bash
#!/bin/bash
# ensure working dir is correct
DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
cd $DIR
set -x
# set version parameters
AIRFLOW_VERSION="2.3.3"
PYTHON_VERSION="3.9"
# configure Python environment
if [ ! -d "$DIR/.env" ]
then
python3 -m venv "$DIR/.env"
fi
source "$DIR/.env/bin/activate"
pip install --upgrade pip
# install Airflow
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
# configure Airflow
export AIRFLOW_HOME="$DIR/.airflow"
export AIRFLOW__CORE__DAGS_FOLDER="$DIR/dags"
export AIRFLOW__CORE__LOAD_EXAMPLES="False"
export AIRFLOW__DATABASE__LOAD_DEFAULT_CONNECTIONS="False"
# start Airflow
exec "$DIR/.env/bin/airflow" standalone
```
</details>
Here is the DAG code that can be placed in a `dags` directory in the same location as the above script. Note that this
DAG code triggers the bug in all environments I tested.
<details><summary>bug_test.py</summary>
```python
import pendulum
from airflow.decorators import dag, task
@dag(
'bug_test',
schedule_interval='@hourly',
start_date=pendulum.now().add(hours=-2)
)
def test_scheduler_bug():
@task
def do_something(i):
return i + 10
@task
def do_something_else(i):
import logging
log = logging.getLogger('airflow.task')
log.info("I'll never run")
nums = do_something.expand(i=[i+1 for i in range(20)])
do_something_else.expand(i=nums)
TEST_DAG = test_scheduler_bug()
```
</details>
Once set up, simply activating the DAG will demonstrate the bug.
### 2. DaskExecutor on docker compose with Postgres 12
I cannot provide a full replication of this setup as it is rather involved. The Docker image starts from `python:3.9-slim` and then installs Airflow with the appropriate constraints. It has a lot of additional packages installed, both system and Python. It also has a custom entrypoint that can run the Dask scheduler in addition to regular Airflow commands. Here are the applicable Airflow configuration values:
<details><summary>airflow.cfg</summary>
```conf
[core]
donot_pickle = False
executor = DaskExecutor
load_examples = False
max_active_tasks_per_dag = 16
parallelism = 4
[scheduler]
dag_dir_list_interval = 0
catchup_by_default = False
parsing_processes = 3
scheduler_health_check_threshold = 90
```
</details>
Here is a docker-compose file that is nearly identical to the one I use (I just removed unrelated bits):
<details><summary>docker-compose.yml</summary>
```yml
version: '3.7'
services:
metastore:
image: postgres:12-alpine
ports:
- 5432:5432
container_name: airflow-metastore
volumes:
- ${AIRFLOW_HOME_DIR}/pgdata:/var/lib/postgresql/data
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: ${AIRFLOW_DB_PASSWORD}
PGDATA: /var/lib/postgresql/data/pgdata
airflow-webserver:
image: 'my_custom_image:tag'
ports:
- '8080:8080'
depends_on:
- metastore
container_name: airflow-webserver
environment:
AIRFLOW_HOME: /opt/airflow
AIRFLOW__WEBSERVER__SECRET_KEY: ${AIRFLOW_SECRET_KEY}
AIRFLOW__CORE__FERNET_KEY: ${FERNET_KEY}
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:${AIRFLOW_DB_PASSWORD}@metastore:5432/${AIRFLOW_DB_DATABASE}
env_file: container_vars.env
command:
- webserver
- --daemon
- --access-logfile
- /var/log/airflow/webserver-access.log
- --error-logfile
- /var/log/airflow/webserver-errors.log
- --log-file
- /var/log/airflow/webserver.log
volumes:
- ${AIRFLOW_HOME_DIR}/logs:/var/log/airflow
airflow-scheduler:
image: 'my_custom_image:tag'
depends_on:
- metastore
- dask-scheduler
container_name: airflow-scheduler
environment:
AIRFLOW_HOME: /opt/airflow
AIRFLOW__CORE__FERNET_KEY: ${FERNET_KEY}
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:${AIRFLOW_DB_PASSWORD}@metastore:5432/${AIRFLOW_DB_DATABASE}
SCHEDULER_RESTART_INTERVAL: ${SCHEDULER_RESTART_INTERVAL}
env_file: container_vars.env
restart: unless-stopped
command:
- scheduler
- --daemon
- --log-file
- /var/log/airflow/scheduler.log
volumes:
- ${AIRFLOW_HOME_DIR}/logs:/var/log/airflow
dask-scheduler:
image: 'my_custom_image:tag'
ports:
- 8787:8787
container_name: airflow-dask-scheduler
command:
- dask-scheduler
dask-worker:
image: 'my_custom_image:tag'
depends_on:
- dask-scheduler
- metastore
container_name: airflow-worker
environment:
AIRFLOW_HOME: /opt/airflow
AIRFLOW__CORE__FERNET_KEY: ${FERNET_KEY}
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:${AIRFLOW_DB_PASSWORD}@metastore:5432/${AIRFLOW_DB_DATABASE}
env_file: container_vars.env
command:
- dask-worker
- dask-scheduler:8786
- --nprocs
- '8'
- --nthreads
- '1'
volumes:
- ${AIRFLOW_HOME_DIR}/logs:/var/log/airflow
```
</details>
I also had to manually change the default pool size to 15 in the UI in order to trigger the bug. With the default pool set to 128 the bug did not trigger.
### 3. KubernetesExecutor on Docker Desktop builtin Kubernetes cluster with Postgres 11
This uses the official [Airflow Helm Chart](https://airflow.apache.org/docs/helm-chart/stable/index.html) with the following values overrides:
<details><summary>values.yaml</summary>
```yml
defaultAirflowRepository: my_custom_image
defaultAirflowTag: "my_image_tag"
airflowVersion: "2.3.3"
executor: "KubernetesExecutor"
webserverSecretKeySecretName: airflow-webserver-secret-key
fernetKeySecretName: airflow-fernet-key
config:
webserver:
expose_config: 'True'
base_url: http://localhost:8080
scheduler:
catchup_by_default: 'False'
api:
auth_backends: airflow.api.auth.backend.default
triggerer:
enabled: false
statsd:
enabled: false
redis:
enabled: false
cleanup:
enabled: false
logs:
persistence:
enabled: true
workers:
extraVolumes:
- name: airflow-dags
hostPath:
path: /local/path/to/dags
type: Directory
extraVolumeMounts:
- name: airflow-dags
mountPath: /opt/airflow/dags
readOnly: true
scheduler:
extraVolumes:
- name: airflow-dags
hostPath:
path: /local/path/to/dags
type: Directory
extraVolumeMounts:
- name: airflow-dags
mountPath: /opt/airflow/dags
readOnly: true
```
</details>
The docker image is the official `airflow:2.3.3-python3.9` image with a single environment variable modified:
```conf
PYTHONPATH="/opt/airflow/dags/repo/dags:${PYTHONPATH}"
```
### Anything else
This is my understanding of the timeline that produces the crash:
1. The scheduler queues some of the subtasks in the first task
1. Some subtasks run and yield their XCom results
1. The scheduler runs, queueing the remainder of the subtasks for the first task and creates some subtasks in the second task using the XCom results produced thus far
1. The remainder of the subtasks from the first task complete
1. The scheduler attempts to recreate all of the subtasks of the second task, including the ones already created, and a unique constraint in the database is violated and the scheduler crashes
1. When the scheduler restarts, it attempts the previous step again and crashes again, thus entering a crash loop
It seems that if some but not all subtasks for the second task have been created when the scheduler attempts to queue
the mapped task, then the scheduler tries to create all of the subtasks again which causes a unique constraint violation.
**NOTES**
- IF the scheduler can queue as many or more tasks as there are map indices for the task, then this won't happen. The
provided test case succeeded on the DaskExecutor deployment when the default pool was 128, however when I reduced that pool to 15 this bug occurred.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25681 | https://github.com/apache/airflow/pull/25788 | 29c33165a06b7a6233af3657ace4f2bdb4ec27e4 | db818ae6665b37cd032aa6d2b0f97232462d41e1 | "2022-08-11T15:27:11Z" | python | "2022-08-25T19:11:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,671 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | `airflow dags test` command with run confs | ### Description
Currently, the command [`airflow dags test`](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#test) doesn't accept any configs to set run confs. We can do that with the [`airflow dags trigger`](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#trigger) command through its `--conf` argument.
The command `airflow dags test` is really useful when testing DAGs on local machines or in CI/CD environments. Can we have that feature for `airflow dags test` as well?
### Use case/motivation
We could pass run confs the same way the `airflow dags trigger` command does.
Example:
```
$ airflow dags test <DAG_ID> <EXECUTION_DATE> --conf '{"path": "some_path"}'
```
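If the flag were supported, the conf would presumably be exposed to tasks through `dag_run.conf`, just as it is for triggered runs. A minimal sketch of a DAG consuming it (the DAG id and the `path` key are made up for illustration):
```python
import pendulum
from airflow.decorators import dag, task


@dag(
    dag_id="conf_demo",
    schedule_interval=None,
    start_date=pendulum.datetime(2022, 8, 1, tz="UTC"),
)
def conf_demo():
    @task
    def use_conf(**context):
        # --conf '{"path": "some_path"}' would land in dag_run.conf as a dict
        path = context["dag_run"].conf.get("path", "/default/path")
        print(f"processing {path}")

    use_conf()


conf_demo_dag = conf_demo()
```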
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25671 | https://github.com/apache/airflow/pull/25900 | bcdc25dd3fbda568b5ff2c04701623d6bf11a61f | bcc2fe26f6e0b7204bdf73f57d25b4e6c7a69548 | "2022-08-11T13:00:03Z" | python | "2022-08-29T08:51:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,669 | ["airflow/providers/atlassian/jira/CHANGELOG.rst", "airflow/providers/atlassian/jira/hooks/jira.py", "airflow/providers/atlassian/jira/operators/jira.py", "airflow/providers/atlassian/jira/provider.yaml", "airflow/providers/atlassian/jira/sensors/jira.py", "generated/provider_dependencies.json", "tests/providers/atlassian/jira/hooks/test_jira.py", "tests/providers/atlassian/jira/operators/test_jira.py", "tests/providers/atlassian/jira/sensors/test_jira.py"] | change Jira sdk to official atlassian sdk | ### Description
Jira is a product of atlassian https://www.atlassian.com/
There are
https://github.com/pycontribs/jira/issues
and
https://github.com/atlassian-api/atlassian-python-api
### Use case/motivation
Motivation is that now Airflow use unoffical SDK which is limited only to jira and can't also add operators for the other productions.
https://github.com/atlassian-api/atlassian-python-api is the official one and also contains more integrations to other atlassian products
https://github.com/atlassian-api/atlassian-python-api/issues/1027
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25669 | https://github.com/apache/airflow/pull/27633 | b5338b5825859355b017bed3586d5a42208f1391 | f3c68d7e153b8d417edf4cc4a68d18dbc0f30e64 | "2022-08-11T12:08:46Z" | python | "2022-12-07T12:48:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,668 | ["airflow/providers/cncf/kubernetes/hooks/kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py"] | SparkKubernetesOperator application file attribute "name" is not mandatory | ### Apache Airflow version
2.3.3
### What happened
Since commit https://github.com/apache/airflow/commit/3c5bc73579080248b0583d74152f57548aef53a2, the SparkKubernetesOperator application file is expected to have a metadata:name attribute, and operator execution fails with a `KeyError: 'name'` exception if it does not exist. Please find an example error stack below:
```
[2022-07-27, 12:58:07 UTC] {taskinstance.py:1909} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", line 69, in execute
response = hook.create_custom_object(
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/hooks/kubernetes.py", line 316, in create_custom_object
name=body_dict["metadata"]["name"],
KeyError: 'name'
```
### What you think should happen instead
The operator should start successfully, ignoring the absence of the field.
The attribute metadata:name is NOT mandatory, and metadata:generateName can be used instead - see the proof here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#objectmeta-v1-meta, particularly the following:
```
GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided
```
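To illustrate that the Kubernetes API itself does not require an explicit name, here is a small sketch using the stock Python client directly (the helper function and its signature are made up for illustration and are not the provider's actual code):
```python
from kubernetes import client


def create_custom_object_without_explicit_name(
    custom_api: client.CustomObjectsApi,
    group: str,
    version: str,
    plural: str,
    namespace: str,
    body: dict,
) -> str:
    """Create a custom object whose manifest only sets metadata.generateName.

    The API server fills in the final metadata.name, so callers can read it from
    the response instead of requiring it in the submitted body.
    """
    response = custom_api.create_namespaced_custom_object(
        group=group, version=version, namespace=namespace, plural=plural, body=body
    )
    # metadata.name is always populated on the created object, whether it was
    # given explicitly or generated from generateName.
    return response["metadata"]["name"]
```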
### How to reproduce
Start a DAG with a SparkKubernetesOperator whose application file begins like this:
```
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
generateName: spark_app_name
[...]
```
### Operating System
linux
### Versions of Apache Airflow Providers
apache-airflow==2.3.3
apache-airflow-providers-cncf-kubernetes==4.2.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
Every time
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25668 | https://github.com/apache/airflow/pull/25787 | 4dc9b1c592497686dada05e45147b1364ec338ea | 2d2f0daad66416d565e874e35b6a487a21e5f7b1 | "2022-08-11T11:43:00Z" | python | "2022-11-08T12:58:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,653 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | Deferrable Operators get stuck as "scheduled" during backfill | ### Apache Airflow version
2.3.3
### What happened
If you try to backfill a DAG that uses any [deferrable operators](https://airflow.apache.org/docs/apache-airflow/stable/concepts/deferring.html), those tasks will get indefinitely stuck in a "scheduled" state.
If I watch the Grid View, I can see the task state change: "scheduled" (or sometimes "queued") -> "deferred" -> "scheduled". I've tried leaving in this state for over an hour, but there are no further state changes.
When the task is stuck like this, the log appears as empty in the web UI. The corresponding log file *does* exist on the worker, but it does not contain any errors or warnings that might point to the source of the problem.
Ctrl-C-ing the backfill at this point seems to hang on "Shutting down LocalExecutor; waiting for running tasks to finish." **Force-killing and restarting the backfill will "unstick" the stuck tasks.** However, any deferrable operators downstream of the first will get back into that stuck state, requiring multiple restarts to get everything to complete successfully.
### What you think should happen instead
Deferrable operators should work as normal when backfilling.
### How to reproduce
```
#!/usr/bin/env python3
import datetime
import logging
import pendulum
from airflow.decorators import dag, task
from airflow.sensors.time_sensor import TimeSensorAsync
logger = logging.getLogger(__name__)
@dag(
schedule_interval='@daily',
start_date=datetime.datetime(2022, 8, 10),
)
def test_backfill():
time_sensor = TimeSensorAsync(
task_id='time_sensor',
target_time=datetime.time(0).replace(tzinfo=pendulum.UTC), # midnight - should succeed immediately when the trigger first runs
)
@task
def some_task():
logger.info('hello')
time_sensor >> some_task()
dag = test_backfill()
if __name__ == '__main__':
dag.cli()
```
`airflow dags backfill test_backfill -s 2022-08-01 -e 2022-08-04`
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
Self-hosted/standalone
### Anything else
I was able to reproduce this with the following configurations:
* `standalone` mode + SQLite backend + `SequentialExecutor`
* `standalone` mode + Postgres backend + `LocalExecutor`
* Production deployment (self-hosted) + Postgres backend + `CeleryExecutor`
I have not yet found anything telling in any of the backend logs.
Possibly related:
* #23693
* #23145
* #13542
- A modified version of the workaround mentioned in [this comment](https://github.com/apache/airflow/issues/13542#issuecomment-1011598836) works to unstick the first stuck task. However if you run it multiple times to try to unstick any downstream tasks, it causes the backfill command to crash.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25653 | https://github.com/apache/airflow/pull/26205 | f01eed6490acd3bb3a58824e7388c4c3cd50ae29 | 3396d1f822caac7cbeb14e1e67679b8378a84a6c | "2022-08-10T19:19:21Z" | python | "2022-09-23T05:08:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,641 | ["airflow/www/templates/airflow/dag_audit_log.html", "airflow/www/views.py"] | Improve audit log | ### Discussed in https://github.com/apache/airflow/discussions/25638
See the discussion. There are a couple of improvements that can be done:
* add an attribute to download the log rather than open it in-browser (see the sketch after this list)
* add .log or similar (.txt?) extension
* sort the output
* possibly more
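A minimal sketch of the download idea (a hypothetical Flask view, not Airflow's actual webserver code), serving the log as a `.log` attachment:
```python
# Hypothetical Flask view: return the audit log as a downloadable .log attachment
# instead of rendering it inline in the browser.
from flask import Flask, Response

app = Flask(__name__)

@app.route("/dag_audit_log_export")
def dag_audit_log_export():
    log_text = "event | owner | execution_date\n"  # placeholder for the real log rows
    return Response(
        log_text,
        mimetype="text/plain",
        headers={"Content-Disposition": "attachment; filename=dag_audit_log.log"},
    )
```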
<div type='discussions-op-text'>
<sup>Originally posted by **V0lantis** August 10, 2022</sup>
### Apache Airflow version
2.3.3
### What happened
The audit log link crashes because there is too much data displayed.
### What you think should happen instead
The window shouldn't crash
### How to reproduce
Displaying a DAG audit log with thousands or millions of lines should do the trick
### Operating System
```
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com/"
```
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==4.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.1.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-docker==3.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-github==2.0.0
apache-airflow-providers-google==8.1.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-jira==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-postgres==5.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-slack==5.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
apache-airflow-providers-tableau==3.0.0
apache-airflow-providers-zendesk==4.0.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
k8s
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/25641 | https://github.com/apache/airflow/pull/25856 | 634b9c03330c8609949f070457e7b99a6e029f26 | 50016564fa6ab6c6b02bdb0c70fccdf9b75c2f10 | "2022-08-10T13:42:53Z" | python | "2022-08-23T00:31:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,635 | ["airflow/providers/microsoft/azure/hooks/batch.py", "airflow/providers/microsoft/azure/operators/batch.py"] | AzureBatchOperator not handling exit code correctly | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
[apache-airflow-providers-microsoft-azure 3.9.0](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/3.9.0/)
### Apache Airflow version
v2.3.2
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### What happened
I have a task in my Airflow DAG that uses AzureBatchOperator.
As `batch_task_command_line` we pass something like `/bin/bash -c "some-script.sh"`.
The Azure Batch task correctly executes this command, and runs `some-script.sh`. All good.
When `some-script.sh` exits with a non-zero exit code, the Azure Batch task is correctly marked as failed (in Azure Portal), as is the job containing the task. However, in Airflow, the AzureBatchOperator task _always_ shows up as succeeded, ignoring the underlying Azure Batch job or task status.
This even shows up in the Airflow task logs. Below are the logs of a run where the shell script returned a non-zero exit code; Airflow still considers the task to be a SUCCESS.
```sh
[2022-08-10, 10:01:27 UTC] {batch.py:362} INFO - Waiting for {hidden} to complete, currently on running state
[2022-08-10, 10:01:42 UTC] {taskinstance.py:1395} INFO - Marking task as SUCCESS. dag_id={hidden}, task_id={hidden}, execution_date=20220809T141257, start_date=20220810T100024, end_date=20220810T100142
[2022-08-10, 10:01:42 UTC] {local_task_job.py:156} INFO - Task exited with return code 0
```
The `some-script.sh` contains the following at the top, so that can't be the issue I think.
```bash
#!/bin/bash
set -euo pipefail
```
I tried passing `set -e` to the `batch_task_command_line`, i.e. `set -e; /bin/bash -c "some-script.sh"`, but that doesn't work; it gives me a `CommandProgramNotFound` exception.
### What you think should happen instead
When using AzureBatchOperator, I want this task in Airflow to fail when the command line that is passed to AzureBatchOperator exits with a non-zero exit code.
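As an illustration only (a hypothetical helper, not the provider's or Azure SDK's API), the operator could inspect per-task exit codes after the job finishes and fail accordingly:
```python
# Hypothetical helper: fail the Airflow task when any Azure Batch task exited non-zero.
from airflow.exceptions import AirflowException

def raise_for_failed_batch_tasks(task_exit_codes: dict) -> None:
    failed = {task_id: code for task_id, code in task_exit_codes.items() if code != 0}
    if failed:
        raise AirflowException(f"Azure Batch tasks finished with non-zero exit codes: {failed}")

raise_for_failed_batch_tasks({"task-1": 0})    # passes
# raise_for_failed_batch_tasks({"task-2": 1})  # would raise and mark the Airflow task failed
```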
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25635 | https://github.com/apache/airflow/pull/25844 | 810f3847c241453195fa2c27f447ecf7fe06bbfc | afb282aee4329042b273d501586ff27505c16b22 | "2022-08-10T10:34:39Z" | python | "2022-08-26T22:25:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,627 | ["airflow/jobs/scheduler_job.py"] | MySQL Not Using Correct Index for Scheduler Critical Section Query | ### Apache Airflow version
Other Airflow 2 version
### What happened
Airflow Version: 2.2.5
MySQL Version: 8.0.18
In the Scheduler, we are coming across instances where MySQL is inefficiently optimizing the [critical section task queuing query](https://github.com/apache/airflow/blob/2.2.5/airflow/jobs/scheduler_job.py#L294-L303). When a large number of task instances are scheduled, MySQL fails to use the `ti_state` index to filter the `task_instance` table, resulting in a full table scan (about 7.3 million rows).
Normally, when running the critical section query the index on `task_instance.state` is used to filter scheduled `task_instances`.
```bash
| -> Limit: 512 row(s) (actual time=5.290..5.413 rows=205 loops=1)
-> Sort row IDs: <temporary>.tmp_field_0, <temporary>.execution_date, limit input to 512 row(s) per chunk (actual time=5.289..5.391 rows=205 loops=1)
-> Table scan on <temporary> (actual time=0.003..0.113 rows=205 loops=1)
-> Temporary table (actual time=5.107..5.236 rows=205 loops=1)
-> Nested loop inner join (cost=20251.99 rows=1741) (actual time=0.100..4.242 rows=205 loops=1)
-> Nested loop inner join (cost=161.70 rows=12) (actual time=0.071..2.436 rows=205 loops=1)
-> Index lookup on task_instance using ti_state (state='scheduled') (cost=80.85 rows=231) (actual time=0.051..1.992 rows=222 loops=1)
-> Filter: ((dag_run.run_type <> 'backfill') and (dag_run.state = 'running')) (cost=0.25 rows=0) (actual time=0.002..0.002 rows=1 loops=222)
-> Single-row index lookup on dag_run using dag_run_dag_id_run_id_key (dag_id=task_instance.dag_id, run_id=task_instance.run_id) (cost=0.25 rows=1) (actual time=0.001..0.001 rows=1 loops=222)
-> Filter: ((dag.is_paused = 0) and (task_instance.dag_id = dag.dag_id)) (cost=233.52 rows=151) (actual time=0.008..0.008 rows=1 loops=205)
-> Index range scan on dag (re-planned for each iteration) (cost=233.52 rows=15072) (actual time=0.008..0.008 rows=1 loops=205)
1 row in set, 1 warning (0.03 sec)
```
When a large number of task_instances are in scheduled state at the same time, the index on `task_instance.state` is not being used to filter scheduled `task_instances`.
```bash
| -> Limit: 512 row(s) (actual time=12110.251..12110.573 rows=512 loops=1)
-> Sort row IDs: <temporary>.tmp_field_0, <temporary>.execution_date, limit input to 512 row(s) per chunk (actual time=12110.250..12110.526 rows=512 loops=1)
-> Table scan on <temporary> (actual time=0.005..0.800 rows=1176 loops=1)
-> Temporary table (actual time=12109.022..12109.940 rows=1176 loops=1)
-> Nested loop inner join (cost=10807.83 rows=3) (actual time=1.328..12097.528 rows=1176 loops=1)
-> Nested loop inner join (cost=10785.34 rows=64) (actual time=1.293..12084.371 rows=1193 loops=1)
-> Filter: (dag.is_paused = 0) (cost=1371.40 rows=1285) (actual time=0.087..22.409 rows=13264 loops=1)
-> Table scan on dag (cost=1371.40 rows=12854) (actual time=0.085..15.796 rows=13508 loops=1)
-> Filter: ((task_instance.state = 'scheduled') and (task_instance.dag_id = dag.dag_id)) (cost=0.32 rows=0) (actual time=0.907..0.909 rows=0 loops=13264)
-> Index lookup on task_instance using PRIMARY (dag_id=dag.dag_id) (cost=0.32 rows=70) (actual time=0.009..0.845 rows=553 loops=13264)
-> Filter: ((dag_run.run_type <> 'backfill') and (dag_run.state = 'running')) (cost=0.25 rows=0) (actual time=0.010..0.011 rows=1 loops=1193)
-> Single-row index lookup on dag_run using dag_run_dag_id_run_id_key (dag_id=task_instance.dag_id, run_id=task_instance.run_id) (cost=0.25 rows=1) (actual time=0.009..0.010 rows=1 loops=1193)
1 row in set, 1 warning (12.14 sec)
```
### What you think should happen instead
To resolve this, I added a patch on the `scheduler_job.py` file, adding a MySQL index hint to use the `ti_state` index.
```diff
--- /usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py
+++ /usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py
@@ -293,6 +293,7 @@ class SchedulerJob(BaseJob):
# and the dag is not paused
query = (
session.query(TI)
+ .with_hint(TI, 'USE INDEX (ti_state)', dialect_name='mysql')
.join(TI.dag_run)
.filter(DR.run_type != DagRunType.BACKFILL_JOB, DR.state == DagRunState.RUNNING)
.join(TI.dag_model)
```
I think it makes sense to add this index hint upstream.
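For reference, a standalone SQLAlchemy Core sketch of the same hint technique (the table definition is simplified; this is only an illustration, not Airflow's model):
```python
# Standalone illustration of the hint: it is only rendered for the MySQL dialect.
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.dialects import mysql

metadata = MetaData()
task_instance = Table(
    "task_instance",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("state", String(20)),
)

stmt = (
    select(task_instance)
    .where(task_instance.c.state == "scheduled")
    .with_hint(task_instance, "USE INDEX (ti_state)", dialect_name="mysql")
)
print(stmt.compile(dialect=mysql.dialect()))  # FROM clause includes "USE INDEX (ti_state)"
```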
### How to reproduce
Schedule a large number of dag runs and tasks in a short period of time.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
Airflow 2.2.5 on Kubernetes
MySQL Version: 8.0.18
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25627 | https://github.com/apache/airflow/pull/25673 | 4d9aa3ae48bae124793b1a8ee394150eba0eee9b | 134b5551db67f17b4268dce552e87a154aa1e794 | "2022-08-09T19:50:29Z" | python | "2022-08-12T11:28:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,588 | ["airflow/models/mappedoperator.py", "tests/models/test_mappedoperator.py"] | Mapped KubernetesPodOperater not rendering nested templates | ### Apache Airflow version
2.3.3
### What happened
Nested values, such as `env_vars` for the `KubernetesPodOperator`, are not being rendered when the operator is used as a dynamically mapped operator.
Assuming the following:
```python
from kubernetes.client import models as k8s
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

op = KubernetesPodOperator.partial(
    env_vars=[k8s.V1EnvVar(name='AWS_ACCESS_KEY_ID', value='{{ var.value.aws_access_key_id }}')],
    # Other arguments
).expand(arguments=[[1], [2]])
```
The *Rendered Template* results for `env_vars` should be:
```
("[{'name': 'AWS_ACCESS_KEY_ID', 'value': 'some-super-secret-value', 'value_from': None}]")
```
Instead the actual *Rendered Template* results for `env_vars` are un-rendered:
```
("[{'name': 'AWS_ACCESS_KEY_ID', 'value': '{{ var.value.aws_access_key_id }}', 'value_from': None}]")
```
This is probably caused by the fact that `MappedOperator` is not calling [`KubernetesPodOperator._render_nested_template_fields`](https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L286).
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25588 | https://github.com/apache/airflow/pull/25599 | 762588dcf4a05c47aa253b864bda00726a5569dc | ed39703cd4f619104430b91d7ba67f261e5bfddb | "2022-08-08T06:17:20Z" | python | "2022-08-15T12:02:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,580 | [".github/workflows/ci.yml", "BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_testing.svg", "images/breeze/output_testing_helm-tests.svg", "images/breeze/output_testing_tests.svg", "scripts/in_container/check_environment.sh"] | Convert running Helm unit tests to use the new breeze | The unit tests of Helm (using `helm template`) still use bash scripts, not the new breeze - we should switch them. | https://github.com/apache/airflow/issues/25580 | https://github.com/apache/airflow/pull/25581 | 0d34355ffa3f9f2ecf666d4518d36c4366a4c701 | a562cc396212e4d21484088ac5f363ade9ac2b8d | "2022-08-07T13:24:26Z" | python | "2022-08-08T06:56:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,555 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Airflow doesn't re-use a secrets backend instance when loading configuration values | ### Apache Airflow version
main (development)
### What happened
When airflow is loading its configuration, it creates a new secrets backend instance for each configuration value it loads from secrets, and then additionally creates a global secrets backend instance that is used in `ensure_secrets_loaded`, which code outside of the configuration file uses. This can cause issues with the vault backend (and possibly others, not sure) since logging in to vault can be an expensive operation server-side and each instance of the vault secrets backend needs to re-login to use its internal client.
### What you think should happen instead
Ideally, airflow would attempt to create a single secrets backend instance and re-use this. This can possibly be patched in the vault secrets backend, but instead I think updating the `configuration` module to cache the secrets backend would be preferable since it would then apply to any secrets backend.
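A minimal sketch of the caching idea (the function and class names here are my assumptions, not Airflow's actual code):
```python
# Sketch: build the backend once and reuse the instance for every config lookup.
import functools

class FakeVaultBackend:
    def __init__(self):
        print("logging in to vault...")  # the expensive login now happens only once

    def get_config(self, key):
        return None

@functools.lru_cache(maxsize=None)
def get_secrets_backend() -> FakeVaultBackend:
    return FakeVaultBackend()

get_secrets_backend()  # logs in
get_secrets_backend()  # returns the cached instance, no second login
```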
### How to reproduce
Use the hashicorp vault secrets backend and store some configuration in `X_secret` values. See that it logs in more than you'd expect.
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
```
apache-airflow==2.3.0
apache-airflow-providers-hashicorp==2.2.0
hvac==0.11.2
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25555 | https://github.com/apache/airflow/pull/25556 | 33fbe75dd5100539c697d705552b088e568d52e4 | 5863c42962404607013422a40118d8b9f4603f0b | "2022-08-05T16:13:36Z" | python | "2022-08-06T14:21:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,523 | ["airflow/www/static/js/graph.js"] | Mapped, classic operator tasks within TaskGroups prepend `group_id` in Graph View | ### Apache Airflow version
main (development)
### What happened
When mapped, classic operator tasks exist within TaskGroups, the `group_id` of the TaskGroup is prepended to the displayed `task_id` in the Graph View.
In the below screenshot, all displayed task IDs only contain the direct `task_id` except for the "mapped_classic_task". This particular task is a mapped `BashOperator` task. The prepended `group_id` does not appear for unmapped, classic operator tasks, nor mapped and unmapped TaskFlow tasks.
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/48934154/182760586-975a7886-bcd6-477d-927b-25e82139b5b7.png">
### What you think should happen instead
The pattern of the displayed task names should be consistent for all task types (mapped/unmapped, classic operators/TaskFlow functions). Additionally, having the `group_id` prepended to the mapped, classic operator tasks is a little redundant and less readable.
### How to reproduce
1. Use an example DAG of the following:
```python
from pendulum import datetime
from airflow.decorators import dag, task, task_group
from airflow.operators.bash import BashOperator
@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_group_task_graph():
@task_group
def my_task_group():
BashOperator(task_id="not_mapped_classic_task", bash_command="echo")
BashOperator.partial(task_id="mapped_classic_task").expand(
bash_command=["echo", "echo hello", "echo world"]
)
@task
def another_task(input=None):
...
another_task.override(task_id="not_mapped_taskflow_task")()
another_task.override(task_id="mapped_taskflow_task").expand(input=[1, 2, 3])
my_task_group()
_ = task_group_task_graph()
```
2. Navigate to the Graph view
3. Notice that the `task_id` for the "mapped_classic_task" prepends the TaskGroup `group_id` of "my_task_group" while the other tasks in the TaskGroup do not.
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Breeze
### Anything else
Setting `prefix_group_id=False` for the TaskGroup does remove the prepended `group_id` from the tasks' display names.
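For completeness, a workaround sketch (assuming the `task_group` decorator forwards `prefix_group_id` to `TaskGroup`):
```python
from airflow.decorators import task_group
from airflow.operators.bash import BashOperator

@task_group(prefix_group_id=False)
def my_task_group():
    BashOperator.partial(task_id="mapped_classic_task").expand(
        bash_command=["echo", "echo hello", "echo world"]
    )
```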
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25523 | https://github.com/apache/airflow/pull/26108 | 5697e9fdfa9d5af2d48f7037c31972c2db1f4397 | 3b76e81bcc9010cfec4d41fe33f92a79020dbc5b | "2022-08-04T04:13:48Z" | python | "2022-09-01T16:32:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,522 | ["airflow/providers/amazon/aws/hooks/batch_client.py", "airflow/providers/amazon/aws/operators/batch.py", "tests/providers/amazon/aws/hooks/test_batch_client.py", "tests/providers/amazon/aws/operators/test_batch.py"] | Support AWS Batch multinode job types | ### Description
Support [multinode job types](https://docs.aws.amazon.com/batch/latest/userguide/multi-node-parallel-jobs.html) in the [AWS Batch Operator](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/batch.py).
The [boto3 `submit_job` method](https://boto3.amazonaws.com/v1/documentation/api/1.9.88/reference/services/batch.html#Batch.Client.submit_job) supports container, multinode, and array batch jobs with the mutually exclusive `nodeOverrides` and `containerOverrides` (+ `arrayProperties`) parameters. But currently the AWS Batch Operator only supports submission of container jobs and array jobs by hardcoding the boto3 `submit_job` parameter `containerOverrides`: https://github.com/apache/airflow/blob/3c08cefdfd2e2636a714bb835902f0cb34225563/airflow/providers/amazon/aws/operators/batch.py#L200 & https://github.com/apache/airflow/blob/3c08cefdfd2e2636a714bb835902f0cb34225563/airflow/providers/amazon/aws/hooks/batch_client.py#L99
The [`get_job_awslogs_info`](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/batch_client.py#L419) method in the batch client hook is also hardcoded for the container type job: https://github.com/apache/airflow/blob/3c08cefdfd2e2636a714bb835902f0cb34225563/airflow/providers/amazon/aws/hooks/batch_client.py#L425
To support multinode jobs the `get_job_awslogs_info` method would need to access `nodeProperties` from the [`describe_jobs`](https://boto3.amazonaws.com/v1/documentation/api/1.9.88/reference/services/batch.html#Batch.Client.describe_jobs) response.
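For illustration, a plain boto3 call for a multinode job (all names and values below are placeholders):
```python
# Placeholder values throughout; shows the nodeOverrides shape boto3 accepts.
import boto3

batch = boto3.client("batch", region_name="us-east-1")
response = batch.submit_job(
    jobName="example-multinode-job",
    jobQueue="example-queue",
    jobDefinition="example-multinode-definition",
    nodeOverrides={
        "numNodes": 4,
        "nodePropertyOverrides": [
            {
                "targetNodes": "0:",
                "containerOverrides": {"command": ["python", "train.py"]},
            },
        ],
    },
)
print(response["jobId"])
```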
### Use case/motivation
Multinode jobs are a supported job type of AWS Batch, are supported by the underlying boto3 library, and should also be available to be managed by Airflow. I've extended the AWS Batch Operator for our own use cases, but would prefer to not maintain a separate operator.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25522 | https://github.com/apache/airflow/pull/29522 | f080e1e3985f24293979f2f0fc28f1ddf72ee342 | 2ce11300064ec821ffe745980012100fc32cb4b4 | "2022-08-03T23:14:12Z" | python | "2023-04-12T04:29:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,512 | ["airflow/www/static/js/dag/grid/index.tsx"] | Vertical overlay scrollbar on Grid view blocks last DAG run column | ### Apache Airflow version
2.3.3 (latest released)
### What happened
The vertical overlay scrollbar in Grid view on the Web UI (#22134) covers up the final DAG run column and makes it impossible to click on the tasks for that DAG run:
![image](https://user-images.githubusercontent.com/12103194/182652473-e935fb33-0808-43ad-84d8-acabbf4e9b88.png)
![image](https://user-images.githubusercontent.com/12103194/182652203-0494efb5-8335-4005-920a-98bff42e1b21.png)
### What you think should happen instead
Either pad the Grid view so the scrollbar does not appear on top of the content, or force the scrollbar to take up its own space
### How to reproduce
Have a DAG run with enough tasks to cause vertical overflow. Found on Linux + FF 102
### Operating System
Fedora 36
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25512 | https://github.com/apache/airflow/pull/25554 | 5668888a7e1074a620b3d38f407ecf1aa055b623 | fe9772949eba35c73101c3cd93a7c76b3e633e7e | "2022-08-03T16:10:55Z" | python | "2022-08-05T16:46:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,508 | ["airflow/migrations/versions/0118_2_5_0_add_updated_at_to_dagrun_and_ti.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst", "tests/models/test_taskinstance.py"] | add lastModified columns to DagRun and TaskInstance. | I wonder if we should add lastModified columns to DagRun and TaskInstance. It might help a lot of UI/API queries.
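A sketch of the kind of column being proposed (the model, column name, and type shown here are my assumptions, not an agreed design):
```python
# Sketch of a self-maintaining "last modified" column on a model.
from sqlalchemy import Column, DateTime, Integer, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class DagRunSketch(Base):
    __tablename__ = "dag_run_sketch"
    id = Column(Integer, primary_key=True)
    updated_at = Column(DateTime, default=func.now(), onupdate=func.now())
```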
_Originally posted by @ashb in https://github.com/apache/airflow/issues/23805#issuecomment-1143752368_ | https://github.com/apache/airflow/issues/25508 | https://github.com/apache/airflow/pull/26252 | 768865e10c811bc544590ec268f9f5c334da89b5 | 4930df45f5bae89c297dbcd5cafc582a61a0f323 | "2022-08-03T14:49:55Z" | python | "2022-09-19T13:28:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,493 | ["airflow/www/views.py", "tests/www/views/test_views_base.py", "tests/www/views/test_views_home.py"] | URL contains tag query parameter but Airflow UI does not correctly visualize the tags | ### Apache Airflow version
2.3.3 (latest released)
### What happened
An URL I saved in the past, `https://astronomer.astronomer.run/dx4o2568/home?tags=test`, has the tag field in the query parameter though I was not aware of this. When I clicked on the URL, I was confused because I did not see any DAGs when I should have a bunch.
After closer inspection, I realized that the URL has the tag field in the query parameter but then noticed that the tag box in the Airflow UI wasn't properly populated.
![screen_shot_2022-07-12_at_8 11 07_am](https://user-images.githubusercontent.com/5952735/182496710-601b4a98-aacb-4482-bb9f-bb3fdf9e265f.png)
### What you think should happen instead
When I clicked on the URL, the tag box should have been populated with the strings in the URL.
### How to reproduce
Start an Airflow deployment with some DAGs and add the tag query parameter. More specifically, it has to be a tag that is not used by any DAG.
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25493 | https://github.com/apache/airflow/pull/25715 | ea306c9462615d6b215d43f7f17d68f4c62951b1 | 485142ac233c4ac9627f523465b7727c2d089186 | "2022-08-03T00:03:45Z" | python | "2022-11-24T10:27:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,492 | ["airflow/api_connexion/endpoints/plugin_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/plugin_schema.py", "airflow/www/static/js/types/api-generated.ts"] | API server /plugin crashes | ### Apache Airflow version
2.3.3 (latest released)
### What happened
The `/plugins` endpoint returned a 500 http status code.
```
curl -X GET http://localhost:8080/api/v1/plugins\?limit\=1 \
-H 'Cache-Control: no-cache' \
--user "admin:admin"
{
"detail": "\"{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}\" is not of type 'object'\n\nFailed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:\n {'nullable': True, 'type': 'object'}\n\nOn instance['plugins'][0]['appbuilder_views'][0]:\n (\"{'name': 'Test View', 'category': 'Test Plugin', 'view': \"\n \"'test.appbuilder_views.TestAppBuilderBaseView'}\")",
"status": 500,
"title": "Response body does not conform to specification",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/Unknown"
}
```
The error message in the webserver is as followed
```
[2022-08-03 17:07:57,705] {validation.py:244} ERROR - http://localhost:8080/api/v1/plugins?limit=1 validation error: "{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}" is not of type 'object'
Failed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:
{'nullable': True, 'type': 'object'}
On instance['plugins'][0]['appbuilder_views'][0]:
("{'name': 'Test View', 'category': 'Test Plugin', 'view': "
"'test.appbuilder_views.TestAppBuilderBaseView'}")
172.18.0.1 - admin [03/Aug/2022:17:10:17 +0000] "GET /api/v1/plugins?limit=1 HTTP/1.1" 500 733 "-" "curl/7.79.1"
```
### What you think should happen instead
The response should contain all the plugins integrated with Airflow.
### How to reproduce
Create a simple plugin in the plugin directory.
`appbuilder_views.py`
```
from flask_appbuilder import expose, BaseView as AppBuilderBaseView
# Creating a flask appbuilder BaseView
class TestAppBuilderBaseView(AppBuilderBaseView):
@expose("/")
def test(self):
return self.render_template("test_plugin/test.html", content="Hello galaxy!")
```
`plugin.py`
```
from airflow.plugins_manager import AirflowPlugin
from test.appbuilder_views import TestAppBuilderBaseView
class TestPlugin(AirflowPlugin):
name = "test"
appbuilder_views = [
{
"name": "Test View",
"category": "Test Plugin",
"view": TestAppBuilderBaseView()
}
]
```
Call the `/plugins` endpoint.
```
curl -X GET http://localhost:8080/api/v1/plugins\?limit\=1 \
-H 'Cache-Control: no-cache' \
--user "admin:admin"
```
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25492 | https://github.com/apache/airflow/pull/25524 | 7e3d2350dbb23b9c98bbadf73296425648e1e42d | 5de11e1410b432d632e8c0d1d8ca0945811a56f0 | "2022-08-02T23:44:07Z" | python | "2022-08-04T15:37:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,474 | ["airflow/providers/google/cloud/transfers/postgres_to_gcs.py"] | PostgresToGCSOperator parquet format mapping inconsistencies converts boolean data type to string | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.8.0
### Apache Airflow version
2.3.2
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When converting postgres native data type to bigquery data types, [this](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L288) function is responsible for converting from postgres types -> bigquery types -> parquet types.
The [map](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/postgres_to_gcs.py#L80) in the PostgresToGCSOperator indicates that the postgres boolean type maps to the bigquery `BOOLEAN` data type.
Then when converting from bigquery to parquet data types [here](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L288), the [map](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L289) does not have the `BOOLEAN` data type in its keys. Because the type defaults to string in the following [line](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L305), the BOOLEAN data type is converted into string, which then fails when converting the data into `pa.bool_()`.
When converting the boolean data type into `pa.string()` pyarrow raises an error.
### What you think should happen instead
I would expect the postgres boolean type to map to `pa.bool_()` data type.
Changing the [map](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/postgres_to_gcs.py#L80) to include the `BOOL` key instead of `BOOLEAN` would correctly map the postgres type to the final parquet type.
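To illustrate the gap with a simplified stand-in for the provider's maps (the dict contents below are assumptions, not the exact code):
```python
import pyarrow as pa

# Simplified stand-in for the BigQuery -> parquet map in sql_to_gcs.py.
BIGQUERY_TO_PARQUET = {"INTEGER": pa.int64(), "FLOAT": pa.float64(), "BOOL": pa.bool_()}

print(BIGQUERY_TO_PARQUET.get("BOOLEAN", pa.string()))  # falls through to string()
print(BIGQUERY_TO_PARQUET.get("BOOL", pa.string()))     # bool_() - the desired mapping
```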
### How to reproduce
1. Create a postgres connection on airflow with id `postgres_test_conn`.
2. Create a gcp connection on airflow with id `gcp_test_conn`.
3. In the database referenced by the `postgres_test_conn`, in the public schema create a table `test_table` that includes a boolean data type, and insert data into the table.
4. Create a bucket named `issue_PostgresToGCSOperator_bucket`, in the gcp account referenced by the `gcp_test_conn`.
5. Run the dag below that inserts the data from the postgres table into the cloud storage bucket.
```python
import pendulum
from airflow import DAG
from airflow.providers.google.cloud.transfers.postgres_to_gcs import PostgresToGCSOperator
with DAG(
dag_id="issue_PostgresToGCSOperator",
start_date=pendulum.parse("2022-01-01"),
)as dag:
task = PostgresToGCSOperator(
task_id='extract_task',
filename='uploading-{}.parquet',
bucket="issue_PostgresToGCSOperator_bucket",
export_format='parquet',
sql="SELECT * FROM test_table",
postgres_conn_id='postgres_test_conn',
gcp_conn_id='gcp_test_conn',
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25474 | https://github.com/apache/airflow/pull/25475 | 4da2b0c216c92795f19862a3ff6634e5a5936138 | faf3c4fe474733965ab301465f695e3cc311169c | "2022-08-02T14:36:32Z" | python | "2022-08-02T20:28:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,446 | ["chart/templates/statsd/statsd-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_annotations.py"] | Helm Chart: Allow adding annotations to statsd deployment | ### Description
Helm Chart [does not allow adding annotations](https://github.com/apache/airflow/blob/40eefd84797f5085e6c3fef6cbd6f713ceb3c3d8/chart/templates/statsd/statsd-deployment.yaml#L60-L63) to the StatsD deployment. We should add support for it.
### Use case/motivation
In our Kubernetes cluster we need to set annotations on deployments that should be scraped by Prometheus. Having an exporter that does not get scraped defeats the purpose :slightly_smiling_face:
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25446 | https://github.com/apache/airflow/pull/25732 | fdecf12051308a4e064f5e4bf5464ffc9b183dad | 951b7084619eca7229cdaadda99fd1191d4793e7 | "2022-08-01T14:23:14Z" | python | "2022-09-15T00:31:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,395 | ["airflow/providers/snowflake/provider.yaml", "airflow/providers/snowflake/transfers/copy_into_snowflake.py", "airflow/providers/snowflake/transfers/s3_to_snowflake.py", "scripts/in_container/verify_providers.py", "tests/providers/snowflake/transfers/test_copy_into_snowflake.py"] | GCSToSnowflakeOperator with feature parity to the S3ToSnowflakeOperator | ### Description
Require an operator similar to the S3ToSnowflakeOperator but for GCS to load data stored in GCS to a Snowflake table.
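A rough sketch of what such an operator could run under the hood (the stage, table, and connection id below are made-up examples, not an existing Airflow operator):
```python
from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook

def load_gcs_files_into_snowflake():
    # Assumes an external GCS stage (@my_gcs_stage) has already been created in Snowflake.
    hook = SnowflakeHook(snowflake_conn_id="snowflake_default")
    hook.run(
        """
        COPY INTO my_schema.my_table
        FROM @my_gcs_stage/exports/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        """
    )
```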
### Use case/motivation
Same as the S3ToSnowflakeOperator.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25395 | https://github.com/apache/airflow/pull/25541 | 2ee099655b1ca46935dbf3e37ae0ec1139f98287 | 5c52bbf32d81291b57d051ccbd1a2479ff706efc | "2022-07-29T10:23:52Z" | python | "2022-08-26T22:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,388 | ["airflow/providers/jdbc/operators/jdbc.py", "tests/providers/jdbc/operators/test_jdbc.py"] | apache-airflow-providers-jdbc fails with jaydebeapi.Error | ### Apache Airflow Provider(s)
jdbc
### Versions of Apache Airflow Providers
I am using apache-airflow-providers-jdbc==3.0.0 for Airflow 2.3.3 as per constraint [file](https://raw.githubusercontent.com/apache/airflow/constraints-2.3.3/constraints-3.10.txt)
### Apache Airflow version
2.3.3 (latest released)
### Operating System
K8 on Linux
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
I am using JdbcOperator to execute one ALTER sql statement but it returns the following error:
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/jdbc/operators/jdbc.py", line 76, in execute
return hook.run(self.sql, self.autocommit, parameters=self.parameters, handler=fetch_all_handler)
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/hooks/dbapi.py", line 213, in run
result = handler(cur)
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/jdbc/operators/jdbc.py", line 30, in fetch_all_handler
return cursor.fetchall()
File "/usr/local/airflow/.local/lib/python3.10/site-packages/jaydebeapi/__init__.py", line 593, in fetchall
row = self.fetchone()
File "/usr/local/airflow/.local/lib/python3.10/site-packages/jaydebeapi/__init__.py", line 558, in fetchone
raise Error()
jaydebeapi.Error
### What you think should happen instead
The introduction of `handler=fetch_all_handler` in `airflow/providers/jdbc/operators/jdbc.py`, line 76 (`return hook.run(self.sql, self.autocommit, parameters=self.parameters, handler=fetch_all_handler)`), is breaking the script. With the previous version, which did not have fetch_all_handler in jdbc.py, it was running perfectly.
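One possible defensive handler, sketched here as an idea rather than the project's actual fix, would only fetch when the statement produced a result set:
```python
def safe_fetch_all_handler(cursor):
    # cursor.description is None for statements that return no rows (e.g. ALTER),
    # so only call fetchall() when there is an actual result set.
    if cursor.description is not None:
        return cursor.fetchall()
    return None
```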
### How to reproduce
Try submitting ALTER statement in airflow jdbcOperator.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25388 | https://github.com/apache/airflow/pull/25412 | 3dfa44566c948cb2db016e89f84d6fe37bd6d824 | 1708da9233c13c3821d76e56dbe0e383ff67b0fd | "2022-07-28T22:08:43Z" | python | "2022-08-07T09:18:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,360 | ["airflow/models/abstractoperator.py", "airflow/models/baseoperator.py", "airflow/operators/trigger_dagrun.py", "airflow/providers/qubole/operators/qubole.py", "airflow/www/static/js/dag.js", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "docs/spelling_wordlist.txt"] | Extra Links do not works with mapped operators | ### Apache Airflow version
main (development)
### What happened
I found that Extra Links do not work with dynamic tasks at all - the links are inaccessible - but the same Extra Links work fine with non-mapped operators.
I think this is because the extra links are assigned to the parent task instance (I do not know the correct name for this TI) but not to the actual mapped TIs.
As a result we only have `number of extra links defined in the operator`, not `(number of extra links defined in the operator) x (number of mapped TIs)`.
### What you think should happen instead
_No response_
### How to reproduce
```python
from pendulum import datetime
from airflow.decorators import dag
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.operators.empty import EmptyOperator
EXTERNAL_DAG_IDS = [f"example_external_dag_{ix:02d}" for ix in range(3)]
DAG_KWARGS = {
"start_date": datetime(2022, 7, 1),
"schedule_interval": "@daily",
"catchup": False,
"tags": ["mapped_extra_links", "AIP-42", "serialization"],
}
def external_dags():
EmptyOperator(task_id="dummy")
@dag(**DAG_KWARGS)
def external_regular_task_sensor():
for external_dag_id in EXTERNAL_DAG_IDS:
ExternalTaskSensor(
task_id=f'wait_for_{external_dag_id}',
external_dag_id=external_dag_id,
poke_interval=5,
)
@dag(**DAG_KWARGS)
def external_mapped_task_sensor():
ExternalTaskSensor.partial(
task_id='wait',
poke_interval=5,
).expand(external_dag_id=EXTERNAL_DAG_IDS)
dag_external_regular_task_sensor = external_regular_task_sensor()
dag_external_mapped_task_sensor = external_mapped_task_sensor()
for dag_id in EXTERNAL_DAG_IDS:
globals()[dag_id] = dag(dag_id=dag_id, **DAG_KWARGS)(external_dags)()
```
https://user-images.githubusercontent.com/3998685/180994213-847b3fd3-d351-4836-b246-b54056f34ad6.mp4
### Operating System
macOS 12.5
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25360 | https://github.com/apache/airflow/pull/25500 | 4ecaa9e3f0834ca0ef08002a44edda3661f4e572 | d9e924c058f5da9eba5bb5b85a04bfea6fb2471a | "2022-07-28T10:44:40Z" | python | "2022-08-05T03:41:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,353 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | Backfill stalls with certain combination of skipped tasks & trigger rules | ### Apache Airflow version
2.3.0
### What happened
While trying to run a backfill for one of our DAGs, we noticed that the backfill stalled after completing all the tasks for a given DAG. The `max_active_runs` for this DAG was set to `1`, so the entire backfill stalled even though all the tasks in the last DAG it ran completed successfully.
### What you think should happen instead
I would assume that once all the tasks are complete in a DAG (whether succeeded, skipped, or failed) during a backfill, the backfill should mark the DAG with the proper state and proceed on with the rest of the tasks.
### How to reproduce
Here is a simulacrum of our DAG with all the actual logic stripped out:
```python
from datetime import datetime
from airflow.decorators import dag
from airflow.exceptions import AirflowSkipException
from airflow.operators.python import PythonOperator
from airflow.utils.task_group import TaskGroup
from airflow.utils.trigger_rule import TriggerRule
def skipme():
raise AirflowSkipException("Skip")
def run():
return
@dag(
schedule_interval="@daily",
start_date=datetime(2022, 7, 14),
catchup=False,
max_active_runs=1,
)
def sample_dag_with_skip():
a = PythonOperator(
task_id="first",
python_callable=run,
)
with TaskGroup(group_id="subgroup") as tg:
b = PythonOperator(
task_id="run_and_skip",
trigger_rule=TriggerRule.NONE_SKIPPED,
python_callable=skipme,
)
c = PythonOperator(
task_id="run_fine",
trigger_rule=TriggerRule.NONE_SKIPPED,
python_callable=skipme,
)
d = PythonOperator(
task_id="gather",
python_callable=run,
)
e = PythonOperator(
task_id="always_succeed",
trigger_rule=TriggerRule.ALL_DONE,
python_callable=run,
)
[b, c] >> d >> e
f = PythonOperator(
task_id="final",
trigger_rule=TriggerRule.ALL_DONE,
python_callable=run,
)
a >> tg >> f
skip_dag = sample_dag_with_skip()
```
Here's a screenshot of the DAG:
![image](https://user-images.githubusercontent.com/10214785/181389328-5183b041-1ba3-483f-b18e-c8e6d5338152.png)
Note that the DAG is still shown as "running" even though the last task ended several minutes ago:
![image](https://user-images.githubusercontent.com/10214785/181389434-6b528f4b-ece7-4c71-bfa2-2a1f879479c6.png)
Here's the backfill command I ran for this DAG: `airflow dags backfill -s 2022-07-25 -e 2022-07-26 sample_dag_with_skip --reset-dagruns -y`
And here are the logs from the backfill process:
<details> <summary>Backfill logs</summary>
```
airflow@42a81ed08a3d:~$ airflow dags backfill -s 2022-07-25 -e 2022-07-26 sample_dag_with_skip --reset-dagruns -y
/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/cli/commands/dag_command.py:57 PendingDeprecationWarning: --ignore-first-depends-on-past is deprecated as the value is always set to True
[2022-07-27 23:15:44,655] {dagbag.py:507} INFO - Filling up the DagBag from /usr/local/airflow/openverse_catalog/dags
[2022-07-27 23:15:44,937] {urls.py:74} INFO - https://creativecommons.org/publicdomain/zero/1.0 was rewritten to https://creativecommons.org/publicdomain/zero/1.0/
[2022-07-27 23:15:44,948] {media.py:63} INFO - Initialized image MediaStore with provider brooklynmuseum
[2022-07-27 23:15:44,948] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,948] {media.py:186} INFO - Output path: /var/workflow_output/brooklynmuseum_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,952] {media.py:63} INFO - Initialized image MediaStore with provider europeana
[2022-07-27 23:15:44,952] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,952] {media.py:186} INFO - Output path: /var/workflow_output/europeana_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,953] {media.py:63} INFO - Initialized image MediaStore with provider finnishmuseums
[2022-07-27 23:15:44,953] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,953] {media.py:186} INFO - Output path: /var/workflow_output/finnishmuseums_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,955] {media.py:63} INFO - Initialized image MediaStore with provider flickr
[2022-07-27 23:15:44,955] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,955] {media.py:186} INFO - Output path: /var/workflow_output/flickr_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,957] {media.py:63} INFO - Initialized audio MediaStore with provider freesound
[2022-07-27 23:15:44,957] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,957] {media.py:186} INFO - Output path: /var/workflow_output/freesound_audio_v001_20220727231544.tsv
[2022-07-27 23:15:44,959] {media.py:63} INFO - Initialized audio MediaStore with provider jamendo
[2022-07-27 23:15:44,959] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,959] {media.py:186} INFO - Output path: /var/workflow_output/jamendo_audio_v001_20220727231544.tsv
[2022-07-27 23:15:44,961] {media.py:63} INFO - Initialized image MediaStore with provider met
[2022-07-27 23:15:44,961] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,961] {media.py:186} INFO - Output path: /var/workflow_output/met_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,962] {media.py:63} INFO - Initialized image MediaStore with provider museumsvictoria
[2022-07-27 23:15:44,962] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,962] {media.py:186} INFO - Output path: /var/workflow_output/museumsvictoria_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,964] {media.py:63} INFO - Initialized image MediaStore with provider nypl
[2022-07-27 23:15:44,964] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,964] {media.py:186} INFO - Output path: /var/workflow_output/nypl_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,965] {media.py:63} INFO - Initialized image MediaStore with provider phylopic
[2022-07-27 23:15:44,965] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,965] {media.py:186} INFO - Output path: /var/workflow_output/phylopic_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,967] {media.py:63} INFO - Initialized image MediaStore with provider rawpixel
[2022-07-27 23:15:44,967] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,967] {media.py:186} INFO - Output path: /var/workflow_output/rawpixel_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,968] {media.py:63} INFO - Initialized image MediaStore with provider sciencemuseum
[2022-07-27 23:15:44,968] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,968] {media.py:186} INFO - Output path: /var/workflow_output/sciencemuseum_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,970] {media.py:63} INFO - Initialized image MediaStore with provider smithsonian
[2022-07-27 23:15:44,970] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,970] {media.py:186} INFO - Output path: /var/workflow_output/smithsonian_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,971] {media.py:63} INFO - Initialized image MediaStore with provider smk
[2022-07-27 23:15:44,972] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,972] {media.py:186} INFO - Output path: /var/workflow_output/smk_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,974] {media.py:63} INFO - Initialized image MediaStore with provider waltersartmuseum
[2022-07-27 23:15:44,974] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,974] {media.py:186} INFO - Output path: /var/workflow_output/waltersartmuseum_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,976] {media.py:63} INFO - Initialized audio MediaStore with provider wikimedia_audio
[2022-07-27 23:15:44,976] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,976] {media.py:186} INFO - Output path: /var/workflow_output/wikimedia_audio_audio_v001_20220727231544.tsv
[2022-07-27 23:15:44,976] {media.py:63} INFO - Initialized image MediaStore with provider wikimedia
[2022-07-27 23:15:44,976] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,977] {media.py:186} INFO - Output path: /var/workflow_output/wikimedia_image_v001_20220727231544.tsv
[2022-07-27 23:15:44,980] {media.py:63} INFO - Initialized image MediaStore with provider wordpress
[2022-07-27 23:15:44,980] {media.py:168} INFO - No given output directory. Using OUTPUT_DIR from environment.
[2022-07-27 23:15:44,980] {media.py:186} INFO - Output path: /var/workflow_output/wordpress_image_v001_20220727231544.tsv
[2022-07-27 23:15:45,043] {executor_loader.py:106} INFO - Loaded executor: LocalExecutor
[2022-07-27 23:15:45,176] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'first', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmpfowbb78c']
[2022-07-27 23:15:50,050] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'first', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmpfowbb78c']
[2022-07-27 23:15:50,060] {dagrun.py:647} WARNING - Failed to get task '<TaskInstance: sample_dag_with_skip.skipme backfill__2022-07-25T00:00:00+00:00 [skipped]>' for dag 'sample_dag_with_skip'. Marking it as removed.
[2022-07-27 23:15:50,061] {dagrun.py:647} WARNING - Failed to get task '<TaskInstance: sample_dag_with_skip.always_run backfill__2022-07-25T00:00:00+00:00 [success]>' for dag 'sample_dag_with_skip'. Marking it as removed.
[2022-07-27 23:15:50,063] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 2 | tasks waiting: 7 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 5
[2022-07-27 23:15:50,071] {dagbag.py:507} INFO - Filling up the DagBag from /usr/local/***/openverse_catalog/dags/simple_backfill_example.py
[2022-07-27 23:15:50,089] {task_command.py:369} INFO - Running <TaskInstance: sample_dag_with_skip.first backfill__2022-07-25T00:00:00+00:00 [queued]> on host 42a81ed08a3d
[2022-07-27 23:15:50,486] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:15:50,487] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:15:50,488] {base_aws.py:206} INFO - Credentials retrieved from login
[2022-07-27 23:15:50,488] {base_aws.py:100} INFO - Retrieving region_name from Connection.extra_config['region_name']
[2022-07-27 23:15:50,525] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:15:55,064] {dagrun.py:647} WARNING - Failed to get task '<TaskInstance: sample_dag_with_skip.skipme backfill__2022-07-25T00:00:00+00:00 [removed]>' for dag 'sample_dag_with_skip'. Marking it as removed.
[2022-07-27 23:15:55,065] {dagrun.py:647} WARNING - Failed to get task '<TaskInstance: sample_dag_with_skip.always_run backfill__2022-07-25T00:00:00+00:00 [removed]>' for dag 'sample_dag_with_skip'. Marking it as removed.
[2022-07-27 23:15:55,067] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 2 | tasks waiting: 7 | succeeded: 1 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 5
[2022-07-27 23:15:55,081] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'subgroup.run_and_skip', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmp1uesref4']
[2022-07-27 23:15:55,170] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'subgroup.run_fine', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmpib__7p5u']
[2022-07-27 23:16:00,058] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'subgroup.run_fine', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmpib__7p5u']
[2022-07-27 23:16:00,058] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'subgroup.run_and_skip', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmp1uesref4']
[2022-07-27 23:16:00,063] {dagrun.py:647} WARNING - Failed to get task '<TaskInstance: sample_dag_with_skip.skipme backfill__2022-07-25T00:00:00+00:00 [removed]>' for dag 'sample_dag_with_skip'. Marking it as removed.
[2022-07-27 23:16:00,064] {dagrun.py:647} WARNING - Failed to get task '<TaskInstance: sample_dag_with_skip.always_run backfill__2022-07-25T00:00:00+00:00 [removed]>' for dag 'sample_dag_with_skip'. Marking it as removed.
[2022-07-27 23:16:00,065] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 2 | tasks waiting: 5 | succeeded: 1 | running: 2 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 3
[2022-07-27 23:16:00,077] {dagbag.py:507} INFO - Filling up the DagBag from /usr/local/***/openverse_catalog/dags/simple_backfill_example.py
[2022-07-27 23:16:00,080] {dagbag.py:507} INFO - Filling up the DagBag from /usr/local/***/openverse_catalog/dags/simple_backfill_example.py
[2022-07-27 23:16:00,096] {task_command.py:369} INFO - Running <TaskInstance: sample_dag_with_skip.subgroup.run_fine backfill__2022-07-25T00:00:00+00:00 [queued]> on host 42a81ed08a3d
[2022-07-27 23:16:00,097] {task_command.py:369} INFO - Running <TaskInstance: sample_dag_with_skip.subgroup.run_and_skip backfill__2022-07-25T00:00:00+00:00 [queued]> on host 42a81ed08a3d
[2022-07-27 23:16:00,504] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:00,505] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:00,506] {base_aws.py:206} INFO - Credentials retrieved from login
[2022-07-27 23:16:00,506] {base_aws.py:100} INFO - Retrieving region_name from Connection.extra_config['region_name']
[2022-07-27 23:16:00,514] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:00,515] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:00,516] {base_aws.py:206} INFO - Credentials retrieved from login
[2022-07-27 23:16:00,516] {base_aws.py:100} INFO - Retrieving region_name from Connection.extra_config['region_name']
[2022-07-27 23:16:00,541] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:00,559] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:05,071] {dagrun.py:647} WARNING - Failed to get task '<TaskInstance: sample_dag_with_skip.skipme backfill__2022-07-25T00:00:00+00:00 [removed]>' for dag 'sample_dag_with_skip'. Marking it as removed.
[2022-07-27 23:16:05,072] {dagrun.py:647} WARNING - Failed to get task '<TaskInstance: sample_dag_with_skip.always_run backfill__2022-07-25T00:00:00+00:00 [removed]>' for dag 'sample_dag_with_skip'. Marking it as removed.
[2022-07-27 23:16:05,073] {dagrun.py:583} ERROR - Deadlock; marking run <DagRun sample_dag_with_skip @ 2022-07-25 00:00:00+00:00: backfill__2022-07-25T00:00:00+00:00, externally triggered: False> failed
[2022-07-27 23:16:05,073] {dagrun.py:607} INFO - DagRun Finished: dag_id=sample_dag_with_skip, execution_date=2022-07-25 00:00:00+00:00, run_id=backfill__2022-07-25T00:00:00+00:00, run_start_date=None, run_end_date=2022-07-27 23:16:05.073628+00:00, run_duration=None, state=failed, external_trigger=False, run_type=backfill, data_interval_start=2022-07-25 00:00:00+00:00, data_interval_end=2022-07-26 00:00:00+00:00, dag_hash=None
[2022-07-27 23:16:05,075] {dagrun.py:799} WARNING - Failed to record duration of <DagRun sample_dag_with_skip @ 2022-07-25 00:00:00+00:00: backfill__2022-07-25T00:00:00+00:00, externally triggered: False>: start_date is not set.
[2022-07-27 23:16:05,075] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 5 | succeeded: 1 | running: 0 | failed: 0 | skipped: 2 | deadlocked: 0 | not ready: 3
[2022-07-27 23:16:05,096] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'subgroup.always_succeed', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmpulo4p958']
[2022-07-27 23:16:10,065] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'subgroup.always_succeed', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmpulo4p958']
[2022-07-27 23:16:10,067] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 3 | succeeded: 1 | running: 1 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 1
[2022-07-27 23:16:10,084] {dagbag.py:507} INFO - Filling up the DagBag from /usr/local/***/openverse_catalog/dags/simple_backfill_example.py
[2022-07-27 23:16:10,104] {task_command.py:369} INFO - Running <TaskInstance: sample_dag_with_skip.subgroup.always_succeed backfill__2022-07-25T00:00:00+00:00 [queued]> on host 42a81ed08a3d
[2022-07-27 23:16:10,500] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:10,501] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:10,502] {base_aws.py:206} INFO - Credentials retrieved from login
[2022-07-27 23:16:10,502] {base_aws.py:100} INFO - Retrieving region_name from Connection.extra_config['region_name']
[2022-07-27 23:16:10,537] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:15,074] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 3 | succeeded: 2 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 1
[2022-07-27 23:16:15,091] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'final', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmp7ifr68s2']
[2022-07-27 23:16:20,073] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'sample_dag_with_skip', 'final', 'backfill__2022-07-25T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/simple_backfill_example.py', '--cfg-path', '/tmp/tmp7ifr68s2']
[2022-07-27 23:16:20,075] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 2 | running: 1 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:16:20,094] {dagbag.py:507} INFO - Filling up the DagBag from /usr/local/***/openverse_catalog/dags/simple_backfill_example.py
[2022-07-27 23:16:20,114] {task_command.py:369} INFO - Running <TaskInstance: sample_dag_with_skip.final backfill__2022-07-25T00:00:00+00:00 [queued]> on host 42a81ed08a3d
[2022-07-27 23:16:20,522] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:20,523] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:20,524] {base_aws.py:206} INFO - Credentials retrieved from login
[2022-07-27 23:16:20,524] {base_aws.py:100} INFO - Retrieving region_name from Connection.extra_config['region_name']
[2022-07-27 23:16:20,561] {base.py:68} INFO - Using connection ID 'aws_default' for task execution.
[2022-07-27 23:16:25,082] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:16:30,083] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:16:35,089] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:16:40,093] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:16:45,099] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:16:50,105] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:16:55,112] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:00,117] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:05,122] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:10,128] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:15,133] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:20,137] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:25,141] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:30,145] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:35,149] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:40,153] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:45,157] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:50,161] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:17:55,166] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:18:00,169] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:18:05,174] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
[2022-07-27 23:18:10,177] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 2 | tasks waiting: 2 | succeeded: 3 | running: 0 | failed: 0 | skipped: 3 | deadlocked: 0 | not ready: 0
^C[2022-07-27 23:18:11,199] {backfill_job.py:870} WARNING - Backfill terminated by user.
[2022-07-27 23:18:11,199] {local_executor.py:390} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait.
```
</details>
---
It's worth noting that I tried to replicate this with a DAG that only had a single skip task, and a DAG with skip -> run -> skip, and both succeeded with the backfill. So my guess would be that there's an odd interaction with the TaskGroup, skipped tasks, trigger rules, and possibly `max_active_runs`.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-sqlite==2.1.3
```
### Deployment
Docker-Compose
### Deployment details
This is a custom configured Docker image, but doesn't deviate too much from a standard deployment: https://github.com/WordPress/openverse-catalog/blob/main/docker/airflow/Dockerfile
### Anything else
I'll try to see if I can continue to pare down the DAG to see if there are pieces I can throw out and still replicate the error. I don't think I'd be comfortable submitting a PR for this one because my gut says it's probably deep in the bowels of the codebase 😅 If it's something clear or straightforward though, I'd be happy to take a stab at it! I'd just need to be pointed in the right direction within the codebase 🙂
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25353 | https://github.com/apache/airflow/pull/26161 | 5b216e9480e965c7c1919cb241668beca53ab521 | 6931fbf8f7c0e3dfe96ce51ef03f2b1502baef07 | "2022-07-27T23:34:34Z" | python | "2022-09-06T09:43:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,352 | ["airflow/decorators/base.py", "airflow/models/expandinput.py", "airflow/models/mappedoperator.py", "tests/models/test_taskinstance.py", "tests/models/test_xcom_arg_map.py"] | expand_kwargs.map(func) gives unhelpful error message if func returns list | ### Apache Airflow version
main (development)
### What happened
Here's a DAG:
```python3
with DAG(
dag_id="expand_list",
doc_md="try to get kwargs from a list",
schedule_interval=None,
start_date=datetime(2001, 1, 1),
) as expand_list:
@expand_list.task
def do_this():
return [
("echo hello $USER", "USER", "foo"),
("echo hello $USER", "USER", "bar"),
]
def mapper(tuple):
if tuple[2] == "bar":
return [1, 2, 3]
else:
return {"bash_command": tuple[0], "env": {tuple[1]: tuple[2]}}
BashOperator.partial(task_id="one_cmd").expand_kwargs(do_this().map(mapper))
```
The `foo` task instance succeeds as expected, and the `bar` task fails as expected. But the error message that it gives isn't particularly helpful to a user who doesn't know what they did wrong:
```
ERROR - Failed to execute task: resolve() takes 3 positional arguments but 4 were given.
Traceback (most recent call last):
File "/home/matt/src/airflow/airflow/executors/debug_executor.py", line 78, in _run_task
ti.run(job_id=ti.job_id, **params)
File "/home/matt/src/airflow/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1782, in run
self._run_raw_task(
File "/home/matt/src/airflow/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1445, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1580, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 2202, in render_templates
rendered_task = self.task.render_template_fields(context)
File "/home/matt/src/airflow/airflow/models/mappedoperator.py", line 751, in render_template_fields
unmapped_task = self.unmap(mapped_kwargs)
File "/home/matt/src/airflow/airflow/models/mappedoperator.py", line 591, in unmap
kwargs = self._expand_mapped_kwargs(resolve)
File "/home/matt/src/airflow/airflow/models/mappedoperator.py", line 546, in _expand_mapped_kwargs
return expand_input.resolve(*resolve)
TypeError: resolve() takes 3 positional arguments but 4 were given
```
### What you think should happen instead
Whatever checks the return value for mappability should do more to point the user to their error. Perhaps something like:
> UnmappableDataError: Expected a dict with keys that BashOperator accepts, got `[1, 2, 3]` instead
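For illustration, a minimal self-contained sketch of the kind of validation that could produce such a message (the helper name and exact wording are assumptions, not Airflow internals):
```python
def _check_expand_kwargs_item(item, operator_name):
    # Illustrative only: reject anything that is not a dict of string keyword arguments.
    if not isinstance(item, dict) or not all(isinstance(k, str) for k in item):
        raise ValueError(
            f"expand_kwargs() expected a dict of keyword arguments for {operator_name}, "
            f"got {item!r} instead"
        )


_check_expand_kwargs_item({"bash_command": "echo hi"}, "BashOperator")  # passes

try:
    _check_expand_kwargs_item([1, 2, 3], "BashOperator")
except ValueError as err:
    print(err)  # points the user at the offending value instead of a TypeError deep in resolve()
```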
### How to reproduce
Run the dag above
### Operating System
Linux 5.10.101 #1-NixOS SMP Wed Feb 16 11:54:31 UTC 2022 x86_64 GNU/Linux
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25352 | https://github.com/apache/airflow/pull/25355 | f6b48ac6dfaf931a5433ec16369302f68f038c65 | 4e786e31bcdf81427163918e14d191e55a4ab606 | "2022-07-27T22:49:28Z" | python | "2022-07-29T08:58:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,349 | ["airflow/providers/hashicorp/_internal_client/vault_client.py", "tests/providers/hashicorp/_internal_client/test_vault_client.py", "tests/providers/hashicorp/hooks/test_vault.py"] | Vault client for hashicorp provider prints a deprecation warning when using kubernetes login | ### Apache Airflow Provider(s)
hashicorp
### Versions of Apache Airflow Providers
```
apache-airflow==2.3.0
apache-airflow-providers-hashicorp==2.2.0
hvac==0.11.2
```
### Apache Airflow version
2.3.0
### Operating System
Ubuntu 18.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
Using the vault secrets backend prints a deprecation warning when using the kubernetes auth method:
```
/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/hashicorp/_internal_client/vault_client.py:284 DeprecationWarning: Call to deprecated function 'auth_kubernetes'. This method will be removed in version '1.0.0' Please use the 'login' method on the 'hvac.api.auth_methods.kubernetes' class moving forward.
```
This code is still present in `main` at https://github.com/apache/airflow/blob/main/airflow/providers/hashicorp/_internal_client/vault_client.py#L258-L260.
### What you think should happen instead
The new kubernetes authentication method should be used instead. This code:
```python
if self.auth_mount_point:
    _client.auth_kubernetes(role=self.kubernetes_role, jwt=jwt, mount_point=self.auth_mount_point)
else:
    _client.auth_kubernetes(role=self.kubernetes_role, jwt=jwt)
```
Should be able to be updated to:
```python
from hvac.api.auth_methods import Kubernetes

if self.auth_mount_point:
    Kubernetes(_client.adapter).login(role=self.kubernetes_role, jwt=jwt, mount_point=self.auth_mount_point)
else:
    Kubernetes(_client.adapter).login(role=self.kubernetes_role, jwt=jwt)
```
### How to reproduce
Use the vault secrets backend with the kubernetes auth method and look at the logs.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25349 | https://github.com/apache/airflow/pull/25351 | f4b93cc097dab95437c9c4b37474f792f80fd14e | ad0a4965aaf0702f0e8408660b912e87d3c75c22 | "2022-07-27T19:19:01Z" | python | "2022-07-28T18:23:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,344 | ["airflow/models/abstractoperator.py", "tests/models/test_baseoperator.py"] | Improve Airflow logging for operator Jinja template processing | ### Description
When an operator uses Jinja templating, debugging issues is difficult because the Airflow task log only displays a stack trace.
### Use case/motivation
When there's a templating issue, I'd like to have some specific, actionable info to help understand the problem. At minimum:
* Which operator or task had the issue?
* Which field had the issue?
* What was the Jinja template?
Possibly also the Jinja context, although that can be very verbose.
I have prototyped this in my local Airflow dev environment, and I propose something like the following. (Note the logging commands, which are not present in the Airflow repo.)
Please let me know if this sounds reasonable, and I will be happy to create a PR.
```
def _do_render_template_fields(
    self,
    parent,
    template_fields,
    context,
    jinja_env,
    seen_oids,
) -> None:
    """Copied from Airflow 2.2.5 with added logging."""
    logger.info(f"BaseOperator._do_render_template_fields(): Task {self.task_id}")
    for attr_name in template_fields:
        content = getattr(parent, attr_name)
        if content:
            logger.info(f"Rendering template for '{attr_name}' field: {content!r}")
            rendered_content = self.render_template(content, context, jinja_env, seen_oids)
            setattr(parent, attr_name, rendered_content)
```
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25344 | https://github.com/apache/airflow/pull/25452 | 9c632684341fb3115d654aecb83aa951d80b19af | 4da2b0c216c92795f19862a3ff6634e5a5936138 | "2022-07-27T15:46:39Z" | python | "2022-08-02T19:40:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,343 | ["airflow/callbacks/callback_requests.py", "airflow/models/taskinstance.py", "tests/callbacks/test_callback_requests.py"] | Object of type datetime is not JSON serializable after detecting zombie jobs with CeleryExecutor and separated Scheduler and DAG-Processor | ### Apache Airflow version
2.3.3 (latest released)
### What happened
After running for a certain period (a few minutes to several hours, depending on the number of active DAGs in the environment), the scheduler crashes with the following error message:
```
[2022-07-26 15:07:24,362] {executor_loader.py:105} INFO - Loaded executor: CeleryExecutor
[2022-07-26 15:07:24,363] {scheduler_job.py:1252} INFO - Resetting orphaned tasks for active dag runs
[2022-07-26 15:07:25,585] {celery_executor.py:532} INFO - Adopted the following 1 tasks from a dead executor
<TaskInstance: freewheel_uafl_data_scala.freewheel.delivery_data scheduled__2022-07-25T04:15:00+00:00 [running]> in state STARTED
[2022-07-26 15:07:35,881] {scheduler_job.py:1381} WARNING - Failing (1) jobs without heartbeat after 2022-07-26 12:37:35.868798+00:00
[2022-07-26 15:07:35,881] {scheduler_job.py:1389} ERROR - Detected zombie job: {'full_filepath': '/data/dags/09_scala_apps/freewheel_uafl_data_scala.py', 'msg': 'Detected <TaskInstance: freewheel_uafl_data_scala.freewheel.delivery_data scheduled__2022-07-25T04:15:00+00:00 [running]> as zombie', 'simple_task_instance': <airflow.models.taskinstance.SimpleTaskInstance object at 0x7fb4a1105690>, 'is_failure_callback': True}
[2022-07-26 15:07:35,883] {scheduler_job.py:769} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 752, in _execute
self._run_scheduler_loop()
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 873, in _run_scheduler_loop
next_event = timers.run(blocking=False)
File "/usr/lib/python3.10/sched.py", line 151, in run
action(*argument, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/event_scheduler.py", line 36, in repeat
action(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 1390, in _find_zombies
self.executor.send_callback(request)
File "/usr/lib/python3.10/site-packages/airflow/executors/base_executor.py", line 363, in send_callback
self.callback_sink.send(request)
File "/usr/lib/python3.10/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/callbacks/database_callback_sink.py", line 34, in send
db_callback = DbCallbackRequest(callback=callback, priority_weight=10)
File "<string>", line 4, in __init__
File "/usr/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 481, in _initialize_instance
with util.safe_reraise():
File "/usr/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/usr/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
raise exception
File "/usr/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 479, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/models/db_callback_request.py", line 44, in __init__
self.callback_data = callback.to_json()
File "/usr/lib/python3.10/site-packages/airflow/callbacks/callback_requests.py", line 79, in to_json
return json.dumps(dict_obj)
File "/usr/lib/python3.10/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type datetime is not JSON serializable
[2022-07-26 15:07:36,100] {scheduler_job.py:781} INFO - Exited execute loop
Traceback (most recent call last):
File "/usr/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/lib/python3.10/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/usr/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/usr/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/usr/lib/python3.10/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 752, in _execute
self._run_scheduler_loop()
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 873, in _run_scheduler_loop
next_event = timers.run(blocking=False)
File "/usr/lib/python3.10/sched.py", line 151, in run
action(*argument, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/event_scheduler.py", line 36, in repeat
action(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 1390, in _find_zombies
self.executor.send_callback(request)
File "/usr/lib/python3.10/site-packages/airflow/executors/base_executor.py", line 363, in send_callback
self.callback_sink.send(request)
File "/usr/lib/python3.10/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/callbacks/database_callback_sink.py", line 34, in send
db_callback = DbCallbackRequest(callback=callback, priority_weight=10)
File "<string>", line 4, in __init__
File "/usr/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 481, in _initialize_instance
with util.safe_reraise():
File "/usr/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/usr/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
raise exception
File "/usr/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 479, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/models/db_callback_request.py", line 44, in __init__
self.callback_data = callback.to_json()
File "/usr/lib/python3.10/site-packages/airflow/callbacks/callback_requests.py", line 79, in to_json
return json.dumps(dict_obj)
File "/usr/lib/python3.10/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type datetime is not JSON serializable
```
### What you think should happen instead
The scheduler should handle zombie jobs without crashing.
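For illustration only, a self-contained sketch of one way a datetime-bearing callback payload could be made serializable; this is an assumption about a possible approach, not the fix that was actually applied:
```python
import json
from datetime import datetime, timezone


def _default(obj):
    # Fall back to ISO-8601 strings for datetimes instead of raising TypeError.
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")


payload = {"msg": "Detected zombie", "start_date": datetime.now(timezone.utc)}
print(json.dumps(payload, default=_default))
```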
### How to reproduce
The following conditions are necessary:
- dag-processor and scheduler run in separated containers
- AirFlow uses the CeleryExecutor
- There are zombie jobs
### Operating System
Alpine Linux 3.16.1
### Versions of Apache Airflow Providers
```
apache-airflow-providers-apache-hdfs==3.0.1
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.2.0
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-exasol==2.1.3
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-jenkins==3.0.0
apache-airflow-providers-microsoft-mssql==3.1.0
apache-airflow-providers-odbc==3.1.0
apache-airflow-providers-oracle==3.1.0
apache-airflow-providers-postgres==5.1.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-sqlite==3.1.0
apache-airflow-providers-ssh==3.1.0
```
### Deployment
Other 3rd-party Helm chart
### Deployment details
One Pod on Kubernetes containing the following containers
- 1 Container for the webserver service
- 1 Container for the scheduler service
- 1 Container for the dag-processor service
- 1 Container for the flower service
- 1 Container for the redis service
- 2 or 3 containers for the celery workers services
Due to a previous issue crashing the scheduler with the message `UNEXPECTED COMMIT - THIS WILL BREAK HA LOCKS`, we substitute `scheduler_job.py` with the file `https://raw.githubusercontent.com/tanelk/airflow/a4b22932e5ac9c2b6f37c8c58345eee0f63cae09/airflow/jobs/scheduler_job.py`.
Sadly I don't remember which issue or MR exactly but it was related to scheduler and dag-processor running in separate containers.
### Anything else
It looks like only the **combination of CeleryExecutor and separated scheduler and dag-processor** services crashes the scheduler when handling zombie jobs.
The KubernetesExecutor with separated scheduler and dag-processor doesn't crash the scheduler.
It looks like the CeleryExecutor with scheduler and dag-processor in the same container doesn't crash the scheduler.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25343 | https://github.com/apache/airflow/pull/25471 | 3421ecc21bafaf355be5b79ec4ed19768e53275a | d7e14ba0d612d8315238f9d0cba4ef8c44b6867c | "2022-07-27T15:28:28Z" | python | "2022-08-02T21:50:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,330 | ["airflow/operators/bash.py"] | User defined `env` clobbers PATH, BashOperator can't find bash | ### Apache Airflow version
main (development)
### What happened
NixOS is unconventional in some ways. For instance `which bash` prints `/run/current-system/sw/bin/bash`, which isn't a place that most people expect to go looking for bash.
I can't be sure if this is the reason--or if it's some other peculiarity--but on NixOS, cases where `BashOperator` defines an `env` cause the task to fail with this error:
```
venv ❯ airflow dags test nopath "$(date +%Y-%m-%d)"
[2022-07-26 21:54:09,704] {dagbag.py:508} INFO - Filling up the DagBag from /home/matt/today/dags
[2022-07-26 21:54:10,129] {base_executor.py:91} INFO - Adding to queue: ['<TaskInstance: nopath.nopath backfill__2022-07-26T00:00:00+00:00 [queued]>']
[2022-07-26 21:54:15,148] {subprocess.py:62} INFO - Tmp dir root location:
/tmp
[2022-07-26 21:54:15,149] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'echo hello world']
[2022-07-26 21:54:15,238] {debug_executor.py:84} ERROR - Failed to execute task: [Errno 2] No such file or directory: 'bash'.
Traceback (most recent call last):
File "/home/matt/src/airflow/airflow/executors/debug_executor.py", line 78, in _run_task
ti.run(job_id=ti.job_id, **params)
File "/home/matt/src/airflow/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1782, in run
self._run_raw_task(
File "/home/matt/src/airflow/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1445, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1623, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1694, in _execute_task
result = execute_callable(context=context)
File "/home/matt/src/airflow/airflow/operators/bash.py", line 183, in execute
result = self.subprocess_hook.run_command(
File "/home/matt/src/airflow/airflow/hooks/subprocess.py", line 76, in run_command
self.sub_process = Popen(
File "/nix/store/cgxc3jz7idrb1wnb2lard9rvcx6aw2si-python3-3.9.6/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/nix/store/cgxc3jz7idrb1wnb2lard9rvcx6aw2si-python3-3.9.6/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'bash'
```
On the other hand, tasks succeed if:
- The author doesn't use the `env` kwarg
- `env` is replaced with `append_env`
- they use `env` to explicitly set `PATH` to a folder containing `bash`
- or they are run on a more conventional system (like my MacBook)
Here is a DAG which demonstrates this:
```python3
from airflow.models import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime, timedelta
with DAG(
    dag_id="withpath",
    start_date=datetime(1970, 1, 1),
    schedule_interval=None,
) as withpath:
    BashOperator(
        task_id="withpath",
        env={"PATH": "/run/current-system/sw/bin/", "WORLD": "world"},
        bash_command="echo hello $WORLD",
    )

with DAG(
    dag_id="nopath",
    start_date=datetime(1970, 1, 1),
    schedule_interval=None,
) as nopath:
    BashOperator(
        task_id="nopath",
        env={"WORLD": "world"},
        bash_command="echo hello $WORLD",
    )
```
`withpath` succeeds, but `nopath` fails, showing the above error.
### What you think should happen instead
Unless the user explicitly sets PATH via the `env` kwarg, airflow should populate it with whatever it finds in the enclosing environment.
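A minimal sketch of that expected behaviour (the function name is an assumption and this is not the actual BashOperator/SubprocessHook code): user-supplied variables win, and PATH is inherited from the parent process when it is not set explicitly.
```python
import os


def build_env(user_env):
    # Merge semantics the report asks for: keep user values, inherit PATH if absent.
    env = dict(user_env)
    env.setdefault("PATH", os.environ.get("PATH", ""))
    return env


print(build_env({"WORLD": "world"}))  # contains WORLD plus the inherited PATH
```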
### How to reproduce
I can reproduce it reliably, but only on this machine. I'm willing to fix this myself--since I can test it right here--but I'm filing this issue because I need a hint. Where should I start?
### Operating System
NixOS 21.11 (Porcupine)
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25330 | https://github.com/apache/airflow/pull/25331 | 900c81b87a76a9df8a3a6435a0d42348e88c5bbb | c3adf3e65d32d8145e2341989a5336c3e5269e62 | "2022-07-27T04:25:45Z" | python | "2022-07-28T17:39:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,322 | ["docs/apache-airflow-providers-amazon/connections/aws.rst", "docs/apache-airflow-providers-amazon/img/aws-base-conn-airflow.png", "docs/apache-airflow-providers-amazon/logging/s3-task-handler.rst", "docs/spelling_wordlist.txt"] | Amazon S3 for logging using IAM role for service accounts(IRSA) | ### What do you see as an issue?
I am using the latest Helm Chart version (see the version below) to deploy Airflow on Amazon EKS and am trying to configure S3 for logging. We have a few docs that explain how to add logging variables through `values.yaml`, but that isn't sufficient for configuring S3 logging with IRSA. I couldn't find any other docs that explain this configuration in detail, hence I am adding the solution below.
Here is the link that I am referring to:
Amazon S3 for Logging https://github.com/apache/airflow/blob/main/docs/apache-airflow-providers-amazon/logging/s3-task-handler.rst
Airflow config
```
apiVersion: v2
name: airflow
version: 1.6.0
appVersion: 2.3.3
```
### Solving the problem
**I have managed to get S3 logging working with IAM roles for service accounts (IRSA).**
# Writing logs to Amazon S3 using AWS IRSA
## Step1: Create IAM role for service account (IRSA)
Create IRSA using `eksctl or terraform`. This command uses eksctl to create IAM role and service account
```sh
eksctl create iamserviceaccount --cluster="<EKS_CLUSTER_ID>" --name="<SERVICE_ACCOUNT_NAME>" --namespace=airflow --attach-policy-arn="<IAM_POLICY_ARN>" --approve
# e.g.,
eksctl create iamserviceaccount --cluster=airflow-eks-cluster --name=airflow-sa --namespace=airflow --attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3FullAccess --approve
```
## Step2: Update Helm Chart `values.yaml` with Service Account
Add the above Service Account (e.g., `airflow-sa`) to the Helm Chart `values.yaml` under the following sections. We are using an existing `serviceAccount`, hence `create: false` with the existing name `name: airflow-sa`. Annotations may not be required since they are added by **Step1**; they are included here for readability.
```yaml
workers:
  serviceAccount:
    create: false
    name: airflow-sa
    # Annotations to add to worker Kubernetes service account.
    annotations:
      eks.amazonaws.com/role-arn: <ENTER_IAM_ROLE_ARN_CREATED_BY_EKSCTL_COMMAND>

webserver:
  serviceAccount:
    create: false
    name: airflow-sa
    # Annotations to add to webserver Kubernetes service account.
    annotations:
      eks.amazonaws.com/role-arn: <ENTER_IAM_ROLE_ARN_CREATED_BY_EKSCTL_COMMAND>

config:
  logging:
    remote_logging: 'True'
    logging_level: 'INFO'
    remote_base_log_folder: 's3://<ENTER_YOUR_BUCKET_NAME>/<FOLDER_PATH>'
    remote_log_conn_id: 'aws_s3_conn'  # notice this name is used in Step3
    delete_worker_pods: 'False'
    encrypt_s3_logs: 'True'
```
## Step3: Create S3 connection in Airflow Web UI
Now the final step is to create the connection in the Airflow UI before executing the DAGs:
- Login to Airflow Web UI and Navigate to `Admin -> Connections`
- Create connection for S3 and select the options as shown in the image
<img width="861" alt="image (1)" src="https://user-images.githubusercontent.com/19464259/181126084-2a0ddf43-01a4-4abd-9031-b53fb4d8870f.png">
## Step4: Verify the logs
- Execute example DAGs
- Verify the logs in S3 bucket
- Verify the logs from Airflow UI from DAGs log
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25322 | https://github.com/apache/airflow/pull/25931 | 3326a0d493c92b15eea8cd9a874729db7b7a255c | bd3d6d3ee71839ec3628fa47294e0b3b8a6a6b9f | "2022-07-26T23:10:41Z" | python | "2022-10-10T08:40:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,313 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py"] | BaseSQLToGCSOperator parquet export format not limiting file size bug | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.8.0
### Apache Airflow version
2.3.2
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When using the `PostgresToGCSOperator(..., export_format='parquet', approx_max_file_size_bytes=Y, ...)`, when a temporary file exceeds the size defined by Y, the current file is not yielded, and no new chunk is created. Meaning that only 1 chunk will be uploaded irregardless of the size specified Y.
I believe [this](https://github.com/apache/airflow/blob/d876b4aa6d86f589b9957a2e69484c9e5365eba8/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L253) line of code, which is responsible for verifying whether the temporary file has exceeded its size, is the culprit, considering that the call to `tmp_file_handle.tell()` always returns 0 after a `parquet_writer.write_table(tbl)` call [[here]](https://github.com/apache/airflow/blob/d876b4aa6d86f589b9957a2e69484c9e5365eba8/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L240).
Therefore, even if the temporary file is already bigger than the defined approximate limit Y, no new file will be created and only a single chunk will be uploaded.
### What you think should happen instead
This behaviour is erroneous: when the temporary file exceeds the size defined by Y, the operator should upload the current temporary file to GCS and then create a new file for the remaining data.
A possible fix could be to use the `import os` package to determine the size of the temporary file with `os.stat(tmp_file_handle).st_size`, instead of using `tmp_file_handle.tell()`.
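A runnable sketch of that idea, with a helper name of our own choosing; note it uses `os.fstat` on the handle's file descriptor because `os.stat` expects a path or descriptor rather than a file object:
```python
import os
import tempfile


def exceeds_limit(tmp_file_handle, approx_max_file_size_bytes):
    tmp_file_handle.flush()
    # Measure what is actually on disk instead of trusting tell() after pyarrow writes.
    return os.fstat(tmp_file_handle.fileno()).st_size >= approx_max_file_size_bytes


with tempfile.NamedTemporaryFile() as handle:
    handle.write(b"x" * 2048)
    print(exceeds_limit(handle, 1024))  # True once the on-disk size passes the limit
```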
### How to reproduce
1. Create a postgres connection on airflow with id `postgres_test_conn`.
2. Create a gcp connection on airflow with id `gcp_test_conn`.
3. In the database referenced by the `postgres_test_conn`, in the public schema create a table `large_table`, where the total amount of data In the table is big enough to exceed the 10MB limit defined in the `approx_max_file_size_bytes` parameter.
4. Create a bucket named `issue_BaseSQLToGCSOperator_bucket`, in the gcp account referenced by the `gcp_test_conn`.
5. Create the dag exemplified in the excerpt below, and manually trigger the dag to fetch all the data from `large_table`, to insert in the `issue_BaseSQLToGCSOperator_bucket`. We should expect multiple chunks to be created, but due to this bug, only 1 chunk will be uploaded with the whole data from `large_table`.
```python
import pendulum
from airflow import DAG
from airflow.providers.google.cloud.transfers.postgres_to_gcs import PostgresToGCSOperator
with DAG(
    dag_id="issue_BaseSQLToGCSOperator",
    start_date=pendulum.parse("2022-01-01"),
) as dag:
    task = PostgresToGCSOperator(
        task_id='extract_task',
        filename='uploading-{}.parquet',
        bucket="issue_BaseSQLToGCSOperator_bucket",
        export_format='parquet',
        approx_max_file_size_bytes=10_485_760,
        sql="SELECT * FROM large_table",
        postgres_conn_id='postgres_test_conn',
        gcp_conn_id='gcp_test_conn',
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25313 | https://github.com/apache/airflow/pull/25469 | d0048414a6d3bdc282cc738af0185a9a1cd63ef8 | 803c0e252fc78a424a181a34a93e689fa9aaaa09 | "2022-07-26T16:15:12Z" | python | "2022-08-03T06:06:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,297 | ["airflow/exceptions.py", "airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | on_failure_callback is not called when task is terminated externally | ### Apache Airflow version
2.2.5
### What happened
`on_failure_callback` is not called when task is terminated externally.
A similar issue was reported in [#14422](https://github.com/apache/airflow/issues/14422) and fixed in [#15172](https://github.com/apache/airflow/pull/15172).
However, the code that fixed this was changed in a later PR [#16301](https://github.com/apache/airflow/pull/16301), after which `task_instance._run_finished_callback` is no longer called when SIGTERM is received
(https://github.com/apache/airflow/pull/16301/files#diff-d80fa918cc75c4d6aa582d5e29eeb812ba21371d6977fde45a4749668b79a515L85).
### What you think should happen instead
`on_failure_callback` should be called when a task fails, regardless of how the task fails.
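A minimal, self-contained sketch of the expected behaviour (names are illustrative and this is not Airflow's task runner code): an externally delivered SIGTERM should still route through the failure callback before the error is surfaced.
```python
import signal


def run_task(task_fn, on_failure_callback):
    def handler(signum, frame):
        raise RuntimeError("Task received SIGTERM signal")

    signal.signal(signal.SIGTERM, handler)
    try:
        task_fn()
    except Exception as err:
        on_failure_callback({"exception": err})  # fires even for external termination
        raise
```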
### How to reproduce
DAG file:
```
import datetime
import pendulum
from airflow.models import DAG
from airflow.operators.bash_operator import BashOperator
DEFAULT_ARGS = {
    'email': ['example@airflow.com']
}

TZ = pendulum.timezone("America/Los_Angeles")

test_dag = DAG(
    dag_id='test_callback_in_manually_terminated_dag',
    schedule_interval='*/10 * * * *',
    default_args=DEFAULT_ARGS,
    catchup=False,
    start_date=datetime.datetime(2022, 7, 14, 0, 0, tzinfo=TZ)
)

with test_dag:
    BashOperator(
        task_id='manually_terminated_task',
        bash_command='echo start; sleep 60',
        on_failure_callback=lambda context: print('This on_failure_callback should be called when the task fails.')
    )
```
While the task instance is running, either force-quitting the scheduler or manually updating its state to None in the database will cause the task to get SIGTERM and terminate. In either case, the failure callback will not be called, which does not match the behavior of previous versions of Airflow.
The stack trace is attached below and `on_failure_callback` is not called.
```
[2022-07-15, 02:02:24 UTC] {process_utils.py:124} INFO - Sending Signals.SIGTERM to group 10571. PIDs of all processes in the group: [10573, 10575, 10571]
[2022-07-15, 02:02:24 UTC] {process_utils.py:75} INFO - Sending the signal Signals.SIGTERM to group 10571
[2022-07-15, 02:02:24 UTC] {taskinstance.py:1431} ERROR - Received SIGTERM. Terminating subprocesses.
[2022-07-15, 02:02:24 UTC] {subprocess.py:99} INFO - Sending SIGTERM signal to process group
[2022-07-15, 02:02:24 UTC] {process_utils.py:70} INFO - Process psutil.Process(pid=10575, status='terminated', started='02:02:11') (10575) terminated with exit code None
[2022-07-15, 02:02:24 UTC] {taskinstance.py:1776} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.7/lib/python3.7/site-packages/airflow/operators/bash.py", line 182, in execute
cwd=self.cwd,
File "/opt/python3.7/lib/python3.7/site-packages/airflow/hooks/subprocess.py", line 87, in run_command
for raw_line in iter(self.sub_process.stdout.readline, b''):
File "/opt/python3.7/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1433, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
[2022-07-15, 02:02:24 UTC] {taskinstance.py:1289} INFO - Marking task as FAILED. dag_id=test_callback_in_manually_terminated_dag, task_id=manually_terminated_task, execution_date=20220715T015100, start_date=20220715T020211, end_date=20220715T020224
[2022-07-15, 02:02:24 UTC] {logging_mixin.py:109} WARNING - /opt/python3.7/lib/python3.7/site-packages/airflow/utils/email.py:108 PendingDeprecationWarning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
[2022-07-15, 02:02:24 UTC] {configuration.py:381} WARNING - section/key [smtp/smtp_user] not found in config
[2022-07-15, 02:02:24 UTC] {email.py:214} INFO - Email alerting: attempt 1
[2022-07-15, 02:02:24 UTC] {configuration.py:381} WARNING - section/key [smtp/smtp_user] not found in config
[2022-07-15, 02:02:24 UTC] {email.py:214} INFO - Email alerting: attempt 1
[2022-07-15, 02:02:24 UTC] {taskinstance.py:1827} ERROR - Failed to send email to: ['example@airflow.com']
...
OSError: [Errno 101] Network is unreachable
[2022-07-15, 02:02:24 UTC] {standard_task_runner.py:98} ERROR - Failed to execute job 159 for task manually_terminated_task (Task received SIGTERM signal; 10571)
[2022-07-15, 02:02:24 UTC] {process_utils.py:70} INFO - Process psutil.Process(pid=10571, status='terminated', exitcode=1, started='02:02:11') (10571) terminated with exit code 1
[2022-07-15, 02:02:24 UTC] {process_utils.py:70} INFO - Process psutil.Process(pid=10573, status='terminated', started='02:02:11') (10573) terminated with exit code None
```
### Operating System
CentOS Linux 7
### Deployment
Other Docker-based deployment
### Anything else
This is an issue in 2.2.5. However, I notice that it appears to be fixed in the main branch by PR [#21877](https://github.com/apache/airflow/pull/21877/files#diff-62f7d8a52fefdb8e05d4f040c6d3459b4a56fe46976c24f68843dbaeb5a98487R1885-R1887) although it was not intended to fix this issue. Is there a timeline for getting that PR into a release? We are happy to test it out to see if it fixes the issue once it's released.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25297 | https://github.com/apache/airflow/pull/29743 | 38b901ec3f07e6e65880b11cc432fb8ad6243629 | 671b88eb3423e86bb331eaf7829659080cbd184e | "2022-07-26T04:32:52Z" | python | "2023-02-24T23:08:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,295 | ["airflow/models/param.py", "tests/models/test_param.py"] | ParamsDict represents the class object itself, not keys and values on Task Instance Details | ### Apache Airflow version
2.3.3 (latest released)
### What happened
ParamsDict's printable representation shows the class object itself, like `<airflow.models.param.ParamsDict object at 0x7fd0eba9bb80>`, on the Task Instance Details page, because the class does not define a `__repr__` method.
<img width="791" alt="image" src="https://user-images.githubusercontent.com/16971553/180902761-88b9dd9f-7102-4e49-b8b8-0282b31dda56.png">
It used to be a plain `dict` object, and the keys and values that Params contained were shown on the UI, before Params was replaced with the advanced Params in #17100.
### What you think should happen instead
It was originally shown as below when it was a `dict` object.
![image](https://user-images.githubusercontent.com/16971553/180904396-7b527877-5bc6-48d2-938f-7d338dfd79a7.png)
I think it can be fixed by adding a `__repr__` method to the class, like below.
```python
class ParamsDict(dict):
    ...

    def __repr__(self):
        return f"{self.dump()}"
```
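As a toy illustration of the effect (a standalone stand-in, not the real class), the repr would then show the keys and values:
```python
class ParamsDict(dict):
    def dump(self):
        return dict(self)

    def __repr__(self):
        return f"{self.dump()}"


print(ParamsDict({"retries": 3}))  # {'retries': 3} instead of <ParamsDict object at 0x...>
```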
### How to reproduce
I guess it happens on any Airflow version 2.2.0+.
### Operating System
Linux, but it's not depending on OS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25295 | https://github.com/apache/airflow/pull/25305 | 285c23a2f90f4c765053aedbd3f92c9f58a84d28 | df388a3d5364b748993e61b522d0b68ff8b8124a | "2022-07-26T01:51:45Z" | python | "2022-07-27T07:13:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,286 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Error description cannot be shown | ### Apache Airflow version
2.3.3 (latest released)
### What happened
Unfortunately, I cannot get further information about my actual error because of the following KeyError
```
Traceback (most recent call last):
File "/usr/local/airflow/dags/common/databricks/operator.py", line 59, in execute
_handle_databricks_operator_execution(self, hook, self.log, context)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/databricks/operators/databricks.py", line 64, in _handle_databricks_operator_execution
notebook_error = run_output['error']
KeyError: 'error'
```
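For illustration, a hedged sketch of a defensive lookup that would avoid the KeyError and let the underlying failure be reported; the payload shape below is an assumption, not the documented Databricks response:
```python
run_output = {"result_state": "FAILED"}  # assumed example payload with no 'error' key
notebook_error = run_output.get("error", "no error message returned in run output")
print(notebook_error)
```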
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
I assume some Linux distribution
### Versions of Apache Airflow Providers
Astronomer Certified: [v2.3.3.post1](https://www.astronomer.io/downloads/ac/v2-3-3) based on Apache Airflow v2.3.3
Git Version: .release:2.3.3+astro.1+4446ad3e6781ad048c8342993f7c1418db225b25
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25286 | https://github.com/apache/airflow/pull/25427 | 87a0bd969b5bdb06c6e93236432eff6d28747e59 | 679a85325a73fac814c805c8c34d752ae7a94312 | "2022-07-25T12:29:56Z" | python | "2022-08-03T10:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,274 | ["airflow/providers/common/sql/hooks/sql.py", "tests/providers/common/sql/hooks/test_dbapi.py"] | Apache Airflow SqlSensor DbApiHook Error | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I am trying to make SqlSensor work with an Oracle database. I've installed all the required providers and successfully tested the connection. When I run SqlSensor I get this error message:
`ERROR - Failed to execute job 32 for task check_exec_date (The connection type is not supported by SqlSensor. The associated hook should be a subclass of `DbApiHook`. Got OracleHook; 419)`
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-oracle==3.2.0
apache-airflow-providers-postgres==5.1.0
apache-airflow-providers-sqlite==3.0.0
### Deployment
Other
### Deployment details
Run on Windows Subsystem for Linux
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25274 | https://github.com/apache/airflow/pull/25293 | 7e295b7d992f4ed13911e593f15fd18e0d4c16f6 | b0fd105f4ade9933476470f6e247dd5fa518ffc9 | "2022-07-25T08:47:00Z" | python | "2022-07-27T22:11:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,271 | ["airflow/plugins_manager.py", "airflow/utils/entry_points.py", "tests/plugins/test_plugins_manager.py", "tests/utils/test_entry_points.py", "tests/www/views/test_views.py"] | Version 2.3.3 breaks "Plugins as Python packages" feature | ### Apache Airflow version
2.3.3 (latest released)
### What happened
In 2.3.3
If I use the https://airflow.apache.org/docs/apache-airflow/stable/plugins.html#plugins-as-python-packages feature, then I see this error:
short:
`ValueError: The name 'airs' is already registered for this blueprint. Use 'name=' to provide a unique name.`
long:
> i'm trying to reproduce it...
If I don't use it(workarounding by AIRFLOW__CORE__PLUGINS_FOLDER), errors doesn't occur.
It didn't happend in 2.3.2 and earlier
### What you think should happen instead
It looks like plugins are imported multiple times if they are defined as plugins-as-python-packages.
Perhaps flask's major version change is the main cause.
Presumably, in Flask 1.0, duplicate registration of a blueprint was quietly filtered out, but in 2.0 it seems to have been changed to raise an error. (I am trying to find out if this hypothesis is correct.)
Anyway, using the latest version of FAB is important. We will have to adapt to this change, so plugins will have to be imported only once regardless of how they are defined.
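A minimal reproduction of that hypothesis (this assumes a Flask 2.x release that enforces unique blueprint names; older Flask tolerated the duplicate registration silently):
```python
from flask import Blueprint, Flask

app = Flask(__name__)
bp = Blueprint("airs", __name__)
app.register_blueprint(bp)
app.register_blueprint(bp)  # ValueError: The name 'airs' is already registered ...
```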
### How to reproduce
> It was reproduced in the environment used at work, but it is difficult to disclose or explain it.
> I'm working to reproduce it with the breeze command, and I open the issue first with the belief that it's not just me.
### Operating System
CentOS Linux release 7.9.2009 (Core)
### Versions of Apache Airflow Providers
```sh
$ SHIV_INTERPRETER=1 airsflow -m pip freeze | grep apache-
apache-airflow==2.3.3
apache-airflow-providers-apache-hive==3.1.0
apache-airflow-providers-apache-spark==2.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.1.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sqlite==3.1.0
```
but I think these are irrelevant.
### Deployment
Other 3rd-party Helm chart
### Deployment details
docker image based on centos7, python 3.9.10 interpreter, self-written helm2 chart ....
... but I think these are irrelevant.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25271 | https://github.com/apache/airflow/pull/25296 | cd14f3f65ad5011058ab53f2119198d6c082e82c | c30dc5e64d7229cbf8e9fbe84cfa790dfef5fb8c | "2022-07-25T07:11:29Z" | python | "2022-08-03T13:01:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,241 | ["airflow/www/views.py", "tests/www/views/test_views_grid.py"] | Add has_dataset_outlets in /grid_data | Return `has_dataset_outlets` in /grid_data so we can know whether to check for downstream dataset events in grid view.
Also: add `operator`
Also be mindful of performance on those endpoints (e.g do things in a bulk query), and it should be part of the acceptance criteria. | https://github.com/apache/airflow/issues/25241 | https://github.com/apache/airflow/pull/25323 | e994f2b0201ca9dfa3397d22b5ac9d10a11a8931 | d2df9fe7860d1e795040e40723828c192aca68be | "2022-07-22T19:28:28Z" | python | "2022-07-28T10:34:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,240 | ["airflow/www/forms.py", "tests/www/views/test_views_connection.py"] | Strip white spaces from values entered into fields in Airflow UI Connections form | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I accidentally (and then intentionally) added leading and trailing white spaces while adding connection parameters in the Airflow UI Connections form. What followed was an error message that was not so helpful in tracking down the input error by the user.
### What you think should happen instead
Ideally, I expect there to be frontend or backend logic that strips off accidental leading or trailing white spaces when adding Connection parameters in Airflow.
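For illustration, a small sketch of how such stripping could look with WTForms filters; the field names and placement are assumptions, not the actual Airflow connection form:
```python
from wtforms import Form, StringField


def _strip(value):
    # Drop accidental leading/trailing whitespace before the value is saved.
    return value.strip() if isinstance(value, str) else value


class ConnectionForm(Form):
    conn_id = StringField("Connection Id", filters=[_strip])
    host = StringField("Host", filters=[_strip])
```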
### How to reproduce
Intentionally add leading or trailing white spaces while adding Connections parameters.
<img width="981" alt="Screenshot 2022-07-22 at 18 49 54" src="https://user-images.githubusercontent.com/9834450/180497315-0898d803-c104-4d93-b464-c0b33a466b4d.png">
### Operating System
Mac OS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25240 | https://github.com/apache/airflow/pull/32292 | 410d0c0f86aaec71e2c0050f5adbc53fb7b441e7 | 394cedb01abd6539f6334a40757bf186325eb1dd | "2022-07-22T18:02:47Z" | python | "2023-07-11T20:04:08Z" |