status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 25,836 | ["airflow/api/client/api_client.py", "airflow/api/client/json_client.py", "airflow/api/client/local_client.py", "airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | Support overriding `replace_microseconds` parameter for `airflow dags trigger` CLI command | ### Description
The `airflow dags trigger` CLI command always defaults to `replace_microseconds=True` because of the default value in the API. It would be very nice to be able to control this flag from the CLI.
### Use case/motivation
We use AWS MWAA. The exposed interface is the Airflow CLI (yes, we could also ask AWS MWAA for a different interface, but I think this is something that was just overlooked for the CLI?), which does not support overriding the `replace_microseconds` parameter when calling the `airflow dags trigger` CLI command.
For the most part, our dag runs for a given dag do not happen at remotely the same time. However, based on user behavior, they are sometimes triggered within the same second (albeit not the same microsecond). The first dag run is successfully triggered, but the second dag run fails because the `replace_microseconds` parameter is wiping out the microseconds that we pass. Thus, DagRun.find_duplicates returns True for the second dag run that we're trying to trigger, and this raises the `DagRunAlreadyExists` exception.
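To make the collision concrete, here is a small standalone sketch (not code from Airflow or from our pipeline) of what wiping microseconds does to two trigger requests that land within the same second:
```python
from datetime import datetime

# Two trigger requests arriving within the same second, with different microseconds.
first = datetime(2022, 8, 19, 12, 0, 0, 123456)
second = datetime(2022, 8, 19, 12, 0, 0, 654321)

assert first != second  # distinct execution dates as passed to the CLI

# Once microseconds are wiped, the two execution dates collide, so the second
# trigger is detected as a duplicate dag run.
assert first.replace(microsecond=0) == second.replace(microsecond=0)
```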
### Related issues
Not quite - they all seem to be around the experimental api and not directly related to the CLI.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25836 | https://github.com/apache/airflow/pull/27640 | c30c0b5714e4ee217735649b9405f0f79af63059 | b6013c0b8f1064c523af2d905c3f32ff1cbec421 | "2022-08-19T17:04:24Z" | python | "2022-11-26T00:07:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,833 | ["airflow/providers/amazon/aws/hooks/s3.py", "tests/providers/amazon/aws/hooks/test_s3.py"] | Airflow Amazon provider S3Hook().download_file() fail when needs encryption arguments (SSECustomerKey etc..) | ### Apache Airflow version
2.3.3
### What happened
Bug when trying to use the S3Hook to download a file from S3 with extra parameters for security like an SSECustomerKey.
The function [download_file](https://github.com/apache/airflow/blob/dd72e67524c99e34ba4c62bfb554e4caf877d5ec/airflow/providers/amazon/aws/hooks/s3.py#L854) fetches the `extra_args` from `self`, where the encryption-related security parameters can be specified as a `dict`.
But [download_file](https://github.com/apache/airflow/blob/dd72e67524c99e34ba4c62bfb554e4caf877d5ec/airflow/providers/amazon/aws/hooks/s3.py#L854) is calling [get_key()](https://github.com/apache/airflow/blob/dd72e67524c99e34ba4c62bfb554e4caf877d5ec/airflow/providers/amazon/aws/hooks/s3.py#L870), which does not use these `extra_args` when calling the [load() method here](https://github.com/apache/airflow/blob/dd72e67524c99e34ba4c62bfb554e4caf877d5ec/airflow/providers/amazon/aws/hooks/s3.py#L472); this results in a `botocore.exceptions.ClientError: An error occurred (400) when calling the HeadObject operation: Bad Request.` error.
This could be fixed like this:
As the [boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Object.load) says, `load` calls [S3.Client.head_object()](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.head_object), which can handle `**kwargs` and accepts all the arguments below:
```
response = client.head_object(
Bucket='string',
IfMatch='string',
IfModifiedSince=datetime(2015, 1, 1),
IfNoneMatch='string',
IfUnmodifiedSince=datetime(2015, 1, 1),
Key='string',
Range='string',
VersionId='string',
SSECustomerAlgorithm='string',
SSECustomerKey='string',
RequestPayer='requester',
PartNumber=123,
ExpectedBucketOwner='string',
ChecksumMode='ENABLED'
)
```
An easy fix would be to pass the `extra_args` to `get_key` and then on to `load(**self.extra_args)`.
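A minimal sketch of that idea (a simplified stand-in for the hook method written against plain boto3, not the provider's actual code):
```python
import boto3

# Sketch only: simplified stand-in for S3Hook.get_key, not the provider source.
def get_key(key, bucket_name, extra_args):
    obj = boto3.resource("s3").Object(bucket_name, key)
    # Forward the encryption arguments (e.g. SSECustomerAlgorithm, SSECustomerKey)
    # so that obj.load() -> S3.Client.head_object() receives them.
    obj.load(**extra_args)
    return obj
```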
### What you think should happen instead
the extra_args should be used in get_key() and therefore obj.load()
### How to reproduce
Try to use the S3Hook as below to download an encrypted file:
```
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
extra_args={
'SSECustomerAlgorithm': 'YOUR_ALGO',
'SSECustomerKey': YOUR_SSE_C_KEY
}
hook = S3Hook(aws_conn_id=YOUR_S3_CONNECTION, extra_args=extra_args)
hook.download_file(
key=key, bucket_name=bucket_name, local_path=local_path
)
```
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25833 | https://github.com/apache/airflow/pull/35037 | 36c5c111ec00075db30fab7c67ac1b6900e144dc | 95980a9bc50c1accd34166ba608bbe2b4ebd6d52 | "2022-08-19T16:25:16Z" | python | "2023-10-25T15:30:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,815 | ["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"] | SQLTableCheckOperator fails for Postgres | ### Apache Airflow version
2.3.3
### What happened
`SQLTableCheckOperator` fails when used with Postgres.
### What you think should happen instead
From the logs:
```
[2022-08-19, 09:28:14 UTC] {taskinstance.py:1910} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 296, in execute
records = hook.get_first(self.sql)
File "/usr/local/lib/python3.9/site-packages/airflow/hooks/dbapi.py", line 178, in get_first
cur.execute(sql)
psycopg2.errors.SyntaxError: subquery in FROM must have an alias
LINE 1: SELECT MIN(row_count_check) FROM (SELECT CASE WHEN COUNT(*) ...
^
HINT: For example, FROM (SELECT ...) [AS] foo.
```
### How to reproduce
```python
import pendulum
from datetime import timedelta
from airflow import DAG
from airflow.decorators import task
from airflow.providers.common.sql.operators.sql import SQLTableCheckOperator
from airflow.providers.postgres.operators.postgres import PostgresOperator
_POSTGRES_CONN = "postgresdb"
_TABLE_NAME = "employees"
default_args = {
"owner": "cs",
"retries": 3,
"retry_delay": timedelta(seconds=15),
}
with DAG(
dag_id="sql_data_quality",
start_date=pendulum.datetime(2022, 8, 1, tz="UTC"),
schedule_interval=None,
) as dag:
create_table = PostgresOperator(
task_id="create_table",
postgres_conn_id=_POSTGRES_CONN,
sql=f"""
CREATE TABLE IF NOT EXISTS {_TABLE_NAME} (
employee_name VARCHAR NOT NULL,
employment_year INT NOT NULL
);
"""
)
populate_data = PostgresOperator(
task_id="populate_data",
postgres_conn_id=_POSTGRES_CONN,
sql=f"""
INSERT INTO {_TABLE_NAME} VALUES ('Adam', 2021);
INSERT INTO {_TABLE_NAME} VALUES ('Chris', 2021);
INSERT INTO {_TABLE_NAME} VALUES ('Frank', 2021);
INSERT INTO {_TABLE_NAME} VALUES ('Fritz', 2021);
INSERT INTO {_TABLE_NAME} VALUES ('Magda', 2022);
INSERT INTO {_TABLE_NAME} VALUES ('Phil', 2021);
""",
)
check_row_count = SQLTableCheckOperator(
task_id="check_row_count",
conn_id=_POSTGRES_CONN,
table=_TABLE_NAME,
checks={
"row_count_check": {"check_statement": "COUNT(*) >= 3"}
},
)
drop_table = PostgresOperator(
task_id="drop_table",
trigger_rule="all_done",
postgres_conn_id=_POSTGRES_CONN,
sql=f"DROP TABLE {_TABLE_NAME};",
)
create_table >> populate_data >> check_row_count >> drop_table
```
### Operating System
macOS
### Versions of Apache Airflow Providers
`apache-airflow-providers-common-sql==1.0.0`
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25815 | https://github.com/apache/airflow/pull/25821 | b535262837994ef3faf3993da8f246cce6cfd3d2 | dd72e67524c99e34ba4c62bfb554e4caf877d5ec | "2022-08-19T09:51:42Z" | python | "2022-08-19T15:08:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,781 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | BigQueryGetDataOperator does not support passing project_id | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.3.0
### Apache Airflow version
2.3.2
### Operating System
MacOS
### Deployment
Other
### Deployment details
_No response_
### What happened
project_id cannot be passed explicitly as an argument when using `BigQueryGetDataOperator`. This operator internally falls back to the `default` project id.
### What you think should happen instead
Should let developers pass project_id when needed
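For illustration, the kind of usage being asked for might look like the sketch below. The `project_id` argument is the requested addition and does not exist in the operator today; the other arguments are standard ones.
```python
from airflow.providers.google.cloud.operators.bigquery import BigQueryGetDataOperator

get_data = BigQueryGetDataOperator(
    task_id="get_data",
    project_id="some-other-project",  # requested / hypothetical parameter
    dataset_id="my_dataset",
    table_id="my_table",
    max_results=10,
    gcp_conn_id="google_cloud_default",
)
```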
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25781 | https://github.com/apache/airflow/pull/25782 | 98a7701942c683f3126f9c4f450c352b510a2734 | fc6dfa338a76d02a426e2b7f0325d37ea5e95ac3 | "2022-08-18T04:40:01Z" | python | "2022-08-20T21:14:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,775 | ["airflow/models/abstractoperator.py", "airflow/models/taskmixin.py", "tests/models/test_baseoperator.py"] | XComs from another task group fail to populate dynamic task mapping metadata | ### Apache Airflow version
2.3.3
### What happened
When a task returns a mappable Xcom within a task group, the dynamic task mapping feature (via `.expand`) causes the Airflow Scheduler to infinitely loop with a runtime error:
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 921, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1163, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 524, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 614, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 600, in _resolve_map_lengths
raise RuntimeError(f"Failed to populate all mapping metadata; missing: {keys}")
RuntimeError: Failed to populate all mapping metadata; missing: 'x'
```
### What you think should happen instead
Xcoms from different task groups should be mappable within other group scopes.
### How to reproduce
```
from airflow import DAG
from airflow.decorators import task
from airflow.utils.task_group import TaskGroup
import pendulum
@task
def enumerate(x):
return [i for i in range(x)]
@task
def addOne(x):
return x+1
with DAG(
dag_id="TaskGroupMappingBug",
schedule_interval=None,
start_date=pendulum.now().subtract(days=1),
) as dag:
with TaskGroup(group_id="enumerateNine"):
y = enumerate(9)
with TaskGroup(group_id="add"):
# airflow scheduler throws error here so this is never reached
z = addOne.expand(x=y)
```
### Operating System
linux/amd64 via Docker (apache/airflow:2.3.3-python3.9)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
docker-compose version 1.29.2, build 5becea4c
Docker Engine v20.10.14
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25775 | https://github.com/apache/airflow/pull/25793 | 6e66dd7776707936345927f8fccee3ddb7f23a2b | 5c48ed19bd3b554f9c3e881a4d9eb61eeba4295b | "2022-08-17T18:42:22Z" | python | "2022-08-19T09:55:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,765 | ["airflow/jobs/scheduler_job.py"] | Deadlock in Scheduler Loop when Updating Dag Run | ### Apache Airflow version
2.3.3
### What happened
We have been getting occasional deadlock errors in our main scheduler loop that cause the scheduler to error out of the loop and terminate. The full stack trace of the error is below:
```
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: [2022-08-13 00:01:17,377] {{scheduler_job.py:768}} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: Traceback (most recent call last):
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1800, in _execute_context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: cursor, statement, parameters, context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 193, in do_executemany
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: rowcount = cursor.executemany(statement, parameters)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 239, in executemany
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.rowcount = sum(self.execute(query, arg) for arg in args)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 239, in <genexpr>
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.rowcount = sum(self.execute(query, arg) for arg in args)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: res = self._query(query)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: db.query(q)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/connections.py", line 259, in query
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: _mysql.connection.query(self, query)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: The above exception was the direct cause of the following exception:
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: Traceback (most recent call last):
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self._run_scheduler_loop()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: num_queued_tis = self._do_scheduling(session)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 924, in _do_scheduling
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: guard.commit()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/airflow/utils/sqlalchemy.py", line 296, in commit
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.session.commit()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self._transaction.commit(_to_root=self.future)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 829, in commit
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self._prepare_impl()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.session.flush()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3383, in flush
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self._flush(objects)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3523, in _flush
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: transaction.rollback(_capture_exception=True)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: with_traceback=exc_tb,
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: raise exception
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3483, in _flush
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: flush_context.execute()
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: rec.execute(self)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 633, in execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: uow,
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 242, in save_obj
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: update,
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1002, in _emit_update_statements
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: statement, multiparams, execution_options=execution_options
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: return meth(self, args_10style, kwargs_10style, execution_options)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 333, in _execute_on_connection
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self, multiparams, params, execution_options
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1508, in _execute_clauseelement
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: cache_hit=cache_hit,
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1863, in _execute_context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: e, statement, parameters, cursor, context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2044, in _handle_dbapi_exception
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: sqlalchemy_exception, with_traceback=exc_info[2], from_=e
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: raise exception
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1800, in _execute_context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: cursor, statement, parameters, context
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 193, in do_executemany
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: rowcount = cursor.executemany(statement, parameters)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 239, in executemany
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.rowcount = sum(self.execute(query, arg) for arg in args)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 239, in <genexpr>
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: self.rowcount = sum(self.execute(query, arg) for arg in args)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: res = self._query(query)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: db.query(q)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: File "/home/ubuntu/.virtualenvs/ycharts/lib/python3.7/site-packages/MySQLdb/connections.py", line 259, in query
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: _mysql.connection.query(self, query)
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: [SQL: UPDATE dag_run SET last_scheduling_decision=%s WHERE dag_run.id = %s]
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: [parameters: ((datetime.datetime(2022, 8, 13, 0, 1, 17, 280720), 9), (datetime.datetime(2022, 8, 13, 0, 1, 17, 213661), 11), (datetime.datetime(2022, 8, 13, 0, 1, 17, 40686), 12))]
Aug 13 00:01:17 ip-10-0-2-218 bash[26063]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
```
It appears the issue occurs when attempting to update the `last_scheduling_decision` field of the `dag_run` table, but we are unsure why this would cause a deadlock. This issue has only been occurring since we upgraded to version 2.3.3; it was not an issue with version 2.2.4.
### What you think should happen instead
The scheduler loop should not have any deadlocks that cause it to exit its main loop and terminate. I would expect the scheduler loop to be running constantly, which is not the case if a deadlock occurs in this loop.
### How to reproduce
This occurs for us when we run a `LocalExecutor` with smart sensors enabled (2 shards). We only have 3 other daily DAGs, which run at different times, and the error seems to occur right when the start time comes for one DAG to start running. After we restart the scheduler following that first deadlock, it seems to run fine for the rest of the day, but the next day, when it comes time to start the DAG again, another deadlock occurs.
### Operating System
Ubuntu 18.04.6 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==4.1.0
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-mysql==3.1.0
apache-airflow-providers-sftp==4.0.0
apache-airflow-providers-sqlite==3.1.0
apache-airflow-providers-ssh==3.1.0
### Deployment
Other
### Deployment details
We deploy airflow to 2 different ec2 instances. The scheduler lives on one ec2 instances and the webserver lives on a separate ec2 instance. We only run a single scheduler.
### Anything else
This issue occurs once a day when the first of our daily DAGs gets triggered. When we restart the scheduler after the deadlock, it works fine for the rest of the day typically.
We use a `LocalExecutor` with a `PARALLELISM` of 32, smart sensors enabled using 2 shards.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25765 | https://github.com/apache/airflow/pull/26347 | f977804255ca123bfea24774328ba0a9ca63688b | 0da49935000476b1d1941b63d0d66d3c58d64fea | "2022-08-17T14:04:17Z" | python | "2022-10-02T03:33:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,743 | ["airflow/config_templates/airflow_local_settings.py"] | DeprecationWarning: Passing filename_template to FileTaskHandler is deprecated and has no effect | ### Apache Airflow version
2.3.3
### What happened
After upgrading to or installing airflow 2.3.3, the remote_logging option in airflow.cfg can't be set to true without creating a deprecation warning.
I'm using remote logging to an s3 bucket.
It doesn't matter which version of **apache-airflow-providers-amazon** I have installed.
When using systemd units to start the airflow components, the webserver will spam the deprecation warning every second.
Tested with Python 3.10 and 3.7.3
### What you think should happen instead
When using remote logging, Airflow should not execute an action every second in the background that appears to be deprecated.
### How to reproduce
You could quickly install a Python virtual environment on a machine of your choice.
After that, install airflow and apache-airflow-providers-amazon via pip.
Then change the logging part in the airflow.cfg:
**[logging]
remote_logging = True**
create a testdag.py containing at least: **from airflow import DAG**
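That is, the entire test file can be as small as:
```python
# testdag.py -- importing the DAG class is enough to surface the warnings
from airflow import DAG  # noqa: F401
```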
run it with Python to see the errors:
python testdag.py
hint: some more DeprecationWarnings will appear because the standard airflow.cfg that gets created when installing airflow is not up to date.
The deprecation warning you should see when setting remote_logging to true is:
`.../lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py:52 DeprecationWarning: Passing filename_template to FileTaskHandler is deprecated and has no effect`
### Operating System
Debian GNU/Linux 10 (buster) and also tested Fedora release 36 (Thirty Six)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 4.0.0
### Deployment
Virtualenv installation
### Deployment details
Running a small setup. 2 Virtual Machines. Airflow installed over pip inside a Python virtual environment.
### Anything else
The problem occurs on every dag run, and it gets logged every second in the journal produced by the webserver systemd unit.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25743 | https://github.com/apache/airflow/pull/25764 | 0267a47e5abd104891e0ec6c741b5bed208eef1e | da616a1421c71c8ec228fefe78a0a90263991423 | "2022-08-16T14:26:29Z" | python | "2022-08-19T14:13:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,718 | ["airflow/providers/google/cloud/hooks/bigquery_dts.py"] | Incorrect config name generated for BigQueryDeleteDataTransferConfigOperator | ### Apache Airflow version
2.3.3
### What happened
When we try to delete a BigQuery transfer config using BigQueryDeleteDataTransferConfigOperator, the config cannot be found because the generated transfer config name is erroneous.
As a result, although a transfer config id (that exists) is passed to the operator, we get an error saying that the transfer config doesn't exist.
### What you think should happen instead
On further analysis, it was revealed that, in the bigquery_dts hook, the project name is incorrectly created as follows on line 171:
`project = f"/{project}/locations/{self.location}"`
That is, there's an extra / prefixed to the project.
Removing the extra / will fix this bug.
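For clarity, a before/after sketch of the suggested one-character change (the variable values here are made up just to show the resulting resource-name shape):
```python
project = "my-project"
location = "europe"  # hypothetical values for illustration

# current (erroneous): the leading slash produces an invalid resource name
wrong = f"/{project}/locations/{location}"   # -> "/my-project/locations/europe"

# suggested fix: drop the leading slash
right = f"{project}/locations/{location}"    # -> "my-project/locations/europe"
```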
### How to reproduce
1. Create a transfer config in the BQ data transfers/or use the operator BigQueryCreateDataTransferOperator (in a project located in Europe).
2. Try to delete the transfer config using the BigQueryDeleteDataTransferConfigOperator by passing the location of the project along with the transfer config id. This step will throw the error.
### Operating System
Windows 11
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25718 | https://github.com/apache/airflow/pull/25719 | c6e9cdb4d013fec330deb79810dbb735d2c01482 | fa0cb363b860b553af2ef9530ea2de706bd16e5d | "2022-08-15T03:02:59Z" | python | "2022-10-02T00:56:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,712 | ["airflow/providers/postgres/provider.yaml", "generated/provider_dependencies.json"] | postgres provider: use non-binary psycopg2 (recommended for production use) | ### Apache Airflow Provider(s)
postgres
### Versions of Apache Airflow Providers
apache-airflow-providers-postgres==5.0.0
### Apache Airflow version
2.3.3
### Operating System
Debian 11 (airflow docker image)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
psycopg2-binary package is installed.
### What you think should happen instead
The psycopg2 (non-binary) package is installed.
According to the [psycopg2 docs](https://www.psycopg.org/docs/install.html#psycopg-vs-psycopg-binary), (emphasis theirs) "**For production use you are advised to use the source distribution.**".
### How to reproduce
Either
```
docker run -it apache/airflow:2.3.3-python3.10
pip freeze |grep -E '(postgres|psycopg2)'
```
Or
```
docker run -it apache/airflow:slim-2.3.3-python3.10
curl -O https://raw.githubusercontent.com/apache/airflow/constraints-2.3.3/constraints-3.10.txt
pip install -c constraints-3.10.txt apache-airflow-providers-postgres
pip freeze |grep -E '(postgres|psycopg2)'
```
Either way, the output is:
```
apache-airflow-providers-postgres==5.0.0
psycopg2-binary==2.9.3
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25712 | https://github.com/apache/airflow/pull/25710 | 28165eef2ac26c66525849e7bebb55553ea5a451 | 14d56a5a9e78580c53cf85db504464daccffe21c | "2022-08-14T10:23:53Z" | python | "2022-08-23T15:08:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,698 | ["airflow/models/mappedoperator.py", "tests/jobs/test_backfill_job.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Backfill mode with mapped tasks: "Failed to populate all mapping metadata" | ### Apache Airflow version
2.3.3
### What happened
I was backfilling some DAGs that use dynamic tasks when I got an exception like the following:
```
Traceback (most recent call last):
File "/opt/conda/envs/production/bin/airflow", line 11, in <module>
sys.exit(main())
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py", line 107, in dag_backfill
dag.run(
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/dag.py", line 2288, in run
job.run()
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/backfill_job.py", line 847, in _execute
self._execute_dagruns(
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/backfill_job.py", line 737, in _execute_dagruns
processed_dag_run_dates = self._process_backfill_task_instances(
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/backfill_job.py", line 612, in _process_backfill_task_instances
for node, run_id, new_mapped_tis, max_map_index in self._manage_executor_state(
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/backfill_job.py", line 270, in _manage_executor_state
new_tis, num_mapped_tis = node.expand_mapped_task(ti.run_id, session=session)
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 614, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 600, in _resolve_map_lengths
raise RuntimeError(f"Failed to populate all mapping metadata; missing: {keys}")
RuntimeError: Failed to populate all mapping metadata; missing: 'x'
```
Digging further, it appears this always happens if the task used as input to an `.expand` raises an Exception. Airflow doesn't handle this exception gracefully like it does with exceptions in "normal" tasks, which can lead to other errors from deeper within Airflow. This also means that since this is not a "typical" failure case, things like `--rerun-failed-tasks` do not work as expected.
### What you think should happen instead
Airflow should fail gracefully if exceptions are raised in dynamic task generators.
### How to reproduce
```
#!/usr/bin/env python3
import datetime
import logging
from airflow.decorators import dag, task
logger = logging.getLogger(__name__)
@dag(
schedule_interval='@daily',
start_date=datetime.datetime(2022, 8, 12),
default_args={
'retries': 5,
'retry_delay': 5.0,
},
)
def test_backfill():
@task
def get_tasks(ti=None):
logger.info(f'{ti.try_number=}')
if ti.try_number < 3:
raise RuntimeError('')
return ['a', 'b', 'c']
@task
def do_stuff(x=None, ti=None):
logger.info(f'do_stuff: {x=}, {ti.try_number=}')
if ti.try_number < 3:
raise RuntimeError('')
do_stuff.expand(x=do_stuff.expand(x=get_tasks()))
do_stuff() >> do_stuff() # this works as expected
dag = test_backfill()
if __name__ == '__main__':
dag.cli()
```
```
airflow dags backfill test_backfill -s 2022-08-05 -e 2022-08-07 --rerun-failed-tasks
```
You can repeat the `backfill` command multiple times to slowly make progress through the DAG. Things will eventually succeed (assuming the exception that triggers this bug stops being raised), but obviously this is a pain when trying to backfill a non-trivial number of DAG Runs.
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
Standalone
### Anything else
I was able to reproduce this both with SQLite + `SequentialExecutor` as well as with Postgres + `LocalExecutor`.
I haven't yet been able to reproduce this outside of `backfill` mode.
Possibly related since they mention the same exception text:
* #23533
* #23642
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25698 | https://github.com/apache/airflow/pull/25757 | d51957165b2836fe0006d318c299c149fb5d35b0 | 728a3ce5c2f5abdd7aa01864a861ca18b1f27c1b | "2022-08-12T18:04:47Z" | python | "2022-08-19T09:45:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,681 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | Scheduler enters crash loop in certain cases with dynamic task mapping | ### Apache Airflow version
2.3.3
### What happened
The scheduler crashed when attempting to queue a dynamically mapped task which is directly downstream and only dependent on another dynamically mapped task.
<details><summary>scheduler.log</summary>
```
scheduler | ____________ _____________
scheduler | ____ |__( )_________ __/__ /________ __
scheduler | ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
scheduler | ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
scheduler | _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
scheduler | [2022-08-11 08:41:10,922] {scheduler_job.py:708} INFO - Starting the scheduler
scheduler | [2022-08-11 08:41:10,923] {scheduler_job.py:713} INFO - Processing each file at most -1 times
scheduler | [2022-08-11 08:41:10,926] {executor_loader.py:105} INFO - Loaded executor: SequentialExecutor
scheduler | [2022-08-11 08:41:10,929] {manager.py:160} INFO - Launched DagFileProcessorManager with pid: 52386
scheduler | [2022-08-11 08:41:10,932] {scheduler_job.py:1233} INFO - Resetting orphaned tasks for active dag runs
scheduler | [2022-08-11 08:41:11 -0600] [52385] [INFO] Starting gunicorn 20.1.0
scheduler | [2022-08-11 08:41:11 -0600] [52385] [INFO] Listening at: http://0.0.0.0:8793 (52385)
scheduler | [2022-08-11 08:41:11 -0600] [52385] [INFO] Using worker: sync
scheduler | [2022-08-11 08:41:11 -0600] [52387] [INFO] Booting worker with pid: 52387
scheduler | [2022-08-11 08:41:11,656] {settings.py:55} INFO - Configured default timezone Timezone('UTC')
scheduler | [2022-08-11 08:41:11,659] {manager.py:406} WARNING - Because we cannot use more than 1 thread (parsing_processes = 2) when using sqlite. So we set parallelism to 1.
scheduler | [2022-08-11 08:41:11 -0600] [52388] [INFO] Booting worker with pid: 52388
scheduler | [2022-08-11 08:41:28,118] {dag.py:2968} INFO - Setting next_dagrun for bug_test to 2022-08-11T14:00:00+00:00, run_after=2022-08-11T15:00:00+00:00
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:353} INFO - 20 tasks up for execution:
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=0 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=1 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=2 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=3 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=4 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=5 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=6 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=7 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=8 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=9 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=10 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=11 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=12 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=13 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=14 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=15 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]>
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 0/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 1/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 2/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 3/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 4/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 5/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 6/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 7/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 8/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 9/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 10/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 11/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,161] {scheduler_job.py:418} INFO - DAG bug_test has 12/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 13/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 14/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 15/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 16/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:425} INFO - Not executing <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]> since the number of tasks running or queued from DAG bug_test is >= to the DAG's max_active_tasks limit of 16
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 16/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:425} INFO - Not executing <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]> since the number of tasks running or queued from DAG bug_test is >= to the DAG's max_active_tasks limit of 16
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 16/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:425} INFO - Not executing <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]> since the number of tasks running or queued from DAG bug_test is >= to the DAG's max_active_tasks limit of 16
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:418} INFO - DAG bug_test has 16/16 running and queued tasks
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:425} INFO - Not executing <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]> since the number of tasks running or queued from DAG bug_test is >= to the DAG's max_active_tasks limit of 16
scheduler | [2022-08-11 08:41:28,162] {scheduler_job.py:504} INFO - Setting the following tasks to queued state:
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=0 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=1 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=2 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=3 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=4 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=5 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=6 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=7 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=8 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=9 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=10 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=11 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=12 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=13 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=14 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=15 [scheduled]>
scheduler | [2022-08-11 08:41:28,164] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=0) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,165] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '0']
scheduler | [2022-08-11 08:41:28,165] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=1) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,165] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '1']
scheduler | [2022-08-11 08:41:28,165] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=2) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,165] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '2']
scheduler | [2022-08-11 08:41:28,165] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=3) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,165] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '3']
scheduler | [2022-08-11 08:41:28,165] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=4) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,166] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '4']
scheduler | [2022-08-11 08:41:28,166] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=5) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,166] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '5']
scheduler | [2022-08-11 08:41:28,166] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=6) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,166] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '6']
scheduler | [2022-08-11 08:41:28,166] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=7) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,166] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '7']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=8) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '8']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=9) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '9']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=10) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '10']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=11) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '11']
scheduler | [2022-08-11 08:41:28,167] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=12) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,167] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '12']
scheduler | [2022-08-11 08:41:28,168] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=13) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,168] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '13']
scheduler | [2022-08-11 08:41:28,168] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=14) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,168] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '14']
scheduler | [2022-08-11 08:41:28,168] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=15) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:28,168] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '15']
scheduler | [2022-08-11 08:41:28,170] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '0']
scheduler | [2022-08-11 08:41:29,131] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:29,227] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=0 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:29,584] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '1']
scheduler | [2022-08-11 08:41:30,492] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:30,593] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=1 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:30,969] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '2']
scheduler | [2022-08-11 08:41:31,852] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:31,940] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=2 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:32,308] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '3']
scheduler | [2022-08-11 08:41:33,199] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:33,289] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=3 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:33,656] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '4']
scheduler | [2022-08-11 08:41:34,535] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:34,631] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=4 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:35,013] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '5']
scheduler | [2022-08-11 08:41:35,928] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:36,024] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=5 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:36,393] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '6']
scheduler | [2022-08-11 08:41:37,296] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:37,384] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=6 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:37,758] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '7']
scheduler | [2022-08-11 08:41:38,642] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:38,732] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=7 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:39,113] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '8']
scheduler | [2022-08-11 08:41:39,993] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:40,086] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=8 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:40,461] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '9']
scheduler | [2022-08-11 08:41:41,383] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:41,473] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=9 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:41,865] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '10']
scheduler | [2022-08-11 08:41:42,761] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:42,858] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=10 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:43,236] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '11']
scheduler | [2022-08-11 08:41:44,124] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:44,222] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=11 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:44,654] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '12']
scheduler | [2022-08-11 08:41:45,545] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:45,635] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=12 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:45,998] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '13']
scheduler | [2022-08-11 08:41:46,867] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:46,955] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=13 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:47,386] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '14']
scheduler | [2022-08-11 08:41:48,270] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:48,362] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=14 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:48,718] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '15']
scheduler | [2022-08-11 08:41:49,569] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:49,669] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=15 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:50,022] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,022] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,022] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,022] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,023] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=0, run_start_date=2022-08-11 14:41:29.255370+00:00, run_end_date=2022-08-11 14:41:29.390095+00:00, run_duration=0.134725, state=success, executor_state=success, try_number=1, max_tries=0, job_id=5, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52421
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=1, run_start_date=2022-08-11 14:41:30.628702+00:00, run_end_date=2022-08-11 14:41:30.768539+00:00, run_duration=0.139837, state=success, executor_state=success, try_number=1, max_tries=0, job_id=6, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52423
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=2, run_start_date=2022-08-11 14:41:31.968933+00:00, run_end_date=2022-08-11 14:41:32.112968+00:00, run_duration=0.144035, state=success, executor_state=success, try_number=1, max_tries=0, job_id=7, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52425
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=3, run_start_date=2022-08-11 14:41:33.318972+00:00, run_end_date=2022-08-11 14:41:33.458203+00:00, run_duration=0.139231, state=success, executor_state=success, try_number=1, max_tries=0, job_id=8, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52429
scheduler | [2022-08-11 08:41:50,036] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=4, run_start_date=2022-08-11 14:41:34.663829+00:00, run_end_date=2022-08-11 14:41:34.811273+00:00, run_duration=0.147444, state=success, executor_state=success, try_number=1, max_tries=0, job_id=9, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52437
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=5, run_start_date=2022-08-11 14:41:36.056658+00:00, run_end_date=2022-08-11 14:41:36.203243+00:00, run_duration=0.146585, state=success, executor_state=success, try_number=1, max_tries=0, job_id=10, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52440
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=6, run_start_date=2022-08-11 14:41:37.412705+00:00, run_end_date=2022-08-11 14:41:37.550794+00:00, run_duration=0.138089, state=success, executor_state=success, try_number=1, max_tries=0, job_id=11, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52442
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=7, run_start_date=2022-08-11 14:41:38.761691+00:00, run_end_date=2022-08-11 14:41:38.897424+00:00, run_duration=0.135733, state=success, executor_state=success, try_number=1, max_tries=0, job_id=12, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52446
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=8, run_start_date=2022-08-11 14:41:40.119057+00:00, run_end_date=2022-08-11 14:41:40.262712+00:00, run_duration=0.143655, state=success, executor_state=success, try_number=1, max_tries=0, job_id=13, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52450
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=9, run_start_date=2022-08-11 14:41:41.502857+00:00, run_end_date=2022-08-11 14:41:41.641680+00:00, run_duration=0.138823, state=success, executor_state=success, try_number=1, max_tries=0, job_id=14, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52452
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=10, run_start_date=2022-08-11 14:41:42.889206+00:00, run_end_date=2022-08-11 14:41:43.030804+00:00, run_duration=0.141598, state=success, executor_state=success, try_number=1, max_tries=0, job_id=15, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52454
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=11, run_start_date=2022-08-11 14:41:44.255197+00:00, run_end_date=2022-08-11 14:41:44.413457+00:00, run_duration=0.15826, state=success, executor_state=success, try_number=1, max_tries=0, job_id=16, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52461
scheduler | [2022-08-11 08:41:50,037] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=12, run_start_date=2022-08-11 14:41:45.665373+00:00, run_end_date=2022-08-11 14:41:45.803094+00:00, run_duration=0.137721, state=success, executor_state=success, try_number=1, max_tries=0, job_id=17, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52463
scheduler | [2022-08-11 08:41:50,038] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=13, run_start_date=2022-08-11 14:41:46.988348+00:00, run_end_date=2022-08-11 14:41:47.159584+00:00, run_duration=0.171236, state=success, executor_state=success, try_number=1, max_tries=0, job_id=18, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52465
scheduler | [2022-08-11 08:41:50,038] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=14, run_start_date=2022-08-11 14:41:48.393004+00:00, run_end_date=2022-08-11 14:41:48.533408+00:00, run_duration=0.140404, state=success, executor_state=success, try_number=1, max_tries=0, job_id=19, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52472
scheduler | [2022-08-11 08:41:50,038] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=15, run_start_date=2022-08-11 14:41:49.699253+00:00, run_end_date=2022-08-11 14:41:49.833084+00:00, run_duration=0.133831, state=success, executor_state=success, try_number=1, max_tries=0, job_id=20, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:28.163024+00:00, queued_by_job_id=4, pid=52476
scheduler | [2022-08-11 08:41:51,632] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=0 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=1 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=2 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=3 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=4 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=5 [success]>'
scheduler | [2022-08-11 08:41:51,633] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=6 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=7 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=8 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=9 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=10 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=11 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=12 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=13 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=14 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=15 [success]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]>'
scheduler | [2022-08-11 08:41:51,634] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]>'
scheduler | [2022-08-11 08:41:51,635] {dagrun.py:912} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]>'
scheduler | [2022-08-11 08:41:51,636] {dagrun.py:937} INFO - Restoring mapped task '<TaskInstance: bug_test.do_something_else scheduled__2022-08-11T13:00:00+00:00 [None]>'
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:353} INFO - 4 tasks up for execution:
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]>
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:418} INFO - DAG bug_test has 0/16 running and queued tasks
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:418} INFO - DAG bug_test has 1/16 running and queued tasks
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:418} INFO - DAG bug_test has 2/16 running and queued tasks
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:418} INFO - DAG bug_test has 3/16 running and queued tasks
scheduler | [2022-08-11 08:41:51,688] {scheduler_job.py:504} INFO - Setting the following tasks to queued state:
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [scheduled]>
scheduler | <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [scheduled]>
scheduler | [2022-08-11 08:41:51,690] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=16) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:51,690] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '16']
scheduler | [2022-08-11 08:41:51,690] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=17) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:51,690] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '17']
scheduler | [2022-08-11 08:41:51,690] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=18) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:51,690] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '18']
scheduler | [2022-08-11 08:41:51,690] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='bug_test', task_id='do_something', run_id='scheduled__2022-08-11T13:00:00+00:00', try_number=1, map_index=19) to executor with priority 2 and queue default
scheduler | [2022-08-11 08:41:51,690] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '19']
scheduler | [2022-08-11 08:41:51,692] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '16']
scheduler | [2022-08-11 08:41:52,532] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:52,620] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=16 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:53,037] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '17']
scheduler | [2022-08-11 08:41:53,907] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:53,996] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=17 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:54,427] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '18']
scheduler | [2022-08-11 08:41:55,305] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:55,397] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=18 [queued]> on host somehost.com
scheduler | [2022-08-11 08:41:55,816] {sequential_executor.py:59} INFO - Executing command: ['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '19']
scheduler | [2022-08-11 08:41:56,726] {dagbag.py:508} INFO - Filling up the DagBag from /path/to/test/dir/bug_test/dags/bug_test.py
scheduler | [2022-08-11 08:41:56,824] {task_command.py:371} INFO - Running <TaskInstance: bug_test.do_something scheduled__2022-08-11T13:00:00+00:00 map_index=19 [queued]> on host somehost.com
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlite3.IntegrityError: UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler |
scheduler | The above exception was the direct cause of the following exception:
scheduler |
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/bin/airflow", line 8, in <module>
scheduler | sys.exit(main())
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
scheduler | args.func(args)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
scheduler | return f(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 377, in task_run
scheduler | _run_task_by_selected_method(args, dag, ti)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 183, in _run_task_by_selected_method
scheduler | _run_task_by_local_task_job(args, ti)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 241, in _run_task_by_local_task_job
scheduler | run_job.run()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
scheduler | self._execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 133, in _execute
scheduler | self.handle_task_exit(return_code)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 171, in handle_task_exit
scheduler | self._run_mini_scheduler_on_child_tasks()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
scheduler | return func(*args, session=session, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 261, in _run_mini_scheduler_on_child_tasks
scheduler | info = dag_run.task_instance_scheduling_decisions(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
scheduler | schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
scheduler | expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 683, in expand_mapped_task
scheduler | session.flush()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
scheduler | self._flush(objects)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
scheduler | transaction.rollback(_capture_exception=True)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
scheduler | compat.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
scheduler | flush_context.execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
scheduler | rec.execute(self)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
scheduler | util.preloaded.orm_persistence.save_obj(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 236, in save_obj
scheduler | _emit_update_statements(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1000, in _emit_update_statements
scheduler | c = connection._execute_20(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20
scheduler | return meth(self, args_10style, kwargs_10style, execution_options)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
scheduler | return connection._execute_clauseelement(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1481, in _execute_clauseelement
scheduler | ret = self._execute_context(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1845, in _execute_context
scheduler | self._handle_dbapi_exception(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2026, in _handle_dbapi_exception
scheduler | util.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler | [SQL: UPDATE task_instance SET map_index=? WHERE task_instance.task_id = ? AND task_instance.dag_id = ? AND task_instance.run_id = ? AND task_instance.map_index = ?]
scheduler | [parameters: (0, 'do_something_else', 'bug_test', 'scheduled__2022-08-11T13:00:00+00:00', -1)]
scheduler | (Background on this error at: https://sqlalche.me/e/14/gkpj)
scheduler | [2022-08-11 08:41:57,311] {sequential_executor.py:66} ERROR - Failed to execute task Command '['airflow', 'tasks', 'run', 'bug_test', 'do_something', 'scheduled__2022-08-11T13:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/bug_test.py', '--map-index', '19']' returned non-zero exit status 1..
scheduler | [2022-08-11 08:41:57,313] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:57,313] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:57,313] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status success for try_number 1
scheduler | [2022-08-11 08:41:57,313] {scheduler_job.py:599} INFO - Executor reports execution of bug_test.do_something run_id=scheduled__2022-08-11T13:00:00+00:00 exited with status failed for try_number 1
scheduler | [2022-08-11 08:41:57,321] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=16, run_start_date=2022-08-11 14:41:52.649415+00:00, run_end_date=2022-08-11 14:41:52.787286+00:00, run_duration=0.137871, state=success, executor_state=success, try_number=1, max_tries=0, job_id=21, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:51.688924+00:00, queued_by_job_id=4, pid=52479
scheduler | [2022-08-11 08:41:57,321] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=17, run_start_date=2022-08-11 14:41:54.027712+00:00, run_end_date=2022-08-11 14:41:54.170371+00:00, run_duration=0.142659, state=success, executor_state=success, try_number=1, max_tries=0, job_id=22, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:51.688924+00:00, queued_by_job_id=4, pid=52484
scheduler | [2022-08-11 08:41:57,321] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=18, run_start_date=2022-08-11 14:41:55.426712+00:00, run_end_date=2022-08-11 14:41:55.566833+00:00, run_duration=0.140121, state=success, executor_state=success, try_number=1, max_tries=0, job_id=23, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:51.688924+00:00, queued_by_job_id=4, pid=52488
scheduler | [2022-08-11 08:41:57,321] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=bug_test, task_id=do_something, run_id=scheduled__2022-08-11T13:00:00+00:00, map_index=19, run_start_date=2022-08-11 14:41:56.859387+00:00, run_end_date=2022-08-11 14:41:57.018604+00:00, run_duration=0.159217, state=success, executor_state=failed, try_number=1, max_tries=0, job_id=24, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-08-11 14:41:51.688924+00:00, queued_by_job_id=4, pid=52490
scheduler | [2022-08-11 08:41:57,403] {scheduler_job.py:768} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlite3.IntegrityError: UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler |
scheduler | The above exception was the direct cause of the following exception:
scheduler |
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
scheduler | self._run_scheduler_loop()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
scheduler | num_queued_tis = self._do_scheduling(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 921, in _do_scheduling
scheduler | callback_to_run = self._schedule_dag_run(dag_run, session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1163, in _schedule_dag_run
scheduler | schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 524, in update_state
scheduler | info = self.task_instance_scheduling_decisions(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
scheduler | schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
scheduler | expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 683, in expand_mapped_task
scheduler | session.flush()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
scheduler | self._flush(objects)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
scheduler | transaction.rollback(_capture_exception=True)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
scheduler | compat.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
scheduler | flush_context.execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
scheduler | rec.execute(self)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
scheduler | util.preloaded.orm_persistence.save_obj(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 236, in save_obj
scheduler | _emit_update_statements(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1000, in _emit_update_statements
scheduler | c = connection._execute_20(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20
scheduler | return meth(self, args_10style, kwargs_10style, execution_options)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
scheduler | return connection._execute_clauseelement(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1481, in _execute_clauseelement
scheduler | ret = self._execute_context(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1845, in _execute_context
scheduler | self._handle_dbapi_exception(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2026, in _handle_dbapi_exception
scheduler | util.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler | [SQL: UPDATE task_instance SET map_index=? WHERE task_instance.task_id = ? AND task_instance.dag_id = ? AND task_instance.run_id = ? AND task_instance.map_index = ?]
scheduler | [parameters: (0, 'do_something_else', 'bug_test', 'scheduled__2022-08-11T13:00:00+00:00', -1)]
scheduler | (Background on this error at: https://sqlalche.me/e/14/gkpj)
scheduler | [2022-08-11 08:41:58,421] {process_utils.py:125} INFO - Sending Signals.SIGTERM to group 52386. PIDs of all processes in the group: [52386]
scheduler | [2022-08-11 08:41:58,421] {process_utils.py:80} INFO - Sending the signal Signals.SIGTERM to group 52386
scheduler | [2022-08-11 08:41:58,609] {process_utils.py:75} INFO - Process psutil.Process(pid=52386, status='terminated', exitcode=0, started='08:41:10') (52386) terminated with exit code 0
scheduler | [2022-08-11 08:41:58,609] {scheduler_job.py:780} INFO - Exited execute loop
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | [2022-08-11 08:41:58 -0600] [52385] [INFO] Handling signal: term
scheduler | cursor.execute(statement, parameters)
scheduler | sqlite3.IntegrityError: UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler |
scheduler | The above exception was the direct cause of the following exception:
scheduler |
scheduler | Traceback (most recent call last):
scheduler | File "/path/to/test/dir/bug_test/.env/bin/airflow", line 8, in <module>
scheduler | sys.exit(main())
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
scheduler | args.func(args)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
scheduler | return f(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
scheduler | _run_scheduler_job(args=args)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
scheduler | job.run()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
scheduler | self._execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
scheduler | self._run_scheduler_loop()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
scheduler | num_queued_tis = self._do_scheduling(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 921, in _do_scheduling
scheduler | [2022-08-11 08:41:58 -0600] [52387] [INFO] Worker exiting (pid: 52387)
scheduler | [2022-08-11 08:41:58 -0600] [52388] [INFO] Worker exiting (pid: 52388)
scheduler | callback_to_run = self._schedule_dag_run(dag_run, session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1163, in _schedule_dag_run
scheduler | schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 524, in update_state
scheduler | info = self.task_instance_scheduling_decisions(session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
scheduler | schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
scheduler | expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 683, in expand_mapped_task
scheduler | session.flush()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
scheduler | self._flush(objects)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
scheduler | transaction.rollback(_capture_exception=True)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
scheduler | compat.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
scheduler | flush_context.execute()
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
scheduler | rec.execute(self)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
scheduler | util.preloaded.orm_persistence.save_obj(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 236, in save_obj
scheduler | _emit_update_statements(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1000, in _emit_update_statements
scheduler | c = connection._execute_20(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20
scheduler | return meth(self, args_10style, kwargs_10style, execution_options)
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
scheduler | return connection._execute_clauseelement(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1481, in _execute_clauseelement
scheduler | ret = self._execute_context(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1845, in _execute_context
scheduler | self._handle_dbapi_exception(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2026, in _handle_dbapi_exception
scheduler | util.raise_(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
scheduler | raise exception
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
scheduler | self.dialect.do_execute(
scheduler | File "/path/to/test/dir/bug_test/.env/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
scheduler | cursor.execute(statement, parameters)
scheduler | sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: task_instance.dag_id, task_instance.task_id, task_instance.run_id, task_instance.map_index
scheduler | [SQL: UPDATE task_instance SET map_index=? WHERE task_instance.task_id = ? AND task_instance.dag_id = ? AND task_instance.run_id = ? AND task_instance.map_index = ?]
scheduler | [parameters: (0, 'do_something_else', 'bug_test', 'scheduled__2022-08-11T13:00:00+00:00', -1)]
scheduler | (Background on this error at: https://sqlalche.me/e/14/gkpj)
scheduler | [2022-08-11 08:41:58 -0600] [52385] [INFO] Shutting down: Master
```
</details>
### What you think should happen instead
The scheduler should not crash, and the dynamically mapped tasks should execute normally.
### How to reproduce
### Setup
- one DAG with two tasks, one directly downstream of the other
- the DAG has a schedule (e.g. @hourly)
- both tasks use task expansion
- the second task uses the output of the first task as its expansion parameter
- the scheduler's pool size is smaller than the number of map indices in each task
### Steps to reproduce
1. enable the DAG and let it run
### Operating System
MacOS and Dockerized Linux on MacOS
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
I have tested and confirmed this bug is present in three separate deployments:
1. `airflow standalone`
2. DaskExecutor using docker compose
3. KubernetesExecutor using Docker Desktop's builtin Kubernetes cluster
All three of these deployments were executed locally on a Macbook Pro.
### 1. `airflow standalone`
I created a new Python 3.9 virtual environment, installed Airflow 2.3.3, configured a few environment variables, and executed `airflow standalone`. Here is a bash script that completes all of these tasks:
<details><summary>airflow_standalone.sh</summary>
```bash
#!/bin/bash
# ensure working dir is correct
DIR=$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)
cd $DIR
set -x
# set version parameters
AIRFLOW_VERSION="2.3.3"
PYTHON_VERSION="3.9"
# configure Python environment
if [ ! -d "$DIR/.env" ]
then
python3 -m venv "$DIR/.env"
fi
source "$DIR/.env/bin/activate"
pip install --upgrade pip
# install Airflow
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
# configure Airflow
export AIRFLOW_HOME="$DIR/.airflow"
export AIRFLOW__CORE__DAGS_FOLDER="$DIR/dags"
export AIRFLOW__CORE__LOAD_EXAMPLES="False"
export AIRFLOW__DATABASE__LOAD_DEFAULT_CONNECTIONS="False"
# start Airflow
exec "$DIR/.env/bin/airflow" standalone
```
</details>
Here is the DAG code that can be placed in a `dags` directory in the same location as the above script. Note that this
DAG code triggers the bug in all environments I tested.
<details><summary>bug_test.py</summary>
```python
import pendulum
from airflow.decorators import dag, task
@dag(
'bug_test',
schedule_interval='@hourly',
start_date=pendulum.now().add(hours=-2)
)
def test_scheduler_bug():
@task
def do_something(i):
return i + 10
@task
def do_something_else(i):
import logging
log = logging.getLogger('airflow.task')
log.info("I'll never run")
nums = do_something.expand(i=[i+1 for i in range(20)])
do_something_else.expand(i=nums)
TEST_DAG = test_scheduler_bug()
```
</details>
Once set up, simply activating the DAG will demonstrate the bug.
### 2. DaskExecutor on docker compose with Postgres 12
I cannot provide a full replication of this setup as it is rather involved. The Docker image starts from `python:3.9-slim` and then installs Airflow with the appropriate constraints. It has a lot of additional packages installed, both system and Python. It also has a custom entrypoint that can run the Dask scheduler in addition to regular Airflow commands. Here are the applicable Airflow configuration values:
<details><summary>airflow.cfg</summary>
```conf
[core]
donot_pickle = False
executor = DaskExecutor
load_examples = False
max_active_tasks_per_dag = 16
parallelism = 4
[scheduler]
dag_dir_list_interval = 0
catchup_by_default = False
parsing_processes = 3
scheduler_health_check_threshold = 90
```
</details>
Here is a docker-compose file that is nearly identical to the one I use (I just removed unrelated bits):
<details><summary>docker-compose.yml</summary>
```yml
version: '3.7'
services:
metastore:
image: postgres:12-alpine
ports:
- 5432:5432
container_name: airflow-metastore
volumes:
- ${AIRFLOW_HOME_DIR}/pgdata:/var/lib/postgresql/data
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: ${AIRFLOW_DB_PASSWORD}
PGDATA: /var/lib/postgresql/data/pgdata
airflow-webserver:
image: 'my_custom_image:tag'
ports:
- '8080:8080'
depends_on:
- metastore
container_name: airflow-webserver
environment:
AIRFLOW_HOME: /opt/airflow
AIRFLOW__WEBSERVER__SECRET_KEY: ${AIRFLOW_SECRET_KEY}
AIRFLOW__CORE__FERNET_KEY: ${FERNET_KEY}
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:${AIRFLOW_DB_PASSWORD}@metastore:5432/${AIRFLOW_DB_DATABASE}
env_file: container_vars.env
command:
- webserver
- --daemon
- --access-logfile
- /var/log/airflow/webserver-access.log
- --error-logfile
- /var/log/airflow/webserver-errors.log
- --log-file
- /var/log/airflow/webserver.log
volumes:
- ${AIRFLOW_HOME_DIR}/logs:/var/log/airflow
airflow-scheduler:
image: 'my_custom_image:tag'
depends_on:
- metastore
- dask-scheduler
container_name: airflow-scheduler
environment:
AIRFLOW_HOME: /opt/airflow
AIRFLOW__CORE__FERNET_KEY: ${FERNET_KEY}
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:${AIRFLOW_DB_PASSWORD}@metastore:5432/${AIRFLOW_DB_DATABASE}
SCHEDULER_RESTART_INTERVAL: ${SCHEDULER_RESTART_INTERVAL}
env_file: container_vars.env
restart: unless-stopped
command:
- scheduler
- --daemon
- --log-file
- /var/log/airflow/scheduler.log
volumes:
- ${AIRFLOW_HOME_DIR}/logs:/var/log/airflow
dask-scheduler:
image: 'my_custom_image:tag'
ports:
- 8787:8787
container_name: airflow-dask-scheduler
command:
- dask-scheduler
dask-worker:
image: 'my_custom_image:tag'
depends_on:
- dask-scheduler
- metastore
container_name: airflow-worker
environment:
AIRFLOW_HOME: /opt/airflow
AIRFLOW__CORE__FERNET_KEY: ${FERNET_KEY}
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:${AIRFLOW_DB_PASSWORD}@metastore:5432/${AIRFLOW_DB_DATABASE}
env_file: container_vars.env
command:
- dask-worker
- dask-scheduler:8786
- --nprocs
- '8'
- --nthreads
- '1'
volumes:
- ${AIRFLOW_HOME_DIR}/logs:/var/log/airflow
```
</details>
I also had to manually change the default pool size to 15 in the UI in order to trigger the bug. With the default pool set to 128 the bug did not trigger.
### 3. KubernetesExecutor on Docker Desktop builtin Kubernetes cluster with Postgres 11
This uses the official [Airflow Helm Chart](https://airflow.apache.org/docs/helm-chart/stable/index.html) with the following values overrides:
<details><summary>values.yaml</summary>
```yml
defaultAirflowRepository: my_custom_image
defaultAirflowTag: "my_image_tag"
airflowVersion: "2.3.3"
executor: "KubernetesExecutor"
webserverSecretKeySecretName: airflow-webserver-secret-key
fernetKeySecretName: airflow-fernet-key
config:
webserver:
expose_config: 'True'
base_url: http://localhost:8080
scheduler:
catchup_by_default: 'False'
api:
auth_backends: airflow.api.auth.backend.default
triggerer:
enabled: false
statsd:
enabled: false
redis:
enabled: false
cleanup:
enabled: false
logs:
persistence:
enabled: true
workers:
extraVolumes:
- name: airflow-dags
hostPath:
path: /local/path/to/dags
type: Directory
extraVolumeMounts:
- name: airflow-dags
mountPath: /opt/airflow/dags
readOnly: true
scheduler:
extraVolumes:
- name: airflow-dags
hostPath:
path: /local/path/to/dags
type: Directory
extraVolumeMounts:
- name: airflow-dags
mountPath: /opt/airflow/dags
readOnly: true
```
</details>
The docker image is the official `airflow:2.3.3-python3.9` image with a single environment variable modified:
```conf
PYTHONPATH="/opt/airflow/dags/repo/dags:${PYTHONPATH}"
```
### Anything else
This is my understanding of the timeline that produces the crash:
1. The scheduler queues some of the subtasks in the first task
1. Some subtasks run and yield their XCom results
1. The scheduler runs, queueing the remainder of the subtasks for the first task and creates some subtasks in the second task using the XCom results produced thus far
1. The remainder of the subtasks from the first task complete
1. The scheduler attempts to recreate all of the subtasks of the second task, including the ones already created, and a unique constraint in the database is violated and the scheduler crashes
1. When the scheduler restarts, it attempts the previous step again and crashes again, thus entering a crash loop
It seems that if some but not all subtasks for the second task have been created when the scheduler attempts to queue
the mapped task, then the scheduler tries to create all of the subtasks again which causes a unique constraint violation.
**NOTES**
- IF the scheduler can queue as many or more tasks as there are map indices for the task, then this won't happen. The provided test case succeeded on the DaskExecutor deployment when the default pool was 128; however, when I reduced that pool to 15, this bug occurred.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25681 | https://github.com/apache/airflow/pull/25788 | 29c33165a06b7a6233af3657ace4f2bdb4ec27e4 | db818ae6665b37cd032aa6d2b0f97232462d41e1 | "2022-08-11T15:27:11Z" | python | "2022-08-25T19:11:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,671 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | `airflow dags test` command with run confs | ### Description
Currently, the command [`airflow dags test`](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#test) doesn't accept any configuration to set run confs. We can do that with the [`airflow dags trigger`](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#trigger) command through the `--conf` argument.
The command `airflow dags test` is really useful when testing DAGs on local machines or in CI/CD environments. Can we have that feature for the `airflow dags test` command as well?
### Use case/motivation
We could pass run confs the same way the `airflow dags trigger` command does.
Example:
```
$ airflow dags test <DAG_ID> <EXECUTION_DATE> --conf '{"path": "some_path"}'
```
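For context, here is a minimal sketch of how the `dags test` CLI handler could accept and forward such a value. The handler body below is an illustration only (not the actual CLI code); it assumes `DAG.run()` keeps its existing `conf` keyword, which is forwarded to the underlying job:
```python
import json

from airflow.executors.debug_executor import DebugExecutor
from airflow.utils.cli import get_dag


def dag_test(args):
    """Hypothetical `dags test` handler extended with a --conf JSON string."""
    run_conf = json.loads(args.conf) if args.conf else None  # fail fast on malformed JSON
    dag = get_dag(args.subdir, args.dag_id)
    # DAG.run() forwards `conf` to the job it starts, so the value becomes
    # available as dag_run.conf / {{ dag_run.conf }} inside the tasks.
    dag.run(
        executor=DebugExecutor(),
        start_date=args.execution_date,
        end_date=args.execution_date,
        conf=run_conf,
    )
```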
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25671 | https://github.com/apache/airflow/pull/25900 | bcdc25dd3fbda568b5ff2c04701623d6bf11a61f | bcc2fe26f6e0b7204bdf73f57d25b4e6c7a69548 | "2022-08-11T13:00:03Z" | python | "2022-08-29T08:51:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,669 | ["airflow/providers/atlassian/jira/CHANGELOG.rst", "airflow/providers/atlassian/jira/hooks/jira.py", "airflow/providers/atlassian/jira/operators/jira.py", "airflow/providers/atlassian/jira/provider.yaml", "airflow/providers/atlassian/jira/sensors/jira.py", "generated/provider_dependencies.json", "tests/providers/atlassian/jira/hooks/test_jira.py", "tests/providers/atlassian/jira/operators/test_jira.py", "tests/providers/atlassian/jira/sensors/test_jira.py"] | change Jira sdk to official atlassian sdk | ### Description
Jira is a product of Atlassian: https://www.atlassian.com/
There are two Python clients:
https://github.com/pycontribs/jira/issues
and
https://github.com/atlassian-api/atlassian-python-api
### Use case/motivation
The motivation is that Airflow currently uses the unofficial SDK, which is limited to Jira only, so operators for the other Atlassian products cannot be added.
https://github.com/atlassian-api/atlassian-python-api is the official one and also contains integrations with the other Atlassian products
https://github.com/atlassian-api/atlassian-python-api/issues/1027
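For illustration, connecting with the official SDK looks roughly like this (a sketch only; any hook or connection wiring around it is assumed, not existing provider code):
```python
from atlassian import Jira  # atlassian-python-api

# A hook built on the official SDK could create its client along these lines.
client = Jira(
    url="https://example.atlassian.net",
    username="user@example.com",
    password="api-token",  # API token or password, depending on the deployment
)
issue = client.issue("PROJ-123")  # sibling classes cover Confluence, Bitbucket, etc.
```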
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25669 | https://github.com/apache/airflow/pull/27633 | b5338b5825859355b017bed3586d5a42208f1391 | f3c68d7e153b8d417edf4cc4a68d18dbc0f30e64 | "2022-08-11T12:08:46Z" | python | "2022-12-07T12:48:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,668 | ["airflow/providers/cncf/kubernetes/hooks/kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py"] | SparkKubernetesOperator application file attribute "name" is not mandatory | ### Apache Airflow version
2.3.3
### What happened
Since commit https://github.com/apache/airflow/commit/3c5bc73579080248b0583d74152f57548aef53a2 the SparkKubernetesOperator application file is expected to have a metadata:name attribute, and the operator execution fails with the exception `KeyError: 'name'` if it does not exist. Please find an example error stack below:
```
[2022-07-27, 12:58:07 UTC] {taskinstance.py:1909} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", line 69, in execute
response = hook.create_custom_object(
File "/opt/bitnami/airflow/venv/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/hooks/kubernetes.py", line 316, in create_custom_object
name=body_dict["metadata"]["name"],
KeyError: 'name'
```
### What you think should happen instead
The operator should start successfully, ignoring the absence of the field.
The attribute metadata:name is NOT mandatory, and metadata:generateName can be used instead - see https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#objectmeta-v1-meta, particularly the following:
```
GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided
```
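One possible adjustment in the hook (a sketch only, not necessarily the fix adopted upstream) is to take the resource name from the Kubernetes API response instead of from the submitted body, since the API server fills in `metadata.name` even when only `generateName` was supplied:
```python
from kubernetes import client


def create_custom_object(api: client.CustomObjectsApi, group, version, namespace, plural, body_dict):
    response = api.create_namespaced_custom_object(
        group=group, version=version, namespace=namespace, plural=plural, body=body_dict
    )
    # Read the final name from the response rather than body_dict["metadata"]["name"],
    # which avoids the KeyError when only generateName is provided.
    return response["metadata"]["name"]
```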
### How to reproduce
Start a DAG with SparkKubernetesOperator with an application file like this in the beginning:
```
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
generateName: spark_app_name
[...]
```
### Operating System
linux
### Versions of Apache Airflow Providers
apache-airflow==2.3.3
apache-airflow-providers-cncf-kubernetes==4.2.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
Every time
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25668 | https://github.com/apache/airflow/pull/25787 | 4dc9b1c592497686dada05e45147b1364ec338ea | 2d2f0daad66416d565e874e35b6a487a21e5f7b1 | "2022-08-11T11:43:00Z" | python | "2022-11-08T12:58:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,653 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | Deferrable Operators get stuck as "scheduled" during backfill | ### Apache Airflow version
2.3.3
### What happened
If you try to backfill a DAG that uses any [deferrable operators](https://airflow.apache.org/docs/apache-airflow/stable/concepts/deferring.html), those tasks will get indefinitely stuck in a "scheduled" state.
If I watch the Grid View, I can see the task state change: "scheduled" (or sometimes "queued") -> "deferred" -> "scheduled". I've tried leaving it in this state for over an hour, but there are no further state changes.
When the task is stuck like this, the log appears as empty in the web UI. The corresponding log file *does* exist on the worker, but it does not contain any errors or warnings that might point to the source of the problem.
Ctrl-C-ing the backfill at this point seems to hang on "Shutting down LocalExecutor; waiting for running tasks to finish." **Force-killing and restarting the backfill will "unstick" the stuck tasks.** However, any deferrable operators downstream of the first will get back into that stuck state, requiring multiple restarts to get everything to complete successfully.
### What you think should happen instead
Deferrable operators should work as normal when backfilling.
### How to reproduce
```
#!/usr/bin/env python3
import datetime
import logging
import pendulum
from airflow.decorators import dag, task
from airflow.sensors.time_sensor import TimeSensorAsync
logger = logging.getLogger(__name__)
@dag(
schedule_interval='@daily',
start_date=datetime.datetime(2022, 8, 10),
)
def test_backfill():
time_sensor = TimeSensorAsync(
task_id='time_sensor',
target_time=datetime.time(0).replace(tzinfo=pendulum.UTC), # midnight - should succeed immediately when the trigger first runs
)
@task
def some_task():
logger.info('hello')
time_sensor >> some_task()
dag = test_backfill()
if __name__ == '__main__':
dag.cli()
```
`airflow dags backfill test_backfill -s 2022-08-01 -e 2022-08-04`
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
Self-hosted/standalone
### Anything else
I was able to reproduce this with the following configurations:
* `standalone` mode + SQLite backend + `SequentialExecutor`
* `standalone` mode + Postgres backend + `LocalExecutor`
* Production deployment (self-hosted) + Postgres backend + `CeleryExecutor`
I have not yet found anything telling in any of the backend logs.
Possibly related:
* #23693
* #23145
* #13542
- A modified version of the workaround mentioned in [this comment](https://github.com/apache/airflow/issues/13542#issuecomment-1011598836) works to unstick the first stuck task. However if you run it multiple times to try to unstick any downstream tasks, it causes the backfill command to crash.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25653 | https://github.com/apache/airflow/pull/26205 | f01eed6490acd3bb3a58824e7388c4c3cd50ae29 | 3396d1f822caac7cbeb14e1e67679b8378a84a6c | "2022-08-10T19:19:21Z" | python | "2022-09-23T05:08:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,641 | ["airflow/www/templates/airflow/dag_audit_log.html", "airflow/www/views.py"] | Improve audit log | ### Discussed in https://github.com/apache/airflow/discussions/25638
See the discussion. There are a couple of improvements that can be done:
* add an attribute to download the log rather than open it in-browser (see the sketch after this list)
* add .log or similar (.txt?) extension
* sort the output
* possibly more
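For the download point, one possible shape of the view response (a sketch; the function and variable names are illustrative, not the actual Airflow view code):
```python
from flask import Response


def audit_log_download(dag_id: str, log_text: str) -> Response:
    # Serve the audit log as an attachment with a .log extension instead of
    # rendering it inline, so huge logs do not lock up the browser tab.
    return Response(
        log_text,
        mimetype="text/plain",
        headers={"Content-Disposition": f"attachment; filename={dag_id}_audit.log"},
    )
```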
<div type='discussions-op-text'>
<sup>Originally posted by **V0lantis** August 10, 2022</sup>
### Apache Airflow version
2.3.3
### What happened
The audit log link crashes because there is too much data displayed.
### What you think should happen instead
The window shouldn't crash.
### How to reproduce
Displaying a DAG audit log with thousands or millions of lines should do the trick.
### Operating System
```
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com/"
```
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==4.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.1.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-docker==3.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-github==2.0.0
apache-airflow-providers-google==8.1.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-jira==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-postgres==5.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-slack==5.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
apache-airflow-providers-tableau==3.0.0
apache-airflow-providers-zendesk==4.0.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
k8s
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/25641 | https://github.com/apache/airflow/pull/25856 | 634b9c03330c8609949f070457e7b99a6e029f26 | 50016564fa6ab6c6b02bdb0c70fccdf9b75c2f10 | "2022-08-10T13:42:53Z" | python | "2022-08-23T00:31:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,627 | ["airflow/jobs/scheduler_job.py"] | MySQL Not Using Correct Index for Scheduler Critical Section Query | ### Apache Airflow version
Other Airflow 2 version
### What happened
Airflow Version: 2.2.5
MySQL Version: 8.0.18
In the Scheduler, we are coming across instances where MySQL is inefficiently optimizing the [critical section task queuing query](https://github.com/apache/airflow/blob/2.2.5/airflow/jobs/scheduler_job.py#L294-L303). When a large number of task instances are scheduled, MySQL fails to use the `ti_state` index to filter the `task_instance` table, resulting in a full table scan (about 7.3 million rows).
Normally, when running the critical section query the index on `task_instance.state` is used to filter scheduled `task_instances`.
```bash
| -> Limit: 512 row(s) (actual time=5.290..5.413 rows=205 loops=1)
-> Sort row IDs: <temporary>.tmp_field_0, <temporary>.execution_date, limit input to 512 row(s) per chunk (actual time=5.289..5.391 rows=205 loops=1)
-> Table scan on <temporary> (actual time=0.003..0.113 rows=205 loops=1)
-> Temporary table (actual time=5.107..5.236 rows=205 loops=1)
-> Nested loop inner join (cost=20251.99 rows=1741) (actual time=0.100..4.242 rows=205 loops=1)
-> Nested loop inner join (cost=161.70 rows=12) (actual time=0.071..2.436 rows=205 loops=1)
-> Index lookup on task_instance using ti_state (state='scheduled') (cost=80.85 rows=231) (actual time=0.051..1.992 rows=222 loops=1)
-> Filter: ((dag_run.run_type <> 'backfill') and (dag_run.state = 'running')) (cost=0.25 rows=0) (actual time=0.002..0.002 rows=1 loops=222)
-> Single-row index lookup on dag_run using dag_run_dag_id_run_id_key (dag_id=task_instance.dag_id, run_id=task_instance.run_id) (cost=0.25 rows=1) (actual time=0.001..0.001 rows=1 loops=222)
-> Filter: ((dag.is_paused = 0) and (task_instance.dag_id = dag.dag_id)) (cost=233.52 rows=151) (actual time=0.008..0.008 rows=1 loops=205)
-> Index range scan on dag (re-planned for each iteration) (cost=233.52 rows=15072) (actual time=0.008..0.008 rows=1 loops=205)
1 row in set, 1 warning (0.03 sec)
```
When a large number of task_instances are in scheduled state at the same time, the index on `task_instance.state` is not being used to filter scheduled `task_instances`.
```bash
| -> Limit: 512 row(s) (actual time=12110.251..12110.573 rows=512 loops=1)
-> Sort row IDs: <temporary>.tmp_field_0, <temporary>.execution_date, limit input to 512 row(s) per chunk (actual time=12110.250..12110.526 rows=512 loops=1)
-> Table scan on <temporary> (actual time=0.005..0.800 rows=1176 loops=1)
-> Temporary table (actual time=12109.022..12109.940 rows=1176 loops=1)
-> Nested loop inner join (cost=10807.83 rows=3) (actual time=1.328..12097.528 rows=1176 loops=1)
-> Nested loop inner join (cost=10785.34 rows=64) (actual time=1.293..12084.371 rows=1193 loops=1)
-> Filter: (dag.is_paused = 0) (cost=1371.40 rows=1285) (actual time=0.087..22.409 rows=13264 loops=1)
-> Table scan on dag (cost=1371.40 rows=12854) (actual time=0.085..15.796 rows=13508 loops=1)
-> Filter: ((task_instance.state = 'scheduled') and (task_instance.dag_id = dag.dag_id)) (cost=0.32 rows=0) (actual time=0.907..0.909 rows=0 loops=13264)
-> Index lookup on task_instance using PRIMARY (dag_id=dag.dag_id) (cost=0.32 rows=70) (actual time=0.009..0.845 rows=553 loops=13264)
-> Filter: ((dag_run.run_type <> 'backfill') and (dag_run.state = 'running')) (cost=0.25 rows=0) (actual time=0.010..0.011 rows=1 loops=1193)
-> Single-row index lookup on dag_run using dag_run_dag_id_run_id_key (dag_id=task_instance.dag_id, run_id=task_instance.run_id) (cost=0.25 rows=1) (actual time=0.009..0.010 rows=1 loops=1193)
1 row in set, 1 warning (12.14 sec)
```
### What you think should happen instead
To resolve this, I applied a patch to the `scheduler_job.py` file, adding a MySQL index hint to use the `ti_state` index.
```diff
--- /usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py
+++ /usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py
@@ -293,6 +293,7 @@ class SchedulerJob(BaseJob):
# and the dag is not paused
query = (
session.query(TI)
+ .with_hint(TI, 'USE INDEX (ti_state)', dialect_name='mysql')
.join(TI.dag_run)
.filter(DR.run_type != DagRunType.BACKFILL_JOB, DR.state == DagRunState.RUNNING)
.join(TI.dag_model)
```
I think it makes sense to add this index hint upstream.
### How to reproduce
Schedule a large number of dag runs and tasks in a short period of time.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
Airflow 2.2.5 on Kubernetes
MySQL Version: 8.0.18
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25627 | https://github.com/apache/airflow/pull/25673 | 4d9aa3ae48bae124793b1a8ee394150eba0eee9b | 134b5551db67f17b4268dce552e87a154aa1e794 | "2022-08-09T19:50:29Z" | python | "2022-08-12T11:28:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,588 | ["airflow/models/mappedoperator.py", "tests/models/test_mappedoperator.py"] | Mapped KubernetesPodOperater not rendering nested templates | ### Apache Airflow version
2.3.3
### What happened
Nested values, such as `env_vars` for the `KubernetesPodOperator`, are not being rendered when the operator is dynamically mapped.
Assuming the following:
```python
from kubernetes.client import models as k8s

from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

op = KubernetesPodOperator.partial(
    env_vars=[k8s.V1EnvVar(name='AWS_ACCESS_KEY_ID', value='{{ var.value.aws_access_key_id }}')],
    # Other arguments
).expand(arguments=[[1], [2]])
```
The *Rendered Template* results for `env_vars` should be:
```
("[{'name': 'AWS_ACCESS_KEY_ID', 'value': 'some-super-secret-value', 'value_from': None}]")
```
Instead the actual *Rendered Template* results for `env_vars` are un-rendered:
```
("[{'name': 'AWS_ACCESS_KEY_ID', 'value': '{{ var.value.aws_access_key_id }}', 'value_from': None}]")
```
This is probably caused by the fact that `MappedOperator` is not calling [`KubernetesPodOperator._render_nested_template_fields`](https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L286).
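For context, "nested" rendering means that objects placed in template fields (such as `V1EnvVar`) get their own string attributes passed through Jinja. A heavily simplified, standalone illustration of that idea (this is not Airflow's implementation):
```python
import jinja2


def render_nested(obj, context, attrs=("name", "value")):
    """Render Jinja templates found in string attributes of a nested object."""
    env = jinja2.Environment()
    for attr in attrs:
        current = getattr(obj, attr, None)
        if isinstance(current, str):
            setattr(obj, attr, env.from_string(current).render(**context))
    return obj
```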
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25588 | https://github.com/apache/airflow/pull/25599 | 762588dcf4a05c47aa253b864bda00726a5569dc | ed39703cd4f619104430b91d7ba67f261e5bfddb | "2022-08-08T06:17:20Z" | python | "2022-08-15T12:02:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,580 | [".github/workflows/ci.yml", "BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_testing.svg", "images/breeze/output_testing_helm-tests.svg", "images/breeze/output_testing_tests.svg", "scripts/in_container/check_environment.sh"] | Convert running Helm unit tests to use the new breeze | The unit tests of Helm (using `helm template`) still use bash scripts, not the new breeze - we should switch them. | https://github.com/apache/airflow/issues/25580 | https://github.com/apache/airflow/pull/25581 | 0d34355ffa3f9f2ecf666d4518d36c4366a4c701 | a562cc396212e4d21484088ac5f363ade9ac2b8d | "2022-08-07T13:24:26Z" | python | "2022-08-08T06:56:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,555 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Airflow doesn't re-use a secrets backend instance when loading configuration values | ### Apache Airflow version
main (development)
### What happened
When Airflow loads its configuration, it creates a new secrets backend instance for each configuration value it fetches from secrets, and then additionally creates a global secrets backend instance that is used in `ensure_secrets_loaded` by code outside of the configuration module. This can cause issues with the Vault backend (and possibly others) since logging in to Vault can be an expensive operation server-side, and each instance of the Vault secrets backend needs to log in again before it can use its internal client.
### What you think should happen instead
Ideally, Airflow would create a single secrets backend instance and re-use it. This could possibly be patched in the Vault secrets backend, but I think updating the `configuration` module to cache the secrets backend would be preferable since it would then apply to any secrets backend.
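A minimal sketch of the caching idea in the configuration module (the factory name and wiring below are assumptions for illustration):
```python
import functools
import json

from airflow.utils.module_loading import import_string


@functools.lru_cache(maxsize=2)
def _cached_custom_secret_backend(backend_path: str, backend_kwargs: str):
    """Build the secrets backend once and reuse it for every *_secret lookup."""
    backend_cls = import_string(backend_path)
    return backend_cls(**json.loads(backend_kwargs or "{}"))
```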
### How to reproduce
Use the hashicorp vault secrets backend and store some configuration in `X_secret` values. See that it logs in more than you'd expect.
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
```
apache-airflow==2.3.0
apache-airflow-providers-hashicorp==2.2.0
hvac==0.11.2
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25555 | https://github.com/apache/airflow/pull/25556 | 33fbe75dd5100539c697d705552b088e568d52e4 | 5863c42962404607013422a40118d8b9f4603f0b | "2022-08-05T16:13:36Z" | python | "2022-08-06T14:21:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,523 | ["airflow/www/static/js/graph.js"] | Mapped, classic operator tasks within TaskGroups prepend `group_id` in Graph View | ### Apache Airflow version
main (development)
### What happened
When mapped, classic operator tasks exist within TaskGroups, the `group_id` of the TaskGroup is prepended to the displayed `task_id` in the Graph View.
In the below screenshot, all displayed task IDs only contain the direct `task_id` except for the "mapped_classic_task". This particular task is a mapped `BashOperator` task. The prepended `group_id` does not appear for unmapped, classic operator tasks, nor mapped and unmapped TaskFlow tasks.
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/48934154/182760586-975a7886-bcd6-477d-927b-25e82139b5b7.png">
### What you think should happen instead
The pattern of the displayed task names should be consistent for all task types (mapped/unmapped, classic operators/TaskFlow functions). Additionally, having the `group_id` prepended to the mapped, classic operator tasks is a little redundant and less readable.
### How to reproduce
1. Use an example DAG of the following:
```python
from pendulum import datetime
from airflow.decorators import dag, task, task_group
from airflow.operators.bash import BashOperator
@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_group_task_graph():
@task_group
def my_task_group():
BashOperator(task_id="not_mapped_classic_task", bash_command="echo")
BashOperator.partial(task_id="mapped_classic_task").expand(
bash_command=["echo", "echo hello", "echo world"]
)
@task
def another_task(input=None):
...
another_task.override(task_id="not_mapped_taskflow_task")()
another_task.override(task_id="mapped_taskflow_task").expand(input=[1, 2, 3])
my_task_group()
_ = task_group_task_graph()
```
2. Navigate to the Graph view
3. Notice that the `task_id` for the "mapped_classic_task" prepends the TaskGroup `group_id` of "my_task_group" while the other tasks in the TaskGroup do not.
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Breeze
### Anything else
Setting `prefix_group_id=False` for the TaskGroup does remove the prepended `group_id` from the tasks' display names.
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25523 | https://github.com/apache/airflow/pull/26108 | 5697e9fdfa9d5af2d48f7037c31972c2db1f4397 | 3b76e81bcc9010cfec4d41fe33f92a79020dbc5b | "2022-08-04T04:13:48Z" | python | "2022-09-01T16:32:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,522 | ["airflow/providers/amazon/aws/hooks/batch_client.py", "airflow/providers/amazon/aws/operators/batch.py", "tests/providers/amazon/aws/hooks/test_batch_client.py", "tests/providers/amazon/aws/operators/test_batch.py"] | Support AWS Batch multinode job types | ### Description
Support [multinode job types](https://docs.aws.amazon.com/batch/latest/userguide/multi-node-parallel-jobs.html) in the [AWS Batch Operator](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/batch.py).
The [boto3 `submit_job` method](https://boto3.amazonaws.com/v1/documentation/api/1.9.88/reference/services/batch.html#Batch.Client.submit_job) supports container, multinode, and array batch jobs with the mutually exclusive `nodeOverrides` and `containerOverrides` (+ `arrayProperties`) parameters. But currently the AWS Batch Operator only supports submission of container jobs and array jobs by hardcoding the boto3 `submit_job` parameter `containerOverrides`: https://github.com/apache/airflow/blob/3c08cefdfd2e2636a714bb835902f0cb34225563/airflow/providers/amazon/aws/operators/batch.py#L200 & https://github.com/apache/airflow/blob/3c08cefdfd2e2636a714bb835902f0cb34225563/airflow/providers/amazon/aws/hooks/batch_client.py#L99
The [`get_job_awslogs_info`](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/batch_client.py#L419) method in the batch client hook is also hardcoded for the container type job: https://github.com/apache/airflow/blob/3c08cefdfd2e2636a714bb835902f0cb34225563/airflow/providers/amazon/aws/hooks/batch_client.py#L425
To support multinode jobs the `get_job_awslogs_info` method would need to access `nodeProperties` from the [`describe_jobs`](https://boto3.amazonaws.com/v1/documentation/api/1.9.88/reference/services/batch.html#Batch.Client.describe_jobs) response.
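For reference, a multinode submission through boto3 looks roughly like this (all values are placeholders):
```python
import boto3

batch = boto3.client("batch")
batch.submit_job(
    jobName="my-multinode-job",
    jobQueue="my-queue",
    jobDefinition="my-multinode-jobdef",
    # nodeOverrides is mutually exclusive with containerOverrides/arrayProperties
    nodeOverrides={
        "numNodes": 4,
        "nodePropertyOverrides": [
            {"targetNodes": "0:", "containerOverrides": {"command": ["python", "train.py"]}},
        ],
    },
)
```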
### Use case/motivation
Multinode jobs are a supported job type of AWS Batch, are supported by the underlying boto3 library, and should also be available to be managed by Airflow. I've extended the AWS Batch Operator for our own use cases, but would prefer to not maintain a separate operator.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25522 | https://github.com/apache/airflow/pull/29522 | f080e1e3985f24293979f2f0fc28f1ddf72ee342 | 2ce11300064ec821ffe745980012100fc32cb4b4 | "2022-08-03T23:14:12Z" | python | "2023-04-12T04:29:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,512 | ["airflow/www/static/js/dag/grid/index.tsx"] | Vertical overlay scrollbar on Grid view blocks last DAG run column | ### Apache Airflow version
2.3.3 (latest released)
### What happened
The vertical overlay scrollbar in Grid view on the Web UI (#22134) covers up the final DAG run column and makes it impossible to click on the tasks for that DAG run:
![image](https://user-images.githubusercontent.com/12103194/182652473-e935fb33-0808-43ad-84d8-acabbf4e9b88.png)
![image](https://user-images.githubusercontent.com/12103194/182652203-0494efb5-8335-4005-920a-98bff42e1b21.png)
### What you think should happen instead
Either pad the Grid view so the scrollbar does not appear on top of the content, or force the scrollbar to take up its own space.
### How to reproduce
Have a DAG run with enough tasks to cause vertical overflow. Found on Linux + FF 102
### Operating System
Fedora 36
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25512 | https://github.com/apache/airflow/pull/25554 | 5668888a7e1074a620b3d38f407ecf1aa055b623 | fe9772949eba35c73101c3cd93a7c76b3e633e7e | "2022-08-03T16:10:55Z" | python | "2022-08-05T16:46:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,508 | ["airflow/migrations/versions/0118_2_5_0_add_updated_at_to_dagrun_and_ti.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst", "tests/models/test_taskinstance.py"] | add lastModified columns to DagRun and TaskInstance. | I wonder if we should add lastModified columns to DagRun and TaskInstance. It might help a lot of UI/API queries.
_Originally posted by @ashb in https://github.com/apache/airflow/issues/23805#issuecomment-1143752368_ | https://github.com/apache/airflow/issues/25508 | https://github.com/apache/airflow/pull/26252 | 768865e10c811bc544590ec268f9f5c334da89b5 | 4930df45f5bae89c297dbcd5cafc582a61a0f323 | "2022-08-03T14:49:55Z" | python | "2022-09-19T13:28:07Z" |
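A sketch of what such a column could look like on the models (the column name and defaults here are assumptions, mirroring how other timestamp columns are declared):
```python
from sqlalchemy import Column

from airflow.utils import timezone
from airflow.utils.sqlalchemy import UtcDateTime

# Added to both the DagRun and TaskInstance model definitions:
updated_at = Column(UtcDateTime, default=timezone.utcnow, onupdate=timezone.utcnow)
```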
closed | apache/airflow | https://github.com/apache/airflow | 25,493 | ["airflow/www/views.py", "tests/www/views/test_views_base.py", "tests/www/views/test_views_home.py"] | URL contains tag query parameter but Airflow UI does not correctly visualize the tags | ### Apache Airflow version
2.3.3 (latest released)
### What happened
A URL I saved in the past, `https://astronomer.astronomer.run/dx4o2568/home?tags=test`, has the tag field in the query parameter, though I was not aware of this. When I clicked on the URL, I was confused because I did not see any DAGs when I should have seen a bunch.
After closer inspection, I realized that the URL has the tag field in the query parameter but then noticed that the tag box in the Airflow UI wasn't properly populated.
![screen_shot_2022-07-12_at_8 11 07_am](https://user-images.githubusercontent.com/5952735/182496710-601b4a98-aacb-4482-bb9f-bb3fdf9e265f.png)
### What you think should happen instead
When I clicked on the URL, the tag box should have been populated with the strings in the URL.
### How to reproduce
Start an Airflow deployment with some DAGs and add the tag query parameter. More specifically, it has to be a tag that is not used by any DAG.
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25493 | https://github.com/apache/airflow/pull/25715 | ea306c9462615d6b215d43f7f17d68f4c62951b1 | 485142ac233c4ac9627f523465b7727c2d089186 | "2022-08-03T00:03:45Z" | python | "2022-11-24T10:27:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,492 | ["airflow/api_connexion/endpoints/plugin_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/plugin_schema.py", "airflow/www/static/js/types/api-generated.ts"] | API server /plugin crashes | ### Apache Airflow version
2.3.3 (latest released)
### What happened
The `/plugins` endpoint returned a 500 http status code.
```
curl -X GET http://localhost:8080/api/v1/plugins\?limit\=1 \
-H 'Cache-Control: no-cache' \
--user "admin:admin"
{
"detail": "\"{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}\" is not of type 'object'\n\nFailed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:\n {'nullable': True, 'type': 'object'}\n\nOn instance['plugins'][0]['appbuilder_views'][0]:\n (\"{'name': 'Test View', 'category': 'Test Plugin', 'view': \"\n \"'test.appbuilder_views.TestAppBuilderBaseView'}\")",
"status": 500,
"title": "Response body does not conform to specification",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/Unknown"
}
```
The error message in the webserver is as followed
```
[2022-08-03 17:07:57,705] {validation.py:244} ERROR - http://localhost:8080/api/v1/plugins?limit=1 validation error: "{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}" is not of type 'object'
Failed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:
{'nullable': True, 'type': 'object'}
On instance['plugins'][0]['appbuilder_views'][0]:
("{'name': 'Test View', 'category': 'Test Plugin', 'view': "
"'test.appbuilder_views.TestAppBuilderBaseView'}")
172.18.0.1 - admin [03/Aug/2022:17:10:17 +0000] "GET /api/v1/plugins?limit=1 HTTP/1.1" 500 733 "-" "curl/7.79.1"
```
### What you think should happen instead
The response should contain all the plugins integrated with Airflow.
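One way the endpoint could make the payload match the declared object type is to coerce the live view object into a JSON-safe value before marshalling (a sketch only, not necessarily the change that gets merged):
```python
def serialize_appbuilder_view(entry: dict) -> dict:
    """Make a plugin's appbuilder_views entry JSON-safe before marshalling."""
    serialized = {k: v for k, v in entry.items() if k != "view"}
    view = entry.get("view")
    if view is not None:
        # Replace the live Flask-AppBuilder view object with its import path.
        serialized["view"] = f"{view.__module__}.{type(view).__name__}"
    return serialized
```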
### How to reproduce
Create a simple plugin in the plugin directory.
`appbuilder_views.py`
```
from flask_appbuilder import expose, BaseView as AppBuilderBaseView
# Creating a flask appbuilder BaseView
class TestAppBuilderBaseView(AppBuilderBaseView):
@expose("/")
def test(self):
return self.render_template("test_plugin/test.html", content="Hello galaxy!")
```
`plugin.py`
```
from airflow.plugins_manager import AirflowPlugin
from test.appbuilder_views import TestAppBuilderBaseView
class TestPlugin(AirflowPlugin):
name = "test"
appbuilder_views = [
{
"name": "Test View",
"category": "Test Plugin",
"view": TestAppBuilderBaseView()
}
]
```
Call the `/plugin` endpoint.
```
curl -X GET http://localhost:8080/api/v1/plugins\?limit\=1 \
-H 'Cache-Control: no-cache' \
--user "admin:admin"
```
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25492 | https://github.com/apache/airflow/pull/25524 | 7e3d2350dbb23b9c98bbadf73296425648e1e42d | 5de11e1410b432d632e8c0d1d8ca0945811a56f0 | "2022-08-02T23:44:07Z" | python | "2022-08-04T15:37:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,474 | ["airflow/providers/google/cloud/transfers/postgres_to_gcs.py"] | PostgresToGCSOperator parquet format mapping inconsistencies converts boolean data type to string | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.8.0
### Apache Airflow version
2.3.2
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When converting native Postgres data types to BigQuery data types, [this](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L288) function is responsible for converting Postgres types -> BigQuery types -> parquet types.
The [map](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/postgres_to_gcs.py#L80) in the PostgresToGCSOperator maps the Postgres boolean type to the BigQuery `BOOLEAN` data type.
Then, when converting from BigQuery to parquet data types [here](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L288), the [map](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L289) does not have the `BOOLEAN` data type in its keys. Because the type defaults to string in the following [line](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L305), the BOOLEAN data type is converted into string, which then fails when converting the data into `pa.bool_()`.
When converting the boolean data into `pa.string()`, pyarrow raises an error.
### What you think should happen instead
I would expect the postgres boolean type to map to `pa.bool_()` data type.
Changing the [map](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/postgres_to_gcs.py#L80) to include the `BOOL` key instead of `BOOLEAN` would correctly map the postgres type to the final parquet type.
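A small demonstration of why the key matters (the dictionaries below paraphrase the provider's maps rather than copying them):
```python
import pyarrow as pa

# sql_to_gcs.py's BigQuery -> parquet lookup only knows "BOOL", and anything it
# does not recognise falls back to pa.string():
bq_to_parquet = {"BOOL": pa.bool_(), "INTEGER": pa.int64(), "FLOAT": pa.float64()}

parquet_type = bq_to_parquet.get("BOOLEAN", pa.string())  # -> pa.string(), i.e. the bug
# Emitting "BOOL" from PostgresToGCSOperator.type_map instead resolves to pa.bool_().
assert bq_to_parquet["BOOL"] == pa.bool_()
```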
### How to reproduce
1. Create a postgres connection on airflow with id `postgres_test_conn`.
2. Create a gcp connection on airflow with id `gcp_test_conn`.
3. In the database referenced by the `postgres_test_conn`, in the public schema create a table `test_table` that includes a boolean data type, and insert data into the table.
4. Create a bucket named `issue_PostgresToGCSOperator_bucket`, in the gcp account referenced by the `gcp_test_conn`.
5. Run the dag below that inserts the data from the postgres table into the cloud storage bucket.
```python
import pendulum
from airflow import DAG
from airflow.providers.google.cloud.transfers.postgres_to_gcs import PostgresToGCSOperator
with DAG(
dag_id="issue_PostgresToGCSOperator",
start_date=pendulum.parse("2022-01-01"),
)as dag:
task = PostgresToGCSOperator(
task_id='extract_task',
filename='uploading-{}.parquet',
bucket="issue_PostgresToGCSOperator_bucket",
export_format='parquet',
sql="SELECT * FROM test_table",
postgres_conn_id='postgres_test_conn',
gcp_conn_id='gcp_test_conn',
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25474 | https://github.com/apache/airflow/pull/25475 | 4da2b0c216c92795f19862a3ff6634e5a5936138 | faf3c4fe474733965ab301465f695e3cc311169c | "2022-08-02T14:36:32Z" | python | "2022-08-02T20:28:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,446 | ["chart/templates/statsd/statsd-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_annotations.py"] | Helm Chart: Allow adding annotations to statsd deployment | ### Description
The Helm Chart [does not allow adding annotations](https://github.com/apache/airflow/blob/40eefd84797f5085e6c3fef6cbd6f713ceb3c3d8/chart/templates/statsd/statsd-deployment.yaml#L60-L63) to the StatsD deployment. We should add this.
### Use case/motivation
In our Kubernetes cluster we need to set annotations on deployments that should be scraped by Prometheus. Having an exporter that does not get scraped defeats the purpose :slightly_smiling_face:
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25446 | https://github.com/apache/airflow/pull/25732 | fdecf12051308a4e064f5e4bf5464ffc9b183dad | 951b7084619eca7229cdaadda99fd1191d4793e7 | "2022-08-01T14:23:14Z" | python | "2022-09-15T00:31:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,395 | ["airflow/providers/snowflake/provider.yaml", "airflow/providers/snowflake/transfers/copy_into_snowflake.py", "airflow/providers/snowflake/transfers/s3_to_snowflake.py", "scripts/in_container/verify_providers.py", "tests/providers/snowflake/transfers/test_copy_into_snowflake.py"] | GCSToSnowflakeOperator with feature parity to the S3ToSnowflakeOperator | ### Description
An operator similar to the S3ToSnowflakeOperator is needed, but for GCS, to load data stored in GCS into a Snowflake table.
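A hypothetical interface, mirroring the existing S3ToSnowflakeOperator (this operator does not exist yet; the name and parameters below are illustrative only):
```python
load_gcs_files = GCSToSnowflakeOperator(  # hypothetical operator requested here
    task_id="gcs_to_snowflake",
    gcs_keys=["exports/2022-07-29/part-001.csv"],
    table="MY_TABLE",
    schema="PUBLIC",
    stage="MY_GCS_STAGE",  # Snowflake external stage pointing at the GCS bucket
    file_format="(type = 'CSV', field_delimiter = ',')",
    snowflake_conn_id="snowflake_default",
)
```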
### Use case/motivation
Same as the S3ToSnowflakeOperator.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25395 | https://github.com/apache/airflow/pull/25541 | 2ee099655b1ca46935dbf3e37ae0ec1139f98287 | 5c52bbf32d81291b57d051ccbd1a2479ff706efc | "2022-07-29T10:23:52Z" | python | "2022-08-26T22:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,388 | ["airflow/providers/jdbc/operators/jdbc.py", "tests/providers/jdbc/operators/test_jdbc.py"] | apache-airflow-providers-jdbc fails with jaydebeapi.Error | ### Apache Airflow Provider(s)
jdbc
### Versions of Apache Airflow Providers
I am using apache-airflow-providers-jdbc==3.0.0 for Airflow 2.3.3 as per constraint [file](https://raw.githubusercontent.com/apache/airflow/constraints-2.3.3/constraints-3.10.txt)
### Apache Airflow version
2.3.3 (latest released)
### Operating System
K8 on Linux
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
I am using the JdbcOperator to execute a single ALTER SQL statement, but it returns the following error:
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/jdbc/operators/jdbc.py", line 76, in execute
return hook.run(self.sql, self.autocommit, parameters=self.parameters, handler=fetch_all_handler)
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/hooks/dbapi.py", line 213, in run
result = handler(cur)
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/jdbc/operators/jdbc.py", line 30, in fetch_all_handler
return cursor.fetchall()
File "/usr/local/airflow/.local/lib/python3.10/site-packages/jaydebeapi/__init__.py", line 593, in fetchall
row = self.fetchone()
File "/usr/local/airflow/.local/lib/python3.10/site-packages/jaydebeapi/__init__.py", line 558, in fetchone
raise Error()
jaydebeapi.Error
### What you think should happen instead
The introduction of `handler=fetch_all_handler` in `jdbc.py` line 76 (`return hook.run(self.sql, self.autocommit, parameters=self.parameters, handler=fetch_all_handler)`) breaks the script, because the handler unconditionally calls `fetchall()` even for statements such as ALTER that produce no result set. With the previous version, which did not pass `fetch_all_handler` in `jdbc.py`, it ran perfectly.
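A sketch of a more defensive handler (illustrative only, not necessarily how the provider will fix it): only fetch rows when the statement actually produced a result set.
```python
def fetch_all_handler(cursor):
    """Return rows only for statements that produce a result set (e.g. SELECT).

    DDL such as ALTER leaves cursor.description empty, and calling fetchall()
    on it makes jaydebeapi raise a bare Error.
    """
    if cursor.description is not None:
        return cursor.fetchall()
    return None
```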
### How to reproduce
Try submitting an ALTER statement through the Airflow JdbcOperator.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25388 | https://github.com/apache/airflow/pull/25412 | 3dfa44566c948cb2db016e89f84d6fe37bd6d824 | 1708da9233c13c3821d76e56dbe0e383ff67b0fd | "2022-07-28T22:08:43Z" | python | "2022-08-07T09:18:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,360 | ["airflow/models/abstractoperator.py", "airflow/models/baseoperator.py", "airflow/operators/trigger_dagrun.py", "airflow/providers/qubole/operators/qubole.py", "airflow/www/static/js/dag.js", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "docs/spelling_wordlist.txt"] | Extra Links do not works with mapped operators | ### Apache Airflow version
main (development)
### What happened
I found that Extra Links do not work with dynamic tasks at all (the links are inaccessible), while the same Extra Links work fine with non-mapped operators.
I think the cause is that the extra links get assigned to the parent (unmapped) task instance rather than to the actual mapped TIs.
As a result we only have `number of extra links defined in the operator`, not `(number of extra links defined in the operator) x (number of mapped TIs)`.
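For context, extra links are resolved per task instance through `BaseOperatorLink.get_link`, and the `TaskInstanceKey` passed to it already carries a `map_index`, so each expanded task instance should be able to resolve its own link (the custom link below is purely illustrative, not provider code):
```python
from airflow.models.baseoperator import BaseOperatorLink
from airflow.models.taskinstance import TaskInstanceKey


class ExampleLink(BaseOperatorLink):
    name = "Example Link"

    def get_link(self, operator, *, ti_key: TaskInstanceKey) -> str:
        # ti_key.map_index distinguishes each expanded task instance
        return f"https://example.com/{ti_key.dag_id}/{ti_key.task_id}/{ti_key.map_index}"
```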
### What you think should happen instead
_No response_
### How to reproduce
```python
from pendulum import datetime
from airflow.decorators import dag
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.operators.empty import EmptyOperator
EXTERNAL_DAG_IDS = [f"example_external_dag_{ix:02d}" for ix in range(3)]
DAG_KWARGS = {
"start_date": datetime(2022, 7, 1),
"schedule_interval": "@daily",
"catchup": False,
"tags": ["mapped_extra_links", "AIP-42", "serialization"],
}
def external_dags():
EmptyOperator(task_id="dummy")
@dag(**DAG_KWARGS)
def external_regular_task_sensor():
for external_dag_id in EXTERNAL_DAG_IDS:
ExternalTaskSensor(
task_id=f'wait_for_{external_dag_id}',
external_dag_id=external_dag_id,
poke_interval=5,
)
@dag(**DAG_KWARGS)
def external_mapped_task_sensor():
ExternalTaskSensor.partial(
task_id='wait',
poke_interval=5,
).expand(external_dag_id=EXTERNAL_DAG_IDS)
dag_external_regular_task_sensor = external_regular_task_sensor()
dag_external_mapped_task_sensor = external_mapped_task_sensor()
for dag_id in EXTERNAL_DAG_IDS:
globals()[dag_id] = dag(dag_id=dag_id, **DAG_KWARGS)(external_dags)()
```
https://user-images.githubusercontent.com/3998685/180994213-847b3fd3-d351-4836-b246-b54056f34ad6.mp4
### Operating System
macOs 12.5
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25360 | https://github.com/apache/airflow/pull/25500 | 4ecaa9e3f0834ca0ef08002a44edda3661f4e572 | d9e924c058f5da9eba5bb5b85a04bfea6fb2471a | "2022-07-28T10:44:40Z" | python | "2022-08-05T03:41:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,352 | ["airflow/decorators/base.py", "airflow/models/expandinput.py", "airflow/models/mappedoperator.py", "tests/models/test_taskinstance.py", "tests/models/test_xcom_arg_map.py"] | expand_kwargs.map(func) gives unhelpful error message if func returns list | ### Apache Airflow version
main (development)
### What happened
Here's a DAG:
```python3
with DAG(
dag_id="expand_list",
doc_md="try to get kwargs from a list",
schedule_interval=None,
start_date=datetime(2001, 1, 1),
) as expand_list:
@expand_list.task
def do_this():
return [
("echo hello $USER", "USER", "foo"),
("echo hello $USER", "USER", "bar"),
]
def mapper(tuple):
if tuple[2] == "bar":
return [1, 2, 3]
else:
return {"bash_command": tuple[0], "env": {tuple[1]: tuple[2]}}
BashOperator.partial(task_id="one_cmd").expand_kwargs(do_this().map(mapper))
```
The `foo` task instance succeeds as expected, and the `bar` task fails as expected. But the error message that it gives isn't particularly helpful to a user who doesn't know what they did wrong:
```
ERROR - Failed to execute task: resolve() takes 3 positional arguments but 4 were given.
Traceback (most recent call last):
File "/home/matt/src/airflow/airflow/executors/debug_executor.py", line 78, in _run_task
ti.run(job_id=ti.job_id, **params)
File "/home/matt/src/airflow/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1782, in run
self._run_raw_task(
File "/home/matt/src/airflow/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1445, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1580, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 2202, in render_templates
rendered_task = self.task.render_template_fields(context)
File "/home/matt/src/airflow/airflow/models/mappedoperator.py", line 751, in render_template_fields
unmapped_task = self.unmap(mapped_kwargs)
File "/home/matt/src/airflow/airflow/models/mappedoperator.py", line 591, in unmap
kwargs = self._expand_mapped_kwargs(resolve)
File "/home/matt/src/airflow/airflow/models/mappedoperator.py", line 546, in _expand_mapped_kwargs
return expand_input.resolve(*resolve)
TypeError: resolve() takes 3 positional arguments but 4 were given
```
### What you think should happen instead
Whatever checks the return value for mappability should do more to point the user to their error. Perhaps something like:
> UnmappableDataError: Expected a dict with keys that BashOperator accepts, got `[1, 2, 3]` instead
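A hedged sketch of the kind of validation that would produce such a message (the names `UnmappableDataError` and `_validate_expand_kwargs_item` are illustrative, not actual Airflow internals):
```python
from collections.abc import Mapping


class UnmappableDataError(TypeError):
    """Illustrative exception type for this sketch."""


def _validate_expand_kwargs_item(value, task_id):
    # Each element produced by .map() must be a dict of keyword arguments.
    if not isinstance(value, Mapping):
        raise UnmappableDataError(
            f"expand_kwargs() for task {task_id!r} expected each mapped element "
            f"to be a dict of keyword arguments, got {value!r} instead"
        )
```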
### How to reproduce
Run the dag above
### Operating System
Linux 5.10.101 #1-NixOS SMP Wed Feb 16 11:54:31 UTC 2022 x86_64 GNU/Linux
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25352 | https://github.com/apache/airflow/pull/25355 | f6b48ac6dfaf931a5433ec16369302f68f038c65 | 4e786e31bcdf81427163918e14d191e55a4ab606 | "2022-07-27T22:49:28Z" | python | "2022-07-29T08:58:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,349 | ["airflow/providers/hashicorp/_internal_client/vault_client.py", "tests/providers/hashicorp/_internal_client/test_vault_client.py", "tests/providers/hashicorp/hooks/test_vault.py"] | Vault client for hashicorp provider prints a deprecation warning when using kubernetes login | ### Apache Airflow Provider(s)
hashicorp
### Versions of Apache Airflow Providers
```
apache-airflow==2.3.0
apache-airflow-providers-hashicorp==2.2.0
hvac==0.11.2
```
### Apache Airflow version
2.3.0
### Operating System
Ubuntu 18.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
Using the vault secrets backend prints a deprecation warning when using the kubernetes auth method:
```
/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/hashicorp/_internal_client/vault_client.py:284 DeprecationWarning: Call to deprecated function 'auth_kubernetes'. This method will be removed in version '1.0.0' Please use the 'login' method on the 'hvac.api.auth_methods.kubernetes' class moving forward.
```
This code is still present in `main` at https://github.com/apache/airflow/blob/main/airflow/providers/hashicorp/_internal_client/vault_client.py#L258-L260.
### What you think should happen instead
The new kubernetes authentication method should be used instead. This code:
```python
if self.auth_mount_point:
    _client.auth_kubernetes(role=self.kubernetes_role, jwt=jwt, mount_point=self.auth_mount_point)
else:
    _client.auth_kubernetes(role=self.kubernetes_role, jwt=jwt)
```
Should be able to be updated to:
```python
from hvac.api.auth_methods import Kubernetes

if self.auth_mount_point:
    Kubernetes(_client.adapter).login(role=self.kubernetes_role, jwt=jwt, mount_point=self.auth_mount_point)
else:
    Kubernetes(_client.adapter).login(role=self.kubernetes_role, jwt=jwt)
```
### How to reproduce
Use the vault secrets backend with the kubernetes auth method and look at the logs.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25349 | https://github.com/apache/airflow/pull/25351 | f4b93cc097dab95437c9c4b37474f792f80fd14e | ad0a4965aaf0702f0e8408660b912e87d3c75c22 | "2022-07-27T19:19:01Z" | python | "2022-07-28T18:23:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,344 | ["airflow/models/abstractoperator.py", "tests/models/test_baseoperator.py"] | Improve Airflow logging for operator Jinja template processing | ### Description
When an operator uses Jinja templating, debugging issues is difficult because the Airflow task log only displays a stack trace.
### Use case/motivation
When there's a templating issue, I'd like to have some specific, actionable info to help understand the problem. At minimum:
* Which operator or task had the issue?
* Which field had the issue?
* What was the Jinja template?
Possibly also the Jinja context, although that can be very verbose.
I have prototyped this in my local Airflow dev environment, and I propose something like the following. (Note the logging commands, which are not present in the Airflow repo.)
Please let me know if this sounds reasonable, and I will be happy to create a PR.
```
def _do_render_template_fields(
    self,
    parent,
    template_fields,
    context,
    jinja_env,
    seen_oids,
) -> None:
    """Copied from Airflow 2.2.5 with added logging."""
    logger.info(f"BaseOperator._do_render_template_fields(): Task {self.task_id}")
    for attr_name in template_fields:
        content = getattr(parent, attr_name)
        if content:
            logger.info(f"Rendering template for '{attr_name}' field: {content!r}")
            rendered_content = self.render_template(content, context, jinja_env, seen_oids)
            setattr(parent, attr_name, rendered_content)
```
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25344 | https://github.com/apache/airflow/pull/25452 | 9c632684341fb3115d654aecb83aa951d80b19af | 4da2b0c216c92795f19862a3ff6634e5a5936138 | "2022-07-27T15:46:39Z" | python | "2022-08-02T19:40:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,343 | ["airflow/callbacks/callback_requests.py", "airflow/models/taskinstance.py", "tests/callbacks/test_callback_requests.py"] | Object of type datetime is not JSON serializable after detecting zombie jobs with CeleryExecutor and separated Scheduler and DAG-Processor | ### Apache Airflow version
2.3.3 (latest released)
### What happened
After running for a certain period (a few minutes to several hours, depending on the number of active DAGs in the environment), the scheduler crashes with the following error message:
```
[2022-07-26 15:07:24,362] {executor_loader.py:105} INFO - Loaded executor: CeleryExecutor
[2022-07-26 15:07:24,363] {scheduler_job.py:1252} INFO - Resetting orphaned tasks for active dag runs
[2022-07-26 15:07:25,585] {celery_executor.py:532} INFO - Adopted the following 1 tasks from a dead executor
<TaskInstance: freewheel_uafl_data_scala.freewheel.delivery_data scheduled__2022-07-25T04:15:00+00:00 [running]> in state STARTED
[2022-07-26 15:07:35,881] {scheduler_job.py:1381} WARNING - Failing (1) jobs without heartbeat after 2022-07-26 12:37:35.868798+00:00
[2022-07-26 15:07:35,881] {scheduler_job.py:1389} ERROR - Detected zombie job: {'full_filepath': '/data/dags/09_scala_apps/freewheel_uafl_data_scala.py', 'msg': 'Detected <TaskInstance: freewheel_uafl_data_scala.freewheel.delivery_data scheduled__2022-07-25T04:15:00+00:00 [running]> as zombie', 'simple_task_instance': <airflow.models.taskinstance.SimpleTaskInstance object at 0x7fb4a1105690>, 'is_failure_callback': True}
[2022-07-26 15:07:35,883] {scheduler_job.py:769} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 752, in _execute
self._run_scheduler_loop()
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 873, in _run_scheduler_loop
next_event = timers.run(blocking=False)
File "/usr/lib/python3.10/sched.py", line 151, in run
action(*argument, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/event_scheduler.py", line 36, in repeat
action(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 1390, in _find_zombies
self.executor.send_callback(request)
File "/usr/lib/python3.10/site-packages/airflow/executors/base_executor.py", line 363, in send_callback
self.callback_sink.send(request)
File "/usr/lib/python3.10/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/callbacks/database_callback_sink.py", line 34, in send
db_callback = DbCallbackRequest(callback=callback, priority_weight=10)
File "<string>", line 4, in __init__
File "/usr/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 481, in _initialize_instance
with util.safe_reraise():
File "/usr/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/usr/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
raise exception
File "/usr/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 479, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/models/db_callback_request.py", line 44, in __init__
self.callback_data = callback.to_json()
File "/usr/lib/python3.10/site-packages/airflow/callbacks/callback_requests.py", line 79, in to_json
return json.dumps(dict_obj)
File "/usr/lib/python3.10/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type datetime is not JSON serializable
[2022-07-26 15:07:36,100] {scheduler_job.py:781} INFO - Exited execute loop
Traceback (most recent call last):
File "/usr/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/lib/python3.10/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/usr/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/usr/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/usr/lib/python3.10/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 752, in _execute
self._run_scheduler_loop()
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 873, in _run_scheduler_loop
next_event = timers.run(blocking=False)
File "/usr/lib/python3.10/sched.py", line 151, in run
action(*argument, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/event_scheduler.py", line 36, in repeat
action(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 1390, in _find_zombies
self.executor.send_callback(request)
File "/usr/lib/python3.10/site-packages/airflow/executors/base_executor.py", line 363, in send_callback
self.callback_sink.send(request)
File "/usr/lib/python3.10/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/callbacks/database_callback_sink.py", line 34, in send
db_callback = DbCallbackRequest(callback=callback, priority_weight=10)
File "<string>", line 4, in __init__
File "/usr/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 481, in _initialize_instance
with util.safe_reraise():
File "/usr/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/usr/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
raise exception
File "/usr/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 479, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/usr/lib/python3.10/site-packages/airflow/models/db_callback_request.py", line 44, in __init__
self.callback_data = callback.to_json()
File "/usr/lib/python3.10/site-packages/airflow/callbacks/callback_requests.py", line 79, in to_json
return json.dumps(dict_obj)
File "/usr/lib/python3.10/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type datetime is not JSON serializable
```
### What you think should happen instead
The scheduler should handle zombie jobs without crashing.
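A minimal sketch of one way the callback payload could be made JSON-safe (illustrative only; the actual fix may serialize the request differently):
```python
import json
from datetime import datetime


def to_json(dict_obj):
    # Fall back to ISO-8601 strings for datetimes instead of raising TypeError.
    def default(obj):
        if isinstance(obj, datetime):
            return obj.isoformat()
        raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

    return json.dumps(dict_obj, default=default)
```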
### How to reproduce
The following conditions are necessary:
- dag-processor and scheduler run in separate containers
- Airflow uses the CeleryExecutor
- There are zombie jobs
### Operating System
Alpine Linux 3.16.1
### Versions of Apache Airflow Providers
```
apache-airflow-providers-apache-hdfs==3.0.1
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.2.0
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-exasol==2.1.3
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-jenkins==3.0.0
apache-airflow-providers-microsoft-mssql==3.1.0
apache-airflow-providers-odbc==3.1.0
apache-airflow-providers-oracle==3.1.0
apache-airflow-providers-postgres==5.1.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-sqlite==3.1.0
apache-airflow-providers-ssh==3.1.0
```
### Deployment
Other 3rd-party Helm chart
### Deployment details
One Pod on Kubernetes containing the following containers
- 1 Container for the webserver service
- 1 Container for the scheduler service
- 1 Container for the dag-processor service
- 1 Container for the flower service
- 1 Container for the redis service
- 2 or 3 containers for the celery workers services
Due to a previous issue crashing the scheduler with the message `UNEXPECTED COMMIT - THIS WILL BREAK HA LOCKS`, we substitute `scheduler_job.py` with the file `https://raw.githubusercontent.com/tanelk/airflow/a4b22932e5ac9c2b6f37c8c58345eee0f63cae09/airflow/jobs/scheduler_job.py`.
Sadly, I don't remember exactly which issue or MR it was, but it was related to the scheduler and dag-processor running in separate containers.
### Anything else
It looks like only the **combination of CeleryExecutor and separate scheduler and dag-processor services** crashes the scheduler when handling zombie jobs.
The KubernetesExecutor with separated scheduler and dag-processor doesn't crash the scheduler.
It looks like the CeleryExecutor with scheduler and dag-processor in the same container doesn't crash the scheduler.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25343 | https://github.com/apache/airflow/pull/25471 | 3421ecc21bafaf355be5b79ec4ed19768e53275a | d7e14ba0d612d8315238f9d0cba4ef8c44b6867c | "2022-07-27T15:28:28Z" | python | "2022-08-02T21:50:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,330 | ["airflow/operators/bash.py"] | User defined `env` clobbers PATH, BashOperator can't find bash | ### Apache Airflow version
main (development)
### What happened
NixOS is unconventional in some ways. For instance `which bash` prints `/run/current-system/sw/bin/bash`, which isn't a place that most people expect to go looking for bash.
I can't be sure if this is the reason--or if it's some other peculiarity--but on NixOS, cases where `BashOperator` defines an `env` cause the task to fail with this error:
```
venv ❯ airflow dags test nopath "$(date +%Y-%m-%d)"
[2022-07-26 21:54:09,704] {dagbag.py:508} INFO - Filling up the DagBag from /home/matt/today/dags
[2022-07-26 21:54:10,129] {base_executor.py:91} INFO - Adding to queue: ['<TaskInstance: nopath.nopath backfill__2022-07-26T00:00:00+00:00 [queued]>']
[2022-07-26 21:54:15,148] {subprocess.py:62} INFO - Tmp dir root location:
/tmp
[2022-07-26 21:54:15,149] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'echo hello world']
[2022-07-26 21:54:15,238] {debug_executor.py:84} ERROR - Failed to execute task: [Errno 2] No such file or directory: 'bash'.
Traceback (most recent call last):
File "/home/matt/src/airflow/airflow/executors/debug_executor.py", line 78, in _run_task
ti.run(job_id=ti.job_id, **params)
File "/home/matt/src/airflow/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1782, in run
self._run_raw_task(
File "/home/matt/src/airflow/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1445, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1623, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "/home/matt/src/airflow/airflow/models/taskinstance.py", line 1694, in _execute_task
result = execute_callable(context=context)
File "/home/matt/src/airflow/airflow/operators/bash.py", line 183, in execute
result = self.subprocess_hook.run_command(
File "/home/matt/src/airflow/airflow/hooks/subprocess.py", line 76, in run_command
self.sub_process = Popen(
File "/nix/store/cgxc3jz7idrb1wnb2lard9rvcx6aw2si-python3-3.9.6/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/nix/store/cgxc3jz7idrb1wnb2lard9rvcx6aw2si-python3-3.9.6/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'bash'
```
On the other hand, tasks succeed if:
- The author doesn't use the `env` kwarg
- `env` is replaced with `append_env`
- they use `env` to explicitly set `PATH` to a folder containing `bash`
- or they are run on a more conventional system (like my MacBook)
Here is a DAG which demonstrates this:
```python3
from airflow.models import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime, timedelta

with DAG(
    dag_id="withpath",
    start_date=datetime(1970, 1, 1),
    schedule_interval=None,
) as withpath:
    BashOperator(
        task_id="withpath",
        env={"PATH": "/run/current-system/sw/bin/", "WORLD": "world"},
        bash_command="echo hello $WORLD",
    )

with DAG(
    dag_id="nopath",
    start_date=datetime(1970, 1, 1),
    schedule_interval=None,
) as nopath:
    BashOperator(
        task_id="nopath",
        env={"WORLD": "world"},
        bash_command="echo hello $WORLD",
    )
```
`withpath` succeeds, but `nopath` fails, showing the above error.
### What you think should happen instead
Unless the user explicitly sets PATH via the `env` kwarg, airflow should populate it with whatever it finds in the enclosing environment.
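A minimal sketch of that behaviour (not the merged patch, just the idea of falling back to the parent process's `PATH` when the user-supplied `env` omits it; `user_env` is a hypothetical placeholder for the operator's `env` kwarg):
```python
import os

# env as passed by the user via the `env` kwarg, e.g. {"WORLD": "world"}
env = dict(user_env) if user_env else os.environ.copy()
if user_env and "PATH" not in env:
    # keep the parent's PATH so `bash` can still be found
    env["PATH"] = os.environ.get("PATH", "")
```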
### How to reproduce
I can reproduce it reliably, but only on this machine. I'm willing to fix this myself--since I can test it right here--but I'm filing this issue because I need a hint. Where should I start?
### Operating System
NixOS 21.11 (Porcupine)
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25330 | https://github.com/apache/airflow/pull/25331 | 900c81b87a76a9df8a3a6435a0d42348e88c5bbb | c3adf3e65d32d8145e2341989a5336c3e5269e62 | "2022-07-27T04:25:45Z" | python | "2022-07-28T17:39:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,322 | ["docs/apache-airflow-providers-amazon/connections/aws.rst", "docs/apache-airflow-providers-amazon/img/aws-base-conn-airflow.png", "docs/apache-airflow-providers-amazon/logging/s3-task-handler.rst", "docs/spelling_wordlist.txt"] | Amazon S3 for logging using IAM role for service accounts(IRSA) | ### What do you see as an issue?
I am using the latest Helm chart version (see the version below) to deploy Airflow on Amazon EKS and am trying to configure S3 for logging. We have a few docs that explain how to add logging variables through `values.yaml`, but that isn't sufficient for configuring S3 logging with IRSA. I couldn't find any other docs that explain this configuration in detail, hence I am adding the solution below.
Here is the link that I am referring to:
Amazon S3 for Logging https://github.com/apache/airflow/blob/main/docs/apache-airflow-providers-amazon/logging/s3-task-handler.rst
Airflow config
```
apiVersion: v2
name: airflow
version: 1.6.0
appVersion: 2.3.3
```
### Solving the problem
**I have managed to get S3 logging working with IAM role for service accounts(IRSA).**
# Writing logs to Amazon S3 using AWS IRSA
## Step1: Create IAM role for service account (IRSA)
Create the IRSA using `eksctl` or Terraform. The following command uses eksctl to create the IAM role and service account:
```sh
eksctl create iamserviceaccount --cluster="<EKS_CLUSTER_ID>" --name="<SERVICE_ACCOUNT_NAME>" --namespace=airflow --attach-policy-arn="<IAM_POLICY_ARN>" --approve
# e.g.,
eksctl create iamserviceaccount --cluster=airflow-eks-cluster --name=airflow-sa --namespace=airflow --attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3FullAccess --approve
```
## Step2: Update Helm Chart `values.yaml` with Service Account
Add the above service account (e.g., `airflow-sa`) to the Helm chart `values.yaml` under the following sections. We are reusing the existing service account, hence `create: false` and `name: airflow-sa`. The annotations may not be required since they are already added by **Step 1**; they are included here for readability.
```yaml
workers:
  serviceAccount:
    create: false
    name: airflow-sa
    # Annotations to add to the worker Kubernetes service account.
    annotations:
      eks.amazonaws.com/role-arn: <ENTER_IAM_ROLE_ARN_CREATED_BY_EKSCTL_COMMAND>

webserver:
  serviceAccount:
    create: false
    name: airflow-sa
    # Annotations to add to the webserver Kubernetes service account.
    annotations:
      eks.amazonaws.com/role-arn: <ENTER_IAM_ROLE_ARN_CREATED_BY_EKSCTL_COMMAND>

config:
  logging:
    remote_logging: 'True'
    logging_level: 'INFO'
    remote_base_log_folder: 's3://<ENTER_YOUR_BUCKET_NAME>/<FOLDER_PATH>'
    remote_log_conn_id: 'aws_s3_conn'  # note: this name is used in Step 3
    delete_worker_pods: 'False'
    encrypt_s3_logs: 'True'
```
## Step3: Create S3 connection in Airflow Web UI
Now for the final step: create the connection in the Airflow UI before executing the DAGs.
- Login to Airflow Web UI and Navigate to `Admin -> Connections`
- Create connection for S3 and select the options as shown in the image
<img width="861" alt="image (1)" src="https://user-images.githubusercontent.com/19464259/181126084-2a0ddf43-01a4-4abd-9031-b53fb4d8870f.png">
## Step4: Verify the logs
- Execute example DAGs
- Verify the logs in S3 bucket
- Verify the logs from Airflow UI from DAGs log
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25322 | https://github.com/apache/airflow/pull/25931 | 3326a0d493c92b15eea8cd9a874729db7b7a255c | bd3d6d3ee71839ec3628fa47294e0b3b8a6a6b9f | "2022-07-26T23:10:41Z" | python | "2022-10-10T08:40:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,313 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py"] | BaseSQLToGCSOperator parquet export format not limiting file size bug | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.8.0
### Apache Airflow version
2.3.2
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When using `PostgresToGCSOperator(..., export_format='parquet', approx_max_file_size_bytes=Y, ...)`, when a temporary file exceeds the size defined by Y, the current file is not yielded and no new chunk is created. This means that only one chunk will be uploaded, regardless of the size Y specified.
I believe [this](https://github.com/apache/airflow/blob/d876b4aa6d86f589b9957a2e69484c9e5365eba8/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L253) line of code which is responsible for verifying whether the temporary file has exceeded its size, to be the culprit, considering the call to `tmp_file_handle.tell()` is always returning 0 after a `parquet_writer.write_table(tbl)` call [[here]](https://github.com/apache/airflow/blob/d876b4aa6d86f589b9957a2e69484c9e5365eba8/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L240).
Therefore, regardless of the size of the temporary file already being bigger than the defined approximate limit Y, no new file will be created and only a single chunk will be uploaded.
### What you think should happen instead
This behaviour is erroneous: when the temporary file exceeds the size defined by Y, the operator should upload the current temporary file to GCS and then start a new temporary file for the next chunk.
A possible fix could be to use the `os` module to determine the on-disk size of the temporary file (e.g. via `os.stat`), instead of relying on `tmp_file_handle.tell()`.
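A minimal sketch of that idea, assuming the check happens where the temporary file handle is still in scope (illustrative; the merged fix may differ):
```python
import os

# Size on disk of the current temporary chunk file; tell() stays at 0 because
# the parquet writer writes through its own handle.
file_size = os.stat(tmp_file_handle.name).st_size
if file_size >= self.approx_max_file_size_bytes:
    # yield the current chunk and start a new temporary file
    ...
```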
### How to reproduce
1. Create a postgres connection on airflow with id `postgres_test_conn`.
2. Create a gcp connection on airflow with id `gcp_test_conn`.
3. In the database referenced by the `postgres_test_conn`, create a table `large_table` in the public schema, where the total amount of data in the table is big enough to exceed the 10MB limit defined in the `approx_max_file_size_bytes` parameter.
4. Create a bucket named `issue_BaseSQLToGCSOperator_bucket`, in the gcp account referenced by the `gcp_test_conn`.
5. Create the dag exemplified in the excerpt below, and manually trigger the dag to fetch all the data from `large_table`, to insert in the `issue_BaseSQLToGCSOperator_bucket`. We should expect multiple chunks to be created, but due to this bug, only 1 chunk will be uploaded with the whole data from `large_table`.
```python
import pendulum
from airflow import DAG
from airflow.providers.google.cloud.transfers.postgres_to_gcs import PostgresToGCSOperator

with DAG(
    dag_id="issue_BaseSQLToGCSOperator",
    start_date=pendulum.parse("2022-01-01"),
) as dag:
    task = PostgresToGCSOperator(
        task_id='extract_task',
        filename='uploading-{}.parquet',
        bucket="issue_BaseSQLToGCSOperator_bucket",
        export_format='parquet',
        approx_max_file_size_bytes=10_485_760,
        sql="SELECT * FROM large_table",
        postgres_conn_id='postgres_test_conn',
        gcp_conn_id='gcp_test_conn',
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25313 | https://github.com/apache/airflow/pull/25469 | d0048414a6d3bdc282cc738af0185a9a1cd63ef8 | 803c0e252fc78a424a181a34a93e689fa9aaaa09 | "2022-07-26T16:15:12Z" | python | "2022-08-03T06:06:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,297 | ["airflow/exceptions.py", "airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | on_failure_callback is not called when task is terminated externally | ### Apache Airflow version
2.2.5
### What happened
`on_failure_callback` is not called when task is terminated externally.
A similar issue was reported in [#14422](https://github.com/apache/airflow/issues/14422) and fixed in [#15172](https://github.com/apache/airflow/pull/15172).
However, the code that fixed this was changed in a later PR [#16301](https://github.com/apache/airflow/pull/16301), after which `task_instance._run_finished_callback` is no longer called when SIGTERM is received
(https://github.com/apache/airflow/pull/16301/files#diff-d80fa918cc75c4d6aa582d5e29eeb812ba21371d6977fde45a4749668b79a515L85).
### What you think should happen instead
`on_failure_callback` should be called when task fails regardless of how the task fails.
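A conceptual sketch only (simplified, not the actual Airflow source): the failure path that catches the SIGTERM-driven exception needs to end up invoking the failure callback before re-raising:
```python
try:
    self._execute_task_with_callbacks(context, test_mode)
except AirflowException as err:  # raised by the signal handler on SIGTERM
    # handle_failure() is what ultimately runs on_failure_callback
    self.handle_failure(err, test_mode, context)
    raise
```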
### How to reproduce
DAG file:
```
import datetime

import pendulum

from airflow.models import DAG
from airflow.operators.bash_operator import BashOperator

DEFAULT_ARGS = {
    'email': ['example@airflow.com']
}
TZ = pendulum.timezone("America/Los_Angeles")

test_dag = DAG(
    dag_id='test_callback_in_manually_terminated_dag',
    schedule_interval='*/10 * * * *',
    default_args=DEFAULT_ARGS,
    catchup=False,
    start_date=datetime.datetime(2022, 7, 14, 0, 0, tzinfo=TZ)
)

with test_dag:
    BashOperator(
        task_id='manually_terminated_task',
        bash_command='echo start; sleep 60',
        on_failure_callback=lambda context: print('This on_failure_callback should be called when the task fails.')
    )
```
While the task instance is running, either force quitting the scheduler or manually updating its state to None in the database will cause the task to get SIGTERM and terminate. In either case, a failure callback will not be called which does not match the behavior of previous versions of Airflow.
The stack trace is attached below and `on_failure_callback` is not called.
```
[2022-07-15, 02:02:24 UTC] {process_utils.py:124} INFO - Sending Signals.SIGTERM to group 10571. PIDs of all processes in the group: [10573, 10575, 10571]
[2022-07-15, 02:02:24 UTC] {process_utils.py:75} INFO - Sending the signal Signals.SIGTERM to group 10571
[2022-07-15, 02:02:24 UTC] {taskinstance.py:1431} ERROR - Received SIGTERM. Terminating subprocesses.
[2022-07-15, 02:02:24 UTC] {subprocess.py:99} INFO - Sending SIGTERM signal to process group
[2022-07-15, 02:02:24 UTC] {process_utils.py:70} INFO - Process psutil.Process(pid=10575, status='terminated', started='02:02:11') (10575) terminated with exit code None
[2022-07-15, 02:02:24 UTC] {taskinstance.py:1776} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.7/lib/python3.7/site-packages/airflow/operators/bash.py", line 182, in execute
cwd=self.cwd,
File "/opt/python3.7/lib/python3.7/site-packages/airflow/hooks/subprocess.py", line 87, in run_command
for raw_line in iter(self.sub_process.stdout.readline, b''):
File "/opt/python3.7/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1433, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
[2022-07-15, 02:02:24 UTC] {taskinstance.py:1289} INFO - Marking task as FAILED. dag_id=test_callback_in_manually_terminated_dag, task_id=manually_terminated_task, execution_date=20220715T015100, start_date=20220715T020211, end_date=20220715T020224
[2022-07-15, 02:02:24 UTC] {logging_mixin.py:109} WARNING - /opt/python3.7/lib/python3.7/site-packages/airflow/utils/email.py:108 PendingDeprecationWarning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
[2022-07-15, 02:02:24 UTC] {configuration.py:381} WARNING - section/key [smtp/smtp_user] not found in config
[2022-07-15, 02:02:24 UTC] {email.py:214} INFO - Email alerting: attempt 1
[2022-07-15, 02:02:24 UTC] {configuration.py:381} WARNING - section/key [smtp/smtp_user] not found in config
[2022-07-15, 02:02:24 UTC] {email.py:214} INFO - Email alerting: attempt 1
[2022-07-15, 02:02:24 UTC] {taskinstance.py:1827} ERROR - Failed to send email to: ['example@airflow.com']
...
OSError: [Errno 101] Network is unreachable
[2022-07-15, 02:02:24 UTC] {standard_task_runner.py:98} ERROR - Failed to execute job 159 for task manually_terminated_task (Task received SIGTERM signal; 10571)
[2022-07-15, 02:02:24 UTC] {process_utils.py:70} INFO - Process psutil.Process(pid=10571, status='terminated', exitcode=1, started='02:02:11') (10571) terminated with exit code 1
[2022-07-15, 02:02:24 UTC] {process_utils.py:70} INFO - Process psutil.Process(pid=10573, status='terminated', started='02:02:11') (10573) terminated with exit code None
```
### Operating System
CentOS Linux 7
### Deployment
Other Docker-based deployment
### Anything else
This is an issue in 2.2.5. However, I notice that it appears to be fixed in the main branch by PR [#21877](https://github.com/apache/airflow/pull/21877/files#diff-62f7d8a52fefdb8e05d4f040c6d3459b4a56fe46976c24f68843dbaeb5a98487R1885-R1887) although it was not intended to fix this issue. Is there a timeline for getting that PR into a release? We are happy to test it out to see if it fixes the issue once it's released.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25297 | https://github.com/apache/airflow/pull/29743 | 38b901ec3f07e6e65880b11cc432fb8ad6243629 | 671b88eb3423e86bb331eaf7829659080cbd184e | "2022-07-26T04:32:52Z" | python | "2023-02-24T23:08:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,295 | ["airflow/models/param.py", "tests/models/test_param.py"] | ParamsDict represents the class object itself, not keys and values on Task Instance Details | ### Apache Airflow version
2.3.3 (latest released)
### What happened
ParamsDict's printable representation shows the class object itself, like `<airflow.models.param.ParamsDict object at 0x7fd0eba9bb80>`, on the Task Instance Details page because the class does not define a `__repr__` method.
<img width="791" alt="image" src="https://user-images.githubusercontent.com/16971553/180902761-88b9dd9f-7102-4e49-b8b8-0282b31dda56.png">
It used to be a `dict` object, and the keys and values of Params were shown in the UI, before Params was replaced with the advanced Params in #17100.
### What you think should happen instead
It was originally shown as below when it was a `dict` object.
![image](https://user-images.githubusercontent.com/16971553/180904396-7b527877-5bc6-48d2-938f-7d338dfd79a7.png)
I think it can be fixed by adding a `__repr__` method to the class, like below.
```python
class ParamsDict(dict):
...
def __repr__(self):
return f"{self.dump()}"
```
### How to reproduce
I guess this happens on any Airflow version 2.2.0+.
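For a quick illustration of the difference (a hedged sketch; `ParamsDict` accepts a plain dict here):
```python
from airflow.models.param import ParamsDict

params = ParamsDict({"retries": 3})
print(repr(params))
# without __repr__:        <airflow.models.param.ParamsDict object at 0x...>
# with the override above: {'retries': 3}
```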
### Operating System
Linux, but it does not depend on the OS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25295 | https://github.com/apache/airflow/pull/25305 | 285c23a2f90f4c765053aedbd3f92c9f58a84d28 | df388a3d5364b748993e61b522d0b68ff8b8124a | "2022-07-26T01:51:45Z" | python | "2022-07-27T07:13:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,274 | ["airflow/providers/common/sql/hooks/sql.py", "tests/providers/common/sql/hooks/test_dbapi.py"] | Apache Airflow SqlSensor DbApiHook Error | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I'm trying to make SqlSensor work with an Oracle database. I've installed all the required providers and successfully tested the connection. When I run SqlSensor I get this error message:
`ERROR - Failed to execute job 32 for task check_exec_date (The connection type is not supported by SqlSensor. The associated hook should be a subclass of `DbApiHook`. Got OracleHook; 419)`
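A hedged workaround sketch while the providers catch up (it assumes `SqlSensor._get_hook` is where the type check from the error above happens; bypassing the check is at your own risk):
```python
from airflow.hooks.base import BaseHook
from airflow.sensors.sql import SqlSensor


class OracleSqlSensor(SqlSensor):
    def _get_hook(self):
        # Return the Oracle hook directly, bypassing the DbApiHook isinstance check.
        return BaseHook.get_connection(self.conn_id).get_hook()
```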
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-oracle==3.2.0
apache-airflow-providers-postgres==5.1.0
apache-airflow-providers-sqlite==3.0.0
### Deployment
Other
### Deployment details
Run on Windows Subsystem for Linux
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25274 | https://github.com/apache/airflow/pull/25293 | 7e295b7d992f4ed13911e593f15fd18e0d4c16f6 | b0fd105f4ade9933476470f6e247dd5fa518ffc9 | "2022-07-25T08:47:00Z" | python | "2022-07-27T22:11:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,271 | ["airflow/plugins_manager.py", "airflow/utils/entry_points.py", "tests/plugins/test_plugins_manager.py", "tests/utils/test_entry_points.py", "tests/www/views/test_views.py"] | Version 2.3.3 breaks "Plugins as Python packages" feature | ### Apache Airflow version
2.3.3 (latest released)
### What happened
In 2.3.3
If I use the https://airflow.apache.org/docs/apache-airflow/stable/plugins.html#plugins-as-python-packages feature, then I see this error:
short:
`ValueError: The name 'airs' is already registered for this blueprint. Use 'name=' to provide a unique name.`
long:
> i'm trying to reproduce it...
If I don't use it (working around it via AIRFLOW__CORE__PLUGINS_FOLDER), the error doesn't occur.
It didn't happen in 2.3.2 and earlier.
### What you think should happen instead
It looks like plugins are imported multiple times when they are defined as Python packages.
Perhaps Flask's major version change is the main cause.
Presumably, in Flask 1.x, duplicate registration of a blueprint was quietly filtered out, but in 2.x it was changed to raise an error. (I am trying to find out whether this hypothesis is correct.)
Anyway, using the latest version of FAB is important; we will have to adapt to this change, so plugins will have to be imported only once regardless of how they are defined.
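A minimal standalone illustration of the Flask behaviour change suspected above (a sketch using plain Flask, not Airflow):
```python
from flask import Blueprint, Flask

bp = Blueprint("airs", __name__)
app = Flask(__name__)
app.register_blueprint(bp)
# On recent Flask 2.x this second registration raises:
# ValueError: The name 'airs' is already registered for this blueprint.
#             Use 'name=' to provide a unique name.
app.register_blueprint(bp)
```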
### How to reproduce
> It was reproduced in the environment used at work, but it is difficult to disclose or explain it.
> I'm working to reproduce it with the breeze command, and I open the issue first with the belief that it's not just me.
### Operating System
CentOS Linux release 7.9.2009 (Core)
### Versions of Apache Airflow Providers
```sh
$ SHIV_INTERPRETER=1 airsflow -m pip freeze | grep apache-
apache-airflow==2.3.3
apache-airflow-providers-apache-hive==3.1.0
apache-airflow-providers-apache-spark==2.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.1.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sqlite==3.1.0
```
but I think these are irrelevant.
### Deployment
Other 3rd-party Helm chart
### Deployment details
docker image based on centos7, python 3.9.10 interpreter, self-written helm2 chart ....
... but I think these are irrelevant.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25271 | https://github.com/apache/airflow/pull/25296 | cd14f3f65ad5011058ab53f2119198d6c082e82c | c30dc5e64d7229cbf8e9fbe84cfa790dfef5fb8c | "2022-07-25T07:11:29Z" | python | "2022-08-03T13:01:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,241 | ["airflow/www/views.py", "tests/www/views/test_views_grid.py"] | Add has_dataset_outlets in /grid_data | Return `has_dataset_outlets` in /grid_data so we can know whether to check for downstream dataset events in grid view.
Also: add `operator`
Also be mindful of performance on those endpoints (e.g do things in a bulk query), and it should be part of the acceptance criteria. | https://github.com/apache/airflow/issues/25241 | https://github.com/apache/airflow/pull/25323 | e994f2b0201ca9dfa3397d22b5ac9d10a11a8931 | d2df9fe7860d1e795040e40723828c192aca68be | "2022-07-22T19:28:28Z" | python | "2022-07-28T10:34:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,240 | ["airflow/www/forms.py", "tests/www/views/test_views_connection.py"] | Strip white spaces from values entered into fields in Airflow UI Connections form | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I accidentally (and then intentionally) added leading and trailing white spaces while entering connection parameters in the Airflow UI Connections form. What followed was an error message that was not very helpful in tracking down the user's input error.
### What you think should happen instead
Ideally, there should be frontend or backend logic that strips accidental leading or trailing white spaces when adding connection parameters in Airflow.
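A hedged sketch of the kind of fix (a WTForms `strip` filter on the form's free-text fields; the field name here is illustrative):
```python
from wtforms import StringField


def strip_whitespace(value):
    return value.strip() if isinstance(value, str) else value


# e.g. applied to the connection form's text fields
host = StringField("Host", filters=[strip_whitespace])
```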
### How to reproduce
Intentionally add leading or trailing white spaces while adding Connections parameters.
<img width="981" alt="Screenshot 2022-07-22 at 18 49 54" src="https://user-images.githubusercontent.com/9834450/180497315-0898d803-c104-4d93-b464-c0b33a466b4d.png">
### Operating System
Mac OS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25240 | https://github.com/apache/airflow/pull/32292 | 410d0c0f86aaec71e2c0050f5adbc53fb7b441e7 | 394cedb01abd6539f6334a40757bf186325eb1dd | "2022-07-22T18:02:47Z" | python | "2023-07-11T20:04:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,210 | ["airflow/datasets/manager.py", "airflow/models/dataset.py", "tests/datasets/test_manager.py", "tests/models/test_taskinstance.py"] | Many tasks updating dataset at once causes some of them to fail | ### Apache Airflow version
main (development)
### What happened
I have 16 dags which all update the same dataset. They're set to finish at the same time (when the seconds on the clock are 00). About three quarters of them behave as expected, but the other quarter fails with errors like:
```
[2022-07-21, 06:06:00 UTC] {standard_task_runner.py:97} ERROR - Failed to execute job 8 for task increment_source ((psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "dataset_dag_run_queue_pkey"
DETAIL: Key (dataset_id, target_dag_id)=(1, simple_dataset_sink) already exists.
[SQL: INSERT INTO dataset_dag_run_queue (dataset_id, target_dag_id, created_at) VALUES (%(dataset_id)s, %(target_dag_id)s, %(created_at)s)]
[parameters: {'dataset_id': 1, 'target_dag_id': 'simple_dataset_sink', 'created_at': datetime.datetime(2022, 7, 21, 6, 6, 0, 131730, tzinfo=Timezone('UTC'))}]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 375)
```
I've prepared a gist with the details: https://gist.github.com/MatrixManAtYrService/b5e58be0949eab9180608d0760288d4d
### What you think should happen instead
All dags should succeed
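One hedged sketch of how the queue insert could tolerate concurrent producers (assuming the `DatasetDagRunQueue` model backs the `dataset_dag_run_queue` table from the error, and that `dataset_id`, `target_dag_id`, and `session` are in scope; the merged fix may take a different route):
```python
from sqlalchemy.dialects.postgresql import insert

from airflow.models.dataset import DatasetDagRunQueue

# ON CONFLICT DO NOTHING lets many producers enqueue the same (dataset, dag) pair safely.
stmt = (
    insert(DatasetDagRunQueue)
    .values(dataset_id=dataset_id, target_dag_id=target_dag_id)
    .on_conflict_do_nothing()
)
session.execute(stmt)
```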
### How to reproduce
See this gist: https://gist.github.com/MatrixManAtYrService/b5e58be0949eab9180608d0760288d4d
Summary: Unpause all of the dags which we expect to collide, wait two minutes. Some will have collided.
### Operating System
docker/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start` targeting commit: cff7d9194f549d801947f47dfce4b5d6870bfaaa
be sure to have `pause` in requirements.txt
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25210 | https://github.com/apache/airflow/pull/26103 | a2db8fcb7df1a266e82e17b937c9c1cf01a16a42 | 4dd628c26697d759aebb81a7ac2fe85a79194328 | "2022-07-21T06:28:32Z" | python | "2022-09-01T20:28:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,200 | ["airflow/models/baseoperator.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "airflow/ti_deps/dep_context.py", "airflow/ti_deps/deps/trigger_rule_dep.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py", "tests/ti_deps/deps/test_trigger_rule_dep.py"] | DAG Run fails when chaining multiple empty mapped tasks | ### Apache Airflow version
2.3.3 (latest released)
### What happened
On Kubernetes Executor and Local Executor (others not tested), a significant fraction of the DAG runs of a DAG that has two consecutive mapped tasks being passed an empty list are marked as failed, even though all tasks either succeed or are skipped.
![image](https://user-images.githubusercontent.com/13177948/180075030-705b3a15-c554-49c1-8470-ecd10ee1d2dc.png)
### What you think should happen instead
The DAG Run should be marked success.
### How to reproduce
Run the following DAG on Kubernetes Executor or Local Executor.
The real world version of this DAG has several mapped tasks that all point to the same list, and that list is frequently empty. I have made a minimal reproducible example.
```py
from datetime import datetime

from airflow import DAG
from airflow.decorators import task

with DAG(dag_id="break_mapping", start_date=datetime(2022, 3, 4)) as dag:

    @task
    def add_one(x: int):
        return x + 1

    @task
    def say_hi():
        print("Hi")

    added_values = add_one.expand(x=[])
    added_more_values = add_one.expand(x=[])
    say_hi() >> added_values
    added_values >> added_more_values
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!4.0.0
apache-airflow-providers-cncf-kubernetes==1!4.1.0
apache-airflow-providers-elasticsearch==1!4.0.0
apache-airflow-providers-ftp==1!3.0.0
apache-airflow-providers-google==1!8.1.0
apache-airflow-providers-http==1!3.0.0
apache-airflow-providers-imap==1!3.0.0
apache-airflow-providers-microsoft-azure==1!4.0.0
apache-airflow-providers-mysql==1!3.0.0
apache-airflow-providers-postgres==1!5.0.0
apache-airflow-providers-redis==1!3.0.0
apache-airflow-providers-slack==1!5.0.0
apache-airflow-providers-sqlite==1!3.0.0
apache-airflow-providers-ssh==1!3.0.0
```
### Deployment
Astronomer
### Deployment details
Local was tested on docker compose (from astro-cli)
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25200 | https://github.com/apache/airflow/pull/25995 | 1e19807c7ea0d7da11b224658cd9a6e3e7a14bc5 | 5697e9fdfa9d5af2d48f7037c31972c2db1f4397 | "2022-07-20T20:33:42Z" | python | "2022-09-01T12:03:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,179 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/apache/livy/operators/livy.py", "airflow/providers/apache/livy/sensors/livy.py", "tests/providers/apache/livy/hooks/test_livy.py"] | Add auth_type to LivyHook | ### Apache Airflow Provider(s)
apache-livy
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-livy==3.0.0
### Apache Airflow version
2.3.3 (latest released)
### Operating System
Ubuntu 18.04
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### What happened
This is a feature request as opposed to an issue.
I want to use the `LivyHook` to communicate with a Kerberized cluster.
As such, I am using `requests_kerberos.HTTPKerberosAuth` as the authentication type.
Currently, I am implementing this as follows:
```python
from airflow.providers.apache.livy.hooks.livy import LivyHook as NativeHook
from requests_kerberos import HTTPKerberosAuth as NativeAuth

class HTTPKerberosAuth(NativeAuth):
    def __init__(self, *ignore_args, **kwargs):
        super().__init__(**kwargs)


class LivyHook(NativeHook):
    def __init__(self, auth_type=HTTPKerberosAuth, **kwargs):
        super().__init__(**kwargs)
        self.auth_type = auth_type
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25179 | https://github.com/apache/airflow/pull/25183 | ae7bf474109410fa838ab2728ae6d581cdd41808 | 7d3e799f7e012d2d5c1fe24ce2bea01e68a5a193 | "2022-07-20T10:09:03Z" | python | "2022-08-07T13:49:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,165 | ["airflow/decorators/base.py", "tests/decorators/test_mapped.py", "tests/utils/test_task_group.py"] | Dynamic Tasks inside of TaskGroup do not have group_id prepended to task_id | ### Apache Airflow version
2.3.3 (latest released)
### What happened
As the title states, if you have dynamically mapped tasks inside of a `TaskGroup`, those tasks do not get the `group_id` prepended to their respective `task_id`s. This causes at least a couple of undesirable side effects:
1. Task names are truncated in Grid/Graph* View. The tasks below are named `plus_one` and `plus_two`:
![Screenshot from 2022-07-19 13-29-05](https://user-images.githubusercontent.com/7269927/179826453-a4293c14-2a83-4739-acf2-8b378e4e85e9.png)
![Screenshot from 2022-07-19 13-47-47](https://user-images.githubusercontent.com/7269927/179826442-b9e3d24d-52ff-49fc-a8cc-fe1cb5143bcb.png)
Presumably this is because the UI normally strips off the `group_id` prefix.
\* Graph View was very inconsistent in my experience. Sometimes the names are truncated, and sometimes they render correctly. I haven't figured out the pattern behind this behavior.
2. Duplicate `task_id`s between groups result in an `airflow.exceptions.DuplicateTaskIdFound`, even if the `group_id` would normally disambiguate them.
### What you think should happen instead
These dynamic tasks inside of a group should have the `group_id` prepended for consistent behavior.
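Using the repro DAG below, a hedged way to see the expected vs. actual task ids:
```python
# With the test_dag() defined in "How to reproduce" below:
d = test_dag()
print(sorted(d.task_ids))
# actual (buggy): ['plus_one', 'plus_two']
# expected:       ['ggg.plus_two', 'group.plus_one']
```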
### How to reproduce
```
#!/usr/bin/env python3
import datetime

from airflow.decorators import dag, task
from airflow.utils.task_group import TaskGroup


@dag(
    start_date=datetime.datetime(2022, 7, 19),
    schedule_interval=None,
)
def test_dag():
    with TaskGroup(group_id='group'):

        @task
        def plus_one(x: int):
            return x + 1

        plus_one.expand(x=[1, 2, 3])

    with TaskGroup(group_id='ggg'):

        @task
        def plus_two(x: int):
            return x + 2

        plus_two.expand(x=[1, 2, 3])


dag = test_dag()

if __name__ == '__main__':
    dag.cli()
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
Possibly related: #12309
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25165 | https://github.com/apache/airflow/pull/26081 | 6a8f0167436b8b582aeb92a93d3f69d006b36f7b | 9c4ab100e5b069c86bd00bb7860794df0e32fc2e | "2022-07-19T18:58:28Z" | python | "2022-09-01T08:46:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,163 | ["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"] | Common-SQL Operators Various Bugs | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
`apache-airflow-providers-common-sql==1.0.0`
### Apache Airflow version
2.3.3 (latest released)
### Operating System
macOS Monterey 12.3.1
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
- `SQLTableCheckOperator` builds multiple checks in such a way that if two or more checks are given, and one is not a fully aggregated statement, then the SQL fails as it is missing a `GROUP BY` clause.
- `SQLColumnCheckOperator` provides only the last SQL query built from the columns, so when a check fails, it will only give the correct SQL in the exception statement by coincidence.
### What you think should happen instead
- Multiple checks should not need a `GROUP BY` clause
- Either the correct SQL statement, or no SQL statement, should be returned in the exception message.
### How to reproduce
For the `SQLTableCheckOperator`, using the operator like so:
```
forestfire_costs_table_checks = SQLTableCheckOperator(
    task_id="forestfire_costs_table_checks",
    table=SNOWFLAKE_FORESTFIRE_COST_TABLE,
    checks={
        "row_count_check": {"check_statement": "COUNT(*) = 9"},
        "total_cost_check": {"check_statement": "land_damage_cost + property_damage_cost + lost_profits_cost = total_cost"},
    },
)
```
For the `SQLColumnCheckOperator`, using the operator like so:
```
cost_column_checks = SQLColumnCheckOperator(
    task_id="cost_column_checks",
    table=SNOWFLAKE_COST_TABLE,
    column_mapping={
        "ID": {"null_check": {"equal_to": 0}},
        "LAND_DAMAGE_COST": {"min": {"geq_to": 0}},
        "PROPERTY_DAMAGE_COST": {"min": {"geq_to": 0}},
        "LOST_PROFITS_COST": {"min": {"geq_to": 0}},
    },
)
```
and ensuring that any of the `ID`, `LAND_DAMAGE_COST`, or `PROPERTY_DAMAGE_COST` checks fail.
An example DAG with the correct environment and data can be found [here](https://github.com/astronomer/airflow-data-quality-demo/blob/main/dags/snowflake_examples/complex_snowflake_transform.py).
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25163 | https://github.com/apache/airflow/pull/25164 | d66e427c4d21bc479caa629299a786ca83747994 | be7cb1e837b875f44fcf7903329755245dd02dc3 | "2022-07-19T18:18:01Z" | python | "2022-07-22T14:01:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,149 | ["airflow/models/dagbag.py", "airflow/www/security.py", "tests/models/test_dagbag.py", "tests/www/views/test_views_home.py"] | DAG.access_control can't sync when clean access_control | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I changed my DAG from
```python
with DAG(
    'test',
    access_control={'team': {'can_edit', 'can_read'}},
) as dag:
    ...
```
to
```python
with DAG(
    'test',
) as dag:
    ...
```
After removing the `access_control` argument, the scheduler can't sync permissions to the DB.
If we write code like this,
```python
with DAG(
    'test',
    access_control={'team': {}},
) as dag:
    ...
```
It works.
### What you think should happen instead
It should clear the permissions on the `test` DAG for the `team` role.
I think permission syncing should behave consistently: if we pass the `access_control` argument, permissions assigned in the web UI are cleared when we update the DAG file.
### How to reproduce
_No response_
### Operating System
CentOS Linux release 7.9.2009 (Core)
### Versions of Apache Airflow Providers
```
airflow-code-editor==5.2.2
apache-airflow==2.3.3
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-psrp==2.0.0
apache-airflow-providers-microsoft-winrm==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-samba==4.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
```
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25149 | https://github.com/apache/airflow/pull/30340 | 97ad7cee443c7f4ee6c0fbaabcc73de16f99a5e5 | 2c0c8b8bfb5287e10dc40b73f326bbf9a0437bb1 | "2022-07-19T09:37:48Z" | python | "2023-04-26T14:11:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,103 | ["airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_variable_endpoint.py"] | API `variables/{variable_key}` request fails if key has character `/` | ### Apache Airflow version
2.3.2
### What happened
Created a variable e.g. `a/variable` and couldn't get or delete it
### What you think should happen instead
I shouldn't have been allowed to create a variable with `/` in its key, or the GET and DELETE should work
### How to reproduce
![image](https://user-images.githubusercontent.com/98349137/179311482-e3a74683-d855-4013-b27a-01dfae7db0ff.png)
![image](https://user-images.githubusercontent.com/98349137/179311639-7640b0b5-38c6-4002-a04c-5bcd8f8a0784.png)
```
DELETE /variables/{variable_key}
GET /variables/{variable_key}
```
Create a variable with `/` in its key, and then try to GET it. The GET will 404, even after HTML escape; DELETE also fails.
`GET /variables/` works just fine
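A rough reproduction sketch against the stable REST API; the base URL and credentials below are assumptions for a local deployment:
```python
# Assumptions: a local webserver at localhost:8080 with basic auth enabled;
# adjust the base URL and credentials for your deployment.
import requests
from urllib.parse import quote

base = "http://localhost:8080/api/v1"
auth = ("admin", "admin")

key = "a/variable"
requests.post(f"{base}/variables", json={"key": key, "value": "1"}, auth=auth)

# Even with the slash percent-encoded, the GET comes back 404 in the reported setup.
resp = requests.get(f"{base}/variables/{quote(key, safe='')}", auth=auth)
print(resp.status_code)
```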
### Operating System
astro
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25103 | https://github.com/apache/airflow/pull/25774 | 98aac5dc282b139f0e726aac512b04a6693ba83d | a1beede41fb299b215f73f987a572c34f628de36 | "2022-07-15T21:22:11Z" | python | "2022-08-18T06:08:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,095 | ["airflow/models/taskinstance.py", "airflow/models/taskreschedule.py", "airflow/serialization/serialized_objects.py", "airflow/ti_deps/deps/ready_to_reschedule.py", "tests/models/test_taskinstance.py", "tests/serialization/test_dag_serialization.py", "tests/ti_deps/deps/test_ready_to_reschedule_dep.py"] | Dynamically mapped sensor with mode='reschedule' fails with violated foreign key constraint | ### Apache Airflow version
2.3.3 (latest released)
### What happened
If you are using [Dynamic Task Mapping](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dynamic-task-mapping.html) to map a Sensor with `.partial(mode='reschedule')`, and if that sensor fails its poke condition even once, the whole sensor task will immediately die with an error like:
```
[2022-07-14, 10:45:05 CDT] {standard_task_runner.py:92} ERROR - Failed to execute job 19 for task check_reschedule ((sqlite3.IntegrityError) FOREIGN KEY constraint failed
[SQL: INSERT INTO task_reschedule (task_id, dag_id, run_id, map_index, try_number, start_date, end_date, duration, reschedule_date) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: ('check_reschedule', 'test_dag', 'manual__2022-07-14T20:44:02.708517+00:00', -1, 1, '2022-07-14 20:45:05.874988', '2022-07-14 20:45:05.900895', 0.025907, '2022-07-14 20:45:10.898820')]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 2973372)
```
A similar error arises when using a Postgres backend:
```
[2022-07-14, 11:09:22 CDT] {standard_task_runner.py:92} ERROR - Failed to execute job 17 for task check_reschedule ((psycopg2.errors.ForeignKeyViolation) insert or update on table "task_reschedule" violates foreign key constraint "task_reschedule_ti_fkey"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(test_dag, check_reschedule, manual__2022-07-14T21:08:13.462782+00:00, -1) is not present in table "task_instance".
[SQL: INSERT INTO task_reschedule (task_id, dag_id, run_id, map_index, try_number, start_date, end_date, duration, reschedule_date) VALUES (%(task_id)s, %(dag_id)s, %(run_id)s, %(map_index)s, %(try_number)s, %(start_date)s, %(end_date)s, %(duration)s, %(reschedule_date)s) RETURNING task_reschedule.id]
[parameters: {'task_id': 'check_reschedule', 'dag_id': 'test_dag', 'run_id': 'manual__2022-07-14T21:08:13.462782+00:00', 'map_index': -1, 'try_number': 1, 'start_date': datetime.datetime(2022, 7, 14, 21, 9, 22, 417922, tzinfo=Timezone('UTC')), 'end_date': datetime.datetime(2022, 7, 14, 21, 9, 22, 464495, tzinfo=Timezone('UTC')), 'duration': 0.046573, 'reschedule_date': datetime.datetime(2022, 7, 14, 21, 9, 27, 458623, tzinfo=Timezone('UTC'))}]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 2983150)
```
`mode='poke'` seems to behave as expected. As far as I can tell, this affects all Sensor types.
### What you think should happen instead
This combination of features should work without error.
### How to reproduce
```
#!/usr/bin/env python3
import datetime
from airflow.decorators import dag, task
from airflow.sensors.bash import BashSensor
@dag(
start_date=datetime.datetime(2022, 7, 14),
schedule_interval=None,
)
def test_dag():
@task
def get_tasks():
return ['(($RANDOM % 2 == 0))'] * 10
tasks = get_tasks()
BashSensor.partial(task_id='check_poke', mode='poke', poke_interval=5).expand(bash_command=tasks)
BashSensor.partial(task_id='check_reschedule', mode='reschedule', poke_interval=5).expand(bash_command=tasks)
dag = test_dag()
if __name__ == '__main__':
dag.cli()
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25095 | https://github.com/apache/airflow/pull/25594 | 84718f92334b7e43607ab617ef31f3ffc4257635 | 5f3733ea310b53a0a90c660dc94dd6e1ad5755b7 | "2022-07-15T13:35:48Z" | python | "2022-08-11T07:30:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,092 | ["airflow/providers/microsoft/mssql/hooks/mssql.py", "tests/providers/microsoft/mssql/hooks/test_mssql.py"] | MsSqlHook.get_sqlalchemy_engine uses pyodbc instead of pymssql | ### Apache Airflow Provider(s)
microsoft-mssql
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-mssql==2.0.1
### Apache Airflow version
2.2.2
### Operating System
Ubuntu 20.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
`MsSqlHook.get_sqlalchemy_engine` uses the default mssql driver: `pyodbc` instead of `pymssql`.
- If pyodbc is installed: we get `sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError)`
- Otherwise we get: `ModuleNotFoundError`
PS: Looking at the code, this should still apply up to provider version 3.0.0 (latest version).
### What you think should happen instead
The default driver used by `sqlalchemy.create_engine` for mssql is `pyodbc`.
To use `pymssql` with `create_engine`, the URI needs to start with `mssql+pymssql://` (currently the hook uses `DBApiHook.get_uri`, which starts with `mssql://`).
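A minimal workaround sketch (assuming `pymssql` is installed; it just rewrites the URI prefix before creating the engine):
```python
# Workaround sketch: force the pymssql dialect instead of SQLAlchemy's pyodbc default.
from sqlalchemy import create_engine
from airflow.providers.microsoft.mssql.hooks.mssql import MsSqlHook

hook = MsSqlHook()  # uses the default mssql connection
uri = hook.get_uri().replace("mssql://", "mssql+pymssql://", 1)
engine = create_engine(uri)
```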
### How to reproduce
```python
>>> from contextlib import closing
>>> from airflow.providers.microsoft.mssql.hooks.mssql import MsSqlHook
>>>
>>> hook = MsSqlHook()
>>> with closing(hook.get_sqlalchemy_engine().connect()) as c:
>>> with closing(c.execute("SELECT SUSER_SNAME()")) as res:
>>> r = res.fetchone()
```
Will raise an exception due to the wrong driver being used.
### Anything else
Demo for sqlalchemy default mssql driver choice:
```bash
# pip install sqlalchemy
... Successfully installed sqlalchemy-1.4.39
# pip install pymssql
... Successfully installed pymssql-2.2.5
```
```python
>>> from sqlalchemy import create_engine
>>> create_engine("mssql://test:pwd@test:1433")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 2, in create_engine
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/deprecations.py", line 309, in warned
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/create.py", line 560, in create_engine
dbapi = dialect_cls.dbapi(**dbapi_args)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/connectors/pyodbc.py", line 43, in dbapi
return __import__("pyodbc")
ModuleNotFoundError: No module named 'pyodbc'
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25092 | https://github.com/apache/airflow/pull/25185 | a01cc5b0b8e4ce3b24970d763e4adccfb4e69f09 | df5a54d21d6991d6cae05c38e1562da2196e76aa | "2022-07-15T12:42:02Z" | python | "2022-08-05T15:41:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,090 | ["airflow/jobs/scheduler_job.py", "airflow/models/dag.py", "airflow/timetables/base.py", "airflow/timetables/simple.py", "airflow/www/views.py", "newsfragments/25090.significant.rst"] | More natural sorting of DAG runs in the grid view | ### Apache Airflow version
2.3.2
### What happened
Dag with schedule to run once every hour.
Dag was started manually at 12:44, lets call this run 1
At 13:00 the scheduled run started, lets call this run 2. It appears before run 1 in the grid view.
See attached screenshot
![image](https://user-images.githubusercontent.com/89977373/179212616-4113a1d5-ea61-4e0b-9c3f-3e4eba8318bc.png)
### What you think should happen instead
Dags in grid view should appear in the order they are started.
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow==2.3.2
apache-airflow-client==2.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.0.2
apache-airflow-providers-docker==3.0.0
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-postgres==5.0.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25090 | https://github.com/apache/airflow/pull/25633 | a1beede41fb299b215f73f987a572c34f628de36 | 36eea1c8e05a6791d144e74f4497855e35baeaac | "2022-07-15T11:16:35Z" | python | "2022-08-18T06:28:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,036 | ["airflow/example_dags/example_datasets.py", "airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Test that dataset not updated when task skipped | the AIP specifies that when a task is skipped, that we don’t mark the dataset as “updated”. we should simply add a test that verifies that this is what happens (and make changes if necessary)
@blag, i tried to make this an issue so i could assign to you but can't. anyway, can reference in PR with `closes` | https://github.com/apache/airflow/issues/25036 | https://github.com/apache/airflow/pull/25086 | 808035e00aaf59a8012c50903a09d3f50bd92ca4 | f0c9ac9da6db3a00668743adc9b55329ec567066 | "2022-07-13T19:31:16Z" | python | "2022-07-19T03:43:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,033 | ["airflow/models/dag.py", "airflow/www/templates/airflow/dag.html", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "tests/models/test_dag.py", "tests/www/views/test_views_base.py"] | next run should show deps fulfillment e.g. 0 of 3 | on dags page (i.e. the home page) we have a "next run" column. for dataset-driven dags, since we can't know for certain when it will be, we could instead show how many deps are fulfilled, e.g. `0 of 1` and perhaps make it a link to the datasets that the dag is dependened on.
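As a rough sketch only (reusing the model names from the sample query below; treat the exact columns, plus `session`, `func`, `case` and `and_`, as assumptions taken from that query), the fulfilled vs. total counts could come from something like:
```python
# Sketch only: count fulfilled vs. total dataset deps per dag, reusing DatasetDagRef/DDRQ
# from the sample query below; exact column names are assumptions.
fulfilled_vs_total = (
    session.query(
        DatasetDagRef.dag_id,
        func.count(DatasetDagRef.dataset_id).label('total_deps'),
        func.sum(case((DDRQ.target_dag_id.is_not(None), 1), else_=0)).label('fulfilled_deps'),
    )
    .join(
        DDRQ,
        and_(
            DDRQ.dataset_id == DatasetDagRef.dataset_id,
            DDRQ.target_dag_id == DatasetDagRef.dag_id,
        ),
        isouter=True,
    )
    .group_by(DatasetDagRef.dag_id)
    .all()
)
# each row could render as "fulfilled_deps of total_deps", e.g. "0 of 3"
```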
Here's a sample query that returns the DAGs which _are_ ready to run, but for this feature you'd need to get the number of deps fulfilled and the total number of deps.
```python
# these dag ids are triggered by datasets, and they are ready to go.
dataset_triggered_dag_info_list = {
x.dag_id: (x.first_event_time, x.last_event_time)
for x in session.query(
DatasetDagRef.dag_id,
func.max(DDRQ.created_at).label('last_event_time'),
func.max(DDRQ.created_at).label('first_event_time'),
)
.join(
DDRQ,
and_(
DDRQ.dataset_id == DatasetDagRef.dataset_id,
DDRQ.target_dag_id == DatasetDagRef.dag_id,
),
isouter=True,
)
.group_by(DatasetDagRef.dag_id)
.having(func.count() == func.sum(case((DDRQ.target_dag_id.is_not(None), 1), else_=0)))
.all()
}
``` | https://github.com/apache/airflow/issues/25033 | https://github.com/apache/airflow/pull/25141 | 47b72056c46931aef09d63d6d80fbdd3d9128b09 | 03a81b66de408631147f9353de6ffd3c1df45dbf | "2022-07-13T19:19:26Z" | python | "2022-07-21T18:28:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,019 | ["airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/index.rst", "generated/provider_dependencies.json", "tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py"] | update watchtower version in amazon provider | ### Description
There is a constraint limiting it to version 2:
https://github.com/apache/airflow/blob/809d95ec06447c9579383d15136190c0963b3c1b/airflow/providers/amazon/provider.yaml#L48
### Use case/motivation
Using an up-to-date version of the library.
### Related issues
Didn't find any.
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25019 | https://github.com/apache/airflow/pull/34747 | 7764a51ac9b021a77a57707bc7e750168e9e0da0 | c01abd1c2eed8f60fec5b9d6cc0232b54efa52de | "2022-07-13T09:37:45Z" | python | "2023-10-06T14:35:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,996 | ["airflow/models/dag.py", "airflow/models/taskmixin.py", "tests/models/test_dag.py"] | Airflow doesn't set default task group while calling dag.add_tasks | ### Apache Airflow version
2.3.3 (latest released)
### What happened
Airflow sets the default task group while creating an operator if the dag parameter is set:
https://github.com/apache/airflow/blob/main/airflow/models/baseoperator.py#L236
However, it doesn't set the default task group when adding a task using the dag.add_task function:
https://github.com/apache/airflow/blob/main/airflow/models/dag.py#L2179
This breaks the code at the line below, resulting in the error "Cannot check for mapped dependants when not attached to a DAG":
https://github.com/apache/airflow/blob/main/airflow/models/taskmixin.py#L312
Please also add the lines below to the dag.add_task function:
```python
if dag:
    task_group = TaskGroupContext.get_current_task_group(dag)
    if task_group:
        task_id = task_group.child_id(task_id)
```
### What you think should happen instead
It should not break when a task is added using dag.add_task.
### How to reproduce
Don't pass the dag parameter when creating the operator object; add the task to the DAG using dag.add_task instead.
### Operating System
Any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24996 | https://github.com/apache/airflow/pull/25000 | 45e5150714e0a5a8e82e3fa6d0b337b92cbeb067 | ce0a6e51c2d4ee87e008e28897b2450778b51003 | "2022-07-12T11:28:04Z" | python | "2022-08-05T15:17:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,953 | ["airflow/providers/oracle/example_dags/__init__.py", "airflow/providers/oracle/example_dags/example_oracle.py", "docs/apache-airflow-providers-oracle/index.rst", "docs/apache-airflow-providers-oracle/operators/index.rst"] | oracle hook _map_param() incorrect | ### Apache Airflow Provider(s)
oracle
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.3 (latest released)
### Operating System
OEL 7.6
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
[_map_param()](https://github.com/apache/airflow/blob/main/airflow/providers/oracle/hooks/oracle.py#L36) function from Oracle hook has an incorrect check of types:
```
PARAM_TYPES = {bool, float, int, str}
def _map_param(value):
if value in PARAM_TYPES:
# In this branch, value is a Python type; calling it produces
# an instance of the type which is understood by the Oracle driver
# in the out parameter mapping mechanism.
value = value()
return value
```
`if value in PARAM_TYPES` never evaluates to True for any of the mentioned value types:
```
PARAM_TYPES = {bool, float, int, str}
>>> "abc" in PARAM_TYPES
False
>>> 123 in PARAM_TYPES
False
>>> True in PARAM_TYPES
False
>>> float(5.5) in PARAM_TYPES
False
```
The correct condition would be `if type(value) in PARAM_TYPES`
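A quick plain-Python check of that condition (no Airflow needed):
```python
# Plain-Python check of the proposed condition.
PARAM_TYPES = {bool, float, int, str}
print(type("abc") in PARAM_TYPES)        # True
print(type(123) in PARAM_TYPES)          # True
print(type(5.5) in PARAM_TYPES)          # True
print(type(lambda: 123) in PARAM_TYPES)  # False -> a callable would get invoked
```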
**But**, if we only fix that condition, then in the positive case (`type(value) in PARAM_TYPES` is True) one more issue occurs with `value = value()`:
instances of `bool`, `float`, `int` or `str` are not callable, e.g.
`TypeError: 'int' object is not callable`
This line is probably here to allow passing a Python callable as a procedure parameter in tasks, isn't it? If so, the check needs to be corrected to:
`if type(value) not in PARAM_TYPES`
Here is the full fix:
```
def _map_param(value):
if type(value) not in PARAM_TYPES:
value = value()
return value
```
The following cases were tested:
```
def oracle_callable(n=123):
return n
def oracle_pass():
return 123
task1 = OracleStoredProcedureOperator( task_id='task1', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
parameters={'var':oracle_callable} )
task2 = OracleStoredProcedureOperator( task_id='task2', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
parameters={'var':oracle_callable()} )
task3 = OracleStoredProcedureOperator( task_id='task3', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
parameters={'var':oracle_callable(456)} )
task4 = OracleStoredProcedureOperator( task_id='task4', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
parameters={'var':oracle_pass} )
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24953 | https://github.com/apache/airflow/pull/30979 | 130b6763db364426d1d794641b256d7f2ce0b93d | edebfe3f2f2c7fc2b6b345c6bc5f3a82e7d47639 | "2022-07-10T23:01:34Z" | python | "2023-05-09T18:32:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,938 | ["airflow/providers/databricks/operators/databricks.py"] | Add support for dynamic databricks connection id | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==3.0.0 # Latest
### Apache Airflow version
2.3.2 (latest released)
### Operating System
Linux
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
_No response_
### What you think should happen instead
### Motivation
In a single airflow deployment, we are looking to have the ability to support multiple databricks connections (`databricks_conn_id`) at runtime. This can be helpful for running the same DAG against multiple testing lanes (a.k.a. different development/testing Databricks environments).
### Potential Solution
We can pass the connection id via the Airflow DAG run configuration at runtime. For this, `databricks_conn_id` is required to be a templated field.
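For illustration, a sketch of the intended usage; note it only becomes meaningful once `databricks_conn_id` is added to the operator's `template_fields`, which is what this request asks for (the job payload is arbitrary):
```python
# Sketch of the desired usage inside a DAG definition; the conf keys and cluster
# spec below are made-up examples.
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

run_job = DatabricksSubmitRunOperator(
    task_id="run_job",
    databricks_conn_id="{{ dag_run.conf.get('databricks_conn_id', 'databricks_default') }}",
    json={
        "new_cluster": {"spark_version": "10.4.x-scala2.12", "num_workers": 1, "node_type_id": "i3.xlarge"},
        "notebook_task": {"notebook_path": "/Shared/example"},
    },
)
```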
### How to reproduce
Minor enhancement/new feature
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24938 | https://github.com/apache/airflow/pull/24945 | 7fc5e0b24a8938906ad23eaa1262c9fb74ee2df1 | 8dfe7bf5ff090a675353a49da21407dffe2fc15e | "2022-07-09T07:55:53Z" | python | "2022-07-11T14:47:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,936 | ["airflow/example_dags/example_dag_decorator.py", "airflow/example_dags/example_sla_dag.py", "airflow/models/dag.py", "docs/spelling_wordlist.txt"] | Type hints for taskflow @dag decorator | ### Description
I find no type hints when writing a DAG using the TaskFlow API. The `dag` and `task` decorators are simple wrappers without detailed arguments provided in their docstrings.
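A minimal example of the experience (nothing special about the DAG itself; hovering over the decorator arguments in an IDE currently shows no useful signature):
```python
# Minimal TaskFlow DAG; IDEs surface no argument hints for @dag/@task today.
import datetime
from airflow.decorators import dag, task

@dag(schedule_interval=None, start_date=datetime.datetime(2022, 7, 1), catchup=False)
def example():
    @task
    def hello():
        print("hi")
    hello()

example_dag = example()
```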
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24936 | https://github.com/apache/airflow/pull/25044 | 61fc4899d71821fd051944d5d9732f7d402edf6c | be63c36bf1667c8a420d34e70e5a5efd7ca42815 | "2022-07-09T03:25:14Z" | python | "2022-07-15T01:29:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,921 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Add options to Docker Operator | ### Description
I'm trying to add options like `log-opt max-size 5`, but I can't.
### Use case/motivation
I'm working in Hummingbot and I would like to offer the community a system to manage multiple bots, rebalance portfolios, etc. Our system needs a terminal to execute commands, so currently I'm not able to use Airflow to accomplish this task.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24921 | https://github.com/apache/airflow/pull/26653 | fd27584b3dc355eaf0c0cd7a4cd65e0e580fcf6d | 19d6f54704949d017b028e644bbcf45f5b53120b | "2022-07-08T12:01:04Z" | python | "2022-09-27T14:42:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,919 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Send default email if file "html_content_template" not found | ### Apache Airflow version
2.3.2 (latest released)
### What happened
I created a new email template to be sent when there are task failures. I accidentally set the paths in `[email] html_content_template` and `[email] subject_template` with a typo, and no email was sent. The task's log is the following:
```
Traceback (most recent call last):
File "/home/user/.conda/envs/airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1942, in handle_failure
self.email_alert(error, task)
File "/home/user/.conda/envs/airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2323, in email_alert
subject, html_content, html_content_err = self.get_email_subject_content(exception, task=task)
File "/home/user/.conda/envs/airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2315, in get_email_subject_content
subject = render('subject_template', default_subject)
File "/home/user/.conda/envs/airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2311, in render
with open(path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/user/airflow/config/templates/email_failure_subject.tmpl'
```
I've looked at the TaskInstance class (https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py).
I've seen that the `render` helper (https://github.com/apache/airflow/blob/bcf2c418d261c6244e60e4c2d5de42b23b714bd1/airflow/models/taskinstance.py#L2311, in `get_email_subject_content`) has a `content` parameter which is not used inside it.
I guess the solution to this bug is simple: just add a `try`/`except` block and return the default content in the `except` part.
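A rough sketch of that idea (simplified: the real `render()` helper in `taskinstance.py` has a different signature and uses Airflow's own Jinja environment):
```python
# Simplified sketch of the suggested fallback, not the actual Airflow helper.
import jinja2

def render(path, default_content, jinja_context):
    try:
        with open(path) as f:
            template_source = f.read()
    except FileNotFoundError:
        # fall back to the built-in default instead of failing the whole alert
        template_source = default_content
    return jinja2.Template(template_source).render(**jinja_context)
```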
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
CentOS Linux 8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Conda environment
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24919 | https://github.com/apache/airflow/pull/24943 | b7f51b9156b780ebf4ca57b9f10b820043f61651 | fd6f537eab7430cb10ea057194bfc9519ff0bb38 | "2022-07-08T11:07:00Z" | python | "2022-07-18T18:22:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,844 | ["airflow/www/static/js/api/useGridData.test.js", "airflow/www/static/js/api/useGridData.ts"] | grid_data api keep refreshing when backfill DAG runs | ### Apache Airflow version
2.3.2 (latest released)
### What happened
![image](https://user-images.githubusercontent.com/95274553/177323814-fa75af14-6018-4f9d-9468-4e681b572dcc.png)
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
186-Ubuntu
### Versions of Apache Airflow Providers
2.3.2
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24844 | https://github.com/apache/airflow/pull/25042 | 38d6c28f9cf9ee4f663d068032830911f7a8e3a3 | de6938e173773d88bd741e43c7b0aa16d8a1a167 | "2022-07-05T12:09:40Z" | python | "2022-07-20T10:30:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,820 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Dag disappears when DAG tag is longer than 100 char limit | ### Apache Airflow version
2.2.5
### What happened
We added new DAG tags to a couple of our DAGs. In the case when the tag was longer than the 100 character limit the DAG was not showing in the UI and wasn't scheduled. It was however possible to reach it by typing in the URL to the DAG.
Usually when DAGs are broken there will be an error message in the UI, but this problem did not render any error message.
This problem occurred to one of our templated DAGs. Only one DAG broke and it was the one with a DAG tag which was too long. When we fixed the length, the DAG was scheduled and was visible in the UI again.
### What you think should happen instead
Exclude the DAG if one of its tags exceeds the 100-character limit, or show an error message in the UI.
### How to reproduce
Add a DAG tag which is longer than 100 characters.
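A minimal reproduction sketch (dag id and dates are arbitrary; on 2.2.x use `DummyOperator` instead of `EmptyOperator`):
```python
# Minimal repro sketch: a single tag longer than the 100-character column limit.
import datetime
from airflow import DAG
from airflow.operators.empty import EmptyOperator  # DummyOperator on Airflow 2.2.x

with DAG(
    "long_tag_repro",
    start_date=datetime.datetime(2022, 7, 1),
    schedule_interval=None,
    tags=["x" * 101],  # exceeds the 100-character limit
) as dag:
    EmptyOperator(task_id="noop")
```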
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
Running Airflow in Kubernetes.
Syncing DAGs from S3 with https://tech.scribd.com/blog/2020/breaking-up-the-dag-repo.html
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24820 | https://github.com/apache/airflow/pull/25196 | a5cbcb56774d09b67c68f87187f2f48d6e70e5f0 | 4b28635b2085a07047c398be6cc1ac0252a691f7 | "2022-07-04T07:59:19Z" | python | "2022-07-25T13:46:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,783 | ["airflow/operators/python.py", "tests/operators/test_python.py"] | Check if virtualenv is installed fails | ### Apache Airflow version
2.3.0
### What happened
When using a `PythonVirtualenvOperator`, whether `virtualenv` is installed is checked with
`if not shutil.which("virtualenv"):`
https://github.com/apache/airflow/blob/a1679be85aa49c0d6a7ba2c31acb519a5bcdf594/airflow/operators/python.py#L398
Actually, this expression only checks whether `virtualenv` is on PATH. If Airflow is itself installed in a virtual environment and `virtualenv` is not installed in that environment, the check might pass, but `virtualenv` cannot be used because it is not present in the environment.
### What you think should happen instead
It should be checked if `virtualenv` is actually available in the environment.
```python
if importlib.util.find_spec("virtualenv") is None:
raise AirflowException('PythonVirtualenvOperator requires virtualenv, please install it.')
```
https://stackoverflow.com/a/14050282
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24783 | https://github.com/apache/airflow/pull/32939 | 16e0830a5dfe42b9ab0bbca7f8023bf050bbced0 | ddcd474a5e2ce4568cca646eb1f5bce32b4ba0ed | "2022-07-01T12:24:38Z" | python | "2023-07-30T04:57:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,773 | ["airflow/providers/amazon/aws/secrets/secrets_manager.py"] | AWS secret manager: AccessDeniedException is not a valid Exception | ### Apache Airflow version
2.3.1
### What happened
The Airflow AWS Secrets Manager backend handles `AccessDeniedException` in [secrets_manager.py](https://github.com/apache/airflow/blob/providers-amazon/4.0.0/airflow/providers/amazon/aws/secrets/secrets_manager.py#L272), whereas it's not a valid exception class on the client:
```
File "/usr/local/lib/python3.9/site-packages/airflow/models/variable.py", line 265, in get_variable_from_secrets
var_val = secrets_backend.get_variable(key=key)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/amazon/aws/secrets/secrets_manager.py", line 238, in get_variable
return self._get_secret(self.variables_prefix, key)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/amazon/aws/secrets/secrets_manager.py", line 275, in _get_secret
except self.client.exceptions.AccessDeniedException:
File "/home/astro/.local/lib/python3.9/site-packages/botocore/errorfactory.py", line 51, in __getattr__
raise AttributeError(
AttributeError: <botocore.errorfactory.SecretsManagerExceptions object at 0x7f19cd3c09a0> object has no attribute 'AccessDeniedException'. Valid exceptions are: DecryptionFailure, EncryptionFailure, InternalServiceError, InvalidNextTokenException, InvalidParameterException, InvalidRequestException, LimitExceededException, MalformedPolicyDocumentException, PreconditionNotMetException, PublicPolicyException, ResourceExistsException, ResourceNotFoundException
```
### What you think should happen instead
Handle exception specific to [get_secret_value](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/secretsmanager.html#SecretsManager.Client.get_secret_value)
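For illustration, a hedged sketch of handling it via the generic `ClientError` and the error code (the secret id is just an example):
```python
# Sketch: AccessDenied comes through as a generic ClientError, so match on the error code.
import boto3
from botocore.exceptions import ClientError

client = boto3.client("secretsmanager")
try:
    secret = client.get_secret_value(SecretId="airflow/variables/my_var")
except ClientError as error:
    if error.response["Error"]["Code"] == "AccessDeniedException":
        secret = None  # treat as "not available" instead of crashing
    else:
        raise
```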
### How to reproduce
This happened in an unusual case where hundreds of secrets are loaded at once. I'm assuming the request hangs for over 30s.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.4.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24773 | https://github.com/apache/airflow/pull/24898 | f69e597dfcbb6fa7e7f1a3ff2b5013638567abc3 | 60c2a3bf82b4fe923b8006f6694f74823af87537 | "2022-07-01T05:15:40Z" | python | "2022-07-08T14:21:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,755 | ["airflow/utils/serve_logs.py", "newsfragments/24755.improvement.rst"] | Log server on celery worker does not work in IPv6-only setup | ### Apache Airflow version
2.2.5
### What happened
I deployed the Airflow helm chart in a Kubernetes cluster that only allows IPv6 traffic.
When I want to look at a task log in the UI there is this message:
```
*** Fetching from: http://airflow-v1-worker-0.airflow-v1-worker.airflow.svc.cluster.local:8793/log/my-dag/my-task/2022-06-28T00:00:00+00:00/1.log
*** Failed to fetch log file from worker. [Errno 111] Connection refused
```
So the webserver cannot fetch the logfile from the worker.
This happens in my opinion because the gunicorn application listens to `0.0.0.0` (IPv4), see [code](https://github.com/apache/airflow/blob/main/airflow/utils/serve_logs.py#L142) or worker log below, and the inter-pod communication in my cluster is IPv6.
```
~ » k logs airflow-v1-worker-0 -c airflow-worker -p
[2022-06-30 14:51:52 +0000] [49] [INFO] Starting gunicorn 20.1.0
[2022-06-30 14:51:52 +0000] [49] [INFO] Listening at: http://0.0.0.0:8793 (49)
[2022-06-30 14:51:52 +0000] [49] [INFO] Using worker: sync
[2022-06-30 14:51:52 +0000] [50] [INFO] Booting worker with pid: 50
[2022-06-30 14:51:52 +0000] [51] [INFO] Booting worker with pid: 51
-------------- celery@airflow-v1-worker-0 v5.2.3 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.10.118-x86_64-with-glibc2.28 2022-06-30 14:51:53
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f73b8d23d00
- ** ---------- .> transport: redis://:**@airflow-v1-redis-master.airflow.svc.cluster.local:6379/1
- ** ---------- .> results: postgresql://airflow:**@airflow-v1-pgbouncer.airflow.svc.cluster.local:6432/airflow_backend_db
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
[tasks]
. airflow.executors.celery_executor.execute_command
```
### What you think should happen instead
The gunicorn webserver should (configurably) listen to IPv6 traffic.
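Purely as an illustration (the configuration knob would be new; `serve_logs.py` currently hard-codes `0.0.0.0` as noted above), the bind string would need to handle bracketed IPv6 literals:
```python
# Illustration only: gunicorn bind strings for IPv4 vs. IPv6; IPv6 literals need brackets.
def build_bind(host: str, port: int = 8793) -> str:
    return f"[{host}]:{port}" if ":" in host else f"{host}:{port}"

print(build_bind("0.0.0.0"))  # current behaviour: IPv4 only
print(build_bind("::"))       # desired option: listen on IPv6 as well
```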
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24755 | https://github.com/apache/airflow/pull/24846 | 7f749b653ce363b1450346b61c7f6c406f72cd66 | 2f29bfefb59b0014ae9e5f641d3f6f46c4341518 | "2022-06-30T14:09:25Z" | python | "2022-07-07T20:16:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,753 | ["airflow/providers/amazon/aws/operators/glue.py"] | Allow back script_location in Glue to be None *again* | ### Apache Airflow version
2.3.2 (latest released)
### What happened
On this commit someone broke the AWS Glue provider by enforcing the script_location to be a string:
https://github.com/apache/airflow/commit/27b77d37a9b2e63e95a123c31085e580fc82b16c
Then someone realized that (see the comment thread [there](https://github.com/apache/airflow/commit/27b77d37a9b2e63e95a123c31085e580fc82b16c#r72466413)) and created a new PR to allow None to be passed again here: https://github.com/apache/airflow/pull/23357
But the parameter no longer has the `Optional[str]` typing, and now the error persists with this traceback:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/glue.py", line 163, in __init__
super().__init__(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 373, in apply_defaults
raise AirflowException(f"missing keyword argument {missing_args.pop()!r}")
airflow.exceptions.AirflowException: missing keyword argument 'script_location'
```
### What you think should happen instead
Please revert the change and add `Optional[str]` here: https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/glue.py#L69
### How to reproduce
Use the class without a script_location
### Operating System
Linux
### Versions of Apache Airflow Providers
Apache airflow 2.3.2
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24753 | https://github.com/apache/airflow/pull/24754 | 1b3905ef6eb5630e8d12975d9e91600ffb832471 | 49925be66483ce942bcd4827df9dbd41c3ef41cf | "2022-06-30T13:37:57Z" | python | "2022-07-01T14:02:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,748 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/kubernetes/kube_client.py", "tests/kubernetes/test_client.py"] | Configuring retry policy of the the kubernetes CoreV1Api ApiClient | ### Description
Can we add an option to configure the retry policy of the Kubernetes CoreV1Api? Or give its default a more resilient configuration?
Today it appears to retry operations 3 times but with 0 backoff between tries, causing temporary network glitches to result in fatal errors.
Following the flow below:
1. `airflow.kubernetes.kube_client.get_kube_client()`
Calls `load_kube_config()` without any configuration set, this assigns a default configuration with `retries=None` to `CoreV1Api.set_default()`
1b. Creates `CoreV1Api()` with `api_client=None`
1c. `ApiClient()` default constructor creates a default configuration object via `Configuration.get_default_copy(), this is the default injected above`
2. On request, through some complicated flow inside `ApiClient` and urllib3, this `configuration.retries` eventually finds its way into urllib `HTTPConnectionPool`, where if unset, it uses `urllib3.util.Retry.DEFAULT`, this has a policy of 3x retries with 0 backoff time in between.
------
Configuring the ApiClient would mean changing the `get_kube_client()` to something roughly resembling:
```
# Note: urllib3's Retry kwarg is backoff_factor; LOAD_FROM_CONFIG and "...." are
# placeholders for whatever Airflow would read from its configuration.
client_config = Configuration()
client_config.retries = Retry(total=3, backoff_factor=LOAD_FROM_CONFIG)
config.load_kube_config(...., client_configuration=client_config)
apiclient = ApiClient(client_config)
return CoreV1Api(apiclient)
```
I don't know how fine-grained the configuration exposed by Airflow should be. The Retry object has a lot of different options, and so does the rest of the kubernetes client Configuration object. Maybe it should be injected from a plugin rather than the config file? Maybe urllib or the kubernetes library have other ways to set a default config?
### Use case/motivation
Our Kubernetes API server had some unknown hiccup for 10 seconds. This caused the Airflow Kubernetes executor to crash and restart Airflow, which then started killing pods that were running fine, showing the following log: "Reset the following 1 orphaned TaskInstances".
If the retries had had some backoff, it would likely have survived this hiccup.
See attachment for the full stack trace, it's too long to include inline. Here is the most interesting parts:
```
2022-06-29 21:25:49 Class={kubernetes_executor.py:111} Level=ERROR Unknown error in KubernetesJobWatcher. Failing
...
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe35de0c70>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
2022-06-29 21:25:49 urllib3.exceptions.ProtocolError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe315ec040>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe315ec670>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
...
2022-06-29 21:25:50 Class={kubernetes_executor.py:813} Level=INFO Shutting down Kubernetes executor
...
2022-06-29 21:26:08 Class={scheduler_job.py:696} Level=INFO Starting the scheduler
...
2022-06-29 21:27:29 Class={scheduler_job.py:1285} Level=INFO Message=Reset the following 1 orphaned TaskInstances:
```
[airflowkubernetsretrycrash.log](https://github.com/apache/airflow/files/9017815/airflowkubernetsretrycrash.log)
From airflow version 2.3.2
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24748 | https://github.com/apache/airflow/pull/29809 | 440bf46ff0b417c80461cf84a68bd99d718e19a9 | dcffbb4aff090e6c7b6dc96a4a68b188424ae174 | "2022-06-30T08:27:01Z" | python | "2023-04-14T13:37:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,736 | ["airflow/sensors/time_sensor.py", "tests/sensors/test_time_sensor.py"] | TimeSensorAsync breaks if target_time is timezone-aware | ### Apache Airflow version
2.3.2 (latest released)
### What happened
`TimeSensorAsync` fails with the following error if `target_time` is aware:
```
[2022-06-29, 05:09:11 CDT] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/sensors/time_sensor.py", line 60, in execute
trigger=DateTimeTrigger(moment=self.target_datetime),
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/triggers/temporal.py", line 42, in __init__
raise ValueError(f"The passed datetime must be using Pendulum's UTC, not {moment.tzinfo!r}")
ValueError: The passed datetime must be using Pendulum's UTC, not Timezone('America/Chicago')
```
### What you think should happen instead
Given the fact that `TimeSensor` correctly handles timezones (#9882), this seems like a bug. `TimeSensorAsync` should be a drop-in replacement for `TimeSensor`, and therefore should have the same timezone behavior.
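In the meantime, a hedged workaround sketch (the timezone is just an example) is to convert the aware target time to UTC yourself before handing it to `TimeSensorAsync`:
```python
# Workaround sketch: express the aware local wall time as a UTC time first.
import datetime
import pendulum

local_tz = pendulum.timezone("America/Chicago")
aware_time = datetime.time(0, 1, tzinfo=local_tz)
target_utc = pendulum.today(local_tz).at(aware_time.hour, aware_time.minute).in_timezone("UTC")
print(target_utc.time())  # pass this UTC time to TimeSensorAsync instead of the aware one
```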
### How to reproduce
```
#!/usr/bin/env python3
import datetime
from airflow.decorators import dag
from airflow.sensors.time_sensor import TimeSensor, TimeSensorAsync
import pendulum
@dag(
start_date=datetime.datetime(2022, 6, 29),
schedule_interval='@daily',
)
def time_sensor_dag():
naive_time1 = datetime.time( 0, 1)
aware_time1 = datetime.time( 0, 1).replace(tzinfo=pendulum.local_timezone())
naive_time2 = pendulum.time(23, 59)
aware_time2 = pendulum.time(23, 59).replace(tzinfo=pendulum.local_timezone())
TimeSensor(task_id='naive_time1', target_time=naive_time1, mode='reschedule')
TimeSensor(task_id='naive_time2', target_time=naive_time2, mode='reschedule')
TimeSensor(task_id='aware_time1', target_time=aware_time1, mode='reschedule')
TimeSensor(task_id='aware_time2', target_time=aware_time2, mode='reschedule')
TimeSensorAsync(task_id='async_naive_time1', target_time=naive_time1)
TimeSensorAsync(task_id='async_naive_time2', target_time=naive_time2)
TimeSensorAsync(task_id='async_aware_time1', target_time=aware_time1) # fails
TimeSensorAsync(task_id='async_aware_time2', target_time=aware_time2) # fails
dag = time_sensor_dag()
```
This can also happen if the `target_time` is naive and `core.default_timezone = system`.
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24736 | https://github.com/apache/airflow/pull/25221 | f53bd5df2a0b370a14f811b353229ad3e9c66662 | ddaf74df9b1e9a4698d719f81931e822b21b0a95 | "2022-06-29T15:28:16Z" | python | "2022-07-22T21:03:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,725 | ["airflow/www/templates/airflow/dag.html"] | Trigger DAG from templated view tab producing bad request | ### Body
Reproduced on main branch.
The bug:
When clicking Trigger DAG from the templated view tab, it results in a BAD REQUEST page; however, the DAG run is created (it also produces the UI alert "it should start any moment now").
To compare: triggering the DAG from the log tab works as expected, so the issue seems to be relevant only to that specific tab.
![trigger dag](https://user-images.githubusercontent.com/45845474/176372793-02ca6760-57f7-4a89-b85e-68411561009f.gif)
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/24725 | https://github.com/apache/airflow/pull/25729 | f24e706ff7a84fd36ea39dc3399346c357d40bd9 | 69663b245a9a67b6f05324ce7b141a1bd9b05e0a | "2022-06-29T07:06:00Z" | python | "2022-08-17T13:21:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,692 | ["airflow/providers/apache/hive/hooks/hive.py", "tests/providers/apache/hive/hooks/test_hive.py"] | Error for Hive Server2 Connection Document | ### What do you see as an issue?
This document, https://airflow.apache.org/docs/apache-airflow-providers-apache-hive/stable/connections/hiveserver2.html, says that in Extra you must use "auth_mechanism", but the source code uses "authMechanism".
### Solving the problem
Use the same keyword in both the documentation and the source code.
### Anything else
None
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24692 | https://github.com/apache/airflow/pull/24713 | 13908c2c914cf08f9d962a4d3b6ae54fbdf1d223 | cef97fccd511c8e5485df24f27b82fa3e46929d7 | "2022-06-28T01:16:20Z" | python | "2022-06-29T14:12:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,681 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Docker is not pushing last line over xcom | ### Apache Airflow Provider(s)
docker
### Versions of Apache Airflow Providers
apache-airflow-providers-docker==2.7.0
docker==5.0.3
### Apache Airflow version
2.3.2 (latest released)
### Operating System
20.04.4 LTS (Focal Fossa)
### Deployment
Docker-Compose
### Deployment details
Deployed using docker compose command
### What happened
Below is my DockerOperator code:
```
extract_data_from_presto = DockerOperator(
task_id='download_data',
image=IMAGE_NAME,
api_version='auto',
auto_remove=True,
mount_tmp_dir=False,
docker_url='unix://var/run/docker.sock',
network_mode="host",
tty=True,
xcom_all=False,
mounts=MOUNTS,
environment={
"PYTHONPATH": "/opt",
},
command=f"test.py",
retries=3,
dag=dag,
)
```
The last line printed in the Docker container is not getting pushed over XCom. In my case the last line in the container is
`[2022-06-27, 08:31:34 UTC] {docker.py:312} INFO - {"day": 20220627, "batch": 1656318682, "source": "all", "os": "ubuntu"}`
However, the XCom value shown in the UI is empty:
<img width="1329" alt="image" src="https://user-images.githubusercontent.com/25153155/175916850-8f50c579-9d26-44bc-94ae-6d072701ff0b.png">
### What you think should happen instead
It should have returned `{"day": 20220627, "batch": 1656318682, "source": "all", "os": "ubuntu"}` as the output of `return_value`.
### How to reproduce
I am not able to reproduce it exactly with a minimal example, but it's failing with my application. So I extended the DockerOperator class in my code, copy-pasted the `_run_image_with_mounts` method, and added 2 print statements:
```
print(f"log lines from attach {log_lines}")
try:
if self.xcom_all:
return [stringify(line).strip() for line in self.cli.logs(**log_parameters)]
else:
lines = [stringify(line).strip() for line in self.cli.logs(**log_parameters, tail=1)]
print(f"lines from logs: {lines}")
```
Value of log_lines comes from this [line](https://github.com/apache/airflow/blob/main/airflow/providers/docker/operators/docker.py#L309)
The output of this is below; the first line is the last print in my Docker code:
```
[2022-06-27, 14:43:26 UTC] {pipeline.py:103} INFO - {"day": 20220627, "batch": 1656340990, "os": "ubuntu", "source": "all"}
[2022-06-27, 14:43:27 UTC] {logging_mixin.py:115} INFO - log lines from attach ['2022-06-27, 14:43:15 UTC - root - read_from_presto - INFO - Processing datetime is 2022-06-27 14:43:10.755685', '2022-06-27, 14:43:15 UTC - pyhive.presto - presto - INFO - SHOW COLUMNS FROM <truncated data as it's too long>, '{"day": 20220627, "batch": 1656340990, "os": "ubuntu", "source": "all"}']
[2022-06-27, 14:43:27 UTC] {logging_mixin.py:115} INFO - lines from logs: ['{', '"', 'd', 'a', 'y', '"', ':', '', '2', '0', '2', '2', '0', '6', '2', '7', ',', '', '"', 'b', 'a', 't', 'c', 'h', '"', ':', '', '1', '6', '5', '6', '3', '4', '0', '9', '9', '0', ',', '', '"', 'o', 's', '"', ':', '', '"', 'u', 'b', 'u', 'n', 't', 'u', '"', ',', '', '"', 's', 'o', 'u', 'r', 'c', 'e', '"', ':', '', '"', 'a', 'l', 'l', '"', '}', '', '']
```
From the above you can see that, for some unknown reason, `self.cli.logs(**log_parameters, tail=1)` returns an array of characters. This change was brought in as part of [this change](https://github.com/apache/airflow/commit/2f4a3d4d4008a95fc36971802c514fef68e8a5d4); before that it was returning the data from log_lines.
My suggestion is to modify the code as below:
```
if self.xcom_all:
return [stringify(line).strip() for line in log_lines]
else:
lines = [stringify(line).strip() for line in log_lines]
return lines[-1] if lines else None
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24681 | https://github.com/apache/airflow/pull/24726 | 6fd06fa8c274b39e4ed716f8d347229e017ba8e5 | cc6a44bdc396a305fd53c7236427c578e9d4d0b7 | "2022-06-27T14:59:41Z" | python | "2022-07-05T10:43:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,678 | ["airflow/templates.py"] | Macro prev_execution_date is always empty | ### Apache Airflow version
2.3.2 (latest released)
### What happened
The variable `prev_execution_date` is empty on the first run, meaning any usage will automatically trigger a None error.
### What you think should happen instead
A default date should be provided instead (either the DAG's `start_date` or a default `datetime.min`), because on the first run it will always trigger an error, effectively preventing the DAG from ever running.
### How to reproduce
Pass the variables/macros to any Task:
```
{
"execution_datetime": '{{ ts_nodash }}',
"prev_execution_datetime": '{{ prev_start_date_success | ts_nodash }}' #.strftime("%Y%m%dT%H%M%S")
}
```
While the logical execution date (`execution_datetime`) works, the previous successful logical execution date (`prev_execution_datetime`) blows up as soon as the `ts_nodash` filter is applied. This effectively makes it impossible to ever use that macro, as it will always fail.
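Until that is handled in Airflow itself, a hedged workaround sketch is to guard the filter in the template so the very first run falls back to the DAG's `start_date` (mirroring the dict above):
```python
# Workaround sketch mirroring the dict above; the first run falls back to dag.start_date.
params = {
    "execution_datetime": "{{ ts_nodash }}",
    "prev_execution_datetime": (
        "{{ (prev_start_date_success | ts_nodash) "
        "if prev_start_date_success else (dag.start_date | ts_nodash) }}"
    ),
}
```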
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24678 | https://github.com/apache/airflow/pull/25593 | 1594d7706378303409590c57ab1b17910e5d09e8 | 741c20770230c83a95f74fe7ad7cc9f95329f2cc | "2022-06-27T12:59:53Z" | python | "2022-08-09T10:34:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,653 | ["airflow/operators/trigger_dagrun.py"] | Mapped TriggerDagRunOperator causes SerializationError due to operator_extra_links 'property' object is not iterable | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Hi, I have an issue with launching several subDAGs via a mapped TriggerDagRunOperator (mapping over the `conf` parameter). Here is a demo example of my typical DAG:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow import XComArg
from datetime import datetime
with DAG(
'triggerer',
schedule_interval=None,
catchup=False,
start_date=datetime(2019, 12, 2)
) as dag:
t1 = PythonOperator(
task_id='first',
python_callable=lambda : list(map(lambda i: {"x": i}, list(range(10)))),
)
t2 = TriggerDagRunOperator.partial(
task_id='second',
trigger_dag_id='mydag'
).expand(conf=XComArg(t1))
t1 >> t2
```
But when Airflow tries to import such DAG it throws the following SerializationError (which I observed both in UI and in $AIRFLOW_HOME/logs/scheduler/latest/<my_dag_name>.py.log):
```
Broken DAG: [/home/aliona/airflow/dags/triggerer_dag.py] Traceback (most recent call last):
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 638, in _serialize_node
serialize_op['_operator_extra_links'] = cls._serialize_operator_extra_links(
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 933, in _serialize_operator_extra_links
for operator_extra_link in operator_extra_links:
TypeError: 'property' object is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 1106, in to_dict
json_dict = {"__version": cls.SERIALIZER_VERSION, "dag": cls.serialize_dag(var)}
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 1014, in serialize_dag
raise SerializationError(f'Failed to serialize DAG {dag.dag_id!r}: {e}')
airflow.exceptions.SerializationError: Failed to serialize DAG 'triggerer': 'property' object is not iterable
```
How it appears in the UI:
![image](https://user-images.githubusercontent.com/23297330/175775674-f3375c0e-7ea7-4b6a-84e8-b02ee8f02062.png)
### What you think should happen instead
I think that a TriggerDagRunOperator mapped over the `conf` parameter should serialize and work by default.
While debugging and trying to make everything work, I found out that a simple non-mapped TriggerDagRunOperator has the value `['Triggered DAG']` in its `operator_extra_links` field, so it is a list. For a mapped TriggerDagRunOperator, however, it is a 'property'. I have no idea why Airflow cannot get the value of this property during serialization, but I reinitialized the field with the `['Triggered DAG']` value and that fixed the issue for me.
For now, for every case of using a mapped TriggerDagRunOperator I also use the following code at the end of my DAG file:
```python
# here 'second' is the name of corresponding mapped TriggerDagRunOperator task (see demo code above)
t2_patch = dag.task_dict['second']
t2_patch.operator_extra_links=['Triggered DAG']
dag.task_dict.update({'second': t2_patch})
```
So, for every mapped TriggerDagRunOperator task I manually change the value of the operator_extra_links property to `['Triggered DAG']`, and as a result there is no SerializationError. I have a lot of such cases, and all of them work well with this fix: all sub-DAGs are launched and the mapped configuration is passed correctly. I can also choose whether or not to wait for the end of their execution; both options work correctly.
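The same three lines can also be wrapped in a small helper so the workaround is reusable across DAG files. A sketch (the helper name is mine, not part of Airflow):
```python
def patch_mapped_trigger_extra_links(dag, task_id):
    """Workaround for the SerializationError raised for mapped TriggerDagRunOperator tasks."""
    task = dag.task_dict[task_id]
    task.operator_extra_links = ['Triggered DAG']
    dag.task_dict.update({task_id: task})
```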
### How to reproduce
1. Create a DAG with mapped TriggerDagRunOperator tasks (main parameters such as task_id, trigger_dag_id and others go in the `partial` section; in the `expand` section use the conf parameter with a non-empty iterable value), for example:
```python
t2 = TriggerDagRunOperator.partial(
task_id='second',
trigger_dag_id='mydag'
).expand(conf=[{'x': 1}])
```
2. Try to serialize the DAG, and the error will appear.
The full example of the failing DAG file:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow import XComArg
from datetime import datetime
with DAG(
'triggerer',
schedule_interval=None,
catchup=False,
start_date=datetime(2019, 12, 2)
) as dag:
t1 = PythonOperator(
task_id='first',
python_callable=lambda : list(map(lambda i: {"x": i}, list(range(10)))),
)
t2 = TriggerDagRunOperator.partial(
task_id='second',
trigger_dag_id='mydag'
).expand(conf=[{'a': 1}])
t1 >> t2
# uncomment these lines to fix an error
# t2_patch = dag.task_dict['second']
# t2_patch.operator_extra_links=['Triggered DAG']
# dag.task_dict.update({'second': t2_patch})
```
As the sub-DAG ('mydag') I use this DAG:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from datetime import datetime
with DAG(
'mydag',
schedule_interval=None,
catchup=False,
start_date=datetime(2019, 12, 2)
) as dag:
t1 = PythonOperator(
task_id='first',
python_callable=lambda : print("first"),
)
t2 = PythonOperator(
task_id='second',
python_callable=lambda : print("second"),
)
t1 >> t2
```
### Operating System
Ubuntu 22.04 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-sqlite==2.1.3
### Deployment
Virtualenv installation
### Deployment details
Python 3.10.4
pip 22.0.2
### Anything else
Currently, for demonstration purposes, I am using a fully local Airflow installation: single node, SequentialExecutor and a SQLite database backend. But the same issue also appeared in a multi-node installation with either CeleryExecutor or LocalExecutor and a PostgreSQL database backend.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24653 | https://github.com/apache/airflow/pull/24676 | 48ceda22bdbee50b2d6ca24767164ce485f3c319 | 8dcafdfcdddc77fdfd2401757dcbc15bfec76d6b | "2022-06-25T14:13:29Z" | python | "2022-06-28T02:59:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,618 | ["airflow/providers/oracle/hooks/oracle.py", "airflow/utils/db.py", "docs/apache-airflow-providers-oracle/connections/oracle.rst", "tests/providers/oracle/hooks/test_oracle.py"] | Failed to retrieve data from Oracle database with UTF-8 charset | ### Apache Airflow Provider(s)
oracle
### Versions of Apache Airflow Providers
apache-airflow-providers-oracle==3.1.0
### Apache Airflow version
2.3.2 (latest released)
### Operating System
Linux 4.19.79-1.el7.x86_64
### Deployment
Docker-Compose
### Deployment details
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0
Python: 3.8
Oracle database charset: UTF-8 (returned by `SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_NCHAR_CHARACTERSET'`)
Oracle's client environment:
- LC_CTYPE=C.UTF-8
- NLS_LANG=AMERICAN_AMERICA.CL8MSWIN1251
- LC_ALL=C.UTF-8
### What happened
Any query to an Oracle database with the UTF-8 charset fails with the error:
> oracledb.exceptions.NotSupportedError: DPY-3012: national character set id 871 is not supported by python-oracledb in thin mode
### What you think should happen instead
Definitely, it should work, as it did with the previous Oracle provider version (3.0.0).
A quick search shows that the `python-oracledb` package, which replaces `cx_Oracle` in 3.1.0, uses the **thin** driver mode by default, and it seems that the UTF-8 codepage is not supported in that mode ( [see this issue](https://stackoverflow.com/questions/72465536/python-oracledb-new-cx-oracle-connection-generating-notsupportederror-dpy-3012) ). In order to get thick mode, a call to `oracledb.init_oracle_client()` is required before any connection is made ( [see here](https://python-oracledb.readthedocs.io/en/latest/api_manual/module.html#oracledb.init_oracle_client) ).
Indeed, if I add this call to `airflow/providers/oracle/hooks/oracle.py`, everything works fine. Resulting code looks like this:
```
import math
import warnings
from datetime import datetime
from typing import Dict, List, Optional, Union
import oracledb
oracledb.init_oracle_client()
...
```
Downgrading to version 3.0.0 also helps, but I suppose there should be a permanent solution, such as adding a configuration parameter.
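As a stop-gap that avoids patching the provider, thick mode can also be forced once per worker process before any Oracle connection is opened, for example from a small module or plugin imported by the DAGs that need it. A minimal sketch; the Instant Client path is only an illustrative assumption and can be omitted if the client libraries are already on the library path:
```python
import oracledb

# Switch python-oracledb to thick mode before the first connection is created.
oracledb.init_oracle_client(lib_dir="/opt/oracle/instantclient")  # lib_dir is optional
```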
### How to reproduce
- Setup an Oracle database with UTF8 charset
- Setup an Airflow connection with `oracle` type
- Create an operator which issues a `SELECT` statement against the database
### Anything else
Task execution log as follows:
> [2022-06-23, 17:35:36 MSK] {task_command.py:370} INFO - Running <TaskInstance: nip-stage-load2.load-dict.load-sa_user scheduled__2022-06-22T00:00:00+00:00 [running]> on host dwh_develop_scheduler
> [2022-06-23, 17:35:37 MSK] {taskinstance.py:1569} INFO - Exporting the following env vars:
> AIRFLOW_CTX_DAG_EMAIL=airflow@example.com
> AIRFLOW_CTX_DAG_OWNER=airflow
> AIRFLOW_CTX_DAG_ID=nip-stage-load2
> AIRFLOW_CTX_TASK_ID=load-dict.load-sa_user
> AIRFLOW_CTX_EXECUTION_DATE=2022-06-22T00:00:00+00:00
> AIRFLOW_CTX_TRY_NUMBER=1
> AIRFLOW_CTX_DAG_RUN_ID=scheduled__2022-06-22T00:00:00+00:00
> [2022-06-23, 17:35:37 MSK] {base.py:68} INFO - Using connection ID 'nip_standby' for task execution.
> [2022-06-23, 17:35:37 MSK] {base.py:68} INFO - Using connection ID 'stage' for task execution.
> [2022-06-23, 17:35:37 MSK] {data_transfer.py:198} INFO - Executing:
> SELECT * FROM GMP.SA_USER
> [2022-06-23, 17:35:37 MSK] {base.py:68} INFO - Using connection ID 'nip_standby' for task execution.
> [2022-06-23, 17:35:37 MSK] {taskinstance.py:1889} ERROR - Task failed with exception
> Traceback (most recent call last):
> File "/home/airflow/.local/lib/python3.8/site-packages/dwh_etl/operators/data_transfer.py", line 265, in execute
> if not self.no_check and self.compare_datasets(self.object_name, src, dest):
> File "/home/airflow/.local/lib/python3.8/site-packages/dwh_etl/operators/data_transfer.py", line 199, in compare_datasets
> src_df = src.get_pandas_df(sql)
> File "/home/airflow/.local/lib/python3.8/site-packages/airflow/hooks/dbapi.py", line 128, in get_pandas_df
> with closing(self.get_conn()) as conn:
> File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/oracle/hooks/oracle.py", line 149, in get_conn
> conn = oracledb.connect(**conn_config)
> File "/home/airflow/.local/lib/python3.8/site-packages/oracledb/connection.py", line 1000, in connect
> return conn_class(dsn=dsn, pool=pool, params=params, **kwargs)
> File "/home/airflow/.local/lib/python3.8/site-packages/oracledb/connection.py", line 128, in __init__
> impl.connect(params_impl)
> File "src/oracledb/impl/thin/connection.pyx", line 345, in oracledb.thin_impl.ThinConnImpl.connect
> File "src/oracledb/impl/thin/connection.pyx", line 163, in oracledb.thin_impl.ThinConnImpl._connect_with_params
> File "src/oracledb/impl/thin/connection.pyx", line 129, in oracledb.thin_impl.ThinConnImpl._connect_with_description
> File "src/oracledb/impl/thin/connection.pyx", line 250, in oracledb.thin_impl.ThinConnImpl._connect_with_address
> File "src/oracledb/impl/thin/protocol.pyx", line 197, in oracledb.thin_impl.Protocol._connect_phase_two
> File "src/oracledb/impl/thin/protocol.pyx", line 263, in oracledb.thin_impl.Protocol._process_message
> File "src/oracledb/impl/thin/protocol.pyx", line 242, in oracledb.thin_impl.Protocol._process_message
> File "src/oracledb/impl/thin/messages.pyx", line 280, in oracledb.thin_impl.Message.process
> File "src/oracledb/impl/thin/messages.pyx", line 2094, in oracledb.thin_impl.ProtocolMessage._process_message
> File "/home/airflow/.local/lib/python3.8/site-packages/oracledb/errors.py", line 103, in _raise_err
> raise exc_type(_Error(message)) from cause
> oracledb.exceptions.NotSupportedError: DPY-3012: national character set id 871 is not supported by python-oracledb in thin mode
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24618 | https://github.com/apache/airflow/pull/26576 | ee21c1bac4cb5bb1c19ea9e5e84ee9b5854ab039 | b254a9f4bead4e5d4f74c633446da38550f8e0a1 | "2022-06-23T14:49:31Z" | python | "2022-09-28T06:14:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,574 | ["airflow/providers/airbyte/hooks/airbyte.py", "airflow/providers/airbyte/operators/airbyte.py", "tests/providers/airbyte/hooks/test_airbyte.py"] | `AirbyteHook` add cancel job option | ### Apache Airflow Provider(s)
airbyte
### Versions of Apache Airflow Providers
I want to cancel the job if it runs longer than a specific time. The task hits its timeout; however, the Airbyte job is not cancelled. It seems the on-kill feature has not been implemented.
Workaround:
Create a custom operator, implement a cancel method on the hook, and call it from the operator's `on_kill` function:
```python
def on_kill(self):
    if self.job_id:
        self.log.error('on_kill: stopping airbyte Job %s', self.job_id)
        self.hook.cancel_job(self.job_id)
```
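A rough sketch of what the custom hook's `cancel_job` method could look like, assuming the Airbyte Config API exposes a `POST /api/v1/jobs/cancel` endpoint and reusing the HTTP plumbing of the provider's `AirbyteHook` (the class and method names here are illustrative, not part of the provider at this version):
```python
from airflow.providers.airbyte.hooks.airbyte import AirbyteHook


class CancellableAirbyteHook(AirbyteHook):
    """AirbyteHook subclass with a best-effort job-cancel helper."""

    def cancel_job(self, job_id: int):
        # Call Airbyte's job-cancel endpoint; adjust the path if your API version differs.
        return self.run(
            endpoint=f"api/{self.api_version}/jobs/cancel",
            json={"id": job_id},
            headers={"accept": "application/json"},
        )
```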
### Apache Airflow version
2.0.2
### Operating System
Linux
### Deployment
MWAA
### Deployment details
Airflow 2.0.2
### What happened
The Airbyte job was not cancelled upon timeout.
### What you think should happen instead
it should cancel the job
### How to reproduce
Make sure the job runs longer than the timeout:
```python
sync_source_destination = AirbyteTriggerSyncOperator(
    task_id=f'airbyte_{key}',
    airbyte_conn_id='airbyte_con',
    connection_id=key,
    asynchronous=False,
    execution_timeout=timedelta(minutes=2),
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24574 | https://github.com/apache/airflow/pull/24593 | 45b11d4ed1412c00ebf32a03ab5ea3a06274f208 | c118b2836f7211a0c3762cff8634b7b9a0d1cf0b | "2022-06-21T03:16:53Z" | python | "2022-06-29T06:43:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,572 | ["docs/apache-airflow-providers-snowflake/connections/snowflake.rst"] | Snowflake Provider connection documentation is misleading | ### What do you see as an issue?
Relevant page: https://airflow.apache.org/docs/apache-airflow-providers-snowflake/stable/connections/snowflake.html
## Behavior in the Airflow package
The `SnowflakeHook` object in Airflow behaves oddly compared to some other database hooks like Postgres (so extra clarity in the documentation is beneficial).
Most notably, the `SnowflakeHook` does _not_ make use of either the `host` or `port` of the `Connection` object it consumes. It is completely pointless to specify these two fields.
When constructing the URL in a runtime context, `snowflake.sqlalchemy.URL` is used for parsing. `URL()` allows for either `account` or `host` to be specified as kwargs. Either one of these 2 kwargs will correspond with what we'd conventionally call the host in a typical URL's anatomy. However, because `SnowflakeHook` never parses `host`, any `host` defined in the Connection object would never get this far into the parsing.
## Issue with the documentation
Right now the documentation does not make clear that it is completely pointless to specify the `host`. The documentation correctly omits the port, but says that the host is optional. It does not warn the user about this field never being consumed at all by the `SnowflakeHook` ([source here](https://github.com/apache/airflow/blob/main/airflow/providers/snowflake/hooks/snowflake.py)).
This can lead to some confusion, especially because the Snowflake URI consumed by `SQLAlchemy` (which many people using Snowflake will be familiar with) uses either the "account" or "host" as its host. So a user coming from SQLAlchemy may think it is fine to put the account in the "host" field and skip filling in the "account" inside the extras (after all, it's "extra"), whereas that doesn't work.
I would argue that if it is correct to omit the `port` in the documentation (which it is), then `host` should also be excluded.
Furthermore, the documentation reinforces this confusion with the last few lines, where an environment variable example connection is defined that uses a host.
Finally, the documentation says "When specifying the connection in environment variable you should specify it using URI syntax", which is no longer true as of 2.3.0.
### Solving the problem
I have 3 proposals for how the documentation should be updated to better reflect how the `SnowflakeHook` actually works.
1. The `Host` option should not be listed as part of the "Configuring the Connection" section.
2. The example URI should remove the host. The new example URI would look like this: `snowflake://user:password@/db-schema?account=account&database=snow-db®ion=us-east&warehouse=snow-warehouse`. This URI with a blank host works fine; you can test this yourself:
```python
from airflow.models.connection import Connection
c = Connection(conn_id="foo", uri="snowflake://user:password@/db-schema?account=account&database=snow-db®ion=us-east&warehouse=snow-warehouse")
print(c.host)
print(c.extra_dejson)
```
3. An example should be provided of a valid Snowflake connection defined as JSON; a sketch follows below. Besides working on its own merits as an environment-variable connection valid for 2.3.0, it would highlight some of the idiosyncrasies of how Airflow defines connections to Snowflake. It would also be valuable as a reference for the AWS `SecretsManagerBackend` when `full_url_mode` is set to `False`.
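A hedged sketch of what such a JSON example might look like, using the JSON connection representation available since Airflow 2.3.0 (all values below are placeholders):
```
export AIRFLOW_CONN_SNOWFLAKE_DEFAULT='{
    "conn_type": "snowflake",
    "login": "user",
    "password": "password",
    "schema": "db-schema",
    "extra": {
        "account": "account",
        "database": "snow-db",
        "region": "us-east",
        "warehouse": "snow-warehouse"
    }
}'
```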
### Anything else
I wasn't sure whether to label this issue as a provider issue or documentation issue; I saw templates for either but not both.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24572 | https://github.com/apache/airflow/pull/24573 | 02d8f96bfbc43e780db0220dd7647af0c0f46093 | 2fb93f88b120777330b6ed13b24fa07df279c41e | "2022-06-21T01:41:15Z" | python | "2022-06-27T21:58:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,566 | ["airflow/migrations/versions/0080_2_0_2_change_default_pool_slots_to_1.py"] | Migration changes column to NOT NULL without updating NULL data first | ### Apache Airflow version
2.3.2 (latest released)
### What happened
During an upgrade from Airflow 1.x, I encountered a migration failure in the migration https://github.com/apache/airflow/blob/05c542dfa8eee9b4cdca4e9370f459ce807354b2/airflow/migrations/versions/0080_2_0_2_change_default_pool_slots_to_1.py
In PR #20962, on these lines https://github.com/apache/airflow/pull/20962/files#diff-9e46226bab06a05ef0040d1f8cc08c81ba94455ca9a170a0417352466242f2c1L61-L63, the UPDATE was removed, which breaks if the original table contains NULLs in that column (at least on a Postgres DB).
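A sketch of the step that appears to be missing (and of the manual remediation if you hit this during an upgrade): backfill the NULLs before the column is tightened, roughly like this inside the migration:
```python
# Backfill NULL pool_slots before altering the column to NOT NULL.
op.execute("UPDATE task_instance SET pool_slots = 1 WHERE pool_slots IS NULL")
```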
### What you think should happen instead
_No response_
### How to reproduce
- Have pre 2.0.2 version deployed, where the column was nullable.
- Have task instance with `pool_slots = NULL`
- Try to migrate to latest version (or any version after #20962 was merged)
### Operating System
Custom NixOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
We have NixOS with Airflow installed using setup.py with postgres as a DB.
### Anything else
```
INFO [alembic.runtime.migration] Running upgrade 449b4072c2da -> 8646922c8a04, Change default ``pool_slots`` to ``1``
Traceback (most recent call last):
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.NotNullViolation: column "pool_slots" contains null values
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/bin/.airflow-wrapped", line 9, in <module>
sys.exit(main())
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/cli/commands/db_command.py", line 35, in initdb
db.initdb()
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/utils/db.py", line 648, in initdb
upgradedb(session=session)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/utils/db.py", line 1449, in upgradedb
command.upgrade(config, revision=to_revision or 'heads')
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/command.py", line 320, in upgrade
script.run_env()
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/script/base.py", line 563, in run_env
util.load_python_file(self.dir, "env.py")
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 92, in load_python_file
module = load_module_py(module_id, path)
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 108, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/migrations/env.py", line 107, in <module>
run_migrations_online()
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/migrations/env.py", line 101, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/runtime/environment.py", line 851, in run_migrations
self.get_context().run_migrations(**kw)
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/runtime/migration.py", line 620, in run_migrations
step.migration_fn(**kw)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/migrations/versions/0080_2_0_2_change_default_pool_slots_to_1.py", line 41, in upgrade
batch_op.alter_column("pool_slots", existing_type=sa.Integer, nullable=False, server_default='1')
File "/nix/store/lb7982cwd56am6nzx1ix0aljz416w6mw-python3-3.9.6/lib/python3.9/contextlib.py", line 124, in __exit__
next(self.gen)
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/operations/base.py", line 374, in batch_alter_table
impl.flush()
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/operations/batch.py", line 108, in flush
fn(*arg, **kw)
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/ddl/postgresql.py", line 170, in alter_column
super(PostgresqlImpl, self).alter_column(
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/ddl/impl.py", line 227, in alter_column
self._exec(
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/ddl/impl.py", line 193, in _exec
return conn.execute(construct, multiparams)
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1290, in _execute_ddl
ret = self._execute_context(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.NotNullViolation) column "pool_slots" contains null values
[SQL: ALTER TABLE task_instance ALTER COLUMN pool_slots SET NOT NULL]
(Background on this error at: http://sqlalche.me/e/14/gkpj)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24566 | https://github.com/apache/airflow/pull/24585 | 75db755f4b06b4cfdd3eb2651dbf88ddba2d831f | 9f58e823329d525c0e2b3950ada7e0e047ee7cfd | "2022-06-20T17:57:34Z" | python | "2022-06-29T01:55:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,526 | ["docs/apache-airflow/installation/upgrading.rst", "docs/spelling_wordlist.txt"] | upgrading from 2.2.3 or 2.2.5 to 2.3.2 fails on migration-job | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Upgrading Airflow 2.2.3 or 2.2.5 -> 2.3.2 fails in the migration job.
**first time upgrade execution:**
```
Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
[SQL:
CREATE TABLE task_map (
dag_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
task_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
run_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
map_index INTEGER NOT NULL,
length INTEGER NOT NULL,
`keys` JSON,
PRIMARY KEY (dag_id, task_id, run_id, map_index),
CONSTRAINT task_map_length_not_negative CHECK (length >= 0),
CONSTRAINT task_map_task_instance_fkey FOREIGN KEY(dag_id, task_id, run_id, map_index) REFERENCES task_instance (dag_id, task_id, run_id, map_index) ON DELETE CASCADE
)
]
```
**after the first failed execution (this one is probably a consequence of the first failed run):**
```
Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
[SQL: ALTER TABLE task_reschedule DROP FOREIGN KEY task_reschedule_ti_fkey]
```
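The first failure looks like the usual MySQL charset/collation mismatch between the existing `task_instance` columns and the new table being created with `utf8mb3_bin`. A quick way to inspect what the existing columns actually use (the schema name `airflow` is an assumption, adjust to your database name):
```
SELECT table_name, column_name, character_set_name, collation_name
FROM information_schema.columns
WHERE table_schema = 'airflow'
  AND table_name IN ('task_instance', 'task_reschedule', 'dag_run')
  AND column_name IN ('dag_id', 'task_id', 'run_id');
```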
### What you think should happen instead
The migration-job shouldn't fail ;)
### How to reproduce
It reproduces every time in my environment: I just need to restore the database from the last working DB snapshot (Airflow version 2.2.3)
and then deploy Airflow 2.3.2.
I can upgrade to 2.2.5 in between, but I run into the same issue when upgrading to 2.3.2.
### Operating System
Debian GNU/Linux 10 (buster) - apache/airflow:2.3.2-python3.8 (hub.docker.com)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==2.2.0
apache-airflow-providers-docker==2.3.0
apache-airflow-providers-elasticsearch==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.2.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-microsoft-azure==3.4.0
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==2.4.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.3.0
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-sqlite==2.0.1
apache-airflow-providers-ssh==2.3.0
apache-airflow-providers-tableau==2.1.4
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
- K8s Rev: v1.21.12-eks-a64ea69
- helm chart version: 1.6.0
- Database: AWS RDS MySQL 8.0.28
### Anything else
Full error log of the **first** execution:
```
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:529: DeprecationWarning: The auth_backend option in [api] has been renamed to auth_backends - the old setting has been used, but please update your config.
option = self._get_option_from_config_file(deprecated_key, deprecated_section, key, kwargs, section)
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:356: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
warnings.warn(
DB: mysql+mysqldb://airflow:***@test-airflow2-db-blue.fsgfsdcfds76.eu-central-1.rds.amazonaws.com:3306/airflow
Performing upgrade with database mysql+mysqldb://airflow:***@test-airflow2-db-blue.fsgfsdcfds76.eu-central-1.rds.amazonaws.com:3306/airflow
[2022-06-17 12:19:59,724] {db.py:920} WARNING - Found 33 duplicates in table task_fail. Will attempt to move them.
[2022-06-17 12:36:18,813] {db.py:1448} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade be2bfac3da23 -> c381b21cb7e4, Create a ``session`` table to store web session data
INFO [alembic.runtime.migration] Running upgrade c381b21cb7e4 -> 587bdf053233, Add index for ``dag_id`` column in ``job`` table.
INFO [alembic.runtime.migration] Running upgrade 587bdf053233 -> 5e3ec427fdd3, Increase length of email and username in ``ab_user`` and ``ab_register_user`` table to ``256`` characters
INFO [alembic.runtime.migration] Running upgrade 5e3ec427fdd3 -> 786e3737b18f, Add ``timetable_description`` column to DagModel for UI.
INFO [alembic.runtime.migration] Running upgrade 786e3737b18f -> f9da662e7089, Add ``LogTemplate`` table to track changes to config values ``log_filename_template``
INFO [alembic.runtime.migration] Running upgrade f9da662e7089 -> e655c0453f75, Add ``map_index`` column to TaskInstance to identify task-mapping,
and a ``task_map`` table to track mapping values from XCom.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (3780, "Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 82, in upgradedb
db.upgradedb(to_revision=to_revision, from_revision=from_revision, show_sql_only=args.show_sql_only)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 1449, in upgradedb
command.upgrade(config, revision=to_revision or 'heads')
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/command.py", line 322, in upgrade
script.run_env()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/script/base.py", line 569, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 107, in <module>
run_migrations_online()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 101, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 853, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 623, in run_migrations
step.migration_fn(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/versions/0100_2_3_0_add_taskmap_and_map_id_on_taskinstance.py", line 75, in upgrade
op.create_table(
File "<string>", line 8, in create_table
File "<string>", line 3, in create_table
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/ops.py", line 1254, in create_table
return operations.invoke(op)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/base.py", line 394, in invoke
return fn(self, operation)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/toimpl.py", line 114, in create_table
operations.impl.create_table(table)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 354, in create_table
self._exec(schema.CreateTable(table))
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 195, in _exec
return conn.execute(construct, multiparams)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1290, in _execute_ddl
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (3780, "Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
[SQL:
CREATE TABLE task_map (
dag_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
task_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
run_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
map_index INTEGER NOT NULL,
length INTEGER NOT NULL,
`keys` JSON,
PRIMARY KEY (dag_id, task_id, run_id, map_index),
CONSTRAINT task_map_length_not_negative CHECK (length >= 0),
CONSTRAINT task_map_task_instance_fkey FOREIGN KEY(dag_id, task_id, run_id, map_index) REFERENCES task_instance (dag_id, task_id, run_id, map_index) ON DELETE CASCADE
)
]
(Background on this error at: http://sqlalche.me/e/14/e3q8)
```
Full error log **after** the first execution (likely caused by the first failed execution):
```
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:529: DeprecationWarning: The auth_backend option in [api] has been renamed to auth_backends - the old setting has been used, but please update your config.
option = self._get_option_from_config_file(deprecated_key, deprecated_section, key, kwargs, section)
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:356: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
warnings.warn(
DB: mysql+mysqldb://airflow:***@test-airflow2-db-blue.cndbtlpttl69.eu-central-1.rds.amazonaws.com:3306/airflow
Performing upgrade with database mysql+mysqldb://airflow:***@test-airflow2-db-blue.cndbtlpttl69.eu-central-1.rds.amazonaws.com:3306/airflow
[2022-06-17 12:41:53,882] {db.py:1448} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade f9da662e7089 -> e655c0453f75, Add ``map_index`` column to TaskInstance to identify task-mapping,
and a ``task_map`` table to track mapping values from XCom.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1091, "Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 82, in upgradedb
db.upgradedb(to_revision=to_revision, from_revision=from_revision, show_sql_only=args.show_sql_only)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 1449, in upgradedb
command.upgrade(config, revision=to_revision or 'heads')
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/command.py", line 322, in upgrade
script.run_env()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/script/base.py", line 569, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 107, in <module>
run_migrations_online()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 101, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 853, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 623, in run_migrations
step.migration_fn(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/versions/0100_2_3_0_add_taskmap_and_map_id_on_taskinstance.py", line 49, in upgrade
batch_op.drop_index("idx_task_reschedule_dag_task_run")
File "/usr/local/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/base.py", line 376, in batch_alter_table
impl.flush()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/batch.py", line 111, in flush
fn(*arg, **kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/mysql.py", line 155, in drop_constraint
super(MySQLImpl, self).drop_constraint(const)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 338, in drop_constraint
self._exec(schema.DropConstraint(const))
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 195, in _exec
return conn.execute(construct, multiparams)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1290, in _execute_ddl
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1091, "Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
[SQL: ALTER TABLE task_reschedule DROP FOREIGN KEY task_reschedule_ti_fkey]
(Background on this error at: http://sqlalche.me/e/14/e3q8)
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24526 | https://github.com/apache/airflow/pull/25938 | 994f18872af8d2977d78e6d1a27314efbeedb886 | e2592628cb0a6a37efbacc64064dbeb239e83a50 | "2022-06-17T13:59:27Z" | python | "2022-08-25T14:15:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,525 | ["airflow/models/baseoperator.py", "tests/models/test_baseoperator.py"] | mini-scheduler raises AttributeError: 'NoneType' object has no attribute 'keys' | ### Apache Airflow version
2.3.2 (latest released)
### What happened
The mini-scheduler run after a task finishes sometimes fails with an error "AttributeError: 'NoneType' object has no attribute 'keys'"; see full traceback below.
### What you think should happen instead
_No response_
### How to reproduce
The minimal reproducing example I could find is this:
```python
import pendulum
from airflow.models import BaseOperator
from airflow.utils.task_group import TaskGroup
from airflow.decorators import task
from airflow import DAG
@task
def task0():
pass
class Op0(BaseOperator):
template_fields = ["some_input"]
def __init__(self, some_input, **kwargs):
super().__init__(**kwargs)
self.some_input = some_input
if __name__ == "__main__":
with DAG("dag0", start_date=pendulum.now()) as dag:
with TaskGroup(group_id="tg1"):
Op0(task_id="task1", some_input=task0())
dag.partial_subset("tg1.task1")
```
Running this script with airflow 2.3.2 produces this traceback:
```
Traceback (most recent call last):
File "/app/airflow-bug-minimal.py", line 22, in <module>
dag.partial_subset("tg1.task1")
File "/venv/lib/python3.10/site-packages/airflow/models/dag.py", line 2013, in partial_subset
dag.task_dict = {
File "/venv/lib/python3.10/site-packages/airflow/models/dag.py", line 2014, in <dictcomp>
t.task_id: _deepcopy_task(t)
File "/venv/lib/python3.10/site-packages/airflow/models/dag.py", line 2011, in _deepcopy_task
return copy.deepcopy(t, memo)
File "/usr/local/lib/python3.10/copy.py", line 153, in deepcopy
y = copier(memo)
File "/venv/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 1156, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "/venv/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 1000, in __setattr__
self.set_xcomargs_dependencies()
File "/venv/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 1107, in set_xcomargs_dependencies
XComArg.apply_upstream_relationship(self, arg)
File "/venv/lib/python3.10/site-packages/airflow/models/xcom_arg.py", line 186, in apply_upstream_relationship
op.set_upstream(ref.operator)
File "/venv/lib/python3.10/site-packages/airflow/models/taskmixin.py", line 241, in set_upstream
self._set_relatives(task_or_task_list, upstream=True, edge_modifier=edge_modifier)
File "/venv/lib/python3.10/site-packages/airflow/models/taskmixin.py", line 185, in _set_relatives
dags: Set["DAG"] = {task.dag for task in [*self.roots, *task_list] if task.has_dag() and task.dag}
File "/venv/lib/python3.10/site-packages/airflow/models/taskmixin.py", line 185, in <setcomp>
dags: Set["DAG"] = {task.dag for task in [*self.roots, *task_list] if task.has_dag() and task.dag}
File "/venv/lib/python3.10/site-packages/airflow/models/dag.py", line 508, in __hash__
val = tuple(self.task_dict.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
Note that the call to `dag.partial_subset` usually happens in the mini-scheduler: https://github.com/apache/airflow/blob/2.3.2/airflow/jobs/local_task_job.py#L253
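As a possible mitigation while this is open (assuming you can live without the mini-scheduler optimisation), the per-task scheduling step that triggers `partial_subset` can be switched off via configuration:
```
[scheduler]
schedule_after_task_execution = False
```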
### Operating System
Linux (Debian 9)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24525 | https://github.com/apache/airflow/pull/24865 | 17564a40a7b8b5dee878cc634077e0a2e63e36fb | c23b31cd786760da8a8e39ecbcf2c0d31e50e594 | "2022-06-17T13:08:16Z" | python | "2022-07-06T10:34:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,487 | ["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"] | Dynamic mapping over KubernetesPodOperator results produces triplicate child tasks | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Attempting to use [dynamic task mapping](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html#mapping-over-result-of-classic-operators) on the results of a `KubernetesPodOperator` (or `GKEStartPodOperator`) produces 3x as many downstream task instances as it should. Two-thirds of the downstream tasks fail more or less instantly.
### What you think should happen instead
The problem is that the number of downstream tasks is calculated by counting XCOMs associated with the upstream task, assuming that each `task_id` has a single XCOM:
https://github.com/apache/airflow/blob/fe5e689adfe3b2f9bcc37d3975ae1aea9b55e28a/airflow/models/mappedoperator.py#L606-L615
However the `KubernetesPodOperator` pushes two XCOMs in its `.execute()` method:
https://github.com/apache/airflow/blob/fe5e689adfe3b2f9bcc37d3975ae1aea9b55e28a/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L425-L426
So the number of downstream tasks ends up being 3x what it should.
### How to reproduce
Reproducing the behavior requires access to a Kubernetes cluster, but in pseudo-code, a DAG like this should demonstrate the behavior:
```
with DAG(...) as dag:
# produces an output list with N elements
first_pod = GKEStartPodOperator(..., do_xcom_push=True)
# produces 1 output per input, so N task instances are created each with a single output
second_pod = GKEStartPodOperator.partial(..., do_xcom_push=True).expand(id=XComArg(first_pod))
# should have N task instances created, but actually gets 3N task instances created
third_pod = GKEStartPodOperator.partial(..., do_xcom_push=True).expand(id=XComArg(second_pod))
```
### Operating System
macOS 12.4
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==4.1.0
apache-airflow-providers-google==8.0.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
When I edit `mappedoperator.py` in my local deployment to filter on the XCom key, things behave as expected:
```
# Collect lengths from mapped upstreams.
xcom_query = (
session.query(XCom.task_id, func.count(XCom.map_index))
.group_by(XCom.task_id)
.filter(
XCom.dag_id == self.dag_id,
XCom.run_id == run_id,
XCom.key == 'return_value', <------- added this line
XCom.task_id.in_(mapped_dep_keys),
XCom.map_index >= 0,
)
)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24487 | https://github.com/apache/airflow/pull/24530 | df388a3d5364b748993e61b522d0b68ff8b8124a | a69095fea1722e153a95ef9da93b002b82a02426 | "2022-06-15T23:31:31Z" | python | "2022-07-27T08:36:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,484 | ["airflow/migrations/versions/0111_2_3_3_add_indexes_for_cascade_deletes.py", "airflow/models/taskfail.py", "airflow/models/taskreschedule.py", "airflow/models/xcom.py", "docs/apache-airflow/migrations-ref.rst"] | `airflow db clean task_instance` takes a long time | ### Apache Airflow version
2.3.1
### What happened
When I run the `airflow db clean task_instance` command, it can take up to 9 hours to complete. The database has around 3,215,220 rows in the `task_instance` table and 51,602 rows in the `dag_run` table. The overall size of the database is around 1 TB.
I believe the issue is caused by the cascade constraints on other tables as well as the lack of indexes on the task_instance foreign keys.
Running a delete on a small number of rows shows that most of the time is spent in the xcom and task_fail tables:
```
explain (analyze,buffers,timing) delete from task_instance t1 where t1.run_id = 'manual__2022-05-11T01:09:05.856703+00:00'; rollback;
Trigger for constraint task_reschedule_ti_fkey: time=3.208 calls=23
Trigger for constraint task_reschedule_ti_fkey: time=3.208 calls=23
Trigger for constraint task_map_task_instance_fkey: time=1.848 calls=23
Trigger for constraint xcom_task_instance_fkey: time=4457.779 calls=23
Trigger for constraint rtif_ti_fkey: time=3.135 calls=23
Trigger for constraint task_fail_ti_fkey: time=1164.183 calls=23
```
I temporarily fixed it by adding these indexes.
```
create index idx_task_reschedule_dr_fkey on task_reschedule (dag_id, run_id);
create index idx_xcom_ti_fkey on xcom (dag_id, task_id, run_id, map_index);
create index idx_task_fail_ti_fkey on task_fail (dag_id, task_id, run_id, map_index);
```
### What you think should happen instead
It should not take 9 hours to complete a clean up process. Before upgrading to 2.3.x, it was taking no more than 5 minutes.
### How to reproduce
_No response_
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24484 | https://github.com/apache/airflow/pull/24488 | 127f8f4de02422ade8f2c84f84d3262d6efde185 | 677c42227c08f705142f298ab88915f133cd94e5 | "2022-06-15T21:21:18Z" | python | "2022-06-16T18:41:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,460 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/operators/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "docs/apache-airflow-providers-google/operators/cloud/bigquery.rst", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | let BigQueryGetData operator take a query string and as_dict flag | ### Description
Today, the `airflow.providers.google.cloud.operators.bigquery.BigQueryGetDataOperator` only lets you point to a specific dataset and table and choose how many rows you want.
It already sets up a BigQueryHook, so it would be very easy to support a custom query passed as a string as well.
It would also be very convenient to have an as_dict flag to return the result as a list of dicts.
I am not an expert in Python, but here is my attempt at a modification of the current code (from 8.0.0rc2):
``` python
from typing import Dict, Optional, Sequence, Union

from airflow.models import BaseOperator
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook
from airflow.providers.google.cloud.operators.bigquery import BigQueryUIColors
from airflow.utils.context import Context


class BigQueryGetDataOperatorX(BaseOperator):
"""
Fetches the data from a BigQuery table (alternatively fetch data for selected columns)
and returns data in a python list. The number of elements in the returned list will
be equal to the number of rows fetched. Each element in the list will again be a list
where element would represent the columns values for that row.
**Example Result**: ``[['Tony', '10'], ['Mike', '20'], ['Steve', '15']]``
.. seealso::
For more information on how to use this operator, take a look at the guide:
:ref:`howto/operator:BigQueryGetDataOperator`
.. note::
If you pass fields to ``selected_fields`` which are in different order than the
order of columns already in
BQ table, the data will still be in the order of BQ table.
For example if the BQ table has 3 columns as
``[A,B,C]`` and you pass 'B,A' in the ``selected_fields``
the data would still be of the form ``'A,B'``.
**Example**: ::
get_data = BigQueryGetDataOperator(
task_id='get_data_from_bq',
dataset_id='test_dataset',
table_id='Transaction_partitions',
max_results=100,
selected_fields='DATE',
gcp_conn_id='airflow-conn-id'
)
:param dataset_id: The dataset ID of the requested table. (templated)
:param table_id: The table ID of the requested table. (templated)
:param max_results: The maximum number of records (rows) to be fetched
from the table. (templated)
:param selected_fields: List of fields to return (comma-separated). If
unspecified, all fields are returned.
:param gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.
:param delegate_to: The account to impersonate using domain-wide delegation of authority,
if any. For this to work, the service account making the request must have
domain-wide delegation enabled.
:param location: The location used for the operation.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with first
account from the list granting this role to the originating account (templated).
:param query: (Optional) A sql query to execute instead
:param as_dict: if True returns the result as a list of dictionaries. default to False
"""
template_fields: Sequence[str] = (
'dataset_id',
'table_id',
'max_results',
'selected_fields',
'impersonation_chain',
)
ui_color = BigQueryUIColors.QUERY.value
def __init__(
self,
*,
dataset_id: Optional[str] = None,
table_id: Optional[str] = None,
max_results: Optional[int] = 100,
selected_fields: Optional[str] = None,
gcp_conn_id: str = 'google_cloud_default',
delegate_to: Optional[str] = None,
location: Optional[str] = None,
impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
query: Optional[str] = None,
as_dict: bool = False,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.dataset_id = dataset_id
self.table_id = table_id
self.max_results = int(max_results)
self.selected_fields = selected_fields
self.gcp_conn_id = gcp_conn_id
self.delegate_to = delegate_to
self.location = location
self.impersonation_chain = impersonation_chain
self.query = query
self.as_dict = as_dict
if not query and not table_id:
self.log.error('Table_id or query not set. Please provide either a dataset_id + table_id or a query string')
def execute(self, context: 'Context') -> list:
self.log.info(
'Fetching Data from %s.%s max results: %s', self.dataset_id, self.table_id, self.max_results
)
hook = BigQueryHook(
gcp_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to,
impersonation_chain=self.impersonation_chain,
location=self.location,
)
if not self.query:
if not self.selected_fields:
schema: Dict[str, list] = hook.get_schema(
dataset_id=self.dataset_id,
table_id=self.table_id,
)
if "fields" in schema:
self.selected_fields = ','.join([field["name"] for field in schema["fields"]])
with hook.list_rows(
dataset_id=self.dataset_id,
table_id=self.table_id,
max_results=self.max_results,
selected_fields=self.selected_fields
) as rows:
if self.as_dict:
table_data = [json.dumps(dict(zip(self.selected_fields, row))).encode('utf-8') for row in rows]
else:
table_data = [row.values() for row in rows]
else:
with hook.get_conn().cursor().execute(self.query) as cursor:
if self.as_dict:
table_data = [json.dumps(dict(zip(self.keys,row))).encode('utf-8') for row in cursor.fetchmany(self.max_results)]
else:
table_data = [row for row in cursor.fetchmany(self.max_results)]
self.log.info('Total extracted rows: %s', len(table_data))
return table_data
```
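For illustration, tasks using the two proposed parameters could then look like this (project, dataset, table and connection ids below are placeholders):
```python
# Fetch rows from a specific table and return them as dicts
get_dates = BigQueryGetDataOperatorX(
    task_id="get_dates",
    dataset_id="test_dataset",
    table_id="Transaction_partitions",
    selected_fields="DATE",
    max_results=100,
    as_dict=True,
    gcp_conn_id="google_cloud_default",
)

# Fetch the result of an arbitrary query instead of a whole table
get_counts = BigQueryGetDataOperatorX(
    task_id="get_counts",
    query="SELECT name, COUNT(*) AS cnt FROM `my-project.my_dataset.tbl` GROUP BY name",
    max_results=50,
    gcp_conn_id="google_cloud_default",
)
```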
### Use case/motivation
This would simplify getting data from BigQuery into Airflow, instead of having to first store the data in a separate table with BigQueryInsertJob and then fetch that.
It would also simplify handling the data, since `as_dict` works the same way as many other database connectors in Python do.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24460 | https://github.com/apache/airflow/pull/30887 | dff7e0de362e4cd318d7c285ec102923503eceb3 | b8f73768ec13f8d4cc1605cca3fa93be6caac473 | "2022-06-15T08:33:25Z" | python | "2023-05-09T06:05:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,388 | ["airflow/models/abstractoperator.py", "airflow/models/baseoperator.py", "airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "airflow/utils/context.py", "tests/decorators/test_python.py", "tests/models/test_mappedoperator.py"] | Unable to access operator attrs within Jinja context for mapped tasks | ### Apache Airflow version
2.3.2 (latest released)
### What happened
When attempting to generate mapped SQL tasks using a Jinja-templated query that accesses operator attributes, an exception like the following is thrown:
`jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute '<operator attribute>'`
For example, when attempting to map `SQLValueCheckOperator` tasks with respect to `database` using a query of `SELECT COUNT(*) FROM {{ task.database }}.tbl;`:
`jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute 'database'`
Or, when using `SnowflakeOperator` and mapping via `parameters` of a query like `SELECT * FROM {{ task.parameters.tbl }};`:
`jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute 'parameters'`
### What you think should happen instead
When using Jinja-templated SQL queries, the attribute being used for the mapping should be accessible via `{{ task.<operator attribute> }}`. Executing the same SQL query with classic, non-mapped tasks allows this operator-attribute access from the `task` context object.
Ideally, the same interface should apply to both non-mapped and mapped tasks. Also, given the preference for `parameters` over `params` in SQL-type operators, being able to map over `parameters` will help folks move from using `params` to `parameters`.
### How to reproduce
Consider the following DAG:
```python
from pendulum import datetime

from airflow.decorators import dag
from airflow.operators.sql import SQLValueCheckOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

CORE_SQL = "SELECT COUNT(*) FROM {{ task.database }}.tbl;"
SNOWFLAKE_SQL = """SELECT * FROM {{ task.parameters.tbl }};"""


@dag(dag_id="map-city", start_date=datetime(2022, 6, 7), schedule_interval=None)
def map_city():
    classic_sql_value_check = SQLValueCheckOperator(
        task_id="classic_sql_value_check",
        conn_id="snowflake",
        sql=CORE_SQL,
        database="dev",
        pass_value=20000,
    )

    mapped_value_check = SQLValueCheckOperator.partial(
        task_id="check_row_count",
        conn_id="snowflake",
        sql=CORE_SQL,
        pass_value=20000,
    ).expand(database=["dev", "production"])

    classic_snowflake_task = SnowflakeOperator(
        task_id="classic_snowflake_task",
        snowflake_conn_id="snowflake",
        sql=SNOWFLAKE_SQL,
        parameters={"tbl": "foo"},
    )

    mapped_snowflake_task = SnowflakeOperator.partial(
        task_id="mapped_snowflake_task", snowflake_conn_id="snowflake", sql=SNOWFLAKE_SQL
    ).expand(
        parameters=[
            {"tbl": "foo"},
            {"tbl": "bar"},
        ]
    )


_ = map_city()
```
**`SQLValueCheckOperator` tasks**
The logs for the non-mapped "classic_sql_value_check" task show the query executing as expected:
`[2022-06-11, 02:01:03 UTC] {sql.py:204} INFO - Executing SQL check: SELECT COUNT(*) FROM dev.tbl;`
while the mapped "check_row_count" task fails with the following exception:
```bash
[2022-06-11, 02:01:03 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'map-city', 'check_row_count', 'manual__2022-06-11T02:01:01.831761+00:00', '--job-id', '350', '--raw', '--subdir', 'DAGS_FOLDER/map_city.py', '--cfg-path', '/tmp/tmpm5bg9mt5', '--map-index', '0', '--error-file', '/tmp/tmp2kbilt2l']
[2022-06-11, 02:01:03 UTC] {standard_task_runner.py:80} INFO - Job 350: Subtask check_row_count
[2022-06-11, 02:01:03 UTC] {task_command.py:370} INFO - Running <TaskInstance: map-city.check_row_count manual__2022-06-11T02:01:01.831761+00:00 map_index=0 [running]> on host 569596df5be5
[2022-06-11, 02:01:03 UTC] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1451, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1555, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2212, in render_templates
rendered_task = self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 726, in render_template_fields
self._do_render_template_fields(
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/abstractoperator.py", line 344, in _do_render_template_fields
rendered_content = self.render_template(
File "/usr/local/lib/python3.9/site-packages/airflow/models/abstractoperator.py", line 391, in render_template
return render_template_to_string(template, context)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 296, in render_template_to_string
return render_template(template, context, native=False)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 291, in render_template
return "".join(nodes)
File "<template>", line 13, in root
File "/usr/local/lib/python3.9/site-packages/jinja2/runtime.py", line 903, in _fail_with_undefined_error
raise self._undefined_exception(self._undefined_message)
jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute 'database'
```
**`SnowflakeOperator` tasks**
Similarly, the "classic_snowflake_task" non-mapped task is able to execute the SQL query as expected:
`[2022-06-11, 02:01:04 UTC] {snowflake.py:324} INFO - Running statement: SELECT * FROM foo;, parameters: {'tbl': 'foo'}`
while the mapped "mapped_snowflake_task" task fails to execute the query:
```bash
[2022-06-11, 02:01:03 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'map-city', 'mapped_snowflake_task', 'manual__2022-06-11T02:01:01.831761+00:00', '--job-id', '347', '--raw', '--subdir', 'DAGS_FOLDER/map_city.py', '--cfg-path', '/tmp/tmp6kmqs5ew', '--map-index', '0', '--error-file', '/tmp/tmpkufg9xqx']
[2022-06-11, 02:01:03 UTC] {standard_task_runner.py:80} INFO - Job 347: Subtask mapped_snowflake_task
[2022-06-11, 02:01:03 UTC] {task_command.py:370} INFO - Running <TaskInstance: map-city.mapped_snowflake_task manual__2022-06-11T02:01:01.831761+00:00 map_index=0 [running]> on host 569596df5be5
[2022-06-11, 02:01:03 UTC] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1451, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1555, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2212, in render_templates
rendered_task = self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 726, in render_template_fields
self._do_render_template_fields(
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/abstractoperator.py", line 344, in _do_render_template_fields
rendered_content = self.render_template(
File "/usr/local/lib/python3.9/site-packages/airflow/models/abstractoperator.py", line 391, in render_template
return render_template_to_string(template, context)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 296, in render_template_to_string
return render_template(template, context, native=False)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 291, in render_template
return "".join(nodes)
File "<template>", line 13, in root
File "/usr/local/lib/python3.9/site-packages/jinja2/sandbox.py", line 326, in getattr
value = getattr(obj, attribute)
File "/usr/local/lib/python3.9/site-packages/jinja2/runtime.py", line 910, in __getattr__
return self._fail_with_undefined_error()
File "/usr/local/lib/python3.9/site-packages/jinja2/runtime.py", line 903, in _fail_with_undefined_error
raise self._undefined_exception(self._undefined_message)
jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute 'parameters'
```
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-snowflake==2.7.0
### Deployment
Astronomer
### Deployment details
Astronomer Runtime 5.0.3
### Anything else
Even though using the `{{ task.<operator attr> }}` method does not work for mapped tasks, there is a workaround. Given the `SnowflakeOperator` example from above attempting to execute the query `SELECT * FROM {{ task.parameters.tbl }};`, users can modify the templated query to `SELECT * FROM {{ task.mapped_kwargs.parameters[ti.map_index].tbl }};` for successful execution. This workaround isn't very obvious, though, and requires some solid digging into the new 2.3.0 code.
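Applied to the example DAG above, the workaround looks something like this (same placeholder connection and table names as before):
```python
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

WORKAROUND_SNOWFLAKE_SQL = "SELECT * FROM {{ task.mapped_kwargs.parameters[ti.map_index].tbl }};"

mapped_snowflake_task = SnowflakeOperator.partial(
    task_id="mapped_snowflake_task",
    snowflake_conn_id="snowflake",
    sql=WORKAROUND_SNOWFLAKE_SQL,
).expand(
    parameters=[
        {"tbl": "foo"},
        {"tbl": "bar"},
    ]
)
```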
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24388 | https://github.com/apache/airflow/pull/26702 | ed494594ef213b3633aa3972e1b8b4ad18b88e42 | 5560a46bfe8a14205c5e8a14f0b5c2ae74ee100c | "2022-06-11T02:28:05Z" | python | "2022-09-27T12:52:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,360 | ["airflow/providers/snowflake/transfers/s3_to_snowflake.py", "airflow/providers/snowflake/utils/__init__.py", "airflow/providers/snowflake/utils/common.py", "docs/apache-airflow-providers-snowflake/operators/s3_to_snowflake.rst", "tests/providers/snowflake/transfers/test_s3_to_snowflake.py", "tests/providers/snowflake/utils/__init__.py", "tests/providers/snowflake/utils/test_common.py", "tests/system/providers/snowflake/example_snowflake.py"] | Pattern parameter in S3ToSnowflakeOperator | ### Description
I would like to propose adding a pattern parameter to allow loading only those files that satisfy a given regex pattern.
This functionality is supported on the Snowflake side; it just requires passing a parameter to the COPY INTO command.
[Snowflake documentation](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#loading-using-pattern-matching)
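If such a parameter existed, a task might look roughly like the sketch below. The `pattern` argument is the proposed addition (it would be forwarded to `COPY INTO ... PATTERN = '...'`); the remaining arguments follow the existing operator signature, and the stage/table/connection names are placeholders:
```python
from airflow.providers.snowflake.transfers.s3_to_snowflake import S3ToSnowflakeOperator

copy_filtered_files = S3ToSnowflakeOperator(
    task_id="copy_filtered_files",
    stage="my_stage",
    prefix="exports/",
    table="my_table",
    schema="public",
    file_format="(TYPE = 'CSV')",
    pattern=".*employees.*[.]csv",  # proposed: only load files whose key matches this regex
    snowflake_conn_id="snowflake_default",
)
```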
### Use case/motivation
I have multiple files with different schemas in one folder. I would like to move to Snowflake only the files which meet a given name filter, and I am not able to do that with the prefix parameter.
### Related issues
I am not aware of any.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24360 | https://github.com/apache/airflow/pull/24571 | 5877f45d65d5aa864941efebd2040661b6f89cb1 | 66e84001df069c76ba8bfefe15956c4018844b92 | "2022-06-09T22:13:38Z" | python | "2022-06-22T07:49:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,352 | ["airflow/providers/google/cloud/operators/gcs.py", "tests/providers/google/cloud/operators/test_gcs.py"] | GCSDeleteObjectsOperator raises unexpected ValueError for prefix set as empty string | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
All versions.
```
apache-airflow-providers-google>=1.0.0b1
apache-airflow-backport-providers-google>=2020.5.20rc1
```
### Apache Airflow version
2.3.2 (latest released)
### Operating System
macOS 12.3.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
I'm currently doing the upgrade check in Airflow 1.10.15 and one of the topics is to change the import locations from contrib to the specific provider.
While replacing:
`airflow.contrib.operators.gcs_delete_operator.GoogleCloudStorageDeleteOperator`
By:
`airflow.providers.google.cloud.operators.gcs.GCSDeleteObjectsOperator`
An error appeared in the UI: `Broken DAG: [...] Either object or prefix should be set. Both are None`
---
Upon further investigation, I found out that while the `GoogleCloudStorageDeleteOperator` from the contrib module had this parameter check (as can be seen [here](https://github.com/apache/airflow/blob/v1-10-stable/airflow/contrib/operators/gcs_delete_operator.py#L63)):
```python
assert objects is not None or prefix is not None
```
The new `GCSDeleteObjectsOperator` from the Google provider module has the following (as can be seen [here](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/gcs.py#L308-L309)):
```python
if not objects and not prefix:
raise ValueError("Either object or prefix should be set. Both are None")
```
---
As it turns out, these conditions are not equivalent, because a `prefix` variable containing an empty string won't raise an error in the first case, but will raise it in the second one.
### What you think should happen instead
This behavior does not match the documented description, since passing an empty string as the prefix is perfectly valid when the user wants to delete all objects within the bucket.
Furthermore, there were no philosophical changes within the API in that timeframe. This code change happened in [this commit](https://github.com/apache/airflow/commit/25e9047a4a4da5fad4f85c366e3a6262c0a4f68e#diff-c45d838a139b258ab703c23c30fd69078108f14a267731bd2be5cc1c8a7c02f5), where the developer's intent was clearly to remove assertions, not to change the logic behind the validation. In fact, it even relates to a PR for [this Airflow JIRA ticket](https://issues.apache.org/jira/browse/AIRFLOW-6193).
### How to reproduce
Add a `GCSDeleteObjectsOperator` with a parameter `prefix=""` to a DAG.
Example:
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.providers.google.cloud.operators.gcs import GCSDeleteObjectsOperator
with DAG('test_dag', schedule_interval=timedelta(days=1), start_date=datetime(2022, 1, 1)) as dag:
task = GCSDeleteObjectsOperator(
task_id='task_that_generates_ValueError',
bucket_name='some_bucket',
prefix=''
)
```
### Anything else
In my opinion, the error message isn't very helpful either, since it just breaks the DAG without pointing out which task is causing the issue. It took me 20 minutes to pinpoint the exact task in my case, since I was dealing with a DAG with a lot of tasks.
Adding the `task_id` to the error message could improve the developer experience in that case.
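A minimal sketch of what that improved check could look like inside the operator (the exact wording is just a suggestion):
```python
if not objects and not prefix:
    raise ValueError(
        f"Either objects or prefix should be set for task {self.task_id!r}. "
        "Both are None or empty."
    )
```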
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24352 | https://github.com/apache/airflow/pull/24353 | dd35fdaf35b6e46fd69a1b1da36ae7ffc0505dcb | e7a1c50d62680a521ef90a424b7eff03635081d5 | "2022-06-09T17:23:11Z" | python | "2022-06-19T22:07:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,346 | ["airflow/utils/db.py"] | Add salesforce_default to List Connection | ### Apache Airflow version
2.1.2
### What happened
`salesforce_default` is not in the list of default Connections.
### What you think should happen instead
`salesforce_default` should be added to the list of default Connections.
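For reference, the default connections are seeded in `airflow/utils/db.py` via `merge_conn` inside `create_default_connections`; the missing entry would be along these lines (the default login/extra values shown here are only a guess):
```python
from airflow.models import Connection
from airflow.utils.db import merge_conn

# inside create_default_connections(session):
merge_conn(
    Connection(
        conn_id="salesforce_default",
        conn_type="salesforce",
        login="username",
        password="password",
        extra='{"security_token": "security_token"}',
    ),
    session,
)
```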
### How to reproduce
After resetting the DB, look at the Connections list.
### Operating System
GCP Container
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24346 | https://github.com/apache/airflow/pull/24347 | e452949610cff67c0e0a9918a8fefa7e8cc4b8c8 | 6d69dd062f079a8fbf72563fd218017208bfe6c1 | "2022-06-09T14:56:06Z" | python | "2022-06-13T18:25:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,343 | ["airflow/providers/google/cloud/operators/bigquery.py"] | BigQueryCreateEmptyTableOperator do not deprecated bigquery_conn_id yet | ### Apache Airflow version
2.3.2 (latest released)
### What happened
`bigquery_conn_id` has been deprecated and replaced by `gcp_conn_id` for other operators like `BigQueryDeleteTableOperator`,
but that is not the case for `BigQueryCreateEmptyTableOperator`.
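For reference, other operators handle this with a constructor-level deprecation shim; a rough, illustrative sketch of applying the same pattern here (not the actual upstream code) would be:
```python
import warnings
from typing import Optional


class BigQueryCreateEmptyTableOperatorSketch:
    """Illustration only: accept the old bigquery_conn_id but warn and fall back to gcp_conn_id."""

    def __init__(
        self,
        *,
        gcp_conn_id: str = "google_cloud_default",
        bigquery_conn_id: Optional[str] = None,
        **kwargs,
    ) -> None:
        if bigquery_conn_id:
            warnings.warn(
                "The bigquery_conn_id parameter has been deprecated. "
                "You should pass the gcp_conn_id parameter instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            gcp_conn_id = bigquery_conn_id
        self.gcp_conn_id = gcp_conn_id
```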
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24343 | https://github.com/apache/airflow/pull/24376 | dd78e29a8c858769c9c21752f319e19af7f64377 | 8e0bddaea69db4d175f03fa99951f6d82acee84d | "2022-06-09T09:19:46Z" | python | "2022-06-12T21:07:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,338 | ["airflow/exceptions.py", "airflow/models/xcom_arg.py", "tests/decorators/test_python.py"] | TaskFlow AirflowSkipException causes downstream step to fail | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Using the TaskFlow API, I have 2 tasks that lead to the same downstream task. These tasks check for new data and, when it is found, set an XCom entry with the new filename for the downstream task to handle. If no data is found, the upstream tasks raise a skip exception.
The downstream task has trigger_rule = none_failed_min_one_success.
The problem is that a skipped task doesn't set any XCom. When the downstream task starts, it raises the error:
`airflow.exceptions.AirflowException: XComArg result from task2 at airflow_2_3_xcomarg_render_error with key="return_value" is not found!`
### What you think should happen instead
Based on the trigger rule "none_failed_min_one_success", the expectation is that an upstream task should be allowed to skip and the downstream task will still run. While the downstream task does try to start based on trigger rules, it never really gets to run since the error is raised when rendering its arguments.
### How to reproduce
The example DAG below will generate the error if run.
```python
from airflow.decorators import dag, task
from airflow.exceptions import AirflowSkipException


@task
def task1():
    return "example.csv"


@task
def task2():
    raise AirflowSkipException()


@task(trigger_rule="none_failed_min_one_success")
def downstream_task(t1, t2):
    print("task ran")


@dag(
    default_args={"owner": "Airflow", "start_date": "2022-06-07"},
    schedule_interval=None,
)
def airflow_2_3_xcomarg_render_error():
    t1 = task1()
    t2 = task2()
    downstream_task(t1, t2)


example_dag = airflow_2_3_xcomarg_render_error()
```
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24338 | https://github.com/apache/airflow/pull/25661 | c7215a28f9df71c63408f758ed34253a4dfaa318 | a4e38978194ef46565bc1e5ba53ecc65308d09aa | "2022-06-08T20:07:42Z" | python | "2022-08-16T12:05:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,331 | ["dev/example_dags/README.md", "dev/example_dags/update_example_dags_paths.py"] | "Example DAGs" link under kubernetes-provider documentation is broken. Getting 404 | ### What do you see as an issue?
The _Example DAGs_ folder is not available for _apache-airflow-providers-cncf-kubernetes_, which results in a broken link on the documentation page (https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/index.html).
A 404 error is returned when clicking the _Example DAGs_ link (https://github.com/apache/airflow/tree/main/airflow/providers/cncf/kubernetes/example_dags).
<img width="1464" alt="Screenshot 2022-06-08 at 9 01 56 PM" src="https://user-images.githubusercontent.com/11991059/172657376-8a556e9e-72e5-4aab-9c71-b1da239dbf5c.png">
<img width="1475" alt="Screenshot 2022-06-08 at 9 01 39 PM" src="https://user-images.githubusercontent.com/11991059/172657413-c72d14f2-071f-4452-baf7-0f41504a5a3a.png">
### Solving the problem
A folder named _example_dags_ should be created under https://github.com/apache/airflow/tree/main/airflow/providers/cncf/kubernetes/ and it should include Kubernetes-specific example DAGs.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24331 | https://github.com/apache/airflow/pull/24348 | 74ac9f788c31512b1fcd9254282905f34cc40666 | 85c247ae10da5ee93f26352d369f794ff4f2e47c | "2022-06-08T15:33:29Z" | python | "2022-06-09T17:33:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,328 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | `TI.log_url` is incorrect with mapped tasks | ### Apache Airflow version
2.3.0
### What happened
I had an `on_failure_callback` that sent `task_instance.log_url` to Slack; it no longer behaves correctly, giving me a page with no logs rendered instead of the logs for my task.
(Example of failure, URL like: https://XYZ.astronomer.run/dhp2pmdd/log?execution_date=2022-06-05T00%3A00%3A00%2B00%3A00&task_id=create_XXX_zip_files_and_upload&dag_id=my_dag )
![image](https://user-images.githubusercontent.com/80706212/172645178-b0efb329-d3b4-40f1-81e3-cd358dde9906.png)
### What you think should happen instead
The correct behavior would be the URL:
https://XYZ.astronomer.run/dhp2pmdd/log?execution_date=2022-06-05T00%3A00%3A00%2B00%3A00&task_id=create_XXX_zip_files_and_upload&dag_id=my_dag&map_index=0
as exemplified:
![image](https://user-images.githubusercontent.com/80706212/172645902-e6179b0c-9612-4ff4-824e-30684aca13b1.png)
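For illustration, a small self-contained sketch of how such a URL could be built so that mapped tasks are covered too (this is just the shape of the fix, not the actual `TaskInstance.log_url` source):
```python
from datetime import datetime
from urllib.parse import quote


def build_log_url(base_url: str, execution_date: datetime, dag_id: str, task_id: str, map_index: int) -> str:
    """Build a task log URL that carries the map_index so logs of mapped tasks render correctly."""
    return (
        f"{base_url}/log"
        f"?execution_date={quote(execution_date.isoformat())}"
        f"&task_id={task_id}"
        f"&dag_id={dag_id}"
        f"&map_index={map_index}"
    )
```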
### How to reproduce
_No response_
### Operating System
Debian/Docker
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24328 | https://github.com/apache/airflow/pull/24335 | a9c350762db4dca7ab5f6c0bfa0c4537d697b54c | 48a6155bb1478245c1dd8b6401e4cce00e129422 | "2022-06-08T14:44:49Z" | python | "2022-06-14T20:15:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,321 | ["airflow/providers/amazon/aws/sensors/s3.py", "tests/providers/amazon/aws/sensors/test_s3_key.py"] | S3KeySensor wildcard_match only matching key prefixes instead of full patterns | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
3.4.0
### Apache Airflow version
2.3.2 (latest released)
### Operating System
Debian GNU/Linux 10
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
For patterns like "*.zip" the S3KeySensor succeeds for all files; it does not take the full pattern into account (i.e. the ".zip" part).
Bug introduced in https://github.com/apache/airflow/pull/22737
### What you think should happen instead
Full pattern match as in version 3.3.0 (in S3KeySensor poke()):
```
...
if self.wildcard_match:
return self.get_hook().check_for_wildcard_key(self.bucket_key, self.bucket_name)
...
```
Alternatively, the files obtained by `files = self.get_hook().get_file_metadata(prefix, bucket_name)`, which only match the prefix, should be further filtered, e.g. as sketched below.
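A minimal sketch of that extra filtering step, assuming the metadata dicts returned by `get_file_metadata` expose the object key under `'Key'`:
```python
import fnmatch
from typing import Dict, List


def keep_full_pattern_matches(files: List[Dict], key_pattern: str) -> List[Dict]:
    """Keep only the objects whose full key matches the wildcard pattern, not just the prefix."""
    return [f for f in files if fnmatch.fnmatch(f["Key"], key_pattern)]
```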
### How to reproduce
Create a DAG with a key sensor task whose key contains a wildcard and a suffix. For example, the following task should succeed only if any ZIP files are available in "my-bucket", but it succeeds for all files instead:
`S3KeySensor(task_id="wait_for_file", bucket_name="my-bucket", bucket_key="*.zip", wildcard_match=True)`
### Anything else
Not directly part of this issue, but at the same time I would suggest including additional file attributes in the _check_key method, e.g. the actual key of the files. This way more filters (e.g. excluding specific keys) could be implemented via the check_fn.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24321 | https://github.com/apache/airflow/pull/24378 | f8e106a531d2dc502bdfe47c3f460462ab0a156d | 7fed7f31c3a895c0df08228541f955efb16fbf79 | "2022-06-08T11:44:58Z" | python | "2022-06-11T19:31:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,318 | ["airflow/providers/amazon/aws/hooks/emr.py", "airflow/providers/amazon/aws/operators/emr.py"] | `EmrCreateJobFlowOperator` does not work if emr_conn_id param contain credential | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.2 (latest released)
### Operating System
os
### Deployment
Other
### Deployment details
_No response_
### What happened
`EmrCreateJobFlowOperator` currently has two connection params, `emr_conn_id` and `aws_conn_id`. It only works when I set an `aws_conn_id` that contains credentials and an empty `emr_conn_id`; it does not work in the cases below.
- when I set both aws_conn_id and emr_conn_id in the operator and both connections contain credentials, i.e. they have aws_access_key_id and other params in the Airflow connection extra
```
Unknown parameter in input: "aws_access_key_id", must be one of: Name, LogUri, LogEncryptionKmsKeyId, AdditionalInfo, AmiVersion, ReleaseLabel, Instances, Steps, BootstrapActions, SupportedProducts, NewSupportedProducts, Applications, Configurations, VisibleToAllUsers, JobFlowRole, ServiceRole, Tags, SecurityConfiguration, AutoScalingRole, ScaleDownBehavior, CustomAmiId, EbsRootVolumeSize, RepoUpgradeOnBoot, KerberosAttributes, StepConcurrencyLevel, ManagedScalingPolicy, PlacementGroupConfigs, AutoTerminationPolicy, OSReleaseLabel
```
- when I set both aws_conn_id and emr_conn_id in the operator and only the emr_conn_id connection contains credentials, i.e. it has aws_access_key_id and other params in the Airflow connection extra
```
[2022-06-07, 20:49:19 UTC] {taskinstance.py:1826} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/airflow/airflow/providers/amazon/aws/operators/emr.py", line 324, in execute
response = emr.create_job_flow(job_flow_overrides)
File "/opt/airflow/airflow/providers/amazon/aws/hooks/emr.py", line 87, in create_job_flow
response = self.get_conn().run_job_flow(**job_flow_overrides)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 508, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 895, in _make_api_call
operation_model, request_dict, request_context
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 917, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 195, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 134, in create_request
operation_name=operation_model.name,
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 412, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 256, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 239, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/signers.py", line 103, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python3.7/site-packages/botocore/signers.py", line 187, in sign
auth.add_auth(request)
File "/usr/local/lib/python3.7/site-packages/botocore/auth.py", line 405, in add_auth
raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials
```
- When I set only aws_conn_id in the operator and it contains credentials
```
Traceback (most recent call last):
File "/opt/airflow/airflow/providers/amazon/aws/operators/emr.py", line 324, in execute
response = emr.create_job_flow(job_flow_overrides)
File "/opt/airflow/airflow/providers/amazon/aws/hooks/emr.py", line 90, in create_job_flow
emr_conn = self.get_connection(self.emr_conn_id)
File "/opt/airflow/airflow/hooks/base.py", line 67, in get_connection
conn = Connection.get_connection_from_secrets(conn_id)
File "/opt/airflow/airflow/models/connection.py", line 430, in get_connection_from_secrets
raise AirflowNotFoundException(f"The conn_id `{conn_id}` isn't defined")
```
- When I set only emr_conn_id in the operator and it contains credentials
```
[2022-06-07, 20:49:19 UTC] {taskinstance.py:1826} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/airflow/airflow/providers/amazon/aws/operators/emr.py", line 324, in execute
response = emr.create_job_flow(job_flow_overrides)
File "/opt/airflow/airflow/providers/amazon/aws/hooks/emr.py", line 87, in create_job_flow
response = self.get_conn().run_job_flow(**job_flow_overrides)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 508, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 895, in _make_api_call
operation_model, request_dict, request_context
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 917, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 195, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 134, in create_request
operation_name=operation_model.name,
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 412, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 256, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 239, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/signers.py", line 103, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python3.7/site-packages/botocore/signers.py", line 187, in sign
auth.add_auth(request)
File "/usr/local/lib/python3.7/site-packages/botocore/auth.py", line 405, in add_auth
raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials
```
- When I set an aws_conn_id that has credentials and an emr_conn_id with no credentials (i.e. an empty extra field in the Airflow connection), then it works
### What you think should happen instead
It should work with just one connection id (i.e. with aws_conn_id or emr_conn_id alone) and should not fail even if emr_conn_id contains credentials.
### How to reproduce
- Create an `EmrCreateJobFlowOperator` and pass both `aws_conn_id` and `emr_conn_id` (as in the sketch below), or
- Create an `EmrCreateJobFlowOperator` and pass only `aws_conn_id` or only `emr_conn_id`
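For illustration, a task along these lines (cluster config trimmed; both referenced connections have credentials in their extra field) reproduces the first failure above:
```python
from airflow.providers.amazon.aws.operators.emr import EmrCreateJobFlowOperator

create_job_flow = EmrCreateJobFlowOperator(
    task_id="create_job_flow",
    aws_conn_id="aws_default",  # extra contains aws_access_key_id / aws_secret_access_key
    emr_conn_id="emr_default",  # extra also contains credentials
    job_flow_overrides={"Name": "my-cluster"},
)
```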
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24318 | https://github.com/apache/airflow/pull/24306 | 4daf51a2c388b41201a0a8095e0a97c27d6704c8 | 99d98336312d188a078721579a3f71060bdde542 | "2022-06-08T09:40:10Z" | python | "2022-06-10T13:25:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,281 | ["breeze"] | Fix command in breeze | ### What do you see as an issue?
There is a mistake in the command displayed in breeze.
```
The answer is 'no'. Skipping Installing pipx?.
Please run those commands manually (you might need to restart shell between them):i
pip -m install pipx
pipx ensurepath
pipx install -e '/Users/ishiis/github/airflow/dev/breeze/'
breeze setup-autocomplete --force
After that, both pipx and breeze should be available on your path
```
There is no -m option in pip.
```bash
% pip -m install pipx
Usage:
pip <command> [options]
no such option: -m
```
### Solving the problem
Fix the command; presumably it should be `pip install pipx` (or `python -m pip install pipx`), since pip itself has no `-m` option.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24281 | https://github.com/apache/airflow/pull/24282 | 6dc474fc82aa9325081b0c5f2b92c948e2f16f74 | 69ca427754c54c5496bf90b7fc70fdd646bc92e5 | "2022-06-07T11:10:12Z" | python | "2022-06-07T11:13:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,197 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | KubernetesPodOperator rendered template tab does not pretty print `env_vars` | ### Apache Airflow version
2.2.5
### What happened
I am using the `KubernetesPodOperator` for Airflow tasks in `Airflow 2.2.5` and it does not render the `env_vars` in the `Rendered template` tab in an easily human-consumable format, as it did in `Airflow 1.10.x`.
![image](https://user-images.githubusercontent.com/3241700/172024886-81fafb11-62c9-4daf-baff-7b47f3baf7d7.png)
### What you think should happen instead
The `env_vars` should be pretty-printed in a human-legible form.
### How to reproduce
Create a task with the `KubernetesPodOperator` and check the `Rendered template` tab of the task instance.
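For example, a task like the following (image and env var values are arbitrary) is enough to show the issue in the Rendered template tab:
```python
from kubernetes.client import models as k8s

from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

pod_task = KubernetesPodOperator(
    task_id="pod_task",
    name="pod-task",
    namespace="default",
    image="alpine:3.16",
    cmds=["sh", "-c", "env"],
    env_vars=[
        k8s.V1EnvVar(name="ENV_ONE", value="one"),
        k8s.V1EnvVar(name="ENV_TWO", value="two"),
    ],
)
```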
### Operating System
Docker
### Versions of Apache Airflow Providers
2.2.5
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24197 | https://github.com/apache/airflow/pull/25850 | 1a087bca3d6ecceab96f9ab818b3b75262222d13 | db5543ef608bdd7aefdb5fefea150955d369ddf4 | "2022-06-04T20:45:01Z" | python | "2022-08-22T15:43:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,160 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | `BigQueryCreateExternalTableOperator` uses deprecated function | ### Body
The `BigQueryCreateExternalTableOperator` uses `create_external_table`:
https://github.com/apache/airflow/blob/cd49a8b9f64c57b5622025baee9247712c692e72/airflow/providers/google/cloud/operators/bigquery.py#L1131-L1147
this function is deprecated:
https://github.com/apache/airflow/blob/511d0ee256b819690ccf0f6b30d12340b1dd7f0a/airflow/providers/google/cloud/hooks/bigquery.py#L598-L602
**The task:**
Refactor/change the operator to replace `create_external_table` with `create_empty_table`.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/24160 | https://github.com/apache/airflow/pull/24363 | 626d9db2908563c4b7675db5de2cb1e3acde82e9 | c618da444e841afcfd73eeb0bce9c87648c89140 | "2022-06-03T11:29:43Z" | python | "2022-07-12T11:17:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,103 | ["chart/templates/workers/worker-kedaautoscaler.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_keda.py"] | Add support for KEDA HPA Config to Helm Chart | ### Description
> When managing the scale of a group of replicas using the HorizontalPodAutoscaler, it is possible that the number of replicas keeps fluctuating frequently due to the dynamic nature of the metrics evaluated. This is sometimes referred to as thrashing, or flapping.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#flapping
Sometimes clusters need to restrict the flapping of Airflow worker replicas.
KEDA supports [`advanced.horizontalPodAutoscalerConfig`](https://keda.sh/docs/1.4/concepts/scaling-deployments/).
It would be great if the users would have the option in the helm chart to configure scale down behavior.
### Use case/motivation
KEDA currently cannot set advanced options.
We want to set advanced options like `scaleDown.stabilizationWindowSeconds`, `scaleDown.policies`.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24103 | https://github.com/apache/airflow/pull/24220 | 97948ecae7fcbb7dfdfb169cfe653bd20a108def | 8639c70f187a7d5b8b4d2f432d2530f6d259eceb | "2022-06-02T10:15:04Z" | python | "2022-06-30T17:16:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,077 | ["docs/exts/exampleinclude.py"] | Fix style of example-block | ### What do you see as an issue?
Style of example-block in the document is broken.
<img width="810" alt="example-block" src="https://user-images.githubusercontent.com/12693596/171412272-70ca791b-c798-4080-83ab-e358f290ac31.png">
This problem occurs when browser width is between 1000px and 1280px.
See: https://airflow.apache.org/docs/apache-airflow-providers-http/stable/operators.html
### Solving the problem
The container class should be removed.
```html
<div class="example-block-wrapper docutils container">
^^^^^^^^^
...
</div>
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24077 | https://github.com/apache/airflow/pull/24078 | e41b5a012427b5e7eab49de702b83dba4fc2fa13 | 5087f96600f6d7cc852b91079e92d00df6a50486 | "2022-06-01T14:08:48Z" | python | "2022-06-01T17:50:57Z" |