status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 24,060 | ["airflow/utils/db.py", "tests/utils/test_db.py"] | `airflow db check-migrations -t 0` fails to check migrations in airflow 2.3 | ### Apache Airflow version
2.3.1 (latest released)
### What happened
As of Airflow 2.3.0 the `airflow db check-migrations -t 0` command will ALWAYS think there are unapplied migrations (even if there are none to apply), whereas in Airflow 2.2.5 a single check would be run successfully.
This was caused by PR https://github.com/apache/airflow/pull/18439, which updated the loop from [`while True`](https://github.com/apache/airflow/blob/2.2.5/airflow/utils/db.py#L638) (which always loops at least once) to [`for ticker in range(timeout)`](https://github.com/apache/airflow/blob/2.3.0/airflow/utils/db.py#L696) (which will NOT loop if timeout=0).
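A minimal sketch of the difference (this is not the actual Airflow code; `migrations_applied` is a hypothetical stand-in for the real revision check in `airflow/utils/db.py`):
```python
import time


def migrations_applied() -> bool:
    # Hypothetical placeholder for the real check against the alembic revision table.
    return True


def check_migrations_2_2_5(timeout: int) -> None:
    ticker = 0
    while True:  # the body always runs at least once, so timeout=0 still performs one check
        if migrations_applied():
            return
        ticker += 1
        if ticker >= timeout:
            raise TimeoutError(f"There are still unapplied migrations after {timeout} seconds.")
        time.sleep(1)


def check_migrations_2_3_0(timeout: int) -> None:
    for _ in range(timeout):  # with timeout=0 the body never executes, so no check happens
        if migrations_applied():
            return
        time.sleep(1)
    raise TimeoutError(f"There are still unapplied migrations after {timeout} seconds.")
```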
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24060 | https://github.com/apache/airflow/pull/24068 | 841ed271017ff35a3124f1d1a53a5c74730fed60 | 84d7b5ba39b3ff1fb5b856faec8fd4e731d3f397 | "2022-05-31T23:36:50Z" | python | "2022-06-01T11:03:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,037 | [".gitignore", "chart/.gitignore", "chart/Chart.lock", "chart/Chart.yaml", "chart/INSTALL", "chart/NOTICE", "chart/charts/postgresql-10.5.3.tgz", "scripts/ci/libraries/_kind.sh", "tests/charts/conftest.py"] | Frequent failures of helm chart tests | ### Apache Airflow version
main (development)
### What happened
We keep getting very frequent failures of the Helm Chart tests, and it seems that a big number of those errors are caused by failures when pulling the postgres chart from Bitnami:
Example here (but I saw it happening very often recently):
https://github.com/apache/airflow/runs/6666449965?check_suite_focus=true#step:9:314
```
Save error occurred: could not find : chart postgresql not found in https://charts.bitnami.com/bitnami: looks like "https://charts.bitnami.com/bitnami" is not a valid chart repository or cannot be reached: stream error: stream ID 1; INTERNAL_ERROR
Deleting newly downloaded charts, restoring pre-update state
Error: could not find : chart postgresql not found in https://charts.bitnami.com/bitnami: looks like "https://charts.bitnami.com/bitnami" is not a valid chart repository or cannot be reached: stream error: stream ID 1; INTERNAL_ERROR
Dumping logs from KinD
```
It is not only a problem for our CI; it might be a similar problem for users who want to install the chart, as they might also get the same kind of error.
I guess we should either make it more resilient to intermittent problems with the Bitnami charts or use another chart (or maybe even host the chart ourselves somewhere within Apache infrastructure). While the postgres chart is not really needed for most "production" users, it is still a dependency of our chart and it makes our chart depend on an external and apparently flaky service.
### What you think should happen instead
We should find (or host ourselves) more stable dependency or get rid of it.
### How to reproduce
Look at some recent CI builds and see that they often fail in the K8S tests, and more often than not the reason is a missing postgresql chart.
### Operating System
any
### Versions of Apache Airflow Providers
not relevant
### Deployment
Other
### Deployment details
CI
### Anything else
Happy to make the change once we agree what's the best way :).
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24037 | https://github.com/apache/airflow/pull/24395 | 5d5976c08c867b8dbae8301f46e0c422d4dde1ed | 779571b28b4ae1906b4a23c031d30fdc87afb93e | "2022-05-31T08:08:25Z" | python | "2022-06-14T16:07:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,015 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator/KubernetesExecutor: Failed to adopt pod 422 | ### Apache Airflow version
2.3.0
### What happened
Here i provide steps to reproduce this.
Goal of this: to describe how to reproduce the "Failed to Adopt pod" error condition.
The DAG->step Described Below should be of type KubernetesPodOperator
NOTE: under normal operation,
(where the MAIN_AIRFLOW_POD is never recycled by k8s, we will never see this edge-case)
(it is only when the workerPod is still running, but the MAIN_AIRFLOW_POD is suddenly restarted/stopped)
(that we would see orphan->workerPods)
1] Implement a contrived-DAG, with a single step -> which is long-running (e.g. 6 minutes)
2] Deploy your airflow-2.1.4 / airflow-2.3.0 together with the contrived-DAG
3] Run your contrived-DAG.
4] in the middle of running the single-step, check via "kubectl" that your Kubernetes->workerPod has been created / running
5] while workerPod still running, do "kubectl delete pod <OF_MAIN_AIRFLOW_POD>". This will mean that the workerPod becomes an orphan.
6] the workerPod still continues to run through to completion. after which the K8S->status of the pod will be Completed, however the pod doesn't shut down itself.
7] "kubectl" start up a new <MAIN_AIRFLOW_POD> so the web-ui is running again.
8] MAIN_AIRFLOW_POD->webUi - Run your contrived-DAG again
9] while the contrived-DAG is starting/tryingToStart etc, you will see in the logs printed out "Failed to adopt pod" -> with 422 error code.
For the step-9 error message, you will find two appearances of this error message in the airflow-2.1.4 / airflow-2.3.0 source code.
Step 7 (the general logging from the MAIN_APP) may also output the "Failed to adopt pod" error message.
### What you think should happen instead
On previous versions of airflow, e.g. 1.10.x, the orphan worker pods would be adopted by the second run of the airflow main app and either used to continue the same DAG and/or cleared away when complete.
This is not happening with the newer airflow 2.1.4 / 2.3.0 (presumably because the code changed): upon the second run of the airflow main app it seems to try to adopt the worker pod but fails at that point ("Failed to adopt pod" in the logs), and hence it cannot clear away orphan pods.
Given this is an edge case only (i.e. we would not expect k8s to be recycling the main airflow app/pod anyway), it doesn't seem a totally urgent bug. The only reason for raising this issue is that, in any k8s namespace, in particular in PROD, the namespace will slowly fill up with orphan pods over time (e.g. 1 month?) and somebody would need to manually log in to delete old pods.
### How to reproduce
Here I provide steps to reproduce this.
Goal of this: to describe how to reproduce the "Failed to Adopt pod" error condition.
The DAG->step Described Below should be of type KubernetesPodOperator
NOTE: under normal operation,
(where the MAIN_AIRFLOW_POD is never recycled by k8s, we will never see this edge-case)
(it is only when the workerPod is still running, but the MAIN_AIRFLOW_POD is suddenly restarted/stopped)
(that we would see orphan->workerPods)
1] Implement a contrived-DAG, with a single step -> which is long-running (e.g. 6 minutes); a sketch of such a DAG is included after this list
2] Deploy your airflow-2.1.4 / airflow-2.3.0 together with the contrived-DAG
3] Run your contrived-DAG.
4] in the middle of running the single-step, check via "kubectl" that your Kubernetes->workerPod has been created / running
5] while workerPod still running, do "kubectl delete pod <OF_MAIN_AIRFLOW_POD>". This will mean that the workerPod becomes an orphan.
6] the workerPod still continues to run through to completion. after which the K8S->status of the pod will be Completed, however the pod doesn't shut down itself.
7] "kubectl" start up a new <MAIN_AIRFLOW_POD> so the web-ui is running again.
8] MAIN_AIRFLOW_POD->webUi - Run your contrived-DAG again
9] while the contrived-DAG is starting/tryingToStart etc, you will see in the logs printed out "Failed to adopt pod" -> with 422 error code.
For the step-9 error message, you will find two appearances of this error message in the airflow-2.1.4 / airflow-2.3.0 source code.
Step 7 (the general logging from the MAIN_APP) may also output the "Failed to adopt pod" error message.
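A minimal sketch of the contrived DAG from step 1, assuming the cncf.kubernetes provider is installed; the image, namespace and sleep duration below are illustrative only:
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

with DAG(
    dag_id="contrived_long_running",
    schedule_interval=None,
    start_date=datetime(2022, 1, 1),
    catchup=False,
) as dag:
    # A single long-running step (~6 minutes) so the worker pod can outlive the main airflow pod.
    long_running = KubernetesPodOperator(
        task_id="long_running_step",
        name="long-running-pod",
        namespace="default",
        image="alpine:latest",
        cmds=["sleep", "360"],
        in_cluster=True,
        is_delete_operator_pod=False,  # keep the completed pod around so adoption can be observed
    )
```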
### Operating System
kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
nothing special.
The CI/CD pipeline builds the app, using requirements.txt to pull in all the required Python dependencies (including a dependency on airflow-2.1.4 / 2.3.0).
It then packages the app as an ECR image and deploys it directly to the k8s namespace.
### Anything else
This is 100% reproducible each and every time.
I have tested this multiple times.
I also tested this on the old airflow-1.10.x a couple of times to verify that the bug did not exist previously.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24015 | https://github.com/apache/airflow/pull/29279 | 05fb80ee9373835b2f72fad3e9976cf29aeca23b | d26dc223915c50ff58252a709bb7b33f5417dfce | "2022-05-30T07:49:27Z" | python | "2023-02-01T11:50:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,955 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | Add missing parameter documentation for `KubernetesHook` and `KubernetesPodOperator` | ### Body
Currently the following modules are missing certain parameters in their docstrings. Because of this, these parameters are not captured in the [Python API docs for the Kubernetes provider](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/index.html).
- [ ] KubernetesHook: `in_cluster`, `config_file`, `cluster_context`, `client_configuration`
- [ ] KubernetesPodOperator: `env_from`, `node_selectors`, `pod_runtime_info_envs`, `configmaps`
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23955 | https://github.com/apache/airflow/pull/24054 | 203fe71b49da760968c26752957f765c4649423b | 98b4e48fbc1262f1381e7a4ca6cce31d96e6f5e9 | "2022-05-27T03:23:54Z" | python | "2022-06-06T22:20:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,954 | ["airflow/providers/databricks/operators/databricks.py", "airflow/providers/databricks/operators/databricks_sql.py", "docs/apache-airflow-providers-databricks/operators/submit_run.rst", "docs/spelling_wordlist.txt"] | Add missing parameter documentation in `DatabricksSubmitRunOperator` and `DatabricksSqlOperator` | ### Body
Currently the following modules are missing certain parameters in their docstrings. Because of this, these parameters are not captured in the [Python API docs for the Databricks provider](https://airflow.apache.org/docs/apache-airflow-providers-databricks/stable/_api/airflow/providers/databricks/index.html).
- [ ] DatabricksSubmitRunOperator: `tasks`
- [ ] DatabricksSqlOperator: `do_xcom_push`
- Granted this is really part of the `BaseOperator`, but this operator specifically sets the default value to False so it would be good if this was explicitly listed for users.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23954 | https://github.com/apache/airflow/pull/24599 | 2e5737df531410d2d678d09b5d2bba5d37a06003 | 82f842ffc56817eb039f1c4f1e2c090e6941c6af | "2022-05-27T03:10:32Z" | python | "2022-07-28T15:19:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,949 | ["airflow/www/static/js/dags.js"] | Only autorefresh active dags on home page | ### Description
In https://github.com/apache/airflow/pull/22900, we added auto-refresh for the home page. Right now, we pass all dag_ids to the `last_dagruns`, `dag_stats` and `task_stats` endpoints. During auto-refresh, we should only request info for dags that are not paused.
On page load, we still want to check all three endpoints for all dags in view. But for subsequent auto-refresh requests we should only check active dags.
See [here](https://github.com/apache/airflow/blob/main/airflow/www/static/js/dags.js#L429) for where the homepage auto-refresh lives.
### Use case/motivation
Smaller requests should make a faster home page.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23949 | https://github.com/apache/airflow/pull/24770 | e9d19a60a017224165e835949f623f106b97e1cb | 2a1472a6bef57fc57cfe4577bcbed5ba00521409 | "2022-05-26T21:08:49Z" | python | "2022-07-13T14:55:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,945 | ["airflow/www/static/js/grid/dagRuns/Bar.jsx", "airflow/www/static/js/grid/dagRuns/Tooltip.jsx", "airflow/www/static/js/grid/details/Header.jsx", "airflow/www/static/js/grid/details/content/dagRun/index.jsx"] | Add backfill icon to grid view dag runs | ### Description
In the grid view, we use a play icon to indicate manually triggered dag runs. We should do the same for a backfilled dag run.
Possible icons can be found [here](https://react-icons.github.io/react-icons).
Note: We use the manual run icon in both the [dag run bar component](https://github.com/apache/airflow/blob/main/airflow/www/static/js/grid/dagRuns/Bar.jsx) and in the [details panel header](https://github.com/apache/airflow/blob/main/airflow/www/static/js/grid/details/Header.jsx)
### Use case/motivation
Help users quickly differentiate between manual, scheduled and backfill runs.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23945 | https://github.com/apache/airflow/pull/23970 | 6962d8a3556999af2eec459c944417ddd6d2cfb3 | d470a8ef8df152eceee88b95365ff923db7cb2d7 | "2022-05-26T16:40:00Z" | python | "2022-05-27T20:25:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,935 | ["airflow/providers/ftp/hooks/ftp.py", "tests/providers/ftp/hooks/test_ftp.py"] | No option to set blocksize when retrieving a file in ftphook | ### Apache Airflow version
2.0.0
### What happened
Using FTPHook, I'm trying to download a file in chunks, but the default blocksize is 8192 and cannot be changed.
The `retrieve_file` code calls `conn.retrbinary(f'RETR {remote_file_name}', callback)`, so no blocksize is passed, even though the underlying function is declared as:
`def retrbinary(self, cmd, callback, blocksize=8192, rest=None):`
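Until the hook itself exposes a block size, one possible workaround is to go through the hook's underlying `ftplib` connection, whose `retrbinary` does accept `blocksize`. This is only a sketch; the connection id and chunk size are illustrative:
```python
from airflow.providers.ftp.hooks.ftp import FTPHook


def download_in_chunks(remote_path: str, local_path: str, blocksize: int = 1024 * 1024) -> None:
    hook = FTPHook(ftp_conn_id="ftp_default")
    conn = hook.get_conn()  # plain ftplib.FTP object
    with open(local_path, "wb") as output_handle:
        # ftplib's retrbinary accepts blocksize, unlike FTPHook.retrieve_file,
        # which currently relies on the 8192-byte default.
        conn.retrbinary(f"RETR {remote_path}", output_handle.write, blocksize=blocksize)
```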
### What you think should happen instead
allow passing a blocksize
### How to reproduce
_No response_
### Operating System
gcp
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23935 | https://github.com/apache/airflow/pull/24860 | 2f29bfefb59b0014ae9e5f641d3f6f46c4341518 | 64412ee867fe0918cc3b616b3fb0b72dcd88125c | "2022-05-26T12:06:34Z" | python | "2022-07-07T20:54:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,917 | ["setup.py"] | Wrong dependecy version of requests for Databricks provider | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==2.7.0
### Apache Airflow version
2.2.2
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other
### Deployment details
_No response_
### What happened
An import statement from the `requests` library was added to `airflow/providers/databricks/hooks/databricks_base.py` by [this MR](https://github.com/apache/airflow/pull/22422/files#diff-bfbc446378c91e1c398eb07d02dc333703bb1dda4cbe078193b16199f11db8a5R34).
However, the minimal version of `requests` was not bumped to `>=2.27.0`; as we can see, [it's still >=2.26.0](https://github.com/apache/airflow/blob/main/setup.py#L269), while the JSONDecodeError class was only added to `requests` starting from [2.27.0 (see release notes)](https://github.com/psf/requests/releases/tag/v2.27.0).
This inconsistency between the libraries leads to the following error:
```
File "/usr/local/lib/python3.7/site-packages/airflow/providers/databricks/hooks/databricks.py", line 33, in <module>
from airflow.providers.databricks.hooks.databricks_base import BaseDatabricksHook
File "/usr/local/lib/python3.7/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 34, in <module>
from requests.exceptions import JSONDecodeError
ImportError: cannot import name 'JSONDecodeError' from 'requests.exceptions'
```
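Per this row's `updated_files` (`setup.py`), the eventual fix adjusted the version pin; purely as a hedged sketch, a version-tolerant import (an assumption, not the provider's actual code) could look like:
```python
try:
    from requests.exceptions import JSONDecodeError
except ImportError:  # requests < 2.27.0 does not expose this class
    # Older requests surfaces the underlying json decode error instead.
    from json import JSONDecodeError  # type: ignore[assignment]
```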
### What you think should happen instead
it breaks DAGs
### How to reproduce
Use any Databricks operator with `requests==2.26.0`, which is defined as the minimal compatible version.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23917 | https://github.com/apache/airflow/pull/23927 | b170dc7d66a628e405a824bfbc9fb48a3b3edd63 | 80c3fcd097c02511463b2c4f586757af0e5f41b2 | "2022-05-25T18:10:21Z" | python | "2022-05-27T01:57:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,871 | ["airflow/dag_processing/manager.py", "airflow/utils/process_utils.py", "tests/utils/test_process_utils.py"] | `dag-processor` failed to start in docker | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The standalone DagProcessor, when run in the Apache Airflow production Docker image, fails with the following error:
```
airflow-dag-processor_1 |
airflow-dag-processor_1 | Traceback (most recent call last):
airflow-dag-processor_1 | File "/home/airflow/.local/bin/airflow", line 8, in <module>
airflow-dag-processor_1 | sys.exit(main())
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 38, in main
airflow-dag-processor_1 | args.func(args)
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 51, in command
airflow-dag-processor_1 | return func(*args, **kwargs)
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 99, in wrapper
airflow-dag-processor_1 | return f(*args, **kwargs)
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/dag_processor_command.py", line 80, in dag_processor
airflow-dag-processor_1 | manager.start()
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 475, in start
airflow-dag-processor_1 | os.setpgid(0, 0)
airflow-dag-processor_1 | PermissionError: [Errno 1] Operation not permitted
```
This error does not happen when `airflow dag-processor` is run directly on the host system.
It seems this issue happens because, when we run in the Apache Airflow production Docker image, the `airflow` process is the session leader, and according to `man setpgid`:
```
ERRORS
setpgid() will fail and the process group will not be altered if:
[EPERM] The process indicated by the pid argument is a session leader.
```
### What you think should happen instead
`dag-processor` should start in docker without error
### How to reproduce
1. Use simple docker-compose file which use official Airflow 2.3.0 image
```yaml
# docker-compose-dag-processor.yaml
version: '3'
volumes:
postgres-db-volume:
aiflow-logs-volume:
x-airflow-common:
&airflow-common
image: apache/airflow:2.3.0-python3.7
environment:
AIRFLOW__SCHEDULER__STANDALONE_DAG_PROCESSOR: 'True'
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:insecurepassword@postgres/airflow
volumes:
- aiflow-logs-volume:/opt/airflow/logs
- ${PWD}/dags:/opt/airflow/dags
user: "${AIRFLOW_UID:-50000}:0"
extra_hosts:
- "host.docker.internal:host-gateway"
services:
postgres:
image: postgres:13
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: insecurepassword
POSTGRES_DB: airflow
ports:
- 55432:5432
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: unless-stopped
airflow-upgrade-db:
<<: *airflow-common
command: ["db", "upgrade"]
depends_on:
postgres:
condition: service_healthy
airflow-dag-processor:
<<: *airflow-common
command: dag-processor
restart: unless-stopped
depends_on:
airflow-upgrade-db:
condition: service_completed_successfully
```
2. `docker-compose -f docker-compose-dag-processor.yaml up`
### Operating System
macOS Monterey 12.3.1
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Docker: **20.10.12**
docker-compose: **1.29.2**
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23871 | https://github.com/apache/airflow/pull/23872 | 8ccff9244a6d1a936d8732721373b967e95ec404 | 9216489d9a25f56f7a55d032b0ebfc1bf0bf4a83 | "2022-05-23T15:15:46Z" | python | "2022-05-27T14:29:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,868 | ["dev/breeze/src/airflow_breeze/commands/testing_commands.py"] | Don’t show traceback on 'breeze tests' subprocess returning non-zero | ### Body
Currently, if any tests fail when `breeze tests` is run, Breeze 2 would emit a traceback pointing to the `docker-compose` subprocess call. This is due to Docker propagating the exit call of the underlying `pytest` subprocess. While it is technically correct to emit an exception, the traceback is useless in this context, and only clutters output. It may be a good idea to add a special case for this and suppress the exception.
A similar situation can be observed with `breeze shell` if you run `exit 1` in the shell.
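A hedged sketch of the kind of special case that could suppress the traceback while still propagating the container's exit code (the names and structure here are illustrative, not Breeze's actual code):
```python
import subprocess
import sys
from typing import List


def run_docker_compose(cmd: List[str]) -> None:
    completed = subprocess.run(cmd, check=False)
    if completed.returncode != 0:
        # Exit with the same status as the underlying pytest run, but without
        # raising CalledProcessError and cluttering the output with a traceback.
        sys.exit(completed.returncode)
```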
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23868 | https://github.com/apache/airflow/pull/23897 | 1bf6dded9a5dcc22238b8943028b08741e36dfe5 | d788f4b90128533b1ac3a0622a8beb695b52e2c4 | "2022-05-23T14:12:38Z" | python | "2022-05-24T20:56:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,867 | ["dev/breeze/src/airflow_breeze/commands/ci_image_commands.py", "dev/breeze/src/airflow_breeze/utils/md5_build_check.py", "images/breeze/output-commands-hash.txt"] | Don’t prompt for 'breeze build-image' | ### Body
Currently, running the (new) `breeze build-image` brings up two prompts if any of the meta files are outdated:
```
$ breeze build-image
Good version of Docker: 20.10.14.
Good version of docker-compose: 2.5.1
The following important files are modified in ./airflow since last time image was built:
* setup.py
* Dockerfile.ci
* scripts/docker/common.sh
* scripts/docker/install_additional_dependencies.sh
* scripts/docker/install_airflow.sh
* scripts/docker/install_airflow_dependencies_from_branch_tip.sh
* scripts/docker/install_from_docker_context_files.sh
Likely CI image needs rebuild
Do you want to build the image (this works best when you have good connection and can take usually from 20 seconds to few minutes depending how old your image is)?
Press y/N/q. Auto-select n in 10 seconds (add `--answer n` to avoid delay next time): y
This might take a lot of time (more than 10 minutes) even if you havea good network connection. We think you should attempt to rebase first.
But if you really, really want - you can attempt it. Are you really sure?
Press y/N/q. Auto-select n in 10 seconds (add `--answer n` to avoid delay next time): y
```
While the prompts are shown in good nature, they don’t really make sense for `build-image` since the user already gave an explicit answer by running `build-image`. They should be suppressed.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23867 | https://github.com/apache/airflow/pull/23898 | cac7ab5c4f4239b04d7800712ee841f0e6f6ab86 | 90940b529340ef7f9b8c51d5c7d9b6a848617dea | "2022-05-23T13:44:37Z" | python | "2022-05-24T16:27:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,838 | ["airflow/models/mappedoperator.py", "tests/serialization/test_dag_serialization.py"] | AIP-45 breaks follow-on mini scheduler for mapped tasks | I've just noticed that this causes a problem for the follow-on mini scheduler for mapped tasks. I guess that code path wasn't sufficiently unit tested.
DAG
```python
import csv
import io
import os
import json
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
from airflow.models.xcom_arg import XComArg
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.providers.amazon.aws.operators.s3 import S3ListOperator
with DAG(dag_id='mapped_s3', start_date=datetime(2022, 5, 19)) as dag:
files = S3ListOperator(
task_id="get_inputs",
bucket="airflow-summit-2022",
prefix="data_provider_a/{{ data_interval_end | ds }}/",
delimiter='/',
do_xcom_push=True,
)
@task
def csv_to_json(aws_conn_id, input_bucket, key, output_bucket):
hook = S3Hook(aws_conn_id=aws_conn_id)
csv_data = hook.read_key(key, input_bucket)
reader = csv.DictReader(io.StringIO(csv_data))
output = io.BytesIO()
for row in reader:
output.write(json.dumps(row, indent=None).encode('utf-8'))
output.write(b"\n")
output.seek(0, os.SEEK_SET)
hook.load_file_obj(output, key=key.replace(".csv", ".json"), bucket_name=output_bucket)
csv_to_json.partial(
aws_conn_id="aws_default", input_bucket=files.bucket, output_bucket="airflow-summit-2022-processed"
).expand(key=XComArg(files))
```
Error:
```
File "/home/ash/code/airflow/airflow/airflow/jobs/local_task_job.py", line 253, in _run_mini_scheduler_on_child_tasks
info = dag_run.task_instance_scheduling_decisions(session)
File "/home/ash/code/airflow/airflow/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/ash/code/airflow/airflow/airflow/models/dagrun.py", line 658, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
File "/home/ash/code/airflow/airflow/airflow/models/dagrun.py", line 714, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/home/ash/code/airflow/airflow/airflow/models/mappedoperator.py", line 609, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/home/ash/code/airflow/airflow/airflow/models/mappedoperator.py", line 591, in _resolve_map_lengths
expansion_kwargs = self._get_expansion_kwargs()
File "/home/ash/code/airflow/airflow/airflow/models/mappedoperator.py", line 526, in _get_expansion_kwargs
return getattr(self, self._expansion_kwargs_attr)
AttributeError: 'MappedOperator' object has no attribute 'mapped_op_kwargs'
```
_Originally posted by @ashb in https://github.com/apache/airflow/issues/21877#issuecomment-1133409500_ | https://github.com/apache/airflow/issues/23838 | https://github.com/apache/airflow/pull/24772 | 1abdf3fd1e048f53e061cc9ad59177be7b5245ad | 6fd06fa8c274b39e4ed716f8d347229e017ba8e5 | "2022-05-20T21:41:26Z" | python | "2022-07-05T09:36:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,833 | ["airflow/decorators/base.py", "airflow/models/mappedoperator.py", "airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | Dynamic Task Mapping not working with op_kwargs in PythonOperator | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The following DAG was written and expected to generate 3 tasks (one for each string in the list)
**dag_code**
```python
import logging
from airflow.decorators import dag, task
from airflow.operators.python import PythonOperator
from airflow.utils.dates import datetime
def log_strings_operator(string, *args, **kwargs):
logging.info("we've made it into the method")
logging.info(f"operator log - {string}")
@dag(
dag_id='dynamic_dag_test',
schedule_interval=None,
start_date=datetime(2021, 1, 1),
catchup=False,
tags=['example', 'dynamic_tasks']
)
def tutorial_taskflow_api_etl():
op2 = (PythonOperator
.partial(task_id="logging_with_operator_task",
python_callable=log_strings_operator)
.expand(op_kwargs=[{"string": "a"}, {"string": "b"}, {"string": "c"}]))
return op2
tutorial_etl_dag = tutorial_taskflow_api_etl()
```
**error message**
```python
Broken DAG: [/usr/local/airflow/dags/dynamic_dag_test.py] Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 343, in _serialize
return SerializedBaseOperator.serialize_mapped_operator(var)
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 608, in serialize_mapped_operator
assert op_kwargs[Encoding.TYPE] == DAT.DICT
TypeError: list indices must be integers or slices, not Encoding
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1105, in to_dict
json_dict = {"__version": cls.SERIALIZER_VERSION, "dag": cls.serialize_dag(var)}
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1013, in serialize_dag
raise SerializationError(f'Failed to serialize DAG {dag.dag_id!r}: {e}')
airflow.exceptions.SerializationError: Failed to serialize DAG 'dynamic_dag_test': list indices must be integers or slices, not Encoding
```
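A possible workaround sketch for 2.3.0, assuming the goal is simply one mapped task per string: expand over `op_args` instead of `op_kwargs` (this would replace `op2` in the DAG above; treat it as a sketch rather than a confirmed fix):
```python
op2 = (PythonOperator
       .partial(task_id="logging_with_operator_task",
                python_callable=log_strings_operator)
       .expand(op_args=[["a"], ["b"], ["c"]]))
```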
### What you think should happen instead
The DAG should contain 1 task, `logging_with_operator_task`, with 3 mapped indices.
### How to reproduce
Copy/paste the DAG code into a DAG file and run it on Airflow 2.3.0. The Airflow UI will flag the error.
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23833 | https://github.com/apache/airflow/pull/23860 | 281e54b442f0a02bda53ae847aae9f371306f246 | 5877f45d65d5aa864941efebd2040661b6f89cb1 | "2022-05-20T17:19:23Z" | python | "2022-06-22T07:48:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,826 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | BigQueryInsertJobOperator is broken on any type of job except `query` | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==7.0.0
### Apache Airflow version
2.2.5
### Operating System
MacOS 12.2.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
We are using `BigQueryInsertJobOperator` to load data from parquet files in Google Cloud Storage with this kind of configuration:
```
BigQueryInsertJobOperator(
    task_id="load_to_bq",
    configuration={
        "load": {
            "writeDisposition": "WRITE_APPEND",
            "createDisposition": "CREATE_IF_NEEDED",
            "destinationTable": destination_table,
            "sourceUris": source_files,
            "sourceFormat": "PARQUET",
        }
    },
)
```
After upgrading to `apache-airflow-providers-google==7.0.0` all load jobs are now broken. I believe the problem lies in this line: https://github.com/apache/airflow/blob/5bfacf81c63668ea63e7cb48f4a708a67d0ac0a2/airflow/providers/google/cloud/operators/bigquery.py#L2170
It tries to get the destination table from the `query` job config, which makes it impossible to use any other type of job.
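An illustrative sketch (not the provider's actual fix) of how that lookup could be made job-type aware, since query, load and copy jobs keep their destination table under different keys of the job's API representation:
```python
from typing import Optional


def extract_destination_table(job_api_repr: dict) -> Optional[dict]:
    configuration = job_api_repr.get("configuration", {})
    for job_type in ("query", "load", "copy"):
        destination = configuration.get(job_type, {}).get("destinationTable")
        if destination:
            return destination
    return None
```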
### What you think should happen instead
_No response_
### How to reproduce
Use BigQueryInsertJobOperator to submit any type of job except `query`
### Anything else
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 2170, in execute
table = job.to_api_repr()["configuration"]["query"]["destinationTable"]
KeyError: 'query'
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23826 | https://github.com/apache/airflow/pull/24165 | 389e858d934a7813c7f15ab4e46df33c5720e415 | a597a76e8f893865e7380b072de612763639bfb9 | "2022-05-20T09:58:37Z" | python | "2022-06-03T17:52:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,824 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Race condition between Triggerer and Scheduler | ### Apache Airflow version
2.2.5
### What happened
Deferrable tasks that trigger instantly after being deferred might get their state set to `FAILED` by the scheduler.
The triggerer can fire the trigger and the scheduler can re-queue the task instance before the scheduler has had a chance to process the executor event for when the task instance was deferred.
### What you think should happen instead
This code block should not run in this instance:
https://github.com/apache/airflow/blob/5bfacf81c63668ea63e7cb48f4a708a67d0ac0a2/airflow/jobs/scheduler_job.py#L667-L692
### How to reproduce
Most importantly, have a trigger that fires instantly. I'm not sure if the executor type is important (I'm running `CeleryExecutor`). Also, having two schedulers might be important.
### Operating System
Arch Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23824 | https://github.com/apache/airflow/pull/23846 | 94f4f81efb8c424bee8336bf6b8720821e48898a | 66ffe39b0b3ae233aeb80e77eea1b2b867cc8c45 | "2022-05-20T09:22:34Z" | python | "2022-06-28T22:14:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,823 | ["airflow/providers_manager.py"] | ModuleNotFoundExceptions not matched as optional features | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The `providers_manager.py` logs an import warning with a stack trace (see example) for optional provider features, instead of an info message noting that the optional feature is disabled. Sample message:
```
[2022-05-19 21:46:53,065] {providers_manager.py:223} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.compute_ssh.ComputeEngineSSHHook' from 'apache-airflow-providers-google' package
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/providers_manager.py", line 257, in _sanity_check
imported_class = import_string(class_name)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/hooks/compute_ssh.py", line 23, in <module>
import paramiko
ModuleNotFoundError: No module named 'paramiko'
```
### What you think should happen instead
There is explicit code for catching `ModuleNotFoundError`s, so these import errors should be logged as info messages like:
```
[2022-05-20 08:18:54,680] {providers_manager.py:215} INFO - Optional provider feature disabled when importing 'airflow.providers.google.cloud.hooks.compute_ssh.ComputeEngineSSHHook' from 'apache-airflow-providers-google' package
```
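An illustrative sketch (not the actual `providers_manager` code) of the intended behaviour, where a missing optional dependency is reported at INFO level while any other import failure keeps the WARNING with traceback:
```python
import logging
from importlib import import_module

log = logging.getLogger(__name__)


def import_provider_class(class_path: str, package: str):
    module_path, _, class_name = class_path.rpartition(".")
    try:
        return getattr(import_module(module_path), class_name)
    except ModuleNotFoundError as e:
        log.info("Optional provider feature disabled when importing %r from %r package: %s",
                 class_path, package, e)
    except Exception:
        log.warning("Exception when importing %r from %r package", class_path, package, exc_info=True)
    return None
```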
### How to reproduce
Install the `google` provider but do not install the `ssh` submodule (or alternatively the `mysql` module). Various airflow components will produce the above warning logs.
### Operating System
Debian bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23823 | https://github.com/apache/airflow/pull/23825 | 5bfacf81c63668ea63e7cb48f4a708a67d0ac0a2 | 6f5749c0d04bd732b320fcbe7713f2611e3d3629 | "2022-05-20T08:44:08Z" | python | "2022-05-20T12:08:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,822 | ["airflow/providers/amazon/aws/example_dags/example_dms.py", "airflow/providers/amazon/aws/operators/rds.py", "docs/apache-airflow-providers-amazon/operators/rds.rst", "tests/providers/amazon/aws/operators/test_rds.py", "tests/system/providers/amazon/aws/rds/__init__.py", "tests/system/providers/amazon/aws/rds/example_rds_instance.py"] | Add an AWS operator for Create RDS Database | ### Description
@eladkal suggested we add the operator and then incorporate it into https://github.com/apache/airflow/pull/23681. I have a little bit of a backlog right now trying to get the System Tests up and running for AWS, but if someone wants to get to it before me, it should be a pretty easy first contribution.
The required API call is documented [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.create_db_instance) and I'm happy to help with any questions and/or help review it if someone wants to take a stab at it before I get the time.
### Use case/motivation
_No response_
### Related issues
Could be used to simplify https://github.com/apache/airflow/pull/23681
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23822 | https://github.com/apache/airflow/pull/24099 | c7feb31786c7744da91d319f499d9f6015d82454 | bf727525e1fd777e51cc8bc17285f6093277fdef | "2022-05-20T01:28:34Z" | python | "2022-06-28T19:32:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,796 | ["airflow/config_templates/airflow_local_settings.py", "airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/utils/log/colored_log.py", "airflow/utils/log/timezone_aware.py", "airflow/www/static/js/grid/details/content/taskInstance/Logs/utils.js", "airflow/www/static/js/ti_log.js", "newsfragments/24373.significant.rst"] | Webserver shows wrong datetime (timezone) in log | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Same as #19401: when I open a task's log in the web interface, it shifts the time forward by 8 hours (for Asia/Shanghai), but the time is already in Asia/Shanghai.
Here is the log in the web UI:
```
*** Reading local file: /opt/airflow/logs/forecast/cal/2022-05-18T09:50:00+00:00/1.log
[2022-05-19, 13:54:52] {taskinstance.py:1037} INFO ...
```
As you can see, the UTC time is 2022-05-18T09:50:00, and my timezone is Asia/Shanghai (which should shift it forward 8 hours), but it is shifted forward 16 hours!
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)(docker)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
1. build my docker image from apache/airflow:2.3.0 to change timezone
```Dockerfile
FROM apache/airflow:2.3.0
# bugfix of log UI in web, here I change ti_log.js file by following on #19401
COPY ./ti_log.js /home/airflow/.local/lib/python3.7/site-packages/airflow/www/static/js/ti_log.js
USER root
# container timezone changed to CST time
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& rm -rf /etc/timezone \
&& echo Asia/Shanghai >> /etc/timezone \
&& chown airflow /home/airflow/.local/lib/python3.7/site-packages/airflow/www/static/js/ti_log.js
USER airflow
```
2. use my image to run airflow by docker-compose
3. check task logs in web
Although I changed the file `airflow/www/static/js/ti_log.js`, it did not work! Then, checking the source from the web, I found another file, `airflow/www/static/dist/tiLog.e915520196109d459cf8.js`, and replaced "+00:00" with "+08:00" in that file. Finally it works!
```js
# origin tiLog.e915520196109d459cf8.js
replaceAll(c,(e=>`<time datetime="${e}+00:00" data-with-tz="true">${Object(a.f)(`${e}+00:00`)}</time>`))
```
```js
# what I changed
replaceAll(c,(e=>`<time datetime="${e}+08:00" data-with-tz="true">${Object(a.f)(`${e}+08:00`)}</time>`))
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23796 | https://github.com/apache/airflow/pull/24373 | 5a8209e5096528b6f562efebbe71b6b9c378aaed | 7de050ceeb381fb7959b65acd7008e85b430c46f | "2022-05-19T09:11:51Z" | python | "2022-06-24T13:17:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,792 | ["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"] | Dynamic task mapping creates too many mapped instances when task pushed non-default XCom | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Excess tasks are created when using dynamic task mapping with KubernetesPodOperator, but only in certain cases which I do not understand. I have a simple working example of this where the flow is:
- One task that returns a list XCom (list of lists, since I'm partial-ing to `KubernetesPodOperator`'s `arguments`) of length 3. This looks like `[["a"], ["b"], ["c"]]`
- A `partial` from this, which is expanded on the above's result. Each resulting task has an XCom of a single element list that looks like `["d"]`. We expect the `expand` to result in 3 tasks, which it does. So far so good. Why doesn't the issue occur at this stage? No clue.
- A `partial` from the above. We expect 3 tasks in this final stage, but get 9. 3 succeed and 6 fail consistently. This 3x rule scales to as many tasks as you define in step 2 (e.g. 2 tasks in step 2 -> 6 tasks in step 3, where 4 fail)
![image](https://user-images.githubusercontent.com/71299310/169179360-d1ddfe49-f20e-4f27-909f-4dd101386a5a.png)
If I recreate this using the TaskFlow API with `PythonOperator`s, I get the expected result of 1 task -> 3 tasks -> 3 tasks
![image](https://user-images.githubusercontent.com/71299310/169179409-d2f71f3e-6e8c-42e0-8120-1ecccff439c0.png)
Furthermore, if I attempt to look at the `Rendered Template` of the failed tasks in the `KubernetesPodOperator` implementation (first image), I consistently get `Error rendering template` and all the fields are `None`. The succeeded tasks look normal.
![image](https://user-images.githubusercontent.com/71299310/169181052-8d182722-197b-44a5-b145-3a983e259036.png)
Since the `Rendered Template` view fails to load, I can't confirm what is actually getting provided to these failing tasks' `argument` parameter. If there's a way I can query the meta database to see this, I'd be glad to if given instruction.
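For inspecting what was actually pushed, a hedged sketch of querying the XCom table through Airflow's own session helper (the run id below is a placeholder to fill in):
```python
from airflow.models import XCom
from airflow.utils.session import create_session

with create_session() as session:
    rows = (
        session.query(XCom)
        .filter(XCom.dag_id == "test-pod-xcoms", XCom.run_id == "<your run_id>")
        .all()
    )
    for row in rows:
        # Shows each mapped index's pushed keys, including pod_name/pod_namespace.
        print(row.task_id, row.map_index, row.key, XCom.deserialize_value(row))
```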
### What you think should happen instead
I think this has to do with how XComs are specially handled with the `KubernetesPodOperator`. If we look at the XComs tab of the upstream task (`some-task-2` in the above images), we see that the return value specifies `pod_name` and `pod_namespace` along with `return_value`.
![image](https://user-images.githubusercontent.com/71299310/169179724-984682d0-c2fc-4097-9527-fa3cbf3ad93f.png)
Whereas in the `t2` task of the TaskFlow version, it only contains `return_value`.
![image](https://user-images.githubusercontent.com/71299310/169179983-b22347de-eef0-4a9a-ae85-6b5e5d2bfa42.png)
I haven't dug through the code to verify, but I have a strong feeling these extra values `pod_name` and `pod_namespace` are being used to generate the `OperatorPartial`/`MappedOperator` as well when they shouldn't be.
### How to reproduce
Run this DAG in a k8s context:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
def make_operator(
**kwargs
):
return KubernetesPodOperator(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
def make_partial_operator(
**kwargs
):
return KubernetesPodOperator.partial(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
with DAG(dag_id='test-pod-xcoms',
schedule_interval=None,
start_date=datetime(2020, 1, 1),
max_active_tasks=20) as dag:
op1 = make_operator(
cmds=['python3', '-c' 'import json;f=open("/airflow/xcom/return.json", "w");f.write(json.dumps([["a"], ["b"], ["c"]]))'],
image='python:3.9-alpine',
name='airflow-private-image-pod-1',
task_id='some-task-1',
do_xcom_push=True
)
op2 = make_partial_operator(
cmds=['python3', '-c' 'import json;f=open("/airflow/xcom/return.json", "w");f.write(json.dumps(["d"]))'],
image='python:3.9-alpine',
name='airflow-private-image-pod-2',
task_id='some-task-2',
do_xcom_push=True
)
op3 = make_partial_operator(
cmds=['echo', 'helloworld'],
image='alpine:latest',
name='airflow-private-image-pod-3',
task_id='some-task-3',
)
op3.expand(arguments=XComArg(op2.expand(arguments=XComArg(op1))))
```
For the TaskFlow version of this that works, run this DAG (doesn't have to be k8s context):
```
from datetime import datetime
from airflow.decorators import task
from airflow.models import DAG, Variable
@task
def t1():
return [[1], [2], [3]]
@task
def t2(val):
return val
@task
def t3(val):
print(val)
with DAG(dag_id='test-mapping',
schedule_interval=None,
start_date=datetime(2020, 1, 1)) as dag:
t3.partial().expand(val=t2.partial().expand(val=t1()))
```
### Operating System
MacOS 11.6.5
### Versions of Apache Airflow Providers
Relevant:
```
apache-airflow-providers-cncf-kubernetes==4.0.1
```
Full:
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Docker (Docker Desktop)
- Server Version: 20.10.13
- API Version: 1.41
- Builder: 2
Kubernetes (Docker Desktop)
- Env: docker-desktop
- Context: docker-desktop
- Cluster Name: docker-desktop
- Namespace: default
- Container Runtime: docker
- Version: v1.22.5
Helm:
- version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23792 | https://github.com/apache/airflow/pull/24530 | df388a3d5364b748993e61b522d0b68ff8b8124a | a69095fea1722e153a95ef9da93b002b82a02426 | "2022-05-19T01:02:41Z" | python | "2022-07-27T08:36:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,786 | ["airflow/www/utils.py", "airflow/www/views.py"] | DAG Loading Slow with Dynamic Tasks - Including Test Results and Benchmarking | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The web UI is slow to load the default (grid) view for DAGs when there are mapped tasks with a high number of expansions.
I did some testing with DAGs that have a variable number of tasks, along with changing the webserver resources to see how this affects the load times.
Here is a graph showing that testing. Let me know if you have any other questions about this.
<img width="719" alt="image" src="https://user-images.githubusercontent.com/89415310/169158337-ffb257ae-21bc-4c19-aaec-b29873d9fe93.png">
My findings based on what I'm seeing here:
The jump from 5->10 AUs makes a difference but 10 to 20 does not make a difference. There are diminishing returns when bumping up the webserver resources which leads me to believe that this could be a factor of database performance after the webserver is scaled to a certain point.
If we look at the graph on a log scale, it's almost perfectly linear for the 10 and 20AU lines on the plot. This leads me to believe that the time that it takes to load is a direct function of the number of task expansions that we have for a mapped task.
### What you think should happen instead
The web UI should load in a reasonable amount of time. Anything less than 10 seconds would be acceptable relative to the performance we're getting now; ideally somewhere under 2-3 seconds would be best, if possible.
### How to reproduce
```
from datetime import datetime
from airflow.models import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.python import PythonOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email_on_failure': False,
'email_on_retry': False,
}
initial_scale = 7
max_scale = 12
scaling_factor = 2
for scale in range(initial_scale, max_scale + 1):
dag_id = f"dynamic_task_mapping_{scale}"
with DAG(
dag_id=dag_id,
default_args=default_args,
catchup=False,
schedule_interval=None,
start_date=datetime(1970, 1, 1),
render_template_as_native_obj=True,
) as dag:
start = EmptyOperator(task_id="start")
mapped = PythonOperator.partial(
task_id="mapped",
python_callable=lambda m: print(m),
).expand(
op_args=[[x] for x in list(range(2**scale))]
)
end = EmptyOperator(task_id="end")
start >> mapped >> end
globals()[dag_id] = dag
```
### Operating System
Debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23786 | https://github.com/apache/airflow/pull/23813 | 86cfd1244a641a8f17c9b33a34399d9be264f556 | 7ab5ea7853df9d99f6da3ab804ffe085378fbd8a | "2022-05-18T21:23:59Z" | python | "2022-05-20T04:18:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,783 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "kubernetes_tests/test_kubernetes_pod_operator_backcompat.py"] | Partial of a KubernetesPodOperator does not allow for limit_cpu and limit_memory in the resources argument | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When performing dynamic task mapping and providing Kubernetes limits to the `resources` argument, the DAG raises an import error:
```
Broken DAG: [/opt/airflow/dags/bulk_image_processing.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 287, in partial
partial_kwargs["resources"] = coerce_resources(partial_kwargs["resources"])
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 133, in coerce_resources
return Resources(**resources)
TypeError: __init__() got an unexpected keyword argument 'limit_cpu'
```
The offending code is:
```
KubernetesPodOperator.partial(
    get_logs=True,
    in_cluster=True,
    is_delete_operator_pod=True,
    namespace=settings.namespace,
    resources={'limit_cpu': settings.IMAGE_PROCESSING_OPERATOR_CPU, 'limit_memory': settings.IMAGE_PROCESSING_OPERATOR_MEMORY},
    service_account_name=settings.SERVICE_ACCOUNT_NAME,
    startup_timeout_seconds=600,
    **kwargs,
)
```
But you can see this in any DAG utilizing a `KubernetesPodOperator.partial` where the `partial` contains the `resources` argument.
### What you think should happen instead
The `resources` argument should be taken at face value and applied to the `OperatorPartial` and subsequently the `MappedOperator`.
### How to reproduce
Try to import this DAG using Airflow 2.3.0:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
def make_operator(
**kwargs
):
return KubernetesPodOperator(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
def make_partial_operator(
**kwargs
):
return KubernetesPodOperator.partial(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
with DAG(dag_id='bulk_image_processing',
schedule_interval=None,
start_date=datetime(2020, 1, 1),
max_active_tasks=20) as dag:
op1 = make_operator(
arguments=['--bucket-name', f'{{{{ dag_run.conf.get("bucket", "some-fake-default") }}}}'],
cmds=['python3', 'some_entrypoint'],
image='some-image',
name='airflow-private-image-pod-1',
task_id='some-task',
do_xcom_push=True
)
op2 = make_partial_operator(
image='another-image',
name=f'airflow-private-image-pod-2',
resources={'limit_cpu': '2000m', 'limit_memory': '16Gi'},
task_id='another-task',
cmds=[
'some',
'set',
'of',
'cmds'
]
).expand(arguments=XComArg(op1))
```
### Operating System
MacOS 11.6.5
### Versions of Apache Airflow Providers
Relevant:
```
apache-airflow-providers-cncf-kubernetes==4.0.1
```
Full:
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Docker (Docker Desktop)
- Server Version: 20.10.13
- API Version: 1.41
- Builder: 2
Kubernetes (Docker Desktop)
- Env: docker-desktop
- Context: docker-desktop
- Cluster Name: docker-desktop
- Namespace: default
- Container Runtime: docker
- Version: v1.22.5
Helm:
- version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}
### Anything else
You can get around this by creating the `partial` first without calling `expand` on it, setting the resources via the `kwargs` parameter, then calling `expand`. Example:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
def make_operator(
**kwargs
):
return KubernetesPodOperator(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
def make_partial_operator(
**kwargs
):
return KubernetesPodOperator.partial(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
with DAG(dag_id='bulk_image_processing',
schedule_interval=None,
start_date=datetime(2020, 1, 1),
max_active_tasks=20) as dag:
op1 = make_operator(
arguments=['--bucket-name', f'{{{{ dag_run.conf.get("bucket", "some-fake-default") }}}}'],
cmds=['python3', 'some_entrypoint'],
image='some-image',
name='airflow-private-image-pod-1',
task_id='some-task',
do_xcom_push=True
)
op2 = make_partial_operator(
image='another-image',
name=f'airflow-private-image-pod-2',
task_id='another-task',
cmds=[
'some',
'set',
'of',
'cmds'
]
)
op2.kwargs['resources'] = {'limit_cpu': '2000m', 'limit_memory': '16Gi'}
op2.expand(arguments=XComArg(op1))
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23783 | https://github.com/apache/airflow/pull/24673 | 40f08900f2d1fb0d316b40dde583535a076f616b | 45f4290712f5f779e57034f81dbaab5d77d5de85 | "2022-05-18T18:46:44Z" | python | "2022-06-28T06:45:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,772 | ["airflow/www/utils.py", "airflow/www/views.py"] | New grid view in Airflow 2.3.0 has very slow performance on large DAGs relative to tree view in 2.2.5 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I upgraded a local dev deployment of Airflow from 2.2.5 to 2.3.0, then loaded the new `/dags/<dag_id>/grid` page for a few dag ids.
On a big DAG, I’m seeing 30+ second latency on the `/grid` API, followed by a 10+ second delay each time I click a green rectangle. For a smaller DAG I tried, the page was pretty snappy.
I went back to 2.2.5 and loaded the tree view for comparison, and saw that the `/tree/` endpoint on the large DAG had 9 seconds of latency, and clicking a green rectangle had instant responsiveness.
This is slow enough that it would be a blocker for my team to upgrade.
### What you think should happen instead
The grid view should be equally performant to the tree view it replaces
### How to reproduce
Generate a large DAG. Mine looks like the following:
- 900 tasks
- 150 task groups
- 25 historical runs
Compare against a small DAG, in my case:
- 200 tasks
- 36 task groups
- 25 historical runs
The large DAG is unusable, the small DAG is usable.
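For illustration, a minimal sketch (not the original DAG) that generates a comparably sized DAG — roughly 150 task groups with 6 tasks each, around 900 tasks total — which can be used to compare `/grid` (2.3.0) against `/tree` (2.2.5) latency:
```python
import pendulum

from airflow.models import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup

with DAG(
    dag_id="big_grid_view_test",
    start_date=pendulum.datetime(2022, 4, 15, tz="UTC"),
    schedule_interval="@daily",
    catchup=True,  # backfills one run per day, giving ~25 historical runs as of the report date
) as dag:
    for g in range(150):
        with TaskGroup(group_id=f"group_{g}"):
            tasks = [DummyOperator(task_id=f"task_{i}") for i in range(6)]
            for upstream, downstream in zip(tasks, tasks[1:]):
                upstream >> downstream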
### Operating System
Ubuntu 20.04.3 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Docker-compose deployment on an EC2 instance running ubuntu.
The Airflow webserver is a nearly stock image based on `apache/airflow:2.3.0-python3.9`
### Anything else
Screenshot of load time:
<img width="1272" alt="image" src="https://user-images.githubusercontent.com/643593/168957215-74eefcb0-578e-46c9-92b8-74c4a6a20769.png">
GIF of click latency:
![2022-05-17 21 26 26](https://user-images.githubusercontent.com/643593/168957242-a2a95eec-c565-4a75-8725-bdae0bdd645f.gif)
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23772 | https://github.com/apache/airflow/pull/23947 | 5ab58d057abb6b1f28eb4e3fb5cec7dc9850f0b0 | 1cf483fa0c45e0110d99e37b4e45c72c6084aa97 | "2022-05-18T04:37:28Z" | python | "2022-05-26T19:53:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,733 | ["airflow/www/templates/airflow/dag.html"] | Task Instance pop-up menu - some buttons not always clickable | ### Apache Airflow version
2.3.0 (latest released)
### What happened
See the recorded screencap: in the task instance pop-up menu, sometimes the top menu options aren't clickable until you move the mouse around a bit and find an area where it will allow you to click.
This only seems to affect the `Instance Details`, `Rendered`, `Log`, and `XCom` options - but not `List Instances, all runs` or `Filter Upstream`
https://user-images.githubusercontent.com/15913202/168657933-532f58c6-7f33-4693-80cf-26436ff78ceb.mp4
### What you think should happen instead
The entire 'bubble' for the options such as 'XCom' should always be clickable, without having to find a 'sweet spot'
### How to reproduce
I am using Astro Runtime 5.0.0 in a localhost environment
### Operating System
macOS 11.5.2
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-databricks==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
I experience this in an Astro deployment as well (not just localhost) using the same runtime 5.0.0 image
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23733 | https://github.com/apache/airflow/pull/23736 | 71e4deb1b093b7ad9320eb5eb34eca8ea440a238 | 239a9dce5b97d45620862b42fd9018fdc9d6d505 | "2022-05-16T18:28:42Z" | python | "2022-05-17T02:58:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,727 | ["airflow/exceptions.py", "airflow/executors/kubernetes_executor.py", "airflow/kubernetes/pod_generator.py", "airflow/models/taskinstance.py", "tests/executors/test_kubernetes_executor.py", "tests/kubernetes/test_pod_generator.py", "tests/models/test_taskinstance.py"] | Airflow 2.3 scheduler error: 'V1Container' object has no attribute '_startup_probe' | ### Apache Airflow version
2.3.0 (latest released)
### What happened
After migrating from Airflow 2.2.4 to 2.3.0, the scheduler fell into a crash loop throwing:
```
--- Logging error ---
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 826, in _run_scheduler_loop
self.executor.heartbeat()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/base_executor.py", line 171, in heartbeat
self.sync()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 613, in sync
self.kube_scheduler.run_next(task)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 300, in run_next
self.log.info('Kubernetes job is %s', str(next_job).replace("\n", " "))
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 214, in __repr__
return self.to_str()
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 210, in to_str
return pprint.pformat(self.to_dict())
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 196, in to_dict
result[attr] = value.to_dict()
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 1070, in to_dict
result[attr] = list(map(
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 1071, in <lambda>
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_container.py", line 672, in to_dict
value = getattr(self, attr)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_container.py", line 464, in startup_probe
return self._startup_probe
AttributeError: 'V1Container' object has no attribute '_startup_probe'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/logging/__init__.py", line 1083, in emit
msg = self.format(record)
File "/usr/local/lib/python3.9/logging/__init__.py", line 927, in format
return fmt.format(record)
File "/usr/local/lib/python3.9/logging/__init__.py", line 663, in format
record.message = record.getMessage()
File "/usr/local/lib/python3.9/logging/__init__.py", line 367, in getMessage
msg = msg % self.args
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 214, in __repr__
return self.to_str()
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 210, in to_str
return pprint.pformat(self.to_dict())
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 196, in to_dict
result[attr] = value.to_dict()
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 1070, in to_dict
result[attr] = list(map(
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 1071, in <lambda>
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_container.py", line 672, in to_dict
value = getattr(self, attr)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_container.py", line 464, in startup_probe
return self._startup_probe
AttributeError: 'V1Container' object has no attribute '_startup_probe'
Call stack:
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 757, in _execute
self.executor.end()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 809, in end
self._flush_task_queue()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 767, in _flush_task_queue
self.log.warning('Executor shutting down, will NOT run task=%s', task)
Unable to print the message and arguments - possible formatting error.
Use the traceback above to help find the error.
```
The kubernetes Python library version was exactly as specified in the constraints file: https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt
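The failing state can be mimicked locally with the sketch below; the mechanism is only a guess (unconfirmed) — an object deserialized from a client version that predates `startup_probe` would lack the private attribute and fail exactly like the traceback above:
```python
from kubernetes.client import models as k8s

container = k8s.V1Container(name="base")
container.__dict__.pop("_startup_probe", None)  # simulate the missing private attribute

pod = k8s.V1Pod(spec=k8s.V1PodSpec(containers=[container]))
pod.to_dict()  # AttributeError: 'V1Container' object has no attribute '_startup_probe'
```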
### What you think should happen instead
Scheduler should work
### How to reproduce
Not 100% sure but:
1. Run Airflow 2.2.4 using official Helm Chart
2. Run some dags to have some records in DB
3. Migrate to 2.3.0 (replace 2.2.4 image with 2.3.0 one)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
irrelevant
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
KubernetesExecutor
PostgreSQL (RDS) as Airflow DB
Python 3.9
Docker images build from `apache/airflow:2.3.0-python3.9` (some additional libraries installed)
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23727 | https://github.com/apache/airflow/pull/24117 | c8fa9e96e29f9f8b4ff9c7db416097fb70a87c2d | 0c41f437674f135fe7232a368bf9c198b0ecd2f0 | "2022-05-16T15:09:06Z" | python | "2022-06-15T04:30:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,722 | ["airflow/providers/google/cloud/operators/cloud_sql.py", "tests/providers/google/cloud/operators/test_cloud_sql.py"] | Add fields to CLOUD_SQL_EXPORT_VALIDATION | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==5.0.0
### Apache Airflow version
2.1.2
### Operating System
GCP Container
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### What happened
I got a validation warning.
Same as #23613.
### What you think should happen instead
The following fields are not implemented in CLOUD_SQL_EXPORT_VALIDATION and should be added to it:
- sqlExportOptions
- mysqlExportOptions
- masterData
- csvExportOptions
- escapeCharacter
- quoteCharacter
- fieldsTerminatedBy
- linesTerminatedBy
These are all the fields that have not been added.
https://cloud.google.com/sql/docs/mysql/admin-api/rest/v1beta4/operations#exportcontext
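For reference, a hedged example of an export body that exercises the CSV options above (field names follow the linked API reference; the bucket, instance, database, and query values are placeholders):
```python
from airflow.providers.google.cloud.operators.cloud_sql import CloudSQLExportInstanceOperator

export_body = {
    "exportContext": {
        "fileType": "CSV",
        "uri": "gs://my-bucket/export.csv",
        "databases": ["mydb"],
        "csvExportOptions": {
            "selectQuery": "SELECT * FROM my_table",
            "escapeCharacter": "5c",     # backslash, hex-encoded per the API docs
            "quoteCharacter": "22",      # double quote
            "fieldsTerminatedBy": "2c",  # comma
            "linesTerminatedBy": "0a",   # newline
        },
    }
}

export_task = CloudSQLExportInstanceOperator(
    task_id="export_task",
    instance="my-instance",
    body=export_body,
)
```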
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23722 | https://github.com/apache/airflow/pull/23724 | 9e25bc211f6f7bba1aff133d21fe3865dabda53d | 3bf9a1df38b1ccfaf965a207d047b30452df1ba5 | "2022-05-16T11:05:33Z" | python | "2022-05-16T19:16:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,705 | ["chart/templates/redis/redis-statefulset.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_annotations.py"] | Adding PodAnnotations for Redis Statefulset | ### Description
Most Airflow services come with the ability to add pod annotations, apart from Redis. This feature request adds that capability to the Redis helm template as well.
### Use case/motivation
Specifically for us, annotations and labels are used to integrate Airflow with external services, such as Datadog, and without them the integration becomes a bit more complex.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23705 | https://github.com/apache/airflow/pull/23708 | ef79a0d1c4c0a041d7ebf83b93cbb25aa3778a70 | 2af19f16a4d94e749bbf6c7c4704e02aac35fc11 | "2022-05-14T07:46:23Z" | python | "2022-07-11T21:27:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,698 | ["airflow/utils/db_cleanup.py"] | airflow db clean - table missing exception not captured | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I am running on the Kubernetes Executor, so Celery-related tables were never created. I am using PostgreSQL as the database.
When I ran `airflow db clean`, it gave me the following exception:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedTable: relation "celery_taskmeta" does not exist
LINE 3: FROM celery_taskmeta
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/db_command.py", line 195, in cleanup_tables
run_cleanup(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/db_cleanup.py", line 311, in run_cleanup
_cleanup_table(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/db_cleanup.py", line 228, in _cleanup_table
_print_entities(query=query, print_rows=False)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/db_cleanup.py", line 137, in _print_entities
num_entities = query.count()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 3062, in count
return self._from_self(col).enable_eagerloads(False).scalar()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2803, in scalar
ret = self.one()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2780, in one
return self._iter().one()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2818, in _iter
result = self.session.execute(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1670, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1520, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 313, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1389, in _execute_clauseelement
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "celery_taskmeta" does not exist
LINE 3: FROM celery_taskmeta
^
[SQL: SELECT count(*) AS count_1
FROM (SELECT celery_taskmeta.id AS celery_taskmeta_id, celery_taskmeta.task_id AS celery_taskmeta_task_id, celery_taskmeta.status AS celery_taskmeta_status, celery_taskmeta.result AS celery_taskmeta_result, celery_taskmeta.date_done AS celery_taskmeta_date_done, celery_taskmeta.traceback AS celery_taskmeta_traceback
FROM celery_taskmeta
WHERE celery_taskmeta.date_done < %(date_done_1)s) AS anon_1]
[parameters: {'date_done_1': DateTime(2022, 1, 1, 0, 0, 0, tzinfo=Timezone('UTC'))}]
(Background on this error at: http://sqlalche.me/e/14/f405)
```
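Not the actual fix, just an illustrative sketch of the kind of guard that would let the cleanup skip tables that were never created in a given deployment (for example `celery_taskmeta` when Celery is not used):
```python
from sqlalchemy import inspect

def existing_tables_only(session, table_names):
    """Yield only the configured cleanup tables that actually exist in the metadata DB."""
    existing = set(inspect(session.get_bind()).get_table_names())
    for name in table_names:
        if name in existing:
            yield name
        else:
            print(f"Table {name} not found in the metadata DB, skipping cleanup")
```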
### What you think should happen instead
_No response_
### How to reproduce
1. Use an executor that do not require Celery
2. Use PostgreSQL as the database
3. Run `airflow db clean`
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23698 | https://github.com/apache/airflow/pull/23699 | 252ef66438ecda87a8aac4beed1f689f14ee8bec | a80b2fcaea984813995d4a2610987a1c9068fdb5 | "2022-05-13T11:35:22Z" | python | "2022-05-19T16:47:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,692 | ["docs/apache-airflow/extra-packages-ref.rst"] | Conflicts with airflow constraints for airflow 2.3.0 python 3.9 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When installing Airflow 2.3.0 using the pip command with the "all" extra, it fails on the google-ads dependency:
`pip install "apache-airflow[all]==2.3.0" -c "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
> The conflict is caused by:
apache-airflow[all] 2.3.0 depends on google-ads>=15.1.1; extra == "all"
The user requested (constraint) google-ads==14.0.0
I changed the version of google-ads to 15.1.1, but then it failed on the databricks-sql-connector dependency:
> The conflict is caused by:
apache-airflow[all] 2.3.0 depends on databricks-sql-connector<3.0.0 and >=2.0.0; extra == "all"
The user requested (constraint) databricks-sql-connector==1.0.2
and then on other dependencies...
### What you think should happen instead
_No response_
### How to reproduce
(venv) [root@localhost]# `python -V`
Python 3.9.7
(venv) [root@localhost]# `pip install "apache-airflow[all]==2.3.0" -c "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
### Operating System
CentOS 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23692 | https://github.com/apache/airflow/pull/23697 | 4afa8e3cecf1e4a2863715d14a45160034ad31a6 | 310002e44887847991b0864bbf9a921c7b11e930 | "2022-05-13T00:01:55Z" | python | "2022-05-13T11:33:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,689 | ["airflow/timetables/_cron.py", "airflow/timetables/interval.py", "airflow/timetables/trigger.py", "tests/timetables/test_interval_timetable.py"] | Data Interval wrong when manually triggering with a specific logical date | ### Apache Airflow version
2.2.5
### What happened
When I use the date picker on the “Trigger DAG w/ config” page to choose a specific logical date for a scheduled daily DAG, the Data Interval Start (circled in red) is, for some reason, 2 days before the logical date (circled in blue) instead of the same as the logical date, and the Data Interval End is one day before the logical date. So the interval is the correct length, but on the wrong days.
![Screen Shot 2022-05-11 at 5 14 10 PM](https://user-images.githubusercontent.com/45696489/168159891-b080273b-4b22-4ef8-a2ae-98327a503f9f.png)
I encountered this with a DAG with a daily schedule which typically runs at 09:30 UTC. I am testing this in a dev environment (with catchup off) and trying to trigger a run for 2022-05-09 09:30:00. I would expect the data interval to start at that same time and the data interval end to be 1 day after.
It has nothing to do with the previous run since that was way back on 2022-04-26
### What you think should happen instead
The data interval start date should be the same as the logical date (if it is a custom logical date)
### How to reproduce
I made a sample DAG as shown below:
```python
import pendulum
from airflow.models import DAG
from airflow.operators.python import PythonOperator
def sample(data_interval_start, data_interval_end):
return "data_interval_start: {}, data_interval_end: {}".format(str(data_interval_start), str(data_interval_end))
args = {
'start_date': pendulum.datetime(2022, 3, 10, 9, 30)
}
with DAG(
dag_id='sample_data_interval_issue',
default_args=args,
schedule_interval='30 9 * * *' # 09:30 UTC
) as sample_data_interval_issue:
task = PythonOperator(
task_id='sample',
python_callable=sample
)
```
I then enable it so a scheduled DAG run starts (`2022-05-11, 09:30:00 UTC`), and the `data_interval_start` is what I expect, `2022-05-11T09:30:00+00:00`.
However, I then went to the "Trigger DAG w/ config" page, chose `2022-05-09 09:30:00+00:00` in the date chooser, and triggered that. It shows the run datetime as `2022-05-09, 09:30:00 UTC`, but the `data_interval_start` is incorrectly set to `2022-05-08T09:30:00+00:00`, 2 days before the date I chose.
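For what it's worth, the interval the cron timetable infers for a manual run can be inspected directly with a small sketch like this (timetable classes/methods as of Airflow 2.2+); it should show the same values that end up as `data_interval_start` / `data_interval_end`:
```python
import pendulum
from airflow.timetables.interval import CronDataIntervalTimetable

timetable = CronDataIntervalTimetable("30 9 * * *", timezone=pendulum.timezone("UTC"))
interval = timetable.infer_manual_data_interval(
    run_after=pendulum.datetime(2022, 5, 9, 9, 30, tz="UTC")
)
print(interval.start, interval.end)
```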
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23689 | https://github.com/apache/airflow/pull/22658 | 026f1bb98cd05a26075bd4e4fb68f7c3860ce8db | d991d9800e883a2109b5523ae6354738e4ac5717 | "2022-05-12T22:29:26Z" | python | "2022-08-16T13:26:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,688 | ["airflow/decorators/base.py", "tests/decorators/test_python.py"] | _TaskDecorator has no __wrapped__ attribute in v2.3.0 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I run a unit test on a task which is defined using the task decorator. In the unit test, I unwrap the task decorator with the `__wrapped__` attribute, but this no longer works in v2.3.0. It works in v2.2.5.
### What you think should happen instead
I expect the wrapped function to be returned. This was what occurred in v2.2.5
When running pytest on the airflow v2.3.0 the following error is thrown:
```AttributeError: '_TaskDecorator' object has no attribute '__wrapped__'```
### How to reproduce
Here's a rough outline of the code.
A module `hello.py` contains the task definition:
```
from airflow.decorators import task
@task
def hello_airflow():
print('hello airflow')
```
and the test contains
```
from hello import hello_airflow
def test_hello_airflow():
hello_airflow.__wrapped__()
```
Then run pytest
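A possible workaround sketch for 2.3.0 — the `function` attribute is an assumption drawn from the attrs-based decorator implementation in `airflow/decorators/base.py`, not a documented API:
```python
from hello import hello_airflow

def test_hello_airflow():
    # fall back to the decorator's underlying callable when __wrapped__ is absent
    underlying = getattr(hello_airflow, "__wrapped__", None) or hello_airflow.function
    underlying()
```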
### Operating System
Rocky Linux 8.5 (Green Obsidian)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23688 | https://github.com/apache/airflow/pull/23830 | a8445657996f52b3ac5ce40a535d9c397c204d36 | a71e4b789006b8f36cd993731a9fb7d5792fccc2 | "2022-05-12T22:12:05Z" | python | "2022-05-23T01:24:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,679 | ["airflow/config_templates/config.yml.schema.json", "airflow/configuration.py", "tests/config_templates/deprecated.cfg", "tests/config_templates/deprecated_cmd.cfg", "tests/config_templates/deprecated_secret.cfg", "tests/config_templates/empty.cfg", "tests/core/test_configuration.py", "tests/utils/test_config.py"] | exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of | ### Apache Airflow version
main (development)
### What happened
I am trying to run the `airflow tasks run` command locally and force `StandardTaskRunner` to use `_start_by_exec` instead of `_start_by_fork`:
```
airflow tasks run example_bash_operator also_run_this scheduled__2022-05-08T00:00:00+00:00 --job-id 237 --local --subdir /Users/ping_zhang/airlab/repos/airflow/airflow/example_dags/example_bash_operator.py -f -i
```
However, it always errors out:
see https://user-images.githubusercontent.com/8662365/168164336-a75bfac8-cb59-43a9-b9f3-0c345c5da79f.png
```
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this Traceback (most recent call last):
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/miniforge3/envs/apache-***/bin/***", line 33, in <module>
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this sys.exit(load_entry_point('apache-***', 'console_scripts', '***')())
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/__main__.py", line 38, in main
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this args.func(args)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/cli_parser.py", line 51, in command
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return func(*args, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/utils/cli.py", line 99, in wrapper
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return f(*args, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 369, in task_run
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this ti, _ = _get_ti(task, args.execution_date_or_run_id, args.map_index, pool=args.pool)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/utils/session.py", line 71, in wrapper
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return func(*args, session=session, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 152, in _get_ti
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this dag_run, dr_created = _get_dag_run(
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 112, in _get_dag_run
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this raise DagRunNotFound(
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this ***.exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of 'scheduled__2022-05-08T00:00:00+00:00' not found
[2022-05-12 12:08:33,014] {local_task_job.py:163} INFO - Task exited with return code 1
[2022-05-12 12:08:33,048] {local_task_job.py:265} INFO - 0 downstream tasks scheduled from follow-on schedule check
[2022-05-12 12:11:30,742] {taskinstance.py:1120} INFO - Dependencies not met for <TaskInstance: example_bash_operator.also_run_this scheduled__2022-05-08T00:00:00+00:00 [running]>, dependency 'Task Instance Not Running' FAILED: Task is in the running state
[2022-05-12 12:11:30,743] {local_task_job.py:102} INFO - Task is not able to be run
```
I have checked that the dag_run does exist in my DB:
![image](https://user-images.githubusercontent.com/8662365/168151064-b610f6c9-c9ab-40b9-9b5b-fb7e8773aed9.png)
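For completeness, the same check can be done from Python with a small sketch like this (roughly the lookup the CLI performs before raising `DagRunNotFound`):
```python
from airflow.models import DagRun
from airflow.utils.session import create_session

with create_session() as session:
    for dr in session.query(DagRun).filter(DagRun.dag_id == "example_bash_operator"):
        print(dr.run_id, dr.execution_date, dr.state)
```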
### What you think should happen instead
_No response_
### How to reproduce
Pull the latest main branch with this commit: `7277122ae62305de19ceef33607f09cf030a3cd4`.
Run the airflow scheduler, webserver, and worker locally with `CeleryExecutor`.
### Operating System
Apple M1 Max, version: 12.2
### Versions of Apache Airflow Providers
NA
### Deployment
Other
### Deployment details
on my local mac with latest main branch, latest commit: `7277122ae62305de19ceef33607f09cf030a3cd4`
### Anything else
Python version:
Python 3.9.7
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23679 | https://github.com/apache/airflow/pull/23723 | ce8ea6691820140a0e2d9a5dad5254bc05a5a270 | 888bc2e233b1672a61433929e26b82210796fd71 | "2022-05-12T19:15:54Z" | python | "2022-05-20T14:09:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,670 | ["airflow/www/static/js/dags.js", "airflow/www/views.py", "tests/www/views/test_views_acl.py"] | Airflow 2.3.0: can't filter by owner if selected from dropdown | ### Apache Airflow version
2.3.0 (latest released)
### What happened
On a clean install of 2.3.0, whenever I try to filter by owner, if I select it from the dropdown (which correctly detects the owner's name) it returns the following error:
`DAG "ecodina" seems to be missing from DagBag.`
Webserver's log:
```
127.0.0.1 - - [12/May/2022:12:27:47 +0000] "GET /dagmodel/autocomplete?query=ecodin&status=all HTTP/1.1" 200 17 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "GET /dags/ecodina/grid?search=ecodina HTTP/1.1" 302 217 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "GET /home HTTP/1.1" 200 35774 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /blocked HTTP/1.1" 200 2 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /last_dagruns HTTP/1.1" 200 402 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /dag_stats HTTP/1.1" 200 333 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /task_stats HTTP/1.1" 200 1194 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
```
Instead, if I write the owner's name fully and avoid selecting it from the dropdown, it works as expected since it constructs the correct URL:
`my.airflow.com/home?search=ecodina`
### What you think should happen instead
The DAGs table should only show the selected owner's DAGs.
### How to reproduce
- Start the Airflow Webserver
- Connect to the Airflow webpage
- Type an owner name in the _Search DAGs_ textbox and select it from the dropdown
### Operating System
CentOS Linux 8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Installed on a conda environment, as if it was a virtualenv:
- `conda create -c conda-forge -n airflow python=3.9`
- `conda activate airflow`
- `pip install "apache-airflow[postgres]==2.3.0" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
Database: PostgreSQL 13
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23670 | https://github.com/apache/airflow/pull/23804 | 70b41e46b46e65c0446a40ab91624cb2291a5039 | 29afd35b9cfe141b668ce7ceccecdba60775a8ff | "2022-05-12T12:33:06Z" | python | "2022-05-24T13:43:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,669 | ["docs/README.rst"] | Fix ./breeze build-docs command options in docs/README.rst | ### What do you see as an issue?
I got an error when executing `./breeze build-docs -- --help` command in docs/README.rst.
```bash
% ./breeze build-docs -- --help
Usage: breeze build-docs [OPTIONS]
Try running the '--help' flag for more information.
╭─ Error ─────────────────────────────────────────────────╮
│ Got unexpected extra argument (--help) │
╰─────────────────────────────────────────────────────────╯
To find out more, visit
https://github.com/apache/airflow/blob/main/BREEZE.rst
```
### Solving the problem
"--" in option should be removed.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23669 | https://github.com/apache/airflow/pull/23671 | 3138604b264878f27505223bd14c7814eacc1e57 | 3fa57168a520d8afe0c06d8a0200dd3517f43078 | "2022-05-12T12:17:00Z" | python | "2022-05-12T12:33:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,666 | ["airflow/providers/amazon/aws/transfers/s3_to_sql.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/operators/transfer/s3_to_sql.rst", "tests/providers/amazon/aws/transfers/test_s3_to_sql.py", "tests/system/providers/amazon/aws/example_s3_to_sql.py"] | Add transfers operator S3 to SQL / SQL to SQL | ### Description
Should we add an S3-to-SQL transfer operator to the AWS transfers?
### Use case/motivation
1. After processing data with Spark/Glue (and similar), we need to publish the data to a SQL database (a usage sketch follows this list).
2. Synchronize data between two SQL databases.
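A purely hypothetical usage sketch of what the proposed transfer could look like — the operator name and every parameter below are illustrative assumptions, not an existing API:
```python
# hypothetical operator; nothing below exists in the Amazon provider yet
s3_to_sql = S3ToSqlOperator(
    task_id="publish_results",
    s3_bucket="processed-data",          # output written by the Spark/Glue job
    s3_key="results/part-00000.csv",
    table="analytics.results",
    sql_conn_id="my_database",
    aws_conn_id="aws_default",
)
```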
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23666 | https://github.com/apache/airflow/pull/29085 | e5730364b4eb5a3b30e815ca965db0f0e710edb6 | efaed34213ad4416e2f4834d0cd2f60c41814507 | "2022-05-12T09:41:35Z" | python | "2023-01-23T21:53:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,642 | ["airflow/models/mappedoperator.py", "tests/models/test_taskinstance.py"] | Dynamic Task Crashes scheduler - Non Empty Return | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I have a DAG that looks like the one below.
When I uncomment `py_job` (a dynamically mapped PythonOperator), it works well with `pull_messages` (TaskFlow API).
When I try to do the same with `DatabricksRunNowOperator`, it crashes the scheduler with the error below.
Related issue: #23486
### Sample DAG
```
import json
import pendulum
from airflow.decorators import dag, task
from airflow.operators.python import PythonOperator
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator
@dag(
schedule_interval=None,
start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
catchup=False,
tags=['example'],
)
def tutorial_taskflow_api_etl():
def random(*args, **kwargs):
print ("==== kwargs inside random ====", args, kwargs)
print ("I'm random")
return 49
@task
def pull_messages():
return [["hi"], ["hello"]]
op = DatabricksRunNowOperator.partial(
task_id = "new_job",
job_id=42,
notebook_params={"dry-run": "true"},
python_params=["douglas adams", "42"],
spark_submit_params=["--class", "org.apache.spark.examples.SparkPi"]
).expand(jar_params=pull_messages())
# py_job = PythonOperator.partial(
# task_id = 'py_job',
# python_callable=random
# ).expand(op_args= pull_messages())
tutorial_etl_dag = tutorial_taskflow_api_etl()
```
### Error
```
[2022-05-11 11:46:30 +0000] [40] [INFO] Worker exiting (pid: 40)
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.9/site-packages/astronomer/airflow/version_check/plugin.py", line 29, in run_before
fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 824, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 906, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1148, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 522, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 658, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 714, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 609, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 595, in _resolve_map_lengths
raise RuntimeError(f"Failed to populate all mapping metadata; missing: {keys}")
RuntimeError: Failed to populate all mapping metadata; missing: 'jar_params'
[2022-05-11 11:46:30 +0000] [31] [INFO] Shutting down: Master
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23642 | https://github.com/apache/airflow/pull/23771 | 5e3f652397005c5fac6c6b0099de345b5c39148d | 3849ebb8d22bbc229d464c4171c9b5ff960cd089 | "2022-05-11T11:56:36Z" | python | "2022-05-18T19:43:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,639 | ["airflow/models/trigger.py"] | Triggerer process die with DB Deadlock | ### Apache Airflow version
2.2.5
### What happened
When creating many deferrable operators (e.g. `TimeDeltaSensorAsync`), the triggerer component dies because of a DB deadlock issue.
```
[2022-05-11 02:45:08,420] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5397) starting
[2022-05-11 02:45:08,421] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5398) starting
[2022-05-11 02:45:09,459] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5400) starting
[2022-05-11 02:45:09,461] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5399) starting
[2022-05-11 02:45:10,503] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5401) starting
[2022-05-11 02:45:10,504] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5402) starting
[2022-05-11 02:45:11,113] {triggerer_job.py:108} ERROR - Exception when executing TriggererJob._run_trigger_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 106, in _execute
self._run_trigger_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 127, in _run_trigger_loop
Trigger.clean_unused()
File "/usr/local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/models/trigger.py", line 91, in clean_unused
session.query(TaskInstance).filter(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 4063, in update
update_op.exec_()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1697, in exec_
self._do_exec()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1895, in _do_exec
self._execute_stmt(update_stmt)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1702, in _execute_stmt
self.result = self.query._execute_crud(stmt, self.mapper)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3568, in _execute_crud
return conn.execute(stmt, self._params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: UPDATE task_instance SET trigger_id=%s WHERE task_instance.state != %s AND task_instance.trigger_id IS NOT NULL]
[parameters: (None, <TaskInstanceState.DEFERRED: 'deferred'>)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
[2022-05-11 02:45:11,118] {triggerer_job.py:111} INFO - Waiting for triggers to clean up
[2022-05-11 02:45:11,592] {triggerer_job.py:117} INFO - Exited trigger loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/cli/commands/triggerer_command.py", line 56, in triggerer
job.run()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 246, in run
self._execute()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 106, in _execute
self._run_trigger_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 127, in _run_trigger_loop
Trigger.clean_unused()
File "/usr/local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/models/trigger.py", line 91, in clean_unused
session.query(TaskInstance).filter(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 4063, in update
update_op.exec_()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1697, in exec_
self._do_exec()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1895, in _do_exec
self._execute_stmt(update_stmt)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1702, in _execute_stmt
self.result = self.query._execute_crud(stmt, self.mapper)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3568, in _execute_crud
return conn.execute(stmt, self._params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: UPDATE task_instance SET trigger_id=%s WHERE task_instance.state != %s AND task_instance.trigger_id IS NOT NULL]
[parameters: (None, <TaskInstanceState.DEFERRED: 'deferred'>)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
```
### What you think should happen instead
The triggerer process should not crash with a deadlock error.
### How to reproduce
Create "test_timedelta" DAG and run it.
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.sensors.time_delta import TimeDeltaSensorAsync
default_args = {
"owner": "user",
"start_date": datetime(2021, 2, 8),
"retries": 2,
"retry_delay": timedelta(minutes=20),
"depends_on_past": False,
}
with DAG(
dag_id="test_timedelta",
default_args=default_args,
schedule_interval="10 11 * * *",
max_active_runs=1,
max_active_tasks=2,
catchup=False,
) as dag:
start = DummyOperator(task_id="start")
end = DummyOperator(task_id="end")
for idx in range(800):
tx = TimeDeltaSensorAsync(
task_id=f"sleep_{idx}",
delta=timedelta(days=3),
)
start >> tx >> end
```
### Operating System
uname_result(system='Linux', node='d2845d6331fd', release='5.10.104-linuxkit', version='#1 SMP Thu Mar 17 17:08:06 UTC 2022', machine='x86_64', processor='')
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-druid | 2.3.3
apache-airflow-providers-apache-hive | 2.3.2
apache-airflow-providers-apache-spark | 2.1.3
apache-airflow-providers-celery | 2.1.3
apache-airflow-providers-ftp | 2.1.2
apache-airflow-providers-http | 2.1.2
apache-airflow-providers-imap | 2.2.3
apache-airflow-providers-jdbc | 2.1.3
apache-airflow-providers-mysql | 2.2.3
apache-airflow-providers-postgres | 4.1.0
apache-airflow-providers-redis | 2.0.4
apache-airflow-providers-sqlite | 2.1.3
apache-airflow-providers-ssh | 2.4.3
### Deployment
Other Docker-based deployment
### Deployment details
webserver: 1 instance
scheduler: 1 instance
worker: 1 instance (Celery)
triggerer: 1 instance
redis: 1 instance
Database: 1 instance (mysql)
### Anything else
webserver: 172.19.0.9
scheduler: 172.19.0.7
triggerer: 172.19.0.5
worker: 172.19.0.8
MYSQL (`SHOW ENGINE INNODB STATUS;`)
```
------------------------
LATEST DETECTED DEADLOCK
------------------------
2022-05-11 07:47:49 139953955817216
*** (1) TRANSACTION:
TRANSACTION 544772, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 7 lock struct(s), heap size 1128, 2 row lock(s)
MySQL thread id 20, OS thread handle 139953861383936, query id 228318 172.19.0.5 airflow_user updating
UPDATE task_instance SET trigger_id=NULL WHERE task_instance.state != 'deferred' AND task_instance.trigger_id IS NOT NULL
*** (1) HOLDS THE LOCK(S):
RECORD LOCKS space id 125 page no 231 n bits 264 index ti_state of table `airflow_db`.`task_instance` trx id 544772 lock_mode X locks rec but not gap
Record lock, heap no 180 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
0: len 6; hex 717565756564; asc queued;;
1: len 14; hex 746573745f74696d6564656c7461; asc test_timedelta;;
2: len 9; hex 736c6565705f323436; asc sleep_246;;
3: len 30; hex 7363686564756c65645f5f323032322d30352d30395431313a31303a3030; asc scheduled__2022-05-09T11:10:00; (total 36 bytes);
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 125 page no 47 n bits 128 index PRIMARY of table `airflow_db`.`task_instance` trx id 544772 lock_mode X locks rec but not gap waiting
Record lock, heap no 55 PHYSICAL RECORD: n_fields 28; compact format; info bits 0
0: len 14; hex 746573745f74696d6564656c7461; asc test_timedelta;;
1: len 9; hex 736c6565705f323436; asc sleep_246;;
2: len 30; hex 7363686564756c65645f5f323032322d30352d30395431313a31303a3030; asc scheduled__2022-05-09T11:10:00; (total 36 bytes);
3: len 6; hex 000000085001; asc P ;;
4: len 7; hex 01000001411e2f; asc A /;;
5: len 7; hex 627b6a250b612d; asc b{j% a-;;
6: SQL NULL;
7: SQL NULL;
8: len 7; hex 72756e6e696e67; asc running;;
9: len 4; hex 80000001; asc ;;
10: len 12; hex 643238343564363333316664; asc d2845d6331fd;;
11: len 4; hex 726f6f74; asc root;;
12: len 4; hex 8000245e; asc $^;;
13: len 12; hex 64656661756c745f706f6f6c; asc default_pool;;
14: len 7; hex 64656661756c74; asc default;;
15: len 4; hex 80000002; asc ;;
16: len 20; hex 54696d6544656c746153656e736f724173796e63; asc TimeDeltaSensorAsync;;
17: len 7; hex 627b6a240472e0; asc b{j$ r ;;
18: SQL NULL;
19: len 4; hex 80000002; asc ;;
20: len 5; hex 80057d942e; asc } .;;
21: len 4; hex 80000001; asc ;;
22: len 4; hex 800021c7; asc ! ;;
23: len 30; hex 36353061663737642d363762372d343166382d383439342d636637333061; asc 650af77d-67b7-41f8-8494-cf730a; (total 36 bytes);
24: SQL NULL;
25: SQL NULL;
26: SQL NULL;
27: len 2; hex 0400; asc ;;
*** (2) TRANSACTION:
TRANSACTION 544769, ACTIVE 0 sec updating or deleting
mysql tables in use 1, locked 1
LOCK WAIT 7 lock struct(s), heap size 1128, 4 row lock(s), undo log entries 2
MySQL thread id 12010, OS thread handle 139953323235072, query id 228319 172.19.0.8 airflow_user updating
UPDATE task_instance SET start_date='2022-05-11 07:47:49.745773', state='running', try_number=1, hostname='d2845d6331fd', job_id=9310 WHERE task_instance.task_id = 'sleep_246' AND task_instance.dag_id = 'test_timedelta' AND task_instance.run_id = 'scheduled__2022-05-09T11:10:00+00:00'
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 125 page no 47 n bits 120 index PRIMARY of table `airflow_db`.`task_instance` trx id 544769 lock_mode X locks rec but not gap
Record lock, heap no 55 PHYSICAL RECORD: n_fields 28; compact format; info bits 0
0: len 14; hex 746573745f74696d6564656c7461; asc test_timedelta;;
1: len 9; hex 736c6565705f323436; asc sleep_246;;
2: len 30; hex 7363686564756c65645f5f323032322d30352d30395431313a31303a3030; asc scheduled__2022-05-09T11:10:00; (total 36 bytes);
3: len 6; hex 000000085001; asc P ;;
4: len 7; hex 01000001411e2f; asc A /;;
5: len 7; hex 627b6a250b612d; asc b{j% a-;;
6: SQL NULL;
7: SQL NULL;
8: len 7; hex 72756e6e696e67; asc running;;
9: len 4; hex 80000001; asc ;;
10: len 12; hex 643238343564363333316664; asc d2845d6331fd;;
11: len 4; hex 726f6f74; asc root;;
12: len 4; hex 8000245e; asc $^;;
13: len 12; hex 64656661756c745f706f6f6c; asc default_pool;;
14: len 7; hex 64656661756c74; asc default;;
15: len 4; hex 80000002; asc ;;
16: len 20; hex 54696d6544656c746153656e736f724173796e63; asc TimeDeltaSensorAsync;;
17: len 7; hex 627b6a240472e0; asc b{j$ r ;;
18: SQL NULL;
19: len 4; hex 80000002; asc ;;
20: len 5; hex 80057d942e; asc } .;;
21: len 4; hex 80000001; asc ;;
22: len 4; hex 800021c7; asc ! ;;
23: len 30; hex 36353061663737642d363762372d343166382d383439342d636637333061; asc 650af77d-67b7-41f8-8494-cf730a; (total 36 bytes);
24: SQL NULL;
25: SQL NULL;
26: SQL NULL;
27: len 2; hex 0400; asc ;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 125 page no 231 n bits 264 index ti_state of table `airflow_db`.`task_instance` trx id 544769 lock_mode X locks rec but not gap waiting
Record lock, heap no 180 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
0: len 6; hex 717565756564; asc queued;;
1: len 14; hex 746573745f74696d6564656c7461; asc test_timedelta;;
2: len 9; hex 736c6565705f323436; asc sleep_246;;
3: len 30; hex 7363686564756c65645f5f323032322d30352d30395431313a31303a3030; asc scheduled__2022-05-09T11:10:00; (total 36 bytes);
*** WE ROLL BACK TRANSACTION (1)
```
Airflow env
```
AIRFLOW__CELERY__RESULT_BACKEND=db+mysql://airflow_user:airflow_pass@mysql/airflow_db
AIRFLOW__CORE__DEFAULT_TIMEZONE=KST
AIRFLOW__CELERY__BROKER_URL=redis://redis:6379/0
AIRFLOW__CORE__LOAD_EXAMPLES=False
AIRFLOW__WEBSERVER__DEFAULT_UI_TIMEZONE=KST
AIRFLOW_HOME=/home/deploy/airflow
AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL=30
AIRFLOW__CORE__EXECUTOR=CeleryExecutor
AIRFLOW__WEBSERVER__SECRET_KEY=aoiuwernholo
AIRFLOW__DATABASE__LOAD_DEFAULT_CONNECTIONS=False
AIRFLOW__CORE__SQL_ALCHEMY_CONN=mysql+mysqldb://airflow_user:airflow_pass@mysql/airflow_db
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23639 | https://github.com/apache/airflow/pull/24071 | 5087f96600f6d7cc852b91079e92d00df6a50486 | d86ae090350de97e385ca4aaf128235f4c21f158 | "2022-05-11T08:03:17Z" | python | "2022-06-01T17:54:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,623 | ["airflow/providers/snowflake/hooks/snowflake.py", "tests/providers/snowflake/hooks/test_snowflake.py"] | SnowflakeHook.run() raises UnboundLocalError exception if sql argument is empty | ### Apache Airflow Provider(s)
snowflake
### Versions of Apache Airflow Providers
apache-airflow-providers-snowflake==2.3.0
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux AMI
### Deployment
MWAA
### Deployment details
_No response_
### What happened
If the sql parameter is an empty list, the `run()` method attempts to return the execution_info list variable before it has ever been initialized.
The execution_info variable is [defined](https://github.com/apache/airflow/blob/2.3.0/airflow/providers/snowflake/hooks/snowflake.py#L330) only inside the loop over the sql queries, so if the list of queries is empty, it never gets defined.
```
[...]
snowflake_hook.run(sql=queries, autocommit=True)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/providers/snowflake/hooks/snowflake.py", line 304, in run
return execution_info
UnboundLocalError: local variable 'execution_info' referenced before assignment
```
### What you think should happen instead
The function could either return an empty list or None.
Perhaps the `execution_info` variable definition could just be moved further up in the function definition so that returning it at the end doesn't raise issues.
Or, there should be a check in the `run` implementation to see if the `sql` argument is empty or not, and appropriately handle what to return from there.
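A minimal sketch of that check, written as a standalone function rather than the real hook code (the name `run_queries` and the placeholder loop body are assumptions for illustration only):
```python
from typing import Iterable, List, Optional


def run_queries(sql: Iterable[str]) -> List[Optional[list]]:
    """Initialize the result list before the loop so an empty ``sql``
    argument returns an empty list instead of raising UnboundLocalError."""
    execution_info: List[Optional[list]] = []  # defined up front, not inside the loop
    for query in sql:
        # placeholder for cursor.execute(query) and collecting its results
        execution_info.append(None)
    return execution_info


assert run_queries([]) == []  # no exception for an empty list of queries
```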
### How to reproduce
Pass an empty list to the sql argument when calling `SnowflakeHook.run()`.
### Anything else
My script that utilizes the `SnowflakeHook.run()` function is automated in a way where there isn't always a case that there are sql queries to run.
Of course, on my end I would update my code to first check if the sql queries list is populated before calling the hook to run.
However, it would prevent unintended exceptions if the hook's `run()` function also handled appropriately what gets returned in the event that the `sql` argument is empty.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23623 | https://github.com/apache/airflow/pull/23767 | 4c9f7560355eefd57a29afee73bf04273e81a7e8 | 86cfd1244a641a8f17c9b33a34399d9be264f556 | "2022-05-10T14:37:36Z" | python | "2022-05-20T03:59:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,622 | ["airflow/providers/databricks/operators/databricks.py"] | DatabricksSubmitRunOperator and DatabricksRunNowOperator cannot define .json as template_ext | ### Apache Airflow version
2.2.2
### What happened
Introduced in https://github.com/apache/airflow/commit/0a2d0d1ecbb7a72677f96bc17117799ab40853e0, the Databricks operators now define the template_ext property as `('.json',)`. This change broke a few of our DAGs, which pass the path of a config JSON file that needs to be posted to Databricks. Example:
```python
DatabricksRunNowOperator(
task_id=...,
job_name=...,
python_params=["app.py", "--config", "/path/to/config/inside-docker-image.json"],
databricks_conn_id=...,
email_on_failure=...,
)
```
This snippet makes Airflow try to load /path/to/config/inside-docker-image.json as a template file, which is not desired.
@utkarsharma2 @potiuk can this change be reverted, please? It's causing headaches when a json file is provided as part of the dag parameters.
### What you think should happen instead
Use a more specific extension for databricks operators, like ```.json-tpl```
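Until the extension changes, one possible user-side workaround (a sketch, not the official fix) is to subclass the operator and clear `template_ext` so plain `.json` strings in the parameters are left untouched:
```python
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator


class NonTemplatedDatabricksRunNowOperator(DatabricksRunNowOperator):
    # An empty tuple means no file extension triggers template-file loading.
    template_ext = ()
```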
### How to reproduce
_No response_
### Operating System
Any
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==2.6.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23622 | https://github.com/apache/airflow/pull/23641 | 84c9f4bf70cbc2f4ba19fdc5aa88791500d4daaa | acf89510cd5a18d15c1a45e674ba0bcae9293097 | "2022-05-10T13:54:23Z" | python | "2022-06-04T21:51:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,613 | ["airflow/providers/google/cloud/example_dags/example_cloud_sql.py", "airflow/providers/google/cloud/operators/cloud_sql.py", "tests/providers/google/cloud/operators/test_cloud_sql.py"] | Add an offload option to CloudSQLExportInstanceOperator validation specification | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==5.0.0
### Apache Airflow version
2.1.2
### Operating System
GCP Container
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### What happened
I want to use serverless export to offload the export operation from the primary instance.
https://cloud.google.com/sql/docs/mysql/import-export#serverless
Used CloudSQLExportInstanceOperator with the exportContext.offload flag to perform a serverless export operation.
I got the following warning:
```
{field_validator.py:266} WARNING - The field 'exportContext.offload' is in the body, but is not specified in the validation specification '[{'name': 'fileType', 'allow_empty': False}, {'name': 'uri', 'allow_empty': False}, {'name': 'databases', 'optional': True, 'type': 'list'}, {'name': 'sqlExportOptions', 'type': 'dict', 'optional': True, 'fields': [{'name': 'tables', 'optional': True, 'type': 'list'}, {'name': 'schemaOnly', 'optional': True}]}, {'name': 'csvExportOptions', 'type': 'dict', 'optional': True, 'fields': [{'name': 'selectQuery'}]}]'. This might be because you are using newer API version and new field names defined for that version. Then the warning can be safely ignored, or you might want to upgrade the operatorto the version that supports the new API version.
```
### What you think should happen instead
I think a validation specification for `exportContext.offload` should be added.
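A rough sketch of what the added entry could look like, following the dict format shown in the warning above (the surrounding list is abbreviated and its exact placement inside the operator is an assumption, not the actual provider code):
```python
export_context_validation = [
    {'name': 'fileType', 'allow_empty': False},
    {'name': 'uri', 'allow_empty': False},
    {'name': 'databases', 'optional': True, 'type': 'list'},
    {'name': 'offload', 'optional': True},  # proposed addition
]
```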
### How to reproduce
Try to use `exportContext.offload`, as in the example below.
```python
CloudSQLExportInstanceOperator(
task_id='export_task',
project_id='some_project',
instance='cloud_sql_instance',
body={
"exportContext": {
"fileType": "csv",
"uri": "gs://my-bucket/export.csv",
"databases": ["some_db"],
"csvExportOptions": {"selectQuery": "select * from some_table limit 10"},
"offload": True
}
},
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23613 | https://github.com/apache/airflow/pull/23614 | 1bd75ddbe3b1e590e38d735757d99b43db1725d6 | 74557e41e3dcedec241ea583123d53176994cccc | "2022-05-10T07:23:07Z" | python | "2022-05-10T09:49:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,610 | ["airflow/executors/celery_kubernetes_executor.py", "airflow/executors/local_kubernetes_executor.py", "tests/executors/test_celery_kubernetes_executor.py", "tests/executors/test_local_kubernetes_executor.py"] | AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback' | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The issue started to occur after upgrading Airflow from v2.2.5 to v2.3.0. The schedulers crash when a DAG SLA is configured. It only occurred when using `CeleryKubernetesExecutor`; tested with `CeleryExecutor`, it works as expected.
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 824, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 919, in _do_scheduling
self._send_dag_callbacks_to_processor(dag, callback_to_run)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1179, in _send_dag_callbacks_to_processor
self._send_sla_callbacks_to_processor(dag)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1195, in _send_sla_callbacks_to_processor
self.executor.send_callback(request)
AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback'
```
### What you think should happen instead
Work like the previous version: the scheduler should send SLA callbacks without crashing.
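A hedged sketch of one way the wrapper executor could expose the missing method (an assumption, not necessarily the actual fix): delegate callback requests to the wrapped Celery executor, which already implements it.
```python
from airflow.executors.celery_kubernetes_executor import CeleryKubernetesExecutor


def _send_callback(self, request):
    # Forward the callback to the underlying Celery executor.
    self.celery_executor.send_callback(request)


# Illustrative monkey-patch only; a real fix belongs in the executor class itself.
CeleryKubernetesExecutor.send_callback = _send_callback
```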
### How to reproduce
1. Use `CeleryKubernetesExecutor`
2. Configure DAG's SLA
DAG to reproduce:
```
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Example DAG demonstrating the usage of the BashOperator."""
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
DEFAULT_ARGS = {
"sla": timedelta(hours=1),
}
with DAG(
dag_id="example_bash_operator",
default_args=DEFAULT_ARGS,
schedule_interval="0 0 * * *",
start_date=datetime(2021, 1, 1),
catchup=False,
dagrun_timeout=timedelta(minutes=60),
tags=["example", "example2"],
params={"example_key": "example_value"},
) as dag:
run_this_last = DummyOperator(
task_id="run_this_last",
)
# [START howto_operator_bash]
run_this = BashOperator(
task_id="run_after_loop",
bash_command="echo 1",
)
# [END howto_operator_bash]
run_this >> run_this_last
for i in range(3):
task = BashOperator(
task_id="runme_" + str(i),
bash_command='echo "{{ task_instance_key_str }}" && sleep 1',
)
task >> run_this
# [START howto_operator_bash_template]
also_run_this = BashOperator(
task_id="also_run_this",
bash_command='echo "run_id={{ run_id }} | dag_run={{ dag_run }}"',
)
# [END howto_operator_bash_template]
also_run_this >> run_this_last
# [START howto_operator_bash_skip]
this_will_skip = BashOperator(
task_id="this_will_skip",
bash_command='echo "hello world"; exit 99;',
dag=dag,
)
# [END howto_operator_bash_skip]
this_will_skip >> run_this_last
if __name__ == "__main__":
dag.cli()
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23610 | https://github.com/apache/airflow/pull/23617 | 60a1d9d191fb8fc01893024c897df9632ad5fbf4 | c5b72bf30c8b80b6c022055834fc7272a1a44526 | "2022-05-10T03:29:05Z" | python | "2022-05-10T17:13:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,588 | ["airflow/www/static/js/dag/details/taskInstance/taskActions/ClearInstance.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/MarkInstanceAs.tsx"] | After upgrade from Airflow 2.2.4, grid disappears for some DAGs | ### Apache Airflow version
2.3.0 (latest released)
### What happened
After the upgrade from 2.2.4 to 2.3.0, the grid data for some DAGs seems to be missing, and the UI renders blank.
### What you think should happen instead
When I click the grid for a specific execution date, I expect to be able to click the tasks and view the log, render jinja templating, and clear status
### How to reproduce
Run an upgrade from 2.2.4 to 2.3.0 with a huge database (we have ~750 DAGs with a minimum of 10 tasks each).
In addition, we heavily rely on XCom.
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow apache_airflow-2.3.0-py3-none-any.whl
apache-airflow-providers-amazon apache_airflow_providers_amazon-3.3.0-py3-none-any.whl
apache-airflow-providers-ftp apache_airflow_providers_ftp-2.1.2-py3-none-any.whl
apache-airflow-providers-http apache_airflow_providers_http-2.1.2-py3-none-any.whl
apache-airflow-providers-imap apache_airflow_providers_imap-2.2.3-py3-none-any.whl
apache-airflow-providers-mongo apache_airflow_providers_mongo-2.3.3-py3-none-any.whl
apache-airflow-providers-mysql apache_airflow_providers_mysql-2.2.3-py3-none-any.whl
apache-airflow-providers-pagerduty apache_airflow_providers_pagerduty-2.1.3-py3-none-any.whl
apache-airflow-providers-postgres apache_airflow_providers_postgres-4.1.0-py3-none-any.whl
apache-airflow-providers-sendgrid apache_airflow_providers_sendgrid-2.0.4-py3-none-any.whl
apache-airflow-providers-slack apache_airflow_providers_slack-4.2.3-py3-none-any.whl
apache-airflow-providers-sqlite apache_airflow_providers_sqlite-2.1.3-py3-none-any.whl
apache-airflow-providers-ssh apache_airflow_providers_ssh-2.4.3-py3-none-any.whl
apache-airflow-providers-vertica apache_airflow_providers_vertica-2.1.3-py3-none-any.whl
### Deployment
Virtualenv installation
### Deployment details
Python 3.8.10
### Anything else
For the affected DAGs, all the time
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23588 | https://github.com/apache/airflow/pull/32992 | 8bfad056d8ef481cc44288c5749fa5c54efadeaa | 943b97850a1e82e4da22e8489c4ede958a42213d | "2022-05-09T13:37:42Z" | python | "2023-08-03T08:29:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,580 | ["airflow/www/static/js/grid/AutoRefresh.jsx", "airflow/www/static/js/grid/Grid.jsx", "airflow/www/static/js/grid/Grid.test.jsx", "airflow/www/static/js/grid/Main.jsx", "airflow/www/static/js/grid/ToggleGroups.jsx", "airflow/www/static/js/grid/api/useGridData.test.jsx", "airflow/www/static/js/grid/details/index.jsx", "airflow/www/static/js/grid/index.jsx", "airflow/www/static/js/grid/renderTaskRows.jsx", "airflow/www/static/js/grid/renderTaskRows.test.jsx"] | `task_id` with `.` e.g. `hello.world` is not rendered in grid view | ### Apache Airflow version
2.3.0 (latest released)
### What happened
`task_id` with `.` e.g. `hello.world` is not rendered in grid view.
### What you think should happen instead
The task should be rendered just fine in Grid view.
### How to reproduce
```
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Example DAG demonstrating the usage of the BashOperator."""
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
with DAG(
dag_id="example_bash_operator",
schedule_interval="0 0 * * *",
start_date=datetime(2021, 1, 1),
catchup=False,
dagrun_timeout=timedelta(minutes=60),
tags=["example", "example2"],
params={"example_key": "example_value"},
) as dag:
run_this_last = DummyOperator(
task_id="run.this.last",
)
# [START howto_operator_bash]
run_this = BashOperator(
task_id="run.after.loop",
bash_command="echo 1",
)
# [END howto_operator_bash]
run_this >> run_this_last
for i in range(3):
task = BashOperator(
task_id="runme." + str(i),
bash_command='echo "{{ task_instance_key_str }}" && sleep 1',
)
task >> run_this
# [START howto_operator_bash_template]
also_run_this = BashOperator(
task_id="also.run.this",
bash_command='echo "run_id={{ run_id }} | dag_run={{ dag_run }}"',
)
# [END howto_operator_bash_template]
also_run_this >> run_this_last
# [START howto_operator_bash_skip]
this_will_skip = BashOperator(
task_id="this.will.skip",
bash_command='echo "hello world"; exit 99;',
dag=dag,
)
# [END howto_operator_bash_skip]
this_will_skip >> run_this_last
if __name__ == "__main__":
dag.cli()
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23580 | https://github.com/apache/airflow/pull/23590 | 028087b5a6e94fd98542d0e681d947979eb1011f | afdfece9372fed83602d50e2eaa365597b7d0101 | "2022-05-09T07:04:00Z" | python | "2022-05-12T19:48:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,576 | ["setup.py"] | The xmltodict 0.13.0 breaks some emr tests | ### Apache Airflow version
main (development)
### What happened
xmltodict 0.13.0 breaks some EMR tests (this is currently happening in `main`):
Example: https://github.com/apache/airflow/runs/6343826225?check_suite_focus=true#step:9:13417
```
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_create_job_flow_extra_args: ValueError: Malformatted input
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_create_job_flow_uses_the_emr_config_to_create_a_cluster: ValueError: Malformatted input
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_get_cluster_id_by_name: ValueError: Malformatted input
```
Downgrading to 0.12.0 fixes the problem.
### What you think should happen instead
The tests should work
### How to reproduce
* Run Breeze
* Run `pytest tests/providers/amazon/aws/hooks/test_emr.py` -> observe it to succeed
* Run `pip install xmltodict==0.13.0` -> observe it being upgraded from 0.12.0
* Run `pytest tests/providers/amazon/aws/hooks/test_emr.py` -> observe it to fail with `Malformed input` error
### Operating System
Any
### Versions of Apache Airflow Providers
Latest from main
### Deployment
Other
### Deployment details
CI
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23576 | https://github.com/apache/airflow/pull/23992 | 614b2329c1603ef1e2199044e2cc9e4b7332c2e0 | eec85d397ef0ecbbe5fd679cf5790adae2ad9c9f | "2022-05-09T01:07:36Z" | python | "2022-05-28T21:58:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,572 | ["airflow/cli/commands/dag_processor_command.py", "tests/cli/commands/test_dag_processor_command.py"] | cli command `dag-processor` uses `[core] sql_alchemy_conn` | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The DAG processor fails to start if `[core] sql_alchemy_conn` is not defined:
```
airflow-local-airflow-dag-processor-1 | [2022-05-08 16:42:35,835] {configuration.py:494} WARNING - section/key [core/sql_alchemy_conn] not found in config
airflow-local-airflow-dag-processor-1 | Traceback (most recent call last):
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/bin/airflow", line 8, in <module>
airflow-local-airflow-dag-processor-1 | sys.exit(main())
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
airflow-local-airflow-dag-processor-1 | args.func(args)
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
airflow-local-airflow-dag-processor-1 | return func(*args, **kwargs)
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
airflow-local-airflow-dag-processor-1 | return f(*args, **kwargs)
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/dag_processor_command.py", line 53, in dag_processor
airflow-local-airflow-dag-processor-1 | sql_conn: str = conf.get('core', 'sql_alchemy_conn').lower()
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/configuration.py", line 486, in get
airflow-local-airflow-dag-processor-1 | return self._get_option_from_default_config(section, key, **kwargs)
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/configuration.py", line 496, in _get_option_from_default_config
airflow-local-airflow-dag-processor-1 | raise AirflowConfigException(f"section/key [{section}/{key}] not found in config")
airflow-local-airflow-dag-processor-1 | airflow.exceptions.AirflowConfigException: section/key [core/sql_alchemy_conn] not found in config
```
### What you think should happen instead
Since https://github.com/apache/airflow/pull/22284 moved `sql_alchemy_conn` to the `[database]` section, `dag-processor` should read this configuration from there.
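A sketch of the suggested change (an assumption, not necessarily the merged fix): read the connection string from the new `[database]` section, falling back to `[core]` for configurations that have not been migrated yet.
```python
from airflow.configuration import conf

sql_conn: str = conf.get(
    "database",
    "sql_alchemy_conn",
    fallback=conf.get("core", "sql_alchemy_conn", fallback=""),
).lower()
```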
### How to reproduce
Run `airflow dag-processor` without defined `[core] sql_alchemy_conn`
https://github.com/apache/airflow/blob/6e5955831672c71bfc0424dd50c8e72f6fd5b2a7/airflow/cli/commands/dag_processor_command.py#L52-L53
### Operating System
Arch Linux
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23572 | https://github.com/apache/airflow/pull/23575 | 827bfda59b7a0db6ada697ccd01c739d37430b9a | 9837e6d813744e3c5861c32e87b3aeb496d0f88d | "2022-05-08T16:48:55Z" | python | "2022-05-09T08:50:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,557 | ["airflow/operators/python.py", "tests/operators/test_python.py"] | templates_dict, op_args, op_kwargs no longer rendered in PythonVirtualenvOperator | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Templated strings in templates_dict, op_args, op_kwargs of PythonVirtualenvOperator are no longer rendered.
### What you think should happen instead
All templated strings in templates_dict, op_args and op_kwargs must be rendered, i.e. these 3 arguments must be template_fields of PythonVirtualenvOperator, as they were in Airflow 2.2.3.
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
This is due to the template_fields class variable being set in PythonVirtualenvOperator
`template_fields: Sequence[str] = ('requirements',)`
which overrides the class variable of PythonOperator
`template_fields = ('templates_dict', 'op_args', 'op_kwargs')`.
I read in a discussion that the intent was to make requirements a template field for PythonVirtualenvOperator, but all template fields of the parent class must be specified as well:
`template_fields: Sequence[str] = ('templates_dict', 'op_args', 'op_kwargs', 'requirements',)`
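Until that lands, a possible interim workaround (a sketch, not the upstream fix) is to subclass the operator and re-add the parent's template fields alongside `requirements`:
```python
from airflow.operators.python import PythonVirtualenvOperator


class TemplatedPythonVirtualenvOperator(PythonVirtualenvOperator):
    # Restore the parent's template fields in addition to 'requirements'.
    template_fields = ('templates_dict', 'op_args', 'op_kwargs', 'requirements')
```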
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23557 | https://github.com/apache/airflow/pull/23559 | 7132be2f11db24161940f57613874b4af86369c7 | 1657bd2827a3299a91ae0abbbfe4f6b80bd4cdc0 | "2022-05-07T11:49:44Z" | python | "2022-05-09T15:17:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,550 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | Dynamic Task Mapping is Immutable within a Run | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Looks like mapped tasks are immutable, even when the source XCOM that created them changes.
This is a problem for things like Late Arriving Data and Data Reprocessing
### What you think should happen instead
Mapped tasks should change in response to a change of input
### How to reproduce
Here is a writeup and MVP DAG demonstrating the issue
https://gist.github.com/fritz-astronomer/d159d0e29d57458af5b95c0f253a3361
### Operating System
docker/debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
Can look into a fix - but may not be able to submit a full PR
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23550 | https://github.com/apache/airflow/pull/23667 | ad297c91777277e2b76dd7b7f0e3e3fc5c32e07c | b692517ce3aafb276e9d23570e9734c30a5f3d1f | "2022-05-06T21:42:12Z" | python | "2022-06-18T07:32:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,546 | ["airflow/www/views.py", "tests/www/views/test_views_graph_gantt.py"] | Gantt Chart Broken After Deleting a Task | ### Apache Airflow version
2.2.5
### What happened
After a task was deleted from a DAG we received the following message when visiting the gantt view for the DAG in the webserver.
```
{
"detail": null,
"status": 404,
"title": "Task delete-me not found",
"type": "https://airflow.apache.org/docs/apache-airflow/2.2.5/stable-rest-api-ref.html#section/Errors/NotFound"
}
```
This was only corrected by manually deleting the offending task instances from the `task_instance` and `task_fail` tables.
### What you think should happen instead
I would expect the gantt chart to load, either excluding the non-existent task or flagging that the task associated with the task instance no longer exists.
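A sketch of the filtering idea (an assumption about shape, not the actual webserver code): drop task instances whose task no longer exists in the current DAG definition before building the gantt data.
```python
def filter_existing_tasks(dag, task_instances):
    # Keep only task instances whose task_id still exists in the DAG.
    return [ti for ti in task_instances if dag.has_task(ti.task_id)]
```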
### How to reproduce
* Create a DAG with multiple tasks.
* Run the DAG.
* Delete one of the tasks.
* Attempt to open the gantt view for the DAG.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Custom docker container hosted on Amazon ECS.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23546 | https://github.com/apache/airflow/pull/23627 | e09e4635b0dc50cbd3a18f8be02ce9b2e2f3d742 | 4b731f440734b7a0da1bbc8595702aaa1110ad8d | "2022-05-06T20:07:01Z" | python | "2022-05-20T19:24:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,532 | ["airflow/utils/file.py", "tests/utils/test_file.py"] | Airflow .airflowignore not handling soft link properly. | ### Apache Airflow version
2.3.0 (latest released)
### What happened
A soft link and the folder it points to, under the same root folder, are handled as the same relative path. Say I have a dags folder which looks like this:
```
-dags:
-- .airflowignore
-- folder
-- soft-links-to-folder -> folder
```
and .airflowignore:
```
folder/
```
both folder and soft-links-to-folder will be ignored.
### What you think should happen instead
Only the folder should be ignored. This was the behavior in Airflow 2.2.4, before I upgraded. ~~The root cause is that both _RegexpIgnoreRule and _GlobIgnoreRule call the `relative_to` method to get the search path.~~
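A minimal pathlib illustration of the symptom, assuming the dags/ layout above exists on disk with `soft-links-to-folder -> folder` (this only shows how the two path views disagree, not the exact code path inside Airflow):
```python
from pathlib import Path

dags = Path("dags")
link = dags / "soft-links-to-folder"
print(link.relative_to(dags))                      # soft-links-to-folder
print(link.resolve().relative_to(dags.resolve()))  # folder -> matches the ignore rule
```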
### How to reproduce
Check @tirkarthi's comment for the test case.
### Operating System
ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23532 | https://github.com/apache/airflow/pull/23535 | 7ab5ea7853df9d99f6da3ab804ffe085378fbd8a | 8494fc7036c33683af06a0e57474b8a6157fda05 | "2022-05-06T13:57:32Z" | python | "2022-05-20T06:35:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,529 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | Provide resources attribute in KubernetesPodOperator to be templated | ### Description
Make the `resources` attribute in KubernetesPodOperator a templated field. We need to modify this value across several runs, and currently each change requires a code change.
### Use case/motivation
For running CPU- and memory-intensive workloads, we want to continuously optimise the "limit_cpu" and "limit_memory" parameters. Hence, we want to provide these parameters as part of the pipeline definition.
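A possible user-side workaround in the meantime (a sketch, assuming this provider version exposes `template_fields` as a plain sequence and renders nested string values): subclass the operator so `resources` is templated.
```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator


class TemplatedResourcesPodOperator(KubernetesPodOperator):
    # Extend the existing template fields so 'resources' is rendered with Jinja.
    template_fields = tuple(KubernetesPodOperator.template_fields) + ('resources',)
```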
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23529 | https://github.com/apache/airflow/pull/27457 | aefadb8c5b9272613d5806b054a1b46edf29d82e | 47a2b9ee7f1ff2cc1cc1aa1c3d1b523c88ba29fb | "2022-05-06T13:35:16Z" | python | "2022-11-09T08:47:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,523 | ["scripts/ci/docker-compose/integration-cassandra.yml"] | Cassandra container 3.0.26 fails to start on CI | ### Apache Airflow version
main (development)
### What happened
Cassandra released a new image (3.0.26) on 05.05.2022 and it broke our builds, for example:
* https://github.com/apache/airflow/runs/6320170343?check_suite_focus=true#step:10:6651
* https://github.com/apache/airflow/runs/6319805534?check_suite_focus=true#step:10:12629
* https://github.com/apache/airflow/runs/6319710486?check_suite_focus=true#step:10:6759
The problem was that the cassandra container did not start cleanly:
```
ERROR: for airflow Container "3bd115315ba7" is unhealthy.
Encountered errors while bringing up the project.
3bd115315ba7 cassandra:3.0 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes (unhealthy) 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp airflow-integration-postgres_cassandra_1
```
The logs of the cassandra container do not show anything suspicious; cassandra seems to start OK, but the health checks for the container keep failing:
```
INFO 08:45:22 Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
INFO 08:45:22 Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO 08:45:23 Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO 08:45:23 Startup complete
INFO 08:45:24 Created default superuser role ‘cassandra’
```
We mitigated it in #23522 by pinning cassandra to version 3.0.25, but more investigation/outreach is needed.
### What you think should happen instead
Cassandra should start properly.
### How to reproduce
Revert #23522 and make a PR. The builds will start to fail with "cassandra unhealthy".
### Operating System
Github Actions
### Versions of Apache Airflow Providers
not relevant
### Deployment
Other
### Deployment details
CI
### Anything else
Always.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23523 | https://github.com/apache/airflow/pull/23537 | 953b85d8a911301c040a3467ab2a1ba2b6d37cd7 | 22a564296be1aee62d738105859bd94003ad9afc | "2022-05-06T10:40:06Z" | python | "2022-05-07T13:36:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,514 | ["airflow/providers/amazon/aws/hooks/s3.py", "tests/providers/amazon/aws/hooks/test_s3.py"] | Json files from S3 downloading as text files | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.0 (latest released)
### Operating System
Mac OS Mojave 10.14.6
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When I download a json file from S3 using the S3Hook:
`filename = s3_hook.download_file(bucket_name=self.source_s3_bucket, key=key, local_path="./data")`
The file is being downloaded as a text file starting with `airflow_temp_`.
### What you think should happen instead
It would be nice to have them download as a JSON file, or at least keep the same filename as in S3, since it currently requires additional code to read the file back into a dictionary (e.g. `ast.literal_eval`) and there is no guarantee that the JSON structure is maintained.
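In the meantime, one possible workaround (a sketch with made-up bucket/key names) is to skip the temp file entirely and read the object straight from S3 as JSON:
```python
import json

from airflow.providers.amazon.aws.hooks.s3 import S3Hook

s3_hook = S3Hook(aws_conn_id="Syntax_S3")
obj = s3_hook.get_key(key="citations/example.json", bucket_name="processed-filings")
filing = json.loads(obj.get()["Body"].read())
```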
### How to reproduce
Where s3_conn_id is the Airflow connection and s3_bucket is a bucket on AWS S3.
This is the custom operator class:
```
from airflow.models.baseoperator import BaseOperator
from airflow.utils.decorators import apply_defaults
from airflow.hooks.S3_hook import S3Hook
import logging
import pickle
class S3SearchFilingsOperator(BaseOperator):
"""
Queries the Datastore API and uploads the processed info as a csv to the S3 bucket.
:param source_s3_bucket: Choose source s3 bucket
:param source_s3_directory: Source s3 directory
:param s3_conn_id: S3 Connection ID
:param destination_s3_bucket: S3 Bucket Destination
"""
@apply_defaults
def __init__(
self,
source_s3_bucket=None,
source_s3_directory=True,
s3_conn_id=True,
destination_s3_bucket=None,
destination_s3_directory=None,
search_terms=[],
*args,
**kwargs) -> None:
super().__init__(*args, **kwargs)
self.source_s3_bucket = source_s3_bucket
self.source_s3_directory = source_s3_directory
self.s3_conn_id = s3_conn_id
self.destination_s3_bucket = destination_s3_bucket
self.destination_s3_directory = destination_s3_directory
def execute(self, context):
"""
Executes the operator.
"""
s3_hook = S3Hook(self.s3_conn_id)
keys = s3_hook.list_keys(bucket_name=self.source_s3_bucket)
for key in keys:
# download file
filename=s3_hook.download_file(bucket_name=self.source_s3_bucket, key=key, local_path="./data")
logging.info(filename)
with open(filename, 'rb') as handle:
filing = handle.read()
filing = pickle.loads(filing)
logging.info(filing.keys())
```
And this is the dag file:
```
from keywordSearch.operators.s3_search_filings_operator import S3SearchFilingsOperator
from airflow import DAG
from airflow.utils.dates import days_ago
from datetime import timedelta
# from aws_pull import aws_pull
default_args = {
"owner" : "airflow",
"depends_on_past" : False,
"start_date": days_ago(2),
"email" : ["airflow@example.com"],
"email_on_failure" : False,
"email_on_retry" : False,
"retries" : 1,
"retry_delay": timedelta(seconds=30)
}
with DAG("keyword-search-full-load",
default_args=default_args,
description="Syntax Keyword Search",
max_active_runs=1,
schedule_interval=None) as dag:
op3 = S3SearchFilingsOperator(
task_id="s3_search_filings",
source_s3_bucket="processed-filings",
source_s3_directory="citations",
s3_conn_id="Syntax_S3",
destination_s3_bucket="keywordsearch",
destination_s3_directory="results",
dag=dag
)
op3
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23514 | https://github.com/apache/airflow/pull/26886 | d544e8fbeb362e76e14d7615d354a299445e5b5a | 777b57f0c6a8ca16df2b96fd17c26eab56b3f268 | "2022-05-05T21:59:08Z" | python | "2022-10-26T11:01:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,512 | ["airflow/cli/commands/webserver_command.py", "tests/cli/commands/test_webserver_command.py"] | Random "duplicate key value violates unique constraint" errors when initializing the postgres database | ### Apache Airflow version
2.3.0 (latest released)
### What happened
While testing Airflow 2.3.0 locally (using PostgreSQL 12.4), the webserver container shows random errors:
```
webserver_1 | + airflow db init
...
webserver_1 | + exec airflow webserver
...
webserver_1 | [2022-05-04 18:58:46,011] {{manager.py:568}} INFO - Added Permission menu access on Permissions to role Admin
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] ERROR: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] DETAIL: Key (permission_view_id, role_id)=(204, 1) already exists.
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] STATEMENT: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), 204, 1) RETURNING ab_permission_view_role.id
webserver_1 | [2022-05-04 18:58:46,015] {{manager.py:570}} ERROR - Add Permission to Role Error: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(204, 1) already exists.
webserver_1 |
webserver_1 | [SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
webserver_1 | [parameters: {'permission_view_id': 204, 'role_id': 1}]
```
Notes:
1. When the DB is first initialized, I get ~40 errors like this (with ~40 different `permission_view_id` values but always the same `'role_id': 1`).
2. When it's not the first time initializing the DB, I always get 1 error like this, but it shows a different `permission_view_id` each time.
3. None of these errors seem to have any real negative effect; the webserver keeps running and Airflow keeps scheduling tasks.
4. "Occasionally" I do get real exceptions which render all the webserver workers dead:
```
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] ERROR: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] STATEMENT: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), 214, 1) RETURNING ab_permission_view_role.id
webserver_1 | [2022-05-05 20:03:30 +0000] [121] [ERROR] Exception in worker process
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
webserver_1 | self.dialect.do_execute(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
webserver_1 | cursor.execute(statement, parameters)
webserver_1 | psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
webserver_1 |
webserver_1 |
webserver_1 | The above exception was the direct cause of the following exception:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
webserver_1 | worker.init_process()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 134, in init_process
webserver_1 | self.load_wsgi()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
webserver_1 | self.wsgi = self.app.wsgi()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
webserver_1 | self.callable = self.load()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
webserver_1 | return self.load_wsgiapp()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
webserver_1 | return util.import_app(self.app_uri)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 412, in import_app
webserver_1 | app = app(*args, **kwargs)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 158, in cached_app
webserver_1 | app = create_app(config=config, testing=testing)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 146, in create_app
webserver_1 | sync_appbuilder_roles(flask_app)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 68, in sync_appbuilder_roles
webserver_1 | flask_app.appbuilder.sm.sync_roles()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/security.py", line 580, in sync_roles
webserver_1 | self.update_admin_permission()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/security.py", line 562, in update_admin_permission
webserver_1 | self.get_session.commit()
webserver_1 | File "<string>", line 2, in commit
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1423, in commit
webserver_1 | self._transaction.commit(_to_root=self.future)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 829, in commit
webserver_1 | self._prepare_impl()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
webserver_1 | self.session.flush()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3255, in flush
webserver_1 | self._flush(objects)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3395, in _flush
webserver_1 | transaction.rollback(_capture_exception=True)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
webserver_1 | compat.raise_(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
webserver_1 | raise exception
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3355, in _flush
webserver_1 | flush_context.execute()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 453, in execute
webserver_1 | rec.execute(self)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 576, in execute
webserver_1 | self.dependency_processor.process_saves(uow, states)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1182, in process_saves
webserver_1 | self._run_crud(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1245, in _run_crud
webserver_1 | connection.execute(statement, secondary_insert)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
webserver_1 | return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 313, in _execute_on_connection
webserver_1 | return connection._execute_clauseelement(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1389, in _execute_clauseelement
webserver_1 | ret = self._execute_context(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
webserver_1 | self._handle_dbapi_exception(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
webserver_1 | util.raise_(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
webserver_1 | raise exception
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
webserver_1 | self.dialect.do_execute(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
webserver_1 | cursor.execute(statement, parameters)
webserver_1 | sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
webserver_1 |
webserver_1 | [SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
webserver_1 | [parameters: {'permission_view_id': 214, 'role_id': 1}]
webserver_1 | (Background on this error at: http://sqlalche.me/e/14/gkpj)
webserver_1 | [2022-05-05 20:03:30 +0000] [121] [INFO] Worker exiting (pid: 121)
flower_1 | + exec airflow celery flower
scheduler_1 | + exec airflow scheduler
webserver_1 | [2022-05-05 20:03:31 +0000] [118] [INFO] Worker exiting (pid: 118)
webserver_1 | [2022-05-05 20:03:31 +0000] [119] [INFO] Worker exiting (pid: 119)
webserver_1 | [2022-05-05 20:03:31 +0000] [120] [INFO] Worker exiting (pid: 120)
worker_1 | + exec airflow celery worker
```
However, such exceptions are rare and purely random; I can't find a way to reproduce them consistently.
### What you think should happen instead
Prior to 2.3.0 there were no such errors.
### How to reproduce
_No response_
### Operating System
Linux Mint 20.3
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23512 | https://github.com/apache/airflow/pull/27297 | 9ab1a6a3e70b32a3cddddf0adede5d2f3f7e29ea | 8f99c793ec4289f7fc28d890b6c2887f0951e09b | "2022-05-05T20:00:11Z" | python | "2022-10-27T04:25:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,497 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | Tasks stuck indefinitely when following container logs | ### Apache Airflow version
2.2.4
### What happened
I observed that some workers hung randomly while running. Also, logs were not being reported. After some time, the pod status was "Completed" when inspecting it through the k8s API, but not in Airflow, which still showed "status:running" for the pod.
After some investigation, the issue turns out to be in the new kubernetes pod operator and depends on a known issue in the kubernetes API.
When a log rotation event occurs in kubernetes, the stream we consume in fetch_container_logs(follow=True, ...) is no longer being fed.
Therefore, the k8s pod operator hangs indefinitely in the middle of the log. Only a SIGTERM can terminate it, as log consumption blocks execute() from finishing.
Ref to the issue in kubernetes: https://github.com/kubernetes/kubernetes/issues/59902
Linking to https://github.com/apache/airflow/issues/12103 for reference, as the result is more or less the same for end user (although the root cause is different)
### What you think should happen instead
Pod operator should not hang.
The pod operator could follow the new logs from the container - although this is arguably out of scope for Airflow, as ideally the k8s API would do it automatically.
### Solution proposal
I think there are several possibilities to work around this on the Airflow side so the task does not hang indefinitely (for example, making `fetch_container_logs` non-blocking for `execute` and instead always blocking until status.phase is completed, as is currently done when get_logs is not true).
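A rough sketch of that idea, assuming a `PodManager`-style object with the `fetch_container_logs` method mentioned above and a separate wait-for-completion helper (`await_pod_completion` is an assumed name here); this is an illustration, not a tested patch:
```python
import threading


def follow_logs_without_blocking(pod_manager, pod, container_name="base"):
    # Run log following in a daemon thread so a stalled stream (e.g. after a
    # kubelet log rotation) cannot block execute() forever.
    log_thread = threading.Thread(
        target=pod_manager.fetch_container_logs,  # method name taken from the report above
        kwargs={"pod": pod, "container_name": container_name, "follow": True},
        daemon=True,
    )
    log_thread.start()
    # Block on the pod reaching a terminal phase instead of on the log stream.
    remote_pod = pod_manager.await_pod_completion(pod)  # assumed helper name
    # Give the log thread a short grace period, then move on even if the
    # stream never closed.
    log_thread.join(timeout=60)
    return remote_pod
```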
### How to reproduce
Running multiple tasks will sooner or later trigger this. Also, one can configure a more aggressive log rotation in k8s so this race is triggered more often.
#### Operating System
Debian GNU/Linux 11 (bullseye)
#### Versions of Apache Airflow Providers
```
apache-airflow==2.2.4
apache-airflow-providers-google==6.4.0
apache-airflow-providers-cncf-kubernetes==3.0.2
```
However, this should be reproducible with master.
#### Deployment
Official Apache Airflow Helm Chart
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23497 | https://github.com/apache/airflow/pull/28336 | 97006910a384579c9f0601a72410223f9b6a0830 | 6d2face107f24b7e7dce4b98ae3def1178e1fc4c | "2022-05-05T09:06:19Z" | python | "2023-03-04T18:08:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,476 | ["airflow/www/static/js/grid/TaskName.jsx"] | Grid View - Multilevel taskgroup shows white text on the UI | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Blank text appears if there are nested Task Groups.
Nested TaskGroup - Graph view:
![image](https://user-images.githubusercontent.com/6821208/166685216-8a13e691-4e33-400e-9ee2-f489b7113853.png)
Nested TaskGroup - Grid view:
![image](https://user-images.githubusercontent.com/6821208/166685452-a3b59ee5-95da-43b2-a352-97d52a0acbbd.png)
### What you think should happen instead
We should see the text, just like at the upper task group level.
### How to reproduce
### Deploy the DAG below:
```
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import datetime
from airflow.utils.task_group import TaskGroup
with DAG(dag_id="grid_view_dag", start_date=datetime(2022, 5, 3, 0, 00), schedule_interval=None, concurrency=2,
max_active_runs=2) as dag:
parent_task_group = None
for i in range(0, 10):
with TaskGroup(group_id=f"tg_level_{i}", parent_group=parent_task_group) as tg:
t = DummyOperator(task_id=f"task_level_{i}")
parent_task_group = tg
```
### Go to the grid view and expand the nodes:
![image](https://user-images.githubusercontent.com/6821208/166683975-0ed583a4-fa24-43e7-8caa-1cd610c07187.png)
#### You can only see the text after selecting it:
![image](https://user-images.githubusercontent.com/6821208/166684102-03482eb3-1207-4f79-abc3-8c1a0116d135.png)
### Operating System
N/A
### Versions of Apache Airflow Providers
N/A
### Deployment
Docker-Compose
### Deployment details
reproducible using the following docker-compose file: https://airflow.apache.org/docs/apache-airflow/2.3.0/docker-compose.yaml
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23476 | https://github.com/apache/airflow/pull/23482 | d9902958448b9d6e013f90f14d2d066f3121dcd5 | 14befe3ad6a03f27e20357e9d4e69f99d19a06d1 | "2022-05-04T13:01:20Z" | python | "2022-05-04T15:30:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,473 | ["airflow/models/dagbag.py", "airflow/security/permissions.py", "airflow/www/security.py", "tests/www/test_security.py"] | Could not get DAG access permission after upgrade to 2.3.0 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I upgraded my Airflow instance from version 2.1.3 to 2.3.0 but ran into an issue where there are no permissions for new DAGs.
**The issue only happens for DAGs whose dag_id contains a dot symbol.**
### What you think should happen instead
There should be 3 new permissions for a DAG.
### How to reproduce
+ Create a new DAG with an id that contains a dot, let's say: `dag.id_1`
+ Go to the UI -> Security -> List Role
+ Edit any Role
+ Try to insert permissions of new DAG above to chosen role.
-> No permissions for the DAG created above can be found.
Only 3 DAG permissions named `can_read_DAG:dag`, `can_edit_DAG:dag`, `can_delete_DAG:dag` exist (the dag_id appears truncated at the dot).
There should be 3 new permissions: `can_read_DAG:dag.id_1`, `can_edit_DAG:dag.id_1`, `can_delete_DAG:dag.id_1`
### Operating System
Kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23473 | https://github.com/apache/airflow/pull/23510 | ae3e68af3c42a53214e8264ecc5121049c3beaf3 | cc35fcaf89eeff3d89e18088c2e68f01f8baad56 | "2022-05-04T09:37:57Z" | python | "2022-06-08T07:47:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,460 | ["README.md", "breeze-complete", "dev/breeze/src/airflow_breeze/global_constants.py", "images/breeze/output-commands-hash.txt", "images/breeze/output-commands.svg", "images/breeze/output-config.svg", "images/breeze/output-shell.svg", "images/breeze/output-start-airflow.svg", "scripts/ci/libraries/_initialization.sh"] | Add Postgres 14 support | ### Description
_No response_
### Use case/motivation
Using Postgres 14 as backend
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23460 | https://github.com/apache/airflow/pull/23506 | 9ab9cd47cff5292c3ad602762ae3e371c992ea92 | 6169e0a69875fb5080e8d70cfd9d5e650a9d13ba | "2022-05-03T18:15:31Z" | python | "2022-05-11T16:26:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,447 | ["airflow/cli/commands/dag_processor_command.py", "tests/cli/commands/test_dag_processor_command.py"] | External DAG processor not working | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Running a standalone Dag Processor instance with `airflow dag-processor` throws the following exception:
```
Standalone DagProcessor is not supported when using sqlite.
```
### What you think should happen instead
The `airflow dag-processor` command should start without an exception when a Postgres database is configured.
### How to reproduce
The error is in the following line: https://github.com/apache/airflow/blob/6f146e721c81e9304bf7c0af66fc3d203d902dab/airflow/cli/commands/dag_processor_command.py#L53
It should be
```python
sql_conn: str = conf.get('database', 'sql_alchemy_conn').lower()
```
due to the change in the configuration file done in https://github.com/apache/airflow/pull/22284
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23447 | https://github.com/apache/airflow/pull/23575 | 827bfda59b7a0db6ada697ccd01c739d37430b9a | 9837e6d813744e3c5861c32e87b3aeb496d0f88d | "2022-05-03T13:36:02Z" | python | "2022-05-09T08:50:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,435 | ["airflow/decorators/base.py", "airflow/models/mappedoperator.py", "airflow/serialization/serialized_objects.py", "tests/api_connexion/endpoints/test_task_endpoint.py", "tests/models/test_taskinstance.py"] | Empty `expand()` crashes the scheduler | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I've found a DAG that will crash the scheduler:
```
@task
def hello():
return "hello"
hello.expand()
```
```
[2022-05-03 03:41:23,779] {scheduler_job.py:753} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 824, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 906, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1148, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 522, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 661, in task_instance_scheduling_decisions
session=session,
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 714, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/mappedoperator.py", line 609, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
TypeError: reduce() of empty sequence with no initial value
```
### What you think should happen instead
A user DAG shouldn't crash the scheduler. This specific case could likely be an ImportError at parse time, but it makes me think we might be missing some exception handling?
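For context, the `TypeError` at the bottom of the traceback is just standard Python behaviour when `reduce()` gets an empty sequence and no initial value, which is what an argument-less `expand()` ends up producing; a minimal illustration in plain Python (not Airflow code):
```python
import operator
from functools import reduce

lengths = {}  # analogous to what an argument-less expand() resolves to

# reduce(operator.mul, lengths.values())          # raises the TypeError shown above
print(reduce(operator.mul, lengths.values(), 1))  # an initial value avoids the crash
```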
### How to reproduce
_No response_
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23435 | https://github.com/apache/airflow/pull/23463 | c9b21b8026c595878ee4cc934209fc1fc2ca2396 | 9214018153dd193be6b1147629f73b23d8195cce | "2022-05-03T03:46:12Z" | python | "2022-05-27T04:25:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,425 | ["airflow/models/mappedoperator.py", "tests/models/test_taskinstance.py"] | Mapping over multiple parameters results in 1 task fewer than expected | ### Apache Airflow version
2.3.0 (latest released)
### What happened
While testing the [example](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html#mapping-over-multiple-parameters) given for `Mapping over multiple parameters` I noticed only 5 tasks are being mapped rather than the expected 6.
task example from the doc:
```
@task
def add(x: int, y: int):
return x + y
added_values = add.expand(x=[2, 4, 8], y=[5, 10])
```
The doc mentions:
```
# This results in the add function being called with
# add(x=2, y=5)
# add(x=2, y=10)
# add(x=4, y=5)
# add(x=4, y=10)
# add(x=8, y=5)
# add(x=8, y=10)
```
But when I create a DAG with the example, only 5 tasks are mapped instead of 6:
![image](https://user-images.githubusercontent.com/15913202/166302366-64c23767-2e5f-418d-a58f-fd997a75937e.png)
### What you think should happen instead
A task should be mapped for all 6 possible outcomes, rather than only 5
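For reference, the full cross product described by the documentation can be checked with plain Python (illustrative only, not Airflow code):
```python
from itertools import product

xs, ys = [2, 4, 8], [5, 10]
combinations = list(product(xs, ys))
print(len(combinations))  # 6 expected combinations, versus the 5 mapped tasks above
```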
### How to reproduce
Create a DAG using the example provided [here](Mapping over multiple parameters) and check the number of mapped instances:
![image](https://user-images.githubusercontent.com/15913202/166302419-b10d5c87-9b95-4b30-be27-030929ab1fcd.png)
### Operating System
macOS 11.5.2
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-databricks==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Astronomer
### Deployment details
Localhost instance of Astronomer Runtime 5.0.0
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23425 | https://github.com/apache/airflow/pull/23434 | 0fde90d92ae306f37041831f5514e9421eee676b | 3fb8e0b0b4e8810bedece873949871a94dd7387a | "2022-05-02T18:17:23Z" | python | "2022-05-04T19:02:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,420 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Add a queue DAG run endpoint to REST API | ### Description
Add a POST endpoint to queue a dag run like we currently do [here](https://github.com/apache/airflow/issues/23419).
Url format: `api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/queue`
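A sketch of how a client might call the proposed endpoint once it exists (it is not part of the current REST API; the host, credentials, and run id below are placeholders):
```python
import requests

# Hypothetical call to the proposed queue endpoint.
response = requests.post(
    "http://localhost:8080/api/v1/dags/example_dag/dagRuns/manual__2022-05-01T00:00:00/queue",
    auth=("admin", "admin"),  # assumes basic auth is enabled for the API
)
response.raise_for_status()
print(response.json())
```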
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23420 | https://github.com/apache/airflow/pull/23481 | 1220c1a7a9698cdb15289d7066b29c209aaba6aa | 4485393562ea4151a42f1be47bea11638b236001 | "2022-05-02T17:42:15Z" | python | "2022-05-09T12:25:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,419 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Add a DAG Run clear endpoint to REST API | ### Description
Add a POST endpoint to clear a dag run like we currently do [here](https://github.com/apache/airflow/blob/main/airflow/www/views.py#L2087).
Url format: `api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/clear`
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23419 | https://github.com/apache/airflow/pull/23451 | f352ee63a5d09546a7997ba8f2f8702a1ddb4af7 | b83cc9b5e2c7e2516b0881861bbc0f8589cb531d | "2022-05-02T17:40:44Z" | python | "2022-05-24T03:30:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,415 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/api_connexion/schemas/test_dag_run_schema.py"] | Add more fields to DAG Run API endpoints | ### Description
There are a few fields that would be useful to include in the REST API for getting a DAG run or list of DAG runs:
`data_interval_start`
`data_interval_end`
`last_scheduling_decision`
`run_type` as (backfill, manual and scheduled)
### Use case/motivation
We use this information in the Grid view as part of `tree_data`. If we added these extra fields to the REST APi we could remove all dag run info from tree_data.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23415 | https://github.com/apache/airflow/pull/23440 | 22b49d334ef0008be7bd3d8481b55b8ab5d71c80 | 6178491a117924155963586b246d2bf54be5320f | "2022-05-02T17:26:24Z" | python | "2022-05-03T12:27:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,414 | ["airflow/migrations/utils.py", "airflow/migrations/versions/0110_2_3_2_add_cascade_to_dag_tag_foreignkey.py", "airflow/models/dag.py", "docs/apache-airflow/migrations-ref.rst"] | airflow db clean - Dag cleanup won't run if dag is tagged | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When running `airflow db clean`, if a to-be-cleaned dag is also tagged, a foreign key constraint in dag_tag is violated. Full error:
```
sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) update or delete on table "dag" violates foreign key constraint "dag_tag_dag_id_fkey" on table "dag_tag"
DETAIL: Key (dag_id)=(some-dag-id-here) is still referenced from table "dag_tag".
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-mssql==2.1.3
apache-airflow-providers-oracle==2.2.3
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-samba==3.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23414 | https://github.com/apache/airflow/pull/23444 | e2401329345dcc5effa933b92ca969b8779755e4 | 8ccff9244a6d1a936d8732721373b967e95ec404 | "2022-05-02T17:23:19Z" | python | "2022-05-27T14:28:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,411 | ["airflow/sensors/base.py", "tests/serialization/test_dag_serialization.py", "tests/ti_deps/deps/test_ready_to_reschedule_dep.py"] | PythonSensor is not considering mode='reschedule', instead marking task UP_FOR_RETRY | ### Apache Airflow version
2.3.0 (latest released)
### What happened
A PythonSensor in reschedule mode that works on versions <2.3.0 now marks the task as `UP_FOR_RETRY` instead.
Log says:
```
[2022-05-02, 15:48:23 UTC] {python.py:66} INFO - Poking callable: <function test at 0x7fd56286bc10>
[2022-05-02, 15:48:23 UTC] {taskinstance.py:1853} INFO - Rescheduling task, marking task as UP_FOR_RESCHEDULE
[2022-05-02, 15:48:23 UTC] {local_task_job.py:156} INFO - Task exited with return code 0
[2022-05-02, 15:48:23 UTC] {local_task_job.py:273} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
But the task is instead marked directly as `UP_FOR_RETRY` and then follows `retry_delay` and `retries`.
### What you think should happen instead
It should mark the task as `UP_FOR_RESCHEDULE` and reschedule it according to the `poke_interval`
### How to reproduce
```
from datetime import datetime, timedelta
from airflow import DAG
from airflow.sensors.python import PythonSensor
def test():
return False
default_args = {
"owner": "airflow",
"depends_on_past": False,
"start_date": datetime(2022, 5, 2),
"email_on_failure": False,
"email_on_retry": False,
"retries": 1,
"retry_delay": timedelta(minutes=1),
}
dag = DAG("dag_csdepkrr_development_v001",
default_args=default_args,
catchup=False,
max_active_runs=1,
schedule_interval=None)
t1 = PythonSensor(task_id="PythonSensor",
python_callable=test,
poke_interval=30,
mode='reschedule',
dag=dag)
```
### Operating System
Latest Docker image
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-oracle==2.2.3
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.5.2
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Docker-Compose
### Deployment details
Latest Docker compose from the documentation
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23411 | https://github.com/apache/airflow/pull/23674 | d3b08802861b006fc902f895802f460a72d504b0 | f9e2a3051cd3a5b6fcf33bca4c929d220cf5661e | "2022-05-02T16:07:22Z" | python | "2022-05-17T12:18:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,408 | ["airflow/configuration.py"] | Airflow 2.3.0 does not keep promised backward compatibility regarding database configuration using _CMD Env | ### Apache Airflow version
2.3.0 (latest released)
### What happened
We used to configure the Database using the AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD Environment variable.
Now the config option moved from CORE to DATABASE. However, we intended to keep backward compatibility as stated in the [Release Notes](AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD).
Upon 2.3.0 update however, the _CMD suffixed variables are no longer recognized for database configuration in Core - I think due to a missing entry here:
https://github.com/apache/airflow/blob/8622808aa79531bcaa5099d26fbaf54b4afe931a/airflow/configuration.py#L135
### What you think should happen instead
We should only get a deprecation warning but the Database should be configured correctly.
### How to reproduce
Configure Airflow using an external Database using the AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD environment variable. Notice that Airflow falls back to SQLight.
### Operating System
kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23408 | https://github.com/apache/airflow/pull/23441 | 6178491a117924155963586b246d2bf54be5320f | 0cdd401cda61006a42afba243f1ad813315934d4 | "2022-05-02T14:49:36Z" | python | "2022-05-03T12:48:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,396 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py"] | Airflow kubernetes pod operator fetch xcom fails | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The Airflow kubernetes pod operator fails to load the XCom result.
def _exec_pod_command(self, resp, command: str) -> Optional[str]:
if resp.is_open():
self.log.info('Running command... %s\n', command)
resp.write_stdin(command + '\n')
while resp.is_open():
resp.update(timeout=1)
if resp.peek_stdout():
return resp.read_stdout()
if resp.peek_stderr():
self.log.info("stderr from command: %s", resp.read_stderr())
break
return None
`_exec_pod_command` only reads a single peek of stdout and does not read the full response. This partial content is then loaded with the `json.loads` function, which makes the task fail with an "unterminated string" error.
### What you think should happen instead
It should not read only partial content.
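One possible direction (a sketch only, not the provider's actual fix) is to keep draining stdout until the exec stream closes, using the same WSClient methods as the snippet above:
```python
from typing import Optional


def read_full_stdout(resp, command: str, timeout: int = 1) -> Optional[str]:
    # Send the command once, then keep reading until the stream closes so a
    # large XCom JSON is not truncated after the first chunk.
    resp.write_stdin(command + "\n")
    chunks = []
    while resp.is_open():
        resp.update(timeout=timeout)
        if resp.peek_stdout():
            chunks.append(resp.read_stdout())
        if resp.peek_stderr():
            # Error handling is deliberately simplified in this sketch.
            break
    return "".join(chunks) or None
```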
### How to reproduce
This happens when the returned XCom JSON is large.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23396 | https://github.com/apache/airflow/pull/23490 | b0406f58f0c51db46d2da7c7c84a0b5c3d4f09ae | faae9faae396610086d5ea18d61c356a78a3d365 | "2022-05-02T00:42:02Z" | python | "2022-05-10T15:46:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,361 | ["airflow/models/taskinstance.py", "tests/jobs/test_scheduler_job.py"] | Scheduler crashes with psycopg2.errors.DeadlockDetected exception | ### Apache Airflow version
2.2.5 (latest released)
### What happened
A customer has a dag that generates around 2500 tasks dynamically using a task group. While running the dag, a subset of the tasks (~1000) runs successfully with no issue, ~1500 of the tasks get "skipped", and the dag fails. The same DAG runs successfully in Airflow v2.1.3 with the same Airflow configuration.
While investigating the Airflow processes, we found that both schedulers got restarted with the error below during the DAG execution.
```
[2022-04-27 20:42:44,347] {scheduler_job.py:742} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1256, in _execute_context
self.dialect.do_executemany(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/psycopg2.py", line 912, in do_executemany
cursor.executemany(statement, parameters)
psycopg2.errors.DeadlockDetected: deadlock detected
DETAIL: Process 1646244 waits for ShareLock on transaction 3915993452; blocked by process 1640692.
Process 1640692 waits for ShareLock on transaction 3915992745; blocked by process 1646244.
HINT: See server log for query details.
CONTEXT: while updating tuple (189873,4) in relation "task_instance"
```
This issue seems to be related to #19957
### What you think should happen instead
This issue was observed while running huge number of concurrent task created dynamically by a DAG. Some of the tasks are getting skipped due to restart of scheduler with Deadlock exception.
### How to reproduce
DAG file:
```
from propmix_listings_details import BUCKET, ZIPS_FOLDER, CITIES_ZIP_COL_NAME, DETAILS_DEV_LIMIT, DETAILS_RETRY, DETAILS_CONCURRENCY, get_api_token, get_values, process_listing_ids_based_zip
from airflow.utils.task_group import TaskGroup
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator
from datetime import datetime, timedelta
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email_on_failure': False,
'email_on_retry': False,
'retries': 0,
}
date = '{{ execution_date }}'
email_to = ['example@airflow.com']
# Using a DAG context manager, you don't have to specify the dag property of each task
state = 'Maha'
with DAG('listings_details_generator_{0}'.format(state),
start_date=datetime(2021, 11, 18),
schedule_interval=None,
max_active_runs=1,
concurrency=DETAILS_CONCURRENCY,
dagrun_timeout=timedelta(minutes=10),
catchup=False # enable if you don't want historical dag runs to run
) as dag:
t0 = DummyOperator(task_id='start')
with TaskGroup(group_id='group_1') as tg1:
token = get_api_token()
zip_list = get_values(BUCKET, ZIPS_FOLDER+state, CITIES_ZIP_COL_NAME)
for zip in zip_list[0:DETAILS_DEV_LIMIT]:
details_operator = PythonOperator(
task_id='details_{0}_{1}'.format(state, zip), # task id is generated dynamically
pool='pm_details_pool',
python_callable=process_listing_ids_based_zip,
task_concurrency=40,
retries=3,
retry_delay=timedelta(seconds=10),
op_kwargs={'zip': zip, 'date': date, 'token':token, 'state':state}
)
t0 >> tg1
```
### Operating System
kubernetes cluster running on GCP linux (amd64)
### Versions of Apache Airflow Providers
pip freeze | grep apache-airflow-providers
apache-airflow-providers-amazon==1!3.2.0
apache-airflow-providers-cncf-kubernetes==1!3.0.0
apache-airflow-providers-elasticsearch==1!2.2.0
apache-airflow-providers-ftp==1!2.1.2
apache-airflow-providers-google==1!6.7.0
apache-airflow-providers-http==1!2.1.2
apache-airflow-providers-imap==1!2.2.3
apache-airflow-providers-microsoft-azure==1!3.7.2
apache-airflow-providers-mysql==1!2.2.3
apache-airflow-providers-postgres==1!4.1.0
apache-airflow-providers-redis==1!2.0.4
apache-airflow-providers-slack==1!4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==1!2.1.3
apache-airflow-providers-ssh==1!2.4.3
### Deployment
Astronomer
### Deployment details
Airflow v2.2.5-2
Scheduler count: 2
Scheduler resources: 20AU (2CPU and 7.5GB)
Executor used: Celery
Worker count : 2
Worker resources: 24AU (2.4 CPU and 9GB)
Termination grace period : 2mins
### Anything else
This issue happens in all the dag runs. Some of the tasks get skipped, some succeed, and the scheduler fails with the deadlock exception error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23361 | https://github.com/apache/airflow/pull/25312 | 741c20770230c83a95f74fe7ad7cc9f95329f2cc | be2b53eaaf6fc136db8f3fa3edd797a6c529409a | "2022-04-29T13:05:15Z" | python | "2022-08-09T14:17:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,356 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Tasks set to queued by a backfill get cleared and rescheduled by the kubernetes executor, breaking the backfill | ### Apache Airflow version
2.2.5 (latest released)
### What happened
A backfill launched from the scheduler pod queues tasks as it should, but while they are in the process of starting, the kubernetes executor loop running in the scheduler clears these tasks and reschedules them via this function: https://github.com/apache/airflow/blob/9449a107f092f2f6cfa9c8bbcf5fd62fadfa01be/airflow/executors/kubernetes_executor.py#L444
This causes the backfill to stop queuing any more tasks and to enter an endless loop of waiting for the tasks it has queued to complete.
The way I have mitigated this is to set the `AIRFLOW__KUBERNETES__WORKER_PODS_QUEUED_CHECK_INTERVAL` to 3600, which is not ideal
### What you think should happen instead
The function clear_not_launched_queued_tasks should respect tasks launched by a backfill process and not clear them.
### How to reproduce
Start a backfill with a large number of tasks and watch as they get queued and then subsequently rescheduled by the kubernetes executor running in the scheduler pod.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
```
apache-airflow 2.2.5 py38h578d9bd_0
apache-airflow-providers-cncf-kubernetes 3.0.2 pyhd8ed1ab_0
apache-airflow-providers-docker 2.4.1 pyhd8ed1ab_0
apache-airflow-providers-ftp 2.1.2 pyhd8ed1ab_0
apache-airflow-providers-http 2.1.2 pyhd8ed1ab_0
apache-airflow-providers-imap 2.2.3 pyhd8ed1ab_0
apache-airflow-providers-postgres 3.0.0 pyhd8ed1ab_0
apache-airflow-providers-sqlite 2.1.3 pyhd8ed1ab_0
```
### Deployment
Other 3rd-party Helm chart
### Deployment details
Deployment is running the latest helm chart of Airflow Community Edition
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23356 | https://github.com/apache/airflow/pull/23720 | 49cfb6498eed0acfc336a24fd827b69156d5e5bb | 640d4f9636d3867d66af2478bca15272811329da | "2022-04-29T08:57:18Z" | python | "2022-11-18T01:09:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,343 | ["tests/cluster_policies/__init__.py", "tests/dags_corrupted/test_nonstring_owner.py", "tests/models/test_dagbag.py"] | Silent DAG import error by making owner a list | ### Apache Airflow version
2.2.5 (latest released)
### What happened
If the argument `owner` is unhashable, such as a list, the DAG will fail to be imported, but will also not report an import error. If the DAG is new, it will simply be missing. If this is an update to an existing DAG, the webserver will continue to show the old version.
### What you think should happen instead
A DAG import error should be raised.
### How to reproduce
Set the `owner` argument for a task to be a list. See this minimal reproduction DAG.
```
from datetime import datetime
from airflow.decorators import dag, task
@dag(
schedule_interval="@daily",
start_date=datetime(2021, 1, 1),
catchup=False,
default_args={"owner": ["person"]},
tags=['example'])
def demo_bad_owner():
@task()
def say_hello():
print("hello")
demo_bad_owner()
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
None needed.
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
The worker appears to still be able to execute the tasks when updating an existing DAG. Not sure how that's possible.
Also reproduced on 2.3.0rc2.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23343 | https://github.com/apache/airflow/pull/23359 | 9a0080c20bb2c4a9c0f6ccf1ece79bde895688ac | c4887bcb162aab9f381e49cecc2f212600c493de | "2022-04-28T22:09:14Z" | python | "2022-05-02T10:58:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,327 | ["airflow/providers/google/cloud/operators/gcs.py"] | GCSTransformOperator: provide Jinja templating in source and destination object names | ### Description
Provide an option to receive the source_object and destination_object via Jinja params.
### Use case/motivation
Use case: we need to execute a DAG that fetches a video from a GCS bucket based on a parameter, then transforms it and stores it back.
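Until the operator itself declares these fields as templated, a possible user-side workaround (a sketch only; it assumes the provider's `GCSFileTransformOperator` class and its `source_object`/`destination_object` parameters) is to subclass it and extend `template_fields`:
```python
from airflow.providers.google.cloud.operators.gcs import GCSFileTransformOperator


class TemplatedGCSFileTransformOperator(GCSFileTransformOperator):
    # Declare the object names as templated so values such as
    # "videos/{{ dag_run.conf['video_id'] }}.mp4" are rendered at runtime.
    template_fields = (
        "source_object",
        "destination_object",
        *GCSFileTransformOperator.template_fields,
    )
```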
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23327 | https://github.com/apache/airflow/pull/23328 | 505af06303d8160c71f6a7abe4792746f640083d | c82b3b94660a38360f61d47676ed180a0d32c189 | "2022-04-28T12:27:11Z" | python | "2022-04-28T17:07:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,315 | ["airflow/utils/dot_renderer.py", "tests/utils/test_dot_renderer.py"] | `airflow dags show` Exception: "The node ... should be TaskGroup and is not" | ### Apache Airflow version
main (development)
### What happened
This happens for any dag with a task expansion. For instance:
```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator
with DAG(
dag_id="simple_mapped",
start_date=datetime(1970, 1, 1),
schedule_interval=None,
) as dag:
BashOperator.partial(task_id="hello_world").expand(
bash_command=["echo hello", "echo world"]
)
```
I ran `airflow dags show simple_mapped` and instead of graphviz DOT notation, I saw this:
```
{dagbag.py:507} INFO - Filling up the DagBag from /Users/matt/2022/04/27/dags
Traceback (most recent call last):
File .../bin/airflow", line 8, in <module>
sys.exit(main())
File ... lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File ... lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File ... lib/python3.9/site-packages/airflow/cli/commands/dag_command.py", line 205, in dag_show
dot = render_dag(dag)
File ... lib/python3.9/site-packages/airflow/utils/dot_renderer.py", line 188, in render_dag
_draw_nodes(dag.task_group, dot, states_by_task_id)
File ... lib/python3.9/site-packages/airflow/utils/dot_renderer.py", line 125, in _draw_nodes
_draw_task_group(node, parent_graph, states_by_task_id)
File ... lib/python3.9/site-packages/airflow/utils/dot_renderer.py", line 110, in _draw_task_group
_draw_nodes(child, parent_graph, states_by_task_id)
File ... lib/python3.9/site-packages/airflow/utils/dot_renderer.py", line 121, in _draw_nodes
raise AirflowException(f"The node {node} should be TaskGroup and is not")
airflow.exceptions.AirflowException: The node <Mapped(BashOperator): hello_world> should be TaskGroup and is not
```
### What you think should happen instead
I should see something about the dag structure.
### How to reproduce
run `airflow dags show` for any dag with a task expansion
### Operating System
MacOS, venv
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
```
❯ airflow version
2.3.0.dev0
```
cloned at 4f6fe727a
### Anything else
There's a related card on this board https://github.com/apache/airflow/projects/12
> Support Mapped task groups in the DAG "dot renderer" (i.e. backfill job with --show-dagrun)
But I don't think that functionality is making it into 2.3.0, so maybe we need to add a fix here in the meantime?
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23315 | https://github.com/apache/airflow/pull/23339 | d3028e1e9036a3c67ec4477eee6cd203c12f7f5c | 59e93106d55881163a93dac4a5289df1ba6e1db5 | "2022-04-28T01:49:46Z" | python | "2022-04-30T17:46:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,306 | ["docs/helm-chart/production-guide.rst"] | Helm chart production guide fails to inform resultBackendSecretName parameter should be used | ### What do you see as an issue?
The [production guide](https://airflow.apache.org/docs/helm-chart/stable/production-guide.html) indicates that the code below is all that is necessary for deploying with secrets. But `resultBackendSecretName` should also be filled, or Airflow won't start.
```
data:
metadataSecretName: mydatabase
```
In addition to that, the expected URL is different in both variables.
`resultBackendSecretName` expects a URL that starts with `db+postgresql://`, while `metadataSecretName` expects `postgresql://` or `postgres://` and won't work with `db+postgresql://`. To solve this, it might be necessary to create multiple secrets.
Just in case this is relevant, I'm using CeleryKubernetesExecutor.
### Solving the problem
Docs should warn about the issue above.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23306 | https://github.com/apache/airflow/pull/23307 | 3977e1798d8294ba628b5f330f43702c1a5c79fc | 48915bd149bd8b58853880d63b8c6415688479ec | "2022-04-27T20:34:07Z" | python | "2022-05-04T21:28:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,292 | ["airflow/providers/google/cloud/hooks/cloud_sql.py"] | GCP Composer v1.18.6 and 2.0.10 incompatible with CloudSqlProxyRunner | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
6.6.0 or above
### Apache Airflow version
2.2.3
### Operating System
n/a
### Deployment
Composer
### Deployment details
_No response_
### What happened
Hi! A [user on StackOverflow](https://stackoverflow.com/questions/71975635/gcp-composer-v1-18-6-and-2-0-10-incompatible-with-cloudsqlproxyrunner) and some Cloud SQL engineers at Google noticed that the CloudSQLProxyRunner was broken by [this commit](https://github.com/apache/airflow/pull/22127/files#diff-5992ce7fff93c23c57833df9ef892e11a023494341b80a9fefa8401f91988942L454)
### What you think should happen instead
Ideally DAGs should continue to work as they did before
### How to reproduce
Make a DAG that connects to Cloud SQL using the CloudSQLProxyRunner in Composer 1.18.6 or above using the google providers 6.6.0 or above and see a 404
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23292 | https://github.com/apache/airflow/pull/23299 | 0c9c1cf94acc6fb315a9bc6f5bf1fbd4e4b4c923 | 1f3260354988b304cf31d5e1d945ce91798bed48 | "2022-04-27T17:34:37Z" | python | "2022-04-28T13:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,285 | ["airflow/models/taskmixin.py", "airflow/utils/edgemodifier.py", "airflow/utils/task_group.py", "tests/utils/test_edgemodifier.py"] | Cycle incorrectly detected in DAGs when using Labels within Task Groups | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
When attempting to create a DAG containing Task Groups in which there are Labels between nodes, the DAG fails to import due to cycle detection.
Consider this DAG:
```python
from pendulum import datetime
from airflow.decorators import dag, task, task_group
from airflow.utils.edgemodifier import Label
@task
def begin():
...
@task
def end():
...
@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_groups_with_edge_labels():
@task_group
def group():
begin() >> Label("label") >> end()
group()
_ = task_groups_with_edge_labels()
```
When attempting to import the DAG, this error message is displayed:
<img width="1395" alt="image" src="https://user-images.githubusercontent.com/48934154/165566299-3dd65cff-5e36-47d3-a243-7bc33d4344d6.png">
This also occurs on the `main` branch.
### What you think should happen instead
Users should be able to specify Labels between tasks within a Task Group.
### How to reproduce
- Use the DAG mentioned above and try to import into an Airflow environment
- Or, create a simple unit test of the following and execute said test.
```python
def test_cycle_task_group_with_edge_labels(self):
from airflow.models.baseoperator import chain
from airflow.utils.task_group import TaskGroup
from airflow.utils.edgemodifier import Label
dag = DAG('dag', start_date=DEFAULT_DATE, default_args={'owner': 'owner1'})
with dag:
with TaskGroup(group_id="task_group") as task_group:
op1 = EmptyOperator(task_id='A')
op2 = EmptyOperator(task_id='B')
op1 >> Label("label") >> op2
assert not check_cycle(dag)
```
A `AirflowDagCycleException` should be thrown:
```
tests/utils/test_dag_cycle.py::TestCycleTester::test_cycle_task_group_with_edge_labels FAILED [100%]
=============================================================================================== FAILURES ===============================================================================================
________________________________________________________________________ TestCycleTester.test_cycle_task_group_with_edge_labels ________________________________________________________________________
self = <tests.utils.test_dag_cycle.TestCycleTester testMethod=test_cycle_task_group_with_edge_labels>
def test_cycle_task_group_with_edge_labels(self):
from airflow.models.baseoperator import chain
from airflow.utils.task_group import TaskGroup
from airflow.utils.edgemodifier import Label
dag = DAG('dag', start_date=DEFAULT_DATE, default_args={'owner': 'owner1'})
with dag:
with TaskGroup(group_id="task_group") as task_group:
op1 = EmptyOperator(task_id='A')
op2 = EmptyOperator(task_id='B')
op1 >> Label("label") >> op2
> assert not check_cycle(dag)
tests/utils/test_dag_cycle.py:168:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/utils/dag_cycle_tester.py:76: in check_cycle
child_to_check = _check_adjacent_tasks(current_task_id, task)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
task_id = 'task_group.B', current_task = <Task(EmptyOperator): task_group.B>
def _check_adjacent_tasks(task_id, current_task):
"""Returns first untraversed child task, else None if all tasks traversed."""
for adjacent_task in current_task.get_direct_relative_ids():
if visited[adjacent_task] == CYCLE_IN_PROGRESS:
msg = f"Cycle detected in DAG. Faulty task: {task_id}"
> raise AirflowDagCycleException(msg)
E airflow.exceptions.AirflowDagCycleException: Cycle detected in DAG. Faulty task: task_group.B
airflow/utils/dag_cycle_tester.py:62: AirflowDagCycleException
---------------------------------------------------------------------------------------- Captured stdout setup -----------------------------------------------------------------------------------------
========================= AIRFLOW ==========================
Home of the user: /root
Airflow home /root/airflow
Skipping initializing of the DB as it was initialized already.
You can re-initialize the database by adding --with-db-init flag when running tests.
======================================================================================= short test summary info ========================================================================================
FAILED tests/utils/test_dag_cycle.py::TestCycleTester::test_cycle_task_group_with_edge_labels - airflow.exceptions.AirflowDagCycleException: Cycle detected in DAG. Faulty task: task_group.B
==================================================================================== 1 failed, 2 warnings in 1.08s =====================================================================================
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
This issue also occurs on the `main` branch using Breeze.
### Anything else
Possibly related to #21404
When the Label is removed, no cycle is detected.
```python
from pendulum import datetime
from airflow.decorators import dag, task, task_group
from airflow.utils.edgemodifier import Label
@task
def begin():
...
@task
def end():
...
@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_groups_with_edge_labels():
@task_group
def group():
begin() >> end()
group()
_ = task_groups_with_edge_labels()
```
<img width="1437" alt="image" src="https://user-images.githubusercontent.com/48934154/165566908-a521d685-a032-482e-9e6b-ef85f0743e64.png">
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23285 | https://github.com/apache/airflow/pull/23291 | 726b27f86cf964924e5ee7b29a30aefe24dac45a | 3182303ce50bda6d5d27a6ef4e19450fb4e47eea | "2022-04-27T16:28:04Z" | python | "2022-04-27T18:12:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,284 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_schema.py", "tests/api_connexion/endpoints/test_task_endpoint.py", "tests/api_connexion/schemas/test_task_schema.py"] | Get DAG tasks in REST API does not include is_mapped | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
The rest API endpoint for get [/dags/{dag_id}/tasks](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_tasks) does not include `is_mapped`.
Example: `consumer` is mapped but I have no way to tell that from the API response:
<img width="306" alt="Screen Shot 2022-04-27 at 11 35 54 AM" src="https://user-images.githubusercontent.com/4600967/165556420-f8ade6e6-e904-4be0-a759-5281ddc04cba.png">
<img width="672" alt="Screen Shot 2022-04-27 at 11 35 25 AM" src="https://user-images.githubusercontent.com/4600967/165556310-742ec23d-f5a8-4cae-bea1-d00fd6c6916f.png">
### What you think should happen instead
Someone should be able to know if a task from get /tasks is mapped or not.
### How to reproduce
Call get /tasks on a dag with mapped tasks and see that there is no way to determine whether a task is mapped from the response body.
### Operating System
Mac OSX
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23284 | https://github.com/apache/airflow/pull/23319 | 98ec8c6990347fda60cbad33db915dc21497b1f0 | f3d80c2a0dce93b908d7c9de30c9cba673eb20d5 | "2022-04-27T15:37:09Z" | python | "2022-04-28T12:54:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,272 | ["breeze-legacy"] | Breeze-legacy missing flag_build_docker_images | ### Apache Airflow version
main (development)
### What happened
Running `./breeze-legacy` warns about a potential issue:
```shell
❯ ./breeze-legacy --help
Good version of docker 20.10.13.
./breeze-legacy: line 1434: breeze::flag_build_docker_images: command not found
...
```
And sure enough, `flag_build_docker_images` is referenced but not defined anywhere:
```shell
❯ ag flag_build_docker_images
breeze-legacy
1433:$(breeze::flag_build_docker_images)
```
And I believe that completely breaks `breeze-legacy`:
```shell
❯ ./breeze-legacy
Good version of docker 20.10.13.
ERROR: Allowed platform: [ ]. Passed: 'linux/amd64'
Switch to supported value with --platform flag.
ERROR: The previous step completed with error. Please take a look at output above
```
### What you think should happen instead
Breeze-legacy should still work. Bash functions should be defined if they are still in use.
### How to reproduce
Pull `main` branch.
Run `./breeze-legacy`.
### Operating System
macOS 11.6.4 Big Sur (Intel)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23272 | https://github.com/apache/airflow/pull/23276 | 1e87f51d163a8db7821d3a146c358879aff7ec0e | aee40f82ccec7651abe388d6a2cbac35f5f4c895 | "2022-04-26T19:20:12Z" | python | "2022-04-26T22:43:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,266 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | wasb hook not using AZURE_CLIENT_ID environment variable as client_id for ManagedIdentityCredential | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==3.8.0
### Apache Airflow version
2.2.4
### Operating System
Ubuntu 20.04.2 LTS
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
I have deployed Airflow using the official Helm chart on an AKS cluster.
### What happened
I have deployed apache airflow using the official helm chart on an AKS cluster.
The pod has multiple user-assigned identities assigned to it.
I have set the AZURE_CLIENT_ID environment variable to the client id that I want to use for authentication.
_Airflow connection:_
wasb_default = '{"login":"storageaccountname"}'
**Env**
AZURE_CLIENT_ID="user-managed-identity-client-id"
_**code**_
```
# suppress azure.core logs
import logging
logger = logging.getLogger("azure.core")
logger.setLevel(logging.ERROR)
from airflow.providers.microsoft.azure.hooks.wasb import WasbHook
conn_id = 'wasb-default'
hook = WasbHook(conn_id)
for blob_name in hook.get_blobs_list("testcontainer"):
print(blob_name)
```
**error**
```
azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: failed to get service principal token, error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Multiple user assigned identities exist, please specify the clientId / resourceId of the identity in the token request"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com
```
**trace**
```
[2022-04-26 16:37:23,446] {environment.py:103} WARNING - Incomplete environment configuration. These variables are set: AZURE_CLIENT_ID
[2022-04-26 16:37:23,446] {managed_identity.py:89} INFO - ManagedIdentityCredential will use IMDS
[2022-04-26 16:37:23,605] {chained.py:84} INFO - DefaultAzureCredential acquired a token from ManagedIdentityCredential
#Note: azure key vault azure.secrets.key_vault.AzureKeyVaultBackend uses DefaultAzureCredential to get the connection
[2022-04-26 16:37:23,687] {base.py:68} INFO - Using connection ID 'wasb-default' for task execution.
[2022-04-26 16:37:23,687] {managed_identity.py:89} INFO - ManagedIdentityCredential will use IMDS
[2022-04-26 16:37:23,688] {wasb.py:155} INFO - Using managed identity as credential
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_universal.py", line 561, in deserialize_from_text
return json.loads(data_as_str)
File "/usr/local/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/managed_identity_client.py", line 51, in _process_response
content = ContentDecodePolicy.deserialize_from_text(
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_universal.py", line 563, in deserialize_from_text
raise DecodeError(message="JSON is invalid: {}".format(err), response=response, error=err)
azure.core.exceptions.DecodeError: JSON is invalid: Expecting value: line 1 column 1 (char 0)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_credentials/imds.py", line 97, in _request_token
token = self._client.request_token(*scopes, headers={"Metadata": "true"})
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/managed_identity_client.py", line 126, in request_token
token = self._process_response(response, request_time)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/managed_identity_client.py", line 59, in _process_response
six.raise_from(ClientAuthenticationError(message=message, response=response.http_response), ex)
File "<string>", line 3, in raise_from
azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: failed to get service principal token, error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Multiple user assigned identities exist, please specify the clientId / resourceId of the identity in the token request"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/test.py", line 7, in <module>
for blob_name in hook.get_blobs_list("test_container"):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/microsoft/azure/hooks/wasb.py", line 231, in get_blobs_list
for blob in blobs:
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/paging.py", line 129, in __next__
return next(self._page_iterator)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/paging.py", line 76, in __next__
self._response = self._get_next(self.continuation_token)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_list_blobs_helper.py", line 79, in _get_next_cb
process_storage_error(error)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_shared/response_handlers.py", line 89, in process_storage_error
raise storage_error
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_list_blobs_helper.py", line 72, in _get_next_cb
return self._command(
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_generated/operations/_container_operations.py", line 1572, in list_blob_hierarchy_segment
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 211, in run
return first_node.send(pipeline_request) # type: ignore
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
[Previous line repeated 2 more times]
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_redirect.py", line 158, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_shared/policies.py", line 515, in send
raise err
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_shared/policies.py", line 489, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_authentication.py", line 117, in send
self.on_request(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_authentication.py", line 94, in on_request
self._token = self._credential.get_token(*self._scopes)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/decorators.py", line 32, in wrapper
token = fn(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_credentials/managed_identity.py", line 123, in get_token
return self._credential.get_token(*scopes, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/get_token_mixin.py", line 76, in get_token
token = self._request_token(*scopes, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_credentials/imds.py", line 111, in _request_token
six.raise_from(ClientAuthenticationError(message=ex.message, response=ex.response), ex)
File "<string>", line 3, in raise_from
azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: failed to get service principal token, error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Multiple user assigned identities exist, please specify the clientId / resourceId of the identity in the token request"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com
```
### What you think should happen instead
The wasb hook should be able to authenticate using the user-assigned identity specified in the `AZURE_CLIENT_ID` environment variable and list the blobs.
### How to reproduce
In an environment with multiple user assigned identity.
```
import logging
logger = logging.getLogger("azure.core")
logger.setLevel(logging.ERROR)
from airflow.providers.microsoft.azure.hooks.wasb import WasbHook
conn_id = 'wasb-default'
hook = WasbHook(conn_id)
for blob_name in hook.get_blobs_list("testcontainer"):
print(blob_name)
```
### Anything else
The issue is caused by not passing `client_id` to `ManagedIdentityCredential` in
[azure.hooks.wasb.WasbHook](https://github.com/apache/airflow/blob/1d875a45994540adef23ad6f638d78c9945ef873/airflow/providers/microsoft/azure/hooks/wasb.py#L153-L160)
```
if not credential:
credential = ManagedIdentityCredential()
self.log.info("Using managed identity as credential")
return BlobServiceClient(
account_url=f"https://{conn.login}.blob.core.windows.net/",
credential=credential,
**extra,
)
```
Solution 1:
Instead of `ManagedIdentityCredential`, use [azure.identity.DefaultAzureCredential](https://github.com/Azure/azure-sdk-for-python/blob/aa35d07aebf062393f14d147da54f0342e6b94a8/sdk/identity/azure-identity/azure/identity/_credentials/default.py#L32).
Solution 2:
Pass the client ID from the environment, [as done in DefaultAzureCredential](https://github.com/Azure/azure-sdk-for-python/blob/aa35d07aebf062393f14d147da54f0342e6b94a8/sdk/identity/azure-identity/azure/identity/_credentials/default.py#L104-L106):
`ManagedIdentityCredential(client_id=os.environ.get("AZURE_CLIENT_ID"))`
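A minimal sketch of what solution 2 could look like; `ManagedIdentityCredential(client_id=...)` is part of the azure-identity API, while the helper name below is only illustrative:
```python
import os
from typing import Optional

from azure.identity import ManagedIdentityCredential


def build_managed_identity_credential() -> ManagedIdentityCredential:
    """Sketch: honour AZURE_CLIENT_ID when several user-assigned identities are attached."""
    client_id: Optional[str] = os.environ.get("AZURE_CLIENT_ID")
    # client_id=None keeps today's behaviour (system-assigned or a single identity).
    return ManagedIdentityCredential(client_id=client_id)
```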
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23266 | https://github.com/apache/airflow/pull/23394 | fcfaa8307ac410283f1270a0df9e557570e5ffd3 | 8f181c10344bd319ac5f6aeb102ee3c06e1f1637 | "2022-04-26T17:23:24Z" | python | "2022-05-08T21:12:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,249 | ["airflow/cli/commands/task_command.py", "tests/cli/commands/test_task_command.py"] | Pool option does not work in backfill command | ### Apache Airflow version
2.2.4
### What happened
Discussion Ref: https://github.com/apache/airflow/discussions/22201
I added the pool option to the backfill command, but it only uses default_pool.
The log appears as below, but if you check the Task Instance Details / Pools list UI, default_pool is used.
```
--------------------------------------------------------------------------------
[2022-03-12, 20:03:44 KST] {taskinstance.py:1244} INFO - Starting attempt 1 of 1
[2022-03-12, 20:03:44 KST] {taskinstance.py:1245} INFO -
--------------------------------------------------------------------------------
[2022-03-12, 20:03:44 KST] {taskinstance.py:1264} INFO - Executing <Task(BashOperator): runme_0> on 2022-03-05 00:00:00+00:00
[2022-03-12, 20:03:44 KST] {standard_task_runner.py:52} INFO - Started process 555 to run task
[2022-03-12, 20:03:45 KST] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'example_bash_operator', 'runme_0', 'backfill__2022-03-05T00:00:00+00:00', '--job-id', '127', '--pool', 'backfill_pool', '--raw', '--subdir', '/home/***/.local/lib/python3.8/site-packages/***/example_dags/example_bash_operator.py', '--cfg-path', '/tmp/tmprhjr0bc_', '--error-file', '/tmp/tmpkew9ufim']
[2022-03-12, 20:03:45 KST] {standard_task_runner.py:77} INFO - Job 127: Subtask runme_0
[2022-03-12, 20:03:45 KST] {logging_mixin.py:109} INFO - Running <TaskInstance: example_bash_operator.runme_0 backfill__2022-03-05T00:00:00+00:00 [running]> on host 56d55382c860
[2022-03-12, 20:03:45 KST] {taskinstance.py:1429} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=***
AIRFLOW_CTX_DAG_ID=example_bash_operator
AIRFLOW_CTX_TASK_ID=runme_0
AIRFLOW_CTX_EXECUTION_DATE=2022-03-05T00:00:00+00:00
AIRFLOW_CTX_DAG_RUN_ID=backfill__2022-03-05T00:00:00+00:00
[2022-03-12, 20:03:45 KST] {subprocess.py:62} INFO - Tmp dir root location:
/tmp
[2022-03-12, 20:03:45 KST] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'echo "example_bash_operator__runme_0__20220305" && sleep 1']
[2022-03-12, 20:03:45 KST] {subprocess.py:85} INFO - Output:
[2022-03-12, 20:03:46 KST] {subprocess.py:89} INFO - example_bash_operator__runme_0__20220305
[2022-03-12, 20:03:47 KST] {subprocess.py:93} INFO - Command exited with return code 0
[2022-03-12, 20:03:47 KST] {taskinstance.py:1272} INFO - Marking task as SUCCESS. dag_id=example_bash_operator, task_id=runme_0, execution_date=20220305T000000, start_date=20220312T110344, end_date=20220312T110347
[2022-03-12, 20:03:47 KST] {local_task_job.py:154} INFO - Task exited with return code 0
[2022-03-12, 20:03:47 KST] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
The backfill task instance should use a slot in the backfill_pool.
### How to reproduce
1. Create a backfill_pool in UI.
2. Run the backfill command on the example dag.
```
$ docker exec -it airflow_airflow-scheduler_1 /bin/bash
$ airflow dags backfill example_bash_operator -s 2022-03-05 -e 2022-03-06 \
--pool backfill_pool --reset-dagruns -y
[2022-03-12 11:03:52,720] {backfill_job.py:386} INFO - [backfill progress] | finished run 0 of 2 | tasks waiting: 2 | succeeded: 8 | running: 2 | failed: 0 | skipped: 2 | deadlocked: 0 | not ready: 2
[2022-03-12 11:03:57,574] {dagrun.py:545} INFO - Marking run <DagRun example_bash_operator @ 2022-03-05T00:00:00+00:00: backfill__2022-03-05T00:00:00+00:00, externally triggered: False> successful
[2022-03-12 11:03:57,575] {dagrun.py:590} INFO - DagRun Finished: dag_id=example_bash_operator, execution_date=2022-03-05T00:00:00+00:00, run_id=backfill__2022-03-05T00:00:00+00:00, run_start_date=2022-03-12 11:03:37.530158+00:00, run_end_date=2022-03-12 11:03:57.575869+00:00, run_duration=20.045711, state=success, external_trigger=False, run_type=backfill, data_interval_start=2022-03-05T00:00:00+00:00, data_interval_end=2022-03-06 00:00:00+00:00, dag_hash=None
[2022-03-12 11:03:57,582] {dagrun.py:545} INFO - Marking run <DagRun example_bash_operator @ 2022-03-06T00:00:00+00:00: backfill__2022-03-06T00:00:00+00:00, externally triggered: False> successful
[2022-03-12 11:03:57,583] {dagrun.py:590} INFO - DagRun Finished: dag_id=example_bash_operator, execution_date=2022-03-06T00:00:00+00:00, run_id=backfill__2022-03-06T00:00:00+00:00, run_start_date=2022-03-12 11:03:37.598927+00:00, run_end_date=2022-03-12 11:03:57.583295+00:00, run_duration=19.984368, state=success, external_trigger=False, run_type=backfill, data_interval_start=2022-03-06 00:00:00+00:00, data_interval_end=2022-03-07 00:00:00+00:00, dag_hash=None
[2022-03-12 11:03:57,584] {backfill_job.py:386} INFO - [backfill progress] | finished run 2 of 2 | tasks waiting: 0 | succeeded: 10 | running: 0 | failed: 0 | skipped: 4 | deadlocked: 0 | not ready: 0
[2022-03-12 11:03:57,589] {backfill_job.py:851} INFO - Backfill done. Exiting.
```
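One way to confirm which pool the backfilled task instances actually used is to query the metadata database. This is a sketch assuming ORM access from inside the scheduler container and the example DAG above:
```python
from airflow.models import TaskInstance
from airflow.utils.session import create_session

with create_session() as session:
    rows = (
        session.query(TaskInstance.task_id, TaskInstance.run_id, TaskInstance.pool)
        .filter(TaskInstance.dag_id == "example_bash_operator")
        .filter(TaskInstance.run_id.like("backfill__%"))
        .all()
    )
    for task_id, run_id, pool in rows:
        print(task_id, run_id, pool)  # expected: backfill_pool, observed: default_pool
```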
### Operating System
MacOS BigSur, docker-compose
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Follow the guide - [Running Airflow in Docker]. Use CeleryExecutor.
https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23249 | https://github.com/apache/airflow/pull/23258 | 511d0ee256b819690ccf0f6b30d12340b1dd7f0a | 3970ea386d5e0a371143ad1e69b897fd1262842d | "2022-04-26T10:48:39Z" | python | "2022-04-30T19:11:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,246 | ["airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Add api call for changing task instance status | ### Description
In the UI you can change the status of a task instance, but there is no API call available for the same feature.
It would be nice to have an API call for this as well.
### Use case/motivation
I found a solution on Stack Overflow, [How to add manual tasks in an Apache Airflow Dag]. There is a suggestion to set a task to failed and manually change it to success when the manual work is done.
Our project has many manual tasks. This suggestion seems like a good option, but there is no API call yet to use instead of changing each status manually. I would like to use an API call for this instead.
You can already change the status of a DAG run, so it also seems natural to have something similar for task instances.
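A sketch of what the requested call could look like, modeled on the DAG-run pattern; the endpoint path and payload below are hypothetical, not an API that exists at the time of this request:
```python
import requests

# Hypothetical endpoint -- this only illustrates the shape of the API being requested.
BASE = "http://localhost:8080/api/v1"
url = (
    f"{BASE}/dags/my_manual_dag/dagRuns/manual__2022-04-26T00:00:00+00:00"
    "/taskInstances/manual_approval_step"
)
resp = requests.patch(url, json={"new_state": "success"}, auth=("admin", "admin"))
resp.raise_for_status()
print(resp.json())
```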
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23246 | https://github.com/apache/airflow/pull/26165 | 5c37b503f118b8ad2585dff9949dd8fdb96689ed | 1e6f1d54c54e5dc50078216e23ba01560ebb133c | "2022-04-26T09:17:52Z" | python | "2022-10-31T05:31:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,227 | ["airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/schemas/test_task_instance_schema.py"] | Ability to clear a specific DAG Run's task instances via REST APIs | ### Discussed in https://github.com/apache/airflow/discussions/23220
<div type='discussions-op-text'>
<sup>Originally posted by **yashk97** April 25, 2022</sup>
Hi,
My use case is that when multiple DAG runs fail on some task (not the same one in all of them), I want to individually re-trigger each of these DAG runs. Currently, I have to rely on the Airflow UI (attached screenshots), where I select the failed task and clear its state (along with the downstream tasks) to re-run from that point. While this works, it becomes tedious if the number of failed DAG runs is huge.
I checked the REST API Documentation and came across the clear Task Instances API with the following URL: /api/v1/dags/{dag_id}/clearTaskInstances
However, it filters task instances of the specified DAG in a given date range.
I was wondering if, for a specified DAG Run, we can clear a task along with its downstream tasks irrespective of the states of the tasks or the DAG run through REST API.
This will give us more granular control over re-running DAGs from the point of failure.
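For reference, a sketch of how the existing date-range based endpoint is called today; the field names are my reading of the stable REST API reference and should be double-checked there:
```python
import requests

BASE = "http://localhost:8080/api/v1"
payload = {
    "dry_run": True,               # preview which task instances would be cleared
    "only_failed": True,
    "start_date": "2022-04-20T00:00:00Z",
    "end_date": "2022-04-21T00:00:00Z",
}
resp = requests.post(
    f"{BASE}/dags/my_dag/clearTaskInstances", json=payload, auth=("admin", "admin")
)
resp.raise_for_status()
print(resp.json())
```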
![image](https://user-images.githubusercontent.com/25115516/165099593-46ce449a-d303-49ee-9edb-fc5d524f4517.png)
![image](https://user-images.githubusercontent.com/25115516/165099683-4ba7f438-3660-4a16-a66c-2017aee5042f.png)
</div> | https://github.com/apache/airflow/issues/23227 | https://github.com/apache/airflow/pull/23516 | 3221ed5968423ea7a0dc7e1a4b51084351c2d56b | eceb4cc5888a7cf86a9250fff001fede2d6aba0f | "2022-04-25T18:40:24Z" | python | "2022-08-05T17:27:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,174 | ["CONTRIBUTORS_QUICK_START.rst", "CONTRIBUTORS_QUICK_START_CODESPACES.rst", "CONTRIBUTORS_QUICK_START_GITPOD.rst", "CONTRIBUTORS_QUICK_START_PYCHARM.rst", "CONTRIBUTORS_QUICK_START_VSCODE.rst"] | Some links in contributor's quickstart table of contents are broken | ### What do you see as an issue?
In `CONTRIBUTORS_QUICK_START.rst`, the links in the table of contents that direct users to parts of the guide that are hidden by the drop down don't work if the drop down isn't expanded. For example, clicking "[Setup Airflow with Breeze](https://github.com/apache/airflow/blob/main/CONTRIBUTORS_QUICK_START.rst#setup-airflow-with-breeze)" does nothing until you open the appropriate drop down `Setup and develop using <PyCharm, Visual Studio Code, Gitpod>`
### Solving the problem
Instead of having the entire documentation block under the `Setup and develop using {method}` drop-downs, there could be drop-downs under each section so that the guide remains concise without sacrificing the functionality of the table of contents.
### Anything else
I'm happy to submit a PR eventually, but I might not be able to get around to it for a bit if anyone else wants to handle it real quick.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23174 | https://github.com/apache/airflow/pull/23762 | e08b59da48743ff0d0ce145d1bc06bb7b5f86e68 | 1bf6dded9a5dcc22238b8943028b08741e36dfe5 | "2022-04-22T17:29:05Z" | python | "2022-05-24T17:03:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,171 | ["airflow/api/common/mark_tasks.py", "airflow/models/dag.py", "tests/models/test_dag.py", "tests/test_utils/mapping.py"] | Mark Success on a mapped task, reruns other failing mapped tasks | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Have a DAG with mapped tasks. Mark at least two mapped tasks as failed. Mark one of the failures as success. See the other task(s) switch to `no_status` and rerun.
![Apr-22-2022 10-21-41](https://user-images.githubusercontent.com/4600967/164734320-bafe267d-6ef0-46fb-b13f-6d85f9ef86ba.gif)
### What you think should happen instead
Marking a single mapped task as a success probably shouldn't affect other failed mapped tasks.
### How to reproduce
_No response_
### Operating System
OSX
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23171 | https://github.com/apache/airflow/pull/23177 | d262a72ca7ab75df336b93cefa338e7ba3f90ebb | 26a9ec65816e3ec7542d63ab4a2a494931a06c9b | "2022-04-22T14:25:54Z" | python | "2022-04-25T09:03:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,168 | ["airflow/api_connexion/schemas/connection_schema.py", "tests/api_connexion/endpoints/test_connection_endpoint.py"] | Getting error "Extra Field may not be null" while hitting create connection api with extra=null | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Getting error "Extra Field may not be null" while hitting create connection api with extra=null
```
{
"detail": "{'extra': ['Field may not be null.']}",
"status": 400,
"title": "Bad Request",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
### What you think should happen instead
I should be able to create the connection through the API.
### How to reproduce
Steps to reproduce:
1. Hit the connections endpoint with the JSON body below.
API endpoint - api/v1/connections
HTTP method - POST
JSON body -
```
{
"connection_id": "string6",
"conn_type": "string",
"host": "string",
"login": null,
"schema": null,
"port": null,
"password": "pa$$word",
"extra":null
}
```
### Operating System
debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro dev start
### Anything else
As per the spec below, I am assuming `extra` may be null.
```
Connection:
description: Full representation of the connection.
allOf:
- $ref: '#/components/schemas/ConnectionCollectionItem'
- type: object
properties:
password:
type: string
format: password
writeOnly: true
description: Password of the connection.
extra:
type: string
nullable: true
description: Other values that cannot be put into another field, e.g. RSA keys.
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23168 | https://github.com/apache/airflow/pull/23183 | b33cd10941dd10d461023df5c2d3014f5dcbb7ac | b45240ad21ca750106931ba2b882b3238ef2b37d | "2022-04-22T10:48:23Z" | python | "2022-04-25T14:55:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,162 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | GCSToGCSOperator ignores replace parameter when there is no wildcard | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
Latest
### Apache Airflow version
2.2.5 (latest released)
### Operating System
MacOS 12.2.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
I ran the same DAG twice with `replace=False`; in the second run, the files are overwritten anyway.
`source_object` does not include a wildcard.
I am not sure whether this incorrect behavior also happens in the "with wildcard" scenario, but looking at the source code
https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/gcs_to_gcs.py
line 346 (inside `_copy_source_with_wildcard`) has `if not self.replace:`,
but `_copy_source_without_wildcard` does not check `self.replace` at all.
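A sketch of the kind of guard being described for the no-wildcard path; the `GCSHook.exists()`/`rewrite()` usage and the surrounding names are assumptions, not the operator's actual implementation:
```python
def copy_single_object_if_needed(hook, source_bucket, source_object,
                                 destination_bucket, destination_object, replace):
    """Skip the copy when replace=False and the destination object already exists."""
    if not replace and hook.exists(destination_bucket, destination_object):
        return  # honour replace=False instead of silently overwriting
    hook.rewrite(source_bucket, source_object, destination_bucket, destination_object)
```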
### What you think should happen instead
When 'replace = False', the second run should skip copying files since they are already there.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23162 | https://github.com/apache/airflow/pull/23340 | 03718194f4fa509f16fcaf3d41ff186dbae5d427 | 82c244f9c7f24735ee952951bcb5add45422d186 | "2022-04-22T06:45:06Z" | python | "2022-05-08T19:46:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,159 | ["airflow/providers/docker/operators/docker.py", "airflow/providers/docker/operators/docker_swarm.py"] | docker container still running while dag run failed | ### Apache Airflow version
2.1.4
### What happened
I have an operator running with Docker.
When the DAG run fails, docker.py tries to remove the container, but the removal fails with the following error:
```
[2022-04-20 00:03:50,381] {taskinstance.py:1463} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 301, in _run_image_with_mounts
for line in lines:
File "/home/airflow/.local/lib/python3.8/site-packages/docker/types/daemon.py", line 32, in __next__
return next(self._stream)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 412, in <genexpr>
gen = (data for (_, data) in gen)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/socket.py", line 92, in frames_iter_no_tty
(stream, n) = next_frame_header(socket)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/socket.py", line 64, in next_frame_header
data = read_exactly(socket, 8)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/socket.py", line 49, in read_exactly
next_data = read(socket, n - len(data))
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/socket.py", line 29, in read
select.select([socket], [], [])
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1238, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/airflow/.local/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localhost/v1.35/containers/de4cd812f8b0dcc448d591d1bd28fa736b1712237c8c8848919be512938bd515?v=False&link=False&force=False
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1165, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1283, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1313, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/airflow/dags/operators/byx_base_operator.py", line 611, in execute
raise e
File "/usr/local/airflow/dags/operators/byx_base_operator.py", line 591, in execute
self.execute_job(context)
File "/usr/local/airflow/dags/operators/byx_datax_operator.py", line 93, in execute_job
result = call_datax.execute(context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 343, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 265, in _run_image
return self._run_image_with_mounts(self.mounts, add_tmp_variable=False)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 317, in _run_image_with_mounts
self.cli.remove_container(self.container['Id'])
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/container.py", line 1010, in remove_container
self._raise_for_status(res)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 409 Client Error for http+docker://localhost/v1.35/containers/de4cd812f8b0dcc448d591d1bd28fa736b1712237c8c8848919be512938bd515?v=False&link=False&force=False: Conflict ("You cannot remove a running container de4cd812f8b0dcc448d591d1bd28fa736b1712237c8c8848919be512938bd515. Stop the container before attempting removal or force remove")
```
### What you think should happen instead
The container should be removed successfully when the DAG run fails.
### How to reproduce
Step 1: Create a DAG that executes a DockerOperator task (a minimal sketch is shown below).
Step 2: Trigger the DAG.
Step 3: Mark the DAG run as failed to simulate a failure; the "remove container failed" error will appear and the Docker container will still be running.
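A minimal reproduction sketch of step 1, assuming a Docker daemon reachable with the provider defaults; the image name and sleep length are arbitrary choices:
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator

with DAG(
    dag_id="docker_sigterm_repro",
    start_date=datetime(2022, 4, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    long_running = DockerOperator(
        task_id="long_running_container",
        image="alpine:3.15",
        command="sleep 600",  # long enough to mark the run failed while the container runs
    )
```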
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23159 | https://github.com/apache/airflow/pull/23160 | 5d5d62e41e93fe9845c96ab894047422761023d8 | 237d2225d6b92a5012a025ece93cd062382470ed | "2022-04-22T00:15:38Z" | python | "2022-07-02T15:44:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,146 | ["airflow/providers/google/cloud/sensors/bigquery_dts.py", "tests/providers/google/cloud/sensors/test_bigquery_dts.py"] | location is missing in BigQueryDataTransferServiceTransferRunSensor | ### Apache Airflow version
2.2.3
### What happened
The location is missing in [BigQueryDataTransferServiceTransferRunSensor](airflow/providers/google/cloud/sensors/bigquery_dts.py).
This forces us to execute data transfers only in the US, even though a location can be provided when starting a transfer.
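A sketch of the usage being asked for; the `location` argument is hypothetical (it does not exist on the sensor at the time of this report), and the other values are placeholders:
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.sensors.bigquery_dts import (
    BigQueryDataTransferServiceTransferRunSensor,
)

with DAG(dag_id="dts_location_sketch", start_date=datetime(2022, 4, 1), schedule_interval=None) as dag:
    wait_for_run = BigQueryDataTransferServiceTransferRunSensor(
        task_id="wait_for_transfer_run",
        transfer_config_id="my-transfer-config",  # placeholder
        run_id="my-transfer-run",                 # placeholder
        project_id="my-gcp-project",              # placeholder
        location="europe-west1",                  # hypothetical: the missing argument
    )
```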
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Google Cloud Composer
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23146 | https://github.com/apache/airflow/pull/23166 | 692a0899430f86d160577c3dd0f52644c4ffad37 | 967140e6c3bd0f359393e018bf27b7f2310a2fd9 | "2022-04-21T12:32:26Z" | python | "2022-04-25T21:05:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,145 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Task stuck in "scheduled" when running in backfill job | ### Apache Airflow version
2.2.4
### What happened
We are running Airflow 2.2.4 with KubernetesExecutor. I have created a DAG that runs the `airflow dags backfill` command via SubprocessHook. What I observed is that when I started to backfill a few days' worth of DAG runs, the backfill would get stuck, with some DAG runs having tasks that stayed in the "scheduled" state and never started running.
We are using the default pool, and the pool is totally free when the tasks get stuck.
I could find some logs saying
`TaskInstance: <TaskInstance: test_dag_2.task_1 backfill__2022-03-29T00:00:00+00:00 [queued]> found in queued state but was not launched, rescheduling` and nothing else in the log.
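A sketch of the kind of wrapper task described above, assuming `SubprocessHook` from `airflow.hooks.subprocess`; the backfill flags mirror the command shown in the reproduce section below:
```python
from airflow.decorators import task
from airflow.hooks.subprocess import SubprocessHook


@task
def run_backfill(dag_id: str, start: str, end: str):
    """Shell out to `airflow dags backfill` from inside a DAG."""
    result = SubprocessHook().run_command(
        command=["airflow", "dags", "backfill", dag_id, "-s", start, "-e", end, "--rerun-failed-tasks"]
    )
    if result.exit_code != 0:
        raise RuntimeError(f"backfill exited with code {result.exit_code}")
```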
### What you think should happen instead
The tasks stuck in "scheduled" should start running when there is free slot in the pool.
### How to reproduce
Airflow 2.2.4 with python 3.8.13, KubernetesExecutor running in AWS EKS.
One backfill command example is: `airflow dags backfill test_dag_2 -s 2022-03-01 -e 2022-03-10 --rerun-failed-tasks`
The test_dag_2 dag is like:
```
import time
from datetime import timedelta
import pendulum
from airflow import DAG
from airflow.decorators import task
from airflow.models.dag import dag
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.operators.python import PythonOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['airflow@example.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
}
def get_execution_date(**kwargs):
ds = kwargs['ds']
print(ds)
with DAG(
'test_dag_2',
default_args=default_args,
description='Testing dag',
start_date=pendulum.datetime(2022, 4, 2, tz='UTC'),
schedule_interval="@daily", catchup=True, max_active_runs=1,
) as dag:
t1 = BashOperator(
task_id='task_1',
depends_on_past=False,
bash_command='sleep 30'
)
t2 = PythonOperator(
task_id='get_execution_date',
python_callable=get_execution_date
)
t1 >> t2
```
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==3.0.2
apache-airflow-providers-docker==2.4.1
apache-airflow-providers-elasticsearch==2.2.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.4.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-microsoft-azure==3.6.0
apache-airflow-providers-microsoft-mssql==2.1.0
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.4.1
apache-airflow-providers-slack==4.2.0
apache-airflow-providers-snowflake==2.5.0
apache-airflow-providers-sqlite==2.1.0
apache-airflow-providers-ssh==2.4.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23145 | https://github.com/apache/airflow/pull/23720 | 49cfb6498eed0acfc336a24fd827b69156d5e5bb | 640d4f9636d3867d66af2478bca15272811329da | "2022-04-21T12:29:32Z" | python | "2022-11-18T01:09:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,114 | ["airflow/providers/cncf/kubernetes/sensors/spark_kubernetes.py", "tests/providers/cncf/kubernetes/sensors/test_spark_kubernetes.py"] | SparkKubernetesSensor Cannot Attach Log When There Are Sidecars in the Driver Pod | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==3.0.0
### Apache Airflow version
2.2.5 (latest released)
### Operating System
Debian GNU/Linux 10 (buster)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When using `SparkKubernetesSensor` with `attach_log=True`, it cannot get the log correctly with the below error:
```
[2022-04-20, 08:42:04 UTC] {spark_kubernetes.py:95} WARNING - Could not read logs for pod spark-pi-0.4753748373914717-1-driver. It may have been disposed.
Make sure timeToLiveSeconds is set on your SparkApplication spec.
underlying exception: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '29ac5abb-452d-4411-a420-8d74155e187d', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Wed, 20 Apr 2022 08:42:04 GMT', 'Content-Length': '259'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"a container name must be specified for pod spark-pi-0.4753748373914717-1-driver, choose one of: [istio-init istio-proxy spark-kubernetes-driver]","reason":"BadRequest","code":400}\n'
```
This is because no container is specified when calling the Kubernetes hook's `get_pod_logs`:
https://github.com/apache/airflow/blob/501a3c3fbefbcc0d6071a00eb101110fc4733e08/airflow/providers/cncf/kubernetes/sensors/spark_kubernetes.py#L85
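A sketch of the change this implies — targeting the driver container explicitly when fetching logs; the `get_pod_logs` signature used here is an assumption about the hook's API:
```python
def read_driver_log(hook, driver_pod_name: str, namespace: str):
    """Always read the Spark driver container, skipping istio sidecars."""
    # Assumed signature: KubernetesHook.get_pod_logs(pod_name, container=..., namespace=...)
    return hook.get_pod_logs(
        driver_pod_name, container="spark-kubernetes-driver", namespace=namespace
    )
```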
### What you think should happen instead
It should get the log of the `spark-kubernetes-driver` container.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23114 | https://github.com/apache/airflow/pull/26560 | 923f1ef30e8f4c0df2845817b8f96373991ad3ce | 5c97e5be484ff572070b0ad320c5936bc028be93 | "2022-04-20T09:58:18Z" | python | "2022-10-10T05:36:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,107 | ["airflow/dag_processing/processor.py", "airflow/models/taskfail.py", "airflow/models/taskinstance.py", "tests/api/common/test_delete_dag.py", "tests/callbacks/test_callback_requests.py", "tests/jobs/test_scheduler_job.py"] | Mapped KubernetesPodOperator "fails" but UI shows it is as still running | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
This dag has a problem. The `name` kwarg is missing from one of the mapped instances.
```python3
from datetime import datetime
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
KubernetesPodOperator,
)
from airflow.configuration import conf
namespace = conf.get("kubernetes", "NAMESPACE")
with DAG(
dag_id="kpo_mapped",
start_date=datetime(1970, 1, 1),
schedule_interval=None,
) as dag:
KubernetesPodOperator(
task_id="cowsay_static_named",
name="cowsay_statc",
namespace=namespace,
image="docker.io/rancher/cowsay",
cmds=["cowsay"],
arguments=["moo"],
)
KubernetesPodOperator.partial(
task_id="cowsay_mapped",
# name="cowsay_mapped", # required field missing
image="docker.io/rancher/cowsay",
namespace=namespace,
cmds=["cowsay"],
).expand(arguments=[["mooooove"], ["cow"], ["get out the way"]])
KubernetesPodOperator.partial(
task_id="cowsay_mapped_named",
name="cowsay_mapped",
namespace=namespace,
image="docker.io/rancher/cowsay",
cmds=["cowsay"],
).expand(arguments=[["mooooove"], ["cow"], ["get out the way"]])
```
If you omit that field in an unmapped task, you get a dag parse error, which is appropriate. But omitting it from the mapped task gives you this runtime error in the task logs:
```
[2022-04-20, 05:11:02 UTC] {standard_task_runner.py:52} INFO - Started process 60 to run task
[2022-04-20, 05:11:02 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'kpo_mapped', 'cowsay_mapped', 'manual__2022-04-20T05:11:01+00:00', '--job-id', '12', '--raw', '--subdir', 'DAGS_FOLDER/dags/taskmap/kpo_mapped.py', '--cfg-path', '/tmp/tmp_g3sj496', '--map-index', '0', '--error-file', '/tmp/tmp2_313wxj']
[2022-04-20, 05:11:02 UTC] {standard_task_runner.py:80} INFO - Job 12: Subtask cowsay_mapped
[2022-04-20, 05:11:02 UTC] {task_command.py:369} INFO - Running <TaskInstance: kpo_mapped.cowsay_mapped manual__2022-04-20T05:11:01+00:00 map_index=0 [running]> on host airflow-worker-65f9fd9d5b-vpgnk
[2022-04-20, 05:11:02 UTC] {taskinstance.py:1863} WARNING - We expected to get frame set in local storage but it was not. Please report this as an issue with full logs at https://github.com/apache/airflow/issues/new
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1440, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1544, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2210, in render_templates
rendered_task = self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 722, in render_template_fields
unmapped_task = self.unmap(unmap_kwargs=kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 508, in unmap
op = self.operator_class(**unmap_kwargs, _airflow_from_mapped=True)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 390, in apply_defaults
result = func(self, **kwargs, default_args=default_args)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 259, in __init__
self.name = self._set_name(name)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 442, in _set_name
raise AirflowException("`name` is required unless `pod_template_file` or `full_pod_spec` is set")
airflow.exceptions.AirflowException: `name` is required unless `pod_template_file` or `full_pod_spec` is set
```
But rather than failing the task, Airflow just thinks that the task is still running:
<img width="833" alt="Screen Shot 2022-04-19 at 11 13 47 PM" src="https://user-images.githubusercontent.com/5834582/164156155-41986d3a-d171-4943-8443-a0fc3c542988.png">
### What you think should happen instead
Ideally this error would be surfaced when the dag is first parsed. If that's not possible, then it should fail the task completely (i.e. a red square should show up in the grid view).
### How to reproduce
Run the dag above
### Operating System
ubuntu (microk8s)
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes | 4.0.0
### Deployment
Astronomer
### Deployment details
Deployed via the astronomer airflow helm chart, values:
```
airflow:
airflowHome: /usr/local/airflow
defaultAirflowRepository: 172.28.11.191:30500/airflow
defaultAirflowTag: tb11c-inner-operator-expansion
env:
- name: AIRFLOW__CORE__DAGBAG_IMPORT_ERROR_TRACEBACK_DEPTH
value: '99'
executor: CeleryExecutor
gid: 50000
images:
airflow:
pullPolicy: Always
repository: 172.28.11.191:30500/airflow
flower:
pullPolicy: Always
pod_template:
pullPolicy: Always
logs:
persistence:
enabled: true
size: 2Gi
scheduler:
livenessProbe:
timeoutSeconds: 45
triggerer:
livenessProbe:
timeoutSeconds: 45
```
Image base: `quay.io/astronomer/ap-airflow-dev:main`
Airflow version: `2.3.0.dev20220414`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23107 | https://github.com/apache/airflow/pull/23119 | 1e8ac47589967f2a7284faeab0f65b01bfd8202d | 91b82763c5c17e8ab021f2d4f2a5681ea90adf6b | "2022-04-20T05:29:38Z" | python | "2022-04-21T15:08:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,092 | ["airflow/www/static/css/bootstrap-theme.css"] | UI: Transparent border causes dropshadow to render 1px away from Action dropdown menu in Task Instance list | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Airflow:
> Astronomer Certified: v2.2.5.post1 based on Apache Airflow v2.2.5
> Git Version: .release:2.2.5+astro.1+90fc013e6e4139e2d4bfe438ad46c3af1d523668
Due to this CSS in `airflowDefaultTheme.ce329611a683ab0c05fd.css`:
```css
.dropdown-menu {
background-clip: padding-box;
background-color: #fff;
border: 1px solid transparent; /* <-- transparent border */
}
```
the dropdown border and dropshadow renders...weirdly:
![Screen Shot 2022-04-19 at 9 50 45 AM](https://user-images.githubusercontent.com/597113/164063925-10aaec58-ce6b-417e-a90f-4fa93eee4f9e.png)
Zoomed in - take a close look at the border and how the contents underneath the dropdown bleed through the border, making the dropshadow render 1px away from the dropdown menu:
![Screen Shot 2022-04-19 at 9 51 24 AM](https://user-images.githubusercontent.com/597113/164063995-e2d266ae-2cbf-43fc-9d97-7f90080c5507.png)
### What you think should happen instead
When I remove the abberrant line of CSS above, it cascades to this in `bootstrap.min.css`:
```css
.dropdown-menu {
...
border: 1px solid rgba(0,0,0,.15);
...
}
```
which renders the border as gray:
![Screen Shot 2022-04-19 at 9 59 23 AM](https://user-images.githubusercontent.com/597113/164064014-d575d039-aeb1-4a99-ab80-36c8cd6ca39e.png)
So I think we should not use a transparent border, or we should remove the explicit border from the dropdown and let Bootstrap control it.
### How to reproduce
Spin up an instance of Airflow with `astro dev start`, trigger a DAG, inspect the DAG details, and list all task instances of a DAG run. Then click the Actions dropdown menu.
### Operating System
macOS 11.6.4 Big Sur (Intel)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Astro installed via Homebrew:
> Astro CLI Version: 0.28.1, Git Commit: 980c0d7bd06b818a2cb0e948bb101d0b27e3a90a
> Astro Server Version: 0.28.4-rc9
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23092 | https://github.com/apache/airflow/pull/27789 | 8b1ebdacd8ddbe841a74830f750ed8f5e6f38f0a | d233c12c30f9a7f3da63348f3f028104cb14c76b | "2022-04-19T17:56:36Z" | python | "2022-11-19T23:57:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,083 | ["BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing.py", "dev/breeze/src/airflow_breeze/shell/enter_shell.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output-commands.svg", "images/breeze/output-tests.svg"] | Breeze: Running integration tests in Breeze | We should be able to run integration tests with Breeze - this is extension of `test` unit tests command that should allow to enable --integrations (same as in Shell) and run the tests with only the integration tests selected. | https://github.com/apache/airflow/issues/23083 | https://github.com/apache/airflow/pull/23445 | 83784d9e7b79d2400307454ccafdacddaee16769 | 7ba4e35a9d1b65b4c1a318ba4abdf521f98421a2 | "2022-04-19T14:17:28Z" | python | "2022-05-06T09:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,082 | ["BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing.py", "dev/breeze/src/airflow_breeze/shell/enter_shell.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output-commands.svg", "images/breeze/output-tests.svg"] | Breeze: Add running unit tests with Breeze | We should be able to run unit tests automatically from breeze (`test` command in legacy-breeze) | https://github.com/apache/airflow/issues/23082 | https://github.com/apache/airflow/pull/23445 | 83784d9e7b79d2400307454ccafdacddaee16769 | 7ba4e35a9d1b65b4c1a318ba4abdf521f98421a2 | "2022-04-19T14:15:49Z" | python | "2022-05-06T09:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,068 | ["airflow/www/static/js/tree/InstanceTooltip.jsx", "airflow/www/static/js/tree/details/content/dagRun/index.jsx", "airflow/www/static/js/tree/details/content/taskInstance/Details.jsx", "airflow/www/static/js/tree/details/content/taskInstance/MappedInstances.jsx", "airflow/www/utils.py"] | Grid view: "duration" shows 00:00:00 | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Run [a dag with an expanded TimedeltaSensor and a normal TimedeltaSensor](https://gist.github.com/MatrixManAtYrService/051fdc7164d187ab215ff8087e4db043), and navigate to the corresponding entries in the grid view.
While the dag runs:
- The unmapped task shows its "duration" to be increasing
- The mapped task shows a blank entry for the duration
Once the dag has finished:
- both show `00:00:00` for the duration
### What you think should happen instead
I'm not sure what it should show, probably time spent running? Or maybe queued + running? Whatever it should be, 00:00:00 doesn't seem right if it spent 90 seconds waiting around (e.g. in the "running" state)
Also, if we're going to update duration continuously while the normal task is running, we should do the same for the expanded task.
### How to reproduce
Run a DAG with expanded sensors and notice the 00:00:00 duration.
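The linked gist is not reproduced here; this is a minimal sketch of the shape of DAG described (one plain and one mapped sensor), assuming the class is `TimeDeltaSensor` from `airflow.sensors.time_delta`:
```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.time_delta import TimeDeltaSensor

with DAG(dag_id="duration_repro", start_date=datetime(2022, 4, 1), schedule_interval=None) as dag:
    TimeDeltaSensor(task_id="plain_wait", delta=timedelta(seconds=90))
    TimeDeltaSensor.partial(task_id="mapped_wait").expand(
        delta=[timedelta(seconds=30), timedelta(seconds=90)]
    )
```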
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astrocloud dev start`
Dockerfile:
```
FROM quay.io/astronomer/ap-airflow-dev:main
```
image at airflow version 6d6ac2b2bcbb0547a488a1a13fea3cb1a69d24e8
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23068 | https://github.com/apache/airflow/pull/23259 | 511ea702d5f732582d018dad79754b54d5e53f9d | 9e2531fa4d9890f002d184121e018e3face5586b | "2022-04-19T03:11:17Z" | python | "2022-04-26T15:42:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,059 | ["airflow/providers/presto/hooks/presto.py", "airflow/providers/trino/hooks/trino.py"] | Presto hook is broken in the latest provider release (2.2.0) | ### Apache Airflow version
2.2.5 (latest released)
### What happened
The latest presto provider release https://pypi.org/project/apache-airflow-providers-presto/ is broken due to:
```
File "/usr/local/lib/python3.8/site-packages/airflow/providers/presto/hooks/presto.py", line 117, in get_conn
http_headers = {"X-Presto-Client-Info": generate_presto_client_info()}
File "/usr/local/lib/python3.8/site-packages/airflow/providers/presto/hooks/presto.py", line 56, in generate_presto_client_info
'try_number': context_var['try_number'],
KeyError: 'try_number'
```
### What you think should happen instead
This is because the latest Airflow release, 2.2.5, does not include this PR:
https://github.com/apache/airflow/pull/22297/
The Presto hook changes were introduced in this PR: https://github.com/apache/airflow/pull/22416
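A sketch of a more defensive way to build the client-info payload so that a context key missing on older Airflow cores does not break `get_conn()`; this is illustrative, not the provider's actual code:
```python
import json


def generate_client_info_sketch(context_var: dict) -> str:
    """Tolerate context keys that older Airflow cores do not populate."""
    return json.dumps(
        {
            "dag_id": context_var.get("dag_id"),
            "task_id": context_var.get("task_id"),
            "try_number": context_var.get("try_number"),  # absent before the linked core PR
        }
    )
```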
### How to reproduce
_No response_
### Operating System
Mac
### Versions of Apache Airflow Providers
https://pypi.org/project/apache-airflow-providers-presto/
version: 2.2.0
### Deployment
Other
### Deployment details
local
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
cc @levyitay | https://github.com/apache/airflow/issues/23059 | https://github.com/apache/airflow/pull/23061 | b24650c0cc156ceb5ef5791f1647d4d37a529920 | 5164cdbe98ad63754d969b4b300a7a0167565e33 | "2022-04-18T17:23:45Z" | python | "2022-04-19T05:29:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,042 | ["airflow/www/static/css/graph.css", "airflow/www/static/js/graph.js"] | Graph view: Nodes arrows are cut | ### Body
<img width="709" alt="Screen Shot 2022-04-15 at 17 37 37" src="https://user-images.githubusercontent.com/45845474/163584251-f1ea5bc7-e132-41c4-a20c-cc247b81b899.png">
Reproduce example using [example_emr_job_flow_manual_steps](https://github.com/apache/airflow/blob/b3cae77218788671a72411a344aab42a3c58e89c/airflow/providers/amazon/aws/example_dags/example_emr_job_flow_manual_steps.py) in the AWS provider.
As already discussed with @bbovenzi, this issue will be fixed after 2.3.0 as it requires quite a few changes... also, this is not a regression and it's just a "cosmetic" issue in very specific DAGs.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23042 | https://github.com/apache/airflow/pull/23044 | 749e53def43055225a2e5d09596af7821d91b4ac | 028087b5a6e94fd98542d0e681d947979eb1011f | "2022-04-15T14:45:05Z" | python | "2022-05-12T19:47:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,040 | ["airflow/providers/google/cloud/transfers/mssql_to_gcs.py", "airflow/providers/google/cloud/transfers/mysql_to_gcs.py", "airflow/providers/google/cloud/transfers/oracle_to_gcs.py", "airflow/providers/google/cloud/transfers/postgres_to_gcs.py", "airflow/providers/google/cloud/transfers/presto_to_gcs.py", "airflow/providers/google/cloud/transfers/sql_to_gcs.py", "airflow/providers/google/cloud/transfers/trino_to_gcs.py", "tests/providers/google/cloud/transfers/test_postgres_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | PostgresToGCSOperator does not allow nested JSON | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.3.0
### Apache Airflow version
2.1.4
### Operating System
macOS Big Sur version 11.6.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
Postgres JSON column output contains extra `\`:
`{"info": "{\"phones\": [{\"type\": \"mobile\", \"phone\": \"001001\"}, {\"type\": \"fix\", \"phone\": \"002002\"}]}", "name": null}`
While in the previous version the output looks like
`{"info": {"phones": [{"phone": "001001", "type": "mobile"}, {"phone": "002002", "type": "fix"}]}, "name": null}`
The introduced extra `\` will cause a JSON parsing error in the downstream `GCSToBigQueryOperator`.
### What you think should happen instead
The output should NOT contain extra `\`:
`{"info": {"phones": [{"phone": "001001", "type": "mobile"}, {"phone": "002002", "type": "fix"}]}, "name": null}`
It is caused by this new code change in https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/postgres_to_gcs.py.
The following block should be commented out:
> if isinstance(value, dict):
> return json.dumps(value)
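A sketch of the behaviour the reporter expects from the type converter: leave dict values untouched so the final row serialisation encodes nested JSON exactly once (illustrative only, not the operator's code):
```python
import datetime
import json


def convert_value_sketch(value):
    if isinstance(value, (datetime.datetime, datetime.date)):
        return value.isoformat()
    # No json.dumps() here: a nested JSON column stays a Python dict and is
    # encoded once when the whole row is dumped below.
    return value


row = {"name": None, "info": {"phones": [{"type": "mobile", "phone": "001001"}]}}
print(json.dumps({k: convert_value_sketch(v) for k, v in row.items()}))
# {"name": null, "info": {"phones": [{"type": "mobile", "phone": "001001"}]}}
```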
### How to reproduce
Try to output a Postgres table with a JSON column --- you may use the `info` column above as an example.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23040 | https://github.com/apache/airflow/pull/23063 | ca3fbbbe14203774a16ddd23e82cfe652b22eb4a | 766726f2e3a282fcd2662f5dc6e9926dc38a6540 | "2022-04-15T14:19:53Z" | python | "2022-05-08T22:06:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,033 | ["airflow/providers_manager.py", "tests/core/test_providers_manager.py"] | providers_manager | Exception when importing 'apache-airflow-providers-google' package ModuleNotFoundError: No module named 'airflow.providers.mysql' | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
```shell
airflow users create -r Admin -u admin -e admin@example.com -f admin -l user -p admin
```
gives:
```log
[2022-04-15 07:08:30,801] {manager.py:807} WARNING - No user yet created, use flask fab command to do it.
[2022-04-15 07:08:31,024] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-04-15 07:08:31,049] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-04-15 07:08:31,149] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-04-15 07:08:31,160] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
[2022-04-15 07:08:32,250] {providers_manager.py:237} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.cloud_sql.CloudSQLHook' from 'apache-airflow-providers-google' package
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/providers_manager.py", line 215, in _sanity_check
imported_class = import_string(class_name)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 52, in <module>
from airflow.providers.mysql.hooks.mysql import MySqlHook
ModuleNotFoundError: No module named 'airflow.providers.mysql'
[2022-04-15 07:29:12,007] {manager.py:213} INFO - Added user admin
User "admin" created with role "Admin"
```
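(Not part of the original report.) The traceback shows the google provider's `cloud_sql` hook doing a cross-provider import of the MySQL hook, so one likely way to silence the warning is to install that optional provider as well; the constraints URL below is the same one used in the install commands in this report.
```shell
# Assumed workaround, not verified in the report: install the provider that
# cloud_sql.py tries to import, pinned with the same constraints file.
pip install apache-airflow-providers-mysql \
  --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0b1/constraints-3.8.txt"
```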
### What you think should happen instead
It does not log this warning with:
```
apache-airflow==2.2.5
apache-airflow-providers-google==6.7.0
```
```log
[2022-04-15 07:44:45,962] {manager.py:779} WARNING - No user yet created, use flask fab command to do it.
[2022-04-15 07:44:46,304] {manager.py:512} WARNING - Refused to delete permission view, assoc with role exists DAG Runs.can_create Admin
[2022-04-15 07:44:48,310] {manager.py:214} INFO - Added user admin
User "admin" created with role "Admin"
```
### How to reproduce
_No response_
### Operating System
ubuntu
### Versions of Apache Airflow Providers
requirements.txt:
```
apache-airflow-providers-google==6.8.0
```
pip install -r requirements.txt --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0b1/constraints-3.8.txt"
### Deployment
Other Docker-based deployment
### Deployment details
pip install apache-airflow[postgres]==2.3.0b1 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0b1/constraints-3.8.txt"
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23033 | https://github.com/apache/airflow/pull/23037 | 4fa718e4db2daeb89085ea20e8b3ce0c895e415c | 8dedd2ac13a6cdc0c363446985f492e0f702f639 | "2022-04-15T07:31:53Z" | python | "2022-04-20T21:52:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,028 | ["airflow/cli/commands/task_command.py"] | `airflow tasks states-for-dag-run` has no `map_index` column | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
I ran:
```
$ airflow tasks states-for-dag-run taskmap_xcom_pull 'manual__2022-04-14T13:27:04.958420+00:00'
dag_id | execution_date | task_id | state | start_date | end_date
==================+==================================+===========+=========+==================================+=================================
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | foo | success | 2022-04-14T13:27:05.343134+00:00 | 2022-04-14T13:27:05.598641+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | bar | success | 2022-04-14T13:27:06.256684+00:00 | 2022-04-14T13:27:06.462664+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.480364+00:00 | 2022-04-14T13:27:07.713226+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.512084+00:00 | 2022-04-14T13:27:07.768716+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.546097+00:00 | 2022-04-14T13:27:07.782719+00:00
```
...targeting a dagrun for which `identity` had three expanded tasks. All three showed up, but the output didn't show me enough to know which one was which.
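(Not in the original report.) While the CLI output lacks the column, one possible way to tell the expanded instances apart is to read `map_index` straight from the metadata database with the ORM; the dag id, run id, and task id below are just the ones from the example above.
```python
# Sketch only: list state per map_index for the expanded "identity" task.
from airflow.models import TaskInstance
from airflow.utils.session import create_session

with create_session() as session:
    tis = (
        session.query(TaskInstance)
        .filter(
            TaskInstance.dag_id == "taskmap_xcom_pull",
            TaskInstance.run_id == "manual__2022-04-14T13:27:04.958420+00:00",
            TaskInstance.task_id == "identity",
        )
        .order_by(TaskInstance.map_index)
    )
    for ti in tis:
        print(ti.task_id, ti.map_index, ti.state)
```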
### What you think should happen instead
There should be a `map_index` column so that I know which one is which.
### How to reproduce
Run a DAG with expanded tasks, then try to view their states via the CLI.
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23028 | https://github.com/apache/airflow/pull/23030 | 10c9cb5318fd8a9e41a7b4338e5052c8feece7ae | b24650c0cc156ceb5ef5791f1647d4d37a529920 | "2022-04-14T23:35:08Z" | python | "2022-04-19T02:23:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,018 | ["airflow/jobs/backfill_job.py", "airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "airflow/models/taskmixin.py", "airflow/models/xcom_arg.py", "tests/models/test_taskinstance.py"] | A task's returned object should not be checked for mappability if the dag doesn't use it in an expansion. | ### Apache Airflow version
main (development)
### What happened
Here's a dag:
```python3
with DAG(...) as dag:
    @dag.task
    def foo():
        return "foo"
    @dag.task
    def identity(thing):
        return thing
    foo() >> identity.expand(thing=[1, 2, 3])
```
`foo` fails with these task logs:
```
[2022-04-14, 14:15:26 UTC] {python.py:173} INFO - Done. Returned value was: foo
[2022-04-14, 14:15:26 UTC] {taskinstance.py:1837} WARNING - We expected to get frame set in local storage but it was not. Please report this as an issue with full logs at https://github.com/apache/airflow/issues/new
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1417, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1564, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1634, in _execute_task
self._record_task_map_for_downstreams(task_orig, result, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2314, in _record_task_map_for_downstreams
raise UnmappableXComTypePushed(value)
airflow.exceptions.UnmappableXComTypePushed: unmappable return type 'str'
```
### What you think should happen instead
Airflow shouldn't bother checking `foo`'s return type for mappability because its return value is never used in an expansion.
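A deliberately simplified, hypothetical sketch of the behaviour the traceback points at (this is not the real Airflow source; the names and the exact type check are stand-ins): the push-side validation only asks whether the task has *any* mapped downstream, not whether that downstream actually expands over the pushed value.
```python
from typing import Any, Sequence


class UnmappableXComTypePushed(Exception):
    """Stand-in for airflow.exceptions.UnmappableXComTypePushed."""


def record_task_map_for_downstreams(downstream_is_mapped: Sequence[bool], value: Any) -> None:
    # Fires for a non-collection return value as soon as one downstream is
    # mapped, even if that downstream expands over a literal list instead.
    if any(downstream_is_mapped) and not isinstance(value, (list, dict)):
        raise UnmappableXComTypePushed(value)


# foo() returns "foo" and identity.expand(...) is downstream of it, so:
try:
    record_task_map_for_downstreams([True], "foo")
except UnmappableXComTypePushed as exc:
    print(f"unmappable return type {type(exc.args[0]).__name__!r}")
```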
### How to reproduce
Run the dag, notice the failure
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
using image with ref: e5dd6fdcfd2f53ed90e29070711c121de447b404
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23018 | https://github.com/apache/airflow/pull/23053 | b8bbfd4b318108b4fdadc78cd46fd1735da243ae | 197cff3194e855b9207c3c0da8ae093a0d5dda55 | "2022-04-14T14:28:26Z" | python | "2022-04-19T18:02:15Z" |