status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 33,871 | ["airflow/api_connexion/endpoints/variable_endpoint.py", "tests/api_connexion/endpoints/test_variable_endpoint.py"] | Airflow API for PATCH a variable key which doesn't exist, sends Internal Server Error | ### Apache Airflow version
2.7.0
### What happened
```
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find
a solution to your problem.
Consider following these steps:
* gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
* find similar issues using:
* <b><a href="https://github.com/apache/airflow/discussions">GitHub Discussions</a></b>
* <b><a href="https://github.com/apache/airflow/issues">GitHub Issues</a></b>
* <b><a href="https://stackoverflow.com/questions/tagged/airflow">Stack Overflow</a></b>
* the usual search engine you use on a daily basis
* if you run Airflow on a Managed Service, consider opening an issue using the service support channels
* if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a <b><a href="https://github.com/apache/airflow/issues/new/choose">bug report</a></b>.
Make sure however, to include all relevant details and results of your investigation so far.
Python version: 3.11.4
Airflow version: 2.7.0
Node: redact
-------------------------------------------------------------------------------
Error! Please contact server admin.
```
The traceback received is:
```
[2023-08-29T08:26:46.947+0000] {app.py:1744} ERROR - Exception on /api/v1/variables/TEST_VAR [PATCH]
Traceback (most recent call last):
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/flask/app.py", line 2529, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/connexion/decorators/decorator.py", line 68, in wrapper
response = function(request)
^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/connexion/decorators/uri_parsing.py", line 149, in wrapper
response = function(request)
^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/connexion/decorators/validation.py", line 196, in wrapper
response = function(request)
^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/connexion/decorators/validation.py", line 399, in wrapper
return function(request)
^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/connexion/decorators/response.py", line 112, in wrapper
response = function(request)
^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/connexion/decorators/parameter.py", line 120, in wrapper
return function(**kwargs)
^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/api_connexion/security.py", line 52, in decorated
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/www/decorators.py", line 127, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/api_connexion/endpoints/variable_endpoint.py", line 118, in patch_variable
setattr(variable, key, val)
AttributeError: 'NoneType' object has no attribute 'key'
127.0.0.1 - admin [29/Aug/2023:08:26:46 +0000] "PATCH /api/v1/variables/TEST_VAR HTTP/1.1" 500 1543 "-" "curl/8.1.2"
```
### What you think should happen instead
The API call should have returned a different HTTP response, e.g. 400 BAD REQUEST with a message that the variable does not exist.
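For illustration only, here is a sketch of the kind of guard the endpoint could add before calling `setattr()`; the helper name is made up, and it uses the API's `NotFound` (404) exception as one plausible choice rather than the 400 suggested above, so it is not the actual fix:
```python
# Hypothetical sketch of a guard that patch_variable() could apply before
# calling setattr(), so a missing key yields a 4xx response instead of a 500.
from sqlalchemy import select

from airflow.api_connexion.exceptions import NotFound
from airflow.models import Variable


def _get_variable_or_404(session, variable_key: str) -> Variable:
    variable = session.scalar(select(Variable).where(Variable.key == variable_key))
    if variable is None:
        raise NotFound(
            "Variable not found",
            detail=f"Variable with key {variable_key!r} does not exist",
        )
    return variable
```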
### How to reproduce
```
curl -vvvv -X PATCH "http://localhost:8888/api/v1/variables/TEST_VAR" -H 'Content-Type: application/json' -H "Authorization: Basic YWRtaW46YWRtaW4=" -d '{"key":"TEST_VAR","value":"TRUE"}'
```
I used a basic auth with admin:admin as creds.
### Operating System
macOS 13.5.1 22G90 arm64
### Versions of Apache Airflow Providers
Not relevant
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33871 | https://github.com/apache/airflow/pull/33885 | d361761deeffe628f3c17ab0debd0e11515c22da | 701c3b80107adb9f4c697f04331c1c7c4e315cd8 | "2023-08-29T08:32:08Z" | python | "2023-08-30T07:06:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,854 | [".pre-commit-config.yaml", "STATIC_CODE_CHECKS.rst", "dev/breeze/src/airflow_breeze/pre_commit_ids.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_static-checks.svg", "pyproject.toml"] | `pyproject.toml` `[project]` section without `name` and `version` attributes is not pep 621 compliant | ### Apache Airflow version
2.7.0
### What happened
https://peps.python.org/pep-0621/
https://github.com/apache/airflow/blob/83d09c0c423f3e8e3bbbfa6e0171d88893d1c18a/pyproject.toml#L31
Newer setuptools will complain and fail to install. When building the NixOS package for 2.7.0 we now get:
```
ValueError: invalid pyproject.toml config: `project`.
configuration error: `project` must contain ['name'] properties
```
### What you think should happen instead
_No response_
### How to reproduce
- Read https://peps.python.org/pep-0621/#name
- Read https://peps.python.org/pep-0621/#version
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33854 | https://github.com/apache/airflow/pull/34014 | 6c649aefd2dccbc1765c077c5154c7edf384caeb | ba8ee909e4532318649df9c2d5a7ed70b357913d | "2023-08-28T20:52:38Z" | python | "2023-09-04T07:56:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,850 | ["airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py", "airflow/providers/microsoft/azure/CHANGELOG.rst", "airflow/providers/microsoft/azure/hooks/fileshare.py", "airflow/providers/microsoft/azure/provider.yaml", "docs/apache-airflow-providers-microsoft-azure/connections/azure_fileshare.rst", "generated/provider_dependencies.json", "tests/providers/google/cloud/transfers/test_azure_fileshare_to_gcs.py", "tests/providers/microsoft/azure/hooks/test_azure_fileshare.py", "tests/test_utils/azure_system_helpers.py"] | Upgrade Azure File Share to v12 | ### Description
In November 2019 the Azure File Share python package was "renamed from `azure-storage-file` to `azure-storage-file-share` along with renamed client modules":
https://azure.github.io/azure-sdk/releases/2019-11/python.html
Yet it is 2023 and we still have `azure-storage-file>=2.1.0` as a dependency for `apache-airflow-providers-microsoft-azure`. I am opening this issue to propose removing this deprecated package, which is now over three years old.
I am aware of the challenges with earlier attempts to upgrade Azure Storage packages to v12 as discussed in https://github.com/apache/airflow/pull/8184. I hope those challenges are gone by now? Especially since `azure-storage-blob` has already been upgraded to v12 in this provider (https://github.com/apache/airflow/pull/12188).
Also, I believe this is why `azure-storage-common>=2.1.0` is still a dependency, even though it is listed as deprecated on https://azure.github.io/azure-sdk/releases/deprecated/python.html:
- I have not fully investigated, but I believe that once we upgrade to `azure-storage-file-share` v12 this provider will no longer need `azure-storage-common` as a dependency, as it just contains the common code shared by the old 2.x versions of "blob", "file" and "queue". We already upgraded "blob" to v12, and we don't have "queue" support, so "file" is the last remaining.
- Also removing "azure-storage-common" will remove the warning:
```
... site-packages/azure/storage/common/_connection.py:82 SyntaxWarning: "is" with a literal. Did you mean "=="?
```
(a fix was merged to "main" in 2020 however Microsoft no longer will release new versions of this package)
I _used_ to be an active Azure Storage user (up until last year), but I am now mainly an AWS user, so I would appreciate it if someone else would create a PR for this; if nobody does, I suppose I could look into it.
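For anyone picking this up, a rough old-vs-new sketch of the client API after the rename (illustrative only; the account, share and directory names below are made up, and this is not the provider's actual hook code):
```python
# Illustrative only: listing a directory with the deprecated 2.x SDK vs. the
# v12 "azure-storage-file-share" SDK. All names below are placeholders.

# Old (azure-storage-file 2.x, deprecated):
# from azure.storage.file import FileService
# file_service = FileService(account_name="myaccount", account_key="<key>")
# for item in file_service.list_directories_and_files("myshare", "mydir"):
#     print(item.name)

# New (azure-storage-file-share 12.x):
from azure.storage.fileshare import ShareServiceClient

service = ShareServiceClient.from_connection_string("<connection-string>")
share = service.get_share_client("myshare")
for item in share.list_directories_and_files(directory_name="mydir"):
    print(item.name)
```
As far as I can tell, the v12 client builds on `azure-core` rather than `azure-storage-common`, which is why the second dependency should become removable as well.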
### Use case/motivation
Mainly to remove deprecated packages, and secondly to remove one SyntaxWarning
### Related issues
This is the related issue to upgrade `azure-storage-blob` to v12: https://github.com/apache/airflow/issues/11968
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33850 | https://github.com/apache/airflow/pull/33904 | caf135f7c40ff07b31a9a026282695ac6202e6aa | b7f84e913b6aa4cee7fa63009082b0608b3a0bf1 | "2023-08-28T20:08:20Z" | python | "2023-09-02T12:15:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,744 | ["airflow/providers/redis/provider.yaml", "generated/provider_dependencies.json"] | Celery Executor is not working with redis-py 5.0.0 | ### Apache Airflow version
2.7.0
### What happened
After upgrading to Airflow 2.7.0 in my local environment my Airflow DAGs won't run with Celery Executor using Redis even after changing `celery_app_name` configuration in `celery` section from `airflow.executors.celery_executor` to `airflow.providers.celery.executors.celery_executor`.
I see the error is actually unrelated to the recent Airflow Celery provider changes, and is instead related to Celery's Redis support. What is happening is that Airflow fails to send jobs to the worker because the Kombu module is not compatible with redis-py 5.0.0 (released last week). It gives this error (I will update this to the full traceback once I can reproduce the error one more time):
```
AttributeError: module 'redis' has no attribute 'client'
```
Celery actually is limiting redis-py to 4.x in an upcoming version of Celery 5.3.x (it is merged to main on August 17, 2023 but it is not yet released: https://github.com/celery/celery/pull/8442 . The latest version is v5.3.1 released on June 18, 2023).
Kombu is also going to match Celery and limit redis-py to 4.x in an upcoming version as well (the PR is draft, I am assuming they are waiting for the Celery change to be released: https://github.com/celery/kombu/pull/1776)
For now there is not really a way to fix this unless we can add a redis constraint that avoids 5.x. Or, once the next Celery 5.3.x release limits redis-py to 4.x, we could possibly limit the Celery provider to that version of Celery?
### What you think should happen instead
Airflow should be able to send jobs to workers when using Celery Executor with Redis
### How to reproduce
1. Start Airflow 2.7.0 with Celery Executor with Redis 5.0.0 installed by default (at the time of this writing)
2. Run a DAG task
3. The scheduler fails to send the job to the worker
Workaround:
1. Limit redis-py to 4.x the same way the upcoming release of Celery 5.3.x does, by using this in requirements.txt: `redis>=4.5.2,<5.0.0,!=4.5.5`
2. Start Airflow 2.7.0 with Celery Executor
3. Run a DAG task
4. The task runs successfully
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.3.2
### Deployment
Docker-Compose
### Deployment details
I am using `bitnami/airflow:2.7.0` image in Docker Compose when I first encountered this issue, but I will test with Breeze as well shortly and then update this issue.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33744 | https://github.com/apache/airflow/pull/33773 | 42bc8fcb6bab2b02ef2ff62c3015b54a1ad2df62 | 3ba994d8f4c4b5ce3828bebcff28bbfc25170004 | "2023-08-25T20:06:08Z" | python | "2023-08-26T16:02:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,711 | ["airflow/providers/amazon/CHANGELOG.rst", "airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | EcsRunTaskOperator waiter default waiter_max_attempts too low - all Airflow tasks detach from ECS tasks at 10 minutes | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Running on MWAA v2.5.1 with `apache-airflow-providers-amazon` (EcsRunTaskOperator) upgraded to v8.3.0
All `EcsRunTaskOperator` tasks appear to 'detach' from the underlying ECS Task after 10 minutes. Running a command:
```
sleep 800
```
results in:
```
[2023-08-25, 10:15:12 NZST] {{ecs.py:533}} INFO - EcsOperator overrides: {'containerOverrides': [{'name': 'meltano', 'command': ['sleep', '800']}]}
...
[2023-08-25, 10:15:13 NZST] {{ecs.py:651}} INFO - ECS task ID is: b2681954f66148e8909d5e74c4b94c1a
[2023-08-25, 10:15:13 NZST] {{ecs.py:565}} INFO - Starting ECS Task Log Fetcher
[2023-08-25, 10:15:43 NZST] {{base_aws.py:554}} WARNING - Unable to find AWS Connection ID 'aws_ecs', switching to empty.
[2023-08-25, 10:15:43 NZST] {{base_aws.py:160}} INFO - No connection ID provided. Fallback on boto3 credential strategy (region_name='ap-southeast-2'). See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
[2023-08-25, 10:25:13 NZST] {{taskinstance.py:1768}} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/operators/ecs.py", line 570, in execute
self._wait_for_task_ended()
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/operators/ecs.py", line 684, in _wait_for_task_ended
waiter.wait(
File "/usr/local/airflow/.local/lib/python3.10/site-packages/botocore/waiter.py", line 55, in wait
Waiter.wait(self, **kwargs)
File "/usr/local/airflow/.local/lib/python3.10/site-packages/botocore/waiter.py", line 388, in wait
raise WaiterError(
botocore.exceptions.WaiterError: Waiter TasksStopped failed: Max attempts exceeded
```
It appears to be caused by the addition of `waiter.wait` with different max_attempts (defaults to 100 instead of sys.maxsize - usually a very large number):
```
waiter.config.max_attempts = sys.maxsize # timeout is managed by airflow
waiter.wait(
cluster=self.cluster,
tasks=[self.arn],
WaiterConfig={
"Delay": self.waiter_delay,
"MaxAttempts": self.waiter_max_attempts,
},
)
```
### What you think should happen instead
Set the default `waiter_max_attempts` in `EcsRunTaskOperator` to `sys.maxsize` to revert to the previous behaviour.
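In the meantime, a possible workaround (an untested sketch on my side; the cluster and task-definition names are placeholders) is to pass the waiter settings explicitly so they effectively never cap the run:
```python
# Untested workaround sketch: make the ECS waiter effectively unlimited so the
# Airflow task timeout is the real limit again. Names below are placeholders.
import sys

from airflow.providers.amazon.aws.operators.ecs import EcsRunTaskOperator

run_task = EcsRunTaskOperator(
    task_id="run_long_ecs_task",
    cluster="my-cluster",
    task_definition="meltano",
    launch_type="FARGATE",
    overrides={"containerOverrides": [{"name": "meltano", "command": ["sleep", "800"]}]},
    waiter_delay=6,                   # seconds between TasksStopped checks
    waiter_max_attempts=sys.maxsize,  # the default of 100 gives up after ~10 minutes
)
```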
### How to reproduce
1. You would need to set up ECS with a task definition, cluster, etc.
2. Assuming ECS is all setup - build a DAG with a EcsRunTaskOperator task
3. Run a task that should take more than 10 minutes, e.g. in `overrides` set `command` to `["sleep","800"]`
4. The Airflow task should fail while the ECS task should run for 800 seconds and complete successfully
### Operating System
MWAA v2.5.1 Python 3.10 (Linux)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.3.0
### Deployment
Amazon (AWS) MWAA
### Deployment details
n/a
### Anything else
n/a
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33711 | https://github.com/apache/airflow/pull/33712 | 539797fdfb2e0b2aca82376095e74edaad775439 | ea44ed9f54f6c0083aa6283b2f3f3712bc710a1f | "2023-08-24T23:36:43Z" | python | "2023-08-30T10:48:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,699 | ["airflow/www/static/js/dag/grid/index.tsx"] | Scrolling issues on DAG page | ### Apache Airflow version
2.7.0
### What happened
When on a DAG page, there's an issue with scrolling behavior on the Grid and Gantt tabs:
While my pointer is over the grid, the entire page should scroll once I get to the bottom of the grid, but instead I cannot scroll any further. This means that not only can't I get to the bottom of the page (where the Airflow version, etc., is), but I can't even see the bottom of the grid if there are enough rows.
Details, Graph, and Code tabs scroll fine.
Important to note - this seems to only happen when there are enough DAG runs to require horizontal scrolling to be activated.
### What you think should happen instead
Instead of stopping here:
![image](https://github.com/apache/airflow/assets/79997320/fcb2259f-f9fc-44f3-bc24-8bbfb5afb76c)
I should be able to scroll all the way down to here:
![image](https://github.com/apache/airflow/assets/79997320/b6443c2d-d61b-457c-95bc-4713b9c38f9b)
### How to reproduce
You should be able to see this behavior with any DAG that has enough DAG runs to cause the horizontal scroll bar to appear. I was able to replicate it with this DAG by triggering it 10 times. I have the vertical divider moved almost all the way to the left.
```
from airflow import DAG
from airflow.operators.empty import EmptyOperator
from datetime import datetime
with DAG(
dag_id='bad_scrolling',
default_args={'start_date': datetime(2023, 8, 23, 14, 0, 0)},
) as dag:
t1 = EmptyOperator(
task_id='fairly_long_name_here'
)
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-celery==3.3.2
apache-airflow-providers-cncf-kubernetes==7.4.2
apache-airflow-providers-common-sql==1.7.0
apache-airflow-providers-datadog==3.3.1
apache-airflow-providers-elasticsearch==5.0.0
apache-airflow-providers-ftp==3.5.0
apache-airflow-providers-google==10.6.0
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-microsoft-azure==6.2.4
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-redis==3.3.1
apache-airflow-providers-salesforce==5.4.1
apache-airflow-providers-slack==7.3.2
apache-airflow-providers-snowflake==4.4.2
apache-airflow-providers-sqlite==3.4.3
```
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33699 | https://github.com/apache/airflow/pull/35717 | 4f060a482c3233504e7905b3ab2d00fe56ea43cd | d37b91c102856e62322450606474aebd74ddf376 | "2023-08-24T16:48:17Z" | python | "2023-11-28T21:37:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,698 | ["airflow/www/views.py"] | UI DAG counts including deleted DAGs | ### Apache Airflow version
2.7.0
### What happened
On the DAGs page, the All, Active, and Paused counts include deleted DAGs. This is different from <= 2.6.1 (at least), where they were not included in the totals. Specifically this is for DAGs for which the DAG files have been removed, not DAGs that have been deleted via the UI.
### What you think should happen instead
Including deleted DAGs in those counts is confusing, and this behavior should revert to the previous behavior.
### How to reproduce
Create a DAG.
Wait for totals to increment.
Remove the DAG file.
The totals will not change.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-celery==3.3.2
apache-airflow-providers-cncf-kubernetes==7.4.2
apache-airflow-providers-common-sql==1.7.0
apache-airflow-providers-datadog==3.3.1
apache-airflow-providers-elasticsearch==5.0.0
apache-airflow-providers-ftp==3.5.0
apache-airflow-providers-google==10.6.0
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-microsoft-azure==6.2.4
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-redis==3.3.1
apache-airflow-providers-salesforce==5.4.1
apache-airflow-providers-slack==7.3.2
apache-airflow-providers-snowflake==4.4.2
apache-airflow-providers-sqlite==3.4.3
```
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
I suspect the issue is with [DagModel.deactivate_deleted_dags](https://github.com/apache/airflow/blob/f971ba2f2f9703d0e1954e52aaded52a83c2f844/airflow/models/dag.py#L3564), but I'm unable to verify.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33698 | https://github.com/apache/airflow/pull/33778 | 02af225e7b75552e074d7dfcfc1af5336c42b84d | 64948fa7824d004e65089c2d159c5e6074727826 | "2023-08-24T16:01:16Z" | python | "2023-08-27T17:02:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,697 | ["airflow/providers/cncf/kubernetes/operators/pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/operators/test_pod.py"] | skip_on_exit_code parameter in KPO does not take effect | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
I am using the following task to simulate the skipping behaviour of the KPO:
```
@task.kubernetes(image="python:3.8-slim-buster", namespace="dev", skip_on_exit_code=100)
def print_pattern():
    import sys
    some_condition = True
    if some_condition:
        sys.exit(100)
```
This task results in the following logs:
```
'container_statuses': [{'container_id': 'containerd://0e38f55c0d0b8ac21b2d0d4d4a58a0f',
'image': 'docker.io/library/python:3.8-slim-buster',
'image_id': 'docker.io/library/python@sha256:8799b0564103a9f36cfb8a8e1c562e11a9a6f2e3bb214e2adc23982b36a04511',
'last_state': {'running': None,
'terminated': None,
'waiting': None},
'name': 'base',
'ready': False,
'restart_count': 0,
'started': False,
'state': {'running': None,
'terminated': {'container_id': 'containerd://0cd4eddd219dd25b658d240e675c59d0a0e38f55c0d0b8ac21b2d0d4d4a58a0f',
'exit_code': 100,
'finished_at': datetime.datetime(2023, 8, 23, 9, 38, 9, tzinfo=tzlocal()),
'message': None,
'reason': 'Error',
'signal': None,
'started_at': datetime.datetime(2023, 8, 23, 9, 38, 8, tzinfo=tzlocal())},
'waiting': None}}],
```
Upon execution, the task's state in Airflow is `failed`.
### What you think should happen instead
I would expect the task to be skipped based on the `skip_on_exit_code` parameter.
### How to reproduce
Run the task in airflow installed using latest helm chart version 1.10.0 and airflow 2.6.2
### Operating System
"Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon | 8.1.0
apache-airflow-providers-celery | 3.2.0
apache-airflow-providers-cncf-kubernetes | 7.0.0
apache-airflow-providers-common-sql | 1.5.1
apache-airflow-providers-docker | 3.7.0
apache-airflow-providers-elasticsearch | 4.5.0
apache-airflow-providers-ftp | 3.4.1
apache-airflow-providers-google | 10.1.1
apache-airflow-providers-grpc | 3.2.0
apache-airflow-providers-hashicorp | 3.4.0
apache-airflow-providers-http | 4.4.1
apache-airflow-providers-imap | 3.2.1
apache-airflow-providers-microsoft-azure | 6.1.1
apache-airflow-providers-microsoft-mssql | 3.2.0
apache-airflow-providers-mysql | 5.1.0
apache-airflow-providers-odbc | 3.3.0
apache-airflow-providers-postgres | 5.5.0
apache-airflow-providers-redis | 3.2.0
apache-airflow-providers-sendgrid | 3.2.0
apache-airflow-providers-sftp | 4.3.0
apache-airflow-providers-slack | 7.3.0
apache-airflow-providers-snowflake | 4.1.0
apache-airflow-providers-sqlite | 3.4.1
apache-airflow-providers-ssh | 3.7.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
k8s version v1.24.16
### Anything else
The Airflow code base uses `last_state.terminated.exit_code` for matching the exit code, as described [here](https://github.com/apache/airflow/blob/b5a4d36383c4143f46e168b8b7a4ba2dc7c54076/airflow/providers/cncf/kubernetes/operators/pod.py#L718C16-L718C16). However, the correct code should use `state.terminated.exit_code`.
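For illustration, a sketch of the corrected lookup (my own phrasing, not the actual patch). The exit code has to come from `state`, since `last_state` stays empty for a container that terminated on its first run, as the log dump above shows:
```python
# Hypothetical sketch of the corrected exit-code lookup. `remote_pod` is a
# kubernetes-client V1Pod and "base" is the KubernetesPodOperator task container.
def container_exit_code(remote_pod, container_name: str = "base"):
    for status in remote_pod.status.container_statuses or []:
        terminated = status.state.terminated  # not status.last_state.terminated
        if status.name == container_name and terminated is not None:
            return terminated.exit_code
    return None
```
With the exit code read from `state`, a value matching `skip_on_exit_code` could then raise `AirflowSkipException` as documented.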
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33697 | https://github.com/apache/airflow/pull/33702 | f971ba2f2f9703d0e1954e52aaded52a83c2f844 | c47703103982ec4730ea28c8a5eda12ed2ce008a | "2023-08-24T14:40:50Z" | python | "2023-08-24T18:22:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,694 | ["airflow/template/templater.py", "airflow/utils/template.py", "docs/apache-airflow/core-concepts/operators.rst", "tests/models/test_baseoperator.py", "tests/template/test_templater.py"] | airflow jinja template render error | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
version 2.6.2
An error occurs when *.json is included in the parameters of BigQueryInsertJobOperator.
``` py
to_gcs_task = BigQueryInsertJobOperator(
dag=dag,
task_id='to_gcs',
gcp_conn_id='xxxx',
configuration={
"extract": {
# The error occurred at this location.
"destinationUris": ['gs://xxx/yyy/*.json'],
"sourceTable": {
"projectId": "abc",
"datasetId": "def",
"tableId": "ghi"
},
"destinationFormat": "NEWLINE_DELIMITED_JSON"
}
}
)
```
error log
```
jinja2.exceptions.TemplateNotFound: gs://xxx/yyy/*.json
```
### What you think should happen instead
According to `airflow.template.templater` (source: https://github.com/apache/airflow/blob/main/airflow/template/templater.py#L152):
```py
if isinstance(value, str):
if any(value.endswith(ext) for ext in self.template_ext): # A filepath.
template = jinja_env.get_template(value)
else:
template = jinja_env.from_string(value)
return self._render(template, context)
```
In the Jinja templater source, if a string value ends with one of the operator's `template_ext` extensions (which include `.json` and `.sql` for `BigQueryInsertJobOperator`), it is treated as a template file path and loaded via `jinja_env.get_template`, which fails with `TemplateNotFound` for a GCS URI like the one above.
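One possible workaround until this is addressed (an untested sketch on my side, not an official recommendation) is to disable file-extension templating for this one task so the `endswith` check above never matches:
```python
# Untested workaround sketch: an operator subclass with an empty template_ext,
# so values ending in ".json"/".sql" are rendered as plain Jinja strings
# instead of being resolved as template files.
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator


class BigQueryInsertJobNoFileTemplates(BigQueryInsertJobOperator):
    template_ext = ()  # keep template_fields, drop file-based template loading


to_gcs_task = BigQueryInsertJobNoFileTemplates(
    task_id="to_gcs",
    gcp_conn_id="xxxx",
    configuration={
        "extract": {
            "destinationUris": ["gs://xxx/yyy/*.json"],
            "sourceTable": {"projectId": "abc", "datasetId": "def", "tableId": "ghi"},
            "destinationFormat": "NEWLINE_DELIMITED_JSON",
        }
    },
)
```
The trade-off is that such a task can no longer load `.sql`/`.json` files as templates, which is fine here since the value is a GCS URI rather than a file path.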
### How to reproduce
just call BigQueryInsertJobOperator with configuration what i added
### Operating System
m2 mac
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33694 | https://github.com/apache/airflow/pull/35017 | 69cea850cb37217675ccfef28917a9bd9679387d | 46c0f85ba6dd654501fc429ddd831461ebfefd3c | "2023-08-24T13:54:27Z" | python | "2023-11-17T08:58:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,693 | ["airflow/www/static/js/dag/details/graph/Node.tsx"] | Long custom operator name overflows in graph view | ### Apache Airflow version
main (development)
### What happened
1. There was support added to configure UI elements in graph view in https://github.com/apache/airflow/issues/31949
2. Long custom operator names overflow out of the box, while long task ids are truncated with an ellipsis. I guess the same could be done here by removing the width attribute that is set to "fit-content"; "noOfLines" should be added.
Originally wrapped before commit :
![Screenshot 2023-08-24 at 18-05-06 gh32757 - Grid - Airflow](https://github.com/apache/airflow/assets/3972343/2118975d-8467-4039-879f-13a87d9bcd79)
main :
![Screenshot 2023-08-24 at 18-17-31 gh32757 - Grid - Airflow](https://github.com/apache/airflow/assets/3972343/7d0b86e1-39c3-4fb4-928c-1ea54697128e)
### What you think should happen instead
_No response_
### How to reproduce
Sample dag to reproduce the issue in UI
```python
from datetime import datetime
from airflow.decorators import dag, task
from airflow.models.baseoperator import BaseOperator
from airflow.operators.bash import BashOperator
class HelloOperator(BashOperator):
custom_operator_name = "SampleLongNameOfOperator123456789"
@dag(dag_id="gh32757", start_date=datetime(2023, 1, 1), catchup=False)
def mydag():
bash = BashOperator(task_id="t1", bash_command="echo hello")
hello = HelloOperator(task_id="SampleLongTaskId1234567891234890", bash_command="echo test")
bash2 = BashOperator(task_id="t3", bash_command="echo bye")
bash >> hello >> bash2
mydag()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33693 | https://github.com/apache/airflow/pull/35382 | aaed909344b12aa4691a9e23ea9f9c98d641d853 | 4d872b87efac9950f125aff676b30f0a637b471e | "2023-08-24T13:00:06Z" | python | "2023-11-17T20:32:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,679 | ["airflow/providers/snowflake/operators/snowflake.py"] | SnowflakeCheckOperator connection id template issue | ### Apache Airflow version
2.7.0
### What happened
When upgrading to apache-airflow-providers-snowflake==4.4.2, our SnowflakeCheckOperators are all failing with similar messages. The affected code seems to be from [this PR](https://github.com/apache/airflow/pull/30784).
Code:
```
check_order_load = SnowflakeCheckOperator(
task_id="check_row_count",
sql='check_orders_load.sql',
snowflake_conn_id=SF_CONNECTION_ID,
)
```
Errors:
```
[2023-08-23, 20:58:23 UTC] {taskinstance.py:1943} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/airflow/models/abstractoperator.py", line 664, in _do_render_template_fields
value = getattr(parent, attr_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'SnowflakeCheckOperator' object has no attribute 'snowflake_conn_id'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/airflow/models/taskinstance.py", line 1518, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode, session=session)
File "/usr/local/lib/python3.11/site-packages/airflow/models/taskinstance.py", line 1646, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airflow/models/taskinstance.py", line 2291, in render_templates
original_task.render_template_fields(context)
File "/usr/local/lib/python3.11/site-packages/airflow/models/baseoperator.py", line 1244, in render_template_fields
self._do_render_template_fields(self, self.template_fields, context, jinja_env, set())
File "/usr/local/lib/python3.11/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airflow/models/abstractoperator.py", line 666, in _do_render_template_fields
raise AttributeError(
AttributeError: 'snowflake_conn_id' is configured as a template field but SnowflakeCheckOperator does not have this attribute.
```
### What you think should happen instead
This works fine in apache-airflow-providers-snowflake==4.4.1 - no errors.
### How to reproduce
With `apache-airflow-providers-snowflake==4.4.2`
Try running this code:
```
from airflow.providers.snowflake.operators.snowflake import SnowflakeCheckOperator
check_task = SnowflakeCheckOperator(
task_id='check_gmv_yoy',
sql='select 1',
snowflake_conn_id='NAME_OF_CONNECTION_ID',
)
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-snowflake==4.4.2
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
This happens every time with 4.4.2, never with <= 4.4.1.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33679 | https://github.com/apache/airflow/pull/33681 | 2dbb9633240777d658031d32217255849150684b | d06c14f52757321f2049bb54212421f68bf3ed06 | "2023-08-23T22:02:11Z" | python | "2023-08-24T07:22:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,667 | ["airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | Google Cloud Dataproc cluster creation should eagerly delete ERROR state clusters. | ### Description
Google Cloud Dataproc cluster creation should eagerly delete ERROR state clusters.
It is possible for Google Cloud Dataproc clusters to be created in the ERROR state. The current operator (DataprocCreateClusterOperator) will require three total task attempts (original + two retries) in order to create the cluster, assuming the underlying GCE infrastructure resolves itself between task attempts. This can be reduced to two total attempts by eagerly deleting a cluster in the ERROR state before failing the current task attempt.
Clusters in the ERROR state are not useable to submit Dataproc based jobs via the Dataproc API.
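For illustration, the create flow could check the cluster state right after creation and delete it when it is ERROR before failing the attempt. This is only a rough sketch using the public Dataproc client types and hook, not the operator's actual code:
```python
# Illustrative sketch only: eagerly remove a cluster that came up in ERROR
# state, then fail the current task attempt instead of leaving it running.
from google.cloud.dataproc_v1 import Cluster, ClusterStatus

from airflow.exceptions import AirflowException
from airflow.providers.google.cloud.hooks.dataproc import DataprocHook


def fail_fast_on_error_cluster(hook: DataprocHook, cluster: Cluster, project_id: str, region: str) -> None:
    if cluster.status.state == ClusterStatus.State.ERROR:
        hook.delete_cluster(project_id=project_id, region=region, cluster_name=cluster.cluster_name)
        raise AirflowException(
            f"Cluster {cluster.cluster_name} was created in ERROR state and has been deleted eagerly"
        )
```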
### Use case/motivation
Reducing the number of task attempts can reduce GCP based cost as delays between retry attempts could be minutes. There's no reason to keep a running, costly cluster in the ERROR state if it can be detected in the initial create task.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33667 | https://github.com/apache/airflow/pull/33668 | 075afe5a2add74d9e4e9fd57768b8354489cdb2b | d361761deeffe628f3c17ab0debd0e11515c22da | "2023-08-23T18:21:03Z" | python | "2023-08-30T05:29:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,661 | ["airflow/jobs/scheduler_job_runner.py", "airflow/utils/state.py", "tests/jobs/test_scheduler_job.py"] | Zombie tasks in RESTARTING state are not cleaned | ### Apache Airflow version
2.7.0
Also reproduced on 2.5.0
### What happened
Recently we added some automation that restarts Airflow tasks with the "clear" command, so we use this feature a lot. We often clear tasks in the RUNNING state, which means that they go into the RESTARTING state. We noticed that a lot of those tasks get stuck in the RESTARTING state. Our Airflow infrastructure runs in an environment where any process can get suddenly killed without a graceful shutdown.
We run Airflow on GKE but I managed to reproduce this behaviour on local environment with SequentialExecutor. See **"How to reproduce"** below for details.
### What you think should happen instead
Tasks should get cleaned after scheduler restart and eventually get scheduled and executed.
### How to reproduce
After some code investigation, I reproduced this kind of behaviour on a local environment. It seems that RESTARTING tasks are only properly handled if the original restarting task is gracefully shut down so it can mark the task as UP_FOR_RETRY, or at least if there is a healthy scheduler to do it when the task fails for any other reason. The problem is with the following scenario:
1. Task is initially in RUNNING state.
2. Scheduler process dies suddenly.
3. The task process also dies suddenly.
4. "clear" command is executed on the task so the state is changed to RESTARTING state by webserver process.
5. From now on, even if we restart scheduler, the task will never get scheduled or change its state. It needs to have its state manually fixed, e.g. by clearing it again.
A recording of steps to reproduce on local environment:
https://vimeo.com/857192666?share=copy
### Operating System
MacOS Ventura 13.4.1
### Versions of Apache Airflow Providers
N/A
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
N/A
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33661 | https://github.com/apache/airflow/pull/33706 | 3f984edd0009ad4e3177a3c95351c563a6ac00da | 5c35786ca29aa53ec08232502fc8a16fb1ef847a | "2023-08-23T15:41:09Z" | python | "2023-08-24T23:55:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,634 | ["airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py"] | Unable to fetch CloudWatch Logs of previous run attempts | ### Apache Airflow version
2.7.0
### What happened
After upgrading to `apache-airflow-providers-amazon==8.5.1`, I am no longer able to view logs from previous run attempts.
Airflow is able to find the log stream successfully, but there's no content viewable (even though there are logs in the actual streams):
```
REDACTED.us-west-2.compute.internal
*** Reading remote log from Cloudwatch log_group: REDACTED log_stream: dag_id=REDACTED/run_id=REDACTED/task_id=REDACTED/attempt=1.log.
```
I believe this issue was introduced by #33231. Looking at the [CloudWatch Logs code](https://github.com/apache/airflow/blob/providers-amazon/8.5.1/airflow/providers/amazon/aws/log/cloudwatch_task_handler.py#L109-L133), I think `task_instance.start_date` and `task_instance.end_date` refer to the __latest__ run, and so the log contents are getting filtered out for previous attempts.
### What you think should happen instead
_No response_
### How to reproduce
1. Configure environment with remote CloudWatch logging
2. Run task
3. Clear task and re-run
4. The logs for the first attempt now no longer show
### Operating System
Debian 11
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.5.1
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33634 | https://github.com/apache/airflow/pull/33673 | b1a3b4288022c67db22cbc7d24b0c4b2b122453b | 53a89739528cda26b8b53670fc51769850eb263e | "2023-08-22T22:37:46Z" | python | "2023-08-24T05:03:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,606 | ["airflow/utils/db_cleanup.py", "tests/utils/test_db_cleanup.py"] | 'airflow db clean' with --skip-archive flag fails | ### Apache Airflow version
2.7.0
### What happened
Running `airflow db clean -y -v --skip-archive --clean-before-timestamp '2023-05-24 00:00:00'` fails.
Running the same command without the `--skip-archive` flag passes successfully.
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/__main__.py", line 60, in main
args.func(args)
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/cli/cli_config.py", line 49, in command
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/utils/cli.py", line 113, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/utils/providers_configuration_loader.py", line 56, in wrapped_function
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/cli/commands/db_command.py", line 241, in cleanup_tables
run_cleanup(
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/utils/db_cleanup.py", line 437, in run_cleanup
_cleanup_table(
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/utils/db_cleanup.py", line 302, in _cleanup_table
_do_delete(query=query, orm_model=orm_model, skip_archive=skip_archive, session=session)
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/utils/db_cleanup.py", line 197, in _do_delete
target_table.drop()
File "/home/airflow/.local/lib/python3.11/site-packages/sqlalchemy/sql/schema.py", line 978, in drop
bind = _bind_or_error(self)
^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/sqlalchemy/sql/base.py", line 1659, in _bind_or_error
raise exc.UnboundExecutionError(msg)
sqlalchemy.exc.UnboundExecutionError: Table object '_airflow_deleted__dag_run__20230822091212' is not bound to an Engine or Connection. Execution can not proceed without a database to execute against.
```
### What you think should happen instead
db clean command with --skip-archive should pass
### How to reproduce
`airflow db clean -y -v --skip-archive --clean-before-timestamp '2023-05-24 00:00:00'`
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-celery==3.3.2
apache-airflow-providers-cncf-kubernetes==7.4.2
apache-airflow-providers-common-sql==1.7.0
apache-airflow-providers-daskexecutor==1.0.0
apache-airflow-providers-docker==3.7.3
apache-airflow-providers-elasticsearch==5.0.0
apache-airflow-providers-ftp==3.5.0
apache-airflow-providers-google==8.3.0
apache-airflow-providers-grpc==3.2.1
apache-airflow-providers-hashicorp==3.4.2
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-jenkins==3.3.1
apache-airflow-providers-microsoft-azure==6.2.4
apache-airflow-providers-mysql==5.2.1
apache-airflow-providers-odbc==4.0.0
apache-airflow-providers-openlineage==1.0.1
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-redis==3.3.1
apache-airflow-providers-salesforce==5.4.1
apache-airflow-providers-sendgrid==3.2.1
apache-airflow-providers-sftp==4.5.0
apache-airflow-providers-slack==7.3.2
apache-airflow-providers-snowflake==4.4.2
apache-airflow-providers-sqlite==3.4.3
apache-airflow-providers-ssh==3.7.1
apache-airflow-providers-tableau==4.2.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Helm version 1.9.0
Deployed on AWS EKS cluster
MetaDB is a AWS Postgres RDS connected using a PGBouncer
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33606 | https://github.com/apache/airflow/pull/33622 | 0ca5f700ab5e153ff8eea2c27b0629f2f44c8cb3 | 911cf466218bcd548519a50c9a32c9df58ec8b2e | "2023-08-22T09:15:57Z" | python | "2023-08-23T09:38:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,586 | ["airflow/www/templates/appbuilder/navbar_right.html"] | Airflow 2.7 Webserver unreacheable with new authentication manager | ### Apache Airflow version
2.7.0
### What happened
When connecting to the Airflow UI, we get the following message:
```
> Python version: 3.11.4
> Airflow version: 2.7.0
> Node: redact
> -------------------------------------------------------------------------------
> Error! Please contact server admin.
```
If we investigate further and look at the Kubernetes pod logs, we see that the following error message is thrown:
> File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/views.py", line 989, in index
> return self.render_template(
> ^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/views.py", line 694, in render_template
> return super().render_template(
> ^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/flask_appbuilder/baseviews.py", line 339, in render_template
> return render_template(
> ^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/flask/templating.py", line 147, in render_template
> return _render(app, template, context)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/flask/templating.py", line 130, in _render
> rv = template.render(context)
> ^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/jinja2/environment.py", line 1301, in render
> self.environment.handle_exception()
> File "/home/airflow/.local/lib/python3.11/site-packages/jinja2/environment.py", line 936, in handle_exception
> raise rewrite_traceback_stack(source=source)
> File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/templates/airflow/dags.html", line 44, in top-level template code
> {% elif curr_ordering_direction == 'asc' and request.args.get('sorting_key') == attribute_name %}
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/templates/airflow/main.html", line 21, in top-level template code
> {% from 'airflow/_messages.html' import show_message %}
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
> {% import 'appbuilder/baselib.html' as baselib %}
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 42, in top-level template code
> {% block body %}
> File "/home/airflow/.local/lib/python3.11/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 8, in block 'body'
> {% block navbar %}
> File "/home/airflow/.local/lib/python3.11/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 10, in block 'navbar'
> {% include 'appbuilder/navbar.html' %}
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/templates/appbuilder/navbar.html", line 53, in top-level template code
> {% include 'appbuilder/navbar_right.html' %}
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/templates/appbuilder/navbar_right.html", line 71, in top-level template code
> <span>{% for name in user_names %}{{ name[0].upper() }}{% endfor %}</span>
> ^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/home/airflow/.local/lib/python3.11/site-packages/jinja2/environment.py", line 485, in getattr
> return getattr(obj, attribute)
> ^^^^^^^^^^^^^^^^^^^^^^^
> jinja2.exceptions.UndefinedError: str object has no element 0
>
### What you think should happen instead
Show the Airflow UI
### How to reproduce
The deployment is done using the official Airflow helm chart with Azure AD authentication on the webserver.
As soon as we did the upgrade to Airflow 2.7, the webserver became unreachable, returning the error shown above whenever we tried to access it.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==7.4.2
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-microsoft-azure==4.3.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Webserver config:
AUTH_TYPE = AUTH_OAUTH
AUTH_ROLE_ADMIN = 'Admin'
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Admin"
OAUTH_PROVIDERS = [
{
"name": "azure",
"icon": "fa-microsoft",
"token_key": "access_token",
"remote_app": {
"client_id": "${airflow_client_id}",
"client_secret": "${airflow_client_secret}",
"api_base_url": "https://login.microsoftonline.com/${airflow_tenant_id}/oauth2",
"client_kwargs": {
"scope": "User.read name preferred_username email profile",
"resource": "${airflow_client_id}",
},
"request_token_url": None,
"access_token_url": "https://login.microsoftonline.com/${airflow_tenant_id}/oauth2/token",
"authorize_url": "https://login.microsoftonline.com/${airflow_tenant_id}/oauth2/authorize",
},
},
]
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33586 | https://github.com/apache/airflow/pull/33617 | 41d9be072abacc47393f700aa8fb98bc2b9a3713 | 62b917a6ac61fd6882c377e3b04f72d908f52a58 | "2023-08-21T15:08:38Z" | python | "2023-08-22T16:51:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,577 | ["airflow/providers/celery/provider.yaml", "dev/breeze/src/airflow_breeze/utils/path_utils.py", "generated/provider_dependencies.json", "setup.cfg", "setup.py"] | Airflow 2.7 is incompatible with SodaCore versions 3.0.24 and beyond | ### Apache Airflow version
2.7.0
### What happened
When trying to install SodaCore on Airflow 2.7, the following error is received due to a conflict with `opentelemetry-api`.
```
ERROR: Cannot install apache-airflow==2.7.0 and soda-core==3.0.48 because these package versions have conflicting dependencies.
The conflict is caused by:
apache-airflow 2.7.0 depends on opentelemetry-api==1.15.0
soda-core 3.0.48 depends on opentelemetry-api~=1.16.0
```
SodaCore has depended on `opentelemetry-api~=1.16.0` ever since v[3.0.24](https://github.com/sodadata/soda-core/releases/tag/v3.0.24).
### What you think should happen instead
Airflow needs to support versions of `opentelemetry-api` 1.16.x.
### How to reproduce
Simply running the following commands to install the two packages should reproduce the error.
```
$ python3 -m venv /tmp/soda
$ /tmp/soda/bin/pip install apache-airflow==2.7.0 soda-core-bigquery==3.0.48
```
### Operating System
n/a
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33577 | https://github.com/apache/airflow/pull/33579 | 73a37333918abe0612120d95169b9e377274810b | ae25a52ae342c9e0bc3afdb21d613447c3687f6c | "2023-08-21T12:10:44Z" | python | "2023-08-21T15:49:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,497 | ["airflow/www/jest-setup.js", "airflow/www/static/js/cluster-activity/live-metrics/Health.tsx", "airflow/www/static/js/index.d.ts", "airflow/www/templates/airflow/cluster_activity.html", "airflow/www/views.py"] | DAG Processor should not be visible in the Cluster Activity Page if there is no stand alone processor | ### Apache Airflow version
2.7.0rc2
### What happened
In the Airflow UI, currently, the DAG Processor is visible in the Cluster Activity page even if there is no stand-alone dag processor.
### What you think should happen instead
It should be hidden if there is no stand-alone dag processor.
### How to reproduce
Run Airflow 2.7
### Operating System
Mac OS
### Versions of Apache Airflow Providers
Airflow > 2.7
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33497 | https://github.com/apache/airflow/pull/33611 | b6318ffabce8cc3fdb02c30842726476b7e1fcca | c055e1da0b50e98820ffff8f8d10d0882f753384 | "2023-08-18T15:59:33Z" | python | "2023-09-02T13:56:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,485 | ["airflow/utils/sqlalchemy.py"] | SQL_ALCHEMY_CONN_CMD causes triggerers to fail liveness probes on peak | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow version: **2.5.3**
Related to this comment from @vchiapaikeo: https://github.com/apache/airflow/pull/33172#issuecomment-1677501450
A couple of mins after midnight UTC - when 100s of DAGs are kicked off - we noticed our triggerer replicas failing liveness probe checks and restarting systematically.
Further profiling led to the discovery that the triggerer’s sync loop hangs for several minutes when there are 1000s of triggers running simultaneously, specifically while [bulk fetching triggers](https://github.com/apache/airflow/blob/v2-5-test/airflow/jobs/triggerer_job.py#L398), which causes the triggerer to miss heartbeats and eventually get restarted by k8s.
With profiling still enabled, we observed that while the trigger is hanging and we profile the execution, we get this stack trace:
```
ncalls tottime percall cumtime percall filename:lineno(function) [506/45463]
1 0.000 0.000 29.928 29.928 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/query.py:2757(all)
1 0.000 0.000 29.923 29.923 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py:1468(all)
1 0.000 0.000 29.923 29.923 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py:395(_allrows)
1 0.000 0.000 29.923 29.923 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py:1388(_fetchall_impl)
1 0.000 0.000 29.923 29.923 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py:1808(_fetchall_impl)
2 0.000 0.000 29.922 14.961 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/loading.py:135(chunks)
1 0.000 0.000 29.921 29.921 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py:390(_raw_all_rows)
1 0.001 0.001 29.921 29.921 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py:393(<listcomp>)
125 0.000 0.000 29.919 0.239 /home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/sql/type_api.py:1711(process)
125 0.002 0.000 29.915 0.239 /home/airflow/.local/lib/python3.10/site-packages/airflow/utils/sqlalchemy.py:146(process_result_value)
125 0.001 0.000 29.909 0.239 /home/airflow/.local/lib/python3.10/site-packages/airflow/utils/sqlalchemy.py:122(db_supports_json)
125 0.001 0.000 29.908 0.239 /home/airflow/.local/lib/python3.10/site-packages/airflow/configuration.py:562(get)
125 0.000 0.000 29.907 0.239 /home/airflow/.local/lib/python3.10/site-packages/airflow/configuration.py:732(_get_environment_variables)
125 0.002 0.000 29.907 0.239 /home/airflow/.local/lib/python3.10/site-packages/airflow/configuration.py:478(_get_env_var_option)
125 0.002 0.000 29.902 0.239 /home/airflow/.local/lib/python3.10/site-packages/airflow/configuration.py:103(run_command)
125 0.001 0.000 29.786 0.238 /usr/local/lib/python3.10/subprocess.py:1110(communicate)
125 0.006 0.000 29.785 0.238 /usr/local/lib/python3.10/subprocess.py:1952(_communicate)
250 0.003 0.000 29.762 0.119 /usr/local/lib/python3.10/selectors.py:403(select)
250 29.758 0.119 29.758 0.119 {method 'poll' of 'select.poll' objects}
125 0.002 0.000 0.100 0.001 /usr/local/lib/python3.10/subprocess.py:758(__init__)
125 0.004 0.000 0.094 0.001 /usr/local/lib/python3.10/subprocess.py:1687(_execute_child)
```
This indicates that Airflow runs a subprocess for each fetched row, and that these subprocesses account for the vast majority of the execution time.
We found that while unmarshaling the resulting rows into the Trigger model, the [kwargs column](https://github.com/apache/airflow/blob/v2-5-test/airflow/models/trigger.py#L57) (ExtendedJSON) runs [process_result_value](https://github.com/apache/airflow/blob/v2-5-test/airflow/utils/sqlalchemy.py#L146) on each row, reading the `SQL_ALCHEMY_CONN` configuration to determine whether the engine supports JSON and parse kwargs accordingly. However, in our case we define `SQL_ALCHEMY_CONN_CMD` instead of `SQL_ALCHEMY_CONN`, which causes the sync loop to spawn a new subprocess for every row ([here](https://github.com/apache/airflow/blob/v2-5-test/airflow/configuration.py#L485-L488)).
We worked around it by using `SQL_ALCHEMY_CONN` instead of `SQL_ALCHEMY_CONN_CMD`, as that only involves reading an environment variable instead of spawning a new subprocess.
### What you think should happen instead
The Trigger model should cache either the `SQL_ALCHEMY_CONN` value or the [db_supports_json](https://github.com/apache/airflow/blob/v2-5-stable/airflow/utils/sqlalchemy.py#L122) property.
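A minimal sketch of what that caching could look like (the config section name and the exact predicate below are illustrative assumptions, not the actual Airflow implementation):
```python
from functools import lru_cache

from airflow.configuration import conf


@lru_cache(maxsize=None)
def db_supports_json() -> bool:
    # conf.get() is what executes sql_alchemy_conn_cmd, so caching the result means
    # the subprocess is spawned at most once per process instead of once per fetched row.
    return not conf.get("database", "sql_alchemy_conn").startswith("mssql")
```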
### How to reproduce
Simultaneously kick off 100s of DAGs with at least a few deferrable operators each and use `SQL_ALCHEMY_CONN_CMD` instead of `SQL_ALCHEMY_CONN`
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-airbyte==3.2.0
apache-airflow-providers-alibaba==2.2.0
apache-airflow-providers-amazon==7.3.0
apache-airflow-providers-apache-beam==4.3.0
apache-airflow-providers-apache-cassandra==3.1.1
apache-airflow-providers-apache-drill==2.3.1
apache-airflow-providers-apache-druid==3.3.1
apache-airflow-providers-apache-hdfs==3.2.0
apache-airflow-providers-apache-hive==5.1.3
apache-airflow-providers-apache-kylin==3.1.0
apache-airflow-providers-apache-livy==3.3.0
apache-airflow-providers-apache-pig==4.0.0
apache-airflow-providers-apache-pinot==4.0.1
apache-airflow-providers-apache-spark==4.0.0
apache-airflow-providers-apache-sqoop==3.1.1
apache-airflow-providers-arangodb==2.1.1
apache-airflow-providers-asana==2.1.0
apache-airflow-providers-atlassian-jira==2.0.1
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cloudant==3.1.0
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-databricks==4.0.0
apache-airflow-providers-datadog==3.1.0
apache-airflow-providers-dbt-cloud==3.1.0
apache-airflow-providers-dingding==3.1.0
apache-airflow-providers-discord==3.1.0
apache-airflow-providers-docker==3.5.1
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-exasol==4.1.3
apache-airflow-providers-facebook==3.1.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-github==2.2.1
apache-airflow-providers-google==8.11.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.0
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-influxdb==2.1.0
apache-airflow-providers-jdbc==3.3.0
apache-airflow-providers-jenkins==3.2.0
apache-airflow-providers-microsoft-azure==5.2.1
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-microsoft-psrp==2.2.0
apache-airflow-providers-microsoft-winrm==3.1.1
apache-airflow-providers-mongo==3.1.1
apache-airflow-providers-mysql==4.0.2
apache-airflow-providers-neo4j==3.2.1
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-openfaas==3.1.0
apache-airflow-providers-opsgenie==5.0.0
apache-airflow-providers-oracle==3.6.0
apache-airflow-providers-pagerduty==3.1.0
apache-airflow-providers-papermill==3.1.1
apache-airflow-providers-plexus==3.1.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-presto==4.2.2
apache-airflow-providers-qubole==3.3.1
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-salesforce==5.3.0
apache-airflow-providers-samba==4.1.0
apache-airflow-providers-segment==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-singularity==3.1.0
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.4
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.5.0
apache-airflow-providers-tableau==4.1.0
apache-airflow-providers-tabular==1.1.0
apache-airflow-providers-telegram==4.0.0
apache-airflow-providers-trino==4.3.2
apache-airflow-providers-vertica==3.3.1
apache-airflow-providers-yandex==3.3.0
apache-airflow-providers-zendesk==4.2.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
Chart based on the official helm chart. Airflow running on Google Kubernetes Engine (GKE) using `KubernetesExecutor`.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33485 | https://github.com/apache/airflow/pull/33503 | d1e6a5c48d03322dda090113134f745d1f9c34d4 | 46aa4294e453d800ef6d327addf72a004be3765f | "2023-08-17T21:15:48Z" | python | "2023-08-18T19:40:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,482 | ["airflow/api_connexion/endpoints/dag_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_schema.py", "airflow/models/dag.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"] | The `/dags/{dag_id}/details` endpoint returns less data than is documented | ### What do you see as an issue?
The [/dags/{dag_id}/details](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/post_set_task_instances_state) endpoint of the REST API does not return all of the keys that are listed in the documentation. If I run `curl -X GET localhost:8080/api/v1/dags/{my_dag}/details` and compare the response with the documentation, the following keys are missing:
```python
>>> for key in docs.keys():
... if not key in actual.keys():
... print(key)
...
root_dag_id
last_parsed_time
last_pickled
last_expired
scheduler_lock
timetable_description
has_task_concurrency_limits
has_import_errors
next_dagrun
next_dagrun_data_interval_start
next_dagrun_data_interval_end
next_dagrun_create_after
template_search_path
```
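For completeness, a self-contained version of the comparison above could look like the sketch below; the base URL, the credentials and the excerpt of documented keys are placeholders, not values taken from a real deployment:
```python
import requests

# Excerpt of the keys listed in the documentation for /dags/{dag_id}/details.
documented_keys = {
    "root_dag_id",
    "last_parsed_time",
    "next_dagrun",
    "template_search_path",
}

resp = requests.get(
    "http://localhost:8080/api/v1/dags/my_dag/details",
    auth=("admin", "admin"),  # placeholder credentials
)
resp.raise_for_status()
actual = resp.json()

for key in sorted(documented_keys - actual.keys()):
    print(key)
```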
### Solving the problem
Either remove these keys from the documentation or fix the API endpoint
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33482 | https://github.com/apache/airflow/pull/34947 | 0e157b38a3e44b5a6fc084c581a025434a97a4c0 | e8f62e8ee56519459d8282dadb1d8c198ea5b9f5 | "2023-08-17T19:15:44Z" | python | "2023-11-24T09:47:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,478 | ["airflow/www/views.py"] | Rendered template malfunction when `execution_date` parameter is malformed | null | https://github.com/apache/airflow/issues/33478 | https://github.com/apache/airflow/pull/33516 | 533afb5128383958889bc653226f46947c642351 | d9814eb3a2fc1dbbb885a0a2c1b7a23ce1cfa148 | "2023-08-17T16:46:41Z" | python | "2023-08-19T16:03:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,461 | ["airflow/providers/amazon/aws/waiters/appflow.json"] | AppflowHook with wait_for_completion = True does not finish executing the task although the appflow flow does. | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
I'm using airflow 2.6.2 with apache-airflow-providers-amazon 8.5.1
When I use AppflowHook with the wait_for_completion parameter set to True the task execution never finishes.
I have checked in Appflow and the flow executes correctly and finishes in a couple of seconds, however, AppflowHook does not finish responding.
If I change wait_for_completion to False, everything works correctly.
The logs show a "403 FORBIDDEN" error, and marking the task as success or failed makes the logs show up correctly.
**Logs during task execution:**
```console
470b2412b735
*** Found local files:
*** * /opt/airflow/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe/attempt=1.log ***
!!!! Please make sure that all your Airflow components (e.g. schedulers, webservers, workers and triggerer) have the same 'secret_key' configured in 'webserver' section and time is synchronized on all your machines (for example with ntpd) See more at https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key
*** Could not read served logs: Client error '403 FORBIDDEN' for url 'http://470b2412b735:8793/log/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe/attempt=1.log' For more information check: https://httpstatuses.com/403
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00
[queued]>
[2023-08-16, 19:04:44 CST] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00
[queued]>
[2023-08-16, 19:04:44 CST] {taskinstance.py:1308} INFO - Starting attempt 1 of 1
[2023-08-16, 19:04:44 CST] {taskinstance.py:1327} INFO - Executing <Task(_PythonDecoratedOperator): extract_from_stripe> on 2023-08-17 01:04:41.723261+00:00
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:57} INFO - Started process 796 to run task
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:84} INFO - Running:
['***', 'tasks', 'run', 'stripe_ingest_flow', 'extract_from_stripe', 'manual__2023-08-17T01:04:41.723261+00:00', '--job-id', '903', '--raw', '--subdir', 'DAGS_FOLDER/stripe_ingest_flow_to_lakehouse/dag.py', '--cfg-path', '/tmp/tmpqz8uvben']
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:85} INFO - Job 903: Subtask extract_from_stripe
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {task_command.py:410} INFO - Running <TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00
[running]> on host 470b2412b735
[2023-08-16, 19:04:44 CST] {taskinstance.py:1545} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='dhernandez' AIRFLOW_CTX_DAG_ID='stripe_ingest_flow' AIRFLOW_CTX_TASK_ID='extract_from_stripe' AIRFLOW_CTX_EXECUTION_DATE='2023-08-17T01:04:41.723261+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2023-08-17T01:04:41.723261+00:00'
[2023-08-16, 19:04:44 CST] {crypto.py:83} WARNING - empty cryptography key - values will not be stored encrypted.
[2023-08-16, 19:04:44 CST] {base.py:73} INFO - Using connection ID 'siclo_***_lakehouse_conn' for task execution.
[2023-08-16, 19:04:44 CST] {connection_wrapper.py:340} INFO - AWS Connection (conn_id='siclo_***_lakehouse_conn', conn_type='aws') credentials retrieved from login and password.
[2023-08-16, 19:04:45 CST] {appflow.py:63} INFO - executionId: 58ad6275-0a70-48d9-8414-f0215924c876
```
**Logs when marking the task as success or failed**
```console
470b2412b735
*** Found local files:
*** * /opt/airflow/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe/attempt=1.log
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00 [queued]>
[2023-08-16, 19:04:44 CST] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00 [queued]>
[2023-08-16, 19:04:44 CST] {taskinstance.py:1308} INFO - Starting attempt 1 of 1
[2023-08-16, 19:04:44 CST] {taskinstance.py:1327} INFO - Executing <Task(_PythonDecoratedOperator): extract_from_stripe> on 2023-08-17 01:04:41.723261+00:00
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:57} INFO - Started process 796 to run task
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:84} INFO - Running: ['***', 'tasks', 'run', 'stripe_ingest_flow', 'extract_from_stripe', 'manual__2023-08-17T01:04:41.723261+00:00', '--job-id', '903', '--raw', '--subdir', 'DAGS_FOLDER/stripe_ingest_flow_to_lakehouse/dag.py', '--cfg-path', '/tmp/tmpqz8uvben']
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:85} INFO - Job 903: Subtask extract_from_stripe
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {task_command.py:410} INFO - Running <TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00 [running]> on host 470b2412b735
[2023-08-16, 19:04:44 CST] {taskinstance.py:1545} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='dhernandez' AIRFLOW_CTX_DAG_ID='stripe_ingest_flow' AIRFLOW_CTX_TASK_ID='extract_from_stripe' AIRFLOW_CTX_EXECUTION_DATE='2023-08-17T01:04:41.723261+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2023-08-17T01:04:41.723261+00:00'
[2023-08-16, 19:04:44 CST] {crypto.py:83} WARNING - empty cryptography key - values will not be stored encrypted.
[2023-08-16, 19:04:44 CST] {base.py:73} INFO - Using connection ID 'siclo_***_lakehouse_conn' for task execution.
[2023-08-16, 19:04:44 CST] {connection_wrapper.py:340} INFO - AWS Connection (conn_id='siclo_***_lakehouse_conn', conn_type='aws') credentials retrieved from login and password.
[2023-08-16, 19:04:45 CST] {appflow.py:63} INFO - executionId: 58ad6275-0a70-48d9-8414-f0215924c876
[2023-08-16, 19:05:24 CST] {local_task_job_runner.py:291} WARNING - State of this instance has been externally set to failed. Terminating instance.
[2023-08-16, 19:05:24 CST] {process_utils.py:131} INFO - Sending 15 to group 796. PIDs of all processes in the group: [796]
[2023-08-16, 19:05:24 CST] {process_utils.py:86} INFO - Sending the signal 15 to group 796
[2023-08-16, 19:05:24 CST] {taskinstance.py:1517} ERROR - Received SIGTERM. Terminating subprocesses.
```
### What you think should happen instead
With wait_for_completion set to True, the task should finish successfully and return the execution ID from Appflow.
### How to reproduce
With a dag that has the following task
```python
@task
def extract():
appflow = AppflowHook(
aws_conn_id='conn_id'
)
execution_id = appflow.run_flow(
flow_name='flow_name',
wait_for_completion=True
        # with wait_for_completion=False it works
)
return execution_id
```
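As an aside, until the waiter behaviour is fixed, one possible workaround is to skip the built-in wait and poll the execution status manually. This is only a rough sketch: the polling interval and the boto3 response field names (`flowExecutions`, `executionId`, `executionStatus`) are assumptions based on the Appflow API documentation.
```python
import time

from airflow.providers.amazon.aws.hooks.appflow import AppflowHook

appflow = AppflowHook(aws_conn_id='conn_id')
execution_id = appflow.run_flow(flow_name='flow_name', wait_for_completion=False)

while True:
    # Poll the flow execution records until our execution leaves the in-progress state.
    records = appflow.conn.describe_flow_execution_records(flowName='flow_name')['flowExecutions']
    status = next(r['executionStatus'] for r in records if r['executionId'] == execution_id)
    if status != 'InProgress':
        break
    time.sleep(10)
```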
The aws connection has the following permissions
- "appflow:DescribeFlow",
- "appflow:StartFlow",
- "appflow:RunFlow",
- "appflow:ListFlows",
- "appflow:DescribeFlowExecutionRecords"
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```
apache-airflow==2.6.2
apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-common-sql==1.5.1
apache-airflow-providers-http==4.4.1
boto3==1.26.76
asgiref==3.7.2
watchtower==2.0.1
jsonpath-ng==1.5.3
redshift-connector==2.0.911
sqlalchemy-redshift==0.8.14
mypy-boto3-appflow==1.28.16
mypy-boto3-rds==1.26.144
mypy-boto3-redshift-data==1.26.109
mypy-boto3-s3==1.26.153
celery==5.3.0
```
### Deployment
Docker-Compose
### Deployment details
Docker 4.10.1 (82475)
Airflow image apache/airflow:2.6.2-python3.11
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33461 | https://github.com/apache/airflow/pull/33613 | 2363fb562db1abaa5bc3bc93b67c96e018c1d78a | 41d9be072abacc47393f700aa8fb98bc2b9a3713 | "2023-08-17T01:47:56Z" | python | "2023-08-22T15:31:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,446 | ["airflow/utils/task_group.py", "tests/decorators/test_task_group.py"] | Task group gets marked as upstream_failed when dynamically mapped with expand_kwargs even though all upstream tasks were skipped or successfully finished. | ### Apache Airflow version
2.6.3
### What happened
I am writing a DAG that transfers data from MSSQL to BigQuery. The part of the ETL process that actually fetches the data from MSSQL and moves it to BQ needs to be parallelized.
I am trying to write it as a task group where the first task moves data from MSSQL to GCS, and the 2nd task loads the file into BQ.
For some odd reason, when I expand the task group it is automatically marked as upstream_failed at the very first moment the DAG is triggered.
I have tested this with a simple dag (provided below) as well and the bug was reproduced.
I found a similar issue [here](https://github.com/apache/airflow/issues/27449) but the bug seems to persist even after configuring `AIRFLOW__SCHEDULER__SCHEDULE_AFTER_TASK_EXECUTION=False`
### What you think should happen instead
The task group should be dynamically expanded **after all upstream tasks have finished** since `expand_kwargs` needs the previous task's output.
### How to reproduce
```python
from datetime import timedelta
from airflow.decorators import dag, task, task_group
from airflow.operators.bash import BashOperator
from pendulum import datetime
@dag(
dag_id="example_task_group_expansion",
schedule="@once",
default_args={
"depends_on_past": False,
"email": ["airflow@example.com"],
"email_on_failure": True,
"email_on_retry": True,
"retries": 0,
"retry_delay": timedelta(minutes=5),
},
start_date=datetime(2023, 8, 1),
catchup=False,
)
def example_dag():
@task(task_id="TaskDistributer")
def task_distributer():
step = 10_000
return [dict(interval_start=i, interval_end=i + step) for i in range(0, 100_000, step)]
@task_group(group_id="tg1")
def tg(interval_start, interval_end):
task1 = BashOperator(
task_id="task1",
bash_command="echo $interval_start -- $interval_end",
env={"interval_start": str(interval_start), "interval_end": str(interval_end)},
)
task2 = BashOperator(
task_id="task2",
bash_command="echo $interval_start -- $interval_end",
env={"interval_start": str(interval_start), "interval_end": str(interval_end)},
)
task1 >> task2
return task2
tg.expand_kwargs(task_distributer())
example_dag()
```
### Operating System
MacOS 13.4.1
### Versions of Apache Airflow Providers
No providers needed to reproduce
### Deployment
Docker-Compose
### Deployment details
Docker-compose
Airflow image: apache/airflow:2.6.3-python3.9
Executor: Celery
Messaging queue: redis
Metadata DB: MySQL 5.7
### Anything else
The problem occurs every time.
Here are some of the scheduler logs that may be relevant.
```
docker logs 3d4e47791238 | grep example_task_group_expansion
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:189 DeprecationWarning: The '[celery] stalled_task_timeout' config option is deprecated. Please update your config to use '[scheduler] task_queued_timeout' instead.
[2023-08-16 14:09:33 +0000] [15] [INFO] Starting gunicorn 20.1.0
[2023-08-16 14:09:33 +0000] [15] [INFO] Listening at: http://[::]:8793 (15)
[2023-08-16 14:09:33 +0000] [15] [INFO] Using worker: sync
[2023-08-16 14:09:33 +0000] [16] [INFO] Booting worker with pid: 16
[2023-08-16 14:09:33 +0000] [17] [INFO] Booting worker with pid: 17
[2023-08-16T14:10:04.870+0000] {dag.py:3504} INFO - Setting next_dagrun for example_task_group_expansion to None, run_after=None
[2023-08-16T14:10:04.881+0000] {scheduler_job_runner.py:1449} DEBUG - DAG example_task_group_expansion not changed structure, skipping dagrun.verify_integrity
[2023-08-16T14:10:04.883+0000] {dagrun.py:711} DEBUG - number of tis tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 3 task(s)
[2023-08-16T14:10:04.883+0000] {dagrun.py:732} DEBUG - number of scheduleable tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 3 task(s)
[2023-08-16T14:10:04.883+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1103} DEBUG - Dependencies all met for dep_context=None ti=<TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [None]>
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task1 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task1 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task1 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1103} DEBUG - Dependencies all met for dep_context=None ti=<TaskInstance: example_task_group_expansion.tg1.task1 scheduled__2023-08-01T00:00:00+00:00 [None]>
[2023-08-16T14:10:04.895+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2023-08-16T14:10:04.895+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2023-08-16T14:10:04.897+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Trigger Rule' PASSED: False, Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_states=_UpstreamTIStates(success=0, skipped=0, failed=0, upstream_failed=0, removed=0, done=0), upstream_task_ids={'tg1.task1'}
[2023-08-16T14:10:04.897+0000] {taskinstance.py:1093} DEBUG - Dependencies not met for <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_states=_UpstreamTIStates(success=0, skipped=0, failed=0, upstream_failed=0, removed=0, done=0), upstream_task_ids={'tg1.task1'}
[2023-08-16T14:10:04.902+0000] {scheduler_job_runner.py:1476} DEBUG - Skipping SLA check for <DAG: example_task_group_expansion> because no tasks in DAG have SLAs
<TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [scheduled]>
[2023-08-16T14:10:04.910+0000] {scheduler_job_runner.py:476} INFO - DAG example_task_group_expansion has 0/16 running and queued tasks
<TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [scheduled]>
[2023-08-16T14:10:04.911+0000] {scheduler_job_runner.py:625} INFO - Sending TaskInstanceKey(dag_id='example_task_group_expansion', task_id='TaskDistributer', run_id='scheduled__2023-08-01T00:00:00+00:00', try_number=1, map_index=-1) to executor with priority 1 and queue default
[2023-08-16T14:10:04.911+0000] {base_executor.py:147} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'example_task_group_expansion', 'TaskDistributer', 'scheduled__2023-08-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/example.py']
[2023-08-16T14:10:04.915+0000] {local_executor.py:86} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'example_task_group_expansion', 'TaskDistributer', 'scheduled__2023-08-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/example.py']
[2023-08-16T14:10:04.948+0000] {scheduler_job_runner.py:1449} DEBUG - DAG example_task_group_expansion not changed structure, skipping dagrun.verify_integrity
[2023-08-16T14:10:04.954+0000] {dagrun.py:711} DEBUG - number of tis tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 3 task(s)
[2023-08-16T14:10:04.954+0000] {dagrun.py:732} DEBUG - number of scheduleable tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 1 task(s)
[2023-08-16T14:10:04.954+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2023-08-16T14:10:04.954+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2023-08-16T14:10:04.958+0000] {taskinstance.py:899} DEBUG - Setting task state for <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> to upstream_failed
[2023-08-16T14:10:04.958+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [upstream_failed]> dependency 'Trigger Rule' PASSED: False, Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_states=_UpstreamTIStates(success=0, skipped=0, failed=0, upstream_failed=1, removed=0, done=1), upstream_task_ids={'tg1.task1'}
[2023-08-16T14:10:04.958+0000] {taskinstance.py:1093} DEBUG - Dependencies not met for <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [upstream_failed]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_states=_UpstreamTIStates(success=0, skipped=0, failed=0, upstream_failed=1, removed=0, done=1), upstream_task_ids={'tg1.task1'}
[2023-08-16T14:10:04.963+0000] {scheduler_job_runner.py:1476} DEBUG - Skipping SLA check for <DAG: example_task_group_expansion> because no tasks in DAG have SLAs
[2023-08-16T14:10:05.236+0000] {dagbag.py:506} DEBUG - Loaded DAG <DAG: example_task_group_expansion>
Changing /usr/local/airflow/logs/dag_id=example_task_group_expansion/run_id=scheduled__2023-08-01T00:00:00+00:00/task_id=TaskDistributer permission to 509
[2023-08-16T14:10:05.265+0000] {task_command.py:410} INFO - Running <TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [queued]> on host 3d4e47791238
[2023-08-16T14:10:05.453+0000] {listener.py:32} INFO - TaskInstance Details: dag_id=example_task_group_expansion, task_id=TaskDistributer, dagrun_id=scheduled__2023-08-01T00:00:00+00:00, map_index=-1, run_start_date=2023-08-16 14:10:05.346669+00:00, try_number=1, job_id=302, op_classpath=airflow.decorators.python._PythonDecoratedOperator, airflow.decorators.base.DecoratedOperator, airflow.operators.python.PythonOperator
[2023-08-16T14:10:06.001+0000] {scheduler_job_runner.py:1449} DEBUG - DAG example_task_group_expansion not changed structure, skipping dagrun.verify_integrity
[2023-08-16T14:10:06.002+0000] {dagrun.py:711} DEBUG - number of tis tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 3 task(s)
[2023-08-16T14:10:06.002+0000] {dagrun.py:609} ERROR - Marking run <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False> failed
[2023-08-16T14:10:06.002+0000] {dagrun.py:681} INFO - DagRun Finished: dag_id=example_task_group_expansion, execution_date=2023-08-01 00:00:00+00:00, run_id=scheduled__2023-08-01T00:00:00+00:00, run_start_date=2023-08-16 14:10:04.875813+00:00, run_end_date=2023-08-16 14:10:06.002810+00:00, run_duration=1.126997, state=failed, external_trigger=False, run_type=scheduled, data_interval_start=2023-08-01 00:00:00+00:00, data_interval_end=2023-08-01 00:00:00+00:00, dag_hash=a89f91f4d5dab071c49b1d98a4bd5c13
[2023-08-16T14:10:06.004+0000] {dag.py:3504} INFO - Setting next_dagrun for example_task_group_expansion to None, run_after=None
[2023-08-16T14:10:06.005+0000] {scheduler_job_runner.py:1476} DEBUG - Skipping SLA check for <DAG: example_task_group_expansion> because no tasks in DAG have SLAs
[2023-08-16T14:10:06.010+0000] {base_executor.py:299} DEBUG - Changing state: TaskInstanceKey(dag_id='example_task_group_expansion', task_id='TaskDistributer', run_id='scheduled__2023-08-01T00:00:00+00:00', try_number=1, map_index=-1)
[2023-08-16T14:10:06.011+0000] {scheduler_job_runner.py:677} INFO - Received executor event with state success for task instance TaskInstanceKey(dag_id='example_task_group_expansion', task_id='TaskDistributer', run_id='scheduled__2023-08-01T00:00:00+00:00', try_number=1, map_index=-1)
[2023-08-16T14:10:06.012+0000] {scheduler_job_runner.py:713} INFO - TaskInstance Finished: dag_id=example_task_group_expansion, task_id=TaskDistributer, run_id=scheduled__2023-08-01T00:00:00+00:00, map_index=-1, run_start_date=2023-08-16 14:10:05.346669+00:00, run_end_date=2023-08-16 14:10:05.518275+00:00, run_duration=0.171606, state=success, executor_state=success, try_number=1, max_tries=0, job_id=302, pool=default_pool, queue=default, priority_weight=1, operator=_PythonDecoratedOperator, queued_dttm=2023-08-16 14:10:04.910449+00:00, queued_by_job_id=289, pid=232
```
As can be seen from the logs, no upstream tasks are in `done` state yet the expanded task is set as `upstream_failed`.
[slack discussion](https://apache-airflow.slack.com/archives/CCQ7EGB1P/p1692107385230939)
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33446 | https://github.com/apache/airflow/pull/33732 | 869f84e9c398dba453456e89357876ed8a11c547 | fe27031382e2034b59a23db1c6b9bdbfef259137 | "2023-08-16T15:21:54Z" | python | "2023-08-29T16:48:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,377 | ["docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | Statsd metrics description is incorrect | ### What do you see as an issue?
<img width="963" alt="Screenshot 2023-08-14 at 9 55 11 AM" src="https://github.com/apache/airflow/assets/10162465/bb493eb2-1cfd-45bb-928a-a4e21e015251">
Here, the dagrun duration success and failure metrics have descriptions stating that the success duration is stored in seconds while the failure duration is stored in milliseconds.
But in the code where these two metrics are recorded, the same duration value is emitted in both cases; only the metric name varies depending on the DAG run state.
<img width="963" alt="Screenshot 2023-08-14 at 10 00 28 AM" src="https://github.com/apache/airflow/assets/10162465/53d5aaa8-4c57-4357-b9c2-8d64164fad7c">
### Solving the problem
It looks like the documentation description for these statsd metrics is misleading.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33377 | https://github.com/apache/airflow/pull/34532 | 08729eddbd7414b932a654763bf62c6221a0e397 | 117e40490865f04aed38a18724fc88a8cf94aacc | "2023-08-14T04:31:46Z" | python | "2023-09-21T18:53:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,375 | ["airflow/models/taskinstance.py", "airflow/operators/python.py", "airflow/utils/context.py", "airflow/utils/context.pyi", "tests/operators/test_python.py"] | Ability to retrieve prev_end_date_success | ### Discussed in https://github.com/apache/airflow/discussions/33345
<div type='discussions-op-text'>
<sup>Originally posted by **vuphamcs** August 11, 2023</sup>
### Description
Is there a variable similar to `prev_start_date_success` but for the previous DAG run’s completion date? The value I’m hoping to retrieve to use within the next DAG run is `2023-08-10 16:04:30`
![image](https://github.com/apache/airflow/assets/1600760/9e2af349-4c3e-47e1-8a1a-9d8827b56f57)
### Use case/motivation
One particular use case is to help guarantee that the next DAG run only queries data that was inserted during the existing DAG run, and not the past DAG run.
```python
prev_ts = context['prev_end_date_success']
sql = f"SELECT * FROM results WHERE created_at > {prev_ts}"
```
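Until such a variable exists, one possible workaround is to look up the previous successful run's end date in the metadata DB from within a task (a sketch, not an official API):
```python
from airflow.models import DagRun
from airflow.utils.session import provide_session
from airflow.utils.state import DagRunState


@provide_session
def prev_success_end_date(dag_id: str, session=None):
    """Return the end_date of the most recent successful DagRun, or None."""
    last_success = (
        session.query(DagRun)
        .filter(DagRun.dag_id == dag_id, DagRun.state == DagRunState.SUCCESS)
        .order_by(DagRun.end_date.desc())
        .first()
    )
    return last_success.end_date if last_success else None
```
Inside a task, `prev_success_end_date(context['dag'].dag_id)` would then stand in for the `prev_end_date_success` lookup above.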
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/33375 | https://github.com/apache/airflow/pull/34528 | 2bcd450e84426fd678b3fa2e4a15757af234e98a | 61a9ab7600a856bb2b1031419561823e227331da | "2023-08-13T23:52:10Z" | python | "2023-11-03T18:31:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,368 | ["BREEZE.rst"] | Preview feature broken for ./BREEZE.rst | ### What do you see as an issue?
The preview for [BREEZE.rst](https://github.com/apache/airflow/blob/main/BREEZE.rst) does not show up since #33318.
This is most probably due to the tilde `~` character for marking sections that is used in this commit.
[The markup specification for sections](https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html#sections) allows for several characters including `~` but it seems that it breaks the GitHub preview feature.
Screenshot of the preview being broken:
<img width="814" alt="preview feature broken" src="https://github.com/apache/airflow/assets/9881262/3dca3de9-68c5-4ed9-861c-accf6d0abdf1">
### Solving the problem
The problem can be solved by reverting to a more conventional character like `-`.
Screenshot of the preview feature restored after replacing `~` with `-`:
<img width="802" alt="preview feature restored" src="https://github.com/apache/airflow/assets/9881262/d1e7c8f8-db77-4423-8a5f-c939d3d4cfce">
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33368 | https://github.com/apache/airflow/pull/33369 | 0cb256411a02516dc9eca88b570abfb8c8a3c35b | 42638549efb5fccce8b5a93e3c2d05716f4ec59c | "2023-08-13T17:32:10Z" | python | "2023-08-13T18:40:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,344 | ["airflow/config_templates/config.yml", "airflow/www/views.py", "newsfragments/33351.significant.rst"] | Not able to trigger DAG with config from UI if param is not defined in a DAG and dag_run.conf is used | ### Apache Airflow version
2.7.0rc1
### What happened
As per https://github.com/apache/airflow/pull/31583, we can now only run a DAG with config from the UI if the DAG has params. However, if a DAG uses dag_run.conf there is no way to run it with config from the UI, and since dag_run.conf is not deprecated most users will be impacted by this.
@hussein-awala also mentioned it in his [voting](https://lists.apache.org/thread/zd9ppxw1xwxsl66w0tyw1wch9flzb03w)
### What you think should happen instead
I think there should be a way to provide config values from the UI when dag_run.conf is used in a DAG.
### How to reproduce
Use the DAG below in 2.7.0rc and you will notice there is no way to provide a conf value from the Airflow UI.
DAG CODE
```python
from airflow.models import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago
dag = DAG(
dag_id="trigger_target_dag",
default_args={"start_date": days_ago(2), "owner": "Airflow"},
tags=["core"],
schedule_interval=None, # This must be none so it's triggered by the controller
is_paused_upon_creation=False, # This must be set so other workers can pick this dag up. mabye it's a bug idk
)
def run_this_func(**context):
print(
f"Remotely received value of {context['dag_run'].conf['message']} for key=message "
)
run_this = PythonOperator(
task_id="run_this",
python_callable=run_this_func,
dag=dag,
)
bash_task = BashOperator(
task_id="bash_task",
bash_command='echo "Here is the message: $message"',
env={"message": '{{ dag_run.conf["message"] if dag_run else "" }}'},
dag=dag,
)
```
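As a possible workaround (a sketch based on my understanding of the new trigger form, not a confirmed fix): declaring a `params` entry that mirrors the conf key should bring back the "Trigger DAG w/ config" form, while the submitted value still ends up in `dag_run.conf`:
```python
from airflow.models import DAG
from airflow.models.param import Param
from airflow.utils.dates import days_ago

dag = DAG(
    dag_id="trigger_target_dag",
    default_args={"start_date": days_ago(2), "owner": "Airflow"},
    schedule_interval=None,
    # A params entry mirroring the expected conf key re-enables the trigger form.
    params={"message": Param("", type="string")},
)
```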
### Operating System
macOS Monterey
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33344 | https://github.com/apache/airflow/pull/33351 | 45713446f37ee4b1ee972ab8b5aa1ac0b2482197 | c0362923fd8250328eab6e60f0cf7e855bfd352e | "2023-08-12T09:34:55Z" | python | "2023-08-13T12:57:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,325 | ["airflow/www/views.py"] | providers view shows description with HTML element | ### Body
In Admin -> Providers view
The description shows a `<br>`
<img width="1286" alt="Screenshot 2023-08-11 at 21 54 25" src="https://github.com/apache/airflow/assets/45845474/2cdba81e-9cea-4ed4-8420-1e9ab4c2eee2">
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/33325 | https://github.com/apache/airflow/pull/33326 | 682176d57263aa2aab1aa8703723270ab3148af4 | 23d542462a1aaa5afcd36dedc3c2a12c840e1d2c | "2023-08-11T19:04:02Z" | python | "2023-08-11T22:58:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,323 | ["tests/jobs/test_triggerer_job.py"] | Flaky test `test_trigger_firing` with ' SQLite objects created in a thread can only be used in that same thread.' | ### Body
Observed in https://github.com/apache/airflow/actions/runs/5835505313/job/15827357798?pr=33309
```
___________________ ERROR at teardown of test_trigger_firing ___________________
self = <sqlalchemy.future.engine.Connection object at 0x7f92d3327910>
def _rollback_impl(self):
assert not self.__branch_from
if self._has_events or self.engine._has_events:
self.dispatch.rollback(self)
if self._still_open_and_dbapi_connection_is_valid:
if self._echo:
if self._is_autocommit_isolation():
self._log_info(
"ROLLBACK using DBAPI connection.rollback(), "
"DBAPI should ignore due to autocommit mode"
)
else:
self._log_info("ROLLBACK")
try:
> self.engine.dialect.do_rollback(self.connection)
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1062:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x7f92d9698650>
dbapi_connection = <sqlalchemy.pool.base._ConnectionFairy object at 0x7f92d33839d0>
def do_rollback(self, dbapi_connection):
> dbapi_connection.rollback()
E sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140269149157120 and this is thread id 140269532822400.
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py:683: ProgrammingError
The above exception was the direct cause of the following exception:
@pytest.fixture(autouse=True, scope="function")
def close_all_sqlalchemy_sessions():
from sqlalchemy.orm import close_all_sessions
close_all_sessions()
yield
> close_all_sessions()
tests/conftest.py:953:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:4315: in close_all_sessions
sess.close()
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:1816: in close
self._close_impl(invalidate=False)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:1858: in _close_impl
transaction.close(invalidate)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:926: in close
transaction.close()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2426: in close
self._do_close()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2649: in _do_close
self._close_impl()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2635: in _close_impl
self._connection_rollback_impl()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2627: in _connection_rollback_impl
self.connection._rollback_impl()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1064: in _rollback_impl
self._handle_dbapi_exception(e, None, None, None, None)
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2134: in _handle_dbapi_exception
util.raise_(
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/compat.py:211: in raise_
raise exception
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1062: in _rollback_impl
self.engine.dialect.do_rollback(self.connection)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x7f92d9698650>
dbapi_connection = <sqlalchemy.pool.base._ConnectionFairy object at 0x7f92d33839d0>
def do_rollback(self, dbapi_connection):
> dbapi_connection.rollback()
E sqlalchemy.exc.ProgrammingError: (sqlite3.ProgrammingError) SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140269149157120 and this is thread id 140269532822400.
E (Background on this error at: https://sqlalche.me/e/14/f405)
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py:683: ProgrammingError
------------------------------ Captured log call -------------------------------
INFO airflow.jobs.triggerer_job_runner:triggerer_job_runner.py:171 Setting up TriggererHandlerWrapper with handler <FileTaskHandler (NOTSET)>
INFO airflow.jobs.triggerer_job_runner:triggerer_job_runner.py:227 Setting up logging queue listener with handlers [<LocalQueueHandler (NOTSET)>, <TriggererHandlerWrapper (NOTSET)>]
INFO airflow.jobs.triggerer_job_runner.TriggerRunner:triggerer_job_runner.py:596 trigger test_dag/test_run/test_ti/-1/1 (ID 1) starting
INFO airflow.jobs.triggerer_job_runner.TriggerRunner:triggerer_job_runner.py:600 Trigger test_dag/test_run/test_ti/-1/1 (ID 1) fired: TriggerEvent<True>
Level 100 airflow.triggers.testing.SuccessTrigger:triggerer_job_runner.py:633 trigger end
INFO airflow.jobs.triggerer_job_runner.TriggerRunner:triggerer_job_runner.py:622 trigger test_dag/test_run/test_ti/-1/1 (ID 1) completed
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/33323 | https://github.com/apache/airflow/pull/34075 | 601b9cd33c5f1a92298eabb3934a78fb10ca9a98 | 47f79b9198f3350951dc21808c36f889bee0cd06 | "2023-08-11T18:50:00Z" | python | "2023-09-04T14:40:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,319 | [".github/workflows/release_dockerhub_image.yml"] | Documentation outdated on dockerhub | ### What do you see as an issue?
On:
https://hub.docker.com/r/apache/airflow
It says in several places that the last version is 2.3.3 like here:
![image](https://github.com/apache/airflow/assets/19591174/640b0e73-dfcb-4118-ae5f-c0376ebb98a3)
### Solving the problem
Update the version and requirements to 2.6.3.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33319 | https://github.com/apache/airflow/pull/33348 | 50765eb0883652c16b40d69d8a1ac78096646610 | 98fb7d6e009aaf4bd06ffe35e526af2718312607 | "2023-08-11T17:06:15Z" | python | "2023-08-12T14:22:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,310 | ["airflow/dag_processing/manager.py", "airflow/models/dag.py", "airflow/models/dagcode.py", "airflow/models/serialized_dag.py", "tests/dag_processing/test_job_runner.py", "tests/models/test_dag.py"] | Multiple DAG processors with separate DAG directories keep deactivating each other's DAGs | ### Apache Airflow version
2.6.3
### What happened
When running multiple standalone DAG processors with separate DAG directories using the `--subdir` argument the processors keep deactivating each other's DAGs (and reactivating their own).
After stepping through the code with a debugger I think the issue is that the calls [here](https://github.com/apache/airflow/blob/2.6.3/airflow/dag_processing/manager.py#L794) and [here](https://github.com/apache/airflow/blob/2.6.3/airflow/dag_processing/manager.py#L798) have no awareness of the DAG directories.
### What you think should happen instead
The DAG processors should not touch each other's DAGs in the metadata DB.
### How to reproduce
Start two or more standalone DAG processors with separate DAG directories and observe (e.g. via the UI) how the list of active DAGs keeps changing constantly.
### Operating System
Linux 94b223524983 6.1.32-0-virt #1-Alpine SMP PREEMPT_DYNAMIC Mon, 05 Jun 2023 09:39:09 +0000 x86_64 x86_64 x86_64 GNU/Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33310 | https://github.com/apache/airflow/pull/33357 | 3857d3399c2e5f4c3e0a838b7a76296c4aa19b3e | 35b18306a4928152fd1834964fc8ce0033811817 | "2023-08-11T08:28:44Z" | python | "2023-08-14T20:45:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,300 | ["airflow/providers/mysql/hooks/mysql.py", "tests/providers/mysql/hooks/test_mysql.py", "tests/providers/mysql/hooks/test_mysql_connector_python.py"] | MySqlHook add support for init_command | ### Description
There is currently no way to pass an `init_command` connection argument for a mysql connection when using either the `mysqlclient` or `mysql-connector-python` libraries with the MySql provider's `MySqlHook`.
Documentation for connection arguments for `mysqlclient` library, listing `init_command`:
https://mysqlclient.readthedocs.io/user_guide.html?highlight=init_command#functions-and-attributes
Documentation for connection arguments for `mysql-connector-python` library, listing `init_command`:
https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html
There can be many uses for `init_command`, but it also raises the question of why we explicitly provide support for certain connection arguments and not others.
### Use case/motivation
For my own use right now I am subclassing the hook and altering the connection arguments to pass in `init_command`, which sets the `time_zone` session variable to UTC at connection time, like so:
```python
conn_config['init_command'] = r"""SET time_zone = '+00:00';"""
```
Note: This is just an example; there can be many other uses for `init_command` besides the one above. Also, I am aware there is a `time_zone` argument for connections via the `mysql-connector-python` library; however, that argument is not supported by connections made with the `mysqlclient` library. Both libraries do support the `init_command` argument.
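For reference, this is roughly how `init_command` is passed when connecting with `mysqlclient` directly; the connection parameters here are placeholders:
```python
import MySQLdb

conn = MySQLdb.connect(
    host="localhost",
    user="airflow",
    passwd="***",  # placeholder credentials
    db="airflow_db",
    init_command="SET time_zone = '+00:00';",
)
```
The feature request is essentially for `MySqlHook` to forward such an argument from the connection configuration instead of requiring a subclass.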
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33300 | https://github.com/apache/airflow/pull/33359 | ea8519c0554d16b13d330a686f8479fc10cc58f2 | dce9796861e0a535952f79b0e2a7d5a012fcc01b | "2023-08-11T00:41:55Z" | python | "2023-08-18T05:58:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,256 | ["airflow/sensors/time_sensor.py", "tests/sensors/test_time_sensor.py"] | TimeSensorAsync does not use DAG timezone to convert naive time input | ### Apache Airflow version
2.6.3
### What happened
TimeSensor and TimeSensorAsync convert timezones differently.
TimeSensor converts a naive time into a tz-aware time with `self.dag.timezone`. TimeSensorAsync does not, and erroneously converts it to UTC instead.
### What you think should happen instead
TimeSensor and TimeSensorAsync should behave the same.
### How to reproduce
Compare the logic of TimeSensor versus TimeSensorAsync, given a DAG with a UTC+2 (for example `Europe/Berlin`) timezone and the target_time input of `datetime.time(9, 0)`.
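To make the expected difference concrete, here is a small stdlib-only illustration; the date and the simplification of the sensors' internals are assumptions:
```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

naive = datetime(2023, 8, 9, 9, 0)  # the naive target_time of 09:00 combined with a date

# Interpreting the naive time in the DAG timezone (what TimeSensor effectively does):
as_dag_tz = naive.replace(tzinfo=ZoneInfo("Europe/Berlin"))
# Interpreting the naive time as UTC (what TimeSensorAsync ends up doing):
as_utc = naive.replace(tzinfo=timezone.utc)

print(as_dag_tz.astimezone(timezone.utc))  # 2023-08-09 07:00:00+00:00
print(as_utc)                              # 2023-08-09 09:00:00+00:00 -> fires two hours late
```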
### Operating System
Official container image, Debian GNU/Linux 11 (bullseye), Python 3.10.12
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
EKS + Kustomize stack with airflow-ui, airflow-scheduler, and airflow-triggerer.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33256 | https://github.com/apache/airflow/pull/33406 | 84a3daed8691d5e129eaf3e02061efb8b6ca56cb | 6c50ef59cc4f739f126e5b123775340a3351a3e8 | "2023-08-09T11:52:35Z" | python | "2023-10-12T03:27:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,255 | ["airflow/providers/microsoft/azure/secrets/key_vault.py"] | Azure KeyVault Backend logging level | ### Apache Airflow version
2.6.3
### What happened
I've set up Azure keyvaults as a [backend](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/secrets-backends/azure-key-vault.html) for fetching connections, and it works fine. However, there's just too much logging, and it makes it hard for our users to read the logs. For example:
```
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://REDACTED.vault.azure.net/secrets/airflow-connections-ode-odbc-dev-dw/?api-version=REDACTED'
Request method: 'GET'
Request headers:
'Accept': 'application/json'
'x-ms-client-request-id': '6cdf2a74-36a8-11ee-8cac-6ac595ee5ea6'
'User-Agent': 'azsdk-python-keyvault-secrets/4.7.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 401
Response headers:
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Content-Length': '97'
'Content-Type': 'application/json; charset=utf-8'
'Expires': '-1'
'WWW-Authenticate': 'Bearer authorization="https://login.microsoftonline.com/100b3c99-f3e2-4da0-9c8a-b9d345742c36", resource="https://vault.azure.net"'
'x-ms-keyvault-region': 'REDACTED'
'x-ms-client-request-id': '6cdf2a74-36a8-11ee-8cac-6ac595ee5ea6'
'x-ms-request-id': '563d7428-9df4-4d6a-9766-19626395056f'
'x-ms-keyvault-service-version': '1.9.908.1'
'x-ms-keyvault-network-info': 'conn_type=Ipv4;addr=20.76.1.64;act_addr_fam=InterNetwork;'
'X-Content-Type-Options': 'REDACTED'
'Strict-Transport-Security': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://login.microsoftonline.com/100b3c99-f3e2-4da0-9c8a-b9d345742c36/v2.0/.well-known/openid-configuration'
Request method: 'GET'
Request headers:
'User-Agent': 'azsdk-python-identity/1.13.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 200
Response headers:
'Cache-Control': 'max-age=86400, private'
'Content-Type': 'application/json; charset=utf-8'
'Strict-Transport-Security': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'Access-Control-Allow-Origin': 'REDACTED'
'Access-Control-Allow-Methods': 'REDACTED'
'P3P': 'REDACTED'
'x-ms-request-id': '80869b0e-4cde-47f7-8721-3f430a8c3600'
'x-ms-ests-server': 'REDACTED'
'X-XSS-Protection': 'REDACTED'
'Set-Cookie': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
'Content-Length': '1753'
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://login.microsoftonline.com/common/discovery/instance?api-version=REDACTED&authorization_endpoint=REDACTED'
Request method: 'GET'
Request headers:
'Accept': 'application/json'
'User-Agent': 'azsdk-python-identity/1.13.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 200
Response headers:
'Cache-Control': 'max-age=86400, private'
'Content-Type': 'application/json; charset=utf-8'
'Strict-Transport-Security': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'Access-Control-Allow-Origin': 'REDACTED'
'Access-Control-Allow-Methods': 'REDACTED'
'P3P': 'REDACTED'
'x-ms-request-id': '93b3dfad-72c7-4629-8625-d2b335363a00'
'x-ms-ests-server': 'REDACTED'
'X-XSS-Protection': 'REDACTED'
'Set-Cookie': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
'Content-Length': '945'
[2023-08-09, 13:32:30 CEST] {_universal.py:510} INFO - Request URL: 'https://login.microsoftonline.com/100b3c99-f3e2-4da0-9c8a-b9d345742c36/oauth2/v2.0/token'
Request method: 'POST'
Request headers:
'Accept': 'application/json'
'x-client-sku': 'REDACTED'
'x-client-ver': 'REDACTED'
'x-client-os': 'REDACTED'
'x-client-cpu': 'REDACTED'
'x-ms-lib-capability': 'REDACTED'
'client-request-id': 'REDACTED'
'x-client-current-telemetry': 'REDACTED'
'x-client-last-telemetry': 'REDACTED'
'User-Agent': 'azsdk-python-identity/1.13.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
A body is sent with the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 200
Response headers:
'Cache-Control': 'no-store, no-cache'
'Pragma': 'no-cache'
'Content-Type': 'application/json; charset=utf-8'
'Expires': '-1'
'Strict-Transport-Security': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'P3P': 'REDACTED'
'client-request-id': 'REDACTED'
'x-ms-request-id': '79cf595b-4f41-47c3-a370-f9321c533a00'
'x-ms-ests-server': 'REDACTED'
'x-ms-clitelem': 'REDACTED'
'X-XSS-Protection': 'REDACTED'
'Set-Cookie': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
'Content-Length': '1313'
[2023-08-09, 13:32:30 CEST] {chained.py:87} INFO - DefaultAzureCredential acquired a token from EnvironmentCredential
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://REDACTED.vault.azure.net/secrets/airflow-connections-ode-odbc-dev-dw/?api-version=REDACTED'
Request method: 'GET'
Request headers:
'Accept': 'application/json'
'x-ms-client-request-id': '6cdf2a74-36a8-11ee-8cac-6ac595ee5ea6'
'User-Agent': 'azsdk-python-keyvault-secrets/4.7.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
'Authorization': 'REDACTED'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 404
Response headers:
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Content-Length': '332'
'Content-Type': 'application/json; charset=utf-8'
'Expires': '-1'
'x-ms-keyvault-region': 'REDACTED'
'x-ms-client-request-id': '6cdf2a74-36a8-11ee-8cac-6ac595ee5ea6'
'x-ms-request-id': 'ac41c47c-30f0-46cf-9157-6e5dba031ffa'
'x-ms-keyvault-service-version': '1.9.908.1'
'x-ms-keyvault-network-info': 'conn_type=Ipv4;addr=20.76.1.64;act_addr_fam=InterNetwork;'
'x-ms-keyvault-rbac-assignment-id': 'REDACTED'
'x-ms-keyvault-rbac-cache': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'Strict-Transport-Security': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
[2023-08-09, 13:32:30 CEST] {base.py:73} INFO - Using connection ID 'ode-odbc-dev-dw' for task execution.
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://REDACTED.vault.azure.net/secrets/airflow-connections-ode-odbc-dev-dw/?api-version=REDACTED'
Request method: 'GET'
Request headers:
'Accept': 'application/json'
'x-ms-client-request-id': '6d2b797e-36a8-11ee-8cac-6ac595ee5ea6'
'User-Agent': 'azsdk-python-keyvault-secrets/4.7.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
'Authorization': 'REDACTED'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 404
Response headers:
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Content-Length': '332'
'Content-Type': 'application/json; charset=utf-8'
'Expires': '-1'
'x-ms-keyvault-region': 'REDACTED'
'x-ms-client-request-id': '6d2b797e-36a8-11ee-8cac-6ac595ee5ea6'
'x-ms-request-id': 'ac37f859-14cc-48e3-8a88-6214b96ef75e'
'x-ms-keyvault-service-version': '1.9.908.1'
'x-ms-keyvault-network-info': 'conn_type=Ipv4;addr=20.76.1.64;act_addr_fam=InterNetwork;'
'x-ms-keyvault-rbac-assignment-id': 'REDACTED'
'x-ms-keyvault-rbac-cache': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'Strict-Transport-Security': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
[2023-08-09, 13:32:31 CEST] {base.py:73} INFO - Using connection ID 'ode-odbc-dev-dw' for task execution.
```
Changing the Airflow logging_level to WARNING/ERROR is one option, but then the task logs no longer contain enough information. Is it possible to influence just the logging level of the SecretClient?
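A possible interim approach (a sketch based on my assumption that the chatter comes from the Azure SDK loggers; not something the provider exposes today) is to raise only those loggers' levels, for example from a custom logging config:
```python
import logging

# Assumed logger names: the request/response dumps come from azure-core's
# HttpLoggingPolicy, the credential chain messages from azure-identity.
for noisy_logger in (
    "azure.core.pipeline.policies.http_logging_policy",
    "azure.identity",
):
    logging.getLogger(noisy_logger).setLevel(logging.WARNING)
```
That keeps task logs at INFO while silencing the SDK's per-request output, but a proper option in backend_kwargs would still be preferable.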
### What you think should happen instead
Ideally, it should be possible to set the logging level specifically for the Key Vault backend in the backend_kwargs:
```
backend_kwargs = {"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "vault_url": "https://example-akv-resource-name.vault.azure.net/", "logging_level": "WARNING"}
```
### How to reproduce
Set up KV backend as described [here](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/secrets-backends/azure-key-vault.html).
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed with Helm chart on AKS.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33255 | https://github.com/apache/airflow/pull/33314 | dfb2403ec4b6d147ac31125631677cee9e12347e | 4460356c03e5c1dedd72ce87a8ccfb9b19a33d76 | "2023-08-09T11:31:35Z" | python | "2023-08-13T22:40:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,248 | ["airflow/providers/amazon/aws/hooks/glue.py", "airflow/providers/amazon/aws/operators/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py", "tests/providers/amazon/aws/operators/test_glue.py"] | GlueOperator: iam_role_arn as a parameter | ### Description
Hi,
There is a mandatory `iam_role_name` parameter for GlueJobOperator/GlueJobHook.
It adds an additional step of translating the role name to the ARN, which needs connectivity to the global AWS IAM endpoint (no PrivateLink available).
For private setups this means opening connectivity plus proxy configuration to make it work.
It would be great to also have the possibility of passing `iam_role_arn` directly and avoiding this additional step.
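A sketch of the proposed usage (`iam_role_arn` is the requested new parameter and the job/role values are placeholders; today only `iam_role_name` exists):
```python
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

submit_glue_job = GlueJobOperator(
    task_id="submit_glue_job",
    job_name="my_glue_job",
    script_location="s3://my-bucket/scripts/job.py",
    # Proposed: accept the ARN directly and skip the iam.get_role lookup
    iam_role_arn="arn:aws:iam::123456789012:role/GlueJobRole",
)
```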
### Use case/motivation
Role assignment does not need external connectivity; add the possibility of passing the role ARN instead of the name.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33248 | https://github.com/apache/airflow/pull/33408 | cc360b73c904b7f24a229282458ee05112468f5d | 60df70526a00fb9a3e245bb3ffb2a9faa23582e7 | "2023-08-09T07:59:10Z" | python | "2023-08-15T21:20:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,217 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | get_current_context not present in user_defined_macros | ### Apache Airflow version
2.6.3
### What happened
get_current_context() fails inside a user_defined_macros function and gives:
```
{abstractoperator.py:594} ERROR - Exception rendering Jinja template for task 'toot', field 'op_kwargs'. Template: {'arg': '{{ macro_run_id() }}'}
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/abstractoperator.py", line 586, in _do_render_template_fields
rendered_content = self.render_template(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/template/templater.py", line 168, in render_template
return {k: self.render_template(v, context, jinja_env, oids) for k, v in value.items()}
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/template/templater.py", line 168, in <dictcomp>
return {k: self.render_template(v, context, jinja_env, oids) for k, v in value.items()}
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/template/templater.py", line 156, in render_template
return self._render(template, context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/abstractoperator.py", line 540, in _render
return super()._render(template, context, dag=dag)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/template/templater.py", line 113, in _render
return render_template_to_string(template, context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/helpers.py", line 288, in render_template_to_string
return render_template(template, cast(MutableMapping[str, Any], context), native=False)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/helpers.py", line 283, in render_template
return "".join(nodes)
File "<template>", line 12, in root
File "/home/airflow/.local/lib/python3.8/site-packages/jinja2/sandbox.py", line 393, in call
return __context.call(__obj, *args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/jinja2/runtime.py", line 298, in call
return __obj(*args, **kwargs)
File "/opt/airflow/dags/dags/exporter/finance_closing.py", line 7, in macro_run_id
schedule_interval = get_current_context()["dag"].schedule_interval.replace("@", "")
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 784, in get_current_context
raise AirflowException(
airflow.exceptions.AirflowException: Current context was requested but no context was found! Are you running within an airflow task?
```
### What you think should happen instead
User macros should be able to access the current context, instead of failing with:
```
airflow.exceptions.AirflowException: Current context was requested but no context was found! Are you running within an airflow task?
```
### How to reproduce
```python
from airflow.models import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago
def macro_run_id():
from airflow.operators.python import get_current_context
a = get_current_context()["dag"].schedule_interval.replace("@", "")
if a == "None":
a = "manual"
return a
with DAG(dag_id="example2",
start_date=days_ago(61),
user_defined_macros={"macro_run_id": macro_run_id},
schedule_interval="@monthly"):
def toto(arg):
print(arg)
PythonOperator(task_id="toot", python_callable=toto,
op_kwargs={"arg": "{{ macro_run_id() }}"})
```
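A possible workaround (a sketch, assuming the macro only needs objects that are already available in the template context) is to pass them to the macro explicitly instead of calling `get_current_context()`:
```python
def macro_run_id(dag):
    # `dag` is passed in from the Jinja context: "{{ macro_run_id(dag) }}"
    interval = str(dag.schedule_interval).replace("@", "")
    return "manual" if interval == "None" else interval
```
The task would then use `op_kwargs={"arg": "{{ macro_run_id(dag) }}"}`, but macros calling `get_current_context()` directly should arguably work as well.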
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33217 | https://github.com/apache/airflow/pull/33645 | 47682042a45501ab235d612580b8284a8957523e | 9fa782f622ad9f6e568f0efcadf93595f67b8a20 | "2023-08-08T17:17:47Z" | python | "2023-08-24T13:33:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,203 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | Provider apache-airflow-providers-microsoft-azure no longer==6.2.3 expose `account_name` | ### Apache Airflow version
2.6.3
### What happened
Up to version apache-airflow-providers-microsoft-azure==6.2.2, if you do `WasbHook(wasb_conn_id=self.conn_id).get_conn().account_name` you will get the `account_name`. But in version `apache-airflow-providers-microsoft-azure==6.2.3` this is no longer working for the connection below:
```
- conn_id: wasb_conn_with_access_key
conn_type: wasb
host: astrosdk.blob.core.windows.net
description: null
extra:
shared_access_key: $AZURE_WASB_ACCESS_KEY
```
### What you think should happen instead
We should still get the `account_name` with `apache-airflow-providers-microsoft-azure==6.2.3`.
### How to reproduce
Install `apache-airflow-providers-microsoft-azure==6.2.3` and run the code below: `WasbHook(wasb_conn_id=self.conn_id).get_conn().account_name`
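A minimal reproduction sketch (it assumes the `wasb_conn_with_access_key` connection defined above exists):
```python
from airflow.providers.microsoft.azure.hooks.wasb import WasbHook

client = WasbHook(wasb_conn_id="wasb_conn_with_access_key").get_conn()
print(client.account_name)  # populated on 6.2.2, but not on 6.2.3
```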
### Operating System
Mac
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33203 | https://github.com/apache/airflow/pull/33457 | 8b7e0babe1c3e9bef6e934d1e362564bc73fda4d | bd608a56abd1a6c2a98987daf7f092d2dabea555 | "2023-08-08T12:00:13Z" | python | "2023-08-17T07:55:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,178 | ["airflow/cli/commands/task_command.py", "airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "airflow/utils/task_instance_session.py", "tests/decorators/test_python.py", "tests/models/test_mappedoperator.py", "tests/models/test_renderedtifields.py", "tests/models/test_xcom_arg_map.py"] | Flaky `test_xcom_map_error_fails_task` test | ### Body
This flaky test has appeared recently in our jobs and it seems to be a real problem with our code; after a few attempts at fixing it, it still shows up in our builds:
```
tests/models/test_xcom_arg_map.py:174:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/utils/session.py:74: in wrapper
return func(*args, **kwargs)
airflow/models/taskinstance.py:1840: in run
self._run_raw_task(
airflow/utils/session.py:74: in wrapper
return func(*args, **kwargs)
airflow/models/taskinstance.py:1494: in _run_raw_task
session.commit()
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:1454: in commit
self._transaction.commit(_to_root=self.future)
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:832: in commit
self._prepare_impl()
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:811: in _prepare_impl
self.session.flush()
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:3449: in flush
self._flush(objects)
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:3589: in _flush
transaction.rollback(_capture_exception=True)
/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py:70: in __exit__
compat.raise_(
/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py:211: in raise_
raise exception
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:3549: in _flush
flush_context.execute()
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py:456: in execute
rec.execute(self)
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py:630: in execute
util.preloaded.orm_persistence.save_obj(
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py:237: in save_obj
_emit_update_statements(
/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py:1001: in _emit_update_statements
c = connection._execute_20(
/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py:1710: in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py:334: in _execute_on_connection
return connection._execute_clauseelement(
/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py:1577: in _execute_clauseelement
ret = self._execute_context(
/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py:1953: in _execute_context
self._handle_dbapi_exception(
/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py:2134: in _handle_dbapi_exception
util.raise_(
/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py:211: in raise_
raise exception
/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py:1910: in _execute_context
self.dialect.do_execute(
/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py:736: in do_execute
cursor.execute(statement, parameters)
/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py:174: in execute
self._discard()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <MySQLdb.cursors.Cursor object at 0x7f52bc978a60>
def _discard(self):
self.description = None
self.description_flags = None
# Django uses some member after __exit__.
# So we keep rowcount and lastrowid here. They are cleared in Cursor._query().
# self.rowcount = 0
# self.lastrowid = None
self._rows = None
self.rownumber = None
if self._result:
self._result.discard()
self._result = None
con = self.connection
if con is None:
return
> while con.next_result() == 0: # -1 means no more data.
E sqlalchemy.exc.ProgrammingError: (MySQLdb.ProgrammingError) (2014, "Commands out of sync; you can't run this command now")
E [SQL: UPDATE task_instance SET pid=%s, updated_at=%s WHERE task_instance.dag_id = %s AND task_instance.task_id = %s AND task_instance.run_id = %s AND task_instance.map_index = %s]
E [parameters: (90, datetime.datetime(2023, 8, 7, 14, 44, 7, 580365), 'test_dag', 'pull', 'test', 0)]
E (Background on this error at: https://sqlalche.me/e/14/f405)
```
```
E sqlalchemy.exc.ProgrammingError: (MySQLdb.ProgrammingError) (2014, "Commands out of sync; you can't run this command now")
E [SQL: UPDATE task_instance SET pid=%s, updated_at=%s WHERE task_instance.dag_id = %s AND task_instance.task_id = %s AND task_instance.run_id = %s AND task_instance.map_index = %s]
E [parameters: (90, datetime.datetime(2023, 8, 7, 14, 44, 7, 580365), 'test_dag', 'pull', 'test', 0)]
E (Background on this error at: https://sqlalche.me/e/14/f405)
```
Example failures:
* https://github.com/apache/airflow/actions/runs/5786336184/job/15681127372?pr=33144
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/33178 | https://github.com/apache/airflow/pull/33309 | 20d81428699db240b65f72a92183255c24e8c19b | ef85c673d81cbeb60f29a978c5dc61787d61253e | "2023-08-07T15:43:04Z" | python | "2023-09-05T14:32:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,162 | ["Dockerfile", "scripts/docker/clean-logs.sh"] | Empty log folders are not removed when clean up | ### Apache Airflow version
main (development)
### What happened
Empty log folders use up all inodes and they are not removed by [clean-logs.sh](https://github.com/apache/airflow/blob/main/scripts/docker/clean-logs.sh).
This is the inode usage before and after cleaning the empty folders (on a 50 GB disk):
```
airflow@airflow-worker-2:/opt/airflow/logs$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme2n1 3276800 1311542 1965258 41% /opt/airflow/logs
airflow@airflow-worker-2:/opt/airflow/logs$ find . -type d -empty -delete
airflow@airflow-worker-2:/opt/airflow/logs$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme2n1 3276800 158708 3118092 5% /opt/airflow/logs
```
### What you think should happen instead
_No response_
### How to reproduce
Have lots of frequent DAGs.
### Operating System
Debian 11
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33162 | https://github.com/apache/airflow/pull/33252 | bd11ea81e50f602d1c9f64c44c61b4e7294aafa9 | 93c3ccbdf2e60a7c3721ce308edae8b6591c9f23 | "2023-08-07T02:28:16Z" | python | "2023-08-13T22:10:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,138 | ["airflow/providers/redis/sensors/redis_pub_sub.py", "tests/providers/redis/sensors/test_redis_pub_sub.py"] | Move redis subscribe to poke() method in Redis Sensor (#32984): @potiuk | The fix has a bug (subscribe happens too frequently) | https://github.com/apache/airflow/issues/33138 | https://github.com/apache/airflow/pull/33139 | 76ca94d2f23de298bb46668998c227a86b4ecbd0 | 29a59de237ccd42a3a5c20b10fc4c92b82ff4475 | "2023-08-05T09:05:21Z" | python | "2023-08-05T10:28:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,099 | ["chart/templates/_helpers.yaml", "chart/templates/configmaps/configmap.yaml", "chart/templates/scheduler/scheduler-deployment.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "helm_tests/airflow_core/test_scheduler.py", "helm_tests/webserver/test_webserver.py"] | Add startupProbe to airflow helm charts | ### Description
Introducing a startupProbe on the Airflow services would be useful for slow-starting containers and, most of all, it doesn't have side effects.
### Use case/motivation
We have an internal feature where we copy a venv from the Airflow services to cloud storage, which can sometimes take a few minutes. Copying a venv is a metadata-heavy load: https://learn.microsoft.com/en-us/troubleshoot/azure/azure-storage/files-troubleshoot-performance?tabs=linux#cause-2-metadata-or-namespace-heavy-workload.
Introducing a startupProbe on the Airflow services would be useful for slow-starting containers and, most of all, it doesn't have side effects.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33099 | https://github.com/apache/airflow/pull/33107 | 9736143468cfe034e65afb3df3031ab3626f0f6d | ca5acda1617a5cdb1d04f125568ffbd264209ec7 | "2023-08-04T07:14:41Z" | python | "2023-08-07T20:03:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,061 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | TriggerDagRunOperator DAG task log showing Warning: Unable to redact <DagRunState.SUCCESS: 'success'> | ### Apache Airflow version
main (development)
### What happened
The TriggerDagRunOperator task log shows the warning below:
`WARNING - Unable to redact <DagRunState.SUCCESS: 'success'>, please report this via <https://github.com/apache/airflow/issues>. Error was: TypeError: EnumMeta.__call__() missing 1 required positional argument: 'value'`
<img width="1479" alt="image" src="https://github.com/apache/airflow/assets/43964496/0c183ffc-2440-49ee-b8d0-951ddc078c36">
### What you think should happen instead
There should not be any such warning in the logs.
### How to reproduce
Steps to reproduce:
1. Launch airflow using Breeze with main
2. Trigger any TriggerDagRunOperator
3. Check logs
DAG :
```python
from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
"""This example illustrates the use of the TriggerDagRunOperator. There are 2
entities at work in this scenario:
1. The Controller DAG - the DAG that conditionally executes the trigger
2. The Target DAG - DAG being triggered (in trigger_dagrun_target.py)
"""
dag = DAG(
dag_id="trigger_controller_dag",
default_args={"owner": "airflow", "start_date": days_ago(2)},
schedule_interval=None,
tags=["core"],
)
trigger = TriggerDagRunOperator(
task_id="test_trigger_dagrun",
trigger_dag_id="trigger_target_dag",
reset_dag_run=True,
wait_for_completion=True,
conf={"message": "Hello World"},
dag=dag,
)
```
Note: create a DAG `trigger_target_dag` that, for example, sleeps for some time; a minimal sketch is shown below.
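A possible `trigger_target_dag` (an assumption; any sufficiently slow task works just as well):
```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago

with DAG(
    dag_id="trigger_target_dag",
    default_args={"owner": "airflow", "start_date": days_ago(2)},
    schedule_interval=None,
    tags=["core"],
):
    # Sleep long enough for the controller's wait_for_completion polling to run a few times.
    BashOperator(task_id="sleep", bash_command="sleep 60")
```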
### Operating System
OS x
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33061 | https://github.com/apache/airflow/pull/33065 | 1ff33b800246fdbfa7aebe548055409d64307f46 | b0f61be2f9791b75da3bca0bc30fdbb88e1e0a8a | "2023-08-03T05:57:24Z" | python | "2023-08-03T13:30:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,014 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Clearing task from List Task Instance page in UI does not also clear downstream tasks? | ### Apache Airflow version
2.6.3
### What happened
Select tasks from the List Task Instance page in the UI and select Clear.
Only those tasks are cleared; downstream tasks are not also cleared, as they are when clearing from the DAG graph view.
### What you think should happen instead
downstream tasks should also be cleared
### How to reproduce
Select tasks that have downstream tasks from the List Task Instance page in the UI and select Clear.
### Operating System
rocky
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33014 | https://github.com/apache/airflow/pull/34529 | 541c9addb6b2ee56244793503cbf5c218e80dec8 | 5b0ce3db4d36e2a7f20a78903daf538bbde5e38a | "2023-08-01T19:39:33Z" | python | "2023-09-22T17:54:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,996 | ["airflow/models/taskinstance.py"] | Task instance log_url is overwrites existing path in base_url | ### Apache Airflow version
2.6.3
### What happened
A task instance's [log_url](https://github.com/apache/airflow/blob/2.6.3/airflow/models/taskinstance.py#L726) does not contain the full URL defined in [base_url](https://github.com/apache/airflow/blob/2.6.3/airflow/models/taskinstance.py#L729C9-L729C69).
### What you think should happen instead
The base_url may contain a path that should be taken into account when building the log_url.
The log_url is built with [urljoin](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urljoin). Due to how urljoin builds URLs, any existing path is discarded, leading to a faulty URL.
### How to reproduce
This snippet showcases how urljoin ignores existing paths when building the url.
```
>>> from urllib.parse import urljoin
>>>
>>>
>>> urljoin(
... "https://my.astronomer.run/path",
... f"log?execution_date=test"
... f"&task_id=wow"
... f"&dag_id=super"
... f"&map_index=-1",
... )
'https://eochgroup.astronomer.run/log?execution_date=test&task_id=wow&dag_id=super&map_index=-1'
```
### Operating System
n/a
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
This was introduced by #31833.
A way to fix this can be to utilize [urlsplit](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlsplit) and [urlunsplit](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlunsplit) to account for existing paths.
```
from urllib.parse import urlsplit, urlunsplit
parts = urlsplit("https://my.astronomer.run/paths")
urlunsplit((
parts.scheme,
parts.netloc,
f"{parts.path}/log",
f"execution_date=test"
f"&task_id=wow"
f"&dag_id=super"
f"&map_index=-1",
""
)
)
```
Here is the fix in action.
```
>>> parts = urlsplit("https://my.astronomer.run/paths")
>>> urlunsplit((
... parts.scheme,
... parts.netloc,
... f"{parts.path}/log",
... f"execution_date=test"
... f"&task_id=wow"
... f"&dag_id=super"
... f"&map_index=-1",
... ''))
'https://my.astronomer.run/paths/log?execution_date=test&task_id=wow&dag_id=super&map_index=-1'
>>>
>>> parts = urlsplit("https://my.astronomer.run/paths/test")
>>> urlunsplit((
... parts.scheme,
... parts.netloc,
... f"{parts.path}/log",
... f"execution_date=test"
... f"&task_id=wow"
... f"&dag_id=super"
... f"&map_index=-1",
... ''))
'https://my.astronomer.run/paths/test/log?execution_date=test&task_id=wow&dag_id=super&map_index=-1'
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32996 | https://github.com/apache/airflow/pull/33063 | 3bb63f1087176b24e9dc8f4cc51cf44ce9986d34 | baa1bc0438baa05d358b236eec3c343438d8d53c | "2023-08-01T08:42:28Z" | python | "2023-08-03T09:19:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,993 | ["airflow/providers/vertica/hooks/vertica.py", "tests/providers/vertica/hooks/test_vertica.py"] | Error not detected in multi-statement vertica query | ### Apache Airflow version
2.6.3
### What happened
Hello,
There is a problem with multi-statement queries and Vertica: an error will be detected only if it happens on the first statement of the SQL.
For example, if I run the following SQL with the default SQLExecuteQueryOperator options:
INSERT INTO MyTable (Key, Label) values (1, 'test 1');
INSERT INTO MyTable (Key, Label) values (1, 'test 2');
INSERT INTO MyTable (Key, Label) values (3, 'test 3');
the first insert will be committed, the next ones won't, and no error will be returned.
The same SQL run on MySQL will return an error and no rows will be inserted.
It seems to be linked to the way the vertica-python client works (an issue was opened on their GitHub 4 years ago, [Duplicate key values error is not thrown as exception and is getting ignored](https://github.com/vertica/vertica-python/issues/255)), but since a workaround was provided there, I don't think it will be fixed in the near future.
For the moment, as a workaround I use the split statements option with auto-commit disabled (see the sketch below), but I think it's dangerous to leave this behaviour as is.
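A sketch of that workaround (the connection id is a placeholder):
```python
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

insert_rows = SQLExecuteQueryOperator(
    task_id="insert_rows",
    conn_id="vertica_default",
    sql="""
        INSERT INTO MyTable (Key, Label) values (1, 'test 1');
        INSERT INTO MyTable (Key, Label) values (1, 'test 2');
        INSERT INTO MyTable (Key, Label) values (3, 'test 3');
    """,
    split_statements=True,  # run each statement separately so a failure is surfaced
    autocommit=False,
)
```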
### What you think should happen instead
_No response_
### How to reproduce
Create a table MyTable with two columns, Key and Label, and declare Key as the primary key.
Run the following query with SQLExecuteQueryOperator:
INSERT INTO MyTable (Key, Label) values (1, 'test 1');
INSERT INTO MyTable (Key, Label) values (1, 'test 2');
INSERT INTO MyTable (Key, Label) values (3, 'test 3');
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32993 | https://github.com/apache/airflow/pull/34041 | 6b2a0cb3c84eeeaec013c96153c6b9538c6e74c4 | 5f47e60962b3123b1e6c8b42bef2c2643f54b601 | "2023-08-01T08:06:25Z" | python | "2023-09-06T21:09:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,969 | ["airflow/providers/databricks/hooks/databricks_base.py", "docs/apache-airflow-providers-databricks/connections/databricks.rst", "tests/providers/databricks/hooks/test_databricks.py"] | Databricks support for Service Principal Oauth | ### Description
Authentication using OAuth for Databricks Service Principals is now in Public Preview. I would like to implement this in the Databricks hook. By adding "service_principal_oauth" as a boolean value set to `true` in the extra configuration, the Client Id and Client Secret can be supplied as the username and password.
https://docs.databricks.com/dev-tools/authentication-oauth.html
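A sketch of what the proposed connection could look like (the `service_principal_oauth` flag is the requested addition; host and credentials are placeholders):
```python
from airflow.models.connection import Connection

databricks_sp_conn = Connection(
    conn_id="databricks_service_principal",
    conn_type="databricks",
    host="https://<workspace>.cloud.databricks.com",
    login="<client-id>",         # the service principal's application/client id
    password="<client-secret>",  # the OAuth secret generated for the service principal
    extra='{"service_principal_oauth": true}',  # proposed flag
)
```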
### Use case/motivation
Before OAuth authentication, the only way to use Databricks Service Principals was for another user account to perform a token request on behalf of the Service Principal. That process is difficult to use in the real world, and this new way of obtaining access tokens changes it and should make a big difference.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32969 | https://github.com/apache/airflow/pull/33005 | a1b5bdb25a6f9565ac5934a9a458e9b079ccf3ae | 8bf53dd5545ecda0e5bbffbc4cc803cbbde719a9 | "2023-07-31T13:43:43Z" | python | "2023-08-14T10:16:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,920 | ["airflow/providers/amazon/aws/transfers/gcs_to_s3.py"] | GCSToS3Operator is providing an unexpected argument to GCSHook.list | ### Apache Airflow version
2.6.3
### What happened
https://github.com/apache/airflow/blob/d800c1bc3967265280116a05d1855a4da0e1ba10/airflow/providers/amazon/aws/transfers/gcs_to_s3.py#L148-L150
This line in `GCSToS3Operator` is currently broken on Airflow 2.6.3:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/airflow/providers/amazon/aws/transfers/gcs_to_s3.py", line 148, in execute
files = hook.list(
TypeError: GCSHook.list() got an unexpected keyword argument 'match_glob'
```
The call signature for `GCSHook.list` does not have a `match_glob` argument on Airflow 2.6.3
https://github.com/apache/airflow/blob/eb24742d5300d2d87b17b4bcd67f639dbafd9818/airflow/providers/google/cloud/hooks/gcs.py#L699
However it does on the `main` branch:
https://github.com/apache/airflow/blob/0924389a877c5461733ef8a048e860b951d81a56/airflow/providers/google/cloud/hooks/gcs.py#L702-L710
It appears that `GCSToS3Operator` jumped the gun on using the `match_glob` argument.
### What you think should happen instead
_No response_
### How to reproduce
Create a task that uses `airflow.providers.amazon.aws.transfers.gcs_to_s3.GCSToS3Operator`. Execute the task.
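For example (a sketch; the bucket names are placeholders and parameter names can differ slightly between provider versions):
```python
from airflow.providers.amazon.aws.transfers.gcs_to_s3 import GCSToS3Operator

copy_gcs_to_s3 = GCSToS3Operator(
    task_id="copy_gcs_to_s3",
    bucket="my-gcs-bucket",
    dest_s3_key="s3://my-s3-bucket/prefix/",
    replace=True,
)
```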
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.3.1
apache-airflow-providers-celery==3.2.1
apache-airflow-providers-cncf-kubernetes==7.3.0
apache-airflow-providers-common-sql==1.6.0
apache-airflow-providers-datadog==3.3.1
apache-airflow-providers-dbt-cloud==3.2.2
apache-airflow-providers-elasticsearch==4.5.1
apache-airflow-providers-ftp==3.4.2
apache-airflow-providers-google==10.0.0
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.2.2
apache-airflow-providers-microsoft-azure==6.2.1
apache-airflow-providers-postgres==5.5.2
apache-airflow-providers-redis==3.2.1
apache-airflow-providers-sftp==4.4.0
apache-airflow-providers-slack==7.3.1
apache-airflow-providers-sqlite==3.4.2
apache-airflow-providers-ssh==3.7.1
### Deployment
Astronomer
### Deployment details
Using Astro Runtime 8.8.0
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32920 | https://github.com/apache/airflow/pull/32925 | cf7e0c5aa5ccc7b8a3963b14eadde0c8bc7c4eb7 | 519d99baee058dfa56f293f94222309c493ba3c4 | "2023-07-28T15:16:28Z" | python | "2023-08-04T17:40:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,897 | ["airflow/providers/amazon/aws/hooks/logs.py", "airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "airflow/providers/amazon/aws/utils/__init__.py", "tests/providers/amazon/aws/hooks/test_logs.py", "tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py"] | Enhance Airflow Logs API to fetch logs from Amazon Cloudwatch with time range | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
MWAA Version: 2.4.3
Airflow Version: 2.4.3
The Airflow logs API currently fetches logs from CloudWatch without a time range, so when the CloudWatch logs are large and the log streams are old, the Airflow UI cannot display the logs and shows the error message:
```
*** Reading remote log from Cloudwatch log_group: airflow-cdp-airflow243-XXXX-Task log_stream: dag_id=<DAG_NAME>/run_id=scheduled__2023-07-27T07_25_00+00_00/task_id=<TASK_ID>/attempt=1.log.
Could not read remote logs from log_group: airflow-cdp-airflow243-XXXXXX-Task log_group: airflow-cdp-airflow243-XXXX-Task log_stream: dag_id=<DAG_NAME>/run_id=scheduled__2023-07-27T07_25_00+00_00/task_id=<TASK_ID>/attempt=1.log
```
The Airflow logs API needs to pass start and end timestamps to the Amazon CloudWatch GetLogEvents API to resolve this error; it would also improve the performance of fetching logs.
This is a critical issue for customers when they want to fetch logs to investigate failed pipelines that are a few days to weeks old.
### What you think should happen instead
The Airflow logs API needs to pass start and end timestamps to the Amazon CloudWatch GetLogEvents API to resolve this error.
This should also improve performance of fetching logs.
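A sketch of the idea using boto3 directly (the log group/stream names and the time window are placeholders; how this gets wired into the Airflow CloudWatch task handler is an assumption):
```python
from datetime import datetime, timezone

import boto3


def to_epoch_ms(dt: datetime) -> int:
    return int(dt.timestamp() * 1000)


logs = boto3.client("logs")
response = logs.get_log_events(
    logGroupName="airflow-<env>-Task",
    logStreamName="dag_id=<DAG_NAME>/run_id=<RUN_ID>/task_id=<TASK_ID>/attempt=1.log",
    # Bound the query by the task instance's start/end times (plus a small margin)
    startTime=to_epoch_ms(datetime(2023, 7, 27, 7, 0, tzinfo=timezone.utc)),
    endTime=to_epoch_ms(datetime(2023, 7, 27, 9, 0, tzinfo=timezone.utc)),
    startFromHead=True,
)
```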
### How to reproduce
This issue is intermittent and happens mostly on FAILED tasks.
1. Log onto Amazon MWAA Service
2. Open Airflow UI
3. Select DAG
4. Select the Failed Tasks
5. Select Logs
You should see an error message like the one below in the logs:
```
*** Reading remote log from Cloudwatch log_group: airflow-cdp-airflow243-XXXX-Task log_stream: dag_id=<DAG_NAME>/run_id=scheduled__2023-07-27T07_25_00+00_00/task_id=<TASK_ID>/attempt=1.log.
Could not read remote logs from log_group: airflow-cdp-airflow243-XXXXXX-Task log_group: airflow-cdp-airflow243-XXXX-Task log_stream: dag_id=<DAG_NAME>/run_id=scheduled__2023-07-27T07_25_00+00_00/task_id=<TASK_ID>/attempt=1.log
```
### Operating System
Running with Amazon MWAA
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.3.1
apache-airflow==2.4.3
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32897 | https://github.com/apache/airflow/pull/33231 | 5707103f447be818ad4ba0c34874b822ffeefc09 | c14cb85f16b6c9befd35866327fecb4ab9bc0fc4 | "2023-07-27T21:01:44Z" | python | "2023-08-10T17:30:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,890 | ["airflow/www/static/js/connection_form.js"] | Airflow UI ignoring extra connection field during test connection | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
In Airflow 2.6.1 I can no longer use the `extra` field in any `http` based connection when testing the connection.
Inspecting the web request for testing the connection I see that the `extra` field is empty, even though I have data in there:
```json
{
"connection_id": "",
"conn_type": "http",
"extra": "{}"
}
```
<img width="457" alt="image" src="https://github.com/apache/airflow/assets/6411855/d6bab951-5d03-4695-a397-8bf6989d93a7">
I saw [this issue](https://github.com/apache/airflow/issues/31330#issuecomment-1558315370) which seems related. It was closed because the opener worked around the issue by creating the connection in code instead of the Airflow UI.
I couldn't find any other issues mentioning this problem.
### What you think should happen instead
The `extra` field should be included in the test connection request.
### How to reproduce
Create an `http` connection in the Airflow UI using at least version 2.6.1. Put any value in the `extra` field and test the connection while inspecting the network request. Notice that the `extra` field value is not supplied in the request.
### Operating System
N/A
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
If I had to guess, I think it might be related to [this PR](https://github.com/apache/airflow/pull/28583) where a json linter was added to the extra field.
Saving the connection seems to work fine, just not testing it.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32890 | https://github.com/apache/airflow/pull/35122 | ef497bc3412273c3a45f43f40e69c9520c7cc74c | 789222cb1378079e2afd24c70c1a6783b57e27e6 | "2023-07-27T17:45:31Z" | python | "2023-10-23T15:18:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,877 | ["dev/README_RELEASE_AIRFLOW.md"] | Wrong version in Dockerfile | ### Apache Airflow version
2.6.3
### What happened
I want to use the stable `2.6.3` version of Airflow. I cloned the project and checked out `tags/2.6.3`:
```bash
git checkout tags/2.6.3 -b my_custom_branch
```
After the checkout I open the `Dockerfile` and see the following:
```bash
ARG AIRFLOW_VERSION="2.6.2"
```
Then I also downloaded the code as a zip ([2.6.3 link](https://github.com/apache/airflow/releases/tag/2.6.3)) and I see the same in the `Dockerfile`.
Does `AIRFLOW_VERSION` have the wrong value?
Thanks!
### What you think should happen instead
_No response_
### How to reproduce
I need confirmation that the version is definitely wrong in the `Dockerfile`.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32877 | https://github.com/apache/airflow/pull/32888 | db8d737ad690b721270d0c2fd3a83f08d7ce5c3f | 7ba7fb1173e55c24c94fe01f0742fd00cd9c0d82 | "2023-07-27T07:47:07Z" | python | "2023-07-28T04:53:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,866 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "docs/apache-airflow-providers-databricks/operators/submit_run.rst", "tests/providers/databricks/operators/test_databricks.py"] | DatabricksSubmitRunOperator should accept a pipeline name for a pipeline_task | ### Description
It would be nice if we could give the DatabricksSubmitRunOperator a pipeline name instead of a pipeline_id for cases when you do not already know the pipeline_id but do know the name. I'm not sure if there's an easy way to fetch a pipeline_id.
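A sketch of the proposed usage (`pipeline_name` inside `pipeline_task` is the requested addition; today only `pipeline_id` is accepted):
```python
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

run_dlt_pipeline = DatabricksSubmitRunOperator(
    task_id="run_dlt_pipeline",
    databricks_conn_id="databricks_default",
    pipeline_task={"pipeline_name": "my-dlt-pipeline"},  # proposed; currently {"pipeline_id": "..."}
)
```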
### Use case/motivation
Avoid hardcoding pipeline IDs, storing the IDs elsewhere, or fetching the pipeline list and filtering it manually when the pipeline name is known but the ID is not.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32866 | https://github.com/apache/airflow/pull/32903 | f7f3b675ecd40e32e458b71b5066864f866a60c8 | c45617c4d5988555f2f52684e082b96b65ca6c17 | "2023-07-26T15:33:05Z" | python | "2023-09-07T00:44:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,862 | ["airflow/jobs/triggerer_job_runner.py"] | Change log level of message for event loop block | ### Description
Currently, when the event loop is blocked for more than 0.2 seconds, an error message is logged to the Triggerer notifying the user that the async thread was blocked, likely due to a badly written trigger.
The issue with this message is that there is currently no support for async DB reads. So whenever a DB read is performed (for getting connection information, etc.) the event loop is blocked for a short while (~0.3 - 0.8 seconds). This usually only happens once during a Trigger execution, and is not an issue at all in terms of performance.
Based on our internal user testing, I noticed that this error message causes confusion for a lot of users who are new to Deferrable operators. As such, I am proposing that we change the log level of that message to `INFO` so that the message is retained, but does not cause confusion. Until a method is available that would allow us to read from the database asynchronously, there is nothing that can be done about the message.
### Use case/motivation
![image](https://github.com/apache/airflow/assets/103602455/201a41c7-ac76-4226-8d3a-7f83ccf7f146)
I'd like the user to see this message as an INFO rather than an ERROR, because it is not something that can be addressed at the moment, and it does not cause any noticeable impact to the user.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32862 | https://github.com/apache/airflow/pull/32979 | 6ada88a407a91a3e1d42ab8a30769a4a6f55588b | 9cbe494e231a5b2e92e6831a4be25802753f03e5 | "2023-07-26T12:57:35Z" | python | "2023-08-02T10:23:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,840 | ["airflow/decorators/branch_python.py", "airflow/decorators/external_python.py", "airflow/decorators/python_virtualenv.py", "airflow/decorators/short_circuit.py"] | Usage of templates_dict in short_circuit decorated task | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
I believe that the `@task.short_circuit` operator is missing the code to handle the usage of `templates_dict`, as described in the [documentation](https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html#id4). The code snippet below demonstrates the issue. In it, I create two tasks. The first is a basic python task, and in the log, from the print statement, I see the contents of the `.sql` file. (So the templating worked correctly.) However, in the log for the second task, I see only the string itself, with no jinja templating performed. I think this contradicts the documentation.
```python
from airflow.decorators import dag, task
import datetime as dt
@dag(
dag_id='test_dag',
schedule=None,
start_date=dt.datetime(2023, 7, 25, 0, 0, 0, tzinfo=dt.timezone.utc),
)
def test_dag():
@task(task_id='test_task', templates_dict={'query': 'sql/myquery.sql'}, templates_exts=['.sql'])
def test_task(templates_dict=None):
print(templates_dict['query'])
@task.short_circuit(task_id='test_tasksc', templates_dict={'query': 'sql/myquery.sql'}, templates_exts=['.sql'])
def test_tasksc(templates_dict=None):
print(templates_dict['query'])
return True
test_task() >> test_tasksc()
test_dag()
```
Output of first task: `SELECT * FROM ...`.
Output of second task: `sql/myquery.sql`.
As a guess, I think the problem could be in [this line](https://github.com/apache/airflow/blob/main/airflow/decorators/short_circuit.py#L41), where `templates_dict` and `templates_exts` are not explicitly passed to the super-class's init function. I am happy to make an MR if it's that small of a change.
### What you think should happen instead
Output of first task: `SELECT * FROM ...`.
Output of second task: `SELECT * FROM ...`.
### How to reproduce
See MRE above.
### Operating System
RHEL 7.9 (Maipo)
### Versions of Apache Airflow Providers
apache-airflow==2.5.0
No relevant providers.
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32840 | https://github.com/apache/airflow/pull/32845 | 2ab78ec441a748ae4d99e429fe336b80a601d7b1 | 8f12e7e4a9374e886965f3134aa801a5a267a36d | "2023-07-25T21:27:50Z" | python | "2023-07-31T20:15:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,839 | ["airflow/www/security.py", "docs/apache-airflow/security/access-control.rst", "tests/www/test_security.py"] | DAG-level permissions set in Web UI disappear from roles on DAG sync | ### Apache Airflow version
2.6.3
### What happened
Versions: 2.6.2, 2.6.3, main
PR [#30340](https://github.com/apache/airflow/pull/30340) introduced a bug that happens whenever a DAG gets updated or a new DAG is added
**Potential fix:** Adding the code that was removed in PR [#30340](https://github.com/apache/airflow/pull/30340) back to `airflow/models/dagbag.py` fixes the issue. I've tried it on the current main branch using Breeze.
### What you think should happen instead
Permissions set in the Web UI should persist whenever a DAG sync happens.
### How to reproduce
1. Download `docker-compose.yaml`:
```bash
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.6.2/docker-compose.yaml'
```
2. Create dirs and set the right Airflow user:
```bash
mkdir -p ./dags ./logs ./plugins ./config && \
echo -e "AIRFLOW_UID=$(id -u)" > .env
```
3. Add `test_dag.py` to ./dags:
```python
import datetime
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator
with DAG(
dag_id="test",
schedule="0 0 * * *",
start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
catchup=False,
dagrun_timeout=datetime.timedelta(minutes=60),
) as dag:
test = BashOperator(
task_id="test",
bash_command="echo 1",
)
if __name__ == "__main__":
dag.test()
```
4. Run docker compose: `docker compose up`
5. Create role in Web UI: Security > List Roles > Add a new record:
Name: test
Permissions: `can read on DAG:test`
6. Update `test_dag.py`: change `bash_command="echo 1"` to `bash_command="echo 2"`
7. Check test role's permissions: `can read on DAG:test` will be removed
Another option is to add a new dag instead of changing the existing one:
6. Add another dag to ./dags, code doesn't matter
7. Restart scheduler: `docker restart [scheduler container name]`
8. Check test role's permissions: `can read on DAG:test` will be removed
### Operating System
Ubuntu 22.04.1 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Docker 24.0.2
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32839 | https://github.com/apache/airflow/pull/33632 | 83efcaa835c4316efe2f45fd9cfb619295b25a4f | 370348a396b5ddfe670e78ad3ab87d01f6d0107f | "2023-07-25T19:13:12Z" | python | "2023-08-24T19:20:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,804 | ["airflow/utils/helpers.py"] | `DagRun.find()` fails when given `execution_date` obtained from `context.get('execution_date')` directly | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
#### Airflow version
2.5.3+composer (The latest available airflow version in Cloud Composer)
#### What happened
`DagRun.find()` (the `find` method of the SQLAlchemy model `DagRun`) fails with the error message `TypeError: 'DateTime' object is not iterable` when passing an `execution_date` obtained directly from the context:
```py
def delete_previous_dagrun_func(dag_to_delete, **context):
execution_date = context.get('execution_date')
dagruns_today = DagRun.find(dag_id=dag_to_delete.dag_id,
execution_date=execution_date)
```
#### What is the cause
Upon closer inspection, execution_date is of type `lazy_object_proxy.Proxy` and the `is_container()` function used in `DagRun.find()` determines if the variable is a "container" by the presence of the `__iter__` field.
`lazy_object_proxy.Proxy` has `__iter__`, so the `execution_date` is determined to be a container, and as a result it is passed as an array element to SQLAlchemy, which caused the "not iterable" error.
https://github.com/apache/airflow/blob/6313e5293280773aed7598e1befb8d371e8f5614/airflow/models/dagrun.py#L406-L409
https://github.com/apache/airflow/blob/6313e5293280773aed7598e1befb8d371e8f5614/airflow/utils/helpers.py#L117-L120
I think `is_container()` should have another conditional branch to deal with `lazy_object_proxy.Proxy`.
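A sketch of what that branch could look like (an assumption about the shape of the fix, not the final implementation):
```python
from lazy_object_proxy import Proxy


def is_container(obj) -> bool:
    """Test if an object is a container (iterable) but not a string."""
    if isinstance(obj, Proxy):
        # Unwrap lazily-proxied values (e.g. execution_date from the template context)
        # before probing for __iter__, otherwise the proxy always looks iterable.
        obj = obj.__wrapped__
    return hasattr(obj, "__iter__") and not isinstance(obj, str)
```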
#### workaround
It works fine by unwrapping `lazy_object_proxy.Proxy` before passing it.
```py
def delete_previous_dagrun_func(dag_to_delete, **context):
execution_date = context.get('execution_date')
dagruns_today = DagRun.find(dag_id=dag_to_delete.dag_id,
execution_date=execution_date.__wrapped__)
```
https://github.com/ionelmc/python-lazy-object-proxy/blob/c56c68bda23b8957abbc2fef3d21f32dd44b7f93/src/lazy_object_proxy/simple.py#L76-L83
### What you think should happen instead
The variable fetched from `context.get('execution_date')` can be used directly as an argument of `DagRun.find()`
### How to reproduce
```py
dag_to_delete = DAG('DAG_TO_DELETE')
dag = DAG('DAG')
def delete_previous_dagrun_func(dag_to_delete, **context):
execution_date = context.get('execution_date')
dagruns_today = DagRun.find(dag_id=dag_to_delete.dag_id,
execution_date=execution_date)
# do something for `dagruns_today` (Delete the dagruns for today)
op = PythonOperator(
task_id='Delete_Previous_DagRun',
python_callable=delete_previous_dagrun_func,
op_args=[dag_to_delete],
provide_context=True,
dag=dag)
```
This code produces the following error at `DagRun.find` line:
```
[2023-07-24, 19:09:06 JST] {taskinstance.py:1778} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 210, in execute
branch = super().execute(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 175, in execute
return_value = self.execute_callable()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 192, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/airflow/gcs/dags/*******.py", line ***, in delete_previous_dagrun_func
dagruns_today = DagRun.find(dag_id=dag_to_delete.dag_id,
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/dagrun.py", line 386, in find
qry = qry.filter(cls.execution_date.in_(execution_date))
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/operators.py", line 641, in in_
return self.operate(in_op, other)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 317, in operate
return op(self.comparator, *other, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/operators.py", line 1423, in in_op
return a.in_(b)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/operators.py", line 641, in in_
return self.operate(in_op, other)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/orm/properties.py", line 426, in operate
return op(self.__clause_element__(), *other, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/operators.py", line 1423, in in_op
return a.in_(b)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/operators.py", line 641, in in_
return self.operate(in_op, other)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 870, in operate
return op(self.comparator, *other, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/operators.py", line 1423, in in_op
return a.in_(b)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/operators.py", line 641, in in_
return self.operate(in_op, other)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/type_api.py", line 1373, in operate
return super(TypeDecorator.Comparator, self).operate(
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/type_api.py", line 77, in operate
return o[0](self.expr, op, *(other + o[1:]), **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/default_comparator.py", line 159, in _in_impl
seq_or_selectable = coercions.expect(
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/coercions.py", line 193, in expect
resolved = impl._literal_coercion(
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/sql/coercions.py", line 573, in _literal_coercion
element = list(element)
TypeError: 'DateTime' object is not iterable
```
### Operating System
Google Cloud Composer (Ubuntu 20.04.6 LTS on Kubernetes)
### Versions of Apache Airflow Providers
No relevant providers
### Deployment
Google Cloud Composer
### Deployment details
Google Cloud Composer image version: `composer-2.3.4-airflow-2.5.3`
### Anything else
I have recently begun preparing to upgrade Airflow from 1.10 to the 2.x series.
The code described in the reproduce section still works in an Airflow 1.10 environment.
I want to know whether this is an intentional or accidental incompatibility between 1.10 and 2.x.
(If it is intentional, adding a more helpful error message would save time when resolving it.)
I am willing to submit a PR if the fix is super easy (like changing one line of code).
If that is not the case and a bigger change is needed, I do not think I have enough time to submit a PR in a timely manner.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32804 | https://github.com/apache/airflow/pull/32850 | 319045492d2559bd856a43a1fa810adf59358d7d | 12228d16be13afb2918139ea3c5a285a23242bd0 | "2023-07-24T12:35:00Z" | python | "2023-07-28T06:17:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,761 | ["airflow/models/abstractoperator.py", "tests/serialization/test_dag_serialization.py"] | Extra links order is not predictable causing shuffling in UI during webserver restarts | ### Apache Airflow version
main (development)
### What happened
Currently `extra_links` is a cached property that returns a list without any deterministic order, since a set is used. Since we have 3 links per operator, this order gets shuffled during webserver restarts, as reported by users. It would be good to have this sorted so that the order is predictable. This is already done in the extra_links Airflow API output:
https://github.com/apache/airflow/blob/d7899ecfafb20cc58f8fb43e287d1c6778b8fa9f/airflow/models/abstractoperator.py#L470-L472
https://github.com/apache/airflow/blob/d7899ecfafb20cc58f8fb43e287d1c6778b8fa9f/airflow/api_connexion/endpoints/extra_link_endpoint.py#L75-L78
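For illustration, the change could be as small as returning a sorted list from the cached property. The sketch below uses a placeholder class and link names, not the real `AbstractOperator`:
```python
# Illustrative sketch with placeholder link names.
from functools import cached_property

class ExampleOperator:
    operator_extra_link_dict = {"Link B": None, "Link A": None}
    global_operator_extra_link_dict = {"Link C": None}

    @cached_property
    def extra_links(self) -> list[str]:
        # sorted() instead of list() keeps the order stable across restarts.
        return sorted(set(self.operator_extra_link_dict).union(self.global_operator_extra_link_dict))
```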
### What you think should happen instead
The extra link order should be predictable.
### How to reproduce
1. Create an operator with 3 or more extra links.
2. Render the links in the UI.
3. Restart the webserver and check the extra link order.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32761 | https://github.com/apache/airflow/pull/32762 | 3e467ba510d29e912d89115769726111b8bce891 | 4c878798ef88a1fa45956163630d71b6fc4f401f | "2023-07-22T06:42:40Z" | python | "2023-07-22T10:24:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,747 | ["airflow/www/app.py", "docs/apache-airflow/howto/set-config.rst", "docs/apache-airflow/security/webserver.rst"] | The application context is not passed to webserver_config.py | ### Apache Airflow version
2.6.3
### What happened
Hi, I would like to pass a custom [WSGI Middleware](https://medium.com/swlh/creating-middlewares-with-python-flask-166bd03f2fd4) to the underlying Flask server.
I could theoretically do so in *webserver_config.py* by accessing `flask.current_app`:
```python
# webserver_config.py
from flask import current_app
from airflow.www.fab_security.manager import AUTH_REMOTE_USER
class MyAuthMiddleware:
def __init__(self, wsgi_app) -> None:
self.wsgi_app = wsgi_app
def __call__(self, environ: dict, start_response):
print("--> Custom authenticating logic")
environ["REMOTE_USER"] = "username"
return self.wsgi_app(environ, start_response)
current_app.wsgi_app = MyAuthMiddleware(current_app.wsgi_app)
AUTH_TYPE = AUTH_REMOTE_USER
```
But for this to work, [the application context](https://flask.palletsprojects.com/en/2.3.x/appcontext/) must be pushed while the webserver config is being read.
Thus https://github.com/apache/airflow/blob/fbeddc30178eec7bddbafc1d560ff1eb812ae37a/airflow/www/app.py#L84 should become:
```python
with flask_app.app_context():
flask_app.config.from_pyfile(settings.WEBSERVER_CONFIG, silent=True)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32747 | https://github.com/apache/airflow/pull/32759 | 5a0494f83e8ad0e5cbf0d3dcad3022a3ea89d789 | 7847b6ead3c039726bb82e0de3a39e5ef5eb00aa | "2023-07-21T14:51:00Z" | python | "2023-08-08T07:00:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,744 | ["airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py"] | Getting 410 from spark_kubernetes_operator when a task runs for long | ### Apache Airflow version
2.6.3
### What happened
A Spark job that runs for more than 1 hour gets a 410 Expired error regardless of the actual outcome of the Spark application.
Logs:
```
[2023-07-21, 19:41:57 IST] {spark_kubernetes.py:126} INFO - 2023-07-21T14:11:57.424338279Z 23/07/21 19:41:57 INFO MetricsSystemImpl: s3a-file-system metrics system stopped.
[2023-07-21, 19:41:57 IST] {spark_kubernetes.py:126} INFO - 2023-07-21T14:11:57.424355219Z 23/07/21 19:41:57 INFO MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-1] is pending
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-1] is pending
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-1] is running
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-1] completed
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-2] is pending
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-2] is running
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-3] is pending
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-3] is pending
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-4] is pending
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-4] is running
[2023-07-21, 19:41:58 IST] {spark_kubernetes.py:117} INFO - Executor [adhoc-d9e5ed897882bd27-exec-3] is running
[2023-07-21, 19:41:58 IST] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", line 112, in execute
for event in namespace_event_stream:
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/watch/watch.py", line 182, in stream
raise client.rest.ApiException(
kubernetes.client.exceptions.ApiException: (410)
Reason: Expired: The resourceVersion for the provided watch is too old.
[2023-07-21, 19:41:58 IST] {taskinstance.py:1345} INFO - Marking task as FAILED. dag_id=adhoc, task_id=submit_job, execution_date=20230721T125127, start_date=20230721T125217, end_date=20230721T141158
[2023-07-21, 19:41:58 IST] {standard_task_runner.py:104} ERROR - Failed to execute job 247326 for task submit_job ((410)
Reason: Expired: The resourceVersion for the provided watch is too old.
; 15)
[2023-07-21, 19:41:58 IST] {local_task_job_runner.py:225} INFO - Task exited with return code 1
[2023-07-21, 19:41:58 IST] {taskinstance.py:2653} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
In this case I can still see the task logs; only the final state is wrongly reported.
But if the task runs even longer (more than 4 hours), then even the logs are not visible.
### What you think should happen instead
In the first case, the task should have reported the correct state of the Spark application. In the second case, the logs should still have been visible and the correct state of the Spark application should have been reported.
### How to reproduce
To reproduce it, submit a long-running (more than 1 hour) task using the SparkKubernetesOperator.
If the task completes before the 4-hour mark, you should see a 410 Expired error regardless of the actual outcome of the Spark application.
If the task takes longer, you should see it fail around the 4-hour mark due to the 410, even while the Spark application is still running.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.2.0
apache-airflow-providers-celery==3.2.1
apache-airflow-providers-cncf-kubernetes==7.3.0
apache-airflow-providers-common-sql==1.5.2
apache-airflow-providers-docker==3.7.1
apache-airflow-providers-elasticsearch==4.5.1
apache-airflow-providers-ftp==3.4.2
apache-airflow-providers-google==10.2.0
apache-airflow-providers-grpc==3.2.1
apache-airflow-providers-hashicorp==3.4.1
apache-airflow-providers-http==4.4.2
apache-airflow-providers-imap==3.2.2
apache-airflow-providers-microsoft-azure==6.1.2
apache-airflow-providers-mysql==5.1.1
apache-airflow-providers-odbc==4.0.0
apache-airflow-providers-postgres==5.5.1
apache-airflow-providers-redis==3.2.1
apache-airflow-providers-sendgrid==3.2.1
apache-airflow-providers-sftp==4.3.1
apache-airflow-providers-slack==7.3.1
apache-airflow-providers-snowflake==4.2.0
apache-airflow-providers-sqlite==3.4.2
apache-airflow-providers-ssh==3.7.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
k8s version - 1.23 hosted using EKS
python 3.8
I have upgraded apache-airflow-providers-cncf-kubernetes to confirm that the bug has not been fixed in newer versions.
### Anything else
I think this issue is caused by the Kubernetes `Watch().stream`'s `ApiException` not being handled. According to its docs:
```
Note that watching an API resource can expire. The method tries to
resume automatically once from the last result, but if that last result
is too old as well, an `ApiException` exception will be thrown with
``code`` 410. In that case you have to recover yourself, probably
by listing the API resource to obtain the latest state and then
watching from that state on by setting ``resource_version`` to
one returned from listing.
```
this error needs to be handled by Airflow and not by the kubernetes client.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32744 | https://github.com/apache/airflow/pull/32768 | 4c878798ef88a1fa45956163630d71b6fc4f401f | fcc6f284c742bdc554edecc5a83d9eaa7d9d7ba4 | "2023-07-21T14:19:09Z" | python | "2023-07-22T11:32:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,732 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "tests/providers/amazon/aws/hooks/test_base_aws.py"] | airflow.providers.amazong.aws.hooks.base_aws.BaseSessionFactory feeds synchronous credentials to aiobotocore when using `assume_role` | ### Apache Airflow version
2.6.3
### What happened
Hi all, I'm having a bit of a problem with aiobotocore and the deferrable AWS Batch Operator. When deferrable is off, everything works fine, but for some very long-running batch jobs I wanted to try out the async option. Example DAG:
```python
from airflow.decorators import dag
from airflow.providers.amazon.aws.operators.batch import BatchOperator
from datetime import datetime, timedelta
default_args = {
"owner": "rkulkarni",
...
}
@dag(
default_args=default_args,
catchup=False,
schedule="0 1/8 * * *",
)
def batch_job_to_do():
submit_batch_job = BatchOperator(
task_id="submit_batch_job",
job_name="job_name",
job_queue="job_queue",
job_definition="job_definition:1",
overrides={},
aws_conn_id="aws_prod_batch",
region_name="us-east-1",
awslogs_enabled=True,
awslogs_fetch_interval=timedelta(seconds=30),
deferrable=True
)
submit_batch_job # type: ignore
batch_job_to_do()
```
And, for reference, this is running on an EC2 instance in one account that assumes a role in another account via STS to submit the job. Again, this all works fine when `deferrable=False`.
If deferrable=True, however, the DAG works properly until it wakes up the first time.
I've identified the cause of this error: https://github.com/apache/airflow/blob/15d42b4320d535cf54743929f134e36f59c615bb/airflow/providers/amazon/aws/hooks/base_aws.py#L211 and a related error: https://github.com/apache/airflow/blob/15d42b4320d535cf54743929f134e36f59c615bb/airflow/providers/amazon/aws/hooks/base_aws.py#L204
These should be creating `aiobotocore.credentials.AioRefreshableCredentials` and `aiobotocore.credentials.AioDeferredRefreshableCredentials`, respectively. I can confirm that replacing the `session._session._credentials` attribute with these fixes the above error.
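For illustration, the kind of substitution I mean looks roughly like the sketch below; `credential_fetcher` is a placeholder for whatever object fetches the STS credential metadata, and this is not the actual provider fix:
```python
# Illustrative sketch only; `credential_fetcher` is a placeholder assumption.
from aiobotocore.credentials import AioDeferredRefreshableCredentials

def patch_session_credentials(session, credential_fetcher):
    # Swap the synchronous credentials for async-aware ones so that
    # get_frozen_credentials() returns an awaitable, as aiobotocore expects.
    session._session._credentials = AioDeferredRefreshableCredentials(
        refresh_using=credential_fetcher.fetch_credentials,
        method="assume-role",
    )
```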
I'm happy to submit a PR to resolve this.
### What you think should happen instead
_No response_
### How to reproduce
Attempt to use STS-based authentication with a deferrable AWS operator (any operator) and it will produce the below error:
```
Traceback (most recent call last):
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/airflow/jobs/triggerer_job_runner.py", line 537, in cleanup_finished_triggers
result = details["task"].result()
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/airflow/jobs/triggerer_job_runner.py", line 615, in run_trigger
async for event in trigger.run():
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/airflow/providers/amazon/aws/triggers/base.py", line 121, in run
await async_wait(
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/airflow/providers/amazon/aws/utils/waiter_with_logging.py", line 122, in async_wait
await waiter.wait(**args, WaiterConfig={"MaxAttempts": 1})
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/waiter.py", line 49, in wait
await AIOWaiter.wait(self, **kwargs)
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/waiter.py", line 94, in wait
response = await self._operation_method(**kwargs)
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/waiter.py", line 77, in __call__
return await self._client_method(**kwargs)
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/client.py", line 361, in _make_api_call
http, parsed_response = await self._make_request(
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/client.py", line 386, in _make_request
return await self._endpoint.make_request(
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/endpoint.py", line 96, in _send_request
request = await self.create_request(request_dict, operation_model)
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/endpoint.py", line 84, in create_request
await self._event_emitter.emit(
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/hooks.py", line 66, in _emit
response = await resolve_awaitable(handler(**kwargs))
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/_helpers.py", line 15, in resolve_awaitable
return await obj
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/signers.py", line 24, in handler
return await self.sign(operation_name, request)
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/signers.py", line 73, in sign
auth = await self.get_auth_instance(**kwargs)
File "/home/airflow/dagger/venv/lib/python3.9/site-packages/aiobotocore/signers.py", line 147, in get_auth_instance
await self._credentials.get_frozen_credentials()
TypeError: object ReadOnlyCredentials can't be used in 'await' expression
```
### Operating System
AmazonLinux2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32732 | https://github.com/apache/airflow/pull/32733 | 43a5b4750590bf43bb59cc7bd8377934737f63e8 | 57f203251b223550d6e7bb717910109af9aeed29 | "2023-07-21T01:10:11Z" | python | "2023-07-22T17:28:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,708 | ["Dockerfile", "Dockerfile.ci", "docs/docker-stack/build-arg-ref.rst", "docs/docker-stack/changelog.rst", "scripts/docker/install_mysql.sh"] | MYSQL_OPT_RECONNECT is deprecated. When exec airflow db upgrade. | ### Apache Airflow version
2.6.3
### What happened
When I installed Airflow and configured the backend database, I set MySQL as my backend. Then I ran `airflow db upgrade`.
It printed many warnings containing "WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in a future version."
### What you think should happen instead
_No response_
### How to reproduce
mysql_config --version
8.0.34
mysql --version
mysql Ver 8.0.34 for Linux on x86_64 (MySQL Community Server - GPL)
Set up the Airflow MySQL backend and run `airflow db upgrade`.
### Operating System
CentOS 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32708 | https://github.com/apache/airflow/pull/35070 | dcb72b5a4661223c9de7beea40264a152298f24b | 1f26ae13cf974a0b2af6d8bc94c601d65e2bd98a | "2023-07-20T07:13:49Z" | python | "2023-10-24T08:54:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,706 | ["airflow/cli/commands/scheduler_command.py", "tests/cli/commands/test_scheduler_command.py"] | Scheduler becomes zombie when run_job raises unhandled exception | ### Apache Airflow version
2.6.3
### What happened
# Context
When the backend database shuts down (for maintenance, for example), the Airflow scheduler's main loop crashes, but the scheduler process does not exit. In my company's setup, the scheduler process is monitored by `supervisord`, but since the scheduler process does not exit, `supervisord` did not pick up on the scheduler failure, causing a prolonged scheduler outage.
# Root cause
In `airflow/cli/commands/scheduler_command.py`, the main function call of the `airflow scheduler` command is the `_run_scheduler_job` function. When `_run_scheduler_job` is called, depending on the configuration, two sub-processes, `serve_logs` and/or `health_check`, may be started. The life cycle of these two sub-processes is managed by context managers, so that when the context exits, the two sub-processes are terminated:
```python
def _run_scheduler_job(job_runner: SchedulerJobRunner, *, skip_serve_logs: bool) -> None:
InternalApiConfig.force_database_direct_access()
enable_health_check = conf.getboolean("scheduler", "ENABLE_HEALTH_CHECK")
with _serve_logs(skip_serve_logs), _serve_health_check(enable_health_check):
run_job(job=job_runner.job, execute_callable=job_runner._execute)
@contextmanager
def _serve_logs(skip_serve_logs: bool = False):
"""Starts serve_logs sub-process."""
from airflow.utils.serve_logs import serve_logs
sub_proc = None
executor_class, _ = ExecutorLoader.import_default_executor_cls()
if executor_class.serve_logs:
if skip_serve_logs is False:
sub_proc = Process(target=serve_logs)
sub_proc.start()
yield
if sub_proc:
sub_proc.terminate()
@contextmanager
def _serve_health_check(enable_health_check: bool = False):
"""Starts serve_health_check sub-process."""
sub_proc = None
if enable_health_check:
sub_proc = Process(target=serve_health_check)
sub_proc.start()
yield
if sub_proc:
sub_proc.terminate()
```
The misbehavior happens when `run_job` raises an unhandled exception. The exception takes over the control flow, and the context managers never run their cleanup code after `yield`. When the main Python process tries to exit, the `multiprocessing` module tries to terminate all child processes (https://github.com/python/cpython/blob/1e1f4e91a905bab3103250a3ceadac0693b926d9/Lib/multiprocessing/util.py#L320C43-L320C43) by first calling `join()`. Because the sub-processes `serve_logs` and/or `health_check` are never terminated, calling `join()` on them hangs indefinitely, thus causing the zombie state.
Note that this behavior was introduced in 2.5.0 (2.4.3 does not have this issue); before that, the two sub-processes were not managed with context managers, and the scheduler job was placed inside a try-except-finally block.
### What you think should happen instead
The scheduler process should never hang. If something went wrong, such as a database disconnect, the scheduler should simply crash, and let whoever manages the scheduler process handle respawn.
As to how this should be achieved, I think the best way is to place `run_job` inside a try-catch block so that any exception can be caught and gracefully handled, although I am open to feedback.
### How to reproduce
# To reproduce the scheduler zombie state
Start an Airflow cluster with breeze:
```
breeze start-airflow --python 3.9 --backend postgres [with any version at or later than 2.5.0]
```
After the command opens the `tmux` windows, stop the `postgres` container with `docker stop docker-compose-postgres-1`.
The webserver will not do anything. The triggerer should correctly crash and exit. The scheduler will crash but not exit.
# To reproduce the context manager's failure to exit
```python
from multiprocessing import Process
from contextlib import contextmanager
import time
def busybox():
time.sleep(24 * 3600) # the entire day
@contextmanager
def some_resource():
subproc = Process(target=busybox)
subproc.start()
print(f"Sub-process {subproc} started")
yield
subproc.terminate()
subproc.join()
print(f"Sub-process {subproc} terminated")
def main():
with some_resource():
raise Exception("Oops")
if __name__ == "__main__":
main()
```
### Operating System
MacOS with Docker Desktop
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32706 | https://github.com/apache/airflow/pull/32707 | 9570cb1482d25f288e607aaa1210b2457bc5ed12 | f2108892e89085f695f8a3f52e076b39288497c6 | "2023-07-19T23:58:15Z" | python | "2023-07-25T22:02:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,702 | ["airflow/providers/amazon/aws/operators/sagemaker.py", "docs/apache-airflow-providers-amazon/operators/sagemaker.rst", "tests/providers/amazon/aws/operators/test_sagemaker_notebook.py", "tests/system/providers/amazon/aws/example_sagemaker_notebook.py"] | Support for SageMaker Notebook Operators | ### Description
Today, the Amazon provider package supports SageMaker operators for a few operations, like training, tuning, and pipelines, but it lacks support for SageMaker notebook instances. Boto3 provides the necessary APIs to [create a notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/create_notebook_instance.html), [start a notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/start_notebook_instance.html), [stop a notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/stop_notebook_instance.html) and [delete a notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/delete_notebook_instance.html). Leveraging these APIs, we should add new operators to the SageMaker set under the Amazon provider. At the same time, a sensor (synchronous as well as deferrable) for notebook instance execution that uses [describe notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/describe_notebook_instance.html) and waits for a Stopped/Failed status would help with observability of the execution.
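For illustration, the underlying boto3 calls such operators and a sensor would wrap might look like this (instance name, type, and role ARN are placeholder assumptions):
```python
# Illustrative boto3 sketch; all names and ARNs below are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_notebook_instance(
    NotebookInstanceName="example-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/example-sagemaker-role",
)

# A sensor (sync or deferrable) could poll the status until it reaches
# InService / Stopped / Failed.
status = sm.describe_notebook_instance(NotebookInstanceName="example-notebook")[
    "NotebookInstanceStatus"
]

sm.stop_notebook_instance(NotebookInstanceName="example-notebook")
sm.delete_notebook_instance(NotebookInstanceName="example-notebook")
```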
### Use case/motivation
Data scientists are orchestrating ML use cases via Apache Airflow. A key component of these use cases is running Jupyter notebooks on SageMaker. Having built-in operators and sensors would make it easy for Airflow users to run notebook instances on SageMaker.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32702 | https://github.com/apache/airflow/pull/33219 | 45d5f6412731f81002be7e9c86c11060394875cf | 223b41d68f53e7aa76588ffb8ba1e37e780d9e3b | "2023-07-19T19:27:24Z" | python | "2023-08-16T16:53:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,657 | ["airflow/migrations/versions/0131_2_8_0_make_connection_login_password_text.py", "airflow/models/connection.py", "airflow/utils/db.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst"] | Increase connections HTTP login length to 5000 characters | ### Description
The current length limit for the `login` parameter in an HTTP connection is 500 characters. It'd be nice if this were 5000 characters, like the `password` parameter.
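For illustration, the schema change this implies could look roughly like the Alembic sketch below (revision wiring and downgrade omitted; the real migration may use different types):
```python
# Hypothetical sketch only; the actual Airflow migration may differ.
import sqlalchemy as sa
from alembic import op

def upgrade():
    with op.batch_alter_table("connection") as batch_op:
        batch_op.alter_column(
            "login",
            existing_type=sa.String(500),
            type_=sa.String(5000),
        )
```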
### Use case/motivation
We've run into an issue with an API we need to integrate with. It uses HTTP basic authentication, and the username and password are each about 900 characters long. We don't have any control over this API, so we cannot change the authentication method or the length of these usernames and passwords.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32657 | https://github.com/apache/airflow/pull/32815 | a169cf2c2532a8423196c8d98eede86029a9de9a | 8e38c5a4d74b86af25b018b19f7a7d90d3e7610f | "2023-07-17T17:20:44Z" | python | "2023-09-26T17:00:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,622 | ["airflow/decorators/base.py", "tests/decorators/test_python.py"] | When multiple-outputs gets None as return value it crashes | ### Body
Currently, when you use multiple_outputs in a decorator and the decorated callable returns None, it crashes.
As explained in https://github.com/apache/airflow/issues/32553, a workaround for ShortCircuitOperator has been implemented here: https://github.com/apache/airflow/pull/32569
But a more complete fix for multiple_outputs handling of None is needed.
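For illustration, the kind of guard needed might look roughly like the standalone sketch below (not the merged change, and not the actual DecoratedOperator code):
```python
# Standalone sketch of the guard; names are illustrative.
def handle_multiple_outputs(return_value):
    if return_value is None:
        # Nothing to push as separate XCom keys; skip the dict check entirely.
        return None
    if not isinstance(return_value, dict):
        raise TypeError(
            f"Returned output was type {type(return_value)}, expected a dict for multiple_outputs"
        )
    return return_value
```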
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/32622 | https://github.com/apache/airflow/pull/32625 | ea0deaa993674ad0e4ef777d687dc13809b0ec5d | a5dd08a9302acca77c39e9552cde8ef501fd788f | "2023-07-15T07:25:42Z" | python | "2023-07-16T14:31:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,621 | ["docs/apache-airflow-providers-apache-beam/operators.rst"] | Apache beam operators that submits to Dataflow requires gcloud CLI | ### What do you see as an issue?
The [Apache Beam provider documentation](https://airflow.apache.org/docs/apache-airflow-providers-apache-beam/stable/index.html) about [Apache Beam Operators](https://airflow.apache.org/docs/apache-airflow-providers-apache-beam/stable/operators.html) does not make it clear that these operators require the gcloud CLI.
For example, `BeamRunPythonPipelineOperator` calls [provide_authorized_gcloud](https://github.com/apache/airflow/blob/providers-apache-beam/5.1.1/airflow/providers/apache/beam/operators/beam.py#L303C41-L303C66), which executes a [bash command that uses gcloud](https://github.com/apache/airflow/blob/main/airflow/providers/google/common/hooks/base_google.py#L545-L552).
### Solving the problem
A callout box in the Apache Beam provider documentation would be very helpful.
Something like this [callout](https://airflow.apache.org/docs/apache-airflow-providers-google/10.3.0/operators/cloud/dataflow.html) in the Google provider documentation:
```
This operator requires gcloud command (Google Cloud SDK) must be installed on the Airflow worker <https://cloud.google.com/sdk/docs/install>`__
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32621 | https://github.com/apache/airflow/pull/32663 | f6bff828af28a9f7f25ef35ec77da4ca26388258 | 52d932f659d881a0b17bc1c1ba7e7bfd87d45848 | "2023-07-15T07:13:45Z" | python | "2023-07-18T11:22:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,590 | ["chart/templates/_helpers.yaml", "chart/templates/secrets/metadata-connection-secret.yaml", "chart/templates/workers/worker-kedaautoscaler.yaml", "chart/values.schema.json", "chart/values.yaml", "helm_tests/other/test_keda.py"] | When using KEDA and pgbouncer together, KEDA logs repeated prepared statement errors | ### Official Helm Chart version
1.10.0 (latest released)
### Apache Airflow version
2.6.2
### Kubernetes Version
v1.26.5-gke.1200
### Helm Chart configuration
values.pgbouncer.enabled: true
workers.keda.enabled: true
And configure a postgres database of any sort.
### Docker Image customizations
_No response_
### What happened
If KEDA is enabled in the Helm chart and PgBouncer is also enabled, KEDA will be configured to use the connection string from the worker pod to connect to the Postgres database. That means it will connect to PgBouncer. PgBouncer is configured in transaction pool mode according to the secret:
[pgbouncer]
pool_mode = transaction
And it appears that KEDA uses prepared statements in it's queries to postgres, resulting in numerous repeated errors in the KEDA logs:
```
2023-07-13T18:21:35Z ERROR postgresql_scaler could not query postgreSQL: ERROR: prepared statement "stmtcache_47605" does not exist (SQLSTATE 26000) {"type": "ScaledObject", "namespace": "airflow-sae-int", "name": "airflow-sae-int-worker", "error": "ERROR: prepared statement \"stmtcache_47605\" does not exist (SQLSTATE 26000)"}
```
Now KEDA still works, as it does the query again without the prepared statement, but this is not ideal and results in a ton of error logging.
### What you think should happen instead
I suggest having the KEDA connection go directly to the upstream configured PostgreSQL server instead of using PgBouncer, as it is only one connection.
### How to reproduce
Enable KEDA for workers and PgBouncer at the same time.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32590 | https://github.com/apache/airflow/pull/32608 | 51052bbbce159340e962e9fe40b6cae6ce05ab0c | f7ad549f2d7119a6496e3e66c43f078fbcc98ec1 | "2023-07-13T18:25:32Z" | python | "2023-07-15T20:52:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,585 | ["airflow/providers/apache/kafka/triggers/await_message.py"] | Commit failed: Local: No offset stored while using AwaitMessageTriggerFunctionSensor | ### Apache Airflow version
2.6.3
### What happened
While trying to use AwaitMessageTriggerFunctionSensor, I am increasing the number of DAG runs.
I've encountered the exception `cimpl.KafkaException: KafkaError{code=_NO_OFFSET,val=-168,str="Commit failed: Local: No offset stored"}`.
I tried setting the consumer count to less than, equal to, and more than the number of partitions, but the error happened every time.
Here is the log:
```[2023-07-13, 14:37:07 UTC] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: kafka_test_dag.await_message scheduled__2023-07-13T14:35:00+00:00 [queued]>
[2023-07-13, 14:37:07 UTC] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: kafka_test_dag.await_message scheduled__2023-07-13T14:35:00+00:00 [queued]>
[2023-07-13, 14:37:07 UTC] {taskinstance.py:1308} INFO - Starting attempt 1 of 1
[2023-07-13, 14:37:07 UTC] {taskinstance.py:1327} INFO - Executing <Task(AwaitMessageTriggerFunctionSensor): await_message> on 2023-07-13 14:35:00+00:00
[2023-07-13, 14:37:07 UTC] {standard_task_runner.py:57} INFO - Started process 8918 to run task
[2023-07-13, 14:37:07 UTC] {standard_task_runner.py:84} INFO - Running: ['airflow', 'tasks', 'run', 'kafka_test_dag', 'await_message', 'scheduled__2023-07-13T14:35:00+00:00', '--job-id', '629111', '--raw', '--subdir', 'DAGS_FOLDER/dags/kafka_consumers_dag.py', '--cfg-path', '/tmp/tmp3de57b65']
[2023-07-13, 14:37:07 UTC] {standard_task_runner.py:85} INFO - Job 629111: Subtask await_message
[2023-07-13, 14:37:08 UTC] {task_command.py:410} INFO - Running <TaskInstance: kafka_test_dag.await_message scheduled__2023-07-13T14:35:00+00:00 [running]> on host airflow-worker-1.airflow-worker.syn-airflow-dev.svc.opus.s.mesh
[2023-07-13, 14:37:08 UTC] {taskinstance.py:1545} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='airflow' AIRFLOW_CTX_DAG_ID='kafka_test_dag' AIRFLOW_CTX_TASK_ID='await_message' AIRFLOW_CTX_EXECUTION_DATE='2023-07-13T14:35:00+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='scheduled__2023-07-13T14:35:00+00:00'
[2023-07-13, 14:37:09 UTC] {taskinstance.py:1415} INFO - Pausing task as DEFERRED. dag_id=kafka_test_dag, task_id=await_message, execution_date=20230713T143500, start_date=20230713T143707
[2023-07-13, 14:37:09 UTC] {local_task_job_runner.py:222} INFO - Task exited with return code 100 (task deferral)
[2023-07-13, 14:38:43 UTC] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: kafka_test_dag.await_message scheduled__2023-07-13T14:35:00+00:00 [queued]>
[2023-07-13, 14:38:43 UTC] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: kafka_test_dag.await_message scheduled__2023-07-13T14:35:00+00:00 [queued]>
[2023-07-13, 14:38:43 UTC] {taskinstance.py:1306} INFO - Resuming after deferral
[2023-07-13, 14:38:44 UTC] {taskinstance.py:1327} INFO - Executing <Task(AwaitMessageTriggerFunctionSensor): await_message> on 2023-07-13 14:35:00+00:00
[2023-07-13, 14:38:44 UTC] {standard_task_runner.py:57} INFO - Started process 9001 to run task
[2023-07-13, 14:38:44 UTC] {standard_task_runner.py:84} INFO - Running: ['airflow', 'tasks', 'run', 'kafka_test_dag', 'await_message', 'scheduled__2023-07-13T14:35:00+00:00', '--job-id', '629114', '--raw', '--subdir', 'DAGS_FOLDER/dags/kafka_consumers_dag.py', '--cfg-path', '/tmp/tmpo6xz234q']
[2023-07-13, 14:38:44 UTC] {standard_task_runner.py:85} INFO - Job 629114: Subtask await_message
[2023-07-13, 14:38:45 UTC] {task_command.py:410} INFO - Running <TaskInstance: kafka_test_dag.await_message scheduled__2023-07-13T14:35:00+00:00 [running]> on host airflow-worker-1.airflow-worker.airflow-dev.svc.opus.s.mesh
[2023-07-13, 14:38:46 UTC] {taskinstance.py:1598} ERROR - Trigger failed:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/jobs/triggerer_job_runner.py", line 537, in cleanup_finished_triggers
result = details["task"].result()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/jobs/triggerer_job_runner.py", line 615, in run_trigger
async for event in trigger.run():
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/apache/kafka/triggers/await_message.py", line 114, in run
await async_commit(asynchronous=False)
File "/home/airflow/.local/lib/python3.11/site-packages/asgiref/sync.py", line 479, in __call__
ret: _R = await loop.run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/asgiref/sync.py", line 538, in thread_handler
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
cimpl.KafkaException: KafkaError{code=_NO_OFFSET,val=-168,str="Commit failed: Local: No offset stored"}
[2023-07-13, 14:38:47 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
airflow.exceptions.TaskDeferralError: Trigger failure
[2023-07-13, 14:38:47 UTC] {taskinstance.py:1345} INFO - Marking task as FAILED. dag_id=kafka_test_dag, task_id=await_message, execution_date=20230713T143500, start_date=20230713T143707, end_date=20230713T143847
[2023-07-13, 14:38:48 UTC] {standard_task_runner.py:104} ERROR - Failed to execute job 629114 for task await_message (Trigger failure; 9001)
[2023-07-13, 14:38:48 UTC] {local_task_job_runner.py:225} INFO - Task exited with return code 1
[2023-07-13, 14:38:48 UTC] {taskinstance.py:2653} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
The sensor should get a message without errors. Each message should be committed exactly once.
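For illustration, one way the trigger's commit could tolerate this particular error is sketched below (a hedged workaround idea, not the provider's actual fix):
```python
# Hedged sketch only; not the provider's actual code.
from confluent_kafka import KafkaError, KafkaException

def safe_commit(consumer):
    try:
        consumer.commit(asynchronous=False)
    except KafkaException as exc:
        if exc.args[0].code() == KafkaError._NO_OFFSET:
            # Nothing was consumed since the last commit; safe to ignore.
            return
        raise
```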
### How to reproduce
Example of a DAG:
```
from airflow.decorators import dag
from airflow.models import Variable
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.utils.dates import days_ago
from airflow.providers.apache.kafka.sensors.kafka import \
AwaitMessageTriggerFunctionSensor
import uuid
def check_message(message):
if message:
return True
def trigger_dag(**context):
TriggerDagRunOperator(
trigger_dag_id='triggerer_test_dag',
task_id=f"triggered_downstream_dag_{uuid.uuid4()}"
).execute(context)
@dag(
description="This DAG listens kafka topic and triggers DAGs "
"based on received message.",
schedule_interval='* * * * *',
start_date=days_ago(2),
max_active_runs=4,
catchup=False
)
def kafka_test_dag():
AwaitMessageTriggerFunctionSensor(
task_id="await_message",
topics=['my_test_topic'],
apply_function="dags.kafka_consumers_dag.check_message",
event_triggered_function=trigger_dag
)
kafka_test_dag()
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-kafka==1.1.2
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32585 | https://github.com/apache/airflow/pull/36272 | 41096e0c266e3adb0ac3985d2609701f53aded00 | 148233a19ea68f424a7077d3bba6e6ca81679c10 | "2023-07-13T14:47:38Z" | python | "2023-12-18T10:22:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,553 | ["airflow/operators/python.py", "tests/decorators/test_python.py", "tests/decorators/test_short_circuit.py", "tests/operators/test_python.py"] | ShortCircuitOperator returns None when condition is Falsy which errors with multiple_outputs=True | ### Apache Airflow version
2.6.3
### What happened
When the condition in ```ShortCircuitOperator``` is truthy it is returned at
https://github.com/apache/airflow/blob/a2ae2265ce960d65bc3c4bf805ee77954a1f895c/airflow/operators/python.py#L252
but when it is falsy, the function falls through without returning anything.
If ```multiple_outputs``` is set ```true``` (with for example ```@task.short_circuit(multiple_outputs=True```) ```_handle_output``` of
```DecoratedOperator``` at
https://github.com/apache/airflow/blob/a2ae2265ce960d65bc3c4bf805ee77954a1f895c/airflow/decorators/base.py#L242
does not test for None and raises at
https://github.com/apache/airflow/blob/a2ae2265ce960d65bc3c4bf805ee77954a1f895c/airflow/decorators/base.py#L255
This makes it impossible to pass a falsy value (i.e. an empty dictionary) so ```ShortCircuitOperator``` is unusable with ```multiple_outputs=true```
### What you think should happen instead
Probably the ```xcom_push``` should not be attempted with ```None```, or possibly the condition should be returned by ```ShortCircuitOperator``` even if it is falsy
### How to reproduce
```python
@task.short_circuit(multiple_outputs=True)
def test():
return {}
```
### Operating System
Ubuntu 22.04.2 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32553 | https://github.com/apache/airflow/pull/32569 | 9b466bd13dd34d2a37b49687241f54f4d2df3b18 | 32a18d9e4373bd705087992d0066663833c65abd | "2023-07-12T11:46:53Z" | python | "2023-07-15T07:21:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,551 | ["airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/operators/test_snowflake.py"] | SnowflakeValueCheckOperator - database, warehouse, schema parameters doesn't ovveride connection | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We are using Airflow 2.6.0 with Airflow Snowflake Provider 4.3.0.
When we add the database, schema, and warehouse parameters to SnowflakeOperator, all of them override the extra part of the Snowflake connection definition. With the same set of parameters on SnowflakeValueCheckOperator, none of the parameters is overridden.
### What you think should happen instead
When we went through the Snowflake provider source code, we found that for SnowflakeOperator the hook_params are created before the parent class init. It looks like this:
```
if any([warehouse, database, role, schema, authenticator, session_parameters]):
hook_params = kwargs.pop("hook_params", {})
kwargs["hook_params"] = {
"warehouse": warehouse,
"database": database,
"role": role,
"schema": schema,
"authenticator": authenticator,
"session_parameters": session_parameters,
**hook_params,
}
super().__init__(conn_id=snowflake_conn_id, **kwargs)
```
For SnowflakeValueCheckOperator, the parent class init is called before the class arguments are initialized:
```
super().__init__(sql=sql, parameters=parameters, conn_id=snowflake_conn_id, **kwargs)
self.snowflake_conn_id = snowflake_conn_id
self.sql = sql
self.autocommit = autocommit
self.do_xcom_push = do_xcom_push
self.parameters = parameters
self.warehouse = warehouse
self.database = database
```
The hook used in SnowflakeValueCheckOperator (and probably in the rest of the classes) is most likely initialized based on the connection values only, so the overriding does not work.
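For illustration, a fix could mirror what SnowflakeOperator already does and assemble hook_params before the parent `__init__` runs. The helper below is hypothetical; the real change would live inside `__init__` before the `super().__init__` call:
```python
# Hypothetical helper, mirroring SnowflakeOperator's __init__ logic.
def build_hook_params(kwargs, *, warehouse=None, database=None, role=None,
                      schema=None, authenticator=None, session_parameters=None):
    """Merge override arguments into kwargs['hook_params'] before super().__init__ runs."""
    if any([warehouse, database, role, schema, authenticator, session_parameters]):
        hook_params = kwargs.pop("hook_params", {})
        kwargs["hook_params"] = {
            "warehouse": warehouse,
            "database": database,
            "role": role,
            "schema": schema,
            "authenticator": authenticator,
            "session_parameters": session_parameters,
            **hook_params,
        }
    return kwargs
```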
### How to reproduce
Create a connection with a different database and warehouse than TEST_DB and TEST_WH. The table `dual` should exist only in TEST_DB.TEST_SCHEMA and not exist in the connection's database/schema.
```
from datetime import timedelta

import jinja2
import pendulum

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import (
    SnowflakeOperator,
    SnowflakeValueCheckOperator,
)

# Placeholder values so the snippet is self-contained
dag_id = 'snowflake_override_test'
sequence_id = 'seq_1'
schedule = None
tags = ['snowflake']

warehouse = 'TEST_WH'
database = 'TEST_DB'
schema = 'TEST_SCHEMA'
args = {
'owner': 'airflow',
'depends_on_past': False,
'email_on_failure': True,
'email_on_retry': False,
'start_date': pendulum.now(tz='Europe/Warsaw').add(months=-1),
'retries': 0,
'concurrency': 10,
'dagrun_timeout': timedelta(hours=24)
}
with DAG(
dag_id=dag_id,
template_undefined=jinja2.Undefined,
default_args=args,
description='Sequence ' + sequence_id,
schedule=schedule,
max_active_runs=10,
catchup=False,
tags=tags
) as dag:
value_check_task = SnowflakeValueCheckOperator(
task_id='value_check_task',
sql='select 1 from dual',
snowflake_conn_id ='con_snowflake_zabka',
warehouse=warehouse,
database=database,
schema=schema,
pass_value=1
)
snowflake_export_data_task = SnowflakeOperator(
task_id='snowflake_export_data_task',
snowflake_conn_id='con_snowflake',
sql=f"select 1 from dual",
warehouse=warehouse,
database=database,
schema=schema
)
```
### Operating System
Ubuntu 20.04.5 LTS
### Versions of Apache Airflow Providers
apache-airflow 2.6.0
apache-airflow-providers-celery 3.1.0
apache-airflow-providers-common-sql 1.4.0
apache-airflow-providers-ftp 3.3.1
apache-airflow-providers-http 4.3.0
apache-airflow-providers-imap 3.1.1
apache-airflow-providers-microsoft-azure 6.1.1
apache-airflow-providers-odbc 3.2.1
apache-airflow-providers-oracle 3.6.0
apache-airflow-providers-postgres 5.4.0
apache-airflow-providers-redis 3.1.0
apache-airflow-providers-snowflake 4.3.0
apache-airflow-providers-sqlite 3.3.2
apache-airflow-providers-ssh 3.6.0
### Deployment
Virtualenv installation
### Deployment details
Python 3.9.5
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32551 | https://github.com/apache/airflow/pull/32605 | 2b0d88e450f11af8e447864ca258142a6756126d | 2ab78ec441a748ae4d99e429fe336b80a601d7b1 | "2023-07-12T11:00:55Z" | python | "2023-07-31T19:21:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,503 | ["airflow/www/utils.py", "tests/www/views/test_views_tasks.py"] | execution date is missing from Task Instance tooltip | ### Apache Airflow version
main (development)
### What happened
It seems [this](https://github.com/apache/airflow/commit/e16207409998b38b91c1f1697557d5c229ed32d1) commit has made the task instance execution date disappear from the task instance tooltip completely:
![image](https://github.com/apache/airflow/assets/18099224/62c9fec5-9e02-4319-93b9-197d25a8b027)
Note the missing `Run: <execution date>` between Task_id and Run_id.
I think there's a problem with the task instance execution date, because it's always `undefined`. In an older version of Airflow (2.4.3), I can see that the tooltip always shows the **current** datetime instead of the actual execution date, which I think is what the author of the commit identified in the first place.
### What you think should happen instead
The tooltip should properly show the task instance's execution date, not the current datetime (or nothing). There's a deeper problem here that causes `ti.execution_date` to be `undefined`.
### How to reproduce
Run the main branch of Airflow, with a simple DAG that finishes a run successfully. Go to the Graph view of a DAG and hover over any completed task with the mouse.
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32503 | https://github.com/apache/airflow/pull/32527 | 58e21c66fdcc8a416a697b4efa852473ad8bd6fc | ed689f2be90cc8899438be66e3c75c3921e156cb | "2023-07-10T21:02:35Z" | python | "2023-07-25T06:53:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,499 | ["airflow/providers/google/cloud/hooks/dataproc.py", "airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/hooks/test_dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | Add filtering to DataprocListBatchesOperator | ### Description
The Python Google Cloud Dataproc API version has been updated in Airflow to support filtering on the Dataproc Batches API. The DataprocListBatchesOperator can be updated to make use of this filtering.
Currently, DataprocListBatchesOperator returns, and populates XCom with, every batch job run in the project. This will almost surely fail, as the return object is large and XCom storage is limited, especially with MySQL. Filtering on job prefix and create_time would be much more useful.
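For illustration, the client-level capability the operator could expose looks roughly like the sketch below (project, region, and filter string are placeholder assumptions, and this assumes a client version whose ListBatchesRequest supports `filter`):
```python
# Illustrative sketch; the filter syntax shown is a placeholder.
from google.cloud import dataproc_v1

client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)
page_result = client.list_batches(
    request={
        "parent": "projects/my-project/locations/us-central1",
        "filter": 'create_time > "2023-07-01T00:00:00Z"',
    }
)
for batch in page_result:
    print(batch.name)
```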
### Use case/motivation
The ability to filter lists of GCP Dataproc Batches jobs.
### Related issues
None
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32499 | https://github.com/apache/airflow/pull/32500 | 3c14753b03872b259ce2248eda92f7fb6f4d751b | 99b8a90346b8826756ac165b73464a701e2c33aa | "2023-07-10T19:47:11Z" | python | "2023-07-20T18:24:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,491 | ["BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/release_management_commands.py", "dev/breeze/src/airflow_breeze/commands/release_management_commands_config.py", "dev/breeze/src/airflow_breeze/utils/add_back_references.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_release-management.svg", "images/breeze/output_release-management_add-back-references.svg", "images/breeze/output_setup_check-all-params-in-groups.svg", "images/breeze/output_setup_regenerate-command-images.svg"] | Implement `breeze publish-docs` command | ### Body
We need a small improvement for our docs publishing process.
We currently have these two scripts:
* [x] docs/publish_docs.py https://github.com/apache/airflow/blob/main/docs/publish_docs.py in airflow repo
* [ ] post-docs/ in airflow-site https://github.com/apache/airflow-site/blob/main/post-docs/add-back-references.py
We have currently the steps that are describing how to publish the documentation in our release documentation:
* https://github.com/apache/airflow/blob/main/dev/README_RELEASE_AIRFLOW.md
* https://github.com/apache/airflow/blob/main/dev/README_RELEASE_PROVIDER_PACKAGES.md
* https://github.com/apache/airflow/blob/main/dev/README_RELEASE_HELM_CHART.md
This is the "Publish documentation" chapter
They currently consists of few steps:
1) checking out the main in "airflow-sites"
2) setting the AIRFLOW_SITE_DIRECTORY env variable to the checked out repo
3) building docs (with `breeze build-docs`)
4) Running the publish_docs.py script in docs, which copies the generated docs to "AIRFLOW_SITE_DIRECTORY"
5) **I just added this step:** running the post-docs post-processing for back references
6) Committing the changes and pushing them to airflow-site
(There are a few variants of those, depending on which docs you are building.)
The problem with that is that it requires several venvs to be set up independently (and they might sometimes miss stuff), and those commands are distributed across repositories.
The goal of the change is to replace publish + post-docs with a single, new breeze command, similar to the existing `build-docs` command.
I imagine this command should be similar to:
```
breeze publish-docs --airflow-site-directory DIRECTORY --package-filter .... and the rest of other arguments that publish_docs.py has
```
This command should copy the files and run post-processing on back-references (depending on which package documentation we publish).
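For illustration, the skeleton of such a command could look roughly like the sketch below; the option names and helper functions are assumptions, not the final implementation:
```python
# Illustrative skeleton only; helpers are stand-ins for the real copy/post-process logic.
import click

def copy_docs_to_site(site_dir: str, package_filters: tuple[str, ...]) -> None:
    """Stand-in for the logic currently in docs/publish_docs.py."""
    print(f"Copying docs for {package_filters or 'all packages'} to {site_dir}")

def add_back_references(site_dir: str, package_filters: tuple[str, ...]) -> None:
    """Stand-in for the post-docs back-reference processing."""
    print(f"Adding back references in {site_dir}")

@click.command(name="publish-docs")
@click.option("--airflow-site-directory", envvar="AIRFLOW_SITE_DIRECTORY", required=True)
@click.option("--package-filter", multiple=True)
def publish_docs(airflow_site_directory: str, package_filter: tuple[str, ...]) -> None:
    """Copy generated docs to airflow-site and post-process back references."""
    copy_docs_to_site(airflow_site_directory, package_filter)
    add_back_references(airflow_site_directory, package_filter)
```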
Then the process of publishing docs would look like:
1) checking out the main in "airflow-sites"
2) setting the AIRFLOW_SITE_DIRECTORY env variable to the checked out repo
3) building docs (with `breeze build-docs`)
4) publishing docs (with `breeze publish-docs`)
5) Commiting the changes and pushing them to airflow-site
The benefits:
* no separate venvs to manage (all done in breeze's env) - automatically manged
* nicer integration in our dev/CI environment
* all code for publishing docs in one place - in breeze
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/32491 | https://github.com/apache/airflow/pull/32594 | 1a1753c7246a2b35b993aad659f5551afd7e0215 | 945f48a1fdace8825f3949e5227bed0af2fd38ff | "2023-07-10T13:20:05Z" | python | "2023-07-14T16:36:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,442 | ["airflow/www/static/js/components/ViewTimeDelta.tsx", "airflow/www/static/js/dag/details/Dag.tsx"] | Dag run timeout with timedelta value is rendered as [object object] in UI | ### Apache Airflow version
main (development)
### What happened
A DAG run timeout with a timedelta value is rendered as [object object] in the UI. It seems the data is fetched and the raw value is rendered as a string. A timedelta is already handled for schedule_interval, and the same should be done here.
![image](https://github.com/apache/airflow/assets/3972343/0a1e2257-4e7c-49a2-bde8-e9a796146cc5)
### What you think should happen instead
A timedelta should be handled similarly to how it's done for the scheduleInterval value here:
https://github.com/apache/airflow/blob/e70bee00cd12ecf1462485a747c0e3296ef7d48c/airflow/www/static/js/dag/details/Dag.tsx#L278C2-L293
### How to reproduce
1. Create a DAG with dag_run_timeout set to a timedelta value.
2. Visit the DAG details tab in the Grid view to check dag_run_timeout.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32442 | https://github.com/apache/airflow/pull/32565 | 41164dd663c003c6be80abdf3b2180ec930a82e4 | f1fc6dc4b9bd496ddd25898eea63d83f12cb6ad0 | "2023-07-08T16:19:47Z" | python | "2023-07-13T23:50:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,412 | ["setup.py"] | Click 8.1.4 breaks mypy typing checks | ### Body
Click 8.1.4, released 06.06.2023, broke a number of mypy checks. Until the problem is fixed, we need to limit click to unbreak main.
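For illustration, "limiting click" would mean an exclusion along these lines in the relevant dependency list (the exact specifier and the file it lands in may differ):
```python
# Illustrative only; the real pin may differ.
dependencies = [
    "click>=8.0,!=8.1.4",
]
```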
Example failing job: https://github.com/apache/airflow/actions/runs/5480089808/jobs/9983219862
Example failures:
```
dev/breeze/src/airflow_breeze/utils/common_options.py:78: error: Need type
annotation for "option_verbose" [var-annotated]
option_verbose = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:89: error: Need type
annotation for "option_dry_run" [var-annotated]
option_dry_run = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:100: error: Need type
annotation for "option_answer" [var-annotated]
option_answer = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:109: error: Need type
annotation for "option_github_repository" [var-annotated]
option_github_repository = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:118: error: Need type
annotation for "option_python" [var-annotated]
option_python = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:127: error: Need type
annotation for "option_backend" [var-annotated]
option_backend = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:136: error: Need type
annotation for "option_integration" [var-annotated]
option_integration = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:142: error: Need type
annotation for "option_postgres_version" [var-annotated]
option_postgres_version = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:150: error: Need type
annotation for "option_mysql_version" [var-annotated]
option_mysql_version = click.option(
^
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/32412 | https://github.com/apache/airflow/pull/32634 | 7092cfdbbfcfd3c03909229daa741a5bcd7ccc64 | 7123dc162bb222fdee7e4c50ae8a448c43cdd7d3 | "2023-07-06T21:54:23Z" | python | "2023-07-20T04:30:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,390 | ["airflow/providers/http/hooks/http.py", "tests/providers/http/hooks/test_http.py"] | Fix HttpAsyncHook headers | ### Body
The hook uses `_headers`:
https://github.com/apache/airflow/blob/9276310a43d17a9e9e38c2cb83686a15656896b2/airflow/providers/http/hooks/http.py#L340-L341
but passes `headers` to the async function
https://github.com/apache/airflow/blob/9276310a43d17a9e9e38c2cb83686a15656896b2/airflow/providers/http/hooks/http.py#L368-L375
The task:
`headers=headers` needs to be `headers=_headers`
https://github.com/apache/airflow/blob/9276310a43d17a9e9e38c2cb83686a15656896b2/airflow/providers/http/hooks/http.py#L372
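A minimal sketch of the one-line change described above (abridged and hypothetical, not the hook's exact code; `request_func` and `request_kwargs` stand in for whatever the real call passes):
```python
async def run_sketch(request_func, url, headers=None, **request_kwargs):
    # the hook builds a merged header dict ...
    _headers = {}
    if headers:
        _headers.update(headers)
    # ... so the aiohttp call must receive the merged dict, not the raw argument:
    return await request_func(
        url,
        headers=_headers,  # the bug: this was `headers=headers`, so `_headers` was never sent
        **request_kwargs,
    )
```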
There was an attempt to address it in https://github.com/apache/airflow/pull/31010, but the PR became stale due to no response from the author.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/32390 | https://github.com/apache/airflow/pull/32409 | ee38382efa54565c4b389eaeb536f0d45e12d498 | 358e6e8fa18166084fc17b23e75c6c29a37f245f | "2023-07-06T06:42:10Z" | python | "2023-07-06T20:59:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,367 | ["airflow/api_connexion/endpoints/xcom_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/xcom_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_xcom_endpoint.py", "tests/api_connexion/schemas/test_xcom_schema.py", "tests/conftest.py"] | Unable to get mapped task xcom value via REST API. Getting MultipleResultsFound error | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow 2.3.4 (but the current code seems to have the same behaviour).
I have a mapped task that pushes an XCom value.
I want to get the XCom value of a particular task instance, or the XCom values of all mapped task instances.
I am using the standard REST API method /dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key}
And it throws an error:
```
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2850, in one_or_none
return self._iter().one_or_none()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/result.py", line 1510, in one_or_none
return self._only_one_row(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/result.py", line 614, in _only_one_row
raise exc.MultipleResultsFound(
sqlalchemy.exc.MultipleResultsFound: Multiple rows were found when one or none was required
```
Is there any way of getting XComs of mapped tasks via the API?
### What you think should happen instead
_No response_
### How to reproduce
Create a dag with a mapped task, return an XCom value from every mapped task instance, and try to get the XCom value via the API, as in the sketch below.
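A minimal reproduction sketch (the dag and task names are illustrative):
```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(start_date=datetime(2023, 1, 1), schedule=None, catchup=False)
def mapped_xcom_repro():
    @task
    def add_one(x: int) -> int:
        return x + 1  # every mapped task instance pushes its own `return_value` XCom

    add_one.expand(x=[1, 2, 3])


mapped_xcom_repro()
```
Requesting `GET /dags/mapped_xcom_repro/dagRuns/{dag_run_id}/taskInstances/add_one/xcomEntries/return_value` then fails as above, presumably because the lookup does not filter on map_index and finds one XCom row per mapped instance.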
### Operating System
ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Standard
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32367 | https://github.com/apache/airflow/pull/32453 | 2aa3cfb6abd10779029b0c072493a1c1ed820b77 | bc97646b262e7f338b4f3d4dce199e640e24e250 | "2023-07-05T10:16:02Z" | python | "2023-07-10T08:34:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,330 | ["airflow/providers/amazon/aws/hooks/glue_crawler.py", "tests/providers/amazon/aws/hooks/test_glue_crawler.py"] | AWS GlueCrawlerOperator deletes existing tags on run | ### Apache Airflow version
2.6.2
### What happened
We are currently on AWS Provider 6.0.0 and looking to upgrade to the latest version, 8.2.0. However, there are some issues with the GlueCrawlerOperator making the upgrade challenging, namely that the operator attempts to update the crawler tags on every run. Because we manage our resource tagging through Terraform, we do not provide any tags to the operator, which results in all of the tags being deleted (as well as additional `glue:GetTags` and `glue:UntagResource` permissions needing to be added to the relevant IAM roles just to run the crawler).
It seems strange that the default behaviour of the operator has been changed to make modifications to infrastructure, especially as this differs from the GlueJobOperator, which only performs updates when certain parameters are set. Potentially something similar could be done here, where if no `Tags` key is present in the `config` dict they aren't modified at all. Not sure what the best approach is.
### What you think should happen instead
The crawler should run without any alterations to the existing infrastructure
### How to reproduce
Run a GlueCrawlerOperator whose config contains no Tags key against a crawler that already has tags, for example:
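A minimal sketch of such a task (the crawler name is illustrative; the default AWS connection is assumed):
```python
from airflow.providers.amazon.aws.operators.glue_crawler import GlueCrawlerOperator

run_crawler = GlueCrawlerOperator(
    task_id="run_crawler",
    config={
        "Name": "my-existing-crawler",  # crawler already tagged outside Airflow (e.g. by Terraform)
        # note: no "Tags" key here, yet the update still wipes the existing tags
    },
)
```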
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
Amazon 8.2.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32330 | https://github.com/apache/airflow/pull/32331 | 9a0f41ba53185031bc2aa56ead2928ae4b20de99 | 7a3bc8d7c85448447abd39287ef6a3704b237a90 | "2023-07-03T13:40:23Z" | python | "2023-07-06T11:09:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,311 | ["airflow/serialization/pydantic/dag_run.py", "setup.cfg"] | Pydantic 2.0.0 support for Airflow | ### Body
Currently Pydantic 2.0.0 released on 30th of June 2023 breaks Airflow CI - building providers and running Kubernetes tests.
Therefore we limit Pydantic to < 2.0.0 in setup.cfg for now.
This should be fixed, especially since 2.0.0 brings a number of speed improvements.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/32311 | https://github.com/apache/airflow/pull/32366 | 723eb7d453e50fb82652a8cf1f6a538410be777f | 9cb463e20e4557efb4d1a6320b196c65ae519c23 | "2023-07-02T06:32:48Z" | python | "2023-07-07T20:37:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,301 | ["airflow/serialization/pydantic/dataset.py"] | = instead of : in type hints - failing Pydantic 2 | ### Apache Airflow version
2.6.2
### What happened
Airflow doesn't work correctly with Pydantic 2 (released on 30th of June) and raises
`pydantic.errors.PydanticUserError: A non-annotated attribute was detected: `dag_id = <class 'str'>`. All model fields require a type annotation; if `dag_id` is not meant to be a field, you may be able to resolve this error by annotating it as a `ClassVar` or updating `model_config['ignored_types']`.`
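The general pattern Pydantic 2 rejects looks like this (the class name here is illustrative; the field name matches the error above):
```python
from pydantic import BaseModel


class DagRunPydantic(BaseModel):
    dag_id = str  # '=' assigns the type object as a default; Pydantic 2 raises PydanticUserError at class definition
    # dag_id: str  # ':' is the annotation Pydantic expects
```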
### What you think should happen instead
_No response_
### How to reproduce
just install apache-airflow and run `airflow db init`
### Operating System
Ubuntu 22.04.2 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32301 | https://github.com/apache/airflow/pull/32307 | df4c8837d022e66921bc0cf33f3249b235de6fdd | 4d84e304b86c97d0437fddbc6b6757b5201eefcc | "2023-07-01T12:00:53Z" | python | "2023-07-01T21:41:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,294 | ["airflow/models/renderedtifields.py"] | K8 executor throws MySQL DB error 'Deadlock found when trying to get lock; try restarting transaction' | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Apache Airflow version: 2.6.1
When multiple runs of a dag are executing simultaneously, the K8s executor fails with the following MySQL exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 1407, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 1534, in _execute_task_with_callbacks
RenderedTaskInstanceFields.write(rtif)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 75, in wrapper
with create_session() as session:
File "/usr/local/lib/python3.10/contextlib.py", line 142, in __exit__
next(self.gen)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 37, in create_session
session.commit()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1454, in commit
self._transaction.commit(_to_root=self.future)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 832, in commit
self._prepare_impl()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 811, in _prepare_impl
self.session.flush()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3449, in flush
self._flush(objects)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3588, in _flush
with util.safe_reraise():
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3549, in _flush
flush_context.execute()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
util.preloaded.orm_persistence.save_obj(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 237, in save_obj
_emit_update_statements(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 1001, in _emit_update_statements
c = connection._execute_20(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1710, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1577, in _execute_clauseelement
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1948, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2129, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1905, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.10/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
SQL: UPDATE rendered_task_instance_fields SET k8s_pod_yaml=%s WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND rendered_task_instance_fields.run_id = %s AND rendered_task_instance_fields.map_index = %s
### What you think should happen instead
K8 executor should complete processing successfully without error
### How to reproduce
Trigger multiple runs of the same dag simultaneously so that tasks under the dag get executed around the same time
### Operating System
Airflow docker image tag 2.6.1-python3.10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
User community airflow-helm chart
https://github.com/airflow-helm
### Anything else
This occurs consistently. If multiple runs of the dag are executed with a delay of a few minutes, the K8s executor completes successfully.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32294 | https://github.com/apache/airflow/pull/32341 | e53320d62030a53c6ffe896434bcf0fc85803f31 | c8a3c112a7bae345d37bb8b90d68c8d6ff2ef8fc | "2023-06-30T22:51:45Z" | python | "2023-07-05T11:28:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,290 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Try number is incorrect | ### Apache Airflow version
2.6.2
### What happened
All tasks were run 1 time. The try number is 2 for all tasks
### What you think should happen instead
Try number should be 1 if only tried 1 time
### How to reproduce
Run a task and use the UI to look up try number
### Operating System
centos
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32290 | https://github.com/apache/airflow/pull/32361 | 43f3e57bf162293b92154f16a8ce33e6922fbf4e | a8e4b8aee602e8c672ab879b7392a65b5c2bb34e | "2023-06-30T16:45:01Z" | python | "2023-07-05T08:30:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,283 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | EmptyOperator in dynamically mapped TaskGroups does not respect upstream dependencies | ### Apache Airflow version
2.6.2
### What happened
When using an EmptyOperator in dynamically mapped TaskGroups (https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#mapping-over-a-task-group), the EmptyOperator of all branches starts as soon as the first upstream task dependency of the EmptyOperator **in any branch** completes. This causes downstream tasks of the EmptyOperator to start prematurely in all branches, breaking depth-first execution of the mapped TaskGroup.
I have provided a test for this behavior below, by introducing an artificial wait time in a `variable_task`, followed by an `EmptyOperator` in `checkpoint` and a dependent `final` task.
![image](https://github.com/apache/airflow/assets/97735/e9d202fa-9b79-4766-b778-a8682a891050)
Running this test, during execution I see the following: the `checkpoint` and `final` tasks are already complete, while the upstream `variable_task` in the same group is still running.
![image](https://github.com/apache/airflow/assets/97735/ad335ab5-ee91-4e91-805b-69b58e9bcd99)
I have measured the difference between the times at which each branch's `final` task executes and compared them to create a failure condition, which you can see failing here in the `assert_branch_waited` task.
By using just a regular Task, one gets the correct behavior.
### What you think should happen instead
In each branch separately, the `EmptyOperator` should wait for its dependency to complete before it starts. This would be the same behavior as using a regular `Task` for `checkpoint`.
### How to reproduce
Here are test cases in two dags, one with an `EmptyOperator`, showing incorrect behavior, one with a `Task` in sequence instead of the `EmptyOperator`, that has correct behavior.
```python
import time
from datetime import datetime

from airflow import DAG
from airflow.decorators import task, task_group
from airflow.models import TaskInstance
from airflow.operators.empty import EmptyOperator

branches = [1, 2]
seconds_difference_expected = 10

for use_empty_operator in [False, True]:
    dag_id = "test-mapped-group"
    if use_empty_operator:
        dag_id += "-with-emptyoperator"
    else:
        dag_id += "-no-emptyoperator"

    with DAG(
        dag_id=dag_id,
        schedule=None,
        catchup=False,
        start_date=datetime(2023, 1, 1),
        default_args={"retries": 0},
    ) as dag:

        @task_group(group_id="branch_run")
        def mapped_group(branch_number):
            """Branch 2 will take > `seconds_difference_expected` seconds, branch 1 will be immediately executed"""

            @task(dag=dag)
            def variable_task(branch_number):
                """Waits `seconds_difference_expected` seconds for branch 2"""
                if branch_number == 2:
                    time.sleep(seconds_difference_expected)
                return branch_number

            variable_task_result = variable_task(branch_number)

            if use_empty_operator:
                # emptyoperator as a checkpoint
                checkpoint_result = EmptyOperator(task_id="checkpoint")
            else:

                @task
                def checkpoint():
                    pass

                checkpoint_result = checkpoint()

            @task(dag=dag)
            def final(ti: TaskInstance = None):
                """Return the time at the task execution"""
                return datetime.now()

            final_result = final()

            variable_task_result >> checkpoint_result >> final_result
            return final_result

        @task(dag=dag)
        def assert_branch_waited(times):
            """Check that the difference of the start times of the final step in each branch
            are at least `seconds_difference_expected`, i.e. the branch waited for all steps
            """
            seconds_difference = abs(times[1] - times[0]).total_seconds()
            if not seconds_difference >= seconds_difference_expected:
                raise RuntimeError(
                    "Branch 2 completed too fast with respect to branch 1: "
                    + f"observed [seconds difference]: {seconds_difference}; "
                    + f"expected [seconds difference]: >= {seconds_difference_expected}"
                )

        mapping_results = mapped_group.expand(branch_number=branches)
        assert_branch_waited(mapping_results)
```
### Operating System
Debian GNU/Linux 11 (bullseye) on docker (official image)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32283 | https://github.com/apache/airflow/pull/32354 | a8e4b8aee602e8c672ab879b7392a65b5c2bb34e | 7722b6f226e9db3a89b01b89db5fdb7a1ab2256f | "2023-06-30T11:22:15Z" | python | "2023-07-05T08:38:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,280 | ["airflow/providers/amazon/aws/hooks/redshift_data.py", "airflow/providers/amazon/aws/operators/redshift_data.py", "tests/providers/amazon/aws/hooks/test_redshift_data.py", "tests/providers/amazon/aws/operators/test_redshift_data.py"] | RedshiftDataOperator: Add support for Redshift serverless clusters | ### Description
This feature adds support for Redshift Serverless clusters for the given operator.
### Use case/motivation
RedshiftDataOperator currently only supports provisioned clusters since it has the capability of adding `ClusterIdentifier` as a parameter but not `WorkgroupName`. The addition of this feature would help users to use this operator for their serverless cluster as well.
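A sketch of the kind of usage this request is asking for; `workgroup_name` is the proposed parameter and does not exist on the operator yet, and the other values are illustrative:
```python
from airflow.providers.amazon.aws.operators.redshift_data import RedshiftDataOperator

run_sql = RedshiftDataOperator(
    task_id="run_sql",
    database="dev",
    sql="SELECT 1;",
    workgroup_name="my-serverless-workgroup",  # proposed: analogous to cluster_identifier for provisioned clusters
)
```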
### Related issues
None
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32280 | https://github.com/apache/airflow/pull/32785 | d05e42e5d2081909c9c33de4bd4dfb759ac860c1 | 8012c9fce64f152b006f88497d65ea81d29571b8 | "2023-06-30T08:51:53Z" | python | "2023-07-24T17:09:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,279 | ["airflow/api/common/airflow_health.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/health_schema.py", "airflow/www/static/js/types/api-generated.ts", "docs/apache-airflow/administration-and-deployment/logging-monitoring/check-health.rst", "tests/api/common/test_airflow_health.py"] | Add DagProcessor status to health endpoint. | ### Description
Add DagProcessor status, including the latest heartbeat, to the health endpoint, similar to the Triggerer status added recently. Related PRs:
https://github.com/apache/airflow/pull/31529
https://github.com/apache/airflow/pull/27755
### Use case/motivation
It helps with dag processor monitoring.
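A sketch of the kind of `/health` payload being requested, by analogy with the existing metadatabase/scheduler/triggerer entries (the `dag_processor` key is the proposed addition; timestamps are illustrative):
```python
expected_health = {
    "metadatabase": {"status": "healthy"},
    "scheduler": {"status": "healthy", "latest_scheduler_heartbeat": "2023-06-30T08:00:00+00:00"},
    "triggerer": {"status": "healthy", "latest_triggerer_heartbeat": "2023-06-30T08:00:00+00:00"},
    "dag_processor": {"status": "healthy", "latest_dag_processor_heartbeat": "2023-06-30T08:00:00+00:00"},  # new
}
```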
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32279 | https://github.com/apache/airflow/pull/32382 | bb97bf21fd320c593b77246590d4f8d2b0369c24 | b3db4de4985eccb859a30a07a2350499370c6a9a | "2023-06-30T08:42:33Z" | python | "2023-07-06T23:10:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,260 | ["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"] | Apparently the Jinja template does not work when using dynamic task mapping with SQLExecuteQueryOperator | ### Apache Airflow version
2.6.2
### What happened
We are trying to use dynamic task mapping with SQLExecuteQueryOperator on Trino. Our use case is to expand the sql parameter of the operator with a list of SQL files.
Without dynamic task mapping it works perfectly, but when dynamic task mapping is used the operator is unable to resolve the file path, and instead tries to execute the path string itself as the query.
I believe this is related to the template_searchpath parameter.
### What you think should happen instead
It should work the same with or without dynamic task mapping.
### How to reproduce
Deployed the following DAG in Airflow
```
from airflow.models import DAG
from datetime import datetime, timedelta
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

DEFAULT_ARGS = {
    'start_date': datetime(2023, 7, 16),
}

with DAG(
    dag_id='trino_dinamic_map',
    template_searchpath='/opt/airflow',
    description="Esta é um dag para o projeto exemplo",
    schedule=None,
    default_args=DEFAULT_ARGS,
    catchup=False,
) as dag:
    trino_call = SQLExecuteQueryOperator(
        task_id='trino_call',
        conn_id='con_id',
        sql='queries/insert_delta_dp_raw_table1.sql',
        handler=list,
    )

    trino_insert = SQLExecuteQueryOperator.partial(
        task_id="trino_insert_table",
        conn_id='con_id',
        handler=list,
    ).expand_kwargs(
        [
            {'sql': 'queries/insert_delta_dp_raw_table1.sql'},
            {'sql': 'queries/insert_delta_dp_raw_table2.sql'},
            {'sql': 'queries/insert_delta_dp_raw_table3.sql'},
        ]
    )

    trino_call >> trino_insert
```
The SQL file can contain any query; for this test I used a CREATE TABLE. The queries are located in /opt/airflow/queries.
```
CREATE TABLE database_config.data_base_name.TABLE_NAME (
"JOB_NAME" VARCHAR(60) NOT NULL,
"JOB_ID" DECIMAL NOT NULL,
"JOB_STATUS" VARCHAR(10),
"JOB_STARTED_DATE" VARCHAR(10),
"JOB_STARTED_TIME" VARCHAR(10),
"JOB_FINISHED_DATE" VARCHAR(10),
"JOB_FINISHED_TIME" VARCHSAR(10)
)
```
task_1 (without dynamic task mapping) completes successfully, while task_2 (with dynamic task mapping) fails.
Looking at the error logs, the query execution failed because the path string was submitted instead of the query content.
Here is the traceback:
trino.exceptions.TrinoUserError: TrinoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:1: mismatched input 'queries'. Expecting: 'ALTER', 'ANALYZE', 'CALL', 'COMMENT', 'COMMIT', 'CREATE', 'DEALLOCATE', 'DELETE', 'DENY', 'DESC', 'DESCRIBE', 'DROP', 'EXECUTE', 'EXPLAIN', 'GRANT', 'INSERT', 'MERGE', 'PREPARE', 'REFRESH', 'RESET', 'REVOKE', 'ROLLBACK', 'SET', 'SHOW', 'START', 'TRUNCATE', 'UPDATE', 'USE', <query>", query_id=20230629_114146_04418_qbcnd)
### Operating System
Red Hat Enterprise Linux 8.8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32260 | https://github.com/apache/airflow/pull/32272 | 58eb19fe7669b61d0a00bcc82df16adee379a233 | d1e6a5c48d03322dda090113134f745d1f9c34d4 | "2023-06-29T12:31:44Z" | python | "2023-08-18T19:17:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,227 | ["airflow/providers/amazon/aws/hooks/lambda_function.py", "airflow/providers/amazon/aws/operators/lambda_function.py", "tests/providers/amazon/aws/hooks/test_lambda_function.py", "tests/providers/amazon/aws/operators/test_lambda_function.py"] | LambdaInvokeFunctionOperator expects wrong type for payload arg | ### Apache Airflow version
2.6.2
### What happened
I instantiate LambdaInvokeFunctionOperator in my DAG.
I want to call the lambda function with some payload. Following the code example from the official documentation, I created a dict and passed its JSON string version to the operator:
```
d = {'key': 'value'}
invoke_lambda_task = LambdaInvokeFunctionOperator(..., payload=json.dumps(d))
```
When I executed the DAG, this task failed. I got the following error message:
```
Invalid type for parameter Payload, value: {'key': 'value'}, type: <class 'dict'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object
```
Then I went to the official boto3 documentation and found that, indeed, the payload parameter type is `bytes` or a file-like object. See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda/client/invoke.html
### What you think should happen instead
To preserve backward compatibility, the API should encode the payload argument if it is a str, but also accept bytes or a file-like object, in which case it will be passed through as-is.
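A sketch of the suggested handling (illustrative only, not the provider's actual code):
```python
def _normalize_payload(payload):
    """Encode str payloads for boto3's Invoke; pass bytes/bytearray/file-like objects through unchanged."""
    if isinstance(payload, str):
        return payload.encode()  # keeps existing DAGs that pass json.dumps(...) strings working
    return payload
```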
### How to reproduce
1. Create lambda function in AWS
2. Create a simple DAG with LambdaInvokeFunctionOperator
3. Pass a str value in the payload parameter; use json.dumps() with a simple dict, as the lambda expects a JSON payload
4. Run the DAG; the task is expected to fail
### Operating System
ubuntu
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.3.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32227 | https://github.com/apache/airflow/pull/32259 | 88da71ed1fdffc558de28d5c3fd78e5ae1ac4e8c | 5c72befcfde63ade2870491cfeb708675399d9d6 | "2023-06-28T09:13:24Z" | python | "2023-07-03T06:45:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,215 | ["airflow/providers/google/cloud/operators/dataproc.py"] | DataprocCreateBatchOperator in deferrable mode doesn't reattach with deferment. | ### Apache Airflow version
main (development)
### What happened
The DataprocCreateBatchOperator (Google provider) handles the case when a batch_id already exists in the Dataproc API by 'reattaching' to a potentially running job.
Current reattachment logic uses the non-deferrable method even when the operator is in deferrable mode.
### What you think should happen instead
The operator should reattach in deferrable mode.
### How to reproduce
Create a DAG with a task of DataprocCreateBatchOperator that is long running. Make DataprocCreateBatchOperator deferrable in the constructor.
Restart local Airflow to simulate having to 'reattach' to a running job in Google Cloud Dataproc.
The operator resumes using the running job, but in the code path for the non-deferrable logic.
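A minimal sketch of the operator construction described in the steps above (the project, region and batch spec are placeholders):
```python
from airflow.providers.google.cloud.operators.dataproc import DataprocCreateBatchOperator

create_batch = DataprocCreateBatchOperator(
    task_id="create_batch",
    project_id="my-project",
    region="us-central1",
    batch={},  # a long-running batch spec goes here (placeholder)
    batch_id="my-long-running-batch",  # fixed id, so a restart hits the "already exists"/reattach path
    deferrable=True,
)
```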
### Operating System
macOS 13.4.1 (22F82)
### Versions of Apache Airflow Providers
Current main.
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32215 | https://github.com/apache/airflow/pull/32216 | f2e2125b070794b6a66fb3e2840ca14d07054cf2 | 7d2ec76c72f70259b67af0047aa785b28668b411 | "2023-06-27T20:09:11Z" | python | "2023-06-29T13:51:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,203 | ["airflow/auth/managers/fab/views/roles_list.py", "airflow/www/fab_security/manager.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"] | AIP-56 - FAB AM - Role views | Move role related views to FAB Auth manager:
- List roles
- Edit role
- Create role
- View role | https://github.com/apache/airflow/issues/32203 | https://github.com/apache/airflow/pull/33043 | 90fb482cdc6a6730a53a82ace49d42feb57466e5 | 5707103f447be818ad4ba0c34874b822ffeefc09 | "2023-06-27T18:16:54Z" | python | "2023-08-10T14:15:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,202 | ["airflow/auth/managers/fab/views/user.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/auth/managers/fab/views/user_edit.py", "airflow/auth/managers/fab/views/user_stats.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"] | AIP-56 - FAB AM - User views | Move user related views to FAB Auth manager:
- List users
- Edit user
- Create user
- View user | https://github.com/apache/airflow/issues/32202 | https://github.com/apache/airflow/pull/33055 | 2d7460450dda5cc2f20d1e8cd9ead9e4d1946909 | 66254e42962f63d6bba3370fea40e082233e153d | "2023-06-27T18:16:48Z" | python | "2023-08-07T17:40:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,201 | ["airflow/auth/managers/fab/views/user.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/auth/managers/fab/views/user_edit.py", "airflow/auth/managers/fab/views/user_stats.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"] | AIP-56 - FAB AM - User's statistics view | Move user's statistics view to FAB Auth manager | https://github.com/apache/airflow/issues/32201 | https://github.com/apache/airflow/pull/33055 | 2d7460450dda5cc2f20d1e8cd9ead9e4d1946909 | 66254e42962f63d6bba3370fea40e082233e153d | "2023-06-27T18:16:43Z" | python | "2023-08-07T17:40:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,199 | ["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"] | AIP-56 - FAB AM - Permissions view | Move permissions view to FAB Auth manager | https://github.com/apache/airflow/issues/32199 | https://github.com/apache/airflow/pull/33277 | 5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f | 39aee60b33a56eee706af084ed1c600b12a0dd57 | "2023-06-27T18:16:38Z" | python | "2023-08-11T15:11:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,198 | ["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"] | AIP-56 - FAB AM - Actions view | Move actions view to FAB Auth manager | https://github.com/apache/airflow/issues/32198 | https://github.com/apache/airflow/pull/33277 | 5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f | 39aee60b33a56eee706af084ed1c600b12a0dd57 | "2023-06-27T18:16:33Z" | python | "2023-08-11T15:11:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,197 | ["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"] | AIP-56 - FAB AM - Resources view | Move resources view to FAB Auth manager | https://github.com/apache/airflow/issues/32197 | https://github.com/apache/airflow/pull/33277 | 5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f | 39aee60b33a56eee706af084ed1c600b12a0dd57 | "2023-06-27T18:16:27Z" | python | "2023-08-11T15:11:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,196 | ["airflow/auth/managers/base_auth_manager.py", "airflow/auth/managers/fab/fab_auth_manager.py", "airflow/auth/managers/fab/views/__init__.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/www/extensions/init_appbuilder.py", "airflow/www/fab_security/views.py", "airflow/www/security.py", "airflow/www/templates/appbuilder/navbar_right.html"] | AIP-56 - FAB AM - Move profile view into auth manager | The profile view (`/users/userinfo/`) needs to be moved to FAB auth manager. The profile URL needs to be returned as part of `get_url_account()` as specified in the AIP | https://github.com/apache/airflow/issues/32196 | https://github.com/apache/airflow/pull/32756 | f17bc0f4bf15504833f2c8fd72d947c2ddfa55ed | f2e93310c43b7e9df1cbe33350b91a8a84e938a2 | "2023-06-27T18:16:22Z" | python | "2023-07-26T14:20:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,193 | ["airflow/auth/managers/base_auth_manager.py", "airflow/auth/managers/fab/fab_auth_manager.py", "airflow/www/auth.py", "airflow/www/extensions/init_appbuilder.py", "airflow/www/extensions/init_security.py", "airflow/www/templates/appbuilder/navbar_right.html", "tests/auth/managers/fab/test_fab_auth_manager.py", "tests/www/views/test_session.py"] | AIP-56 - FAB AM - Logout | Move the logout logic to the auth manager | https://github.com/apache/airflow/issues/32193 | https://github.com/apache/airflow/pull/32819 | 86193f560815507b9abf1008c19b133d95c4da9f | 2b0d88e450f11af8e447864ca258142a6756126d | "2023-06-27T18:16:04Z" | python | "2023-07-31T19:20:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,164 | ["airflow/config_templates/config.yml", "airflow/metrics/otel_logger.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | Metrics - Encrypted OTel Endpoint? | ### Apache Airflow version
2.6.2
### What happened
I left this as a TODO and will get to it eventually, but if someone wants to look into it before I have time, this may be an easy one:
We are creating an OTLPMetricExporter endpoint with `http` [here](https://github.com/apache/airflow/blob/main/airflow/metrics/otel_logger.py#L400) and should look into whether we can make this work with `https`.
### Definition of Done:
Either implement HTTPS or replace the TODO with a reference citing why we can not.
I am submitting this as an Issue since I will be a little distracted for the next bit and figured someone may be able to have a look in the meantime. Please do not assign it to me; I'll get to it when I can if nobody else does.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32164 | https://github.com/apache/airflow/pull/32524 | 978adb309aee755df02aadab72fdafb61bec5c80 | 531eb41bff032e10ffd1f8941113e2a872ef78fd | "2023-06-26T22:43:58Z" | python | "2023-07-21T10:07:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,153 | ["airflow/www/static/js/api/useExtraLinks.ts", "airflow/www/static/js/dag/details/taskInstance/ExtraLinks.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx"] | Support extra link per mapped task in grid view | ### Description
Currently, extra links are disabled in the mapped task summary, but even if we select a mapped task with a specific map_index the extra link still remains disabled. Since we support passing map_index to get the relevant extra link, it would be helpful to have the appropriate link displayed.
### Use case/motivation
This will be useful for scenarios where there are a high number of mapped tasks that are linked to an external URL or resource.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32153 | https://github.com/apache/airflow/pull/32154 | 5c0fca6440fae3ece915b365e1f06eb30db22d81 | 98c47f48e1b292d535d39cc3349660aa736d76cd | "2023-06-26T19:15:37Z" | python | "2023-06-28T15:22:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,121 | ["airflow/cli/commands/triggerer_command.py", "airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/jobs/triggerer_job_runner.py", "airflow/models/trigger.py", "tests/models/test_trigger.py"] | Multiple Triggerer processes keeps reassigning triggers to each other when job_heartbeat_sec is higher than 30 seconds. | ### Apache Airflow version
main (development)
### What happened
Airflow has `job_heartbeat_sec` (default 5), which was updated to 50 seconds in our environment. This caused the 2 triggerer instances running for HA to keep updating triggerer_id, since the query below compares latest_heartbeat against the current time minus a hardcoded 30 seconds. With a 50-second heartbeat interval, alive_triggerer_ids returns an empty list and the triggerer running the query assigns the triggers to itself. This keeps happening, moving triggers back and forth between the instances.
https://github.com/apache/airflow/blob/62a534dbc7fa8ddb4c249ade85c558b64d1630dd/airflow/models/trigger.py#L216-L223
### What you think should happen instead
The heartbeat window should be derived from job_heartbeat_sec rather than a hardcoded 30 seconds, so that the queries checking which triggerer processes are alive are adjusted to the user's settings.
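A sketch of the idea (not the actual `Trigger` model code; the grace multiplier is an assumption):
```python
from datetime import timedelta

from airflow.configuration import conf
from airflow.utils import timezone

# derive the liveness window from the configured heartbeat interval instead of a hardcoded 30 seconds
heartbeat_window = conf.getfloat("scheduler", "job_heartbeat_sec") * 2.1  # grace factor is an assumption
alive_cutoff = timezone.utcnow() - timedelta(seconds=heartbeat_window)
# a triggerer would then be considered alive if its latest_heartbeat >= alive_cutoff
```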
### How to reproduce
1. Change job_heartbeat_sec to 50 in airflow.cfg
2. Run 2 instances of triggerer.
3. Make an operator that uses `FileTrigger` with an absent file, or any other long-running deferred task.
4. Check the triggerer_id for the trigger, which keeps changing.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32121 | https://github.com/apache/airflow/pull/32123 | 4501f8b352aee9c2cd29126a64cab62fa19fc49d | d117728cd6f337266bebcf4916325d5de815fe03 | "2023-06-25T08:19:56Z" | python | "2023-06-30T20:19:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,111 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | KubernetesPodOperator job intermittently fails - unable to retrieve json from xcom sidecar container due to network connectivity issues | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We have seen that KubernetesPodOperator sometimes fails to retrieve the JSON from the XCom sidecar container due to network connectivity issues, or in some cases retrieves incomplete JSON which cannot be parsed. The KubernetesPodOperator task then fails with these error stack traces
e.g.
```
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 398, in execute
result = self.extract_xcom(pod=self.pod)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 372, in extract_xcom
result = self.pod_manager.extract_xcom(pod)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 369, in extract_xcom
_preload_content=False,
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/stream/stream.py", line 35, in _websocket_request
return api_method(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 994, in connect_get_namespaced_pod_exec
return self.connect_get_namespaced_pod_exec_with_http_info(name, namespace, **kwargs) # noqa: E501
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 1115, in connect_get_namespaced_pod_exec_with_http_info
collection_formats=collection_formats)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 353, in call_api
_preload_content, _request_timeout, _host)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 184, in __call_api
_request_timeout=_request_timeout)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/stream/ws_client.py", line 518, in websocket_call
raise ApiException(status=0, reason=str(e))
kubernetes.client.exceptions.ApiException: (0)
Reason: Connection to remote host was lost.
```
OR
```
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 398, in execute
result = self.extract_xcom(pod=self.pod)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 374, in extract_xcom
return json.loads(result)
File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 4076 (char 4075)
```
We are using airflow 2.6.1 and apache-airflow-providers-cncf-kubernetes==4.0.2
### What you think should happen instead
KubernetesPodOperator should not fail on these intermittent network connectivity issues when pulling JSON from the XCom sidecar container. It should have retries and verify that it was able to retrieve valid JSON before it kills the XCom sidecar container.
The extract_xcom function in PodManager should (as sketched below):
* Read and try to parse the return JSON read from /airflow/xcom/return.json - to catch cases where, say due to network connectivity issues, we read incomplete (truncated) JSON
* Add retries when reading the JSON - hopefully this also guards against other network errors (with the kubernetes stream trying to talk to the airflow container to get the return JSON)
* Only if the return JSON can be read and parsed (i.e. it is valid) should the code go ahead and kill the sidecar container.
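A sketch of that hardening (not the provider's actual PodManager code; `read_result_fn` is a placeholder for the exec-into-sidecar call):
```python
import json

from tenacity import retry, stop_after_attempt, wait_exponential


@retry(stop=stop_after_attempt(5), wait=wait_exponential(multiplier=1, max=10), reraise=True)
def read_xcom_json(read_result_fn) -> dict:
    raw = read_result_fn()  # exec into the sidecar and read /airflow/xcom/return.json
    return json.loads(raw)  # truncated/invalid JSON raises here and triggers a retry

# only after a successful parse should the sidecar container be killed
```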
### How to reproduce
This occurs intermittently, so it is hard to reproduce. It happens when the Kubernetes cluster is under load; in 7 days we see this happen once or twice.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
airflow 2.6.1 and apache-airflow-providers-cncf-kubernetes==4.0.2
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
This occurs intermittently, so it is hard to reproduce. It happens when the Kubernetes cluster is under load; in 7 days we see this happen once or twice.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32111 | https://github.com/apache/airflow/pull/32113 | d117728cd6f337266bebcf4916325d5de815fe03 | df4c8837d022e66921bc0cf33f3249b235de6fdd | "2023-06-23T22:03:19Z" | python | "2023-07-01T06:43:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,107 | ["airflow/providers/google/cloud/hooks/dataflow.py"] | Improved error logging for failed Dataflow jobs | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
When running a Dataflow job in Cloud Composer composer-1.20.12-airflow-1.10.15 (Airflow 1.10.15), a failed Dataflow job throws only the generic error "Exception: DataFlow failed with return code 1", and the reason for the failure is not clearly evident from the logs. This issue is in
Airflow 1:
https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/contrib/hooks/gcp_dataflow_hook.py#L172
This issue still exists in Airflow 2.
Airflow 2:
https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/hooks/dataflow.py#L1019
Can the error logging be improved to show the exact reason, and to display a few lines of the standard error from the Dataflow command run so that it gives context?
This will help Dataflow users to identify the root cause of the issue directly from the logs, and avoid additional research and troubleshooting by going through the log details via Cloud Logging.
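One possible shape for the improvement (illustrative only; `process_stderr_lines` and `return_code` stand in for however the hook reads the Dataflow process output):
```python
from collections import deque


def raise_with_context(process_stderr_lines, return_code, tail_size=20):
    tail = deque(process_stderr_lines, maxlen=tail_size)  # keep only the last few stderr lines for context
    if return_code != 0:
        raise Exception(
            f"DataFlow failed with return code {return_code}. Last stderr lines:\n" + "\n".join(tail)
        )
```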
I am happy to contribute and raise a PR to help implement the fix. I am looking for input on how to integrate it with the existing code base.
Thanks for your help in advance!
Srabasti Banerjee
### What you think should happen instead
[2023-06-15 14:04:37,071] {taskinstance.py:1152} ERROR - DataFlow failed with return code 1
Traceback (most recent call last):
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/airflow/airflow/operators/python_operator.py", line 113, in execute
return_value = self.execute_callable()
File "/usr/local/lib/airflow/airflow/operators/python_operator.py", line 118, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/airflow/gcs/dags/X.zip/X.py", line Y, in task
DataFlowPythonOperator(
File "/usr/local/lib/airflow/airflow/contrib/operators/dataflow_operator.py", line 379, in execute
hook.start_python_dataflow(
File "/usr/local/lib/airflow/airflow/contrib/hooks/gcp_dataflow_hook.py", line 245, in start_python_dataflow
self._start_dataflow(variables, name, ["python"] + py_options + [dataflow],
File "/usr/local/lib/airflow/airflow/contrib/hooks/gcp_api_base_hook.py", line 363, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/lib/airflow/airflow/contrib/hooks/gcp_dataflow_hook.py", line 204, in _start_dataflow
job_id = _Dataflow(cmd).wait_for_done()
File "/usr/local/lib/airflow/airflow/contrib/hooks/gcp_dataflow_hook.py", line 178, in wait_for_done
raise Exception("DataFlow failed with return code {}".format(
Exception: DataFlow failed with return code 1
### How to reproduce
Any failed Dataflow job, e.g. one that involves deleting a file while it is in the process of being ingested by a Dataflow job task run via Cloud Composer. Please let me know if any details are needed.
### Operating System
Cloud Composer
### Versions of Apache Airflow Providers
_No response_
### Deployment
Google Cloud Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32107 | https://github.com/apache/airflow/pull/32847 | 2950fd768541fc902d8f7218e4243e8d83414c51 | b4102ce0b55e76baadf3efdec0df54762001f38c | "2023-06-23T20:08:20Z" | python | "2023-08-14T10:52:03Z" |