Dataset schema (column: dtype, observed value counts or string-length range):
- status: stringclasses, 1 value
- repo_name: stringlengths, 9–24
- repo_url: stringlengths, 28–43
- issue_id: int64, 1–104k
- updated_files: stringlengths, 8–1.76k
- title: stringlengths, 4–369
- body: stringlengths, 0–254k
- issue_url: stringlengths, 37–56
- pull_url: stringlengths, 37–54
- before_fix_sha: stringlengths, 40–40
- after_fix_sha: stringlengths, 40–40
- report_datetime: unknown
- language: stringclasses, 5 values
- commit_datetime: unknown
closed
apache/airflow
https://github.com/apache/airflow
32,285
["airflow/providers/google/cloud/transfers/azure_blob_to_gcs.py", "airflow/providers/google/provider.yaml", "airflow/providers/microsoft/azure/provider.yaml", "airflow/providers/microsoft/azure/transfers/azure_blob_to_gcs.py", "docs/apache-airflow-providers-google/connections/gcp.rst", "docs/apache-airflow-providers-google/operators/transfer/azure_blob_to_gcs.rst", "docs/apache-airflow-providers-microsoft-azure/redirects.txt", "docs/apache-airflow-providers-microsoft-azure/transfer/azure_blob_to_gcs.rst", "tests/providers/google/cloud/transfers/test_azure_blob_to_gcs.py", "tests/system/providers/google/cloud/azure/example_azure_blob_to_gcs.py"]
Move `AzureBlobStorageToGCSOperator` to the Google provider
### Body https://github.com/apache/airflow/blob/94128303e17412315aacd529d75a2ef549cce1f5/airflow/providers/microsoft/azure/transfers/azure_blob_to_gcs.py#L31 is stored in the Azure provider but our [policy](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#naming-conventions-for-provider-packages) is that transfer operators of major clouds should be in the target provider (Google) **The Task:** Deprecate `AzureBlobStorageToGCSOperator` in Azure provider Move `AzureBlobStorageToGCSOperator` to Google provider ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
https://github.com/apache/airflow/issues/32285
https://github.com/apache/airflow/pull/32306
566bc1b68b4e1643761b4e8518e5e556b8e6e82c
257136786c9a3eebbae717738637ab24fd6ab563
"2023-06-30T12:35:39Z"
python
"2023-07-08T05:01:20Z"
closed
apache/airflow
https://github.com/apache/airflow
32,283
["airflow/models/dagrun.py", "tests/models/test_dagrun.py"]
EmptyOperator in dynamically mapped TaskGroups does not respect upstream dependencies
### Apache Airflow version 2.6.2 ### What happened When using an EmptyOperator in dynamically mapped TaskGroups (https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#mapping-over-a-task-group), the EmptyOperator of all branches starts as soon as the first upstream task dependency of the EmptyOperator **in any branch** completes. This causes downstream tasks of the EmptyOperator to start prematurely in all branches, breaking depth-first execution of the mapped TaskGroup. I have provided a test for this behavior below, by introducing an artificial wait time in a `variable_task`, followed by an `EmptyOperator` in `checkpoint` and a `final` dependent task . ![image](https://github.com/apache/airflow/assets/97735/e9d202fa-9b79-4766-b778-a8682a891050) Running this test, during the execution I see this: The `checkpoint` and `final` tasks are already complete, while the upstream `variable_task` in the group is still running. ![image](https://github.com/apache/airflow/assets/97735/ad335ab5-ee91-4e91-805b-69b58e9bcd99) I have measured the difference of time when of each the branches' `final` tasks execute, and compared them, to cause a failure condition, which you can see failing here in the `assert_branch_waited` task. By using just a regular Task, one gets the correct behavior. ### What you think should happen instead In each branch separately, the `EmptyOperator` should wait for its dependency to complete, before it starts. This would be the same behavior as using a regular `Task` for `checkpoint`. ### How to reproduce Here are test cases in two dags, one with an `EmptyOperator`, showing incorrect behavior, one with a `Task` in sequence instead of the `EmptyOperator`, that has correct behavior. ```python import time from datetime import datetime from airflow import DAG from airflow.decorators import task, task_group from airflow.models import TaskInstance from airflow.operators.empty import EmptyOperator branches = [1, 2] seconds_difference_expected = 10 for use_empty_operator in [False, True]: dag_id = "test-mapped-group" if use_empty_operator: dag_id += "-with-emptyoperator" else: dag_id += "-no-emptyoperator" with DAG( dag_id=dag_id, schedule=None, catchup=False, start_date=datetime(2023, 1, 1), default_args={"retries": 0}, ) as dag: @task_group(group_id="branch_run") def mapped_group(branch_number): """Branch 2 will take > `seconds_difference_expected` seconds, branch 1 will be immediately executed""" @task(dag=dag) def variable_task(branch_number): """Waits `seconds_difference_expected` seconds for branch 2""" if branch_number == 2: time.sleep(seconds_difference_expected) return branch_number variable_task_result = variable_task(branch_number) if use_empty_operator: # emptyoperator as a checkpoint checkpoint_result = EmptyOperator(task_id="checkpoint") else: @task def checkpoint(): pass checkpoint_result = checkpoint() @task(dag=dag) def final(ti: TaskInstance = None): """Return the time at the task execution""" return datetime.now() final_result = final() variable_task_result >> checkpoint_result >> final_result return final_result @task(dag=dag) def assert_branch_waited(times): """Check that the difference of the start times of the final step in each branch are at least `seconds_difference_expected`, i.e. 
the branch waited for all steps """ seconds_difference = abs(times[1] - times[0]).total_seconds() if not seconds_difference >= seconds_difference_expected: raise RuntimeError( "Branch 2 completed too fast with respect to branch 1: " + f"observed [seconds difference]: {seconds_difference}; " + f"expected [seconds difference]: >= {seconds_difference_expected}" ) mapping_results = mapped_group.expand(branch_number=branches) assert_branch_waited(mapping_results) ``` ### Operating System Debian GNU/Linux 11 (bullseye) on docker (official image) ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32283
https://github.com/apache/airflow/pull/32354
a8e4b8aee602e8c672ab879b7392a65b5c2bb34e
7722b6f226e9db3a89b01b89db5fdb7a1ab2256f
"2023-06-30T11:22:15Z"
python
"2023-07-05T08:38:29Z"
closed
apache/airflow
https://github.com/apache/airflow
32,280
["airflow/providers/amazon/aws/hooks/redshift_data.py", "airflow/providers/amazon/aws/operators/redshift_data.py", "tests/providers/amazon/aws/hooks/test_redshift_data.py", "tests/providers/amazon/aws/operators/test_redshift_data.py"]
RedshiftDataOperator: Add support for Redshift serverless clusters
### Description This feature adds support for Redshift Serverless clusters for the given operator. ### Use case/motivation RedshiftDataOperator currently only supports provisioned clusters since it has the capability of adding `ClusterIdentifier` as a parameter but not `WorkgroupName`. The addition of this feature would help users to use this operator for their serverless cluster as well. ### Related issues None ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32280
https://github.com/apache/airflow/pull/32785
d05e42e5d2081909c9c33de4bd4dfb759ac860c1
8012c9fce64f152b006f88497d65ea81d29571b8
"2023-06-30T08:51:53Z"
python
"2023-07-24T17:09:44Z"
closed
apache/airflow
https://github.com/apache/airflow
32,279
["airflow/api/common/airflow_health.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/health_schema.py", "airflow/www/static/js/types/api-generated.ts", "docs/apache-airflow/administration-and-deployment/logging-monitoring/check-health.rst", "tests/api/common/test_airflow_health.py"]
Add DagProcessor status to health endpoint.
### Description Add DagProcessor status including latest heartbeat to health endpoint similar to Triggerer status added recently. Related PRs. https://github.com/apache/airflow/pull/31529 https://github.com/apache/airflow/pull/27755 ### Use case/motivation It helps in dag processor monitoring ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32279
https://github.com/apache/airflow/pull/32382
bb97bf21fd320c593b77246590d4f8d2b0369c24
b3db4de4985eccb859a30a07a2350499370c6a9a
"2023-06-30T08:42:33Z"
python
"2023-07-06T23:10:33Z"
closed
apache/airflow
https://github.com/apache/airflow
32,260
["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"]
Apparently the Jinja template does not work when using dynamic task mapping with SQLExecuteQueryOperator
### Apache Airflow version 2.6.2 ### What happened We are trying to use dynamic task mapping with SQLExecuteQueryOperator on Trino. Our use case is to expand the sql parameter to the operator by calling some SQL files. Without dynamic task mapping it works perfectly, but when used with the dynamic task mapping, it is unable to recognize the Path, and instead tries to execute the path as query. I believe it has some relation with the template_searchpath parameter. ### What you think should happen instead It should have worked similar with or without dynamic task mapping. ### How to reproduce Deployed the following DAG in Airflow ``` from airflow.models import DAG from datetime import datetime, timedelta from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator DEFAULT_ARGS = { 'start_date': datetime(2023, 7, 16), } with DAG (dag_id= 'trino_dinamic_map', template_searchpath = '/opt/airflow', description = "Esta é um dag para o projeto exemplo", schedule = None, default_args = DEFAULT_ARGS, catchup=False, ) as dag: trino_call = SQLExecuteQueryOperator( task_id= 'trino_call', conn_id='con_id', sql = 'queries/insert_delta_dp_raw_table1.sql', handler=list ) trino_insert = SQLExecuteQueryOperator.partial( task_id="trino_insert_table", conn_id='con_id', handler=list ).expand_kwargs([{'sql': 'queries/insert_delta_dp_raw_table1.sql'}, {'sql': 'queries/insert_delta_dp_raw_table2.sql'}, {'sql': 'queries/insert_delta_dp_raw_table3.sql'}]) trino_call >> trino_insert ``` In the sql file it can be any query, for the test I used a create table. Queries are located in /opt/airflow/queries ``` CREATE TABLE database_config.data_base_name.TABLE_NAME ( "JOB_NAME" VARCHAR(60) NOT NULL, "JOB_ID" DECIMAL NOT NULL, "JOB_STATUS" VARCHAR(10), "JOB_STARTED_DATE" VARCHAR(10), "JOB_STARTED_TIME" VARCHAR(10), "JOB_FINISHED_DATE" VARCHAR(10), "JOB_FINISHED_TIME" VARCHSAR(10) ) ``` task_1 (without dynamic task mapping) completes successfully, while task_2(with dynamic task mapping) fails. Looking at the error logs, there was a failure when executing the query, not recognizing the query content but the path. Here is the traceback: trino.exceptions.TrinoUserError: TrinoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:1: mismatched input 'queries'. Expecting: 'ALTER', 'ANALYZE', 'CALL', 'COMMENT', 'COMMIT', 'CREATE', 'DEALLOCATE', 'DELETE', 'DENY', 'DESC', 'DESCRIBE', 'DROP', 'EXECUTE', 'EXPLAIN', 'GRANT', 'INSERT', 'MERGE', 'PREPARE', 'REFRESH', 'RESET', 'REVOKE', 'ROLLBACK', 'SET', 'SHOW', 'START', 'TRUNCATE', 'UPDATE', 'USE', <query>", query_id=20230629_114146_04418_qbcnd) ### Operating System Red Hat Enterprise Linux 8.8 ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32260
https://github.com/apache/airflow/pull/32272
58eb19fe7669b61d0a00bcc82df16adee379a233
d1e6a5c48d03322dda090113134f745d1f9c34d4
"2023-06-29T12:31:44Z"
python
"2023-08-18T19:17:07Z"
closed
apache/airflow
https://github.com/apache/airflow
32,227
["airflow/providers/amazon/aws/hooks/lambda_function.py", "airflow/providers/amazon/aws/operators/lambda_function.py", "tests/providers/amazon/aws/hooks/test_lambda_function.py", "tests/providers/amazon/aws/operators/test_lambda_function.py"]
LambdaInvokeFunctionOperator expects wrong type for payload arg
### Apache Airflow version 2.6.2 ### What happened I instantiate LambdaInvokeFunctionOperator in my DAG. I want to call the lambda function with some payload. After following code example from official documentation, I created a dict, and passed its json string version to the operator: ``` d = {'key': 'value'} invoke_lambda_task = LambdaInvokeFunctionOperator(..., payload=json.dumps(d)) ``` When I executed the DAG, this task failed. I got the following error message: ``` Invalid type for parameter Payload, value: {'key': 'value'}, type: <class 'dict'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object ``` Then I went to official boto3 documentation, and found out that indeed, the payload parameter type is `bytes` or file. See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda/client/invoke.html ### What you think should happen instead To preserve backward compatibility, The API should encode payload argument if it is str, but also accept bytes or file, in which case it will be passed-through as is. ### How to reproduce 1. Create lambda function in AWS 2. Create a simple DAG with LambdaInvokeFunctionOperator 3. pass str value in the payload parameter; use json.dumps() with a simple dict, as lambda expects json payload 4. Run the DAG; the task is expected to fail ### Operating System ubuntu ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==7.3.0 ### Deployment Virtualenv installation ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32227
https://github.com/apache/airflow/pull/32259
88da71ed1fdffc558de28d5c3fd78e5ae1ac4e8c
5c72befcfde63ade2870491cfeb708675399d9d6
"2023-06-28T09:13:24Z"
python
"2023-07-03T06:45:24Z"
closed
apache/airflow
https://github.com/apache/airflow
32,215
["airflow/providers/google/cloud/operators/dataproc.py"]
DataprocCreateBatchOperator in deferrable mode doesn't reattach with deferment.
### Apache Airflow version main (development) ### What happened The DataprocCreateBatchOperator (Google provider) handles the case when a batch_id already exists in the Dataproc API by 'reattaching' to a potentially running job. The current reattachment logic uses the non-deferrable method even when the operator is in deferrable mode. ### What you think should happen instead The operator should reattach in deferrable mode. ### How to reproduce Create a DAG with a long-running DataprocCreateBatchOperator task. Make DataprocCreateBatchOperator deferrable in the constructor. Restart local Airflow to simulate having to 'reattach' to a running job in Google Cloud Dataproc. The operator resumes using the running job, but in the code path for the non-deferrable logic. ### Operating System macOS 13.4.1 (22F82) ### Versions of Apache Airflow Providers Current main. ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32215
https://github.com/apache/airflow/pull/32216
f2e2125b070794b6a66fb3e2840ca14d07054cf2
7d2ec76c72f70259b67af0047aa785b28668b411
"2023-06-27T20:09:11Z"
python
"2023-06-29T13:51:17Z"
closed
apache/airflow
https://github.com/apache/airflow
32,203
["airflow/auth/managers/fab/views/roles_list.py", "airflow/www/fab_security/manager.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"]
AIP-56 - FAB AM - Role views
Move role related views to FAB Auth manager: - List roles - Edit role - Create role - View role
https://github.com/apache/airflow/issues/32203
https://github.com/apache/airflow/pull/33043
90fb482cdc6a6730a53a82ace49d42feb57466e5
5707103f447be818ad4ba0c34874b822ffeefc09
"2023-06-27T18:16:54Z"
python
"2023-08-10T14:15:11Z"
closed
apache/airflow
https://github.com/apache/airflow
32,202
["airflow/auth/managers/fab/views/user.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/auth/managers/fab/views/user_edit.py", "airflow/auth/managers/fab/views/user_stats.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"]
AIP-56 - FAB AM - User views
Move user related views to FAB Auth manager: - List users - Edit user - Create user - View user
https://github.com/apache/airflow/issues/32202
https://github.com/apache/airflow/pull/33055
2d7460450dda5cc2f20d1e8cd9ead9e4d1946909
66254e42962f63d6bba3370fea40e082233e153d
"2023-06-27T18:16:48Z"
python
"2023-08-07T17:40:40Z"
closed
apache/airflow
https://github.com/apache/airflow
32,201
["airflow/auth/managers/fab/views/user.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/auth/managers/fab/views/user_edit.py", "airflow/auth/managers/fab/views/user_stats.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"]
AIP-56 - FAB AM - User's statistics view
Move user's statistics view to FAB Auth manager
https://github.com/apache/airflow/issues/32201
https://github.com/apache/airflow/pull/33055
2d7460450dda5cc2f20d1e8cd9ead9e4d1946909
66254e42962f63d6bba3370fea40e082233e153d
"2023-06-27T18:16:43Z"
python
"2023-08-07T17:40:40Z"
closed
apache/airflow
https://github.com/apache/airflow
32,199
["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"]
AIP-56 - FAB AM - Permissions view
Move permissions view to FAB Auth manager
https://github.com/apache/airflow/issues/32199
https://github.com/apache/airflow/pull/33277
5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f
39aee60b33a56eee706af084ed1c600b12a0dd57
"2023-06-27T18:16:38Z"
python
"2023-08-11T15:11:55Z"
closed
apache/airflow
https://github.com/apache/airflow
32,198
["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"]
AIP-56 - FAB AM - Actions view
Move actions view to FAB Auth manager
https://github.com/apache/airflow/issues/32198
https://github.com/apache/airflow/pull/33277
5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f
39aee60b33a56eee706af084ed1c600b12a0dd57
"2023-06-27T18:16:33Z"
python
"2023-08-11T15:11:55Z"
closed
apache/airflow
https://github.com/apache/airflow
32,197
["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"]
AIP-56 - FAB AM - Resources view
Move resources view to FAB Auth manager
https://github.com/apache/airflow/issues/32197
https://github.com/apache/airflow/pull/33277
5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f
39aee60b33a56eee706af084ed1c600b12a0dd57
"2023-06-27T18:16:27Z"
python
"2023-08-11T15:11:55Z"
closed
apache/airflow
https://github.com/apache/airflow
32,196
["airflow/auth/managers/base_auth_manager.py", "airflow/auth/managers/fab/fab_auth_manager.py", "airflow/auth/managers/fab/views/__init__.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/www/extensions/init_appbuilder.py", "airflow/www/fab_security/views.py", "airflow/www/security.py", "airflow/www/templates/appbuilder/navbar_right.html"]
AIP-56 - FAB AM - Move profile view into auth manager
The profile view (`/users/userinfo/`) needs to be moved to FAB auth manager. The profile URL needs to be returned as part of `get_url_account()` as specified in the AIP
https://github.com/apache/airflow/issues/32196
https://github.com/apache/airflow/pull/32756
f17bc0f4bf15504833f2c8fd72d947c2ddfa55ed
f2e93310c43b7e9df1cbe33350b91a8a84e938a2
"2023-06-27T18:16:22Z"
python
"2023-07-26T14:20:29Z"
closed
apache/airflow
https://github.com/apache/airflow
32,193
["airflow/auth/managers/base_auth_manager.py", "airflow/auth/managers/fab/fab_auth_manager.py", "airflow/www/auth.py", "airflow/www/extensions/init_appbuilder.py", "airflow/www/extensions/init_security.py", "airflow/www/templates/appbuilder/navbar_right.html", "tests/auth/managers/fab/test_fab_auth_manager.py", "tests/www/views/test_session.py"]
AIP-56 - FAB AM - Logout
Move the logout logic to the auth manager
https://github.com/apache/airflow/issues/32193
https://github.com/apache/airflow/pull/32819
86193f560815507b9abf1008c19b133d95c4da9f
2b0d88e450f11af8e447864ca258142a6756126d
"2023-06-27T18:16:04Z"
python
"2023-07-31T19:20:38Z"
closed
apache/airflow
https://github.com/apache/airflow
32,164
["airflow/config_templates/config.yml", "airflow/metrics/otel_logger.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"]
Metrics - Encrypted OTel Endpoint?
### Apache Airflow version 2.6.2 ### What happened I left this as a TODO and will get to it eventually, but if someone wants to look into it before I get time, this may be an easy one: We are creating an OTLPMetricExporter endpoint with `http` [here](https://github.com/apache/airflow/blob/main/airflow/metrics/otel_logger.py#L400) and should look into whether we can make this work with `https`. ### Definition of Done: Either implement HTTPS or replace the TODO with a reference citing why we cannot. I am submitting this as an Issue since I will be a little distracted for the next bit and figured someone may be able to have a look in the meantime. Please do not assign it to me; I'll get to it when I can if nobody else does. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32164
https://github.com/apache/airflow/pull/32524
978adb309aee755df02aadab72fdafb61bec5c80
531eb41bff032e10ffd1f8941113e2a872ef78fd
"2023-06-26T22:43:58Z"
python
"2023-07-21T10:07:21Z"
closed
apache/airflow
https://github.com/apache/airflow
32,153
["airflow/www/static/js/api/useExtraLinks.ts", "airflow/www/static/js/dag/details/taskInstance/ExtraLinks.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx"]
Support extra link per mapped task in grid view
### Description Currently, extra links are disabled in the mapped tasks summary, but if we select a mapped task with a map_index the extra link still remains disabled. Since we support passing map_index to get the relevant extra link, it would be helpful to have the appropriate link displayed. ### Use case/motivation This will be useful for scenarios where there are a high number of mapped tasks that are linked to an external URL or resource. ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32153
https://github.com/apache/airflow/pull/32154
5c0fca6440fae3ece915b365e1f06eb30db22d81
98c47f48e1b292d535d39cc3349660aa736d76cd
"2023-06-26T19:15:37Z"
python
"2023-06-28T15:22:09Z"
closed
apache/airflow
https://github.com/apache/airflow
32,137
["airflow/providers/amazon/aws/operators/glue.py", "tests/providers/amazon/aws/operators/test_glue.py"]
AWS - GlueJobOperator - stop job on task stop
### Description if you stop a running GlueJobOperator task ( mark fail ) the operator do not call aws glue to stop the runnning task so the airflow task stop but the aws glue job continue to run ### Use case/motivation Like the KPO would be nice to stop the aws glue job in case the operator is stop currently the logs of operator are ```log [2023-06-26, 10:07:32 UTC] {glue.py:149} INFO - Initializing AWS Glue Job: XXXXXXXX. Wait for completion: True [2023-06-26, 10:07:32 UTC] {glue.py:290} INFO - Checking if job already exists: XXXXXXXXXXX [2023-06-26, 10:07:32 UTC] {base.py:73} INFO - Using connection ID 'XXXXXXXX' for task execution. [2023-06-26, 10:07:32 UTC] {credentials.py:1051} INFO - Found credentials from IAM Role: XXXXXXXXXXXX [2023-06-26, 10:07:33 UTC] {glue.py:169} INFO - You can monitor this Glue Job run at: https://console.aws.amazon.com/gluestudio/home?region=eu-west-1#/job/test_ip/run/XXXXXXXXXXXXXXX [2023-06-26, 10:07:33 UTC] {glue.py:265} INFO - Polling for AWS Glue Job test_ip current run state with status RUNNING [2023-06-26, 10:07:39 UTC] {glue.py:265} INFO - Polling for AWS Glue Job test_ip current run state with status RUNNING [2023-06-26, 10:07:45 UTC] {glue.py:265} INFO - Polling for AWS Glue Job test_ip current run state with status RUNNING [2023-06-26, 10:07:51 UTC] {glue.py:265} INFO - Polling for AWS Glue Job test_ip current run state with status RUNNING [2023-06-26, 10:07:57 UTC] {glue.py:265} INFO - Polling for AWS Glue Job test_ip current run state with status RUNNING [2023-06-26, 10:08:02 UTC] {local_task_job.py:276} WARNING - State of this instance has been externally set to failed. Terminating instance. [2023-06-26, 10:08:02 UTC] {process_utils.py:129} INFO - Sending Signals.SIGTERM to group 13175. PIDs of all processes in the group: [13175] [2023-06-26, 10:08:02 UTC] {process_utils.py:84} INFO - Sending the signal Signals.SIGTERM to group 13175 [2023-06-26, 10:08:02 UTC] {taskinstance.py:1488} ERROR - Received SIGTERM. Terminating subprocesses. [2023-06-26, 10:08:02 UTC] {process_utils.py:79} INFO - Process psutil.Process(pid=13175, status='terminated', exitcode=0, started='10:07:31') (13175) terminated with exit code 0 ``` ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32137
https://github.com/apache/airflow/pull/32155
98c47f48e1b292d535d39cc3349660aa736d76cd
1d60332cf82325b80e76a6771bca192c1477d594
"2023-06-26T10:13:47Z"
python
"2023-06-28T17:01:32Z"
closed
apache/airflow
https://github.com/apache/airflow
32,121
["airflow/cli/commands/triggerer_command.py", "airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/jobs/triggerer_job_runner.py", "airflow/models/trigger.py", "tests/models/test_trigger.py"]
Multiple Triggerer processes keep reassigning triggers to each other when job_heartbeat_sec is higher than 30 seconds.
### Apache Airflow version main (development) ### What happened Airflow has `job_heartbeat_sec` (default 5) that was updated to 50 seconds in our environment. This caused 2 instances of triggerer process running for HA to keep updating triggerer_id since below query takes current time minus 30 seconds to query against the latest_heartbeat. This causes alive_triggerer_ids to return empty list since job_heartbeat_sec is more than 50 seconds and the current trigger running this query assigns the triggers to itself. This keeps happening moving triggers from one instance to another. https://github.com/apache/airflow/blob/62a534dbc7fa8ddb4c249ade85c558b64d1630dd/airflow/models/trigger.py#L216-L223 ### What you think should happen instead The heartbeat calculation should have a value based on job_heartbeat_sec rather than 30 seconds hardcoded so that queries to check triggerer processes alive are adjusted as per user settings. ### How to reproduce 1. Change job_heartbeat_sec to 50 in airflow.cfg 2. Run 2 instances of triggerer. 3. Make an operator that uses `FileTrigger` with the file being absent or any other long running task. 4. Check triggerer_id for the trigger which keeps changing. ### Operating System Ubuntu ### Versions of Apache Airflow Providers _No response_ ### Deployment Virtualenv installation ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32121
https://github.com/apache/airflow/pull/32123
4501f8b352aee9c2cd29126a64cab62fa19fc49d
d117728cd6f337266bebcf4916325d5de815fe03
"2023-06-25T08:19:56Z"
python
"2023-06-30T20:19:06Z"
closed
apache/airflow
https://github.com/apache/airflow
32,111
["airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"]
KubernetesPodOperator job intermittently fails - unable to retrieve json from xcom sidecar container due to network connectivity issues
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened We have seen that KubernetesPodOperator sometimes fails to retrieve json from xcom sidecar container due to network connectivity issues or in some cases retrieves incomplete json which cannot be parsed. The KubernetesPodOperator task then fails with these error stack traces e.g. `File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 398, in execute result = self.extract_xcom(pod=self.pod) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 372, in extract_xcom result = self.pod_manager.extract_xcom(pod) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 369, in extract_xcom _preload_content=False, File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/stream/stream.py", line 35, in _websocket_request return api_method(*args, **kwargs) File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 994, in connect_get_namespaced_pod_exec return self.connect_get_namespaced_pod_exec_with_http_info(name, namespace, **kwargs) # noqa: E501 File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 1115, in connect_get_namespaced_pod_exec_with_http_info collection_formats=collection_formats) File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 353, in call_api _preload_content, _request_timeout, _host) File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 184, in __call_api _request_timeout=_request_timeout) File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/stream/ws_client.py", line 518, in websocket_call raise ApiException(status=0, reason=str(e)) kubernetes.client.exceptions.ApiException: (0) Reason: Connection to remote host was lost.` OR ` File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 398, in execute result = self.extract_xcom(pod=self.pod) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 374, in extract_xcom return json.loads(result) File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads return _default_decoder.decode(s) File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 4076 (char 4075) ` We are using airflow 2.6.1 and apache-airflow-providers-cncf-kubernetes==4.0.2 ### What you think should happen instead KubernetesPodOperator should not fail with these intermittent network connectivity issues when pulling json from xcom sidecar container. 
It should have retries and verify whether it was able to retrieve valid json before it kills the xcom side car container, extract_xcom function in PodManager should * Read and try to parse return json when its read from /airflow/xcom/return.json - to catch errors if say due to network connectivity issues we don not read incomplete json (truncated json) * Add retries when we read the json - hopefully it will also catch against other network errors to (with kubernetes stream trying to talk to airflow container to get return json) * Only if the return json can be read and parsed (if its valid) now the code goes ahead and kills the sidecar container. ### How to reproduce This occurs intermittently so is hard to reproduce. Happens when the kubernetes cluster is under load. In 7 days we see this happen once or twice. ### Operating System Debian GNU/Linux 11 (bullseye) ### Versions of Apache Airflow Providers airflow 2.6.1 and apache-airflow-providers-cncf-kubernetes==4.0.2 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else This occurs intermittently so is hard to reproduce. Happens when the kubernetes cluster is under load. In 7 days we see this happen once or twice. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32111
https://github.com/apache/airflow/pull/32113
d117728cd6f337266bebcf4916325d5de815fe03
df4c8837d022e66921bc0cf33f3249b235de6fdd
"2023-06-23T22:03:19Z"
python
"2023-07-01T06:43:48Z"
closed
apache/airflow
https://github.com/apache/airflow
32,107
["airflow/providers/google/cloud/hooks/dataflow.py"]
Improved error logging for failed Dataflow jobs
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened When running Dataflow job in Cloud Composer composer-1.20.12-airflow-1.10.15 Airflow 1.10.15, Dataflow job fails throwing a generic error "Exception: DataFlow failed with return code 1", and the reason for the failure is not evident clearly from logs. This issue is in Airflow 1: https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/contrib/hooks/gcp_dataflow_hook.py#L172https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74b This issue still exists in Airflow 2. Airflow 2: https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/hooks/dataflow.py#L1019 Can the error logging be improved to show exact reason and a few lines displayed of the standard error from Dataflow command run, so that it gives context? This will help Dataflow users to identify the root cause of the issue directly from the logs, and avoid additional research and troubleshooting by going through the log details via Cloud Logging. I am happy to contribute and raise PR to help out implementation for the bug fix. I am looking for inputs as to how to integrate with existing code bases. Thanks for your help in advance! Srabasti Banerjee ### What you think should happen instead [2023-06-15 14:04:37,071] {taskinstance.py:1152} ERROR - DataFlow failed with return code 1 Traceback (most recent call last): File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_task result = task_copy.execute(context=context) File "/usr/local/lib/airflow/airflow/operators/python_operator.py", line 113, in execute return_value = self.execute_callable() File "/usr/local/lib/airflow/airflow/operators/python_operator.py", line 118, in execute_callable return self.python_callable(*self.op_args, **self.op_kwargs) File "/home/airflow/gcs/dags/X.zip/X.py", line Y, in task DataFlowPythonOperator( File "/usr/local/lib/airflow/airflow/contrib/operators/dataflow_operator.py", line 379, in execute hook.start_python_dataflow( File "/usr/local/lib/airflow/airflow/contrib/hooks/gcp_dataflow_hook.py", line 245, in start_python_dataflow self._start_dataflow(variables, name, ["python"] + py_options + [dataflow], File "/usr/local/lib/airflow/airflow/contrib/hooks/gcp_api_base_hook.py", line 363, in wrapper return func(self, *args, **kwargs) File "/usr/local/lib/airflow/airflow/contrib/hooks/gcp_dataflow_hook.py", line 204, in _start_dataflow job_id = _Dataflow(cmd).wait_for_done() File "/usr/local/lib/airflow/airflow/contrib/hooks/gcp_dataflow_hook.py", line 178, in wait_for_done raise Exception("DataFlow failed with return code {}".format( Exception: DataFlow failed with return code 1 ### How to reproduce Any Failed Dataflow job that involves deleting a file when it is in process of being ingested via Dataflow job task run via Cloud Composer. Please let me know for any details needed. ### Operating System Cloud Composer ### Versions of Apache Airflow Providers _No response_ ### Deployment Google Cloud Composer ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32107
https://github.com/apache/airflow/pull/32847
2950fd768541fc902d8f7218e4243e8d83414c51
b4102ce0b55e76baadf3efdec0df54762001f38c
"2023-06-23T20:08:20Z"
python
"2023-08-14T10:52:03Z"
closed
apache/airflow
https://github.com/apache/airflow
32,106
["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"]
GCSToBigQueryOperator and BigQueryToGCSOperator do not respect their project_id arguments
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened We experienced this issue Airflow 2.6.1, but the problem exists in the Google provider rather than core Airflow, and were introduced with [these changes](https://github.com/apache/airflow/pull/30053/files). We are using version 10.0.0 of the provider. The [issue](https://github.com/apache/airflow/issues/29958) that resulted in these changes seems to be based on an incorrect understanding of how projects interact in BigQuery -- namely that the project used for storage and the project used for compute can be separate. The user reporting the issue appears to mistake an error about compute (`User does not have bigquery.jobs.create permission in project {project-A}.` for an error about storage, and this incorrect diagnosis resulted in a fix that inappropriately defaults the compute project to the project named in destination/source (depending on the operator) table. The change attempts to allow users to override this (imo incorrect) default, but unfortunately this does not currently work because `self.project_id` gets overridden with the named table's project [here](https://github.com/apache/airflow/pull/30053/files#diff-875bf3d1bfbba7067dc754732c0e416b8ebe7a5b722bc9ac428b98934f04a16fR512) and [here](https://github.com/apache/airflow/pull/30053/files#diff-875bf3d1bfbba7067dc754732c0e416b8ebe7a5b722bc9ac428b98934f04a16fR587). ### What you think should happen instead I think that the easiest fix would be to revert the change, and return to defaulting the compute project to the one specified in the default google cloud connection. However, since I can understand the desire to override the `project_id`, I think handling it correctly, and clearly distinguishing between the concepts of storage and compute w/r/t projects would also work. ### How to reproduce Attempt to use any other project for running the job, besides the one named in the source/destination table ### Operating System debian ### Versions of Apache Airflow Providers apache-airflow-providers-google==10.0.0 ### Deployment Other 3rd-party Helm chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32106
https://github.com/apache/airflow/pull/32232
b3db4de4985eccb859a30a07a2350499370c6a9a
2d690de110825ba09b9445967b47c44edd8f151c
"2023-06-23T19:08:10Z"
python
"2023-07-06T23:12:37Z"
closed
apache/airflow
https://github.com/apache/airflow
32,091
["airflow/jobs/triggerer_job_runner.py", "tests/jobs/test_triggerer_job.py"]
Triggerer intermittent failure when running many triggerers
### Apache Airflow version 2.6.2 ### What happened We are running a dag with many deferrable tasks using a custom trigger that waits for an Azure Batch task to complete. When many tasks have been deferred, we can an intermittent error in the Triggerer. The logged error message is the following: ``` Exception in thread Thread-2: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 980, in _bootstrap_inner self.run() File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/triggerer_job_runner.py", line 457, in run asyncio.run(self.arun()) File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/usr/local/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete return future.result() File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/triggerer_job_runner.py", line 470, in arun await self.create_triggers() File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/triggerer_job_runner.py", line 492, in create_triggers dag_id = task_instance.dag_id AttributeError: 'NoneType' object has no attribute 'dag_id' ``` After this error occurs, the trigger still reports as healthy, but no events are triggered. Restarting the triggerer fixes the problem. ### What you think should happen instead The specific error in the trigger should be addressed to prevent the triggerer async thread from crashing. The triggerer should not perform heartbeat updates when the async triggerer thread has crashed. ### How to reproduce This occurs intermittently, and seems to be the results of running more than one triggerer. Running many deferred tasks eventually ends up with this error occurring. ### Operating System linux (standard airflow slim images extended with custom code running on kubernetes) ### Versions of Apache Airflow Providers postgres,celery,redis,ssh,statsd,papermill,pandas,github_enterprise ### Deployment Official Apache Airflow Helm Chart ### Deployment details Azure Kubernetes and helm chart 1.9.0. 2 replicas of both triggerer and scheduler. ### Anything else It seems that as triggers fire, the link between the trigger row and the associated task_instance for the trigger is removed before the trigger row is removed. This leaves a small amount of time where the trigger exists without an associated task_instance. The database updates are performed in a synchronous loop inside the triggerer, so with one triggerer, this is not a problem. However, it can be a problem with more than one triggerer. Also, once the triggerer async loop (that handles the trigger code) fails, the triggers no longer fire. However, the heartbeat is handled by the synchronous loop so the job still reports as healthy. I have included an associated PR to resolve these issues. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32091
https://github.com/apache/airflow/pull/32092
14785bc84c984b8747fa062b84e800d22ddc0477
e585b588fc49b1b1c73a8952e9b257d7a9e13314
"2023-06-23T11:08:50Z"
python
"2023-06-27T21:46:16Z"
closed
apache/airflow
https://github.com/apache/airflow
32,069
["airflow/providers/google/cloud/hooks/dataproc.py", "tests/providers/google/cloud/hooks/test_dataproc.py"]
AioRpcError in DataprocCreateBatchOperator
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Airflow version: 2.3.4 (Composer 2.1.12) I've been using the DataprocCreateBatchOperator with the deferrable=True option. It worked well for the past few months, but an error started appearing on June 21, 2023, at 16:51 UTC. The error message is as follows: ``` grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = "Request contains an invalid argument." debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.69.95:443 {grpc_message:"Request contains an invalid argument.", grpc_status:3, created_time:"2023-06-21T16:51:22.992951359+00:00"}" ``` ### What you think should happen instead The name argument in the [hook code](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/hooks/dataproc.py#L1746) follows the format "projects/PROJECT_ID/regions/DATAPROC_REGION/batches/BATCH_ID". However, according to the [Google Cloud DataProc API Reference](https://cloud.google.com/dataproc-serverless/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.GetBatchRequest), it should be in the format "projects/PROJECT_ID/locations/DATAPROC_REGION/batches/BATCH_ID . ### How to reproduce just run a dataproc operator like this ``` create_batch = DataprocCreateBatchOperator( task_id="create_batch", batch_id="batch_test", deferrable=True, ) ``` ### Operating System Ubuntu 20.04 ### Versions of Apache Airflow Providers _No response_ ### Deployment Google Cloud Composer ### Deployment details Composer 2.1.12 ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32069
https://github.com/apache/airflow/pull/32070
479719297ff4efa8373dc7b6909bfc59a5444c3a
59d64d8f2ed3c0e7b93d3c07041d47883cabb908
"2023-06-22T06:42:33Z"
python
"2023-06-22T21:20:44Z"
closed
apache/airflow
https://github.com/apache/airflow
32,045
["airflow/executors/celery_executor_utils.py", "tests/integration/executors/test_celery_executor.py"]
Celery Executor cannot connect to the database to get information, resulting in the scheduler exiting abnormally
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened We use Celery Executor where using RabbitMQ as a broker and postgresql as a result backend Airflow Version: 2.2.3 Celery Version: 5.2.3 apache-airflow-providers-celery==2.1.0 Below is the error message: ``` _The above exception was the direct cause of the following exception: Traceback (most recent call last): File"/app/airflow2.2.3/airflow/airflow/jobs/schedulerjob.py”, line 672, in _execute self._run_scheduler_loop() File"/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 754, in _run_scheduler_loop self.executor.heartbeat() File"/app/airflow2.2.3/airflow/airflow/executors/base_executor.py”, line 168, in heartbeat self.sync() File"/app/airflow2.2.3/airflow/airflow/executors/celery_executorpy”, line 330, in sync self.update_all_task_states() File"/app/airflow223/airflow/airflow/executors/celery_executor.py”,line 442,in update_all_task_states state_and_info_by_celery_task_id=self.bulk_state_fetcher.get_many(self.tasks. values()) File"/app/airflow2.2.3/airflow/airflow/executors/celery_executorpy”,line 598, in get_many result = self._get many_from db backend(async_results) File"/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py”,line 618, in get_many_from_db_backend tasks-session.query(task_cls).filter(task_cls.task_id.in(task_ids)).all() File“/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/orm/query.py”, line 3373, in all return list(self) File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/orm/query.py”, line 3535, in iter return self._execute_and_instances(context) File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/orm/query.py”,line 3556, in _execute_and_instances conn =self._get bind args( File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/salalchemy/orm/query.py”, line 3571, in _get_bind_args return fn( File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/orm/query.py”,line 3550, in _connection_from_session conn=self.session.connection(**kw) File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/orm/session.py”, line 1142, in connection return self._connection_for_bind( File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/orm/session.py”,line 1150, in _connection_for_bind return self.transaction.connection_for bind( File“/app/airflow2.2.3/airflow2_env/Iib/python3.8/site-packages/sqlalchemy/orm/session.py”, line 433, in _connection_for_bind conn=bind._contextual_connect() File“/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/salalchemy/engine/base.py”,line 2302, in _contextual_connect self._wrap_pool_connect(self.pool.connect,None), File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/engine/base.py”, line 2339, in _wrap_pool_connect Tracking Connection.handle dbapi_exception_noconnection( File "/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/engine/base.py”, line 1583,in handle_dbapi_exception_noconnection util.raise ( File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/util/compat.py”, line 182, in raise ents raise exception File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/salalchemy/engine/base.py”, line 2336, in _wrap_pool_connect return fn() 2023-06-05 16:39:05.069 ERROR -Exception when executing SchedulerJob. 
run scheduler loop Traceback (most recent call last): File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/engine/base.py”, line 2336,in _wrap_pool_connect return fno File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/pool/base.py”, line 364, in connect returnConnectionFairy.checkout(self) File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/pool/base.py”, line 778, in _checkout fairy=ConnectionRecordcheckout(pool) File"/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py”, line 495, in checkout rec=pool. do_get() File“/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/pool/impl.py”, line 241, in _do_get return self._createconnection() File"/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/salalchemy/pool/base.py”, line 309, in _create_connection return _ConnectionRecord(self) File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/sitepackages/sqlalchemy/pool/base.py”, line 440, in init self. connect(firstconnectcheck=True) File"/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py”, line 661, in connect pool.logger.debug"Error onconnect(:%s",e) File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/util/langhelpers.py”, line 68, in exit compat.raise( File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/salalchemy/util/compat.py", line 182, in raise raise exception File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/pool/base.py”, line 656, in _connect connection =pool.invoke_creator(sel f) File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/engine/strategies.py”, line 114, in connect return dialect.connect(*cargs, **cparans) File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/engine/default.py”,line 508, in connect return self.dbapi.connect(*cargs, **cparams) File"/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/psycopg2/init.py”, line 126, in connect conn=connect(dsn,connection_factory=connection_factory, **kwasync) psycopg2.0perationalError: could not connect to server: Connection timed out Is the server running on host"xxxxxxxxxx”and accepting TCP/IP connections on port 5432? ``` ### What you think should happen instead I think it may be caused by network jitter issues, add retries to solve it ### How to reproduce celeryExecutor fails to create a PG connection while retrieving metadata information, and it can be reproduced ### Operating System NAME="RedFlag Asianux" VERSION="7 (Lotus)" ### Versions of Apache Airflow Providers _No response_ ### Deployment Virtualenv installation ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32045
https://github.com/apache/airflow/pull/31998
de585f521b5898ba7687072a7717fd3b67fa8c5c
c3df47efc2911706897bf577af8a475178de4b1b
"2023-06-21T08:09:17Z"
python
"2023-06-26T17:01:26Z"
closed
apache/airflow
https://github.com/apache/airflow
32,020
["airflow/cli/cli_config.py", "airflow/cli/commands/task_command.py", "airflow/utils/cli.py", "tests/cli/commands/test_task_command.py"]
Airflow tasks run -m cli command giving 504 response
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Hello, we are facing a "504 Gateway Time-out" error when running the "tasks run -m" CLI command. We are trying to create a complex DAG and run tasks from the CLI. When we tried to run "tasks run -m", we received a gateway timeout error. We also observed a high resource spike on the web server when we executed this CLI command. After looking into it further: when we run the "tasks run -m" Airflow CLI command, it parses the full list of DAGs and then parses through the task list. Because of this, we observed high webserver resource usage and received a gateway timeout error. ### What you think should happen instead We expect that when executing the "tasks run" CLI command, it should only parse the DAG name and task name provided in the command, and not parse the DAG list followed by the task list. ### How to reproduce Please follow the steps below to reproduce this issue. 1. We have 900 DAGs in the Airflow environment. 2. We created a web login token to access the web server. 3. After that we tried to run "tasks run" using a Python script. ### Operating System Amazon Linux ### Versions of Apache Airflow Providers Airflow version 2.2.2 ### Deployment Amazon (AWS) MWAA ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/32020
https://github.com/apache/airflow/pull/32038
d49fa999a94a2269dd6661fe5eebbb4c768c7848
05a67efe32af248ca191ea59815b3b202f893f46
"2023-06-20T08:51:13Z"
python
"2023-06-23T22:31:05Z"
closed
apache/airflow
https://github.com/apache/airflow
32,007
["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"]
ZeroDivisionError while poking ExternalTaskSensor
### Apache Airflow version 2.6.2 ### What happened After upgrading to version 2.6.2, we started getting a `ZeroDivisionError` the first time some _ExternalTaskSensor_ were poked. ### What you think should happen instead Sensor should exit with return code 0, as it did when cleared after the first fail: ``` [2023-06-19, 10:03:57 UTC] {external_task.py:240} INFO - Poking for task_group 'lote2db_etl' in dag 'malha_batch' on 2023-06-16T08:30:00+00:00 ... [2023-06-19, 10:03:57 UTC] {base.py:255} INFO - Success criteria met. Exiting. [2023-06-19, 10:03:57 UTC] {taskinstance.py:1345} INFO - Marking task as SUCCESS. dag_id=rotinas_risco, task_id=ets_malha_lote2db_etl, execution_date=20230616T080000, start_date=20230619T100357, end_date=20230619T100357 [2023-06-19, 10:03:57 UTC] {local_task_job_runner.py:225} INFO - Task exited with return code 0 ``` ### How to reproduce We have a DAG, called *malha_batch*, whose `schedule` parameter equals to `"30 8 * * 1-5"`. We then have another one, called *rotinas_risco*, whose `schedule` parameter equals to `"0 8 * * 1-5"`, with four _ExternalTaskSensor_ pointing to *malha_batch*. Below are their definitions: <details><summary>Excerpt from rotinas_risco.py</summary> ``` python ets_malha_bbg_post_processing = ExternalTaskSensor( task_id="ets_malha_bbg_post_processing", external_dag_id="malha_batch", external_task_group_id="bloomberg.post_processing", allowed_states=[State.SUCCESS, State.SKIPPED], failed_states=[State.FAILED], execution_delta=timedelta(minutes=-30), poke_interval=300, mode="reschedule", ) ets_malha_bbg_refreshes = ExternalTaskSensor( task_id="ets_malha_bbg_refreshes", external_dag_id="malha_batch", external_task_group_id="bloomberg.refreshes", allowed_states=[State.SUCCESS, State.SKIPPED], failed_states=[State.FAILED], execution_delta=timedelta(minutes=-30), poke_interval=300, mode="reschedule", ) ets_malha_bbg_conversion_factor_to_base = ExternalTaskSensor( task_id="ets_malha_bbg_conversion_factor_to_base", external_dag_id="malha_batch", external_task_id="prices.conversion_factor_to_base", allowed_states=[State.SUCCESS, State.SKIPPED], failed_states=[State.FAILED], execution_delta=timedelta(minutes=-30), poke_interval=300, mode="reschedule", ) ets_malha_lote2db_etl = ExternalTaskSensor( task_id="ets_malha_lote2db_etl", external_dag_id="malha_batch", external_task_group_id="lote2db_etl", allowed_states=[State.SUCCESS, State.SKIPPED], failed_states=[State.FAILED], execution_delta=timedelta(minutes=-30), poke_interval=300, mode="reschedule", ) ``` </details> Out of those four _ExternalTaskSensor_, just one behave as expected, while the three others failed upon the first poking attempt with the following traceback: ``` [2023-06-19, 08:00:02 UTC] {external_task.py:240} INFO - Poking for task_group 'lote2db_etl' in dag 'malha_batch' on 2023-06-16T08:30:00+00:00 ... 
[2023-06-19, 08:00:02 UTC] {taskinstance.py:1824} ERROR - Task failed with exception Traceback (most recent call last): File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/sensors/base.py", line 225, in execute raise e File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/sensors/base.py", line 212, in execute poke_return = self.poke(context) File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/utils/session.py", line 76, in wrapper return func(*args, session=session, **kwargs) File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/sensors/external_task.py", line 260, in poke count_failed = self.get_count(dttm_filter, session, self.failed_states) File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/sensors/external_task.py", line 369, in get_count count = ( ZeroDivisionError: division by zero ``` The successful _ExternalTaskSensor_ logged as follows: ``` [2023-06-19, 08:00:02 UTC] {external_task.py:232} INFO - Poking for tasks ['prices.conversion_factor_to_base'] in dag malha_batch on 2023-06-16T08:30:00+00:00 ... [2023-06-19, 08:00:02 UTC] {taskinstance.py:1784} INFO - Rescheduling task, marking task as UP_FOR_RESCHEDULE [2023-06-19, 08:00:02 UTC] {local_task_job_runner.py:225} INFO - Task exited with return code 0 [2023-06-19, 08:00:02 UTC] {taskinstance.py:2653} INFO - 0 downstream tasks scheduled from follow-on schedule check ``` I was not able to reproduce the error with a smaller example, but the mere fact that, out of four similarly-defined sensors, three failed and one succeeded, to me, suggests we are facing a bug. Additionally, the problem did not arise with version 2.6.1. ### Operating System Ubuntu 20.04.6 LTS (Focal Fossa) ### Versions of Apache Airflow Providers ``` apache-airflow-providers-amazon==8.1.0 apache-airflow-providers-celery==3.2.0 apache-airflow-providers-common-sql==1.5.1 apache-airflow-providers-ftp==3.4.1 apache-airflow-providers-http==4.4.1 apache-airflow-providers-imap==3.2.1 apache-airflow-providers-postgres==5.5.0 apache-airflow-providers-redis==3.2.0 apache-airflow-providers-sqlite==3.4.1 ``` ### Deployment Virtualenv installation ### Deployment details Just a vanilla setup following https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html. ### Anything else Please let me know whether additional log files from the scheduler or executor (Celery) should be provided. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
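A minimal sketch of the kind of guard that avoids the division by zero when the computed list of execution dates is empty; it is not the actual fix from the linked PR, and the helper name is hypothetical — it only illustrates the pattern of averaging the matched-run count over the requested dates without crashing on an empty filter:

```python
from typing import Sequence


def safe_ratio(total_count: int, dttm_filter: Sequence) -> float:
    """Average the matched-run count over the requested execution dates,
    returning 0.0 instead of raising ZeroDivisionError when the filter
    resolved to no dates (e.g. the delta points outside the target DAG's
    schedule)."""
    if not dttm_filter:
        return 0.0
    return total_count / len(dttm_filter)


# Usage sketch: poke() would treat a 0.0 ratio as "not done yet"
# instead of failing on the first poke.
print(safe_ratio(0, []))          # 0.0
print(safe_ratio(3, [1, 2, 3]))   # 1.0
```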
https://github.com/apache/airflow/issues/32007
https://github.com/apache/airflow/pull/32009
c508b8e5310447b302128d8fbcc5c297a3e6e244
14eb1d3116ecef15be7be9a8f9d08757e74f981c
"2023-06-19T14:44:04Z"
python
"2023-06-21T09:55:45Z"
closed
apache/airflow
https://github.com/apache/airflow
32,005
["airflow/www/views.py"]
webserver - MAPPED tasks - rendered-templates page FAIL
### Apache Airflow version 2.6.2 ### What happened If I want see the common information of the N mapped task . I edit manually the URL cause there is no button in the current airflow console ```url http://localhost:8090/task?dag_id=kubernetes_dag&task_id=task-one&execution_date=2023-06-18T00%3A00%3A00%2B00%3A00&map_index=0 ``` by removing the `&map_index=0` if I click on `rendered-templates` it fail ```log {app.py:1744} ERROR - Exception on /rendered-templates [GET] Traceback (most recent call last): File "/home/airflow/.local/lib/python3.11/site-packages/flask/app.py", line 2529, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/airflow/.local/lib/python3.11/site-packages/flask/app.py", line 1825, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/airflow/.local/lib/python3.11/site-packages/flask/app.py", line 1823, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/airflow/.local/lib/python3.11/site-packages/flask/app.py", line 1799, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/auth.py", line 47, in decorated return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/decorators.py", line 125, in wrapper return f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/home/airflow/.local/lib/python3.11/site-packages/airflow/utils/session.py", line 76, in wrapper return func(*args, session=session, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/airflow/.local/lib/python3.11/site-packages/airflow/www/views.py", line 1354, in rendered_templates ti.refresh_from_task(raw_task) ^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'refresh_from_task' ``` ### What you think should happen instead _No response_ ### How to reproduce [Screencast from 19-06-2023 15:07:02.webm](https://github.com/apache/airflow/assets/10202690/10a3546f-f2ef-4c2d-a704-7eb43573ad83) ### Operating System ubuntu 22.04 ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
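A minimal sketch, assuming the view keeps its current shape, of the kind of guard `/rendered-templates` could apply before dereferencing the task instance when `map_index` is omitted for a mapped task. The function name and structure are illustrative, not the actual patch:

```python
from flask import abort


def rendered_templates_guard(ti, raw_task):
    # When the URL omits map_index for a mapped task, the lookup can return
    # None; fail with a readable 404 instead of an AttributeError.
    if ti is None or raw_task is None:
        abort(
            404,
            "No task instance found for the given dag_id/task_id/execution_date; "
            "for mapped tasks, pass map_index.",
        )
    ti.refresh_from_task(raw_task)
    return ti
```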
https://github.com/apache/airflow/issues/32005
https://github.com/apache/airflow/pull/32011
2c645d59d505a99c8e7507ef05d6f3ecf430d578
62a534dbc7fa8ddb4c249ade85c558b64d1630dd
"2023-06-19T13:08:24Z"
python
"2023-06-25T08:06:34Z"
closed
apache/airflow
https://github.com/apache/airflow
32,002
["setup.cfg", "setup.py"]
log url breaks on login redirect
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened On 2.5.3, the log url is https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00%2B00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1 This url works when I am logged in. If I am logged out, the login screen redirects me to https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00+00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1 which shows me an empty log. The redirect seems to convert the `%2B` back to a `+` in the timezone component of the execution_date, while leaving all other escaped characters untouched. ### What you think should happen instead The log url should work correctly after the login redirect. ### How to reproduce Have a log url with an execution date using a timezone with a positive UTC offset, e.g. https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00%2B00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1 ### Operating System linux ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
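A small, runnable illustration of why the redirect breaks the link: once the `%2B` is turned back into a literal `+`, standard query-string parsing reads it as a space, so the execution_date no longer matches any task instance. This only demonstrates the encoding behaviour; it is not the webserver fix.

```python
from urllib.parse import parse_qs, quote

good = "execution_date=2023-06-19T04%3A00%3A00%2B00%3A00"
bad = "execution_date=2023-06-19T04%3A00%3A00+00%3A00"

print(parse_qs(good)["execution_date"])  # ['2023-06-19T04:00:00+00:00']
print(parse_qs(bad)["execution_date"])   # ['2023-06-19T04:00:00 00:00']  <- '+' became a space

# Re-encoding the value keeps the offset intact:
print(quote("2023-06-19T04:00:00+00:00", safe=""))  # 2023-06-19T04%3A00%3A00%2B00%3A00
```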
https://github.com/apache/airflow/issues/32002
https://github.com/apache/airflow/pull/32054
e39362130b8659942672a728a233887f0b02dc8b
92497fa727a23ef65478ef56572c7d71427c4a40
"2023-06-19T11:14:59Z"
python
"2023-07-08T19:18:12Z"
closed
apache/airflow
https://github.com/apache/airflow
31,986
["airflow/providers/google/cloud/operators/vertex_ai/auto_ml.py", "tests/providers/google/cloud/operators/test_vertex_ai.py"]
Unable to find project ID causing authentication failure in AutoML task
### Apache Airflow version Other Airflow 2 version (please specify below) : 2.6.1 ### What happened I encountered an issue while running an AutoML task. The task failed with an authentication error due to the inability to find the project ID. Here are the details of the error: ``` [2023-06-17T18:42:48.916+0530] {[taskinstance.py:1308](http://taskinstance.py:1308/)} INFO - Starting attempt 1 of 1 [2023-06-17T18:42:48.931+0530] {[taskinstance.py:1327](http://taskinstance.py:1327/)} INFO - Executing <Task(CreateAutoMLTabularTrainingJobOperator): auto_ml_tabular_task> on 2023-06-17 13:12:33+00:00 [2023-06-17T18:42:48.964+0530] {[standard_task_runner.py:57](http://standard_task_runner.py:57/)} INFO - Started process 12974 to run task [2023-06-17T18:42:48.971+0530] {[standard_task_runner.py:84](http://standard_task_runner.py:84/)} INFO - Running: ['airflow', 'tasks', 'run', 'vi_create_auto_ml_tabular_training_job_dag', 'auto_ml_tabular_task', 'manual__2023-06-17T13:12:33+00:00', '--job-id', '175', '--raw', '--subdir', 'DAGS_FOLDER/vi_create_model_train.py', '--cfg-path', '/tmp/tmprijpfzql'] [2023-06-17T18:42:48.974+0530] {[standard_task_runner.py:85](http://standard_task_runner.py:85/)} INFO - Job 175: Subtask auto_ml_tabular_task [2023-06-17T18:42:49.043+0530] {[logging_mixin.py:149](http://logging_mixin.py:149/)} INFO - Changing /mnt/d/projects/airflow/logs/dag_id=vi_create_auto_ml_tabular_training_job_dag/run_id=manual__2023-06-17T13:12:33+00:00/task_id=auto_ml_tabular_task permission to 509 [2023-06-17T18:42:49.044+0530] {[task_command.py:410](http://task_command.py:410/)} INFO - Running <TaskInstance: vi_create_auto_ml_tabular_training_job_dag.auto_ml_tabular_task manual__2023-06-17T13:12:33+00:00 [running]> on host DESKTOP-EIFUHU2.localdomain [2023-06-17T18:42:49.115+0530] {[taskinstance.py:1545](http://taskinstance.py:1545/)} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='airflow' AIRFLOW_CTX_DAG_ID='vi_create_auto_ml_tabular_training_job_dag' AIRFLOW_CTX_TASK_ID='auto_ml_tabular_task' AIRFLOW_CTX_EXECUTION_DATE='2023-06-17T13:12:33+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2023-06-17T13:12:33+00:00' [2023-06-17T18:42:49.120+0530] {[base.py:73](http://base.py:73/)} INFO - Using connection ID 'gcp_conn' for task execution. [2023-06-17T18:42:52.123+0530] {[_metadata.py:141](http://_metadata.py:141/)} WARNING - Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: timed out [2023-06-17T18:42:55.125+0530] {[_metadata.py:141](http://_metadata.py:141/)} WARNING - Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: timed out [2023-06-17T18:42:58.128+0530] {[_metadata.py:141](http://_metadata.py:141/)} WARNING - Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: timed out [2023-06-17T18:42:58.129+0530] {[_default.py:340](http://_default.py:340/)} WARNING - Authentication failed using Compute Engine authentication due to unavailable metadata server. 
[2023-06-17T18:42:58.131+0530] {[taskinstance.py:1824](http://taskinstance.py:1824/)} ERROR - Task failed with exception Traceback (most recent call last): File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/initializer.py", line 244, in project self._set_project_as_env_var_or_google_auth_default() File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/initializer.py", line 81, in _set_project_as_env_var_or_google_auth_default credentials, project = google.auth.default() File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/auth/_default.py", line 692, in default raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS) google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see [https://cloud.google.com/docs/authentication/external/set-up-adc](https://cloud.google.com/docs/authentication/external/set-up-adc?authuser=0) for more information. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/vertex_ai/auto_ml.py", line 359, in execute dataset=datasets.TabularDataset(dataset_name=self.dataset_id), File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/datasets/dataset.py", line 77, in __init__ super().__init__( File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/base.py", line 925, in __init__ VertexAiResourceNoun.__init__( File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/base.py", line 507, in __init__ self.project = project or initializer.global_config.project File "/mnt/d/projects/tvenv/lib/python3.8/site-packages/google/cloud/aiplatform/initializer.py", line 247, in project raise GoogleAuthError(project_not_found_exception_str) from exc google.auth.exceptions.GoogleAuthError: Unable to find your project. Please provide a project ID by: - Passing a constructor argument - Using aiplatform.init() - Setting project using 'gcloud config set project my-project' - Setting a GCP environment variable [2023-06-17T18:42:58.139+0530] {[taskinstance.py:1345](http://taskinstance.py:1345/)} INFO - Marking task as FAILED. dag_id=vi_create_auto_ml_tabular_training_job_dag, task_id=auto_ml_tabular_task, execution_date=20230617T131233, start_date=20230617T131248, end_date=20230617T131258 [2023-06-17T18:42:58.152+0530] {[standard_task_runner.py:104](http://standard_task_runner.py:104/)} ERROR - Failed to execute job 175 for task auto_ml_tabular_task (Unable to find your project. Please provide a project ID by: - Passing a constructor argument - Using aiplatform.init() - Setting project using 'gcloud config set project my-project' - Setting a GCP environment variable; 12974) [2023-06-17T18:42:58.166+0530] {[local_task_job_runner.py:225](http://local_task_job_runner.py:225/)} INFO - Task exited with return code 1 [2023-06-17T18:42:58.183+0530] {[taskinstance.py:2651](http://taskinstance.py:2651/)} INFO - 0 downstream tasks scheduled from follow-on schedule check ``` ### What you think should happen instead Expected Behavior: The AutoML task should execute successfully, using the appropriate project ID and credentials for authentication given as per the gcs_con_id provided in the dag. Actual Behavior: The task fails with an authentication error due to the inability to find the project ID and default credentials. 
### How to reproduce To reproduce the issue and execute the CreateAutoMLTabularTrainingJobOperator task in Apache Airflow, follow these steps: Ensure that Apache Airflow is installed. If not, run the following command to install it: ``` pip install apache-airflow ``` Create an instance of the CreateAutoMLTabularTrainingJobOperator within the DAG context: ``` create_auto_ml_tabular_training_job = CreateAutoMLTabularTrainingJobOperator( gcp_conn_id='gcp_conn', task_id="auto_ml_tabular_task", display_name=TABULAR_DISPLAY_NAME, optimization_prediction_type="regression", optimization_objective="minimize-rmse", #column_transformations=COLUMN_TRANSFORMATIONS, dataset_id=tabular_dataset_id, # Get this // target_column="mean_temp", training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, model_display_name='your-model-display-name', disable_early_stopping=False, region=REGION, project_id=PROJECT_ID ) ``` Start the Apache Airflow scheduler and webserver. Open a terminal or command prompt and run the following commands: ``` # Start the scheduler airflow scheduler # Start the webserver airflow webserver ``` Access the Apache Airflow web UI by opening a web browser and navigating to http://localhost:8080. Ensure that the scheduler and webserver are running without any errors. Navigate to the DAGs page in the Airflow UI and locate the vi_create_auto_ml_tabular_training_job_dag DAG. Trigger the DAG manually, either by clicking the "Trigger DAG" button or using the Airflow CLI command. Monitor the DAG execution status and check if the auto_ml_tabular_task completes successfully or encounters any errors. ### Operating System DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal DISTRIB_DESCRIPTION="Ubuntu 20.04 LTS" ### Versions of Apache Airflow Providers $ pip freeze aiofiles==23.1.0 aiohttp==3.8.4 aiosignal==1.3.1 alembic==1.11.1 anyio==3.7.0 apache-airflow==2.6.1 apache-airflow-providers-common-sql==1.5.1 apache-airflow-providers-ftp==3.4.1 apache-airflow-providers-google==10.1.1 apache-airflow-providers-http==4.4.1 apache-airflow-providers-imap==3.2.1 apache-airflow-providers-sqlite==3.4.1 apispec==5.2.2 argcomplete==3.1.1 asgiref==3.7.2 async-timeout==4.0.2 attrs==23.1.0 Babel==2.12.1 backoff==2.2.1 blinker==1.6.2 cachelib==0.9.0 cachetools==5.3.1 cattrs==23.1.2 certifi==2023.5.7 cffi==1.15.1 chardet==5.1.0 charset-normalizer==3.1.0 click==8.1.3 clickclick==20.10.2 colorama==0.4.6 colorlog==4.8.0 ConfigUpdater==3.1.1 connexion==2.14.2 cron-descriptor==1.4.0 croniter==1.4.1 cryptography==41.0.1 db-dtypes==1.1.1 Deprecated==1.2.14 dill==0.3.6 dnspython==2.3.0 docutils==0.20.1 email-validator==1.3.1 exceptiongroup==1.1.1 Flask==2.2.5 Flask-AppBuilder==4.3.0 Flask-Babel==2.0.0 Flask-Caching==2.0.2 Flask-JWT-Extended==4.5.2 Flask-Limiter==3.3.1 Flask-Login==0.6.2 flask-session==0.5.0 Flask-SQLAlchemy==2.5.1 Flask-WTF==1.1.1 frozenlist==1.3.3 future==0.18.3 gcloud-aio-auth==4.2.1 gcloud-aio-bigquery==6.3.0 gcloud-aio-storage==8.2.0 google-ads==21.2.0 google-api-core==2.11.1 google-api-python-client==2.89.0 google-auth==2.20.0 google-auth-httplib2==0.1.0 google-auth-oauthlib==1.0.0 google-cloud-aiplatform==1.26.0 google-cloud-appengine-logging==1.3.0 google-cloud-audit-log==0.2.5 google-cloud-automl==2.11.1 google-cloud-bigquery==3.11.1 google-cloud-bigquery-datatransfer==3.11.1 google-cloud-bigquery-storage==2.20.0 google-cloud-bigtable==2.19.0 google-cloud-build==3.16.0 google-cloud-compute==1.11.0 google-cloud-container==2.24.0 google-cloud-core==2.3.2 
google-cloud-datacatalog==3.13.0 google-cloud-dataflow-client==0.8.3 google-cloud-dataform==0.5.1 google-cloud-dataplex==1.5.0 google-cloud-dataproc==5.4.1 google-cloud-dataproc-metastore==1.11.0 google-cloud-dlp==3.12.1 google-cloud-kms==2.17.0 google-cloud-language==2.10.0 google-cloud-logging==3.5.0 google-cloud-memcache==1.7.1 google-cloud-monitoring==2.15.0 google-cloud-orchestration-airflow==1.9.0 google-cloud-os-login==2.9.1 google-cloud-pubsub==2.17.1 google-cloud-redis==2.13.0 google-cloud-resource-manager==1.10.1 google-cloud-secret-manager==2.16.1 google-cloud-spanner==3.36.0 google-cloud-speech==2.20.0 google-cloud-storage==2.9.0 google-cloud-tasks==2.13.1 google-cloud-texttospeech==2.14.1 google-cloud-translate==3.11.1 google-cloud-videointelligence==2.11.2 google-cloud-vision==3.4.2 google-cloud-workflows==1.10.1 google-crc32c==1.5.0 google-resumable-media==2.5.0 googleapis-common-protos==1.59.1 graphviz==0.20.1 greenlet==2.0.2 grpc-google-iam-v1==0.12.6 grpcio==1.54.2 grpcio-gcp==0.2.2 grpcio-status==1.54.2 gunicorn==20.1.0 h11==0.14.0 httpcore==0.17.2 httplib2==0.22.0 httpx==0.24.1 idna==3.4 importlib-metadata==4.13.0 importlib-resources==5.12.0 inflection==0.5.1 itsdangerous==2.1.2 Jinja2==3.1.2 json-merge-patch==0.2 jsonschema==4.17.3 lazy-object-proxy==1.9.0 limits==3.5.0 linkify-it-py==2.0.2 lockfile==0.12.2 looker-sdk==23.10.0 Mako==1.2.4 Markdown==3.4.3 markdown-it-py==3.0.0 MarkupSafe==2.1.3 marshmallow==3.19.0 marshmallow-enum==1.5.1 marshmallow-oneofschema==3.0.1 marshmallow-sqlalchemy==0.26.1 mdit-py-plugins==0.4.0 mdurl==0.1.2 multidict==6.0.4 numpy==1.24.3 oauthlib==3.2.2 ordered-set==4.1.0 packaging==23.1 pandas==2.0.2 pandas-gbq==0.19.2 pathspec==0.9.0 pendulum==2.1.2 pkgutil-resolve-name==1.3.10 pluggy==1.0.0 prison==0.2.1 proto-plus==1.22.2 protobuf==4.23.3 psutil==5.9.5 pyarrow==12.0.1 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.21 pydantic==1.10.9 pydata-google-auth==1.8.0 Pygments==2.15.1 PyJWT==2.7.0 pyOpenSSL==23.2.0 pyparsing==3.0.9 pyrsistent==0.19.3 python-daemon==3.0.1 python-dateutil==2.8.2 python-nvd3==0.15.0 python-slugify==8.0.1 pytz==2023.3 pytzdata==2020.1 PyYAML==6.0 requests==2.31.0 requests-oauthlib==1.3.1 requests-toolbelt==1.0.0 rfc3339-validator==0.1.4 rich==13.4.2 rich-argparse==1.1.1 rsa==4.9 setproctitle==1.3.2 Shapely==1.8.5.post1 six==1.16.0 sniffio==1.3.0 SQLAlchemy==1.4.48 sqlalchemy-bigquery==1.6.1 SQLAlchemy-JSONField==1.0.1.post0 SQLAlchemy-Utils==0.41.1 sqlparse==0.4.4 tabulate==0.9.0 tenacity==8.2.2 termcolor==2.3.0 text-unidecode==1.3 typing-extensions==4.6.3 tzdata==2023.3 uc-micro-py==1.0.2 unicodecsv==0.14.1 uritemplate==4.1.1 urllib3==2.0.3 Werkzeug==2.3.6 wrapt==1.15.0 WTForms==3.0.1 yarl==1.9.2 zipp==3.15.0 ### Deployment Virtualenv installation ### Deployment details $ airflow info Apache Airflow version | 2.6.1 executor | SequentialExecutor task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler sql_alchemy_conn | sqlite:////home/test1/airflow/airflow.db dags_folder | /mnt/d/projects/airflow/dags plugins_folder | /home/test1/airflow/plugins base_log_folder | /mnt/d/projects/airflow/logs remote_base_log_folder | System info OS | Linux architecture | x86_64 uname | uname_result(system='Linux', node='DESKTOP-EIFUHU2', release='4.4.0-19041-Microsoft', version='#1237-Microsoft Sat Sep 11 14:32:00 PST | 2021', machine='x86_64', processor='x86_64') locale | ('en_US', 'UTF-8') python_version | 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] python_location | /mnt/d/projects/tvenv/bin/python3 Tools 
info git | git version 2.25.1 ssh | OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020 kubectl | NOT AVAILABLE gcloud | NOT AVAILABLE cloud_sql_proxy | NOT AVAILABLE mysql | NOT AVAILABLE sqlite3 | NOT AVAILABLE psql | NOT AVAILABLE Paths info airflow_home | /home/test1/airflow system_path | /mnt/d/projects/tvenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Program Files | (x86)/Microsoft SDKs/Azure/CLI2/wbin:/mnt/c/Program Files/Python39/Scripts/:/mnt/c/Program | Files/Python39/:/mnt/c/Windows/system32:/mnt/c/Windows:/mnt/c/Windows/System32/Wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0/:/mnt/c/W | indows/System32/OpenSSH/:/mnt/c/Users/ibrez/AppData/Roaming/nvm:/mnt/c/Program Files/nodejs:/mnt/c/Program | Files/dotnet/:/mnt/c/Windows/system32/config/systemprofile/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/test1/AppData/Local/Microsoft/Wi | ndowsApps:/snap/bin python_path | /mnt/d/projects/tvenv/bin:/usr/lib/python38.zip:/usr/lib/python3.8:/usr/lib/python3.8/lib-dynload:/mnt/d/projects/tvenv/lib/python3.8/site-p | ackages:/mnt/d/projects/airflow/dags:/home/test1/airflow/config:/home/test1/airflow/plugins airflow_on_path | True Providers info apache-airflow-providers-common-sql | 1.5.1 apache-airflow-providers-ftp | 3.4.1 apache-airflow-providers-google | 10.1.1 apache-airflow-providers-http | 4.4.1 apache-airflow-providers-imap | 3.2.1 apache-airflow-providers-sqlite | 3.4.1 ### Anything else To me it seems like the issue is at the line ``` dataset=datasets.TabularDataset(dataset_name=self.dataset_id), ``` [here](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/vertex_ai/auto_ml.py) The details like project id and credentials are not being passed to the TabularDataset class which causes issues down the like for ``` GoogleAuthError(project_not_found_exception_str) ``` ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
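A hedged sketch of the kind of change the reporter is pointing at: constructing the dataset with the operator's project and credentials explicitly instead of relying on Application Default Credentials. Whether the operator should build `TabularDataset` exactly like this is an assumption; the kwargs shown (`project`, `location`, `credentials`) do exist on the Vertex AI SDK dataset classes.

```python
from google.cloud.aiplatform import datasets


def build_tabular_dataset(dataset_id: str, project_id: str, region: str, credentials):
    # Pass project/credentials through so the SDK does not fall back to
    # google.auth.default(), which fails outside GCE without ADC configured.
    return datasets.TabularDataset(
        dataset_name=dataset_id,
        project=project_id,
        location=region,
        credentials=credentials,  # e.g. the credentials resolved from the gcp_conn_id hook
    )
```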
https://github.com/apache/airflow/issues/31986
https://github.com/apache/airflow/pull/31991
10aa704e3d87ce951cb79f28492eed916bc18fe3
f2ebc292fe63d2ddd0686d90c3acc0630f017a07
"2023-06-17T22:25:21Z"
python
"2023-06-19T03:53:05Z"
closed
apache/airflow
https://github.com/apache/airflow
31,969
["airflow/cli/commands/dag_command.py", "airflow/cli/commands/task_command.py", "airflow/models/dag.py", "tests/cli/commands/test_dag_command.py", "tests/cli/commands/test_task_command.py", "tests/dags/test_workday_timetable.py", "tests/plugins/test_plugins_manager.py", "tests/plugins/workday.py"]
Custom Timetable breaks a handful of CLI commands
### Apache Airflow version 2.6.1 ### What happened A number of Airflow CLI commands will fail with an error like the one below if a DAG uses a [custom Timetable](https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/timetable.html): ``` Traceback (most recent call last): File "/opt/conda/envs/production-airflow-2.6.1/bin/airflow", line 11, in <module> sys.exit(main()) File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/__main__.py", line 48, in main args.func(args) File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/cli/cli_config.py", line 51, in command return func(*args, **kwargs) File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/utils/cli.py", line 112, in wrapper return f(*args, **kwargs) File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py", line 139, in dag_backfill _run_dag_backfill(dags, args) File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py", line 79, in _run_dag_backfill ti.dry_run() File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1730, in dry_run self.render_templates() File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2172, in render_templates context = self.get_template_context() File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1931, in get_template_context data_interval = dag.get_run_data_interval(dag_run) File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/models/dag.py", line 861, in get_run_data_interval return self.infer_automated_data_interval(run.execution_date) File "/opt/conda/envs/production-airflow-2.6.1/lib/python3.9/site-packages/airflow/models/dag.py", line 882, in infer_automated_data_interval raise ValueError(f"Not a valid timetable: {self.timetable!r}") ValueError: Not a valid timetable: <workday.AfterWorkdayTimetable object at 0x7f50f9cd1700> ``` Some commands I have found that demonstrate this issue (though there may be more that I haven't found): * `airflow dags backfill <dag_id> -s <date> -e <date> --dry-run` (works fine without `--dry-run`) * `airflow dags test <dag_id> [date]` * `airflow tasks render <dag_id> <task_id> <date>` (works fine if you pass a valid `run_id` instead of an `execution_date`) ### What you think should happen instead Custom Timetables should work in all contexts that built-in timetables do. ### How to reproduce Set up Airflow in a such a way that example plugins can be used (see comments below). Add this DAG script: ``` #!/usr/bin/env python from datetime import datetime from airflow.decorators import dag, task # copy airflow/example_dags/plugins/workday.py into your $PLUGINS_FOLDER # OR update airflow.cfg to set core.plugins_folder = /path/to/airflow/example_dags/plugins from workday import AfterWorkdayTimetable @dag( start_date=datetime(2023, 5, 13), timetable=AfterWorkdayTimetable(), catchup=True, ) def test_timetable(): @task() def t(): print('hello') t() test_timetable() ``` To verify that the timetable is working correctly, you should be able to unpause the DAG and see tasks complete. Then run one of the CLI commands that I mentioned above to see the error. 
### Operating System CentOS Stream 8 ### Versions of Apache Airflow Providers N/A ### Deployment Other ### Deployment details Self-hosted deployment/`standalone` mode Postgres DB backend ### Anything else Possibly related: * #19578 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31969
https://github.com/apache/airflow/pull/32118
76021ef8cd630826d9524e9984d1a2a7d9c57549
2aa3cfb6abd10779029b0c072493a1c1ed820b77
"2023-06-16T19:42:27Z"
python
"2023-07-10T08:07:45Z"
closed
apache/airflow
https://github.com/apache/airflow
31,957
["airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"]
Airflow Observability Improvement Request
### Description The scheduler runs housekeeping work (adopt_or_reset_orphaned_tasks, check_trigger_timeouts, _emit_pool_metrics, _find_zombies, clear_not_launched_queued_tasks and _check_worker_pods_pending_timeout) at certain frequencies. Right now, we don't have any latency metrics for this housekeeping work, even though it can impact the scheduler heartbeat. It would be a good idea to capture these latency metrics so that the relevant Airflow configuration can be identified and tuned. ### Use case/motivation As we run Airflow at a large scale, we have found that the adopt_or_reset_orphaned_tasks and clear_not_launched_queued_tasks functions can take several minutes (> 5 minutes). This delays the scheduler heartbeat and leads to the scheduler instance being restarted/killed. In order to detect these latency issues, we need better metrics that capture these latencies. ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31957
https://github.com/apache/airflow/pull/35579
5a6dcfd8655c9622f3838a0e66948dc3091afccb
cd296d2068b005ebeb5cdc4509e670901bf5b9f3
"2023-06-16T10:19:09Z"
python
"2023-11-12T17:41:07Z"
closed
apache/airflow
https://github.com/apache/airflow
31,949
["airflow/www/static/js/dag/details/graph/Node.tsx", "airflow/www/static/js/types/index.ts", "airflow/www/static/js/utils/graph.ts"]
Support Operator User Interface Elements in new graph view
### Description The new graph UI looks good, but it currently doesn't support the operator color options mentioned here: https://airflow.apache.org/docs/apache-airflow/stable/howto/custom-operator.html#user-interface ### Use case/motivation It would be great for these features to be supported in the new graph view as they are in the old one. ### Related issues [slack](https://apache-airflow.slack.com/archives/CCPRP7943/p1686866630874739?thread_ts=1686865767.351809&cid=CCPRP7943) ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31949
https://github.com/apache/airflow/pull/32822
12a760f6df831c1d53d035e4d169a69887e8bb26
3bb63f1087176b24e9dc8f4cc51cf44ce9986d34
"2023-06-15T22:54:54Z"
python
"2023-08-03T09:06:11Z"
closed
apache/airflow
https://github.com/apache/airflow
31,907
["dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "images/breeze/output-commands-hash.txt", "images/breeze/output-commands.svg", "images/breeze/output_testing.svg", "images/breeze/output_testing_tests.svg"]
Add `--use-airflow-version` option to `breeze testing tests` command
### Description The option `--use-airflow-version` is available under the command `start-airflow` in `Breeze`. As an example, this is used when testing a release candidate as specified in [documentation](https://github.com/apache/airflow/blob/main/dev/README_RELEASE_AIRFLOW.md#verify-release-candidates-by-contributors): `breeze start-airflow --use-airflow-version <VERSION>rc<X> --python 3.8 --backend postgres`. The idea I have is to add that option as well under the command `testing tests` in `Breeze`. ### Use case/motivation Having the option `--use-airflow-version` available under the command `testing tests` in `Breeze` would make it possible to run system tests against a specific version of Airflow and provider. This could be helpful when releasing new version of Airflow and Airflow providers. As such, providers could run all system tests of their provider package on demand and share these results (somehow, a dashboard?, another way?) to the community/release manager. This would not replace the manual testing already in place that is needed when releasing such new version but would give more information/pointers to the release manager. Before submitting a PR, I wanted to have first some feedbacks about this idea. This might not be possible? This might not be a good idea? This might not be useful at all? ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31907
https://github.com/apache/airflow/pull/31914
518b93c24fda6e7a1df0acf0f4dd1921967dc8f6
b07a26523fad4f17ceb4e3a2f88e043dcaff5e53
"2023-06-14T18:35:02Z"
python
"2023-06-14T23:44:04Z"
closed
apache/airflow
https://github.com/apache/airflow
31,902
["airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"]
MappedOperator doesn't allow `operator_extra_links` instance property
### Apache Airflow version 2.6.1 ### What happened When doing `.partial()` and `.expand()` on any operator which has an instance property (e.g. `@property`) of `operator_extra_links` the `MappedOperator` does not handle it properly, causing the dag to fail to import. The `BatchOperator` from `airflow.providers.amazon.aws.operators.batch` is one example of an operator which defines `operator_extra_links` on a per-instance basis. ### What you think should happen instead The dag should not fail to import (especially when using the AWS `BatchOperator`!) Either: - If per-instance `operator_extra_links` is deemed disallowed behaviour - `MappedOperator` should detect it's a property and give a more helpful error message - `BatchOperator` from the AWS provider should be changed. If I need to open another ticket elsewhere for that please let me know - If per-instance `operator_extra_links` is allowed - `MappedOperator` needs to be adjusted to account for that ### How to reproduce ``` import pendulum from airflow.models.baseoperator import BaseOperator from airflow.decorators import dag, task class BadOperator(BaseOperator): def __init__(self, *args, some_argument: str, **kwargs): super().__init__(*args, some_argument, **kwargs) @property def operator_extra_links(self): # !PROBLEMATIC FUNCTION! # ... Some code to create a collection of `BaseOperatorLink`s dynamically return tuple() @dag( schedule=None, start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), catchup=False, tags=["example"] ) def airflow_is_bad(): """ Example to demonstrate issue with airflow API """ @task def create_arguments(): return [1,2,3,4] bad_operator_test_group = BadOperator.partial( task_id="bad_operator_test_group", ).expand(some_argument=create_arguments()) dag = airflow_is_bad() ``` Put this in your dags folder, Airflow will fail to import the dag with error ``` Broken DAG: [<USER>/airflow/dags/airflow_is_bad_minimal_example.py] Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 825, in _serialize_node serialize_op["_operator_extra_links"] = cls._serialize_operator_extra_links( File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 1165, in _serialize_operator_extra_links for operator_extra_link in operator_extra_links: TypeError: 'property' object is not iterable ``` Commenting out the `operator_extra_links` from the `BadOperator` in the example will allow the dag to be imported fine ### Operating System macOS Ventura 13.4 ### Versions of Apache Airflow Providers ``` apache-airflow-providers-amazon==8.0.0 apache-airflow-providers-common-sql==1.4.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==10.0.0 apache-airflow-providers-http==4.3.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-sqlite==3.3.2 ``` ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else I found adjusting `operator_extra_links` on `BatchOperator` to be `operator_extra_links = (BatchJobDetailsLink(), BatchJobDefinitionLink(), BatchJobQueueLink(), CloudWatchEventsLink())` solved my issue and made it run fine, however I've no idea if that's safe or generalises because I'm not sure what `operator_extra_links` is actually for internally. ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! 
### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
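For completeness, a minimal sketch of the workaround mentioned above, assuming a static set of links is acceptable: declaring `operator_extra_links` as a class-level tuple (which is how most built-in operators expose it) instead of a per-instance `@property`, so serialization can iterate over it. Import paths are as of Airflow 2.6 and the link class is purely illustrative:

```python
from airflow.models.baseoperator import BaseOperator, BaseOperatorLink


class SomeLink(BaseOperatorLink):
    name = "Some Link"

    def get_link(self, operator, *, ti_key):
        return "https://example.com"  # placeholder target


class GoodOperator(BaseOperator):
    # Class-level tuple: iterable at serialization time and compatible
    # with .partial()/.expand(), unlike an instance @property.
    operator_extra_links = (SomeLink(),)

    def __init__(self, *, some_argument: str, **kwargs):
        super().__init__(**kwargs)
        self.some_argument = some_argument

    def execute(self, context):
        return self.some_argument
```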
https://github.com/apache/airflow/issues/31902
https://github.com/apache/airflow/pull/31904
225e3041d269698d0456e09586924c1898d09434
3318212482c6e11ac5c2e2828f7e467bca5b7245
"2023-06-14T14:37:22Z"
python
"2023-07-06T05:50:06Z"
closed
apache/airflow
https://github.com/apache/airflow
31,891
["docs/apache-airflow-providers-google/api-auth-backend/google-openid.rst"]
Incorrect audience argument in Google OpenID authentication doc
### What do you see as an issue? I followed the [Google OpenID authentication doc](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/api-auth-backend/google-openid.html) and got this error: ``` $ ID_TOKEN="$(gcloud auth print-identity-token "--audience=${AUDIENCE}")" ERROR: (gcloud.auth.print-identity-token) unrecognized arguments: --audience=759115288429-c1v16874eprg4455kt1di902b3vkjho2.apps.googleusercontent.com (did you mean '--audiences'?) To search the help text of gcloud commands, run: gcloud help -- SEARCH_TERMS ``` Perhaps the gcloud CLI parameter changed since this doc was written. ### Solving the problem Update the CLI argument in the doc. ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31891
https://github.com/apache/airflow/pull/31893
ca13c7b77ea0e7d37bfe893871bab565d26884d0
fa07812d1013f964a4736eade3ba3e1a60f12692
"2023-06-14T09:05:50Z"
python
"2023-06-23T10:23:44Z"
closed
apache/airflow
https://github.com/apache/airflow
31,877
["airflow/providers/databricks/operators/databricks.py"]
DatabricksSubmitRunOperator libraries parameter has incorrect type
### Apache Airflow version 2.6.1 ### What happened The type of the libraries field in the DatabricksSubmitRunOperator is incorrect. According to the Databricks docs, the values should look more like: ```python [ {"pypi": {"package": "simplejson"}}, {"pypi": {"package": "Faker"}}, ] ``` as opposed to what the type hint implies: ```python [ {"pypi": "simplejson"}, {"pypi": "Faker"}, ] ``` https://github.com/apache/airflow/blob/providers-databricks/4.2.0/airflow/providers/databricks/operators/databricks.py#L306 ### What you think should happen instead _No response_ ### How to reproduce n/a ### Operating System macOS ### Versions of Apache Airflow Providers _No response_ ### Deployment Astronomer ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31877
https://github.com/apache/airflow/pull/31888
f2ebc292fe63d2ddd0686d90c3acc0630f017a07
69bc90b82403b705b3c30176cc3d64b767f2252e
"2023-06-13T14:59:17Z"
python
"2023-06-19T07:22:45Z"
closed
apache/airflow
https://github.com/apache/airflow
31,873
["airflow/models/variable.py", "tests/models/test_variable.py"]
KubernetesPodOperator doesn't mask variables in Rendered Template that are used as arguments
### Apache Airflow version 2.6.1 ### What happened I am pulling a variable from Google Secret Manager and I'm using it as an argument in a KubernetesPodOperator task. I've also tried it with the KubernetesPodOperatorAsync operator and I'm getting the same behaviour. The variable value is not masked on Rendered Template page. If I use the exact same variable in a different operator, like the HttpSensorAsync, it is properly masked. That is quite critical and I can't deploy the DAG to production. ### What you think should happen instead The variable in the KubernetesPodOperator should be masked and only '***' should be shown in the Rendered Template page ### How to reproduce Here's the example of code where I use the exact same variable in two different Operators. It's in the arguments of the Kubernetes Operator and then used in a different operator next. ``` my_changeset = KubernetesPodOperator( task_id='my_load', namespace=kubernetes_namespace, service_account_name=service_account_name, image='my-feed:latest', name='changeset_load', in_cluster=in_cluster, cluster_context='docker-desktop', # is ignored when in_cluster is set to True is_delete_operator_pod=True, get_logs=True, image_pull_policy=image_pull_policy, arguments=[ '-k{{ var.json.faros_api_key.faros_api_key }}', ], container_resources=k8s.V1ResourceRequirements(requests=requests, limits=limits), volumes=volumes, volume_mounts=volume_mounts, log_events_on_failure=True, startup_timeout_seconds=60 * 5, ) test_var = HttpSensorAsync( task_id=f'wait_for_my_file', http_conn_id='my_paymentreports_http', endpoint='{{ var.json.my_paymentreports_http.client_id }}/report', headers={'user-agent': 'King'}, request_params={ 'access_token': '{{ var.json.faros_api_key.faros_api_key }}', }, response_check=lambda response: True if response.status_code == 200 else False, extra_options={'check_response': False}, timeout=60 * 60 * 8, ) ``` The same {{ var.json.faros_api_key.faros_api_key }} is used in both operators, but only masked in the HttpSensorAsync operator. ### Operating System Debian GNU/Linux 11 (bullseye) ### Versions of Apache Airflow Providers apache-airflow==2.6.1+astro.3 apache-airflow-providers-amazon==8.1.0 apache-airflow-providers-celery==3.2.0 apache-airflow-providers-cncf-kubernetes==7.0.0 apache-airflow-providers-common-sql==1.5.1 apache-airflow-providers-datadog==3.3.0 apache-airflow-providers-elasticsearch==4.5.0 apache-airflow-providers-ftp==3.4.1 apache-airflow-providers-github==2.3.0 apache-airflow-providers-google==10.0.0 apache-airflow-providers-hashicorp==3.4.0 apache-airflow-providers-http==4.4.1 apache-airflow-providers-imap==3.2.1 apache-airflow-providers-microsoft-azure==6.1.1 apache-airflow-providers-mysql==5.1.0 apache-airflow-providers-postgres==5.5.0 apache-airflow-providers-redis==3.2.0 apache-airflow-providers-samba==4.2.0 apache-airflow-providers-sendgrid==3.2.0 apache-airflow-providers-sftp==4.3.0 apache-airflow-providers-slack==7.3.0 apache-airflow-providers-sqlite==3.4.1 apache-airflow-providers-ssh==3.7.0 ### Deployment Astronomer ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31873
https://github.com/apache/airflow/pull/31964
fc0e5a4d42ee882ca5bc20ea65be38b2c739644d
e22ce9baed19ddf771db59b7da1d25e240430625
"2023-06-13T11:25:23Z"
python
"2023-06-16T19:05:01Z"
closed
apache/airflow
https://github.com/apache/airflow
31,851
["airflow/cli/cli_config.py", "airflow/cli/commands/connection_command.py", "airflow/cli/commands/variable_command.py", "airflow/cli/utils.py"]
Allow variables to be printed to STDOUT
### Description Currently, the `airflow variables export` command requires an explicit file path and does not support output to stdout. However connections can be printed to stdout using `airflow connections export -`. This inconsistency between the two export commands can lead to confusion and limits the flexibility of the variables export command. ### Use case/motivation To bring some consistency, similar to connections, variables should also be printed to STDOUT, using `-` instead of filename. ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31851
https://github.com/apache/airflow/pull/33279
bfa09da1380f0f1e0727dbbc9f1878bd44eb848d
09d478ec671f8017294d4e15d75db1f40b8cc404
"2023-06-12T02:56:23Z"
python
"2023-08-11T09:02:48Z"
closed
apache/airflow
https://github.com/apache/airflow
31,834
["airflow/providers/microsoft/azure/log/wasb_task_handler.py", "airflow/providers/redis/log/__init__.py", "airflow/providers/redis/log/redis_task_handler.py", "airflow/providers/redis/provider.yaml", "airflow/utils/log/file_task_handler.py", "docs/apache-airflow-providers-redis/index.rst", "docs/apache-airflow-providers-redis/logging/index.rst", "tests/providers/redis/log/__init__.py", "tests/providers/redis/log/test_redis_task_handler.py"]
Redis task handler for logs
### Discussed in https://github.com/apache/airflow/discussions/31832 <div type='discussions-op-text'> <sup>Originally posted by **michalc** June 10, 2023</sup> Should something like the below be in the codebase? It's a simple handler for storing Airflow task logs in Redis, enforcing a max number of entries per try, and an expiry time for the logs Happy to raise a PR (and I guessed a lot at how things should be... so suspect can be improved upon...) ```python class RedisHandler(logging.Handler): def __init__(self, client, key): super().__init__() self.client = client self.key = key def emit(self, record): p = self.client.pipeline() p.rpush(self.key, self.format(record)) p.ltrim(self.key, start=-10000, end=-1) p.expire(self.key, time=60 * 60 * 24 * 28) p.execute() class RedisTaskHandler(FileTaskHandler, LoggingMixin): """ RedisTaskHandler is a python log handler that handles and reads task instance logs. It extends airflow FileTaskHandler and uploads to and reads from Redis. """ trigger_should_wrap = True def __init__(self, base_log_folder: str, redis_url): super().__init__(base_log_folder) self.handler = None self.client = redis.Redis.from_url(redis_url) def _read( self, ti, try_number, metadata=None, ): log_str = b"\n".join( self.client.lrange(self._render_filename(ti, try_number), start=0, end=-1) ).decode("utf-8") return log_str, {"end_of_log": True} def set_context(self, ti): super().set_context(ti) self.handler = RedisHandler( self.client, self._render_filename(ti, ti.try_number) ) self.handler.setFormatter(self.formatter) ```</div>
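If something like this were adopted, it would presumably be wired up the same way the existing remote task handlers are: by overriding the logging config and pointing the `task` handler at the new class. A hedged sketch of that wiring, where the module path and Redis URL are placeholders:

```python
from copy import deepcopy

from airflow.config_templates.airflow_local_settings import DEFAULT_LOGGING_CONFIG

LOGGING_CONFIG = deepcopy(DEFAULT_LOGGING_CONFIG)
# Swap the default FileTaskHandler for the Redis-backed handler while keeping
# the rest of the handler definition (formatter, base_log_folder, filters).
LOGGING_CONFIG["handlers"]["task"].update(
    {
        "class": "my_plugin.redis_task_handler.RedisTaskHandler",  # placeholder path
        "redis_url": "redis://localhost:6379/0",                   # placeholder URL
    }
)
# Then point [logging] logging_config_class in airflow.cfg at this LOGGING_CONFIG.
```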
https://github.com/apache/airflow/issues/31834
https://github.com/apache/airflow/pull/31855
6362ba5ab45a38008814616df4e17717cc3726c3
42b4b43c4c2ccf0b6e7eaa105c982df495768d01
"2023-06-10T17:38:26Z"
python
"2023-07-23T06:43:35Z"
closed
apache/airflow
https://github.com/apache/airflow
31,819
["docs/apache-airflow/authoring-and-scheduling/deferring.rst"]
Improve the docs around deferrable mode for Sensors
### Body With Operators the use case for deferrable operators is pretty clear. However, with Sensors new questions are raised. All Sensors inherit from `BaseSensorOperator`, which adds a [mode](https://github.com/apache/airflow/blob/a98621f4facabc207b4d6b6968e6863845e1f90f/airflow/sensors/base.py#L93) parameter, so a question comes to mind: what is the difference between `SomeSensor(..., mode='reschedule')` and `SomeSensor(..., deferrable=True)`? Thus, unlike Operators, when working with Sensors (assuming the sensor has defer implemented) the users have a choice of what to use, and both can be explained as "something is not ready, let's wait without consuming resources". The docs should clarify the difference and compare the two options, which might look the same but are different. We should explain it on several fronts: 1. What happens in Airflow for each of the options (task state in `defer` mode vs `up_for_reschedule`), etc. 2. What is the motivation/justification for each one: pros and cons. 3. Do we have some kind of general recommendation such as "always prefer X over Y" or "with executor X it is better to use one of the options", etc. I am also wondering what `SomeSensor(..., mode='reschedule', deferrable=True)` means and whether we are protected against such usage. ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
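To make the comparison concrete, the docs could show the two waiting styles side by side. The sketch below uses the `DateTimeSensor` / `DateTimeSensorAsync` pair from core Airflow purely as an illustration (some sensors instead expose a `deferrable=True` flag on the same class): in reschedule mode the task flips to `up_for_reschedule` and is re-queued by the scheduler at each `poke_interval`, while in the deferrable variant the task sits in the `deferred` state and the triggerer's asyncio loop does the waiting.

```python
import pendulum
from airflow.sensors.date_time import DateTimeSensor, DateTimeSensorAsync

target = pendulum.datetime(2023, 6, 10, 8, 0, tz="UTC")

# Reschedule mode: a worker slot is used only while the poke itself runs;
# between pokes the task is up_for_reschedule and costs nothing.
wait_reschedule = DateTimeSensor(
    task_id="wait_reschedule",
    target_time=target.isoformat(),
    mode="reschedule",
    poke_interval=600,
)

# Deferrable: the task is handed to the triggerer and stays in the deferred
# state until the trigger fires; requires an `airflow triggerer` process.
wait_deferred = DateTimeSensorAsync(
    task_id="wait_deferred",
    target_time=target.isoformat(),
)
```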
https://github.com/apache/airflow/issues/31819
https://github.com/apache/airflow/pull/31840
371833e076d033be84f109cce980a6275032833c
0db0ff14da449dc3dbfe9577ccdb12db946b9647
"2023-06-09T18:33:27Z"
python
"2023-06-24T16:40:26Z"
closed
apache/airflow
https://github.com/apache/airflow
31,818
["airflow/cli/cli_config.py", "airflow/cli/commands/db_command.py", "tests/cli/commands/test_db_command.py"]
Add retry + timeout to Airflow db check
### Description In my company's usage of Airflow, development instances of Airflow run against a containerized PostgreSQL instance that is spawned at the same time as the Airflow container. Before the Airflow container runs its initialization scripts, it needs to make sure the PostgreSQL instance can be reached, for which `airflow db check` is a great option. However, there is a non-deterministic race condition between the PostgreSQL and Airflow containers (it is not clear which will reach readiness first, or by how much), so calling the `airflow db check` command once is not sufficient, and implementing a retry-timeout in a shell script is feasible but unpleasant. It would be great if the `airflow db check` command could take two additional optional arguments, `--retry` and `--retry-delay` (just like `curl`), so that the database connection can be checked repeatedly up to a specified number of times. The command should exit with exit code `0` if any of the retries succeeds, and `1` if all of the retries fail. ### Use case/motivation _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
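Until such flags exist, the retry has to live on the calling side. A hedged sketch of that workaround in Python (equivalent to what one would otherwise script in shell), polling the existing `airflow db check` command:

```python
import subprocess
import sys
import time


def wait_for_db(retries: int = 30, retry_delay: float = 5.0) -> None:
    """Poll `airflow db check` until it succeeds or the retries are exhausted."""
    for attempt in range(1, retries + 1):
        result = subprocess.run(["airflow", "db", "check"])
        if result.returncode == 0:
            return
        print(f"DB not reachable yet (attempt {attempt}/{retries}); sleeping {retry_delay}s")
        time.sleep(retry_delay)
    sys.exit(1)


if __name__ == "__main__":
    wait_for_db()
```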
https://github.com/apache/airflow/issues/31818
https://github.com/apache/airflow/pull/31836
a81ac70b33a589c58b59864df931d3293fada382
1b35a077221481e9bf4aeea07d1264973e7f3bf6
"2023-06-09T18:07:59Z"
python
"2023-06-15T08:54:09Z"
closed
apache/airflow
https://github.com/apache/airflow
31,811
["airflow/providers/microsoft/azure/hooks/wasb.py", "docs/apache-airflow-providers-microsoft-azure/connections/wasb.rst", "tests/providers/microsoft/azure/hooks/test_wasb.py"]
Airflow Connection Type Azure Blob Storage - Shared Access Key field
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Airflow Version 2.5.2 Issue: Create a connection of type azure blob storage using the method #3 described in [OSS docs](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/connections/wasb.html). When we use the storage account name and the access key as in the below screenshot, the connection works just fine. <img width="637" alt="connection-blob-storage-access-key" src="https://github.com/apache/airflow/assets/75730393/6cf76b44-f65f-40c0-8279-32c58b6d57ba"> Then what is the purpose of the extra field called `Blob Storage Shared Access Key(Optional)`? When I tried to put the access key in this field, then connection fails with the below error on testing: ``` invalid url http:// ``` Code reference: https://github.com/apache/airflow/blob/main/airflow/providers/microsoft/azure/hooks/wasb.py#L190-L192 ### What you think should happen instead _No response_ ### How to reproduce - Create a Azure Storage account - Go to Access Control (IAM), copy the key - Spin up new local airflow environment using Astro CLI with runtime version 7.4.1 - Go to Airflow UI -> Admin -> Connections -> create a new connection of type Azure blob Storage - Enter the name of the storage account in Blob Storage Login and the key copied from Azure to Blob Shared Access Key (Optional) and click on Test connection. ### Operating System Astro runtime 7.4.1 image on MacOS ### Versions of Apache Airflow Providers _No response_ ### Deployment Astronomer ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31811
https://github.com/apache/airflow/pull/32082
0bc689ee6d4b6967d7ae99a202031aac14d181a2
46ee1c2c8d3d0e5793f42fd10bcd80150caa538b
"2023-06-09T06:40:54Z"
python
"2023-06-27T23:00:11Z"
closed
apache/airflow
https://github.com/apache/airflow
31,795
["airflow/providers/apache/kafka/triggers/await_message.py"]
AwaitMessageTriggerFunctionSensor not firing all eligble messages
### Apache Airflow version 2.6.1 ### What happened The AwaitMessageTriggerFunctionSensor is showing some buggy behaviour. When consuming from a topic, it correctly applies the apply_function in order to yield a TriggerEvent. However, it consumes multiple messages at a time and does not yield a trigger for the correct number of eligible messages (those for which the apply_function returns a value). The observed behaviour is as follows: - The sensor is deferred and messages start getting consumed - Multiple eligible messages trigger a single TriggerEvent instead of multiple TriggerEvents. - The sensor returns to a deferred state, repeating the cycle. The event_triggered_function is being called correctly. However, because not all of the appropriate TriggerEvents are generated while consuming, some of them are missed. ### What you think should happen instead Each eligible message should create an individual TriggerEvent to be consumed by the event_triggered_function. ### How to reproduce - Use a producer DAG to produce a set number of messages on your Kafka topic - Use a listener DAG to consume this topic, screening for eligible messages (apply_function), and use the event_triggered_function to monitor how many events are consumed. ### Operating System Kubernetes cluster - Linux ### Versions of Apache Airflow Providers apache-airflow-providers-apache-kafka==1.1.0 ### Deployment Official Apache Airflow Helm Chart ### Deployment details helm chart version 1.9.0 ### Anything else Every time (independent of topic, message content, apply_function and event_triggered_function) ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
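A hedged sketch of the behaviour the reporter expects from an `AwaitMessageTrigger`-style loop: one event yielded per eligible message rather than one per poll batch. This is an illustration of the pattern with a stubbed consumer, not the provider's actual implementation.

```python
import asyncio
from typing import AsyncIterator, Awaitable, Callable, Optional


async def await_messages(
    consume_one: Callable[[], Awaitable[Optional[str]]],
    apply_function: Callable[[str], Optional[str]],
) -> AsyncIterator[str]:
    """Yield one event per eligible message instead of one per poll batch."""
    while True:
        message = await consume_one()
        if message is None:
            await asyncio.sleep(1)  # nothing polled; wait and try again
            continue
        result = apply_function(message)
        if result is not None:
            yield result  # the real trigger would wrap this in TriggerEvent(result)


async def _demo() -> None:
    queue = ["skip-me", "match-1", "match-2"]

    async def consume_one() -> Optional[str]:
        return queue.pop(0) if queue else None

    events = await_messages(consume_one, lambda m: m if m.startswith("match") else None)
    print(await events.__anext__())  # match-1
    print(await events.__anext__())  # match-2


asyncio.run(_demo())
```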
https://github.com/apache/airflow/issues/31795
https://github.com/apache/airflow/pull/31803
ead2530d3500dd27df54383a0802b6c94828c359
1b599c9fbfb6151a41a588edaa786745f50eec38
"2023-06-08T14:24:33Z"
python
"2023-06-30T09:26:46Z"
closed
apache/airflow
https://github.com/apache/airflow
31,769
["airflow/serialization/serde.py", "tests/serialization/test_serde.py"]
Deserialization of old xcom data fails after upgrade to 2.6.1 from 2.5.2 when calling /xcom/list/ [GET]
### Apache Airflow version 2.6.1 ### What happened After upgrading from airflow 2.5.2 to 2.6.1 calling the endpoint `xcom/list/` we get the following exception: ``` [2023-06-07T12:16:50.050+0000] {app.py:1744} ERROR - Exception on /xcom/list/ [GET] Traceback (most recent call last): File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 2529, in wsgi_app response = self.full_dispatch_request() File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1825, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request rv = self.dispatch_request() File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/security/decorators.py", line 139, in wraps return f(self, *args, **kwargs) File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/views.py", line 554, in list widgets = self._list() File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/baseviews.py", line 1177, in _list widgets = self._get_list_widget( File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/baseviews.py", line 1076, in _get_list_widget count, lst = self.datamodel.query( File "/home/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/models/sqla/interface.py", line 500, in query query_results = query.all() File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2773, in all return self._iter().all() File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 1476, in all return self._allrows() File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 401, in _allrows rows = self._fetchall_impl() File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 1389, in _fetchall_impl return self._real_result._fetchall_impl() File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 1813, in _fetchall_impl return list(self.iterator) File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 151, in chunks rows = [proc(row) for row in fetch] File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 151, in <listcomp> rows = [proc(row) for row in fetch] File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 984, in _instance state.manager.dispatch.load(state, context) File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/event/attr.py", line 334, in __call__ fn(*args, **kw) File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/mapper.py", line 3702, in _event_on_load instrumenting_mapper._reconstructor(state.obj()) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 128, in init_on_load self.value = self.orm_deserialize_value() File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 677, in orm_deserialize_value return BaseXCom._deserialize_value(self, True) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 659, in _deserialize_value return json.loads(result.value.decode("UTF-8"), cls=XComDecoder, object_hook=object_hook) File 
"/usr/local/lib/python3.10/json/__init__.py", line 359, in loads return cls(**kw).decode(s) File "/usr/local/lib/python3.10/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/local/lib/python3.10/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 126, in orm_object_hook return deserialize(dct, False) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 209, in deserialize o = _convert(o) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 273, in _convert return {CLASSNAME: old[OLD_TYPE], VERSION: DEFAULT_VERSION, DATA: old[OLD_DATA][OLD_DATA]} KeyError: '__var' ``` Some xcom entries from previous airflow versions seem to be incompatible with the new refactored serialization from https://github.com/apache/airflow/pull/28067 ### What you think should happen instead xcom entries should be displayed ### How to reproduce Add an entry to your the xcom table where value contains: ` [{"__classname__": "airflow.datasets.Dataset", "__version__": 1, "__data__": {"__var": {"uri": "bq://google_cloud_default@?table=table_name&schema=schema_name", "extra": null}, "__type": "dict"}}]` ### Operating System Debian GNU/Linux 11 (bullseye) ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==8.0.0 apache-airflow-providers-apache-kafka==1.1.0 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cncf-kubernetes==6.1.0 apache-airflow-providers-common-sql==1.4.0 apache-airflow-providers-docker==3.6.0 apache-airflow-providers-elasticsearch==4.4.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==10.1.1 apache-airflow-providers-grpc==3.1.0 apache-airflow-providers-hashicorp==3.3.1 apache-airflow-providers-http==4.3.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-microsoft-azure==6.0.0 apache-airflow-providers-mysql==5.0.0 apache-airflow-providers-odbc==3.2.1 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-redis==3.1.0 apache-airflow-providers-sendgrid==3.1.0 apache-airflow-providers-sftp==4.2.4 apache-airflow-providers-slack==7.2.0 apache-airflow-providers-snowflake==4.0.5 apache-airflow-providers-sqlite==3.3.2 apache-airflow-providers-ssh==3.6.0 ### Deployment Other 3rd-party Helm chart ### Deployment details _No response_ ### Anything else The double `old[OLD_DATA][OLD_DATA]` looks suspicious to me in https://github.com/apache/airflow/blob/58fca5eb3c3521e3fa1b3beeb066acb15629deeb/airflow/serialization/serde.py#L273 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31769
https://github.com/apache/airflow/pull/31866
779226706c1d64e0fe1e19c5f077ead9c9b4914a
bd32467ede1a5a197e09456803f7cebaee9f9b77
"2023-06-07T15:08:32Z"
python
"2023-06-29T20:37:16Z"
closed
apache/airflow
https://github.com/apache/airflow
31,761
["BREEZE.rst"]
Update the troubleshooting section in Breeze for pip running for a long time.
### What do you see as an issue? Add an update to the troubleshooting section of BREEZE.rst for when pip is taking a significant amount of time and fails with the following error: ``` pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. ``` ### Solving the problem If pip is taking a significant amount of time and your internet connection is causing pip to be unable to download the libraries within the default timeout, it is advisable to increase the default timeout as follows and run Breeze again. ``` export PIP_DEFAULT_TIMEOUT=1000 ``` ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31761
https://github.com/apache/airflow/pull/31760
07ea574fed5d56ca9405ee9e47828841289e3a3c
b9efbf513d8390b66d01ee380ccc43cd60d3c88b
"2023-06-07T11:47:11Z"
python
"2023-06-07T11:51:55Z"
closed
apache/airflow
https://github.com/apache/airflow
31,753
["airflow/providers/databricks/operators/databricks_sql.py", "tests/providers/databricks/operators/test_databricks_sql.py"]
AttributeError exception when returning result to XCom
### Apache Airflow version 2.6.1 ### What happened When i use _do_xcom_push=True_ in **DatabricksSqlOperator** the an exception with following stack trace is thrown: ``` [2023-06-06, 08:52:24 UTC] {sql.py:375} INFO - Running statement: SELECT cast(max(id) as STRING) FROM prod.unified.sessions, parameters: None [2023-06-06, 08:52:25 UTC] {taskinstance.py:1824} ERROR - Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper return func(*args, **kwargs) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2354, in xcom_push XCom.set( File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper return func(*args, **kwargs) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 237, in set value = cls.serialize_value( File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 632, in serialize_value return json.dumps(value, cls=XComEncoder).encode("UTF-8") File "/usr/local/lib/python3.10/json/__init__.py", line 238, in dumps **kw).encode(obj) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 102, in encode o = self.default(o) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 91, in default return serialize(o) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 144, in serialize return encode(classname, version, serialize(data, depth + 1)) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize return [serialize(d, depth + 1) for d in o] File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp> return [serialize(d, depth + 1) for d in o] File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 132, in serialize qn = qualname(o) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/module_loading.py", line 47, in qualname return f"{o.__module__}.{o.__name__}" File "/home/airflow/.local/lib/python3.10/site-packages/databricks/sql/types.py", line 161, in __getattr__ raise AttributeError(item) AttributeError: __name__. Did you mean: '__ne__'? ``` ### What you think should happen instead In _process_output() if self._output_path is False a list of tuples is returned: ``` def _process_output(self, results: list[Any], descriptions: list[Sequence[Sequence] | None]) -> list[Any]: if not self._output_path: return list(zip(descriptions, results)) ``` I suspect this breaks the serialization somehow which might be related to my own meta database(postgres). 
Replacing the Databricks SQL Operator with simple **PythonOperator** and **DatabricksSqlHook** works just fine: ``` def get_max_id(ti): hook = DatabricksSqlHook(databricks_conn_id=databricks_sql_conn_id, sql_endpoint_name='sql_endpoint') sql = "SELECT cast(max(id) as STRING) FROM prod.unified.sessions" return str(hook.get_first(sql)[0]) ``` ### How to reproduce ``` get_max_id_task = DatabricksSqlOperator( databricks_conn_id=databricks_sql_conn_id, sql_endpoint_name='sql_endpoint', task_id='get_max_id', sql="SELECT cast(max(id) as STRING) FROM prod.unified.sessions", do_xcom_push=True ) ``` ### Operating System Debian GNU/Linux 11 (bullseye) docker image, python 3.10 ### Versions of Apache Airflow Providers apache-airflow-providers-common-sql==1.5.1 databricks-sql-connector==2.5.2 apache-airflow-providers-databricks==4.2.0 ### Deployment Docker-Compose ### Deployment details Using extended Airflow image, LocalExecutor, Postgres 13 meta db as container in the same stack. docker-compose version 1.29.2, build 5becea4c Docker version 23.0.5, build bc4487a ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
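The failing frame is `qualname()` being called on a `databricks.sql.types.Row`, so any output that reaches XCom still carrying those vendor objects trips the JSON encoder. A minimal sketch of the workaround idea, assuming the rows and cursor descriptions behave like ordinary sequences (this is not the operator's actual fix, and `to_serializable` is a hypothetical helper):
```python
# Sketch only: coerce databricks-sql Row objects and cursor descriptions into
# plain tuples so the XCom JSON encoder never sees vendor-specific classes.
from __future__ import annotations

from typing import Any, Sequence


def to_serializable(results: list[Any], descriptions: list[Sequence | None]) -> list[Any]:
    plain_descriptions = [
        [tuple(column) for column in description] if description else None
        for description in descriptions
    ]
    plain_results = [[tuple(row) for row in rows] for rows in results]
    return list(zip(plain_descriptions, plain_results))
```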
https://github.com/apache/airflow/issues/31753
https://github.com/apache/airflow/pull/31780
1aa9e803c26b8e86ab053cfe760153fc286e177c
049c6184b730a7ede41db9406654f054ddc8cc5f
"2023-06-07T06:44:13Z"
python
"2023-06-08T10:49:33Z"
closed
apache/airflow
https://github.com/apache/airflow
31,750
["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"]
BaseSQLToGCSOperator creates a row group for each row during parquet generation
### Apache Airflow version Other Airflow 2 version (please specify below) Airflow 2.4.2 ### What happened BaseSQLToGCSOperator creates row group for each rows during parquet generation, which cause compression not work and increase file size. ![image](https://github.com/apache/airflow/assets/51909776/bf256065-c130-4354-81c7-8ca2ed4e8d93) ### What you think should happen instead _No response_ ### How to reproduce OracleToGCSOperator( task_id='oracle_to_gcs_parquet_test', gcp_conn_id=GCP_CONNECTION, oracle_conn_id=ORACLE_CONNECTION, sql='', bucket=GCS_BUCKET_NAME, filename='', export_format='parquet', ) ### Operating System CentOS Linux 7 ### Versions of Apache Airflow Providers apache-airflow-providers-apache-hive 2.1.0 apache-airflow-providers-apache-sqoop 2.0.2 apache-airflow-providers-celery 3.0.0 apache-airflow-providers-common-sql 1.2.0 apache-airflow-providers-ftp 3.1.0 apache-airflow-providers-google 8.4.0 apache-airflow-providers-http 4.0.0 apache-airflow-providers-imap 3.0.0 apache-airflow-providers-mysql 3.0.0 apache-airflow-providers-oracle 2.1.0 apache-airflow-providers-salesforce 5.3.0 apache-airflow-providers-sqlite 3.2.1 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
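For context on why this inflates files: each `write_table()` call on a `pyarrow.parquet.ParquetWriter` emits at least one row group, so writing row by row defeats compression. A small standalone sketch (not the provider's code) showing the batched alternative:
```python
# Standalone sketch, not the provider's implementation: one write_table() call
# for a whole batch yields a single row group, letting compression work across
# many values instead of one row at a time.
import pyarrow as pa
import pyarrow.parquet as pq

schema = pa.schema([("id", pa.int64()), ("name", pa.string())])
rows = [{"id": i, "name": f"row-{i}"} for i in range(100_000)]

with pq.ParquetWriter("example.parquet", schema, compression="snappy") as writer:
    writer.write_table(pa.Table.from_pylist(rows, schema=schema))
```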
https://github.com/apache/airflow/issues/31750
https://github.com/apache/airflow/pull/31831
ee83a2fbd1a65e6a5c7d550a39e1deee49856270
b502e665d633262f3ce52d9c002c0a25e6e4ec9d
"2023-06-07T03:06:11Z"
python
"2023-06-14T12:05:05Z"
closed
apache/airflow
https://github.com/apache/airflow
31,745
["airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"]
Add a process_line callback to KubernetesPodOperator
### Description Add a process_line callback to KubernetesPodOperator, like the one in `BeamRunPythonPipelineOperator` (https://github.com/apache/airflow/blob/main/airflow/providers/apache/beam/operators/beam.py#LL304C36-L304C57), which allows the user to add stateful plugins based on the logs emitted by the container. ### Use case/motivation I can add a plugin based on the logs and also allow cleanup in on_kill based on the job-creation log line. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
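A rough sketch of how such a callback might be used if the feature existed; the `callback_on_line` parameter name is purely hypothetical and is not part of the current operator:
```python
# Hypothetical usage sketch: a callback invoked for every log line streamed
# from the pod, used here to remember a job id for cleanup in on_kill.
# The callback_on_line parameter does not exist; it illustrates the request.
import re

captured = {}


def process_line(line: str) -> None:
    match = re.search(r"Created job[:=]\s*(\S+)", line)
    if match:
        captured["job_id"] = match.group(1)

# KubernetesPodOperator(
#     task_id="run_job",
#     image="my-image",               # hypothetical
#     callback_on_line=process_line,  # hypothetical parameter
# )
```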
https://github.com/apache/airflow/issues/31745
https://github.com/apache/airflow/pull/34153
d800a0de5194bb1ef3cfad44c874abafcc78efd6
b5057e0e1fc6b7a47e38037a97cac862706747f0
"2023-06-06T18:40:42Z"
python
"2023-09-09T18:08:29Z"
closed
apache/airflow
https://github.com/apache/airflow
31,726
["airflow/models/taskinstance.py", "airflow/www/extensions/init_wsgi_middlewares.py", "tests/www/test_app.py"]
redirect to the same URL after setting base_url
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened ``` $ curl localhost:8080/airflow/ <!doctype html> <html lang=en> <title>Redirecting...</title> <h1>Redirecting...</h1> <p>You should be redirected automatically to the target URL: <a href="http://localhost:8080/airflow/">http://localhost:8080/airflow/</a>. If not, click the link. ``` ### What you think should happen instead At least not circular redirect. ### How to reproduce generate yaml: ``` helm template --name-template=airflow ~/downloads/airflow > airflow.yaml ``` add base_url in webserver section and remove health and ready check in webserver(to keep pod alive): ``` [webserver] enable_proxy_fix = True rbac = True base_url = http://my.domain.com/airflow/ ``` ### Operating System Ubuntu 22.04.2 LTS ### Versions of Apache Airflow Providers apache-airflow==2.5.3 apache-airflow-providers-amazon==7.3.0 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cncf-kubernetes==5.2.2 apache-airflow-providers-common-sql==1.3.4 apache-airflow-providers-docker==3.5.1 apache-airflow-providers-elasticsearch==4.4.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==8.11.0 apache-airflow-providers-grpc==3.1.0 apache-airflow-providers-hashicorp==3.3.0 apache-airflow-providers-http==4.2.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-microsoft-azure==5.2.1 apache-airflow-providers-mysql==4.0.2 apache-airflow-providers-odbc==3.2.1 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-redis==3.1.0 apache-airflow-providers-sendgrid==3.1.0 apache-airflow-providers-sftp==4.2.4 apache-airflow-providers-slack==7.2.0 apache-airflow-providers-snowflake==4.0.4 apache-airflow-providers-sqlite==3.3.1 apache-airflow-providers-ssh==3.5.0 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31726
https://github.com/apache/airflow/pull/31833
69bc90b82403b705b3c30176cc3d64b767f2252e
fe4a6c843acd97c776d5890116bfa85356a54eee
"2023-06-06T02:39:47Z"
python
"2023-06-19T07:29:11Z"
closed
apache/airflow
https://github.com/apache/airflow
31,720
["airflow/jobs/triggerer_job_runner.py", "tests/jobs/test_triggerer_job.py"]
Add a log message when a trigger is canceled for timeout
### Body The _trigger_ log doesn't show that a trigger timed out when it is canceled due to timeout. We should try to see if we can add a log message that would show up in the right place. If we emit it from the trigger process, it might show up out of order. But then again, if we ultimately don't need to go back to the task, that would not be a problem. Additionally if we ultimately can "log from anywhere" then again, this would provide a clean solution. This came up in PR discussion here: https://github.com/apache/airflow/pull/30853#discussion_r1187018026 The relevant trigger code is here: https://github.com/apache/airflow/blob/main/airflow/jobs/triggerer_job_runner.py#L598-L619 I think we could add logic so that when we receive a cancelled error (which could be for a few different reasons) then we can log the reason for the cancellation. I think we could just add an `except CancelledError` and then log the reason. We might need also to update the code in the location where we actually _initiate_ the cancellation to add sufficient information for the log message. cc @syedahsn @phanikumv @jedcunningham @pankajastro ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
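One way to picture the change, as a hedged sketch: catch the `CancelledError` around the trigger coroutine and log why it was cancelled before re-raising. The `timeout_reached` and `run_trigger` callables below are placeholders for the runner's real state, not actual triggerer APIs.
```python
# Sketch under assumptions: log the reason for a trigger cancellation.
import asyncio
import logging

log = logging.getLogger(__name__)


async def run_with_cancellation_logging(trigger_id, run_trigger, timeout_reached):
    try:
        await run_trigger()
    except asyncio.CancelledError:
        if timeout_reached():
            log.error("Trigger %s cancelled because its task timed out", trigger_id)
        else:
            log.info("Trigger %s cancelled (task completed or was rescheduled)", trigger_id)
        raise
```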
https://github.com/apache/airflow/issues/31720
https://github.com/apache/airflow/pull/31757
6becb7031618867bc253aefc9e3e216629575d2d
a60429eadfffb5fb0f867c220a6cecf628692dcf
"2023-06-05T18:55:20Z"
python
"2023-06-16T08:31:51Z"
closed
apache/airflow
https://github.com/apache/airflow
31,668
["docs/apache-airflow/core-concepts/dags.rst"]
Schedule "@daily" is wrongly declared in the "DAG/Core Concepts"
### What do you see as an issue? I found a small bug in the DAG Core Concepts documentation regarding the `@daily`schedule: https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html#running-dags DAGs do not require a schedule, but it’s very common to define one. You define it via the `schedule` argument, like this: ```python with DAG("my_daily_dag", schedule="@daily"): ... ``` The `schedule` argument takes any value that is a valid [Crontab](https://en.wikipedia.org/wiki/Cron) schedule value, so you could also do: ```python with DAG("my_daily_dag", schedule="0 * * * *"): ... ``` If I'm not mistaken, the daily crontab notation should be `0 0 * * *` instead of `0 * * * *`, otherwise the DAG would run every hour. The second `0`of course needs to be replaced with the hour, at which the DAG should run daily. ### Solving the problem I would change the documentation at the marked location: The `schedule` argument takes any value that is a valid [Crontab](https://en.wikipedia.org/wiki/Cron) schedule value, so for a daily run at 00:00, you could also do: ```python with DAG("my_daily_dag", schedule="0 0 * * *"): ... ``` ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31668
https://github.com/apache/airflow/pull/31666
4ebf1c814c6e382169db00493a897b11c680e72b
6a69fbb10c08f30c0cb22e2ba68f56f3a5d465aa
"2023-06-01T12:35:33Z"
python
"2023-06-01T14:36:56Z"
closed
apache/airflow
https://github.com/apache/airflow
31,656
["airflow/decorators/base.py", "tests/decorators/test_setup_teardown.py"]
Param on_failure_fail_dagrun should be overridable through `task.override`
Currently when you define a teardown e.g. ``` @teardown(on_failure_fail_dagrun=True) def my_teardown(): ... ``` You can not change this when you instantiate the task e.g. with ``` my_teardown.override(on_failure_fail_dagrun=True)() ``` I don't think this is good because if you define a reusable taskflow function then it might depend on the context.
https://github.com/apache/airflow/issues/31656
https://github.com/apache/airflow/pull/31665
29d2a31dc04471fc92cbfb2943ca419d5d8a6ab0
8dd194493d6853c2de80faee60d124b5d54ec3a6
"2023-05-31T21:45:59Z"
python
"2023-06-02T05:26:40Z"
closed
apache/airflow
https://github.com/apache/airflow
31,648
["airflow/providers/google/cloud/hooks/kubernetes_engine.py", "tests/providers/google/cloud/hooks/test_kubernetes_engine.py"]
Getting Unauthorized error messages with GKEStartPodOperator if pod execution is over 1 hour
### Apache Airflow version 2.6.1 ### What happened After installing 2.6.1 with fix https://github.com/apache/airflow/pull/31391 we could see our DAGs running normally, except that when they take more than 60 minutes, it stops reporting the log/status, and even if task is completed within job pod, is always marked as failure due to the "Unauthorized" errors. Log of job running starting and authenticating (some info redacted): ``` [2023-05-31, 07:00:17 UTC] {base.py:73} INFO - Using connection ID 'gcp_conn' for task execution. [2023-05-31, 07:00:17 UTC] {kubernetes_engine.py:288} INFO - Fetching cluster (project_id=<PROJECT-ID>, location=<REGION>, cluster_name=<CLUSTER-NAME>) [2023-05-31, 07:00:17 UTC] {credentials_provider.py:323} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook. [2023-05-31, 07:00:17 UTC] {_default.py:213} DEBUG - Checking None for explicit credentials as part of auth process... [2023-05-31, 07:00:17 UTC] {_default.py:186} DEBUG - Checking Cloud SDK credentials as part of auth process... [2023-05-31, 07:00:17 UTC] {_default.py:192} DEBUG - Cloud SDK credentials not found on disk; not using them [2023-05-31, 07:00:17 UTC] {_http_client.py:104} DEBUG - Making request: GET http://169.254.169.254 [2023-05-31, 07:00:17 UTC] {_http_client.py:104} DEBUG - Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id [2023-05-31, 07:00:17 UTC] {requests.py:192} DEBUG - Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true [2023-05-31, 07:00:17 UTC] {requests.py:192} DEBUG - Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/airflow@<PROJECT-ID>.iam.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform [2023-05-31, 07:00:17 UTC] {pod.py:769} DEBUG - Creating pod for KubernetesPodOperator task update [2023-05-31, 07:00:17 UTC] {pod.py:850} INFO - Building pod mydag-vfqdiqzm with labels: {'dag_id': 'mydag', 'task_id': 'update', 'run_id': 'scheduled__2023-05-30T0700000000-fa9d70c83', 'kubernetes_pod_operator': 'True', 'try_number': '1'} [2023-05-31, 07:00:17 UTC] {base.py:73} INFO - Using connection ID 'google_cloud_default' for task execution. [2023-05-31, 07:00:17 UTC] {credentials_provider.py:323} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook. [2023-05-31, 07:00:17 UTC] {_default.py:213} DEBUG - Checking None for explicit credentials as part of auth process... [2023-05-31, 07:00:17 UTC] {_default.py:186} DEBUG - Checking Cloud SDK credentials as part of auth process... 
[2023-05-31, 07:00:17 UTC] {_default.py:192} DEBUG - Cloud SDK credentials not found on disk; not using them [2023-05-31, 07:00:17 UTC] {_http_client.py:104} DEBUG - Making request: GET http://169.254.169.254 [2023-05-31, 07:00:17 UTC] {_http_client.py:104} DEBUG - Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id [2023-05-31, 07:00:17 UTC] {requests.py:192} DEBUG - Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true [2023-05-31, 07:00:17 UTC] {requests.py:192} DEBUG - Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/airflow@<PROJECT-ID>.iam.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform [2023-05-31, 07:00:17 UTC] {rest.py:231} DEBUG - response body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797683242"},"items":[]} [2023-05-31, 07:00:17 UTC] {pod.py:500} DEBUG - Starting pod: api_version: v1 kind: Pod metadata: annotations: {} cluster_name: null ... ``` Periodically we can see heartbeats and status: ``` [2023-05-31, 07:06:48 UTC] {rest.py:231} DEBUG - response body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"mydag-vfqdiqzm","namespace":"airflow-namespace","uid":"314deb6c-3c2b-41ae-b49f-1e0c89cf6950","resourceVersion":"797683421","creationTimestamp":"2023-05-31T07:00:17Z"...<REDACTING DETAILS POD>} [2023-05-31, 07:06:48 UTC] {taskinstance.py:789} DEBUG - Refreshing TaskInstance <TaskInstance: mydag.update scheduled__2023-05-30T07:00:00+00:00 [running]> from DB [2023-05-31, 07:06:48 UTC] {job.py:213} DEBUG - [heartbeat] ``` Exactly at the 1 hour mark this error occurs: ``` [2023-05-31, 08:00:55 UTC] {rest.py:231} DEBUG - response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401} [2023-05-31, 08:00:56 UTC] {rest.py:231} DEBUG - response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401} [2023-05-31, 08:00:58 UTC] {taskinstance.py:789} DEBUG - Refreshing TaskInstance <TaskInstance: mydag.update scheduled__2023-05-30T07:00:00+00:00 [running]> from DB [2023-05-31, 08:00:58 UTC] {rest.py:231} DEBUG - response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401} [2023-05-31, 08:00:58 UTC] {job.py:213} DEBUG - [heartbeat] [2023-05-31, 08:00:58 UTC] {rest.py:231} DEBUG - response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401} [2023-05-31, 08:00:58 UTC] {pod.py:905} ERROR - (401) Reason: Unauthorized HTTP response headers: HTTPHeaderDict({'Audit-Id': 'a9cfb5cd-9915-4490-8813-e8392a0e20d2', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Wed, 31 May 2023 08:00:58 GMT', 'Content-Length': '129'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401} Traceback (most recent call last): File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/operators/pod.py", line 543, in execute_sync self.pod_manager.fetch_container_logs( File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 361, in fetch_container_logs last_log_time = 
consume_logs( File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 339, in consume_logs for raw_line in logs: File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 166, in __iter__ if not self.logs_available(): File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 182, in logs_available remote_pod = self.read_pod() File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 200, in read_pod self.read_pod_cache = self.pod_manager.read_pod(self.pod) File "/home/airflow/.local/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f return self(f, *args, **kw) File "/home/airflow/.local/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__ do = self.iter(retry_state=retry_state) File "/home/airflow/.local/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter raise retry_exc.reraise() File "/home/airflow/.local/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise raise self.last_attempt.result() File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result return self.__get_result() File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/home/airflow/.local/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__ result = fn(*args, **kwargs) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 490, in read_pod return self._client.read_namespaced_pod(pod.metadata.name, pod.metadata.namespace) File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/client/api/core_v1_api.py", line 23483, in read_namespaced_pod return self.read_namespaced_pod_with_http_info(name, namespace, **kwargs) # noqa: E501 File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/client/api/core_v1_api.py", line 23570, in read_namespaced_pod_with_http_info return self.api_client.call_api( File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 348, in call_api return self.__call_api(resource_path, method, File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 180, in __call_api response_data = self.request( File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 373, in request return self.rest_client.GET(url, File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/client/rest.py", line 240, in GET return self.request("GET", url, File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/client/rest.py", line 234, in request raise ApiException(http_resp=r) ``` Job retries with same result and is marked as failed. ### What you think should happen instead Pod continues to report log and status of pod until completion (even if it takes over 1 hr), and job is marked as successful. ### How to reproduce Create a DAG that makes use of GKEStartPodOperator with a task that will take over one hour. 
### Operating System cos_containerd ### Versions of Apache Airflow Providers apache-airflow-providers-cncf-kubernetes==5.2.2 apache-airflow-providers-google==10.1.1 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
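The 401s beginning exactly at the one-hour mark point at the initial OAuth access token (default lifetime of roughly one hour) expiring while the pod log stream is still open. A minimal sketch of the refresh idea, not the provider's actual fix:
```python
# Sketch only: refresh the Google credentials and re-apply the bearer token to
# the Kubernetes client configuration so long-running API calls keep working.
import google.auth
import google.auth.transport.requests
from kubernetes import client


def refreshed_api_client(config: client.Configuration) -> client.ApiClient:
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    credentials.refresh(google.auth.transport.requests.Request())
    config.api_key["authorization"] = credentials.token
    config.api_key_prefix["authorization"] = "Bearer"
    return client.ApiClient(config)
```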
https://github.com/apache/airflow/issues/31648
https://github.com/apache/airflow/pull/32673
27b5f696a48a088a23294c542acb46bd6e544809
848c69a194c03ed3a5badc909e26b5c1bda03050
"2023-05-31T15:43:06Z"
python
"2023-07-20T14:16:32Z"
closed
apache/airflow
https://github.com/apache/airflow
31,636
["airflow/providers/amazon/aws/operators/ecs.py", "airflow/providers/amazon/aws/triggers/ecs.py", "airflow/providers/amazon/aws/utils/task_log_fetcher.py", "airflow/providers/amazon/provider.yaml", "tests/providers/amazon/aws/operators/test_ecs.py", "tests/providers/amazon/aws/triggers/test_ecs.py", "tests/providers/amazon/aws/utils/test_task_log_fetcher.py"]
Add deferrable mode to EcsRunTaskOperator
### Description I would greatly appreciate it if the `EcsRunTaskOperator` could incorporate the `deferrable` mode. Currently, this operator significantly affects the performance of my workers, and running multiple instances simultaneously proves to be inefficient. I have noticed that the `KubernetesPodOperator` already supports this functionality, so having a similar feature available for ECS would be a valuable addition. Note: This feature request relates to the `amazon` provider. ### Use case/motivation Reduce resource utilisation of my workers when running multiple `EcsRunTaskOperator` tasks in parallel. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
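For illustration, the generic defer/resume handshake looks roughly like the sketch below; a real implementation would submit the ECS task and hand off to an ECS-specific trigger instead of the `TimeDeltaTrigger` stand-in used here.
```python
# Sketch of the defer/resume pattern only -- not a real ECS implementation.
from datetime import timedelta

from airflow.models.baseoperator import BaseOperator
from airflow.triggers.temporal import TimeDeltaTrigger


class DeferrableEcsRunTaskSketch(BaseOperator):
    def execute(self, context):
        # A real operator would call ecs.run_task() here, then free the worker
        # slot by deferring to a trigger that polls describe_tasks.
        self.defer(
            trigger=TimeDeltaTrigger(timedelta(minutes=5)),  # stand-in trigger
            method_name="execute_complete",
        )

    def execute_complete(self, context, event=None):
        # Resumed by the triggerer; check the final ECS task status here.
        return event
```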
https://github.com/apache/airflow/issues/31636
https://github.com/apache/airflow/pull/31881
e4ca68818eec0f29ef04a1a5bfec3241ea03bf8c
415e0767616121854b6a29b3e44387f708cdf81e
"2023-05-31T09:40:58Z"
python
"2023-06-23T17:13:13Z"
closed
apache/airflow
https://github.com/apache/airflow
31,624
["chart/templates/workers/worker-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/airflow_core/test_worker.py"]
waitForMigrations.enabled is missing for "workers" in the Helm chart
### Official Helm Chart version 1.9.0 (latest released) ### Apache Airflow version 2.6.1 ### Kubernetes Version 1.26 ### Helm Chart configuration `waitForMigrations` does not have an `enabled` property for the `workers` section. For consistency with the other components and to fully support ignoring migrations, the `enabled` boolean should be added. ### Docker Image customizations _No response_ ### What happened I tried to set `waitForMigration.enabled` on `workers` but it is not supported. ### What you think should happen instead For consistency with the other components and to fully support ignoring migrations, the `enabled` boolean should be added. ### How to reproduce Add this to your `values.yaml`: ``` workers: waitForMigrations: enabled: false ``` ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31624
https://github.com/apache/airflow/pull/31625
3bcaf3dd7e71993e1a89cf9367bf7c540cc88e75
b1d12271b85460803670415ae4af9313f8ae2599
"2023-05-30T22:29:31Z"
python
"2023-06-01T12:26:11Z"
closed
apache/airflow
https://github.com/apache/airflow
31,612
["airflow/providers/presto/provider.yaml", "generated/provider_dependencies.json"]
[airflow 2.4.3] presto queries returning none following upgrade to common.sql provider
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened After upgrading apache-airflow-providers-common-sql from 1.2.0 to anything above 1.3.0 presto queries using the get_records() and or get_first() function returns none. using the same query -- `select 1`: 1.2.0: `Done. Returned value was: [[1]]` 1.3.0 and above: ``` Running statement: select 1, parameters: None [2023-05-30, 11:57:37 UTC] {{python.py:177}} INFO - Done. Returned value was: None ``` ### What you think should happen instead i would expect that running the query `select 1` on presto would provide the same result when the environment is running apache-airflow-providers-common-sql 1.2.0 or apache-airflow-providers-common-sql 1.5.1. ### How to reproduce run the following query: `PrestoHook(conn_id).get_records(`select 1`)` ensure that the requirements are as labelled below. ### Operating System NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" ### Versions of Apache Airflow Providers apache-airflow==2.4.3 apache-airflow-providers-amazon==6.0.0 apache-airflow-providers-celery==3.0.0 apache-airflow-providers-common-sql==1.5.1 apache-airflow-providers-ftp==3.1.0 apache-airflow-providers-google==8.4.0 apache-airflow-providers-http==4.0.0 apache-airflow-providers-imap==3.0.0 apache-airflow-providers-jenkins==3.0.0 apache-airflow-providers-mysql==3.2.1 apache-airflow-providers-postgres==5.2.2 apache-airflow-providers-presto==5.1.0 apache-airflow-providers-sendgrid==3.0.0 apache-airflow-providers-slack==6.0.0 apache-airflow-providers-snowflake==3.3.0 apache-airflow-providers-sqlite==3.2.1 apache-airflow-providers-trino==4.1.0 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31612
https://github.com/apache/airflow/pull/35132
789222cb1378079e2afd24c70c1a6783b57e27e6
8ef2a9997d8b6633ba04dd9f752f504a2ce93e25
"2023-05-30T12:19:40Z"
python
"2023-10-23T15:40:20Z"
closed
apache/airflow
https://github.com/apache/airflow
31,604
["airflow/utils/task_group.py", "tests/decorators/test_task_group.py", "tests/utils/test_task_group.py"]
Override default_args between Nested TaskGroups
### What do you see as an issue? Hello! I don't know if this is intended, but `default_args` is not an override when using nested TaskGroups. ```python def callback_in_dag(context: Context): print("DAG!") def callback_in_task_group(context: Context): print("Parent TaskGroup!") with DAG( "some_dag_id", default_args={ "on_failure_callback": callback_in_dag }, schedule=None, start_date=datetime(2023, 1, 1) ) as dag: with TaskGroup( "parent_tg", default_args={ "on_failure_callback": callback_in_task_group } ) as parent_tg: with TaskGroup("child_tg") as child_tg: BashOperator(task_id="task_1", bash_command="nooooo_command") ``` I want the result to be "Parent TaskGroup!", but I get "DAG!". ``` [2023-05-30, 10:38:52 KST] {logging_mixin.py:137} INFO - DAG! ``` ### Solving the problem Add `_update_default_args` like [link](https://github.com/apache/airflow/blob/f6bb4746efbc6a94fa17b6c77b31d9fb17305ffc/airflow/models/baseoperator.py#L139) #### [airflow/utils/task_group.py](https://github.com/apache/airflow/blob/main/airflow/utils/task_group.py) ```python ... class TaskGroup(DAGNode): def __init__(...): ... self.default_args = copy.deepcopy(default_args or {}) # Call 'self._update_default_args' when exists parent_group if parent_group is not None: self._update_default_args(parent_group) ... ... # Update self.default_args def _update_default_args(parent_group: TaskGroup): if parent_group.default_args: self.default_args.update(parent_group.default_args) ... ``` ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31604
https://github.com/apache/airflow/pull/31608
efe8473385426bf8c1e23a845f1ba26482843855
9e8627faa71e9d2047816b291061c28585809508
"2023-05-30T01:51:38Z"
python
"2023-05-30T14:31:34Z"
closed
apache/airflow
https://github.com/apache/airflow
31,584
["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"]
BigQueryInsertJobOperator not exiting deferred state
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Using Apache Airflow 2.4.3 and apache airflow google provider 8.4 (also tried with 10.1.0). We have a query that in production should run for a long time, so we wanted to make the BigQueryInsertJobOperator deferrable. Making the operator deferrable runs the job, but the UI and triggerer process don't seem to be notified that the operator has finished. I have validated that the query is actually run as the data appears in the table, but the operator gets stuck in a deferred state. ### What you think should happen instead After the big query job is finished, the operator should exit it's deferred state. ### How to reproduce Skeleton of the code used ``` with DAG( dag_id="some_dag_id", schedule="@daily", catchup=False, start_date=pendulum.datetime(2023, 5, 8), ): extract_data = BigQueryInsertJobOperator( task_id="extract_data", impersonation_chain=GCP_ASTRO_TEAM_SA.get() params={"dst_table": _DST_TABLE, "lookback_days": _LOOKBACK_DAYS}, configuration={ "query": { "query": "{% include 'sql/sql_file.sql' %}", "useLegacySql": False, } }, outlets=DATASET, execution_timeout=timedelta(hours=2, minutes=30), deferrable=True, ) ``` ### Operating System Mac OS Ventura 13.3.1 ### Versions of Apache Airflow Providers apache airflow google provider 8.4 (also tried with 10.1.0). ### Deployment Official Apache Airflow Helm Chart ### Deployment details We're using astro and on local the Airflow environment is started using `astro dev start`. The issue appears when running the DAG locally. An entire other issue (may be unrelated) appears on Sandbox deployment. ### Anything else Every time the operator is marked as deferrable. I noticed this a few days ago (week started with Monday May 22nd 2023). ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31584
https://github.com/apache/airflow/pull/31591
fcbbf47864c251046de108aafdad394d66e1df23
81b85ebcbd241e1909793d7480aabc81777b225c
"2023-05-28T13:15:37Z"
python
"2023-07-29T07:33:56Z"
closed
apache/airflow
https://github.com/apache/airflow
31,573
["airflow/providers/hashicorp/_internal_client/vault_client.py", "tests/providers/hashicorp/_internal_client/test_vault_client.py", "tests/providers/hashicorp/hooks/test_vault.py"]
Vault AWS Login not working
### Apache Airflow version 2.6.1 ### What happened Trying to connect to Vault using `auth_type` = `aws_iam`. I am receiving the following error: `AttributeError: 'Client' has no attribute 'auth_aws_iam'`. Looking through [the code](https://github.com/apache/airflow/blob/main/airflow/providers/hashicorp/_internal_client/vault_client.py#L303), you are using `client.auth_aws_iam`, but the [HVAC docs](https://hvac.readthedocs.io/en/stable/usage/auth_methods/aws.html#caveats-for-non-default-aws-regions) use `client.auth.aws.iam_login`. You should also add support for `boto3.Session()` to use role based access. ### What you think should happen instead Airflow should authenticate to Vault using AWS auth. ### How to reproduce Setup a secrets backend of `airflow.providers.hashicorp.secrets.vault.VaultBackend` and set `auth_type` = `aws_iam`. It will error out saying that the client doesn't have an attribute named `auth_aws_iam`. ### Operating System Docker ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
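For reference, the hvac call path mentioned above can be exercised directly with role-based credentials from a `boto3.Session()`; the Vault URL and role name below are assumptions for illustration only.
```python
# Sketch only: AWS IAM login against Vault using hvac's documented call path
# and role-based credentials from boto3. URL and role name are assumptions.
import boto3
import hvac

session = boto3.Session()  # picks up instance-profile / assumed-role creds
creds = session.get_credentials().get_frozen_credentials()

client = hvac.Client(url="https://vault.example.com:8200")
client.auth.aws.iam_login(
    access_key=creds.access_key,
    secret_key=creds.secret_key,
    session_token=creds.token,
    role="airflow",                             # assumed Vault role name
    region=session.region_name or "us-east-1",
)
```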
https://github.com/apache/airflow/issues/31573
https://github.com/apache/airflow/pull/31593
ec18db170745a8b1df0bb75569cd22e69892b3e2
41ea700cbdce99cddd0f7b51b33b9fab51b993af
"2023-05-26T18:20:56Z"
python
"2023-05-30T12:25:36Z"
closed
apache/airflow
https://github.com/apache/airflow
31,551
["airflow/providers/amazon/aws/hooks/redshift_sql.py", "tests/providers/amazon/aws/hooks/test_redshift_sql.py"]
Redshift connection breaking change with IAM authentication
### Apache Airflow version 2.6.1 ### What happened This [PR](https://github.com/apache/airflow/pull/28187) introduced the get_iam_token method in `redshift_sql.py`. This is the breaking change as introduces the check for `iam` in extras, and it's set to False by default. Error log: ``` self = <airflow.providers.amazon.aws.hooks.redshift_sql.RedshiftSQLHook object at 0x7f29f7c208e0> conn = redshift_default def get_iam_token(self, conn: Connection) -> tuple[str, str, int]: """ Uses AWSHook to retrieve a temporary ***word to connect to Redshift. Port is required. If none is provided, default is used for each service """ port = conn.port or 5439 # Pull the custer-identifier from the beginning of the Redshift URL # ex. my-cluster.ccdre4hpd39h.us-east-1.redshift.amazonaws.com returns my-cluster > cluster_identifier = conn.extra_dejson.get("cluster_identifier", conn.host.split(".")[0]) E AttributeError: 'NoneType' object has no attribute 'split' .nox/test-3-8-airflow-2-6-0/lib/python3.8/site-packages/airflow/providers/amazon/aws/hooks/redshift_sql.py:107: AttributeError ``` ### What you think should happen instead It should have backward compatibility ### How to reproduce Run an example DAG for redshift with the AWS IAM profile given at hook initialization to retrieve a temporary password to connect to Amazon Redshift. ### Operating System mac-os ### Versions of Apache Airflow Providers _No response_ ### Deployment Astronomer ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
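The failure is `conn.host` being `None` when only `cluster_identifier` (or neither) is supplied. A defensive sketch of the lookup, not the provider's actual code:
```python
# Sketch only: derive the cluster identifier without assuming conn.host is set.
from airflow.exceptions import AirflowException


def get_cluster_identifier(conn) -> str:
    cluster_identifier = conn.extra_dejson.get("cluster_identifier")
    if not cluster_identifier and conn.host:
        # e.g. my-cluster.ccdre4hpd39h.us-east-1.redshift.amazonaws.com -> my-cluster
        cluster_identifier = conn.host.split(".")[0]
    if not cluster_identifier:
        raise AirflowException("Set cluster_identifier or host in the Redshift connection.")
    return cluster_identifier
```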
https://github.com/apache/airflow/issues/31551
https://github.com/apache/airflow/pull/31567
0f1cef27a5a19dd56e6b07ab0bf9868fb850421a
5b3382f63898e497d482870636ed156ce861afbc
"2023-05-25T17:07:51Z"
python
"2023-05-30T18:18:02Z"
closed
apache/airflow
https://github.com/apache/airflow
31,547
["airflow/www/views.py"]
Tag filter doesn't sort tags alphabetically
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Airflow v: 2.6.0 This has been an issue since 2.4.0 for us at least. We recently did a refactor of many of our 160+ DAGs and part of that process was to remove some tags that we didn't want anymore. Unfortunately, the old tags were still left behind when we deployed our new image with the updated DAGs (been a consistent thing across several Airflow versions for us). There is also the issue that the tag filter doesn't sort our tags alphabetically. I tried to truncate the dag_tag table, and that did help to get rid of the old tags. However, the sorting issue remains. Example: ![image](https://github.com/apache/airflow/assets/102953522/a43194a6-90f3-40dd-887e-fdfad043f200) On one of our dev environments, we have just about 10 DAGs with a similar sorting problem, and the dag_tag table had 18 rows. I took a backup of it and truncated the dag_tag table, which was almost instantly refilled (I guess logs are DEBUG level on that, so I saw nothing). This did not initially fix the sorting problem, but after a couple of truncates, things got weird, and all the tags were sorted as expected, and the row count in the dag_tag table was now just 15, so 3 rows were removed in all. We also added a new DAG in there with a tag "arjun", which also got listed first - so all sorted on that environment. Summary: 1. Truncating of the dag_tag table got rid of the old tags that we no longer have in our DAGs. 2. The tags are still sorted incorrectly in the filter (see image). It seems that the logic here is contained in `www/static/js/dags.js.` I am willing to submit a PR if I can get some guidance :) ### What you think should happen instead _No response_ ### How to reproduce N/A ### Operating System debian ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
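On the sorting half of the report, the behaviour described matches relying on database insertion order; a tiny sketch of case-insensitive sorting before rendering the filter (illustrative only):
```python
# Sketch: sort tag names case-insensitively before building the filter options.
tag_names = ["ETL", "arjun", "Data", "billing"]
sorted_tags = sorted(tag_names, key=str.casefold)
print(sorted_tags)  # ['arjun', 'billing', 'Data', 'ETL']
```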
https://github.com/apache/airflow/issues/31547
https://github.com/apache/airflow/pull/31553
6f86b6cd070097dafca196841c82de91faa882f4
24e52f92bd9305bf534c411f9455460060515ea7
"2023-05-25T16:08:43Z"
python
"2023-05-26T16:31:37Z"
closed
apache/airflow
https://github.com/apache/airflow
31,526
["airflow/models/skipmixin.py", "airflow/operators/python.py", "airflow/operators/subdag.py", "tests/operators/test_python.py", "tests/operators/test_subdag_operator.py"]
Short circuit task in expanded task group fails when it returns false
### Apache Airflow version 2.6.1 ### What happened I have a short circuit task which is in a task group that is expanded. The task work correctly when it returns true but the task fails when it returns false with the following error: ``` sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) insert or update on table "xcom" violates foreign key constraint "xcom_task_instance_fkey" DETAIL: Key (dag_id, task_id, run_id, map_index)=(pipeline_output_to_s3, transfer_output_file.already_in_manifest_short_circuit, manual__2023-05-24T20:21:35.420606+00:00, -1) is not present in table "task_instance". ``` It looks like it sets the map-index to -1 when false is returned which is causing the issue. If one task fails this way in the the task group, all other mapped tasks fail as well, even if the short circuit returns true. When this error occurs, all subsequent DAG runs will be stuck indefinitely in the running state unless the DAG is deleted. ### What you think should happen instead Returning false from the short circuit operator should skip downstream tasks without affecting other task groups / map indexes or subsequent DAG runs. ### How to reproduce include a short circuit operator in a mapped task group and have it return false. ### Operating System Red Hat Enterprise Linux 7 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31526
https://github.com/apache/airflow/pull/31541
c356e4fc22abc77f05aa136700094a882f2ca8c0
e2da3151d49dae636cb6901f3d3e124a49cbf514
"2023-05-24T20:37:27Z"
python
"2023-05-30T10:42:58Z"
closed
apache/airflow
https://github.com/apache/airflow
31,522
["airflow/api/common/airflow_health.py", "airflow/api_connexion/endpoints/health_endpoint.py", "airflow/www/views.py", "tests/api/__init__.py", "tests/api/common/__init__.py", "tests/api/common/test_airflow_health.py"]
`/health` endpoint missed when adding triggerer health status reporting
### Apache Airflow version main (development) ### What happened https://github.com/apache/airflow/pull/27755 added the triggerer to the rest api health endpoint, but not the main one served on `/health`. ### What you think should happen instead As documented [here](https://airflow.apache.org/docs/apache-airflow/2.6.1/administration-and-deployment/logging-monitoring/check-health.html#webserver-health-check-endpoint), the `/health` endpoint should include triggerer info like shown on `/api/v1/health`. ### How to reproduce Compare `/api/v1/health` and `/health` responses. ### Operating System mac os ### Versions of Apache Airflow Providers _No response_ ### Deployment Virtualenv installation ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31522
https://github.com/apache/airflow/pull/31529
afa9ead4cea767dfc4b43e6f301e6204f7521e3f
f048aba47e079e0c81417170a5ac582ed00595c4
"2023-05-24T20:08:34Z"
python
"2023-05-26T20:22:10Z"
closed
apache/airflow
https://github.com/apache/airflow
31,509
["airflow/cli/commands/user_command.py"]
Unable to delete user via CLI
### Apache Airflow version 2.6.1 ### What happened I am unable to delete users via the "delete command". I am trying to create a new user delete the default admin user. so I tried running the command `airflow users delete -u admin`. Running this command gave the following error output: ``` Feature not implemented,tasks route disabled /usr/local/lib/python3.10/site-packages/flask_limiter/extension.py:293 UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend. /usr/local/lib/python3.10/site-packages/astronomer/airflow/version_check/update_checks.py:440 UserWarning: The setup method 'app_context_processor' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently. Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it. This warning will become an exception in Flask 2.3. /usr/local/lib/python3.10/site-packages/flask/blueprints.py:673 UserWarning: The setup method 'record_once' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently. Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it. This warning will become an exception in Flask 2.3. /usr/local/lib/python3.10/site-packages/flask/blueprints.py:321 UserWarning: The setup method 'record' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently. Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it. This warning will become an exception in Flask 2.3. /usr/local/lib/python3.10/site-packages/airflow/www/fab_security/sqla/manager.py:151 SAWarning: Object of type <Role> not in session, delete operation along 'User.roles' won't proceed [2023-05-24T12:34:19.438+0000] {manager.py:154} ERROR - Remove Register User Error: (psycopg2.errors.ForeignKeyViolation) update or delete on table "ab_user" violates foreign key constraint "ab_user_role_user_id_fkey" on table "ab_user_role" DETAIL: Key (id)=(4) is still referenced from table "ab_user_role". [SQL: DELETE FROM ab_user WHERE ab_user.id = %(id)s] [parameters: {'id': 4}] (Background on this error at: https://sqlalche.me/e/14/gkpj) Failed to delete user ``` Deleting via the UI works fine. The error also occurs for users that have a different role, such as Viewer. ### What you think should happen instead No error should occur and the specified user should be deleted. ### How to reproduce Run a Dag with a task that is a BashOperator that deletes the user e.g.: ```python remove_default_admin_user_task = BashOperator(task_id="remove_default_admin_user", bash_command="airflow users delete -u admin") ``` ### Operating System Docker containers run in Debian 10 ### Versions of Apache Airflow Providers Astronomer ### Deployment Astronomer ### Deployment details I use Astronomer 8.0.0 ### Anything else Always. It also occurs when ran inside the webserver docker container or when ran via the astro CLI with the command `astro dev run airflow users delete -u admin`. ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! 
### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
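The ForeignKeyViolation indicates the user row is deleted while `ab_user_role` rows still reference it. A hedged sketch of the workaround idea using Flask-AppBuilder's security manager (not necessarily how the CLI command fixes it):
```python
# Sketch only: detach roles before deleting so ab_user_role rows are removed
# first and the foreign key constraint is satisfied.
def delete_user(security_manager, username: str) -> bool:
    user = security_manager.find_user(username=username)
    if user is None:
        return False
    user.roles = []                     # drops the ab_user_role associations
    security_manager.update_user(user)
    return security_manager.del_register_user(user)
```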
https://github.com/apache/airflow/issues/31509
https://github.com/apache/airflow/pull/31539
0fd42ff015be02d1a6a6c2e1a080f8267194b3a5
3ec66bb7cc686d060ff728bb6bf4d4e70e387ae3
"2023-05-24T12:40:00Z"
python
"2023-05-25T19:45:17Z"
closed
apache/airflow
https://github.com/apache/airflow
31,499
["airflow/providers/databricks/operators/databricks_sql.py", "tests/providers/databricks/operators/test_databricks_sql.py"]
XCom - Attribute Error when serializing output of `Merge Into` databricks sql command.
### Apache Airflow version 2.6.1 ### What happened After upgrading from Airflow 2.5.3 to 2.6.1 - the dag started to fail and it's related to XCom serialization. I noticed that something has changed in regards to serializing XCom:``` key | Value Version 2.5.3 | Value Version 2.6.1 | result -- | -- | -- | -- return_value | [[['Result', 'string', None, None, None, None, None]], []] | (['(num_affected_rows,bigint,None,None,None,None,None)', '(num_inserted_rows,bigint,None,None,None,None,None)'],[]) | ✅ return_value | [[['Result', 'string', None, None, None, None, None]], []] | (['(Result,string,None,None,None,None,None)'],[]) | ✅ return_value | [[['num_affected_rows', 'bigint', None, None, None, None, None], ['num_updated_rows', 'bigint', None, None, None, None, None], ['num_deleted_rows', 'bigint', None, None, None, None, None], ['num_inserted_rows', 'bigint', None, None, None, None, None]], [[1442, 605, 0, 837]]] | `AttributeError: __name__. Did you mean: '__ne__'?` | ❌ Query syntax that procuded an error: MERGE INTO https://docs.databricks.com/sql/language-manual/delta-merge-into.html Stacktrace included below: ``` [2023-05-24, 01:12:43 UTC] {taskinstance.py:1824} ERROR - Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper return func(*args, **kwargs) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2354, in xcom_push XCom.set( File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper return func(*args, **kwargs) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 237, in set value = cls.serialize_value( File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 632, in serialize_value return json.dumps(value, cls=XComEncoder).encode("UTF-8") File "/usr/local/lib/python3.10/json/__init__.py", line 238, in dumps **kw).encode(obj) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 102, in encode o = self.default(o) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 91, in default return serialize(o) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 144, in serialize return encode(classname, version, serialize(data, depth + 1)) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize return [serialize(d, depth + 1) for d in o] File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp> return [serialize(d, depth + 1) for d in o] File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize return [serialize(d, depth + 1) for d in o] File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp> return [serialize(d, depth + 1) for d in o] File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 132, in serialize qn = qualname(o) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/module_loading.py", line 47, in qualname return f"{o.__module__}.{o.__name__}" File "/home/airflow/.local/lib/python3.10/site-packages/databricks/sql/types.py", line 161, in __getattr__ raise AttributeError(item) AttributeError: __name__. Did you mean: '__ne__'? 
``` ### What you think should happen instead Serialization should finish without an exception raised. ### How to reproduce 1. Dag file with declared operator: ``` task = DatabricksSqlOperator( task_id="task", databricks_conn_id="databricks_conn_id", sql_endpoint_name="name", sql="file.sql" ) ``` file.sql ``` MERGE INTO table_name ON condition WHEN MATCHED THEN UPDATE WHEN NOT MATCHED THEN INSERT ``` https://docs.databricks.com/sql/language-manual/delta-merge-into.html Query output is a table num_affected_rows | num_updated_rows | num_deleted_rows | num_inserted_rows -- | -- | -- | -- 0 | 0 | 0 | 0 EDIT: It also happens for a select command ### Operating System Debian GNU/Linux 11 (bullseye) ### Versions of Apache Airflow Providers ``` apache-airflow-providers-amazon==8.0.0 apache-airflow-providers-cncf-kubernetes==6.1.0 apache-airflow-providers-common-sql==1.4.0 apache-airflow-providers-databricks==4.1.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==10.0.0 apache-airflow-providers-hashicorp==3.3.1 apache-airflow-providers-http==4.3.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-mysql==5.0.0 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-sftp==4.2.4 apache-airflow-providers-sqlite==3.3.2 apache-airflow-providers-ssh==3.6.0 ``` ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
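A minimal sketch of the failure mode in the traceback above, plus a defensive variant of `qualname` that falls back to the object's type. `FakeRow` is a hypothetical stand-in for `databricks.sql.types.Row`; this only illustrates the problem and is not the fix merged in the linked PR.

```python
# FakeRow is a hypothetical stand-in for databricks.sql.types.Row: its
# __getattr__ raises AttributeError for any missing attribute, including
# dunder lookups such as __name__, which is exactly what trips qualname().
class FakeRow:
    def __getattr__(self, item):
        raise AttributeError(item)


def safe_qualname(o) -> str:
    # Defensive variant of airflow.utils.module_loading.qualname: when the
    # object is not a class, derive the name from its type instead of
    # touching o.__name__ directly.
    cls = o if isinstance(o, type) else type(o)
    return f"{cls.__module__}.{cls.__qualname__}"


print(safe_qualname(FakeRow()))  # "__main__.FakeRow" instead of AttributeError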
https://github.com/apache/airflow/issues/31499
https://github.com/apache/airflow/pull/31780
1aa9e803c26b8e86ab053cfe760153fc286e177c
049c6184b730a7ede41db9406654f054ddc8cc5f
"2023-05-24T08:32:41Z"
python
"2023-06-08T10:49:33Z"
closed
apache/airflow
https://github.com/apache/airflow
31,480
["airflow/providers/amazon/aws/links/emr.py", "tests/providers/amazon/aws/links/test_links.py"]
Missing LogUri from emr describe-cluster API when executing EmrCreateJobFlowOperator
### Apache Airflow version main (development) ### What happened Encounter the following error when executing `EmrCreateJobFlowOperator` ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/airflow/providers/amazon/aws/operators/emr.py", line 695, in execute log_uri=get_log_uri(emr_client=self._emr_hook.conn, job_flow_id=self._job_flow_id), File "/usr/local/lib/python3.10/site-packages/airflow/providers/amazon/aws/links/emr.py", line 61, in get_log_uri log_uri = S3Hook.parse_s3_url(response["Cluster"]["LogUri"]) KeyError: 'LogUri' ``` According to the [this document](https://docs.aws.amazon.com/cli/latest/reference/emr/describe-cluster.html), it seems we might not always be able to get `["Cluster"]["LogUri"]` and we encounter errors after the release in https://github.com/apache/airflow/issues/31322. ### What you think should happen instead The `EmrCreateJobFlowOperator` should finish execution without error. ### How to reproduce 1. `git clone github.com/apache/airflow/` 2. `cd airflow` 3. `git checkout c082aec089405ed0399cfee548011b0520be0011` (the main branch when I found this issue) 4. Add the following DAG to `files/dags/` and name it as `example_emr.py` ```python import os from datetime import datetime, timedelta from airflow import DAG from airflow.providers.amazon.aws.operators.emr import EmrCreateJobFlowOperator, EmrTerminateJobFlowOperator JOB_FLOW_OVERRIDES = { "Name": "example_emr_sensor_cluster", "ReleaseLabel": "emr-5.29.0", "Applications": [{"Name": "Spark"}], "Instances": { "InstanceGroups": [ { "Name": "Primary node", "Market": "ON_DEMAND", "InstanceRole": "MASTER", "InstanceType": "m4.large", "InstanceCount": 1, }, ], "KeepJobFlowAliveWhenNoSteps": False, "TerminationProtected": False, }, "JobFlowRole": "EMR_EC2_DefaultRole" "ServiceRole": "EMR_DefaultRole", } DEFAULT_ARGS = { "execution_timeout": timedelta(hours=6), "retries": 2, "retry_delay": 60, } with DAG( dag_id="example_emr_sensor", schedule=None, start_date=datetime(2022, 1, 1), default_args=DEFAULT_ARGS, catchup=False, ) as dag: create_job_flow = EmrCreateJobFlowOperator( task_id="create_job_flow", job_flow_overrides=JOB_FLOW_OVERRIDES, aws_conn_id="aws_default", ) remove_job_flow = EmrTerminateJobFlowOperator( task_id="remove_job_flow", job_flow_id=create_job_flow.output, aws_conn_id="aws_default", trigger_rule="all_done", ) create_job_flow >> remove_job_flow ``` 5. `breeze --python 3.8 --backend sqlite start-airflow` 6. Trigger the DAG from web UI ### Operating System mac OS 13.4 ### Versions of Apache Airflow Providers _No response_ ### Deployment Astronomer ### Deployment details _No response_ ### Anything else https://github.com/apache/airflow/issues/31322#issuecomment-1556876252 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
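A minimal sketch of defensively reading `LogUri` from the `describe_cluster` response, since the field is simply absent when the cluster was created without a log destination. The helper shape and client creation are assumptions for illustration, not the code in the linked PR.

```python
from typing import Optional

import boto3


def get_log_uri(job_flow_id: str) -> Optional[str]:
    # describe_cluster only returns LogUri if the cluster was created with a
    # log destination, so use .get() instead of ["LogUri"] to avoid KeyError.
    emr_client = boto3.client("emr")
    cluster = emr_client.describe_cluster(ClusterId=job_flow_id)["Cluster"]
    return cluster.get("LogUri")
```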
https://github.com/apache/airflow/issues/31480
https://github.com/apache/airflow/pull/31482
27001a23718d6b8b5118eb130be84713af9a4477
a8c45b088e088a5f1d9c924f9efb660c80c0ce12
"2023-05-23T14:52:59Z"
python
"2023-05-31T10:38:44Z"
closed
apache/airflow
https://github.com/apache/airflow
31,476
["airflow/cli/commands/kubernetes_command.py", "tests/cli/commands/test_kubernetes_command.py"]
cleanup-pod CLI command fails due to incorrect host
### Apache Airflow version 2.6.1 ### What happened When running `airflow kubernetes cleanup-pods`, the API call to delete a pod fails. A snippet of the log is below: ``` urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /api/v1/namespaces/airflow/pods/my-task-avd79fq1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f52f9aebfd0>: Failed to establish a new connection: [Errno 111] Connection refused')) ``` [The Kubernetes client provisioned in _delete_pod](https://github.com/apache/airflow/blob/main/airflow/cli/commands/kubernetes_command.py#L151) incorrectly has the host as `http:localhost`. On the scheduler pod if I start a Python environment I can see that the configuration differs from the `get_kube_client()` configuration: ``` >>> get_kube_client().api_client.configuration.host 'https://172.20.0.1:443' >>> client.CoreV1Api().api_client.configuration.host 'http://localhost/' ``` On Airflow 2.5.3 these two clients have the same configuration. It's possible I have some mistake in my configuration but I'm not sure what it could be. The above fails on 2.6.0 also. ### What you think should happen instead Pods should clean up without error ### How to reproduce Run the following from a Kubernetes deployment of Airflow: ```python from airflow.kubernetes.kube_client import get_kube_client from kubernetes import client print(get_kube_client().api_client.configuration.host) print(client.CoreV1Api().api_client.configuration.host) ``` Alternatively run `airflow kubernetes cleanup-pods` with pods available for cleanup ### Operating System Debian GNU/Linux 11 (bullseye) ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details Using `in_cluster` configuration for KubernetesExecutor ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
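A minimal sketch of the point the report makes: delete pods through the client returned by `get_kube_client()`, which loads Airflow's in-cluster/config-file settings, instead of an unconfigured `client.CoreV1Api()`. The helper name and error handling are illustrative assumptions.

```python
from airflow.kubernetes.kube_client import get_kube_client
from kubernetes.client.rest import ApiException


def delete_pod(name: str, namespace: str) -> None:
    # get_kube_client() honours Airflow's kubernetes settings (in_cluster,
    # config_file), so the API host is the real cluster, not http://localhost.
    kube_client = get_kube_client()
    try:
        kube_client.delete_namespaced_pod(name=name, namespace=namespace)
    except ApiException as e:
        if e.status != 404:  # a pod that is already gone is fine during cleanup
            raise
```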
https://github.com/apache/airflow/issues/31476
https://github.com/apache/airflow/pull/31477
739e6b5d775412f987a3ff5fb71c51fbb7051a89
adf0cae48ad4e87612c00fe9facffca9b5728e7d
"2023-05-23T14:14:39Z"
python
"2023-05-24T09:45:55Z"
closed
apache/airflow
https://github.com/apache/airflow
31,460
["airflow/models/connection.py", "tests/models/test_connection.py"]
Add capability in Airflow connections to validate host
### Apache Airflow version 2.6.1 ### What happened While creating connections in airflow, the form doesn't check for the correctness of the format of the host provided. For instance, we can proceed providing something like this which is not a valid url: `spark://k8s://100.68.0.1:443?deploy-mode=cluster`. It wont instantly fail but will return faulty hosts and other details if called later. Motivation: https://github.com/apache/airflow/pull/31376#discussion_r1200112106 ### What you think should happen instead The Connection form can have a validator which should check for these scenarios and report if there is an issue to save developers and users time later. ### How to reproduce 1. Go to airflow connections form 2. Fill in connection host as: `spark://k8s://100.68.0.1:443?deploy-mode=cluster`, other details can be anything 3. Create the connection 4. Run `airflow connections get <name>` 5. The host and schema will be wrong ### Operating System Macos ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
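A minimal sketch of the kind of check the request asks for, using `urllib.parse.urlsplit`. `host_looks_valid` is a hypothetical helper and the validation rules in the eventual fix may well differ.

```python
from urllib.parse import urlsplit


def host_looks_valid(host: str) -> bool:
    parts = urlsplit(host)
    # Plain hostnames (no scheme) parse entirely into .path and are left alone.
    # For URI-style hosts, a path that starts with "//" means a second, nested
    # "scheme://" slipped into the value, e.g.
    # "spark://k8s://100.68.0.1:443?deploy-mode=cluster" -> path "//100.68.0.1:443".
    return not (parts.scheme and parts.path.startswith("//"))


print(host_looks_valid("spark://k8s://100.68.0.1:443?deploy-mode=cluster"))  # False
print(host_looks_valid("spark://100.68.0.1:443"))  # True
```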
https://github.com/apache/airflow/issues/31460
https://github.com/apache/airflow/pull/31465
232771869030d708c57f840aea735b18bd4bffb2
0560881f0eaef9c583b11e937bf1f79d13e5ac7c
"2023-05-22T09:50:46Z"
python
"2023-06-19T09:32:41Z"
closed
apache/airflow
https://github.com/apache/airflow
31,440
["airflow/example_dags/example_params_ui_tutorial.py", "airflow/www/static/js/trigger.js", "airflow/www/templates/airflow/trigger.html", "docs/apache-airflow/core-concepts/params.rst"]
Multi-Select, Text Proposals and Value Labels for Trigger Forms
### Description After the release for Airflow 2.6.0 I was integrating some forms into our setup and was missing some options selections - and some nice features to make selections user friendly. I'd like do contribute some few features into the user forms: * A select box option where proposals are made but user is not limited to hard `enum` list (`enum` is restricting user input to only the options provided) * A multi-option pick list (because sometimes a single selection is just not enough * Labels so that technical values used to control the DAG are different from what is presented as option to user ### Use case/motivation After the initial release of UI trigger forms, add more features (incrementially) ### Related issues Relates or potentially has a conflict with #31299, so this should be merged before. ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
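A hedged sketch of the kind of `Param` definitions the proposal is about. The `values_display` and `examples` keys mirror the params UI tutorial DAG listed under updated files, but the exact schema keys are assumptions here, not a documented contract.

```python
from airflow.models.param import Param

params = {
    # hard-restricted single choice, technical values with human-readable labels
    "storage": Param(
        "s3",
        type="string",
        enum=["s3", "gcs"],
        values_display={"s3": "Amazon S3", "gcs": "Google Cloud Storage"},
    ),
    # free text with proposals instead of a hard enum
    "environment": Param("dev", type="string", examples=["dev", "staging", "prod"]),
    # multi-option pick list: an array whose items come from a suggested set
    "regions": Param(["eu-west-1"], type="array", examples=["eu-west-1", "us-east-1"]),
}
```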
https://github.com/apache/airflow/issues/31440
https://github.com/apache/airflow/pull/31441
1ac35e710afc6cf5ea4466714b18efacdc44e1f7
c25251cde620481592392e5f82f9aa8a259a2f06
"2023-05-20T15:31:28Z"
python
"2023-05-22T14:33:03Z"
closed
apache/airflow
https://github.com/apache/airflow
31,432
["airflow/providers/google/cloud/operators/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"]
`BigQueryGetDataOperator`'s query job is bugged in deferrable mode
### Apache Airflow version main (development) ### What happened 1. When not providing `project_id` to `BigQueryGetDataOperator` in deferrable mode (`project_id=None`), the query generated by `generate_query` method is bugged, i.e.,: ````sql from `None.DATASET.TABLE_ID` limit ... ```` 2. `as_dict` param does not work `BigQueryGetDataOperator`. ### What you think should happen instead 1. When `project_id` is `None` - it should be removed from the query along with the trailing dot, i.e.,: ````sql from `DATASET.TABLE_ID` limit ... ```` 2. `as_dict` should be added to the serialization method of `BigQueryGetDataTrigger`. ### How to reproduce 1. Create a DAG file with `BigQueryGetDataOperator` defined as follows: ```python BigQueryGetDataOperator( task_id="bq_get_data_op", # project_id="PROJECT_ID", <-- Not provided dataset_id="DATASET", table_id="TABLE", use_legacy_sql=False, deferrable=True ) ```` 2. 1. Create a DAG file with `BigQueryGetDataOperator` defined as follows: ```python BigQueryGetDataOperator( task_id="bq_get_data_op", project_id="PROJECT_ID", dataset_id="DATASET", table_id="TABLE", use_legacy_sql=False, deferrable=True, as_dict=True ) ```` ### Operating System Debian ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else The `generate_query` method is not unit tested (which would have prevented it in the first place) - will be better to add one. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
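A minimal sketch of building the fully qualified table name so a missing `project_id` does not leave a literal `None.` prefix. This hypothetical helper only illustrates point 1; the `as_dict` gap in point 2 is a matter of including that field in the trigger's serialized arguments.

```python
from typing import Optional


def generate_query(
    dataset_id: str,
    table_id: str,
    max_results: int,
    project_id: Optional[str] = None,
    selected_fields: str = "*",
) -> str:
    # Join only the parts that are set, so the prefix is dropped cleanly
    # instead of rendering as `None.DATASET.TABLE`.
    table_ref = ".".join(filter(None, (project_id, dataset_id, table_id)))
    return f"select {selected_fields} from `{table_ref}` limit {max_results}"


print(generate_query("DATASET", "TABLE", 10))             # select * from `DATASET.TABLE` limit 10
print(generate_query("DATASET", "TABLE", 10, "PROJECT"))  # select * from `PROJECT.DATASET.TABLE` limit 10
```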
https://github.com/apache/airflow/issues/31432
https://github.com/apache/airflow/pull/31433
0e8bff9c4ec837d086dbe49b3d583a8d23f49e0e
0d6e626b050a860462224ad64dc5e9831fe8624d
"2023-05-19T18:20:45Z"
python
"2023-05-22T18:20:28Z"
closed
apache/airflow
https://github.com/apache/airflow
31,431
["airflow/migrations/versions/0125_2_6_2_add_onupdate_cascade_to_taskmap.py", "airflow/migrations/versions/0126_2_7_0_add_index_to_task_instance_table.py", "airflow/models/taskmap.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst", "tests/models/test_taskinstance.py"]
Clearing an earlier run of a task flow function after the task was changed to a mapped task crashes the scheduler
### Apache Airflow version main (development) ### What happened Clearing a task flow function executed earlier with task changed to mapped task crashes scheduler. It seems TaskMap stored has a foreign key reference by map_index which needs to be cleared before execution. ``` airflow scheduler /home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/cli_config.py:1001 DeprecationWarning: The namespace option in [kubernetes] has been moved to the namespace option in [kubernetes_executor] - the old setting has been used, but please update your config. ____________ _____________ ____ |__( )_________ __/__ /________ __ ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / / ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ / _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/ /home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:196 DeprecationWarning: The '[celery] task_adoption_timeout' config option is deprecated. Please update your config to use '[scheduler] task_queued_timeout' instead. /home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:201 DeprecationWarning: The worker_pods_pending_timeout option in [kubernetes] has been moved to the worker_pods_pending_timeout option in [kubernetes_executor] - the old setting has been used, but please update your config. /home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:206 DeprecationWarning: The '[kubernetes_executor] worker_pods_pending_timeout' config option is deprecated. Please update your config to use '[scheduler] task_queued_timeout' instead. [2023-05-19T23:41:07.907+0530] {executor_loader.py:114} INFO - Loaded executor: SequentialExecutor [2023-05-19 23:41:07 +0530] [15527] [INFO] Starting gunicorn 20.1.0 [2023-05-19 23:41:07 +0530] [15527] [INFO] Listening at: http://[::]:8793 (15527) [2023-05-19 23:41:07 +0530] [15527] [INFO] Using worker: sync [2023-05-19 23:41:07 +0530] [15528] [INFO] Booting worker with pid: 15528 [2023-05-19T23:41:07.952+0530] {scheduler_job_runner.py:789} INFO - Starting the scheduler [2023-05-19T23:41:07.952+0530] {scheduler_job_runner.py:796} INFO - Processing each file at most -1 times [2023-05-19T23:41:07.954+0530] {scheduler_job_runner.py:1542} INFO - Resetting orphaned tasks for active dag runs [2023-05-19 23:41:07 +0530] [15529] [INFO] Booting worker with pid: 15529 [2023-05-19T23:41:08.567+0530] {scheduler_job_runner.py:853} ERROR - Exception when executing SchedulerJob._run_scheduler_loop Traceback (most recent call last): File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 836, in _execute self._run_scheduler_loop() File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 970, in _run_scheduler_loop num_queued_tis = self._do_scheduling(session) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1052, in _do_scheduling callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 90, in wrapped_function for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs): File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __iter__ do = 
self.iter(retry_state=retry_state) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 349, in iter return fut.result() File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result return self.__get_result() File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 99, in wrapped_function return func(*args, **kwargs) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1347, in _schedule_all_dag_runs callback_tuples = [(run, self._schedule_dag_run(run, session=session)) for run in dag_runs] File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2811, in __iter__ return self._iter().__iter__() File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2818, in _iter result = self.session.execute( File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1669, in execute conn = self._connection_for_bind(bind, close_with_result=True) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1519, in _connection_for_bind return self._transaction._connection_for_bind( File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind self._assert_active() File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active raise sa_exc.PendingRollbackError( sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_map_task_instance_fkey" on table "task_map" DETAIL: Key (dag_id, task_id, run_id, map_index)=(bash_simple, get_command, manual__2023-05-18T13:54:01.345016+00:00, -1) is still referenced from table "task_map". 
[SQL: UPDATE task_instance SET map_index=%(map_index)s, updated_at=%(updated_at)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s] [parameters: {'map_index': 0, 'updated_at': datetime.datetime(2023, 5, 19, 18, 11, 8, 90512, tzinfo=Timezone('UTC')), 'task_instance_dag_id': 'bash_simple', 'task_instance_task_id': 'get_command', 'task_instance_run_id': 'manual__2023-05-18T13:54:01.345016+00:00', 'task_instance_map_index': -1}] (Background on this error at: http://sqlalche.me/e/14/gkpj) (Background on this error at: http://sqlalche.me/e/14/7s2a) [2023-05-19T23:41:08.572+0530] {scheduler_job_runner.py:865} INFO - Exited execute loop Traceback (most recent call last): File "/home/karthikeyan/stuff/python/airflow/.env/bin/airflow", line 8, in <module> sys.exit(main()) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/__main__.py", line 48, in main args.func(args) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/cli_config.py", line 51, in command return func(*args, **kwargs) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/cli.py", line 112, in wrapper return f(*args, **kwargs) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 77, in scheduler _run_scheduler_job(job_runner, skip_serve_logs=args.skip_serve_logs) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 42, in _run_scheduler_job run_job(job=job_runner.job, execute_callable=job_runner._execute) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/session.py", line 76, in wrapper return func(*args, session=session, **kwargs) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/job.py", line 284, in run_job return execute_job(job, execute_callable=execute_callable) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/job.py", line 313, in execute_job ret = execute_callable() File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 836, in _execute self._run_scheduler_loop() File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 970, in _run_scheduler_loop num_queued_tis = self._do_scheduling(session) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1052, in _do_scheduling callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 90, in wrapped_function for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs): File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __iter__ do = self.iter(retry_state=retry_state) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 349, in iter return fut.result() File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result return self.__get_result() File 
"/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 99, in wrapped_function return func(*args, **kwargs) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1347, in _schedule_all_dag_runs callback_tuples = [(run, self._schedule_dag_run(run, session=session)) for run in dag_runs] File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2811, in __iter__ return self._iter().__iter__() File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2818, in _iter result = self.session.execute( File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1669, in execute conn = self._connection_for_bind(bind, close_with_result=True) File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1519, in _connection_for_bind return self._transaction._connection_for_bind( File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind self._assert_active() File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active raise sa_exc.PendingRollbackError( sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_map_task_instance_fkey" on table "task_map" DETAIL: Key (dag_id, task_id, run_id, map_index)=(bash_simple, get_command, manual__2023-05-18T13:54:01.345016+00:00, -1) is still referenced from table "task_map". [SQL: UPDATE task_instance SET map_index=%(map_index)s, updated_at=%(updated_at)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s] [parameters: {'map_index': 0, 'updated_at': datetime.datetime(2023, 5, 19, 18, 11, 8, 90512, tzinfo=Timezone('UTC')), 'task_instance_dag_id': 'bash_simple', 'task_instance_task_id': 'get_command', 'task_instance_run_id': 'manual__2023-05-18T13:54:01.345016+00:00', 'task_instance_map_index': -1}] (Background on this error at: http://sqlalche.me/e/14/gkpj) (Background on this error at: http://sqlalche.me/e/14/7s2a) ``` ### What you think should happen instead _No response_ ### How to reproduce 1. Create the dag with `command = get_command(1, 1)` and trigger a dagrun waiting for it to complete 2. Now change this to `command = get_command.partial(arg1=[1]).expand(arg2=[1, 2, 3, 4])` so that the task is now mapped. 3. Clear the existing task that causes the scheduler to crash. 
```python import datetime, time from airflow.operators.bash import BashOperator from airflow import DAG from airflow.decorators import task with DAG( dag_id="bash_simple", start_date=datetime.datetime(2022, 1, 1), schedule=None, catchup=False, ) as dag: @task def get_command(arg1, arg2): for i in range(10): time.sleep(1) print(i) return ["echo hello"] command = get_command(1, 1) # command = get_command.partial(arg1=[1]).expand(arg2=[1, 2, 3, 4]) t1 = BashOperator.partial(task_id="bash").expand(bash_command=command) if __name__ == "__main__": dag.test() ``` ### Operating System Ubuntu ### Versions of Apache Airflow Providers _No response_ ### Deployment Virtualenv installation ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
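The migration listed under updated files suggests the eventual fix adds `ON UPDATE CASCADE` to the `task_map` foreign key. The sketch below only illustrates the constraint problem by removing the stale `TaskMap` rows for a cleared task instance; the helper is hypothetical, not the merged change.

```python
from airflow.models.taskmap import TaskMap
from airflow.utils.session import create_session


def drop_stale_task_map_rows(dag_id: str, task_id: str, run_id: str) -> None:
    # task_map rows reference task_instance by (dag_id, task_id, run_id,
    # map_index); deleting them first lets the scheduler rewrite map_index
    # from -1 to 0..n without violating task_map_task_instance_fkey.
    with create_session() as session:
        session.query(TaskMap).filter(
            TaskMap.dag_id == dag_id,
            TaskMap.task_id == task_id,
            TaskMap.run_id == run_id,
        ).delete(synchronize_session=False)
```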
https://github.com/apache/airflow/issues/31431
https://github.com/apache/airflow/pull/31445
adf0cae48ad4e87612c00fe9facffca9b5728e7d
f6bb4746efbc6a94fa17b6c77b31d9fb17305ffc
"2023-05-19T18:12:39Z"
python
"2023-05-24T10:54:45Z"
closed
apache/airflow
https://github.com/apache/airflow
31,420
["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"]
Allow setting task states other than success/failed via the REST API
### Apache Airflow version main (development) ### What happened see also: https://github.com/apache/airflow/issues/25463 From the conversation there, it sounds like it's intended to be possible to set a task to "skipped" via REST API, but it's not. Instead the next best thing we have is marking as success & adding a note. ### What you think should happen instead I see no reason for users to not just be able to set tasks to "skipped". I could imagine reasons to avoid "queued", and some other more "internal" states, but skipped makes perfect sense to me for - tasks that failed and got fixed externally - tasks that failed but are now irrelevant because of a newer run in case you still want to be able to see those in the future (actually, a custom state would be even better) ### How to reproduce ```python # task is just a dictionary with the right values for below r = requests.patch( f"{base_url}/api/v1/dags/{task['dag_id']}/dagRuns/{task['dag_run_id']}/taskInstances/{task['task_id']}", json={ "new_state": "skipped", }, headers={"Authorization": token}, ) ``` -> r.json() gives `{'detail': "'skipped' is not one of ['success', 'failed'] - 'new_state'", 'status': 400, 'title': 'Bad Request', 'type': 'http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest'}` ### Operating System / ### Versions of Apache Airflow Providers / ### Deployment Astronomer ### Deployment details / ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31420
https://github.com/apache/airflow/pull/31421
233663046d5210359ce9f4db2fe3db4f5c38f6ee
fba6f86ed7e59c166d0cf7717f1734ae30ba4d9c
"2023-05-19T14:28:31Z"
python
"2023-06-08T20:57:17Z"
closed
apache/airflow
https://github.com/apache/airflow
31,409
["airflow/sensors/base.py", "tests/sensors/test_base.py"]
ZeroDivisionError in BaseSensorOperator with `exponential_backoff=True` and `poke_interval=1`
### Apache Airflow version 2.6.1 ### What happened Sensor is fired with an exception ZeroDivisionError, if set up `mode="reschedule"`, `exponential_backoff=True` and `poke_interval=1` ``` ERROR - Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.9/site-packages/airflow/sensors/base.py", line 244, in execute next_poke_interval = self._get_next_poke_interval(started_at, run_duration, try_number) File "/home/airflow/.local/lib/python3.9/site-packages/airflow/sensors/base.py", line 274, in _get_next_poke_interval modded_hash = min_backoff + run_hash % min_backoff ZeroDivisionError: integer division or modulo by zero ``` ### What you think should happen instead Throw an human-readable exception about corner values for `poke_interval` or allow to set up this value less then 2 (if it's `>=` 2 - rescheduling works fine.) ### How to reproduce ``` from airflow.sensors.base import BaseSensorOperator from airflow import DAG import datetime class TestSensor(BaseSensorOperator): def poke(self, context): return False with DAG("test_dag", start_date=datetime.datetime.today(), schedule=None): sensor = TestSensor(task_id="sensor", mode="reschedule", poke_interval=1, exponential_backoff=True, max_wait=5) ``` ### Operating System Debian GNU/Linux 10 (buster) ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
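A hedged illustration (not the exact `BaseSensorOperator` code) of why `poke_interval=1` crashes the exponential backoff, and how clamping the derived backoff to at least one second avoids the modulo by zero.

```python
def next_poke_interval(poke_interval: float, try_number: int, run_hash: int) -> int:
    # For the first reschedule, 2 ** (try_number - 2) is 0.5, so with
    # poke_interval=1 the truncated product becomes 0 and "% min_backoff"
    # raises ZeroDivisionError; clamping to >= 1 keeps the formula safe.
    min_backoff = max(int(poke_interval * (2 ** (try_number - 2))), 1)
    return min_backoff + run_hash % min_backoff


print(next_poke_interval(poke_interval=1, try_number=1, run_hash=12345))  # 1
```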
https://github.com/apache/airflow/issues/31409
https://github.com/apache/airflow/pull/31412
ba220b091c9fe9ba530533a71e88a9f5ca35d42d
a98621f4facabc207b4d6b6968e6863845e1f90f
"2023-05-19T09:28:27Z"
python
"2023-05-23T10:13:10Z"
closed
apache/airflow
https://github.com/apache/airflow
31,407
["airflow/jobs/scheduler_job_runner.py"]
Future DagRun occasionally triggered by a race condition when max_active_runs has reached its upper limit
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened The scheduler rarely triggers a DagRun to be executed in the future. Here are the conditions as I understand them. - max_active_runs is set and upper limit is reached - The preceding DagRun completes very slightly earlier than the following DagRun Details in "Anything else". ### What you think should happen instead DagRun should wait until scheduled ### How to reproduce I have confirmed reproduction in Airflow 2.2.2 with the following code. I reproduced it in my environment after running it for about half a day. ``` python import copy import logging import time from datetime import datetime, timedelta import pendulum from airflow import DAG, AirflowException from airflow.sensors.python import PythonSensor from airflow.utils import timezone logger = logging.getLogger(__name__) # very small min_file_process_interval may help to reproduce more. e.g. min_file_process_interval=3 def create_dag(interval): with DAG( dag_id=f"example_reproduce_{interval:0>2}", schedule_interval=f"*/{interval} * * * *", start_date=datetime(2021, 1, 1), catchup=False, max_active_runs=2, tags=["example_race_condition"], ) as dag: target_s = 10 def raise_if_future(context): now = timezone.utcnow() - timedelta(seconds=30) if context["data_interval_start"] > now: raise AirflowException("DagRun supposed to be triggered in the future triggered") def wait_sync(): now_dt = pendulum.now() if now_dt.minute % (interval * 2) == 0: # wait until target time to synchronize end time with the preceding job target_dt = copy.copy(now_dt).replace(second=target_s + 2) wait_sec = (target_dt - now_dt).total_seconds() logger.info(f"sleep {now_dt} -> {target_dt} in {wait_sec} seconds") if wait_sec > 0: time.sleep(wait_sec) return True PythonSensor( task_id="t2", python_callable=wait_sync, # To avoid getting stuck in SequentialExecutor, try to re-poke after the next job starts poke_interval=interval * 60 + target_s, mode="reschedule", pre_execute=raise_if_future, ) return dag for i in [1, 2]: globals()[i] = create_dag(i) ``` ### Operating System Amazon Linux 2 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details MWAA 2.2.2 ### Anything else The assumed flow and the associated actual query logs for the case max_active_runs=2 are shown below. **The assumed flow** 1. The first DagRun (DR1) starts 1. The subsequent DagRun (DR2) starts 1. DR2 completes; The scheduler set `next_dagrun_create_after=null` if max_active_runs is exceeded - https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L931 1. DR1 completes; The scheduler calls dag_model.calculate_dagrun_date_fields() in SchedulerJobRunner._schedule_dag_run(). The session is NOT committed yet - note: the result of `calculate_dagrun_date_fields` is the old DR1-based value from `dag.get_run_data_interval(DR"2")`. - https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L1017 1. DagFileProcessorProcess modifies next_dagrun_create_after - note: the dag record fetched in step 4 are not locked, so the `Processor` can select it and update it. - https://github.com/apache/airflow/blob/2.2.2/airflow/dag_processing/processor.py#L646 1. 
The scheduler reflects the calculation result of DR1 to DB by `guard.commit()` - note: Only the `next_dagrun_create_after` column set to null in step 2 is updated because sqlalchemy only updates the difference between the record retrieved in step 4 and the calculation result - https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L795 1. The scheduler triggers a future DagRun because the current time satisfies next_dagrun_create_after updated in step 6 **The associated query log** ``` sql bb55c5b0bdce: /# grep "future_dagrun_00" /var/lib/postgresql/data/log/postgresql-2023-03-08_210056.log | grep "next_dagrun" 2023-03-08 22: 00: 01.678 UTC[57378] LOG: statement: UPDATE dag SET next_dagrun_create_after = NULL WHERE dag.dag_id = 'future_dagrun_0' # set in step 3 2023-03-08 22: 00: 08.162 UTC[57472] LOG: statement: UPDATE dag SET last_parsed_time = '2023-03-08T22:00:07.683786+00:00':: timestamptz, next_dagrun = '2023-03-08T22:00:00+00:00':: timestamptz, next_dagrun_data_interval_start = '2023-03-08T22:00:00+00:00':: timestamptz, next_dagrun_data_interval_end = '2023-03-08T23:00:00+00:00':: timestamptz, next_dagrun_create_after = '2023-03-08T23:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 5 2023-03-08 22: 00: 09.137 UTC[57475] LOG: statement: UPDATE dag SET next_dagrun_create_after = '2023-03-08T22:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 6 2023-03-08 22: 00: 10.418 UTC[57479] LOG: statement: UPDATE dag SET next_dagrun = '2023-03-08T23:00:00+00:00':: timestamptz, next_dagrun_data_interval_start = '2023-03-08T23:00:00+00:00':: timestamptz, next_dagrun_data_interval_end = '2023-03-09T00:00:00+00:00':: timestamptz, next_dagrun_create_after = '2023-03-09T00:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 7 ``` From what I've read of the relevant code in the latest v2.6.1, I believe the problem continues. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
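A hedged sketch of the general remedy for this class of race: lock the `DagModel` row while the scheduler recomputes the `next_dagrun_*` fields so the DAG processor cannot interleave its own `UPDATE`. The helper shape is an assumption and the merged fix may take a different approach entirely.

```python
from airflow.models.dag import DagModel
from airflow.utils.session import create_session


def recompute_next_dagrun_fields(dag, most_recent_dag_run) -> None:
    with create_session() as session:
        dag_model = (
            session.query(DagModel)
            .filter(DagModel.dag_id == dag.dag_id)
            .with_for_update()  # SELECT ... FOR UPDATE blocks the concurrent writer
            .one()
        )
        # Recalculate next_dagrun / next_dagrun_create_after under the lock;
        # the changes are committed together when the session closes.
        dag_model.calculate_dagrun_date_fields(dag, most_recent_dag_run)
```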
https://github.com/apache/airflow/issues/31407
https://github.com/apache/airflow/pull/31414
e43206eb2e055a78814fcff7e8c35c6fd9c11e85
b53e2aeefc1714d306f93e58d211ad9d52356470
"2023-05-19T09:07:10Z"
python
"2023-08-08T12:22:13Z"
closed
apache/airflow
https://github.com/apache/airflow
31,399
["airflow/example_dags/example_params_ui_tutorial.py", "airflow/www/templates/airflow/trigger.html"]
Trigger UI Form Dropdowns with enums do not set the default correctly
### Apache Airflow version 2.6.1 ### What happened When playing around with the form features as interoduced in AIP-50 and using the select list option via `enum` I realized that the default value will not be correctly picked when the form is loaded. Instead the first value always will be pre-selected. ### What you think should happen instead Default value should be respected ### How to reproduce Modify the `airflow/example_dags/example_params_ui_tutorial.py:68` and change the default to some other value. Load the form and see that `value 1` is still displayed on form load. ### Operating System Ubuntu 20.04 ### Versions of Apache Airflow Providers not relevant ### Deployment Official Apache Airflow Helm Chart ### Deployment details not relevant ### Anything else Workaround: Make your default currently top of the list. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31399
https://github.com/apache/airflow/pull/31400
3bc0e3296abc9601dcaf7d77835e80e5fea43def
58aab1118a95ef63ba00784760fd13730dd46501
"2023-05-18T20:52:33Z"
python
"2023-05-21T17:15:05Z"
closed
apache/airflow
https://github.com/apache/airflow
31,387
["airflow/providers/google/cloud/operators/kubernetes_engine.py", "tests/providers/google/cloud/operators/test_kubernetes_engine.py"]
GKEStartPodOperator cannot connect to Private IP after upgrade to 2.6.x
### Apache Airflow version 2.6.1 ### What happened After upgrading to 2.6.1, GKEStartPodOperator stopped creating pods. According with release notes we created a specific gcp connection. But connection defaults to GKE Public endpoint (in error message masked as XX.XX.XX.XX) instead of private IP which is best since our cluster do not have public internet access. [2023-05-17T07:02:33.834+0000] {connectionpool.py:812} WARNING - Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0e47049ba0>, 'Connection to XX.XX.XX.XX timed out. (connect timeout=None)')': /api/v1/namespaces/airflow/pods?labelSelector=dag_id%3Dmytask%2Ckubernetes_pod_operator%3DTrue%2Crun_id%3Dscheduled__2023-05-16T0700000000-8fb0e9fa9%2Ctask_id%3Dmytask%2Calready_checked%21%3DTrue%2C%21airflow-sa Seems like with this change "use_private_ip" has been deprecated, what would be the workaround in this case then to connect using private endpoint? Also doc has not been updated to reflect this change in behaviour: https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/kubernetes_engine.html#using-with-private-cluster ### What you think should happen instead There should still be an option to connect using previous method with option "--private-ip" so API calls to Kubernetes call the private endpoint of GKE Cluster. ### How to reproduce 1. Create DAG file with GKEStartPodOperator. 2. Deploy said DAG in an environment with no access tu public internet. ### Operating System cos_coaintainerd ### Versions of Apache Airflow Providers apache-airflow-providers-cncf-kubernetes==5.2.2 apache-airflow-providers-google==8.11.0 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31387
https://github.com/apache/airflow/pull/31391
45b6cfa138ae23e39802b493075bd5b7531ccdae
c082aec089405ed0399cfee548011b0520be0011
"2023-05-18T13:53:30Z"
python
"2023-05-23T11:40:07Z"
closed
apache/airflow
https://github.com/apache/airflow
31,384
["dev/breeze/src/airflow_breeze/utils/run_utils.py"]
Breeze asset compilation causing OOM in dev-mode
### Apache Airflow version main (development) ### What happened asset compilation background thread is not killed when running `stop_airflow` or `breeze stop`. Webpack process takes a lot of memory, each `start-airflow` starts 4-5 of them. After a few breeze start, we end up with 15+ webpack background processes that takes more than 20G RAM ### What you think should happen instead `run_compile_www_assets` should stop when running `stop_airflow` from tmux. It looks like it spawn a `compile-www-assets-dev` pre-commit in a subprocess, that doesn't get killed when stopping breeze ### How to reproduce ``` breeze start-airflow --backend postgres --python 3.8 --dev-mode # Wait for tmux session to start breeze_stop breeze start-airflow --backend postgres --python 3.8 --dev-mode # Wait for tmux session to start breeze_stop # do a couple more if needed ``` Open tmux and monitor your memory, and specifically webpack processes. ### Operating System Ubuntu 20.04.6 LTS ### Versions of Apache Airflow Providers _No response_ ### Deployment Other ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31384
https://github.com/apache/airflow/pull/31403
c63b7774cdba29394ec746b381f45e816dcb0830
ac00547512f33b1222d699c7857108360d99b233
"2023-05-18T11:42:08Z"
python
"2023-05-19T09:58:16Z"
closed
apache/airflow
https://github.com/apache/airflow
31,365
["airflow/www/templates/airflow/dags.html"]
The `Next Run` column name and tooltip are misleading
### Description > Expected date/time of the next DAG Run, or for dataset triggered DAGs, how many datasets have been updated since the last DAG Run "Expected date/time of the next DAG Run" to me sounds like Run After. Should the tooltip indicate something along the lines of "start interval of the next dagrun" or maybe the header Next Run is outdated? Something like "Next Data Interval"? On the same vein, "Last Run" is equally as confusing. The header can be "Last Data Interval" in additional to a tool tip that describe it is the data interval start of the last dagrun. ### Use case/motivation Users confused "Next Run" as when the next dagrun will be queued and ran and does not interpret it as the next dagrun's data interval start. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31365
https://github.com/apache/airflow/pull/31467
f1d484c46c18a83e0b8bc010044126dafe4467bc
7db42fe6655c28330e80b8a062ef3e07968d6e76
"2023-05-17T17:56:37Z"
python
"2023-06-01T15:54:42Z"
closed
apache/airflow
https://github.com/apache/airflow
31,351
["airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"]
Changing a task from unmapped to mapped when it has a task instance note and a task reschedule
### Apache Airflow version main (development) ### What happened Changing a non-mapped task with task instance note and task reschedule to a mapped task crashes scheduler when the task is cleared for rerun. Related commit where a similar fix was done. commit a770edfac493f3972c10a43e45bcd0e7cfaea65f Author: Ephraim Anierobi <splendidzigy24@gmail.com> Date: Mon Feb 20 20:45:25 2023 +0100 Fix Scheduler crash when clear a previous run of a normal task that is now a mapped task (#29645) The fix was to clear the db references of the taskinstances in XCom, RenderedTaskInstanceFields and TaskFail. That way, we are able to run the mapped tasks ### What you think should happen instead _No response_ ### How to reproduce 1. Create below dag file with BashOperator non-mapped. 2. Schedule a dag run and wait for it to finish. 3. Add a task instance note to bash operator. 4. Change t1 to ` t1 = BashOperator.partial(task_id="bash").expand(bash_command=command)` and return `["echo hello"]` from get_command. 5. Restart the scheduler and clear the task. 6. scheduler crashes on trying to use map_index though foreign key reference exists to task instance note and task reschedule. ```python import datetime from airflow.operators.bash import BashOperator from airflow import DAG from airflow.decorators import task with DAG(dag_id="bash_simple", start_date=datetime.datetime(2022, 1, 1), schedule=None, catchup=False) as dag: @task def get_command(arg1, arg2): return "echo hello" command = get_command(1, 1) t1 = BashOperator(task_id="bash", bash_command=command) if __name__ == '__main__': dag.test() ``` ### Operating System Ubuntu ### Versions of Apache Airflow Providers _No response_ ### Deployment Virtualenv installation ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31351
https://github.com/apache/airflow/pull/31352
b1ea3f32f9284c6f53bab343bdf79ab3081276a8
f82246abe9491a49701abdb647be001d95db7e9f
"2023-05-17T11:59:30Z"
python
"2023-05-31T03:08:47Z"
closed
apache/airflow
https://github.com/apache/airflow
31,337
["airflow/providers/google/cloud/hooks/gcs.py"]
GCSHook support for cacheControl
### Description When a file is uploaded to GCS, [by default](https://cloud.google.com/storage/docs/metadata#cache-control), public files will get `Cache-Control: public, max-age=3600`. I've tried setting `cors` for the whole bucket (didn't work) and setting `Cache-Control` on individual file (disappears on file re-upload from airflow) Setting `metadata` for GCSHook is for different field (can't be used to set cache-control) ### Use case/motivation Allow GCSHook to set cache control rather than overriding the `upload` function ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
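A minimal sketch of the workaround available today without the requested hook parameter: set `Cache-Control` on the blob via the `google-cloud-storage` client before uploading. Bucket, object and file names are placeholders.

```python
from google.cloud import storage


def upload_with_cache_control(bucket_name: str, object_name: str, filename: str) -> None:
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    # Applied as object metadata on upload, so it sticks on re-uploads of the
    # same object instead of falling back to the public-object default.
    blob.cache_control = "no-cache, max-age=0"
    blob.upload_from_filename(filename)
```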
https://github.com/apache/airflow/issues/31337
https://github.com/apache/airflow/pull/31338
ba3665f76a2205bad4553ba00537026a1346e9ae
233663046d5210359ce9f4db2fe3db4f5c38f6ee
"2023-05-17T04:51:33Z"
python
"2023-06-08T20:51:38Z"
closed
apache/airflow
https://github.com/apache/airflow
31,335
["airflow/providers/cncf/kubernetes/triggers/pod.py", "tests/providers/cncf/kubernetes/triggers/test_pod.py", "tests/providers/google/cloud/triggers/test_kubernetes_engine.py"]
KPO deferrable "random" false failures
### Apache Airflow version 2.6.1 ### What happened With the KPO and only in deferrable I have "random" false fail the dag ```python from pendulum import today from airflow import DAG from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator dag = DAG( dag_id="kubernetes_dag", schedule_interval="0 0 * * *", start_date=today("UTC").add(days=-1) ) with dag: cmd = "echo toto && sleep 22 && echo finish" KubernetesPodOperator.partial( task_id="task-one", namespace="default", kubernetes_conn_id="kubernetes_default", config_file="/opt/airflow/include/.kube/config", # bug of deferrable corrected in 6.2.0 name="airflow-test-pod", image="alpine:3.16.2", cmds=["sh", "-c", cmd], is_delete_operator_pod=True, deferrable=True, get_logs=True, ).expand(env_vars=[{"a": "a"} for _ in range(8)]) ``` ![Screenshot from 2023-05-17 02-01-03](https://github.com/apache/airflow/assets/10202690/1a189f78-e4d2-4aac-a621-4346d7e178c4) the log of the task in error : [dag_id=kubernetes_dag_run_id=scheduled__2023-05-16T00_00_00+00_00_task_id=task-one_map_index=2_attempt=1.log](https://github.com/apache/airflow/files/11492973/dag_id.kubernetes_dag_run_id.scheduled__2023-05-16T00_00_00%2B00_00_task_id.task-one_map_index.2_attempt.1.log) ### What you think should happen instead KPO should not fail ( in deferable ) if the container succesfully run in K8S ### How to reproduce If I remove the **sleep 22** from the cmd then I do not see any more random task fails ### Operating System ubuntu 22.04 ### Versions of Apache Airflow Providers apache-airflow-providers-cncf-kubernetes==6.1.0 ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31335
https://github.com/apache/airflow/pull/31348
57b7ba16a3d860268f03cd2619e5d029c7994013
8f5de83ee68c28100efc085add40ae4702bc3de1
"2023-05-17T00:06:22Z"
python
"2023-06-29T14:55:41Z"
closed
apache/airflow
https://github.com/apache/airflow
31,311
["chart/files/pod-template-file.kubernetes-helm-yaml", "tests/charts/airflow_aux/test_pod_template_file.py"]
Worker pod template file doesn't have an option to add priorityClassName
### Apache Airflow version 2.6.0 ### What happened Worker pod template file doesn't have an option to add priorityClassName. ### What you think should happen instead Airflow workers deployment however has the option to add it via the override airflow.workers.priorityClassName . We should reuse this for the worker pod template file too. ### How to reproduce Trying to add a priorityClassName for airflow worker pod doesnt work. Unless we override the whole worker pod template with our own. But that is not preferrable as we will need to duplicate a lot of the existing template file. ### Operating System Rhel8 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31311
https://github.com/apache/airflow/pull/31328
fbb095605ab009869ef021535c16b62a3d18a562
2c9ce803d744949674e4ec9ac88f73ad0a361399
"2023-05-16T07:33:59Z"
python
"2023-06-01T00:27:35Z"
closed
apache/airflow
https://github.com/apache/airflow
31,304
["docs/apache-airflow/administration-and-deployment/logging-monitoring/logging-tasks.rst"]
Outdated 'airflow info' output in Logging for Tasks page
### What do you see as an issue? https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/logging-tasks.html#troubleshooting The referenced `airflow info` format is very outdated. ### Solving the problem Current output format is something like this: ``` Apache Airflow version | 2.7.0.dev0 executor | LocalExecutor task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler sql_alchemy_conn | postgresql+psycopg2://postgres:airflow@postgres/airflow dags_folder | /files/dags plugins_folder | /root/airflow/plugins base_log_folder | /root/airflow/logs remote_base_log_folder | System info OS | Linux architecture | arm uname | uname_result(system='Linux', node='fe54afd888cd', release='5.15.68-0-virt', version='#1-Alpine SMP Fri, 16 Sep | 2022 06:29:31 +0000', machine='aarch64', processor='') locale | ('en_US', 'UTF-8') python_version | 3.7.16 (default, May 3 2023, 09:44:48) [GCC 10.2.1 20210110] python_location | /usr/local/bin/python Tools info git | git version 2.30.2 ssh | OpenSSH_8.4p1 Debian-5+deb11u1, OpenSSL 1.1.1n 15 Mar 2022 kubectl | NOT AVAILABLE gcloud | NOT AVAILABLE cloud_sql_proxy | NOT AVAILABLE mysql | mysql Ver 15.1 Distrib 10.5.19-MariaDB, for debian-linux-gnu (aarch64) using EditLine wrapper sqlite3 | 3.34.1 2021-01-20 14:10:07 10e20c0b43500cfb9bbc0eaa061c57514f715d87238f4d835880cd846b9ealt1 psql | psql (PostgreSQL) 15.2 (Debian 15.2-1.pgdg110+1) Paths info airflow_home | /root/airflow system_path | /files/bin/:/opt/airflow/scripts/in_container/bin/:/root/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin: | /usr/sbin:/usr/bin:/sbin:/bin:/opt/airflow python_path | /usr/local/bin:/opt/airflow:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynl | oad:/usr/local/lib/python3.7/site-packages:/files/dags:/root/airflow/config:/root/airflow/plugins airflow_on_path | True Providers info [too long to include] ``` ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31304
https://github.com/apache/airflow/pull/31336
6d184d3a589b988c306aa3614e0f83e514b3f526
fc4f37b105ca0f03de7cc49ab4f00751287ae145
"2023-05-16T01:46:27Z"
python
"2023-05-18T07:44:53Z"
closed
apache/airflow
https://github.com/apache/airflow
31,303
["airflow/cli/cli_config.py"]
airflow dags list-jobs missing state metavar and choices keyword arguments
### What do you see as an issue? The `--state` CLI flag on `airflow dags list-jobs` does not tell the user what arguments it can take. https://github.com/apache/airflow/blob/1bd538be8c5b134643a6c5eddd06f70e6f0db2e7/airflow/cli/cli_config.py#L280 It probably needs some keyword args similar to the following: ```python metavar="(table, json, yaml, plain)", choices=("table", "json", "yaml", "plain"), ``` ### Solving the problem The problem can be solved by adding those keyword arguments so that the user gets a suggestion for what state arguments can be passed in. ### Anything else Any suggestions on what can be a valid state would be much appreciated. Otherwise, I'll find some time to read through the code and/or docs and figure it out. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
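A hedged sketch of the missing keyword arguments, mirroring how other args in `cli_config.py` spell out their choices. The variable name and the concrete state list are assumptions; the accepted states should really be derived from the job states in `airflow.utils.state` rather than hard-coded.

```python
from airflow.cli.cli_config import Arg

# Hypothetical name; the real module already defines the state arg for list-jobs.
ARG_DAG_LIST_JOBS_STATE = Arg(
    ("--state",),
    help="Only list the jobs matching this state",
    metavar="(running, success, restarting, failed)",
    choices=("running", "success", "restarting", "failed"),
)
```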
https://github.com/apache/airflow/issues/31303
https://github.com/apache/airflow/pull/31308
fc4f37b105ca0f03de7cc49ab4f00751287ae145
8e296a09fc5c49188a129356caca8c3ea5eee000
"2023-05-15T23:55:24Z"
python
"2023-05-18T11:05:07Z"
closed
apache/airflow
https://github.com/apache/airflow
31,238
["airflow/providers/discord/notifications/__init__.py", "airflow/providers/discord/notifications/discord.py", "airflow/providers/discord/provider.yaml", "tests/providers/discord/notifications/__init__.py", "tests/providers/discord/notifications/test_discord.py"]
Discord notification
### Description The new [Slack notification](https://airflow.apache.org/docs/apache-airflow-providers-slack/stable/notifications/slack_notifier_howto_guide.html) feature allows users to send messages to a Slack channel using the various [on_*_callbacks](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/callbacks.html) at both the DAG level and the Task level. However, providing the same experience for Discord requires some additional implementation on top of the existing Discord [Webhook](https://airflow.apache.org/docs/apache-airflow-providers-discord/stable/_api/airflow/providers/discord/hooks/discord_webhook/index.html) hook. ### Use case/motivation Send Task/DAG status or other messages as notifications to a Discord channel. ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
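A minimal sketch of what such a notifier could look like, assuming the `BaseNotifier` interface from `airflow.notifications.basenotifier` and the existing `DiscordWebhookHook.execute()` behaviour; the connection ID and message field below are illustrative placeholders, not part of the provider:
```python
from airflow.notifications.basenotifier import BaseNotifier
from airflow.providers.discord.hooks.discord_webhook import DiscordWebhookHook


class DiscordNotifier(BaseNotifier):
    """Hedged sketch: post a message to a Discord channel via an HTTP connection's webhook."""

    template_fields = ("text",)

    def __init__(self, discord_conn_id: str = "discord_default", text: str = ""):
        super().__init__()
        self.discord_conn_id = discord_conn_id  # assumed connection ID, configure as needed
        self.text = text

    def notify(self, context):
        # DiscordWebhookHook builds the payload and posts it to the webhook endpoint.
        hook = DiscordWebhookHook(http_conn_id=self.discord_conn_id, message=self.text)
        hook.execute()
```
With something along these lines, the notifier could be passed to `on_failure_callback` / `on_success_callback` exactly like the Slack notifier in the linked guide.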
https://github.com/apache/airflow/issues/31238
https://github.com/apache/airflow/pull/31273
3689cee485215651bdb5ef434f24ab8774995a37
bdfebad5c9491234a78453856bd8c3baac98f75e
"2023-05-12T03:31:07Z"
python
"2023-06-16T05:49:03Z"
closed
apache/airflow
https://github.com/apache/airflow
31,236
["docs/apache-airflow/core-concepts/dags.rst"]
The @task.branch example in dags.html seems to be incorrect
### What do you see as an issue? On the documentation page [https://airflow.apache.org/docs/apache-airflow/2.6.0/core-concepts/dags.html#branching](https://airflow.apache.org/docs/apache-airflow/2.6.0/core-concepts/dags.html#branching), there is an incorrect usage of the @task.branch decorator. ``` python @task.branch(task_id="branch_task") def branch_func(ti): xcom_value = int(ti.xcom_pull(task_ids="start_task")) if xcom_value >= 5: return "continue_task" elif xcom_value >= 3: return "stop_task" else: return None branch_op = branch_func() ``` This code snippet is incorrect because it calls branch_func() without providing the required `ti` parameter. ### Solving the problem A correct version might look like this: ```python def branch_func(**kwargs): ti: TaskInstance = kwargs["ti"] ``` ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
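For completeness, a hedged sketch of how the documented example could be corrected while keeping the TaskFlow style, assuming the context-injection behaviour of recent Airflow versions (declaring `ti=None` in the signature so the TaskInstance is supplied at runtime):
```python
from airflow.decorators import task


@task.branch(task_id="branch_task")
def branch_func(ti=None):
    # `ti` is injected by Airflow at runtime because it is a known context key.
    xcom_value = int(ti.xcom_pull(task_ids="start_task"))
    if xcom_value >= 5:
        return "continue_task"
    elif xcom_value >= 3:
        return "stop_task"
    return None


# No argument is needed here: the decorator defers execution, and the
# TaskInstance is only resolved when the task actually runs.
branch_op = branch_func()
```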
https://github.com/apache/airflow/issues/31236
https://github.com/apache/airflow/pull/31265
d6051fd10a0949264098af23ce74c76129cfbcf4
d59b0533e18c7cf0ff17f8af50731d700a2e4b4d
"2023-05-12T01:53:07Z"
python
"2023-05-13T12:21:56Z"
closed
apache/airflow
https://github.com/apache/airflow
31,200
["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/jobs/job.py", "airflow/jobs/scheduler_job_runner.py", "newsfragments/31277.significant.rst", "tests/jobs/test_base_job.py"]
Constant "The scheduler does not appear to be running" warning on the UI following 2.6.0 upgrade
### Apache Airflow version 2.6.0 ### What happened Ever since we upgraded to Airflow 2.6.0 from 2.5.2, we have seen that there is a warning stating "The scheduler does not appear to be running" intermittently. This warning goes away by simply refreshing the page. And this conforms with our findings that the scheduler has not been down at all, at any point. By calling the /health point constantly, we can get it to show an "unhealthy" status: These are just approx. 6 seconds apart: ``` {"metadatabase": {"status": "healthy"}, "scheduler": {"latest_scheduler_heartbeat": "2023-05-11T07:42:36.857007+00:00", "status": "healthy"}} {"metadatabase": {"status": "healthy"}, "scheduler": {"latest_scheduler_heartbeat": "2023-05-11T07:42:42.409344+00:00", "status": "unhealthy"}} ``` This causes no operational issues, but it is misleading for end-users. What could be causing this? ### What you think should happen instead The warning should not be shown unless the last heartbeat was at least 30 sec earlier (default config). ### How to reproduce There are no concrete steps to reproduce it, but the warning appears in the UI after a few seconds of browsing around, or simply refresh the /health endpoint constantly. ### Operating System Debian GNU/Linux 11 ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==8.0.0 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cncf-kubernetes==6.1.0 apache-airflow-providers-common-sql==1.4.0 apache-airflow-providers-docker==3.6.0 apache-airflow-providers-elasticsearch==4.4.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==10.0.0 apache-airflow-providers-grpc==3.1.0 apache-airflow-providers-hashicorp==3.3.1 apache-airflow-providers-http==4.3.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-microsoft-azure==6.0.0 apache-airflow-providers-microsoft-mssql==3.3.2 apache-airflow-providers-microsoft-psrp==2.2.0 apache-airflow-providers-microsoft-winrm==3.0.0 apache-airflow-providers-mysql==5.0.0 apache-airflow-providers-odbc==3.2.1 apache-airflow-providers-oracle==3.0.0 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-redis==3.1.0 apache-airflow-providers-sendgrid==3.1.0 apache-airflow-providers-sftp==4.2.4 apache-airflow-providers-slack==7.2.0 apache-airflow-providers-snowflake==4.0.5 apache-airflow-providers-sqlite==3.3.2 apache-airflow-providers-ssh==3.6.0 ### Deployment Official Apache Airflow Helm Chart ### Deployment details Deployed on AKS with helm ### Anything else None more than in the description above. ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
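To quantify how stale the heartbeat really is when the banner shows up, a small hedged snippet like the following can be run against the webserver; the base URL and the 30-second `scheduler_health_check_threshold` default are assumptions about a typical deployment:
```python
from datetime import datetime, timezone

import requests

BASE_URL = "http://localhost:8080"  # assumed webserver URL
THRESHOLD = 30  # assumed scheduler_health_check_threshold in seconds

health = requests.get(f"{BASE_URL}/health", timeout=10).json()
heartbeat = datetime.fromisoformat(health["scheduler"]["latest_scheduler_heartbeat"])
age = (datetime.now(timezone.utc) - heartbeat).total_seconds()
print(f"status={health['scheduler']['status']} heartbeat_age={age:.1f}s threshold={THRESHOLD}s")
```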
https://github.com/apache/airflow/issues/31200
https://github.com/apache/airflow/pull/31277
3193857376bc2c8cd2eb133017be1e8cbcaa8405
f366d955cd3be551c96ad7f794e0b8525900d13d
"2023-05-11T07:51:57Z"
python
"2023-05-15T08:31:14Z"
closed
apache/airflow
https://github.com/apache/airflow
31,186
["airflow/www/static/js/dag/details/FilterTasks.tsx", "airflow/www/static/js/dag/details/dagRun/ClearRun.tsx", "airflow/www/static/js/dag/details/dagRun/MarkRunAs.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/ClearInstance.tsx"]
Problems after grid view redesign
### Apache Airflow version 2.6.0 ### What happened The changes in #30373 have had some unintended consequences. - The clear task button can now go off screen if the DAG / task name is long enough. This is rather unfortunate since it is by far the most important button for fixing issues (hence the reason it takes up a lot of real estate). - The above issue is exacerbated by the fact that the task name can also push the grid off screen. I now have DAGs where I can see either the grid or the clear task button, but not both. - Downstream and Recursive don't seem to be selected by default anymore for the clear task button. For some reason Recursive is only selected for the latest task (maybe this was already the case?). The first two are an annoyance; the last one is preventing us from updating to 2.6.0. ### What you think should happen instead - Downstream should be selected by default again (and possibly Recursive). - The clear task button should *always* be visible, no matter how implausibly small the viewport is. - Ideally, long task names should no longer hide the grid. ### How to reproduce Make a DAG with a long name and some tasks with long names, then open the grid view on a small screen. ### Operating System unix ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31186
https://github.com/apache/airflow/pull/31232
d1fe67184da26fb0bca2416e26f321747fa4aa5d
03b04a3d54c0c2aff9873f88de116fad49f90600
"2023-05-10T15:26:49Z"
python
"2023-05-12T14:27:03Z"
closed
apache/airflow
https://github.com/apache/airflow
31,183
["airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py"]
SparkKubernetesSensor: 'None' has no attribute 'metadata'
### Apache Airflow version 2.6.0 ### What happened After upgrading to 2.6.0 version, pipelines with SparkKubernetesOperator -> SparkKubernetesSensor stopped working correctly. [this PR](https://github.com/apache/airflow/pull/29977) introduces some enhancement into Spark Kubernetes logic, now SparkKubernetesOperator receives the log from spark pods (which is great), but it doesn't monitor the status of a pod, which means if spark application fails - a task in Airflow finishes successfully. On the other hand, using previous pipelines (Operator + Sensor) is impossible now, cause SparkKubernetesSensor fails with `jinja2.exceptions.UndefinedError: 'None' has no attribute 'metadata'` as SparkKubernetesOperator is no longer pushing info to xcom. ### What you think should happen instead Old pipelines should be compatible with Airflow 2.6.0, even though the log would be retrieved in two places - operator and sensor. OR remove the sensor completely and implement all the functionality in the operator (log + status) ### How to reproduce Create a DAG with two operators ``` t1 = SparkKubernetesOperator( kubernetes_conn_id='common/kubernetes_default', task_id=f"task-submit", namespace="namespace", application_file="spark-applications/app.yaml", do_xcom_push=True, dag=dag, ) t2 = SparkKubernetesSensor( task_id=f"task-sensor", namespace="namespace", application_name=f"{{{{ task_instance.xcom_pull(task_ids='task-submit')['metadata']['name'] }}}}", dag=dag, attach_log=True, ) ``` ### Operating System Debian GNU/Linux 10 (buster) ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==8.0.0 apache-airflow-providers-apache-spark==4.0.1 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cncf-kubernetes==6.1.0 apache-airflow-providers-common-sql==1.4.0 apache-airflow-providers-docker==3.6.0 apache-airflow-providers-elasticsearch==4.4.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==10.0.0 apache-airflow-providers-grpc==3.1.0 apache-airflow-providers-hashicorp==3.3.1 apache-airflow-providers-http==4.3.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-microsoft-azure==6.0.0 apache-airflow-providers-microsoft-mssql==3.3.2 apache-airflow-providers-microsoft-psrp==2.2.0 apache-airflow-providers-microsoft-winrm==3.1.1 apache-airflow-providers-mysql==5.0.0 apache-airflow-providers-odbc==3.2.1 apache-airflow-providers-oracle==3.6.0 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-redis==3.1.0 apache-airflow-providers-sendgrid==3.1.0 apache-airflow-providers-sftp==4.2.4 apache-airflow-providers-slack==7.2.0 apache-airflow-providers-snowflake==4.0.5 apache-airflow-providers-sqlite==3.3.2 apache-airflow-providers-ssh==3.6.0 apache-airflow-providers-telegram==4.0.0 ### Deployment Other 3rd-party Helm chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31183
https://github.com/apache/airflow/pull/31798
771362af4784f3d913d6c3d3b44c78269280a96e
6693bdd72d70989f4400b5807e2945d814a83b85
"2023-05-10T11:42:40Z"
python
"2023-06-27T20:55:51Z"
closed
apache/airflow
https://github.com/apache/airflow
31,180
["docs/apache-airflow/administration-and-deployment/listeners.rst"]
Plugin for listeners - on_dag_run_running hook ignored
### Apache Airflow version 2.6.0 ### What happened I created a plugin for custom listeners; the task-level listeners work fine, but the DAG-level listeners are not triggered. The [docs](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/listeners.html) state that listeners defined in `airflow/listeners/spec` should be supported. ``` @hookimpl def on_task_instance_failed(previous_state: TaskInstanceState, task_instance: TaskInstance, session): """ This method is called when task state changes to FAILED. Through callback, parameters like previous_task_state, task_instance object can be accessed. This will give more information about current task_instance that has failed its dag_run, task and dag information. """ print("This works fine") @hookimpl def on_dag_run_failed(dag_run: DagRun, msg: str): """ This method is called when dag run state changes to FAILED. """ print("This is not called!") ``` ### What you think should happen instead The DAG-run specs defined in `airflow/listeners/spec/dagrun.py` should be working. ### How to reproduce Create a plugin and register the two hooks above as a listener. ### Operating System linux ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
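For reference, a minimal sketch of how such listeners are normally registered through a plugin; the module and plugin names below are placeholders, not the reporter's actual code:
```python
from airflow.plugins_manager import AirflowPlugin

# Hypothetical module containing the @hookimpl functions shown above.
from my_company.airflow import my_listeners


class MyListenerPlugin(AirflowPlugin):
    name = "my_listener_plugin"
    listeners = [my_listeners]
```
If the DAG-run hooks still never fire with a registration like this, the issue most likely lies outside the plugin itself.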
https://github.com/apache/airflow/issues/31180
https://github.com/apache/airflow/pull/32269
bc3b2d16d3563d5b9bccd283db3f9e290d1d823d
ab2c861dd8a96f22b0fda692368ce9b103175322
"2023-05-10T09:41:08Z"
python
"2023-07-04T20:57:49Z"
closed
apache/airflow
https://github.com/apache/airflow
31,156
["setup.cfg", "setup.py"]
Searching task instances by state doesn't work
### Apache Airflow version 2.6.0 ### What happened After specifying a state filter such as "Equal to" "failed", the search doesn't return any results; it just resets the whole page (the specified filter is gone). https://github.com/apache/airflow/assets/14293802/5fb7f550-c09f-4040-963f-76dc0a2c1a53 ### What you think should happen instead _No response_ ### How to reproduce Go to the "Browse" tab -> click "Task Instances" -> "Add Filter" -> "State" -> "Use anything (equal to, contains, etc)" -> Click "Search" ### Operating System Debian GNU/Linux 10 (buster) ### Versions of Apache Airflow Providers _No response_ ### Deployment Other Docker-based deployment ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31156
https://github.com/apache/airflow/pull/31203
d59b0533e18c7cf0ff17f8af50731d700a2e4b4d
1133035f7912fb2d2612c7cee5017ebf01f8ec9d
"2023-05-09T14:40:44Z"
python
"2023-05-13T13:13:06Z"
closed
apache/airflow
https://github.com/apache/airflow
31,109
["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"]
Add support for standard SQL in `BigQueryGetDataOperator`
### Description Currently, the BigQueryGetDataOperator always utilizes legacy SQL when submitting jobs (set as the default by the BQ API). This approach may cause problems when using standard SQL features, such as names for projects, datasets, or tables that include hyphens (which is very common nowadays). We would like to make it configurable, so users can set a flag in the operator to enable the use of standard SQL instead. ### Use case/motivation When implementing #30887 to address #24460, I encountered some unusual errors, which were later determined to be related to the usage of hyphens in the GCP project ID name. ### Related issues - #24460 - #28522 (PR) adds this parameter to `BigQueryCheckOperator` ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
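A hedged sketch of how the requested flag might be used once it exists; `use_legacy_sql` is the parameter name this request proposes (mirroring `BigQueryCheckOperator` in the linked PR), not something the operator is guaranteed to expose today, and the dataset/table names are placeholders:
```python
from airflow.providers.google.cloud.operators.bigquery import BigQueryGetDataOperator

get_data = BigQueryGetDataOperator(
    task_id="get_data",
    dataset_id="my_dataset",  # placeholder
    table_id="my_table",      # placeholder
    max_results=10,
    use_legacy_sql=False,     # proposed flag: submit the job with standard SQL
)
```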
https://github.com/apache/airflow/issues/31109
https://github.com/apache/airflow/pull/31190
24532312b694242ba74644fdd43a487e93122235
d1fe67184da26fb0bca2416e26f321747fa4aa5d
"2023-05-06T14:04:34Z"
python
"2023-05-12T14:13:21Z"
closed
apache/airflow
https://github.com/apache/airflow
31,099
["airflow/providers/amazon/aws/operators/emr.py", "tests/providers/amazon/aws/operators/test_emr_serverless.py"]
Can't cancel EMR Serverless task
### Apache Airflow version 2.6.0 ### What happened When marking an EMR Serverless job as failed, the job continues to run. ### What you think should happen instead The job should be cancelled. Looking at the [EMR Serverless Operator](https://github.com/apache/airflow/blob/a6be96d92828a86e982b53646a9e2eeca00a5463/airflow/providers/amazon/aws/operators/emr.py#L939), I don't see an `on_kill` method, so assuming we just need to add that. I'm not sure how to handle the `EmrServerlessCreateApplicationOperator` operator, though - if the workflow has a corresponding `EmrServerlessDeleteApplicationOperator`, we'd probably want to delete the application if the job is cancelled. ### How to reproduce - Start an EMR Serverless DAG - Mark the job as failed in the Airflow UI - See that the EMR Serverless job continues to run ### Operating System n/a ### Versions of Apache Airflow Providers `apache-airflow-providers-amazon==8.0.0` ### Deployment Amazon (AWS) MWAA ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
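A minimal sketch of the kind of `on_kill` that could be added, assuming the operator's hook exposes the boto3 `emr-serverless` client as `conn` and that the job-run ID is kept on the operator instance; the attribute names here are assumptions rather than confirmed API:
```python
from airflow.providers.amazon.aws.operators.emr import EmrServerlessStartJobOperator


class CancellableEmrServerlessStartJobOperator(EmrServerlessStartJobOperator):
    """Sketch: cancel the EMR Serverless job run when the Airflow task is killed."""

    def on_kill(self) -> None:
        job_id = getattr(self, "job_id", None)  # assumed to be set during execute()
        if job_id:
            self.log.info("Cancelling EMR Serverless job run %s", job_id)
            self.hook.conn.cancel_job_run(
                applicationId=self.application_id,
                jobRunId=job_id,
            )
```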
https://github.com/apache/airflow/issues/31099
https://github.com/apache/airflow/pull/31169
761c0da723799c3c37d9eb2cadaa9c4fa256d13a
d6051fd10a0949264098af23ce74c76129cfbcf4
"2023-05-05T17:01:29Z"
python
"2023-05-12T20:00:30Z"
closed
apache/airflow
https://github.com/apache/airflow
31,087
["Dockerfile.ci", "scripts/docker/entrypoint_ci.sh"]
Latest Botocore breaks SQS tests
### Body Our tests are broken in main because the latest botocore fails the SQS tests. Example here: https://github.com/apache/airflow/actions/runs/4887737387/jobs/8724954226 ``` E botocore.exceptions.ClientError: An error occurred (400) when calling the SendMessage operation: <ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><Error><Type>Sender</Type> <Code>AWS.SimpleQueueService.NonExistentQueue</Code><Message>The specified queue does not exist for this wsdl version.</Message><Detail/></Error> <RequestId>ETDUP0OoJOXmn0WS6yWmB0dOhgYtpdVJCVwFWA28lYLKLmGJLAGu</RequestId></ErrorResponse> ``` The problem seems to come from botocore not recognizing the just-added queue: ``` QUEUE_NAME = "test-queue" QUEUE_URL = f"https://{QUEUE_NAME}" ``` Even if we replace it with the full queue URL that gets returned by the "create_queue" API call to `moto`, it still does not work with the latest botocore: ``` QUEUE_URL = f"https://sqs.us-east-1.amazonaws.com/123456789012/{QUEUE_NAME}" ``` This indicates it is likely a real botocore issue. ## How to reproduce: 1. Get a working venv with the `[amazon]` extra of Airflow (or use breeze). It should look like this (when constraints from main are used): ``` root@b0c430d9a328:/opt/airflow# pip list | grep botocore aiobotocore 2.5.0 botocore 1.29.76 ``` 2. `pip uninstall aiobotocore` 3. `pip install --upgrade botocore` ``` root@b0c430d9a328:/opt/airflow# pip list | grep botocore botocore 1.29.127 ``` 4. `pytest tests/providers/amazon/aws/sensors/test_sqs.py` Result: ``` ===== 4 failed, 7 passed in 2.43s === ``` ---------------------------------- Comparing it to the "success case": when you run it in breeze (with the currently constrained botocore): ``` root@b0c430d9a328:/opt/airflow# pip list | grep botocore aiobotocore 2.5.0 botocore 1.29.76 ``` 1. `pytest tests/providers/amazon/aws/sensors/test_sqs.py` Result: ``` ============ 11 passed in 4.57s ======= ``` ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
https://github.com/apache/airflow/issues/31087
https://github.com/apache/airflow/pull/31103
41c87464428d8d31ba81444b3adf457bc968e11d
49cc213919a7e2a5d4bdc9f952681fa4ef7bf923
"2023-05-05T11:31:51Z"
python
"2023-05-05T20:32:51Z"
closed
apache/airflow
https://github.com/apache/airflow
31,084
["docs/docker-stack/build.rst", "docs/docker-stack/docker-examples/extending/add-airflow-configuration/Dockerfile"]
Changing configuration as part of the custom airflow docker image
### What do you see as an issue? https://airflow.apache.org/docs/docker-stack/build.html This page doesn't explain how to edit the airflow.cfg file for an Airflow installation that runs from the Docker image. Adding this to the docs would give users a better idea of how to edit the configuration. ### Solving the problem Add more details to the "Build your Docker image" docs about editing the airflow.cfg file. ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31084
https://github.com/apache/airflow/pull/31842
7a786de96ed178ff99aef93761d82d100b29bdf3
9cc72bbaec0d7d6041ecd53541a524a2f1e523d0
"2023-05-05T10:43:52Z"
python
"2023-06-11T18:12:28Z"
closed
apache/airflow
https://github.com/apache/airflow
31,080
["airflow/providers/common/sql/operators/sql.py", "airflow/providers/common/sql/provider.yaml", "airflow/providers/databricks/operators/databricks_sql.py", "airflow/providers/databricks/provider.yaml", "generated/provider_dependencies.json", "tests/providers/databricks/operators/test_databricks_sql.py"]
SQLExecuteQueryOperator AttributeError exception when returning result to XCom
### Apache Airflow version 2.6.0 ### What happened I am using DatabricksSqlOperator which writes the result to a file. When the task finishes it writes all the data correctly to the file the throws the following exception: > [2023-05-05, 07:56:22 UTC] {taskinstance.py:1847} ERROR - Task failed with exception > Traceback (most recent call last): > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 73, in wrapper > return func(*args, **kwargs) > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2377, in xcom_push > XCom.set( > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 73, in wrapper > return func(*args, **kwargs) > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/xcom.py", line 237, in set > value = cls.serialize_value( > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/xcom.py", line 632, in serialize_value > return json.dumps(value, cls=XComEncoder).encode("UTF-8") > File "/usr/local/lib/python3.9/json/__init__.py", line 234, in dumps > return cls( > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/json.py", line 102, in encode > o = self.default(o) > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/json.py", line 91, in default > return serialize(o) > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 144, in serialize > return encode(classname, version, serialize(data, depth + 1)) > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in serialize > return [serialize(d, depth + 1) for d in o] > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in <listcomp> > return [serialize(d, depth + 1) for d in o] > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in serialize > return [serialize(d, depth + 1) for d in o] > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in <listcomp> > return [serialize(d, depth + 1) for d in o] > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 132, in serialize > qn = qualname(o) > File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 47, in qualname > return f"{o.__module__}.{o.__name__}" > File "/home/airflow/.local/lib/python3.9/site-packages/databricks/sql/types.py", line 161, in __getattr__ > raise AttributeError(item) > AttributeError: __name__ I found that **SQLExecuteQueryOperator** always return the result(so pushing XCom) from its execute() method except when the parameter **do_xcom_push** is set to **False**. But if do_xcom_push is False then the method _process_output() is not executed and DatabricksSqlOperator wont write the results to a file. ### What you think should happen instead I am not sure if the problem should be fixed in DatabricksSqlOperator or in SQLExecuteQueryOperator. 
In any case setting do_xcom_push shouldn't automatically prevent the exevution of _process_output(): ``` if not self.do_xcom_push: return None if return_single_query_results(self.sql, self.return_last, self.split_statements): # For simplicity, we pass always list as input to _process_output, regardless if # single query results are going to be returned, and we return the first element # of the list in this case from the (always) list returned by _process_output return self._process_output([output], hook.descriptions)[-1] return self._process_output(output, hook.descriptions) ``` What happens now is - i have in the same time big result in a file AND in the XCom. ### How to reproduce I suspect that the actual Exception is related to writing the XCom to the meta database and it might not fail on other scenarios. ### Operating System Debian GNU/Linux 11 (bullseye) docker image ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==8.0.0 apache-airflow-providers-apache-spark==4.0.1 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cncf-kubernetes==6.1.0 apache-airflow-providers-common-sql==1.4.0 apache-airflow-providers-databricks==4.1.0 apache-airflow-providers-docker==3.6.0 apache-airflow-providers-elasticsearch==4.4.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==10.0.0 apache-airflow-providers-grpc==3.1.0 apache-airflow-providers-hashicorp==3.3.1 apache-airflow-providers-http==4.3.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-microsoft-azure==6.0.0 apache-airflow-providers-microsoft-mssql==3.3.2 apache-airflow-providers-mysql==5.0.0 apache-airflow-providers-odbc==3.2.1 apache-airflow-providers-oracle==3.6.0 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-redis==3.1.0 apache-airflow-providers-samba==4.1.0 apache-airflow-providers-sendgrid==3.1.0 apache-airflow-providers-sftp==4.2.4 apache-airflow-providers-slack==7.2.0 apache-airflow-providers-snowflake==4.0.5 apache-airflow-providers-sqlite==3.3.2 apache-airflow-providers-ssh==3.6.0 apache-airflow-providers-telegram==4.0.0 ### Deployment Docker-Compose ### Deployment details Using extended Airflow image, LocalExecutor, Postgres 13 meta db as container in the same stack. docker-compose version 1.29.2, build 5becea4c Docker version 23.0.5, build bc4487a ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
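One possible shape of a fix, sketched as a fragment of the same region of `execute()` quoted above: always run `_process_output` (so subclasses such as DatabricksSqlOperator still write their output files) and use `do_xcom_push` only to decide whether the processed result is returned and therefore pushed to XCom. This is a sketch of the idea, not the actual patch:
```python
# Sketch only: process the output unconditionally...
if return_single_query_results(self.sql, self.return_last, self.split_statements):
    processed_output = self._process_output([output], hook.descriptions)[-1]
else:
    processed_output = self._process_output(output, hook.descriptions)

# ...and gate only the XCom push on do_xcom_push.
if not self.do_xcom_push:
    return None
return processed_output
```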
https://github.com/apache/airflow/issues/31080
https://github.com/apache/airflow/pull/31136
521dae534dd0b906e4dd9a7446c6bec3f9022ac3
edd7133a1336c9553d77ba13c83bc7f48d4c63f0
"2023-05-05T08:16:58Z"
python
"2023-05-09T11:11:41Z"
closed
apache/airflow
https://github.com/apache/airflow
31,067
["setup.py"]
[BUG] apache.hive extra is referencing incorrect provider name
### Apache Airflow version 2.6.0 ### What happened When building a Docker image with Airflow 2.6.0 I receive the error: `ERROR: No matching distribution found for apache-airflow-providers-hive>=5.1.0; extra == "apache.hive"` Looking into it, I see that the package name should be `apache-airflow-providers-apache-hive` and not `apache-airflow-providers-hive`. ### What you think should happen instead We should change this line to reference `apache-airflow-providers-apache-hive` instead of `apache-airflow-providers-hive`, so that it points to a provider that actually exists. ### How to reproduce Build an image for Airflow 2.6.0 with the `apache.hive` extra, for example: `pip3 install apache-airflow[apache.hive]==2.6.0 --constraint https://raw.githubusercontent.com/apache/airflow/constraints-2.6.0/constraints-3.8.txt`. ### Operating System ubuntu:22.04 ### Versions of Apache Airflow Providers Image does not build. ### Deployment Other Docker-based deployment ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
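For illustration, a hedged sketch of the corrected mapping as a plain setuptools-style extras entry; the real setup.py assembles the extras programmatically, so this only shows the intended package name and pin:
```python
# Illustrative only: the extra name users install maps to the provider
# distribution it should pull in.
extras_require = {
    "apache.hive": ["apache-airflow-providers-apache-hive>=5.1.0"],
}
```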
https://github.com/apache/airflow/issues/31067
https://github.com/apache/airflow/pull/31068
da61bc101eba0cdb17554f5b9ae44998bb0780d3
9e43d4aee3b86134b1b9a42f988fb9d3975dbaf7
"2023-05-04T17:37:49Z"
python
"2023-05-05T15:39:33Z"
closed
apache/airflow
https://github.com/apache/airflow
31,059
["airflow/utils/log/file_task_handler.py", "tests/providers/amazon/aws/log/test_s3_task_handler.py", "tests/utils/test_log_handlers.py"]
Logs no longer shown after task completed CeleryExecutor
### Apache Airflow version 2.6.0 ### What happened Stream logging works as long as the task is running. Once the task finishes, no logs are printed to the UI (only the hostname of the worker is printed) <img width="1657" alt="image" src="https://user-images.githubusercontent.com/16529101/236212701-aecf6cdc-4d87-4817-a685-0778b94d182b.png"> ### What you think should happen instead Expected to see the complete log of a task ### How to reproduce Start an airflow task. You should be able to see the logs coming in as a stream, once it finishes, the logs are gone ### Operating System CentOS 7 ### Versions of Apache Airflow Providers airflow-provider-great-expectations==0.1.5 apache-airflow==2.6.0 apache-airflow-providers-airbyte==3.1.0 apache-airflow-providers-apache-hive==4.0.1 apache-airflow-providers-apache-spark==3.0.0 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-common-sql==1.4.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-http==4.3.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-mysql==5.0.0 apache-airflow-providers-oracle==3.4.0 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-sqlite==3.3.2 ### Deployment Virtualenv installation ### Deployment details Celery with 4 workers nodes/VMs. Scheduler and Webserver on a different VM ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/31059
https://github.com/apache/airflow/pull/31101
10dda55e8b0fed72e725b369c17cb5dfb0d77409
672ee7f0e175dd7edb041218850d0cd556d62106
"2023-05-04T13:07:47Z"
python
"2023-05-08T21:51:35Z"
closed
apache/airflow
https://github.com/apache/airflow
31,039
["airflow/dag_processing/processor.py", "tests/dag_processing/test_processor.py"]
Packaged DAGs not getting loaded in Airflow 2.6.0 (ValueError: source code string cannot contain null bytes)
### Apache Airflow version 2.6.0 ### What happened I am trying to upgrade Airflow from version 2.3.1 to 2.6.0. I have a zip file with a few test DAGs which used to get loaded correctly in 2.3.1 but after the upgrade I get the following error in the scheduler logs. ``` Process ForkProcess-609: Traceback (most recent call last): File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap self.run() File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 259, in _run_processor_manager processor_manager.start() File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 493, in start return self._run_parsing_loop() File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 572, in _run_parsing_loop self.start_new_processes() File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 1092, in start_new_processes processor.start() File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/processor.py", line 196, in start for module in iter_airflow_imports(self.file_path): File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/file.py", line 389, in iter_airflow_imports parsed = ast.parse(Path(file_path).read_bytes()) File "/usr/local/lib/python3.7/ast.py", line 35, in parse return compile(source, filename, mode, PyCF_ONLY_AST) ValueError: source code string cannot contain null bytes ``` Single .py files are getting loaded without issues. ### What you think should happen instead Packaged file should be parsed and the DAGs inside it should be available in the DagBag. ### How to reproduce - Setup Airflow 2.6.0 using the official Docker image and helm chart. - Create a folder and place the python file below inside it. - Create a packaged DAG using command `zip -r test_dags.zip ./*` from within the folder. - Place the `test_dags.zip` file in `/opt/airflow/dags` directory. 
```python import time from datetime import timedelta from textwrap import dedent from airflow import DAG from airflow.operators.python_operator import PythonOperator from airflow.utils.dates import days_ago default_args = { 'owner': 'airflow', 'depends_on_past': False, 'email': ['airflow@example.com'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 3, 'retry_delay': timedelta(minutes=1), } def test_function(task_name): print(f"{task_name}: Test function invoked") print(f"{task_name}: Sleeping 10 seconds") time.sleep(10) print(f"{task_name}: Exiting") with DAG( 'airflow2_test_dag_1', default_args=default_args, description='DAG for testing airflow 2.0', schedule_interval=timedelta(days=1), start_date=days_ago(1) ) as dag: t1 = PythonOperator( task_id='first_task', python_callable=test_function, op_kwargs={"task_name": "first_task"}, dag=dag, ) t2 = PythonOperator( task_id='second_task', python_callable=test_function, op_kwargs={"task_name": "second_task"}, dag=dag, ) t3 = PythonOperator( task_id='third_task', python_callable=test_function, op_kwargs={"task_name": "third_task"}, dag=dag, queue='kubernetes' ) t1 >> t2 >> t3 ``` ### Operating System Debian GNU/Linux 11 (bullseye) ### Versions of Apache Airflow Providers ``` apache-airflow-providers-amazon==7.3.0 apache-airflow-providers-apache-hive==2.3.2 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cncf-kubernetes==5.2.2 apache-airflow-providers-common-sql==1.3.4 apache-airflow-providers-docker==3.5.1 apache-airflow-providers-elasticsearch==4.4.0 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==6.8.0 apache-airflow-providers-grpc==3.1.0 apache-airflow-providers-hashicorp==2.2.0 apache-airflow-providers-http==4.2.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-microsoft-azure==5.2.1 apache-airflow-providers-mysql==4.0.2 apache-airflow-providers-odbc==3.2.1 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-redis==3.1.0 apache-airflow-providers-sendgrid==3.1.0 apache-airflow-providers-sftp==4.2.4 apache-airflow-providers-slack==4.2.3 apache-airflow-providers-snowflake==4.0.4 apache-airflow-providers-sqlite==3.3.1 apache-airflow-providers-ssh==3.5.0 ``` ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else The issue only occurs with `2.6.0`. I used the `2.5.3` docker image with everything else remaining same and packaged DAGs loaded with no issues. ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
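A hedged sketch of the kind of guard that would avoid the crash in `iter_airflow_imports` shown in the traceback: treat files that cannot be parsed as plain Python source (for example packaged `.zip` DAGs) as having no top-level airflow imports, instead of letting the exception kill the processor manager. The function body below is illustrative, not the real implementation:
```python
import ast
from pathlib import Path
from typing import Generator


def iter_airflow_imports(file_path: str) -> Generator[str, None, None]:
    """Yield module names of top-level ``airflow`` imports; skip unparseable files."""
    try:
        parsed = ast.parse(Path(file_path).read_bytes())
    except (SyntaxError, ValueError):
        # Packaged .zip DAGs (and other non-source files) contain null bytes or
        # invalid syntax -- they simply have no imports to report here.
        return
    for node in ast.walk(parsed):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.startswith("airflow"):
                    yield alias.name
```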
https://github.com/apache/airflow/issues/31039
https://github.com/apache/airflow/pull/31061
91e18bfc3e53002c191b33dbbfd017e152b23935
34b6230f3c7815b8ae7e99443e45a56921059d3f
"2023-05-03T13:02:13Z"
python
"2023-05-04T17:18:28Z"