Dataset schema:

| column | dtype | values / lengths |
|---|---|---|
| status | stringclasses | 1 value |
| repo_name | stringclasses | 13 values |
| repo_url | stringclasses | 13 values |
| issue_id | int64 | 1 – 104k |
| updated_files | stringlengths | 10 – 1.76k |
| title | stringlengths | 4 – 369 |
| body | stringlengths | 0 – 254k |
| issue_url | stringlengths | 38 – 55 |
| pull_url | stringlengths | 38 – 53 |
| before_fix_sha | stringlengths | 40 – 40 |
| after_fix_sha | stringlengths | 40 – 40 |
| report_datetime | unknown | |
| language | stringclasses | 5 values |
| commit_datetime | unknown | |
closed
apache/airflow
https://github.com/apache/airflow
30,335
["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "tests/executors/test_celery_executor.py"]
Recommend (or set as default) to enable pool_recycle for celery workers (especially if using MySQL)
### What do you see as an issue? Similar to how `sql_alchemy_pool_recycle` defaults to 1800 seconds for the Airflow metastore: https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#config-database-sql-alchemy-pool-recycle If users are using celery as their backend, it provides extra stability to set `pool_recycle`. This problem is particularly acute for users who are using MySQL as backend for tasks because MySQL disconnects connections after 8 hours of being idle. While Airflow can usually force celery to retry connecting, it does not always work and tasks can fail. This is specifically recommended by the SqlAlchemy docs: * https://docs.sqlalchemy.org/en/14/core/pooling.html#setting-pool-recycle * https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine.params.pool_recycle * https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_wait_timeout ### Solving the problem We currently have a file which looks like this: ```python from airflow.config_templates.default_celery import DEFAULT_CELERY_CONFIG database_engine_options = DEFAULT_CELERY_CONFIG.get( "database_engine_options", {} ) # Use pool_pre_ping to detect stale db connections # https://github.com/apache/airflow/discussions/22113 database_engine_options["pool_pre_ping"] = True # Use pool_recycle due to MySQL disconnecting sessions after 8 hours # https://docs.sqlalchemy.org/en/14/core/pooling.html#setting-pool-recycle # https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine.params.pool_recycle # https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout database_engine_options["pool_recycle"] = 1800 DEFAULT_CELERY_CONFIG["database_engine_options"] = database_engine_options ``` And we point the env var `AIRFLOW__CELERY__CELERY_CONFIG_OPTIONS` to this object, not sure if this is best practice? ### Anything else Maybe just change the default options to include this? ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30335
https://github.com/apache/airflow/pull/30426
cb18d923f8253ac257c1b47e9276c39bae967666
bc1d68a6eb01919415c399d678f491e013eb9238
"2023-03-27T16:31:21Z"
python
"2023-06-02T14:16:25Z"
closed
apache/airflow
https://github.com/apache/airflow
30,324
["airflow/providers/cncf/kubernetes/CHANGELOG.rst", "airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/provider.yaml", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_pod.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"]
KPO deferrable needs kubernetes_conn_id while non deferrable does not
### Apache Airflow version 2.5.2 ### What happened Not sure if this is a feature or a bug, but I can use KubernetesPodOperator fine without setting a kubernetes_conn_id. For example: ``` start = KubernetesPodOperator( namespace="mynamespace", cluster_context="mycontext", security_context={ 'runAsUser': 1000 }, name="hello", image="busybox", image_pull_secrets=[k8s.V1LocalObjectReference('prodregistry')], cmds=["sh", "-cx"], arguments=["echo Start"], task_id="Start", in_cluster=False, is_delete_operator_pod=True, config_file="/home/airflow/.kube/config", ) ``` But if I add deferrable=True to this it won't work. It seems to require an explicit kubernetes_conn_id (which we don't configure). Is it not possible for the deferrable version to work like the non-deferrable one? ### What you think should happen instead I hoped that kpo deferrable would work the same as non deferrable. ### How to reproduce Use KPO with deferrable=True but no kubernetes_conn_id setting ### Operating System Debian 11 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30324
https://github.com/apache/airflow/pull/28848
a09fd0d121476964f1c9d7f12960c24517500d2c
85b9135722c330dfe1a15e50f5f77f3d58109a52
"2023-03-27T09:59:56Z"
python
"2023-04-08T16:26:53Z"
closed
apache/airflow
https://github.com/apache/airflow
30,309
["airflow/providers/docker/hooks/docker.py", "airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"]
in DockerOperator, adding an attribute `tls_verify` to choose whether to validate the provided certificate.
### Description The current version of docker operator always performs TLS certificate validation. I think it would be nice to add an option to choose whether or not to validate the provided certificate. ### Use case/motivation My work environment has several docker hosts with expired self-signed certificates. Since it is difficult to renew all certificates immediately, we are using a custom docker operator to disable certificate validation. It would be nice if it was provided as an official feature, so I registered an issue. ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
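A minimal usage sketch of what the requested option could look like on a task; the `tls_verify` name comes from the issue title, and its exact semantics in the provider are an assumption here, not the operator's existing API.

```python
from airflow.providers.docker.operators.docker import DockerOperator

# Hypothetical usage of the proposed flag; only `tls_verify` is new, the rest is
# the operator's existing API. The docker_url is a placeholder TLS-enabled daemon.
run_in_container = DockerOperator(
    task_id="run_in_container",
    image="alpine:3.18",
    command="echo hello",
    docker_url="tcp://docker-host:2376",
    tls_verify=False,  # proposed: skip validation of the (expired/self-signed) certificate
)
```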
https://github.com/apache/airflow/issues/30309
https://github.com/apache/airflow/pull/30310
51f9910ecbf1186aff164e09d118bdf04d21dfcb
c1a685f752703eeb01f9369612af8c88c24cca09
"2023-03-26T15:14:46Z"
python
"2023-04-14T10:17:42Z"
closed
apache/airflow
https://github.com/apache/airflow
30,289
["airflow/sensors/base.py", "tests/sensors/test_base.py"]
If the first poke of a sensor throws an exception, `timeout` does not work
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Airflow Version: 2.2.5 In `reschedule` mode, if the first poke of a sensor throws an exception, `timeout` does not work. There can be any combination of the poke returning `False` or raising an exception after that. My guess is that something is initialized in some database incorrectly, because this returns an empty list every time if the first poke raises an exception: ``` TaskReschedule.find_for_task_instance( context["ti"], try_number=first_try_number ) ``` This happens here in the main branch: https://github.com/apache/airflow/blob/main/airflow/sensors/base.py#L174-L181 If the first poke returns `False`, I don't see this issue. ### What you think should happen instead The timeout should be respected whether `poke` returns successfully or not. A related issue is that if every poke raises an uncaught exception, the timeout will never be respected, since the timeout is checked only after a successful poke. Maybe both issues can be fixed at once? ### How to reproduce Use this code. Run the dag several times, and see if the total duration including all retries is greater than the timeout. ``` import datetime import random from airflow import DAG from airflow.models import TaskReschedule from airflow.sensors.base import BaseSensorOperator from airflow.utils.context import Context class RandomlyFailSensor(BaseSensorOperator): def poke(self, context: Context) -> bool: first_try_number = context["ti"].max_tries - self.retries + 1 task_reschedules = TaskReschedule.find_for_task_instance( context["ti"], try_number=first_try_number ) self.log.error(f"\n\nIf this is the first attempt, or first attempt failed, " f"this will be empty: \n\t{task_reschedules}\n\n") if random.random() < .5: self.log.error("\n\nIf this was the very first poke, the timeout *will not* work.\n\n") raise Exception('Failed!') else: self.log.error("\n\nIf this was the very first poke, the timeout *will* work.\n\n") return False dag = DAG( 'sensors_test', schedule_interval=None, max_active_runs=1, catchup=False, default_args={ "owner": "me", "depends_on_past": False, "start_date": datetime.datetime(2018, 1, 1), "email_on_failure": False, "email_on_retry": False, "execution_timeout": datetime.timedelta(minutes=10), } ) t_always_fail_sensor = RandomlyFailSensor( task_id='random_fail_sensor', mode="reschedule", poke_interval=1, retry_delay=datetime.timedelta(seconds=1), timeout=15, retries=50, dag=dag ) ``` ### Operating System Debian 11? This Docker image: https://hub.docker.com/layers/library/python/3.8.12/images/sha256-60d1cda1542582095795c25bff869b0c615e2a913c4026ed0313ede156b60468?context=explore ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details I use an internal tool that hides the details of the deployment. If there is more info that would be helpful for debugging, let me know. ### Anything else - This happens every time, based on the conditions I describe above. - I'd be happy to submit a PR, but that depends on what my manager says. - @yuqian90 might know more about this issue, since they contributed related code in [this commit](https://github.com/apache/airflow/commit/a0e6a847aa72ddb15bdc147695273fb3aec8839d#diff-62f7d8a52fefdb8e05d4f040c6d3459b4a56fe46976c24f68843dbaeb5a98487R1164). - Impact: if the first poke throws an exception and the rest return False, the task will continue indefinitely. ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! 
### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30289
https://github.com/apache/airflow/pull/30293
41c8e58deec2895b0a04879fcde5444b170e679e
24887091b807527b7f32a58e85775f4daec3aa84
"2023-03-24T20:21:45Z"
python
"2023-04-05T11:17:22Z"
closed
apache/airflow
https://github.com/apache/airflow
30,287
["airflow/providers/amazon/aws/transfers/redshift_to_s3.py", "tests/providers/amazon/aws/transfers/test_redshift_to_s3.py"]
RedshiftToS3 Operator Wrapping Query in Quotes Instead of $$
### Apache Airflow version 2.5.2 ### What happened When passing a select_query into the RedshiftToS3 Operator, the query will error out if it contains any single quotes because the body of the UNLOAD statement is being wrapped in single quotes. ### What you think should happen instead Instead, it's better practice to use the double dollar sign or dollar quoting to signify the start and end of the statement to run. This removes the need to escape any special characters and avoids the statement throwing an error in the common case of using single quotes to wrap string literals. ### How to reproduce Running the RedshiftToS3 Operator with the sql_query: `SELECT 'Single Quotes Break this Operator'` will throw the error ### Operating System NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com//" ### Versions of Apache Airflow Providers apache-airflow[package-extra]==2.4.3 apache-airflow-providers-amazon ### Deployment Amazon (AWS) MWAA ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
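A small sketch of the quoting difference the issue describes; the UNLOAD statements below are illustrative (bucket, prefix, and IAM role are placeholders), not the operator's actual rendering code.

```python
# The query from the bug report: it contains single quotes.
select_query = "SELECT 'Single Quotes Break this Operator'"

# Current behaviour described in the issue: the body is wrapped in single quotes,
# so the embedded quotes terminate the string and the UNLOAD fails.
unload_single_quoted = (
    f"UNLOAD ('{select_query}') TO 's3://my-bucket/prefix_' "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/example'"
)

# Proposed behaviour: dollar quoting, so no escaping of single quotes is needed.
unload_dollar_quoted = (
    f"UNLOAD ($${select_query}$$) TO 's3://my-bucket/prefix_' "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/example'"
)
```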
https://github.com/apache/airflow/issues/30287
https://github.com/apache/airflow/pull/35986
e0df7441fa607645d0a379c2066ca4ab16f5cb95
04a781666be2955ed518780ea03bc13a1e3bd473
"2023-03-24T18:31:54Z"
python
"2023-12-04T19:19:00Z"
closed
apache/airflow
https://github.com/apache/airflow
30,280
["airflow/www/static/css/dags.css", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "docs/apache-airflow/core-concepts/dag-run.rst", "tests/www/views/test_views_home.py"]
Feature request - filter for dags with running status in the main page
### Description Feature request to filter by running dags (or by other statuses too). We have over 100 dags and we were having some performance problems. We wanted to see all the running Dags from the main page and found that we couldn't. We can see the light green circle in the runs (and that involves a lot of scrolling) but no way to filter for it. We use SQL Server and its job scheduling tool (SQL Agent) has this feature. The implementation for airflow shouldn't necessarily be like this but just presenting this as an example that it's a helpful feature implemented in other tools. <img width="231" alt="image" src="https://user-images.githubusercontent.com/286903/227529646-97ac2e8e-52de-421a-8328-072f35ccdff2.png"> I'll leave implementation details for someone else. We are on v2.2.5. ### Use case/motivation _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30280
https://github.com/apache/airflow/pull/30429
c25251cde620481592392e5f82f9aa8a259a2f06
dbe14c31d52a345aa82e050cc0a91ee60d9ee567
"2023-03-24T13:11:24Z"
python
"2023-05-22T16:05:44Z"
closed
apache/airflow
https://github.com/apache/airflow
30,247
["chart/values.schema.json", "chart/values.yaml", "tests/charts/airflow_core/test_pdb_scheduler.py", "tests/charts/other/test_pdb_pgbouncer.py", "tests/charts/webserver/test_pdb_webserver.py"]
Pod Disruption Budget doesn't allow additional properties
### Official Helm Chart version 1.8.0 (latest released) ### Apache Airflow version 2 ### Kubernetes Version >1.21 ### Helm Chart configuration ```yaml webserver: podDisruptionBudget: enabled: true config: minAvailable: 1 ``` ### Docker Image customizations _No response_ ### What happened If you use the following values you will not be able to install the chart; the problem is in the schema at this line https://github.com/apache/airflow/blob/main/chart/values.schema.json#L3320 ### What you think should happen instead _No response_ ### How to reproduce Use these values ```yaml webserver: podDisruptionBudget: enabled: true config: minAvailable: 1 ``` ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30247
https://github.com/apache/airflow/pull/30603
3df0be0f6fe9786a5fcb85151fb83167649ee163
75f5f53ed0aa8df516c9d861153cab4f73318317
"2023-03-23T04:48:42Z"
python
"2023-05-08T08:16:12Z"
closed
apache/airflow
https://github.com/apache/airflow
30,242
["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/connection.py", "airflow/secrets/metastore.py", "airflow/serialization/enums.py", "airflow/serialization/serialized_objects.py", "tests/api_internal/endpoints/test_rpc_api_endpoint.py"]
AIP-44 Migrate MetastoreBackend to Internal API
Used by Variable/Connection. https://github.com/apache/airflow/blob/894741e311ffd642e036b80d3b1b5d53c3747cad/airflow/secrets/metastore.py#L32
https://github.com/apache/airflow/issues/30242
https://github.com/apache/airflow/pull/33829
0e4d3001397ba2005b2172ad401f9938d5d6aaf8
0cb875b7ec1cebb101866581166cd7b97047f941
"2023-03-22T15:56:40Z"
python
"2023-08-29T10:24:10Z"
closed
apache/airflow
https://github.com/apache/airflow
30,240
["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/api_internal/internal_api_call.py", "airflow/serialization/enums.py", "airflow/serialization/serialized_objects.py", "tests/api_internal/endpoints/test_rpc_api_endpoint.py", "tests/api_internal/test_internal_api_call.py", "tests/serialization/test_serialized_objects.py"]
AIP-44 Implement conversion to Pydantic-ORM objects in Internal API
null
https://github.com/apache/airflow/issues/30240
https://github.com/apache/airflow/pull/30282
7aca81ceaa6cb640dff9c5d7212adc4aeb078a2f
41c8e58deec2895b0a04879fcde5444b170e679e
"2023-03-22T15:26:50Z"
python
"2023-04-05T08:54:00Z"
closed
apache/airflow
https://github.com/apache/airflow
30,229
["docs/apache-airflow/howto/operator/python.rst"]
Update Python operator how-to with @task.sensor example
### Body The current [how-to documentation for the `PythonSensor`](https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html#pythonsensor) does not include any references to the existing `@task.sensor` TaskFlow decorator. It would be nice to see how these are used together in this doc. ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
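A short sketch of the kind of `@task.sensor` example the doc could show next to the `PythonSensor` one; the marker-file path is a placeholder.

```python
from airflow.decorators import task
from airflow.sensors.base import PokeReturnValue


@task.sensor(poke_interval=60, timeout=3600, mode="reschedule")
def wait_for_marker_file() -> PokeReturnValue:
    # Done once the (placeholder) marker file exists.
    import os

    return PokeReturnValue(is_done=os.path.exists("/tmp/data_ready"))
```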
https://github.com/apache/airflow/issues/30229
https://github.com/apache/airflow/pull/30344
4e4e563d3fc68d1becdc1fc5ec1d1f41f6c24dd3
2a2ccfc27c3d40caa217ad8f6f0ba0d394ac2806
"2023-03-22T01:19:01Z"
python
"2023-04-11T09:12:52Z"
closed
apache/airflow
https://github.com/apache/airflow
30,225
["airflow/decorators/base.py", "airflow/decorators/setup_teardown.py", "airflow/models/baseoperator.py", "airflow/utils/setup_teardown.py", "airflow/utils/task_group.py", "tests/decorators/test_setup_teardown.py", "tests/serialization/test_dag_serialization.py", "tests/utils/test_setup_teardown.py"]
Ensure setup/teardown tasks can be reused/works with task.override
Ensure that this works: ```python @setup def mytask(): print("I am a setup task") with dag_maker() as dag: mytask.override(task_id='newtask') assert len(dag.task_group.children) == 1 setup_task = dag.task_group.children["newtask"] assert setup_task._is_setup ``` and teardown also works
https://github.com/apache/airflow/issues/30225
https://github.com/apache/airflow/pull/30342
28f73e42721bba5c5ad40bb547be9c057ca81030
c76555930aee9692d2a839b9c7b9e2220717b8a0
"2023-03-21T21:01:26Z"
python
"2023-03-28T18:15:07Z"
closed
apache/airflow
https://github.com/apache/airflow
30,220
["airflow/models/dag.py", "airflow/www/static/js/api/useMarkFailedTask.ts", "airflow/www/static/js/api/useMarkSuccessTask.ts", "airflow/www/static/js/api/useMarkTaskDryRun.ts", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/MarkInstanceAs.tsx", "airflow/www/views.py", "tests/models/test_dag.py", "tests/www/views/test_views.py"]
set tasks as successful/failed at their task-group level.
### Description Ability to clear or mark task groups as success/failure and have that propagate to the tasks within that task group. Sometimes there is a need to adjust the status of tasks within a task group, which can get unwieldy depending on the number of tasks in that task group. A great quality of life upgrade, and something that seems like an intuitive feature, would be the ability to clear or change the status of all tasks at their taskgroup level through the UI. ### Use case/motivation In the event a large number of tasks, or a whole task group in this case, need to be cleared or their status set to success/failure this would be a great improvement. For example, a manual DAG run triggered through the UI or the API that has a number of task sensors or tasks that otherwise don't matter for that DAG run - instead of setting each one as success by hand, doing so for each task group would be great. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30220
https://github.com/apache/airflow/pull/30478
decaaa3df2b3ef0124366033346dc21d62cff057
1132da19e5a7d38bef98be0b1f6c61e2c0634bf9
"2023-03-21T18:06:34Z"
python
"2023-04-27T16:10:28Z"
closed
apache/airflow
https://github.com/apache/airflow
30,200
["chart/templates/_helpers.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_airflow_common.py"]
Support for providing SHA digest for the image in Helm chart
### Description This is my configuration: ```yaml images: airflow: repository: <REPO> tag: <TAG> ``` I'd like to be able to do the following: ```yaml images: airflow: repository: <REPO> digest: <SHA_DIGEST> ``` Additionally, I've tried supplying only the repository, or placing the digest as the tag, but both don't work because of [this](https://github.com/apache/airflow/blob/c44c7e1b481b7c1a0d475265835a23b0f507506c/chart/templates/_helpers.yaml#L252). The formatting is done by `repo:tag` while I need `repo@digest`. ### Use case/motivation I'm using Terraform to deploy Airflow. I'm using the data source of [`aws_ecr_image`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecr_image) in order to pick the `latest` image. I want to supply to the [`helm_release`](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) of Airflow the image's digest rather than `latest` as according to the docs, it's bad practice. ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30200
https://github.com/apache/airflow/pull/30214
78ab400d7749c683c5c122bcec0a023ded7a9603
e95f83ef374367d7ac8e75162ebe4ae1abae487f
"2023-03-20T17:07:22Z"
python
"2023-04-10T16:37:29Z"
closed
apache/airflow
https://github.com/apache/airflow
30,196
["airflow/www/utils.py", "airflow/www/views.py"]
delete dag run times out
### Apache Airflow version 2.5.2 ### What happened when trying to delete a dag run with many tasks (>1000) the operation times out and the dag run is not deleted. ### What you think should happen instead _No response_ ### How to reproduce attempt to delete a dag run that contains >1000 tasks (in my case 10k) using the dagrun/list/ page results in a timeout: ![image](https://user-images.githubusercontent.com/7373236/226325567-5a87efa1-4744-417e-9995-b97dd1791401.png) code for dag (however it fails on any dag with > 1000 tasks): ``` import json from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator from datetime import datetime, timedelta from airflow.decorators import dag, task default_args = { 'owner': 'airflow', 'depends_on_past': False, 'retries': 0, 'retry_delay': timedelta(minutes=1), 'start_date': datetime(2023, 2, 26), 'is_delete_operator_pod': True, 'get_logs': True } @dag('system_test', schedule=None, default_args=default_args, catchup=False, tags=['maintenance']) def run_test_airflow(): stress_image = 'dockerhub.prod.evogene.host/progrium/stress' @task def create_cmds(): commands = [] for i in range(10000): commands.append(["stress --cpu 4 --io 1 --vm 2 --vm-bytes 6000M --timeout 60s"]) return commands KubernetesPodOperator.partial( image=stress_image , task_id=f'test_airflow', name=f'test_airflow', cmds=["/bin/sh", "-c"], log_events_on_failure=True, pod_template_file=f'/opt/airflow/dags/repo/templates/cpb_cpu_4_mem_16' ).expand(arguments=create_cmds()) run_test_airflow() ``` ### Operating System kubernetes deployment ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30196
https://github.com/apache/airflow/pull/30330
a1b99fe5364977739b7d8f22a880eeb9d781958b
4e4e563d3fc68d1becdc1fc5ec1d1f41f6c24dd3
"2023-03-20T11:27:46Z"
python
"2023-04-11T07:58:08Z"
closed
apache/airflow
https://github.com/apache/airflow
30,169
["airflow/providers/google/cloud/hooks/looker.py", "tests/providers/google/cloud/hooks/test_looker.py"]
Potential issue with use of serialize in Looker SDK
### Apache Airflow Provider(s) google ### Versions of Apache Airflow Providers apache-airflow-providers-common-sql==1.3.4 apache-airflow-providers-ftp==3.3.1 apache-airflow-providers-google==8.11.0 apache-airflow-providers-http==4.2.0 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-sqlite==3.3.1 ### Apache Airflow version 2 ### Operating System OS X (same issue on AWS) ### Deployment Amazon (AWS) MWAA ### Deployment details _No response_ ### What happened I wrote a mod on top of LookerHook to access the `scheduled_plan_run_once` endpoint. The result was the following error. ```Traceback (most recent call last): File "/usr/local/airflow/dags/utils/looker_operators_mod.py", line 125, in execute resp = self.hook.run_scheduled_plan_once( File "/usr/local/airflow/dags/utils/looker_hook_mod.py", line 136, in run_scheduled_plan_once resp = sdk.scheduled_plan_run_once(plan_to_send) File "/usr/local/lib/python3.9/site-packages/looker_sdk/sdk/api40/methods.py", line 10273, in scheduled_plan_run_once self.post( File "/usr/local/lib/python3.9/site-packages/looker_sdk/rtl/api_methods.py", line 171, in post serialized = self._get_serialized(body) File "/usr/local/lib/python3.9/site-packages/looker_sdk/rtl/api_methods.py", line 156, in _get_serialized serialized = self.serialize(api_model=body) # type: ignore TypeError: serialize() missing 1 required keyword-only argument: 'converter' ``` I was able to get past the error by rewriting the `get_looker_sdk` function in LookerHook to initialize with `looker_sdk.init40` instead, which resolved the serialize() issue. ### What you think should happen instead I don't know why the serialization piece is part of the SDK initialization - would love some further context! ### How to reproduce As far as I can tell, any call to sdk.scheduled_plan_run_once() causes this issue. I tried it with a variety of different dict plans. I only resolved it by changing how I initialized the SDK ### Anything else n/a ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
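A minimal sketch of the workaround the reporter describes: initializing via `looker_sdk.init40`, which wires up the serializer that the manual client construction was missing. The ini path and the plan body are placeholders, and this is not the hook's actual code.

```python
import looker_sdk
from looker_sdk.sdk.api40 import models

# init40() builds a fully configured client; "looker.ini" is a placeholder config file.
sdk = looker_sdk.init40("looker.ini")

plan_to_send = models.WriteScheduledPlan(name="example-plan")  # placeholder body
resp = sdk.scheduled_plan_run_once(body=plan_to_send)
```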
https://github.com/apache/airflow/issues/30169
https://github.com/apache/airflow/pull/34678
3623b77d22077b4f78863952928560833bfba2f4
562b98a6222912d3a3d859ca3881af3f768ba7b5
"2023-03-17T18:50:15Z"
python
"2023-10-02T20:31:07Z"
closed
apache/airflow
https://github.com/apache/airflow
30,167
["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"]
SSHOperator - Allow specific command timeout
### Description Following #29282, command timeout is set at the `SSHHook` level while it used to be possible to set it at the `SSHOperator` level. I will work on a PR as soon as I can. ### Use case/motivation Ideally, I think we could have a default value set on `SSHHook`, but with the possibility of overriding it at the `SSHOperator` level. ### Related issues #29282 ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
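A hedged sketch of the behaviour the issue asks for: a default on the hook plus a per-task override on the operator. The `cmd_timeout` name follows #29282; whether the operator accepts it again is exactly what the linked PR is about, so treat the operator-level argument as an assumption.

```python
from airflow.providers.ssh.hooks.ssh import SSHHook
from airflow.providers.ssh.operators.ssh import SSHOperator

# Hook-level default (hook parameter introduced by #29282).
hook = SSHHook(ssh_conn_id="ssh_default", cmd_timeout=30)

# Proposed: per-task override of the hook default (assumption, per this issue).
long_job = SSHOperator(
    task_id="long_running_command",
    ssh_hook=hook,
    command="./run_batch_job.sh",
    cmd_timeout=600,
)
```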
https://github.com/apache/airflow/issues/30167
https://github.com/apache/airflow/pull/30190
2a42cb46af66c7d6a95a718726cb9206258a0c14
fe727f985b1053b838433b817458517c0c0f2480
"2023-03-17T15:56:30Z"
python
"2023-03-21T20:32:15Z"
closed
apache/airflow
https://github.com/apache/airflow
30,153
["airflow/providers/neo4j/hooks/neo4j.py", "tests/providers/neo4j/hooks/test_neo4j.py"]
Issue with Neo4j provider using some schemes
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Hi, I've run into some issues when using the neo4j operator. I've tried running a simple query and got an exception from the driver itself. **Using: Airflow 2.2.2** ### What you think should happen instead The exception stated that when using bolt+ssc URI scheme, it is not allowed to use the `encrypted` parameter which is mandatory in the hook (but actually not mandatory when using the driver standalone). The exception: neo4j.exceptions.ConfigurationError: The config settings "encrypted", "trust", "trusted_certificates", and "ssl_context" can only be used with the URI schemes ['bolt', 'neo4j']. Use the other URI schemes ['bolt+ssc', 'bolt+s', 'neo4j+ssc', 'neo4j+s'] for setting encryption settings. In my opinion: if there's a URI scheme with bolt+ssc, and a GraphDatabase.driver was chosen in the connection settings, it should not be used with the `encrypted` parameter. I did edit the hook myself and tried this, worked great for me. ### How to reproduce install the neo4j provider (I used v3.1.0) Create a neo4j connection in the UI. Add your host, user/login, password and extras. In the extras: { "encrypted": false, "neo4j_scheme": false, "certs_self_signed": true } ### Operating System Linux ### Versions of Apache Airflow Providers pyairtable==1.0.0 tableauserverclient==0.17.0 apache-airflow-providers-mysql==2.1.1 apache-airflow-providers-salesforce==3.3.0 apache-airflow-providers-slack==4.1.0 apache-airflow-providers-tableau==2.1.2 apache-airflow-providers-postgres==2.3.0 apache-airflow-providers-jdbc==2.0.1 apache-airflow-providers-neo4j==3.1.0 mysql-connector-python==8.0.27 slackclient>=1.0.0,<2.0.0 boto3==1.20.26 cached-property==1.5.2 ### Deployment Amazon (AWS) MWAA ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
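A minimal sketch of the conditional the reporter describes as the fix (not the provider's actual hook code): pass `encrypted` only for the plain `bolt`/`neo4j` schemes, since the `+s`/`+ssc` variants already encode the encryption settings in the URI.

```python
from neo4j import GraphDatabase


def build_driver(uri: str, user: str, password: str, encrypted: bool):
    # Schemes like bolt+s, bolt+ssc, neo4j+s and neo4j+ssc reject explicit
    # encryption settings, so only plain bolt/neo4j get the `encrypted` kwarg.
    scheme = uri.split("://", 1)[0]
    if scheme in ("bolt", "neo4j"):
        return GraphDatabase.driver(uri, auth=(user, password), encrypted=encrypted)
    return GraphDatabase.driver(uri, auth=(user, password))
```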
https://github.com/apache/airflow/issues/30153
https://github.com/apache/airflow/pull/30418
93a5422c5677a42b3329c329d65ff2b38b1348c2
cd458426c66aca201e43506c950ee68c2f6c3a0a
"2023-03-16T19:47:42Z"
python
"2023-04-21T22:01:31Z"
closed
apache/airflow
https://github.com/apache/airflow
30,124
["airflow/models/taskinstance.py", "airflow/utils/state.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/models/test_cleartasks.py", "tests/models/test_dagrun.py"]
DagRun's start_date updated when user clears task of the running Dagrun
### Apache Airflow version 2.5.1 ### What happened DagRun state and start_date are reset if somebody is clearing a task of the running DagRun. ### What you think should happen instead I think we should not reset DagRun `state` and `start_date` if it's in the running or queued states because it doesn't make any sense to me. `state` and `start_date` of the DagRun should remain the same in case somebody's clearing a task of the running DagRun. ### How to reproduce Let's say we have a Dag with 2 tasks in it - short one and the long one: ``` dag = DAG( 'dummy-dag', schedule_interval='@once', catchup=False, ) DagContext.push_context_managed_dag(dag) bash_success = BashOperator( task_id='bash-success', bash_command='echo "Start and finish"; exit 0', retries=0, ) date_ind_success = BashOperator( task_id='bash-long-success', bash_command='echo "Start and finish"; sleep 300; exit 0', ) ``` Let's say we have a running Dagrun of this DAG. First task finishes in a second and the long one is still running. We have a start_date and duration set and the Dagrun is still running. It runs, for example, for 30 secs (pic 1 and 2) <img width="486" alt="image" src="https://user-images.githubusercontent.com/23456894/225335210-c2223ad1-771b-459d-b8ed-8f0aacb9b890.png"> <img width="492" alt="image" src="https://user-images.githubusercontent.com/23456894/225335272-ad737aef-2051-4e27-ae36-38c76d720c95.png"> Then we are clearing the short task. It causes clear of the Dagrun state (to `queued`) and clears `start_date` like we have a new Dagrun (pic 3 and 4) <img width="407" alt="image" src="https://user-images.githubusercontent.com/23456894/225335397-6c7e0df7-a26a-46ed-8eaa-56ff928fc01a.png"> <img width="498" alt="image" src="https://user-images.githubusercontent.com/23456894/225335491-4d6a860a-e923-4878-b212-a6ccb4b590a3.png"> ### Operating System Unix/MacOS ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30124
https://github.com/apache/airflow/pull/30125
0133f6806dbfb60b84b5bea4ce0daf073c246d52
070ecbd87c5ac067418b2814f554555da0a4f30c
"2023-03-15T14:26:30Z"
python
"2023-04-26T15:27:48Z"
closed
apache/airflow
https://github.com/apache/airflow
30,089
["airflow/www/views.py", "tests/www/views/test_views_rendered.py"]
Connection password values appearing unmasked in the "Task Instance Details" -> "Environment" field
### Apache Airflow version Airflow 2.5.1 ### What happened Connection password values appearing in the "Task Instance Details" -> "Task Attributes" -> environment field. We are setting environment variables for the docker_operator with values from the password field in a connection. The values from the password field are masked in the "Rendered Template" section and in the logs but it's showing the values in the "environment" field under Task Instance Details. ### What you think should happen instead These password values should be masked like they are in the "Rendered Template" and logs. ### How to reproduce Via this DAG, can run off any image. Create a connection called "DATABASE_CONFIG" with a password in the password field. Run this DAg and then check its Task Instance Details. DAG Code: ``` from airflow import DAG from docker.types import Mount from airflow.providers.docker.operators.docker import DockerOperator from datetime import timedelta from airflow.models import Variable from airflow.hooks.base_hook import BaseHook import pendulum import json # Amount of times to retry job on failure retries = 0 environment_config = { "DB_WRITE_PASSWORD": BaseHook.get_connection("DATABASE_CONFIG").password, } # Setup default args for the job default_args = { "owner": "airflow", "start_date": pendulum.datetime(2023, 1, 1, tz="Australia/Sydney"), "retries": retries, } # Create the DAG dag = DAG( "test_dag", # DAG ID default_args=default_args, schedule_interval="* * * * *", catchup=False, ) # # Create the DAG object with dag as dag: docker_task = DockerOperator( task_id="task", image="<image>", execution_timeout=timedelta(minutes=2), environment=environment_config, command="<command>", api_version="auto", docker_url="tcp://docker.for.mac.localhost:2375", ) ``` Rendered Template is good: ![image](https://user-images.githubusercontent.com/41356007/224928676-4c1de3d9-90dc-40dc-bb27-aa10661537ba.png) In "Task Instance Details" ![image](https://user-images.githubusercontent.com/41356007/224928510-0dc4fc40-f675-49fd-a299-2c2f42feef5b.png) ### Operating System centOS Linux and MAC ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details Running on a docker via the airflow docker-compose ### Anything else _No response_ ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30089
https://github.com/apache/airflow/pull/31125
db359ee2375dd7208583aee09b9eae00f1eed1f1
ffe3a68f9ada2d9d35333d6a32eac2b6ac9c70d6
"2023-03-14T04:35:49Z"
python
"2023-05-08T14:59:58Z"
closed
apache/airflow
https://github.com/apache/airflow
30,075
["airflow/api_connexion/openapi/v1.yaml"]
Unable to set DagRun state in create Dagrun endpoint ("Property is read-only - 'state'")
### Apache Airflow version main (development) ### What happened While working on another change I noticed that the example [POST from the API docs](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/post_dag_run) actually leads to a request Error: ``` curl -X POST -H "Cookie: session=xxxx" localhost:8080/api/v1/dags/data_warehouse_dag_5by1a2rogu/dagRuns -d '{"dag_run_id":"string2","logical_date":"2019-08-24T14:15:24Z","execution_date":"2019-08-24T14:15:24Z","conf":{},"state":"queued","note":"strings"}' -H 'Content-Type: application/json' { "detail": "Property is read-only - 'state'", "status": 400, "title": "Bad Request", "type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest" } ``` I believe that this comes from the DagRunSchema marking this field as dump_only: https://github.com/apache/airflow/blob/478fd826522b6192af6b86105cfa0686583e34c2/airflow/api_connexion/schemas/dag_run_schema.py#L69 So either - 1) The documentation / API spec is incorrect and this field cannot be set in the request 2) The marshmallow schema is incorrect and this field is incorrectly marked as `dump_only` I think that its the former, as there's [even a test to ensure that this field can't be set in a request](https://github.com/apache/airflow/blob/751a995df55419068f11ebabe483dba3302916ed/tests/api_connexion/endpoints/test_dag_run_endpoint.py#L1247-L1257) - I can look into this and fix it soon. ### What you think should happen instead The API should accept requested which follow examples from the documentation. ### How to reproduce Spin up breeze and POST a create dagrun request which attempts to set the DagRun state. ### Operating System Breeze ### Versions of Apache Airflow Providers _No response_ ### Deployment Other ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30075
https://github.com/apache/airflow/pull/30149
f01140141f1fe51b6ee1eba5b02ab7516a67c9c7
e01c14661a4ec4bee3a2066ac1323fbd8a4386f1
"2023-03-13T17:28:20Z"
python
"2023-03-21T18:26:51Z"
closed
apache/airflow
https://github.com/apache/airflow
30,073
["airflow/models/taskinstance.py", "tests/ti_deps/deps/test_trigger_rule_dep.py"]
Task group expand fails on empty list at get_relevant_upstream_map_indexes
### Apache Airflow version 2.5.1 ### What happened Expanding of task group fails when the list is empty and there is a task which references mapped index in xcom pull of that group. ![image](https://user-images.githubusercontent.com/114723574/224769499-4a094b0c-8bbe-455f-9034-70c1cbfe2e3a.png) throws below error Traceback (most recent call last): File "/opt/bitnami/airflow/venv/bin/airflow", line 8, in <module> sys.exit(main()) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/__main__.py", line 39, in main args.func(args) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command return func(*args, **kwargs) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/cli.py", line 108, in wrapper return f(*args, **kwargs) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 73, in scheduler _run_scheduler_job(args=args) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 43, in _run_scheduler_job job.run() File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 258, in run self._execute() File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 759, in _execute self._run_scheduler_loop() File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 885, in _run_scheduler_loop num_queued_tis = self._do_scheduling(session) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 964, in _do_scheduling callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/retries.py", line 78, in wrapped_function for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs): File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 384, in __iter__ do = self.iter(retry_state=retry_state) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 351, in iter return fut.result() File "/opt/bitnami/python/lib/python3.9/concurrent/futures/_base.py", line 439, in result return self.__get_result() File "/opt/bitnami/python/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result raise self._exception File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/retries.py", line 87, in wrapped_function return func(*args, **kwargs) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1253, in _schedule_all_dag_runs callback_to_run = self._schedule_dag_run(dag_run, session) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1322, in _schedule_dag_run schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper return func(*args, **kwargs) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/dagrun.py", line 563, in update_state info = self.task_instance_scheduling_decisions(session) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper return func(*args, **kwargs) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in 
task_instance_scheduling_decisions schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis( File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/dagrun.py", line 793, in _get_ready_tis if not schedulable.are_dependencies_met(session=session, dep_context=dep_context): File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper return func(*args, **kwargs) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1070, in are_dependencies_met for dep_status in self.get_failed_dep_statuses(dep_context=dep_context, session=session): File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1091, in get_failed_dep_statuses for dep_status in dep.get_dep_statuses(self, session, dep_context): File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/base_ti_dep.py", line 107, in get_dep_statuses yield from self._get_dep_statuses(ti, session, cxt) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/trigger_rule_dep.py", line 93, in _get_dep_statuses yield from self._evaluate_trigger_rule(ti=ti, dep_context=dep_context, session=session) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/trigger_rule_dep.py", line 219, in _evaluate_trigger_rule .filter(or_(*_iter_upstream_conditions())) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/trigger_rule_dep.py", line 191, in _iter_upstream_conditions map_indexes = _get_relevant_upstream_map_indexes(upstream_id) File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/trigger_rule_dep.py", line 138, in _get_relevant_upstream_map_indexes return ti.get_relevant_upstream_map_indexes( File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2652, in get_relevant_upstream_map_indexes ancestor_map_index = self.map_index * ancestor_ti_count // ti_count ### What you think should happen instead In case of empty list all the task group should be skipped ### How to reproduce from airflow.operators.bash import BashOperator from airflow.operators.python import get_current_context import pendulum from airflow.decorators import dag, task, task_group from airflow.operators.empty import EmptyOperator @dag(dag_id="test", start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), schedule=None, catchup=False, render_template_as_native_obj=True ) def testdag(): task1 =EmptyOperator(task_id="get_attribute_can_json_mapping") @task def lkp_schema_output_mapping(**context): return 1 @task def task2(**context): return 2 @task def task3(table_list, **context): return [] [task2(), task1, group2.expand(file_name=task3(table_list=task2()))] @task_group( group_id="group2" ) def group2(file_name): @task def get_table_name(name): return "testing" table_name = get_table_name(file_name) run_this = BashOperator( task_id="run_this", bash_command="echo {{task_instance.xcom_pull(task_ids='copy_to_staging.get_table_name'," "map_indexes=task_instance.map_index)}}", ) table_name >> run_this dag = testdag() if __name__ == "__main__": dag.test() ### Operating System Debian GNU/Linux 11 ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==7.1.0 apache-airflow-providers-apache-cassandra==3.1.0 apache-airflow-providers-apache-drill==2.3.1 apache-airflow-providers-apache-druid==3.3.1 apache-airflow-providers-apache-hdfs==3.2.0 
apache-airflow-providers-apache-hive==5.1.1 apache-airflow-providers-apache-pinot==4.0.1 apache-airflow-providers-arangodb==2.1.0 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cloudant==3.1.0 apache-airflow-providers-cncf-kubernetes==5.1.1 apache-airflow-providers-common-sql==1.3.3 apache-airflow-providers-databricks==4.0.0 apache-airflow-providers-docker==3.4.0 apache-airflow-providers-elasticsearch==4.3.3 apache-airflow-providers-exasol==4.1.3 apache-airflow-providers-ftp==3.3.0 apache-airflow-providers-google==8.8.0 apache-airflow-providers-grpc==3.1.0 apache-airflow-providers-hashicorp==3.2.0 apache-airflow-providers-http==4.1.1 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-influxdb==2.1.0 apache-airflow-providers-microsoft-azure==5.1.0 apache-airflow-providers-microsoft-mssql==3.3.2 apache-airflow-providers-mongo==3.1.1 apache-airflow-providers-mysql==4.0.0 apache-airflow-providers-neo4j==3.2.1 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-presto==4.2.1 apache-airflow-providers-redis==3.1.0 apache-airflow-providers-sendgrid==3.1.0 apache-airflow-providers-sftp==4.2.1 apache-airflow-providers-slack==7.2.0 apache-airflow-providers-sqlite==3.3.1 apache-airflow-providers-ssh==3.4.0 apache-airflow-providers-trino==4.3.1 apache-airflow-providers-vertica==3.3.1 ### Deployment Other ### Deployment details _No response_ ### Anything else I have manually changed below in the taskinstance.py(get_relevant_upstream_map_indexes method) and it ran fine. Please check if you can implement the same if ti_count is None or ti_count == 0: return None ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30073
https://github.com/apache/airflow/pull/30084
66b5f90f4536329ba1fe0e54e3f15ec98c1e2730
8d22828e2519a356e9e38c78c3efee1d13b45675
"2023-03-13T16:55:34Z"
python
"2023-03-15T22:58:24Z"
closed
apache/airflow
https://github.com/apache/airflow
30,071
["chart/templates/cleanup/cleanup-cronjob.yaml", "chart/templates/statsd/statsd-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_cleanup_pods.py", "tests/charts/test_statsd.py"]
Helm Chart: allow setting annotations for resource controllers (CronJob, Deployment)
### Description The Helm Chart allows setting annotations for the pods created by the `CronJob` [but not the CronJob controller itself](https://github.com/apache/airflow/blob/helm-chart/1.8.0/chart/templates/cleanup/cleanup-cronjob.yaml). The values file should offer an option to provide custom annotations for the `CronJob` controller, similarly to how the DB migrations job exposes `.Values.migrateDatabaseJob.jobAnnotations` In the same fashion, other `Deployment` templates expose custom annotations, but [statsd deployment doesn't](https://github.com/apache/airflow/blob/helm-chart/1.8.0/chart/templates/statsd/statsd-deployment.yaml). ### Use case/motivation Other tools e.g. ArgoCD may require the use of annotations, for example: * [ArgoCD Sync Options](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options) * [ArgoCD Sync Phases and Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/) Example use case: _Set the cleanup CronJob to be synced after the webserver and scheduler deployments have been synced with ArgoCD_ ### Related issues https://github.com/apache/airflow/issues/25446 originally mentioned the issue regarding the StatsD deployment, but the accepted fix was https://github.com/apache/airflow/pull/25732 which allows setting annotations for the pod template, not the `Deployment` itself ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30071
https://github.com/apache/airflow/pull/30126
c1aa4b9500f417e6669a79fbf59c11ae6e6993a2
8b634ffa6aa5a83e1f87f1a62bfa07e78147f5c5
"2023-03-13T12:13:38Z"
python
"2023-03-16T19:09:20Z"
closed
apache/airflow
https://github.com/apache/airflow
30,042
["airflow/www/utils.py", "airflow/www/views.py"]
Search/filter by note in List Dag Run
### Description Going to Airflow web UI, Browse>DAG Run displays the list of runs, but there is no way to search or filter based on the text in the "Note" column. ### Use case/motivation It is possible to do a free text search for the "Run Id" field. The Note field may contain pieces of information that may be relevant to find, or to filter on the basis of these notes. ### Related issues Sorting by Note in List Dag Run fails: https://github.com/apache/airflow/issues/30041 ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30042
https://github.com/apache/airflow/pull/31455
f00c131cbf5b2c19c817d1a1945326b80f8c79e7
5794393c95156097095e6fbf76d7faeb6ec08072
"2023-03-11T14:16:02Z"
python
"2023-05-25T18:17:15Z"
closed
apache/airflow
https://github.com/apache/airflow
30,023
["docs/apache-airflow/best-practices.rst"]
Variable with template is ambiguous, especially for new users
### What do you see as an issue? In the doc below, it states `Make sure to use variable with template in operator, not in the top level code.` https://github.com/apache/airflow/blob/main/docs/apache-airflow/best-practices.rst It then gives this example as a Good Example. **Good Example** ``` bash_use_variable_good = BashOperator( task_id="bash_use_variable_good", bash_command="echo variable foo=${foo_env}", env={"foo_env": "{{ var.value.get('foo') }}"}, ) ``` example below, since `{{ var.value.get('foo') }}` is in the top level code (since the `__init__` method is run every time the dag file is parsed. This can be ambiguous for users, especially new users, to understand the true difference between templated and non-templated variables. The difference between the two examples below isn't that one of them is using top-level code and the other isn't, it's that one is jinja templated and the other isn't. There is a great opportunity here to showcase the utility of jinja templating. ``` bash_use_variable_bad_3 = BashOperator( task_id="bash_use_variable_bad_3", bash_command="echo variable foo=${foo_env}", env={"foo_env": Variable.get("foo")}, # DON'T DO THAT ) ``` and ``` bash_use_variable_good = BashOperator( task_id="bash_use_variable_good", bash_command="echo variable foo=${foo_env}", env={"foo_env": "{{ var.value.get('foo') }}"}, ) ``` ### Solving the problem Replacing `Make sure to use variable with template in operator, not in the top level code.` with a sentence that is more in line with the examples following it will not only show alignment but also highlight the benefits of jinja templating in top level code. Perhaps: ``` In top-level code, variables using jinja templates do not produce a request until runtime, whereas, `Variable.get()` produces a request every time the dag file is parsed by the scheduler. This will lead to suboptimal performance for the scheduler and can cause the dag file to timeout before it is fully parsed. ``` ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30023
https://github.com/apache/airflow/pull/30040
8d22828e2519a356e9e38c78c3efee1d13b45675
f1e40cf799c5ae73ec6f7991efe604f2088d8622
"2023-03-10T14:13:32Z"
python
"2023-03-16T00:06:35Z"
closed
apache/airflow
https://github.com/apache/airflow
30,010
["airflow/providers/snowflake/CHANGELOG.rst", "airflow/providers/snowflake/operators/snowflake.py"]
SnowflakeOperator default autocommit flipped to False
### Apache Airflow Provider(s) snowflake ### Versions of Apache Airflow Providers This started with apache-airflow-providers-snowflake==4.0.0 and is still an issue with 4.0.4 ### Apache Airflow version 2.5.1 ### Operating System Debian GNU/Linux 11 (bullseye) ### Deployment Astronomer ### Deployment details This is affecting both local and hosted deployments ### What happened We are testing out several updated packages, and one thing that broke was the SnowflakeOperator when it was executing a stored procedure. The specific error points to autocommit being set to False: `Stored procedure execution error: Scoped transaction started in stored procedure is incomplete and it was rolled back.` Whereas this used to work in version 3.2.0: ``` copy_data_snowflake = SnowflakeOperator( task_id=f'copy_{table_name}_snowflake', sql=query, ) ``` In order for it to work now, we have to specify autocommit=True: ``` copy_data_snowflake = SnowflakeOperator( task_id=f'copy_{table_name}_snowflake', sql=query, autocommit=True, ) ``` [The code](https://github.com/apache/airflow/blob/599c587e26d5e0b8fa0a0967f3dc4fa92d257ed0/airflow/providers/snowflake/operators/snowflake.py#L45) still indicates that the default is True, but I believe [this commit](https://github.com/apache/airflow/commit/ecd4d6654ff8e0da4a7b8f29fd23c37c9c219076#diff-e9f45fcabfaa0f3ed0c604e3bf2215fed1c9d3746e9c684b89717f9cd75f1754L98) broke it. ### What you think should happen instead The default for autocommit should revert to the previous behavior, matching the documentation. ### How to reproduce In Snowflake: ``` CREATE OR REPLACE TABLE PUBLIC.FOO (BAR VARCHAR); CREATE OR REPLACE PROCEDURE PUBLIC.FOO() RETURNS VARCHAR LANGUAGE SQL AS $$ INSERT INTO PUBLIC.FOO VALUES('bar'); $$ ; ``` In Airflow, this fails: ``` copy_data_snowflake = SnowflakeOperator( task_id='call_foo', sql="call public.foo()", ) ``` But this succeeds: ``` copy_data_snowflake = SnowflakeOperator( task_id='call_foo', sql="call public.foo()", autocommit=True, ) ``` ### Anything else It looks like this may be an issue with stored procedures specifically. If I instead do this: ``` copy_data_snowflake = SnowflakeOperator( task_id='call_foo', sql="INSERT INTO PUBLIC.FOO VALUES('bar');", ) ``` The logs show that although autocommit is confusingly set to False, a `COMMIT` statement is executed: ``` [2023-03-09, 18:43:09 CST] {cursor.py:727} INFO - query: [ALTER SESSION SET autocommit=False] [2023-03-09, 18:43:09 CST] {cursor.py:740} INFO - query execution done [2023-03-09, 18:43:09 CST] {cursor.py:878} INFO - Number of results in first chunk: 1 [2023-03-09, 18:43:09 CST] {sql.py:375} INFO - Running statement: INSERT INTO PUBLIC.FOO VALUES('bar');, parameters: None [2023-03-09, 18:43:09 CST] {cursor.py:727} INFO - query: [INSERT INTO PUBLIC.FOO VALUES('bar');] [2023-03-09, 18:43:09 CST] {cursor.py:740} INFO - query execution done [2023-03-09, 18:43:09 CST] {sql.py:384} INFO - Rows affected: 1 [2023-03-09, 18:43:09 CST] {snowflake.py:380} INFO - Rows affected: 1 [2023-03-09, 18:43:09 CST] {snowflake.py:381} INFO - Snowflake query id: 01aad76b-0606-feb5-0000-26b511d0ba02 [2023-03-09, 18:43:09 CST] {cursor.py:727} INFO - query: [COMMIT] ``` ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/30010
https://github.com/apache/airflow/pull/30020
26c6a1c11bcd463d1923bbd9622cbe0682bc9e8a
b9c231ceb0f3053a27744b80e95f08ac0684fe38
"2023-03-10T01:05:10Z"
python
"2023-03-10T17:47:46Z"
closed
apache/airflow
https://github.com/apache/airflow
29,980
["airflow/providers/microsoft/azure/hooks/data_lake.py"]
ADLS Gen2 Hook incorrectly forms account URL when using Active Directory authentication method (Azure Data Lake Storage V2)
### Apache Airflow Provider(s) microsoft-azure ### Versions of Apache Airflow Providers apache-airflow-providers-microsoft-azure 5.2.1 ### Apache Airflow version 2.5.1 ### Operating System Ubuntu 18.04 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened When attempting to use Azure Active Directory application to connect to Azure Data Lake Storage Gen2 hook, the generated account URL sent to the DataLakeServiceClient is incorrect. It substitutes in the Client ID (`login` field) where the storage account name should be. ### What you think should happen instead The `host` field on the connection form should be used to store the storage account name and should be used to fill the account URL for both Active Directory and Key-based authentication. ### How to reproduce 1. Create an "Azure Data Lake Storage V2" connection (adls) and put the AAD application Client ID into `login` field, Client secret into `password` field and Tenant ID into `tenant_id` field. 2. Attempt to perform any operations with the `AzureDataLakeStorageV2Hook` hook. 3. Notice how it fails, and that the URL in the logs is incorrectly `https://{client_id}.dfs.core.windows.net/...`, when it should be `https://{storage_account}.dfs.core.windows.net/...` This can be fixed by: 1. Making your own copy of the hook. 2. Entering the storage account name into the `host` field (currently labelled "Account Name (Active Directory Auth)"). 3. Editing the `get_conn` method to substitute `conn.host` into the `account_url` (instead of `conn.login`). ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
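A minimal sketch of the workaround described above — building the account URL from `conn.host` instead of `conn.login`. This is an assumption-based illustration, not the provider's actual fix; in particular, the `tenant_id` extra key and the use of the `host` field for the storage account name are assumptions.

```python
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

from airflow.hooks.base import BaseHook


def get_adls_client(conn_id: str) -> DataLakeServiceClient:
    """Build a DataLakeServiceClient using conn.host as the storage account name."""
    conn = BaseHook.get_connection(conn_id)
    # Assumption: the tenant id is stored in the connection extras under "tenant_id".
    tenant_id = conn.extra_dejson.get("tenant_id")
    # Assumption: the storage account name is stored in the connection's `host` field.
    account_url = f"https://{conn.host}.dfs.core.windows.net"
    credential = ClientSecretCredential(
        tenant_id=tenant_id,
        client_id=conn.login,
        client_secret=conn.password,
    )
    return DataLakeServiceClient(account_url=account_url, credential=credential)
```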
https://github.com/apache/airflow/issues/29980
https://github.com/apache/airflow/pull/29981
def1f89e702d401f67a94f34a01f6a4806ea92e6
008f52444a84ceaa2de7c2166b8f253f55ca8c21
"2023-03-08T15:42:36Z"
python
"2023-03-10T12:11:28Z"
closed
apache/airflow
https://github.com/apache/airflow
29,967
["chart/dockerfiles/pgbouncer-exporter/build_and_push.sh", "chart/dockerfiles/pgbouncer/build_and_push.sh", "chart/newsfragments/30054.significant.rst"]
Build our supporting images for chart in multi-platform versions
### Body Our supporting images are currently built for a single platform only, but they could be multi-platform. The scripts used to build them should be updated to support multi-platform builds. ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
https://github.com/apache/airflow/issues/29967
https://github.com/apache/airflow/pull/30054
5a3be7256b2a848524d3635d7907b6829a583101
39cfc67cad56afa3b2434bc8e60bcd0676d41fc1
"2023-03-08T00:22:45Z"
python
"2023-03-15T22:19:52Z"
closed
apache/airflow
https://github.com/apache/airflow
29,960
["airflow/providers/amazon/aws/hooks/glue.py", "airflow/providers/amazon/aws/operators/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py"]
GlueJobOperator failing with Invalid type for parameter RoleName after updating provider version.
### Apache Airflow Provider(s) amazon ### Versions of Apache Airflow Providers apache-airflow-providers-amazon = "7.3.0" ### Apache Airflow version 2.5.1 ### Operating System Debian GNU/Linux ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened After updating the provider version to 7.3.0 from 6.0.0, our glue jobs started failing. We currently use the GlueJobOperator to run existing Glue jobs that we manage in Terraform. The full traceback is below: ``` Traceback (most recent call last): File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/operators/glue.py", line 150, in execute glue_job_run = glue_job.initialize_job(self.script_args, self.run_job_kwargs) File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 165, in initialize_job job_name = self.create_or_update_glue_job() File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 325, in create_or_update_glue_job config = self.create_glue_job_config() File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 108, in create_glue_job_config execution_role = self.get_iam_execution_role() File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 143, in get_iam_execution_role glue_execution_role = iam_client.get_role(RoleName=self.role_name) File "/home/airflow/.local/lib/python3.9/site-packages/botocore/client.py", line 530, in _api_call return self._make_api_call(operation_name, kwargs) File "/home/airflow/.local/lib/python3.9/site-packages/botocore/client.py", line 919, in _make_api_call request_dict = self._convert_to_request_dict( File "/home/airflow/.local/lib/python3.9/site-packages/botocore/client.py", line 990, in _convert_to_request_dict request_dict = self._serializer.serialize_to_request( File "/home/airflow/.local/lib/python3.9/site-packages/botocore/validate.py", line 381, in serialize_to_request raise ParamValidationError(report=report.generate_report()) botocore.exceptions.ParamValidationError: Parameter validation failed: Invalid type for parameter RoleName, value: None, type: <class 'NoneType'>, valid types: <class 'str'> ``` ### What you think should happen instead The operator creates a new job run for a glue job without additional configuration. ### How to reproduce Create a DAG with a GlueJobOperator without using `iam_role_name`. Example: ```python task = GlueJobOperator(task_id="glue-task", job_name=<glue-job-name>) ``` ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
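A possible interim workaround (an assumption, not the confirmed fix) is to pass the role explicitly so the hook never calls `get_role(RoleName=None)`; the job and role names below are placeholders:

```python
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# Hypothetical example: supplying iam_role_name explicitly so the hook
# does not call iam_client.get_role(RoleName=None).
task = GlueJobOperator(
    task_id="glue-task",
    job_name="my-existing-glue-job",         # placeholder job name
    iam_role_name="my-glue-execution-role",  # placeholder IAM role name
)
```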
https://github.com/apache/airflow/issues/29960
https://github.com/apache/airflow/pull/30162
fe727f985b1053b838433b817458517c0c0f2480
46d9a0c294ea72574a79f0fb567eb9dc97cf96c1
"2023-03-07T16:44:40Z"
python
"2023-03-21T20:50:19Z"
closed
apache/airflow
https://github.com/apache/airflow
29,959
["airflow/jobs/local_task_job_runner.py", "airflow/jobs/scheduler_job_runner.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "airflow/serialization/pydantic/job.py"]
expand dynamic mapped tasks in batches
### Description Expand tasks in batches so that mapped tasks can spawn more than 1024 processes. ### Use case/motivation The maximum length of a mapped list is limited to 1024 by `max_map_length (AIRFLOW__CORE__MAX_MAP_LENGTH)`. During scheduling of the new tasks, an UPDATE query is run that tries to set all the new tasks at once. Increasing `max_map_length` beyond 4K makes the Airflow scheduler completely unresponsive. Also, Postgres throws a `stack depth limit exceeded` error, which can be fixed by updating to a newer version and setting `max_stack_depth` higher, but that doesn't really matter because the Airflow scheduler freezes up anyway. As a workaround, I split the DAG runs into sub-DAG runs, which works, but it would be much nicer if we didn't have to worry about exceeding `max_map_length`. ### Related issues It was discussed here: [Increasing 'max_map_length' leads to SQL 'max_stack_depth' error with 5000 dags to be spawned #28478](https://github.com/apache/airflow/discussions/28478) ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29959
https://github.com/apache/airflow/pull/30372
5f2628d36cb8481ee21bd79ac184fd8fdce3e47d
ed39b6fab7a241e2bddc49044c272c5f225d6692
"2023-03-07T16:12:04Z"
python
"2023-04-22T19:10:56Z"
closed
apache/airflow
https://github.com/apache/airflow
29,958
["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"]
GCSToBigQueryOperator does not respect the destination project ID
### Apache Airflow Provider(s) google ### Versions of Apache Airflow Providers apache-airflow-providers-google==8.10.0 ### Apache Airflow version 2.3.4 ### Operating System Ubuntu 18.04.6 LTS ### Deployment Google Cloud Composer ### Deployment details Google Cloud Composer 2.1.2 ### What happened [`GCSToBigQueryOperator`](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L58) does not respect the BigQuery project ID specified in [`destination_project_dataset_table`](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L74-L77) argument. Instead, it prioritizes the project ID defined in the [Airflow connection](https://i.imgur.com/1tTIlQF.png). ### What you think should happen instead The project ID specified via [`destination_project_dataset_table`](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L74-L77) should be respected. **Use case:** Suppose our Composer environment and service account (SA) live in `project-A`, and we want to transfer data into foreign projects `B`, `C`, and `D`. We don't have credentials (and thus don't have Airflow connections defined) for projects `B`, `C`, and `D`. Instead, all transfers are executed by our singular SA in `project-A`. (Assume this SA has cross-project IAM policies). Thus, we want to use a _single_ SA and _single_ [Airflow connection](https://i.imgur.com/1tTIlQF.png) (i.e. `gcp_conn_id=google_cloud_default`) to send data into 3+ destination projects. I imagine this is a fairly common setup for sending data across GCP projects. **Root cause:** I've been studying the source code, and I believe the bug is caused by [line 309](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L309). Experimentally, I have verified that `hook.project_id` traces back to the [Airflow connection's project ID](https://i.imgur.com/1tTIlQF.png). If no destination project ID is explicitly specified, then it makes sense to _fall back_ on the connection's project. However, if the destination project is explicitly provided, surely the operator should honor that. I think this bug can be fixed by amending line 309 as follows: ```python project=passed_in_project or hook.project_id ``` This pattern is used successfully in many other areas of the repo: [example](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/operators/gcs.py#L154). ### How to reproduce Admittedly, this bug is difficult to reproduce, because it requires two GCP projects, i.e. a service account in `project-A`, and inbound GCS files and a destination BigQuery table in `project-B`. Also, you need an Airflow server with a `google_cloud_default` connection that points to `project-A` like [this](https://i.imgur.com/1tTIlQF.png). 
Assuming all that exists, the bug can be reproduced via the following Airflow DAG: ```python from airflow import DAG from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator from datetime import datetime GCS_BUCKET='my_bucket' GCS_PREFIX='path/to/*.json' BQ_PROJECT='project-B' BQ_DATASET='my_dataset' BQ_TABLE='my_table' SERVICE_ACCOUNT='my_account@project-A.iam.gserviceaccount.com' with DAG( dag_id='my_dag', start_date=datetime(2023, 1, 1), schedule_interval=None, ) as dag: task = GCSToBigQueryOperator( task_id='gcs_to_bigquery', bucket=GCS_BUCKET, source_objects=GCS_PREFIX, source_format='NEWLINE_DELIMITED_JSON', destination_project_dataset_table='{}.{}.{}'.format(BQ_PROJECT, BQ_DATASET, BQ_TABLE), impersonation_chain=SERVICE_ACCOUNT, ) ``` Stack trace: ``` Traceback (most recent call last): File "/opt/python3.8/lib/python3.8/site-packages/airflow/executors/debug_executor.py", line 79, in _run_task ti.run(job_id=ti.job_id, **params) File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper return func(*args, session=session, **kwargs) File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1797, in run self._run_raw_task( File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 68, in wrapper return func(*args, **kwargs) File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1464, in _run_raw_task self._execute_task_with_callbacks(context, test_mode) File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1612, in _execute_task_with_callbacks result = self._execute_task(context, task_orig) File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1673, in _execute_task result = execute_callable(context=context) File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 387, in execute job = self._submit_job(self.hook, job_id) File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 307, in _submit_job return hook.insert_job( File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 468, in inner_wrapper return func(self, *args, **kwargs) File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 1549, in insert_job job._begin() File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/bigquery/job/base.py", line 510, in _begin api_response = client._call_api( File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/bigquery/client.py", line 782, in _call_api return call() File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/retry.py", line 283, in retry_wrapped_func return retry_target( File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/retry.py", line 190, in retry_target return target() File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/_http/__init__.py", line 494, in api_request raise exceptions.from_http_response(response) google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/{project-A}/jobs?prettyPrint=false: Access Denied: Project {project-A}: User does not have bigquery.jobs.create permission in project {project-A}. ``` From the stack trace, notice the operator is (incorrectly) attempting to insert into `project-A` rather than `project-B`. 
### Anything else Perhaps out-of-scope, but the inverse direction also suffers from this same problem, i.e. [BigQueryToGcsOperator](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py#L38) and [line 192](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py#L192). ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29958
https://github.com/apache/airflow/pull/30053
732fcd789ddecd5251d391a8d9b72f130bafb046
af4627fec988995537de7fa172875497608ef710
"2023-03-07T16:07:36Z"
python
"2023-03-20T08:34:19Z"
closed
apache/airflow
https://github.com/apache/airflow
29,957
["chart/templates/scheduler/scheduler-deployment.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_scheduler.py", "tests/charts/test_webserver.py"]
hostAliases for scheduler and webserver
### Description I am not sure why this PR was not merged (https://github.com/apache/airflow/pull/23558) but I think it would be great to add hostAliases not just to the workers, but the scheduler and webserver too. ### Use case/motivation Be able to modify /etc/hosts in webserver and scheduler. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29957
https://github.com/apache/airflow/pull/30051
5c15b23023be59a87355c41ab23a46315cca21a5
f07d300c4c78fa1b2becb4653db8d25b011ea273
"2023-03-07T15:25:15Z"
python
"2023-03-12T14:22:05Z"
closed
apache/airflow
https://github.com/apache/airflow
29,939
["airflow/providers/amazon/aws/links/emr.py", "airflow/providers/amazon/aws/operators/emr.py", "airflow/providers/amazon/aws/sensors/emr.py", "tests/providers/amazon/aws/operators/test_emr_add_steps.py", "tests/providers/amazon/aws/operators/test_emr_create_job_flow.py", "tests/providers/amazon/aws/operators/test_emr_modify_cluster.py", "tests/providers/amazon/aws/operators/test_emr_terminate_job_flow.py", "tests/providers/amazon/aws/sensors/test_emr_job_flow.py", "tests/providers/amazon/aws/sensors/test_emr_step.py"]
AWS EMR Operators: Add Log URI in task logs to speed up debugging
### Description Airflow is widely used to launch, interact with, and submit jobs to AWS EMR clusters. Existing EMR operators do not provide links to the EMR logs (job flow/step logs); as a result, in case of failures users need to switch to the EMR console or the AWS S3 console and locate the logs for EMR jobs and steps using the job_flow_id available in the EMR operators and in XCom. It would be really convenient and would help with debugging if the EMR log links were present in the operator task logs; it would obviate the need to switch to the AWS S3 or AWS EMR consoles from Airflow and look up the logs using job_flow_ids. It would be a nice improvement for the developer experience. The LogUri for a cluster is available in [DescribeCluster](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr/client/describe_cluster.html), and the log file path for steps in case of failure is available in [ListSteps](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr/client/list_steps.html). ### Use case/motivation Ability to go to EMR logs directly from Airflow EMR task logs. ### Related issues N/A ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
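For reference, a hedged sketch of how these log locations can be looked up with plain boto3 (the region and cluster id are placeholders; the operator integration itself is not shown here):

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is a placeholder

job_flow_id = "j-XXXXXXXXXXXXX"  # placeholder cluster/job flow id

# Cluster-level log location (LogUri) from DescribeCluster.
cluster = emr.describe_cluster(ClusterId=job_flow_id)["Cluster"]
print(f"Cluster logs: {cluster.get('LogUri')}")

# Step-level failure log files from ListSteps (FailureDetails is only
# present for failed steps).
for step in emr.list_steps(ClusterId=job_flow_id)["Steps"]:
    failure = step.get("Status", {}).get("FailureDetails", {})
    if failure.get("LogFile"):
        print(f"Step {step['Id']} failure log: {failure['LogFile']}")
```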
https://github.com/apache/airflow/issues/29939
https://github.com/apache/airflow/pull/31032
6c92efbe8b99e172fe3b585114e1924c0bb2f26b
2d5166f9829835bdfd6479aa789c8a27147288d6
"2023-03-06T18:03:55Z"
python
"2023-05-03T23:18:02Z"
closed
apache/airflow
https://github.com/apache/airflow
29,912
["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py"]
BigQueryToGCSOperator does not wait for completion
### Apache Airflow Provider(s) google ### Versions of Apache Airflow Providers apache-airflow-providers-google==7.0.0 ### Apache Airflow version 2.3.2 ### Operating System Debian GNU/Linux ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened [Deferrable mode for BigQueryToGCSOperator #27683](https://github.com/apache/airflow/pull/27683) changed the functionality of the `BigQueryToGCSOperator` so that it no longer waits for the completion of the operation. This is because the `nowait=True` parameter is now [being set](https://github.com/apache/airflow/pull/27683/files#diff-23c5b2e773487f9c28b75b511dbf7269eda1366f16dec84a349d95fa033ffb3eR191). ### What you think should happen instead This is unexpected behavior. Any downstream tasks of the `BigQueryToGCSOperator` that expect the CSVs to have been written by the time they are called may result in errors (and have done so in our own operations). The property should at least be configurable. ### How to reproduce 1. Leverage the `BigQueryToGcsOperator` in your DAG. 2. Have it write a large table to a CSV somewhere in GCS 3. Notice that the task completes almost immediately but the CSVs may not exist in GCS until later. ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29912
https://github.com/apache/airflow/pull/29925
30b2e6c185305a56f9fd43683f1176f01fe4e3f6
464ab1b7caa78637975008fcbb049d5b52a8b005
"2023-03-03T23:29:15Z"
python
"2023-03-05T10:40:38Z"
closed
apache/airflow
https://github.com/apache/airflow
29,903
["airflow/models/baseoperator.py", "tests/models/test_mappedoperator.py"]
Task-level retries overrides from the DAG-level default args are not respected when using `partial`
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened When running a DAG that is structured like: ``` @dag(dag_id="my_dag", default_args={"retries": 0}) def dag(): op = MyOperator.partial(task_id="my_task", retries=3).expand(...) ``` The following test fails: ``` def test_retries(self) -> None: dag_bag = DagBag(dag_folder=DAG_FOLDER, include_examples=False) dag = dag_bag.dags["my_dag"] for task in dag.tasks: if "my_task" in task.task_id: self.assertEqual(3, task.retries) # fails - this is 0 ``` When printing out `task.partial_kwargs` and looking at how the default args and partial args are merged, it seems like the default args always take precedence, even though in the `partial` global function the `retries` do get set later on with the task-level parameter value. That override doesn't seem to be respected though. ### What you think should happen instead _No response_ ### How to reproduce If you run my above unit test for a test DAG, on version 2.4.3, it should show up as a test failure. ### Operating System macOS Ventura ### Versions of Apache Airflow Providers _No response_ ### Deployment Google Cloud Composer ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29903
https://github.com/apache/airflow/pull/29913
57c09e59ee9273ff64cd4a85b020a4df9b1d9eca
f01051a75e217d5f20394b8c890425915383101f
"2023-03-03T19:22:23Z"
python
"2023-04-14T12:16:11Z"
closed
apache/airflow
https://github.com/apache/airflow
29,900
["airflow/models/dag.py", "airflow/timetables/base.py", "airflow/timetables/simple.py", "docs/apache-airflow/core-concepts/dag-run.rst", "tests/models/test_dag.py", "tests/timetables/test_continuous_timetable.py"]
Add continues scheduling option
### Body There are some use cases where users want to trigger new DAG run as soon as one finished. This is a request I've seen several times with some variations (for example like this [Stackoverflow question](https://stackoverflow.com/q/75623153/14624409)) but the basic request is the same. The workaround users do to get such functionality is place `TriggerDagRunOperator` as last task of their DAG invoking the same DAG: ``` from datetime import datetime from airflow import DAG from airflow.operators.empty import EmptyOperator from airflow.operators.trigger_dagrun import TriggerDagRunOperator with DAG( dag_id="example", start_date=datetime(2023, 1, 1,), catchup=False, schedule=None, ) as dag: task = EmptyOperator(task_id="first") trigger = TriggerDagRunOperator( task_id="trigger", trigger_dag_id="example", ) task >> trigger ``` As you can see this works nicely: ![Screenshot 2023-03-03 at 14 20 51](https://user-images.githubusercontent.com/45845474/222718862-dfde8a41-24b6-4991-b318-b7f9784514f6.png) My suggestion is to add first class support for this use case, so the above example will be changed to: ``` from datetime import datetime from airflow import DAG from airflow.operators.empty import EmptyOperator from airflow.operators.trigger_dagrun import TriggerDagRunOperator with DAG( dag_id="example", start_date=datetime(2023, 1, 1,), catchup=False, schedule="@continues", ) as dag: task = EmptyOperator(task_id="first") ``` I guess it won't exactly be `"@continues"` but more likely new [ScheduleArg](https://github.com/apache/airflow/blob/8b8552f5c4111fe0732067d7af06aa5285498a79/airflow/models/dag.py#L127) type but I show it like that just for simplification of the idea. ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
https://github.com/apache/airflow/issues/29900
https://github.com/apache/airflow/pull/29909
70680ded7a4056882008b019f5d1a8f559a301cd
c1aa4b9500f417e6669a79fbf59c11ae6e6993a2
"2023-03-03T12:30:33Z"
python
"2023-03-16T19:08:45Z"
closed
apache/airflow
https://github.com/apache/airflow
29,875
["airflow/cli/cli_parser.py", "airflow/cli/commands/connection_command.py", "docs/apache-airflow/howto/connection.rst", "tests/cli/commands/test_connection_command.py"]
Airflow Connection Testing Using Airflow CLI
### Description Airflow connection testing using the Airflow CLI would be very useful, so that users can quickly test a connection from their applications. It would allow CLI users to create and test new connections right from the instance and reduce the time spent troubleshooting connection issues. ### Use case/motivation Airflow connection testing using the Airflow CLI, similar to the connection test function already available in the Airflow UI. Example: airflow connection test "hello_id" ### Related issues N/A ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
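Until such a CLI command exists, a rough sketch of what it could wrap — this assumes the `Connection.test_connection()` helper used by the UI "Test" button is available, and is not a description of an existing CLI command:

```python
from airflow.models.connection import Connection
from airflow.utils.session import create_session


def test_connection_by_id(conn_id: str) -> None:
    """Hypothetical helper: load a connection and run its test_connection() check."""
    with create_session() as session:
        conn = session.query(Connection).filter(Connection.conn_id == conn_id).one()
        status, message = conn.test_connection()
    print(f"{conn_id}: {'OK' if status else 'FAILED'} - {message}")


# e.g. test_connection_by_id("hello_id")
```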
https://github.com/apache/airflow/issues/29875
https://github.com/apache/airflow/pull/29892
a3d59c8c759582c27f5a234ffd4c33a9daeb22a9
d2e5b097e6251e31fb4c9bb5bf16dc9c77b56f75
"2023-03-02T14:13:55Z"
python
"2023-03-09T09:26:10Z"
closed
apache/airflow
https://github.com/apache/airflow
29,858
["airflow/www/package.json", "airflow/www/static/js/api/index.ts", "airflow/www/static/js/api/useDag.ts", "airflow/www/static/js/api/useDagCode.ts", "airflow/www/static/js/dag/details/dagCode/CodeBlock.tsx", "airflow/www/static/js/dag/details/dagCode/index.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/templates/airflow/dag.html", "airflow/www/yarn.lock"]
Migrate DAG Code page to Grid Details
- [ ] Use REST API to render DAG Code in the grid view as a tab when a user has no runs/tasks selected - [ ] Redirect all urls to new code - [ ] delete the old code view
https://github.com/apache/airflow/issues/29858
https://github.com/apache/airflow/pull/31113
3363004450355582712272924fac551dc1f7bd56
4beb89965c4ee05498734aa86af2df7ee27e9a51
"2023-03-02T00:38:49Z"
python
"2023-05-17T16:27:06Z"
closed
apache/airflow
https://github.com/apache/airflow
29,843
["airflow/models/taskinstance.py", "tests/www/views/test_views.py"]
The "Try Number" filter under task instances search is comparing integer with non-integer object
### Apache Airflow version 2.5.1 ### What happened The `Try Number` filter is comparing the given integer with an instance of a "property" object * screenshots ![2023-03-01_11-30](https://user-images.githubusercontent.com/14293802/222210209-fc17c634-4005-4f3d-bee1-30ed23403e71.png) ![2023-03-01_11-31](https://user-images.githubusercontent.com/14293802/222210227-53ef42b7-0b43-4ee1-ad76-cf31b504b4a3.png) * text version ``` Something bad has happened. Airflow is used by many users, and it is very likely that others had similar problems and you can easily find a solution to your problem. Consider following these steps: * gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment) * find similar issues using: * [GitHub Discussions](https://github.com/apache/airflow/discussions) * [GitHub Issues](https://github.com/apache/airflow/issues) * [Stack Overflow](https://stackoverflow.com/questions/tagged/airflow) * the usual search engine you use on a daily basis * if you run Airflow on a Managed Service, consider opening an issue using the service support channels * if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a [bug report](https://github.com/apache/airflow/issues/new/choose). Make sure however, to include all relevant details and results of your investigation so far. Python version: 3.8.16 Airflow version: 2.5.1 Node: kip-airflow-8b665fdd7-lcg6q ------------------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app response = self.full_dispatch_request() File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request rv = self.handle_user_exception(e) File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request rv = self.dispatch_request() File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/security/decorators.py", line 133, in wraps return f(self, *args, **kwargs) File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/views.py", line 554, in list widgets = self._list() File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 1164, in _list widgets = self._get_list_widget( File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 1063, in _get_list_widget count, lst = self.datamodel.query( File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 461, in query count = self.query_count(query, filters, select_columns) File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 382, in query_count return self._apply_inner_all( File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 368, in _apply_inner_all query = self.apply_filters(query, inner_filters) File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 223, in apply_filters return filters.apply_all(query) File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/filters.py", line 
300, in apply_all query = flt.apply(query, value) File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/filters.py", line 169, in apply return query.filter(field > value) TypeError: '>' not supported between instances of 'property' and 'int' ``` ### What you think should happen instead The "Try Number" search should compare integer with integer ### How to reproduce 1. Go to "Browse" -> "Task Instances" 2. "Search" -> "Add Filter" -> choose "Dag Id" and "Try Number" 3. Choose "Greater than" in the drop-down and enter an integer 4. Click "Search" ### Operating System Debian GNU/Linux 10 (buster) ### Versions of Apache Airflow Providers _No response_ ### Deployment Other Docker-based deployment ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29843
https://github.com/apache/airflow/pull/29850
00a2c793c7985f8165c2bef9106fc81ee66e07bb
a3c9902bc606f0c067a45f09e9d3d152058918e9
"2023-03-01T17:45:26Z"
python
"2023-03-10T12:01:15Z"
closed
apache/airflow
https://github.com/apache/airflow
29,841
["setup.cfg"]
high memory leak, cannot start even webserver
### Apache Airflow version 2.5.1 ### What happened I had used Airflow 2.3.1 and everything was fine. Then I decided to move to Airflow 2.5.1. I can't even start the webserver: Airflow on my laptop consumes the entire memory (32 GB) and the OOM killer kicks in. I investigated a bit. It starts with Airflow 2.3.4, only when using the official Docker image (apache/airflow:2.3.4), and only on a Linux laptop; Mac is ok. The memory leak starts when the source code tries to import, for example, the `airflow.cli.commands.webserver_command` module using `airflow.utils.module_loading.import_string`. I dug deeper and found that it happens when "import daemon" is performed. You can reproduce it with this command: `docker run --rm --entrypoint="" apache/airflow:2.3.4 /bin/bash -c "python -c 'import daemon'"`. Once again, it is reproduced only on Linux (my kernel is 6.1.12). That's weird considering `daemon` hasn't been changed since 2018. ### What you think should happen instead _No response_ ### How to reproduce docker run --rm --entrypoint="" apache/airflow:2.3.4 /bin/bash -c "python -c 'import daemon'" ### Operating System Arch Linux (kernel 6.1.12) ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29841
https://github.com/apache/airflow/pull/29916
864ff2e3ce185dfa3df0509a4bd3c6b5169e907f
c8cc49af2d011f048ebea8a6559ddd5fca00f378
"2023-03-01T15:36:01Z"
python
"2023-03-04T15:27:20Z"
closed
apache/airflow
https://github.com/apache/airflow
29,839
["airflow/api_connexion/endpoints/dag_run_endpoint.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"]
Calling endpoint dags/{dag_id}/dagRuns for removed DAG returns "500 Internal Server Error" instead of "404 Not Found"
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Apache Airflow version: 2.4.0 I remove the DAG from storage and then trigger it: curl -X POST 'http://localhost:8080/api/dags/<DAG_ID>/dag_runs' --header 'Content-Type: application/json' --data '{"dag_run_id":"my_id"}' it returns: ``` Traceback (most recent call last): File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app response = self.full_dispatch_request() File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request rv = self.dispatch_request() File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/decorator.py", line 68, in wrapper response = function(request) File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/uri_parsing.py", line 149, in wrapper response = function(request) File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/validation.py", line 196, in wrapper response = function(request) File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/validation.py", line 399, in wrapper return function(request) File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/response.py", line 112, in wrapper response = function(request) File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/parameter.py", line 120, in wrapper return function(**kwargs) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/api_connexion/security.py", line 51, in decorated return func(*args, **kwargs) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 75, in wrapper return func(*args, session=session, **kwargs) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/api_connexion/endpoints/dag_run_endpoint.py", line 310, in post_dag_run dag_run = dag.create_dagrun( AttributeError: 'NoneType' object has no attribute 'create_dagrun' ``` ### What you think should happen instead should respond with 404 "A specified resource is not found." ### How to reproduce - remove existing DAG file from storage - create a new DAG run using API endpoint /api/dags/<DAG_ID>/dag_runs for that deleted DAG ### Operating System 18.04.1 Ubuntu ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
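A minimal sketch of the kind of guard that would turn this into a 404 instead of a 500 — hypothetical code, not the actual patch applied to the endpoint:

```python
from airflow.api_connexion.exceptions import NotFound
from airflow.models.dagbag import DagBag


def get_dag_or_404(dag_id: str, dag_bag: DagBag):
    """Hypothetical guard: raise a 404 NotFound instead of letting a later
    AttributeError on None bubble up as a 500."""
    dag = dag_bag.get_dag(dag_id)
    if not dag:
        raise NotFound(title="DAG not found", detail=f"DAG with dag_id: '{dag_id}' not found")
    return dag
```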
https://github.com/apache/airflow/issues/29839
https://github.com/apache/airflow/pull/29860
fcd3c0149f17b364dfb94c0523d23e3145976bbe
751a995df55419068f11ebabe483dba3302916ed
"2023-03-01T13:51:58Z"
python
"2023-03-03T14:40:07Z"
closed
apache/airflow
https://github.com/apache/airflow
29,836
["airflow/www/forms.py", "airflow/www/validators.py", "tests/www/test_validators.py", "tests/www/views/test_views_connection.py"]
Restrict allowed characters in connection ids
### Description I bumped into a bug where a connection id was suffixed with a whitespace e.g. "myconn ". When referencing the connection id "myconn" (without whitespace), you get a connection not found error. To avoid such human errors, I suggest restricting the characters allowed for connection ids. Some suggestions: - There's an `airflow.utils.helpers.validate_key` function for validating the DAG id. Probably a good idea to reuse this. - I believe variable ids are also not validated, would be good to check those too. ### Use case/motivation _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
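A hedged sketch of how the suggested reuse of `airflow.utils.helpers.validate_key` could look when applied to a connection id; the wrapper function and the point where it would be wired into the connection form are assumptions:

```python
from airflow.exceptions import AirflowException
from airflow.utils.helpers import validate_key


def validate_conn_id(conn_id: str) -> None:
    """Reject connection ids with surrounding whitespace or disallowed characters."""
    if conn_id != conn_id.strip():
        raise AirflowException(f"Connection id {conn_id!r} has surrounding whitespace")
    # validate_key raises AirflowException for characters outside letters,
    # digits, underscore, dot and dash.
    validate_key(conn_id, max_length=250)


validate_conn_id("myconn")      # ok
# validate_conn_id("myconn ")   # would raise AirflowException
```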
https://github.com/apache/airflow/issues/29836
https://github.com/apache/airflow/pull/31140
85482e86f5f93015487938acfb0cca368059e7e3
5cb8ef80a0bd84651fb660c552563766d8ec0ea1
"2023-03-01T11:58:40Z"
python
"2023-05-12T10:25:37Z"
closed
apache/airflow
https://github.com/apache/airflow
29,819
["airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"]
DAG fails serialization if template_field contains execution_timeout
### Apache Airflow version 2.5.1 ### What happened If an Operator specifies a template_field with `execution_timeout` then the DAG will serialize correctly but throw an error during deserialization. This causes the entire scheduler to crash and breaks the application. ### What you think should happen instead The scheduler should never go down because of some code someone wrote, this should probably throw an error during serialization. ### How to reproduce Define an operator like this ``` class ExecutionTimeoutOperator(BaseOperator): template_fields = ("execution_timeout", ) def __init__(self, execution_timeout: timedelta, **kwargs): super().__init__(**kwargs) self.execution_timeout = execution_timeout ``` then make a dag like this ``` dag = DAG( "serialize_with_default", schedule_interval="0 12 * * *", start_date=datetime(2023, 2, 28), catchup=False, default_args={ "execution_timeout": timedelta(days=4), }, ) with dag: execution = ExecutionTimeoutOperator(task_id="execution", execution_timeout=timedelta(hours=1)) ``` that will break the scheduler, you can force the stack trace by doing this ``` from airflow.models import DagBag db = DagBag('dags/', read_dags_from_db=True) db.get_dag('serialize_with_default') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper return func(*args, session=session, **kwargs) File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 190, in get_dag self._add_dag_from_db(dag_id=dag_id, session=session) File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 265, in _add_dag_from_db dag = row.dag File "/usr/local/lib/python3.9/site-packages/airflow/models/serialized_dag.py", line 218, in dag dag = SerializedDAG.from_dict(self.data) File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1287, in from_dict return cls.deserialize_dag(serialized_obj["dag"]) File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1194, in deserialize_dag v = {task["task_id"]: SerializedBaseOperator.deserialize_operator(task) for task in v} File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1194, in <dictcomp> v = {task["task_id"]: SerializedBaseOperator.deserialize_operator(task) for task in v} File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 955, in deserialize_operator cls.populate_operator(op, encoded_op) File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 864, in populate_operator v = cls._deserialize_timedelta(v) File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 513, in _deserialize_timedelta return datetime.timedelta(seconds=seconds) TypeError: unsupported type for timedelta seconds component: str ``` ### Operating System Mac 13.1 (22C65) ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==5.1.0 apache-airflow-providers-apache-hdfs==3.2.0 apache-airflow-providers-apache-hive==5.1.1 apache-airflow-providers-apache-spark==4.0.0 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cncf-kubernetes==5.1.1 apache-airflow-providers-common-sql==1.3.3 apache-airflow-providers-datadog==3.1.0 apache-airflow-providers-ftp==3.3.0 apache-airflow-providers-http==4.1.1 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-jdbc==3.3.0 
apache-airflow-providers-jenkins==3.2.0 apache-airflow-providers-mysql==4.0.0 apache-airflow-providers-pagerduty==3.1.0 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-presto==4.2.1 apache-airflow-providers-slack==7.2.0 apache-airflow-providers-sqlite==3.3.1 apache-airflow-providers-ssh==3.4.0 ### Deployment Docker-Compose ### Deployment details I could repro this with docker-compose and in a helm backed deployment so I don't think it's really related to the deployment details ### Anything else In the serialization code there are two pieces of logic that are in direct conflict with each other. The first dictates how template fields are serialized, from the code ``` # Store all template_fields as they are if there are JSON Serializable # If not, store them as strings ``` and the second special cases a few names of arguments that need to be deserialized in a specific way ``` elif k in {"retry_delay", "execution_timeout", "sla", "max_retry_delay"}: v = cls._deserialize_timedelta(v) ``` so during serialization airflow sees that execution_timeout is a template field, serializes it as a string, then during deserialization it is a special name that forces the deserialization as timedelta and BOOM! ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29819
https://github.com/apache/airflow/pull/29821
6d2face107f24b7e7dce4b98ae3def1178e1fc4c
7963360b8d43a15791a6b7d4335f482fce1d82d2
"2023-02-28T18:48:13Z"
python
"2023-03-04T18:19:09Z"
closed
apache/airflow
https://github.com/apache/airflow
29,803
["airflow/utils/db.py"]
Run DAG in isolated session
### Apache Airflow version 2.5.1 ### What happened Trying the new `airflow.models.DAG.test` function to run e2e tests on a DAG in a `pytest` fashion I find there's no way to force to write to a different db other than the configured one. This should create an alchemy session for an inmemory db, initialise the db and then use it for the test ```python @fixture(scope="session") def airflow_db(): # in-memory database engine = create_engine(f"sqlite://") with Session(engine) as db_session: initdb(session=db_session, load_connections=False) yield db_session def test_dag_runs_default(airflow_db): dag.test(session=airflow_db) ``` However `initdb` never receives the `engine` from `settings` that has been initialised before. It uses the engine **from `settings` instead of the engine from the session**. https://github.com/apache/airflow/blob/main/airflow/utils/db.py#L694-L695 ```python with create_global_lock(session=session, lock=DBLocks.MIGRATIONS): Base.metadata.create_all(settings.engine) Model.metadata.create_all(settings.engine) ``` Then `_create_flask_session_tbl()` reads again the database from the config (which might be the same as when settings was initialised or not) and creates all Airflow tables in a database different from the provided in the session again. ### What you think should happen instead The sql alchemy base, models and airflow tables should be created in the database provided by the session. In case the session is injected then, this will match the config. But if a session is provided, it should use this session instead ### How to reproduce This inits the db specified in the config (defaults to `${HOME}/airflow/airflow.db`), then the test tries to use the in-memory one and breaks ```python @fixture(scope="session") def airflow_db(): # in-memory database engine = create_engine(f"sqlite://") with Session(engine) as db_session: initdb(session=db_session, load_connections=False) yield db_session def test_dag_runs_default(airflow_db): dag.test(session=airflow_db) ``` ### Operating System MacOs ### Versions of Apache Airflow Providers _No response_ ### Deployment Virtualenv installation ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29803
https://github.com/apache/airflow/pull/29804
7ce3b66237fbdb1605cf1f7cec06f0b823c455a1
0975560dfa48f43b340c4db9c03658a11ae7c666
"2023-02-28T13:56:11Z"
python
"2023-04-10T08:06:23Z"
closed
apache/airflow
https://github.com/apache/airflow
29,781
["airflow/providers/sftp/hooks/sftp.py", "airflow/providers/sftp/sensors/sftp.py", "tests/providers/sftp/hooks/test_sftp.py", "tests/providers/sftp/sensors/test_sftp.py"]
newer_than and file_pattern don't work well together in SFTPSensor
### Apache Airflow Provider(s) sftp ### Versions of Apache Airflow Providers 4.2.3 ### Apache Airflow version 2.5.1 ### Operating System macOS Ventura 13.2.1 ### Deployment Astronomer ### Deployment details _No response_ ### What happened I wanted to use `file_pattern` and `newer_than` in `SFTPSensor` to find only the files that landed in SFTP after the data interval of the prior successful DAG run (`{{ prev_data_interval_end_success }}`). I have four text files (`file.txt`, `file1.txt`, `file2.txt` and `file3.txt`) but only `file3.txt` has the last modification date after the data interval of the prior successful DAG run. I use the following file pattern: `"*.txt"`. The moment the first file (`file.txt`) was matched and the modification date did not meet the requirement, the task changed the status to `up_for_reschedule`. ### What you think should happen instead The other files matching the pattern should be checked as well. ### How to reproduce ```python import pendulum from airflow import DAG from airflow.providers.sftp.sensors.sftp import SFTPSensor with DAG( dag_id="sftp_test", start_date=pendulum.datetime(2023, 2, 1, tz="UTC"), schedule="@once", render_template_as_native_obj=True, ): wait_for_file = SFTPSensor( task_id="wait_for_file", sftp_conn_id="sftp_default", path="/upload/", file_pattern="*.txt", newer_than="{{ prev_data_interval_end_success }}", ) ``` ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29781
https://github.com/apache/airflow/pull/29794
60d98a1bc2d54787fcaad5edac36ecfa484fb42b
9357c81828626754c990c3e8192880511a510544
"2023-02-27T12:25:27Z"
python
"2023-02-28T05:45:59Z"
closed
apache/airflow
https://github.com/apache/airflow
29,759
["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"]
Improve code in `KubernetesPodOperator._render_nested_template_fields`
### Apache Airflow Provider(s) cncf-kubernetes ### Versions of Apache Airflow Providers apache-airflow-providers-cncf-kubernetes==5.2.1 ### Apache Airflow version 2.5.1 ### Operating System Arch Linux ### Deployment Other ### Deployment details _No response_ ### What happened This is not really an operational failure, but the code in the [`KubernetesPodOperator._render_nested_template_fields`](https://github.com/apache/airflow/blob/d26dc223915c50ff58252a709bb7b33f5417dfce/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L373-L403) function could be improved. The current code consists of 6 conditionals checking the type of the `content` variable. Even when the 1st of them succeeds, the other 5 conditionals are still checked, which is inefficient because the function could end right there, saving time and resources. ### What you think should happen instead The conditional flow could be replaced with a simple map, using a dictionary to immediately get the value or fall back to the default one. ### How to reproduce There is no bug _per se_ to reproduce. It's just about making the code cleaner and more efficient, avoiding the evaluation of further conditionals once the type has already been resolved. ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
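A generic sketch of the dictionary-dispatch idea; the mapping below is an illustrative assumption and does not reproduce the provider's internal field names exactly:

```python
from typing import Optional, Tuple

from kubernetes.client import models as k8s

# One dictionary lookup replaces a chain of isinstance/type checks.
NESTED_TEMPLATE_FIELDS = {
    k8s.V1EnvVar: ("value", "name"),
    k8s.V1ResourceRequirements: ("limits", "requests"),
    k8s.V1VolumeMount: ("name", "sub_path"),
    k8s.V1Volume: ("name", "persistent_volume_claim"),
    k8s.V1PersistentVolumeClaimVolumeSource: ("claim_name",),
}


def template_fields_for(content: object) -> Optional[Tuple[str, ...]]:
    """Return the template fields to render for this object type, or None to fall back."""
    return NESTED_TEMPLATE_FIELDS.get(type(content))
```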
https://github.com/apache/airflow/issues/29759
https://github.com/apache/airflow/pull/29760
9357c81828626754c990c3e8192880511a510544
1e536eb43de4408612bf7bb7d9d2114470c6f43a
"2023-02-25T09:33:25Z"
python
"2023-02-28T05:46:37Z"
closed
apache/airflow
https://github.com/apache/airflow
29,754
["airflow/example_dags/example_dynamic_task_mapping_with_no_taskflow_operators.py", "docs/apache-airflow/authoring-and-scheduling/dynamic-task-mapping.rst", "tests/serialization/test_dag_serialization.py", "tests/www/views/test_views_acl.py"]
Add classic operator example for dynamic task mapping "reduce" task
### What do you see as an issue? The [documentation for Dynamic Task Mapping](https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#simple-mapping ) does not include an example of a "reduce" task (e.g. `sum_it` in the [examples](https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#simple-mapping)) using the classic (or non-TaskFlow) operators. It only includes an example that uses the TaskFlow operators. When I attempted to write a "reduce" task using classic operators for my DAG, I found that there wasn't an obvious approach. ### Solving the problem We should add an example of a "reduce" task that uses the classic (non-TaskFlow) operators. For example, for the given `sum_it` example: ``` """Example DAG demonstrating the usage of dynamic task mapping reduce using classic operators. """ from __future__ import annotations from datetime import datetime from airflow import DAG from airflow.decorators import task from airflow.operators.python import PythonOperator def add_one(x: int): return x + 1 def sum_it(values): total = sum(values) print(f"Total was {total}") with DAG(dag_id="example_dynamic_task_mapping_reduce", start_date=datetime(2022, 3, 4)): add_one_task = PythonOperator.partial( task_id="add_one", python_callable=add_one, ).expand( op_kwargs=[ {"x": 1}, {"x": 2}, {"x": 3}, ] ) sum_it_task = PythonOperator( task_id="sum_it", python_callable=sum_it, op_kwargs={"values": add_one_task.output}, ) ``` ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29754
https://github.com/apache/airflow/pull/29762
c9607d44de5a3c9674a923a601fc444ff957ac7e
4d4c2b9d8b5de4bf03524acf01a298c162e1d9e4
"2023-02-24T23:35:25Z"
python
"2023-05-31T05:47:46Z"
closed
apache/airflow
https://github.com/apache/airflow
29,746
["airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"]
DatabricksSubmitRunOperator does not support passing output of another task to `base_parameters`
### Apache Airflow Provider(s) databricks ### Versions of Apache Airflow Providers apache-airflow-providers-databricks==4.0.0 ### Apache Airflow version 2.4.3 ### Operating System MAC OS ### Deployment Virtualenv installation ### Deployment details The issue is consistent across multiple Airflow deployments (locally on Docker Compose, remotely on MWAA in AWS, locally using virualenv) ### What happened Passing `base_parameters` key into `notebook_task` parameter for `DatabricksSubmitRunOperator` as output of a previous task (TaskFlow paradigm) does not work. After inspection of `DatabricksSubmitRunOperator.init` it seems that the problem relies on the fact that it uses `utils.databricks.normalise_json_content` to validate input parameters and, given that the input parameter is of type `PlainXComArg`, it fails to parse. The workaround I found is to call it using `partial` and `expand`, which is a bit hacky and much less legible ### What you think should happen instead `DatabricksSubmitRunOperator` should accept `PlainXComArg` arguments on init and eventually validate on `execute`, prior to submitting job run. ### How to reproduce This DAG fails to parse: ```python3 with DAG( "dag_erroring", start_date=days_ago(1), params={"param_1": "", "param_2": ""}, ) as dag: @task def from_dag_params_to_notebook_params(**context): # Transform/Validate DAG input parameters to sth expected by Notebook notebook_param_1 = context["dag_run"].conf["param_1"] + "abcd" notebook_param_2 = context["dag_run"].conf["param_2"] + "efgh" return {"some_param": notebook_param_1, "some_other_param": notebook_param_2} DatabricksSubmitRunOperator( task_id="my_notebook_task", new_cluster={ "cluster_name": "single-node-cluster", "spark_version": "7.6.x-scala2.12", "node_type_id": "i3.xlarge", "num_workers": 0, "spark_conf": { "spark.databricks.cluster.profile": "singleNode", "spark.master": "[*, 4]", }, "custom_tags": {"ResourceClass": "SingleNode"}, }, notebook_task={ "notebook_path": "some/path/to/a/notebook", "base_parameters": from_dag_params_to_notebook_params(), }, libraries=[], databricks_retry_limit=3, timeout_seconds=86400, polling_period_seconds=20, ) ``` This one does not: ```python3 with DAG( "dag_parsing_fine", start_date=days_ago(1), params={"param_1": "", "param_2": ""}, ) as dag: @task def from_dag_params_to_notebook_params(**context): # Transform/Validate DAG input parameters to sth expected by Notebook notebook_param_1 = context["dag_run"].conf["param_1"] + "abcd" notebook_param_2 = context["dag_run"].conf["param_2"] + "efgh" return [{"notebook_path": "some/path/to/a/notebook", "base_parameters":{"some_param": notebook_param_1, "some_other_param": notebook_param_2}}] DatabricksSubmitRunOperator.partial( task_id="my_notebook_task", new_cluster={ "cluster_name": "single-node-cluster", "spark_version": "7.6.x-scala2.12", "node_type_id": "i3.xlarge", "num_workers": 0, "spark_conf": { "spark.databricks.cluster.profile": "singleNode", "spark.master": "[*, 4]", }, "custom_tags": {"ResourceClass": "SingleNode"}, }, libraries=[], databricks_retry_limit=3, timeout_seconds=86400, polling_period_seconds=20, ).expand(notebook_task=from_dag_params_to_notebook_params()) ``` ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29746
https://github.com/apache/airflow/pull/29840
c95184e8bc0f974ea8d2d51cbe3ca67e5f4516ac
c405ecb63e352c7a29dd39f6f249ba121bae7413
"2023-02-24T15:50:14Z"
python
"2023-03-07T15:03:17Z"
closed
apache/airflow
https://github.com/apache/airflow
29,733
["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "airflow/providers/databricks/provider.yaml", "docs/apache-airflow-providers-databricks/operators/jobs_create.rst", "tests/providers/databricks/hooks/test_databricks.py", "tests/providers/databricks/operators/test_databricks.py", "tests/system/providers/databricks/example_databricks.py"]
Databricks create/reset then run-now
### Description Allow an Airflow DAG to define a Databricks job with the `api/2.1/jobs/create` (or `api/2.1/jobs/reset`) endpoint then run that same job with the `api/2.1/jobs/run-now` endpoint. This would give similar capabilities as the DatabricksSubmitRun operator, but the `api/2.1/jobs/create` endpoint supports additional parameters that the `api/2.1/jobs/runs/submit` doesn't (e.g. `job_clusters`, `email_notifications`, etc.). ### Use case/motivation Create and run a Databricks job all in the Airflow DAG. Currently, DatabricksSubmitRun operator uses the `api/2.1/jobs/runs/submit` endpoint which doesn't support all features and creates runs that aren't tied to a job in the Databricks UI. Also, DatabricksRunNow operator requires you to define the job either directly in the Databricks UI or through a separate CI/CD pipeline causing the headache of having to change code in multiple places. ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29733
https://github.com/apache/airflow/pull/35156
da2fdbb7609f7c0e8dd1d1fd9efaec31bb937fe8
a8784e3c352aafec697d3778eafcbbd455b7ba1d
"2023-02-23T21:01:27Z"
python
"2023-10-27T18:52:26Z"
closed
apache/airflow
https://github.com/apache/airflow
29,712
["airflow/providers/amazon/aws/hooks/emr.py", "tests/providers/amazon/aws/hooks/test_emr.py"]
EMRHook.get_cluster_id_by_name() doesn't use pagination
### Apache Airflow version 2.5.1 ### What happened When using EMRHook.get_cluster_id_by_name or any operator that depends on it (e.g. EMRAddStepsOperator), if the results of the ListClusters API call are paginated (e.g. if your account has more than 50 clusters in the current region), and the desired cluster is in the 2nd page of results, None will be returned instead of the cluster ID. ### What you think should happen instead Boto's pagination API should be used and the cluster ID should be returned. ### How to reproduce Use `EmrAddStepsOperator` with the `job_flow_name` parameter on an `aws_conn_id` with more than 50 EMR clusters in the current region. ### Operating System Linux ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==7.2.1 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
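A minimal sketch of the pagination this report asks for, using boto3's standard paginator. The function name, signature and client creation below are illustrative, not the actual `EmrHook` code.

```python
from __future__ import annotations

import boto3


def get_cluster_id_by_name(cluster_name: str, cluster_states: list[str], region_name: str | None = None) -> str | None:
    """Scan every page of ListClusters instead of only the first one."""
    emr_client = boto3.client("emr", region_name=region_name)
    paginator = emr_client.get_paginator("list_clusters")
    for page in paginator.paginate(ClusterStates=cluster_states):
        for cluster in page.get("Clusters", []):
            if cluster["Name"] == cluster_name:
                return cluster["Id"]
    return None
```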
https://github.com/apache/airflow/issues/29712
https://github.com/apache/airflow/pull/29732
607068f4f0d259b638743db5b101660da1b43d11
9662fd8cc05f69f51ca94b495b14f907aed0d936
"2023-02-23T00:39:37Z"
python
"2023-05-01T18:45:02Z"
closed
apache/airflow
https://github.com/apache/airflow
29,702
["airflow/api_connexion/endpoints/connection_endpoint.py", "airflow/api_connexion/endpoints/update_mask.py", "airflow/api_connexion/endpoints/variable_endpoint.py", "tests/api_connexion/endpoints/test_update_mask.py", "tests/api_connexion/endpoints/test_variable_endpoint.py"]
Updating a Variable's description via PATCH in the Airflow API clears the existing description field and cannot update it
### Apache Airflow version 2.5.1 ### What happened When I made these patch requests to update the description of the variable via axios ### 1) Trying to modify new value and description ```javascript let payload ={ key : "example_variable", value : "new_value", description: "new_Description" } axios.patch("https://localhost:8080/api/v1/variables/example_variable" , payload , { auth : { username : "username", password : "password" }, headers: { "Content-Type" : "application/json", } }); ``` the following response was received, and in Airflow the existing Variable's ```Description``` is cleared and set to ```None``` ```html response body : { "description" : "new_Description", "key": "example_variable", "value" : "new_value" } ``` ### 2) Trying to update Description with update_mask ```javascript let payload ={ key : "example_variable", value : "value", description: "new_Description" } axios.patch("https://localhost:8080/api/v1/variables/example_variable?update_mask=description" , payload , { auth : { username : "username", password : "password" }, headers: { "Content-Type" : "application/json", } }); ``` the following response was received ```html response body : { "detail" : null, "status": 400, "detail" : "No field to update", "type" : "https://airflow.apache.org/docs/apache-airflow/2.5.0/stable-rest-api-ref.html#section/Errors/BadRequest" } ``` ### What you think should happen instead The field "description" is ignored both while setting the Variable (L113) and in ```update_mask``` (L107-111). https://github.com/apache/airflow/blob/1768872a0085ba423d0a34fe6cc4e1e109f3adeb/airflow/api_connexion/endpoints/variable_endpoint.py#L97-L115 Also, in the Variable setter it's set to ```None``` if the input doesn't contain a description field https://github.com/apache/airflow/blob/1768872a0085ba423d0a34fe6cc4e1e109f3adeb/airflow/models/variable.py#L156-L165 ### How to reproduce ## PATCH in Airflow REST API ### API call "https://localhost:8080/api/v1/variables/example_variable?update_mask=description" ### payload { key : "example_variable", value : "value", description: "new_Description" } ### headers "Content-Type" : "application/json" OR ### API call "https://localhost:8080/api/v1/variables/example_variable ### payload { key : "example_variable", value : "new_value", description: "new_Description" } ### headers "Content-Type" : "application/json" ### Operating System Ubuntu 22.04.1 LTS ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else A possible solution to update the description field could be similar to https://github.com/apache/airflow/blob/1768872a0085ba423d0a34fe6cc4e1e109f3adeb/airflow/api_connexion/endpoints/connection_endpoint.py#L134-L145 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
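A rough, hypothetical sketch of the fix direction the reporter points at (mirroring the connection endpoint's mask handling), so that `description` survives the masked PATCH path. The helper and variable names below are illustrative, not the actual endpoint code.

```python
def extract_update_mask_data(update_mask, non_update_fields, data):
    # Keep only fields that are named in the update_mask, present in the payload,
    # and not read-only; "description" must be allowed through here.
    extracted = {}
    for field in update_mask:
        field = field.strip()
        if field in data and field not in non_update_fields:
            extracted[field] = data[field]
        else:
            raise ValueError(f"'{field}' is unknown or cannot be updated.")
    return extracted


body = {"key": "example_variable", "value": "value", "description": "new_Description"}
print(extract_update_mask_data(["description"], ["key"], body))  # {'description': 'new_Description'}
```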
https://github.com/apache/airflow/issues/29702
https://github.com/apache/airflow/pull/29711
3f6b5574c61ef9765d077bdd08ccdaba14013e4a
de8e07dc6fea620541e0daa67131e8fe21dbd5fe
"2023-02-22T19:21:40Z"
python
"2023-03-18T21:03:41Z"
closed
apache/airflow
https://github.com/apache/airflow
29,687
["airflow/models/renderedtifields.py"]
Deadlock when airflow tries to update 'k8s_pod_yaml' in the 'rendered_task_instance_fields' table
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened **Airflow 2.4.2** We run into a problem, where HttpSensor has an error because of deadlock. We are running 3 different dags with 12 max_active_runs, that call api and check for response if it should reshedule it or go to next task. All these sensors have 1 minutes poke interval, so 36 of them are running at the same time. Sometimes (like once in 20 runs) we get following deadlock error: `Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context cursor, statement, parameters, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute cursor.execute(statement, parameters) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute res = self._query(query) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query db.query(q) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/connections.py", line 254, in query _mysql.connection.query(self, query) MySQLdb.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1457, in _run_raw_task self._execute_task_with_callbacks(context, test_mode) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1579, in _execute_task_with_callbacks RenderedTaskInstanceFields.write(rtif) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper return func(*args, session=session, **kwargs) File "/usr/local/lib/python3.7/contextlib.py", line 119, in __exit__ next(self.gen) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 36, in create_session session.commit() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1428, in commit self._transaction.commit(_to_root=self.future) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 829, in commit self._prepare_impl() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl self.session.flush() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3345, in flush self._flush(objects) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush transaction.rollback(_capture_exception=True) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__ with_traceback=exc_tb, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush flush_context.execute() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute rec.execute(self) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 633, in execute uow, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 241, in save_obj update, File 
"/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1001, in _emit_update_statements statement, multiparams, execution_options=execution_options File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20 return meth(self, args_10style, kwargs_10style, execution_options) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 326, in _execute_on_connection self, multiparams, params, execution_options File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1491, in _execute_clauseelement cache_hit=cache_hit, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context e, statement, parameters, cursor, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2027, in _handle_dbapi_exception sqlalchemy_exception, with_traceback=exc_info[2], from_=e File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context cursor, statement, parameters, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute cursor.execute(statement, parameters) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute res = self._query(query) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query db.query(q) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/connections.py", line 254, in query _mysql.connection.query(self, query) sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') [SQL: UPDATE rendered_task_instance_fields SET k8s_pod_yaml=%s WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND rendered_task_instance_fields.run_id = %s AND rendered_task_instance_fields.map_index = %s] [parameters: ('{"metadata": {"annotations": {"dag_id": "bidder-joiner", "task_id": "capitest", "try_number": "1", "run_id": "scheduled__2023-02-15T14:15:00+00:00"}, ... (511 characters truncated) ... e": "AIRFLOW_IS_K8S_EXECUTOR_POD", "value": "True"}], "image": "artifactorymaster.outbrain.com:5005/datainfra/airflow:8cbd2a3d8c", "name": "base"}]}}', 'bidder-joiner', 'capitest', 'scheduled__2023-02-15T14:15:00+00:00', -1)] (Background on this error at: https://sqlalche.me/e/14/e3q8) ` `Failed to execute job 3966 for task capitest ((MySQLdb.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') [SQL: UPDATE rendered_task_instance_fields SET k8s_pod_yaml=%s WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND rendered_task_instance_fields.run_id = %s AND rendered_task_instance_fields.map_index = %s] [parameters: ('{"metadata": {"annotations": {"dag_id": "bidder-joiner", "task_id": "capitest", "try_number": "1", "run_id": "scheduled__2023-02-15T14:15:00+00:00"}, ... (511 characters truncated) ... 
e": "AIRFLOW_IS_K8S_EXECUTOR_POD", "value": "True"}], "image": "artifactorymaster.outbrain.com:5005/datainfra/airflow:8cbd2a3d8c", "name": "base"}]}}', 'bidder-joiner', 'capitest', 'scheduled__2023-02-15T14:15:00+00:00', -1)] (Background on this error at: https://sqlalche.me/e/14/e3q8); 68) ` I checked MySql logs and deadlock is caused by query: ``` DELETE FROM rendered_task_instance_fields WHERE rendered_task_instance_fields.dag_id = 'bidder-joiner-raw_data_2nd_pass_delay' AND rendered_task_instance_fields.task_id = 'is_data_ready' AND ((rendered_task_instance_fields.dag_id, rendered_task_instance_fields.task_id, rendered_task_instance_fields.run_id) NOT IN (SELECT subq2.dag_id, subq2.task_id, subq2.run_id FROM (SELECT subq1.dag_id AS dag_id, subq1.task_id AS task_id, subq1.run_id AS run_id FROM (SELECT DISTINCT rendered_task_instance_fields.dag_id AS dag_id, rendered_task_instance_fields.task_id AS task_id, rendered_task_instance_fields.run_id AS run_id, dag_run.execution_date AS execution_date FROM rendered_task_instance_fields INNER JOIN dag_run ON rendered_task_instance_fields.dag_id = dag_run.dag_id AND rendered_task_instance_fields.run_id = dag_run.run_id WHERE rendered_task_instance_fields.dag_id = 'bidder-joiner-raw_data ``` ### What you think should happen instead I found similar issue open on github (https://github.com/apache/airflow/issues/25765) so I think it should be resolved in the same way - adding @retry_db_transaction annotation to function that is executing this query ### How to reproduce Create 3 dags with 12 max_active_runs that use HttpSensor at the same time, same poke interval and mode reschedule. ### Operating System Ubuntu 20 ### Versions of Apache Airflow Providers apache-airflow-providers-common-sql>=1.2.0 mysql-connector-python>=8.0.11 mysqlclient>=1.3.6 apache-airflow-providers-mysql==3.2.1 apache-airflow-providers-http==4.0.0 apache-airflow-providers-slack==6.0.0 apache-airflow-providers-apache-spark==3.0.0 ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29687
https://github.com/apache/airflow/pull/32341
e53320d62030a53c6ffe896434bcf0fc85803f31
c8a3c112a7bae345d37bb8b90d68c8d6ff2ef8fc
"2023-02-22T09:00:28Z"
python
"2023-07-05T11:28:16Z"
closed
apache/airflow
https://github.com/apache/airflow
29,679
["tests/cli/commands/test_internal_api_command.py"]
Fix Quarantined `test_cli_internal_api_background`
### Body Recently, [this test](https://github.com/apache/airflow/blob/9de301da2a44385f57be5407e80e16ee376f3d39/tests/cli/commands/test_internal_api_command.py#L134-L137) began to fail with a timeout error, and it affected all tests in a single CI run. As a temporary solution this test was marked as `quarantined`. We should figure out why this happens and try to resolve it. ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
https://github.com/apache/airflow/issues/29679
https://github.com/apache/airflow/pull/29688
f99d27e5bde8e76fdb504fa213b9eb898c4bc903
946bded31af480d03cb2d45a3f8cdd0a9c32838d
"2023-02-21T21:47:56Z"
python
"2023-02-23T07:08:08Z"
closed
apache/airflow
https://github.com/apache/airflow
29,677
["airflow/providers/amazon/aws/operators/lambda_function.py", "docs/apache-airflow-providers-amazon/operators/lambda.rst", "tests/always/test_project_structure.py", "tests/providers/amazon/aws/operators/test_lambda_function.py", "tests/system/providers/amazon/aws/example_lambda.py"]
Rename AWS lambda related resources
### Apache Airflow Provider(s) amazon ### Versions of Apache Airflow Providers _No response_ ### Apache Airflow version 2.5.0 ### Operating System MacOS ### Deployment Virtualenv installation ### Deployment details AWS Lambda in the Amazon provider package does not follow the convention in #20296. The hook, operators and sensors related to AWS Lambda need to be renamed to follow this convention. Here are the proposed changes in order to fix it: - Rename `airflow/providers/amazon/aws/operators/lambda_function.py` to `airflow/providers/amazon/aws/operators/lambda.py` - Rename `airflow/providers/amazon/aws/sensors/lambda_function.py` to `airflow/providers/amazon/aws/sensors/lambda.py` - Rename `airflow/providers/amazon/aws/hooks/lambda_function.py` to `airflow/providers/amazon/aws/hooks/lambda.py` - Rename `AwsLambdaInvokeFunctionOperator` to `LambdaInvokeFunctionOperator` Since all these changes are breaking changes, they will have to be done following the deprecation pattern: - Copy/paste the files with the new name - Update the existing hook, operators and sensors to inherit from these new classes - Deprecate these classes by sending deprecation warnings. See an example [here](airflow/providers/amazon/aws/operators/aws_lambda.py) ### What happened _No response_ ### What you think should happen instead _No response_ ### How to reproduce N/A ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
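A hedged sketch of the deprecation pattern described above: an old-name alias that warns and delegates to the new class. The base class here is a stand-in; a real shim would live in the Amazon provider and subclass the renamed operator.

```python
import warnings

from airflow.models import BaseOperator


class LambdaInvokeFunctionOperator(BaseOperator):
    """Stand-in for the operator under its new, convention-following name."""


class AwsLambdaInvokeFunctionOperator(LambdaInvokeFunctionOperator):
    """Deprecated alias kept so existing DAG imports keep working."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "This class is deprecated. Please use LambdaInvokeFunctionOperator instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```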
https://github.com/apache/airflow/issues/29677
https://github.com/apache/airflow/pull/29749
b2ecaf9d2c6ccb94ae97728a2d54d31bd351f11e
38b901ec3f07e6e65880b11cc432fb8ad6243629
"2023-02-21T19:36:46Z"
python
"2023-02-24T21:40:54Z"
closed
apache/airflow
https://github.com/apache/airflow
29,671
["tests/providers/openlineage/extractors/test_default_extractor.py"]
Adapt OpenLineage default extractor to properly accept all OL implementations
### Body Adapt the default extractor to accept any valid type returned from an Operator's `get_openlineage_facets_*` method. This is needed to ensure compatibility with operators made with external extractors for the current openlineage-airflow integration. ### Committer - [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
https://github.com/apache/airflow/issues/29671
https://github.com/apache/airflow/pull/31381
89bed231db4807826441930661d79520250f3075
4e73e47d546bf3fd230f93056d01e12f92274433
"2023-02-21T18:43:14Z"
python
"2023-06-13T19:09:28Z"
closed
apache/airflow
https://github.com/apache/airflow
29,666
["airflow/providers/hashicorp/_internal_client/vault_client.py", "airflow/providers/hashicorp/secrets/vault.py", "tests/providers/hashicorp/_internal_client/test_vault_client.py", "tests/providers/hashicorp/secrets/test_vault.py"]
Multiple Mount Points for Hashicorp Vault Back-end
### Description Support mounting to multiple namespaces with the Hashicorp Vault Secrets Back-end ### Use case/motivation As a data engineer I wish to utilize secrets stored in multiple mount paths (to support connecting to multiple namespaces) without having to mount to a higher-level namespace. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29666
https://github.com/apache/airflow/pull/29734
d0783744fcae40b0b6b2e208a555ea5fd9124dfb
dff425bc3d92697bb447010aa9f3b56519a59f1e
"2023-02-21T16:44:08Z"
python
"2023-02-24T09:48:01Z"
closed
apache/airflow
https://github.com/apache/airflow
29,663
["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/stats.py", "tests/core/test_stats.py"]
Option to Disable High Cardinality Metrics on Statsd
### Description With recent PRs enabling tags-support on Statsd metrics, we gained a deeper understanding into the issue of publishing high cardinality metrics. Through this issue, I hope to facilitate the discussion in categorizing metric cardinality of Airflow specific events and tags, and finding a way to disable high cardinality metrics and including it into 2.6.0 release In the world of Observability & Metrics, cardinality is broadly defined as the following: `number of unique metric names * number of unique application tag pairs` This means that events with _unbounded_ number of tag-pairs (key value pair of tags) as well as events with _unbounded_ number of unique metric names will incur expensive storage requirements on the metrics backend. Let's take a look at the following metric: `local_task_job.task_exit.<job_id>.<dag_id>.<task_id>.<return_code>` Here, we have 4 different variable/tag-like attributes embedded into the metric name that I think we can categorize into 3 levels of cardinality. 1. High cardinality / Unbounded metric 2. Medium cardinality / semi-bounded metric 3. Low cardinality / categorically-bounded metric ### High Cardinality / Unbounded Metric Example tag: <job_id> This category of metrics are strictly unbounded, and incorporates a monotonically increasing attribute like <job_id> or <run_id>. To demonstrate just how explosive the growth of these metrics can be, let's take an example. In an Airflow instance with 1000 daily jobs, with a metric retention period of 10 days, we are increasing the cardinality of our metrics by 10,000 on just one single metric just by adding this tag alone. If we add this tag to a few other metrics, that could easily result in an explosion of metric cardinality. As a benchmark,[ DataDog's Enterprise level pricing plan only has 200 custom metrics per host included](https://www.datadoghq.com/pricing/), and anything beyond that needs to be added at a premium. These metrics should be avoided at all costs. ### Medium Cardinality / semi-bounded metric Example tag: <dag_id>, <task_id> This category of metrics are semi-bounded. They are not bounded by a pre-defined category of enums, but they are bounded by the number of dags or tasks there are within an Airflow infrastructure. This means that although these metrics can lead to increasing levels of cardinality in an Airflow cluster with increasing number of dags, cardinality will still be temporarily bounded. I.e. a given cluster will maintain its level of cardinality over time. ### Low Cardinality / categorically-bounded metric Example tag: <return_code> This category of metrics is strictly bounded by a category of enums. <return_code> and <task_state> are good examples of attributes with low cardinality. Ideally, we would only want to publish metrics with this level of cardinality. Using above definition of High Cardinality, I've identified the following metrics as examples that fall under this criteria. 
https://github.com/apache/airflow/blob/main/airflow/jobs/local_task_job.py#L292 https://github.com/apache/airflow/blob/main/airflow/dag_processing/processor.py#L444 https://github.com/apache/airflow/blob/main/airflow/jobs/scheduler_job.py#L691 https://github.com/apache/airflow/blob/main/airflow/jobs/scheduler_job.py#L1584 https://github.com/apache/airflow/blob/main/airflow/models/dag.py#L1331 https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1258 https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1577 https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1847 I would like to propose that we need to provide the option to disable 'Unbounded metrics' with 2.6.0 release. In order to ensure backward compatibility, we could leave the default behavior to publish all metrics, but implement a single Boolean flag to disable these high cardinality metrics. ### Use case/motivation _No response_ ### Related issues https://github.com/apache/airflow/pull/28961 https://github.com/apache/airflow/pull/29093 ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
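To make the cardinality formula above concrete, here is a tiny worked example using the figures quoted in the description (purely illustrative):

```python
# cardinality = unique metric names * unique tag-value combinations
unique_job_ids_per_day = 1_000  # "1000 daily jobs"
retention_days = 10             # "metric retention period of 10 days"

series_from_one_metric = unique_job_ids_per_day * retention_days
print(series_from_one_metric)  # 10000 distinct series from a single job_id-tagged metric name
```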
https://github.com/apache/airflow/issues/29663
https://github.com/apache/airflow/pull/29881
464ab1b7caa78637975008fcbb049d5b52a8b005
86cd79ffa76d4e4d4abe3fe829d7797852a713a5
"2023-02-21T16:12:58Z"
python
"2023-03-06T06:20:05Z"
closed
apache/airflow
https://github.com/apache/airflow
29,662
["airflow/www/decorators.py"]
Audit Log is unclear when using Azure AD login
### Apache Airflow version 2.5.1 ### What happened We're using an Azure OAUTH based login in our Airflow implementation, and everything works great. This is more of a visual problem than an actual bug. In the Audit logs, the `owner` key is mapped to the username, which in most cases is airflow. But, in situations where we manually pause a DAG or enable it, it is mapped to our generated username, which doesn't really tell one who it is unless they were to look up that string in the users list. Example: ![image](https://user-images.githubusercontent.com/102953522/220382349-102f897b-52c4-4a92-a3e1-5b8a1b1082ff.png) It would be nice if it were possible to include the user's first and last name alongside the username. I could probably give this one a go myself, if I could get a hint on where to look. I've found the dag_audit_log.html template, but not sure where to change log.owner. ### What you think should happen instead It would be good to get a representation such as username (FirstName LastName). ### How to reproduce N/A ### Operating System Debian GNU/Linux 11 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details Deployed with Helm chart v1.7.0, and Azure OAUTH for login. ### Anything else _No response_ ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29662
https://github.com/apache/airflow/pull/30185
0b3b6704cb12a3b8f22da79d80b3db85528418b7
a03f6ccb153f9b95f624d5bc3346f315ca3f0211
"2023-02-21T15:10:30Z"
python
"2023-05-17T20:15:55Z"
closed
apache/airflow
https://github.com/apache/airflow
29,621
["chart/templates/dags-persistent-volume-claim.yaml", "chart/values.yaml"]
Fix adding annotations for dag persistence PVC
### Official Helm Chart version 1.8.0 (latest released) ### Apache Airflow version 2.5.0 ### Kubernetes Version v1.25.4 ### Helm Chart configuration The dags persistence section doesn't have a default value for annotations and the usage looks like: ``` annotations: {{- if .Values.dags.persistence.annotations}} {{- toYaml .Values.dags.persistence.annotations | nindent 4 }} {{- end }} ``` ### Docker Image customizations _No response_ ### What happened As per the review comments here: https://github.com/apache/airflow/pull/29270#pullrequestreview-1304890651, due to this design, upgrades might suffer. Fix it to be helm-upgrade friendly. ### What you think should happen instead The design should be written in a helm-upgrade-friendly way; refer to this suggestion https://github.com/apache/airflow/pull/29270#pullrequestreview-1304890651 ### How to reproduce - ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29621
https://github.com/apache/airflow/pull/29622
5835b08e8bc3e11f4f98745266d10bbae510b258
901774718c5d7ff7f5ddc6f916701d281bb60a4b
"2023-02-20T03:20:25Z"
python
"2023-02-20T22:58:03Z"
closed
apache/airflow
https://github.com/apache/airflow
29,593
["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"]
Cannot disable XCom push in SnowflakeOperator
### Apache Airflow Provider(s) snowflake ### Versions of Apache Airflow Providers 4.0.3 ### Apache Airflow version 2.5.0 ### Operating System docker/linux ### Deployment Astronomer ### Deployment details Normal Astro CLI ### What happened ``` >>> SnowflakeOperator( ..., do_xcom_push=False ).execute() ERROR - Task failed with exception Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 272, in execute return self._process_output([output], hook.descriptions)[-1] File "/usr/local/lib/python3.9/site-packages/airflow/providers/snowflake/operators/snowflake.py", line 118, in _process_output for row in result_list: TypeError: 'NoneType' object is not iterable ``` ### What you think should happen instead XCom's should be able to be turned off ### How to reproduce 1) ``` astro dev init ``` 2) `dags/snowflake_test.py` ``` import os from datetime import datetime from airflow import DAG from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator os.environ["AIRFLOW_CONN_SNOWFLAKE"] = "snowflake://......." with DAG('snowflake_test', schedule=None, start_date=datetime(2023, 1, 1)): SnowflakeOperator( task_id='snowflake_test', snowflake_conn_id="snowflake", sql="select 1;", do_xcom_push=False ) ``` 3) ``` astro run -d dags/snowflake_test.py snowflake_test ``` ``` Loading DAGs... Running snowflake_test... [2023-02-17 18:45:33,537] {connection.py:280} INFO - Snowflake Connector for Python Version: 2.9.0, Python Version: 3.9.16, Platform: Linux-5.15.49-linuxkit-aarch64-with-glibc2.31 ... [2023-02-17 18:45:34,608] {cursor.py:727} INFO - query: [select 1] [2023-02-17 18:45:34,698] {cursor.py:740} INFO - query execution done ... [2023-02-17 18:45:34,785] {connection.py:581} INFO - closed [2023-02-17 18:45:34,841] {connection.py:584} INFO - No async queries seem to be running, deleting session FAILED 'NoneType' object is not iterable ``` ### Anything else _No response_ ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
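A minimal sketch of the kind of guard that would avoid the traceback above; it is hypothetical (not the provider's actual `_process_output` nor the fix that shipped) and simply skips post-processing when the hook returned no result set, which is what happens with `do_xcom_push=False`.

```python
def _process_output(results, descriptions):
    # Hypothetical helper: with do_xcom_push=False the hook hands back None
    # instead of row sets, so return early rather than iterating over None.
    if results is None:
        return None
    processed = []
    for result_list, _description in zip(results, descriptions):
        processed.append([tuple(row) for row in (result_list or [])])
    return processed


print(_process_output(None, None))  # None -- no crash when XCom push is disabled
```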
https://github.com/apache/airflow/issues/29593
https://github.com/apache/airflow/pull/29599
2bc1081ea6ca569b4e7fc538bfc827d74e8493ae
19f1e7c27b85e297497842c73f13533767ebd6ba
"2023-02-17T16:27:19Z"
python
"2023-02-22T09:33:08Z"
closed
apache/airflow
https://github.com/apache/airflow
29,585
["airflow/providers/docker/decorators/docker.py", "tests/providers/docker/decorators/test_docker.py"]
template_fields not working in the decorator `task.docker`
### Apache Airflow Provider(s) docker ### Versions of Apache Airflow Providers apache-airflow-providers-docker 3.4.0 ### Apache Airflow version 2.5.1 ### Operating System Linux ### Deployment Docker-Compose ### Deployment details _No response_ ### What happened The templated fields are not working under `task.docker`: ```python @task.docker(image="python:3.9-slim-bullseye", container_name='python_{{macros.datetime.now() | ts_nodash}}', multiple_outputs=True) def transform(order_data_dict: dict): """ #### Transform task A simple Transform task which takes in the collection of order data and computes the total order value. """ total_order_value = 0 for value in order_data_dict.values(): total_order_value += value return {"total_order_value": total_order_value} ``` It throws an error with the un-templated `container_name`: `Bad Request ("Invalid container name (python_{macros.datetime.now() | ts_nodash}), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed")` ### What you think should happen instead All these fields should work with the docker operator: https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html ``` template_fields: Sequence[str]= ('image', 'command', 'environment', 'env_file', 'container_name') ``` ### How to reproduce With the example above ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29585
https://github.com/apache/airflow/pull/29586
792416d4ad495f1e5562e6170f73f4d8f1fa2eff
7bd87e75def1855d8f5b91e9ab1ffbbf416709ec
"2023-02-17T09:32:11Z"
python
"2023-02-17T17:51:57Z"
closed
apache/airflow
https://github.com/apache/airflow
29,578
["airflow/jobs/scheduler_job_runner.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst", "newsfragments/30374.significant.rst"]
scheduler.tasks.running metric is always 0
### Apache Airflow version 2.5.1 ### What happened I'd expect the `scheduler.tasks.running` metric to represent the number of running tasks, but it is always zero. It appears that #10956 broke this when it removed [the line that increments `num_tasks_in_executor`](https://github.com/apache/airflow/pull/10956/files#diff-bde85feb359b12bdd358aed4106ef4fccbd8fa9915e16b9abb7502912a1c1ab3L1363). Right now that variable is set to 0, never incremented, and then emitted as a gauge. ### What you think should happen instead `scheduler.tasks.running` should either represent the number of tasks running or be removed altogether. ### How to reproduce _No response_ ### Operating System Ubuntu 18.04 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
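A hedged sketch of what a meaningful gauge could look like, counting the executor's currently running task instances instead of a variable that stays at 0. This is illustrative only and is not the change that ultimately resolved the issue (the report itself notes that removing the metric is the other valid option).

```python
from airflow.stats import Stats


def emit_running_task_gauge(executor) -> None:
    # BaseExecutor keeps the keys of currently running task instances in
    # `executor.running`, so gauge its size rather than a hard-coded 0.
    Stats.gauge("scheduler.tasks.running", len(executor.running))
```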
https://github.com/apache/airflow/issues/29578
https://github.com/apache/airflow/pull/30374
d8af20f064b8d8abc9da1f560b2d7e1ac7dd1cc1
cce9b2217b86a88daaea25766d0724862577cc6c
"2023-02-16T17:59:47Z"
python
"2023-04-13T11:04:12Z"
closed
apache/airflow
https://github.com/apache/airflow
29,576
["airflow/triggers/temporal.py", "tests/triggers/test_temporal.py"]
DateTimeSensorAsync breaks if target_time is timezone-aware
### Apache Airflow version 2.5.1 ### What happened `DateTimeSensorAsync` fails with the following error if `target_time` is aware: ``` [2022-06-29, 05:09:11 CDT] {taskinstance.py:1889} ERROR - Task failed with exception Traceback (most recent call last):a File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/sensors/time_sensor.py", line 60, in execute trigger=DateTimeTrigger(moment=self.target_datetime), File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/triggers/temporal.py", line 42, in __init__ raise ValueError(f"The passed datetime must be using Pendulum's UTC, not {moment.tzinfo!r}") ValueError: The passed datetime must be using Pendulum's UTC, not Timezone('America/Chicago') ``` ### What you think should happen instead Given the fact that `DateTimeSensor` correctly handles timezones, this seems like a bug. `DateTimeSensorAsync` should be a drop-in replacement for `DateTimeSensor`, and therefore should have the same timezone behavior. ### How to reproduce ``` #!/usr/bin/env python3 import datetime from airflow.decorators import dag from airflow.sensors.date_time import DateTimeSensor, DateTimeSensorAsync import pendulum @dag( start_date=datetime.datetime(2022, 6, 29), schedule='@daily', ) def datetime_sensor_dag(): naive_time1 = datetime.datetime(2023, 2, 16, 0, 1) aware_time1 = datetime.datetime(2023, 2, 16, 0, 1).replace(tzinfo=pendulum.local_timezone()) naive_time2 = pendulum.datetime(2023, 2, 16, 23, 59) aware_time2 = pendulum.datetime(2023, 2, 16, 23, 59).replace(tzinfo=pendulum.local_timezone()) DateTimeSensor(task_id='naive_time1', target_time=naive_time1, mode='reschedule') DateTimeSensor(task_id='naive_time2', target_time=naive_time2, mode='reschedule') DateTimeSensor(task_id='aware_time1', target_time=aware_time1, mode='reschedule') DateTimeSensor(task_id='aware_time2', target_time=aware_time2, mode='reschedule') DateTimeSensorAsync(task_id='async_naive_time1', target_time=naive_time1) DateTimeSensorAsync(task_id='async_naive_time2', target_time=naive_time2) DateTimeSensorAsync(task_id='async_aware_time1', target_time=aware_time1) # fails DateTimeSensorAsync(task_id='async_aware_time2', target_time=aware_time2) # fails datetime_sensor_dag() ``` This can also happen if the `target_time` is naive and `core.default_timezone = system`. ### Operating System CentOS Stream 8 ### Versions of Apache Airflow Providers N/A ### Deployment Other ### Deployment details Standalone ### Anything else This appears to be nearly identical to #24736. Probably worth checking other time-related sensors as well. ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
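A possible user-side workaround (a sketch, not the upstream fix, which would belong in the trigger itself): normalise the aware target to UTC before handing it to the async sensor, e.g. with `airflow.utils.timezone.convert_to_utc`. The DAG id and task id below are illustrative.

```python
import datetime

import pendulum
from airflow.decorators import dag
from airflow.sensors.date_time import DateTimeSensorAsync
from airflow.utils import timezone


@dag(start_date=datetime.datetime(2022, 6, 29), schedule="@daily")
def datetime_sensor_utc_workaround():
    aware_local = pendulum.datetime(2023, 2, 16, 23, 59, tz="America/Chicago")
    DateTimeSensorAsync(
        task_id="async_aware_as_utc",
        target_time=timezone.convert_to_utc(aware_local),  # hands the trigger a UTC datetime
    )


datetime_sensor_utc_workaround()
```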
https://github.com/apache/airflow/issues/29576
https://github.com/apache/airflow/pull/29606
fd000684d05a993ade3fef38b683ef3cdfdfc2b6
79c07e3fc5d580aea271ff3f0887291ae9e4473f
"2023-02-16T16:03:25Z"
python
"2023-02-19T20:27:44Z"
closed
apache/airflow
https://github.com/apache/airflow
29,552
["airflow/providers/google/suite/hooks/drive.py", "airflow/providers/google/suite/transfers/local_to_drive.py", "tests/providers/google/suite/hooks/test_drive.py", "tests/providers/google/suite/transfers/test_local_to_drive.py"]
Google provider doesn't allow uploading a file to a shared drive
### Apache Airflow Provider(s) google ### Versions of Apache Airflow Providers ``` apache-airflow-providers-google: version 8.9.0 ``` ### Apache Airflow version 2.5.1 ### Operating System Linux - official airflow image from docker hub apache/airflow:slim-2.5.1 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened Not sure if it's a bug or a feature. Originally I used `LocalFilesystemToGoogleDriveOperator` to try uploading a file into a shared Google Drive, without success. The provider didn't find a directory with the given name, so it created a new one without browsing `shared drives`. The method call that doesn't allow the upload is here: https://github.com/apache/airflow/blob/main/airflow/providers/google/suite/hooks/drive.py#L223 ### What you think should happen instead It would be nice if there was an optional parameter to provide a `drive_id` into which the user would like to upload a file, with the same directory check behaviour that already exists but extended to `shared drives`. ### How to reproduce 1. Create a directory on the shared google drive 2. Fill the `LocalFilesystemToGoogleDriveOperator` constructor with the arguments. 3. Execute the function ### Anything else I am willing to submit a PR, but I would need to know more details (your thoughts and expectations for the implementation) to make as few iterations on it as possible. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
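A hypothetical sketch of how the underlying Drive v3 folder lookup can be made shared-drive aware. The flags are standard Drive API parameters rather than the hook's own arguments; `folder_name` is illustrative and application-default credentials are assumed.

```python
from googleapiclient.discovery import build

folder_name = "my_reports"  # illustrative
drive = build("drive", "v3")  # assumes application-default credentials are configured

response = (
    drive.files()
    .list(
        q=f"mimeType = 'application/vnd.google-apps.folder' and name = '{folder_name}'",
        corpora="allDrives",             # search shared drives too, not just "My Drive"
        includeItemsFromAllDrives=True,
        supportsAllDrives=True,
        fields="files(id, name, driveId)",
    )
    .execute()
)
print(response.get("files", []))
```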
https://github.com/apache/airflow/issues/29552
https://github.com/apache/airflow/pull/29477
0222f7d91cee80cc1a464f277f99e69e845c52db
f37772adfdfdee8763147e0563897e4d5d5657c8
"2023-02-15T08:52:31Z"
python
"2023-02-18T19:29:35Z"
closed
apache/airflow
https://github.com/apache/airflow
29,538
["airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/marketing_platform/hooks/campaign_manager.py", "airflow/providers/google/marketing_platform/operators/campaign_manager.py", "airflow/providers/google/marketing_platform/sensors/campaign_manager.py", "airflow/providers/google/provider.yaml", "docs/apache-airflow-providers-google/operators/marketing_platform/campaign_manager.rst", "tests/providers/google/marketing_platform/hooks/test_campaign_manager.py", "tests/providers/google/marketing_platform/operators/test_campaign_manager.py", "tests/providers/google/marketing_platform/sensors/test_campaign_manager.py", "tests/system/providers/google/marketing_platform/example_campaign_manager.py"]
GoogleCampaignManagerReportSensor not working correctly on API Version V4
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Hello, My organization has been running Airflow 2.3.4 and we have run into a problem in regard to the Google Campaign Manager Report Sensor. The purpose of this sensor is to check if a report has finished processing and is ready to be downloaded. If we use API Version v3.5 it works flawlessly. Unfortunately, if we use API Version v4, the sensor malfunctions. It always succeeds regardless of whether the report is ready to download or not. This causes the job to fail downstream because it makes it impossible to download a file that is not ready. At first this doesn't seem like a big problem, since we could just keep using v3.5. However, Google announced that they are going to only let you use API version v4 starting in a week. Is there a way we can get this resolved 😭? Thanks! ### What you think should happen instead _No response_ ### How to reproduce _No response_ ### Operating System linux? ### Versions of Apache Airflow Providers _No response_ ### Deployment Composer ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29538
https://github.com/apache/airflow/pull/30598
5b42aa3b8d0ec069683e22c2cb3b8e8e6e5fee1c
da2749cae56d6e0da322695b3286acd9393052c8
"2023-02-14T15:13:33Z"
python
"2023-04-15T13:34:31Z"
closed
apache/airflow
https://github.com/apache/airflow
29,537
["airflow/cli/commands/config_command.py", "tests/cli/commands/test_config_command.py"]
Docker image fails to start if celery config section is not defined
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Using Airflow `2.3.4` We removed any config values we did not explicitly set from `airflow.cfg`. This was to make future upgrades less involved, as we could only compare configuration values we explicitly set, rather than all permutations of versions. This has been [recommended in slack](https://apache-airflow.slack.com/archives/CCQB40SQJ/p1668441275427859?thread_ts=1668394200.637899&cid=CCQB40SQJ) as an approach. e.g. we set `AIRFLOW__CELERY__BROKER_URL` as an environment variable - we do not set this in `airflow.cfg`, so we removed the `[celery]` section from the Airflow configuration. We set `AIRFLOW__CORE__EXECUTOR=CeleryExecutor`, so we are using the Celery executor. Upon starting the Airflow scheduler, it exited with code `1`, and this message: ``` The section [celery] is not found in config. ``` Upon adding back in an empty ``` [celery] ``` section to `airflow.cfg`, this error went away. I have verified that it still picks up `AIRFLOW__CELERY__BROKER_URL` correctly. ### What you think should happen instead I'd expect Airflow to take defaults as listed [here](https://airflow.apache.org/docs/apache-airflow/2.3.4/howto/set-config.html), I wouldn't expect the presence of configuration sections to cause errors. ### How to reproduce 1. Setup a docker image for the Airflow `scheduler` with `apache/airflow:slim-2.3.4)-python3.10` and the following configuration in `airflow.cfg` - with no `[celery]` section: ``` [core] # The executor class that airflow should use. Choices include # ``SequentialExecutor``, ``LocalExecutor``, ``CeleryExecutor``, ``DaskExecutor``, # ``KubernetesExecutor``, ``CeleryKubernetesExecutor`` or the # full import path to the class when using a custom executor. executor = CeleryExecutor [logging] [metrics] [secrets] [cli] [debug] [api] [lineage] [atlas] [operators] [hive] [webserver] [email] [smtp] [sentry] [celery_kubernetes_executor] [celery_broker_transport_options] [dask] [scheduler] [triggerer] [kerberos] [github_enterprise] [elasticsearch] [elasticsearch_configs] [kubernetes] [smart_sensor] ``` 2. Run the `scheduler` command, also setting `AIRFLOW__CELERY__BROKER_URL` to point to a Celery redis broker. 3. Observe that the scheduler exits. ### Operating System Ubuntu 20.04.5 LTS (Focal Fossa) ### Versions of Apache Airflow Providers _No response_ ### Deployment Other Docker-based deployment ### Deployment details AWS ECS Docker `apache/airflow:slim-2.3.4)-python3.10` Separate: - Webserver - Triggerer - Scheduler - Celery worker - Celery flower services ### Anything else This seems to occur due to this `get-value` check in the Airflow image entrypoint: https://github.com/apache/airflow/blob/28126c12fbdd2cac84e0fbcf2212154085aa5ed9/scripts/docker/entrypoint_prod.sh#L203-L212 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29537
https://github.com/apache/airflow/pull/29541
84b13e067f7b0c71086a42957bb5cf1d6dc86d1d
06d45f0f2c8a71c211e22cf3792cc873f770e692
"2023-02-14T14:58:55Z"
python
"2023-02-15T01:41:37Z"
closed
apache/airflow
https://github.com/apache/airflow
29,532
["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/dag.py", "airflow/models/dagwarning.py"]
AIP-44 Migrate DagWarning.purge_inactive_dag_warnings to Internal API
Used in https://github.com/mhenc/airflow/blob/master/airflow/dag_processing/manager.py#L613; the migration should be straightforward
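A rough sketch of the AIP-44 migration pattern this issue asks for, assuming the `@internal_api_call` decorator from the internal-API work; the exact decorator placement and module layout in the merged change may differ, and the function body is elided.

```python
from airflow.api_internal.internal_api_call import internal_api_call
from airflow.utils.session import NEW_SESSION, provide_session


@internal_api_call
@provide_session
def purge_inactive_dag_warnings(session=NEW_SESSION) -> None:
    """Delete DagWarning rows for DAGs that are no longer active (body elided in this sketch)."""
```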
https://github.com/apache/airflow/issues/29532
https://github.com/apache/airflow/pull/29534
289ae47f43674ae10b6a9948665a59274826e2a5
50b30e5b92808e91ad9b6b05189f560d58dd8152
"2023-02-14T13:13:04Z"
python
"2023-02-15T00:13:44Z"
closed
apache/airflow
https://github.com/apache/airflow
29,531
["airflow/ti_deps/deps/prev_dagrun_dep.py", "tests/ti_deps/deps/test_prev_dagrun_dep.py"]
Dynamic task mapping does not always create mapped tasks
### Apache Airflow version 2.5.1 ### What happened Same problem as https://github.com/apache/airflow/issues/28296, but seems to happen nondeterministically, and still happens when ignoring `depends_on_past=True`. I've got a task that retrieves some filenames, which then creates dynamically mapped tasks to move the files, one per task. I'm using a similar task across multiple DAGs. However, task mapping fails on some DAG runs: it inconsistently happens per DAG run, and some DAGs do not seem to be affected at all. These seem to be the DAGs where no task was ever mapped, so that the mapped task instance ended up in a Skipped state. What happens is that multiple files will be found, but only a single dynamically mapped task will be created. This task never starts and has map_index of -1. It can be found under the "List instances, all runs" menu, but says "No Data found." under the "Mapped Tasks" tab. ![Screenshot 2023-02-14 at 13 29 15](https://user-images.githubusercontent.com/64646000/218742434-c132d3c1-8013-446f-8fd0-9b485506f43e.png) ![Screenshot 2023-02-14 at 13 29 25](https://user-images.githubusercontent.com/64646000/218742461-fb0114f6-6366-403b-841e-03b0657e3561.png) When I press the "Run" button when the mapped task is selected, the following error appears: ``` Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped! ``` The previous task _has_ run however. No errors appeared in my Airflow logs. When I try to run the task with **Ignore All Deps** enabled, I get the error: ``` Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped! ``` This last bit is a contradiction, the task cannot be mapped and not mapped simultaneously. If the amount of mapped tasks is 0 while in this erroneous state, the mapped tasks will not be marked as skipped as expected. ### What you think should happen instead The mapped tasks should not get stuck with "no status". The mapped tasks should be created and ran successfully, or in the case of a 0-length list output of the upstream task they should be skipped. ### How to reproduce Run the below DAG, if it runs successfully clear several tasks out of order. This may not immediately reproduce the bug, but after some task clearing, for me it always ends up in the faulty state described above. ``` from airflow import DAG from airflow.decorators import task import datetime as dt from airflow.operators.python import PythonOperator import random @task def get_filenames_kwargs(): return [ {"file_name": i} for i in range(random.randint(0, 2)) ] def print_filename(file_name): print(file_name) with DAG( dag_id="dtm_test_2", start_date=dt.datetime(2023, 2, 10), default_args={ "owner": "airflow", "depends_on_past": True, }, schedule="@daily", ) as dag: get_filenames_task = get_filenames_kwargs.override(task_id="get_filenames_task")() print_filename_task = PythonOperator.partial( task_id="print_filename_task", python_callable=print_filename, ).expand(op_kwargs=get_filenames_task) ``` ### Operating System Amazon Linux v2 ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? 
- [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29531
https://github.com/apache/airflow/pull/32397
685328e3572043fba6db432edcaacf8d06cf88d0
73bc49adb17957e5bb8dee357c04534c6b41f9dd
"2023-02-14T12:47:12Z"
python
"2023-07-23T23:53:52Z"
closed
apache/airflow
https://github.com/apache/airflow
29,515
["airflow/www/templates/airflow/task.html"]
Hide non-used docs attributes from Task Instance Detail
### Description Inside a BashOperator, I added a markdown snippet of documentation for the "Task Instance Details" of my Airflow nodes. Now I can see my markdown, defined by the attribute "doc_md", but also Attribute: bash_command Attribute: doc Attribute: doc_json Attribute: doc_rst Attribute: doc_yaml. I think it would look better if only the chosen type of docs were shown in the Task Instance Details, instead of listing the names of the other attributes with nothing added to them. ![screenshot](https://user-images.githubusercontent.com/23013638/218585618-f75d180c-6319-4cc5-a569-835af82b3e52.png) ### Use case/motivation I would like to see only the type of doc attribute that I chose to add to my task instance details and hide all the other doc types. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29515
https://github.com/apache/airflow/pull/29545
655ffb835eb4c5343c3f2b4d37b352248f2768ef
f2f6099c5a2f3613dce0cc434a95a9479d748cf5
"2023-02-13T22:10:31Z"
python
"2023-02-16T14:17:49Z"
closed
apache/airflow
https://github.com/apache/airflow
29,435
["airflow/decorators/base.py", "tests/decorators/test_python.py"]
TaskFlow API `multiple_outputs` inferral causes import errors when using TYPE_CHECKING
### Apache Airflow version 2.5.1 ### What happened When using the TaskFlow API, I like to generally keep a good practice of adding type annotations in the TaskFlow functions so others reading the DAG and task code have better context around inputs/outputs, keep imports solely used for typing behind `typing.TYPE_CHECKING`, and utilize PEP 563 for forwarding annotation evaluations. Unfortunately, when using ~PEP 563 _and_ `TYPE_CHECKING`~ just TYPE_CHECKING, DAG import errors occur with a "NameError: <name> is not defined." exception. ### What you think should happen instead Users should be free to use ~PEP 563 and~ `TYPE_CHECKING` when using the TaskFlow API and not hit DAG import errors along the way. ### How to reproduce Using a straightforward use case of transforming a DataFrame, let's assume this toy example: ```py from __future__ import annotations from typing import TYPE_CHECKING, Any from pendulum import datetime from airflow.decorators import dag, task if TYPE_CHECKING: from pandas import DataFrame @dag(start_date=datetime(2023, 1, 1), schedule=None) def multiple_outputs(): @task() def transform(df: DataFrame) -> dict[str, Any]: ... transform() multiple_outputs() ``` Add this DAG to your DAGS_FOLDER and the following import error should be observed: <img width="641" alt="image" src="https://user-images.githubusercontent.com/48934154/217713685-ec29d5cc-4a48-4049-8dfa-56cbd76cddc3.png"> ### Operating System Debian GNU/Linux ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==6.2.0 apache-airflow-providers-apache-hive==5.1.1 apache-airflow-providers-apache-livy==3.2.0 apache-airflow-providers-celery==3.1.0 apache-airflow-providers-cncf-kubernetes==5.1.1 apache-airflow-providers-common-sql==1.3.3 apache-airflow-providers-databricks==4.0.0 apache-airflow-providers-dbt-cloud==2.3.1 apache-airflow-providers-elasticsearch==4.3.3 apache-airflow-providers-ftp==3.3.0 apache-airflow-providers-google==8.8.0 apache-airflow-providers-http==4.1.1 apache-airflow-providers-imap==3.1.1 apache-airflow-providers-microsoft-azure==5.1.0 apache-airflow-providers-postgres==5.4.0 apache-airflow-providers-redis==3.1.0 apache-airflow-providers-sftp==4.2.1 apache-airflow-providers-snowflake==4.0.2 apache-airflow-providers-sqlite==3.3.1 apache-airflow-providers-ssh==3.4.0 astronomer-providers==1.14.0 ### Deployment Astronomer ### Deployment details OOTB local Airflow install with LocalExecutor built with the Astro CLI. ### Anything else - This behavior/error was not observed using Airflow 2.4.3. - As a workaround, `multiple_outputs` can be explicitly set on the TaskFlow function to skip the inferral. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
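The workaround mentioned under "Anything else", sketched out: setting `multiple_outputs` explicitly so the decorator skips the return-type inferral that trips over the `TYPE_CHECKING`-only import.

```python
from __future__ import annotations

from typing import TYPE_CHECKING, Any

from airflow.decorators import task

if TYPE_CHECKING:
    from pandas import DataFrame


@task(multiple_outputs=True)  # explicit value, so no inferral from the annotation at parse time
def transform(df: DataFrame) -> dict[str, Any]:
    ...
```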
https://github.com/apache/airflow/issues/29435
https://github.com/apache/airflow/pull/29445
f9e9d23457cba5d3e18b5bdb7b65ecc63735b65b
b1306065054b98a63c6d3ab17c84d42c2d52809a
"2023-02-09T03:55:48Z"
python
"2023-02-12T07:45:26Z"
closed
apache/airflow
https://github.com/apache/airflow
29,432
["airflow/models/mappedoperator.py", "tests/models/test_mappedoperator.py", "tests/test_utils/mock_operators.py"]
Jinja templating doesn't work with container_resources when using dynamic task mapping with Kubernetes Pod Operator
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Google Cloud Composer Version - 2.1.5 Airflow Version - 2.4.3 We are trying to use dynamic task mapping with Kubernetes Pod Operator. Our use-case is to return the pod's CPU and memory requirements from a function which is included as a macro in DAG Without dynamic task mapping it works perfectly, but when used with the dynamic task mapping, it is unable to recognize the macro. container_resources is a templated field as per the [docs](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/operators/kubernetes_pod/index.html#airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator), the feature was introduced in this [PR](https://github.com/apache/airflow/pull/27457). We also tried the toggling the boolean `render_template_as_native_obj`, but still no luck. Providing below a trimmed version of our DAG to help reproduce the issue. (function to return cpu and memory is trivial here just to show example) ### What you think should happen instead It should have worked similar with or without dynamic task mapping. ### How to reproduce Deployed the following DAG in Google Cloud Composer. ``` import datetime import os from airflow import models from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import ( KubernetesPodOperator, ) from kubernetes.client import models as k8s_models dvt_image = os.environ.get("DVT_IMAGE") default_dag_args = {"start_date": datetime.datetime(2022, 1, 1)} def pod_mem(): return "4000M" def pod_cpu(): return "1000m" with models.DAG( "sample_dag", schedule_interval=None, default_args=default_dag_args, render_template_as_native_obj=True, user_defined_macros={ "pod_mem": pod_mem, "pod_cpu": pod_cpu, }, ) as dag: task_1 = KubernetesPodOperator( task_id="task_1", name="task_1", namespace="default", image=dvt_image, cmds=["bash", "-cx"], arguments=["echo hello"], service_account_name="sa-k8s", container_resources=k8s_models.V1ResourceRequirements( limits={ "memory": "{{ pod_mem() }}", "cpu": "{{ pod_cpu() }}", } ), startup_timeout_seconds=1800, get_logs=True, image_pull_policy="Always", config_file="/home/airflow/composer_kube_config", dag=dag, ) task_2 = KubernetesPodOperator.partial( task_id="task_2", name="task_2", namespace="default", image=dvt_image, cmds=["bash", "-cx"], service_account_name="sa-k8s", container_resources=k8s_models.V1ResourceRequirements( limits={ "memory": "{{ pod_mem() }}", "cpu": "{{ pod_cpu() }}", } ), startup_timeout_seconds=1800, get_logs=True, image_pull_policy="Always", config_file="/home/airflow/composer_kube_config", dag=dag, ).expand(arguments=[["echo hello"]]) task_1 >> task_2 ``` task_1 (without dynamic task mapping) completes successfully, while task_2(with dynamic task mapping) fails. Looking at the error logs, it failed while rendering the Pod spec since the calls to pod_cpu() and pod_mem() are unresolved. 
Here is the traceback: Exception when attempting to create Namespaced Pod: { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": {}, "labels": { "dag_id": "sample_dag", "task_id": "task_2", "run_id": "manual__2023-02-08T183926.890852Z-eee90e4ee", "kubernetes_pod_operator": "True", "map_index": "0", "try_number": "2", "airflow_version": "2.4.3-composer", "airflow_kpo_in_cluster": "False" }, "name": "task-2-46f76eb0432d42ae9a331a6fc53835b3", "namespace": "default" }, "spec": { "affinity": {}, "containers": [ { "args": [ "echo hello" ], "command": [ "bash", "-cx" ], "env": [], "envFrom": [], "image": "us.gcr.io/ams-e2e-testing/edw-dvt-tool", "imagePullPolicy": "Always", "name": "base", "ports": [], "resources": { "limits": { "memory": "{{ pod_mem() }}", "cpu": "{{ pod_cpu() }}" } }, "volumeMounts": [] } ], "hostNetwork": false, "imagePullSecrets": [], "initContainers": [], "nodeSelector": {}, "restartPolicy": "Never", "securityContext": {}, "serviceAccountName": "sa-k8s", "tolerations": [], "volumes": [] } } Traceback (most recent call last): File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 143, in run_pod_async resp = self._client.create_namespaced_pod( File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 7356, in create_namespaced_pod return self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) # noqa: E501 File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 7455, in create_namespaced_pod_with_http_info return self.api_client.call_api( File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api return self.__call_api(resource_path, method, File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api response_data = self.request( File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 391, in request return self.rest_client.POST(url, File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 275, in POST return self.request("POST", url, File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 234, in request raise ApiException(http_resp=r) kubernetes.client.exceptions.ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'Audit-Id': '1ef20c0b-6980-4173-b9cc-9af5b4792e86', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '1b263a21-4c75-4ef8-8147-c18780a13f0e', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3cd4cda4-908c-4944-a422-5512b0fb88d6', 'Date': 'Wed, 08 Feb 2023 18:45:23 GMT', 'Content-Length': '256'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod in version \"v1\" cannot be handled as a Pod: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'","reason":"BadRequest","code":400} ### Operating System Google Composer Kubernetes Cluster ### Versions of Apache Airflow Providers _No response_ ### Deployment Composer ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
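One possible workaround sketch (not the eventual fix): since `pod_mem`/`pod_cpu` are plain Python functions defined in the reproduction DAG above, they can be called directly when building the `partial()` arguments instead of going through Jinja. Names such as `dvt_image` and `k8s_models` are taken from that DAG:

```python
task_2 = KubernetesPodOperator.partial(
    task_id="task_2",
    name="task_2",
    namespace="default",
    image=dvt_image,
    cmds=["bash", "-cx"],
    container_resources=k8s_models.V1ResourceRequirements(
        limits={
            "memory": pod_mem(),  # evaluated at DAG parse time, no templating involved
            "cpu": pod_cpu(),
        }
    ),
).expand(arguments=[["echo hello"]])
```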
https://github.com/apache/airflow/issues/29432
https://github.com/apache/airflow/pull/29451
43443eb539058b7b4756455f76b0e883186d9250
5eefd47771a19dca838c8cce40a4bc5c555e5371
"2023-02-08T19:01:33Z"
python
"2023-02-13T08:48:47Z"
closed
apache/airflow
https://github.com/apache/airflow
29,428
["pyproject.toml"]
Require newer version of pypi/setuptools to remove security scan issue (CVE-2022-40897)
### Description Hi. My team is evaluating airflow, so I ran a security scan on it. It is flagging a Medium security issue with pypi/setuptools. See https://nvd.nist.gov/vuln/detail/CVE-2022-40897 for details. Is it possible to require a more recent version? Or perhaps airflow users are not vulnerable to this? ### Use case/motivation _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29428
https://github.com/apache/airflow/pull/29465
9c6f83bb6f3e3b57ae0abbe9eb0582fcde265702
41dff9875bce4800495c9132b10a6c8bff900a7c
"2023-02-08T15:11:54Z"
python
"2023-02-11T16:03:14Z"
closed
apache/airflow
https://github.com/apache/airflow
29,423
["airflow/providers/amazon/aws/hooks/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py"]
GlueJobOperator throws error after migration to newest version of Airflow
### Apache Airflow version 2.5.1 ### What happened We were using GlueJobOperator with Airflow 2.3.3 (official docker image) and it was working well, we didn't specify script file location, because it was inferred from the job name. After migration to 2.5.1 (official docker image) the operator fails if `s3_bucket` and `script_location` are not specified. That's the error I see: ``` Traceback (most recent call last): File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/operators/glue.py", line 146, in execute glue_job_run = glue_job.initialize_job(self.script_args, self.run_job_kwargs) File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 155, in initialize_job job_name = self.create_or_update_glue_job() File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 300, in create_or_update_glue_job config = self.create_glue_job_config() File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 97, in create_glue_job_config raise ValueError("Could not initialize glue job, error: Specify Parameter `s3_bucket`") ValueError: Could not initialize glue job, error: Specify Parameter `s3_bucket` ``` ### What you think should happen instead I was expecting that after migration the operator would work the same way. ### How to reproduce Create a dag with `GlueJobOperator` operator and do not use s3_bucket or script_location arguments ### Operating System Linux ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==7.1.0 ### Deployment Docker-Compose ### Deployment details `apache/airflow:2.5.1-python3.10` Docker image and official docker compose ### Anything else I believe it was commit #27893 by @romibuzi that introduced this behaviour. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
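A hedged workaround sketch until the regression is addressed: pass the bucket and script location explicitly so the hook does not fail while building the job config. The job name, bucket, paths, and role below are placeholders:

```python
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

run_existing_job = GlueJobOperator(
    task_id="run_existing_job",
    job_name="my-existing-glue-job",                       # placeholder
    script_location="s3://my-glue-bucket/scripts/job.py",  # placeholder
    s3_bucket="my-glue-bucket",                            # placeholder
    iam_role_name="my-glue-role",                          # placeholder
)
```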
https://github.com/apache/airflow/issues/29423
https://github.com/apache/airflow/pull/29659
9de301da2a44385f57be5407e80e16ee376f3d39
6c13f04365b916e938e3bea57e37fc80890b8377
"2023-02-08T09:09:12Z"
python
"2023-02-22T00:00:18Z"
closed
apache/airflow
https://github.com/apache/airflow
29,422
["airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py", "tests/providers/amazon/aws/transfers/test_dynamodb_to_s3.py"]
Multiple AWS connections support in DynamoDBToS3Operator
### Description I want to add support of a separate AWS connection for DynamoDB in `DynamoDBToS3Operator` in `apache-airflow-providers-amazon` via `aws_dynamodb_conn_id` constructor argument. ### Use case/motivation Sometimes DynamoDB tables and S3 buckets live in different AWS accounts so to access both resources you need to assume a role in another account from one of them. That role can be specified in AWS connection, thus we need to support two of them in this operator. ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
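A sketch of the proposed usage; `aws_dynamodb_conn_id` is the parameter name suggested in this issue and may differ from what eventually gets merged, and the table, bucket, and connection ids are placeholders:

```python
from airflow.providers.amazon.aws.transfers.dynamodb_to_s3 import DynamoDBToS3Operator

backup_table = DynamoDBToS3Operator(
    task_id="backup_table",
    dynamodb_table_name="my_table",                     # placeholder
    s3_bucket_name="my-backup-bucket",                  # placeholder
    file_size=1_000_000,                                # bytes per output file
    aws_conn_id="aws_account_with_s3",                  # existing parameter: used for S3
    aws_dynamodb_conn_id="aws_account_with_dynamodb",   # proposed: account/role for DynamoDB
)
```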
https://github.com/apache/airflow/issues/29422
https://github.com/apache/airflow/pull/29452
8691c4f98c6cd6d96e87737158a9be0f6a04b9ad
3780b01fc46385809423bec9ef858be5be64b703
"2023-02-08T08:58:26Z"
python
"2023-03-09T22:02:18Z"
closed
apache/airflow
https://github.com/apache/airflow
29,405
["airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts"]
Add pagination to get_log in the rest API
### Description Right now, the `get_log` endpoint at `/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/logs/{task_try_number}` does not have any pagination and therefore we can be forced to load extremely large text blocks, which makes everything slow. (see the workaround fix we needed to do in the UI: https://github.com/apache/airflow/pull/29390) In `task_log_reader`, we do have `log_pos` and `offset` (see [here](https://github.com/apache/airflow/blob/main/airflow/utils/log/log_reader.py#L80-L83)). It would be great to expose those parameters in the REST API in order to break apart task instance logs into more manageable pieces. ### Use case/motivation _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29405
https://github.com/apache/airflow/pull/30729
7d02277ae13b7d1e6cea9e6c8ff0d411100daf77
7d62cbb97e1bc225f09e3cfac440aa422087a8a7
"2023-02-07T16:10:57Z"
python
"2023-04-22T20:49:40Z"
closed
apache/airflow
https://github.com/apache/airflow
29,396
["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"]
BigQuery Hook list_rows method missing page_token return value
### Apache Airflow Provider(s) google ### Versions of Apache Airflow Providers apache-airflow-providers-google==7.0.0 But the problem exists in all newer versions. ### Apache Airflow version apache-airflow==2.3.2 ### Operating System Ubuntu 20.04.4 LTS ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened The `list_rows` method in the BigQuery Hook does not return the page_token value, which is necessary for paginating query results. Same problem with `get_datasets_list` method. The documentation for the `get_datasets_list` method even states that the page_token parameter can be accessed: ``` :param page_token: Token representing a cursor into the datasets. If not passed, the API will return the first page of datasets. The token marks the beginning of the iterator to be returned and the value of the ``page_token`` can be accessed at ``next_page_token`` of the :class:`~google.api_core.page_iterator.HTTPIterator`. ``` but it doesn't return HTTPIterator. Instead, it converts the `HTTPIterator` to `list[DatasetListItem]` using `list(datasets)`, making it impossible to retrieve the original `HTTPIterator` and thus impossible to obtain the `next_page_token`. ### What you think should happen instead `list_rows` \ `get_datasets_list` methods should return `Iterator` OR both the list of rows\datasets and the page_token value to allow users to retrieve multiple results pages. For backward compatibility, we can have a parameter like `return_iterator=True` or smth like that. ### How to reproduce _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
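A workaround sketch that goes through the underlying `google-cloud-bigquery` client so the page token is not lost; it assumes the hook exposes `get_client()` (recent provider versions do) and uses placeholder project/table names:

```python
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook

hook = BigQueryHook(gcp_conn_id="google_cloud_default")
client = hook.get_client(project_id="my-project")              # placeholder project

row_iterator = client.list_rows("my-project.my_dataset.my_table", max_results=100)
first_page = list(next(row_iterator.pages))                    # rows of the first page
next_page_token = row_iterator.next_page_token                 # None when no more pages remain
```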
https://github.com/apache/airflow/issues/29396
https://github.com/apache/airflow/pull/30543
d9896fd96eb91a684a512a86924a801db53eb945
4703f9a0e589557f5176a6f466ae83fe52644cf6
"2023-02-07T02:26:41Z"
python
"2023-04-08T17:01:57Z"
closed
apache/airflow
https://github.com/apache/airflow
29,393
["airflow/providers/amazon/aws/log/s3_task_handler.py", "tests/providers/amazon/aws/log/test_s3_task_handler.py"]
S3TaskHandler continuously returns "*** Falling back to local log" even if log_pos is provided when log not in s3
### Apache Airflow Provider(s) amazon ### Versions of Apache Airflow Providers apache-airflow-providers-amazon==7.1.0 ### Apache Airflow version 2.5.1 ### Operating System Ubuntu 18.04 ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened When looking at logs in the UI for a running task while using remote S3 logging, the logs for the task are only uploaded to S3 after the task has completed. The `S3TaskHandler` falls back to the local logs stored on the worker in that case (by falling back to the `FileTaskHandler` behavior) and prepends the line `*** Falling back to local log` to those logs. This is mostly fine, but for the new log streaming behavior, this means that `*** Falling back to local log` is returned from `/get_logs_with_metadata` on each call, even if there are no new logs. ### What you think should happen instead I'd expect the fallback message to be included only in calls with no `log_pos` in the metadata or with a `log_pos` of `0`. ### How to reproduce Start a task with `logging.remote_logging` set to `True` and `logging.remote_base_log_folder` set to `s3://something` and watch the logs while the task is running. You'll see `*** Falling back to local log` printed every few seconds. ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29393
https://github.com/apache/airflow/pull/29708
13098d5c35cf056c3ef08ea98a1970ee1a3e76f8
5e006d743d1ba3781acd8e053642f2367a8e7edc
"2023-02-06T20:33:08Z"
python
"2023-02-23T21:25:39Z"
closed
apache/airflow
https://github.com/apache/airflow
29,358
["airflow/models/baseoperator.py", "airflow/models/dag.py", "airflow/models/param.py"]
Cannot use TypedDict object when defining params
### Apache Airflow version 2.5.1 ### What happened Context: I am attempting to use [TypedDict](https://docs.python.org/3/library/typing.html#typing.TypedDict) objects to maintain the keys used in DAG params in a single place, and check for key names across multiple DAGs that use the params. This raises an error with `mypy` as `params` expects an `Optional[Dict]`. Due to the invariance of `Dict`, this does not accept `TypedDict` objects. What happened: I passed a `TypedDict` to the `params` arg of `DAG` and got a TypeError. ### What you think should happen instead `TypedDict` objects should be accepted by `DAG`, which should accept `Optional[Mapping[str, Any]]`. Unless I'm mistaken, `params` are converted to a `ParamsDict` class and therefore the appropriate type hint is a generic `Mapping` type. ### How to reproduce Steps to reproduce ```Python from typing import TypedDict from airflow import DAG from airflow.models import Param class ParamsTypedDict(TypedDict): str_param: Param params: ParamsTypedDict = { "str_param": Param("", type="str") } with DAG( dag_id="mypy-error-dag", # The line below raises a mypy error # Argument "params" to "DAG" has incompatible type "ParamsTypedDict"; expected "Optional[Dict[Any, Any]]" [arg-type] params=params, ) as dag: pass ``` ### Operating System Amazon Linux ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
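A self-contained illustration of why widening the hint fixes this: a `TypedDict` is not a `Dict[Any, Any]`, but it is a `Mapping[str, Any]`, so a signature using `Mapping` passes mypy. The function below is hypothetical and only stands in for the `DAG.__init__` signature:

```python
from __future__ import annotations

from typing import Any, Mapping, TypedDict


class ParamsTypedDict(TypedDict):
    str_param: str


def accepts_params(params: Mapping[str, Any] | None = None) -> None:
    """Stand-in for a signature typed with Mapping instead of Dict."""
    ...


# mypy accepts this call, while a Dict[Any, Any] annotation rejects it.
accepts_params(ParamsTypedDict(str_param="hello"))
```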
https://github.com/apache/airflow/issues/29358
https://github.com/apache/airflow/pull/29782
b6392ae5fd466fa06ca92c061a0f93272e27a26b
b069df9b0a792beca66b08d873a66d5640ddadb7
"2023-02-03T14:40:04Z"
python
"2023-03-07T21:25:15Z"
closed
apache/airflow
https://github.com/apache/airflow
29,329
["airflow/example_dags/example_setup_teardown.py", "airflow/models/abstractoperator.py", "airflow/models/dag.py", "tests/models/test_dag.py", "tests/models/test_dagrun.py"]
Automatically clear setup/teardown when clearing a dependent task
null
https://github.com/apache/airflow/issues/29329
https://github.com/apache/airflow/pull/30271
f4c4b7748655cd11d2c297de38563b2e6b840221
0c2778f348f61f3bf08b840676d681e93a60f54a
"2023-02-02T15:44:26Z"
python
"2023-06-21T13:34:18Z"
closed
apache/airflow
https://github.com/apache/airflow
29,325
["airflow/providers/cncf/kubernetes/python_kubernetes_script.py", "airflow/utils/decorators.py", "tests/decorators/test_external_python.py", "tests/decorators/test_python_virtualenv.py", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py", "tests/providers/docker/decorators/test_docker.py"]
Ensure setup/teardown work on a previously decorated function (eg task.docker)
null
https://github.com/apache/airflow/issues/29325
https://github.com/apache/airflow/pull/30216
3022e2ecbb647bfa0c93fbcd589d0d7431541052
df49ad179bddcdb098b3eccbf9bb6361cfbafc36
"2023-02-02T15:43:06Z"
python
"2023-03-24T17:01:34Z"
closed
apache/airflow
https://github.com/apache/airflow
29,323
["airflow/models/serialized_dag.py", "tests/models/test_serialized_dag.py"]
DAG dependencies graph not updating when deleting a DAG
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened On Airflow 2.4.2, the DAG dependencies graph shows deleted DAGs that used to have dependencies on currently existing DAGs. ### What you think should happen instead Deleted DAGs should not appear on the DAG Dependencies view. ### How to reproduce Create a DAG with a dependency on another DAG, like a wait sensor. Then remove the new DAG. ### Operating System apache/airflow:2.4.2-python3.10 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29323
https://github.com/apache/airflow/pull/29407
18347d36e67894604436f3ef47d273532683b473
02a2efeae409bddcfedafe273fffc353595815cc
"2023-02-02T15:22:37Z"
python
"2023-02-13T19:25:49Z"
closed
apache/airflow
https://github.com/apache/airflow
29,322
["airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_utils.py"]
DAG list, sorting lost when switching page
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Hi, I'm currently on Airflow 2.4.2. On /home, when sorting by DAG/Owner/Next Run and going to the next page, the sort resets. This only works if I'm looking for the last or first entries; everything in the middle is unreachable. ### What you think should happen instead The sorting should persist across pagination. ### How to reproduce Sort by any sortable field on the DAG list and go to the next page. ### Operating System apache/airflow:2.4.2-python3.10 ### Versions of Apache Airflow Providers _No response_ ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29322
https://github.com/apache/airflow/pull/29756
c917c9de3db125cac1beb0a58ac81f56830fb9a5
c8cd90fa92c1597300dbbad4366c2bef49ef6390
"2023-02-02T15:19:51Z"
python
"2023-03-02T14:59:43Z"
closed
apache/airflow
https://github.com/apache/airflow
29,320
["airflow/api/common/experimental/get_task_instance.py", "airflow/cli/commands/task_command.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "airflow/serialization/pydantic/dag_run.py", "airflow/serialization/pydantic/taskinstance.py", "airflow/utils/log/logging_mixin.py", "airflow/www/views.py"]
AIP-44 Migrate TaskCommand._get_ti to Internal API
https://github.com/apache/airflow/blob/main/airflow/cli/commands/task_command.py#L145
https://github.com/apache/airflow/issues/29320
https://github.com/apache/airflow/pull/35312
ab6e623cb1a75f54fc419cee66a16e3d8ff1adc2
1e1adc569f43494aabf3712b651956636c04df7f
"2023-02-02T15:10:45Z"
python
"2023-11-08T15:53:52Z"
closed
apache/airflow
https://github.com/apache/airflow
29,301
["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"]
BigQueryCreateEmptyTableOperator `exists_ok` parameter doesn't throw appropriate error when set to "False"
### Apache Airflow Provider(s) google ### Versions of Apache Airflow Providers I'm using `apache-airflow-providers-google==8.2.0`, but it looks like the relevant code that's causing this to occur is still in use as of `8.8.0`. ### Apache Airflow version 2.3.2 ### Operating System Debian (from Docker image `apache/airflow:2.3.2-python3.10`) ### Deployment Official Apache Airflow Helm Chart ### Deployment details Deployed on an EKS cluster via Helm. ### What happened The first task in one of my DAGs is to create an empty BigQuery table using the `BigQueryCreateEmptyTableOperator` as follows: ```python create_staging_table = BigQueryCreateEmptyTableOperator( task_id="create_staging_table", dataset_id="my_dataset", table_id="tmp_table", schema_fields=[ {"name": "field_1", "type": "TIMESTAMP", "mode": "NULLABLE"}, {"name": "field_2", "type": "INTEGER", "mode": "NULLABLE"}, {"name": "field_3", "type": "INTEGER", "mode": "NULLABLE"} ], exists_ok=False ) ``` Note that `exists_ok=False` explicitly here, but it is also the default value. This task exits with a `SUCCESS` status even when `my_dataset.tmp_table` already exists in a given BigQuery project. The task returns the following logs: ``` [2023-02-02, 05:52:29 UTC] {bigquery.py:875} INFO - Creating table [2023-02-02, 05:52:29 UTC] {bigquery.py:901} INFO - Table my_dataset.tmp_table already exists. [2023-02-02, 05:52:30 UTC] {taskinstance.py:1395} INFO - Marking task as SUCCESS. dag_id=my_fake_dag, task_id=create_staging_table, execution_date=20230202T044000, start_date=20230202T055229, end_date=20230202T055230 [2023-02-02, 05:52:30 UTC] {local_task_job.py:156} INFO - Task exited with return code 0 ``` ### What you think should happen instead Setting `exists_ok=False` should raise an exception and exit the task with a `FAILED` status if the table being created already exists in BigQuery. ### How to reproduce 1. Deploy Airflow 2.3.2 running Python 3.10 in some capacity 2. Ensure `apache-airflow-providers-google==8.2.0` (or 8.8.0, as I don't believe the issue has been fixed) is installed on the deployment. 3. Set up a GCP project and create a BigQuery dataset. 4. Create an empty BigQuery table with a schema. 5. Create a DAG that uses the `BigQueryCreateEmptyTableOperator` to create a new BigQuery table. 6. Run the DAG from Step 5 on the Airflow instance deployed in Step 1. 7. Observe the task's status. ### Anything else I believe the silent failure may be occurring [here](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py#L1377), as the `except` statement results in a log output, but doesn't actually raise an exception or change a state that would make the task fail. If this is in fact the case, I'd be happy to submit a PR, but appreciate any input as to any error-handling standards/consistencies that this provider package maintains. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
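A sketch of the behaviour the report expects (not the actual provider code): the `Conflict` raised by the BigQuery client should only be swallowed when `exists_ok=True`:

```python
from google.api_core.exceptions import Conflict


def create_empty_table_strictly(create_fn, exists_ok: bool) -> None:
    """create_fn is any callable that performs the BigQuery table creation."""
    try:
        create_fn()
    except Conflict:
        if not exists_ok:
            raise  # fail the task instead of logging and returning success
```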
https://github.com/apache/airflow/issues/29301
https://github.com/apache/airflow/pull/29394
228d79c1b3e11ecfbff5a27c900f9d49a84ad365
a5adb87ab4ee537eb37ef31aba755b40f6f29a1e
"2023-02-02T06:30:16Z"
python
"2023-02-26T19:09:08Z"
closed
apache/airflow
https://github.com/apache/airflow
29,282
["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "docs/apache-airflow-providers-ssh/connections/ssh.rst", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"]
Ssh connection extra parameter conn_timeout doesn't work with ssh operator
### Apache Airflow Provider(s) ssh ### Versions of Apache Airflow Providers apache-airflow-providers-ssh>=3.3.0 ### Apache Airflow version 2.5.0 ### Operating System debian "11 (bullseye)" ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened I have an SSH operator task where the command can take a long time. In recent SSH provider versions (>=3.3.0) it stopped working, and I suspect it is because of #27184. After this change it looks like the timeout is 10 seconds, and once there is no output provided through SSH for 10 seconds I'm getting the following error: ``` [2023-01-26, 11:49:57 UTC] {taskinstance.py:1772} ERROR - Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/ssh/operators/ssh.py", line 171, in execute result = self.run_ssh_client_command(ssh_client, self.command, context=context) File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/ssh/operators/ssh.py", line 156, in run_ssh_client_command exit_status, agg_stdout, agg_stderr = self.ssh_hook.exec_ssh_client_command( File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/ssh/hooks/ssh.py", line 521, in exec_ssh_client_command raise AirflowException("SSH command timed out") airflow.exceptions.AirflowException: SSH command timed out ``` At first I thought that this was ok, since I could just set the `conn_timeout` extra parameter in my ssh connection. But then I noticed that this parameter from the connection is not used anywhere - so this doesn't work, and you have to modify your task code to set the needed value of this parameter in the SSH operator. What's more, even with modifying task code it's not possible to achieve the previous behavior (when this parameter was not set) since now it'll be set to 10 when you pass None as the value. ### What you think should happen instead I think it should be possible to pass the timeout parameter through the connection extra field for the SSH operator (including a None value, meaning no timeout). ### How to reproduce Add a simple DAG that sleeps for more than 10 seconds, for example: ```python # this DAG only works for SSH provider versions <=3.2.0 from airflow.models import DAG from airflow.contrib.operators.ssh_operator import SSHOperator from airflow.utils.dates import days_ago from airflow.operators.dummy import DummyOperator args = { 'owner': 'airflow', 'start_date': days_ago(2), } dag = DAG( default_args=args, dag_id="test_ssh", max_active_runs=1, catchup=False, schedule_interval="@hourly" ) task0 = SSHOperator(ssh_conn_id='ssh_localhost', task_id="test_sleep", command=f'sleep 15s', dag=dag) task0 ``` Try configuring the `ssh_localhost` connection to make the DAG work using extra conn_timeout or extra timeout (or other) parameters. ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
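For now, the timeout has to be set on the operator itself; a workaround sketch reusing the reproduction DAG's connection id (`conn_timeout` and `cmd_timeout` are existing SSHOperator parameters, and the values below are illustrative):

```python
from airflow.providers.ssh.operators.ssh import SSHOperator

# Set a generous command timeout on the operator, since the connection-level
# extra is ignored and (per this report) None ends up falling back to 10s.
long_sleep = SSHOperator(
    task_id="test_sleep",
    ssh_conn_id="ssh_localhost",
    command="sleep 15s",
    conn_timeout=60,     # seconds allowed to establish the connection
    cmd_timeout=3600,    # seconds the remote command may run without completing
)
```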
https://github.com/apache/airflow/issues/29282
https://github.com/apache/airflow/pull/29347
a21c17bc07c1eeb733eca889a02396fab401b215
fd000684d05a993ade3fef38b683ef3cdfdfc2b6
"2023-02-01T08:52:03Z"
python
"2023-02-19T18:51:51Z"
closed
apache/airflow
https://github.com/apache/airflow
29,267
["airflow/example_dags/example_python_decorator.py", "airflow/example_dags/example_python_operator.py", "airflow/example_dags/example_short_circuit_operator.py", "docs/apache-airflow/howto/operator/python.rst", "docs/conf.py", "docs/sphinx_design/static/custom.css", "setup.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"]
Support tabs in docs
### What do you see as an issue? I suggest supporting tabs in the docs to improve the readability when demonstrating different ways to achieve the same things. **Motivation** We have multiple ways to achieve the same thing in Airflow, for example: - TaskFlow API & "classic" operators - CLI & REST API & API client However, our docs currently do not consistently demonstrate different ways to use Airflow. For example, https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html demonstrates TaskFlow operators in some examples and classic operators in other examples. All cases covered can be supported by both the TaskFlow & classic operators. In the case of https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html, I think a nice solution to demonstrate both approaches would be to use tabs. That way somebody who prefers the TaskFlow API can view all TaskFlow examples, and somebody who prefers the classic operators (we should give those a better name) can view only those examples. **Possible implementation** There is a package [sphinx-tabs](https://github.com/executablebooks/sphinx-tabs) for this. For the example above, having https://sphinx-tabs.readthedocs.io/en/latest/#group-tabs would be great because it enables you to view all examples of one "style" with a single click. ### Solving the problem Install https://github.com/executablebooks/sphinx-tabs with the docs. ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29267
https://github.com/apache/airflow/pull/36041
f60d458dc08a5d5fbe5903fffca8f7b03009f49a
58e264c83fed1ca42486302600288230b944ab06
"2023-01-31T14:23:42Z"
python
"2023-12-06T08:44:18Z"
closed
apache/airflow
https://github.com/apache/airflow
29,258
["airflow/providers/google/cloud/hooks/compute_ssh.py", "tests/providers/google/cloud/hooks/test_compute_ssh.py", "tests/system/providers/google/cloud/compute/example_compute_ssh.py", "tests/system/providers/google/cloud/compute/example_compute_ssh_os_login.py", "tests/system/providers/google/cloud/compute/example_compute_ssh_parallel.py"]
ComputeEngineSSHHook on parallel runs in Composer gives banner Error reading SSH protocol banner
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened We are using ComputeEngineSSHHook for some of our Airflow DAGS in Cloud Composer Everything works fine when DAGs run one by one But when we start parallelism where multiple tasks are trying to connect to our GCE instance using ComputeEngineSSHHook at the same time, We experience intermittent errors like the one give below Since cloud composer by default has 3 retries, sometimes in the second or third attempt this issue gets resolved automatically but we would like to understand why this issue comes in the first place when there are multiple operators trying to generate keys and SSH into GCE instance We have tried maintaining the DAG task with banner_timeout and expire_timeout parameters but we still see this issue create_transfer_run_directory = SSHOperator( task_id="create_transfer_run_directory", ssh_hook=ComputeEngineSSHHook( instance_name=GCE_INSTANCE, zone=GCE_ZONE, use_oslogin=True, use_iap_tunnel=False, use_internal_ip=True, ), conn_timeout = 120, cmd_timeout = 120, banner_timeout = 120.0, command=f"sudo mkdir -p {transfer_run_directory}/" '{{ ti.xcom_pull(task_ids="load_config", key="transfer_id") }}', dag=dag, ) **[2023-01-31, 03:30:39 UTC] {compute_ssh.py:286} INFO - Importing SSH public key using OSLogin: user=edw-sa-gcc@pso-e2e-sql.iam.gserviceaccount.com [2023-01-31, 03:30:39 UTC] {compute_ssh.py:236} INFO - Opening remote connection to host: username=sa_115585236623848451866, hostname=10.128.0.29 [2023-01-31, 03:30:41 UTC] {transport.py:1874} ERROR - Exception (client): Error reading SSH protocol banner [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - Traceback (most recent call last): [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/transport.py", line 2271, in _check_banner [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - buf = self.packetizer.readline(timeout) [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/packet.py", line 380, in readline [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - buf += self._read_timeout(timeout) [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/packet.py", line 609, in _read_timeout [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - raise EOFError() [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - EOFError [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - During handling of the above exception, another exception occurred: [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - Traceback (most recent call last): [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/transport.py", line 2094, in run [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - self._check_banner() [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/transport.py", line 2275, in _check_banner [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - raise SSHException( [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - paramiko.ssh_exception.SSHException: Error reading SSH protocol banner [2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - [2023-01-31, 03:30:41 UTC] {compute_ssh.py:258} INFO - Failed to connect. 
Waiting 0s to retry [2023-01-31, 03:30:43 UTC] {transport.py:1874} INFO - Connected (version 2.0, client OpenSSH_8.9p1) [2023-01-31, 03:30:43 UTC] {transport.py:1874} INFO - Authentication (publickey) failed. [2023-01-31, 03:30:43 UTC] {compute_ssh.py:258} INFO - Failed to connect. Waiting 1s to retry [2023-01-31, 03:30:47 UTC] {transport.py:1874} INFO - Connected (version 2.0, client OpenSSH_8.9p1) [2023-01-31, 03:30:50 UTC] {transport.py:1874} INFO - Authentication (publickey) failed. [2023-01-31, 03:30:50 UTC] {compute_ssh.py:258} INFO - Failed to connect. Waiting 6s to retry [2023-01-31, 03:30:58 UTC] {transport.py:1874} INFO - Connected (version 2.0, client OpenSSH_8.9p1) [2023-01-31, 03:30:58 UTC] {transport.py:1874} INFO - Authentication (publickey) failed. [2023-01-31, 03:30:58 UTC] {taskinstance.py:1904} ERROR - Task failed with exception Traceback (most recent call last): File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/ssh/operators/ssh.py", line 157, in execute with self.get_ssh_client() as ssh_client: File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/ssh/operators/ssh.py", line 124, in get_ssh_client return self.get_hook().get_conn() File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/compute_ssh.py", line 232, in get_conn sshclient = self._connect_to_instance(user, hostname, privkey, proxy_command) File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/compute_ssh.py", line 245, in _connect_to_instance client.connect( File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/compute_ssh.py", line 50, in connect return super().connect(*args, **kwargs) File "/opt/python3.8/lib/python3.8/site-packages/paramiko/client.py", line 450, in connect self._auth( File "/opt/python3.8/lib/python3.8/site-packages/paramiko/client.py", line 781, in _auth raise saved_exception File "/opt/python3.8/lib/python3.8/site-packages/paramiko/client.py", line 681, in _auth self._transport.auth_publickey(username, pkey) File "/opt/python3.8/lib/python3.8/site-packages/paramiko/transport.py", line 1635, in auth_publickey return self.auth_handler.wait_for_response(my_event) File "/opt/python3.8/lib/python3.8/site-packages/paramiko/auth_handler.py", line 259, in wait_for_response raise e paramiko.ssh_exception.AuthenticationException: Authentication failed. [2023-01-31, 03:30:58 UTC] {taskinstance.py:1408} INFO - Marking task as UP_FOR_RETRY. dag_id=run_data_transfer_configs_dag, task_id=create_transfer_run_directory, execution_date=20230131T033002, start_date=20230131T033035, end_date=20230131T033058 [2023-01-31, 03:30:58 UTC] {standard_task_runner.py:92} ERROR - Failed to execute job 1418 for task create_transfer_run_directory (Authentication failed.; 21885)** ### What you think should happen instead The SSH Hook operator should be able to seamlessly SSH into the GCE instance without any intermittent authentication issues ### How to reproduce _No response_ ### Operating System Composer Kubernetes Cluster ### Versions of Apache Airflow Providers Composer Version - 2.1.3 Airflow version - 2.3.4 ### Deployment Composer ### Deployment details Kubernetes Cluster GCE Compute Engine VM (Ubuntu) ### Anything else Very random and intermittent ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29258
https://github.com/apache/airflow/pull/32365
df74553ec484ad729fcd75ccbc1f5f18e7f34dc8
0c894dbb24ad9ad90dcb10c81269ccc056789dc3
"2023-01-31T03:43:49Z"
python
"2023-08-02T09:16:03Z"
closed
apache/airflow
https://github.com/apache/airflow
29,250
["airflow/providers/databricks/hooks/databricks.py", "tests/providers/databricks/hooks/test_databricks.py"]
Repair functionality in DatabricksRunNowOperator
### Description The Databricks jobs 2.1 API has the ability to repair failed or skipped tasks in a Databricks workflow without having to rerun successful tasks for a given workflow run. It would be nice to be able to leverage this functionality via Airflow operators. ### Use case/motivation The primary motivation is the ability to be more efficient and only have to rerun failed or skipped tasks rather than the entire workflow if only 1 out of 10 tasks fail. **Repair run API:** https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsRepairfail @alexott for visibility ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
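Until a hook/operator method exists, the endpoint can be called directly from a task; a sketch with placeholder host, token, run id, and task keys (`rerun_tasks` takes the keys of the tasks to repair):

```python
import requests

response = requests.post(
    "https://<databricks-host>/api/2.1/jobs/runs/repair",          # placeholder host
    headers={"Authorization": "Bearer <personal-access-token>"},   # placeholder token
    json={"run_id": 123456, "rerun_tasks": ["failed_task_key"]},   # placeholder values
    timeout=60,
)
response.raise_for_status()
repair_id = response.json().get("repair_id")  # identifies this repair attempt
```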
https://github.com/apache/airflow/issues/29250
https://github.com/apache/airflow/pull/30786
424fc17d49afd4175826a62aa4fe7aa7c5772143
9bebf85e24e352f9194da2f98e2bc66a5e6b972e
"2023-01-30T21:24:49Z"
python
"2023-04-22T21:21:14Z"
closed
apache/airflow
https://github.com/apache/airflow
29,227
["airflow/www/views.py", "tests/www/views/test_views_tasks.py"]
Calendar page doesn't load when using a timedelta DAG schedule
### Apache Airflow version 2.5.1 ### What happened The /calendar page gives an error; here is the capture: ![屏幕截图 2023-01-30 093116](https://user-images.githubusercontent.com/19165258/215369479-9fc7de5c-f190-460c-9cf7-9ab27d8ac355.png) ### What you think should happen instead _No response_ ### How to reproduce _No response_ ### Operating System Ubuntu 22.04.1 LTS ### Versions of Apache Airflow Providers Distributor ID: Ubuntu Description: Ubuntu 22.04.1 LTS Release: 22.04 Codename: jammy ### Deployment Other ### Deployment details Distributor ID: Ubuntu Description: Ubuntu 22.04.1 LTS Release: 22.04 Codename: jammy ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
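The report itself gives no reproduction, so here is a hypothetical minimal DAG matching the title: schedule it with a plain `timedelta`, let it register, then open the DAG's /calendar view:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.empty import EmptyOperator

# Hypothetical reproduction: a timedelta schedule rather than a cron expression.
with DAG(
    dag_id="calendar_timedelta_repro",
    start_date=datetime(2023, 1, 1),
    schedule=timedelta(hours=1),
    catchup=False,
) as dag:
    EmptyOperator(task_id="noop")
```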
https://github.com/apache/airflow/issues/29227
https://github.com/apache/airflow/pull/29454
28126c12fbdd2cac84e0fbcf2212154085aa5ed9
f837c0105c85d777ea18c88a9578eeeeac5f57db
"2023-01-30T01:32:44Z"
python
"2023-02-14T17:06:09Z"
closed
apache/airflow
https://github.com/apache/airflow
29,209
["airflow/providers/google/cloud/operators/bigquery_dts.py", "tests/providers/google/cloud/operators/test_bigquery_dts.py"]
BigQueryCreateDataTransferOperator will log AWS credentials when transferring from S3
### Apache Airflow Provider(s) google ### Versions of Apache Airflow Providers [apache-airflow-providers-google 8.6.0](https://airflow.apache.org/docs/apache-airflow-providers-google/8.6.0/) ### Apache Airflow version 2.5.0 ### Operating System Debian GNU/Linux 11 (bullseye) ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened When creating a transfer config that will move data from AWS S3, an access_key_id and secret_access_key are provided (see: https://cloud.google.com/bigquery/docs/s3-transfer). These parameters are logged and exposed as XCom return_value. ### What you think should happen instead At least the secret_access_key should be hidden or removed from the XCom return value ### How to reproduce ``` PROJECT_ID=123 TRANSFER_CONFIG={ "destination_dataset_id": destination_dataset, "display_name": display_name, "data_source_id": "amazon_s3", "schedule_options": {"disable_auto_scheduling": True}, "params": { "destination_table_name_template": destination_table, "file_format": "PARQUET", "data_path": data_path, "access_key_id": access_key_id, "secret_access_key": secret_access_key } }, gcp_bigquery_create_transfer = BigQueryCreateDataTransferOperator( transfer_config=TRANSFER_CONFIG, project_id=PROJECT_ID, task_id="gcp_bigquery_create_transfer", ) ``` ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
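One partial mitigation sketch for the logging side (it does not remove the value from the XCom return_value): register the secret with Airflow's masker before building the transfer config, so log lines containing it are redacted. The key below is a placeholder:

```python
from airflow.utils.log.secrets_masker import mask_secret

secret_access_key = "wJalrXUtnFEMI/EXAMPLEKEY"  # placeholder, e.g. fetched from a secrets backend
mask_secret(secret_access_key)                  # subsequent log output shows '***' instead
```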
https://github.com/apache/airflow/issues/29209
https://github.com/apache/airflow/pull/29348
3dbcf99d20d47cde0debdd5faf9bd9b2ebde1718
f51742d20b2e53bcd90a19db21e4e12d2a287677
"2023-01-28T19:58:00Z"
python
"2023-02-20T23:06:50Z"
closed
apache/airflow
https://github.com/apache/airflow
29,199
["airflow/models/xcom_arg.py", "tests/decorators/test_python.py"]
TaskFlow AirflowSkipException causes downstream step to fail when multiple_outputs is true
### Apache Airflow version 2.5.1 ### What happened Most of our code is based on TaskFlow API and we have many tasks that raise AirflowSkipException (or BranchPythonOperator) on purpose to skip the next downstream task (with trigger_rule = none_failed_min_one_success). And these tasks are expecting a multiple output XCom result (local_file_path, file sizes, records count) from previous tasks and it's causing this error: `airflow.exceptions.XComNotFound: XComArg result from copy_from_data_lake_to_local_file at outbound_dag_AIR2070 with key="local_file_path" is not found!` ### What you think should happen instead Considering trigger rule "none_failed_min_one_success", we expect that upstream task should be allowed to skip and downstream tasks will still run without raising any errors caused by not found XCom results. ### How to reproduce This is an aproximate example dag based on an existing one. ```python from os import path import pendulum from airflow import DAG from airflow.decorators import task from airflow.operators.python import BranchPythonOperator DAG_ID = "testing_dag_AIR" # PGP_OPERATION = None PGP_OPERATION = "decrypt" LOCAL_FILE_PATH = "/temp/example/example.csv" with DAG( dag_id=DAG_ID, schedule='0 7-18 * * *', start_date=pendulum.datetime(2022, 12, 15, 7, 0, 0), ) as dag: @task(multiple_outputs=True, trigger_rule='none_failed_min_one_success') def copy_from_local_file_to_data_lake(local_file_path: str, dest_dir_path: str): destination_file_path = path.join(dest_dir_path, path.basename(local_file_path)) return { "destination_file_path": destination_file_path, "file_size": 100 } @task(multiple_outputs=True, trigger_rule='none_failed_min_one_success') def copy_from_data_lake_to_local_file(data_lake_file_path, local_dir_path): local_file_path = path.join(local_dir_path, path.basename(data_lake_file_path)) return { "local_file_path": local_file_path, "file_size": 100 } @task(multiple_outputs=True, task_id='get_pgp_file_info', trigger_rule='none_failed_min_one_success') def get_pgp_file_info(file_path, operation): import uuid import os src_file_name = os.path.basename(file_path) src_file_dir = os.path.dirname(file_path) run_id = str(uuid.uuid4()) if operation == "decrypt": wait_pattern = f'*{src_file_name}' else: wait_pattern = f'*{src_file_name}.pgp' target_path = 'datalake/target' return { 'src_file_path': file_path, 'src_file_dir': src_file_dir, 'target_path': target_path, 'pattern': wait_pattern, 'guid': run_id } @task(multiple_outputs=True, task_id='return_src_path', trigger_rule='none_failed_min_one_success') def return_src_path(src_file_path): return { 'file_path': src_file_path, 'file_size': 100 } @task(multiple_outputs=True, task_id='choose_result', trigger_rule='none_failed_min_one_success') def choose_result(src_file_path, src_file_size, decrypt_file_path, decrypt_file_size): import os file_path = decrypt_file_path or src_file_path file_size = decrypt_file_size or src_file_size local_dir = os.path.dirname(file_path) return { 'local_dir': local_dir, 'file_path': file_path, 'file_size': file_size, 'file_name': os.path.basename(file_path) } def switch_branch_func(pgp_operation): if pgp_operation in ["decrypt", "encrypt"]: return 'get_pgp_file_info' else: return 'return_src_path' operation = PGP_OPERATION local_file_path = LOCAL_FILE_PATH check_need_to_decrypt = BranchPythonOperator( task_id='branch_task', python_callable=switch_branch_func, op_args=(operation,)) pgp_file_info = get_pgp_file_info(local_file_path, operation) data_lake_file = 
copy_from_local_file_to_data_lake(pgp_file_info['src_file_path'], pgp_file_info['target_path']) decrypt_local_file = copy_from_data_lake_to_local_file( data_lake_file['destination_file_path'], pgp_file_info['src_file_dir']) src_result = return_src_path(local_file_path) result = choose_result(src_result['file_path'], src_result['file_size'], decrypt_local_file['local_file_path'], decrypt_local_file['file_size']) check_need_to_decrypt >> [pgp_file_info, src_result] pgp_file_info >> decrypt_local_file [decrypt_local_file, src_result] >> result ``` ### Operating System Windows 10 ### Versions of Apache Airflow Providers _No response_ ### Deployment Docker-Compose ### Deployment details docker-compose version: 3.7 Note: This also happens when it's deployed to one of our testing environments using official Airflow Helm Chart. ### Anything else This issue is similar to [#24338](https://github.com/apache/airflow/issues/24338), it was solved by [#25661](https://github.com/apache/airflow/pull/25661) but this case is related to multiple_outputs being set to True. ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29199
https://github.com/apache/airflow/pull/32027
14eb1d3116ecef15be7be9a8f9d08757e74f981c
79eac7687cf7c6bcaa4df2b8735efaad79a7fee2
"2023-01-27T18:27:43Z"
python
"2023-06-21T09:55:57Z"
closed
apache/airflow
https://github.com/apache/airflow
29,198
["airflow/providers/snowflake/operators/snowflake.py"]
SnowflakeCheckOperator - The conn_id `None` isn't defined
### Apache Airflow Provider(s) snowflake ### Versions of Apache Airflow Providers `apache-airflow-providers-snowflake==4.0.2` ### Apache Airflow version 2.5.1 ### Operating System Debian GNU/Linux 11 (bullseye) ### Deployment Official Apache Airflow Helm Chart ### Deployment details _No response_ ### What happened After upgrading the _apache-airflow-providers-snowflake_ from version **3.3.0** to **4.0.2**, the SnowflakeCheckOperator tasks starts to throw the following error: ``` File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 179, in get_db_hook return self._hook File "/usr/local/lib/python3.9/functools.py", line 993, in __get__ val = self.func(instance) File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 141, in _hook conn = BaseHook.get_connection(self.conn_id) File "/home/airflow/.local/lib/python3.9/site-packages/airflow/hooks/base.py", line 72, in get_connection conn = Connection.get_connection_from_secrets(conn_id) File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/connection.py", line 435, in get_connection_from_secrets raise AirflowNotFoundException(f"The conn_id `{conn_id}` isn't defined") airflow.exceptions.AirflowNotFoundException: The conn_id `None` isn't defined ``` ### What you think should happen instead _No response_ ### How to reproduce - Define a _Snowflake_ Connection with the name **snowflake_default** - Create a Task similar to this: ``` my_task = SnowflakeCheckOperator( task_id='my_task', warehouse='warehouse', database='database', schema='schema', role='role', sql='select 1 from my_table' ) ``` - Run and check the error. ### Anything else We can workaround this by adding the conn_id='snowflake_default' to the SnowflakeCheckOperator. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29198
https://github.com/apache/airflow/pull/29211
a72e28d6e1bc6ae3185b8b3971ac9de5724006e6
9b073119d401594b3575c6f7dc4a14520d8ed1d3
"2023-01-27T18:24:51Z"
python
"2023-01-29T08:54:39Z"
closed
apache/airflow
https://github.com/apache/airflow
29,197
["airflow/www/templates/airflow/dag.html"]
Trigger DAG w/config raising error from task detail views
### Apache Airflow version Other Airflow 2 version (please specify below) ### What happened Version: 2.4.3 (migrated from 2.2.4) Manual UI option "Trigger DAG w/config" raises an error _400 Bad Request - Invalid datetime: None_ from views "Task Instance Details", "Rendered Template", "Log" and "XCom". Note that the DAG is actually triggered, but the 400 error response is still raised. ### What you think should happen instead No 400 error ### How to reproduce 1. Go to any DAG graph view 2. Select a Task > go to "Instance Details" 3. Select "Trigger DAG w/config" 4. Select Trigger 5. See error ### Operating System PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" ### Versions of Apache Airflow Providers _No response_ ### Deployment Other 3rd-party Helm chart ### Deployment details _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29197
https://github.com/apache/airflow/pull/29212
9b073119d401594b3575c6f7dc4a14520d8ed1d3
7315d6f38caa58e6b19054f3e8a20ed02df16a29
"2023-01-27T18:07:01Z"
python
"2023-01-29T08:56:35Z"
closed
apache/airflow
https://github.com/apache/airflow
29,178
["airflow/api/client/local_client.py", "airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "tests/api/client/test_local_client.py", "tests/cli/commands/test_dag_command.py"]
Add `output` format to missing cli commands
### Description I have noticed that for some commands, there is an option to get the output in json or yaml (as described in this PR from 2020 https://github.com/apache/airflow/issues/12699). However, there are still some commands that do not support the `--output` argument, most notable one is the `dags trigger`. When triggering a dag, it is crucial to get the run_id that has been triggered, so the triggered dag run can be monitored by the calling party. However, the output from this command is hard to parse without resorting to (gasp!) regex: ``` [2023-01-26 11:03:41,038] {{__init__.py:42}} INFO - Loaded API auth backend: airflow.api.auth.backend.session Created <DagRun sample_dag @ 2023-01-26T11:03:41+00:00: manual__2023-01-26T11:03:41+00:00, state:queued, queued_at: 2023-01-26 11:03:41.412394+00:00. externally triggered: True> ``` As you can see, extracting the run_id `manual__2023-01-26T11:03:41+00:00` is not easy from the above output. For what I see [in the code](https://github.com/apache/airflow/blob/main/airflow/cli/cli_parser.py#L1156), the `ARG_OUTPUT` is not added to `dag_trigger` command. ### Use case/motivation At my company we want to be able to trigger dags from another airflow environment (mwaa) and be able to wait for its completion before proceeding with the calling DAG. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29178
https://github.com/apache/airflow/pull/29224
ffdc696942d96a14a5ee0279f950e3114817055c
60fc40791121b19fe379e4216529b2138162b443
"2023-01-26T11:05:20Z"
python
"2023-02-19T15:15:56Z"
closed
apache/airflow
https://github.com/apache/airflow
29,177
["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/http/hooks/http.py", "airflow/providers/http/operators/http.py", "tests/providers/http/hooks/test_http.py"]
SimpleHttpOperator not working with loginless auth_type
### Apache Airflow Provider(s)

http

### Versions of Apache Airflow Providers

apache-airflow-providers-http==4.1.1

### Apache Airflow version

2.5.0

### Operating System

Ubuntu 20.04.5 LTS (Focal Fossa)

### Deployment

Virtualenv installation

### Deployment details

Reproduced on a local deployment inside WSL on virtualenv - not related to a specific deployment.

### What happened

SimpleHttpOperator supports passing in auth_type. However, [this auth_type is only initialized if a login is provided](https://github.com/astronomer/airflow-provider-sample/blob/main/sample_provider/hooks/sample_hook.py#L64-L65).

In our setup we are using Kerberos authentication. This authentication relies on a Kerberos sidecar with a keytab, not on a user/password pair in the connection string. However, this would also be an issue for any other implementation that does not rely on a username passed in the connection string.

We tried other auth providers (`HTTPSPNEGOAuth` from [requests_gssapi](https://pypi.org/project/requests-gssapi/) and `HTTPKerberosAuth` from [requests_kerberos](https://pypi.org/project/requests-kerberos/)). We noticed that requests_kerberos is already used in Airflow in some other places for Kerberos support, hence we settled on the latter.

### What you think should happen instead

A suggestion is to initialize the passed `auth_type` also when no login is present.

### How to reproduce

A branch demonstrating a possible fix: https://github.com/apache/airflow/commit/7d341f081f0160ed102c06b9719582cb463b538c

### Anything else

The linked branch is a quick-and-dirty solution, but maybe the code could be refactored in another way? Support for `**kwargs` could also be useful, but I wanted to keep the changes as minimal as possible.

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
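To make the suggestion concrete, here is a rough sketch of the kind of change being asked for in the hook's session construction. It is illustrative only, based on the behaviour described above — it is not the provider's actual code nor necessarily the eventual fix, and `build_session` is a made-up helper name.

```python
import requests
from requests.auth import HTTPBasicAuth


def build_session(conn, auth_type=HTTPBasicAuth):
    """Sketch of HttpHook-style auth handling (hypothetical helper, not provider code)."""
    session = requests.Session()
    if conn.login:
        # Current behaviour: the auth class is only instantiated when a login is set.
        session.auth = auth_type(conn.login, conn.password)
    elif auth_type is not HTTPBasicAuth:
        # Suggested behaviour: also instantiate a non-default auth class when no login
        # is set, e.g. requests_kerberos.HTTPKerberosAuth backed by a keytab sidecar.
        session.auth = auth_type()
    return session
```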
https://github.com/apache/airflow/issues/29177
https://github.com/apache/airflow/pull/29206
013490edc1046808c651c600db8f0436b40f7423
c44c7e1b481b7c1a0d475265835a23b0f507506c
"2023-01-26T08:28:39Z"
python
"2023-03-20T13:52:02Z"
closed
apache/airflow
https://github.com/apache/airflow
29,175
["airflow/providers/redis/provider.yaml", "docs/apache-airflow-providers-redis/index.rst", "generated/provider_dependencies.json", "tests/system/providers/redis/__init__.py", "tests/system/providers/redis/example_redis_publish.py"]
Support for Redis Time series in Airflow common packages
### Description

The current Redis API version used by the provider is quite old. I need to implement a DAG for a time series data feature, so please upgrade to a version that supports this. By the way, I was able to manually update my Redis worker and it now works. Can this be added to the next release, please?

### Use case/motivation

Time series in Redis is a growing area that needs support in Airflow.

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
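For illustration, the kind of DAG code this upgrade would unblock looks roughly like the following. It is a sketch that assumes a redis-py release with RedisTimeSeries commands (redis>=4.x) and a Redis server with the RedisTimeSeries module loaded; the connection id and key name are made up.

```python
from airflow.providers.redis.hooks.redis import RedisHook


def push_sample():
    # Hypothetical task body: append one sample to a RedisTimeSeries key.
    client = RedisHook(redis_conn_id="redis_default").get_conn()
    ts = client.ts()  # only available with a RedisTimeSeries-aware redis-py
    ts.add("sensor:temperature", "*", 21.5)  # "*" lets the server assign the timestamp
```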
https://github.com/apache/airflow/issues/29175
https://github.com/apache/airflow/pull/31279
df3569cf489ce8ef26f5b4d9d9c3826d3daad5f2
94cad11b439e0ab102268e9e7221b0ab9d98e0df
"2023-01-26T03:42:51Z"
python
"2023-05-16T13:11:18Z"
closed
apache/airflow
https://github.com/apache/airflow
29,150
["docs/apache-airflow/howto/docker-compose/index.rst"]
trigger process missing from Airflow docker docs
### What do you see as an issue?

The section [`Fetching docker-compose.yaml`](https://github.com/apache/airflow/blob/main/docs/apache-airflow/howto/docker-compose/index.rst#fetching-docker-composeyaml) claims to cover all the process definitions that the docker-compose file contains, but it misses the `airflow-triggerer` process.

### Solving the problem

We need to include `airflow-triggerer` in the list of processes that the docker-compose file contains.

### Anything else

None

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29150
https://github.com/apache/airflow/pull/29203
f8c1410a0b0e62a1c4b67389d9cfb80cc024058d
272f358fd6327468fcb04049ef675a5cf939b93e
"2023-01-25T08:50:27Z"
python
"2023-01-30T09:52:43Z"
closed
apache/airflow
https://github.com/apache/airflow
29,137
["airflow/decorators/sensor.py"]
Fix access to context in functions decorated by task.sensor
### Description

Hello, I am a new Airflow user. I am requesting a feature in which the Airflow context (containing the task instance, etc.) is made available inside functions decorated by `airflow.decorators.task.sensor`.

### Use case/motivation

I have noticed that when using the `airflow.decorators.task` decorator, one can access items from the context (such as the task instance) by using `**kwargs` or keyword arguments in the decorated function. But I have discovered that the same is not true for the `airflow.decorators.task.sensor` decorator. I'm not sure if this is a bug or intentional, but it would be very useful to be able to access the context normally from functions decorated by `task.sensor`. I believe this may have been an oversight.

The `DecoratedSensorOperator` class is a child class of `PythonSensor`:
https://github.com/apache/airflow/blob/1fbfd312d9d7e28e66f6ba5274421a96560fb7ba/airflow/decorators/sensor.py#L28

This `DecoratedSensorOperator` class overrides `poke`, but does not incorporate the passed-in `Context` object before calling the decorated function:
https://github.com/apache/airflow/blob/1fbfd312d9d7e28e66f6ba5274421a96560fb7ba/airflow/decorators/sensor.py#L60-L61

This is in contrast to `PythonSensor`, whose `poke` method merges the context with the existing `op_kwargs`:
https://github.com/apache/airflow/blob/1fbfd312d9d7e28e66f6ba5274421a96560fb7ba/airflow/sensors/python.py#L68-L77

This seems like an easy fix, and I'd be happy to submit a pull request. But I figured I'd start with a feature request since I'm new to the open source community.

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
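For reference, one possible shape of the fix — mirroring what `PythonSensor.poke` does in the lines linked above — is sketched below. It reuses the helpers that PythonSensor relies on and is not necessarily the change that was eventually merged.

```python
from airflow.utils.context import Context, context_merge
from airflow.utils.operator_helpers import determine_kwargs


# Sketch of a DecoratedSensorOperator.poke that folds the runtime context
# into the kwargs passed to the decorated callable, like PythonSensor.poke does.
def poke(self, context: Context):
    context_merge(context, self.op_kwargs)
    kwargs = determine_kwargs(self.python_callable, self.op_args, context)
    return self.python_callable(*self.op_args, **kwargs)
```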
https://github.com/apache/airflow/issues/29137
https://github.com/apache/airflow/pull/29146
0a4184e34c1d83ad25c61adc23b838e994fc43f1
2d3cc504db8cde6188c1503675a698c74404cf58
"2023-01-24T20:19:59Z"
python
"2023-02-20T00:20:08Z"
closed
apache/airflow
https://github.com/apache/airflow
29,128
["docs/apache-airflow-providers-ftp/index.rst"]
[Doc] Link to examples how to use FTP provider is incorrect
### What do you see as an issue?

Hi. I tried to use the FTP provider (https://airflow.apache.org/docs/apache-airflow-providers-ftp/stable/connections/ftp.html#howto-connection-ftp), but the link to the "Example DAGs" is incorrect and GitHub responds with a 404.

### Solving the problem

Please update the links to the Example DAGs here, and check them in the other providers as well.

### Anything else

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
https://github.com/apache/airflow/issues/29128
https://github.com/apache/airflow/pull/29134
33ba242d7eb8661bf936a9b99a8cad4a74b29827
1fbfd312d9d7e28e66f6ba5274421a96560fb7ba
"2023-01-24T12:07:45Z"
python
"2023-01-24T19:24:26Z"
closed
apache/airflow
https://github.com/apache/airflow
29,125
["airflow/models/dag.py", "airflow/models/dagrun.py", "tests/models/test_dag.py", "tests/models/test_dagrun.py"]
Ensure teardown failure with on_failure_fail_dagrun=True fails the DagRun, and not otherwise
null
https://github.com/apache/airflow/issues/29125
https://github.com/apache/airflow/pull/30398
fc4166127a1d2099d358fee1ea10662838cf9cf3
db359ee2375dd7208583aee09b9eae00f1eed1f1
"2023-01-24T11:08:45Z"
python
"2023-05-08T10:58:30Z"
closed
apache/airflow
https://github.com/apache/airflow
29,113
["docs/apache-airflow-providers-sqlite/operators.rst"]
sqlite conn id unclear
### What do you see as an issue?

The SQLite connection doc here https://airflow.apache.org/docs/apache-airflow-providers-sqlite/stable/operators.html is unclear. SQLite does not use username, password, port or schema, so these need to be removed from the docs. Furthermore, it is unclear how to construct a connection string for SQLite, since the docs for constructing a connection string here https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html assume that all these fields are given.

### Solving the problem

Remove the unused arguments for the SQLite connection, and make it clearer how to construct a connection to SQLite.

### Anything else

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
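As a possible starting point for clearer docs, a minimal example is sketched below. It assumes the SqliteHook convention of putting the database file path in the connection's `host` field; the connection id and path are made up, and the exact URI form should be verified against the provider before documenting it.

```python
from airflow.models.connection import Connection

# SQLite needs only the path to the database file; login, password, port and
# schema have no meaning for this connection type.
conn = Connection(
    conn_id="sqlite_example",   # made-up id for illustration
    conn_type="sqlite",
    host="/tmp/example.db",     # path to the SQLite database file
)
print(conn.get_uri())
```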
https://github.com/apache/airflow/issues/29113
https://github.com/apache/airflow/pull/29139
d23033cff8a25e5f71d01cb513c8ec1d21bbf491
ec7674f111177c41c02e5269ad336253ed9c28b4
"2023-01-23T17:44:59Z"
python
"2023-05-01T20:34:12Z"