status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 30,335 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "tests/executors/test_celery_executor.py"] | Recommend (or set as default) enabling pool_recycle for celery workers (especially if using MySQL) | ### What do you see as an issue?
Similar to how `sql_alchemy_pool_recycle` defaults to 1800 seconds for the Airflow metastore: https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#config-database-sql-alchemy-pool-recycle
If users are using celery as their backend, setting `pool_recycle` provides extra stability. This problem is particularly acute for users who use MySQL as the backend for tasks, because MySQL disconnects connections after 8 hours of being idle. While Airflow can usually force celery to retry connecting, it does not always work and tasks can fail.
This is specifically recommended by the SQLAlchemy docs:
* https://docs.sqlalchemy.org/en/14/core/pooling.html#setting-pool-recycle
* https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine.params.pool_recycle
* https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_wait_timeout
### Solving the problem
We currently have a file which looks like this:
```python
from airflow.config_templates.default_celery import DEFAULT_CELERY_CONFIG
database_engine_options = DEFAULT_CELERY_CONFIG.get(
"database_engine_options", {}
)
# Use pool_pre_ping to detect stale db connections
# https://github.com/apache/airflow/discussions/22113
database_engine_options["pool_pre_ping"] = True
# Use pool_recycle because MySQL disconnects sessions after 8 hours
# https://docs.sqlalchemy.org/en/14/core/pooling.html#setting-pool-recycle
# https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine.params.pool_recycle
# https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout
database_engine_options["pool_recycle"] = 1800
DEFAULT_CELERY_CONFIG["database_engine_options"] = database_engine_options
```
We then point the env var `AIRFLOW__CELERY__CELERY_CONFIG_OPTIONS` at this object, but I am not sure if this is best practice.
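For reference, here is a minimal sketch of how that wiring might look end to end; the module name `custom_celery_config` is made up, and the env var is just one way to point Airflow's `celery_config_options` at the object:
```python
# custom_celery_config.py -- hypothetical module importable by scheduler and workers
from airflow.config_templates.default_celery import DEFAULT_CELERY_CONFIG

engine_options = dict(DEFAULT_CELERY_CONFIG.get("database_engine_options", {}))
engine_options["pool_pre_ping"] = True   # detect stale DB connections before using them
engine_options["pool_recycle"] = 1800    # recycle connections before MySQL's wait_timeout

CELERY_CONFIG = {**DEFAULT_CELERY_CONFIG, "database_engine_options": engine_options}

# Then, for example:
#   AIRFLOW__CELERY__CELERY_CONFIG_OPTIONS=custom_celery_config.CELERY_CONFIG
```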
### Anything else
Maybe just change the default options to include this?
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30335 | https://github.com/apache/airflow/pull/30426 | cb18d923f8253ac257c1b47e9276c39bae967666 | bc1d68a6eb01919415c399d678f491e013eb9238 | "2023-03-27T16:31:21Z" | python | "2023-06-02T14:16:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,324 | ["airflow/providers/cncf/kubernetes/CHANGELOG.rst", "airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/provider.yaml", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_pod.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | KPO deferrable needs kubernetes_conn_id while non deferrable does not | ### Apache Airflow version
2.5.2
### What happened
Not sure if this is a feature rather than a bug, but I can use KubernetesPodOperator fine without setting a kubernetes_conn_id.
For example:
```
start = KubernetesPodOperator(
namespace="mynamespace",
cluster_context="mycontext",
security_context={ 'runAsUser': 1000 },
name="hello",
image="busybox",
image_pull_secrets=[k8s.V1LocalObjectReference('prodregistry')],
cmds=["sh", "-cx"],
arguments=["echo Start"],
task_id="Start",
in_cluster=False,
is_delete_operator_pod=True,
config_file="/home/airflow/.kube/config",
)
```
But if I add deferrable=True to this, it won't work. It seems to require an explicit kubernetes_conn_id (which we don't configure).
Is it not possible for the deferrable version to work like the non-deferrable one?
### What you think should happen instead
I hoped that the deferrable KPO would work the same as the non-deferrable one.
### How to reproduce
Use KPO with deferrable=True but no kubernetes_conn_id setting
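For illustration, a hedged variant of the snippet above with only the deferrable flag added (same placeholder values; the import path may differ between provider versions):
```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

start = KubernetesPodOperator(
    namespace="mynamespace",
    cluster_context="mycontext",
    name="hello",
    image="busybox",
    cmds=["sh", "-cx"],
    arguments=["echo Start"],
    task_id="Start",
    in_cluster=False,
    is_delete_operator_pod=True,
    config_file="/home/airflow/.kube/config",
    deferrable=True,  # the only change -- this is what triggers the kubernetes_conn_id requirement
)
```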
### Operating System
Debian 11
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30324 | https://github.com/apache/airflow/pull/28848 | a09fd0d121476964f1c9d7f12960c24517500d2c | 85b9135722c330dfe1a15e50f5f77f3d58109a52 | "2023-03-27T09:59:56Z" | python | "2023-04-08T16:26:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,309 | ["airflow/providers/docker/hooks/docker.py", "airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | in DockerOperator, adding an attribute `tls_verify` to choose whether to validate the provided certificate. | ### Description
The current version of docker operator always performs TLS certificate validation. I think it would be nice to add an option to choose whether or not to validate the provided certificate.
### Use case/motivation
My work environment has several docker hosts with expired self-signed certificates. Since it is difficult to renew all certificates immediately, we are using a custom docker operator to disable certificate validation.
It would be nice if it was provided as an official feature, so I registered an issue.
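As a rough sketch, usage of the proposed attribute might look like the following; `tls_verify=False` is the option suggested by this issue (not an existing parameter), and the host/image values are placeholders:
```python
from airflow.providers.docker.operators.docker import DockerOperator

run_job = DockerOperator(
    task_id="run_job",
    image="my-image:latest",                 # placeholder image
    docker_url="tcp://my-docker-host:2376",  # placeholder TLS-enabled daemon
    docker_conn_id="docker_default",
    tls_verify=False,  # proposed: skip validation of the expired self-signed certificate
)
```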
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30309 | https://github.com/apache/airflow/pull/30310 | 51f9910ecbf1186aff164e09d118bdf04d21dfcb | c1a685f752703eeb01f9369612af8c88c24cca09 | "2023-03-26T15:14:46Z" | python | "2023-04-14T10:17:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,287 | ["airflow/providers/amazon/aws/transfers/redshift_to_s3.py", "tests/providers/amazon/aws/transfers/test_redshift_to_s3.py"] | RedshiftToS3 Operator Wrapping Query in Quotes Instead of $$ | ### Apache Airflow version
2.5.2
### What happened
When passing a select_query into the RedshiftToS3 Operator, the query will error out if it contains any single quotes because the body of the UNLOAD statement is being wrapped in single quotes.
### What you think should happen instead
Instead, it's better practice to use the double dollar sign or dollar quoting to signify the start and end of the statement to run. This removes the need to escape any special characters and avoids the statement throwing an error in the common case of using single quotes to wrap string literals.
### How to reproduce
Running the RedshiftToS3 Operator with the sql_query: `SELECT 'Single Quotes Break this Operator'` will throw the error
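A hedged sketch of such a task (bucket, key and connection ids are placeholders):
```python
from airflow.providers.amazon.aws.transfers.redshift_to_s3 import RedshiftToS3Operator

unload_with_quotes = RedshiftToS3Operator(
    task_id="unload_with_quotes",
    s3_bucket="my-bucket",          # placeholder
    s3_key="exports/quotes_test",   # placeholder
    select_query="SELECT 'Single Quotes Break this Operator'",
    redshift_conn_id="redshift_default",
    aws_conn_id="aws_default",
)
```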
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com//"
### Versions of Apache Airflow Providers
apache-airflow[package-extra]==2.4.3
apache-airflow-providers-amazon
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30287 | https://github.com/apache/airflow/pull/35986 | e0df7441fa607645d0a379c2066ca4ab16f5cb95 | 04a781666be2955ed518780ea03bc13a1e3bd473 | "2023-03-24T18:31:54Z" | python | "2023-12-04T19:19:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,280 | ["airflow/www/static/css/dags.css", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "docs/apache-airflow/core-concepts/dag-run.rst", "tests/www/views/test_views_home.py"] | Feature request - filter for dags with running status in the main page | ### Description
Feature request to filter by running DAGs (or by other statuses too). We have over 100 DAGs and we were having some performance problems. We wanted to see all the running DAGs from the main page and found that we couldn't. We can see the light green circle in the runs (and that involves a lot of scrolling), but there is no way to filter for it.
We use SQL Server, and its job scheduling tool (SQL Agent) has this feature. The implementation for Airflow shouldn't necessarily be like this; I am just presenting it as an example of a helpful feature implemented in other tools.
<img width="231" alt="image" src="https://user-images.githubusercontent.com/286903/227529646-97ac2e8e-52de-421a-8328-072f35ccdff2.png">
I'll leave implementation details for someone else.
on v2.2.5
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30280 | https://github.com/apache/airflow/pull/30429 | c25251cde620481592392e5f82f9aa8a259a2f06 | dbe14c31d52a345aa82e050cc0a91ee60d9ee567 | "2023-03-24T13:11:24Z" | python | "2023-05-22T16:05:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,240 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/api_internal/internal_api_call.py", "airflow/serialization/enums.py", "airflow/serialization/serialized_objects.py", "tests/api_internal/endpoints/test_rpc_api_endpoint.py", "tests/api_internal/test_internal_api_call.py", "tests/serialization/test_serialized_objects.py"] | AIP-44 Implement conversion to Pydantic-ORM objects in Internal API | null | https://github.com/apache/airflow/issues/30240 | https://github.com/apache/airflow/pull/30282 | 7aca81ceaa6cb640dff9c5d7212adc4aeb078a2f | 41c8e58deec2895b0a04879fcde5444b170e679e | "2023-03-22T15:26:50Z" | python | "2023-04-05T08:54:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,229 | ["docs/apache-airflow/howto/operator/python.rst"] | Update Python operator how-to with @task.sensor example | ### Body
The current [how-to documentation for the `PythonSensor`](https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html#pythonsensor) does not include any references to the existing `@task.sensor` TaskFlow decorator. It would be nice to see how these are used together in this doc.
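For example, a minimal `@task.sensor` snippet along these lines could sit next to the existing `PythonSensor` example (the URL below is a made-up placeholder):
```python
from airflow.decorators import task
from airflow.sensors.base import PokeReturnValue


@task.sensor(poke_interval=60, timeout=3600, mode="reschedule")
def wait_for_upstream() -> PokeReturnValue:
    import requests

    response = requests.get("https://example.com/api/status")  # placeholder endpoint
    return PokeReturnValue(is_done=response.status_code == 200)
```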
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/30229 | https://github.com/apache/airflow/pull/30344 | 4e4e563d3fc68d1becdc1fc5ec1d1f41f6c24dd3 | 2a2ccfc27c3d40caa217ad8f6f0ba0d394ac2806 | "2023-03-22T01:19:01Z" | python | "2023-04-11T09:12:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,225 | ["airflow/decorators/base.py", "airflow/decorators/setup_teardown.py", "airflow/models/baseoperator.py", "airflow/utils/setup_teardown.py", "airflow/utils/task_group.py", "tests/decorators/test_setup_teardown.py", "tests/serialization/test_dag_serialization.py", "tests/utils/test_setup_teardown.py"] | Ensure setup/teardown tasks can be reused/works with task.override | Ensure that this works:
```python
@setup
def mytask():
print("I am a setup task")
with dag_maker() as dag:
mytask.override(task_id='newtask')
assert len(dag.task_group.children) == 1
setup_task = dag.task_group.children["newtask"]
assert setup_task._is_setup
```
and teardown also works | https://github.com/apache/airflow/issues/30225 | https://github.com/apache/airflow/pull/30342 | 28f73e42721bba5c5ad40bb547be9c057ca81030 | c76555930aee9692d2a839b9c7b9e2220717b8a0 | "2023-03-21T21:01:26Z" | python | "2023-03-28T18:15:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,220 | ["airflow/models/dag.py", "airflow/www/static/js/api/useMarkFailedTask.ts", "airflow/www/static/js/api/useMarkSuccessTask.ts", "airflow/www/static/js/api/useMarkTaskDryRun.ts", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/MarkInstanceAs.tsx", "airflow/www/views.py", "tests/models/test_dag.py", "tests/www/views/test_views.py"] | set tasks as successful/failed at their task-group level. | ### Description
Ability to clear or mark task groups as success/failure and have that propagate to the tasks within that task group. Sometimes there is a need to adjust the status of tasks within a task group, which can get unwieldy depending on the number of tasks in that task group. A great quality of life upgrade, and something that seems like an intuitive feature, would be the ability to clear or change the status of all tasks at their taskgroup level through the UI.
### Use case/motivation
In the event a large number of tasks, or a whole task group in this case, need to be cleared or their status set to success/failure this would be a great improvement. For example, a manual DAG run triggered through the UI or the API that has a number of task sensors or tasks that otherwise don't matter for that DAG run - instead of setting each one as success by hand, doing so for each task group would be great.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30220 | https://github.com/apache/airflow/pull/30478 | decaaa3df2b3ef0124366033346dc21d62cff057 | 1132da19e5a7d38bef98be0b1f6c61e2c0634bf9 | "2023-03-21T18:06:34Z" | python | "2023-04-27T16:10:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,196 | ["airflow/www/utils.py", "airflow/www/views.py"] | delete dag run times out | ### Apache Airflow version
2.5.2
### What happened
When trying to delete a DAG run with many tasks (>1000), the operation times out and the DAG run is not deleted.
### What you think should happen instead
_No response_
### How to reproduce
Attempting to delete a DAG run that contains >1000 tasks (in my case 10k) using the dagrun/list/ page results in a timeout:

Code for the DAG (however it fails on any DAG with >1000 tasks):
```
import json
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from datetime import datetime, timedelta
from airflow.decorators import dag, task
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'retries': 0,
'retry_delay': timedelta(minutes=1),
'start_date': datetime(2023, 2, 26),
'is_delete_operator_pod': True,
'get_logs': True
}
@dag('system_test', schedule=None, default_args=default_args, catchup=False, tags=['maintenance'])
def run_test_airflow():
stress_image = 'dockerhub.prod.evogene.host/progrium/stress'
@task
def create_cmds():
commands = []
for i in range(10000):
commands.append(["stress --cpu 4 --io 1 --vm 2 --vm-bytes 6000M --timeout 60s"])
return commands
KubernetesPodOperator.partial(
image=stress_image ,
task_id=f'test_airflow',
name=f'test_airflow',
cmds=["/bin/sh", "-c"],
log_events_on_failure=True,
pod_template_file=f'/opt/airflow/dags/repo/templates/cpb_cpu_4_mem_16'
).expand(arguments=create_cmds())
run_test_airflow()
```
### Operating System
kubernetes deployment
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30196 | https://github.com/apache/airflow/pull/30330 | a1b99fe5364977739b7d8f22a880eeb9d781958b | 4e4e563d3fc68d1becdc1fc5ec1d1f41f6c24dd3 | "2023-03-20T11:27:46Z" | python | "2023-04-11T07:58:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,169 | ["airflow/providers/google/cloud/hooks/looker.py", "tests/providers/google/cloud/hooks/test_looker.py"] | Potential issue with use of serialize in Looker SDK | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==8.11.0
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-sqlite==3.3.1
### Apache Airflow version
2
### Operating System
OS X (same issue on AWS)
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### What happened
I wrote a mod on top of LookerHook to access the `scheduled_plan_run_once` endpoint. The result was the following error.
```Traceback (most recent call last):
File "/usr/local/airflow/dags/utils/looker_operators_mod.py", line 125, in execute
resp = self.hook.run_scheduled_plan_once(
File "/usr/local/airflow/dags/utils/looker_hook_mod.py", line 136, in run_scheduled_plan_once
resp = sdk.scheduled_plan_run_once(plan_to_send)
File "/usr/local/lib/python3.9/site-packages/looker_sdk/sdk/api40/methods.py", line 10273, in scheduled_plan_run_once
self.post(
File "/usr/local/lib/python3.9/site-packages/looker_sdk/rtl/api_methods.py", line 171, in post
serialized = self._get_serialized(body)
File "/usr/local/lib/python3.9/site-packages/looker_sdk/rtl/api_methods.py", line 156, in _get_serialized
serialized = self.serialize(api_model=body) # type: ignore
TypeError: serialize() missing 1 required keyword-only argument: 'converter'
```
I was able to get past the error by rewriting the `get_looker_sdk` function in LookerHook to initialize with `looker_sdk.init40` instead, which resolved the serialize() issue.
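For reference, a hedged sketch of that workaround, assuming credentials are supplied through the standard looker_sdk environment variables (all values below are placeholders):
```python
import os

import looker_sdk

os.environ["LOOKERSDK_BASE_URL"] = "https://example.cloud.looker.com"  # placeholder
os.environ["LOOKERSDK_CLIENT_ID"] = "client-id"                        # placeholder
os.environ["LOOKERSDK_CLIENT_SECRET"] = "client-secret"                # placeholder

sdk = looker_sdk.init40()  # init40 wires up the serializer/converter itself

plan_to_send = {}  # the scheduled-plan body built elsewhere, as in the hook mod above
resp = sdk.scheduled_plan_run_once(plan_to_send)
```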
### What you think should happen instead
I don't know why the serialization piece is part of the SDK initialization - would love some further context!
### How to reproduce
As far as I can tell, any call to sdk.scheduled_plan_run_once() causes this issue. I tried it with a variety of different dict plans. I only resolved it by changing how I initialized the SDK
### Anything else
n/a
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30169 | https://github.com/apache/airflow/pull/34678 | 3623b77d22077b4f78863952928560833bfba2f4 | 562b98a6222912d3a3d859ca3881af3f768ba7b5 | "2023-03-17T18:50:15Z" | python | "2023-10-02T20:31:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,167 | ["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | SSHOperator - Allow specific command timeout | ### Description
Following #29282, the command timeout is set at the `SSHHook` level, while it used to be settable at the `SSHOperator` level.
I will work on a PR as soon as I can.
### Use case/motivation
Ideally, I think we could have a default value set on `SSHHook`, but with the possibility of overriding it at the `SSHOperator` level.
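A rough sketch of what such an override could look like at the task level; treat the operator-level `cmd_timeout` argument as the proposed behaviour rather than a guaranteed API:
```python
from airflow.providers.ssh.operators.ssh import SSHOperator

long_command = SSHOperator(
    task_id="long_command",
    ssh_conn_id="ssh_default",   # the hook/connection keeps a sensible default timeout
    command="./run_batch.sh",    # placeholder command
    cmd_timeout=600,             # proposed per-task override, in seconds
)
```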
### Related issues
#29282
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30167 | https://github.com/apache/airflow/pull/30190 | 2a42cb46af66c7d6a95a718726cb9206258a0c14 | fe727f985b1053b838433b817458517c0c0f2480 | "2023-03-17T15:56:30Z" | python | "2023-03-21T20:32:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,153 | ["airflow/providers/neo4j/hooks/neo4j.py", "tests/providers/neo4j/hooks/test_neo4j.py"] | Issue with Neo4j provider using some schemes | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hi,
I've run into some issues when using the neo4j operator.
I've tried running a simple query and got an exception from the driver itself.
**Using: Airflow 2.2.2**
### What you think should happen instead
The exception stated that when using bolt+ssc URI scheme, it is not allowed to use the `encrypted` parameter which is mandatory in the hook (but actually not mandatory when using the driver standalone).
The exception:
neo4j.exceptions.ConfigurationError: The config settings "encrypted", "trust", "trusted_certificates", and "ssl_context" can only be used with the URI schemes ['bolt', 'neo4j']. Use the other URI schemes ['bolt+ssc', 'bolt+s', 'neo4j+ssc', 'neo4j+s'] for setting encryption settings.
In my opinion:
If the URI scheme is bolt+ssc and GraphDatabase.driver was chosen in the connection settings, the driver should not be constructed with the `encrypted` parameter.
I edited the hook myself and tried this; it worked great for me.
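A hedged sketch of the behaviour described above, shown outside of the hook (URI and credentials would be placeholders):
```python
from neo4j import GraphDatabase


def build_driver(uri: str, user: str, password: str, encrypted: bool):
    # The "+s"/"+ssc" schemes already carry the encryption settings, so only pass
    # `encrypted` for plain bolt:// or neo4j:// URIs.
    if uri.startswith(("bolt+s", "neo4j+s")):  # also covers bolt+ssc / neo4j+ssc
        return GraphDatabase.driver(uri, auth=(user, password))
    return GraphDatabase.driver(uri, auth=(user, password), encrypted=encrypted)
```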
### How to reproduce
install the neo4j provider (I used v3.1.0)
Create a neo4j connection in the UI.
Add your host, user/login, password and extras.
In the extras:
{
"encrypted": false,
"neo4j_scheme": false,
"certs_self_signed": true
}
### Operating System
Linux
### Versions of Apache Airflow Providers
pyairtable==1.0.0
tableauserverclient==0.17.0
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-salesforce==3.3.0
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-tableau==2.1.2
apache-airflow-providers-postgres==2.3.0
apache-airflow-providers-jdbc==2.0.1
apache-airflow-providers-neo4j==3.1.0
mysql-connector-python==8.0.27
slackclient>=1.0.0,<2.0.0
boto3==1.20.26
cached-property==1.5.2
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30153 | https://github.com/apache/airflow/pull/30418 | 93a5422c5677a42b3329c329d65ff2b38b1348c2 | cd458426c66aca201e43506c950ee68c2f6c3a0a | "2023-03-16T19:47:42Z" | python | "2023-04-21T22:01:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,124 | ["airflow/models/taskinstance.py", "airflow/utils/state.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/models/test_cleartasks.py", "tests/models/test_dagrun.py"] | DagRun's start_date updated when user clears task of the running Dagrun | ### Apache Airflow version
2.5.1
### What happened
DagRun state and start_date are reset if somebody clears a task of a running DagRun.
### What you think should happen instead
I think we should not reset the DagRun `state` and `start_date` while it's in the running or queued state, because it doesn't make any sense to me. The `state` and `start_date` of the DagRun should remain the same when somebody clears a task of the running DagRun.
### How to reproduce
Let's say we have a DAG with 2 tasks in it - a short one and a long one:
```
dag = DAG(
'dummy-dag',
schedule_interval='@once',
catchup=False,
)
DagContext.push_context_managed_dag(dag)
bash_success = BashOperator(
task_id='bash-success',
bash_command='echo "Start and finish"; exit 0',
retries=0,
)
date_ind_success = BashOperator(
task_id='bash-long-success',
bash_command='echo "Start and finish"; sleep 300; exit 0',
)
```
Let's say we have a running DagRun of this DAG. The first task finishes in a second and the long one is still running. We have a start_date and duration set and the DagRun is still running. It runs, for example, for 30 secs (pic 1 and 2).
<img width="486" alt="image" src="https://user-images.githubusercontent.com/23456894/225335210-c2223ad1-771b-459d-b8ed-8f0aacb9b890.png">
<img width="492" alt="image" src="https://user-images.githubusercontent.com/23456894/225335272-ad737aef-2051-4e27-ae36-38c76d720c95.png">
Then we clear the short task. This resets the DagRun state (to `queued`) and clears `start_date`, as if we had a new DagRun (pic 3 and 4).
<img width="407" alt="image" src="https://user-images.githubusercontent.com/23456894/225335397-6c7e0df7-a26a-46ed-8eaa-56ff928fc01a.png">
<img width="498" alt="image" src="https://user-images.githubusercontent.com/23456894/225335491-4d6a860a-e923-4878-b212-a6ccb4b590a3.png">
### Operating System
Unix/MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30124 | https://github.com/apache/airflow/pull/30125 | 0133f6806dbfb60b84b5bea4ce0daf073c246d52 | 070ecbd87c5ac067418b2814f554555da0a4f30c | "2023-03-15T14:26:30Z" | python | "2023-04-26T15:27:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,089 | ["airflow/www/views.py", "tests/www/views/test_views_rendered.py"] | Connection password values appearing unmasked in the "Task Instance Details" -> "Environment" field | ### Apache Airflow version
Airflow 2.5.1
### What happened
Connection password values appear unmasked in the "Task Instance Details" -> "Task Attributes" -> environment field.
We are setting environment variables for the docker_operator with values from the password field in a connection.
The values from the password field are masked in the "Rendered Template" section and in the logs but it's showing the values in the "environment" field under Task Instance Details.
### What you think should happen instead
These password values should be masked like they are in the "Rendered Template" and logs.
### How to reproduce
Via this DAG, which can run off any image.
Create a connection called "DATABASE_CONFIG" with a password in the password field.
Run this DAG and then check its Task Instance Details.
DAG Code:
```
from airflow import DAG
from docker.types import Mount
from airflow.providers.docker.operators.docker import DockerOperator
from datetime import timedelta
from airflow.models import Variable
from airflow.hooks.base_hook import BaseHook
import pendulum
import json
# Amount of times to retry job on failure
retries = 0
environment_config = {
"DB_WRITE_PASSWORD": BaseHook.get_connection("DATABASE_CONFIG").password,
}
# Setup default args for the job
default_args = {
"owner": "airflow",
"start_date": pendulum.datetime(2023, 1, 1, tz="Australia/Sydney"),
"retries": retries,
}
# Create the DAG
dag = DAG(
"test_dag", # DAG ID
default_args=default_args,
schedule_interval="* * * * *",
catchup=False,
)
# # Create the DAG object
with dag as dag:
docker_task = DockerOperator(
task_id="task",
image="<image>",
execution_timeout=timedelta(minutes=2),
environment=environment_config,
command="<command>",
api_version="auto",
docker_url="tcp://docker.for.mac.localhost:2375",
)
```
Rendered Template is good:

In "Task Instance Details"

### Operating System
centOS Linux and MAC
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Running on a docker via the airflow docker-compose
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30089 | https://github.com/apache/airflow/pull/31125 | db359ee2375dd7208583aee09b9eae00f1eed1f1 | ffe3a68f9ada2d9d35333d6a32eac2b6ac9c70d6 | "2023-03-14T04:35:49Z" | python | "2023-05-08T14:59:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,075 | ["airflow/api_connexion/openapi/v1.yaml"] | Unable to set DagRun state in create Dagrun endpoint ("Property is read-only - 'state'") | ### Apache Airflow version
main (development)
### What happened
While working on another change I noticed that the example [POST from the API docs](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/post_dag_run) actually leads to a request Error:
```
curl -X POST -H "Cookie: session=xxxx" localhost:8080/api/v1/dags/data_warehouse_dag_5by1a2rogu/dagRuns -d '{"dag_run_id":"string2","logical_date":"2019-08-24T14:15:24Z","execution_date":"2019-08-24T14:15:24Z","conf":{},"state":"queued","note":"strings"}' -H 'Content-Type: application/json'
{
"detail": "Property is read-only - 'state'",
"status": 400,
"title": "Bad Request",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
I believe that this comes from the DagRunSchema marking this field as dump_only:
https://github.com/apache/airflow/blob/478fd826522b6192af6b86105cfa0686583e34c2/airflow/api_connexion/schemas/dag_run_schema.py#L69
So either -
1) The documentation / API spec is incorrect and this field cannot be set in the request
2) The marshmallow schema is incorrect and this field is incorrectly marked as `dump_only`
I think that its the former, as there's [even a test to ensure that this field can't be set in a request](https://github.com/apache/airflow/blob/751a995df55419068f11ebabe483dba3302916ed/tests/api_connexion/endpoints/test_dag_run_endpoint.py#L1247-L1257) - I can look into this and fix it soon.
### What you think should happen instead
The API should accept requests which follow examples from the documentation.
### How to reproduce
Spin up breeze and POST a create dagrun request which attempts to set the DagRun state.
### Operating System
Breeze
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30075 | https://github.com/apache/airflow/pull/30149 | f01140141f1fe51b6ee1eba5b02ab7516a67c9c7 | e01c14661a4ec4bee3a2066ac1323fbd8a4386f1 | "2023-03-13T17:28:20Z" | python | "2023-03-21T18:26:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,042 | ["airflow/www/utils.py", "airflow/www/views.py"] | Search/filter by note in List Dag Run | ### Description
Going to Airflow web UI, Browse>DAG Run displays the list of runs, but there is no way to search or filter based on the text in the "Note" column.
### Use case/motivation
It is possible to do a free text search for the "Run Id" field. The Note field may contain pieces of information that may be relevant to find, or to filter on the basis of these notes.
### Related issues
Sorting by Note in List Dag Run fails:
https://github.com/apache/airflow/issues/30041
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30042 | https://github.com/apache/airflow/pull/31455 | f00c131cbf5b2c19c817d1a1945326b80f8c79e7 | 5794393c95156097095e6fbf76d7faeb6ec08072 | "2023-03-11T14:16:02Z" | python | "2023-05-25T18:17:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,980 | ["airflow/providers/microsoft/azure/hooks/data_lake.py"] | ADLS Gen2 Hook incorrectly forms account URL when using Active Directory authentication method (Azure Data Lake Storage V2) | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure 5.2.1
### Apache Airflow version
2.5.1
### Operating System
Ubuntu 18.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When attempting to use an Azure Active Directory application to connect through the Azure Data Lake Storage Gen2 hook, the generated account URL sent to the DataLakeServiceClient is incorrect.
It substitutes in the Client ID (`login` field) where the storage account name should be.
### What you think should happen instead
The `host` field on the connection form should be used to store the storage account name and should be used to fill the account URL for both Active Directory and Key-based authentication.
### How to reproduce
1. Create an "Azure Data Lake Storage V2" connection (adls) and put the AAD application Client ID into `login` field, Client secret into `password` field and Tenant ID into `tenant_id` field.
2. Attempt to perform any operations with the `AzureDataLakeStorageV2Hook` hook.
3. Notice how it fails, and that the URL in the logs is incorrectly `https://{client_id}.dfs.core.windows.net/...`, when it should be `https://{storage_account}.dfs.core.windows.net/...`
This can be fixed by:
1. Making your own copy of the hook.
2. Entering the storage account name into the `host` field (currently labelled "Account Name (Active Directory Auth)").
3. Editing the `get_conn` method to substitute `conn.host` into the `account_url` (instead of `conn.login`).
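A hedged sketch of the fix described in step 3, with the connection fields passed in explicitly:
```python
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient


def build_client(storage_account: str, client_id: str, client_secret: str, tenant_id: str):
    credential = ClientSecretCredential(
        tenant_id=tenant_id, client_id=client_id, client_secret=client_secret
    )
    # The account URL must be built from the storage account name (conn.host),
    # not from the AAD client id (conn.login).
    return DataLakeServiceClient(
        account_url=f"https://{storage_account}.dfs.core.windows.net", credential=credential
    )
```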
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29980 | https://github.com/apache/airflow/pull/29981 | def1f89e702d401f67a94f34a01f6a4806ea92e6 | 008f52444a84ceaa2de7c2166b8f253f55ca8c21 | "2023-03-08T15:42:36Z" | python | "2023-03-10T12:11:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,967 | ["chart/dockerfiles/pgbouncer-exporter/build_and_push.sh", "chart/dockerfiles/pgbouncer/build_and_push.sh", "chart/newsfragments/30054.significant.rst"] | Build our supporting images for chart in multi-platform versions | ### Body
The supporting images of ours are built using one platform only but they could be multiplatform.
The scripts to build those should be updated to support multi-platform builds.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29967 | https://github.com/apache/airflow/pull/30054 | 5a3be7256b2a848524d3635d7907b6829a583101 | 39cfc67cad56afa3b2434bc8e60bcd0676d41fc1 | "2023-03-08T00:22:45Z" | python | "2023-03-15T22:19:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,959 | ["airflow/jobs/local_task_job_runner.py", "airflow/jobs/scheduler_job_runner.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "airflow/serialization/pydantic/job.py"] | expand dynamic mapped tasks in batches | ### Description
Expanding tasks in batches to allow mapped tasks to spawn more than 1024 processes.
### Use case/motivation
Maximum length of a list is limited to 1024 by `max_map_length (AIRFLOW__CORE__MAX_MAP_LENGTH)`.
During scheduling of the new tasks, an UPDATE query is run that tries to set all the new tasks at once. Increasing `max_map_length` beyond 4K makes the Airflow scheduler completely unresponsive.
Also, Postgres throws a `stack depth limit exceeded` error, which can be fixed by updating to a newer version and setting `max_stack_depth` higher. But it doesn't really matter, because the Airflow scheduler freezes up.
As a workaround, I split the dag runs into subdag runs which works but it would be much nicer if we didn't have to worry about exceeding `max_map_length`.
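As a rough illustration of the batching idea (not the author's exact subdag approach), the input can be pre-chunked so each mapped task processes a batch instead of a single item, keeping the number of mapped tasks under `max_map_length`:
```python
from airflow.decorators import task


@task
def make_batches(commands: list, batch_size: int = 100) -> list[list]:
    # e.g. 10k commands -> 100 mapped tasks of 100 commands each
    return [commands[i : i + batch_size] for i in range(0, len(commands), batch_size)]


@task
def run_batch(batch: list):
    for command in batch:
        ...  # do the per-item work here


# run_batch.expand(batch=make_batches(all_commands))
```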
### Related issues
It was discussed here:
[Increasing 'max_map_length' leads to SQL 'max_stack_depth' error with 5000 dags to be spawned #28478](https://github.com/apache/airflow/discussions/28478)
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29959 | https://github.com/apache/airflow/pull/30372 | 5f2628d36cb8481ee21bd79ac184fd8fdce3e47d | ed39b6fab7a241e2bddc49044c272c5f225d6692 | "2023-03-07T16:12:04Z" | python | "2023-04-22T19:10:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,957 | ["chart/templates/scheduler/scheduler-deployment.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_scheduler.py", "tests/charts/test_webserver.py"] | hostAliases for scheduler and webserver | ### Description
I am not sure why this PR was not merged (https://github.com/apache/airflow/pull/23558) but I think it would be great to add hostAliases not just to the workers, but the scheduler and webserver too.
### Use case/motivation
Be able to modify /etc/hosts in webserver and scheduler.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29957 | https://github.com/apache/airflow/pull/30051 | 5c15b23023be59a87355c41ab23a46315cca21a5 | f07d300c4c78fa1b2becb4653db8d25b011ea273 | "2023-03-07T15:25:15Z" | python | "2023-03-12T14:22:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,939 | ["airflow/providers/amazon/aws/links/emr.py", "airflow/providers/amazon/aws/operators/emr.py", "airflow/providers/amazon/aws/sensors/emr.py", "tests/providers/amazon/aws/operators/test_emr_add_steps.py", "tests/providers/amazon/aws/operators/test_emr_create_job_flow.py", "tests/providers/amazon/aws/operators/test_emr_modify_cluster.py", "tests/providers/amazon/aws/operators/test_emr_terminate_job_flow.py", "tests/providers/amazon/aws/sensors/test_emr_job_flow.py", "tests/providers/amazon/aws/sensors/test_emr_step.py"] | AWS EMR Operators: Add Log URI in task logs to speed up debugging | ### Description
Airflow is widely used to launch, interact with, and submit jobs on AWS EMR clusters. Existing EMR operators do not provide links to the EMR logs (job flow/step logs); as a result, in case of failures the users need to switch to the EMR console or go to the AWS S3 console to locate the logs for EMR jobs and steps using the job_flow_id available in the EMR operators and in XCom.
It would be really convenient and help with debugging if the EMR log links were present in the operator task logs; it would obviate the need to switch to the AWS S3 or AWS EMR consoles from Airflow and look up the logs using job_flow_ids. It would be a nice improvement for the developer experience.
LogUri for Cluster is available in [DescribeCluster](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr/client/describe_cluster.html)
LogFile path for Steps in case of failure is available in [ListSteps](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr/client/list_steps.html)
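For context, a hedged boto3 sketch of where those fields live (the cluster id is a placeholder):
```python
import boto3

emr = boto3.client("emr")
job_flow_id = "j-XXXXXXXXXXXXX"  # placeholder

# Cluster-level log location (S3 prefix configured for the cluster)
log_uri = emr.describe_cluster(ClusterId=job_flow_id)["Cluster"].get("LogUri")

# Step-level log file, populated for failed steps
for step in emr.list_steps(ClusterId=job_flow_id)["Steps"]:
    failure = step.get("Status", {}).get("FailureDetails", {})
    print(step["Id"], failure.get("LogFile"))
```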
### Use case/motivation
Ability to go to EMR logs directly from Airflow EMR Task logs.
### Related issues
N/A
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29939 | https://github.com/apache/airflow/pull/31032 | 6c92efbe8b99e172fe3b585114e1924c0bb2f26b | 2d5166f9829835bdfd6479aa789c8a27147288d6 | "2023-03-06T18:03:55Z" | python | "2023-05-03T23:18:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,912 | ["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py"] | BigQueryToGCSOperator does not wait for completion | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==7.0.0
### Apache Airflow version
2.3.2
### Operating System
Debian GNU/Linux
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
[Deferrable mode for BigQueryToGCSOperator #27683](https://github.com/apache/airflow/pull/27683) changed the functionality of the `BigQueryToGCSOperator` so that it no longer waits for the completion of the operation. This is because the `nowait=True` parameter is now [being set](https://github.com/apache/airflow/pull/27683/files#diff-23c5b2e773487f9c28b75b511dbf7269eda1366f16dec84a349d95fa033ffb3eR191).
### What you think should happen instead
This is unexpected behavior. Any downstream tasks of the `BigQueryToGCSOperator` that expect the CSVs to have been written by the time they are called may result in errors (and have done so in our own operations).
The property should at least be configurable.
### How to reproduce
1. Leverage the `BigQueryToGcsOperator` in your DAG.
2. Have it write a large table to a CSV somewhere in GCS
3. Notice that the task completes almost immediately but the CSVs may not exist in GCS until later.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29912 | https://github.com/apache/airflow/pull/29925 | 30b2e6c185305a56f9fd43683f1176f01fe4e3f6 | 464ab1b7caa78637975008fcbb049d5b52a8b005 | "2023-03-03T23:29:15Z" | python | "2023-03-05T10:40:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,903 | ["airflow/models/baseoperator.py", "tests/models/test_mappedoperator.py"] | Task-level retries overrides from the DAG-level default args are not respected when using `partial` | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
When running a DAG that is structured like:
```
@dag(dag_id="my_dag", default_args={"retries": 0})
def dag():
op = MyOperator.partial(task_id="my_task", retries=3).expand(...)
```
The following test fails:
```
def test_retries(self) -> None:
dag_bag = DagBag(dag_folder=DAG_FOLDER, include_examples=False)
dag = dag_bag.dags["my_dag"]
for task in dag.tasks:
if "my_task" in task.task_id:
self.assertEqual(3, task.retries) # fails - this is 0
```
When printing out `task.partial_kwargs`, and looking at how the default args and partial args are merged, it seems like the default args are always taking precedence, even though in the `partial` global function, the `retries` do get set later on with the task-level parameter value. This doesn't seem to be respected though.
### What you think should happen instead
_No response_
### How to reproduce
If you run my above unit test for a test DAG, on version 2.4.3, it should show up as a test failure.
### Operating System
OS Ventura
### Versions of Apache Airflow Providers
_No response_
### Deployment
Google Cloud Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29903 | https://github.com/apache/airflow/pull/29913 | 57c09e59ee9273ff64cd4a85b020a4df9b1d9eca | f01051a75e217d5f20394b8c890425915383101f | "2023-03-03T19:22:23Z" | python | "2023-04-14T12:16:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,875 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/connection_command.py", "docs/apache-airflow/howto/connection.rst", "tests/cli/commands/test_connection_command.py"] | Airflow Connection Testing Using Airflow CLI | ### Description
Airflow connection testing using the Airflow CLI would be very useful, allowing users to quickly test connections from their applications. It would let CLI users create and test new connections right from the instance and reduce time spent troubleshooting connection issues.
### Use case/motivation
Airflow connection testing using the Airflow CLI, similar to the connection test function we already have elsewhere in Airflow.
example: airflow connection test "hello_id"
### Related issues
N/A
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29875 | https://github.com/apache/airflow/pull/29892 | a3d59c8c759582c27f5a234ffd4c33a9daeb22a9 | d2e5b097e6251e31fb4c9bb5bf16dc9c77b56f75 | "2023-03-02T14:13:55Z" | python | "2023-03-09T09:26:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,858 | ["airflow/www/package.json", "airflow/www/static/js/api/index.ts", "airflow/www/static/js/api/useDag.ts", "airflow/www/static/js/api/useDagCode.ts", "airflow/www/static/js/dag/details/dagCode/CodeBlock.tsx", "airflow/www/static/js/dag/details/dagCode/index.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/templates/airflow/dag.html", "airflow/www/yarn.lock"] | Migrate DAG Code page to Grid Details | - [ ] Use REST API to render DAG Code in the grid view as a tab when a user has no runs/tasks selected
- [ ] Redirect all urls to new code
- [ ] delete the old code view | https://github.com/apache/airflow/issues/29858 | https://github.com/apache/airflow/pull/31113 | 3363004450355582712272924fac551dc1f7bd56 | 4beb89965c4ee05498734aa86af2df7ee27e9a51 | "2023-03-02T00:38:49Z" | python | "2023-05-17T16:27:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,843 | ["airflow/models/taskinstance.py", "tests/www/views/test_views.py"] | The "Try Number" filter under task instances search is comparing integer with non-integer object | ### Apache Airflow version
2.5.1
### What happened
The `Try Number` filter is comparing the given integer with an instance of a "property" object
* screenshots


* text version
```
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find
a solution to your problem.
Consider following these steps:
* gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
* find similar issues using:
* [GitHub Discussions](https://github.com/apache/airflow/discussions)
* [GitHub Issues](https://github.com/apache/airflow/issues)
* [Stack Overflow](https://stackoverflow.com/questions/tagged/airflow)
* the usual search engine you use on a daily basis
* if you run Airflow on a Managed Service, consider opening an issue using the service support channels
* if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a [bug report](https://github.com/apache/airflow/issues/new/choose).
Make sure however, to include all relevant details and results of your investigation so far.
Python version: 3.8.16
Airflow version: 2.5.1
Node: kip-airflow-8b665fdd7-lcg6q
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/security/decorators.py", line 133, in wraps
return f(self, *args, **kwargs)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/views.py", line 554, in list
widgets = self._list()
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 1164, in _list
widgets = self._get_list_widget(
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 1063, in _get_list_widget
count, lst = self.datamodel.query(
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 461, in query
count = self.query_count(query, filters, select_columns)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 382, in query_count
return self._apply_inner_all(
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 368, in _apply_inner_all
query = self.apply_filters(query, inner_filters)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 223, in apply_filters
return filters.apply_all(query)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/filters.py", line 300, in apply_all
query = flt.apply(query, value)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/filters.py", line 169, in apply
return query.filter(field > value)
TypeError: '>' not supported between instances of 'property' and 'int'
```
### What you think should happen instead
The "Try Number" search should compare integer with integer
### How to reproduce
1. Go to "Browse" -> "Task Instances"
2. "Search" -> "Add Filter" -> choose "Dag Id" and "Try Number"
3. Choose "Greater than" in the drop-down and enter an integer
4. Click "Search"
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29843 | https://github.com/apache/airflow/pull/29850 | 00a2c793c7985f8165c2bef9106fc81ee66e07bb | a3c9902bc606f0c067a45f09e9d3d152058918e9 | "2023-03-01T17:45:26Z" | python | "2023-03-10T12:01:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,841 | ["setup.cfg"] | high memory leak, cannot start even webserver | ### Apache Airflow version
2.5.1
### What happened
I'd used airflow 2.3.1 and everything was fine.
Then I decided to move to airflow 2.5.1.
I can't even start the webserver; airflow on my laptop consumes the entire memory (32 GB) and the OOM killer comes.
I investigated a bit. It starts with airflow 2.3.4, only with the official docker image (apache/airflow:2.3.4) and only on my linux laptop; mac is ok.
Memory leak starts when source code tries to import for example `airflow.cli.commands.webserver_command` module using `airflow.utils.module_loading.import_string`.
I dived deeply and found that it happens when "import daemon" is performed.
You can reproduce it with this command: `docker run --rm --entrypoint="" apache/airflow:2.3.4 /bin/bash -c "python -c 'import daemon'"`. Once again, it is reproduced only on linux (my kernel is 6.1.12).
That's weird considering `daemon` hasn't been changed since 2018.
### What you think should happen instead
_No response_
### How to reproduce
docker run --rm --entrypoint="" apache/airflow:2.3.4 /bin/bash -c "python -c 'import daemon'"
### Operating System
Arch Linux (kernel 6.1.12)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29841 | https://github.com/apache/airflow/pull/29916 | 864ff2e3ce185dfa3df0509a4bd3c6b5169e907f | c8cc49af2d011f048ebea8a6559ddd5fca00f378 | "2023-03-01T15:36:01Z" | python | "2023-03-04T15:27:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,836 | ["airflow/www/forms.py", "airflow/www/validators.py", "tests/www/test_validators.py", "tests/www/views/test_views_connection.py"] | Restrict allowed characters in connection ids | ### Description
I bumped into a bug where a connection id was suffixed with a whitespace e.g. "myconn ". When referencing the connection id "myconn" (without whitespace), you get a connection not found error.
To avoid such human errors, I suggest restricting the characters allowed for connection ids.
Some suggestions:
- There's an `airflow.utils.helpers.validate_key` function for validating the DAG id. Probably a good idea to reuse this.
- I believe variable ids are also not validated; it would be good to check those too.
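For illustration, a hedged sketch of the suggested reuse; the exact allowed character set may differ between Airflow versions:
```python
from airflow.utils.helpers import validate_key

validate_key("myconn")   # passes: alphanumerics, dots, dashes and underscores are fine
validate_key("myconn ")  # raises AirflowException because of the trailing whitespace
```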
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29836 | https://github.com/apache/airflow/pull/31140 | 85482e86f5f93015487938acfb0cca368059e7e3 | 5cb8ef80a0bd84651fb660c552563766d8ec0ea1 | "2023-03-01T11:58:40Z" | python | "2023-05-12T10:25:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,781 | ["airflow/providers/sftp/hooks/sftp.py", "airflow/providers/sftp/sensors/sftp.py", "tests/providers/sftp/hooks/test_sftp.py", "tests/providers/sftp/sensors/test_sftp.py"] | newer_than and file_pattern don't work well together in SFTPSensor | ### Apache Airflow Provider(s)
sftp
### Versions of Apache Airflow Providers
4.2.3
### Apache Airflow version
2.5.1
### Operating System
macOS Ventura 13.2.1
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
I wanted to use `file_pattern` and `newer_than` in `SFTPSensor` to find only the files that landed in SFTP after the data interval of the prior successful DAG run (`{{ prev_data_interval_end_success }}`).
I have four text files (`file.txt`, `file1.txt`, `file2.txt` and `file3.txt`) but only `file3.txt` has the last modification date after the data interval of the prior successful DAG run. I use the following file pattern: `"*.txt"`.
The moment the first file (`file.txt`) was matched and the modification date did not meet the requirement, the task changed the status to `up_for_reschedule`.
### What you think should happen instead
The other files matching the pattern should be checked as well.
### How to reproduce
```python
import pendulum
from airflow import DAG
from airflow.providers.sftp.sensors.sftp import SFTPSensor
with DAG(
dag_id="sftp_test",
start_date=pendulum.datetime(2023, 2, 1, tz="UTC"),
schedule="@once",
render_template_as_native_obj=True,
):
wait_for_file = SFTPSensor(
task_id="wait_for_file",
sftp_conn_id="sftp_default",
path="/upload/",
file_pattern="*.txt",
newer_than="{{ prev_data_interval_end_success }}",
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29781 | https://github.com/apache/airflow/pull/29794 | 60d98a1bc2d54787fcaad5edac36ecfa484fb42b | 9357c81828626754c990c3e8192880511a510544 | "2023-02-27T12:25:27Z" | python | "2023-02-28T05:45:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,754 | ["airflow/example_dags/example_dynamic_task_mapping_with_no_taskflow_operators.py", "docs/apache-airflow/authoring-and-scheduling/dynamic-task-mapping.rst", "tests/serialization/test_dag_serialization.py", "tests/www/views/test_views_acl.py"] | Add classic operator example for dynamic task mapping "reduce" task | ### What do you see as an issue?
The [documentation for Dynamic Task Mapping](https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#simple-mapping) does not include an example of a "reduce" task (e.g. `sum_it` in the [examples](https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#simple-mapping)) using the classic (or non-TaskFlow) operators. It only includes an example that uses the TaskFlow operators.
When I attempted to write a "reduce" task using classic operators for my DAG, I found that there wasn't an obvious approach.
### Solving the problem
We should add an example of a "reduce" task that uses the classic (non-TaskFlow) operators.
For example, for the given `sum_it` example:
```
"""Example DAG demonstrating the usage of dynamic task mapping reduce using classic
operators.
"""
from __future__ import annotations

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def add_one(x: int):
    return x + 1


def sum_it(values):
    total = sum(values)
    print(f"Total was {total}")


with DAG(dag_id="example_dynamic_task_mapping_reduce", start_date=datetime(2022, 3, 4)):
    add_one_task = PythonOperator.partial(
        task_id="add_one",
        python_callable=add_one,
    ).expand(
        op_kwargs=[
            {"x": 1},
            {"x": 2},
            {"x": 3},
        ]
    )

    sum_it_task = PythonOperator(
        task_id="sum_it",
        python_callable=sum_it,
        op_kwargs={"values": add_one_task.output},
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29754 | https://github.com/apache/airflow/pull/29762 | c9607d44de5a3c9674a923a601fc444ff957ac7e | 4d4c2b9d8b5de4bf03524acf01a298c162e1d9e4 | "2023-02-24T23:35:25Z" | python | "2023-05-31T05:47:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,746 | ["airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | DatabricksSubmitRunOperator does not support passing output of another task to `base_parameters` | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==4.0.0
### Apache Airflow version
2.4.3
### Operating System
MAC OS
### Deployment
Virtualenv installation
### Deployment details
The issue is consistent across multiple Airflow deployments (locally on Docker Compose, remotely on MWAA in AWS, locally using virtualenv)
### What happened
Passing the output of a previous task (TaskFlow paradigm) as the `base_parameters` key of the `notebook_task` parameter of `DatabricksSubmitRunOperator` does not work.
After inspecting `DatabricksSubmitRunOperator.__init__`, it seems that the problem lies in the fact that it uses `utils.databricks.normalise_json_content` to validate input parameters and, given that the input parameter is of type `PlainXComArg`, it fails to parse.
The workaround I found is to call it using `partial` and `expand`, which is a bit hacky and much less legible.
### What you think should happen instead
`DatabricksSubmitRunOperator` should accept `PlainXComArg` arguments on init and only validate them in `execute`, prior to submitting the job run.
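A minimal sketch of the idea (simplified; `normalise_json_content` is the helper mentioned above, the rest of the class is illustrative):
```python
from __future__ import annotations

from airflow.models import BaseOperator
from airflow.providers.databricks.utils.databricks import normalise_json_content


class DatabricksSubmitRunOperatorSketch(BaseOperator):
    def __init__(self, *, json: dict | None = None, **kwargs) -> None:
        super().__init__(**kwargs)
        # keep the raw dict so XComArg placeholders survive DAG parsing
        self.json = json or {}

    def execute(self, context) -> None:
        # templates/XComArgs are resolved by now, so validation is safe here
        run_payload = normalise_json_content(self.json)
        ...  # submit run_payload via the Databricks hook
```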
### How to reproduce
This DAG fails to parse:
```python3
from airflow import DAG
from airflow.decorators import task
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator
from airflow.utils.dates import days_ago

with DAG(
    "dag_erroring",
    start_date=days_ago(1),
    params={"param_1": "", "param_2": ""},
) as dag:

    @task
    def from_dag_params_to_notebook_params(**context):
        # Transform/Validate DAG input parameters to sth expected by Notebook
        notebook_param_1 = context["dag_run"].conf["param_1"] + "abcd"
        notebook_param_2 = context["dag_run"].conf["param_2"] + "efgh"
        return {"some_param": notebook_param_1, "some_other_param": notebook_param_2}

    DatabricksSubmitRunOperator(
        task_id="my_notebook_task",
        new_cluster={
            "cluster_name": "single-node-cluster",
            "spark_version": "7.6.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 0,
            "spark_conf": {
                "spark.databricks.cluster.profile": "singleNode",
                "spark.master": "[*, 4]",
            },
            "custom_tags": {"ResourceClass": "SingleNode"},
        },
        notebook_task={
            "notebook_path": "some/path/to/a/notebook",
            "base_parameters": from_dag_params_to_notebook_params(),
        },
        libraries=[],
        databricks_retry_limit=3,
        timeout_seconds=86400,
        polling_period_seconds=20,
    )
```
This one does not:
```python3
with DAG(
    "dag_parsing_fine",
    start_date=days_ago(1),
    params={"param_1": "", "param_2": ""},
) as dag:

    @task
    def from_dag_params_to_notebook_params(**context):
        # Transform/Validate DAG input parameters to sth expected by Notebook
        notebook_param_1 = context["dag_run"].conf["param_1"] + "abcd"
        notebook_param_2 = context["dag_run"].conf["param_2"] + "efgh"
        return [
            {
                "notebook_path": "some/path/to/a/notebook",
                "base_parameters": {"some_param": notebook_param_1, "some_other_param": notebook_param_2},
            }
        ]

    DatabricksSubmitRunOperator.partial(
        task_id="my_notebook_task",
        new_cluster={
            "cluster_name": "single-node-cluster",
            "spark_version": "7.6.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 0,
            "spark_conf": {
                "spark.databricks.cluster.profile": "singleNode",
                "spark.master": "[*, 4]",
            },
            "custom_tags": {"ResourceClass": "SingleNode"},
        },
        libraries=[],
        databricks_retry_limit=3,
        timeout_seconds=86400,
        polling_period_seconds=20,
    ).expand(notebook_task=from_dag_params_to_notebook_params())
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29746 | https://github.com/apache/airflow/pull/29840 | c95184e8bc0f974ea8d2d51cbe3ca67e5f4516ac | c405ecb63e352c7a29dd39f6f249ba121bae7413 | "2023-02-24T15:50:14Z" | python | "2023-03-07T15:03:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,733 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "airflow/providers/databricks/provider.yaml", "docs/apache-airflow-providers-databricks/operators/jobs_create.rst", "tests/providers/databricks/hooks/test_databricks.py", "tests/providers/databricks/operators/test_databricks.py", "tests/system/providers/databricks/example_databricks.py"] | Databricks create/reset then run-now | ### Description
Allow an Airflow DAG to define a Databricks job with the `api/2.1/jobs/create` (or `api/2.1/jobs/reset`) endpoint and then run that same job with the `api/2.1/jobs/run-now` endpoint. This would give similar capabilities to the DatabricksSubmitRun operator, but the `api/2.1/jobs/create` endpoint supports additional parameters that `api/2.1/jobs/runs/submit` doesn't (e.g. `job_clusters`, `email_notifications`, etc.).
### Use case/motivation
Create and run a Databricks job entirely from the Airflow DAG. Currently, the DatabricksSubmitRun operator uses the `api/2.1/jobs/runs/submit` endpoint, which doesn't support all features and creates runs that aren't tied to a job in the Databricks UI. Also, the DatabricksRunNow operator requires you to define the job either directly in the Databricks UI or through a separate CI/CD pipeline, causing the headache of having to change code in multiple places.
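To make the proposal concrete, roughly this flow, shown here with plain `requests` (host, token and payload handling are placeholders; no such operator exists yet):
```python
import requests


def create_and_run(host: str, token: str, job_spec: dict) -> dict:
    headers = {"Authorization": f"Bearer {token}"}
    # jobs/create accepts the full job schema (job_clusters, email_notifications, ...)
    created = requests.post(f"{host}/api/2.1/jobs/create", json=job_spec, headers=headers)
    created.raise_for_status()
    job_id = created.json()["job_id"]
    # then trigger the job, like DatabricksRunNowOperator does today
    run = requests.post(f"{host}/api/2.1/jobs/run-now", json={"job_id": job_id}, headers=headers)
    run.raise_for_status()
    return run.json()
```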
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29733 | https://github.com/apache/airflow/pull/35156 | da2fdbb7609f7c0e8dd1d1fd9efaec31bb937fe8 | a8784e3c352aafec697d3778eafcbbd455b7ba1d | "2023-02-23T21:01:27Z" | python | "2023-10-27T18:52:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,712 | ["airflow/providers/amazon/aws/hooks/emr.py", "tests/providers/amazon/aws/hooks/test_emr.py"] | EMRHook.get_cluster_id_by_name() doesn't use pagination | ### Apache Airflow version
2.5.1
### What happened
When using EMRHook.get_cluster_id_by_name or any operator that depends on it (e.g. EMRAddStepsOperator), if the results of the ListClusters API call are paginated (e.g. if your account has more than 50 clusters in the current region) and the desired cluster is on the 2nd page of results, None will be returned instead of the cluster ID.
### What you think should happen instead
Boto's pagination API should be used and the cluster ID should be returned.
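Something along these lines, using boto3's standard paginator (sketch; the filtering is simplified):
```python
def get_cluster_id_by_name(emr_client, cluster_name: str, cluster_states: list[str]):
    # walk every page of ListClusters instead of only the first one
    paginator = emr_client.get_paginator("list_clusters")
    for page in paginator.paginate(ClusterStates=cluster_states):
        for cluster in page["Clusters"]:
            if cluster["Name"] == cluster_name:
                return cluster["Id"]
    return None
```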
### How to reproduce
Use `EmrAddStepsOperator` with the `job_flow_name` parameter on an `aws_conn_id` with more than 50 EMR clusters in the current region.
### Operating System
Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.2.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29712 | https://github.com/apache/airflow/pull/29732 | 607068f4f0d259b638743db5b101660da1b43d11 | 9662fd8cc05f69f51ca94b495b14f907aed0d936 | "2023-02-23T00:39:37Z" | python | "2023-05-01T18:45:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,687 | ["airflow/models/renderedtifields.py"] | Deadlock when airflow try to update 'k8s_pod_yaml' in 'rendered_task_instance_fields' table | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
**Airflow 2.4.2**
We ran into a problem where HttpSensor fails because of a deadlock. We are running 3 different DAGs with 12 max_active_runs each; the sensors call an API and check the response to decide whether to reschedule or go to the next task. All these sensors have a 1-minute poke interval, so 36 of them are running at the same time. Sometimes (roughly once in 20 runs) we get the following deadlock error:
`Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context cursor, statement, parameters, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute cursor.execute(statement, parameters) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute res = self._query(query) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query db.query(q) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/connections.py", line 254, in query _mysql.connection.query(self, query) MySQLdb.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1457, in _run_raw_task self._execute_task_with_callbacks(context, test_mode) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1579, in _execute_task_with_callbacks RenderedTaskInstanceFields.write(rtif) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper return func(*args, session=session, **kwargs) File "/usr/local/lib/python3.7/contextlib.py", line 119, in __exit__ next(self.gen) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 36, in create_session session.commit() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1428, in commit self._transaction.commit(_to_root=self.future) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 829, in commit self._prepare_impl() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl self.session.flush() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3345, in flush self._flush(objects) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush transaction.rollback(_capture_exception=True) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__ with_traceback=exc_tb, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush flush_context.execute() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute rec.execute(self) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 633, in execute uow, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 241, in save_obj update, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1001, in _emit_update_statements statement, multiparams, execution_options=execution_options File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20 return meth(self, args_10style, kwargs_10style, execution_options) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 326, in _execute_on_connection self, multiparams, 
params, execution_options File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1491, in _execute_clauseelement cache_hit=cache_hit, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context e, statement, parameters, cursor, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2027, in _handle_dbapi_exception sqlalchemy_exception, with_traceback=exc_info[2], from_=e File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context cursor, statement, parameters, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute cursor.execute(statement, parameters) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute res = self._query(query) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query db.query(q) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/connections.py", line 254, in query _mysql.connection.query(self, query) sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') [SQL: UPDATE rendered_task_instance_fields SET k8s_pod_yaml=%s WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND rendered_task_instance_fields.run_id = %s AND rendered_task_instance_fields.map_index = %s] [parameters: ('{"metadata": {"annotations": {"dag_id": "bidder-joiner", "task_id": "capitest", "try_number": "1", "run_id": "scheduled__2023-02-15T14:15:00+00:00"}, ... (511 characters truncated) ... e": "AIRFLOW_IS_K8S_EXECUTOR_POD", "value": "True"}], "image": "artifactorymaster.outbrain.com:5005/datainfra/airflow:8cbd2a3d8c", "name": "base"}]}}', 'bidder-joiner', 'capitest', 'scheduled__2023-02-15T14:15:00+00:00', -1)] (Background on this error at: https://sqlalche.me/e/14/e3q8)
`
`Failed to execute job 3966 for task capitest ((MySQLdb.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') [SQL: UPDATE rendered_task_instance_fields SET k8s_pod_yaml=%s WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND rendered_task_instance_fields.run_id = %s AND rendered_task_instance_fields.map_index = %s] [parameters: ('{"metadata": {"annotations": {"dag_id": "bidder-joiner", "task_id": "capitest", "try_number": "1", "run_id": "scheduled__2023-02-15T14:15:00+00:00"}, ... (511 characters truncated) ... e": "AIRFLOW_IS_K8S_EXECUTOR_POD", "value": "True"}], "image": "artifactorymaster.outbrain.com:5005/datainfra/airflow:8cbd2a3d8c", "name": "base"}]}}', 'bidder-joiner', 'capitest', 'scheduled__2023-02-15T14:15:00+00:00', -1)] (Background on this error at: https://sqlalche.me/e/14/e3q8); 68)
`
I checked the MySQL logs and the deadlock is caused by this query:
```
DELETE FROM rendered_task_instance_fields WHERE rendered_task_instance_fields.dag_id = 'bidder-joiner-raw_data_2nd_pass_delay' AND rendered_task_instance_fields.task_id = 'is_data_ready' AND ((rendered_task_instance_fields.dag_id, rendered_task_instance_fields.task_id, rendered_task_instance_fields.run_id) NOT IN (SELECT subq2.dag_id, subq2.task_id, subq2.run_id
FROM (SELECT subq1.dag_id AS dag_id, subq1.task_id AS task_id, subq1.run_id AS run_id
FROM (SELECT DISTINCT rendered_task_instance_fields.dag_id AS dag_id, rendered_task_instance_fields.task_id AS task_id, rendered_task_instance_fields.run_id AS run_id, dag_run.execution_date AS execution_date
FROM rendered_task_instance_fields INNER JOIN dag_run ON rendered_task_instance_fields.dag_id = dag_run.dag_id AND rendered_task_instance_fields.run_id = dag_run.run_id
WHERE rendered_task_instance_fields.dag_id = 'bidder-joiner-raw_data
```
### What you think should happen instead
I found a similar issue open on GitHub (https://github.com/apache/airflow/issues/25765), so I think it should be resolved in the same way - by adding the `@retry_db_transaction` annotation to the function that executes this query.
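Roughly like this (the function below is a simplified stand-in for the actual delete logic in `RenderedTaskInstanceFields`, not the real implementation):
```python
from airflow.utils.retries import retry_db_transaction


@retry_db_transaction
def _purge_stale_rendered_fields(dag_id: str, task_id: str, session) -> None:
    # stand-in for the DELETE ... NOT IN (SELECT ...) statement shown in the
    # MySQL log above; on a 1213 deadlock the decorator retries the call
    ...
```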
### How to reproduce
Create 3 DAGs with 12 max_active_runs that use HttpSensor at the same time, with the same poke interval and mode="reschedule".
### Operating System
Ubuntu 20
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql>=1.2.0
mysql-connector-python>=8.0.11
mysqlclient>=1.3.6
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-http==4.0.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-apache-spark==3.0.0
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29687 | https://github.com/apache/airflow/pull/32341 | e53320d62030a53c6ffe896434bcf0fc85803f31 | c8a3c112a7bae345d37bb8b90d68c8d6ff2ef8fc | "2023-02-22T09:00:28Z" | python | "2023-07-05T11:28:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,671 | ["tests/providers/openlineage/extractors/test_default_extractor.py"] | Adapt OpenLineage default extractor to properly accept all OL implementation | ### Body
Adapt the default extractor to accept any valid type returned from an Operator's `get_openlineage_facets_*` method.
This needs to ensure compatibility with operators built with external extractors for the current openlineage-airflow integration.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29671 | https://github.com/apache/airflow/pull/31381 | 89bed231db4807826441930661d79520250f3075 | 4e73e47d546bf3fd230f93056d01e12f92274433 | "2023-02-21T18:43:14Z" | python | "2023-06-13T19:09:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,666 | ["airflow/providers/hashicorp/_internal_client/vault_client.py", "airflow/providers/hashicorp/secrets/vault.py", "tests/providers/hashicorp/_internal_client/test_vault_client.py", "tests/providers/hashicorp/secrets/test_vault.py"] | Multiple Mount Points for Hashicorp Vault Back-end | ### Description
Support mounting to multiple namespaces with the Hashicorp Vault Secrets Back-end
### Use case/motivation
As a data engineer I wish to utilize secrets stored in multiple mount paths (to support connecting to multiple namespaces) without having to mount to a higher up namespace.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29666 | https://github.com/apache/airflow/pull/29734 | d0783744fcae40b0b6b2e208a555ea5fd9124dfb | dff425bc3d92697bb447010aa9f3b56519a59f1e | "2023-02-21T16:44:08Z" | python | "2023-02-24T09:48:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,663 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/stats.py", "tests/core/test_stats.py"] | Option to Disable High Cardinality Metrics on Statsd | ### Description
With recent PRs enabling tags support on Statsd metrics, we gained a deeper understanding of the issue of publishing high-cardinality metrics. Through this issue, I hope to facilitate the discussion around categorizing the metric cardinality of Airflow-specific events and tags, and to find a way to disable high-cardinality metrics and include it in the 2.6.0 release.
In the world of Observability & Metrics, cardinality is broadly defined as the following:
`number of unique metric names * number of unique application tag pairs`
This means that events with an _unbounded_ number of tag pairs (key-value pairs of tags), as well as events with an _unbounded_ number of unique metric names, will incur expensive storage requirements on the metrics backend.
Let's take a look at the following metric:
`local_task_job.task_exit.<job_id>.<dag_id>.<task_id>.<return_code>`
Here, we have 4 different variable/tag-like attributes embedded into the metric name that I think we can categorize into 3 levels of cardinality.
1. High cardinality / Unbounded metric
2. Medium cardinality / semi-bounded metric
3. Low cardinality / categorically-bounded metric
### High Cardinality / Unbounded Metric
Example tag: <job_id>
This category of metrics is strictly unbounded and incorporates a monotonically increasing attribute like <job_id> or <run_id>. To demonstrate just how explosive the growth of these metrics can be, let's take an example. In an Airflow instance with 1000 daily jobs and a metric retention period of 10 days, we are increasing the cardinality of our metrics by 10,000 on just one single metric, just by adding this tag alone. If we add this tag to a few other metrics, that could easily result in an explosion of metric cardinality. As a benchmark, [DataDog's Enterprise-level pricing plan only includes 200 custom metrics per host](https://www.datadoghq.com/pricing/), and anything beyond that needs to be added at a premium. These metrics should be avoided at all costs.
### Medium Cardinality / semi-bounded metric
Example tag: <dag_id>, <task_id>
This category of metrics is semi-bounded. They are not bounded by a pre-defined category of enums, but they are bounded by the number of DAGs or tasks there are within an Airflow infrastructure. This means that although these metrics can lead to increasing levels of cardinality in an Airflow cluster with a growing number of DAGs, cardinality will still be temporally bounded, i.e. a given cluster will maintain its level of cardinality over time.
### Low Cardinality / categorically-bounded metric
Example tag: <return_code>
This category of metrics is strictly bounded by a category of enums. <return_code> and <task_state> are good examples of attributes with low cardinality. Ideally, we would only want to publish metrics with this level of cardinality.
Using the above definition of high cardinality, I've identified the following metrics as examples that fall under this criterion.
https://github.com/apache/airflow/blob/main/airflow/jobs/local_task_job.py#L292
https://github.com/apache/airflow/blob/main/airflow/dag_processing/processor.py#L444
https://github.com/apache/airflow/blob/main/airflow/jobs/scheduler_job.py#L691
https://github.com/apache/airflow/blob/main/airflow/jobs/scheduler_job.py#L1584
https://github.com/apache/airflow/blob/main/airflow/models/dag.py#L1331
https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1258
https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1577
https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1847
I would like to propose that we provide the option to disable 'unbounded metrics' with the 2.6.0 release. In order to ensure backward compatibility, we could keep the default behavior of publishing all metrics, but implement a single Boolean flag to disable these high-cardinality metrics.
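To sketch the idea (illustrative only; the flag name and the pattern list are assumptions, and this is not how the Stats layer is implemented today):
```python
import re

# example of an unbounded metric name discussed above:
# local_task_job.task_exit.<job_id>.<dag_id>.<task_id>.<return_code>
UNBOUNDED_METRIC_PATTERNS = [re.compile(r"^local_task_job\.task_exit\.")]


def should_emit(metric_name: str, disable_high_cardinality_metrics: bool) -> bool:
    if not disable_high_cardinality_metrics:
        return True  # backward-compatible default: publish everything
    return not any(p.match(metric_name) for p in UNBOUNDED_METRIC_PATTERNS)
```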
### Use case/motivation
_No response_
### Related issues
https://github.com/apache/airflow/pull/28961
https://github.com/apache/airflow/pull/29093
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29663 | https://github.com/apache/airflow/pull/29881 | 464ab1b7caa78637975008fcbb049d5b52a8b005 | 86cd79ffa76d4e4d4abe3fe829d7797852a713a5 | "2023-02-21T16:12:58Z" | python | "2023-03-06T06:20:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,662 | ["airflow/www/decorators.py"] | Audit Log is unclear when using Azure AD login | ### Apache Airflow version
2.5.1
### What happened
We're using an Azure OAUTH based login in our Airflow implementation, and everything works great. This is more of a visual problem than an actual bug.
In the audit logs, the `owner` key is mapped to the username, which in most cases is `airflow`. But in situations where we manually pause or enable a DAG, it is mapped to our generated username, which doesn't really tell one who it is unless they look up that string in the users list. Example:

It would be nice if it were possible to include the user's first and last name alongside the username. I could probably give this one a go myself, if I could get a hint on where to look.
I've found the dag_audit_log.html template, but I'm not sure where to change log.owner.
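Something like this is what I had in mind, assuming a lookup through Flask-AppBuilder's security manager is acceptable (sketch only; where it would actually plug into the view or template is exactly what I'm unsure about):
```python
from flask import current_app


def display_owner(username: str) -> str:
    # resolve the stored audit-log username to a friendlier display string
    user = current_app.appbuilder.sm.find_user(username=username)
    if user:
        return f"{username} ({user.first_name} {user.last_name})"
    return username
```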
### What you think should happen instead
It would be good to get a representation such as username (FirstName LastName).
### How to reproduce
N/A
### Operating System
Debian GNU/Linux 11
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed with Helm chart v1.7.0, and Azure OAUTH for login.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29662 | https://github.com/apache/airflow/pull/30185 | 0b3b6704cb12a3b8f22da79d80b3db85528418b7 | a03f6ccb153f9b95f624d5bc3346f315ca3f0211 | "2023-02-21T15:10:30Z" | python | "2023-05-17T20:15:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,621 | ["chart/templates/dags-persistent-volume-claim.yaml", "chart/values.yaml"] | Fix adding annotations for dag persistence PVC | ### Official Helm Chart version
1.8.0 (latest released)
### Apache Airflow version
2.5.0
### Kubernetes Version
v1.25.4
### Helm Chart configuration
The dags persistence section doesn't have a default value for annotations and the usage looks like:
```
annotations:
{{- if .Values.dags.persistence.annotations}}
{{- toYaml .Values.dags.persistence.annotations | nindent 4 }}
{{- end }}
```
### Docker Image customizations
_No response_
### What happened
As per the review comments here: https://github.com/apache/airflow/pull/29270#pullrequestreview-1304890651, with this design, upgrades might suffer. Fix it to be `helm upgrade`-friendly.
### What you think should happen instead
The design should be written in a helm-upgrade-friendly way; refer to this suggestion: https://github.com/apache/airflow/pull/29270#pullrequestreview-1304890651
### How to reproduce
-
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29621 | https://github.com/apache/airflow/pull/29622 | 5835b08e8bc3e11f4f98745266d10bbae510b258 | 901774718c5d7ff7f5ddc6f916701d281bb60a4b | "2023-02-20T03:20:25Z" | python | "2023-02-20T22:58:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,585 | ["airflow/providers/docker/decorators/docker.py", "tests/providers/docker/decorators/test_docker.py"] | template_fields not working in the decorator `task.docker` | ### Apache Airflow Provider(s)
docker
### Versions of Apache Airflow Providers
apache-airflow-providers-docker 3.4.0
### Apache Airflow version
2.5.1
### Operating System
Linux
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
The templated fields are not working under `task.docker`
```python
@task.docker(image="python:3.9-slim-bullseye", container_name='python_{{macros.datetime.now() | ts_nodash}}', multiple_outputs=True)
def transform(order_data_dict: dict):
    """
    #### Transform task
    A simple Transform task which takes in the collection of order data and
    computes the total order value.
    """
    total_order_value = 0
    for value in order_data_dict.values():
        total_order_value += value
    return {"total_order_value": total_order_value}
```
This throws an error because `container_name` is left un-templated:
`Bad Request ("Invalid container name (python_{macros.datetime.now() | ts_nodash}), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed")`
### What you think should happen instead
All these fields should work with docker operator:
https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html
```
template_fields: Sequence[str]= ('image', 'command', 'environment', 'env_file', 'container_name')
```
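As a stop-gap, the classic `DockerOperator` can be used where the templating is needed, since `container_name` is in its `template_fields` listed above (sketch; image and command are placeholders):
```python
from airflow.providers.docker.operators.docker import DockerOperator

transform = DockerOperator(
    task_id="transform",
    image="python:3.9-slim-bullseye",
    # same expression as in the decorator example; rendered by the classic operator
    container_name="python_{{ macros.datetime.now() | ts_nodash }}",
    command=["python", "-c", "print('transform step')"],
)
```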
### How to reproduce
with the example above
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29585 | https://github.com/apache/airflow/pull/29586 | 792416d4ad495f1e5562e6170f73f4d8f1fa2eff | 7bd87e75def1855d8f5b91e9ab1ffbbf416709ec | "2023-02-17T09:32:11Z" | python | "2023-02-17T17:51:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,578 | ["airflow/jobs/scheduler_job_runner.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst", "newsfragments/30374.significant.rst"] | scheduler.tasks.running metric is always 0 | ### Apache Airflow version
2.5.1
### What happened
I'd expect the `scheduler.tasks.running` metric to represent the number of running tasks, but it is always zero. It appears that #10956 broke this when it removed [the line that increments `num_tasks_in_executor`](https://github.com/apache/airflow/pull/10956/files#diff-bde85feb359b12bdd358aed4106ef4fccbd8fa9915e16b9abb7502912a1c1ab3L1363). Right now that variable is set to 0, never incremented, and then emitted as a gauge.
### What you think should happen instead
`scheduler.tasks.running` should either represent the number of tasks running or be removed altogether.
### How to reproduce
_No response_
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29578 | https://github.com/apache/airflow/pull/30374 | d8af20f064b8d8abc9da1f560b2d7e1ac7dd1cc1 | cce9b2217b86a88daaea25766d0724862577cc6c | "2023-02-16T17:59:47Z" | python | "2023-04-13T11:04:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,538 | ["airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/marketing_platform/hooks/campaign_manager.py", "airflow/providers/google/marketing_platform/operators/campaign_manager.py", "airflow/providers/google/marketing_platform/sensors/campaign_manager.py", "airflow/providers/google/provider.yaml", "docs/apache-airflow-providers-google/operators/marketing_platform/campaign_manager.rst", "tests/providers/google/marketing_platform/hooks/test_campaign_manager.py", "tests/providers/google/marketing_platform/operators/test_campaign_manager.py", "tests/providers/google/marketing_platform/sensors/test_campaign_manager.py", "tests/system/providers/google/marketing_platform/example_campaign_manager.py"] | GoogleCampaignManagerReportSensor not working correctly on API Version V4 | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hello,
My organization has been running Airflow 2.3.4 and we have run into a problem with the Google Campaign Manager Report Sensor. The purpose of this sensor is to check whether a report has finished processing and is ready to be downloaded. If we use API version v3.5 it works flawlessly. Unfortunately, if we use API version v4, the sensor malfunctions: it always succeeds regardless of whether the report is ready to download or not. This causes the job to fail downstream, because it then tries to download a file that is not ready.
At first this doesn't seem like a big problem, since we could just keep using v3.5. However, Google announced that they are going to only let you use API version v4 starting in a week. Is there a way we can get this resolved 😭?
Thanks!
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
linux?
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29538 | https://github.com/apache/airflow/pull/30598 | 5b42aa3b8d0ec069683e22c2cb3b8e8e6e5fee1c | da2749cae56d6e0da322695b3286acd9393052c8 | "2023-02-14T15:13:33Z" | python | "2023-04-15T13:34:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,537 | ["airflow/cli/commands/config_command.py", "tests/cli/commands/test_config_command.py"] | Docker image fails to start if celery config section is not defined | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Airflow `2.3.4`
We removed from `airflow.cfg` any config values we did not explicitly set. This was to make future upgrades less involved, as we would only need to compare the configuration values we explicitly set, rather than all permutations of versions.
e.g. we set `AIRFLOW__CELERY__BROKER_URL` as an environment variable - we do not set this in `airflow.cfg`, so we removed the `[celery]` section from the Airflow configuration.
We set `AIRFLOW__CORE__EXECUTOR=CeleryExecutor`, so we are using the Celery executor.
Upon starting the Airflow scheduler, it exited with code `1`, and this message:
```
The section [celery] is not found in config.
```
Upon adding back in an empty
```
[celery]
```
section to `airflow.cfg`, this error went away. I have verified that it still picks up `AIRFLOW__CELERY__BROKER_URL` correctly.
### What you think should happen instead
I'd expect Airflow to take defaults as listed [here](https://airflow.apache.org/docs/apache-airflow/2.3.4/howto/set-config.html); I wouldn't expect the presence or absence of configuration sections to cause errors.
### How to reproduce
1. Set up a docker image for the Airflow `scheduler` with `apache/airflow:slim-2.3.4-python3.10` and the following configuration in `airflow.cfg` - with no `[celery]` section:
```
[core]
# The executor class that airflow should use. Choices include
# ``SequentialExecutor``, ``LocalExecutor``, ``CeleryExecutor``, ``DaskExecutor``,
# ``KubernetesExecutor``, ``CeleryKubernetesExecutor`` or the
# full import path to the class when using a custom executor.
executor = CeleryExecutor
[logging]
[metrics]
[secrets]
[cli]
[debug]
[api]
[lineage]
[atlas]
[operators]
[hive]
[webserver]
[email]
[smtp]
[sentry]
[celery_kubernetes_executor]
[celery_broker_transport_options]
[dask]
[scheduler]
[triggerer]
[kerberos]
[github_enterprise]
[elasticsearch]
[elasticsearch_configs]
[kubernetes]
[smart_sensor]
```
2. Run the `scheduler` command, also setting `AIRFLOW__CELERY__BROKER_URL` to point to a Celery redis broker.
3. Observe that the scheduler exits.
### Operating System
Ubuntu 20.04.5 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
AWS ECS
Docker `apache/airflow:slim-2.3.4-python3.10`
Separate:
- Webserver
- Triggerer
- Scheduler
- Celery worker
- Celery flower
services
### Anything else
This seems to occur due to this `get-value` check in the Airflow image entrypoint: https://github.com/apache/airflow/blob/28126c12fbdd2cac84e0fbcf2212154085aa5ed9/scripts/docker/entrypoint_prod.sh#L203-L212
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29537 | https://github.com/apache/airflow/pull/29541 | 84b13e067f7b0c71086a42957bb5cf1d6dc86d1d | 06d45f0f2c8a71c211e22cf3792cc873f770e692 | "2023-02-14T14:58:55Z" | python | "2023-02-15T01:41:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,532 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/dag.py", "airflow/models/dagwarning.py"] | AIP-44 Migrate DagWarning.purge_inactive_dag_warnings to Internal API | Used in https://github.com/mhenc/airflow/blob/master/airflow/dag_processing/manager.py#L613
should be straightforward | https://github.com/apache/airflow/issues/29532 | https://github.com/apache/airflow/pull/29534 | 289ae47f43674ae10b6a9948665a59274826e2a5 | 50b30e5b92808e91ad9b6b05189f560d58dd8152 | "2023-02-14T13:13:04Z" | python | "2023-02-15T00:13:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,531 | ["airflow/ti_deps/deps/prev_dagrun_dep.py", "tests/ti_deps/deps/test_prev_dagrun_dep.py"] | Dynamic task mapping does not always create mapped tasks | ### Apache Airflow version
2.5.1
### What happened
Same problem as https://github.com/apache/airflow/issues/28296, but seems to happen nondeterministically, and still happens when ignoring `depends_on_past=True`.
I've got a task that retrieves some filenames, which then creates dynamically mapped tasks to move the files, one per task.
I'm using a similar task across multiple DAGs. However, task mapping fails on some DAG runs: it inconsistently happens per DAG run, and some DAGs do not seem to be affected at all. These seem to be the DAGs where no task was ever mapped, so that the mapped task instance ended up in a Skipped state.
What happens is that multiple files will be found, but only a single dynamically mapped task will be created. This task never starts and has map_index of -1. It can be found under the "List instances, all runs" menu, but says "No Data found." under the "Mapped Tasks" tab.


When I press the "Run" button when the mapped task is selected, the following error appears:
```
Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped!
```
The previous task _has_ run however. No errors appeared in my Airflow logs.
When I try to run the task with **Ignore All Deps** enabled, I get the error:
```
Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped!
```
This last bit is a contradiction: the task cannot be mapped and not mapped simultaneously.
If the number of mapped tasks is 0 while in this erroneous state, the mapped tasks will not be marked as skipped, as would be expected.
### What you think should happen instead
The mapped tasks should not get stuck with "no status".
The mapped tasks should be created and run successfully, or in the case of a 0-length list output of the upstream task they should be skipped.
### How to reproduce
Run the DAG below; if it runs successfully, clear several tasks out of order. This may not immediately reproduce the bug, but after some task clearing, for me it always ends up in the faulty state described above.
```
from airflow import DAG
from airflow.decorators import task
import datetime as dt
from airflow.operators.python import PythonOperator
import random


@task
def get_filenames_kwargs():
    return [
        {"file_name": i}
        for i in range(random.randint(0, 2))
    ]


def print_filename(file_name):
    print(file_name)


with DAG(
    dag_id="dtm_test_2",
    start_date=dt.datetime(2023, 2, 10),
    default_args={
        "owner": "airflow",
        "depends_on_past": True,
    },
    schedule="@daily",
) as dag:
    get_filenames_task = get_filenames_kwargs.override(task_id="get_filenames_task")()

    print_filename_task = PythonOperator.partial(
        task_id="print_filename_task",
        python_callable=print_filename,
    ).expand(op_kwargs=get_filenames_task)
```
### Operating System
Amazon Linux v2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29531 | https://github.com/apache/airflow/pull/32397 | 685328e3572043fba6db432edcaacf8d06cf88d0 | 73bc49adb17957e5bb8dee357c04534c6b41f9dd | "2023-02-14T12:47:12Z" | python | "2023-07-23T23:53:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,515 | ["airflow/www/templates/airflow/task.html"] | Hide non-used docs attributes from Task Instance Detail | ### Description
Inside a BashOperator, I added a markdown snippet of documentation for the "Task Instance Details" of my Airflow nodes.
Now I can see my markdown, defined by the attribute "doc_md", but also
Attribute: bash_command
Attribute: doc
Attribute: doc_json
Attribute: doc_rst
Attribute: doc_yaml
I think it would look better if only the chosen doc type were shown in the Task Instance Details, instead of listing the names of the other attributes with nothing next to them.

### Use case/motivation
I would like to see only the type of doc attribute that I chose to add in my task instance details, and to hide all the other doc types.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29515 | https://github.com/apache/airflow/pull/29545 | 655ffb835eb4c5343c3f2b4d37b352248f2768ef | f2f6099c5a2f3613dce0cc434a95a9479d748cf5 | "2023-02-13T22:10:31Z" | python | "2023-02-16T14:17:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,435 | ["airflow/decorators/base.py", "tests/decorators/test_python.py"] | TaskFlow API `multiple_outputs` inferral causes import errors when using TYPE_CHECKING | ### Apache Airflow version
2.5.1
### What happened
When using the TaskFlow API, I generally like to keep the good practice of adding type annotations to the TaskFlow functions so others reading the DAG and task code have better context around inputs/outputs, keeping imports solely used for typing behind `typing.TYPE_CHECKING`, and utilizing PEP 563 for postponed evaluation of annotations. Unfortunately, when using ~PEP 563 _and_ `TYPE_CHECKING`~ just TYPE_CHECKING, DAG import errors occur with a "NameError: <name> is not defined." exception.
### What you think should happen instead
Users should be free to use ~PEP 563 and~ `TYPE_CHECKING` when using the TaskFlow API and not hit DAG import errors along the way.
### How to reproduce
Using a straightforward use case of transforming a DataFrame, let's assume this toy example:
```py
from __future__ import annotations

from typing import TYPE_CHECKING, Any

from pendulum import datetime

from airflow.decorators import dag, task

if TYPE_CHECKING:
    from pandas import DataFrame


@dag(start_date=datetime(2023, 1, 1), schedule=None)
def multiple_outputs():
    @task()
    def transform(df: DataFrame) -> dict[str, Any]:
        ...

    transform()


multiple_outputs()
```
Add this DAG to your DAGS_FOLDER and the following import error should be observed:
<img width="641" alt="image" src="https://user-images.githubusercontent.com/48934154/217713685-ec29d5cc-4a48-4049-8dfa-56cbd76cddc3.png">
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.2.0
apache-airflow-providers-apache-hive==5.1.1
apache-airflow-providers-apache-livy==3.2.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==5.1.1
apache-airflow-providers-common-sql==1.3.3
apache-airflow-providers-databricks==4.0.0
apache-airflow-providers-dbt-cloud==2.3.1
apache-airflow-providers-elasticsearch==4.3.3
apache-airflow-providers-ftp==3.3.0
apache-airflow-providers-google==8.8.0
apache-airflow-providers-http==4.1.1
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.1.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sftp==4.2.1
apache-airflow-providers-snowflake==4.0.2
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.4.0
astronomer-providers==1.14.0
### Deployment
Astronomer
### Deployment details
OOTB local Airflow install with LocalExecutor built with the Astro CLI.
### Anything else
- This behavior/error was not observed using Airflow 2.4.3.
- As a workaround, `multiple_outputs` can be explicitly set on the TaskFlow function to skip the inferral; a sketch of this is shown below.
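A sketch of that workaround applied to the toy example above:
```python
from __future__ import annotations

from typing import TYPE_CHECKING, Any

from airflow.decorators import task

if TYPE_CHECKING:
    from pandas import DataFrame


@task(multiple_outputs=True)  # explicit value, so no inference from the return annotation
def transform(df: DataFrame) -> dict[str, Any]:
    ...
```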
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29435 | https://github.com/apache/airflow/pull/29445 | f9e9d23457cba5d3e18b5bdb7b65ecc63735b65b | b1306065054b98a63c6d3ab17c84d42c2d52809a | "2023-02-09T03:55:48Z" | python | "2023-02-12T07:45:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,432 | ["airflow/models/mappedoperator.py", "tests/models/test_mappedoperator.py", "tests/test_utils/mock_operators.py"] | Jinja templating doesn't work with container_resources when using dymanic task mapping with Kubernetes Pod Operator | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Google Cloud Composer Version - 2.1.5
Airflow Version - 2.4.3
We are trying to use dynamic task mapping with the Kubernetes Pod Operator. Our use case is to return the pod's CPU and memory requirements from a function which is included as a macro in the DAG.
Without dynamic task mapping it works perfectly, but when used with dynamic task mapping, it is unable to recognize the macro.
container_resources is a templated field as per the [docs](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/operators/kubernetes_pod/index.html#airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator); the feature was introduced in this [PR](https://github.com/apache/airflow/pull/27457).
We also tried toggling the boolean `render_template_as_native_obj`, but still no luck.
A trimmed version of our DAG is provided below to help reproduce the issue (the functions returning CPU and memory are trivial here, just to show the example).
### What you think should happen instead
It should work the same with or without dynamic task mapping.
### How to reproduce
Deployed the following DAG in Google Cloud Composer.
```
import datetime
import os

from airflow import models
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)
from kubernetes.client import models as k8s_models

dvt_image = os.environ.get("DVT_IMAGE")

default_dag_args = {"start_date": datetime.datetime(2022, 1, 1)}


def pod_mem():
    return "4000M"


def pod_cpu():
    return "1000m"


with models.DAG(
    "sample_dag",
    schedule_interval=None,
    default_args=default_dag_args,
    render_template_as_native_obj=True,
    user_defined_macros={
        "pod_mem": pod_mem,
        "pod_cpu": pod_cpu,
    },
) as dag:
    task_1 = KubernetesPodOperator(
        task_id="task_1",
        name="task_1",
        namespace="default",
        image=dvt_image,
        cmds=["bash", "-cx"],
        arguments=["echo hello"],
        service_account_name="sa-k8s",
        container_resources=k8s_models.V1ResourceRequirements(
            limits={
                "memory": "{{ pod_mem() }}",
                "cpu": "{{ pod_cpu() }}",
            }
        ),
        startup_timeout_seconds=1800,
        get_logs=True,
        image_pull_policy="Always",
        config_file="/home/airflow/composer_kube_config",
        dag=dag,
    )

    task_2 = KubernetesPodOperator.partial(
        task_id="task_2",
        name="task_2",
        namespace="default",
        image=dvt_image,
        cmds=["bash", "-cx"],
        service_account_name="sa-k8s",
        container_resources=k8s_models.V1ResourceRequirements(
            limits={
                "memory": "{{ pod_mem() }}",
                "cpu": "{{ pod_cpu() }}",
            }
        ),
        startup_timeout_seconds=1800,
        get_logs=True,
        image_pull_policy="Always",
        config_file="/home/airflow/composer_kube_config",
        dag=dag,
    ).expand(arguments=[["echo hello"]])

    task_1 >> task_2
```
task_1 (without dynamic task mapping) completes successfully, while task_2 (with dynamic task mapping) fails.
Looking at the error logs, it failed while rendering the Pod spec since the calls to pod_cpu() and pod_mem() are unresolved.
Here is the traceback:
Exception when attempting to create Namespaced Pod: { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": {}, "labels": { "dag_id": "sample_dag", "task_id": "task_2", "run_id": "manual__2023-02-08T183926.890852Z-eee90e4ee", "kubernetes_pod_operator": "True", "map_index": "0", "try_number": "2", "airflow_version": "2.4.3-composer", "airflow_kpo_in_cluster": "False" }, "name": "task-2-46f76eb0432d42ae9a331a6fc53835b3", "namespace": "default" }, "spec": { "affinity": {}, "containers": [ { "args": [ "echo hello" ], "command": [ "bash", "-cx" ], "env": [], "envFrom": [], "image": "us.gcr.io/ams-e2e-testing/edw-dvt-tool", "imagePullPolicy": "Always", "name": "base", "ports": [], "resources": { "limits": { "memory": "{{ pod_mem() }}", "cpu": "{{ pod_cpu() }}" } }, "volumeMounts": [] } ], "hostNetwork": false, "imagePullSecrets": [], "initContainers": [], "nodeSelector": {}, "restartPolicy": "Never", "securityContext": {}, "serviceAccountName": "sa-k8s", "tolerations": [], "volumes": [] } }
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 143, in run_pod_async
resp = self._client.create_namespaced_pod(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 7356, in create_namespaced_pod
return self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) # noqa: E501
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 7455, in create_namespaced_pod_with_http_info
return self.api_client.call_api(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 391, in request
return self.rest_client.POST(url,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 275, in POST
return self.request("POST", url,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 234, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '1ef20c0b-6980-4173-b9cc-9af5b4792e86', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '1b263a21-4c75-4ef8-8147-c18780a13f0e', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3cd4cda4-908c-4944-a422-5512b0fb88d6', 'Date': 'Wed, 08 Feb 2023 18:45:23 GMT', 'Content-Length': '256'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod in version \"v1\" cannot be handled as a Pod: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'","reason":"BadRequest","code":400}
### Operating System
Google Composer Kubernetes Cluster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29432 | https://github.com/apache/airflow/pull/29451 | 43443eb539058b7b4756455f76b0e883186d9250 | 5eefd47771a19dca838c8cce40a4bc5c555e5371 | "2023-02-08T19:01:33Z" | python | "2023-02-13T08:48:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,428 | ["pyproject.toml"] | Require newer version of pypi/setuptools to remove security scan issue (CVE-2022-40897) | ### Description
Hi. My team is evaluating Airflow, so I ran a security scan on it. It is flagging a medium-severity security issue with pypi/setuptools. See https://nvd.nist.gov/vuln/detail/CVE-2022-40897 for details. Is it possible to require a more recent version? Or perhaps Airflow users are not vulnerable to this?
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29428 | https://github.com/apache/airflow/pull/29465 | 9c6f83bb6f3e3b57ae0abbe9eb0582fcde265702 | 41dff9875bce4800495c9132b10a6c8bff900a7c | "2023-02-08T15:11:54Z" | python | "2023-02-11T16:03:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,422 | ["airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py", "tests/providers/amazon/aws/transfers/test_dynamodb_to_s3.py"] | Multiple AWS connections support in DynamoDBToS3Operator | ### Description
I want to add support for a separate AWS connection for DynamoDB in `DynamoDBToS3Operator` in `apache-airflow-providers-amazon`, via an `aws_dynamodb_conn_id` constructor argument.
### Use case/motivation
Sometimes DynamoDB tables and S3 buckets live in different AWS accounts, so to access both resources you need to assume a role in another account from one of them.
That role can be specified in an AWS connection, so we need to support two of them in this operator.
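A minimal sketch of how the proposed argument could be used (the `aws_dynamodb_conn_id` name comes from this issue and does not exist in the provider yet; the table and bucket names are placeholders):

```python
from airflow.providers.amazon.aws.transfers.dynamodb_to_s3 import DynamoDBToS3Operator

# Hypothetical usage of the proposed argument: DynamoDB is read with one
# connection (and its assumed role) while S3 is written with another.
backup_table = DynamoDBToS3Operator(
    task_id="backup_table",
    dynamodb_table_name="my_table",               # placeholder
    s3_bucket_name="my-backup-bucket",            # placeholder
    file_size=1000,
    aws_conn_id="aws_s3_account",                 # connection for the S3 side
    aws_dynamodb_conn_id="aws_dynamodb_account",  # proposed new argument
)
```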
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29422 | https://github.com/apache/airflow/pull/29452 | 8691c4f98c6cd6d96e87737158a9be0f6a04b9ad | 3780b01fc46385809423bec9ef858be5be64b703 | "2023-02-08T08:58:26Z" | python | "2023-03-09T22:02:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,405 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts"] | Add pagination to get_log in the rest API | ### Description
Right now, the `get_log` endpoint at `/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/logs/{task_try_number}` does not have any pagination and therefore we can be forced to load extremely large text blocks, which makes everything slow. (see the workaround fix we needed to do in the UI: https://github.com/apache/airflow/pull/29390)
In `task_log_reader`, we do have `log_pos` and `offset` (see [here](https://github.com/apache/airflow/blob/main/airflow/utils/log/log_reader.py#L80-L83)). It would be great to expose those parameters in the REST API in order to break apart task instance logs into more manageable pieces.
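As a rough illustration, a paginated client call could look like the sketch below; the `offset` query parameter is hypothetical and does not exist on the endpoint today, which is exactly what this issue proposes to add (`full_content` is an existing parameter, and the credentials are placeholders):

```python
import requests

base = "http://localhost:8080/api/v1"
url = f"{base}/dags/my_dag/dagRuns/my_run/taskInstances/my_task/logs/1"

# Hypothetical pagination: fetch the log in chunks instead of one huge block.
resp = requests.get(
    url,
    params={"full_content": False, "offset": 0},  # "offset" is the proposed addition
    auth=("user", "pass"),                        # placeholder credentials
)
print(resp.status_code, len(resp.text))
```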
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29405 | https://github.com/apache/airflow/pull/30729 | 7d02277ae13b7d1e6cea9e6c8ff0d411100daf77 | 7d62cbb97e1bc225f09e3cfac440aa422087a8a7 | "2023-02-07T16:10:57Z" | python | "2023-04-22T20:49:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,396 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | BigQuery Hook list_rows method missing page_token return value | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==7.0.0
But the problem exists in all newer versions.
### Apache Airflow version
apache-airflow==2.3.2
### Operating System
Ubuntu 20.04.4 LTS
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
The `list_rows` method in the BigQuery Hook does not return the page_token value, which is necessary for paginating query results. The same problem exists with the `get_datasets_list` method.
The documentation for the `get_datasets_list` method even states that the page_token parameter can be accessed:
```
:param page_token: Token representing a cursor into the datasets. If not passed,
the API will return the first page of datasets. The token marks the beginning of the
iterator to be returned and the value of the ``page_token`` can be accessed at
``next_page_token`` of the :class:`~google.api_core.page_iterator.HTTPIterator`.
```
but it doesn't return HTTPIterator. Instead, it converts the `HTTPIterator` to `list[DatasetListItem]` using `list(datasets)`, making it impossible to retrieve the original `HTTPIterator` and thus impossible to obtain the `next_page_token`.
### What you think should happen instead
The `list_rows` / `get_datasets_list` methods should return an `Iterator`, OR both the list of rows/datasets and the page_token value, to allow users to retrieve multiple result pages. For backward compatibility, we can have a parameter like `return_iterator=True` or something like that.
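A rough sketch of that backward-compatible option, written against the underlying `google-cloud-bigquery` client rather than the hook itself (the `return_iterator` flag is only illustrative, not an existing provider parameter):

```python
from google.cloud import bigquery

def list_rows_sketch(client: bigquery.Client, table_id: str,
                     max_results=None, page_token=None,
                     return_iterator: bool = False):
    """Illustrative only: keep the old list return type unless asked otherwise."""
    iterator = client.list_rows(table_id, max_results=max_results, page_token=page_token)
    if return_iterator:
        # The caller can consume iterator.pages and read iterator.next_page_token.
        return iterator
    return list(iterator)
```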
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29396 | https://github.com/apache/airflow/pull/30543 | d9896fd96eb91a684a512a86924a801db53eb945 | 4703f9a0e589557f5176a6f466ae83fe52644cf6 | "2023-02-07T02:26:41Z" | python | "2023-04-08T17:01:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,393 | ["airflow/providers/amazon/aws/log/s3_task_handler.py", "tests/providers/amazon/aws/log/test_s3_task_handler.py"] | S3TaskHandler continuously returns "*** Falling back to local log" even if log_pos is provided when log not in s3 | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.1.0
### Apache Airflow version
2.5.1
### Operating System
Ubuntu 18.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When looking at logs in the UI for a running task when using remote s3 logging, the logs for the task are only uploaded to s3 after the task has completed. The `S3TaskHandler` falls back to the local logs stored on the worker in that case (by falling back to the `FileTaskHandler` behavior) and prepends the line `*** Falling back to local log` to those logs.
This is mostly fine, but for the new log streaming behavior, this means that `*** Falling back to local log` is returned from `/get_logs_with_metadata` on each call, even if there are no new logs.
### What you think should happen instead
I'd expect the falling back message only to be included in calls with no `log_pos` in the metadata or with a `log_pos` of `0`.
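A minimal sketch of that behaviour (where exactly the check would live in the handler is an assumption, not the actual `S3TaskHandler` code):

```python
from typing import Optional

def build_fallback_log(remote_log_exists: bool, local_log: str,
                       metadata: Optional[dict]) -> str:
    """Sketch: only announce the fallback on the first chunk of a streamed read."""
    prefix = ""
    if not remote_log_exists and not (metadata or {}).get("log_pos", 0):
        prefix = "*** Falling back to local log\n"
    return prefix + local_log
```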
### How to reproduce
Start a task with `logging.remote_logging` set to `True` and `logging.remote_base_log_folder` set to `s3://something` and watch the logs while the task is running. You'll see `*** Falling back to local log` printed every few seconds.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29393 | https://github.com/apache/airflow/pull/29708 | 13098d5c35cf056c3ef08ea98a1970ee1a3e76f8 | 5e006d743d1ba3781acd8e053642f2367a8e7edc | "2023-02-06T20:33:08Z" | python | "2023-02-23T21:25:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,358 | ["airflow/models/baseoperator.py", "airflow/models/dag.py", "airflow/models/param.py"] | Cannot use TypedDict object when defining params | ### Apache Airflow version
2.5.1
### What happened
Context: I am attempting to use [TypedDict](https://docs.python.org/3/library/typing.html#typing.TypedDict) objects to maintain the keys used in DAG params in a single place, and check for key names across multiple DAGs that use the params.
This raises an error with `mypy` as `params` expects an `Optional[Dict]`. Due to the invariance of `Dict`, this does not accept `TypedDict` objects.
What happened: I passed a `TypedDict` to the `params` arg of `DAG` and got a TypeError.
### What you think should happen instead
`TypedDict` objects should be accepted by `DAG`, which should accept `Optional[Mapping[str, Any]]`.
Unless I'm mistaken, `params` are converted to a `ParamsDict` class and therefore the appropriate type hint is a generic `Mapping` type.
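For illustration, the suggested annotation change would look roughly like this (a standalone sketch, not the real `DAG.__init__` signature):

```python
from typing import Any, Mapping, Optional

def dag_init_sketch(params: Optional[Mapping[str, Any]] = None) -> None:
    # Because Mapping is read-only, a TypedDict instance is assignable to
    # Mapping[str, Any], unlike the mutable/invariant Dict[str, Any], while the
    # existing conversion to ParamsDict would keep working unchanged.
    ...
```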
### How to reproduce
Steps to reproduce
```Python
from typing import TypedDict
from airflow import DAG
from airflow.models import Param
class ParamsTypedDict(TypedDict):
str_param: Param
params: ParamsTypedDict = {
"str_param": Param("", type="str")
}
with DAG(
dag_id="mypy-error-dag",
# The line below raises a mypy error
# Argument "params" to "DAG" has incompatible type "ParamsTypedDict"; expected "Optional[Dict[Any, Any]]" [arg-type]
params=params,
) as dag:
pass
```
### Operating System
Amazon Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29358 | https://github.com/apache/airflow/pull/29782 | b6392ae5fd466fa06ca92c061a0f93272e27a26b | b069df9b0a792beca66b08d873a66d5640ddadb7 | "2023-02-03T14:40:04Z" | python | "2023-03-07T21:25:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,329 | ["airflow/example_dags/example_setup_teardown.py", "airflow/models/abstractoperator.py", "airflow/models/dag.py", "tests/models/test_dag.py", "tests/models/test_dagrun.py"] | Automatically clear setup/teardown when clearing a dependent task | null | https://github.com/apache/airflow/issues/29329 | https://github.com/apache/airflow/pull/30271 | f4c4b7748655cd11d2c297de38563b2e6b840221 | 0c2778f348f61f3bf08b840676d681e93a60f54a | "2023-02-02T15:44:26Z" | python | "2023-06-21T13:34:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,325 | ["airflow/providers/cncf/kubernetes/python_kubernetes_script.py", "airflow/utils/decorators.py", "tests/decorators/test_external_python.py", "tests/decorators/test_python_virtualenv.py", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py", "tests/providers/docker/decorators/test_docker.py"] | Ensure setup/teardown work on a previously decorated function (eg task.docker) | null | https://github.com/apache/airflow/issues/29325 | https://github.com/apache/airflow/pull/30216 | 3022e2ecbb647bfa0c93fbcd589d0d7431541052 | df49ad179bddcdb098b3eccbf9bb6361cfbafc36 | "2023-02-02T15:43:06Z" | python | "2023-03-24T17:01:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,323 | ["airflow/models/serialized_dag.py", "tests/models/test_serialized_dag.py"] | DAG dependencies graph not updating when deleting a DAG | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
On Airflow 2.4.2, the DAG dependencies graph shows deleted DAGs that used to have dependencies on currently existing DAGs.
### What you think should happen instead
Deleted DAGs should not appear on DAG Dependencies
### How to reproduce
Create a DAG with a dependency on another DAG, like a wait sensor.
Remove the new DAG.
### Operating System
apache/airflow:2.4.2-python3.10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29323 | https://github.com/apache/airflow/pull/29407 | 18347d36e67894604436f3ef47d273532683b473 | 02a2efeae409bddcfedafe273fffc353595815cc | "2023-02-02T15:22:37Z" | python | "2023-02-13T19:25:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,322 | ["airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_utils.py"] | DAG list, sorting lost when switching page | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hi, I'm currently on Airflow 2.4.2
On /home, when sorting by DAG/Owner/Next Run and going to the next page, the sort resets.
Sorting this way only works for finding the first or last entries; everything in the middle is unreachable.
### What you think should happen instead
The sorting should continue over the pagination
### How to reproduce
Sort by any sortable field on DagList and go to the next page
### Operating System
apache/airflow:2.4.2-python3.10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29322 | https://github.com/apache/airflow/pull/29756 | c917c9de3db125cac1beb0a58ac81f56830fb9a5 | c8cd90fa92c1597300dbbad4366c2bef49ef6390 | "2023-02-02T15:19:51Z" | python | "2023-03-02T14:59:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,267 | ["airflow/example_dags/example_python_decorator.py", "airflow/example_dags/example_python_operator.py", "airflow/example_dags/example_short_circuit_operator.py", "docs/apache-airflow/howto/operator/python.rst", "docs/conf.py", "docs/sphinx_design/static/custom.css", "setup.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Support tabs in docs | ### What do you see as an issue?
I suggest supporting tabs in the docs to improve the readability when demonstrating different ways to achieve the same things.
**Motivation**
We have multiple ways to achieve the same thing in Airflow, for example:
- TaskFlow API & "classic" operators
- CLI & REST API & API client
However, our docs currently do not consistently demonstrate different ways to use Airflow. For example, https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html demonstrates TaskFlow operators in some examples and classic operators in other examples. All cases covered can be supported by both the TaskFlow & classic operators.
In the case of https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html, I think a nice solution to demonstrate both approaches would be to use tabs. That way somebody who prefers the TaskFlow API can view all TaskFlow examples, and somebody who prefers the classic operators (we should give those a better name) can view only those examples.
**Possible implementation**
There is a package [sphinx-tabs](https://github.com/executablebooks/sphinx-tabs) for this. For the example above, having https://sphinx-tabs.readthedocs.io/en/latest/#group-tabs would be great because it enables you to view all examples of one "style" with a single click.
### Solving the problem
Install https://github.com/executablebooks/sphinx-tabs with the docs.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29267 | https://github.com/apache/airflow/pull/36041 | f60d458dc08a5d5fbe5903fffca8f7b03009f49a | 58e264c83fed1ca42486302600288230b944ab06 | "2023-01-31T14:23:42Z" | python | "2023-12-06T08:44:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,250 | ["airflow/providers/databricks/hooks/databricks.py", "tests/providers/databricks/hooks/test_databricks.py"] | Repair functionality in DatabricksRunNowOperator | ### Description
The Databricks jobs 2.1 API has the ability to repair failed or skipped tasks in a Databricks workflow without having to rerun successful tasks for a given workflow run. It would be nice to be able to leverage this functionality via airflow operators.
### Use case/motivation
The primary motivation is the ability to be more efficient and only have to rerun failed or skipped tasks rather than the entire workflow if only 1 out of 10 tasks fail.
**Repair run API:**
https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsRepairfail
@alexott for visibility
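Until a dedicated operator argument exists, one hedged workaround is to call the repair endpoint directly from a task; the endpoint path and the `run_id`/`rerun_tasks` fields come from the linked Jobs 2.1 docs, while the wrapper function itself is just a sketch:

```python
from typing import List

import requests

def repair_databricks_run(host: str, token: str, run_id: int, tasks: List[str]) -> dict:
    """Sketch: ask Databricks to repair only the failed/skipped tasks of a run."""
    resp = requests.post(
        f"https://{host}/api/2.1/jobs/runs/repair",
        headers={"Authorization": f"Bearer {token}"},
        json={"run_id": run_id, "rerun_tasks": tasks},
    )
    resp.raise_for_status()
    return resp.json()  # contains a repair_id on success
```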
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29250 | https://github.com/apache/airflow/pull/30786 | 424fc17d49afd4175826a62aa4fe7aa7c5772143 | 9bebf85e24e352f9194da2f98e2bc66a5e6b972e | "2023-01-30T21:24:49Z" | python | "2023-04-22T21:21:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,227 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Calendar page doesn't load when using a timedelta DAG schedule | ### Apache Airflow version
2.5.1
### What happened
The /calendar page shows a problem; here is the capture:

### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04.1 LTS
### Versions of Apache Airflow Providers
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
### Deployment
Other
### Deployment details
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29227 | https://github.com/apache/airflow/pull/29454 | 28126c12fbdd2cac84e0fbcf2212154085aa5ed9 | f837c0105c85d777ea18c88a9578eeeeac5f57db | "2023-01-30T01:32:44Z" | python | "2023-02-14T17:06:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,209 | ["airflow/providers/google/cloud/operators/bigquery_dts.py", "tests/providers/google/cloud/operators/test_bigquery_dts.py"] | BigQueryCreateDataTransferOperator will log AWS credentials when transferring from S3 | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
[apache-airflow-providers-google 8.6.0](https://airflow.apache.org/docs/apache-airflow-providers-google/8.6.0/)
### Apache Airflow version
2.5.0
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When creating a transfer config that will move data from AWS S3, an access_key_id and secret_access_key are provided (see: https://cloud.google.com/bigquery/docs/s3-transfer).
These parameters are logged and exposed as XCom return_value.
### What you think should happen instead
At least the secret_access_key should be hidden or removed from the XCom return value
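One possible approach (purely illustrative, not the operator's actual code) is to scrub the sensitive params before the transfer config is logged or pushed to XCom:

```python
SENSITIVE_PARAMS = {"secret_access_key", "access_key_id"}

def redact_transfer_config(transfer_config: dict) -> dict:
    """Sketch: mask AWS credentials before the config is returned as an XCom value."""
    redacted = dict(transfer_config)
    params = dict(redacted.get("params", {}))
    for key in SENSITIVE_PARAMS & params.keys():
        params[key] = "***"
    redacted["params"] = params
    return redacted
```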
### How to reproduce
```
PROJECT_ID=123
TRANSFER_CONFIG={
"destination_dataset_id": destination_dataset,
"display_name": display_name,
"data_source_id": "amazon_s3",
"schedule_options": {"disable_auto_scheduling": True},
"params": {
"destination_table_name_template": destination_table,
"file_format": "PARQUET",
"data_path": data_path,
"access_key_id": access_key_id,
"secret_access_key": secret_access_key
}
}
gcp_bigquery_create_transfer = BigQueryCreateDataTransferOperator(
transfer_config=TRANSFER_CONFIG,
project_id=PROJECT_ID,
task_id="gcp_bigquery_create_transfer",
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29209 | https://github.com/apache/airflow/pull/29348 | 3dbcf99d20d47cde0debdd5faf9bd9b2ebde1718 | f51742d20b2e53bcd90a19db21e4e12d2a287677 | "2023-01-28T19:58:00Z" | python | "2023-02-20T23:06:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,199 | ["airflow/models/xcom_arg.py", "tests/decorators/test_python.py"] | TaskFlow AirflowSkipException causes downstream step to fail when multiple_outputs is true | ### Apache Airflow version
2.5.1
### What happened
Most of our code is based on TaskFlow API and we have many tasks that raise AirflowSkipException (or BranchPythonOperator) on purpose to skip the next downstream task (with trigger_rule = none_failed_min_one_success).
And these tasks are expecting a multiple output XCom result (local_file_path, file sizes, records count) from previous tasks and it's causing this error:
`airflow.exceptions.XComNotFound: XComArg result from copy_from_data_lake_to_local_file at outbound_dag_AIR2070 with key="local_file_path" is not found!`
### What you think should happen instead
Considering the trigger rule "none_failed_min_one_success", we expect that an upstream task should be allowed to skip and that downstream tasks will still run without raising any errors caused by missing XCom results.
### How to reproduce
This is an approximate example DAG based on an existing one.
```python
from os import path
import pendulum
from airflow import DAG
from airflow.decorators import task
from airflow.operators.python import BranchPythonOperator
DAG_ID = "testing_dag_AIR"
# PGP_OPERATION = None
PGP_OPERATION = "decrypt"
LOCAL_FILE_PATH = "/temp/example/example.csv"
with DAG(
dag_id=DAG_ID,
schedule='0 7-18 * * *',
start_date=pendulum.datetime(2022, 12, 15, 7, 0, 0),
) as dag:
@task(multiple_outputs=True, trigger_rule='none_failed_min_one_success')
def copy_from_local_file_to_data_lake(local_file_path: str, dest_dir_path: str):
destination_file_path = path.join(dest_dir_path, path.basename(local_file_path))
return {
"destination_file_path": destination_file_path,
"file_size": 100
}
@task(multiple_outputs=True, trigger_rule='none_failed_min_one_success')
def copy_from_data_lake_to_local_file(data_lake_file_path, local_dir_path):
local_file_path = path.join(local_dir_path, path.basename(data_lake_file_path))
return {
"local_file_path": local_file_path,
"file_size": 100
}
@task(multiple_outputs=True, task_id='get_pgp_file_info', trigger_rule='none_failed_min_one_success')
def get_pgp_file_info(file_path, operation):
import uuid
import os
src_file_name = os.path.basename(file_path)
src_file_dir = os.path.dirname(file_path)
run_id = str(uuid.uuid4())
if operation == "decrypt":
wait_pattern = f'*{src_file_name}'
else:
wait_pattern = f'*{src_file_name}.pgp'
target_path = 'datalake/target'
return {
'src_file_path': file_path,
'src_file_dir': src_file_dir,
'target_path': target_path,
'pattern': wait_pattern,
'guid': run_id
}
@task(multiple_outputs=True, task_id='return_src_path', trigger_rule='none_failed_min_one_success')
def return_src_path(src_file_path):
return {
'file_path': src_file_path,
'file_size': 100
}
@task(multiple_outputs=True, task_id='choose_result', trigger_rule='none_failed_min_one_success')
def choose_result(src_file_path, src_file_size, decrypt_file_path, decrypt_file_size):
import os
file_path = decrypt_file_path or src_file_path
file_size = decrypt_file_size or src_file_size
local_dir = os.path.dirname(file_path)
return {
'local_dir': local_dir,
'file_path': file_path,
'file_size': file_size,
'file_name': os.path.basename(file_path)
}
def switch_branch_func(pgp_operation):
if pgp_operation in ["decrypt", "encrypt"]:
return 'get_pgp_file_info'
else:
return 'return_src_path'
operation = PGP_OPERATION
local_file_path = LOCAL_FILE_PATH
check_need_to_decrypt = BranchPythonOperator(
task_id='branch_task',
python_callable=switch_branch_func,
op_args=(operation,))
pgp_file_info = get_pgp_file_info(local_file_path, operation)
data_lake_file = copy_from_local_file_to_data_lake(pgp_file_info['src_file_path'], pgp_file_info['target_path'])
decrypt_local_file = copy_from_data_lake_to_local_file(
data_lake_file['destination_file_path'], pgp_file_info['src_file_dir'])
src_result = return_src_path(local_file_path)
result = choose_result(src_result['file_path'], src_result['file_size'],
decrypt_local_file['local_file_path'], decrypt_local_file['file_size'])
check_need_to_decrypt >> [pgp_file_info, src_result]
pgp_file_info >> decrypt_local_file
[decrypt_local_file, src_result] >> result
```
### Operating System
Windows 10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
docker-compose version: 3.7
Note: This also happens when it's deployed to one of our testing environments using official Airflow Helm Chart.
### Anything else
This issue is similar to [#24338](https://github.com/apache/airflow/issues/24338), which was solved by [#25661](https://github.com/apache/airflow/pull/25661), but this case is related to multiple_outputs being set to True.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29199 | https://github.com/apache/airflow/pull/32027 | 14eb1d3116ecef15be7be9a8f9d08757e74f981c | 79eac7687cf7c6bcaa4df2b8735efaad79a7fee2 | "2023-01-27T18:27:43Z" | python | "2023-06-21T09:55:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,198 | ["airflow/providers/snowflake/operators/snowflake.py"] | SnowflakeCheckOperator - The conn_id `None` isn't defined | ### Apache Airflow Provider(s)
snowflake
### Versions of Apache Airflow Providers
`apache-airflow-providers-snowflake==4.0.2`
### Apache Airflow version
2.5.1
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
After upgrading _apache-airflow-providers-snowflake_ from version **3.3.0** to **4.0.2**, SnowflakeCheckOperator tasks start to throw the following error:
```
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 179, in get_db_hook
return self._hook
File "/usr/local/lib/python3.9/functools.py", line 993, in __get__
val = self.func(instance)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 141, in _hook
conn = BaseHook.get_connection(self.conn_id)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/hooks/base.py", line 72, in get_connection
conn = Connection.get_connection_from_secrets(conn_id)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/connection.py", line 435, in get_connection_from_secrets
raise AirflowNotFoundException(f"The conn_id `{conn_id}` isn't defined")
airflow.exceptions.AirflowNotFoundException: The conn_id `None` isn't defined
```
### What you think should happen instead
_No response_
### How to reproduce
- Define a _Snowflake_ Connection with the name **snowflake_default**
- Create a Task similar to this:
```
my_task = SnowflakeCheckOperator(
task_id='my_task',
warehouse='warehouse',
database='database',
schema='schema',
role='role',
sql='select 1 from my_table'
)
```
- Run and check the error.
### Anything else
We can work around this by adding conn_id='snowflake_default' to the SnowflakeCheckOperator.
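For reference, the workaround looks like this; only the explicit `conn_id` argument differs from the snippet in the reproduction steps above:

```python
from airflow.providers.snowflake.operators.snowflake import SnowflakeCheckOperator

my_task = SnowflakeCheckOperator(
    task_id='my_task',
    conn_id='snowflake_default',  # explicit workaround until the default is fixed
    warehouse='warehouse',
    database='database',
    schema='schema',
    role='role',
    sql='select 1 from my_table'
)
```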
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29198 | https://github.com/apache/airflow/pull/29211 | a72e28d6e1bc6ae3185b8b3971ac9de5724006e6 | 9b073119d401594b3575c6f7dc4a14520d8ed1d3 | "2023-01-27T18:24:51Z" | python | "2023-01-29T08:54:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,197 | ["airflow/www/templates/airflow/dag.html"] | Trigger DAG w/config raising error from task detail views | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Version: 2.4.3 (migrated from 2.2.4)
Manual UI option "Trigger DAG w/config" raises an error _400 Bad Request - Invalid datetime: None_ from views "Task Instance Details", "Rendered Template", "Log" and "XCom" . Note that DAG is actually triggered , but still error response 400 is raised.
### What you think should happen instead
No 400 error
### How to reproduce
1. Go to any DAG graph view
2. Select a Task > go to "Instance Details"
3. Select "Trigger DAG w/config"
4. Select Trigger
5. See error
### Operating System
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29197 | https://github.com/apache/airflow/pull/29212 | 9b073119d401594b3575c6f7dc4a14520d8ed1d3 | 7315d6f38caa58e6b19054f3e8a20ed02df16a29 | "2023-01-27T18:07:01Z" | python | "2023-01-29T08:56:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,177 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/http/hooks/http.py", "airflow/providers/http/operators/http.py", "tests/providers/http/hooks/test_http.py"] | SimpleHttpOperator not working with loginless auth_type | ### Apache Airflow Provider(s)
http
### Versions of Apache Airflow Providers
apache-airflow-providers-http==4.1.1
### Apache Airflow version
2.5.0
### Operating System
Ubuntu 20.04.5 LTS (Focal Fossa)"
### Deployment
Virtualenv installation
### Deployment details
Reproduced on a local deployment inside WSL on virtualenv - not related to specific deployment.
### What happened
SimpleHttpOperator supports passing in auth_type. However, [this auth_type is only initialized if a login is provided](https://github.com/astronomer/airflow-provider-sample/blob/main/sample_provider/hooks/sample_hook.py#L64-L65).
In our setup we are using Kerberos authentication. This authentication relies on a Kerberos sidecar with a keytab, not on a user-password pair in the connection string. However, this would also be an issue with any other implementation not relying on a username passed in the connection string.
We were trying to use some other auth providers (`HTTPSPNEGOAuth` from [requests_gssapi](https://pypi.org/project/requests-gssapi/) and `HTTPKerberosAuth` from [requests_kerberos](https://pypi.org/project/requests-kerberos/)). We noticed that requests_kerberos is used in Airflow in some other places for Kerberos support, hence we have settled on the latter.
### What you think should happen instead
A suggestion is to initialize the passed `auth_type` even when no login is present.
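That suggestion amounts to something like the sketch below (a simplified standalone function, not the provider's exact `get_conn` code):

```python
from typing import Any, Callable, Optional

def build_session_auth(login: Optional[str], password: Optional[str],
                       auth_type: Optional[Callable[..., Any]]):
    """Sketch of the suggested logic for choosing the requests auth object."""
    if auth_type and login:
        return auth_type(login, password)
    if auth_type:
        # Loginless schemes such as HTTPKerberosAuth/HTTPSPNEGOAuth rely on
        # ambient credentials (e.g. a keytab), so instantiate with no arguments.
        return auth_type()
    return None
```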
### How to reproduce
A branch demonstrating possible fix:
https://github.com/apache/airflow/commit/7d341f081f0160ed102c06b9719582cb463b538c
### Anything else
The linked branch is a quick-and-dirty solution, but maybe the code could be refactored in another way? Support for **kwargs could also be useful, but I wanted to make as minimal changes as possible.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29177 | https://github.com/apache/airflow/pull/29206 | 013490edc1046808c651c600db8f0436b40f7423 | c44c7e1b481b7c1a0d475265835a23b0f507506c | "2023-01-26T08:28:39Z" | python | "2023-03-20T13:52:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,175 | ["airflow/providers/redis/provider.yaml", "docs/apache-airflow-providers-redis/index.rst", "generated/provider_dependencies.json", "tests/system/providers/redis/__init__.py", "tests/system/providers/redis/example_redis_publish.py"] | Support for Redis Time series in Airflow common packages | ### Description
The current Redis API version is quite old, and I need to implement a DAG for a time-series data feature, so please upgrade to a version that supports this. BTW, I was able to manually update my Redis worker and it now works. Can this be added to the next release, please?
### Use case/motivation
Time series in Redis is a growing area needing support in Airflow.
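As a hedged sketch of what this would enable, a task could use the existing `RedisHook` together with a recent `redis-py`; both the newer client and a server with the RedisTimeSeries module loaded are assumptions here:

```python
from airflow.decorators import task
from airflow.providers.redis.hooks.redis import RedisHook

@task
def push_metric(value: float) -> None:
    # RedisHook.get_conn() returns a redis-py client; .ts() requires redis-py >= 4.
    client = RedisHook(redis_conn_id="redis_default").get_conn()
    client.ts().add("sensor:temperature", "*", value)  # "*" = server-side timestamp
```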
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29175 | https://github.com/apache/airflow/pull/31279 | df3569cf489ce8ef26f5b4d9d9c3826d3daad5f2 | 94cad11b439e0ab102268e9e7221b0ab9d98e0df | "2023-01-26T03:42:51Z" | python | "2023-05-16T13:11:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,128 | ["docs/apache-airflow-providers-ftp/index.rst"] | [Doc] Link to examples how to use FTP provider is incorrect | ### What do you see as an issue?
HI.
I tried to use the FTP provider (https://airflow.apache.org/docs/apache-airflow-providers-ftp/stable/connections/ftp.html#howto-connection-ftp) but the link to the "Example DAGs" is incorrect and GitHub responds with a 404.
### Solving the problem
Please update the links to the Example DAGs here, and check them in the other providers as well.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29128 | https://github.com/apache/airflow/pull/29134 | 33ba242d7eb8661bf936a9b99a8cad4a74b29827 | 1fbfd312d9d7e28e66f6ba5274421a96560fb7ba | "2023-01-24T12:07:45Z" | python | "2023-01-24T19:24:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,125 | ["airflow/models/dag.py", "airflow/models/dagrun.py", "tests/models/test_dag.py", "tests/models/test_dagrun.py"] | Ensure teardown failure with on_failure_fail_dagrun=True fails the DagRun, and not otherwise | null | https://github.com/apache/airflow/issues/29125 | https://github.com/apache/airflow/pull/30398 | fc4166127a1d2099d358fee1ea10662838cf9cf3 | db359ee2375dd7208583aee09b9eae00f1eed1f1 | "2023-01-24T11:08:45Z" | python | "2023-05-08T10:58:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,113 | ["docs/apache-airflow-providers-sqlite/operators.rst"] | sqlite conn id unclear | ### What do you see as an issue?
The sqlite conn doc here https://airflow.apache.org/docs/apache-airflow-providers-sqlite/stable/operators.html is unclear.
SQLite does not use username, password, port, or schema, so these need to be removed from the docs. Furthermore, it is unclear how to construct a connection string for SQLite, since the docs for constructing a connection string here https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html assume that all these fields are given.
### Solving the problem
Remove the unused arguments for SQLite in the connection docs, and make it clearer how to construct a connection to SQLite.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29113 | https://github.com/apache/airflow/pull/29139 | d23033cff8a25e5f71d01cb513c8ec1d21bbf491 | ec7674f111177c41c02e5269ad336253ed9c28b4 | "2023-01-23T17:44:59Z" | python | "2023-05-01T20:34:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,105 | ["airflow/www/static/js/graph.js"] | graph disappears during run time when using branch_task and a dynamic classic operator | ### Apache Airflow version
2.5.1
### What happened
When using a dynamically generated task that gets its expand data from XCom after a branch_task, the graph doesn't render.
It reappears once the dag run is finished.
Tried with both BashOperator and KubernetesPodOperator.
the developer console in the browser shows the error:
```
Uncaught TypeError: Cannot read properties of undefined (reading 'length')
    at z (graph.1c0596dfced26c638bfe.js:2:17499)
    at graph.1c0596dfced26c638bfe.js:2:17654
    at Array.map (<anonymous>)
    at z (graph.1c0596dfced26c638bfe.js:2:17646)
    at graph.1c0596dfced26c638bfe.js:2:26602
    at graph.1c0596dfced26c638bfe.js:2:26655
    at graph.1c0596dfced26c638bfe.js:2:26661
    at graph.1c0596dfced26c638bfe.js:2:222
    at graph.1c0596dfced26c638bfe.js:2:227
z @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
z @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
```
grid view renders fine.
### What you think should happen instead
The graph should be rendered.
### How to reproduce
```python
from airflow.decorators import dag, task, branch_task
from airflow.operators.bash import BashOperator

default_args = {}  # placeholder; the original default_args were not included in the report

@dag('branch_dynamic', schedule_interval=None, default_args=default_args, catchup=False)
def branch_dynamic_flow():
    @branch_task
    def choose_path():
        return 'b'

    @task
    def a():
        print('a')

    @task
    def get_args():
        return ['echo 1', 'echo 2']

    b = BashOperator.partial(task_id="b").expand(bash_command=get_args())
    path = choose_path()
    path >> a()
    path >> b

branch_dynamic_flow()
```
### Operating System
red hat
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes | 5.1.1 | Kubernetes
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29105 | https://github.com/apache/airflow/pull/29042 | b2825e11852890cf0b0f4d0bcaae592311781cdf | 33ba242d7eb8661bf936a9b99a8cad4a74b29827 | "2023-01-23T14:55:28Z" | python | "2023-01-24T15:27:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,100 | ["airflow/www/static/js/dag/details/Dag.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/LogBlock.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "airflow/www/static/js/dag/grid/index.tsx", "airflow/www/static/js/utils/useOffsetHeight.tsx"] | Unnecessary scrollbars in grid view | ### Apache Airflow version
2.5.0
### What happened
Compare the same DAG grid view in 2.4.3: (everything is scrolled using the "main" scrollbar of the window)

and in 2.5.0 (and 2.5.1) (left and right side of the grid have their own scrollbars):

It was much more ergonomic previously when only the main scrollbar was used.
I think the relevant change was in #27560, where `maxHeight={offsetHeight}` was added to some places.
Is this the intended way the grid view should look like or did happen as an accident?
I tried to look around in the developer tools and it seems like removing the `max-height` from this element restores the old look: `div#react-container div div.c-1rr4qq7 div.c-k008qs div.c-19srwsc div.c-scptso div.c-l7cpmp`. Well it does for the left side of the grid view. Similar change has to be done for some other divs also.

### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29100 | https://github.com/apache/airflow/pull/29367 | 1b18a501fe818079e535838fa4f232b03365fc75 | 643d736ebb32c488005b3832c2c3f226a77900b2 | "2023-01-23T07:19:18Z" | python | "2023-02-05T23:15:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,070 | ["airflow/providers/ftp/operators/ftp.py", "airflow/providers/sftp/operators/sftp.py", "tests/providers/ftp/operators/test_ftp.py"] | FTP operator has logic in __init__ | ### Body
Similarly to SFTP (fixed in https://github.com/apache/airflow/pull/29068) the logic from __init__ should be moved to execute.
The #29068 provides a blueprint for that.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29070 | https://github.com/apache/airflow/pull/29073 | 8eb348911f2603feba98787d79b88bbd84bd17be | 2b7071c60022b3c483406839d3c0ef734db5daad | "2023-01-20T19:31:08Z" | python | "2023-01-21T00:29:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,973 | ["airflow/models/xcom_arg.py", "tests/models/test_taskinstance.py"] | Dynamic Task Mapping skips tasks before upstream has started | ### Apache Airflow version
2.5.0
### What happened
In some cases we are seeing dynamically mapped tasks being skipped before their upstream tasks have started and before the dynamic count for the task can be calculated. We see this both locally with the `LocalExecutor` and on our cluster with the `KubernetesExecutor`.
To trigger the issue we need multiple dynamic tasks merging into an upstream task, see the images below for example. If there is no merging, the tasks run as expected. The tasks also must not know the number of dynamic tasks that will be created at DAG start, for example by chaining in another dynamic task's output.


If the DAG, task, or upstream tasks are cleared the skipped task runs as expected.
The issue exists both on airflow 2.4.x & 2.5.0.
Happy to help debug this further & answer any questions!
### What you think should happen instead
The tasks should run after upstream tasks are done.
### How to reproduce
The following code is able to reproduce the issue on our side:
```python
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
from airflow.utils.task_group import TaskGroup
from airflow.operators.empty import EmptyOperator
# Only one chained tasks results in only 1 of the `skipped_tasks` skipping.
# Add in extra tasks results in both `skipped_tasks` skipping, but
# no earlier tasks are ever skipped.
CHAIN_TASKS = 1
@task()
def add(x, y):
return x, y
with DAG(
dag_id="test_skip",
schedule=None,
start_date=datetime(2023, 1, 13),
) as dag:
init = EmptyOperator(task_id="init_task")
final = EmptyOperator(task_id="final")
for i in range(2):
with TaskGroup(f"task_group_{i}") as tg:
chain_task = [i]
for j in range(CHAIN_TASKS):
chain_task = add.partial(x=j).expand(y=chain_task)
skipped_task = (
add.override(task_id="skipped").partial(x=i).expand(y=chain_task)
)
# Task isn't skipped if final (merging task) is removed.
init >> tg >> final
```
### Operating System
MacOS
### Versions of Apache Airflow Providers
This can be reproduced without any extra providers installed.
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28973 | https://github.com/apache/airflow/pull/30641 | 8cfc0f6332c45ca750bc2317ea1e283aaf2ac5bd | 5f2628d36cb8481ee21bd79ac184fd8fdce3e47d | "2023-01-16T14:18:41Z" | python | "2023-04-22T19:00:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,951 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/decorators/test_docker.py", "tests/providers/docker/operators/test_docker.py"] | Add a way to skip Docker Operator task | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow 2.3.3
Raising the `AirflowSkipException` in the source code, using the `DockerOperator`, is supposed to mark the task as skipped, according to the [docs](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html#special-exceptions). However, what happens is that the task is marked as failed with the logs showing `ERROR - Task failed with exception`.
### What you think should happen instead
Tasks should be marked as skipped, not failed.
### How to reproduce
Raise the `AirflowSkipException` in the python source code, while using the `DockerOperator`.
### Operating System
Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-125-generic x86_64)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28951 | https://github.com/apache/airflow/pull/28996 | bc5cecc0db27cb8684c238b36ad12c7217d0c3ca | 3a7bfce6017207218889b66976dbee1ed84292dc | "2023-01-15T11:36:04Z" | python | "2023-01-18T21:04:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,912 | ["docs/apache-airflow/start.rst"] | quick start fails: DagRun for example_bash_operator with run_id or execution_date of '2015-01-01' not found | ### Apache Airflow version
2.5.0
### What happened
I am following the [quick start guide](https://airflow.apache.org/docs/apache-airflow/stable/start.html).
When I execute `airflow tasks run example_bash_operator runme_0 2015-01-01` I got the following error:
```
[2023-01-13 15:50:42,493] {dagbag.py:538} INFO - Filling up the DagBag from /root/airflow/dags
[2023-01-13 15:50:42,761] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): prepare_email>, send_email already registered for DAG: example_dag_decorator
[2023-01-13 15:50:42,761] {taskmixin.py:205} WARNING - Dependency <Task(EmailOperator): send_email>, prepare_email already registered for DAG: example_dag_decorator
[2023-01-13 15:50:42,830] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_entry_group>, delete_entry_group already registered for DAG: example_complex
[2023-01-13 15:50:42,830] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_entry_group>, create_entry_group already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_entry_gcs>, delete_entry already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_entry>, create_entry_gcs already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_tag>, delete_tag already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_tag>, create_tag already registered for DAG: example_complex
[2023-01-13 15:50:42,852] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,852] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,853] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,853] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,854] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,854] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {example_python_operator.py:90} WARNING - The virtalenv_python example task requires virtualenv, please install it.
[2023-01-13 15:50:43,608] {tutorial_taskflow_api_virtualenv.py:29} WARNING - The tutorial_taskflow_api_virtualenv example DAG requires virtualenv, please install it.
/root/miniconda3/lib/python3.7/site-packages/airflow/models/dag.py:3524 RemovedInAirflow3Warning: Param `schedule_interval` is deprecated and will be removed in a future release. Please use `schedule` instead.
Traceback (most recent call last):
File "/root/miniconda3/bin/airflow", line 8, in <module>
sys.exit(main())
File "/root/miniconda3/lib/python3.7/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 384, in task_run
ti, _ = _get_ti(task, args.map_index, exec_date_or_run_id=args.execution_date_or_run_id, pool=args.pool)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 163, in _get_ti
session=session,
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 118, in _get_dag_run
) from None
airflow.exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of '2023-11-01' not found
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 18.04.3 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28912 | https://github.com/apache/airflow/pull/28949 | c57c23dce39992eafcf86dc08a1938d7d407803f | a4f6f3d6fe614457ff95ac803fd15e9f0bd38d27 | "2023-01-13T07:55:02Z" | python | "2023-01-15T21:01:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,891 | ["chart/templates/pgbouncer/pgbouncer-deployment.yaml", "chart/values.schema.json", "chart/values.yaml"] | Pgbouncer metrics exporter restarts | ### Official Helm Chart version
1.6.0
### Apache Airflow version
2.4.2
### Kubernetes Version
1.21
### Helm Chart configuration
Nothing really specific
### Docker Image customizations
_No response_
### What happened
From time to time the pgbouncer metrics exporter fails its healthcheck.
When it fails its healthchecks three times in a row, pgbouncer stops being reachable and drops all the ongoing connections.
Is it possible to make the pgbouncer healthcheck configurable, at least the timeout parameter of one second, which seems really short?
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28891 | https://github.com/apache/airflow/pull/29752 | d0fba865aed1fc21d82f0a61cddb1fa0bd4b7d0a | 44f89c6db115d91aba91955fde42475d1a276628 | "2023-01-12T15:18:28Z" | python | "2023-02-27T18:20:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,888 | ["airflow/www/app.py", "tests/www/views/test_views_base.py"] | `webserver.instance_name` shows markup text in `<title>` tag | ### Apache Airflow version
2.5.0
### What happened
https://github.com/apache/airflow/pull/20888 enables the use of markup to style the `webserver.instance_name`.
However, if the instance name has HTML code, this will also be reflected in the `<title>` tag, as shown in the screenshot below.

This is not pretty behaviour.
### What you think should happen instead
Ideally, if `webserver.instance_name_has_markup = True`, then the text inside the `<title>` should be stripped of HTML code.
For example:
- Set `webserver.instance_name` to some text with markup, like `<b style="color: red">title</b>`
- Set `webserver.instance_name_has_markup` to `true`
This is how the `<title>` tag should look:
```html
<title>DAGs - title</title>
```
Instead of:
```
<title>DAGs - <b style="color: red">title</b></title>
```
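A minimal sketch of the stripping step (regex-based and purely illustrative of the idea, not how the webserver should necessarily implement it):

```python
import re

def strip_markup(instance_name: str) -> str:
    """Remove tags so only plain text ends up inside the <title> element."""
    return re.sub(r"<[^>]+>", "", instance_name)

# strip_markup('<b style="color: red">title</b>') -> 'title'
```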
### How to reproduce
- Airflow version 2.3+, which is [when this change was introduced](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#instance-name-has-markup)
- Set `webserver.instance_name` to some text with markup, like `<b style="color: red">title</b>`
- Set `webserver.instance_name_has_markup` to `true`
### Operating System
Doesn't matter
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28888 | https://github.com/apache/airflow/pull/28894 | 696b91fafe4a557f179098e0609eb9d9dcb73f72 | 971e3226dc3ca43900f0b79c42afffb14c59d691 | "2023-01-12T14:32:55Z" | python | "2023-03-16T11:34:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,884 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | Azure Blob storage exposes crendentials in UI | ### Apache Airflow version
Other Airflow 2 version (please specify below)
2.3.3
### What happened
Azure Blob Storage exposes credentials in the UI
<img width="1249" alt="Screenshot 2023-01-12 at 14 00 05" src="https://user-images.githubusercontent.com/35199552/212072943-adca75c4-2226-4251-9446-e8f18fb22081.png">
### What you think should happen instead
_No response_
### How to reproduce
Create an Azure Blob storage connection. then click on the edit button on the connection.
### Operating System
debain
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28884 | https://github.com/apache/airflow/pull/28914 | 6f4544cfbdfa3cabb3faaeea60a651206cd84e67 | 3decb189f786781bb0dfb3420a508a4a2a22bd8b | "2023-01-12T13:01:24Z" | python | "2023-01-13T15:02:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,847 | ["airflow/www/static/js/callModal.js", "airflow/www/templates/airflow/dag.html", "airflow/www/views.py"] | Graph UI: Add Filter Downstream & Filter DownStream & Upstream | ### Description
Currently Airflow has a `Filter Upstream` View/option inside the graph view. (As documented [here](https://docs.astronomer.io/learn/airflow-ui#graph-view) under `Filter Upstream`)
<img width="682" alt="image" src="https://user-images.githubusercontent.com/9246654/211711759-670a1180-7f90-4ecd-84b0-2f3b290ff477.png">
It would be great if there were also the options
1. `Filter Downstream` &
2. `Filter Downstream & Upstream`
### Use case/motivation
Sometimes it is useful to view downstream tasks, as well as both downstream & upstream tasks, when reviewing DAGs. This feature would make it as easy to view those as it is to view upstream tasks today.
### Related issues
I found nothing with a quick search
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28847 | https://github.com/apache/airflow/pull/29226 | 624520db47f736af820b4bc834a5080111adfc96 | a8b2de9205dd805ee42cf6b0e15e7e2805752abb | "2023-01-11T03:35:33Z" | python | "2023-02-03T15:04:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,830 | ["airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py", "airflow/providers/amazon/aws/waiters/README.md", "airflow/providers/amazon/aws/waiters/dynamodb.json", "docs/apache-airflow-providers-amazon/transfer/dynamodb_to_s3.rst", "tests/providers/amazon/aws/transfers/test_dynamodb_to_s3.py", "tests/providers/amazon/aws/waiters/test_custom_waiters.py", "tests/system/providers/amazon/aws/example_dynamodb_to_s3.py"] | Export DynamoDB table to S3 with PITR | ### Description
Airflow provides the Amazon DynamoDB to Amazon S3 transfer operator below.
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/transfer/dynamodb_to_s3.html
Most data engineers build their "export DynamoDB data to S3" pipelines using an export "within the point in time recovery window".
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Client.export_table_to_point_in_time
I would appreciate it if Airflow offered this as a native function.
### Use case/motivation
My daily batch job exports its data with the PITR option. All of the tasks are written with apache-airflow-providers-amazon except the "export_table_to_point_in_time" task.
The "export_table_to_point_in_time" task currently only uses a plain PythonOperator, and I expect I could unify it under the apache-airflow-providers-amazon library as well; the current workaround is sketched below.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28830 | https://github.com/apache/airflow/pull/31142 | 71c26276bcd3ddd5377d620e6b8baef30b72eaa0 | cd3fa33e82922e01888d609ed9c24b9c2dadfa27 | "2023-01-10T13:44:29Z" | python | "2023-05-09T23:56:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,825 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Bad request when triggering dag run with `note` in payload | ### Apache Airflow version
2.5.0
### What happened
Specifying a `note` in the payload (as mentioned [in the doc](https://airflow.apache.org/docs/apache-airflow/2.5.0/stable-rest-api-ref.html#operation/post_dag_run)) when triggering a new dag run yields a 400 Bad Request.
(Git Version: .release:2.5.0+fa2bec042995004f45b914dd1d66b466ccced410)
### What you think should happen instead
As far as I understand the documentation, I should be able to set a note for this dag run, but that is not the case.
### How to reproduce
This is a local airflow, using default credentials and default setup when following [this guide](https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#)
DAG:
<details>
```
import airflow
from airflow import DAG
import logging
from airflow.operators.python import PythonOperator
from airflow.operators.dummy import DummyOperator
from datetime import timedelta
logger = logging.getLogger("airflow.task")
default_args = {
"owner": "airflow",
"depends_on_past": False,
"retries": 0,
"retry_delay": timedelta(minutes=5),
}
def log_body(**context):
logger.info(f"Body: {context['dag_run'].conf}")
with DAG(
"my-validator",
default_args=default_args,
schedule_interval=None,
start_date=airflow.utils.dates.days_ago(0),
catchup=False
) as dag:
(
PythonOperator(
task_id="abcde",
python_callable=log_body,
provide_context=True
)
>> DummyOperator(
task_id="todo"
)
)
```
</details>
Request:
<details>
```
curl --location --request POST '0.0.0.0:8080/api/v1/dags/my-validator/dagRuns' \
--header 'Authorization: Basic YWlyZmxvdzphaXJmbG93' \
--header 'Content-Type: application/json' \
--data-raw '{
"conf": {
"key":"value"
},
"note": "test"
}'
```
</details>
Response:
<details>
```
{
"detail": "{'note': ['Unknown field.']}",
"status": 400,
"title": "Bad Request",
"type": "https://airflow.apache.org/docs/apache-airflow/2.5.0/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
</details>
Removing the `note` key, returns 200... with a null `note`!
<details>
```
{
"conf": {
"key": "value"
},
"dag_id": "my-validator",
"dag_run_id": "manual__2023-01-10T10:45:26.102802+00:00",
"data_interval_end": "2023-01-10T10:45:26.102802+00:00",
"data_interval_start": "2023-01-10T10:45:26.102802+00:00",
"end_date": null,
"execution_date": "2023-01-10T10:45:26.102802+00:00",
"external_trigger": true,
"last_scheduling_decision": null,
"logical_date": "2023-01-10T10:45:26.102802+00:00",
"note": null,
"run_type": "manual",
"start_date": null,
"state": "queued"
}
```
</details>
### Operating System
Ubuntu 20.04.5 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
Everytime.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28825 | https://github.com/apache/airflow/pull/29228 | e626131563efb536f325a35c78585b74d4482ea3 | b94f36bf563f5c8372086cec63b74eadef638ef8 | "2023-01-10T10:53:02Z" | python | "2023-02-01T19:37:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,803 | ["airflow/datasets/manager.py", "airflow/jobs/scheduler_job.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | statsd metric for dataset count | ### Description
A count of datasets that are currently registered/declared in an Airflow deployment.
### Use case/motivation
Would be nice to see how deployments are adopting datasets.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28803 | https://github.com/apache/airflow/pull/28907 | 5d84b59554c93fd22e92b46a1061b40b899a8dec | 7689592c244111b24bc52e7428c5a3bb80a4c2d6 | "2023-01-09T14:51:24Z" | python | "2023-01-18T09:35:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,789 | ["airflow/cli/cli_parser.py", "setup.cfg"] | Add colors in help outputs of Airfow CLI commands | ### Body
Folowing up after https://github.com/apache/airflow/pull/22613#issuecomment-1374530689 - seems that there is a new [rich-argparse](https://github.com/hamdanal/rich-argparse) project that might give us the option without rewriting Airflow's argument parsing to click (click has a number of possible performance issues that might impact airlfow's speed of CLI command parsing)
Seems this might be rather easy thing to do (just adding the formatter class for argparse).
Would be nice if someone implements it and tests (also for performance of CLI).
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28789 | https://github.com/apache/airflow/pull/29116 | c310fb9255ba458b2842315f65f59758b76df9d5 | fdac67b3a5350ab4af79fd98612592511ca5f3fc | "2023-01-07T23:05:56Z" | python | "2023-02-08T11:04:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,772 | ["airflow/utils/json.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_utils.py"] | DAG Run List UI Breaks when a non-JSON serializable value is added to dag_run.conf | ### Apache Airflow version
2.5.0
### What happened
When accessing `dag_run.conf` via a task's context, I was able to add a value that is non-JSON serializable. When I tried to access the Dag Run List UI (`/dagrun/list/`) or the Dag's Grid View, I was met with these error messages respectively:
**Dag Run List UI**
```
Ooops!
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find
a solution to your problem.
Consider following these steps:
* gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
* find similar issues using:
* [GitHub Discussions](https://github.com/apache/airflow/discussions)
* [GitHub Issues](https://github.com/apache/airflow/issues)
* [Stack Overflow](https://stackoverflow.com/questions/tagged/airflow)
* the usual search engine you use on a daily basis
* if you run Airflow on a Managed Service, consider opening an issue using the service support channels
* if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a [bug report](https://github.com/apache/airflow/issues/new/choose).
Make sure however, to include all relevant details and results of your investigation so far.
```
**Grid View**
```
Auto-refresh Error
<!DOCTYPE html> <html lang="en"> <head> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"> </head> <body> <div class="container"> <h1> Ooops! </h1> <div> <pre> Something bad has happened. Airflow is used by many users, and it is very likely that others had similar problems and you can easily find a solution to your problem. Consider following these steps: * gather the relevant information (detailed logs
```
I was able to push the same value to XCom with `AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True`, and the XCom List UI (`/xcom/list/`) did **not** throw an error.
In the postgres instance I am using for the Airflow DB, both `dag_run.conf` & `xcom.value` have `BYTEA` types.
### What you think should happen instead
Since we are able to add (and commit) a non-JSON serializable value into a Dag Run's conf, the UI should not break when trying to load this value. We could also ensure that one DAG Run's conf does not break the List UI for all Dag Runs (across all DAGs), and the DAG's Grid View.
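One possible direction (purely a sketch of the idea, not the actual webserver code) would be for the listing views to fall back to `repr()` for values that JSON cannot encode instead of raising:
```
import json


def render_dag_run_conf(conf):
    # Try normal JSON rendering first, then degrade gracefully for values
    # such as bytes that the default encoder cannot handle.
    try:
        return json.dumps(conf)
    except TypeError:
        return json.dumps(conf, default=repr)
```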
### How to reproduce
- Set `AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True`
- Trigger this DAG:
```
import datetime
from airflow.decorators import dag, task
from airflow.models.xcom import XCom
@dag(
    schedule_interval=None,
    start_date=datetime.datetime(2023, 1, 1),
)
def ui_issue():
    @task()
    def update_conf(**kwargs):
        dag_conf = kwargs["dag_run"].conf
        dag_conf["non_json_serializable_value"] = b"1234"
        print(dag_conf)
    @task()
    def push_to_xcom(**kwargs):
        dag_conf = kwargs["dag_run"].conf
        print(dag_conf)
        XCom.set(key="dag_conf", value=dag_conf, dag_id=kwargs["ti"].dag_id, task_id=kwargs["ti"].task_id, run_id=kwargs["ti"].run_id)
    return update_conf() >> push_to_xcom()
dag = ui_issue()
```
- View both the Dag Runs and XCom lists in the UI.
- The DAG Run List UI should break, and the XCom List UI should show a value of `{'non_json_serializable_value': b'1234'}` for `ui_issue.push_to_xcom`.
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
The XCom List UI was able to render this value. We could extend this capability to the DAG Run List UI.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28772 | https://github.com/apache/airflow/pull/28777 | 82c5a5f343d2310822f7bb0d316efa0abe9d4a21 | 8069b500e8487675df0472b4a5df9081dcfa9d6c | "2023-01-06T19:10:49Z" | python | "2023-04-03T08:46:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,766 | ["airflow/cli/commands/connection_command.py", "tests/cli/commands/test_connection_command.py"] | Cannot create connection without defining host using CLI | ### Apache Airflow version
2.5.0
### What happened
In order to send logs to an S3 bucket after a task finishes, I added a connection to Airflow using the CLI.
```airflow connections add connection_id_1 --conn-uri aws://s3/?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
Then I got a logging warning saying:
[2023-01-06T13:28:39.585+0000] {logging_mixin.py:137} WARNING - <string>:8 DeprecationWarning: Host s3 specified in the connection is not used. Please, set it on extra['endpoint_url'] instead
I then tried to remove the host from the `conn-uri` I provided, but every attempt to create a connection failed (list of my attempts below):
```airflow connections add connection_id_1 --conn-uri aws://?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
```airflow connections add connection_id_1 --conn-uri aws:///?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
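For reference, a programmatic equivalent of what I am trying to achieve (a sketch only; I am assuming `Connection` accepts a dict for `extra`, otherwise a JSON string is needed):
```
from airflow.models.connection import Connection

# Desired result: an aws connection with no host, everything in extra.
conn = Connection(
    conn_id="connection_id_1",
    conn_type="aws",
    extra={
        "region_name": "eu-west-1",
        "endpoint_url": "https://s3.eu-west-1.amazonaws.com",
    },
)
print(conn.get_uri())  # the URI form the CLI would need to accept
```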
### What you think should happen instead
I believe there are 2 options:
1. Allow to create connection without defining host
or
2. Remove the warning log
### How to reproduce
Create an S3 connection using CLI:
```airflow connections add connection_id_1 --conn-uri aws://s3/?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
### Operating System
Linux - official airflow image from docker hub apache/airflow:slim-2.5.0
### Versions of Apache Airflow Providers
```
apache-airflow-providers-cncf-kubernetes | 5.0.0 | Kubernetes
apache-airflow-providers-common-sql | 1.3.1 | Common SQL Provider
apache-airflow-providers-databricks | 4.0.0 | Databricks
apache-airflow-providers-ftp | 3.2.0 | File Transfer Protocol (FTP)
apache-airflow-providers-hashicorp | 3.2.0 | Hashicorp including Hashicorp Vault
apache-airflow-providers-http | 4.1.0 | Hypertext Transfer Protocol (HTTP)
apache-airflow-providers-imap | 3.1.0 | Internet Message Access Protocol (IMAP)
apache-airflow-providers-postgres | 5.3.1 | PostgreSQL
apache-airflow-providers-sqlite | 3.3.1 | SQLite
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
This log message is printed every second minute so it is pretty annoying.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28766 | https://github.com/apache/airflow/pull/28922 | c5ee4b8a3a2266ef98b379ee28ed68ff1b59ac5f | d8b84ce0e6d36850cd61b1ce37840c80aaec0116 | "2023-01-06T13:43:51Z" | python | "2023-01-13T21:41:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,751 | ["airflow/providers/google/cloud/operators/cloud_base.py", "tests/providers/google/cloud/operators/test_cloud_base.py"] | KubernetesExecutor leaves failed pods due to deepcopy issue with Google providers | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
With Airflow 2.3 and 2.4 there appears to be a bug in the KubernetesExecutor when used in conjunction with the Google airflow providers. This bug does not affect Airflow 2.2 due to the pip version requirements.
The bug specifically presents itself when using nearly any Google provider operator. During the pod lifecycle, all is well until the executor in the pod starts to clean up following a successful run. Airflow itself still sees the task marked as a success, but in Kubernetes, while the task is finishing up after reporting status, it actually crashes and silently puts the pod into a Failed state:
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 103, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 382, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 189, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 247, in _run_task_by_local_task_job
run_job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 247, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 137, in _execute
self.handle_task_exit(return_code)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 168, in handle_task_exit
self._run_mini_scheduler_on_child_tasks()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 253, in _run_mini_scheduler_on_child_tasks
partial_dag = task.dag.partial_subset(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2188, in partial_subset
dag.task_dict = {
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2189, in <dictcomp>
t.task_id: _deepcopy_task(t)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2186, in _deepcopy_task
return copy.deepcopy(t, memo)
File "/usr/local/lib/python3.9/copy.py", line 153, in deepcopy
y = copier(memo)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1163, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "/usr/local/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/lib/python3.9/copy.py", line 264, in _reconstruct
y = func(*args)
File "/usr/local/lib/python3.9/enum.py", line 384, in __call__
return cls.__new__(cls, value)
File "/usr/local/lib/python3.9/enum.py", line 702, in __new__
raise ve_exc
ValueError: <object object at 0x7f570181a3c0> is not a valid _MethodDefault
```
Based on a quick look, it appears to be related to a default argument that Google uses in its operators, which happens to be an Enum member and fails during a deepcopy at the end of the task (a minimal reproduction is sketched below).
Example operator that is affected: https://github.com/apache/airflow/blob/403ed7163f3431deb7fc21108e1743385e139907/airflow/providers/google/cloud/hooks/dataproc.py#L753
Reference to the Google Python API core which has the Enum causing the problem: https://github.com/googleapis/python-api-core/blob/main/google/api_core/gapic_v1/method.py#L31
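A minimal reproduction of the underlying problem, independent of Airflow (this assumes the Python 3.9 enum/copy behaviour shown in the traceback above; newer interpreters may copy enum members differently):
```
import copy
import enum


class _MethodDefault(enum.Enum):
    # google-api-core uses a plain object() sentinel as the member value
    _DEFAULT_VALUE = object()


# deepcopy rebuilds the member from a *copied* object(), which can no longer be
# matched back to _DEFAULT_VALUE, so Enum.__new__ raises
# "ValueError: <object object at 0x...> is not a valid _MethodDefault"
copy.deepcopy({"retry": _MethodDefault._DEFAULT_VALUE})
```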
### What you think should happen instead
Kubernetes pods should succeed, be marked as `Completed`, and then be gracefully terminated.
### How to reproduce
Use any `apache-airflow-providers-google` >= 7.0.0 which includes `google-api-core` >= 2.2.2. Run a DAG with a task which uses any of the Google operators which have `_MethodDefault` as a default argument.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-apache-hive==5.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.1
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-presto==4.2.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28751 | https://github.com/apache/airflow/pull/29518 | ec31648be4c2fc4d4a7ef2bd23be342ca1150956 | 5a632f78eb6e3dcd9dc808e73b74581806653a89 | "2023-01-05T17:31:57Z" | python | "2023-03-04T22:44:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,745 | ["chart/templates/logs-persistent-volume-claim.yaml", "chart/values.schema.json", "chart/values.yaml"] | annotations in logs pvc | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.5.0
### Kubernetes Version
v1.22.8+d48376b
### Helm Chart configuration
_No response_
### Docker Image customisations
_No response_
### What happened
When creating the dags PVC, it is possible to inject annotations into the object.
### What you think should happen instead
There should be the possibility to inject annotations into the logs PVC as well.
### How to reproduce
_No response_
### Anything else
We are using annotations on PVCs to disable the creation of backup snapshots provided by our company platform (OpenShift).
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28745 | https://github.com/apache/airflow/pull/29270 | 6ef5ba9104f5a658b003f8ade274f19d7ec1b6a9 | 5835b08e8bc3e11f4f98745266d10bbae510b258 | "2023-01-05T13:22:16Z" | python | "2023-02-20T22:57:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,691 | ["airflow/providers/amazon/aws/utils/waiter.py", "tests/providers/amazon/aws/utils/test_waiter.py"] | Fix custom waiter function in AWS provider package | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.5.0
### Operating System
MacOS
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Discussed in #28294
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28691 | https://github.com/apache/airflow/pull/28753 | 2b92c3c74d3259ebac714f157c525836f0af50f0 | ce188e509389737b3c0bdc282abea2425281c2b7 | "2023-01-03T14:34:10Z" | python | "2023-01-05T22:09:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,680 | ["airflow/providers/amazon/aws/operators/batch.py", "tests/providers/amazon/aws/operators/test_batch.py"] | Improve AWS Batch hook and operator | ### Description
AWS Batch hook and operator do not support the boto3 parameter shareIdentifier, which is required to submit jobs to specific types of queues.
### Use case/motivation
I wish that the AWS Batch hook and operator supported submitting jobs to queues that require the shareIdentifier parameter; the underlying boto3 call is illustrated below.
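For illustration, a sketch of the plain boto3 call this would map to (all identifiers below are placeholders):
```
import boto3

client = boto3.client("batch")
client.submit_job(
    jobName="my-job",
    jobQueue="my-fair-share-queue",  # a queue attached to a fair-share scheduling policy
    jobDefinition="my-job-definition",
    shareIdentifier="team-a",  # required by such queues, not exposed by the operator today
    schedulingPriorityOverride=1,  # optional
)
```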
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28680 | https://github.com/apache/airflow/pull/30829 | bd542fdf51ad9550e5c4348f11e70b5a6c9adb48 | 612676b975a2ff26541bb2581fbdf2befc6c3de9 | "2023-01-02T14:47:23Z" | python | "2023-04-28T22:04:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,670 | ["airflow/providers/telegram/CHANGELOG.rst", "airflow/providers/telegram/hooks/telegram.py", "airflow/providers/telegram/provider.yaml", "docs/spelling_wordlist.txt", "generated/provider_dependencies.json", "tests/providers/telegram/hooks/test_telegram.py"] | Support telegram-bot v20+ | ### Body
Currently our Telegram integration uses the v13 python-telegram-bot library. On 1 January 2023 a new, backwards-incompatible version of python-telegram-bot was released: https://pypi.org/project/python-telegram-bot/20.0/#history and, at least as reported by MyPy and our test-suite failures, version 20 needs some changes on our side to work.
A transition guide that might be helpful is here: https://github.com/python-telegram-bot/python-telegram-bot/wiki/Transition-guide-to-Version-20.0
In the meantime we limit telegram to < 20.0.0
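For illustration only, a sketch of the kind of change the transition guide implies if we keep a synchronous hook interface (this is an assumption about the approach, not the final implementation):
```
import asyncio

import telegram


def send_message(token: str, chat_id: int, text: str) -> None:
    # python-telegram-bot >= 20 turns Bot methods into coroutines,
    # so a synchronous hook has to drive an event loop itself.
    async def _send() -> None:
        async with telegram.Bot(token) as bot:
            await bot.send_message(chat_id=chat_id, text=text)

    asyncio.run(_send())
```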
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28670 | https://github.com/apache/airflow/pull/28953 | 68412e166414cbf6228385e1e118ec0939857496 | 644cea14fff74d34f823b5c52c9dbf5bad33bd52 | "2023-01-02T06:58:45Z" | python | "2023-02-23T03:24:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,637 | ["docs/helm-chart/index.rst"] | version 2.4.1 migration job "run-airflow-migrations" run once only when deploy via helm or flux/kustomization | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.4.1
### Kubernetes Version
v4.5.4
### Helm Chart configuration
_No response_
### Docker Image customisations
_No response_
### What happened
Manually copied from [the Q&A discussion 27992 about the migration job](https://github.com/apache/airflow/discussions/27992) (the "create issue from discussion" button did not work).
I found that my migration job would not run a 2nd time (the 1st run happened when the default Airflow was deployed onto Kubernetes and had no issues). I then started applying changes to the values.yaml file, such as **switching the database to Azure PostgreSQL**, but the new values never took effect; see the screenshots below.
Of course my Kubernetes debugging skills are not great, so I would need extra help if more info is needed.



```
database:
  sql_alchemy_conn_secret: airflow-postgres-redis
  sql_alchemy_connect_args:
    {
      "keepalives": 1,
      "keepalives_idle": 30,
      "keepalives_interval": 5,
      "keepalives_count": 5,
    }
postgresql:
  enabled: false
pgbouncer:
  enabled: false
# Airflow database & redis config
data:
  metadataSecretName: airflow-postgres-redis
```
Checking the pod that is waiting for the migrations again:

And below is the 1st successful run from the initial installation (which did not use an external DB):
```
kubectl describe job airflow-airflow-run-airflow-migrations
Name: airflow-airflow-run-airflow-migrations
Namespace: airflow
Selector: controller-uid=efdc3c7b-5172-4841-abcf-17e055fa6e2e
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=run-airflow-migrations
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Tue, 27 Dec 2022 14:21:50 +0100
Completed At: Tue, 27 Dec 2022 14:22:29 +0100
Duration: 39s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=run-airflow-migrations
controller-uid=efdc3c7b-5172-4841-abcf-17e055fa6e2e
job-name=airflow-airflow-run-airflow-migrations
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-migrate-database-job
Containers:
run-airflow-migrations:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow db upgrade
Environment:
PYTHONUNBUFFERED: 1
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
My further experiments tell me the jobs are only ever run once.
More independent tests could be done with a bit of help, for example to establish what kind of changes trigger the migration job to run.
See the helm release history below: the 1st installation worked, but I could not make the 3rd release succeed even though the values are 100% correct. So, in short: **a HelmRelease in combination with `flux` has issues with the DB migration jobs (they only run once, even if that one run is successful), which makes it a blocker for further upgrades.**
```
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Wed Dec 28 02:22:42 2022 superseded airflow-1.7.0 2.4.1 Install complete
2 Wed Dec 28 02:43:25 2022 deployed airflow-1.7.0 2.4.1 Upgrade complete
```
See the equivalent values below; even trying to disable the DB migration job did not make flux work with it.
```
createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false
config:
  webserver:
    expose_config: 'non-sensitive-only'
postgresql:
  enabled: false
pgbouncer:
  enabled: true
  # The maximum number of connections to PgBouncer
  maxClientConn: 100
  # The maximum number of server connections to the metadata database from PgBouncer
  metadataPoolSize: 10
  # The maximum number of server connections to the result backend database from PgBouncer
  resultBackendPoolSize: 5
# Airflow database & redis config
data:
  metadataSecretName: airflow-postgres-redis
  # to generate strong secret: python3 -c 'import secrets; print(secrets.token_hex(16))'
  webserverSecretKeySecretName: airflow-webserver-secret
```
And see the 2 jobs below:
```
$ kubectl describe job -n airflow
Name: airflow-airflow-create-user
Namespace: airflow
Selector: controller-uid=8b09e28b-ba3a-4cee-b20f-693a3aa15363
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=create-user-job
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Wed, 28 Dec 2022 03:22:46 +0100
Completed At: Wed, 28 Dec 2022 03:24:32 +0100
Duration: 106s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=create-user-job
controller-uid=8b09e28b-ba3a-4cee-b20f-693a3aa15363
job-name=airflow-airflow-create-user
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-create-user-job
Containers:
create-user:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow users create "$@"
--
-r
Admin
-u
admin
-e
admin@example.com
-f
admin
-l
user
-p
admin
Environment:
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
Name: airflow-airflow-run-airflow-migrations
Namespace: airflow
Selector: controller-uid=5da8c81f-7920-4eaf-9d7a-58a48c740bdc
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=run-airflow-migrations
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Wed, 28 Dec 2022 03:22:46 +0100
Completed At: Wed, 28 Dec 2022 03:23:07 +0100
Duration: 21s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=run-airflow-migrations
controller-uid=5da8c81f-7920-4eaf-9d7a-58a48c740bdc
job-name=airflow-airflow-run-airflow-migrations
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-migrate-database-job
Containers:
run-airflow-migrations:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow db upgrade
Environment:
PYTHONUNBUFFERED: 1
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28637 | https://github.com/apache/airflow/pull/29078 | 30ad26e705f50442f05dd579990372196323fc86 | 6c479437b1aedf74d029463bda56b42950278287 | "2022-12-29T10:27:55Z" | python | "2023-01-27T20:58:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,615 | ["airflow/dag_processing/processor.py", "airflow/models/dagbag.py", "tests/models/test_dagbag.py"] | AIP-44 Migrate Dagbag.sync_to_db to internal API. | This method is used in DagFileProcessor.process_file - it may be easier to migrate all it's internal calls instead of the whole method. | https://github.com/apache/airflow/issues/28615 | https://github.com/apache/airflow/pull/29188 | 05242e95bbfbaf153e4ae971fc0d0a5314d5bdb8 | 5c15b23023be59a87355c41ab23a46315cca21a5 | "2022-12-27T20:09:25Z" | python | "2023-03-12T10:02:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,614 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/api_internal/internal_api_call.py", "airflow/models/dag.py", "tests/api_internal/test_internal_api_call.py"] | AIP-44 Migrate DagModel.get_paused_dag_ids to Internal API | null | https://github.com/apache/airflow/issues/28614 | https://github.com/apache/airflow/pull/28693 | f114c67c03a9b4257cc98bb8a970c6aed8d0c673 | ad738198545431c1d10619f8e924d082bf6a3c75 | "2022-12-27T20:09:14Z" | python | "2023-01-20T19:08:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,613 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/trigger.py"] | AIP-44 Migrate Trigger class to Internal API | null | https://github.com/apache/airflow/issues/28613 | https://github.com/apache/airflow/pull/29099 | 69babdcf7449c95fea7fe3b9055c677b92a74298 | ee0a56a2caef0ccfb42406afe57b9d2169c13a01 | "2022-12-27T20:09:03Z" | python | "2023-02-20T21:26:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,612 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/xcom.py"] | AIP-44 Migrate XCom get*/clear* to Internal API | null | https://github.com/apache/airflow/issues/28612 | https://github.com/apache/airflow/pull/29083 | 9bc48747ddbd609c2bd3baa54a5d0472e9fdcbe4 | a1ffb26e5bcf4547e3b9e494cf7ccd24af30c2e6 | "2022-12-27T20:08:50Z" | python | "2023-01-22T19:19:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,510 | [".pre-commit-config.yaml", "STATIC_CODE_CHECKS.rst", "airflow/cli/commands/info_command.py", "scripts/ci/pre_commit/pre_commit_check_provider_yaml_files.py", "scripts/in_container/run_provider_yaml_files_check.py"] | Add pre-commit/test to verify extra links refer to existed classes | ### Body
We had an issue where an extra link class (`AIPlatformConsoleLink`) was removed in a [PR](https://github.com/apache/airflow/pull/26836) without removing the class from the `provider.yaml` extra-links list; this resulted in a webserver exception, as shown in https://github.com/apache/airflow/pull/28449
**The Task:**
Add validation that classes of extra-links in provider.yaml are importable
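A rough sketch of what the check could do (the function shape and error format are assumptions, not the final implementation):
```
from __future__ import annotations

from pathlib import Path

import yaml

from airflow.utils.module_loading import import_string


def check_extra_links(provider_yaml_paths: list[Path]) -> list[str]:
    """Return an error message for every extra-link class path that cannot be imported."""
    errors = []
    for path in provider_yaml_paths:
        provider = yaml.safe_load(path.read_text())
        for class_path in provider.get("extra-links", []):
            try:
                import_string(class_path)
            except ImportError as exc:
                errors.append(f"{path}: {class_path} is not importable: {exc}")
    return errors
```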
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28510 | https://github.com/apache/airflow/pull/28516 | 7ccbe4e7eaa529641052779a89e34d54c5a20f72 | e47c472e632effbfe3ddc784788a956c4ca44122 | "2022-12-20T22:35:11Z" | python | "2022-12-22T02:25:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,483 | ["airflow/www/static/css/main.css"] | Issues with Custom Menu Items on Smaller Windows | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We take advantage of the custom menu items that flask-appbuilder offers to provide a variety of dropdown menus with custom DAG filters. We've noticed two things:
1. When you have too many dropdown menu items in a single category, several menu items are unreachable when using the Airflow UI on a small screen:
<img width="335" alt="Screenshot 2022-12-19 at 6 34 24 PM" src="https://user-images.githubusercontent.com/40223998/208548419-f9d1ff57-6cad-4a40-bc58-dbf20148a92a.png">
2. When you have too many menu categories, multiple rows of dropdown menus are displayed, but cover some other components.
<img width="1077" alt="Screenshot 2022-12-19 at 6 32 05 PM" src="https://user-images.githubusercontent.com/40223998/208548222-44e50717-9040-4899-be06-d503a8c0f69a.png">
### What you think should happen instead
1. When you have too many dropdown menu items in a single category, there should be a scrollbar.
2. When you have too many menu categories and multiple rows of dropdown menus are displayed, the menu shouldn't cover the DAG import errors or any other part of the UI.
### How to reproduce
1. Add a bunch of menu items under the same category in a custom plugin and resize your window smaller
2. Add a large number of menu item categories in a custom plugin and resize your window smaller.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
2.4.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
I'm happy to make a PR for this. I just don't have the frontend context. If someone can point me in the right direction that'd be great
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28483 | https://github.com/apache/airflow/pull/28561 | ea3be1a602b3e109169c6e90e555a418e2649f9a | 2aa52f4ce78e1be7f34b0995d40be996b4826f26 | "2022-12-19T23:40:01Z" | python | "2022-12-30T01:50:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,465 | ["airflow/providers/jenkins/hooks/jenkins.py", "docs/apache-airflow-providers-jenkins/connections.rst", "tests/providers/jenkins/hooks/test_jenkins.py"] | Airflow 2.2.4 Jenkins Connection - unable to set as the hook expects to be | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hello team,
I am trying to use the `JenkinsJobTriggerOperator` version v3.1.0 on an Airflow instance version 2.2.4
Checking the documentation regarding how to set up the connection and the hook in order to use `https` instead of the default `http`, I see https://airflow.apache.org/docs/apache-airflow-providers-jenkins/3.1.0/connections.html
```
Extras (optional)
Specify whether you want to use http or https scheme by entering true to use https or false for http in extras. Default is http.
```
Unfortunately, when specifying the connection from the Airflow UI, the `Extras` field only accepts a JSON-like object, so anything that is not a dictionary fails to update the extra options for that connection.
Checking in more details what the [Jenkins hook](https://airflow.apache.org/docs/apache-airflow-providers-jenkins/3.1.0/_modules/airflow/providers/jenkins/hooks/jenkins.html#JenkinsHook.conn_name_attr) does:
```
self.connection = connection
connection_prefix = "http"
# connection.extra contains info about using https (true) or http (false)
if to_boolean(connection.extra):
connection_prefix = "https"
url = f"{connection_prefix}://{connection.host}:{connection.port}"
```
where the `connection.extra` cannot be a simple true/false string!
### What you think should happen instead
Either we should get the `http` or `https` from the `Schema`
Or we should update the [JenkinsHook](https://airflow.apache.org/docs/apache-airflow-providers-jenkins/stable/_modules/airflow/providers/jenkins/hooks/jenkins.html#JenkinsHook.default_conn_name) to read the provided dictionary for http value:
`if to_boolean(connection.extra.https)`
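Something along these lines (a hypothetical sketch of that second option, not the actual provider code; the `use_https` key name is my assumption):
```
from airflow.models.connection import Connection
from airflow.utils.strings import to_boolean


def jenkins_base_url(connection: Connection) -> str:
    # Hypothetical: read a dedicated key from the Extra JSON instead of the raw string.
    extra = connection.extra_dejson
    use_https = to_boolean(str(extra.get("use_https", False)))
    prefix = "https" if use_https else "http"
    return f"{prefix}://{connection.host}:{connection.port}"
```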
### How to reproduce
_No response_
### Operating System
macos Monterey 12.6.2
### Versions of Apache Airflow Providers
```
pip freeze | grep apache-airflow-providers
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-common-sql==1.3.1
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-jenkins==3.1.0
apache-airflow-providers-sqlite==2.1.0
```
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28465 | https://github.com/apache/airflow/pull/30301 | f7d5b165fcb8983bd82a852dcc5088b4b7d26a91 | 1f8bf783b89d440ecb3e6db536c63ff324d9fc62 | "2022-12-19T14:43:00Z" | python | "2023-03-25T19:37:53Z" |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.