status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 18,632 | ["airflow/www/jest-setup.js", "airflow/www/package.json", "airflow/www/static/js/tree/useTreeData.js", "airflow/www/static/js/tree/useTreeData.test.js", "airflow/www/templates/airflow/tree.html", "airflow/www/views.py", "airflow/www/yarn.lock"] | Auto-refresh doesn't take into account the selected date | ### Apache Airflow version
2.1.3
### Operating System
Debian GNU/Linux 9 (stretch)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
In the DAG tree view, if I select a custom date in the date filter (top left corner) and press "update", DAG runs are correctly filtered to the selected date and number of runs.
However, if the "auto-refresh" toggle is on, when the next tick refresh happens the date filter is no longer taken into account and the UI displays the status of the actual **latest** x DAG runs. Meanwhile, neither the header dates (45° angle) nor the date filter reflect this new time window.
I investigated in the network inspector and it seems that the xhr request that fetches DAG run statuses doesn't contain a date parameter and thus always fetches the latest DAG run data.
### What you expected to happen
I expect that the auto-refresh feature preserves the selected time window
### How to reproduce
Open a DAG with at least 20 dag runs
Make sure autorefresh is disabled
Select a filter date earlier than the 10th dag run start date and a "number of runs" value of 10, press "update"
The DAG tree view should now be focused on the 10 first dag runs
Now toggle autorefresh and wait for next tick
The DAG tree view will now display status of the latest 10 runs but the header dates will still reflect the old start dates
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18632 | https://github.com/apache/airflow/pull/19605 | 186513e24e723b79bc57e3ca0ade3c73e4fa2f9a | d3ccc91ba4af069d3402d458a1f0ca01c3ffb863 | "2021-09-30T09:45:00Z" | python | "2021-11-16T00:24:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,600 | ["airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dags.html"] | Selecting DAG in search dropdown should lead directly to DAG | ### Description
When searching for a DAG in the search box, the dropdown menu suggests matching DAG names. Currently, selecting a DAG from the dropdown menu initiates a search with that DAG name as the search query. However, I think it would be more intuitive to go directly to the DAG.
If the user prefers to execute the search query, they can still have the option to search without selecting from the dropdown.
---
Select `example_bash_operator` from dropdown menu:
![image](https://user-images.githubusercontent.com/40527812/135201722-3d64f46a-36d8-4a6c-bc29-9ed90416a17f.png)
---
We are taken to the search result instead of that specific DAG:
![image](https://user-images.githubusercontent.com/40527812/135201941-5a498d75-cff4-43a0-8bcc-f62532c92b5f.png)
### Use case/motivation
When you select a specific DAG from the dropdown, you probably intend to go to that DAG. Even if there were another DAG that began with the name of the selected DAG, both DAGs would appear in the dropdown, so you would still be able to select the one that you want.
For example:
![image](https://user-images.githubusercontent.com/40527812/135202518-f603e1c5-6806-4338-a3d6-7803c7af3572.png)
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18600 | https://github.com/apache/airflow/pull/18991 | 324c31c2d7ad756ce3814f74f0b6654d02f19426 | 4d1f14aa9033c284eb9e6b94e9913a13d990f93e | "2021-09-29T04:22:53Z" | python | "2021-10-22T15:35:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,599 | ["airflow/stats.py", "tests/core/test_stats.py"] | datadog parsing error for dagrun.schedule_delay since it is not passed in float type | ### Apache Airflow version
2.1.2
### Operating System
Gentoo/Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
In the datadog-agent logs, we got a parsing error:
[ AGENT ] 2021-09-29 03:20:01 UTC | CORE | ERROR | (pkg/dogstatsd/server.go:411 in errLog) | Dogstatsd: error parsing metric message '"airflow.dagrun.schedule_delay.skew:<Period [2021-09-29T03:20:00+00:00 -> 2021-09-29T03:20:00.968404+00:00]>|ms"': could not parse dogstatsd metric value: strconv.ParseFloat: parsing "<Period [2021-09-29T03:20:00+00:00 -> 2021-09-29T03:20:00.968404+00:00]>": invalid syntax
### What you expected to happen
since datadog agent expects a float, see https://github.com/DataDog/datadog-agent/blob/6830beaeb182faadac40368d9d781b796b4b2c6f/pkg/dogstatsd/parse.go#L119, the schedule_delay should be a float instead of timedelta.
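A minimal sketch of the kind of coercion that would satisfy dogstatsd (the helper name is illustrative and this is not the actual Airflow patch; it only shows timedelta-to-milliseconds conversion):
```python
from datetime import timedelta


def timing_value(value):
    """Coerce a timedelta (a pendulum Period is a timedelta subclass) into the float
    milliseconds dogstatsd expects; numeric values pass through unchanged."""
    if isinstance(value, timedelta):
        return value.total_seconds() * 1000.0
    return float(value)


# e.g. the delay between the scheduled time and the actual run start
print(timing_value(timedelta(microseconds=968404)))  # 968.404 (milliseconds); parses cleanly as a float
```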
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18599 | https://github.com/apache/airflow/pull/19973 | dad2f8103be954afaedf15e9d098ee417b0d5d02 | 5d405d9cda0b88909e6b726769381044477f4678 | "2021-09-29T04:02:52Z" | python | "2021-12-15T10:42:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,578 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/configuration.py", "airflow/models/abstractoperator.py", "airflow/models/baseoperator.py", "tests/core/test_configuration.py"] | Allow to set execution_timeout default value in airflow.cfg | ### Discussed in https://github.com/apache/airflow/discussions/18411
<div type='discussions-op-text'>
<sup>Originally posted by **alexInhert** September 21, 2021</sup>
### Description
Currently the default value of execution_timeout in base operator is None
https://github.com/apache/airflow/blob/c686241f4ceb62d52e9bfa607822e4b7a3c76222/airflow/models/baseoperator.py#L502
This means that a task will run without limit till finished.
### Use case/motivation
The problem is that there is no way to overwrite this default for all dags. This causes problems where we find that tasks run sometimes for 1 week!!! for no reason. They are just stuck. This mostly happens with tasks that submit work to some 3rd party resource. The 3rd party had some error and terminated but the Airflow task for some reason did not.
This can be handled in a cluster policy; however, it feels that cluster policy is more about enforcing something and less about setting default values.
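For reference, a minimal sketch of that cluster-policy workaround (the 24-hour value is an arbitrary example, not a recommendation):
```python
# airflow_local_settings.py
from datetime import timedelta

from airflow.models.baseoperator import BaseOperator


def task_policy(task: BaseOperator) -> None:
    # Only fill in a cluster-wide default when the DAG author did not set one explicitly.
    if task.execution_timeout is None:
        task.execution_timeout = timedelta(hours=24)
```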
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/18578 | https://github.com/apache/airflow/pull/22389 | 4201e6e0da58ef7fff0665af3735112244114d5b | a111a79c0d945640b073783a048c0d0a979b9d02 | "2021-09-28T16:16:58Z" | python | "2022-04-11T08:31:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,545 | ["airflow/www/views.py"] | Unable to add a new user when logged using LDAP auth | ### Discussed in https://github.com/apache/airflow/discussions/18290
<div type='discussions-op-text'>
<sup>Originally posted by **pawsok** September 16, 2021</sup>
### Apache Airflow version
2.1.4 (latest released)
### Operating System
Amazon Linux AMI 2018.03
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
- AWS ECS EC2 mode
- RDS PostgreSQL for DB
- LDAP authentication enabled
### What happened
We upgraded Airflow from 2.0.1 to 2.1.3, and now when I log into Airflow (Admin role) using LDAP authentication and go to Security --> List Users, I cannot see the **add button** ("plus").
**Airflow 2.0.1** (our current version):
![image](https://user-images.githubusercontent.com/90831710/133586254-24e22cd6-7e02-4800-b90f-d6f575ba2826.png)
**Airflow 2.1.3:**
![image](https://user-images.githubusercontent.com/90831710/133586024-48952298-a906-4189-abe1-bd88d96518bc.png)
### What you expected to happen
Option to add a new user (using LDAP auth).
### How to reproduce
1. Upgrade to Airflow 2.1.3
2. Log in to Airflow as LDAP user type
3. Go to Security --> List Users
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/18545 | https://github.com/apache/airflow/pull/22619 | 29de8daeeb979d8f395b1e8e001e182f6dee61b8 | 4e4c0574cdd3689d22e2e7d03521cb82179e0909 | "2021-09-27T08:55:18Z" | python | "2022-04-01T08:42:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,512 | ["airflow/models/renderedtifields.py"] | airflow deadlock trying to update rendered_task_instance_fields table (mysql) | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
We have been unable to reproduce this in our testing. Our dags will utilize the S3KeySensor task. Sometimes we will have up to 100 dag_runs (running the same DAG) waiting for the S3KeySensor to poke the expected S3 documents.
We are regularly seeing this deadlock with mysql:
```
Exception: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: DELETE FROM rendered_task_instance_fields WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND (rendered_task_instance_fields.dag_id, rendered_task_instance_fields.task_id, rendered_task_instance_fields.execution_date) NOT IN (SELECT subq1.dag_id, subq1.task_id, subq1.execution_date
FROM (SELECT rendered_task_instance_fields.dag_id AS dag_id, rendered_task_instance_fields.task_id AS task_id, rendered_task_instance_fields.execution_date AS execution_date
FROM rendered_task_instance_fields
WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s ORDER BY rendered_task_instance_fields.execution_date DESC
LIMIT %s) AS subq1)]
[parameters: ('refresh_hub', 'scorecard_wait', 'refresh_hub', 'scorecard_wait', 1)] Exception trying create activation error: 400:
```
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
Sometimes we will wait multiple days for our S3 documents to appear. This deadlock is occurring for 30%-40% of the dag_runs for an individual dag.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18512 | https://github.com/apache/airflow/pull/18616 | b6aa8d52a027c75aaa1151989c68f8d6b8529107 | db2d73d95e793e63e152692f216deec9b9d9bc85 | "2021-09-24T21:40:14Z" | python | "2021-09-30T16:50:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,495 | ["airflow/www/views.py", "docs/apache-airflow/howto/email-config.rst"] | apache-airflow-providers-sendgrid==2.0.1 doesn't show in the connections drop down UI | ### Apache Airflow Provider(s)
sendgrid
### Versions of Apache Airflow Providers
I'm running this version of airflow locally to test the providers modules. I'm interested in sendgrid, however it doesn't show up as a conn type in the UI making it unusable.
The other packages I've installed do show up.
```
@linux-2:~$ airflow info
Apache Airflow
version | 2.1.2
executor | SequentialExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
sql_alchemy_conn | sqlite:////home/airflow/airflow.db
dags_folder | /home/airflow/dags
plugins_folder | /home/airflow/plugins
base_log_folder | /home/airflow/logs
remote_base_log_folder |
System info
OS | Linux
architecture | x86_64
uname | uname_result(system='Linux', node='linux-2.fritz.box', release='5.11.0-34-generic', version='#36~20.04.1-Ubuntu SMP Fri Aug 27 08:06:32 UTC 2021', machine='x86_64', processor='x86_64')
locale | ('en_US', 'UTF-8')
python_version | 3.6.4 (default, Aug 12 2021, 10:51:13) [GCC 9.3.0]
python_location |.pyenv/versions/3.6.4/bin/python3.6
Tools info
git | git version 2.25.1
ssh | OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020
kubectl | Client Version: v1.20.4
gcloud | Google Cloud SDK 357.0.0
cloud_sql_proxy | NOT AVAILABLE
mysql | mysql Ver 8.0.26-0ubuntu0.20.04.2 for Linux on x86_64 ((Ubuntu))
sqlite3 | NOT AVAILABLE
psql | psql (PostgreSQL) 12.8 (Ubuntu 12.8-0ubuntu0.20.04.1)
Providers info
apache-airflow-providers-amazon | 2.0.0
apache-airflow-providers-ftp | 2.0.1
apache-airflow-providers-google | 3.0.0
apache-airflow-providers-http | 2.0.1
apache-airflow-providers-imap | 2.0.1
apache-airflow-providers-sendgrid | 2.0.1
apache-airflow-providers-sqlite | 2.0.1
```
### Apache Airflow version
2.1.2
### Operating System
NAME="Ubuntu" VERSION="20.04.2 LTS (Focal Fossa)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 20.04.2 LTS" VERSION_ID="20.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=focal UBUNTU_CODENAME=focal
### Deployment
Other
### Deployment details
I simply run `airflow webserver --port 8080` to test this on my local machine.
In production we're using a helm chart, but the package also doesn't show up there prompting me to test it out locally with no luck.
### What happened
Nothing
### What you expected to happen
sendgrid provider being available as conn_type
### How to reproduce
run it locally using localhost
### Anything else
nop
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18495 | https://github.com/apache/airflow/pull/18502 | ac4acf9c5197bd96fbbcd50a83ef3266bfc366a7 | be82001f39e5b04fd16d51ef79b3442a3aa56e88 | "2021-09-24T12:10:04Z" | python | "2021-09-24T16:12:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,487 | ["airflow/timetables/interval.py"] | 2.1.3/4 queued dag runs changes catchup=False behaviour | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
Amazon Linux 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
Say, for example, you have a DAG that has a sensor. This DAG is set to run every minute, with max_active_runs=1, and catchup=False.
This sensor may pass 1 or more times per day.
Previously, when this sensor is satisfied once per day, there is 1 DAG run for that given day. When the sensor is satisfied twice per day, there are 2 DAG runs for that given day.
With the new queued dag run state, new dag runs will be queued for each minute (up to AIRFLOW__CORE__MAX_QUEUED_RUNS_PER_DAG), which seems to be against the spirit of catchup=False.
This means that if a dag run is waiting on a sensor for longer than the schedule_interval, it will still in effect 'catchup' due to the queued dag runs.
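A minimal sketch of the kind of DAG described above (the dag id, sensor type, file path and start date are illustrative assumptions):
```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.filesystem import FileSensor

with DAG(
    dag_id="sensor_every_minute",
    start_date=datetime(2021, 9, 1),
    schedule_interval=timedelta(minutes=1),
    max_active_runs=1,
    catchup=False,
) as dag:
    # While this sensor blocks for longer than the one-minute interval, 2.1.3/4 keeps
    # queueing new dag runs, which then all execute once the sensor is finally satisfied.
    wait_for_file = FileSensor(task_id="wait_for_file", filepath="/tmp/trigger_file")
```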
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18487 | https://github.com/apache/airflow/pull/19130 | 3a93ad1d0fd431f5f4243d43ae8865e22607a8bb | 829f90ad337c2ea94db7cd58ccdd71dd680ad419 | "2021-09-23T23:24:20Z" | python | "2021-10-22T01:03:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,482 | ["airflow/utils/log/secrets_masker.py"] | No SQL error shown when using the JDBC operator | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
Debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
astro dev start
Dockerfile - https://gist.github.com/jyotsa09/267940333ffae4d9f3a51ac19762c094#file-dockerfile
### What happened
Both connections pointed to the same database, where the table "Person" doesn't exist.
When there is a SQL failure in PostgressOperator then logs are saying -
```
psycopg2.errors.UndefinedTable: relation "person" does not exist
LINE 1: Select * from Person
```
When there is a SQL failure in JDBCOperator then logs are just saying -
```
[2021-09-23, 19:06:22 UTC] {local_task_job.py:154} INFO - Task exited with return code
```
### What you expected to happen
JDBCOperator should show "Task failed with exception" or something similar to Postgres.
### How to reproduce
Use this dag - https://gist.github.com/jyotsa09/267940333ffae4d9f3a51ac19762c094
(Connection extras is in the gist.)
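For illustration only: this is not the linked gist, just a sketch of the kind of side-by-side comparison described (the table name, connection ids and dag id are assumptions):
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.jdbc.operators.jdbc import JdbcOperator
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(dag_id="jdbc_vs_postgres_errors", start_date=datetime(2021, 9, 1), schedule_interval=None) as dag:
    # Fails loudly: the task log contains the psycopg2 "relation does not exist" traceback.
    via_postgres = PostgresOperator(
        task_id="via_postgres", postgres_conn_id="postgres_default", sql="SELECT * FROM Person"
    )
    # Fails silently as reported: the log only shows "Task exited with return code".
    via_jdbc = JdbcOperator(
        task_id="via_jdbc", jdbc_conn_id="jdbc_default", sql="SELECT * FROM Person"
    )
```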
### Anything else
Related: https://github.com/apache/airflow/issues/16564
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18482 | https://github.com/apache/airflow/pull/21540 | cb24ee9414afcdc1a2b0fe1ec0b9f0ba5e1bd7b7 | bc1b422e1ce3a5b170618a7a6589f8ae2fc33ad6 | "2021-09-23T20:01:53Z" | python | "2022-02-27T13:07:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,473 | ["airflow/cli/commands/dag_command.py", "airflow/jobs/backfill_job.py", "airflow/models/dag.py", "tests/cli/commands/test_dag_command.py"] | CLI: `airflow dags test { dag w/ schedule_interval=None } ` error: "No run dates were found" | ### Apache Airflow version
2.2.0b2 (beta snapshot)
### Operating System
ubuntu 20.04
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
pip install /path/to/airflow/src
### What happened
Given any DAG initialized with: `schedule_interval=None`
Run `airflow dags test mydagname $(date +%Y-%m-%d)` and get an error:
```
INFO - No run dates were found for the given dates and dag interval.
```
This behavior changed in https://github.com/apache/airflow/pull/15397; it used to trigger a backfill dagrun at the given date.
### What you expected to happen
I expected a backfill dagrun with the given date, regardless of whether it fit into the `schedule_interval`.
If AIP-39 made that an unrealistic expectation, then I'd hope for some way to define unscheduled dags which can still be tested from the command line (which, so far as I know, is the fastest way to iterate on a DAG.).
As it is, I keep changing `schedule_interval` back and forth depending on whether I want to iterate via `astro dev start` (which tolerates `None` but does superfluous work if the dag is scheduled) or via `airflow dags test ...` (which doesn't tolerate `None`).
### How to reproduce
Initialize a DAG with: `schedule_interval=None` and run it via `airflow dags test mydagname $(date +%Y-%m-%d)`
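A minimal reproduction sketch (the dag id and task are illustrative):
```python
# dags/mydagname.py
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(dag_id="mydagname", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    BashOperator(task_id="hello", bash_command="echo hello")

# Then, from a shell:
#   airflow dags test mydagname 2021-09-23
# On 2.2.0b2 this logs "No run dates were found ..." instead of running a backfill dagrun.
```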
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18473 | https://github.com/apache/airflow/pull/18742 | 5306a6071e1cf223ea6b4c8bc4cb8cacd25d370e | cfc2e1bb1cbf013cae065526578a4e8ff8c18362 | "2021-09-23T15:05:23Z" | python | "2021-10-06T19:40:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,465 | ["airflow/providers/slack/operators/slack.py"] | SlackAPIFileOperator filename is not templatable but is documented as is | ### Apache Airflow Provider(s)
slack
### Versions of Apache Airflow Providers
apache-airflow-providers-slack==3.0.0
### Apache Airflow version
2.1.4 (latest released)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
Using a template string for `SlackAPIFileOperator.filename` doesn't apply the templating.
### What you expected to happen
`SlackAPIFileOperator.filename` should work with a template string.
### How to reproduce
Use a template string when using `SlackAPIFileOperator`.
### Anything else
Re: PR: https://github.com/apache/airflow/pull/17400/files
In commit: https://github.com/apache/airflow/pull/17400/commits/bbd11de2f1d37e9e2f07e5f9b4d331bf94ef6b97
`filename` was removed from `template_fields`. I believe this was because `filename` was removed as a parameter in favour of `file`. In a later commit `file` was renamed to `filename` but the `template_fields` was not put back. The documentation still states that it's a templated field.
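Until the field is restored in the provider, a possible stop-gap is to re-add it in a thin subclass (a sketch only, not the official fix):
```python
from airflow.providers.slack.operators.slack import SlackAPIFileOperator


class TemplatedSlackAPIFileOperator(SlackAPIFileOperator):
    # Re-add 'filename' so Jinja strings like "report_{{ ds }}.csv" are rendered again.
    template_fields = (*SlackAPIFileOperator.template_fields, 'filename')
```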
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18465 | https://github.com/apache/airflow/pull/18466 | 958b679dae6bbd9def7c60191c3a7722ce81382a | 9bf0ed2179b62f374cad74334a8976534cf1a53b | "2021-09-23T12:22:24Z" | python | "2021-09-23T18:11:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,454 | ["airflow/operators/python.py", "tests/operators/test_python.py"] | BranchPythonOperator intermittently fails to find its downstream task ids | ### Apache Airflow version
2.2.0b2 (beta snapshot)
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start` with dockerfile:
```
FROM quay.io/astronomer/ap-airflow-dev:2.2.0-buster-44011
```
And dags:
- [simple_branch](https://gist.github.com/MatrixManAtYrService/bb7eac11d2a60c840b02f5fc7c4e864f#file-simple_branch-py)
- [branch](https://gist.github.com/MatrixManAtYrService/bb7eac11d2a60c840b02f5fc7c4e864f#file-branch-py)
### What happened
Spamming the "manual trigger" button on `simple_branch` caused no trouble:
![good](https://user-images.githubusercontent.com/5834582/134444526-4c733e74-3a86-4043-8aa3-a5475c93cc0b.png)
But doing the same on `branch` caused intermittent failures:
![bad](https://user-images.githubusercontent.com/5834582/134444555-161b1803-0e78-41f8-a1d3-25e652001fd1.png)
task logs:
```
*** Reading local file: /usr/local/airflow/logs/branch/decide_if_you_like_it/2021-09-23T01:31:50.261287+00:00/2.log
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1034} INFO - Dependencies all met for <TaskInstance: branch.decide_if_you_like_it manual__2021-09-23T01:31:50.261287+00:00 [queued]>
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1034} INFO - Dependencies all met for <TaskInstance: branch.decide_if_you_like_it manual__2021-09-23T01:31:50.261287+00:00 [queued]>
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1232} INFO -
--------------------------------------------------------------------------------
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1233} INFO - Starting attempt 2 of 2
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1234} INFO -
--------------------------------------------------------------------------------
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1253} INFO - Executing <Task(BranchPythonOperator): decide_if_you_like_it> on 2021-09-23 01:31:50.261287+00:00
[2021-09-23, 01:41:27 UTC] {standard_task_runner.py:52} INFO - Started process 8584 to run task
[2021-09-23, 01:41:27 UTC] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'branch', 'decide_if_you_like_it', 'manual__2021-09-23T01:31:50.261287+00:00', '--job-id', '216', '--raw', '--subdir', 'DAGS_FOLDER/branch.py', '--cfg-path', '/tmp/tmp82f81d8w', '--error-file', '/tmp/tmpco74n1v0']
[2021-09-23, 01:41:27 UTC] {standard_task_runner.py:77} INFO - Job 216: Subtask decide_if_you_like_it
[2021-09-23, 01:41:27 UTC] {logging_mixin.py:109} INFO - Running <TaskInstance: branch.decide_if_you_like_it manual__2021-09-23T01:31:50.261287+00:00 [running]> on host 992912b473f9
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1406} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=branch
AIRFLOW_CTX_TASK_ID=decide_if_you_like_it
AIRFLOW_CTX_EXECUTION_DATE=2021-09-23T01:31:50.261287+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-09-23T01:31:50.261287+00:00
[2021-09-23, 01:41:27 UTC] {python.py:152} INFO - Done. Returned value was: None
[2021-09-23, 01:41:27 UTC] {skipmixin.py:143} INFO - Following branch None
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1680} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1315, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1437, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1493, in _execute_task
result = execute_callable(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/operators/python.py", line 181, in execute
self.skip_all_except(context['ti'], branch)
File "/usr/local/lib/python3.9/site-packages/airflow/models/skipmixin.py", line 147, in skip_all_except
branch_task_ids = set(branch_task_ids)
TypeError: 'NoneType' object is not iterable
[2021-09-23, 01:41:27 UTC] {taskinstance.py:1261} INFO - Marking task as FAILED. dag_id=branch, task_id=decide_if_you_like_it, execution_date=20210923T013150, start_date=20210923T014127, end_date=20210923T014127
[2021-09-23, 01:41:27 UTC] {standard_task_runner.py:88} ERROR - Failed to execute job 216 for task decide_if_you_like_it
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 292, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1315, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1437, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1493, in _execute_task
result = execute_callable(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/operators/python.py", line 181, in execute
self.skip_all_except(context['ti'], branch)
File "/usr/local/lib/python3.9/site-packages/airflow/models/skipmixin.py", line 147, in skip_all_except
branch_task_ids = set(branch_task_ids)
TypeError: 'NoneType' object is not iterable
[2021-09-23, 01:41:27 UTC] {local_task_job.py:154} INFO - Task exited with return code 1
[2021-09-23, 01:41:27 UTC] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you expected to happen
`BranchPythonOperator` should create tasks that always succeed
### How to reproduce
Keep clicking manual executions of the dag called `branch` until you've triggered ten or so. At least one of them will fail with the error:
```
TypeError: 'NoneType' object is not iterable
```
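While the bug exists, a defensive pattern for the branch callable is to never return `None` (a sketch; the random choice stands in for the gist's real logic, and `do_nothing` is a hypothetical downstream task id):
```python
import random


def decide_if_you_like_it(**context):
    """Branch callable: whatever the selection logic is, fall back to an explicit
    no-op task id instead of handing None to skip_all_except."""
    choice = random.choice(["like_it", None])  # stand-in for the real decision logic
    return choice if choice is not None else "do_nothing"
```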
### Anything else
I've also seen this behavior in image: `quay.io/astronomer/ap-airflow-dev:2.1.4-buster-44000`
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18454 | https://github.com/apache/airflow/pull/18471 | 2f44f0b023eb8d322fef9b53b609e390e57c7d45 | c1578fbd80ae8be7bc73b20ec142df6c0e8b10c8 | "2021-09-23T02:16:00Z" | python | "2021-09-25T17:55:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,449 | ["airflow/migrations/versions/cc1e65623dc7_add_max_tries_column_to_task_instance.py"] | Silent failure when running `airflow db init` | ### Discussed in https://github.com/apache/airflow/discussions/18408
<div type='discussions-op-text'>
<sup>Originally posted by **ziplokk1** September 21, 2021</sup>
Hey folks,
I've come across an issue where, when attempting to run migrations against a clean database but with existing dags defined in my dags folder, there's a silent failure and the migration exits with the `airflow` command usage prompt saying "invalid command 'db'".
I got to digging and the culprit was the [following line](https://github.com/apache/airflow/blob/main/airflow/migrations/versions/cc1e65623dc7_add_max_tries_column_to_task_instance.py#L73).
I'm almost positive that this has to do with an erroneous `.py` file in my dags folder (I have yet to dig deeper into the issue), but this begged the question of why the migration is reading the dags to migrate existing data in the database in the first place. Shouldn't the migration simply migrate the data regardless of the state of the dag files themselves?
I was able to fix the issue by changing the line in the link above to `dagbag = DagBag(read_dags_from_db=True)` but I'd like to see if there was any context behind reading the dag file instead of the database before opening a PR.</div> | https://github.com/apache/airflow/issues/18449 | https://github.com/apache/airflow/pull/18450 | 716c6355224b1d40b6475c2fa60f0b967204fce6 | 7924249ce37decd5284427a916eccb83f235a4bb | "2021-09-22T23:06:41Z" | python | "2021-09-22T23:33:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,442 | ["airflow/providers/microsoft/azure/hooks/azure_batch.py", "docs/apache-airflow-providers-microsoft-azure/connections/azure_batch.rst", "tests/providers/microsoft/azure/hooks/test_azure_batch.py", "tests/providers/microsoft/azure/operators/test_azure_batch.py"] | Cannot retrieve Account URL in AzureBatchHook using the custom connection fields | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==2.0.0
### Apache Airflow version
2.1.0
### Operating System
Debian GNU/Linux
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
Related to #18124.
When attempting to connect to Azure Batch, the following exception is thrown even though the corresponding "Azure Batch Account URl" field is populated in the Azure Batch connection form:
```
airflow.exceptions.AirflowException: Extra connection option is missing required parameter: `account_url`
```
Airflow Connection:
![image](https://user-images.githubusercontent.com/48934154/134413597-260987e9-3035-4ced-a329-3af1129e4ae9.png)
### What you expected to happen
Airflow tasks should be able to connect to Azure Batch when properly populating the custom connection form. Or, at the very least, the above exception should not be thrown when an Azure Batch Account URL is provided in the connection.
### How to reproduce
1. Install the Microsoft Azure provider and create an Airflow Connection with the type Azure Batch
2. Provide values for at least "Azure Batch Account URl"
3. Finally execute a task which uses the AzureBatchHook
### Anything else
The get_required_param() method is being passed a value of "auth_method" from the `Extra` field in the connection form. The `Extra` field is no longer exposed in the connection form and would never be able to be provided.
```python
def _get_required_param(name):
"""Extract required parameter from extra JSON, raise exception if not found"""
value = conn.extra_dejson.get(name)
if not value:
raise AirflowException(f'Extra connection option is missing required parameter: `{name}`')
return value
...
batch_account_url = _get_required_param('account_url') or _get_required_param(
'extra__azure_batch__account_url'
)
```
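Because `_get_required_param` raises as soon as its first argument is missing, the `or` fallback to the prefixed field can never run for connections created through the form. A non-raising lookup order avoids that (a sketch only, not the provider's actual fix; `conn` and `AirflowException` are as in the snippet above):
```python
extras = conn.extra_dejson
batch_account_url = (
    extras.get('extra__azure_batch__account_url')  # key written by the custom form field
    or extras.get('account_url')                   # legacy key from a hand-written Extra JSON
)
if not batch_account_url:
    raise AirflowException('Azure Batch connection is missing required parameter: account_url')
```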
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18442 | https://github.com/apache/airflow/pull/18456 | df131471999578f4824a2567ce932a3a24d7c495 | 1d2924c94e38ade7cd21af429c9f451c14eba183 | "2021-09-22T20:07:53Z" | python | "2021-09-24T08:04:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,440 | ["airflow/www/views.py"] | Can't Run Tasks from UI when using CeleryKubernetesExecutor | ### Apache Airflow version
2.1.3
### Operating System
ContainerOS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.1.0
apache-airflow-providers-celery==2.0.0
apache-airflow-providers-cncf-kubernetes==2.0.2
apache-airflow-providers-databricks==2.0.1
apache-airflow-providers-docker==2.1.0
apache-airflow-providers-elasticsearch==2.0.2
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-google==5.0.0
apache-airflow-providers-grpc==2.0.0
apache-airflow-providers-hashicorp==2.0.0
apache-airflow-providers-http==2.0.0
apache-airflow-providers-imap==2.0.0
apache-airflow-providers-microsoft-azure==3.1.0
apache-airflow-providers-mysql==2.1.0
apache-airflow-providers-papermill==2.0.1
apache-airflow-providers-postgres==2.0.0
apache-airflow-providers-redis==2.0.0
apache-airflow-providers-sendgrid==2.0.0
apache-airflow-providers-sftp==2.1.0
apache-airflow-providers-slack==4.0.0
apache-airflow-providers-sqlite==2.0.0
apache-airflow-providers-ssh==2.1.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Official Helm Chart on GKE using CeleryKubernetesExecutor.
### What happened
I'm unable to Run Tasks from the UI when using CeleryKubernetesExecutor. It just shows the error "Only works with the Celery or Kubernetes executors, sorry".
<img width="1792" alt="Captura de Pantalla 2021-09-22 a la(s) 17 00 30" src="https://user-images.githubusercontent.com/2586758/134413668-5b638941-bb50-44ff-9349-d53326d1f489.png">
### What you expected to happen
It should be able to run Tasks from the UI when using CeleryKubernetesExecutor, as it only is a "selective superset" (based on queue) of both the Celery and Kubernetes executors.
### How to reproduce
1. Run an Airflow instance with the CeleryKubernetes executor.
2. Go to any DAG
3. Select a Task
4. Press the "Run" button.
5. Check the error.
### Anything else
I will append an example PR as a comment.
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18440 | https://github.com/apache/airflow/pull/18441 | 32947a464d174517da942a84d62dd4d0b1ff4b70 | dc45f97cbb192882d628428bd6dd3ccd32128537 | "2021-09-22T20:03:30Z" | python | "2021-10-07T00:30:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,436 | ["airflow/www/templates/airflow/variable_list.html", "airflow/www/templates/airflow/variable_show.html", "airflow/www/templates/airflow/variable_show_widget.html", "airflow/www/views.py", "airflow/www/widgets.py"] | Add a 'View' button for Airflow Variables in the UI | ### Description
A 'view' (eye) button is added to the Airflow Variables in the UI. This button shall open the selected Variable for viewing (but not for editing), with the actual json formatting, in either a new page or a modal view.
### Use case/motivation
I store variables in the json format, most of them have > 15 attributes. To view the content of the variables, I either have to open the Variable in editing mode, which I find dangerous since there is a chance I (or another user) accidentally delete information; or I have to copy the variable as displayed in the Airflow Variables page list and then pass it through a json formatter to get the correct indentation. I would like to have a way to (safely) view the variable in it's original format.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18436 | https://github.com/apache/airflow/pull/21342 | 5276ef8ad9749b2aaf4878effda513ee378f4665 | f0bbb9d1079e2660b4aa6e57c53faac84b23ce3d | "2021-09-22T16:29:17Z" | python | "2022-02-28T01:41:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,392 | ["airflow/jobs/triggerer_job.py", "tests/jobs/test_triggerer_job.py"] | TriggerEvent fires, and then defers a second time (doesn't fire a second time though). | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
debian buster
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
Helm info:
```
helm repo add astronomer https://helm.astronomer.io
cat <<- 'EOF' | helm install airflow astronomer/airflow --namespace airflow -f -
airflowVersion: 2.2.0
defaultAirflowRepository: myrepo
defaultAirflowTag: myimagetag
executor: KubernetesExecutor
images:
airflow:
repository: myrepo
pullPolicy: Always
pod_template:
repository: myrepo
pullPolicy: Always
triggerer:
serviceAccount:
create: True
EOF
```
Dockerfile:
```
FROM quay.io/astronomer/ap-airflow-dev:2.2.0-buster-43897
COPY ./dags/ ./dags/
```
[the dag](https://gist.github.com/MatrixManAtYrService/8d1d0c978465aa249e8bd0498cc08031#file-many_triggers-py)
### What happened
Six tasks deferred for a random (but deterministic) amount of time, and the triggerer fired six events. Two of those events then deferred a second time, which wasn't necessary because they had already fired. Looks like a race condition.
Here are the triggerer logs: https://gist.github.com/MatrixManAtYrService/8d1d0c978465aa249e8bd0498cc08031#file-triggerer-log-L29
Note that most deferrals only appear once, but the ones for "Aligator" and "Bear" appear twice.
### What you expected to happen
Six tasks deferred, six tasks fires, no extra deferrals.
### How to reproduce
Run the dag in [this gist](https://gist.github.com/MatrixManAtYrService/8d1d0c978465aa249e8bd0498cc08031) with the kubernetes executor (be sure there's a triggerer running). Notice that some of the triggerer log messages appear nore than once. Those represent superfluous computation.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18392 | https://github.com/apache/airflow/pull/20699 | 0ebd55e0f8fc7eb26a2b35b779106201ffe88f55 | 16b8c476518ed76e3689966ec4b0b788be935410 | "2021-09-20T21:16:04Z" | python | "2022-01-06T23:16:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,369 | ["airflow/www/static/js/graph.js"] | If the DAG contains Tasks Goups, Graph view does not properly display tasks statuses | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
Linux/Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When upgrading to latest release (2.1.4) from 2.1.3, Graph View does not properly display tasks statuses.
See:
![image](https://user-images.githubusercontent.com/12841394/133980917-678a447b-5678-4225-b172-93df17257661.png)
All these tasks are in succeed or failed status in the Tree View, for the same Dag Run.
### What you expected to happen
Graph View to display task statuses.
### How to reproduce
As I can see, DAGs without tasks groups are not affected. So, probably just create a DAG with a task group. I can reproduce this with in local with the official docker image (python 3.8).
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18369 | https://github.com/apache/airflow/pull/18607 | afec743a60fa56bd743a21e85d718e685cad0778 | 26680d4a274c4bac036899167e6fea6351e73358 | "2021-09-20T09:27:16Z" | python | "2021-09-29T16:30:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,363 | ["airflow/api/common/experimental/mark_tasks.py", "airflow/api_connexion/endpoints/dag_run_endpoint.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | http patched an in-progress dagRun to state=failed, but the change was clobbered by a subsequent TI state change. | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
debian buster
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start` image: [quay.io/astronomer/ap-airflow-dev:2.2.0-buster-43897](https://quay.io/repository/astronomer/ap-airflow-dev?tab=tags)
### What happened
I patched a dagRun's state to `failed` while it was running.
```
curl -i -X PATCH "http://localhost:8080/api/v1/dags/steady_task_stream/dagRuns/scheduled__2021-09-18T00:00:00+00:00" -H 'Content-Type: application/json' --user 'admin:admin' \
-d '{ "state": "failed" }'
```
For a brief moment, I saw my change reflected in the UI. Then auto-refresh toggled itself to disabled. When I re-enabled it, the dag_run's state was back to "running".
![new_api](https://user-images.githubusercontent.com/5834582/133952242-7758b1f5-1b1f-4e3d-a0f7-789a8230b02c.gif)
Presumably this happened because the new API **only** affects the dagRun. The tasks keep running, so they updated dagRun state and overwrite the patched value.
### What you expected to happen
The UI will not let you set a dagrun's state without also setting TI states. Here's what it looks like for "failed":
![ui](https://user-images.githubusercontent.com/5834582/133952960-dca90e7b-3cdd-4832-a23a-b3718ff5ad60.gif)
Notice that it forced me to fail a task instance. This is what prevents the dag from continuing onward and clobbering my patched value. It's similar when you set it to "success", except every child task gets set to "success".
At the very least, *if* the API lets me patch the dagrun state *then* it should also let me patch the TI states. This way an API user can patch both at once and prevent the clobber. (`/api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}` doesn't yet support the `patch` verb, so I can't currently do this).
Better still would be for the API to make it easy for me to do what the UI does. For instance, patching a dagrun with:
```
{ "state": "failed" }
```
Should return code 409 and a message like:
> To patch steady_task_stream.scheduled__2021-09-18T00:00:00+00:00.state to success, you must also make the following changes: { "task_1":"success", "task_2":"success", ... }. Supply "update_tasks": "true" to do this.
And updating it with:
```
{ "state": "failed",
"update_tasks": "true"}
```
Should succeed and provide feedback about which state changes occurred.
### How to reproduce
Any dag will do here, but here's the one I used:
https://gist.github.com/MatrixManAtYrService/654827111dc190407a3c81008da6ee16
Be sure to run it in an airflow that has https://github.com/apache/airflow/pull/17839, which introduced the patch functionality that makes it possible to reach this bug.
- Unpause the dag
- Make note of the execution date
- Delete the dag and wait for it to repopulate in the UI (you may need to refresh).
- Prepare this command in your terminal, you may need to tweak the dag name and execution date to match your scenario:
```
curl -i -X PATCH "http://localhost:8080/api/v1/dags/steady_task_stream/dagRuns/scheduled__2021-09-18T00:00:00+00:00" -H 'Content-Type: application/json' --user 'admin:admin' \
-d '{ "state": "failed" }'
```
- Unpause the dag again
- Before it completes, run the command to patch its state to "failed"
Note that as soon as the next task completes, your patched state has been overwritten and is now "running" or maybe "success"
### Anything else
@ephraimbuddy, @kaxil mentioned that you might be interested in this.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18363 | https://github.com/apache/airflow/pull/18370 | 13a558d658c3a1f6df4b2ee5d894fee65dc103db | 56d1765ea01b25b42947c3641ef4a64395daec5e | "2021-09-20T03:28:12Z" | python | "2021-09-22T20:16:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,333 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | No Mass Delete Option for Task Instances Similar to What DAGRuns Have in UI | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
macOS Big Sur 11.3.1
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
default Astronomer deployment with some test DAGs
### What happened
In the UI for DAGRuns there are checkboxes that allow multiple DAGRuns to be selected.
![image](https://user-images.githubusercontent.com/89415310/133841290-94cda771-9bb3-4677-8530-8c2861525719.png)
Within the Actions menu on this same view, there is a Delete option which allows multiple DAGRuns to be deleted at the same time.
![image](https://user-images.githubusercontent.com/89415310/133841362-3c7fc81d-a823-4f41-8b1d-64be425810ce.png)
Task instance view on the UI does not offer the same option, even though Task Instances can be individually deleted with the trash can button.
![image](https://user-images.githubusercontent.com/89415310/133841442-3fee0930-a6bb-4035-b172-1d10b85f3bf6.png)
### What you expected to happen
I expect that the Task Instances can also be bulk deleted, in the same way that DAGRuns can be.
### How to reproduce
Open up Task Instance and DAGRun views from the Browse tab and compare the options in the Actions dropdown menus.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18333 | https://github.com/apache/airflow/pull/18438 | db2d73d95e793e63e152692f216deec9b9d9bc85 | 932a2254064a860d614ba2b7c10c7cb091605c7d | "2021-09-17T19:12:15Z" | python | "2021-09-30T17:19:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,329 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/executors/kubernetes_executor.py", "airflow/kubernetes/kubernetes_helper_functions.py", "tests/executors/test_kubernetes_executor.py"] | Add task name, DAG name, try_number, and run_id to all Kubernetes executor logs | ### Description
Every line in the scheduler log pertaining to a particular task instance should be stamped with that task instance's identifying information. In the Kubernetes executor, some lines are stamped only with the pod name instead.
### Use case/motivation
When trying to trace the lifecycle of a task in the kubernetes executor, you currently must search first for the name of the pod created by the task, then search for the pod name in the logs. This means you need to be pretty familiar with the structure of the scheduler logs in order to search effectively for the lifecycle of a task that had a problem.
Some log statements like `Attempting to finish pod` do have the annotations for the pod, which include dag name, task name, and run_id, but others do not. For instance, `Event: podname-a2f2c1ac706 had an event of type DELETED` has no such annotations.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18329 | https://github.com/apache/airflow/pull/29929 | ffc1dbb4acc1448a9ee6576cb4d348a20b209bc5 | 64b0872d92609e2df465989062e39357eeef9dab | "2021-09-17T13:41:38Z" | python | "2023-05-25T08:40:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,283 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "chart/templates/_helpers.yaml", "chart/templates/secrets/result-backend-connection-secret.yaml", "chart/values.yaml", "tests/charts/test_basic_helm_chart.py", "tests/charts/test_rbac.py", "tests/charts/test_result_backend_connection_secret.py"] | db+ string in result backend but not metadata secret | ### Official Helm Chart version
1.1.0 (latest released)
### Apache Airflow version
2.1.3 (latest released)
### Kubernetes Version
1.21
### Helm Chart configuration
data:
metadataSecretName: "airflow-metadata"
resultBackendSecretName: "airflow-result-backend"
### Docker Image customisations
_No response_
### What happened
If we only supply 1 secret with
```
connection: postgresql://airflow:password@postgres.rds:5432/airflow?sslmode=disable
```
To use for both metadata and resultBackendConnection then we end up with a connection error because
resultBackendConnection expects the string to be formatted like
```
connection: db+postgresql://airflow:password@postgres.rds:5432/airflow?sslmode=disable
```
from what i can tell
### What you expected to happen
I'd expect to be able to use the same secret for both using the same format if they are using the same connection.
### How to reproduce
Make a secret structured like above to look like the metadataConnection auto-generated secret.
use that same secret for the result backend.
deploy.
### Anything else
Occurs always.
To get around this currently, we make 2 secrets, one with just the db+ prepended.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18283 | https://github.com/apache/airflow/pull/24496 | 5f67cc0747ea661b703e4c44c77e7cd005cb9588 | 9312b2865a53cfcfe637605c708cf68d6df07a2c | "2021-09-15T22:16:35Z" | python | "2022-06-23T15:34:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,267 | ["airflow/providers/amazon/aws/transfers/gcs_to_s3.py", "tests/providers/amazon/aws/transfers/test_gcs_to_s3.py"] | GCSToS3Operator : files already uploaded because of wrong prefix | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian 10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### What happened
The operator airflow.providers.amazon.aws.transfers.gcs_to_s3.GCSToS3Operator is buggy because if the argument replace=False and the task is retried then the task will always fail :
```
Traceback (most recent call last):
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_task
result = task_copy.execute(context=context)
File "/home/airflow/gcs/plugins/birdz/operators/gcs_to_s3_operator.py", line 159, in execute
cast(bytes, file_bytes), key=dest_key, replace=self.replace, acl_policy=self.s3_acl_policy
File "/usr/local/lib/airflow/airflow/providers/amazon/aws/hooks/s3.py", line 61, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/lib/airflow/airflow/providers/amazon/aws/hooks/s3.py", line 90, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/usr/local/lib/airflow/airflow/providers/amazon/aws/hooks/s3.py", line 608, in load_bytes
self._upload_file_obj(file_obj, key, bucket_name, replace, encrypt, acl_policy)
File "/usr/local/lib/airflow/airflow/providers/amazon/aws/hooks/s3.py", line 653, in _upload_file_obj
raise ValueError(f"The key {key} already exists.")
```
Furthermore, I noted that the argument "prefix", which corresponds to the GCS prefix to search in the bucket, is kept in the destination S3 key.
### What you expected to happen
I expect the operator to print "In sync, no files needed to be uploaded to S3" if a retry is done in case of replace=false.
And I expect to be able to keep or not keep the GCS prefix in the destination S3 key. This is already handled by other transfer operators like 'airflow.providers.google.cloud.transfers.gcs_to_sftp.GCSToSFTPOperator', which implement a keep_directory_structure argument to keep the folder structure between source and destination.
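A sketch of the operator usage that hits both reported behaviours (bucket names and prefix are illustrative; parameter names as I understand them in the 2.x amazon provider):
```python
from airflow.providers.amazon.aws.transfers.gcs_to_s3 import GCSToS3Operator

sync_to_s3 = GCSToS3Operator(
    task_id="sync_to_s3",
    bucket="my-gcs-bucket",                    # GCS source bucket (illustrative)
    prefix="exports/2021/09/",                 # this GCS prefix also ends up inside the S3 key
    dest_s3_key="s3://my-s3-bucket/landing/",
    replace=False,                             # a retry after a partial upload then raises
                                               # "The key ... already exists." instead of skipping
)
```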
### How to reproduce
Retry the operator with replace=False
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18267 | https://github.com/apache/airflow/pull/22071 | 184a46fc93cf78e6531f25d53aa022ee6fd66496 | c7286e53064d717c97807f7ccd6cad515f88fe52 | "2021-09-15T11:34:01Z" | python | "2022-03-08T14:11:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,245 | ["airflow/settings.py", "airflow/utils/state.py", "docs/apache-airflow/howto/customize-ui.rst", "tests/www/views/test_views_home.py"] | Deferred status color not distinct enough | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
Mac OSX 11.5.2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
In the tree view, the status color for deferred is very difficult to distinguish from up_for_reschedule and is generally not very distinct. I can’t imagine it’s good for people with colorblindness either.
### What you expected to happen
I expect the status colors to form a distinct palette. While there is already some crowding in the greens, I don't want to see it get worse with the new status.
### How to reproduce
Make a deferred task and view in tree view.
### Anything else
I might suggest something in the purple range like [BlueViolet #8A2BE2](https://www.color-hex.com/color/8a2be2)
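For anyone who wants the change locally in the meantime, the UI palette can be overridden from `airflow_local_settings.py` (a sketch under the assumption that a "deferred" key is honoured there in 2.2.0; if partial overrides are not merged with the defaults, list the full mapping instead):
```python
# airflow_local_settings.py
STATE_COLORS = {
    "deferred": "blueviolet",  # the purple suggested above, instead of the crowded greens
}
```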
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18245 | https://github.com/apache/airflow/pull/18414 | cfc2e1bb1cbf013cae065526578a4e8ff8c18362 | e351eada1189ed50abef8facb1036599ae96399d | "2021-09-14T14:47:27Z" | python | "2021-10-06T19:45:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,242 | ["airflow/www/templates/airflow/task.html", "airflow/www/views.py", "tests/www/views/test_views_task_norun.py"] | Rendered Template raise general error if TaskInstance not exists | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
Linux
### Versions of Apache Airflow Providers
n/a
### Deployment
Docker-Compose
### Deployment details
Docker Image: apache/airflow:2.2.0b1-python3.8
```shell
$ docker version
Client:
Version: 20.10.8
API version: 1.41
Go version: go1.16.6
Git commit: 3967b7d28e
Built: Wed Aug 4 10:59:01 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.8
API version: 1.41 (minimum version 1.12)
Go version: go1.16.6
Git commit: 75249d88bc
Built: Wed Aug 4 10:58:48 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.5.5
GitCommit: 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0.m
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
```
```shell
$ docker-compose version
docker-compose version 1.29.2, build unknown
docker-py version: 5.0.2
CPython version: 3.9.6
OpenSSL version: OpenSSL 1.1.1l 24 Aug 2021
```
### What happened
The Rendered Template endpoint raises an error if no Task Instance exists yet (e.g. a fresh DAG):
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/auth.py", line 51, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 72, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1055, in rendered_templates
ti = dag_run.get_task_instance(task_id=task.task_id, session=session)
AttributeError: 'NoneType' object has no attribute 'get_task_instance'
```
### What you expected to happen
I understand that there is probably no way for the Airflow webserver to render this value anymore.
So basically it could be replaced by the same warning/error that is shown when we try to access XCom for a TaskInstance which does not exist (Task Instance -> XCom -> redirect to /home -> a "Task not found" message at the top of the page).
![image](https://user-images.githubusercontent.com/85952209/133264002-21c3ff4a-8273-4e24-b00e-cf91639c58b4.png)
However, it would be nice to have the same behavior as in 2.1 (and probably earlier 2.x releases).
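A minimal sketch of what I mean, reusing the pattern the traceback above points at (`dag_run.get_task_instance`) and the usual flash-and-redirect handling in `airflow/www/views.py` — this is a fragment of a view body, not the exact patch:
```python
from flask import flash, redirect, url_for

# tolerate a missing TaskInstance instead of letting the view raise
ti = dag_run.get_task_instance(task_id=task.task_id, session=session) if dag_run else None
if ti is None:
    flash(f"Task [{dag_id}.{task_id}] doesn't seem to exist at the moment", "error")
    return redirect(url_for('Airflow.index'))
```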
### How to reproduce
1. Create new DAG
2. Go to Graph View
3. Try to access Rendered Tab
4. Get an error
![image](https://user-images.githubusercontent.com/85952209/133263628-80e4a7f9-ac82-48d5-9aea-84df1c64a240.png)
### Anything else
Every time.
Sample url: `http://127.0.0.1:8081/rendered-templates?dag_id=sample_dag&task_id=sample_task&execution_date=2021-09-14T10%3A10%3A23.484252%2B00%3A00`
Discuss with @uranusjr
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18242 | https://github.com/apache/airflow/pull/18244 | ed10edd20b3c656fe1cd9b1c17468c0026f075e2 | c313febf63ba534c97955955273d3faec583cfd9 | "2021-09-14T13:15:09Z" | python | "2021-09-16T03:54:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,237 | ["airflow/providers/microsoft/azure/hooks/wasb.py"] | Azure wasb hook is creating a container when getting a blob client | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==3.1.1
### Apache Airflow version
2.1.2
### Operating System
OSX
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
The `_get_blob_client` method in the wasb hook is trying to create a container.
### What you expected to happen
I expect to get a container client of an existing container
### How to reproduce
_No response_
### Anything else
I believe the fix is minor, e.g.
```
def _get_blob_client(self, container_name: str, blob_name: str) -> BlobClient:
"""
Instantiates a blob client
:param container_name: The name of the blob container
:type container_name: str
:param blob_name: The name of the blob. This needs not be existing
:type blob_name: str
"""
container_client = self.create_container(container_name)
return container_client.get_blob_client(blob_name)
```
should be changed to
```
def _get_blob_client(self, container_name: str, blob_name: str) -> BlobClient:
"""
Instantiates a blob client
:param container_name: The name of the blob container
:type container_name: str
:param blob_name: The name of the blob. This needs not be existing
:type blob_name: str
"""
container_client = self._get_container_client(container_name)
return container_client.get_blob_client(blob_name)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18237 | https://github.com/apache/airflow/pull/18287 | f8ba4755ae77f3e08275d18e5df13c368363066b | 2dac083ae241b96241deda20db7725e2fcf3a93e | "2021-09-14T10:35:37Z" | python | "2021-09-16T16:59:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,225 | ["setup.cfg", "setup.py", "tests/core/test_providers_manager.py"] | tests/core/test_providers_manager.py::TestProviderManager::test_hooks broken on 3.6 | ### Apache Airflow version
main (development)
### Operating System
Any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### What happened
`tests/core/test_providers_manager.py::TestProviderManager::test_hooks` is broken on 3.6 by #18209 since newer `importlib-resources` versions use a different implementation and the mocks no longer work. This was actually visible before merging; all and only 3.6 checks in the PR failed. Let’s be more careful identifying CI failure patterns in the future 🙂
Not exactly sure how to fix yet. I believe the breaking changes were introduced in [importlib-resources 5.2](https://github.com/python/importlib_resources/blob/v5.2.2/CHANGES.rst#v520), but restricting `<5.2` is not a long-term fix since the same version is also in the Python 3.10 stdlib and will bite us again very soon.
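One possible direction (hedged — not a decided fix): stop mocking the import machinery that `importlib-resources` walks through, and stub the schema read that `providers_manager` actually performs, as shown in the traceback below:
```python
from unittest import mock

# Sketch only: patch the read_text call used by providers_manager instead of
# relying on importlib-resources internals staying stable across versions.
@mock.patch('airflow.providers_manager.importlib.import_module')
@mock.patch(
    'airflow.providers_manager.importlib_resources.read_text',
    return_value='{"type": "object"}',  # assumption: a trivial schema is enough for this test
)
def test_hooks(self, mock_read_text, mock_import_module):
    ...
```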
### What you expected to happen
_No response_
### How to reproduce
```
$ ./breeze tests -- tests/core/test_providers_manager.py::TestProviderManager::test_hooks
...
tests/core/test_providers_manager.py F [100%]
================================================== FAILURES ==================================================
_______________________________________ TestProviderManager.test_hooks _______________________________________
self = <test_providers_manager.TestProviderManager testMethod=test_hooks>
mock_import_module = <MagicMock name='import_module' id='139679504564520'>
@patch('airflow.providers_manager.importlib.import_module')
def test_hooks(self, mock_import_module):
# Compat with importlib_resources
mock_import_module.return_value.__spec__ = Mock()
with pytest.warns(expected_warning=None) as warning_records:
with self._caplog.at_level(logging.WARNING):
> provider_manager = ProvidersManager()
tests/core/test_providers_manager.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/providers_manager.py:242: in __init__
self._provider_schema_validator = _create_provider_info_schema_validator()
airflow/providers_manager.py:92: in _create_provider_info_schema_validator
schema = json.loads(importlib_resources.read_text('airflow', 'provider_info.schema.json'))
/usr/local/lib/python3.6/site-packages/importlib_resources/_legacy.py:46: in read_text
with open_text(package, resource, encoding, errors) as fp:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
package = 'airflow', resource = 'provider_info.schema.json', encoding = 'utf-8', errors = 'strict'
def open_text(
package: Package,
resource: Resource,
encoding: str = 'utf-8',
errors: str = 'strict',
) -> TextIO:
"""Return a file-like object opened for text reading of the resource."""
> return (_common.files(package) / _common.normalize_path(resource)).open(
'r', encoding=encoding, errors=errors
)
E TypeError: unsupported operand type(s) for /: 'Mock' and 'str'
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18225 | https://github.com/apache/airflow/pull/18228 | c313febf63ba534c97955955273d3faec583cfd9 | 21d53ed2ab4e246890da56698d1146a2287b932d | "2021-09-14T06:32:07Z" | python | "2021-09-16T09:11:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,216 | ["chart/UPDATING.rst", "chart/templates/NOTES.txt", "chart/templates/flower/flower-ingress.yaml", "chart/templates/webserver/webserver-ingress.yaml", "chart/tests/test_ingress_flower.py", "chart/tests/test_ingress_web.py", "chart/values.schema.json", "chart/values.yaml"] | Helm chart ingress support multiple hostnames | ### Description
When the official airflow helm chart is used to install airflow in k8s, I want to be able to access the airflow UI from multiple hostnames. Currently, given how the ingress resource is structured, it doesn't seem possible, and modifying it needs to take backwards-compatibility concerns into account.
### Use case/motivation
Given my company's DNS structure, I need to be able to access the airflow UI running in kubernetes from multiple hostnames.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18216 | https://github.com/apache/airflow/pull/18257 | 4308a8c364d410ea8c32d2af7cc8ca3261054696 | 516d6d86064477b1e2044a92bffb33bf9d7fb508 | "2021-09-13T21:01:13Z" | python | "2021-09-17T16:53:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,204 | ["airflow/api_connexion/endpoints/user_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_user_endpoint.py", "tests/test_utils/api_connexion_utils.py"] | POST /api/v1/users fails with exception | ### Apache Airflow version
main (development)
### Operating System
From Astronomer’s QA team
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
When adding a new user, the following exception is emitted:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.9/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.9/site-packages/airflow/_vendor/connexion/decorators/decorator.py", line 48, in wrapper
response = function(request)
File "/usr/local/lib/python3.9/site-packages/airflow/_vendor/connexion/decorators/uri_parsing.py", line 144, in wrapper
response = function(request)
File "/usr/local/lib/python3.9/site-packages/airflow/_vendor/connexion/decorators/validation.py", line 184, in wrapper
response = function(request)
File "/usr/local/lib/python3.9/site-packages/airflow/_vendor/connexion/decorators/response.py", line 103, in wrapper
response = function(request)
File "/usr/local/lib/python3.9/site-packages/airflow/_vendor/connexion/decorators/parameter.py", line 121, in wrapper
return function(**kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/api_connexion/security.py", line 47, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/api_connexion/endpoints/user_endpoint.py", line 105, in post_user
user.roles.extend(roles_to_add)
AttributeError: 'bool' object has no attribute 'roles'
```
The immediate cause of this exception is that F.A.B. returns `False` when it fails to add a new user. The problem, however, is _why_ exactly it failed. This is the payload used:
```json
{
"username": "username6",
"password": "password1",
"email": "username5@example.com",
"first_name": "user2",
"last_name": "test1",
"roles":[{"name":"Admin"},{"name":"Viewer"}]
}
```
This went through validation, therefore we know
1. The POST-ing user has permission to create a new user.
2. The format is correct (including the nested roles).
3. There is not already an existing `username6` in the database.
4. All listed roles exist.
(All these are already covered by unit tests.)
Further complicating the issue, F.A.B.'s security manager swallows the exception when this happens and only logs it on the server. And we're having trouble locating that line of log. It's quite difficult to diagnose further, so I'm posting this hoping someone has better luck reproducing this.
I will submit a fix to correct the immediate issue, making the API emit 500 with something like “Failed to create user for unknown reason” to make the failure _slightly_ less confusing.
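A hedged sketch of the guard I have in mind for `post_user` (names simplified; assuming the api_connexion `Unknown` exception, which maps to HTTP 500, accepts a detail message):
```python
from airflow.api_connexion.exceptions import Unknown

user = security_manager.add_user(...)  # F.A.B. returns False instead of raising on failure
if not user:
    raise Unknown(detail="Failed to create user for unknown reason")
user.roles.extend(roles_to_add)
```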
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18204 | https://github.com/apache/airflow/pull/18224 | a9776d36ca46d57a8da2fe931ce91a2847322345 | 7a1912437ca8dddf668eaa4ca2448dc958e77697 | "2021-09-13T14:16:43Z" | python | "2021-09-14T10:28:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,188 | ["airflow/www/templates/airflow/task.html", "airflow/www/views.py", "tests/www/views/test_views_task_norun.py"] | NoResultFound raises in UI when access to Task Instance Details | ### Apache Airflow version
2.2.0b1 (beta snapshot)
### Operating System
Linux (`5.13.13-zen1-1-zen #1 ZEN SMP PREEMPT Thu, 26 Aug 2021 19:14:35 +0000 x86_64 GNU/Linux`)
### Versions of Apache Airflow Providers
<details><summary> pip freeze | grep apache-airflow-providers</summary>
apache-airflow-providers-amazon==2.1.0<br>
apache-airflow-providers-celery==2.0.0<br>
apache-airflow-providers-cncf-kubernetes==2.0.1<br>
apache-airflow-providers-docker==2.1.0<br>
apache-airflow-providers-elasticsearch==2.0.2<br>
apache-airflow-providers-ftp==2.0.0<br>
apache-airflow-providers-google==5.0.0<br>
apache-airflow-providers-grpc==2.0.0<br>
apache-airflow-providers-hashicorp==2.0.0<br>
apache-airflow-providers-http==2.0.0<br>
apache-airflow-providers-imap==2.0.0<br>
apache-airflow-providers-microsoft-azure==3.1.0<br>
apache-airflow-providers-mysql==2.1.0<br>
apache-airflow-providers-postgres==2.0.0<br>
apache-airflow-providers-redis==2.0.0<br>
apache-airflow-providers-sendgrid==2.0.0<br>
apache-airflow-providers-sftp==2.1.0<br>
apache-airflow-providers-slack==4.0.0<br>
apache-airflow-providers-sqlite==2.0.0<br>
apache-airflow-providers-ssh==2.1.0<br>
</details>
### Deployment
Docker-Compose
### Deployment details
Docker Image: apache/airflow:2.2.0b1-python3.8
```shell
$ docker version
Client:
Version: 20.10.8
API version: 1.41
Go version: go1.16.6
Git commit: 3967b7d28e
Built: Wed Aug 4 10:59:01 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.8
API version: 1.41 (minimum version 1.12)
Go version: go1.16.6
Git commit: 75249d88bc
Built: Wed Aug 4 10:58:48 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.5.5
GitCommit: 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0.m
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
```
```shell
$ docker-compose version
docker-compose version 1.29.2, build unknown
docker-py version: 5.0.2
CPython version: 3.9.6
OpenSSL version: OpenSSL 1.1.1l 24 Aug 2021
```
### What happened
I've tested our current project for compatibility, to migrate to 2.2.x in the future.
As soon as I tried to access Task Instance Details or Rendered Template from the UI for a DAG _which never started before_, I got this error:
```
Python version: 3.8.11
Airflow version: 2.2.0b1
Node: e70bca1d41d3
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/auth.py", line 51, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 72, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1368, in task
session.query(TaskInstance)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3500, in one
raise orm_exc.NoResultFound("No row was found for one()")
sqlalchemy.orm.exc.NoResultFound: No row was found for one()
```
### What you expected to happen
On the previous version (Apache Airflow 2.1.2) it showed the information even if the DAG never started.
If this is new behavior of Airflow for Task Instances in the UI, it would be nice to get an error message specific to this case rather than a generic error.
### How to reproduce
1. Use Apache Airflow: 2.2.0b1
2. Create new DAG
3. In the webserver, try to access Task Instance Details (`/task` endpoint) or Rendered Template (`rendered-templates` endpoint)
### Anything else
As soon as the DAG has started at least once, these errors go away when accessing Task Instance Details or Rendered Template for its tasks.
Seems like these errors started happening after #17719.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18188 | https://github.com/apache/airflow/pull/18244 | ed10edd20b3c656fe1cd9b1c17468c0026f075e2 | c313febf63ba534c97955955273d3faec583cfd9 | "2021-09-12T12:43:32Z" | python | "2021-09-16T03:54:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,155 | ["tests/core/test_providers_manager.py"] | Upgrade `importlib-resources` version | ### Description
The constraint for `importlib-resources` pins it to [v1.5.0](https://github.com/python/importlib_resources/tree/v1.5.0), which is over a year old. For compatibility's sake (for instance with something like Datapane) I would suggest upgrading it.
### Use case/motivation
Upgrade an old dependency to keep the code up to date.
### Related issues
Not that I am aware of, maybe somewhat #12120, or #15991.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18155 | https://github.com/apache/airflow/pull/18215 | dd313a57721918577b6465cd00d815a429a8f240 | b7f366cd68b3fed98a4628d5aa15a1e8da7252a3 | "2021-09-10T19:40:00Z" | python | "2021-09-13T23:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,146 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Deferred TI's `next_method` and `next_kwargs` not cleared on retries | ### Apache Airflow version
main (development)
### Operating System
macOS 11.5.2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
If your first try fails and you have retries enabled, the subsequent tries skip right to the final `next_method(**next_kwargs)` instead of starting with `execute` again.
### What you expected to happen
Just as we reset things like the start date, we should wipe `next_method` and `next_kwargs` so we can retry the task from the beginning.
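A minimal sketch of the expected behavior — assuming this happens wherever the task instance's per-try state is reset before a retry:
```python
# Clear the deferral state alongside the other fields reset for a new try,
# so the retry starts from execute() instead of resuming the old trigger callback.
ti.next_method = None
ti.next_kwargs = None
```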
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18146 | https://github.com/apache/airflow/pull/18210 | 9c8f7ac6236bdddd979bb6242b6c63003fae8490 | 9d497729184711c33630dec993b88603e0be7248 | "2021-09-10T14:15:01Z" | python | "2021-09-13T18:14:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,136 | ["chart/templates/rbac/security-context-constraint-rolebinding.yaml", "chart/tests/test_scc_rolebinding.py", "chart/values.schema.json", "chart/values.yaml", "docs/helm-chart/production-guide.rst"] | Allow airflow standard images to run in openshift utilising the official helm chart | ### Description
The Airflow helm chart is very powerful and configurable; however, in order to run it in an on-premises OpenShift 4 environment, one must manually create security context constraints or extra RBAC rules in order to permit the pods to start with arbitrary user ids.
### Use case/motivation
I would like to be able to run Airflow using the provided helm chart in an on-premises OpenShift 4 installation.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18136 | https://github.com/apache/airflow/pull/18147 | 806e4bce9bf827869b4066a14861f791c46179c8 | 45e8191f5c07a1db83c54cf892767ae71a295ba0 | "2021-09-10T10:21:30Z" | python | "2021-09-28T16:38:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,124 | ["airflow/providers/microsoft/azure/hooks/adx.py", "tests/providers/microsoft/azure/hooks/test_adx.py"] | Cannot retrieve Authentication Method in AzureDataExplorerHook using the custom connection fields | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==1!3.1.1
### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian GNU/Linux
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
When attempting to connect to Azure Data Explorer, the following exception is thrown even though the corresponding "Authentication Method" field is populated in the Azure Data Explorer connection form:
```
airflow.exceptions.AirflowException: Extra connection option is missing required parameter: `auth_method`
```
Airflow Connection:
![image](https://user-images.githubusercontent.com/48934154/132747672-25cb79b2-4589-4560-bb7e-4508de7d8659.png)
### What you expected to happen
Airflow tasks should be able to connect to Azure Data Explorer when properly populating the custom connection form. Or, at the very least, the above exception should not be thrown when an Authentication Method is provided in the connection.
### How to reproduce
1. Install the Microsoft Azure provider and create an Airflow Connection with the type `Azure Data Explorer`.
2. Provide all values for "Auth Username", "Auth Password", "Tenant ID", and "Authentication Method".
3. Finally execute a task which uses the `AzureDataExplorerHook`
### Anything else
Looks like there are a few issues in the `AzureDataExplorerHook`:
- The `get_required_param()` method is being passed a value of "auth_method" from the `Extras` field in the connection form. The `Extras` field is no longer exposed in the connection form and would never be able to be provided.
```python
def get_required_param(name: str) -> str:
"""Extract required parameter from extra JSON, raise exception if not found"""
value = conn.extra_dejson.get(name)
if not value:
raise AirflowException(f'Extra connection option is missing required parameter: `{name}`')
return value
auth_method = get_required_param('auth_method') or get_required_param(
'extra__azure_data_explorer__auth_method'
)
```
- The custom fields mapping for "Tenant ID" and "Authentication Method" are switched so even if these values are provided in the connection form they will not be used properly in the hook.
```python
@staticmethod
def get_connection_form_widgets() -> Dict[str, Any]:
"""Returns connection widgets to add to connection form"""
from flask_appbuilder.fieldwidgets import BS3PasswordFieldWidget, BS3TextFieldWidget
from flask_babel import lazy_gettext
from wtforms import PasswordField, StringField
return {
"extra__azure_data_explorer__auth_method": StringField(
lazy_gettext('Tenant ID'), widget=BS3TextFieldWidget()
),
"extra__azure_data_explorer__tenant": StringField(
lazy_gettext('Authentication Method'), widget=BS3TextFieldWidget()
),
...
```
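For reference, this is roughly what I would expect instead — a sketch based on the snippets above, not a tested patch:
```python
# widgets: swap the labels back so each field maps to the right extra key
widgets = {
    "extra__azure_data_explorer__tenant": StringField(
        lazy_gettext('Tenant ID'), widget=BS3TextFieldWidget()
    ),
    "extra__azure_data_explorer__auth_method": StringField(
        lazy_gettext('Authentication Method'), widget=BS3TextFieldWidget()
    ),
}

# hook: read the prefixed key that the connection form actually stores
auth_method = get_required_param('extra__azure_data_explorer__auth_method')
```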
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18124 | https://github.com/apache/airflow/pull/18203 | d9c0e159dbe670458b89a47d81f49d6a083619a2 | 410e6d7967c6db0a968f26eb903d072e356f1348 | "2021-09-09T19:24:05Z" | python | "2021-09-18T14:01:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,118 | ["airflow/sentry.py"] | Exception within LocalTaskJob._run_mini_scheduler_on_child_tasks brakes Sentry Handler | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon @ file:///root/.cache/pypoetry/artifacts/7f/f7/23/fc7fd3543aa486275ef0385c29063ff0dc391b0fc95dc5aa6cab2cf4e5/apache_airflow_providers_amazon-2.2.0-py3-none-any.whl
apache-airflow-providers-celery @ file:///root/.cache/pypoetry/artifacts/14/80/39/0d9d57205da1d24189ac9c18eb3477664ed2c2618c1467c9809b9a2fbf/apache_airflow_providers_celery-2.0.0-py3-none-any.whl
apache-airflow-providers-ftp @ file:///root/.cache/pypoetry/artifacts/a5/13/da/bf14abc40193a1ee1b82bbd800e3ac230427d7684b9d40998ac3684bef/apache_airflow_providers_ftp-2.0.1-py3-none-any.whl
apache-airflow-providers-http @ file:///root/.cache/pypoetry/artifacts/fc/d7/d2/73c89ef847bbae1704fa403d7e92dba1feead757aae141613980db40ff/apache_airflow_providers_http-2.0.0-py3-none-any.whl
apache-airflow-providers-imap @ file:///root/.cache/pypoetry/artifacts/af/5d/de/21c10bfc7ac076a415dcc3fc909317547e77e38c005487552cf40ddd97/apache_airflow_providers_imap-2.0.1-py3-none-any.whl
apache-airflow-providers-postgres @ file:///root/.cache/pypoetry/artifacts/50/27/e0/9b0d8f4c0abf59967bb87a04a93d73896d9a4558994185dd8bc43bb67f/apache_airflow_providers_postgres-2.2.0-py3-none-any.whl
apache-airflow-providers-redis @ file:///root/.cache/pypoetry/artifacts/7d/95/03/5d2a65ace88ae9a9ce9134b927b1e9639c8680c13a31e58425deae55d1/apache_airflow_providers_redis-2.0.1-py3-none-any.whl
apache-airflow-providers-sqlite @ file:///root/.cache/pypoetry/artifacts/ec/e6/a3/e0d81fef662ccf79609e7d2c4e4440839a464771fd2a002d252c9a401d/apache_airflow_providers_sqlite-2.0.1-py3-none-any.whl
```
### Deployment
Other Docker-based deployment
### Deployment details
We are using the Sentry integration
### What happened
An exception within LocalTaskJob's mini scheduler was handled incorrectly by the Sentry integration's 'enrich_errors' method. This is because it assumes it is applied to a method of a TaskInstance:
```
TypeError: cannot pickle 'dict_keys' object
File "airflow/sentry.py", line 166, in wrapper
return func(task_instance, *args, **kwargs)
File "airflow/jobs/local_task_job.py", line 241, in _run_mini_scheduler_on_child_tasks
partial_dag = task.dag.partial_subset(
File "airflow/models/dag.py", line 1487, in partial_subset
dag.task_dict = {
File "airflow/models/dag.py", line 1488, in <dictcomp>
t.task_id: copy.deepcopy(t, {id(t.dag): dag}) # type: ignore
File "copy.py", line 153, in deepcopy
y = copier(memo)
File "airflow/models/baseoperator.py", line 970, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "copy.py", line 161, in deepcopy
rv = reductor(4)
AttributeError: 'LocalTaskJob' object has no attribute 'task'
File "airflow", line 8, in <module>
sys.exit(main())
File "airflow/__main__.py", line 40, in main
args.func(args)
File "airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "airflow/utils/cli.py", line 91, in wrapper
return f(*args, **kwargs)
File "airflow/cli/commands/task_command.py", line 238, in task_run
_run_task_by_selected_method(args, dag, ti)
File "airflow/cli/commands/task_command.py", line 64, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "airflow/cli/commands/task_command.py", line 121, in _run_task_by_local_task_job
run_job.run()
File "airflow/jobs/base_job.py", line 245, in run
self._execute()
File "airflow/jobs/local_task_job.py", line 128, in _execute
self.handle_task_exit(return_code)
File "airflow/jobs/local_task_job.py", line 166, in handle_task_exit
self._run_mini_scheduler_on_child_tasks()
File "airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "airflow/sentry.py", line 168, in wrapper
self.add_tagging(task_instance)
File "airflow/sentry.py", line 119, in add_tagging
task = task_instance.task
```
### What you expected to happen
The error to be handled correctly and passed on to Sentry without raising another exception within the error handling system
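A hedged sketch of one way to make the decorator defensive — the attribute checks are assumptions about what the wrapped objects expose, and this is not the actual patch in `airflow/sentry.py`:
```python
from functools import wraps

def enrich_errors(self, func):
    @wraps(func)
    def wrapper(_self, *args, **kwargs):
        try:
            return func(_self, *args, **kwargs)
        except Exception:
            # _self may be a TaskInstance *or* a LocalTaskJob here, so don't
            # assume it has a .task attribute before tagging.
            task_instance = _self if hasattr(_self, "task") else getattr(_self, "task_instance", None)
            if task_instance is not None:
                self.add_tagging(task_instance)
            raise
    return wrapper
```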
### How to reproduce
In this case we were trying to backfill a task for a DAG that, at that point, had a compilation error. This is quite an edge case, yes :-)
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18118 | https://github.com/apache/airflow/pull/18119 | c9d29467f71060f14863ca3508cb1055572479b5 | f97ddf10e148bd18b6a09ec96f1901068c8684f0 | "2021-09-09T13:35:32Z" | python | "2021-09-09T23:14:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,103 | ["chart/templates/jobs/create-user-job.yaml", "chart/templates/jobs/migrate-database-job.yaml", "chart/tests/test_basic_helm_chart.py"] | Helm Chart Jobs (apache-airflow-test-run-airflow-migrations) does not Pass Labels to Pod | ### Official Helm Chart version
1.1.0 (latest released)
### Apache Airflow version
2.1.3 (latest released)
### Kubernetes Version
1.18
### Helm Chart configuration
Any configuration should replicate this. Here is a simple example:
```
statsd:
enabled: false
redis:
enabled: false
postgresql:
enabled: false
dags:
persistence:
enabled: true
labels:
sidecar.istio.iok/inject: "false"
```
### Docker Image customisations
Base image that comes with helm
### What happened
When I installed Istio (with the istio proxy etc.) on my namespace, I noticed that the "apache-airflow-test-run-airflow-migrations" job would never fully complete. I was investigating and it seemed like the issue was with Istio, so I tried creating a label in my values.yaml (as seen above) to disable the Istio sidecar injection -
```
labels:
sidecar.istio.iok/inject: "false"
```
The job picked up this label, but when the job created the pod, the pod did not have it. It appears there's a mismatch between the labels on the job and on the pods it creates.
### What you expected to happen
I expected that any labels associated with the job (from values.yaml) would be inherited by the corresponding pods it creates.
### How to reproduce
1. Install istio on your cluster
2. Create a namespace and add this label to the namespace- `istio-injection: enabled`
3. Add this in your values.yaml `sidecar.istio.iok/inject: "false"` and deploy your helm chart
4. Ensure that your apache-airflow-test-run-airflow-migrations job has the istio inject label, but the corresponding apache-airflow-test-run-airflow-migrations-... pod does **not**
### Anything else
I have a fix for this issue in my forked repo [here](https://github.com/jlouk/airflow/commit/c4400493da8b774c59214078eed5cf7d328844ea)
I've tested this on my cluster and have ensured this removes the mismatch of labels between job and pod. The helm install continues without error. Let me know if there are any other tests you'd like me to run.
Thanks,
John
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18103 | https://github.com/apache/airflow/pull/18403 | 43bd351df17084ec8c670e712da0193503519b74 | a91d9a7d681a5227d7d72876110e31b448383c20 | "2021-09-08T21:16:02Z" | python | "2021-09-21T18:43:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,100 | ["airflow/www/static/css/flash.css"] | DAG Import Errors Broken DAG Dropdown Arrow Icon Switched | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
macOS Big Sur 11.3.1
### Versions of Apache Airflow Providers
N/A
### Deployment
Virtualenv installation
### Deployment details
Default installation, with one DAG added for testing.
### What happened
When a DAG has an import error, there is a dropdown in the UI, toggled by arrow icons, that will tell you the errors with the import.
![image](https://user-images.githubusercontent.com/89415310/132572414-814b7366-4057-449c-9cdf-3309f3963297.png)
![image](https://user-images.githubusercontent.com/89415310/132572424-72ca5e22-b98f-4dac-82bf-5fa8149b4d92.png)
Once the arrow is pressed it flips from facing right to facing down, and the specific tracebacks are displayed.
![image](https://user-images.githubusercontent.com/89415310/132572385-9313fec6-ae48-4743-9149-d068c9dc8555.png)
![image](https://user-images.githubusercontent.com/89415310/132572395-aaaf84e9-da98-4e7e-bb1b-3a8fb4a25939.png)
By clicking the arrow on the traceback, it displays more information about the error that occurred. That arrow then flips from facing down to facing right.
![image](https://user-images.githubusercontent.com/89415310/132572551-17745c9e-cd95-4481-a98f-ebb33cf150bf.png)
![image](https://user-images.githubusercontent.com/89415310/132572567-f0d1cc2e-560d-44da-b62a-7f0e46743e9d.png)
Notice how the arrows in the UI display conflicting information depending on the direction of the arrow and the element that contains that arrow. For the parent -- DAG Import Errors -- when the arrow is right-facing, it hides information in the child elements and when the arrow is down-facing it displays the child elements. For the child elements, the arrow behavior is the opposite of expected: right-facing shows the information, and left-facing hides it.
### What you expected to happen
I expect for both the parent and child elements of the DAG Import Errors banner, when the arrow in the GUI is facing right the information is hidden, and when the arrow faces down the information is shown.
### How to reproduce
Create a DAG with an import error and the banner will be displayed.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18100 | https://github.com/apache/airflow/pull/18207 | 8ae2bb9bfa8cfd62a8ae5f6edabce47800ccb140 | d3d847acfdd93f8d1d8dc1495cf5b3ca69ae5f78 | "2021-09-08T20:03:00Z" | python | "2021-09-13T16:11:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,096 | ["airflow/cli/commands/celery_command.py", "airflow/cli/commands/scheduler_command.py", "tests/cli/commands/test_celery_command.py", "tests/cli/commands/test_scheduler_command.py"] | Scheduler process not gracefully exiting on major exceptions | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.2.0
apache-airflow-providers-cncf-kubernetes==2.0.2
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==5.1.0
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-postgres==2.2.0
apache-airflow-providers-sftp==2.1.1
apache-airflow-providers-sqlite==2.0.1
apache-airflow-providers-ssh==2.1.1
### Deployment
Other Docker-based deployment
### Deployment details
docker-compose version 1.29.2, build 5becea4c
Docker version 20.10.8, build 3967b7d
### What happened
When a major exception occurs in the scheduler (such as a RDBMS backend failure/restart) or basically anything triggering a failure of the JobScheduler, if the log_serving is on, then although the JobScheduler process is stopped, the exception cascades all the way up to the CLI, but the shutdown of the `serve_logs` process is never triggered [here](https://github.com/apache/airflow/blob/ff64fe84857a58c4f6e47ec3338b576125c4223f/airflow/cli/commands/scheduler_command.py#L72), causing the parent process to wait idle, therefore making any watchdog/restart of the process fail with a downed scheduler in such instances.
### What you expected to happen
I expect the whole CLI `airflow scheduler` to exit with an error so it can be safely restarted immediately without any external intervention.
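A hedged sketch of the shape of fix I have in mind for `scheduler_command.py` — the helper that starts the `serve_logs` subprocess is an assumption based on the code linked above:
```python
sub_proc = _serve_logs(skip_serve_logs)  # assumption: returns the serve_logs subprocess (or None)
try:
    job.run()
finally:
    # always tear down log serving, even when the scheduler job raises,
    # so the CLI process actually exits and can be restarted by a watchdog
    if sub_proc:
        sub_proc.terminate()
```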
### How to reproduce
Launch any scheduler from the CLI with the log serving on (no `--skip-serve-logs`), then shutdown the RDBMS. This will cause the scheduler to exit, but the process will still be running, which is not the expected behavior.
### Anything else
I already have a PR to submit for a fix on this issue, care to take a look at it? I will link it in the issue.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18096 | https://github.com/apache/airflow/pull/18092 | 0b7b13372f6dbf18a35d5346d3955f65b31dd00d | 9a63bf2efb7b45ededbe84039d5f3cf6c2cfb853 | "2021-09-08T17:59:59Z" | python | "2021-09-18T00:02:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,089 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | KeyError when ECS failed to start image | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==1.4.0
apache-airflow-providers-docker==1.2.0
apache-airflow-providers-ftp==1.1.0
apache-airflow-providers-http==1.1.1
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-postgres==2.0.0
apache-airflow-providers-sqlite==1.0.2
### Deployment
Other
### Deployment details
_No response_
### What happened
[2021-09-08 00:30:00,035] {taskinstance.py:1462} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1164, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1282, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1312, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/amazon/aws/operators/ecs.py", line 230, in execute
self._check_success_task()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/amazon/aws/operators/ecs.py", line 380, in _check_success_task
if container.get('lastStatus') == 'STOPPED' and container['exitCode'] != 0:
KeyError: 'exitCode'
### What you expected to happen
I expect to see the error reason reported by the ECS operator, not the KeyError raised when the code tries to access the exit code from the container.
Maybe something like:
` if container.get('lastStatus') == 'STOPPED' and container.get('exitCode', 1) != 0:`
to avoid the exception, plus some error handling that would make it easier to find the reason the container failed.
Currently we have to search through the huge info log for the Failed INFO line.
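Expanding the one-liner above into a slightly fuller sketch — surfacing ECS's own stop reason is an assumption about the fields available on the container description, not the merged fix:
```python
from airflow.exceptions import AirflowException

if container.get('lastStatus') == 'STOPPED' and container.get('exitCode', 1) != 0:
    # surface the ECS stop reason (e.g. image pull failures) instead of a bare KeyError
    raise AirflowException(
        f"This task is not in success state - reason: {container.get('reason', 'unknown')}"
    )
```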
### How to reproduce
Run ECS Task with bad creds to pull docker image from gitlab repo
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18089 | https://github.com/apache/airflow/pull/20264 | 38fd65dcfe85149170d6be640c23018e0065cf7c | 206cce971da6941e8c1b0d3c4dbf4fa8afe0fba4 | "2021-09-08T14:39:39Z" | python | "2021-12-16T05:35:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,083 | ["airflow/www/static/js/dags.js"] | Incorrect parameter posted to last_dagruns, task_stats, blocked etc | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
`apache-airflow-providers-amazon @ file:///root/.cache/pypoetry/artifacts/7f/f7/23/fc7fd3543aa486275ef0385c29063ff0dc391b0fc95dc5aa6cab2cf4e5/apache_airflow_providers_amazon-2.2.0-py3-none-any.whl
apache-airflow-providers-celery @ file:///root/.cache/pypoetry/artifacts/14/80/39/0d9d57205da1d24189ac9c18eb3477664ed2c2618c1467c9809b9a2fbf/apache_airflow_providers_celery-2.0.0-py3-none-any.whl
apache-airflow-providers-ftp @ file:///root/.cache/pypoetry/artifacts/a5/13/da/bf14abc40193a1ee1b82bbd800e3ac230427d7684b9d40998ac3684bef/apache_airflow_providers_ftp-2.0.1-py3-none-any.whl
apache-airflow-providers-http @ file:///root/.cache/pypoetry/artifacts/fc/d7/d2/73c89ef847bbae1704fa403d7e92dba1feead757aae141613980db40ff/apache_airflow_providers_http-2.0.0-py3-none-any.whl
apache-airflow-providers-imap @ file:///root/.cache/pypoetry/artifacts/af/5d/de/21c10bfc7ac076a415dcc3fc909317547e77e38c005487552cf40ddd97/apache_airflow_providers_imap-2.0.1-py3-none-any.whl
apache-airflow-providers-postgres @ file:///root/.cache/pypoetry/artifacts/77/15/08/a8b670fb068b3135f97d1d343e96d48a43cbf7f6ecd0d3006ba37d90bb/apache_airflow_providers_postgres-2.2.0rc1-py3-none-any.whl
apache-airflow-providers-redis @ file:///root/.cache/pypoetry/artifacts/7d/95/03/5d2a65ace88ae9a9ce9134b927b1e9639c8680c13a31e58425deae55d1/apache_airflow_providers_redis-2.0.1-py3-none-any.whl
apache-airflow-providers-sqlite @ file:///root/.cache/pypoetry/artifacts/ec/e6/a3/e0d81fef662ccf79609e7d2c4e4440839a464771fd2a002d252c9a401d/apache_airflow_providers_sqlite-2.0.1-py3-none-any.whl
`
### Deployment
Docker-Compose
### Deployment details
Issue is agnostic to the deployment as long as you have more dags in your system than will fit on the first page in airflow home
### What happened
The blocked, last_dagrun, dag_stats, task_stats endpoints are being sent the incorrect form field.
The field used to be `dag_ids` and now it seems to be `dagIds` (on the JS side) https://github.com/apache/airflow/blame/2.1.3/airflow/www/static/js/dags.js#L89 and https://github.com/apache/airflow/blob/2.1.3/airflow/www/views.py#L1659.
This causes the endpoint to attempt to return all dags, which in our case is 13000+.
The fallback behaviour of returning all dags the user has permission for is a bad idea if I am honest. Perhaps it can be removed?
### What you expected to happen
The correct form field should be posted and only the dags relevant to the page should be returned.
### How to reproduce
1. Create more dags with task runs than are available on one page (suggest lowering the page size)
2. Enable js debugging
3. Refresh /home
4. Inspect response from /blocked /task_stats /last_dagruns and observe it returns dags which aren't on the page
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18083 | https://github.com/apache/airflow/pull/18085 | 3fe948a860a6eed2ee51a6f1be658a3ba260683f | d6e48cd98a1d82829326d4c48a80688866563f3e | "2021-09-08T09:59:17Z" | python | "2021-09-08T22:00:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,082 | ["airflow/api/common/experimental/trigger_dag.py", "tests/api/client/test_local_client.py", "tests/operators/test_trigger_dagrun.py", "tests/www/api/experimental/test_dag_runs_endpoint.py"] | TriggerDagRunOperator start_date is not set | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
ubuntu, macos
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==1.4.0
apache-airflow-providers-ftp==1.1.0
apache-airflow-providers-http==1.1.1
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-postgres==1.0.2
apache-airflow-providers-slack==4.0.0
apache-airflow-providers-sqlite==1.0.2
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
We are using TriggerDagRunOperator in the end of DAG to retrigger current DAG:
`TriggerDagRunOperator(task_id='trigger_task', trigger_dag_id='current_dag')`
Everything works fine, except that we have missing durations in the UI and warnings in the scheduler:
`[2021-09-07 15:33:12,890] {dagrun.py:604} WARNING - Failed to record duration of <DagRun current_dag @ 2021-09-07 12:32:17.035471+00:00: manual__2021-09-07T12:32:16.956461+00:00, externally triggered: True>: start_date is not set.`
And in the web UI we can't see the duration in Tree View, and the DAG run has no started and duration values.
### What you expected to happen
Correct behaviour with start date and duration metrics in web UI.
### How to reproduce
```
import pendulum
import pytz
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
with DAG(
start_date=pendulum.datetime(year=2021, month=7, day=1).astimezone(pytz.utc),
schedule_interval='@daily',
default_args={},
max_active_runs=1,
dag_id='current_dag'
) as dag:
step1 = DummyOperator(task_id='dummy_task')
trigger_self = TriggerDagRunOperator(task_id='trigger_self', trigger_dag_id='current_dag')
step1 >> trigger_self
```
`[2021-09-08 12:53:35,094] {dagrun.py:604} WARNING - Failed to record duration of <DagRun current_dag @ 2021-01-04 03:00:11+00:00: backfill__2021-01-04T03:00:11+00:00, externally triggered: False>: start_date is not set.`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18082 | https://github.com/apache/airflow/pull/18226 | 1d2924c94e38ade7cd21af429c9f451c14eba183 | 6609e9a50f0ab593e347bfa92f56194334f5a94d | "2021-09-08T09:55:33Z" | python | "2021-09-24T09:45:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,069 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/www/static/js/graph.js", "airflow/www/static/js/tree.js", "airflow/www/templates/airflow/graph.html", "airflow/www/templates/airflow/tree.html", "airflow/www/views.py"] | Move auto-refresh interval to config variable | ### Description
Auto-refresh is an awesome new feature! Right now it occurs every [3 seconds](https://github.com/apache/airflow/blob/79d85573591f641db4b5f89a12213e799ec6dea1/airflow/www/static/js/tree.js#L463), and this interval is not configurable. Making this interval configurable would be useful.
### Use case/motivation
On a DAG with many tasks and many active DAG runs, auto-refresh can be a significant strain on the webserver. We observed our production webserver reach 100% CPU today when we had about 30 running DAG runs of a DAG containing about 50 tasks. Being able to increase the interval to guard against this would be helpful.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18069 | https://github.com/apache/airflow/pull/18107 | a3c4784cdc22744aa8bf2634526805cf07da5152 | a77379454c7841bef619523819edfb92795cb597 | "2021-09-07T20:20:49Z" | python | "2021-09-10T22:35:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,066 | ["Dockerfile", "chart/templates/workers/worker-deployment.yaml", "docs/docker-stack/entrypoint.rst"] | Chart version 1.1.0 does not gracefully shutdown workers | ### Official Helm Chart version
1.1.0 (latest released)
### Apache Airflow version
2.1.3 (latest released)
### Kubernetes Version
1.19.13
### Helm Chart configuration
```yaml
executor: "CeleryExecutor"
workers:
# Number of airflow celery workers in StatefulSet
replicas: 1
# Below is the default value, it does not work
command: ~
args:
- "bash"
- "-c"
- |-
exec \
airflow celery worker
```
### Docker Image customisations
```dockerfile
FROM apache/airflow:2.1.3-python3.7
ENV AIRFLOW_HOME=/opt/airflow
USER root
RUN set -ex \
&& buildDeps=' \
python3-dev \
libkrb5-dev \
libssl-dev \
libffi-dev \
build-essential \
libblas-dev \
liblapack-dev \
libpq-dev \
gcc \
g++ \
' \
&& apt-get update -yqq \
&& apt-get upgrade -yqq \
&& apt-get install -yqq --no-install-recommends \
$buildDeps \
libsasl2-dev \
libsasl2-modules \
apt-utils \
curl \
vim \
rsync \
netcat \
locales \
sudo \
patch \
libpq5 \
&& apt-get autoremove -yqq --purge\
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
USER airflow
COPY --chown=airflow:root requirements*.txt /tmp/
RUN pip install -U pip setuptools wheel cython \
&& pip install -r /tmp/requirements_providers.txt \
&& pip install -r /tmp/requirements.txt
COPY --chown=airflow:root setup.py /tmp/custom_operators/
COPY --chown=airflow:root custom_operators/ /tmp/custom_operators/custom_operators/
RUN pip install /tmp/custom_operators
COPY --chown=airflow:root entrypoint*.sh /
COPY --chown=airflow:root config/ ${AIRFLOW_HOME}/config/
COPY --chown=airflow:root airflow.cfg ${AIRFLOW_HOME}/
COPY --chown=airflow:root dags/ ${AIRFLOW_HOME}/dags
```
### What happened
Using CeleryExecutor whenever I kill a worker pod that is running a task with `kubectl delete pod` or a `helm upgrade` the pod gets instantly killed and does not wait for the task to finish or the end of terminationGracePeriodSeconds.
### What you expected to happen
I expect the worker to finish all its tasks inside the grace period before being killed.
Killing the pod when it's running a task throws this
```bash
k logs -f airflow-worker-86d78f7477-rjljs
* Serving Flask app "airflow.utils.serve_logs" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
[2021-09-07 16:26:25,612] {_internal.py:113} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
/home/airflow/.local/lib/python3.7/site-packages/celery/platforms.py:801 RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=50000 euid=50000 gid=0 egid=0
[2021-09-07 16:28:11,021: WARNING/ForkPoolWorker-1] Running <TaskInstance: test-long-running.long-long 2021-09-07T16:28:09.148524+00:00 [queued]> on host airflow-worker-86d78f7477-rjljs
worker: Warm shutdown (MainProcess)
[2021-09-07 16:28:32,919: ERROR/MainProcess] Process 'ForkPoolWorker-2' pid:20 exited with 'signal 15 (SIGTERM)'
[2021-09-07 16:28:32,930: ERROR/MainProcess] Process 'ForkPoolWorker-1' pid:19 exited with 'signal 15 (SIGTERM)'
[2021-09-07 16:28:33,183: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 15 (SIGTERM) Job: 0.')
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/worker.py", line 208, in start
self.blueprint.start(self)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 599, in start
c.loop(*c.loop_args())
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/loops.py", line 83, in asynloop
next(loop)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 303, in create_loop
poll_timeout = fire_timers(propagate=propagate) if scheduled else 1
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 145, in fire_timers
entry()
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/timer.py", line 68, in __call__
return self.fun(*self.args, **self.kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/timer.py", line 130, in _reschedules
return fun(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/gossip.py", line 167, in periodic
for worker in values(workers):
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/utils/functional.py", line 109, in _iterate_values
for k in self:
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/utils/functional.py", line 95, in __iter__
def __iter__(self):
File "/home/airflow/.local/lib/python3.7/site-packages/celery/apps/worker.py", line 285, in _handle_request
raise exc(exitcode)
celery.exceptions.WorkerShutdown: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/billiard/pool.py", line 1267, in mark_as_worker_lost
human_status(exitcode), job._job),
billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 15 (SIGTERM) Job: 0.
-------------- celery@airflow-worker-86d78f7477-rjljs v4.4.7 (cliffs)
--- ***** -----
-- ******* ---- Linux-5.4.129-63.229.amzn2.x86_64-x86_64-with-debian-10.10 2021-09-07 16:26:26
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7ff517d78d90
- ** ---------- .> transport: redis://:**@airflow-redis:6379/0
- ** ---------- .> results: postgresql+psycopg2://airflow:**@db-host:5432/airflow
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
```
### How to reproduce
Run a DAG with this Airflow configuration:
```yaml
executor: "CeleryExecutor"
workers:
replicas: 1
command: ~
args:
- "bash"
- "-c"
# The format below is necessary to get `helm lint` happy
- |-
exec \
airflow celery worker
```
and kill the worker pod
### Anything else
Overwriting the official entrypoint seems to solve the issue
```yaml
workers:
# To gracefully shutdown workers I have to overwrite the container entrypoint
command: ["airflow"]
args: ["celery", "worker"]
```
When the worker gets killed, another worker pod comes online and the old one stays in status `Terminating`; all new tasks go to the new worker.
Below are the logs when the worker gets killed:
```bash
k logs -f airflow-worker-5ff95df84f-fznk7
* Serving Flask app "airflow.utils.serve_logs" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
[2021-09-07 16:42:42,399] {_internal.py:113} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
/home/airflow/.local/lib/python3.7/site-packages/celery/platforms.py:801 RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=50000 euid=50000 gid=0 egid=0
[2021-09-07 16:42:53,133: WARNING/ForkPoolWorker-1] Running <TaskInstance: test-long-running.long-long 2021-09-07T16:28:09.148524+00:00 [queued]> on host airflow-worker-5ff95df84f-fznk7
worker: Warm shutdown (MainProcess)
-------------- celery@airflow-worker-5ff95df84f-fznk7 v4.4.7 (cliffs)
--- ***** -----
-- ******* ---- Linux-5.4.129-63.229.amzn2.x86_64-x86_64-with-debian-10.10 2021-09-07 16:42:43
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f69aaa90d50
- ** ---------- .> transport: redis://:**@airflow-redis:6379/0
- ** ---------- .> results: postgresql+psycopg2://airflow:**@db-host:5432/airflow
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
rpc error: code = Unknown desc = Error: No such container: efe5ce470f5bd5b7f84479c1a8f5dc1d5d92cb1ad6b16696fa5a1ca9610602ee%
```
There is no timestamp, but it waits for the task to finish before writing `worker: Warm shutdown (MainProcess)`.
Another option I tried was using this as the entrypoint, and it also works:
```bash
#!/usr/bin/env bash
handle_worker_term_signal() {
# Remove worker from queue
celery -b $AIRFLOW__CELERY__BROKER_URL -d celery@$HOSTNAME control cancel_consumer default
while [ $(airflow jobs check --hostname $HOSTNAME | grep "Found one alive job." | wc -l) -eq 1 ]; do
echo 'Finishing jobs!'
airflow jobs check --hostname $HOSTNAME --limit 100 --allow-multiple
sleep 60
done
echo 'All jobs finished! Terminating worker'
kill $pid
exit 0
}
trap handle_worker_term_signal SIGTERM
airflow celery worker &
pid="$!"
wait $pid
```
Got the idea from this post: https://medium.com/flatiron-engineering/upgrading-airflow-with-zero-downtime-8df303760c96
Thanks!
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18066 | https://github.com/apache/airflow/pull/18068 | 491d81893b0afb80b5f9df191369875bce6e2aa0 | 9e13e450032f4c71c54d091e7f80fe685204b5b4 | "2021-09-07T17:02:04Z" | python | "2021-09-10T18:13:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,055 | [".github/workflows/build-images.yml"] | The current CI requires rebasing to main far too often | ### Body
I was wondering why people have to rebase their PRs so often now. This was not the case for quite a while, but the need to rebase to the latest `main` is now far more frequent than it should be. I also think we have a number of cases where a PR build succeeds but breaks `main` after merging; this also did not use to happen as often as we observe now.
I looked a bit closer and I believe the problem is #15944. That PR streamlined some of the code of our CI, but I just realised it also removed an important feature of the previous setup: using the MERGE commit rather than the original commit from the PR.
Previously, when we built the image, we did not use the original commit from the incoming PR, but the merge commit that GitHub generates. Whenever there is no conflict, GitHub performs an automatic merge with `main`, and by default the PR build uses that 'merge' commit and not the original commit.
This means that all the PRs - even if they can be cleanly rebased - are now using the original commit, and they are built "as if they were built using the original branch point".
Unfortunately, as far as I checked, there is no "merge commit hash" available in the "pull_request_target" workflow. Previously, the "build image" workflow used my custom "get_workflow_origin" action to find the merge commit via the GitHub API. This gives a much better user experience because users do not have to rebase to `main` nearly as often as they do now.
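For context, the merge commit that GitHub generates can be looked up through the REST API, roughly like this (illustrative sketch only; the previous setup used the custom `get_workflow_origin` action rather than this exact call, and `GITHUB_TOKEN`/`PR_NUMBER` are placeholders):
```bash
# The PR object exposes the auto-generated merge commit as merge_commit_sha.
curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
  "https://api.github.com/repos/apache/airflow/pulls/${PR_NUMBER}" | jq -r '.merge_commit_sha'
```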
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/18055 | https://github.com/apache/airflow/pull/18060 | 64d2f5488f6764194a2f4f8a01f961990c75b840 | 1bfb5722a8917cbf770922a26dc784ea97aacf33 | "2021-09-07T11:52:12Z" | python | "2021-09-07T15:21:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,053 | ["airflow/providers/hashicorp/hooks/vault.py", "tests/providers/hashicorp/hooks/test_vault.py"] | `VaultHook` AppRole authentication fails when using a conn_uri | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
MacOS Big Sur
### Versions of Apache Airflow Providers
apache-airflow-providers-hashicorp==2.0.0
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
When trying to use the VaultHook with AppRole authentication via a connection defined as a conn_uri, I was unable to establish a connection in a custom operator due to the 'role_id' not being provided, despite it being an optional argument.
https://github.com/apache/airflow/blob/main/airflow/providers/hashicorp/hooks/vault.py#L124
### What you expected to happen
So, with a connection defined as a URI as follows:
```
http://[ROLE_ID]:[SECRET_ID]@https://[VAULT_URL]?auth_type=approle
```
I receive the following error when trying to run a task in a custom operator.
```
[2021-09-07 08:30:36,877] {base.py:79} INFO - Using connection to: id: vault. Host: https://[VAULT_URL], Port: None, Schema: None, Login: [ROLE_ID], Password: ***, extra: {'auth_type': 'approle'}
[2021-09-07 08:30:36,879] {taskinstance.py:1462} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1164, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1282, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1312, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/airflow/plugins/operators/my_vault_operator.py", line 28, in execute
vaulthook = VaultHook(vault_conn_id=self.vault_conn_id)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/hashicorp/hooks/vault.py", line 216, in __init__
radius_port=radius_port,
File "/usr/local/lib/python3.7/site-packages/airflow/providers/hashicorp/_internal_client/vault_client.py", line 153, in __init__
raise VaultError("The 'approle' authentication type requires 'role_id'")
hvac.exceptions.VaultError: The 'approle' authentication type requires 'role_id', on None None
```
Given that the ROLE_ID and SECRET_ID are part of the connection, I was expecting the hook to retrieve this from `self.connection.login`.
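A possible direction (purely a sketch of the expected fallback, not the actual provider code) would be something like:
```python
# Hypothetical fallback inside the hook; attribute names are illustrative.
role_id = role_id or self.connection.login
secret_id = secret_id or self.connection.password
```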
### How to reproduce
1. Create a connection to Vault defined as a `conn_uri`, e.g. `http://[ROLE_ID]:[SECRET_ID]@https://[VAULT_URL]?auth_type=approle`
2. Create a simple custom operator as follows:
```
from airflow.models import BaseOperator
from airflow.providers.hashicorp.hooks.vault import VaultHook
class MyVaultOperator(BaseOperator):
def __init__(self, vault_conn_id=None, *args, **kwargs):
super(MyVaultOperator, self).__init__(*args, **kwargs)
self.vault_conn_id = vault_conn_id
def execute(self, context):
vaulthook = VaultHook(vault_conn_id=self.vault_conn_id)
```
3. Create a simple DAG to use this operator
```
from datetime import datetime
from airflow import models
from operators.my_vault_operator import MyVaultOperator
default_args = {
'owner': 'airflow',
'start_date': datetime(2011, 1, 1),
}
dag_name = 'my_vault_dag'
with models.DAG(
dag_name,
default_args=default_args
) as dag:
my_vault_task = MyVaultOperator(
task_id='vault_task',
vault_conn_id='vault',
)
```
4. Run this DAG via `airflow tasks test my_vault_dag vault_task 2021-09-06`
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18053 | https://github.com/apache/airflow/pull/18064 | 9e13e450032f4c71c54d091e7f80fe685204b5b4 | 476ae0eb588a5bfbc3d415a34cf4f0262f53888e | "2021-09-07T09:42:20Z" | python | "2021-09-10T19:08:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,050 | ["airflow/www/views.py", "tests/www/views/test_views_connection.py"] | Duplicating the same connection twice gives "Integrity error, probably unique constraint" | ### Apache Airflow version
2.2.0
### Operating System
Debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
astro dev start
### What happened
When I tried to duplicate a connection the first time, it created a connection named `hello_copy1`; duplicating the same connection a second time gives me the following error:
```Connection hello_copy2 can't be added. Integrity error, probably unique constraint.```
### What you expected to happen
It should create a connection with the name `hello_copy3`, or the error message should be more user-friendly.
### How to reproduce
https://github.com/apache/airflow/pull/15574#issuecomment-912438705
### Anything else
Suggested Error
```
Connection hello_copy2 can't be added because it already exists.
```
or
Change the numbering logic in `_copyN` so the new name will always be unique.
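A minimal sketch of such a numbering loop (illustrative only, not the actual view code):
```python
def next_copy_name(base: str, existing_conn_ids: set) -> str:
    # Keep incrementing the suffix until the candidate name is free.
    n = 1
    while f"{base}_copy{n}" in existing_conn_ids:
        n += 1
    return f"{base}_copy{n}"
```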
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18050 | https://github.com/apache/airflow/pull/18161 | f248a215aa341608e2bc7d9083ca9d18ab756ac4 | 3ddb36578c5020408f89f5532b21dc0c38e739fb | "2021-09-06T17:44:51Z" | python | "2021-10-09T14:06:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,005 | ["airflow/providers/neo4j/hooks/neo4j.py", "tests/providers/neo4j/hooks/test_neo4j.py", "tests/providers/neo4j/operators/test_neo4j.py"] | Neo4j Hook does not return query response | ### Apache Airflow Provider(s)
neo4j
### Versions of Apache Airflow Providers
apache-airflow-providers-neo4j version 2.0.0
### Apache Airflow version
2.1.0
### Operating System
macOS Big Sur Version 11.4
### Deployment
Docker-Compose
### Deployment details
docker-compose version 1.29.2, build 5becea4c
Docker version 20.10.7, build f0df350
### What happened
```python
neo4j = Neo4jHook(conn_id=self.neo4j_conn_id)
sql = "MATCH (n) RETURN COUNT(n)"
result = neo4j.run(sql)
logging.info(result)
```
If I run the code snippet above, I always get an empty response.
### What you expected to happen
I would like to get the query response.
### How to reproduce
```python
from airflow.models.baseoperator import BaseOperator
from airflow.providers.neo4j.hooks.neo4j import Neo4jHook
import logging
class testNeo4jHookOperator(BaseOperator):
def __init__(
self,
neo4j_conn_id: str,
**kwargs) -> None:
super().__init__(**kwargs)
self.neo4j_conn_id = neo4j_conn_id
def execute(self, context):
neo4j = Neo4jHook(conn_id=self.neo4j_conn_id)
sql = "MATCH (n) RETURN COUNT(n)"
result = neo4j.run(sql)
logging.info(result)
return result
```
I created this custom operator to test the bug.
### Anything else
The bug is related to the way the hook is implemented and how the Neo4j driver works:
```python
def run(self, query) -> Result:
"""
Function to create a neo4j session
and execute the query in the session.
:param query: Neo4j query
:return: Result
"""
driver = self.get_conn()
if not self.connection.schema:
with driver.session() as session:
result = session.run(query)
else:
with driver.session(database=self.connection.schema) as session:
result = session.run(query)
return result
```
In my opinion, the above hook run method should be changed to:
```python
def run(self, query) -> Result:
"""
Function to create a neo4j session
and execute the query in the session.
:param query: Neo4j query
:return: Result
"""
driver = self.get_conn()
if not self.connection.schema:
with driver.session() as session:
result = session.run(query)
return result.data()
else:
with driver.session(database=self.connection.schema) as session:
result = session.run(query)
return result.data()
```
This is because, when trying to access `result.data()` (or the result in general) outside the session, the result is always empty.
I tried this solution and it seems to work.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18005 | https://github.com/apache/airflow/pull/18007 | 0dba2e0d644ab0bd2512144231b56463218a3b74 | 5d2b056f558f3802499eb6d98643433c31d8534c | "2021-09-03T09:11:40Z" | python | "2021-09-07T16:17:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,003 | ["airflow/providers/google/cloud/hooks/cloud_sql.py"] | CloudSqlProxyRunner doesn't support connections from secrets backends | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google~=5.0
### Apache Airflow version
2.1.3 (latest released)
### Operating System
MacOS Big Sur
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
The CloudSqlProxyRunner queries the Airflow DB directly to retrieve the GCP connection.
https://github.com/apache/airflow/blob/7b3a5f95cd19667a683e92e311f6c29d6a9a6a0b/airflow/providers/google/cloud/hooks/cloud_sql.py#L521
### What you expected to happen
In order to not break when using connections stored in external secrets backends, it should use standardized methods to retrieve connections and not query the Airflow DB directly.
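For illustration, going through the standard hook machinery resolves environment variables and configured secrets backends before falling back to the metadata DB (sketch only; the conn id is made up):
```python
from airflow.hooks.base import BaseHook

# Resolves env vars, configured secrets backends, then the metadata DB.
connection = BaseHook.get_connection("my_gcp_conn_id")
```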
### How to reproduce
Use a CloudSQLDatabaseHook with `use_proxy=True` while using an external secrets backend.
### Anything else
Every time
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18003 | https://github.com/apache/airflow/pull/18006 | 867e9305f08bf9580f25430d8b6e84071c59f9e6 | 21348c194d4149237e357e0fff9ed444d27fa71d | "2021-09-03T08:21:38Z" | python | "2021-09-03T20:26:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,969 | ["airflow/api_connexion/endpoints/dag_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/exceptions.py", "airflow/www/views.py"] | Delete DAG REST API Functionality | ### Description
The stable REST API seems to be missing the functionality to delete a DAG.
This would mirror clicking the "Trash Can" on the UI (and could maybe eventually power it)
![image](https://user-images.githubusercontent.com/80706212/131702280-2a8d0d74-6388-4b84-84c1-efabdf640cde.png)
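Concretely, the requested call could be shaped something like this (hypothetical at the time of writing, since the endpoint does not exist yet):
```bash
curl -X DELETE "http://localhost:8080/api/v1/dags/my_dag_id" --user "admin:admin"
```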
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17969 | https://github.com/apache/airflow/pull/17980 | b6a962ca875bc29aa82a252f5c179faff601780b | 2cace945cd35545385d83090c8319525d91f8efd | "2021-09-01T15:45:03Z" | python | "2021-09-03T12:46:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,966 | ["airflow/task/task_runner/standard_task_runner.py"] | lack of definitive error message if task launch fails | ### Apache Airflow version
2.1.2
### Operating System
Centos docker image
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Self built docker image on kubernetes
### What happened
If Airflow fails to launch a task (a Kubernetes worker pod in my case), the task logs only a simple error message: "task exited with return code 1".
This is problematic for developers building platforms around Airflow.
### What you expected to happen
Provide a proper error message when a task exits with return code 1 from the standard task runner.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17966 | https://github.com/apache/airflow/pull/17967 | bff580602bc619afe1bee2f7a5c3ded5fc6e39dd | b6a962ca875bc29aa82a252f5c179faff601780b | "2021-09-01T14:49:56Z" | python | "2021-09-03T12:39:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,962 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/www/views.py", "docs/apache-airflow/security/webserver.rst", "tests/www/views/test_views_base.py", "tests/www/views/test_views_robots.py"] | Warn if robots.txt is accessed | ### Description
https://github.com/apache/airflow/pull/17946 implements a `/robots.txt` endpoint to block search engines crawling Airflow - in the cases where it is (accidentally) exposed to the public Internet.
If we record any GET requests to that end-point we'd have a strong warning flag that the deployment is exposed, and could issue a warning in the UI, or even enable some kill-switch on the deployment.
Some deployments are likely intentionally available and rely on auth mechanisms on the `login` endpoint, so there should be a config option to suppress the warnings.
An alternative approach would be to monitor for requests from specific user-agents used by crawlers, for the same reasons.
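A minimal sketch of the endpoint-hit-counting idea (names and the metric are illustrative, not the actual webserver code):
```python
from airflow.stats import Stats
from flask import Blueprint, Response

robots_bp = Blueprint("robots", __name__)

@robots_bp.route("/robots.txt")
def robots_txt():
    # A hit here suggests crawler traffic, i.e. the deployment is likely publicly reachable.
    Stats.incr("webserver.robots_txt_accessed")
    return Response("User-agent: *\nDisallow: /\n", mimetype="text/plain")
```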
### Use case/motivation
People who accidentally expose airflow have a slightly higher chance of realising they've done so and tighten their security.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17962 | https://github.com/apache/airflow/pull/18557 | 7c45bc35e767ff21982636fa2e36bc07c97b9411 | 24a53fb6476a3f671859451049407ba2b8d931c8 | "2021-09-01T12:11:26Z" | python | "2021-12-24T10:24:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,932 | ["airflow/models/baseoperator.py", "airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | template_ext in task attributes shows incorrect value | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian Buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Managed Services (Astronomer, Composer, MWAA etc.)
### Deployment details
Running `quay.io/astronomer/ap-airflow:2.1.3-buster` Docker image
### What happened
In the task attributes view of a BashOperator, the `template_ext` row doesn't show any value, while I would expect to see `('.sh', '.bash')`.
![image](https://user-images.githubusercontent.com/6249654/131467817-8620f68d-0f3d-491d-ad38-cbf5b91cdcb3.png)
### What you expected to happen
Expect to see correct `template_ext` value
### How to reproduce
Run a BashOperator and browse to Instance Details --> scroll down to `template_ext` row
### Anything else
I spent a little time trying to figure out what the reason could be, but haven't found the problem yet, so I created this issue to track it.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17932 | https://github.com/apache/airflow/pull/17985 | f7276353ccd5d15773eea6c0d90265650fd22ae3 | ca4f99d349e664bbcf58d3c84139b5f4919f6c8e | "2021-08-31T08:21:15Z" | python | "2021-09-02T22:54:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,931 | ["airflow/plugins_manager.py", "airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py", "tests/test_utils/timetables.py"] | Timetable registration a la OperatorLinks | Currently (as implemented in #17414), timetables are serialised by their classes’ full import path. This works most of the time, but not in some cases, including:
* Nested in class or function
* Declared directly in a DAG file without a valid import name (e.g. `12345.py`)
It’s fundamentally impossible to fix some of the cases (e.g. function-local class declaration) due to how Python works, but by requiring the user to explicitly register the timetable class, we can at least expose that problem so users don’t attempt to do that.
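For illustration, registration "a la operator links" could look something like this from the DAG author's side (a sketch assuming a plugins-based mechanism; `WorkdayTimetable` and its module are made up):
```python
from airflow.plugins_manager import AirflowPlugin

from my_company.timetables import WorkdayTimetable  # hypothetical Timetable subclass

class WorkdayTimetablePlugin(AirflowPlugin):
    name = "workday_timetable_plugin"
    timetables = [WorkdayTimetable]
```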
However, since the timetable actually would work a lot of times without any additional mechanism, I’m also wondering if we should _require_ registration.
1. Always require registration. A DAG using an unregistered timetable class fails to serialise.
2. Only require registration when the timetable class has wonky import path. “Normal” classes work out of the box without registering, and user sees a serialisation error asking for registration otherwise.
3. Don’t require registration. If a class cannot be correctly serialised, tell the user we can’t do it and the timetable must be declared another way. | https://github.com/apache/airflow/issues/17931 | https://github.com/apache/airflow/pull/17989 | 31b15c94886c6083a6059ca0478060e46db67fdb | be7efb1d30929a7f742f5b7735a3d6fbadadd352 | "2021-08-31T08:13:51Z" | python | "2021-09-03T12:15:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,930 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/providers/amazon/aws/utils/emailer.py", "airflow/utils/email.py", "docs/apache-airflow/howto/email-config.rst", "tests/providers/amazon/aws/utils/test_emailer.py", "tests/utils/test_email.py"] | Emails notifications with AWS SES not working due to missing "from:" field | ### Apache Airflow version
main (development)
### Operating System
macOS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon (main)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
https://github.com/apache/airflow/blob/098765e227d4ab7873a2f675845b50c633e356da/airflow/providers/amazon/aws/utils/emailer.py#L40-L41
```
hook.send_email(
mail_from=None,
to=to,
subject=subject,
html_content=html_content,
files=files,
cc=cc,
bcc=bcc,
mime_subtype=mime_subtype,
mime_charset=mime_charset,
)
```
The `mail_from=None` will trigger an error when sending the mail: when using AWS Simple Email Service you need to provide a from address, and it has to be already verified in AWS SES > "Verified identities".
### What you expected to happen
I expected it to send a mail, but it doesn't because of the missing from address.
### How to reproduce
_No response_
### Anything else
This is easily solved by providing a from address like in:
```
smtp_mail_from = conf.get('smtp', 'SMTP_MAIL_FROM')
hook.send_email(
mail_from=smtp_mail_from,
```
The problem is: can we reuse `smtp.smtp_mail_from`, or do we need to create a new configuration parameter like `email.email_from_address`?
* smtp uses its own config smtp.smtp_mail_from
* sendgrid uses an environment variable `SENDGRID_MAIL_FROM` (undocumented by the way)
So, my personal proposal is to
* introduce an email.email_from_email and email.email_from_name
* read those new configuration parameters at [utils.email.send_email](https://github.com/apache/airflow/blob/098765e227d4ab7873a2f675845b50c633e356da/airflow/utils/email.py#L50-L67 )
* pass those as arguments to the backend (kwargs `from_email`, `from_name`). The [sendgrid backend can already read those](https://github.com/apache/airflow/blob/098765e227d4ab7873a2f675845b50c633e356da/airflow/providers/sendgrid/utils/emailer.py#L70-L71), although it seems unused at the moment.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17930 | https://github.com/apache/airflow/pull/18042 | 0649688e15b9763e91b4cd2963cdafea1eb8219d | 1543dc28f4a2f1631dfaedd948e646a181ccf7ee | "2021-08-31T07:17:07Z" | python | "2021-10-29T09:10:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,897 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/models/base.py", "docs/apache-airflow/howto/set-up-database.rst", "tests/models/test_base.py"] | Dag tags are not refreshing if case sensitivity changed | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Custom docker image on k8s
### What happened
A new DAG version was written for the same-named DAG. The only difference was that one of the specified tags changed case, e.g. "test" to "Test". With a huge number of DAGs this causes the scheduler to crash immediately due to a constraint violation.
### What you expected to happen
Tags are refreshed correctly without a scheduler crash.
### How to reproduce
1. Create a DAG with tags in a running Airflow cluster
2. Update the DAG, changing the case of one of the tags, e.g. "test" to "Test"
3. Watch the scheduler crash continuously
### Anything else
An alternative option is to make tags case-sensitive...
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17897 | https://github.com/apache/airflow/pull/18072 | 79d85573591f641db4b5f89a12213e799ec6dea1 | b658a4243fb5b22b81677ac0f30c767dfc3a1b4b | "2021-08-29T17:14:31Z" | python | "2021-09-07T23:17:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,879 | ["airflow/utils/state.py", "tests/utils/test_state.py"] | Cleared tasks get literal 'DagRunState.QUEUED' instead of the value 'queued' | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
CentOS Stream release 8
### Versions of Apache Airflow Providers
None of them are relevant
### Deployment
Virtualenv installation
### Deployment details
mkdir /srv/airflow
cd /srv/airflow
virtualenv venv
source venv/bin/activate
pip install apache-airflow==2.1.3
AIRFLOW_HOME and AIRFLOW_CONFIG is specified via environment variables in /etc/sysconfig/airflow, which is in turn used as EnvironmentFile in systemd service files.
systemctl start airflow-{scheduler,webserver,kerberos}
Python version: 3.9.2
LocalExecutors are used
### What happened
On the Web UI, I had cleared failed tasks, which were cleared properly, but the DagRun became black with a literal state value of "DagRunState.QUEUED"; therefore it can't be scheduled again.
### What you expected to happen
DagRun state should be 'queued'.
### How to reproduce
Just clear any tasks on the Web UI. I wonder how it could be that nobody noticed this issue.
### Anything else
Here's a patch to fix it. Maybe the __str__ method should be different, or the database/persistence layer should handle this, but for now, this solves the problem.
```patch
--- airflow/models/dag.py.orig 2021-08-28 09:48:05.465542450 +0200
+++ airflow/models/dag.py 2021-08-28 09:47:34.272639960 +0200
@@ -1153,7 +1153,7 @@
confirm_prompt=False,
include_subdags=True,
include_parentdag=True,
- dag_run_state: DagRunState = DagRunState.QUEUED,
+ dag_run_state: DagRunState = DagRunState.QUEUED.value,
dry_run=False,
session=None,
get_tis=False,
@@ -1369,7 +1369,7 @@
confirm_prompt=False,
include_subdags=True,
include_parentdag=False,
- dag_run_state=DagRunState.QUEUED,
+ dag_run_state=DagRunState.QUEUED.value,
dry_run=False,
):
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17879 | https://github.com/apache/airflow/pull/17886 | 332406dae9f6b08de0d43576c4ed176eb49b8ed0 | a3f9c690aa80d12ff1d5c42eaaff4fced07b9429 | "2021-08-28T08:13:59Z" | python | "2021-08-30T18:05:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,874 | ["airflow/www/static/js/trigger.js"] | Is it possible to extend the window with parameters in the UI? | ### Description
Is it possible to extend (resize) the window with parameters in the UI? I have simple parameters and they do not fit.
### Use case/motivation
I have simple parameters and they do not fit.
### Related issues
yt
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/17874 | https://github.com/apache/airflow/pull/20052 | 9083ecd928a0baba43e369e2c65225e092a275ca | abe842cf6b68472cc4f84dcec1a5ef94ff98ba5b | "2021-08-27T20:38:53Z" | python | "2021-12-08T22:14:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,869 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/configuration.py", "airflow/www/extensions/init_views.py", "docs/apache-airflow/security/api.rst"] | More than one url in cors header | **Description**
Enable a list of allowed origins for the CORS headers.
**Use case / motivation**
I'm currently developing a Django application that works with Apache Airflow for some external work through the API. From Django, the user runs a DAG and a JS request waits for the DAG to end. For this, I enabled CORS headers in airflow.cfg with the correct URL. But now I have multiple environments (dev & prod). It would be useful to be able to set more than one URL in the airflow.cfg CORS allowed-origin setting.
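Something along these lines would cover it (hypothetical config shape; the exact option name and separator would be up to the implementation):
```ini
[api]
# Hypothetical: allow several origins, one per environment
access_control_allow_origins = https://dev.example.com https://prod.example.com
```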
Thanks in advance :)
| https://github.com/apache/airflow/issues/17869 | https://github.com/apache/airflow/pull/17941 | ace2374c20a9dc4b004237bfd600dd6eaa0f91b4 | a88115ea24a06f8706886a30e4f765aa4346ccc3 | "2021-08-27T12:24:31Z" | python | "2021-09-03T14:37:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,853 | ["docs/apache-airflow/templates-ref.rst", "docs/apache-airflow/timezone.rst"] | Macros reference point to outdated pendulum information | <!--
-->
**Apache Airflow version**:
2.x
**Description**:
The documentation on Macros Reference is outdated when it comes to `pendulum`.
This commit: https://github.com/apache/airflow/commit/c41192fa1fc5c2b3e7b8414c59f656ab67bbef28 updated Pendulum to `~=2.0` but the [documentation](https://airflow.apache.org/docs/apache-airflow/stable/macros-ref.html) still refers to `pendulum.Pendulum` (which doesn't exist in 2.x).
The link to the docs refers to 2.x, but it gives me a 404 when I try to open it? https://github.com/apache/airflow/blame/98d45cbe503241798653d47192364b1ffb633f35/docs/apache-airflow/templates-ref.rst#L178
I think these macros are all of type `pendulum.DateTime` now
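For example, with pendulum 2.x:
```python
import pendulum

dt = pendulum.now("UTC")
assert isinstance(dt, pendulum.DateTime)  # pendulum.Pendulum no longer exists in 2.x
```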
**Are you willing to submit a PR?**
Yes, I'll add a PR asap
| https://github.com/apache/airflow/issues/17853 | https://github.com/apache/airflow/pull/18766 | c9bf5f33e5d5bcbf7d31663a8571628434d7073f | cf27419cfe058750cde4247935e20deb60bda572 | "2021-08-26T15:10:42Z" | python | "2021-10-06T12:04:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,852 | ["docs/apache-airflow/howto/connection.rst"] | Connections created via AIRFLOW_CONN_ enviroment variables do not show up in the Admin > Connections or airflow connections list | The connections created using [environment variables like AIRFLOW_CONN_MYCONNID](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#storing-a-connection-in-environment-variables) do not show up in the UI.
They don't show up in `airflow connections list` either, although if you know the conn_id you can `airflow connections get conn_id` and it will find it.
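For example (the conn id and URI are made up):
```bash
export AIRFLOW_CONN_MY_CONN='postgres://user:pass@host:5432/mydb'
airflow connections get my_conn     # resolves the env-var connection
airflow connections list            # does not include my_conn
```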
| https://github.com/apache/airflow/issues/17852 | https://github.com/apache/airflow/pull/17915 | b5da846dd1f27d798dc7dc4f4227de4418919874 | e9bf127a651794b533f30a77dc171e0ea5052b4f | "2021-08-26T14:43:30Z" | python | "2021-08-30T16:52:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,833 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "airflow/providers/amazon/aws/utils/connection_wrapper.py", "docs/apache-airflow-providers-amazon/connections/aws.rst", "tests/providers/amazon/aws/utils/test_connection_wrapper.py"] | AwsBaseHook isn't using connection.host, using connection.extra.host instead | <!--
-->
**Apache Airflow version**:
Version: v2.1.2
Git Version: .release:2.1.2+d25854dd413aa68ea70fb1ade7fe01425f456192
**OS**:
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
**Apache Airflow Provider versions**:
**Deployment**:
Docker compose using reference docker compose from here: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
**What happened**:
Connection specifications have a host field. This is declared as a member variable of the connection class here:
https://github.com/apache/airflow/blob/96f7e3fec76a78f49032fbc9a4ee9a5551f38042/airflow/models/connection.py#L101
It's correctly used in the BaseHook class, eg here:
https://github.com/apache/airflow/blob/96f7e3fec76a78f49032fbc9a4ee9a5551f38042/airflow/hooks/base.py#L69
However in AwsBaseHook, it's expected to be in extra, here:
https://github.com/apache/airflow/blob/96f7e3fec76a78f49032fbc9a4ee9a5551f38042/airflow/providers/amazon/aws/hooks/base_aws.py#L404-L406
This should be changed to
`endpoint_url = connection_object.host`
**What you expected to happen**:
AwsBaseHook should use the connection.host value, not connection.extra.host
**How to reproduce it**:
See above code
**Anything else we need to know**:
**Are you willing to submit a PR?**
Probably. It'd also be good for someone inexperienced. This would represent a backwards-incompatible change, any problem with that?
| https://github.com/apache/airflow/issues/17833 | https://github.com/apache/airflow/pull/25494 | 2682dd6b2c2f2de95cda435984832e8421d0497b | a0212a35930f44d88e12f19e83ec5c9caa0af82a | "2021-08-25T17:17:15Z" | python | "2022-08-08T18:36:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,828 | ["scripts/in_container/prod/entrypoint_prod.sh"] | Docker image entrypoint does not parse redis URLs correctly with no password | <!--
-->
**Apache Airflow version**:
2.1.3
**Deployment**:
ECS, docker-compose
**What happened**:
A redis connection URL of the form:
```
redis://<host>:6379/1
```
is parsed incorrectly by this pattern: https://github.com/apache/airflow/blob/2.1.3/scripts/in_container/prod/entrypoint_prod.sh#L96 - the host is determined to be `localhost` as there is no `@` character.
Adding an `@` character, e.g. `redis://@redis-analytics-airflow.signal:6379/1` results in correct parsing, but cannot connect to redis as auth fails.
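For what it's worth, a pattern that treats the credentials part as optional would handle both forms; a rough sketch (not the actual entrypoint code):
```bash
# "user:password@" is optional; capture host and port either way.
if [[ ${AIRFLOW__CELERY__BROKER_URL} =~ ^redis://([^@/]*@)?([^/:]+):([0-9]+) ]]; then
    DB_HOST="${BASH_REMATCH[2]}"
    DB_PORT="${BASH_REMATCH[3]}"
fi
```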
Logs:
```
BACKEND=redis
DB_HOST=localhost
DB_PORT=6379
```
and eventually
```
ERROR! Maximum number of retries (20) reached.
Last check result:
$ run_nc 'localhost' '6379'
(UNKNOWN) [127.0.0.1] 6379 (?) : Connection refused
sent 0, rcvd 0
```
**How to reproduce it**:
Run airflow with an official docker image and ` AIRFLOW__CELERY__BROKER_URL: redis://<ANY_REDIS_HOSTNAME>:6379/1` and observe that it parses the host as `localhost`.
**Anything else we need to know**:
This was working without issue in Airflow 2.1.2 - weirdly I cannot see changes to the entrypoint, but we have not changed the URL `AIRFLOW__CELERY__BROKER_URL: redis://redis-analytics-airflow.signal:6379/1`. | https://github.com/apache/airflow/issues/17828 | https://github.com/apache/airflow/pull/17847 | a2fd67dc5e52b54def97ea9bb61c8ba3557179c6 | 275e0d1b91949ad1a1b916b0ffa27009e0797fea | "2021-08-25T10:56:44Z" | python | "2021-08-27T15:22:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,826 | ["airflow/www/static/css/dags.css", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "docs/apache-airflow/core-concepts/dag-run.rst", "tests/www/views/test_views_home.py"] | Add another tab in Dags table that shows running now | The DAGs main page has All/Active/Paused.
It would be nice if we could also have a "Running Now" tab which shows all the DAGs that have at least one DAG run in the Running state:
![Screen Shot 2021-08-25 at 11 30 55](https://user-images.githubusercontent.com/62940116/130756490-647dc664-7528-4eb7-b4d8-515577f7e6bb.png)
For the above example, the new "Running Now" tab should show only 2 rows.
The use case is to easily see which DAGs are currently in progress.
| https://github.com/apache/airflow/issues/17826 | https://github.com/apache/airflow/pull/30429 | c25251cde620481592392e5f82f9aa8a259a2f06 | dbe14c31d52a345aa82e050cc0a91ee60d9ee567 | "2021-08-25T08:38:02Z" | python | "2023-05-22T16:05:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,810 | ["airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dags.html"] | "Runs" column being cut off by "Schedule" column in web UI | <!--
-->
**Apache Airflow version**:
2.1.3
**OS**:
CentOS Linux 7
**Apache Airflow Provider versions**:
Standard provider package that comes with Airflow 2.1.3 via Docker container
**Deployment**:
docker-compose version 1.29.2, build 5becea4c
**What happened**:
Updated the container from Airflow 2.1.2 to 2.1.3; the "Runs" and "Schedule" columns started overlapping, without any specific error message.
**What you expected to happen**:
I did not expect an overlap of these two columns.
**How to reproduce it**:
Update Airflow image from 2.1.2 to 2.1.3 using Google Chrome Version 92.0.4515.159 (Official Build) (x86_64).
**Anything else we need to know**:
Tested in Safari Version 14.1.2 (16611.3.10.1.6) and the icons do not even load: the loading "dots" animate but the "Runs" and "Schedule" icons never appear. No special plugins are used and ad blockers are shut off. Refreshing does not help; rebuilding does not help. Widening the window does not fix the issue either, as the other columns adjust with the change in window width.
**Are you willing to submit a PR?**
I am rather new to Airflow and do not have enough experience in fixing Airflow bugs.
![Screen Shot 2021-08-24 at 9 08 29 AM](https://user-images.githubusercontent.com/18553670/130621925-50d8f09d-6863-4e55-8392-7931b8963a0e.png)
| https://github.com/apache/airflow/issues/17810 | https://github.com/apache/airflow/pull/17817 | 06e53f26e5cb2f1ad4aabe05fa12d2db9c66e282 | 96f7e3fec76a78f49032fbc9a4ee9a5551f38042 | "2021-08-24T13:09:54Z" | python | "2021-08-25T12:48:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,804 | ["airflow/cli/commands/webserver_command.py", "setup.cfg"] | Webserver fail to start with virtualenv: No such file or directory: 'gunicorn': 'gunicorn' | <!--
-->
**Apache Airflow version**: 2.1.2
**OS**: CentOS Linux release 8.2.2004 (Core)
**Apache Airflow Provider versions**:
apache-airflow-providers-celery==2.0.0
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-http==2.0.0
apache-airflow-providers-imap==2.0.0
apache-airflow-providers-mysql==2.1.0
apache-airflow-providers-redis==2.0.0
apache-airflow-providers-sftp==2.1.0
apache-airflow-providers-sqlite==2.0.0
apache-airflow-providers-ssh==2.1.0
**Deployment**: Virtualenv 3.7.5
**What happened**:
I got an error when I tried to start the webserver:
```bash
$ $VENV_PATH/bin/airflow webserver
...
=================================================================
Traceback (most recent call last):
File "/VENV_PATH/bin/airflow", line 8, in <module>
sys.exit(main())
File "/VENV_PATH/lib/python3.7/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/VENV_PATH/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/VENV_PATH/lib/python3.7/site-packages/airflow/utils/cli.py", line 91, in wrapper
return f(*args, **kwargs)
File "/VENV_PATH/lib/python3.7/site-packages/airflow/cli/commands/webserver_command.py", line 483, in webserver
with subprocess.Popen(run_args, close_fds=True) as gunicorn_master_proc:
File "/home/user/.pyenv/versions/3.7.5/lib/python3.7/subprocess.py", line 800, in __init__
restore_signals, start_new_session)
File "/home/user/.pyenv/versions/3.7.5/lib/python3.7/subprocess.py", line 1551, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'gunicorn': 'gunicorn'
[user@xxxxx xxxxxxxxxx]$ /VENV_PATH/bin/python
Python 3.7.5 (default, Aug 20 2021, 17:16:24)
```
**What you expected to happen**:
Airflow starts the webserver successfully.
**How to reproduce it**:
Assume we already have `AIRFLOW_HOME` set.
1. create a venv: `virtualenv xxx`
2. `xxx/bin/pip install apache-airflow`
3. `xxx/bin/airflow webserver`
**Anything else we need to know**:
`cli/commands/webserver_command.py:394` uses `gunicorn` as the subprocess command and assumes it is on `PATH`.
`airflow webserver` runs well after `export PATH=$PATH:$VENV_PATH/bin`.
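One possible direction (a sketch only, not necessarily the right fix) is to resolve `gunicorn` relative to the running interpreter instead of relying on `PATH`:
```python
import os
import sys

# Prefer the gunicorn next to the current interpreter (i.e. inside the venv),
# falling back to whatever "gunicorn" resolves to on PATH.
candidate = os.path.join(os.path.dirname(sys.executable), "gunicorn")
gunicorn_cmd = candidate if os.path.exists(candidate) else "gunicorn"
```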
**Are you willing to submit a PR?**
I'm not good at python so maybe it's not a good idea for me to submit a PR.
| https://github.com/apache/airflow/issues/17804 | https://github.com/apache/airflow/pull/17805 | 5019916b7b931a13ad7131f0413601b0db475b77 | c32ab51ef8819427d8af0b297a80b99e2f93a953 | "2021-08-24T10:00:16Z" | python | "2021-08-24T13:27:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,800 | ["airflow/providers/google/cloud/hooks/gcs.py", "airflow/providers/google/cloud/operators/bigquery.py", "airflow/providers/google/cloud/utils/mlengine_operator_utils.py", "tests/providers/google/cloud/hooks/test_gcs.py", "tests/providers/google/cloud/operators/test_mlengine_utils.py", "tests/providers/google/cloud/utils/test_mlengine_operator_utils.py"] | BigQueryCreateExternalTableOperator from providers package fails to get schema from GCS object |
**Apache Airflow version**: 1.10.15
**OS**: Linux 5.4.109+
**Apache Airflow Provider versions**:
apache-airflow-backport-providers-apache-beam==2021.3.13
apache-airflow-backport-providers-cncf-kubernetes==2021.3.3
apache-airflow-backport-providers-google==2021.3.3
**Deployment**: Cloud Composer 1.16.6 (Google Cloud Managed Airflow Service)
**What happened**:
BigQueryCreateExternalTableOperator from the providers package ([airflow.providers.google.cloud.operators.bigquery](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py)) fails with correct _schema_object_ parameter.
**What you expected to happen**:
I expected the DAG to run successfully, as I had previously tested it with the deprecated operator from the contrib package ([airflow.contrib.operators.bigquery_operator](https://github.com/apache/airflow/blob/5786dcdc392f7a2649f398353a0beebef01c428e/airflow/contrib/operators/bigquery_operator.py#L476)), using the same parameters.
Debugging the DAG execution log, I saw the providers operator generated a wrong call to the Cloud Storage API: it mixed up the bucket and object parameters, according to the stack trace below.
```
[2021-08-23 23:17:22,316] {taskinstance.py:1152} ERROR - 404 GET https://storage.googleapis.com/download/storage/v1/b/foo/bar/schema.json/o/mybucket?alt=media: Not Found: ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
Traceback (most recent call last)
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/storage/client.py", line 728, in download_blob_to_fil
checksum=checksum
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 956, in _do_downloa
response = download.consume(transport, timeout=timeout
File "/opt/python3.6/lib/python3.6/site-packages/google/resumable_media/requests/download.py", line 168, in consum
self._process_response(result
File "/opt/python3.6/lib/python3.6/site-packages/google/resumable_media/_download.py", line 186, in _process_respons
response, _ACCEPTABLE_STATUS_CODES, self._get_status_cod
File "/opt/python3.6/lib/python3.6/site-packages/google/resumable_media/_helpers.py", line 104, in require_status_cod
*status_code
google.resumable_media.common.InvalidResponse: ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>
During handling of the above exception, another exception occurred
Traceback (most recent call last)
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_tas
result = task_copy.execute(context=context
File "/usr/local/lib/airflow/airflow/providers/google/cloud/operators/bigquery.py", line 1178, in execut
schema_fields = json.loads(gcs_hook.download(self.bucket, self.schema_object)
File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/gcs.py", line 301, in downloa
return blob.download_as_string(
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 1391, in download_as_strin
timeout=timeout
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 1302, in download_as_byte
checksum=checksum
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/storage/client.py", line 731, in download_blob_to_fil
_raise_from_invalid_response(exc
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 3936, in _raise_from_invalid_respons
raise exceptions.from_http_status(response.status_code, message, response=response
google.api_core.exceptions.NotFound: 404 GET https://storage.googleapis.com/download/storage/v1/b/foo/bar/schema.json/o/mybucket?alt=media: Not Found: ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>
```
PS: the bucket (_mybucket_) and object path (_foo/bar/schema.json_) were masked for security reasons.
I believe the error appears on the [following](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py#L1183) line, although the bug itself is probably located on the [gcs_hook.download()](https://github.com/apache/airflow/blob/0264fea8c2024d7d3d64aa0ffa28a0cfa48839cd/airflow/providers/google/cloud/hooks/gcs.py#L266) method:
`schema_fields = json.loads(gcs_hook.download(self.bucket, self.schema_object))`
**How to reproduce it**:
Create a DAG using both operators and the same parameters, as in the example below. The task using the contrib version of the operator should work, while the task using the providers version should fail.
```
from airflow.contrib.operators.bigquery_operator import BigQueryCreateExternalTableOperator as BQExtTabOptContrib
from airflow.providers.google.cloud.operators.bigquery import BigQueryCreateExternalTableOperator as BQExtTabOptProviders
#TODO: default args and DAG definition
create_landing_external_table_contrib = BQExtTabOptContrib(
task_id='create_landing_external_table_contrib',
bucket='mybucket',
source_objects=['foo/bar/*.csv'],
destination_project_dataset_table='project.dataset.table',
schema_object='foo/bar/schema_file.json',
)
create_landing_external_table_providers = BQExtTabOptProviders(
task_id='create_landing_external_table_providers',
bucket='mybucket',
source_objects=['foo/bar/*.csv'],
destination_project_dataset_table='project.dataset.table',
schema_object='foo/bar/schema_file.json',
)
```
**Anything else we need to know**:
The [*gcs_hook.download()*](https://github.com/apache/airflow/blob/0264fea8c2024d7d3d64aa0ffa28a0cfa48839cd/airflow/providers/google/cloud/hooks/gcs.py#L313) method is using the deprecated method _download_as_string()_ from the Cloud Storage API (https://googleapis.dev/python/storage/latest/blobs.html). It should be changed to _download_as_bytes()_.
Also, comparing the [providers version](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py#L1183) of the operator to the [contrib version](https://github.com/apache/airflow/blob/5786dcdc392f7a2649f398353a0beebef01c428e/airflow/contrib/operators/bigquery_operator.py#L621), I observed there is also a missing decode operation: `.decode("utf-8")`
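Putting both observations together, the call would presumably become something like (sketch):
```python
schema_fields = json.loads(gcs_hook.download(self.bucket, self.schema_object).decode("utf-8"))
```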
**Are you willing to submit a PR?**
Yes | https://github.com/apache/airflow/issues/17800 | https://github.com/apache/airflow/pull/20091 | 3a0f5545a269b35cf9ccc845c4ec9397d9376b70 | 50bf5366564957cc0f057ca923317c421fffdeaa | "2021-08-24T00:25:07Z" | python | "2021-12-07T23:53:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,795 | ["airflow/providers/http/provider.yaml"] | Circular Dependency in Apache Airflow 2.1.3 | **Apache Airflow version**:
2.1.3
**OS**:
Ubuntu 20.04 LTS
**Deployment**:
Bazel
**What happened**:
When I tried to bump my Bazel monorepo from 2.1.2 to 2.1.3, Bazel complains that there is the following circular dependency.
```
ERROR: /github/home/.cache/bazel/_bazel_bookie/c5c5e4532705a81d38d884f806d2bf84/external/pip/pypi__apache_airflow/BUILD:11:11: in py_library rule @pip//pypi__apache_airflow:pypi__apache_airflow: cycle in dependency graph:
//wager/publish/airflow:airflow
.-> @pip//pypi__apache_airflow:pypi__apache_airflow
| @pip//pypi__apache_airflow_providers_http:pypi__apache_airflow_providers_http
`-- @pip//pypi__apache_airflow:pypi__apache_airflow
```
**What you expected to happen**:
No dependency cycles.
**How to reproduce it**:
A concise reproduction will require some effort. I am hoping that there is a quick resolution to this, but I am willing to create a reproduction if it is required to determine the root cause.
**Anything else we need to know**:
Perhaps related to apache/airflow#14128.
**Are you willing to submit a PR?**
Yes. | https://github.com/apache/airflow/issues/17795 | https://github.com/apache/airflow/pull/17796 | 36c5fd3df9b271702e1dd2d73c579de3f3bd5fc0 | 0264fea8c2024d7d3d64aa0ffa28a0cfa48839cd | "2021-08-23T20:52:43Z" | python | "2021-08-23T22:47:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,765 | ["airflow/providers/apache/hive/hooks/hive.py", "tests/providers/apache/hive/hooks/test_hive.py", "tests/test_utils/mock_process.py"] | get_pandas_df() fails when it tries to read an empty table | <!--
-->
**Apache Airflow version**:
2.2.0.dev0
**OS**:
Debian GNU/Linux
**Apache Airflow Provider versions**:
**Deployment**:
docker-compose version 1.29.2
**What happened**:
Currently in the Hive hooks, when [get_pandas_df()](https://github.com/apache/airflow/blob/main/airflow/providers/apache/hive/hooks/hive.py#L1039) is used to create a dataframe, the first step creates the dataframe and the second step sets the columns. The step that sets the columns throws an exception when the table is empty.
```
hh = HiveServer2Hook()
sql = "SELECT * FROM <table> WHERE 1=0"
df = hh.get_pandas_df(sql)
[2021-08-15 21:10:15,282] {{hive.py:449}} INFO - SELECT * FROM <table> WHERE 1=0
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/venv/lib/python3.7/site-packages/airflow/providers/apache/hive/hooks/hive.py", line 1073, in get_pandas_df
df.columns = [c[0] for c in res['header']]
File "/venv/lib/python3.7/site-packages/pandas/core/generic.py", line 5154, in __setattr__
return object.__setattr__(self, name, value)
File "pandas/_libs/properties.pyx", line 66, in pandas._libs.properties.AxisProperty.__set__
File "/venv/lib/python3.7/site-packages/pandas/core/generic.py", line 564, in _set_axis
self._mgr.set_axis(axis, labels)
File "/venv/lib/python3.7/site-packages/pandas/core/internals/managers.py", line 227, in set_axis
f"Length mismatch: Expected axis has {old_len} elements, new "
ValueError: Length mismatch: Expected axis has 0 elements, new values have 1 elements
```
**What you expected to happen**:
When an empty table is read, get_pandas_df() should return an empty dataframe, with the columns listed in the empty dataframe. Ideally, `len(df.index)` produces `0` and `df.columns` lists the columns.
**How to reproduce it**:
Use your own hive server...
```
from airflow.providers.apache.hive.hooks.hive import HiveServer2Hook
hh = HiveServer2Hook()
sql = "SELECT * FROM <table> WHERE 1=0"
df = hh.get_pandas_df(sql)
```
Where \<table> can be any table.
**Anything else we need to know**:
Here's a potential fix ...
```
#df = pandas.DataFrame(res['data'], **kwargs)
#df.columns = [c[0] for c in res['header']]
df = pandas.DataFrame(res['data'], columns=[c[0] for c in res['header']], **kwargs)
```
A PR is more or less ready, but someone else should confirm this is an actual bug. I can trigger the bug on an actual Hive server, but I cannot trigger it using [test_hive.py](https://github.com/apache/airflow/blob/main/tests/providers/apache/hive/hooks/test_hive.py). Also, if a PR is needed and a new test is needed, then I may need some help updating test_hive.py.
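To make the failure easier to see without a Hive server, here is a small standalone pandas illustration of the difference between the two approaches (illustration only; the `res` dict just mimics the shape of the hook's result):

```python
import pandas

# Simulated result of an empty query: headers are present, data is not.
res = {"header": [("col1",), ("col2",)], "data": []}

# Two-step approach: a frame built from [] has zero columns, so assigning
# column names afterwards raises the "Length mismatch" ValueError above.
# df = pandas.DataFrame(res["data"])
# df.columns = [c[0] for c in res["header"]]

# One-step approach (the potential fix): pass columns= up front so the empty
# frame still carries the expected schema.
df = pandas.DataFrame(res["data"], columns=[c[0] for c in res["header"]])
assert len(df.index) == 0 and list(df.columns) == ["col1", "col2"]
```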
**Are you willing to submit a PR?**
Yes.
| https://github.com/apache/airflow/issues/17765 | https://github.com/apache/airflow/pull/17777 | 1d71441ceca304b9f6c326a4381626a60164b774 | da99c3fa6c366d762bba9fbf3118cc3b3d55f6b4 | "2021-08-21T04:09:51Z" | python | "2021-08-30T19:22:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,733 | ["scripts/in_container/prod/clean-logs.sh"] | Scheduler-gc fails on helm chart when running 'find' |
**Apache Airflow version**: 2.1.3
**OS**: centos-release-7-8.2003.0.el7.centos.x86_64
**Apache Airflow Provider versions**: Irrelevant
**Deployment**: Helm Chart
**Versions**:
```go
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-eks-ad4801", GitCommit:"ad4801fd44fe0f125c8d13f1b1d4827e8884476d", GitTreeState:"clean", BuildDate:"2020-10-20T23:30:20Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.17-eks-087e67", GitCommit:"087e67e479962798594218dc6d99923f410c145e", GitTreeState:"clean", BuildDate:"2021-07-31T01:39:55Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
```
**What happened**:
Helm deployment succeeds but scheduler pod fails with error:
```log
Cleaning logs every 900 seconds
Trimming airflow logs to 15 days.
find: The -delete action automatically turns on -depth, but -prune does nothing when -depth is in effect. If you want to carry on anyway, just explicitly use the -depth option.
```
**What you expected to happen**:
I expect the `scheduler-gc` container to run successfully, and if it fails, the Helm deployment should fail.
**How to reproduce it**:
1. Build container using breeze from [reference `719709b6e994a99ad2cb8f90042a19a7924acb8e`](https://github.com/apache/airflow/commit/719709b6e994a99ad2cb8f90042a19a7924acb8e)
2. Deploy to helm using this container.
3. Run `kubectl describe pod airflow-scheduler-XXXX -c scheduler-gc` to see the error.
For minimal version:
```shell
git clone git@github.com:apache/airflow.git
cd airflow
./breeze build-image --image-tag airflow --production-image
docker run --entrypoint=bash airflow /clean-logs
echo $?
```
**Anything else we need to know**:
It appears the change was made in order to support a specific kubernetes volume protocol: https://github.com/apache/airflow/pull/17547.
I ran this command locally and achieved the same message:
```console
$ find "${DIRECTORY}"/logs -type d -name 'lost+found' -prune -name '*.log' -delete
find: The -delete action automatically turns on -depth, but -prune does nothing when -depth is in effect. If you want to carry on anyway, just explicitly use the -depth option.
$ echo $?
1
```
**Are you willing to submit a PR?**
Maybe. I am not very familiar with the scheduler-gc service so I will have to look into it. | https://github.com/apache/airflow/issues/17733 | https://github.com/apache/airflow/pull/17739 | af9dc8b3764fc0f4630c0a83f1f6a8273831c789 | f654db8327cdc5c6c1f26517f290227cbc752a3c | "2021-08-19T14:43:47Z" | python | "2021-08-19T19:42:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,727 | ["airflow/www/templates/airflow/_messages.html", "airflow/www/templates/airflow/dags.html", "airflow/www/templates/airflow/main.html", "airflow/www/templates/appbuilder/flash.html", "airflow/www/views.py", "tests/www/views/test_views.py"] | Duplicated general notifications in Airflow UI above DAGs list | **Apache Airflow version**:
2.2.0.dev0 (possible older versions too)
**OS**:
Linux Debian 11
(By the way, there is a typo in a comment inside the bug report template `.github/ISSUE_TEMPLATE/bug_report.md`: `cat /etc/oss-release` has a double 's'.)
**Apache Airflow Provider versions**: -
**Deployment**:
Docker-compose 1.25.0
**What happened**:
The issue is related to the Airflow UI: it shows duplicated notifications when removing all the tags from the filter input after filtering DAGs by tag.
**What you expected to happen**:
The notifications should not be duplicated.
**How to reproduce it**:
In the main view, above the list of the DAGs (just below the top bar menu), there is a place where notifications appear. Suppose that there are 2 notifications (no matter which). Now try to search DAGs by tag using 'Filter DAGs by tag' input and use a valid tag. After the filtering is done, clear the input either by clicking on 'x' next to the tag or on the 'x' near the right side of the input. Notice that the notifications are duplicated and now you have 4 instead of 2 (each one is displayed twice). The input id is `s2id_autogen1`.
This bug happens only if all the tags are removed from the filtering input. If you remove the tag while there is still another one in the input, the bug will not appear. Also, it is not present while searching DAGs by name using the input 'Search DAGs'.
After search, before removing tags from input:
![notif](https://user-images.githubusercontent.com/7412964/130072185-ca2b3bd4-023d-4574-9d28-71061bf55db6.png)
Duplicated notifications after removing tag from input:
![notif_dup](https://user-images.githubusercontent.com/7412964/130072195-1d1e0eec-c10a-42d8-b75c-1904e05d44fc.png)
**Are you willing to submit a PR?**
I can try. | https://github.com/apache/airflow/issues/17727 | https://github.com/apache/airflow/pull/18462 | 3857aa822df5a057e0db67b2342ef769c012785f | 18e91bcde0922ded6eed724924635a31578d8230 | "2021-08-19T13:00:16Z" | python | "2021-09-24T11:52:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,694 | ["airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/operators/test_snowflake.py"] | Add SQL Check Operators to Snowflake Provider | Add SnowflakeCheckOperator, SnowflakeValueCheckOperator, and SnowflakeIntervalCheckOperator to Snowflake provider.
**Use case / motivation**
The SQL operator, as well as other provider DB operators, already supports this functionality, and it can prove useful for data quality use cases with Snowflake. It should be relatively easy to implement in the same fashion as [BigQuery's versions](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py#L101).
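A minimal sketch of what the simplest of the three could look like, mirroring the BigQuery approach (this is only an assumption about the implementation, not the code that was eventually merged, and parameter handling is simplified):

```python
from airflow.operators.sql import SQLCheckOperator
from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook


class SnowflakeCheckOperator(SQLCheckOperator):
    """Run a check query on Snowflake and fail if any value in the first row is falsy."""

    def __init__(self, *, sql: str, snowflake_conn_id: str = "snowflake_default", **kwargs):
        super().__init__(sql=sql, **kwargs)
        self.snowflake_conn_id = snowflake_conn_id

    def get_db_hook(self) -> SnowflakeHook:
        # SQLCheckOperator.execute() only needs get_db_hook(), so pointing it
        # at a SnowflakeHook is enough to reuse the generic check logic.
        return SnowflakeHook(snowflake_conn_id=self.snowflake_conn_id)
```

SnowflakeValueCheckOperator and SnowflakeIntervalCheckOperator could presumably follow the same pattern on top of SQLValueCheckOperator and SQLIntervalCheckOperator.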
**Are you willing to submit a PR?**
Yep!
**Related Issues**
Not that I could find. | https://github.com/apache/airflow/issues/17694 | https://github.com/apache/airflow/pull/17741 | 19454940d45486a925a6d890da017a22b6b962de | a8970764d98f33a54be0e880df27f86b311038ac | "2021-08-18T18:39:57Z" | python | "2021-09-09T23:41:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,693 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Kubernetes Executor: Tasks Stuck in Queued State indefinitely (or until scheduler restart). | **Apache Airflow version**: 2.1.3
**OS**: Custom Docker Image built from python:3.8-slim
**Apache Airflow Provider versions**:
apache-airflow-providers-cncf-kubernetes==2.0.1
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-http==2.0.0
apache-airflow-providers-imap==2.0.0
**Deployment**: Self managed on EKS, manifests templated and rendered using krane, dags mounted via PVC & EFS
**What happened**: Task remains in queued state
<img width="1439" alt="Screen Shot 2021-08-18 at 12 17 28 PM" src="https://user-images.githubusercontent.com/47788186/129936170-e16e1362-24ca-4ce9-b2f7-978f2642d388.png">
**What you expected to happen**: Task starts running
**How to reproduce it**: I believe it is because a node is removed. I've attached both the scheduler/k8s executor logs and the kubernetes logs. I deployed a custom executor with an extra line in the KubernetesJobWatcher.process_status that logged the Event type for the pod.
```
date,name,message
2021-09-08T01:06:35.083Z,KubernetesJobWatcher,Event: <pod_id> had an event of type ADDED
2021-09-08T01:06:35.084Z,KubernetesJobWatcher,Event: <pod_id> Status: Pending
2021-09-08T01:06:35.085Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:06:35.085Z,KubernetesJobWatcher,Event: <pod_id> Status: Pending
2021-09-08T01:06:43.390Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:06:43.390Z,KubernetesJobWatcher,Event: <pod_id> Status: Pending
2021-09-08T01:06:43.390Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:06:43.391Z,KubernetesJobWatcher,Event: <pod_id> Status: Pending
2021-09-08T01:06:45.392Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:06:45.393Z,KubernetesJobWatcher,Event: <pod_id> Status: is Running
2021-09-08T01:07:33.185Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:07:33.186Z,KubernetesJobWatcher,Event: <pod_id> Status: is Running
2021-09-08T01:09:50.478Z,KubernetesJobWatcher,Event: <pod_id> had an event of type MODIFIED
2021-09-08T01:09:50.479Z,KubernetesJobWatcher,Event: <pod_id> Status: is Running
2021-09-08T01:09:50.479Z,KubernetesJobWatcher,Event: <pod_id> had an event of type DELETED
2021-09-08T01:09:50.480Z,KubernetesJobWatcher,Event: <pod_id> Status: is Running
```
Here are the corresponding EKS Logs
```
@timestamp,@message
2021-09-08 01:06:34 1 factory.go:503] pod etl/<pod_id> is already present in the backoff queue
2021-09-08 01:06:43 1 scheduler.go:742] pod etl/<pod_id> is bound successfully on node ""ip-10-0-133-132.ec2.internal"", 19 nodes evaluated, 1 nodes were found feasible."
2021-09-08 01:07:32 1 controller_utils.go:122] Update ready status of pods on node [ip-10-0-133-132.ec2.internal]
2021-09-08 01:07:32 1 controller_utils.go:139] Updating ready status of pod <pod_id> to false
2021-09-08 01:07:38 1 node_lifecycle_controller.go:889] Node ip-10-0-133-132.ec2.internal is unresponsive as of 2021-09-08 01:07:38.017788167
2021-09-08 01:08:51 1 node_tree.go:100] Removed node "ip-10-0-133-132.ec2.internal" in group "us-east-1:\x00:us-east-1b" from NodeTree
2021-09-08 01:08:53 1 node_lifecycle_controller.go:789] Controller observed a Node deletion: ip-10-0-133-132.ec2.internal
2021-09-08 01:09:50 1 gc_controller.go:185] Found orphaned Pod etl/<pod_id> assigned to the Node ip-10-0-133-132.ec2.internal. Deleting.
2021-09-08 01:09:50 1 gc_controller.go:189] Forced deletion of orphaned Pod etl/<pod_id> succeeded
```
**Are you willing to submit a PR?** I think there is a state & event combination missing from the `process_status` function in the KubernetesJobWatcher. I will submit a PR to fix it.
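Purely as a standalone illustration of the combination the logs above point at (this is not Airflow's actual watcher code): the final events are a DELETED event while the pod phase still reads Running, and if that pair is not mapped to a terminal state, the task never leaves the queued state from the scheduler's point of view.

```python
from typing import Optional


def resolve_pod_event(event_type: str, phase: str) -> Optional[str]:
    """Map a (watch event type, pod phase) pair to a terminal task state."""
    if phase == "Succeeded":
        return "success"
    if phase == "Failed":
        return "failed"
    if event_type == "DELETED" and phase == "Running":
        # Pod was force-deleted (e.g. its node disappeared) before finishing;
        # without this branch the event would be silently ignored.
        return "failed"
    return None  # Pending/Running pods without a terminal signal keep waiting.


assert resolve_pod_event("DELETED", "Running") == "failed"
```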
| https://github.com/apache/airflow/issues/17693 | https://github.com/apache/airflow/pull/18095 | db72f40707c041069f0fbb909250bde6a0aea53d | e2d069f3c78a45ca29bc21b25a9e96b4e36a5d86 | "2021-08-18T17:52:26Z" | python | "2021-09-11T20:18:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,684 | ["airflow/www/views.py", "tests/www/views/test_views_base.py", "tests/www/views/test_views_home.py"] | Role permissions and displaying dag load errors on webserver UI | Running Airflow 2.1.2. Setup using oauth authentication (azure). With the public permissions role if a user did not have a valid role in Azure AD or did not login I got all sorts of redirect loops and errors. So I updated the public permissions to: [can read on Website, menu access on My Profile, can read on My Profile, menu access on Documentation, menu access on Docs]. So basically a user not logged in can view a nearly empty airflow UI and click the "Login" button to login. I also added a group to my Azure AD with the role public that includes all users in the subscription. So users can login and it will create an account in airflow for them and they see the same thing as if they are not logged in. Then if someone added them in to a role in the azure enterprise application with a different role when the login they will have what they need. Keeps everything clean and no redirect errors, etc... always just nice airflow screens.
Now one issue I noticed is with the "can read on Website" permission added to a public role the dag errors appears and are not hidden. Since the errors are related to dags I and the user does not have any dag related permissions I would think the errors would not be displayed.
I'm wondering if this is an issue or more of something that should be a new feature? Cause if I can't view the dag should I be able to view the errors for it? Like adding a role "can read on Website Errors" if a new feature or update the code to tie the display of there errors into a role permission that they are related to like can view one of the dags permissions.
Not logged in or logged in as a user who is defaulted to "Public" role and you can see the red errors that can be expanded and see line reference details of the errors:
![image](https://user-images.githubusercontent.com/83670736/129922067-e0da80fb-f6ab-44fc-80ea-9b9c7f685bb2.png)
| https://github.com/apache/airflow/issues/17684 | https://github.com/apache/airflow/pull/17835 | a759499c8d741b8d679c683d1ee2757dd42e1cd2 | 45e61a965f64feffb18f6e064810a93b61a48c8a | "2021-08-18T15:03:30Z" | python | "2021-08-25T22:50:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,679 | ["chart/templates/scheduler/scheduler-deployment.yaml"] | Airflow not running as root when deployed to k8s via helm | **Apache Airflow version**: v2.1.2
**Deployment**: Helm Chart + k8s
**What happened**:
helm install with values:
uid=0
gid=0
Airflow pods must run as root.
error:
from container's bash:
root@airflow-xxx-scheduler-7f49549459-w9s67:/opt/airflow# airflow
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
ModuleNotFoundError: No module named 'airflow'
```
**What you expected to happen**:
should run as root
using airflow's helm only
in pod's describe I see:
securityContext:
fsGroup: 0
runAsUser: 0 | https://github.com/apache/airflow/issues/17679 | https://github.com/apache/airflow/pull/17688 | 4e59741ff9be87d6aced1164812ab03deab259c8 | 986381159ee3abf3442ff496cb553d2db004e6c4 | "2021-08-18T08:17:33Z" | python | "2021-08-18T18:40:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,665 | ["CONTRIBUTING.rst"] | Documentation "how to sync your fork" is wrong, should be updated. | **Description**
From documentation , "CONTRIBUTING.rst" :
_There is also an easy way using ``Force sync main from apache/airflow`` workflow. You can go to "Actions" in your repository and choose the workflow and manually trigger the workflow using "Run workflow" command._
This doesn't work in the current GitHub UI, there is no such option.
**Use case / motivation**
Should be changed to reflect current github options, selecting ""Fetch upstream" -> "Fetch and merge X upstream commits from apache:main"
**Are you willing to submit a PR?**
I will need help with wording
**Related Issues**
No.
| https://github.com/apache/airflow/issues/17665 | https://github.com/apache/airflow/pull/17675 | 7c96800426946e563d12fdfeb80b20f4c0002aaf | 1cd3d8f94f23396e2f1367822fecc466db3dd170 | "2021-08-17T20:37:10Z" | python | "2021-08-18T09:34:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,652 | ["airflow/www/security.py", "tests/www/test_security.py"] | Subdags permissions not getting listed on Airflow UI | <!--
Welcome to Apache Airflow!
Please complete the next sections or the issue will be closed.
-->
**Apache Airflow version**: 2.0.2 and 2.1.1
<!-- AIRFLOW VERSION IS MANDATORY -->
**OS**: Debian
<!-- MANDATORY! You can get it via `cat /etc/oss-release` for example -->
**Apache Airflow Provider versions**:
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-imap==2.0.0
apache-airflow-providers-sqlite==2.0.0
<!-- You can use `pip freeze | grep apache-airflow-providers` (you can leave only relevant ones)-->
**Deployment**: docker-compose
<!-- e.g. Virtualenv / VM / Docker-compose / K8S / Helm Chart / Managed Airflow Service -->
<!-- Please include your deployment tools and versions: docker-compose, k8s, helm, etc -->
**What happened**:
<!-- Please include exact error messages if you can -->
**What you expected to happen**:
Permissions for subdags should get listed on list roles on Airflow UI but only the parent dag get listed but not the children dags.
<!-- What do you think went wrong? -->
**How to reproduce it**:
Run airflow 2.0.2 or 2.1.1 with a subdag in the dags directory. And then try to find subdag listing on list roles on Airflow UI.
![subdag_not_listed](https://user-images.githubusercontent.com/9834450/129717365-9620895e-9876-4ac2-8adb-fc448183a78b.png)
<!--
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images/screen-casts etc. by drag-dropping the image here.
-->
**Anything else we need to know**:
It appears this problem is there most of the time.
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here inside fenced
``` ``` blocks or inside a foldable details tag if it's long:
<details><summary>x.log</summary> lots of stuff </details>
-->
<!---
This is absolutely not required, but we are happy to guide you in contribution process
especially if you already have a good understanding of how to implement the fix.
Airflow is a community-managed project and we love to bring new contributors in.
Find us in #airflow-how-to-pr on Slack!
-->
| https://github.com/apache/airflow/issues/17652 | https://github.com/apache/airflow/pull/18160 | 3d6c86c72ef474952e352e701fa8c77f51f9548d | 3e3c48a136902ac67efce938bd10930e653a8075 | "2021-08-17T11:23:38Z" | python | "2021-09-11T17:48:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,639 | ["airflow/api_connexion/openapi/v1.yaml"] | Airflow API (Stable) (1.0.0) Update a DAG endpoint documentation shows you can update is_active, but the api does not accept it | **Apache Airflow version**:
v2.1.1+astro.2
**OS**:
Ubuntu v18.04
**Apache Airflow Provider versions**:
<!-- You can use `pip freeze | grep apache-airflow-providers` (you can leave only relevant ones)-->
**Deployment**:
VM
Ansible
**What happened**:
PATCH call to api/v1/dags/{dagID} gives the following response when is_active is included in the update mask and/or body:
{
"detail": "Only `is_paused` field can be updated through the REST API",
"status": 400,
"title": "Bad Request",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
The API spec clearly indicates is_active is a modifiable field: https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/patch_dag
**What you expected to happen**:
Expected the is_active field to be updated in the DAG. Either fix the endpoint or fix the documentation.
**How to reproduce it**:
Send PATCH call to api/v1/dags/{dagID} with "is_active": true/false in the body
**Anything else we need to know**:
Occurs every time.
**Are you willing to submit a PR?**
No
| https://github.com/apache/airflow/issues/17639 | https://github.com/apache/airflow/pull/17667 | cbd9ad2ffaa00ba5d99926b05a8905ed9ce4e698 | 83a2858dcbc8ecaa7429df836b48b72e3bbc002a | "2021-08-16T18:01:18Z" | python | "2021-08-18T14:56:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,632 | ["airflow/hooks/dbapi.py", "airflow/providers/sqlite/example_dags/example_sqlite.py", "airflow/providers/sqlite/hooks/sqlite.py", "docs/apache-airflow-providers-sqlite/operators.rst", "tests/providers/sqlite/hooks/test_sqlite.py"] | SqliteHook invalid insert syntax | <!--
Welcome to Apache Airflow!
Please complete the next sections or the issue will be closed.
-->
**Apache Airflow version**: 2.1.0
<!-- AIRFLOW VERSION IS MANDATORY -->
**OS**: Raspbian GNU/Linux 10 (buster)
<!-- MANDATORY! You can get it via `cat /etc/oss-release` for example -->
**Apache Airflow Provider versions**: apache-airflow-providers-sqlite==1.0.2
<!-- You can use `pip freeze | grep apache-airflow-providers` (you can leave only relevant ones)-->
**Deployment**: Virtualenv
<!-- e.g. Virtualenv / VM / Docker-compose / K8S / Helm Chart / Managed Airflow Service -->
<!-- Please include your deployment tools and versions: docker-compose, k8s, helm, etc -->
**What happened**:
When using the insert_rows function of the sqlite hook the generated parametrized sql query has an invalid syntax. It will generate queries with the %s placeholder which result in the following error:
*Query*: INSERT INTO example_table (col1, col2) VALUES (%s,%s)
*Error*:
sqlite3.OperationalError: near "%": syntax error
*Stacktrace*:
File "/home/airflow/.pyenv/versions/3.8.10/envs/airflow_3.8.10/lib/python3.8/site-packages/airflow/hooks/dbapi.py", line 307, in insert_rows
cur.execute(sql, values)
I replaced the placeholder with "?" in the _generate_insert_sql function DbApiHook as a test. This solved the issue.
<!-- Please include exact error messages if you can -->
**What you expected to happen**:
Every inherited method of the DbApiHook should work for the SqliteHook. Using the insert_rows method should generate the correct parametrized query and insert rows as expected.
<!-- What do you think went wrong? -->
**How to reproduce it**:
1. Create an instance of the SqliteHook
2. Use the insert_rows method of the SqliteHook
<!--
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images/screen-casts etc. by drag-dropping the image here.
-->
| https://github.com/apache/airflow/issues/17632 | https://github.com/apache/airflow/pull/17695 | 986381159ee3abf3442ff496cb553d2db004e6c4 | 9b2e593fd4c79366681162a1da43595584bd1abd | "2021-08-16T10:53:25Z" | python | "2021-08-18T20:29:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,617 | ["tests/providers/alibaba/cloud/operators/test_oss.py", "tests/providers/alibaba/cloud/sensors/test_oss_key.py"] | Switch Alibaba tests to use Mocks | We should also take a look if there is a way to mock the calls rather than having to communicate with the reaal "Alibaba Cloud". This way our unit tests would be even more stable. For AWS we are using `moto` library. Seems that Alibaba has built-in way to use Mock server as backend (?) https://www.alibabacloud.com/help/doc-detail/48978.htm
CC: @Gabriel39 -> would you like to pick on that task? We had a number of transient failures of the unit tests over the last week - cause the unit tests were actually reaching out to cn-hengzou Alibaba region's servers, which has a lot of stability issues. I am switching to us-east-1 in https://github.com/apache/airflow/pull/17616 but ideally we should not reach out to "real" services in unit tests (we are working on standardizing system tests that will do that and reach-out to real services but it should not be happening for unit tests).
| https://github.com/apache/airflow/issues/17617 | https://github.com/apache/airflow/pull/22178 | 9e061879f92f67304f1afafdebebfd7ee2ae8e13 | 03d0c702cf5bb72dcb129b86c219cbe59fd7548b | "2021-08-14T16:34:02Z" | python | "2022-03-11T12:47:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,568 | ["airflow/utils/helpers.py", "airflow/utils/task_group.py", "tests/utils/test_helpers.py"] | Airflow all scheduler crashed and cannot restore because of incorrect Group name | **Apache Airflow docker image version**: apache/airflow:2.1.1-python3.8
**Environment**: Kubernetes
**What happened**:
I set up a DAG with the group `Create Grafana alert`, and inside the group I run a for loop to create tasks.
Then all schedulers crashed because of this error; K8s tried to restart them, but the scheduler still hit the error even after I fixed it in the DAG.
My code:
```
with TaskGroup(group_id='Grafana grafana alert') as tg4:
config_alerts = []
for row in CONFIG:
task_id = 'alert_' + row['file_name_gcs'].upper()
alert_caller = SimpleHttpOperator(
task_id=task_id,
http_conn_id='escalation_webhook',
endpoint='api/json/send',
data=json.dumps({
"payload": {
"text": "Hello escalation",
}
}),
headers={
"Content-Type": "application/json",
"Authorization": "foo"
},
response_check=health_check
)
config_alerts.append(alert_caller)
config_alerts
```
Finally, I deleted the DAG in the UI; after that the scheduler created a new one and could start.
I think this can be considered a user error, but it should not crash all schedulers, and there was no way to get back to normal until the DAG was deleted.
```
Traceback (most recent call last):
8/12/2021 2:25:29 PM File "/home/airflow/.local/bin/airflow", line 8, in <module>
8/12/2021 2:25:29 PM sys.exit(main())
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
8/12/2021 2:25:29 PM args.func(args)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
8/12/2021 2:25:29 PM return func(*args, **kwargs)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 91, in wrapper
8/12/2021 2:25:29 PM return f(*args, **kwargs)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/scheduler_command.py", line 64, in scheduler
8/12/2021 2:25:29 PM job.run()
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 237, in run
8/12/2021 2:25:29 PM self._execute()
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1303, in _execute
8/12/2021 2:25:29 PM self._run_scheduler_loop()
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1396, in _run_scheduler_loop
8/12/2021 2:25:29 PM num_queued_tis = self._do_scheduling(session)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1535, in _do_scheduling
8/12/2021 2:25:29 PM self._schedule_dag_run(dag_run, active_runs_by_dag_id.get(dag_run.dag_id, set()), session)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1706, in _schedule_dag_run
8/12/2021 2:25:29 PM dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 67, in wrapper
8/12/2021 2:25:29 PM return func(*args, **kwargs)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dagbag.py", line 186, in get_dag
8/12/2021 2:25:29 PM self._add_dag_from_db(dag_id=dag_id, session=session)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dagbag.py", line 261, in _add_dag_from_db
8/12/2021 2:25:29 PM dag = row.dag
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/serialized_dag.py", line 175, in dag
8/12/2021 2:25:29 PM dag = SerializedDAG.from_dict(self.data) # type: Any
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 792, in from_dict
8/12/2021 2:25:29 PM return cls.deserialize_dag(serialized_obj['dag'])
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 716, in deserialize_dag
8/12/2021 2:25:29 PM v = {task["task_id"]: SerializedBaseOperator.deserialize_operator(task) for task in v}
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 716, in <dictcomp>
8/12/2021 2:25:29 PM v = {task["task_id"]: SerializedBaseOperator.deserialize_operator(task) for task in v}
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 446, in deserialize_operator
8/12/2021 2:25:29 PM op = SerializedBaseOperator(task_id=encoded_op['task_id'])
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 185, in apply_defaults
8/12/2021 2:25:29 PM result = func(self, *args, **kwargs)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 381, in __init__
8/12/2021 2:25:29 PM super().__init__(*args, **kwargs)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 185, in apply_defaults
8/12/2021 2:25:29 PM result = func(self, *args, **kwargs)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 527, in __init__
8/12/2021 2:25:29 PM validate_key(task_id)
8/12/2021 2:25:29 PM File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/helpers.py", line 44, in validate_key
8/12/2021 2:25:29 PM raise AirflowException(
8/12/2021 2:25:29 PM airflow.exceptions.AirflowException: The key (Create grafana alert.alert_ESCALATION_AGING_FORWARD) has to be made of alphanumeric characters, dashes, dots and underscores exclusively
``` | https://github.com/apache/airflow/issues/17568 | https://github.com/apache/airflow/pull/17578 | 7db43f770196ad9cdebded97665d6efcdb985a18 | 833e1094a72b5a09f6b2249001b977538f139a19 | "2021-08-12T07:52:49Z" | python | "2021-08-13T15:33:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,533 | ["airflow/www/static/js/tree.js", "airflow/www/templates/airflow/tree.html"] | Tree-view auto refresh ignores requested root task | Airflow 2.1.2
In Tree View, when you click on a task in the middle of a DAG and then click "Filter Upstream" in the popup, the webpage reloads the tree view with a `root` argument in the URL.
However, when clicking the small Refresh button on the right, or enabling auto-refresh, the whole DAG is loaded back; the `root` argument is ignored.
| https://github.com/apache/airflow/issues/17533 | https://github.com/apache/airflow/pull/17633 | 84de7c53c2ebabef98e0916b13b239a8e4842091 | c645d7ac2d367fd5324660c616618e76e6b84729 | "2021-08-10T11:45:23Z" | python | "2021-08-16T16:00:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,527 | ["airflow/api/client/json_client.py"] | Broken json_client | **Apache Airflow version**: 2.1.2
**Apache Airflow Provider versions** (please include all providers that are relevant to your bug):
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-imap==2.0.0
apache-airflow-providers-sqlite==2.0.0
**Environment**:
- **OS** (e.g. from /etc/os-release): Ubuntu
- **Python**: python 3.8
- **httpx**: httpx==0.6.8, httpx==0.18.2
**What happened**:
```
Traceback (most recent call last):
File "./bug.py", line 6, in <module>
response = airflow_client.trigger_dag(
File "/home/user/my_project/venv38/lib/python3.8/site-packages/airflow/api/client/json_client.py", line 54, in trigger_dag
data = self._request(
File "/home/user/my_project/venv38/lib/python3.8/site-packages/airflow/api/client/json_client.py", line 38, in _request
if not resp.ok:
AttributeError: 'Response' object has no attribute 'ok'
```
The JSON client depends on httpx, which doesn't provide the `ok` attribute, and according to its compatibility notes (https://www.python-httpx.org/compatibility/) it is not going to. As a consequence, handling the response crashes.
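One possible compatibility shim (an assumption on my part, not necessarily the patch that was merged) is to derive the check from the status code, which httpx does expose:

```python
import httpx


def response_ok(resp: httpx.Response) -> bool:
    # httpx has no `ok` property like requests; treat anything below 400 as ok.
    return resp.status_code < 400
```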
**What you expected to happen**:
The expected outcome is to handle the response smoothly.
**How to reproduce it**:
```
#!/usr/bin/env python3
from airflow.api.client import json_client
airflow_client = json_client.Client("http://rnd.airflow.mydomain.com", None)
response = airflow_client.trigger_dag("my_dag", execution_date="2021-04-27T00:00:00")
```
**Anything else we need to know**:
The problem always occurs. | https://github.com/apache/airflow/issues/17527 | https://github.com/apache/airflow/pull/17529 | 663f1735c75aaa2690483f0880a8555015ed1eeb | fbddefe5cbc0fe163c42ab070bc0d7e44c5cef5b | "2021-08-10T07:51:47Z" | python | "2021-08-10T11:26:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,516 | ["airflow/dag_processing/processor.py", "airflow/models/dag.py", "tests/dag_processing/test_processor.py", "tests/www/views/test_views_home.py"] | Removed dynamically generated DAG doesn't always remove from Airflow metadb | **Apache Airflow version**: 2.1.0
**What happened**:
We use a Python script that reads JSON configs from a database to dynamically generate DAGs. Sometimes, when the JSON configs are updated, we expect DAGs to be removed in Airflow. This doesn't always happen. From my observation, one of the following happens:
1) DAG is removed because the python script no longer generates it.
2) The DAG still exists but its run history is empty. When triggering the DAG, the first task is stuck in `queued` status indefinitely.
**What you expected to happen**:
I expect the right behavior to be:
>1) DAG is removed because the python script no longer generates it.
**How to reproduce it**:
* Create a python script in DAG folder that dynamically generates multiple DAGs
* Execute these dynamically generated DAGs a few times
* Use Airflow Variable to toggle (reduce) the # of DAGs generated
* Examine the number of dynamically generated DAGs in the web UI
| https://github.com/apache/airflow/issues/17516 | https://github.com/apache/airflow/pull/17121 | dc94ee26653ee4d3446210520036cc1f0eecfd81 | e81f14b85e2609ce0f40081caa90c2a6af1d2c65 | "2021-08-09T19:37:58Z" | python | "2021-09-18T19:52:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,513 | ["airflow/dag_processing/manager.py", "tests/dag_processing/test_manager.py"] | dag_processing.last_duration metric has random holes |
**Apache Airflow version**: 2.1.2
**Apache Airflow Provider versions** (please include all providers that are relevant to your bug): 2.0
apache-airflow-providers-apache-hive==2.0.0
apache-airflow-providers-celery==2.0.0
apache-airflow-providers-cncf-kubernetes==1.2.0
apache-airflow-providers-docker==2.0.0
apache-airflow-providers-elasticsearch==1.0.4
apache-airflow-providers-ftp==2.0.0
apache-airflow-providers-imap==2.0.0
apache-airflow-providers-jdbc==2.0.0
apache-airflow-providers-microsoft-mssql==2.0.0
apache-airflow-providers-mysql==2.0.0
apache-airflow-providers-oracle==2.0.0
apache-airflow-providers-postgres==2.0.0
apache-airflow-providers-sqlite==2.0.0
apache-airflow-providers-ssh==2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.17
**Environment**:
- **Install tools**: https://github.com/prometheus/statsd_exporter
**What happened**:
We are using a [statsd_exporter](https://github.com/prometheus/statsd_exporter) to store statsd metrics in Prometheus, and we came across strange behavior: the metric `dag_processing.last_duration.<dag_file>` for different DAGs is drawn with holes at random intervals.
![image](https://user-images.githubusercontent.com/3032319/128734876-800ab58e-50d5-4c81-af4a-79c8660c2f0a.png)
![image](https://user-images.githubusercontent.com/3032319/128734946-27381da5-40cc-4c83-9bcc-60d8264d26a3.png)
![image](https://user-images.githubusercontent.com/3032319/128735019-a523e58c-5751-466d-abb7-1888fa79b9e1.png)
**What you expected to happen**:
Metrics should be sent at the frequency specified by the `AIRFLOW__SCHEDULER__PRINT_STATS_INTERVAL` setting (default 30 seconds), and this happens in the [_log_file_processing_stats method](https://github.com/apache/airflow/blob/2fea4cdceaa12b3ac13f24eeb383af624aacb2e7/airflow/dag_processing/manager.py#L696). The problem is that the start time is taken from the [get_start_time](https://github.com/apache/airflow/blob/2fea4cdceaa12b3ac13f24eeb383af624aacb2e7/airflow/dag_processing/manager.py#L827) function, which only looks at currently active processors; any DAG file that finishes processing within the 30-second window has already been removed from `self._processors[file_path]`, so no metric is sent to statsd for it. The log output, by contrast, uses [lasttime](https://github.com/apache/airflow/blob/2fea4cdceaa12b3ac13f24eeb383af624aacb2e7/airflow/dag_processing/manager.py#L783), which stores information about the last processing run and from which it would be nice to send the metrics as well.
| https://github.com/apache/airflow/issues/17513 | https://github.com/apache/airflow/pull/17769 | ffb81eae610f738fd45c88cdb27d601c0edf24fa | 1a929f73398bdd201c8c92a08b398b41f9c2f591 | "2021-08-09T16:09:44Z" | python | "2021-08-23T16:17:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,512 | ["airflow/providers/elasticsearch/log/es_task_handler.py", "tests/providers/elasticsearch/log/test_es_task_handler.py"] | Invalid log order with ElasticsearchTaskHandler |
**Apache Airflow version**: 2.*
**Apache Airflow Provider versions** (please include all providers that are relevant to your bug): providers-elasticsearch==1.*/2.*
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.17
**What happened**:
When using the ElasticsearchTaskHandler I see a broken ordering of the logs in the interface.
![IMAGE 2021-08-09 18:05:13](https://user-images.githubusercontent.com/3032319/128728656-86736b50-4bd4-4178-9066-b70197b60421.jpg)
raw log entries:
{"asctime": "2021-07-16 09:37:22,278", "filename": "pod_launcher_deprecated.py", "lineno": 131, "levelname": "WARNING", "message": "Pod not yet started: a3ca15e631a7f67697e10b23bae82644.6314229ca0c84d7ba29c870edc39a268", "exc_text": null, "dag_id": "dag_name_AS10_A_A_A_A", "task_id": "k8spod_make_upload_dir_1_2", "execution_date": "2021_07_16T09_10_00_000000", "try_number": "1", "log_id": "dag_name_AS10_A_A_A_A-k8spod_make_upload_dir_1_2-2021_07_16T09_10_00_000000-1", "offset": 1626427744958776832}
{"asctime": "2021-07-16 09:37:26,485", "filename": "taskinstance.py", "lineno": 1191, "levelname": "INFO", "message": "Marking task as SUCCESS. dag_id=pod_gprs_raw_from_nfs_AS10_A_A_A_A, task_id=k8spod_make_upload_dir_1_2, execution_date=20210716T091000, start_date=20210716T092905, end_date=20210716T093726", "exc_text": null, "dag_id": "dag_name_AS10_A_A_A_A", "task_id": "k8spod_make_upload_dir_1_2", "execution_date": "2021_07_16T09_10_00_000000", "try_number": "1", "log_id": "dag_name_AS10_A_A_A_A-k8spod_make_upload_dir_1_2-2021_07_16T09_10_00_000000-1", "offset": 1626427744958776832}
**What you expected to happen**:
The problem lies in the [set_context](https://github.com/apache/airflow/blob/662cb8c6ac8becb26ff405f8b21acfccdd8de2ae/airflow/providers/elasticsearch/log/es_task_handler.py#L271) method: the offset is set once on the handler instance, so all subsequent log entries carry the same offset, which is what the UI uses to order the records for display. When we overrode the emit method and put a fresh offset on each record, the problem disappeared.
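A rough sketch of that workaround (an assumption about the reporter's local patch, not necessarily the fix that was merged): stamp a fresh nanosecond offset on every record in `emit()` instead of relying on the value assigned once in `set_context()`.

```python
import time

from airflow.providers.elasticsearch.log.es_task_handler import ElasticsearchTaskHandler


class PerRecordOffsetTaskHandler(ElasticsearchTaskHandler):
    def emit(self, record):
        # Overwrite the offset per record so later entries sort after earlier
        # ones, instead of all entries sharing the offset set in set_context().
        record.offset = int(time.time() * 1_000_000_000)
        super().emit(record)
```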
**How to reproduce it**:
Run a long-lived task that generates logs; in our case it is a Spark task launched from a Docker container.
| https://github.com/apache/airflow/issues/17512 | https://github.com/apache/airflow/pull/17551 | a1834ce873d28c373bc11d0637bda57c17c79189 | 944dc32c2b4a758564259133a08f2ea8d28dcb6c | "2021-08-09T15:20:17Z" | python | "2021-08-11T22:31:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,476 | ["airflow/cli/commands/task_command.py", "airflow/utils/log/secrets_masker.py", "tests/cli/commands/test_task_command.py", "tests/utils/log/test_secrets_masker.py"] | Sensitive variables don't get masked when rendered with airflow tasks test | **Apache Airflow version**: 2.1.2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): No
**Environment**:
- **Cloud provider or hardware configuration**: No
- **OS** (e.g. from /etc/os-release): MacOS Big Sur 11.4
- **Kernel** (e.g. `uname -a`): -
- **Install tools**: -
- **Others**: -
**What happened**:
With the following code:
```
from airflow import DAG
from airflow.models import Variable
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta
def _extract():
partner = Variable.get("my_dag_partner_secret")
print(partner)
with DAG("my_dag", start_date=datetime(2021, 1 , 1), schedule_interval="@daily") as dag:
extract = PythonOperator(
task_id="extract",
python_callable=_extract
)
```
By executing the command `airflow tasks test my_dag extract 2021-01-01`, the value of the variable `my_dag_partner_secret` gets rendered in the logs, whereas it shouldn't:
```
[2021-08-06 19:05:30,088] {taskinstance.py:1303} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=my_dag
AIRFLOW_CTX_TASK_ID=extract
AIRFLOW_CTX_EXECUTION_DATE=2021-01-01T00:00:00+00:00
partner_a
[2021-08-06 19:05:30,091] {python.py:151} INFO - Done. Returned value was: None
[2021-08-06 19:05:30,096] {taskinstance.py:1212} INFO - Marking task as SUCCESS. dag_id=my_dag, task_id=extract, execution_date=20210101T000000, start_date=20210806T131013, end_date=20210806T190530
```
**What you expected to happen**:
The value should be masked like on the UI or in the logs
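For reference, the masking expected here can be exercised directly through the public helper below; whether `airflow tasks test` actually wires the masker into the handlers it uses is exactly what this report is about.

```python
from airflow.utils.log.secrets_masker import mask_secret

# Registers the value with the masker; log handlers that carry the
# SecretsMasker filter will render it as *** from this point on.
mask_secret("partner_a")
```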
**How to reproduce it**:
DAG given above
**Anything else we need to know**:
Nope.
| https://github.com/apache/airflow/issues/17476 | https://github.com/apache/airflow/pull/24362 | 770ee0721263e108c7c74218fd583fad415e75c1 | 3007159c2468f8e74476cc17573e03655ab168fa | "2021-08-06T19:06:55Z" | python | "2022-06-12T20:51:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,469 | ["airflow/models/taskmixin.py", "airflow/utils/edgemodifier.py", "airflow/utils/task_group.py", "tests/utils/test_edgemodifier.py"] | Label in front of TaskGroup breaks task dependencies | **Apache Airflow version**: 2.1.0
**What happened**:
Using a Label in front of a TaskGroup breaks task dependencies: tasks before the TaskGroup will now point to the last task in the TaskGroup.
**What you expected to happen**:
Task dependencies with or without Labels should be the same (and preferably the Label should be visible outside of the TaskGroup).
Behavior without a Label is as follows:
```python
with TaskGroup("tg1", tooltip="Tasks related to task group") as tg1:
DummyOperator(task_id="b") >> DummyOperator(task_id="c")
DummyOperator(task_id="a") >> tg1
```
![without Taskgroup](https://user-images.githubusercontent.com/35238779/128502690-58b31577-91d7-4b5c-b348-4779c9286958.png)
**How to reproduce it**:
```python
with TaskGroup("tg1", tooltip="Tasks related to task group") as tg1:
DummyOperator(task_id="b") >> DummyOperator(task_id="c")
DummyOperator(task_id="a") >> Label("testlabel") >> tg1
```
![with TaskGroup](https://user-images.githubusercontent.com/35238779/128502758-5252b7e7-7c30-424a-ac3a-3f9a99b0a805.png)
| https://github.com/apache/airflow/issues/17469 | https://github.com/apache/airflow/pull/29410 | d0657f5539722257657f84837936c51ac1185fab | 4b05468129361946688909943fe332f383302069 | "2021-08-06T11:22:05Z" | python | "2023-02-18T17:57:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,449 | ["airflow/www/security.py", "tests/www/test_security.py"] | Missing menu items in navigation panel for "Op" role | **Apache Airflow version**: 2.1.1
**What happened**:
For a user with the "Op" role, the following menu items are not visible in the navigation panel, although the pages are accessible (the role has access to them):
- "Browse" -> "DAG Dependencies"
- "Admin" -> "Configurations"
**What you expected to happen**: available menu items in navigation panel | https://github.com/apache/airflow/issues/17449 | https://github.com/apache/airflow/pull/17450 | 9a0c10ba3fac3bb88f4f103114d4590b3fb191cb | 4d522071942706f4f7c45eadbf48caded454cb42 | "2021-08-05T15:36:55Z" | python | "2021-09-01T19:58:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,442 | ["airflow/www/views.py", "tests/www/views/test_views_blocked.py"] | Deleted DAG raises SerializedDagNotFound exception when accessing webserver | **Apache Airflow version**: 2.1.2
**Environment**:
- **Docker image**: official apache/airflow:2.1.2 extended with pip installed packages
- **Executor**: CeleryExecutor
- **Database**: PostgreSQL (engine version: 12.5)
**What happened**:
I've encountered `SerializedDagNotFound: DAG 'my_dag_id' not found in serialized_dag table` after deleting a DAG via the web UI. The exception is raised every time the UI is accessed. It does not visibly impact the UI, but an event is sent to Sentry each time I interact with it.
**What you expected to happen**:
After DAG deletion I expected all records of it apart from logs to be deleted, but its DAG run was still showing up in the webserver UI (even though I couldn't find any records of the DAG in the metadb, apart from the dag_tag table still containing records related to the deleted DAG, which may be a reason for a new issue; manual deletion of those records had no impact).
After I deleted the DAG run via the web UI, the exception was no longer raised.
**How to reproduce it**:
To this moment, I've only encountered this with one DAG, which was used for debugging purposes. It consists of multiple PythonOperators and uses the same boilerplate code I use for dynamic DAG generation (return a DAG object from a `create_dag` function, add it to globals), but in it I've replaced the dynamic generation logic with just `dag = create_dag(*my_args, **my_kwargs)`. I think this may have been caused by the DAG run still running when the DAG was deleted, but I cannot support this theory with actual information.
<details>
<summary>Traceback</summary>
```python
SerializedDagNotFound: DAG 'my_dag_id' not found in serialized_dag table
File "flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "flask/_compat.py", line 39, in reraise
raise value
File "flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "airflow/www/views.py", line 1679, in blocked
dag = current_app.dag_bag.get_dag(dag_id)
File "airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "airflow/models/dagbag.py", line 186, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "airflow/models/dagbag.py", line 258, in _add_dag_from_db
raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
```
</details>
| https://github.com/apache/airflow/issues/17442 | https://github.com/apache/airflow/pull/17544 | e69c6a7a13f66c4c343efa848b9b8ac2ad93b1f5 | 60ddcd10bbe5c2375b14307456b8e5f76c1d0dcd | "2021-08-05T12:17:26Z" | python | "2021-08-12T23:44:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,438 | ["airflow/operators/trigger_dagrun.py", "tests/operators/test_trigger_dagrun.py"] | TriggerDagRunOperator to configure the run_id of the triggered dag |
**Description**
Airflow 1.10 supports providing a run_id to TriggerDagRunOperator via the [DagRunOrder](https://github.com/apache/airflow/blob/v1-10-stable/airflow/operators/dagrun_operator.py#L30-L33) object returned from the [TriggerDagRunOperator#python_callable](https://github.com/apache/airflow/blob/v1-10-stable/airflow/operators/dagrun_operator.py#L88-L95). With https://github.com/apache/airflow/pull/6317 (Airflow 2.0), this behavior changed and a run_id can no longer be provided for the triggered DAG, which is very odd to say the least.
**Use case / motivation**
I want to be able to provide a run_id to the TriggerDagRunOperator. In my case, I use the TriggerDagRunOperator to trigger 30K DAG runs daily, and it would be very annoying to see them with unidentifiable run_ids.
After discussing with @ashb
**Suggested fixes can be**
* provide a templated run_id as a parameter to TriggerDagRunOperator (see the sketch after this list).
* restore the old behavior of Airflow 1.10, where DagRunOrder holds the run_id and dag_run_conf of the triggered dag
* create a new sub-class of TriggerDagRunOperator to fully customize it
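A sketch of what the first option could look like from a DAG author's point of view (the `trigger_run_id` parameter name is hypothetical here; it does not exist at the time of writing):

```python
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

trigger = TriggerDagRunOperator(
    task_id="trigger_child",
    trigger_dag_id="child_dag",
    # Hypothetical templated parameter carrying an identifiable run_id.
    trigger_run_id="triggered__{{ ds }}__batch_001",
    conf={"reason": "daily partner sync"},
)
```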
**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/17438 | https://github.com/apache/airflow/pull/18788 | 5bc64fb923d7afcab420a1b4a6d9f6cc13362f7a | cdaa9aac80085b157c606767f2b9958cd6b2e5f0 | "2021-08-05T09:52:22Z" | python | "2021-10-07T15:54:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,437 | ["docs/apache-airflow/faq.rst"] | It's too slow to recognize new dag file when there are a log of dags files |
**Description**
There are 5000+ DAG files in our production environment. It takes almost 10 minutes to recognize a new DAG file when one is added.
There are 16 CPU cores in the scheduler machine. The airflow version is 2.1.0.
**Use case / motivation**
I think there should be a feature to support recognizing new dag files or recently modified dag files faster.
**Are you willing to submit a PR?**
Maybe I will.
**Related Issues**
| https://github.com/apache/airflow/issues/17437 | https://github.com/apache/airflow/pull/17519 | 82229b363d53db344f40d79c173421b4c986150c | 7dfc52068c75b01a309bf07be3696ad1f7f9b9e2 | "2021-08-05T09:45:52Z" | python | "2021-08-10T10:05:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,422 | ["airflow/hooks/dbapi.py", "airflow/providers/postgres/hooks/postgres.py", "tests/hooks/test_dbapi.py"] | AttributeError: 'PostgresHook' object has no attribute 'schema' |
**Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):1.21
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):Linux workspace
- **Install tools**:
- **Others**:
**What happened**:
Running PostgresOperator errors out with 'PostgresHook' object has no attribute 'schema'.
I also tested with the code from the PostgresOperator tutorial at https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/operators/postgres_operator_howto_guide.html
This has been happening since upgrading apache-airflow-providers-postgres to version 2.1.0.
```
*** Reading local file: /tmp/logs/postgres_operator_dag/create_pet_table/2021-08-04T15:32:40.520243+00:00/2.log
[2021-08-04 15:57:12,429] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: postgres_operator_dag.create_pet_table 2021-08-04T15:32:40.520243+00:00 [queued]>
[2021-08-04 15:57:12,440] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: postgres_operator_dag.create_pet_table 2021-08-04T15:32:40.520243+00:00 [queued]>
[2021-08-04 15:57:12,440] {taskinstance.py:1067} INFO -
--------------------------------------------------------------------------------
[2021-08-04 15:57:12,440] {taskinstance.py:1068} INFO - Starting attempt 2 of 2
[2021-08-04 15:57:12,440] {taskinstance.py:1069} INFO -
--------------------------------------------------------------------------------
[2021-08-04 15:57:12,457] {taskinstance.py:1087} INFO - Executing <Task(PostgresOperator): create_pet_table> on 2021-08-04T15:32:40.520243+00:00
[2021-08-04 15:57:12,461] {standard_task_runner.py:52} INFO - Started process 4692 to run task
[2021-08-04 15:57:12,466] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', '***_operator_dag', 'create_pet_table', '2021-08-04T15:32:40.520243+00:00', '--job-id', '6', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/test_dag.py', '--cfg-path', '/tmp/tmp2mez286k', '--error-file', '/tmp/tmpgvc_s17j']
[2021-08-04 15:57:12,468] {standard_task_runner.py:77} INFO - Job 6: Subtask create_pet_table
[2021-08-04 15:57:12,520] {logging_mixin.py:104} INFO - Running <TaskInstance: ***_operator_dag.create_pet_table 2021-08-04T15:32:40.520243+00:00 [running]> on host 5995a11eafd1
[2021-08-04 15:57:12,591] {taskinstance.py:1280} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=***_operator_dag
AIRFLOW_CTX_TASK_ID=create_pet_table
AIRFLOW_CTX_EXECUTION_DATE=2021-08-04T15:32:40.520243+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-08-04T15:32:40.520243+00:00
[2021-08-04 15:57:12,591] {postgres.py:68} INFO - Executing:
CREATE TABLE IF NOT EXISTS pet (
pet_id SERIAL PRIMARY KEY,
name VARCHAR NOT NULL,
pet_type VARCHAR NOT NULL,
birth_date DATE NOT NULL,
OWNER VARCHAR NOT NULL);
[2021-08-04 15:57:12,608] {base.py:69} INFO - Using connection to: id: ***_default.
[2021-08-04 15:57:12,610] {taskinstance.py:1481} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1137, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.8/site-packages/airflow/providers/postgres/operators/postgres.py", line 70, in execute
self.hook.run(self.sql, self.autocommit, parameters=self.parameters)
File "/usr/local/lib/python3.8/site-packages/airflow/hooks/dbapi.py", line 177, in run
with closing(self.get_conn()) as conn:
File "/usr/local/lib/python3.8/site-packages/airflow/providers/postgres/hooks/postgres.py", line 97, in get_conn
dbname=self.schema or conn.schema,
AttributeError: 'PostgresHook' object has no attribute 'schema'
[2021-08-04 15:57:12,612] {taskinstance.py:1524} INFO - Marking task as FAILED. dag_id=***_operator_dag, task_id=create_pet_table, execution_date=20210804T153240, start_date=20210804T155712, end_date=20210804T155712
[2021-08-04 15:57:12,677] {local_task_job.py:151} INFO - Task exited with return code 1
```
**What you expected to happen**:
The task should connect to Postgres and run the SQL successfully, as it did before the provider upgrade.
**How to reproduce it**:
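A minimal reproduction sketch, assembled from the tutorial linked above and the SQL visible in the log (the `postgres_default` connection id and the DAG arguments are illustrative assumptions):

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(
    dag_id="postgres_operator_dag",
    start_date=datetime(2021, 8, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # Any PostgresOperator task hits the error, since it fails inside get_conn()
    create_pet_table = PostgresOperator(
        task_id="create_pet_table",
        postgres_conn_id="postgres_default",
        sql="""
            CREATE TABLE IF NOT EXISTS pet (
                pet_id SERIAL PRIMARY KEY,
                name VARCHAR NOT NULL,
                pet_type VARCHAR NOT NULL,
                birth_date DATE NOT NULL,
                OWNER VARCHAR NOT NULL);
        """,
    )
```

Trigger the DAG manually; the task fails during `PostgresHook.get_conn()` as shown in the log above.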
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/17422 | https://github.com/apache/airflow/pull/17423 | d884a3d4aa65f65aca2a62f42012e844080a31a3 | 04b6559f8a06363a24e70f6638df59afe43ea163 | "2021-08-04T16:06:47Z" | python | "2021-08-07T17:51:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,381 | ["airflow/executors/celery_executor.py", "tests/executors/test_celery_executor.py"] | Bug in the logic of Celery executor for checking stalled adopted tasks? | **Apache Airflow version**: 2.1.1
**What happened**:
The `_check_for_stalled_adopted_tasks` method breaks out of its loop at the first item that satisfies the condition:
https://github.com/apache/airflow/blob/2.1.1/airflow/executors/celery_executor.py#L353
From the comment/logic, it looks like the idea is to optimize this piece of code; however, nothing guarantees that the `self.adopted_task_timeouts` object keeps its entries sorted by timestamp. This results in unstable scheduler behaviour: entries that come later in the iteration order are skipped, so the scheduler sometimes does not resend those tasks to Celery.
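A self-contained sketch of the skipping behaviour (the task keys and timestamps are made up; this only mirrors the break-on-first-non-expired pattern described above, it is not the actual Airflow source):

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
adopted_task_timeouts = {
    "task_a": now - timedelta(minutes=5),   # already stalled
    "task_b": now + timedelta(minutes=5),   # not stalled yet -> loop breaks here
    "task_c": now - timedelta(minutes=15),  # also stalled, but never reached
}

timedout_keys = []
for key, stalled_after in adopted_task_timeouts.items():
    if stalled_after > now:
        break  # early exit is only safe if entries are sorted by timeout
    timedout_keys.append(key)

print(timedout_keys)  # ['task_a'] -- 'task_c' is silently skipped
```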
Confirmed with Airflow 2.1.1
**What you expected to happen**:
Deterministic behaviour of the scheduler in this case.
**How to reproduce it**:
These are the steps to reproduce task adoption. To reproduce the unstable behaviour, you may need to trigger some additional DAGs in the process.
- set [core]parallelism to 30
- trigger a DAG with concurrency==100 and 30 tasks, each running for 30 minutes (e.g. `sleep 1800`; see the sketch after this list)
- 18 of them will be running, the others will be in the queued state
- restart scheduler
- observe "Adopted tasks were still pending after 0:10:00, assuming they never made it to celery and clearing:"
- tasks will be failed and marked as "up for retry"
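A sketch of the kind of DAG described above (the dag_id and task naming are illustrative):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="adoption_repro",
    start_date=datetime(2021, 8, 1),
    schedule_interval=None,
    concurrency=100,
    catchup=False,
) as dag:
    # 30 long-running tasks, so some of them stay queued under [core] parallelism = 30
    for i in range(30):
        BashOperator(task_id=f"sleep_{i}", bash_command="sleep 1800")
```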
The important thing is that scheduler has to restart once tasks get to the queue, so that it will adopt queued tasks. | https://github.com/apache/airflow/issues/17381 | https://github.com/apache/airflow/pull/18208 | e7925d8255e836abd8912783322d61b3a9ff657a | 9a7243adb8ec4d3d9185bad74da22e861582ffbe | "2021-08-02T14:33:22Z" | python | "2021-09-15T13:36:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,373 | ["airflow/cli/cli_parser.py", "airflow/executors/executor_loader.py", "tests/cli/conftest.py", "tests/cli/test_cli_parser.py"] | Allow using default celery commands for custom Celery executors subclassed from existing | **Description**
Allow custom executors subclassed from the existing ones (CeleryExecutor, CeleryKubernetesExecutor, etc.) to use the default CLI commands to start workers or Flower monitoring.
**Use case / motivation**
Currently, users who decide to roll their own custom Celery-based executor cannot use the default commands (e.g. `airflow celery worker`), even though it is built on top of the existing CeleryExecutor. If they try to, they'll receive the following error: `airflow command error: argument GROUP_OR_COMMAND: celery subcommand works only with CeleryExecutor, your current executor: custom_package.CustomCeleryExecutor, see help above.`
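To illustrate the use case, even a trivial subclass like the hypothetical one below is rejected by `airflow celery worker`, apparently because the check compares the configured executor name rather than the class hierarchy:

```python
# custom_package/executors.py (hypothetical module)
from airflow.executors.celery_executor import CeleryExecutor


class CustomCeleryExecutor(CeleryExecutor):
    """Reuses the stock Celery app; only adds project-specific tweaks."""
    # e.g. custom queue routing or task-adoption rules would go here
```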
One workaround for this is to create a custom entrypoint script for the worker/Flower containers/processes, which still use the same Celery app as CeleryExecutor. This leads to unnecessary maintenance of that entrypoint script.
I'd suggest two ways of fixing that:
- Check if the custom executor is subclassed from CeleryExecutor (which might lead to errors if the custom executor is used to access a different Celery app, which can be a legitimate reason for rolling a custom executor); see the sketch after this list
- Store `app` as an attribute of Celery-based executors and match the one provided by the custom executor against the default one
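A rough sketch of the first option (names are illustrative; this is not the actual `cli_parser` code):

```python
from airflow.executors.celery_executor import CeleryExecutor


def _celery_commands_allowed(executor_cls: type) -> bool:
    """Accept any subclass of CeleryExecutor instead of comparing executor names."""
    return issubclass(executor_cls, CeleryExecutor)
```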
**Related Issues**
N/A | https://github.com/apache/airflow/issues/17373 | https://github.com/apache/airflow/pull/18189 | d3f445636394743b9298cae99c174cb4ac1fc30c | d0cea6d849ccf11e2b1e55d3280fcca59948eb53 | "2021-08-02T08:46:59Z" | python | "2021-12-04T15:19:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,368 | ["airflow/providers/slack/example_dags/__init__.py", "airflow/providers/slack/example_dags/example_slack.py", "airflow/providers/slack/operators/slack.py", "docs/apache-airflow-providers-slack/index.rst", "tests/providers/slack/operators/test.csv", "tests/providers/slack/operators/test_slack.py"] | Add example DAG for SlackAPIFileOperator | The SlackAPIFileOperator is not straight forward and it might be better to add an example DAG to demonstrate the usage.
| https://github.com/apache/airflow/issues/17368 | https://github.com/apache/airflow/pull/17400 | c645d7ac2d367fd5324660c616618e76e6b84729 | 2935be19901467c645bce9d134e28335f2aee7d8 | "2021-08-01T23:15:40Z" | python | "2021-08-16T16:16:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,348 | ["airflow/providers/google/cloud/example_dags/example_mlengine.py", "airflow/providers/google/cloud/operators/mlengine.py", "tests/providers/google/cloud/operators/test_mlengine.py"] | Add support for hyperparameter tuning on GCP Cloud AI | @darshan-majithiya had opened #15429 to add the hyperparameter tuning PR but it's gone stale. I'm adding this issue to see if they want to pick it back up, or if not, if someone wants to pick up where they left off in the spirit of open source 😄 | https://github.com/apache/airflow/issues/17348 | https://github.com/apache/airflow/pull/17790 | 87769db98f963338855f59cfc440aacf68e008c9 | aa5952e58c58cab65f49b9e2db2adf66f17e7599 | "2021-07-30T18:50:32Z" | python | "2021-08-27T18:12:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,340 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/apache/livy/operators/livy.py", "tests/providers/apache/livy/operators/test_livy.py"] | Retrieve session logs when using Livy Operator | **Description**
The Airflow logs generated by the Livy operator currently only state the status of the submitted batch. To view the logs from the job itself, one must go separately to the session logs. I think that Airflow should have an option (possibly on by default) to retrieve the session logs after the batch reaches a terminal state, if a `polling_interval` has been set.
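For reference, the kind of follow-up call this would automate looks roughly like the sketch below, which goes straight at Livy's REST API (the host and batch id are placeholders; the operator/hook integration would presumably wrap something similar):

```python
import requests


def fetch_batch_logs(livy_url: str, batch_id: int, size: int = 100) -> list:
    """Return up to `size` log lines for a Livy batch via GET /batches/{id}/log."""
    resp = requests.get(
        f"{livy_url}/batches/{batch_id}/log",
        params={"from": 0, "size": size},
    )
    resp.raise_for_status()
    return resp.json().get("log", [])
```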
**Use case / motivation**
When debugging a task submitted via Livy, the session logs are the first place to check. For most other tasks, including those run with SparkSubmitOperator, these first-check logs can be viewed in the Airflow UI, but for Livy you must go to an external system or write a separate task to retrieve them.
**Are you willing to submit a PR?**
I don't yet have a good sense of how challenging this will be to set up and test. I can try but if anyone else wants to go for it, don't let my attempt stop you.
**Related Issues**
None I could find
| https://github.com/apache/airflow/issues/17340 | https://github.com/apache/airflow/pull/17393 | d04aa135268b8e0230be3af6598a3b18e8614c3c | 02a33b55d1ef4d5e0466230370e999e8f1226b30 | "2021-07-30T13:54:00Z" | python | "2021-08-20T21:49:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,326 | ["tests/jobs/test_local_task_job.py"] | `TestLocalTaskJob.test_mark_success_no_kill` fails consistently on MSSQL | **Apache Airflow version**: main
**Environment**: CI
**What happened**:
The `TestLocalTaskJob.test_mark_success_no_kill` test no longer passes on MSSQL. I initially thought it was a race condition, but even after 5 minutes the TI wasn't running.
https://github.com/apache/airflow/blob/36bdfe8d0ef7e5fc428434f8313cf390ee9acc8f/tests/jobs/test_local_task_job.py#L301-L306
I've tracked down that the issue was introduced by #16301 (cc @ephraimbuddy), but I haven't really dug into why yet.
**How to reproduce it**:
`./breeze --backend mssql tests tests/jobs/test_local_task_job.py`
```
_____________________________________________________________________________________ TestLocalTaskJob.test_mark_success_no_kill _____________________________________________________________________________________
self = <tests.jobs.test_local_task_job.TestLocalTaskJob object at 0x7f54652abf10>
def test_mark_success_no_kill(self):
"""
Test that ensures that mark_success in the UI doesn't cause
the task to fail, and that the task exits
"""
dagbag = DagBag(
dag_folder=TEST_DAG_FOLDER,
include_examples=False,
)
dag = dagbag.dags.get('test_mark_success')
task = dag.get_task('task1')
session = settings.Session()
dag.clear()
dag.create_dagrun(
run_id="test",
state=State.RUNNING,
execution_date=DEFAULT_DATE,
start_date=DEFAULT_DATE,
session=session,
)
ti = TaskInstance(task=task, execution_date=DEFAULT_DATE)
ti.refresh_from_db()
job1 = LocalTaskJob(task_instance=ti, ignore_ti_state=True)
process = multiprocessing.Process(target=job1.run)
process.start()
for _ in range(0, 50):
if ti.state == State.RUNNING:
break
time.sleep(0.1)
ti.refresh_from_db()
> assert State.RUNNING == ti.state
E AssertionError: assert <TaskInstanceState.RUNNING: 'running'> == None
E + where <TaskInstanceState.RUNNING: 'running'> = State.RUNNING
E + and None = <TaskInstance: test_mark_success.task1 2016-01-01 00:00:00+00:00 [None]>.state
tests/jobs/test_local_task_job.py:306: AssertionError
------------------------------------------------------------------------------------------------ Captured stderr call ------------------------------------------------------------------------------------------------
INFO [airflow.models.dagbag.DagBag] Filling up the DagBag from /opt/airflow/tests/dags
INFO [root] class_instance type: <class 'unusual_prefix_5d280a9b385120fec3c40cfe5be04e2f41b6b5e8_test_task_view_type_check.CallableClass'>
INFO [airflow.models.dagbag.DagBag] File /opt/airflow/tests/dags/test_zip.zip:file_no_airflow_dag.py assumed to contain no DAGs. Skipping.
Process Process-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
return fn()
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 364, in connect
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 809, in _checkout
result = pool._dialect.do_ping(fairy.connection)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 575, in do_ping
cursor.execute(self._dialect_specific_select_one)
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The server failed to resume the transaction. Desc:3400000012. (3971) (SQLExecDirectW)')
(truncated)
``` | https://github.com/apache/airflow/issues/17326 | https://github.com/apache/airflow/pull/17334 | 0f97b92c1ad15bd6d0a90c8dee8287886641d7d9 | 7bff44fba83933de1b420fbb4fc3655f28769bd0 | "2021-07-29T22:15:54Z" | python | "2021-07-30T14:38:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,316 | ["scripts/ci/pre_commit/pre_commit_check_provider_yaml_files.py"] | Docs validation function - Add meaningful errors | The following function just prints right/left error which is not very meaningful and very difficult to troubleshoot.
```python
def check_doc_files(yaml_files: Dict[str, Dict]):
print("Checking doc files")
current_doc_urls = []
current_logo_urls = []
for provider in yaml_files.values():
if 'integrations' in provider:
current_doc_urls.extend(
guide
for guides in provider['integrations']
if 'how-to-guide' in guides
for guide in guides['how-to-guide']
)
current_logo_urls.extend(
integration['logo'] for integration in provider['integrations'] if 'logo' in integration
)
if 'transfers' in provider:
current_doc_urls.extend(
op['how-to-guide'] for op in provider['transfers'] if 'how-to-guide' in op
)
expected_doc_urls = {
"/docs/" + os.path.relpath(f, start=DOCS_DIR)
for f in glob(f"{DOCS_DIR}/apache-airflow-providers-*/operators/**/*.rst", recursive=True)
if not f.endswith("/index.rst") and '/_partials' not in f
}
expected_doc_urls |= {
"/docs/" + os.path.relpath(f, start=DOCS_DIR)
for f in glob(f"{DOCS_DIR}/apache-airflow-providers-*/operators.rst", recursive=True)
}
expected_logo_urls = {
"/" + os.path.relpath(f, start=DOCS_DIR)
for f in glob(f"{DOCS_DIR}/integration-logos/**/*", recursive=True)
if os.path.isfile(f)
}
try:
assert_sets_equal(set(expected_doc_urls), set(current_doc_urls))
assert_sets_equal(set(expected_logo_urls), set(current_logo_urls))
except AssertionError as ex:
print(ex)
sys.exit(1)
``` | https://github.com/apache/airflow/issues/17316 | https://github.com/apache/airflow/pull/17322 | 213e337f57ef2ef9a47003214f40da21f4536b07 | 76e6315473671b87f3d5fe64e4c35a79658789d3 | "2021-07-29T14:58:56Z" | python | "2021-07-30T19:18:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 17,291 | ["tests/jobs/test_scheduler_job.py", "tests/test_utils/asserts.py"] | [QUARANTINE] Quarantine test_retry_still_in_executor | The `TestSchedulerJob.test_retry_still_in_executor` fails occasionally and should be quarantined.
```
________________ TestSchedulerJob.test_retry_still_in_executor _________________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob object at 0x7f4c9031f128>
def test_retry_still_in_executor(self):
"""
Checks if the scheduler does not put a task in limbo, when a task is retried
but is still present in the executor.
"""
executor = MockExecutor(do_update=False)
dagbag = DagBag(dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"), include_examples=False)
dagbag.dags.clear()
dag = DAG(dag_id='test_retry_still_in_executor', start_date=DEFAULT_DATE, schedule_interval="@once")
dag_task1 = BashOperator(
task_id='test_retry_handling_op', bash_command='exit 1', retries=1, dag=dag, owner='airflow'
)
dag.clear()
dag.is_subdag = False
with create_session() as session:
orm_dag = DagModel(dag_id=dag.dag_id)
orm_dag.is_paused = False
session.merge(orm_dag)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
@mock.patch('airflow.dag_processing.processor.DagBag', return_value=dagbag)
def do_schedule(mock_dagbag):
# Use a empty file since the above mock will return the
# expected DAGs. Also specify only a single file so that it doesn't
# try to schedule the above DAG repeatedly.
self.scheduler_job = SchedulerJob(
num_runs=1, executor=executor, subdir=os.path.join(settings.DAGS_FOLDER, "no_dags.py")
)
self.scheduler_job.heartrate = 0
self.scheduler_job.run()
do_schedule()
with create_session() as session:
ti = (
session.query(TaskInstance)
.filter(
TaskInstance.dag_id == 'test_retry_still_in_executor',
TaskInstance.task_id == 'test_retry_handling_op',
)
.first()
)
> ti.task = dag_task1
E AttributeError: 'NoneType' object has no attribute 'task'
tests/jobs/test_scheduler_job.py:2514: AttributeError
``` | https://github.com/apache/airflow/issues/17291 | https://github.com/apache/airflow/pull/19860 | d1848bcf2460fa82cd6c1fc1e9e5f9b103d95479 | 9b277dbb9b77c74a9799d64e01e0b86b7c1d1542 | "2021-07-28T17:40:09Z" | python | "2021-12-13T17:55:43Z" |