status (stringclasses, 1 value) | repo_name (stringclasses, 13 values) | repo_url (stringclasses, 13 values) | issue_id (int64, 1 to 104k) | updated_files (stringlengths, 10 to 1.76k) | title (stringlengths, 4 to 369) | body (stringlengths, 0 to 254k, nullable) | issue_url (stringlengths, 38 to 55) | pull_url (stringlengths, 38 to 53) | before_fix_sha (stringlengths, 40) | after_fix_sha (stringlengths, 40) | report_datetime (unknown) | language (stringclasses, 5 values) | commit_datetime (unknown)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | apache/airflow | https://github.com/apache/airflow | 15,179 | ["chart/templates/NOTES.txt"] | Kubernetes does not show logs for task instances if remote logging is not configured | Without configuring remote logging, logs from Kubernetes for task instances are not complete.
Without remote logging configured, the logs for task instances are output as:
logging_level: INFO
```log
BACKEND=postgresql
DB_HOST=airflow-postgresql.airflow.svc.cluster.local
DB_PORT=5432
[2021-04-03 12:35:52,047] {dagbag.py:448} INFO - Filling up the DagBag from /opt/airflow/dags/k8pod.py
/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/backcompat/backwards_compat_converters.py:26 DeprecationWarning: This module is deprecated. Please use `kubernetes.client.models.V1Volume`.
/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/backcompat/backwards_compat_converters.py:27 DeprecationWarning: This module is deprecated. Please use `kubernetes.client.models.V1VolumeMount`.
Running <TaskInstance: k8_pod_operator_xcom.task322 2021-04-03T12:25:49.515523+00:00 [queued]> on host k8podoperatorxcomtask322.7f2ee45d4d6443c5ad26bd8fbefb8292
```
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
**Environment**:
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04
**What happened**:
The logs for task instance runs are not shown when remote logging is not configured.
**What you expected to happen**:
I expected to see complete logs for tasks
**How to reproduce it**:
Start airflow using the helm chart without configuring remote logging.
Run a task and check the logs.
It's necessary to set `delete_worker_pods` to False so you can view the logs after the task has ended
| https://github.com/apache/airflow/issues/15179 | https://github.com/apache/airflow/pull/16784 | 1eed6b4f37ddf2086bf06fb5c4475c68fadac0f9 | 8885fc1d9516b30b316487f21e37d34bdd21e40e | "2021-04-03T21:20:52Z" | python | "2021-07-06T18:37:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,178 | ["airflow/example_dags/tutorial.py", "airflow/models/baseoperator.py", "airflow/serialization/schema.json", "airflow/www/utils.py", "airflow/www/views.py", "docs/apache-airflow/concepts.rst", "tests/serialization/test_dag_serialization.py", "tests/www/test_utils.py"] | Task doc is not shown on Airflow 2.0 Task Instance Detail view |
**Apache Airflow version**: 932f8c2e9360de6371031d4d71df00867a2776e6
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**: locally run `airflow server`
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): mac
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Task doc is shown on Airflow v1 Task Instance Detail view but not shown on v2.
**What you expected to happen**:
Task doc is shown.
**How to reproduce it**:
- install airflow latest master
- `airflow server`
- open `tutorial_etl_dag` in `example_dags`
- run the DAG and open the task instance detail view (I don't know why, but the task instance detail view fails to open with an error if there are no DAG runs)
| https://github.com/apache/airflow/issues/15178 | https://github.com/apache/airflow/pull/15191 | 7c17bf0d1e828b454a6b2c7245ded275b313c792 | e86f5ca8fa5ff22c1e1f48addc012919034c672f | "2021-04-03T20:48:59Z" | python | "2021-04-05T02:46:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,171 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | scheduler does not apply ordering when querying which task instances to queue | Issue type:
Bug
Airflow version:
2.0.1 (although bug may have existed earlier, and master still has the bug)
Issue:
The scheduler sometimes queues tasks in alphabetical order instead of in priority-weight and execution-date order. This causes priorities to not work at all, and causes some tasks with names later in the alphabet to never run as long as new tasks with names earlier in the alphabet are ready.
Where the issue is in code (I think):
The scheduler will query the DB to get a set of task instances that are ready to run: https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L915-L924
It will simply take the first `max_tis` task instances from the result (via the `limit` call in the last line of the query), where `max_tis` is computed earlier in the code as the cumulative pool slots available. The code in master improved the query to filter out tasks from starved pools, but it still takes only the first `max_tis` tasks, with no ordering or reasoning about which `max_tis` to take.
Later, the scheduler is smart and will queue tasks based on priority and execution order:
https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L978-L980
However, the correct sorting (second code link here) only happens on the subset picked by the query (first code link here), and the query does not pick tasks according to that sorting.
This causes tasks with lower priority and/or later execution date to be queued BEFORE tasks with higher priority and/or earlier execution date, simply because the former come earlier in the alphabet than the latter and are therefore the only ones returned by the unsorted, limited SQL query.
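To make the interaction concrete, below is a self-contained illustration (not Airflow's actual code; the model, session setup and column names are invented for the example, assuming SQLAlchemy 1.4+) of how a `LIMIT` without an `ORDER BY` hands the later in-memory sort an arbitrary subset, and how pushing the ordering into the query fixes that.
```python
from sqlalchemy import Column, Integer, String, create_engine, desc
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Task(Base):
    __tablename__ = "task_instance_demo"
    id = Column(Integer, primary_key=True)
    task_id = Column(String)
    priority_weight = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Task(task_id="a_low_priority", priority_weight=1),
        Task(task_id="z_high_priority", priority_weight=10),
    ])
    session.commit()

    # Unordered + limited: which row survives the LIMIT is up to the database,
    # so the high-priority task may never be examined at all.
    arbitrary_subset = session.query(Task).limit(1).all()

    # Proposed direction: apply the ordering the scheduler uses later
    # (priority weight first) inside the query itself, so the limited
    # subset is already the right one.
    ordered_subset = (
        session.query(Task)
        .order_by(desc(Task.priority_weight))
        .limit(1)
        .all()
    )
    print(ordered_subset[0].task_id)  # -> z_high_priority
```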
Proposed fix:
Add a "sort by" in the query that gets the tasks to examine (first code link here), so that tasks are sorted by priority weight and execution time (meaning, same logic as the list sorting done later). I am willing to submit a PR if at least I get some feedback on the proposed fix here. | https://github.com/apache/airflow/issues/15171 | https://github.com/apache/airflow/pull/15210 | 4752fb3eb8ac8827e6af6022fbcf751829ecb17a | 943292b4e0c494f023c86d648289b1f23ccb0ee9 | "2021-04-03T06:35:46Z" | python | "2021-06-14T11:34:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,150 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | "duplicate key value violates unique constraint "dag_run_dag_id_execution_date_key" when triggering a DAG |
**Apache Airflow version**:
2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
1.14
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
[2021-04-02 07:23:30,513] [ERROR] app.py:1892 - Exception on /api/v1/dags/auto_test/dagRuns [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "dag_run_dag_id_execution_date_key"
DETAIL: Key (dag_id, execution_date)=(auto_test, 1967-12-13 20:57:42.043+01) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 184, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 384, in wrapper
return function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/response.py", line 103, in wrapper
response = function(request)
File "/usr/local/lib/python3.6/site-packages/connexion/decorators/parameter.py", line 121, in wrapper
return function(**kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/api_connexion/security.py", line 47, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/api_connexion/endpoints/dag_run_endpoint.py", line 231, in post_dag_run
session.commit()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1046, in commit
self.transaction.commit()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 504, in commit
self._prepare_impl()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 483, in _prepare_impl
self.session.flush()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2540, in flush
self._flush(objects)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2682, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
with_traceback=exc_tb,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2642, in _flush
flush_context.execute()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/unitofwork.py", line 589, in execute
uow,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
insert,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1136, in _emit_insert_statements
statement, params
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1130, in _execute_clauseelement
distilled_params,
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1511, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "dag_run_dag_id_execution_date_key"
DETAIL: Key (dag_id, execution_date)=(auto_test, 1967-12-13 20:57:42.043+01) already exists.
[SQL: INSERT INTO dag_run (dag_id, execution_date, start_date, end_date, state, run_id, creating_job_id, external_trigger, run_type, conf, last_scheduling_decision, dag_hash) VALUES (%(dag_id)s, %(execution_date)s, %(start_date)s, %(end_date)s, %(state)s, %(run_id)s, %(creating_job_id)s, %(external_trigger)s, %(run_type)s, %(conf)s, %(last_scheduling_decision)s, %(dag_hash)s) RETURNING dag_run.id]
[parameters: {'dag_id': 'auto_test', 'execution_date': datetime.datetime(1967, 12, 13, 19, 57, 42, 43000, tzinfo=Timezone('UTC')), 'start_date': datetime.datetime(2021, 4, 2, 7, 23, 30, 511735, tzinfo=Timezone('UTC')), 'end_date': None, 'state': 'running', 'run_id': 'dag_run_id_zstp_4435_postman11', 'creating_job_id': None, 'external_trigger': True, 'run_type': <DagRunType.MANUAL: 'manual'>, 'conf': <psycopg2.extensions.Binary object at 0x7f07b30b71e8>, 'last_scheduling_decision': None, 'dag_hash': None}]
<!-- (please include exact error messages if you can) -->
**What you expected to happen**:
The second trigger succeeds.
**How to reproduce it**:
Trigger a dag with the following conf:
`{
"dag_run_id": "dag_run_id_zstp_4435_postman",
"execution_date": "1967-12-13T19:57:42.043Z",
"conf": {}
}`
Then triggering the same dag, only with a different dag_run_id:
`{
"dag_run_id": "dag_run_id_zstp_4435_postman111",
"execution_date": "1967-12-13T19:57:42.043Z",
"conf": {}
}`
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/15150 | https://github.com/apache/airflow/pull/15174 | 36d9274f4ea87f28e2dcbab393b21e34a04eec30 | d89bcad26445c8926093680aac84d969ac34b54c | "2021-04-02T07:33:21Z" | python | "2021-04-06T14:05:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,145 | ["airflow/providers/google/cloud/example_dags/example_bigquery_to_mssql.py", "airflow/providers/google/cloud/transfers/bigquery_to_mssql.py", "airflow/providers/google/provider.yaml", "tests/providers/google/cloud/transfers/test_bigquery_to_mssql.py"] | Big Query to MS SQL operator |
**Description**
A new transfer operator for transferring records from Big Query to MSSQL.
**Use case / motivation**
Very similar to the BigQuery-to-MySQL transfer, this will be an operator for transferring rows from BigQuery to MSSQL.
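A hypothetical usage sketch of what such an operator could look like, modeled on the existing BigQuery-to-MySQL transfer; the class name, import path and parameters below are illustrative assumptions, not a finalized API.
```python
# Hypothetical interface only: class name and arguments are assumptions for
# illustration, not the implemented operator.
from airflow.providers.google.cloud.transfers.bigquery_to_mssql import BigQueryToMsSqlOperator

bigquery_to_mssql = BigQueryToMsSqlOperator(
    task_id="bigquery_to_mssql",
    source_project_dataset_table="my-project.my_dataset.my_table",  # placeholder source
    mssql_table="dbo.target_table",                                 # placeholder target
    selected_fields="col_a,col_b",
    replace=False,
)
```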
**Are you willing to submit a PR?**
Yes
**Related Issues**
No
| https://github.com/apache/airflow/issues/15145 | https://github.com/apache/airflow/pull/15422 | 70cfe0135373d1f0400e7d9b275ebb017429794b | 7f8f75eb80790d4be3167f5e1ffccc669a281d55 | "2021-04-01T20:36:55Z" | python | "2021-06-12T21:07:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,131 | ["airflow/utils/cli.py", "tests/utils/test_cli_util.py"] | airflow scheduler -p command not working in airflow 2.0.1 | **Apache Airflow version**:
2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: 4GB RAM, Processor - Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
- **OS** (e.g. from /etc/os-release): 18.04.5 LTS (Bionic Beaver)
- **Kernel** (e.g. `uname -a`): Linux 4.15.0-136-generic
- **Install tools**: bare metal installation as per commands given [here](https://airflow.apache.org/docs/apache-airflow/stable/installation.html)
**What happened**:
On running `airflow scheduler -p`, I got the following error:
```
Traceback (most recent call last):
File "/home/vineet/Documents/projects/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/vineet/Documents/projects/venv/lib/python3.6/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/vineet/Documents/projects/venv/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/vineet/Documents/projects/venv/lib/python3.6/site-packages/airflow/utils/cli.py", line 86, in wrapper
metrics = _build_metrics(f.__name__, args[0])
File "/home/vineet/Documents/projects/venv/lib/python3.6/site-packages/airflow/utils/cli.py", line 118, in _build_metrics
full_command[idx + 1] = "*" * 8
IndexError: list assignment index out of range
```
As per the docs, `-p` is a valid flag for the scheduler, and the long form `airflow scheduler --do-pickle` gives the correct result.
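From the traceback, the failure is in the argument-masking logic of `_build_metrics`, which assumes that a sensitive-looking flag such as `-p` is always followed by a value to mask. A minimal sketch of a defensive version of that idea (the flag set and loop shape are assumptions based on the traceback, not the actual Airflow source):
```python
# Sketch only: mask the value following a sensitive flag, but skip masking
# when the flag is the last element (as in `airflow scheduler -p`).
sensitive_flags = {"-p", "--password", "--conf"}  # assumed set for illustration

full_command = ["airflow", "scheduler", "-p"]

for idx, arg in enumerate(full_command):
    if arg in sensitive_flags and idx + 1 < len(full_command):
        full_command[idx + 1] = "*" * 8

print(full_command)  # ['airflow', 'scheduler', '-p'] -- no IndexError
```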
**How to reproduce it**:
Install `airflow` 2.0.1 and run `airflow scheduler -p` | https://github.com/apache/airflow/issues/15131 | https://github.com/apache/airflow/pull/15143 | 6822665102c973d6e4d5892564294489ca094580 | 486b76438c0679682cf98cb88ed39c4b161cbcc8 | "2021-04-01T11:44:03Z" | python | "2021-04-01T21:02:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,113 | ["setup.py"] | ImportError: cannot import name '_check_google_client_version' from 'pandas_gbq.gbq' | **What happened**:
`pandas-gbq` released version [0.15.0](https://github.com/pydata/pandas-gbq/releases/tag/0.15.0) which broke `apache-airflow-backport-providers-google==2020.11.23`
```
../lib/python3.7/site-packages/airflow/providers/google/cloud/hooks/bigquery.py:49: in <module>
from pandas_gbq.gbq import (
E ImportError: cannot import name '_check_google_client_version' from 'pandas_gbq.gbq' (/usr/local/lib/python3.7/site-packages/pandas_gbq/gbq.py)
```
The fix is to pin `pandas-gbq==0.14.1`. | https://github.com/apache/airflow/issues/15113 | https://github.com/apache/airflow/pull/15114 | 64b00896d905abcf1fbae195a29b81f393319c5f | b3b412523c8029b1ffbc600952668dc233589302 | "2021-03-31T14:39:00Z" | python | "2021-04-04T17:25:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,107 | ["Dockerfile", "chart/values.yaml", "docs/docker-stack/build-arg-ref.rst", "docs/docker-stack/build.rst", "docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile", "docs/docker-stack/entrypoint.rst", "scripts/in_container/prod/entrypoint_prod.sh"] | Make the entrypoint in Prod image fail in case the user/group is not properly set | Airflow Production image accepts two types of uid/gid setting:
* airflow user (50000) with any GID
* any other user with GID = 0 (this is to accommodate OpenShift Guidelines https://docs.openshift.com/enterprise/3.0/creating_images/guidelines.html)
We should check the uid/gid at the entrypoint and fail with a clear error message if the uid/gid are set incorrectly.
| https://github.com/apache/airflow/issues/15107 | https://github.com/apache/airflow/pull/15162 | 1d635ef0aefe995553059ee5cf6847cf2db65b8c | ce91872eccceb8fb6277012a909ad6b529a071d2 | "2021-03-31T10:30:38Z" | python | "2021-04-08T17:28:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,103 | ["airflow/www/static/js/task_instance.js"] | Airflow web server redirects to a non-existing log folder - v2.1.0.dev0 | **Apache Airflow version**: v2.1.0.dev0
**Environment**:
- **Others**: Docker + docker compose
```
docker pull apache/airflow:master-python3.8
```
**What happened**:
Once the task finishes successfully, I click on the Logs button in the web server and get redirected to this URL:
`http://localhost:8080/task?dag_id=testing&task_id=testing2&execution_date=2021-03-30T22%3A50%3A17.075509%2B00%3A00`
Everything looks fine just for 0.5-ish seconds (the screenshots below were taken by disabling the page refreshing):
![image](https://user-images.githubusercontent.com/5461023/113067907-84ee7a80-91bd-11eb-85ba-cda86eda9125.png)
![image](https://user-images.githubusercontent.com/5461023/113067993-aea7a180-91bd-11eb-9f5f-682d111f9fa8.png)
Then, it instantly gets redirected to the following URL:
`http://localhost:8080/task?dag_id=testing&task_id=testing2&execution_date=2021-03-30+22%3A50%3A17%2B00%3A00#`
In which I cannot see any info:
![image](https://user-images.githubusercontent.com/5461023/113068254-3d1c2300-91be-11eb-98c8-fa578bfdfbd1.png)
![image](https://user-images.githubusercontent.com/5461023/113068278-4c02d580-91be-11eb-9785-b661d4c36463.png)
The problem lies in the execution-date format specified in the URL:
```
2021-03-30T22%3A50%3A17.075509%2B00%3A00
2021-03-30+22%3A50%3A17%2B00%3A00#
```
This is my python code to run the DAG:
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def test_python(test):
    # Stub for the callable referenced below; its body was not shown in the report.
    print(test)

args = {
'owner': 'airflow',
}
dag = DAG(
dag_id='testing',
default_args=args,
schedule_interval=None,
start_date=datetime(2019,1,1),
catchup=False,
tags=['example'],
)
task = PythonOperator(
task_id="testing2",
python_callable=test_python,
depends_on_past=False,
op_kwargs={'test': 'hello'},
dag=dag,
)
```
**Configuration details**
Environment variables from docker-compose.yml file:
```
AIRFLOW__CORE__EXECUTOR: CeleryExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
AIRFLOW_HOME: /opt/airflow
AIRFLOW__CORE__DEFAULT_TIMEZONE: Europe/Madrid
AIRFLOW__WEBSERVER__DEFAULT_UI_TIMEZONE: Europe/Madrid
AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true'
```
| https://github.com/apache/airflow/issues/15103 | https://github.com/apache/airflow/pull/15258 | 019241be0c839ba32361679ffecd178c0506d10d | 523fb5c3f421129aea10045081dc5e519859c1ae | "2021-03-30T23:29:48Z" | python | "2021-04-07T20:38:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,088 | ["airflow/providers/google/cloud/hooks/bigquery_dts.py", "airflow/providers/google/cloud/operators/bigquery_dts.py"] | GCP BigQuery Data Transfer Run Issue | **Apache Airflow version**: composer-1.15.1-airflow-1.10.14
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**: Google Composer
**What happened**:
After a `TransferConfig` is created successfully by operator `BigQueryCreateDataTransferOperator`, and also confirmed in GCP console, the created Resource name is:
```
projects/<project-id>/locations/europe/transferConfigs/<transfer-config-id>
```
Then, when I use the `BigQueryDataTransferServiceStartTransferRunsOperator` operator to run the transfer, I get this error:
```
Traceback (most recent call last)
File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_tas
result = task_copy.execute(context=context
File "/usr/local/lib/airflow/airflow/providers/google/cloud/operators/bigquery_dts.py", line 290, in execut
metadata=self.metadata
File "/usr/local/lib/airflow/airflow/providers/google/common/hooks/base_google.py", line 425, in inner_wrappe
return func(self, *args, **kwargs
File "/usr/local/lib/airflow/airflow/providers/google/cloud/hooks/bigquery_dts.py", line 235, in start_manual_transfer_run
metadata=metadata or ()
File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/bigquery_datatransfer_v1/services/data_transfer_service/client.py", line 1110, in start_manual_transfer_run
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,
File "/opt/python3.6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call_
return wrapped_func(*args, **kwargs
File "/opt/python3.6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 75, in error_remapped_callabl
six.raise_from(exceptions.from_grpc_error(exc), exc
File "<string>", line 3, in raise_fro
google.api_core.exceptions.NotFound: 404 Requested entity was not found
```
**What you expected to happen**:
`BigQueryDataTransferServiceStartTransferRunsOperator` should run the data transfer job.
**How to reproduce it**:
1. Create a `cross_region_copy` TransferConfig with the operator `BigQueryCreateDataTransferOperator`
2. Run the job with operator `BigQueryDataTransferServiceStartTransferRunsOperator`
```python
create_transfer = BigQueryCreateDataTransferOperator(
task_id=f'create_{ds}_transfer',
transfer_config={
'destination_dataset_id': ds,
'display_name': f'Copy {ds}',
'data_source_id': 'cross_region_copy',
'schedule_options': {'disable_auto_scheduling': True},
'params': {
'source_project_id': source_project,
'source_dataset_id': ds,
'overwrite_destination_table': True
},
},
project_id=target_project,
)
transfer_config_id = f"{{{{ task_instance.xcom_pull('create_{ds}_transfer', key='transfer_config_id') }}}}"
start_transfer = BigQueryDataTransferServiceStartTransferRunsOperator(
task_id=f'start_{ds}_transfer',
transfer_config_id=transfer_config_id,
requested_run_time={"seconds": int(time.time() + 60)},
project_id=target_project,
)
run_id = f"{{{{ task_instance.xcom_pull('start_{ds}_transfer', key='run_id') }}}}"
```
**Anything else we need to know**:
So I went to [Google's API reference page](https://cloud.google.com/bigquery-transfer/docs/reference/datatransfer/rest/v1/projects.locations.transferConfigs/startManualRuns) to run some tests. When I use the parent parameter `projects/{projectId}/transferConfigs/{configId}`, it throws the same error. But it works when I use `projects/{projectId}/locations/{locationId}/transferConfigs/{configId}`.
I guess the piece of code that causes this issue is here in the hook. Why does it use `projects/{projectId}/transferConfigs/{configId}` instead of `projects/{projectId}/locations/{locationId}/transferConfigs/{configId}`?
https://github.com/apache/airflow/blob/def961512904443db90e0a980c43dc4d8f8328d5/airflow/providers/google/cloud/hooks/bigquery_dts.py#L226-L232
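To make the difference concrete, a small illustrative helper (placeholder values, not the provider's code) that builds the location-qualified resource name which worked in the API explorer, versus the project-level one the hook builds today:
```python
# Illustrative only: the two resource-name forms discussed above.
def transfer_config_path(project_id: str, config_id: str, location: str = None) -> str:
    if location:
        return f"projects/{project_id}/locations/{location}/transferConfigs/{config_id}"
    return f"projects/{project_id}/transferConfigs/{config_id}"

print(transfer_config_path("my-project", "1234"))                     # form the hook uses today
print(transfer_config_path("my-project", "1234", location="europe"))  # form that worked in this report
```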
| https://github.com/apache/airflow/issues/15088 | https://github.com/apache/airflow/pull/20221 | bc76126a9f6172a360fd4301eeb82372d000f70a | 98514cc1599751d7611b3180c60887da0a25ff5e | "2021-03-30T13:18:34Z" | python | "2021-12-12T23:40:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,071 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/scheduler_command.py", "chart/templates/scheduler/scheduler-deployment.yaml", "tests/cli/commands/test_scheduler_command.py"] | Run serve_logs process as part of scheduler command |
**Description**
- The `airflow serve_logs` command has been removed from the CLI as of 2.0.0
- When using `CeleryExecutor`, the `airflow celery worker` command runs the `serve_logs` process in the background.
- We should do the same with `airflow scheduler` command when using `LocalExecutor` or `SequentialExecutor`
**Use case / motivation**
- This will allow for viewing task logs in the UI when using `LocalExecutor` or `SequentialExecutor` without elasticsearch configured (a sketch of the idea follows this list).
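A hedged sketch of the idea, mirroring what `airflow celery worker` already does: start the log-serving endpoint in a side process for the lifetime of the scheduler. This is illustrative only; the helper import reflects where the celery worker's log server lives in Airflow 2.0, and the wiring into the scheduler command is an assumption.
```python
from multiprocessing import Process

from airflow.utils.serve_logs import serve_logs  # helper already used by `airflow celery worker`

def run_scheduler_with_log_server(run_scheduler_job):
    """Run the scheduler loop with a sidecar process serving task log files."""
    log_server = Process(target=serve_logs)
    log_server.start()
    try:
        run_scheduler_job()  # the normal SchedulerJob.run() call would go here
    finally:
        log_server.terminate()
```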
**Are you willing to submit a PR?**
Yes. Working with @dimberman .
**Related Issues**
- https://github.com/apache/airflow/issues/14222
- https://github.com/apache/airflow/issues/13331
| https://github.com/apache/airflow/issues/15071 | https://github.com/apache/airflow/pull/15557 | 053d903816464f699876109b50390636bf617eff | 414bb20fad6c6a50c5a209f6d81f5ca3d679b083 | "2021-03-29T17:46:46Z" | python | "2021-04-29T15:06:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,059 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/user_schema.py", "tests/api_connexion/endpoints/test_user_endpoint.py", "tests/api_connexion/schemas/test_user_schema.py"] | Remove 'user_id', 'role_id' from User and Role in OpenAPI schema | It would be good to remove the 'id' of both the User and Role schemas from what is dumped in the REST API endpoints. The IDs of the User and Role tables are sensitive data that should be hidden from the endpoints.
| https://github.com/apache/airflow/issues/15059 | https://github.com/apache/airflow/pull/15117 | b62ca0ad5d8550a72257ce59c8946e7f134ed70b | 7087541a56faafd7aa4b9bf9f94eb6b75eed6851 | "2021-03-28T15:40:00Z" | python | "2021-04-07T13:54:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,023 | ["airflow/www/api/experimental/endpoints.py", "airflow/www/templates/airflow/trigger.html", "airflow/www/views.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/www/api/experimental/test_endpoints.py", "tests/www/views/test_views_trigger_dag.py"] | DAG task execution and API fails if dag_run.conf is provided with an array or string (instead of dict) | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Tried both pip install and k8s image
**Environment**: Dev Workstation of K8s execution - both the same
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04 LTS
- **Others**: Python 3.6
**What happened**:
We use Airflow 1.10.14 currently in production and have a couple of DAGs defined today which digest a batch call. We implemented the batch (currently) in a way that the jobs are provided as dag_run.conf as an array of dicts, e.g. "[ { "job": "1" }, { "job": "2" } ]".
Trying to upgrade to Airflow 2.0.1 we see that such calls are still possible to submit but all further actions are failing:
- It is not possible to query status via REST API, generates a HTTP 500
- DAG starts but all tasks fail.
- Logs can not be displayed (actually there are none produced on the file system)
- Error logging is a bit complex: the Celery worker does not provide meaningful logs on the console nor produce log files; running a scheduler with the SequentialExecutor reveals at least one meaningful stack trace, shown below
- (probably a couple of other pieces of internal logic are also failing)
- Note that the dag_run.conf can be seen as submitted (so is correctly received) in Browse--> DAG Runs menu
As a regression check, the same DAG works when passing a dag_run.conf of `{ "batch": [ { "job": "1" }, { "job": "2" } ] }` as well as `{}`.
Example (simple) DAG to reproduce:
```
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago
from datetime import timedelta
dag = DAG(
'test1',
description='My first DAG',
default_args={
'owner': 'jscheffl',
'email': ['***@***.de'],
'email_on_failure': True,
'email_on_retry': True,
'retries': 5,
'retry_delay': timedelta(minutes=5),
},
start_date=days_ago(2)
)
hello_world = BashOperator(
task_id='hello_world',
bash_command='echo hello world',
dag=dag,
)
```
Stack trace from SequentialExecutor:
```
Traceback (most recent call last):
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 225, in task_run
ti.init_run_context(raw=args.raw)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1987, in init_run_context
self._set_context(self)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/logging_mixin.py", line 54, in _set_context
set_context(self.log, context)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/logging_mixin.py", line 174, in set_context
handler.set_context(value)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 56, in set_context
local_loc = self._init_file(ti)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 245, in _init_file
relative_path = self._render_filename(ti, ti.try_number)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 77, in _render_filename
jinja_context = ti.get_template_context()
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1606, in get_template_context
self.overwrite_params_with_dag_run_conf(params=params, dag_run=dag_run)
File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1743, in overwrite_params_with_dag_run_conf
params.update(dag_run.conf)
ValueError: dictionary update sequence element #0 has length 4; 2 is required
{sequential_executor.py:66} ERROR - Failed to execute task Command '['airflow', 'tasks', 'run', 'test1', 'hello_world', '2021-03-25T22:22:36.732899+00:00', '--local', '--pool', 'default_pool', '--subdir', '/home/jscheffl/Programmieren/Python/Airflow/airflow-home/dags/test1.py']' returned non-zero exit status 1..
[2021-03-25 23:42:47,209] {scheduler_job.py:1199} INFO - Executor reports execution of test1.hello_world execution_date=2021-03-25 22:22:36.732899+00:00 exited with status failed for try_number 5
```
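The failing call in the trace is `params.update(dag_run.conf)`: `dict.update` accepts a mapping or an iterable of key/value pairs, so a list-valued (or string-valued) conf breaks it. A minimal reproduction of that behaviour outside Airflow:
```python
# dict.update() with a list of dicts raises the same kind of ValueError seen
# in the task log, because each list element is not a (key, value) pair.
params = {}
conf = [{"job": "1"}, {"job": "2"}]  # list-style dag_run.conf from this report

try:
    params.update(conf)
except ValueError as err:
    print(err)  # e.g. "dictionary update sequence element #0 has length 1; 2 is required"
```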
**What you expected to happen**:
- EITHER the submission of arrays as dag_run.conf is supported like in 1.10.14
- OR I would expect that the submission contains a validation if array values are not supported by Airflow (which it seems it was at least working in 1.10)
**How to reproduce it**: See DAG code above, reproduce the error e.g. by triggering with "[ "test" ]" as dag_run.conf
**Anything else we need to know**: I assume not :-) | https://github.com/apache/airflow/issues/15023 | https://github.com/apache/airflow/pull/15057 | eeb97cff9c2cef46f2eb9a603ccf7e1ccf804863 | 01c9818405107271ee8341c72b3d2d1e48574e08 | "2021-03-25T22:50:15Z" | python | "2021-06-22T12:31:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,019 | ["airflow/ui/src/api/index.ts", "airflow/ui/src/components/TriggerRunModal.tsx", "airflow/ui/src/interfaces/api.ts", "airflow/ui/src/utils/memo.ts", "airflow/ui/src/views/Pipelines/Row.tsx", "airflow/ui/src/views/Pipelines/index.tsx", "airflow/ui/test/Pipelines.test.tsx"] | Establish mutation patterns via the API | https://github.com/apache/airflow/issues/15019 | https://github.com/apache/airflow/pull/15068 | 794922649982b2a6c095f7fa6be4e5d6a6d9f496 | 9ca49b69113bb2a1eaa0f8cec2b5f8598efc19ea | "2021-03-25T21:24:01Z" | python | "2021-03-30T00:32:11Z" |
|
closed | apache/airflow | https://github.com/apache/airflow | 15,018 | ["airflow/ui/package.json", "airflow/ui/src/api/index.ts", "airflow/ui/src/components/Table.tsx", "airflow/ui/src/interfaces/react-table-config.d.ts", "airflow/ui/src/views/Pipelines/PipelinesTable.tsx", "airflow/ui/src/views/Pipelines/Row.tsx", "airflow/ui/test/Pipelines.test.tsx", "airflow/ui/yarn.lock"] | Build out custom Table components | https://github.com/apache/airflow/issues/15018 | https://github.com/apache/airflow/pull/15805 | 65519ab83ddf4bd6fc30c435b5bfccefcb14d596 | 2c6b003fbe619d5d736cf97f20a94a3451e1a14a | "2021-03-25T21:22:50Z" | python | "2021-05-27T20:23:02Z" |
|
closed | apache/airflow | https://github.com/apache/airflow | 15,005 | ["airflow/providers/google/cloud/transfers/gcs_to_local.py", "tests/providers/google/cloud/transfers/test_gcs_to_local.py"] | `GCSToLocalFilesystemOperator` unnecessarily downloads objects when it checks object size | `GCSToLocalFilesystemOperator` in `airflow/providers/google/cloud/transfers/gcs_to_local.py` checks the file size if `store_to_xcom_key` is `True`.
https://github.com/apache/airflow/blob/b40dffa08547b610162f8cacfa75847f3c4ca364/airflow/providers/google/cloud/transfers/gcs_to_local.py#L137-L142
The way it checks the size is to download the object as `bytes` and then check its length, which unnecessarily downloads the object. `google.cloud.storage.blob.Blob` itself already has a `size` property ([documentation reference](https://googleapis.dev/python/storage/1.30.0/blobs.html#google.cloud.storage.blob.Blob.size)), and it should be used instead.
In extreme cases, if the object is large, this puts an unnecessary burden on the instance's resources.
A new method, `object_size()`, can be added to `GCSHook`, then this can be addressed in `GCSToLocalFilesystemOperator`. | https://github.com/apache/airflow/issues/15005 | https://github.com/apache/airflow/pull/16171 | 19eb7ef95741e10d712845bc737b86615cbb8e7a | e1137523d4e9cb5d5cfe8584963620677a4ad789 | "2021-03-25T13:07:02Z" | python | "2021-05-30T22:48:38Z" |
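A minimal sketch of the suggested approach using the `google-cloud-storage` client directly; the `object_size` helper below is the reporter's proposed name and is hypothetical here, while `get_blob()` and the `Blob.size` property are existing client features.
```python
from google.cloud import storage

def object_size(bucket_name: str, object_name: str) -> int:
    """Return the size in bytes of a GCS object without downloading its payload."""
    client = storage.Client()
    # get_blob() issues a metadata request only; no object data is transferred.
    blob = client.bucket(bucket_name).get_blob(object_name)
    if blob is None:
        raise FileNotFoundError(f"gs://{bucket_name}/{object_name} does not exist")
    return blob.size
```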
closed | apache/airflow | https://github.com/apache/airflow | 15,001 | ["airflow/providers/amazon/aws/sensors/s3_prefix.py", "tests/providers/amazon/aws/sensors/test_s3_prefix.py"] | S3MultipleKeysSensor operator | **Description**
Currently we have an operator, S3KeySensor which polls for the given prefix in the bucket. At times, there is need to poll for multiple prefixes in given bucket in one go. To have that - I propose to have a S3MultipleKeysSensor, which would poll for multiple prefixes in the given bucket in one go.
**Use case / motivation**
To make it easier for users to poll multiple S3 prefixes in a given bucket.
**Are you willing to submit a PR?**
Yes, I have an implementation ready for that.
**Related Issues**
NA
| https://github.com/apache/airflow/issues/15001 | https://github.com/apache/airflow/pull/18807 | ec31b2049e7c3b9f9694913031553f2d7eb66265 | 176165de3b297c0ed7d2b60cf6b4c37fc7a2337f | "2021-03-25T07:24:52Z" | python | "2021-10-11T21:15:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 15,000 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | When an ECS Task fails to start, ECS Operator raises a CloudWatch exception |
**Apache Airflow version**: 1.10.13
**Environment**:
- **Cloud provider or hardware configuration**:AWS
- **OS** (e.g. from /etc/os-release): Amazon Linux 2
- **Kernel** (e.g. `uname -a`): 4.14.209-160.339.amzn2.x86_64
- **Install tools**: pip
- **Others**:
**What happened**:
When an ECS Task exits with `stopCode: TaskFailedToStart`, the ECS Operator will exit with a ResourceNotFoundException for the GetLogEvents operation. This is because the task has failed to start, so no log is created.
```
[2021-03-14 02:32:49,792] {ecs_operator.py:147} INFO - ECS Task started: {'tasks': [{'attachments': [], 'availabilityZone': 'ap-northeast-1c', 'clusterArn': 'arn:aws:ecs:ap-northeast-1:xxxx:cluster/ecs-cluster', 'containerInstanceArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container-instance/ecs-cluster/xxxx', 'containers': [{'containerArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container/xxxx', 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'name': 'container_image', 'image': 'xxxx.dkr.ecr.ap-northeast-1.amazonaws.com/ecr/container_image:latest', 'lastStatus': 'PENDING', 'networkInterfaces': [], 'cpu': '128', 'memoryReservation': '128'}], 'cpu': '128', 'createdAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'desiredStatus': 'RUNNING', 'group': 'family:task', 'lastStatus': 'PENDING', 'launchType': 'EC2', 'memory': '128', 'overrides': {'containerOverrides': [{'name': 'container_image', 'command': ['/bin/bash', '-c', 'xxxx']}], 'inferenceAcceleratorOverrides': []}, 'startedBy': 'airflow', 'tags': [], 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'taskDefinitionArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task-definition/task:1', 'version': 1}], 'failures': [], 'ResponseMetadata': {'RequestId': 'xxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'xxxx', 'content-type': 'application/x-amz-json-1.1', 'content-length': '1471', 'date': 'Sun, 14 Mar 2021 02:32:48 GMT'}, 'RetryAttempts': 0}}
[2021-03-14 02:34:15,022] {ecs_operator.py:168} INFO - ECS Task stopped, check status: {'tasks': [{'attachments': [], 'availabilityZone': 'ap-northeast-1c', 'clusterArn': 'arn:aws:ecs:ap-northeast-1:xxxx:cluster/ecs-cluster', 'connectivity': 'CONNECTED', 'connectivityAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'containerInstanceArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container-instance/ecs-cluster/xxxx', 'containers': [{'containerArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container/xxxx', 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'name': 'container_image', 'image': 'xxxx.dkr.ecr.ap-northeast-1.amazonaws.com/ecr/container_image:latest', 'lastStatus': 'STOPPED', 'reason': 'CannotPullContainerError: failed to register layer: Error processing tar file(exit status 1): write /var/lib/xxxx: no space left on device', 'networkInterfaces': [], 'healthStatus': 'UNKNOWN', 'cpu': '128', 'memoryReservation': '128'}], 'cpu': '128', 'createdAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'desiredStatus': 'STOPPED', 'executionStoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 810000, tzinfo=tzlocal()), 'group': 'family:task', 'healthStatus': 'UNKNOWN', 'lastStatus': 'STOPPED', 'launchType': 'EC2', 'memory': '128', 'overrides': {'containerOverrides': [{'name': 'container_image', 'command': ['/bin/bash', '-c', 'xxxx']}], 'inferenceAcceleratorOverrides': []}, 'pullStartedAt': datetime.datetime(2021, 3, 14, 2, 32, 51, 68000, tzinfo=tzlocal()), 'pullStoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 13, 584000, tzinfo=tzlocal()), 'startedBy': 'airflow', 'stopCode': 'TaskFailedToStart', 'stoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 821000, tzinfo=tzlocal()), 'stoppedReason': 'Task failed to start', 'stoppingAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 821000, tzinfo=tzlocal()), 'tags': [], 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'taskDefinitionArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task-definition/task:1', 'version': 2}], 'failures': [], 'ResponseMetadata': {'RequestId': 'xxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'xxxx', 'content-type': 'application/x-amz-json-1.1', 'content-length': '1988', 'date': 'Sun, 14 Mar 2021 02:34:14 GMT'}, 'RetryAttempts': 0}}
[2021-03-14 02:34:15,024] {ecs_operator.py:172} INFO - ECS Task logs output:
[2021-03-14 02:34:15,111] {credentials.py:1094} INFO - Found credentials in environment variables.
[2021-03-14 02:34:15,416] {taskinstance.py:1150} ERROR - An error occurred (ResourceNotFoundException) when calling the GetLogEvents operation: The specified log stream does not exist.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 984, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/ecs_operator.py", line 152, in execute
self._check_success_task()
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/ecs_operator.py", line 175, in _check_success_task
for event in self.get_logs_hook().get_log_events(self.awslogs_group, stream_name):
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/aws_logs_hook.py", line 85, in get_log_events
**token_arg)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetLogEvents operation: The specified log stream does not exist.
```
**What you expected to happen**:
ResourceNotFoundException is misleading because it feels like a problem with CloudWatch Logs. I expect an AirflowException indicating that the task has failed.
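A hedged sketch of the kind of guard being asked for: treat a missing log stream as the expected outcome of a task that never started, and surface the ECS stop reason instead of the CloudWatch error. This is illustrative boto3 usage, not the operator's actual code; the region and names are placeholders.
```python
import boto3
from airflow.exceptions import AirflowException

logs_client = boto3.client("logs", region_name="ap-northeast-1")  # placeholder region

def fetch_or_explain(log_group: str, stream_name: str, stopped_reason: str):
    try:
        return logs_client.get_log_events(logGroupName=log_group, logStreamName=stream_name)
    except logs_client.exceptions.ResourceNotFoundException:
        # The task failed to start, so the log stream was never created;
        # report the ECS stop reason rather than a CloudWatch lookup error.
        raise AirflowException(f"ECS task failed to start, no logs produced: {stopped_reason}")
```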
**How to reproduce it**:
This can be reproduced by running an ECS Task that fails to start, for example by specifying a non-existent entry_point.
**Anything else we need to know**:
I suspect Issue #11663 has the same problem, i.e. it's not a CloudWatch issue, but a failure to start an ECS Task.
| https://github.com/apache/airflow/issues/15000 | https://github.com/apache/airflow/pull/18733 | a192b4afbd497fdff508b2a06ec68cd5ca97c998 | 767a4f5207f8fc6c3d8072fa780d84460d41fc7a | "2021-03-25T05:55:31Z" | python | "2021-10-05T21:34:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,991 | ["scripts/ci/libraries/_md5sum.sh", "scripts/ci/libraries/_verify_image.sh", "scripts/docker/compile_www_assets.sh"] | Static file not being loaded in web server in docker-compose | Apache Airflow version: apache/airflow:master-python3.8
Environment:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release): Mac OS 10.16.5
Kernel (e.g. uname -a): Darwin Kernel Version 19.6.0
Browser:
Google Chrome Version 89.0.4389.90
What happened:
I am having an issue with running `apache/airflow:master-python3.8` with docker-compose.
The log of the webserver says `Please make sure to build the frontend in static/ directory and restart the server` when it is running. Because static files are not loaded, login and DAGs do not work.
What you expected to happen:
static files being loaded correctly.
How to reproduce it:
My docker-compose is based on the official example.
https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml
Anything else we need to know:
It used to work until 2 days ago when the new docker image was released. Login prompt looks like this.
![Screenshot 2021-03-24 at 21 54 16](https://user-images.githubusercontent.com/28846850/112381987-90d4cb00-8ceb-11eb-9324-461c6eae7b01.png)
| https://github.com/apache/airflow/issues/14991 | https://github.com/apache/airflow/pull/14995 | 775ee51d0e58aeab5d29683dd2ff21b8c9057095 | 5dc634bf74bbec68bbe1c7b6944d0a9efd85181d | "2021-03-24T20:54:58Z" | python | "2021-03-25T13:04:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,989 | [".github/workflows/ci.yml", "docs/exts/docs_build/fetch_inventories.py", "scripts/ci/docs/ci_docs.sh", "scripts/ci/docs/ci_docs_prepare.sh"] | Make Docs builds fallback in case external docs sources are missing | Every now and then our docs builds start to fail because of external dependency (latest example here #14985). And while we are doing caching now of that information, it does not help when the initial retrieval fails. This information does not change often but with the number of dependencies we have it will continue to fail regularly simply because many of those depenencies are not very reliable - they are just a web page hosted somewhere. They are nowhere near the stabilty of even PyPI or Apt sources and we have no mirroring in case of problem.
Maybe we could
a) see if we can use some kind of mirroring scheme (do those sites have mirrrors ? )
b) if not, simply write a simple script that will dump the cached content for those to S3, refresh it in the CI scheduled (nightly) master builds ad have a fallback mechanism to download that from there in case of any problems in CI?
| https://github.com/apache/airflow/issues/14989 | https://github.com/apache/airflow/pull/15109 | 2ac4638b7e93d5144dd46f2c09fb982c374db79e | 8cc8d11fb87d0ad5b3b80907874f695a77533bfa | "2021-03-24T18:15:48Z" | python | "2021-04-02T22:11:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,985 | ["docs/exts/docs_build/fetch_inventories.py", "docs/exts/docs_build/third_party_inventories.py"] | Docs build are failing with `requests.exceptions.TooManyRedirects` error | Our docs build is failing with the following error on PRs and locally, example https://github.com/apache/airflow/pull/14983/checks?check_run_id=2185523525#step:4:282:
```
Fetched inventory: https://googleapis.dev/python/videointelligence/latest/objects.inv
Traceback (most recent call last):
File "/opt/airflow/docs/build_docs.py", line 278, in <module>
main()
File "/opt/airflow/docs/build_docs.py", line 218, in main
priority_packages = fetch_inventories()
File "/opt/airflow/docs/exts/docs_build/fetch_inventories.py", line 126, in fetch_inventories
failed, success = list(failed), list(failed)
File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 586, in result_iterator
yield fs.pop().result()
File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/airflow/docs/exts/docs_build/fetch_inventories.py", line 53, in _fetch_file
response = session.get(url, allow_redirects=True, stream=True)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 677, in send
history = [resp for resp in gen]
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 677, in <listcomp>
history = [resp for resp in gen]
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 166, in resolve_redirects
raise TooManyRedirects('Exceeded {} redirects.'.format(self.max_redirects), response=resp)
requests.exceptions.TooManyRedirects: Exceeded 30 redirects.
###########################################################################################
```
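The traceback above shows a single misbehaving inventory URL aborting the whole build; a defensive sketch (illustrative only, not necessarily the fix that was merged) is to catch request errors per URL and report them instead of letting the exception propagate:
```python
import requests


def fetch_inventory(session: requests.Session, url: str, path: str) -> bool:
    """Download one objects.inv; return False instead of raising when a site misbehaves."""
    try:
        response = session.get(url, allow_redirects=True, stream=True, timeout=30)
        response.raise_for_status()
    except requests.RequestException as exc:  # TooManyRedirects is a RequestException
        print(f"Failed to fetch inventory {url}: {exc}")
        return False
    with open(path, "wb") as file:
        for chunk in response.iter_content(chunk_size=8192):
            file.write(chunk)
    return True
```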
To reproduce locally run:
```
./breeze build-docs -- --package-filter apache-airflow
``` | https://github.com/apache/airflow/issues/14985 | https://github.com/apache/airflow/pull/14986 | a2b285825323da5a72dc0201ad6dc7d258771d0d | f6a1774555341f6a82c7cae1ce65903676bde61a | "2021-03-24T16:31:53Z" | python | "2021-03-24T16:57:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,959 | ["airflow/providers/docker/operators/docker_swarm.py", "tests/providers/docker/operators/test_docker_swarm.py"] | Support all terminus task states for Docker Swarm Operator | **Apache Airflow version**: latest
**What happened**:
There are more terminal task states than the ones we currently check in the Docker Swarm Operator. This makes the operator run indefinitely when the service goes into one of these states.
**What you expected to happen**:
The operator should terminate.
**How to reproduce it**:
Run an Airflow task via the Docker Swarm operator and return a failed status code from it.
**Anything else we need to know**:
So as a fix I have added the complete list of terminal task states from the Docker reference (https://docs.docker.com/engine/swarm/how-swarm-mode-works/swarm-task-states/).
We would like to send this patch back upstream to Apache Airflow.
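A minimal sketch of the idea, with the terminal states taken from the Docker documentation linked above (the helper and its inputs are illustrative, not the operator's actual code):
```python
# Terminal task states per the Docker swarm task-states documentation.
TERMINAL_STATES = frozenset(
    {"complete", "failed", "shutdown", "rejected", "orphaned", "remove"}
)


def service_finished(task_states: list) -> bool:
    """Return True once every task of the swarm service has reached a terminal state."""
    return bool(task_states) and all(state in TERMINAL_STATES for state in task_states)
```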
| https://github.com/apache/airflow/issues/14959 | https://github.com/apache/airflow/pull/14960 | 6b78394617c7e699dda1acf42e36161d2fc29925 | ab477176998090e8fb94d6f0e6bf056bad2da441 | "2021-03-23T15:44:21Z" | python | "2021-04-07T12:39:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,957 | [".github/workflows/ci.yml", ".pre-commit-config.yaml", "BREEZE.rst", "STATIC_CODE_CHECKS.rst", "breeze-complete", "scripts/ci/selective_ci_checks.sh", "scripts/ci/static_checks/eslint.sh"] | Run selective CI pipeline for UI-only PRs | For PRs that only touch files in `airflow/ui/` we'd like to run a selective set of CI actions. We only need linting and UI tests.
Additionally, this update should pull the test runs out of the pre-commit. | https://github.com/apache/airflow/issues/14957 | https://github.com/apache/airflow/pull/15009 | a2d99293c9f5bdf1777fed91f1c48230111f53ac | 7417f81d36ad02c2a9d7feb9b9f881610f50ceba | "2021-03-23T14:32:41Z" | python | "2021-03-31T22:10:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,933 | ["airflow/task/task_runner/__init__.py"] | A Small Exception Message Typo | I guess, line 59 in file "[airflow/airflow/task/task_runner/__init__.py](https://github.com/apache/airflow/blob/5a864f0e456348e0a871cf4678e1ffeec541c52d/airflow/task/task_runner/__init__.py#L59)" should change like:
**OLD**: f'The task runner could not be loaded. Please check "executor" key in "core" section.'
**NEW**: f'The task runner could not be loaded. Please check "task_runner" key in "core" section.'
So, "executor" should be replaced with "task_runner". | https://github.com/apache/airflow/issues/14933 | https://github.com/apache/airflow/pull/15067 | 6415489390c5ec3679f8d6684c88c1dd74414951 | 794922649982b2a6c095f7fa6be4e5d6a6d9f496 | "2021-03-22T12:22:52Z" | python | "2021-03-29T23:20:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,924 | ["airflow/utils/cli.py", "airflow/utils/log/file_processor_handler.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/non_caching_file_handler.py"] | Scheduler Memory Leak in Airflow 2.0.1 | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.17.4
**Environment**: Dev
- **OS** (e.g. from /etc/os-release): RHEL7
**What happened**:
After running fine for some time, my Airflow tasks got stuck in the scheduled state with the below error in Task Instance Details:
"All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless: - The scheduler is down or under heavy load If this task instance does not start soon please contact your Airflow administrator for assistance."
**What you expected to happen**:
I restarted the scheduler and then it started working fine. When I checked my metrics, I realized the scheduler has a memory leak, and over the past 4 days it has reached up to 6 GB of memory utilization.
In version >2.0 we don't even have the run_duration config option to restart scheduler periodically to avoid this issue until it is resolved.
**How to reproduce it**:
I saw this issue in multiple dev instances of mine all running Airflow 2.0.1 on kubernetes with KubernetesExecutor.
Below are the configs that I changed from the default config.
max_active_dag_runs_per_dag=32
parallelism=64
dag_concurrency=32
sql_Alchemy_pool_size=50
sql_Alchemy_max_overflow=30
**Anything else we need to know**:
The scheduler memory leak occurs consistently in all instances I have been running. The memory utilization keeps growing for the scheduler.
| https://github.com/apache/airflow/issues/14924 | https://github.com/apache/airflow/pull/18054 | 6acb9e1ac1dd7705d9bfcfd9810451dbb549af97 | 43f595fe1b8cd6f325d8535c03ee219edbf4a559 | "2021-03-21T15:35:14Z" | python | "2021-09-09T10:50:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,888 | ["airflow/providers/amazon/aws/transfers/s3_to_redshift.py", "tests/providers/amazon/aws/transfers/test_s3_to_redshift.py"] | S3ToRedshiftOperator is not transaction safe for truncate | **Apache Airflow version**: 2.0.0
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): Amazon Linux 2
**What happened**:
The TRUNCATE operation has a piece of fine print in Redshift: it commits the transaction it runs in.
See https://docs.aws.amazon.com/redshift/latest/dg/r_TRUNCATE.html
> However, be aware that TRUNCATE commits the transaction in which it is run.
and
> The TRUNCATE command commits the transaction in which it is run; therefore, you can't roll back a TRUNCATE operation, and a TRUNCATE command may commit other operations when it commits itself.
Currently with truncate=True, the operator would generate a statement like:
```sql
BEGIN;
TRUNCATE TABLE schema.table; -- this commits the transaction
--- the table is now empty for any readers until the end of the copy
COPY ....
COMMIT;
```
**What you expected to happen**:
Replacing it with a DELETE operation would solve the problem. In a normal database DELETE is not considered a fast operation, but with Redshift a 1B+ row table is deleted in less than 5 seconds on a 2-node ra3.xlplus (not counting vacuum or analyze), and a vacuum of the empty table takes less than 3 minutes.
```sql
BEGIN;
DELETE FROM schema.table;
COPY ....
COMMIT;
```
It should be mentioned in the documentation that a DELETE is done for the reason above, and that vacuum and analyze operations are left for the user to manage.
**How often does this problem occur? Once? Every time etc?**
Always.
| https://github.com/apache/airflow/issues/14888 | https://github.com/apache/airflow/pull/17117 | 32582b5bf1432e7c7603b959a675cf7edd76c9e6 | f44d7bd9cfe00b1409db78c2a644516b0ab003e9 | "2021-03-19T00:33:07Z" | python | "2021-07-21T16:33:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,880 | ["airflow/providers/slack/operators/slack.py", "tests/providers/slack/operators/test_slack.py"] | SlackAPIFileOperator is broken | **Apache Airflow version**: 2.0.1
**Environment**: Docker
- **Cloud provider or hardware configuration**: Local file system
- **OS** (e.g. from /etc/os-release): Arch Linux
- **Kernel** (e.g. `uname -a`): 5.11.5-arch1-1
**What happened**:
I tried to post a file from a long Python string to a Slack channel through the SlackAPIFileOperator.
I defined the operator this way:
```
SlackAPIFileOperator(
task_id="{}-notifier".format(self.task_id),
channel="#alerts-metrics",
token=MY_TOKEN,
initial_comment=":warning: alert",
filename="{{ ds }}.csv",
filetype="csv",
content=df.to_csv()
)
```
Task failed with the following error:
```
DEBUG - Sending a request - url: https://www.slack.com/api/files.upload, query_params: {}, body_params: {}, files: {}, json_body: {'channels': '#alerts-metrics', 'content': '<a long pandas.DataFrame.to_csv output>', 'filename': '{{ ds }}.csv', 'filetype': 'csv', 'initial_comment': ':warning: alert'}, headers: {'Content-Type': 'application/json;charset=utf-8', 'Authorization': '(redacted)', 'User-Agent': 'Python/3.6.12 slackclient/3.3.2 Linux/5.11.5-arch1-1'}
DEBUG - Received the following response - status: 200, headers: {'date': 'Thu, 18 Mar 2021 13:28:44 GMT', 'server': 'Apache', 'x-xss-protection': '0', 'pragma': 'no-cache', 'cache-control': 'private, no-cache, no-store, must-revalidate', 'access-control-allow-origin': '*', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload', 'x-slack-req-id': '0ff5fd17ca7e2e8397559b6347b34820', 'x-content-type-options': 'nosniff', 'referrer-policy': 'no-referrer', 'access-control-expose-headers': 'x-slack-req-id, retry-after', 'x-slack-backend': 'r', 'x-oauth-scopes': 'incoming-webhook,files:write,chat:write', 'x-accepted-oauth-scopes': 'files:write', 'expires': 'Mon, 26 Jul 1997 05:00:00 GMT', 'vary': 'Accept-Encoding', 'access-control-allow-headers': 'slack-route, x-slack-version-ts, x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, x-b3-flags', 'content-type': 'application/json; charset=utf-8', 'x-envoy-upstream-service-time': '37', 'x-backend': 'files_normal files_bedrock_normal_with_overflow files_canary_with_overflow files_bedrock_canary_with_overflow files_control_with_overflow files_bedrock_control_with_overflow', 'x-server': 'slack-www-hhvm-files-iad-xg4a', 'x-via': 'envoy-www-iad-xvw3, haproxy-edge-lhr-u1ge', 'x-slack-shared-secret-outcome': 'shared-secret', 'via': 'envoy-www-iad-xvw3', 'connection': 'close', 'transfer-encoding': 'chunked'}, body: {'ok': False, 'error': 'no_file_data'}
[2021-03-18 13:28:43,601] {taskinstance.py:1455} ERROR - The request to the Slack API failed.
The server responded with: {'ok': False, 'error': 'no_file_data'}
```
**What you expected to happen**:
I expect the operator to succeed and see a new message in Slack with a snippet of a downloadable CSV file.
**How to reproduce it**:
Just declare a DAG this way:
```
from airflow import DAG
from airflow.providers.slack.operators.slack import SlackAPIFileOperator
from pendulum import datetime
with DAG(dag_id="SlackFile",
default_args=dict(start_date=datetime(2021, 1, 1), owner='airflow', catchup=False)) as dag:
SlackAPIFileOperator(
task_id="Slack",
token=YOUR_TOKEN,
content="test-content"
)
```
And try to run it.
**Anything else we need to know**:
This seems to be a known issue: https://apache-airflow.slack.com/archives/CCQ7EGB1P/p1616079965083200
I worked around it with the following re-implementation:
```
from typing import Optional, Any
from airflow import AirflowException
from airflow.providers.slack.hooks.slack import SlackHook
from airflow.providers.slack.operators.slack import SlackAPIOperator
from airflow.utils.decorators import apply_defaults
class SlackAPIFileOperator(SlackAPIOperator):
"""
Send a file to a slack channel
Examples:
.. code-block:: python
slack = SlackAPIFileOperator(
task_id="slack_file_upload",
dag=dag,
slack_conn_id="slack",
channel="#general",
initial_comment="Hello World!",
file="hello_world.csv",
filename="hello_world.csv",
filetype="csv",
content="hello,world,csv,file",
)
:param channel: channel in which to send the file on slack (templated)
:type channel: str
:param initial_comment: message to send to slack. (templated)
:type initial_comment: str
:param file: the file (templated)
:type file: str
:param filename: name of the file (templated)
:type filename: str
:param filetype: slack filetype. (templated)
- see https://api.slack.com/types/file
:type filetype: str
:param content: file content. (templated)
:type content: str
"""
template_fields = ('channel', 'initial_comment', 'file', 'filename', 'filetype', 'content')
ui_color = '#44BEDF'
@apply_defaults
def __init__(
self,
channel: str = '#general',
initial_comment: str = 'No message has been set!',
file: Optional[str] = None,
filename: str = 'default_name.csv',
filetype: str = 'csv',
content: Optional[str] = None,
**kwargs,
) -> None:
if (content is None) and (file is None):
raise AirflowException('At least one of "content" or "file" should be defined.')
self.method = 'files.upload'
self.channel = channel
self.initial_comment = initial_comment
self.file = file
self.filename = filename
self.filetype = filetype
self.content = content
super().__init__(method=self.method, **kwargs)
def execute(self, **kwargs):
slack = SlackHook(token=self.token, slack_conn_id=self.slack_conn_id)
args = dict(
channels=self.channel,
filename=self.filename,
filetype=self.filetype,
initial_comment=self.initial_comment
)
if self.content is not None:
args['content'] = self.content
elif self.file is not None:
args['file'] = self.file
slack.call(self.method, data=args)
def construct_api_call_params(self) -> Any:
pass
```
Maybe it is not the best solution as it does not leverage work from `SlackAPIOperator`.
But at least, it fulfills my use case.
| https://github.com/apache/airflow/issues/14880 | https://github.com/apache/airflow/pull/17247 | 797b515a23136d1f00c6bd938960882772c1c6bd | 07c8ee01512b0cc1c4602e356b7179cfb50a27f4 | "2021-03-18T16:07:03Z" | python | "2021-08-01T23:08:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,879 | [".github/boring-cyborg.yml"] | Add area:providers for general providers to boring-cyborg | **Description**
Add the label **area:providers** for general providers.
For example, see how the **provider:Google** label is configured:
https://github.com/apache/airflow/blob/16f43605f3370f20611ba9e08b568ff8a7cd433d/.github/boring-cyborg.yml#L21-L25
**Use case / motivation**
It helps with better issue/pull request monitoring.
**Are you willing to submit a PR?**
Yes
| https://github.com/apache/airflow/issues/14879 | https://github.com/apache/airflow/pull/14941 | 01a5d36e6bbc1d9e7afd4e984376301ea378a94a | 3bbf9aea0b54a7cb577eb03f805e0b0566b759c3 | "2021-03-18T14:19:39Z" | python | "2021-03-22T23:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,864 | ["airflow/exceptions.py", "airflow/utils/task_group.py", "tests/utils/test_task_group.py"] | Using TaskGroup without context manager (Graph view visual bug) | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**What happened**:
When I do not use the context manager for the task group and instead call the add function to add the tasks, those tasks show up on the Graph view.
![Screen Shot 2021-03-17 at 2 06 17 PM](https://user-images.githubusercontent.com/5952735/111544849-5939b200-8732-11eb-80dc-89c013aeb083.png)
However, when I click (expand) the task group item in the Graph UI, it fixes the issue: when I then collapse the task group item, the tasks are no longer displayed, as expected.
![Screen Shot 2021-03-17 at 2 06 21 PM](https://user-images.githubusercontent.com/5952735/111544848-58a11b80-8732-11eb-928b-3c76207a0107.png)
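For contrast, a minimal sketch of the context-manager form, which renders the same three tasks correctly inside the group (illustrative; meant to sit inside the same DAG definition as the reproduction below):
```python
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup

# Inside a `with DAG(...)` block, tasks created inside the context manager
# are registered with the group at construction time.
with TaskGroup("section_1", tooltip="Tasks for section_1") as tg:
    task_1 = DummyOperator(task_id="task_1")
    task_2 = BashOperator(task_id="task_2", bash_command="echo 1")
    task_3 = DummyOperator(task_id="task_3")
```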
**What you expected to happen**:
I expected the tasks inside the task group to not display on the Graph view.
![Screen Shot 2021-03-17 at 3 17 34 PM](https://user-images.githubusercontent.com/5952735/111545824-eaf5ef00-8733-11eb-99c2-75b051bfefe1.png)
**How to reproduce it**:
Render this DAG in Airflow
```python
from airflow.models import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup
from datetime import datetime
with DAG(dag_id="example_task_group", start_date=datetime(2021, 1, 1), tags=["example"], catchup=False) as dag:
start = BashOperator(task_id="start", bash_command='echo 1; sleep 10; echo 2;')
tg = TaskGroup("section_1", tooltip="Tasks for section_1")
task_1 = DummyOperator(task_id="task_1")
task_2 = BashOperator(task_id="task_2", bash_command='echo 1')
task_3 = DummyOperator(task_id="task_3")
tg.add(task_1)
tg.add(task_2)
tg.add(task_3)
``` | https://github.com/apache/airflow/issues/14864 | https://github.com/apache/airflow/pull/23071 | 9caa511387f92c51ab4fc42df06e0a9ba777e115 | 337863fa35bba8463d62e5cf0859f2bb73cf053a | "2021-03-17T22:25:05Z" | python | "2022-06-05T13:52:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,857 | ["airflow/providers/mysql/hooks/mysql.py", "tests/providers/mysql/hooks/test_mysql.py"] | MySQL hook uses wrong autocommit calls for mysql-connector-python | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**Environment**:
* **Cloud provider or hardware configuration**: WSL2/Docker running `apache/airflow:2.0.1-python3.7` image
* **OS** (e.g. from /etc/os-release): Host: Ubuntu 20.04 LTS, Docker Image: Debian GNU/Linux 10 (buster)
* **Kernel** (e.g. `uname -a`): 5.4.72-microsoft-standard-WSL2 x86_64
* **Others**: Docker version 19.03.8, build afacb8b7f0
**What happened**:
Received a `'bool' object is not callable` error when attempting to use the mysql-connector-python client for a task:
```
[2021-03-17 10:20:13,247] {{taskinstance.py:1455}} ERROR - 'bool' object is not callable
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1310, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/mysql/operators/mysql.py", line 74, in execute
hook.run(self.sql, autocommit=self.autocommit, parameters=self.parameters)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/hooks/dbapi.py", line 175, in run
self.set_autocommit(conn, autocommit)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/mysql/hooks/mysql.py", line 55, in set_autocommit
conn.autocommit(autocommit)
```
**What you expected to happen**:
The task to run without complaints.
**How to reproduce it**:
Create and use a MySQL connection with `{"client": "mysql-connector-python"}` specified in the Extra field.
**Anything else we need to know**:
The MySQL hook seems to be using `conn.get_autocommit()` and `conn.autocommit()` to get/set the autocommit flag for both mysqlclient and mysql-connector-python. These methods don't actually exist in mysql-connector-python, as it exposes autocommit as a property rather than a method.
I was able to work around it by adding an `if not callable(conn.autocommit)` condition to detect when mysql-connector-python is being used, but I'm sure there's probably a more elegant way of detecting which client is being used.
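A sketch of what such a branch could look like inside the hook's autocommit helpers (illustrative, not necessarily the fix that was merged): mysql-connector-python exposes `autocommit` as a property, while mysqlclient exposes `autocommit()` / `get_autocommit()` methods.
```python
def set_autocommit(self, conn, autocommit: bool) -> None:
    """Set autocommit for either supported client (sketch)."""
    if isinstance(type(conn).autocommit, property):
        conn.autocommit = autocommit      # mysql-connector-python: property
    else:
        conn.autocommit(autocommit)       # mysqlclient: method


def get_autocommit(self, conn) -> bool:
    """Read autocommit for either supported client (sketch)."""
    if isinstance(type(conn).autocommit, property):
        return conn.autocommit            # mysql-connector-python: property
    return conn.get_autocommit()          # mysqlclient: method
```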
mysql-connector-python documentation:
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlconnection-autocommit.html
Autocommit calls:
https://github.com/apache/airflow/blob/2a2adb3f94cc165014d746102e12f9620f271391/airflow/providers/mysql/hooks/mysql.py#L55
https://github.com/apache/airflow/blob/2a2adb3f94cc165014d746102e12f9620f271391/airflow/providers/mysql/hooks/mysql.py#L66 | https://github.com/apache/airflow/issues/14857 | https://github.com/apache/airflow/pull/14869 | b8cf46a12fba5701d9ffc0b31aac8375fbca37f9 | 9b428bbbdf4c56f302a1ce84f7c2caf34b81ffa0 | "2021-03-17T17:39:28Z" | python | "2021-03-29T03:33:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,839 | ["airflow/stats.py", "tests/core/test_stats.py"] | Enabling Datadog to tag metrics results in AttributeError | **Apache Airflow version**: 2.0.1
**Python version**: 3.8
**Cloud provider or hardware configuration**: AWS
**What happened**:
In order to add tags to [Airflow metrics,](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html), it's required to set `AIRFLOW__METRICS__STATSD_DATADOG_ENABLED` to `True` and add tags in the `AIRFLOW__METRICS__STATSD_DATADOG_TAGS` variable. We were routing our statsd metrics to Datadog anyway, so this should theoretically have not changed anything other than the addition of any specified tags.
Setting the environment variable `AIRFLOW__METRICS__STATSD_DATADOG_ENABLED` to `True` (along with the other required statsd connection variables) results in the following error, which causes the process to terminate. This is from the scheduler, but this would apply anywhere that `Stats.timer()` is being called.
```
AttributeError: 'DogStatsd' object has no attribute 'timer'
return Timer(self.dogstatsd.timer(stat, *args, tags=tags, **kwargs))
File "/usr/local/lib/python3.8/site-packages/airflow/stats.py", line 345, in timer
return fn(_self, stat, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/stats.py", line 233, in wrapper
timer = Stats.timer('scheduler.critical_section_duration')
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1538, in _do_scheduling
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1382, in _run_scheduler_loop
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
Traceback (most recent call last):
```
**What you expected to happen**:
The same default Airflow metrics get sent to Datadog, tagged with the tags specified in `AIRFLOW__METRICS__STATSD_DATADOG_TAGS`.
**What do you think went wrong?**:
There is a bug in the implementation of the `Timer` method of `SafeDogStatsdLogger`. https://github.com/apache/airflow/blob/master/airflow/stats.py#L341-L347
`DogStatsd` has no method called `timer`. Instead it should be `timed`: https://datadogpy.readthedocs.io/en/latest/#datadog.dogstatsd.base.DogStatsd.timed
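A sketch of the one-line change this implies, reusing the `Timer` wrapper and `self.dogstatsd` client visible in the traceback above (illustrative; the merged fix may differ):
```python
def timer(self, stat, *args, tags=None, **kwargs):
    """Start a datadogpy timer for `stat` (sketch of SafeDogStatsdLogger.timer)."""
    tags = tags or []
    # `DogStatsd.timed` is the documented decorator/context manager; `timer` does not exist.
    return Timer(self.dogstatsd.timed(stat, *args, tags=tags, **kwargs))
```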
**How to reproduce it**:
Set the environment variables (or their respective config values) `AIRFLOW__METRICS__STATSD_ON`, `AIRFLOW__METRICS__STATSD_HOST`, `AIRFLOW__METRICS__STATSD_PORT`, and then set `AIRFLOW__METRICS__STATSD_DATADOG_ENABLED` to `True` and start up Airflow.
**Anything else we need to know**:
How often does this problem occur? Every time
| https://github.com/apache/airflow/issues/14839 | https://github.com/apache/airflow/pull/15132 | 3a80b7076da8fbee759d9d996bed6e9832718e55 | b7cd2df056ac3ab113d77c5f6b65f02a77337907 | "2021-03-16T18:48:23Z" | python | "2021-04-01T13:46:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,830 | ["airflow/api_connexion/endpoints/role_and_permission_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/security/permissions.py", "tests/api_connexion/endpoints/test_role_and_permission_endpoint.py"] | Add Create/Update/Delete API endpoints for Roles | To be able to fully manage the permissions in the UI we will need to be able to modify the roles and the permissions they have.
It probably makes sense to have one PR that adds CUD (Read is already done) endpoints for Roles.
Permissions are not creatable via anything but code, so we only need these endpoints for Roles, but not Permissions. | https://github.com/apache/airflow/issues/14830 | https://github.com/apache/airflow/pull/14840 | 266384a63f4693b667f308d49fcbed9a10a41fce | 6706b67fecc00a22c1e1d6658616ed9dd96bbc7b | "2021-03-16T10:58:54Z" | python | "2021-04-05T09:22:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,811 | ["setup.cfg"] | Latest SQLAlchemy (1.4) Incompatible with latest sqlalchemy_utils | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Mac OS Big Sur
- **Kernel** (e.g. `uname -a`):
- **Install tools**: pip 20.1.1
- **Others**:
**What happened**:
Our CI environment broke due to the release of SQLAlchemy 1.4, which is incompatible with the latest version of sqlalchemy-utils. ([Related issue](https://github.com/kvesteri/sqlalchemy-utils/issues/505))
Partial stacktrace:
```
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/airflow/www/utils.py", line 27, in <module>
from flask_appbuilder.models.sqla.interface import SQLAInterface
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 16, in <module>
from sqlalchemy_utils.types.uuid import UUIDType
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/__init__.py", line 1, in <module>
from .aggregates import aggregated # noqa
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/aggregates.py", line 372, in <module>
from .functions.orm import get_column_key
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/__init__.py", line 1, in <module>
from .database import ( # noqa
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/database.py", line 11, in <module>
from .orm import quote
File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/orm.py", line 14, in <module>
from sqlalchemy.orm.query import _ColumnEntity
ImportError: cannot import name '_ColumnEntity' from 'sqlalchemy.orm.query' (/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py)
```
I'm not sure what the typical procedure is in the case of breaking changes to dependencies, but seeing as there's an upcoming release I thought it might be worth pinning sqlalchemy to 1.3.x? (Or pin the version of sqlalchemy-utils to a compatible version if one is released before Airflow 2.0.2)
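As an illustrative stop-gap (not necessarily the project's chosen fix), pinning would mean constraining the dependency to something like `sqlalchemy<1.4` in `setup.cfg` or in a pip constraints file until a compatible sqlalchemy-utils release is available.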
**What you expected to happen**:
`airflow db init` to run successfully.
**How to reproduce it**:
1) Create a new virtualenv
2) `pip install apache-airflow`
3) `airflow db init`
| https://github.com/apache/airflow/issues/14811 | https://github.com/apache/airflow/pull/14812 | 251eb7d170db3f677e0c2759a10ac1e31ac786eb | c29f6fb76b9d87c50713ae94fda805b9f789a01d | "2021-03-15T19:39:29Z" | python | "2021-03-15T20:28:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,807 | ["airflow/ui/package.json", "airflow/ui/src/components/AppContainer/AppHeader.tsx", "airflow/ui/src/components/AppContainer/TimezoneDropdown.tsx", "airflow/ui/src/components/MultiSelect.tsx", "airflow/ui/src/providers/TimezoneProvider.tsx", "airflow/ui/src/views/Pipelines/PipelinesTable.tsx", "airflow/ui/src/views/Pipelines/index.tsx", "airflow/ui/test/TimezoneDropdown.test.tsx", "airflow/ui/test/utils.tsx", "airflow/ui/yarn.lock"] | Design/build timezone switcher modal | - Once we have the current user's preference set and available in Context, add a modal that allows the preferred display timezone to be changed.
- Modal will be triggered by clicking the time/TZ in the global navigation. | https://github.com/apache/airflow/issues/14807 | https://github.com/apache/airflow/pull/15674 | 46d62782e85ff54dd9dc96e1071d794309497983 | 3614910b4fd32c90858cd9731fc0421078ca94be | "2021-03-15T15:14:24Z" | python | "2021-05-07T17:49:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,778 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/ci.yml", "scripts/ci/tools/ci_free_space_on_ci.sh", "tests/cli/commands/test_jobs_command.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py", "tests/test_utils/asserts.py"] | [QUARANTINED] The test_scheduler_verify_pool_full occasionally fails | Probably the same root cause as in #14773 and #14772, but there is another test that fails occasionally:
https://github.com/apache/airflow/runs/2106723579?check_suite_focus=true#step:6:10811
```
_______________ TestSchedulerJob.test_scheduler_verify_pool_full _______________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_scheduler_verify_pool_full>
def test_scheduler_verify_pool_full(self):
"""
Test task instances not queued when pool is full
"""
dag = DAG(dag_id='test_scheduler_verify_pool_full', start_date=DEFAULT_DATE)
BashOperator(
task_id='dummy',
dag=dag,
owner='airflow',
pool='test_scheduler_verify_pool_full',
bash_command='echo hi',
)
dagbag = DagBag(
dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"),
include_examples=False,
read_dags_from_db=True,
)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
session = settings.Session()
pool = Pool(pool='test_scheduler_verify_pool_full', slots=1)
session.add(pool)
session.flush()
dag = SerializedDAG.from_dict(SerializedDAG.to_dict(dag))
scheduler = SchedulerJob(executor=self.null_exec)
scheduler.processor_agent = mock.MagicMock()
# Create 2 dagruns, which will create 2 task instances.
dr = dag.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=DEFAULT_DATE,
state=State.RUNNING,
)
> scheduler._schedule_dag_run(dr, {}, session)
tests/jobs/test_scheduler_job.py:2586:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/jobs/scheduler_job.py:1688: in _schedule_dag_run
dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
airflow/utils/session.py:62: in wrapper
return func(*args, **kwargs)
airflow/models/dagbag.py:178: in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <airflow.models.dagbag.DagBag object at 0x7f45f1959a90>
dag_id = 'test_scheduler_verify_pool_full'
session = <sqlalchemy.orm.session.Session object at 0x7f45f19eef70>
def _add_dag_from_db(self, dag_id: str, session: Session):
"""Add DAG to DagBag from DB"""
from airflow.models.serialized_dag import SerializedDagModel
row = SerializedDagModel.get(dag_id, session)
if not row:
> raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
E airflow.exceptions.SerializedDagNotFound: DAG 'test_scheduler_verify_pool_full' not found in serialized_dag table
airflow/models/dagbag.py:234: SerializedDagNotFound
```
| https://github.com/apache/airflow/issues/14778 | https://github.com/apache/airflow/pull/14792 | 3f61df11e7e81abc0ac4495325ccb55cc1c88af4 | 45cf89ce51b203bdf4a2545c67449b67ac5e94f1 | "2021-03-14T14:39:35Z" | python | "2021-03-18T13:01:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,773 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/ci.yml", "scripts/ci/tools/ci_free_space_on_ci.sh", "tests/cli/commands/test_jobs_command.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py", "tests/test_utils/asserts.py"] | [QUARANTINE] The test_verify_integrity_if_dag_changed occasionally fails | ERROR: type should be string, got "https://github.com/apache/airflow/pull/14531/checks?check_run_id=2106170690#step:6:10298\r\n\r\nLooks like it is connected with #14772 \r\n\r\n```\r\n____________ TestSchedulerJob.test_verify_integrity_if_dag_changed _____________\r\n \r\n self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_verify_integrity_if_dag_changed>\r\n \r\n def test_verify_integrity_if_dag_changed(self):\r\n # CleanUp\r\n with create_session() as session:\r\n session.query(SerializedDagModel).filter(\r\n SerializedDagModel.dag_id == 'test_verify_integrity_if_dag_changed'\r\n ).delete(synchronize_session=False)\r\n \r\n dag = DAG(dag_id='test_verify_integrity_if_dag_changed', start_date=DEFAULT_DATE)\r\n BashOperator(task_id='dummy', dag=dag, owner='airflow', bash_command='echo hi')\r\n \r\n scheduler = SchedulerJob(subdir=os.devnull)\r\n scheduler.dagbag.bag_dag(dag, root_dag=dag)\r\n scheduler.dagbag.sync_to_db()\r\n \r\n session = settings.Session()\r\n orm_dag = session.query(DagModel).get(dag.dag_id)\r\n assert orm_dag is not None\r\n \r\n scheduler = SchedulerJob(subdir=os.devnull)\r\n scheduler.processor_agent = mock.MagicMock()\r\n dag = scheduler.dagbag.get_dag('test_verify_integrity_if_dag_changed', session=session)\r\n scheduler._create_dag_runs([orm_dag], session)\r\n \r\n drs = DagRun.find(dag_id=dag.dag_id, session=session)\r\n assert len(drs) == 1\r\n dr = drs[0]\r\n \r\n dag_version_1 = SerializedDagModel.get_latest_version_hash(dr.dag_id, session=session)\r\n assert dr.dag_hash == dag_version_1\r\n assert scheduler.dagbag.dags == {'test_verify_integrity_if_dag_changed': dag}\r\n assert len(scheduler.dagbag.dags.get(\"test_verify_integrity_if_dag_changed\").tasks) == 1\r\n \r\n # Now let's say the DAG got updated (new task got added)\r\n BashOperator(task_id='bash_task_1', dag=dag, bash_command='echo hi')\r\n > SerializedDagModel.write_dag(dag=dag)\r\n \r\n tests/jobs/test_scheduler_job.py:2827: \r\n _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n airflow/utils/session.py:65: in wrapper\r\n return func(*args, session=session, **kwargs)\r\n /usr/local/lib/python3.6/contextlib.py:88: in __exit__\r\n next(self.gen)\r\n airflow/utils/session.py:32: in create_session\r\n session.commit()\r\n /usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py:1046: in commit\r\n self.transaction.commit()\r\n /usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py:504: in commit\r\n self._prepare_impl()\r\n /usr/local/lib/python3.6/site-packages/sqlalchemy/orm/session.py:483: in _prepare_impl\r\n value_params,\r\n True,\r\n )\r\n \r\n if check_rowcount:\r\n if rows != len(records):\r\n raise orm_exc.StaleDataError(\r\n \"UPDATE statement on table '%s' expected to \"\r\n \"update %d row(s); %d were matched.\"\r\n > % (table.description, len(records), rows)\r\n )\r\n E sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'serialized_dag' expected to update 1 row(s); 0 were matched.\r\n \r\n /usr/local/lib/python3.6/site-packages/sqlalchemy/orm/persistence.py:1028: 
StaleDataError\r\n```\r\n\r\n\r\n<!--\r\n\r\nWelcome to Apache Airflow! For a smooth issue process, try to answer the following questions.\r\nDon't worry if they're not all applicable; just try to include what you can :-)\r\n\r\nIf you need to include code snippets or logs, please put them in fenced code\r\nblocks. If they're super-long, please use the details tag like\r\n<details><summary>super-long log</summary> lots of stuff </details>\r\n\r\nPlease delete these comment blocks before submitting the issue.\r\n\r\n-->\r\n\r\n<!--\r\n\r\nIMPORTANT!!!\r\n\r\nPLEASE CHECK \"SIMILAR TO X EXISTING ISSUES\" OPTION IF VISIBLE\r\nNEXT TO \"SUBMIT NEW ISSUE\" BUTTON!!!\r\n\r\nPLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!\r\n\r\nPlease complete the next sections or the issue will be closed.\r\nThese questions are the first thing we need to know to understand the context.\r\n\r\n-->\r\n\r\n**Apache Airflow version**:\r\n\r\n\r\n**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):\r\n\r\n**Environment**:\r\n\r\n- **Cloud provider or hardware configuration**:\r\n- **OS** (e.g. from /etc/os-release):\r\n- **Kernel** (e.g. `uname -a`):\r\n- **Install tools**:\r\n- **Others**:\r\n\r\n**What happened**:\r\n\r\n<!-- (please include exact error messages if you can) -->\r\n\r\n**What you expected to happen**:\r\n\r\n<!-- What do you think went wrong? -->\r\n\r\n**How to reproduce it**:\r\n<!---\r\n\r\nAs minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.\r\n\r\nIf you are using kubernetes, please attempt to recreate the issue using minikube or kind.\r\n\r\n## Install minikube/kind\r\n\r\n- Minikube https://minikube.sigs.k8s.io/docs/start/\r\n- Kind https://kind.sigs.k8s.io/docs/user/quick-start/\r\n\r\nIf this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action\r\n\r\nYou can include images using the .md style of\r\n![alt text](http://url/to/img.png)\r\n\r\nTo record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file.\r\n\r\n--->\r\n\r\n\r\n**Anything else we need to know**:\r\n\r\n<!--\r\n\r\nHow often does this problem occur? Once? Every time etc?\r\n\r\nAny relevant logs to include? Put them here in side a detail tag:\r\n<details><summary>x.log</summary> lots of stuff </details>\r\n\r\n-->\r\n" | https://github.com/apache/airflow/issues/14773 | https://github.com/apache/airflow/pull/14792 | 3f61df11e7e81abc0ac4495325ccb55cc1c88af4 | 45cf89ce51b203bdf4a2545c67449b67ac5e94f1 | "2021-03-14T11:55:00Z" | python | "2021-03-18T13:01:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,772 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/ci.yml", "scripts/ci/tools/ci_free_space_on_ci.sh", "tests/cli/commands/test_jobs_command.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py", "tests/test_utils/asserts.py"] | [QUARANTINE] Occasional failures from test_scheduler_verify_pool_full_2_slots_per_task | The test occasionally fails with:
DAG 'test_scheduler_verify_pool_full_2_slots_per_task' not found in serialized_dag table
https://github.com/apache/airflow/pull/14531/checks?check_run_id=2106170551#step:6:10314
```
______ TestSchedulerJob.test_scheduler_verify_pool_full_2_slots_per_task _______
self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_scheduler_verify_pool_full_2_slots_per_task>
def test_scheduler_verify_pool_full_2_slots_per_task(self):
"""
Test task instances not queued when pool is full.
Variation with non-default pool_slots
"""
dag = DAG(dag_id='test_scheduler_verify_pool_full_2_slots_per_task', start_date=DEFAULT_DATE)
BashOperator(
task_id='dummy',
dag=dag,
owner='airflow',
pool='test_scheduler_verify_pool_full_2_slots_per_task',
pool_slots=2,
bash_command='echo hi',
)
dagbag = DagBag(
dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"),
include_examples=False,
read_dags_from_db=True,
)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
session = settings.Session()
pool = Pool(pool='test_scheduler_verify_pool_full_2_slots_per_task', slots=6)
session.add(pool)
session.commit()
dag = SerializedDAG.from_dict(SerializedDAG.to_dict(dag))
scheduler = SchedulerJob(executor=self.null_exec)
scheduler.processor_agent = mock.MagicMock()
# Create 5 dagruns, which will create 5 task instances.
date = DEFAULT_DATE
for _ in range(5):
dr = dag.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=date,
state=State.RUNNING,
)
> scheduler._schedule_dag_run(dr, {}, session)
tests/jobs/test_scheduler_job.py:2641:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/jobs/scheduler_job.py:1688: in _schedule_dag_run
dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
airflow/utils/session.py:62: in wrapper
return func(*args, **kwargs)
airflow/models/dagbag.py:178: in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <airflow.models.dagbag.DagBag object at 0x7f05c934f5b0>
dag_id = 'test_scheduler_verify_pool_full_2_slots_per_task'
session = <sqlalchemy.orm.session.Session object at 0x7f05c838cb50>
def _add_dag_from_db(self, dag_id: str, session: Session):
"""Add DAG to DagBag from DB"""
from airflow.models.serialized_dag import SerializedDagModel
row = SerializedDagModel.get(dag_id, session)
if not row:
> raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
E airflow.exceptions.SerializedDagNotFound: DAG 'test_scheduler_verify_pool_full_2_slots_per_task' not found in serialized_dag table
```
| https://github.com/apache/airflow/issues/14772 | https://github.com/apache/airflow/pull/14792 | 3f61df11e7e81abc0ac4495325ccb55cc1c88af4 | 45cf89ce51b203bdf4a2545c67449b67ac5e94f1 | "2021-03-14T11:51:36Z" | python | "2021-03-18T13:01:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,771 | [".github/workflows/build-images-workflow-run.yml", ".github/workflows/ci.yml", "scripts/ci/tools/ci_free_space_on_ci.sh", "tests/cli/commands/test_jobs_command.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py", "tests/test_utils/asserts.py"] | [QUARANTINE] Test_retry_still_in_executor sometimes fail | Occasional failures:
https://github.com/apache/airflow/pull/14531/checks?check_run_id=2106170532#step:6:10454
```
________________ TestSchedulerJob.test_retry_still_in_executor _________________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_retry_still_in_executor>
def test_retry_still_in_executor(self):
"""
Checks if the scheduler does not put a task in limbo, when a task is retried
but is still present in the executor.
"""
executor = MockExecutor(do_update=False)
dagbag = DagBag(dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"), include_examples=False)
dagbag.dags.clear()
dag = DAG(dag_id='test_retry_still_in_executor', start_date=DEFAULT_DATE, schedule_interval="@once")
dag_task1 = BashOperator(
task_id='test_retry_handling_op', bash_command='exit 1', retries=1, dag=dag, owner='airflow'
)
dag.clear()
dag.is_subdag = False
with create_session() as session:
orm_dag = DagModel(dag_id=dag.dag_id)
orm_dag.is_paused = False
session.merge(orm_dag)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
@mock.patch('airflow.jobs.scheduler_job.DagBag', return_value=dagbag)
def do_schedule(mock_dagbag):
# Use a empty file since the above mock will return the
# expected DAGs. Also specify only a single file so that it doesn't
# try to schedule the above DAG repeatedly.
scheduler = SchedulerJob(
num_runs=1, executor=executor, subdir=os.path.join(settings.DAGS_FOLDER, "no_dags.py")
)
scheduler.heartrate = 0
scheduler.run()
do_schedule() # pylint: disable=no-value-for-parameter
with create_session() as session:
ti = (
session.query(TaskInstance)
.filter(
TaskInstance.dag_id == 'test_retry_still_in_executor',
TaskInstance.task_id == 'test_retry_handling_op',
)
.first()
)
ti.task = dag_task1
def run_with_error(ti, ignore_ti_state=False):
try:
ti.run(ignore_ti_state=ignore_ti_state)
except AirflowException:
pass
assert ti.try_number == 1
# At this point, scheduler has tried to schedule the task once and
# heartbeated the executor once, which moved the state of the task from
# SCHEDULED to QUEUED and then to SCHEDULED, to fail the task execution
# we need to ignore the TaskInstance state as SCHEDULED is not a valid state to start
# executing task.
run_with_error(ti, ignore_ti_state=True)
assert ti.state == State.UP_FOR_RETRY
assert ti.try_number == 2
with create_session() as session:
ti.refresh_from_db(lock_for_update=True, session=session)
ti.state = State.SCHEDULED
session.merge(ti)
# To verify that task does get re-queued.
executor.do_update = True
do_schedule() # pylint: disable=no-value-for-parameter
ti.refresh_from_db()
> assert ti.state == State.SUCCESS
E AssertionError: assert None == 'success'
E + where None = <TaskInstance: test_retry_still_in_executor.test_retry_handling_op 2016-01-01 00:00:00+00:00 [None]>.state
E + and 'success' = State.SUCCESS
tests/jobs/test_scheduler_job.py:2934: AssertionError
```
| https://github.com/apache/airflow/issues/14771 | https://github.com/apache/airflow/pull/14792 | 3f61df11e7e81abc0ac4495325ccb55cc1c88af4 | 45cf89ce51b203bdf4a2545c67449b67ac5e94f1 | "2021-03-14T11:49:38Z" | python | "2021-03-18T13:01:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,770 | ["airflow/sensors/smart_sensor.py"] | [Smart sensor] Runtime error: dictionary changed size during iteration | **What happened**:
Smart Sensor TI crashes with a `RuntimeError`. Here are the logs:
```
RuntimeError: dictionary changed size during iteration
File "airflow/sentry.py", line 159, in wrapper
return func(task_instance, *args, session=session, **kwargs)
File "airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "airflow/models/taskinstance.py", line 1315, in _execute_task
result = task_copy.execute(context=context)
File "airflow/sensors/smart_sensor.py", line 736, in execute
self.flush_cached_sensor_poke_results()
File "airflow/sensors/smart_sensor.py", line 681, in flush_cached_sensor_poke_results
for ti_key, sensor_exception in self.cached_sensor_exceptions.items():
```
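The last frame shows the flush loop iterating over `cached_sensor_exceptions` while it can be mutated; a common mitigation for this class of error (not necessarily the fix that was merged) is to iterate over a snapshot of the dictionary:
```python
# Illustrative only - not the smart sensor's actual code.
cached_sensor_exceptions = {"ti_1": "timeout", "ti_2": "boom"}

# list(...) takes a snapshot, so entries added or removed while flushing
# cannot raise "dictionary changed size during iteration".
for ti_key, sensor_exception in list(cached_sensor_exceptions.items()):
    if sensor_exception == "timeout":
        cached_sensor_exceptions.pop(ti_key, None)
```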
**What you expected to happen**:
Smart sensor should always execute without any runtime error.
**How to reproduce it**:
I haven't been able to reproduce it consistently since it sometimes works and sometimes errors.
**Anything else we need to know**:
It's a really noisy error in Sentry. In just 4 days, 3.8k events were reported in Sentry.
| https://github.com/apache/airflow/issues/14770 | https://github.com/apache/airflow/pull/14774 | 2ab2cbf93df9eddfb527fcfd9d7b442678a57662 | 4aec25a80e3803238cf658c416c8e6d3975a30f6 | "2021-03-14T11:46:11Z" | python | "2021-06-23T22:22:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,755 | ["tests/jobs/test_backfill_job.py"] | [QUARANTINE] Backfill depends on past test is flaky | Test backfill_depends_on_past is flaky. The whole Backfill class was in Heisentests, but I believe this is the only test that is still problematic, so I am removing the class from Heisentests and moving the depends_on_past test to quarantine.
| https://github.com/apache/airflow/issues/14755 | https://github.com/apache/airflow/pull/19862 | 5ebd63a31b5bc1974fc8974f137b9fdf0a5f58aa | a804666347b50b026a8d3a1a14c0b2e27a369201 | "2021-03-13T13:00:28Z" | python | "2021-11-30T12:59:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,754 | ["breeze"] | Breeze fails to run on macOS with the old version of bash | <!--
-->
**Apache Airflow version**:
master/HEAD (commit 99c74968180ab7bc6d7152ec4233440b62a07969)
**Environment**:
- **Cloud provider or hardware configuration**: MacBook Air (13-inch, Early 2015)
- **OS** (e.g. from /etc/os-release): macOS Big Sur version 11.2.2
- **Install tools**: git
- **Others**: GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)
Copyright (C) 2007 Free Software Foundation, Inc.
**What happened**:
When I executed `./breeze` initially following this section https://github.com/apache/airflow/blob/master/BREEZE.rst#installation, I got the following error.
```
$ ./breeze
./breeze: line 28: @: unbound variable
```
**What you expected to happen**:
> The First time you run Breeze, it pulls and builds a local version of Docker images.
It should start to pull and build a local version of Docker images.
**How to reproduce it**:
```
git clone git@github.com:apache/airflow.git
cd airflow
./breeze
```
**Anything else we need to know**:
Old versions of bash (3.x, as shipped with macOS) report an `unbound variable` error for `${@}` under `set -u` when no arguments are passed; this was only fixed in bash 4.4.
https://github.com/apache/airflow/blob/master/breeze#L28 | https://github.com/apache/airflow/issues/14754 | https://github.com/apache/airflow/pull/14787 | feb6b8107e1e01aa1dae152f7b3861fe668b3008 | c613384feb52db39341a8d3a52b7f47695232369 | "2021-03-13T08:50:25Z" | python | "2021-03-15T09:58:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,752 | ["BREEZE.rst"] | Less confusable description about Cleaning the environment for new comers | **Description**
I think the "Cleaning the environment" section in this document is confusing for newcomers.
https://github.com/apache/airflow/blob/master/BREEZE.rst#cleaning-the-environment
Actual
Suddenly, `./breeze stop` appears in the document. It's not necessary for those running Breeze for the first time.
Expected
It would be better to state the precondition, for example:
Stop Breeze with `./breeze stop` (if Breeze is already running).
Use case / motivation
Because `./breeze stop` is not required for newcomers, it's better to state that condition explicitly.
Are you willing to submit a PR?
Yes, I'm ready. | https://github.com/apache/airflow/issues/14752 | https://github.com/apache/airflow/pull/14753 | b9e8ca48e61fdb8d80960981de0ee5409e3a6df9 | 3326babd02d02c87ec80bf29439614de4e636e10 | "2021-03-13T08:16:31Z" | python | "2021-03-13T09:38:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,750 | ["BREEZE.rst"] | Better description about Getopt and gstat when running Breeze | **Description**
For me, the "Getopt and gstat" section in this document doesn't give enough detail to run Breeze.
https://github.com/apache/airflow/blob/master/BREEZE.rst#getopt-and-gstat
Actual
After executing the following commands quoted from https://github.com/apache/airflow/blob/master/BREEZE.rst#getopt-and-gstat, I cannot tell whether the commands set up the PATH properly.
```
echo 'export PATH="/usr/local/opt/gnu-getopt/bin:$PATH"' >> ~/.zprofile
. ~/.zprofile
```
Expected
It's better to show commands for checking that `getopt` and `gstat` are successfully installed.
```
$ getopt --version
getopt from util-linux 2.36.2
$ gstat --version
stat (GNU coreutils) 8.32
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Michael Meskes.
```
**Use case / motivation**
Because I'm not familiar with the Unix shell, with the existing document I couldn't tell whether those commands were properly installed or not.
**Are you willing to submit a PR?**
Yes, I'm ready.
| https://github.com/apache/airflow/issues/14750 | https://github.com/apache/airflow/pull/14751 | 99c74968180ab7bc6d7152ec4233440b62a07969 | b9e8ca48e61fdb8d80960981de0ee5409e3a6df9 | "2021-03-13T07:32:36Z" | python | "2021-03-13T09:37:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,726 | [".pre-commit-config.yaml", "BREEZE.rst", "STATIC_CODE_CHECKS.rst", "airflow/ui/package.json", "airflow/ui/yarn.lock", "breeze-complete"] | Add precommit linting and testing to the new /ui | **Description**
We just initialized the new UI for AIP-38 under `/airflow/ui`. To continue development, it would be best to add a pre-commit hook to run the linting and testing commands for the new project.
**Use case / motivation**
The new UI already has linting and testing setup with `yarn lint` and `yarn test`. We just need a pre-commit hook for them.
**Are you willing to submit a PR?**
Yes
**Related Issues**
no
| https://github.com/apache/airflow/issues/14726 | https://github.com/apache/airflow/pull/14836 | 5f774fae530577e302c153cc8726c93040ebbde0 | e395fcd247b8aa14dbff2ee979c1a0a17c42adf4 | "2021-03-11T16:18:27Z" | python | "2021-03-16T23:06:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,696 | ["UPDATING.md", "airflow/cli/cli_parser.py", "airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "airflow/config_templates/default_test.cfg", "airflow/configuration.py", "airflow/models/baseoperator.py", "chart/values.yaml", "docs/apache-airflow/executor/celery.rst"] | Decouple default_queue from celery config section |
**Description**
We are using a 3rd-party executor which has the ability to use multiple queues; however, `default_queue` is defined under the `celery` section and is used regardless of the executor that you are using.
See: https://github.com/apache/airflow/blob/2.0.1/airflow/models/baseoperator.py#L366
It would be nice to decouple the `default_queue` configuration option from the `celery` section so that other executors can use this functionality with less confusion.
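To make the proposal concrete, here is a rough sketch of what an executor-agnostic lookup could look like; the `[operators] default_queue` section/key is an assumption rather than an existing Airflow option, and the `[celery]` fallback keeps backwards compatibility:
```python
from airflow.configuration import conf


def get_default_queue() -> str:
    # Hypothetical new, executor-agnostic location first; fall back to the
    # legacy [celery] default_queue so existing deployments keep working.
    if conf.has_option("operators", "default_queue"):
        return conf.get("operators", "default_queue")
    return conf.get("celery", "default_queue")
```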
**Are you willing to submit a PR?**
Yep! Will open a pull request shortly
**Related Issues**
Not that I'm aware of.
| https://github.com/apache/airflow/issues/14696 | https://github.com/apache/airflow/pull/14699 | 7757fe32e0aa627cb849f2d69fbbb01f1d180a64 | 1d0c1684836fb0c3d1adf86a8f93f1b501474417 | "2021-03-10T13:06:20Z" | python | "2021-03-31T09:59:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,682 | ["airflow/providers/amazon/aws/transfers/local_to_s3.py", "airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py", "airflow/providers/google/cloud/transfers/s3_to_gcs.py", "tests/providers/amazon/aws/transfers/test_local_to_s3.py"] | The S3ToGCSOperator fails on templated `dest_gcs` URL | <!--
-->
**Apache Airflow version**:
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: Docker
**What happened**:
When passing a templated `dest_gcs` argument to `S3ToGCSOperator`, the DAG fails to import because the constructor attempts to validate the URL before the template has been rendered in `execute`.
The error is:
```
Broken DAG: [/opt/airflow/dags/bad_gs_dag.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1051, in gcs_object_is_directory
_, blob = _parse_gcs_url(bucket)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1063, in _parse_gcs_url
raise AirflowException('Please provide a bucket name')
airflow.exceptions.AirflowException: Please provide a bucket name
```
**What you expected to happen**:
The DAG should successfully parse when using a templated `dest_gcs` value.
**How to reproduce it**:
Instantiating a `S3ToGCSOperator` task with `dest_gcs="{{ var.gcs_url }}"` fails.
<details>
```python
from airflow.decorators import dag
from airflow.utils.dates import days_ago
from airflow.providers.google.cloud.transfers.s3_to_gcs import S3ToGCSOperator
@dag(
schedule_interval=None,
description="Demo S3-to-GS Bug",
catchup=False,
start_date=days_ago(1),
)
def demo_bug():
S3ToGCSOperator(
task_id="transfer_task",
bucket="example_bucket",
prefix="fake/prefix",
dest_gcs="{{ var.gcs_url }}",
)
demo_dag = demo_bug()
```
</details>
**Anything else we need to know**:
Should be fixable by moving the code that evaluates whether the URL is a folder to `execute()`.
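A rough sketch of that suggestion (simplified, not the full operator): keep `__init__` free of validation so a Jinja expression parses, and check the rendered value in `execute()` instead.
```python
from airflow.exceptions import AirflowException
from airflow.models import BaseOperator


class S3ToGCSSketchOperator(BaseOperator):
    template_fields = ("dest_gcs",)

    def __init__(self, *, dest_gcs: str, **kwargs):
        super().__init__(**kwargs)
        # No validation here, so dest_gcs="{{ var.gcs_url }}" no longer breaks DAG parsing.
        self.dest_gcs = dest_gcs

    def execute(self, context):
        # By now the template has been rendered, so the folder check is meaningful.
        if not self.dest_gcs.endswith("/"):
            raise AirflowException("The destination Google Cloud Storage path must end with '/'")
        # ... transfer logic would go here ...
```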
| https://github.com/apache/airflow/issues/14682 | https://github.com/apache/airflow/pull/19048 | efdfd15477f92da059fa86b4fa18b6f29cb97feb | 3c08c025c5445ffc0533ac28d07ccf2e69a19ca8 | "2021-03-09T14:44:14Z" | python | "2021-10-27T06:15:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,675 | ["airflow/utils/helpers.py", "tests/utils/test_helpers.py"] | TriggerDagRunOperator OperatorLink doesn't work when HTML base url doesn't match the Airflow base url | <!--
-->
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**What happened**:
When I click on the "Triggered DAG" Operator Link in TriggerDagRunOperator, I am redirected with a relative link.
![Screen Shot 2021-03-08 at 4 28 16 PM](https://user-images.githubusercontent.com/5952735/110399833-61f00100-802b-11eb-9399-146bfcf8627c.png)
The redirect uses the HTML base URL and not the airflow base URL. This is only an issue if the URLs do not match.
**What you expected to happen**:
I expect the link to take me to the Triggered DAG tree view (default view) instead of the base url of the service hosting the webserver.
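One way to make such links absolute (illustration only, not the operator-link code itself) is to build them from the configured `[webserver] base_url` rather than relying on the page's HTML base URL; the query format below is simplified:
```python
from urllib.parse import quote, urljoin

from airflow.configuration import conf


def triggered_dag_url(dag_id: str) -> str:
    # e.g. https://airflow.example.com/myairflow -- independent of any HTML <base> tag
    base_url = conf.get("webserver", "base_url")
    return urljoin(base_url.rstrip("/") + "/", f"tree?dag_id={quote(dag_id)}")
```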
**How to reproduce it**:
Create an airflow deployment where the HTML base url doesn't match the airflow URL.
| https://github.com/apache/airflow/issues/14675 | https://github.com/apache/airflow/pull/14990 | 62aa7965a32f1f8dde83cb9c763deef5b234092b | aaa3bf6b44238241bd61178426b692df53770c22 | "2021-03-09T01:03:33Z" | python | "2021-04-11T11:51:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,597 | ["airflow/models/taskinstance.py", "docs/apache-airflow/concepts/connections.rst", "docs/apache-airflow/macros-ref.rst", "tests/models/test_taskinstance.py"] | Provide jinja template syntax to access connections | **Description**
Expose the connection into the jinja template context via `conn.value.<connectionname>.{host,port,login,password,extra_config,etc}`
Today it is possible to conveniently access [airflow's variables](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#variables) in jinja templates using `{{ var.value.<variable_name> }}`.
There is no equivalent (to my knowledge) for [connections](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#connections). I understand that most of the time connections are used programmatically in Operator and Hook source code, but there are use cases where the connection info has to be passed as a parameter to the operators, and then it becomes cumbersome to do it without jinja template syntax.
I have seen workarounds like using [user defined macros to provide get_login(my_conn_id)](https://stackoverflow.com/questions/65826404/use-airflow-connection-from-a-jinja-template/65873023#65873023), but I'm after a consistent interface for accessing both variables and connections in the same way.
**Workaround**
The following `user_defined_macro` (from my [stackoverflow answer](https://stackoverflow.com/a/66471911/90580)) provides the suggested syntax `connection.mssql.host` where `mssql` is the connection name:
```
from airflow import DAG
from airflow.models import Connection
from airflow.operators.bash import BashOperator

class ConnectionGrabber:
    def __getattr__(self, name):
        return Connection.get_connection_from_secrets(name)

dag = DAG(user_defined_macros={'connection': ConnectionGrabber()}, ...)
task = BashOperator(task_id='read_connection', bash_command='echo {{ connection.mssql.host }}', dag=dag)
```
This macro can be added to each DAG individually or to all DAGs via an Airflow plugin. What I suggest is to make this macro part of the defaults.
**Use case / motivation**
For example, passing credentials to a [KubernetesPodOperator](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#howto-operator-kubernetespodoperator) via [env_vars](https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator) today has to be done like this:
```
connection = Connection.get_connection_from_secrets('somecredentials')
k = KubernetesPodOperator(
task_id='task1',
    env_vars={'MY_VALUE': '{{ var.value.my_value }}', 'PWD': connection.password},
)
```
where I would prefer to use consistent syntax for both variables and connections like this:
```
# not needed anymore: connection = Connection.get_connection_from_secrets('somecredentials')
k = KubernetesPodOperator(
task_id='task1',
env_vars={'MY_VALUE': '{{ var.value.my_value }}', 'PWD': '{{ conn.somecredentials.password }}',},
)
```
The same applies to `BashOperator` where I sometimes feel the need to pass connection information to the templated script.
**Are you willing to submit a PR?**
yes, I can write the PR.
**Related Issues**
| https://github.com/apache/airflow/issues/14597 | https://github.com/apache/airflow/pull/16686 | 5034414208f85a8be61fe51d6a3091936fe402ba | d3ba80a4aa766d5eaa756f1fa097189978086dac | "2021-03-04T07:51:09Z" | python | "2021-06-29T10:50:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,592 | ["airflow/configuration.py", "airflow/models/connection.py", "airflow/models/variable.py", "tests/core/test_configuration.py"] | Unreachable Secrets Backend Causes Web Server Crash | **Apache Airflow version**:
1.10.12
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
n/a
**Environment**:
- **Cloud provider or hardware configuration**:
Amazon MWAA
- **OS** (e.g. from /etc/os-release):
Amazon Linux (latest)
- **Kernel** (e.g. `uname -a`):
n/a
- **Install tools**:
n/a
**What happened**:
If an unreachable `[secrets] backend` is specified in `airflow.cfg`, the web server crashes.
**What you expected to happen**:
An invalid secrets backend should be ignored with a warning, and the system should default back to the metadatabase secrets
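A sketch of the requested behaviour (not actual Airflow internals; `backends` here is just a generic list of configured secret backends): skip any backend that cannot be reached instead of letting the exception bubble up, so the metastore backend at the end of the chain can still serve the value.
```python
import logging

log = logging.getLogger(__name__)


def get_variable_with_fallback(backends, key):
    for backend in backends:
        try:
            value = backend.get_variable(key)
        except Exception as exc:  # e.g. no network route to SSM / Secrets Manager
            log.warning("Secrets backend %s unavailable (%s), trying the next one", backend, exc)
            continue
        if value is not None:
            return value
    return None
```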
**How to reproduce it**:
In an environment without access to AWS Secrets Manager, add the following to your airflow.cfg:
```
[secrets]
backend = airflow.contrib.secrets.aws_secrets_manager.SecretsManagerBackend
```
**or**, in an environment without access to SSM, specify:
```
[secrets]
backend = airflow.contrib.secrets.aws_systems_manager.SystemsManagerParameterStoreBackend
```
Reference: https://airflow.apache.org/docs/apache-airflow/1.10.12/howto/use-alternative-secrets-backend.html#aws-ssm-parameter-store-secrets-backend | https://github.com/apache/airflow/issues/14592 | https://github.com/apache/airflow/pull/16404 | 4d4830599578ae93bb904a255fb16b81bd471ef1 | 0abbd2d918ad9027948fd8a33ebb42487e4aa000 | "2021-03-03T23:17:03Z" | python | "2021-08-27T20:59:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,586 | ["airflow/executors/celery_executor.py"] | AttributeError: 'DatabaseBackend' object has no attribute 'task_cls' | <!--
-->
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18.15
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release): ubuntu 18.04
- **Kernel** (e.g. `uname -a`): Linux mlb-airflow-infra-workers-75f589bcd9-wtls8 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1 (2019-04-12) x86_64 x86_64 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
Scheduler is constantly restarting due to this error:
```
[2021-03-03 17:31:19,393] {scheduler_job.py:1298} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1384, in _run_scheduler_loop
self.executor.heartbeat()
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/base_executor.py", line 162, in heartbeat
self.sync()
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py", line 340, in sync
self.update_all_task_states()
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py", line 399, in update_all_task_states
state_and_info_by_celery_task_id = self.bulk_state_fetcher.get_many(self.tasks.values())
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py", line 552, in get_many
result = self._get_many_from_db_backend(async_results)
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/celery_executor.py", line 570, in _get_many_from_db_backend
task_cls = app.backend.task_cls
AttributeError: 'DatabaseBackend' object has no attribute 'task_cls'
[2021-03-03 17:31:20,396] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 3852
[2021-03-03 17:31:20,529] {process_utils.py:66} INFO - Process psutil.Process(pid=3996, status='terminated', started='17:31:19') (3996) terminated with exit code None
[2021-03-03 17:31:20,533] {process_utils.py:66} INFO - Process psutil.Process(pid=3997, status='terminated', started='17:31:19') (3997) terminated with exit code None
[2021-03-03 17:31:20,533] {process_utils.py:206} INFO - Waiting up to 5 seconds for processes to exit...
[2021-03-03 17:31:20,540] {process_utils.py:66} INFO - Process psutil.Process(pid=3852, status='terminated', exitcode=0, started='17:31:13') (3852) terminated with exit code 0
[2021-03-03 17:31:20,540] {scheduler_job.py:1301} INFO - Exited execute loop
```
**What you expected to happen**: Scheduler running
**How to reproduce it**:
Install airflow 2.0.1 in an ubuntu 18.04 instance:
`pip install apache-airflow[celery,postgres,s3,crypto,jdbc,google_auth,redis,slack,ssh,sentry,kubernetes,statsd]==2.0.1`
Use the following variables:
```
AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflow:<pwd>@<DB>:<port>/airflow
AIRFLOW__CORE__DEFAULT_TIMEZONE=<TZ>
AIRFLOW__CORE__LOAD_DEFAULTS=false
AIRFLOW__CELERY__BROKER_URL=sqs://
AIRFLOW__CELERY__DEFAULT_QUEUE=<queue>
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER=s3://<bucket>/<prefix>/
AIRFLOW__CORE__LOAD_EXAMPLES=false
AIRFLOW__CORE__REMOTE_LOGGING=True
AIRFLOW__CORE__FERNET_KEY=<fernet_key>
AIRFLOW__CORE__EXECUTOR=CeleryExecutor
AIRFLOW__CELERY__BROKER_TRANSPORT_OPTIONS__REGION=<region>
AIRFLOW__CELERY__RESULT_BACKEND=db+postgresql://airflow:<pwd>@<DB>:<port>/airflow
```
Start the airflow components (webserver, celery workers, scheduler). With no DAGs running, everything is stable, but once a DAG starts running the scheduler starts hitting the error, and from then on it constantly restarts until it crashes.
**Anything else we need to know**:
This happens every time and the scheduler keeps restarting
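For reference, a defensive guard like the following would avoid the hard crash when the Celery result backend does not expose `task_cls`. This is only a sketch, not necessarily the fix that was shipped, and the fallback import assumes `celery[sqlalchemy]` is installed:
```python
def get_result_task_cls(app):
    # Defensive lookup: prefer the backend-provided model, but don't crash the
    # scheduler loop if the attribute is missing on this Celery version.
    task_cls = getattr(app.backend, "task_cls", None)
    if task_cls is None:
        from celery.backends.database.models import Task as task_cls  # fallback model
    return task_cls
```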
| https://github.com/apache/airflow/issues/14586 | https://github.com/apache/airflow/pull/14612 | 511f0426530bfabd9d93f4737df7add1080b4e8d | 33910d6c699b5528db4be40d31199626dafed912 | "2021-03-03T17:51:45Z" | python | "2021-03-05T19:34:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,563 | ["airflow/example_dags/example_external_task_marker_dag.py", "airflow/models/dag.py", "airflow/sensors/external_task.py", "docs/apache-airflow/howto/operator/external_task_sensor.rst", "tests/sensors/test_external_task_sensor.py"] | TaskGroup Sensor | <!--
-->
**Description**
Enable the ability for a task in a DAG to wait upon the successful completion of an entire TaskGroup.
**Use case / motivation**
TaskGroups provide a great mechanism for authoring DAGs; however, there are situations where it might be necessary for a task in an external DAG to wait upon the completion of the TaskGroup as a whole.
At the moment this is only possible with one of the following workarounds:
1. Add an external task sensor for each task in the group.
2. Add a Dummy task after the TaskGroup which the external task sensor waits on.
I would envisage either adapting `ExternalTaskSensor` to also work with TaskGroups or creating a new `ExternalTaskGroupSensor`.
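As a rough illustration of workaround 2 above (the DAG ids and task ids are made up for the example):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.task_group import TaskGroup

# Upstream DAG: a dummy task placed after the TaskGroup marks its completion.
with DAG("upstream_dag", start_date=datetime(2021, 1, 1), schedule_interval="@daily") as upstream:
    with TaskGroup(group_id="etl_group") as etl_group:
        step = DummyOperator(task_id="step")
    group_done = DummyOperator(task_id="etl_group_done")
    etl_group >> group_done

# Downstream DAG: wait on that single marker task instead of every task in the group.
with DAG("downstream_dag", start_date=datetime(2021, 1, 1), schedule_interval="@daily") as downstream:
    wait_for_group = ExternalTaskSensor(
        task_id="wait_for_etl_group",
        external_dag_id="upstream_dag",
        external_task_id="etl_group_done",
    )
```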
**Are you willing to submit a PR?**
Time permitting, yes!
**Related Issues**
| https://github.com/apache/airflow/issues/14563 | https://github.com/apache/airflow/pull/24902 | 0eb0b543a9751f3d458beb2f03d4c6ff22fcd1c7 | bc04c5ff0fa56e80d3d5def38b798170f6575ee8 | "2021-03-02T14:22:22Z" | python | "2022-08-22T18:13:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,518 | ["airflow/cli/commands/cheat_sheet_command.py", "airflow/cli/commands/info_command.py", "airflow/cli/simple_table.py"] | Airflow info command doesn't work properly with pbcopy on Mac OS | Hello,
Mac OS has a command for copying data to the clipboard - `pbcopy`. Unfortunately, with the [introduction of more fancy tables](https://github.com/apache/airflow/pull/12689) to this command, we can no longer use it together.
For example:
```bash
airflow info | pbcopy
```
<details>
<summary>Clipboard content</summary>
```
Apache Airflow: 2.1.0.dev0
System info
| Mac OS
| x86_64
| uname_result(system='Darwin', node='Kamils-MacBook-Pro.local',
| release='20.3.0', version='Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06
| PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64', machine='x86_64',
| processor='i386')
| (None, 'UTF-8')
| 3.8.7 (default, Feb 14 2021, 09:58:39) [Clang 12.0.0 (clang-1200.0.32.29)]
| /Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/bin/python3.8
Tools info
| git version 2.24.3 (Apple Git-128)
| OpenSSH_8.1p1, LibreSSL 2.7.3
| Client Version: v1.19.3
| Google Cloud SDK 326.0.0
| NOT AVAILABLE
| NOT AVAILABLE
| 3.32.3 2020-06-18 14:16:19
| 02c344aceaea0d177dd42e62c8541e3cab4a26c757ba33b3a31a43ccc7d4aapl
| psql (PostgreSQL) 13.2
Paths info
| /Users/kamilbregula/airflow
| /Users/kamilbregula/.pyenv/versions/airflow-py38/bin:/Users/kamilbregula/.pye
| v/libexec:/Users/kamilbregula/.pyenv/plugins/python-build/bin:/Users/kamilbre
| ula/.pyenv/plugins/pyenv-virtualenv/bin:/Users/kamilbregula/.pyenv/plugins/py
| nv-update/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-installer/bin:/Users/k
| milbregula/.pyenv/plugins/pyenv-doctor/bin:/Users/kamilbregula/.pyenv/plugins
| python-build/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-virtualenv/bin:/Use
| s/kamilbregula/.pyenv/plugins/pyenv-update/bin:/Users/kamilbregula/.pyenv/plu
| ins/pyenv-installer/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-doctor/bin:/
| sers/kamilbregula/.pyenv/plugins/pyenv-virtualenv/shims:/Users/kamilbregula/.
| yenv/shims:/Users/kamilbregula/.pyenv/bin:/usr/local/opt/gnu-getopt/bin:/usr/
| ocal/opt/mysql@5.7/bin:/usr/local/opt/mysql-client@5.7/bin:/usr/local/opt/ope
| ssl/bin:/Users/kamilbregula/Library/Python/2.7/bin/:/Users/kamilbregula/bin:/
| sers/kamilbregula/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin
| /sbin:/Users/kamilbregula/.cargo/bin
| /Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/bin:/Users/kamilb
| egula/.pyenv/versions/3.8.7/lib/python38.zip:/Users/kamilbregula/.pyenv/versi
| ns/3.8.7/lib/python3.8:/Users/kamilbregula/.pyenv/versions/3.8.7/lib/python3.
| /lib-dynload:/Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/lib/
| ython3.8/site-packages:/Users/kamilbregula/devel/airflow/airflow:/Users/kamil
| regula/airflow/dags:/Users/kamilbregula/airflow/config:/Users/kamilbregula/ai
| flow/plugins
| True
Config info
| SequentialExecutor
| airflow.utils.log.file_task_handler.FileTaskHandler
| sqlite:////Users/kamilbregula/airflow/airflow.db
| /Users/kamilbregula/airflow/dags
| /Users/kamilbregula/airflow/plugins
| /Users/kamilbregula/airflow/logs
Providers info
| 1.2.0
| 1.0.1
| 1.0.1
| 1.1.0
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.2
| 1.0.2
| 1.1.1
| 1.0.1
| 1.0.1
| 2.1.0
| 1.0.1
| 1.0.1
| 1.1.1
| 1.0.1
| 1.0.1
| 1.1.0
| 1.0.1
| 1.2.0
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.1.1
| 1.0.1
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.2
| 1.0.2
| 1.0.1
| 2.0.0
| 1.0.1
| 1.0.1
| 1.0.2
| 1.1.1
| 1.0.1
| 3.0.0
| 1.0.2
| 1.2.0
| 1.0.0
| 1.0.2
| 1.0.1
| 1.0.1
| 1.0.1
```
</details>
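For what it's worth, a common pattern for keeping piped output machine-friendly is to only use the fancy rendering when stdout is an interactive terminal. This is a generic sketch, not the actual `airflow info` implementation:
```python
import sys


def render_fancy_table(rows):
    # Stand-in for the rich-based renderer used interactively.
    for row in rows:
        print("| " + " | ".join(str(col) for col in row) + " |")


def print_rows(rows):
    # Piped output (pbcopy, awk, grep) gets plain tab-separated rows instead.
    if sys.stdout.isatty():
        render_fancy_table(rows)
    else:
        for row in rows:
            print("\t".join(str(col) for col in row))


print_rows([("dag_id", "fileloc"), ("example_dag", "/opt/airflow/dags/example.py")])
```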
CC: @turbaszek
| https://github.com/apache/airflow/issues/14518 | https://github.com/apache/airflow/pull/14528 | 1b0851c9b75f0d0a15427898ae49a2f67d076f81 | a1097f6f29796bd11f8ed7b3651dfeb3e40eec09 | "2021-02-27T21:07:49Z" | python | "2021-02-28T15:42:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,517 | ["airflow/cli/cli_parser.py", "airflow/cli/simple_table.py", "docs/apache-airflow/usage-cli.rst", "docs/spelling_wordlist.txt"] | The tables are not parsable by standard linux utilities. | Hello,
I changed the format of the tables a long time ago so that they could be parsed in standard Linux tools such as AWK.
https://github.com/apache/airflow/pull/8409
For example, to list the files that contain the DAG, I could run the command below.
```
airflow dags list | grep -v "dag_id" | awk '{print $2}' | sort | uniq
```
To pause all dags:
```bash
airflow dags list | awk '{print $1}' | grep -v "dag_id"| xargs airflow dags pause
```
Unfortunately [that has changed](https://github.com/apache/airflow/pull/12704) and we now have more fancy tables, which are harder to use with standard Linux tools.
Alternatively, we can use JSON output, but I don't always have `jq` installed in the production environment, so performing administrative tasks is difficult for me.
```bash
$ docker run apache/airflow:2.0.1 bash -c "jq"
/bin/bash: jq: command not found
```
Best regards,
Kamil Breguła
CC: @turbaszek | https://github.com/apache/airflow/issues/14517 | https://github.com/apache/airflow/pull/14546 | 8801a0cc3b39cf3d2a3e5ef6af004d763bdb0b93 | 0ef084c3b70089b9b061090f7d88ce86e3651ed4 | "2021-02-27T20:56:59Z" | python | "2021-03-02T19:12:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,515 | ["airflow/models/pool.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_pool.py"] | Tasks in an infinite slots pool are never scheduled | **Apache Airflow version**: v2.0.0 and up
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): not tested with K8
**Environment**:
all
**What happened**:
Executing the unit test included below, or create an infinite pool ( `-1` slots ) and tasks that should be executed in that pool.
```
INFO airflow.jobs.scheduler_job.SchedulerJob:scheduler_job.py:991 Not scheduling since there are -1 open slots in pool test_scheduler_verify_infinite_pool
```
**What you expected to happen**:
To schedule tasks, or to drop support for infinite slots pools?
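A simplified illustration (not the real scheduler code) of why a `-1` pool never schedules anything: a negative open-slot count never passes an "open slots > 0" style check unless `-1` is explicitly special-cased as infinite.
```python
def open_slots(total_slots: int, occupied: int) -> float:
    if total_slots == -1:          # explicit "infinite pool" case
        return float("inf")
    return total_slots - occupied


print(open_slots(-1, 3))   # inf -> tasks can be queued
print(open_slots(128, 3))  # 125
```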
**How to reproduce it**:
easiest one is this unit test:
```
def test_scheduler_verify_infinite_pool(self):
"""
Test that TIs are still scheduled if we only have one infinite pool.
"""
dag = DAG(dag_id='test_scheduler_verify_infinite_pool', start_date=DEFAULT_DATE)
BashOperator(
task_id='test_scheduler_verify_infinite_pool_t0',
dag=dag,
owner='airflow',
pool='test_scheduler_verify_infinite_pool',
bash_command='echo hi',
)
dagbag = DagBag(
dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"),
include_examples=False,
read_dags_from_db=True,
)
dagbag.bag_dag(dag=dag, root_dag=dag)
dagbag.sync_to_db()
session = settings.Session()
pool = Pool(pool='test_scheduler_verify_infinite_pool', slots=-1)
session.add(pool)
session.commit()
dag = SerializedDAG.from_dict(SerializedDAG.to_dict(dag))
scheduler = SchedulerJob(executor=self.null_exec)
scheduler.processor_agent = mock.MagicMock()
dr = dag.create_dagrun(
run_type=DagRunType.SCHEDULED,
execution_date=DEFAULT_DATE,
state=State.RUNNING,
)
scheduler._schedule_dag_run(dr, {}, session)
task_instances_list = scheduler._executable_task_instances_to_queued(max_tis=32, session=session)
# Let's make sure we don't end up with a `max_tis` == 0
assert len(task_instances_list) >= 1
```
**Anything else we need to know**:
Overall I'm not sure whether it's worth fixing in those various spots:
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L908
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L971
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L988
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L1041
https://github.com/bperson/airflow/blob/master/airflow/jobs/scheduler_job.py#L1056
Or whether to restrict `-1` ( infinite ) slots in pools:
https://github.com/bperson/airflow/blob/master/airflow/models/pool.py#L49 | https://github.com/apache/airflow/issues/14515 | https://github.com/apache/airflow/pull/15247 | 90f0088c5752b56177597725cc716f707f2f8456 | 96f764389eded9f1ea908e899b54bf00635ec787 | "2021-02-27T17:42:33Z" | python | "2021-06-22T08:31:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,489 | ["airflow/providers/ssh/CHANGELOG.rst", "airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/provider.yaml"] | Add a retry with wait interval for SSH operator | <!--
-->
**Description**
Currently the SSH operator fails immediately if authentication fails, without retrying. We can add two more parameters, as shown in the sketch below, which will retry in case of an authentication failure after waiting for a configured time:
- max_retry - maximum number of times the SSH operator should retry in case of an exception
- wait - how many seconds it should wait before the next retry
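A rough sketch of the proposed behaviour; `max_retry` and `wait` are the hypothetical parameters described above, not existing `SSHOperator` arguments:
```python
import time

from airflow.exceptions import AirflowException
from airflow.providers.ssh.operators.ssh import SSHOperator


class RetryingSSHOperator(SSHOperator):
    def __init__(self, *, max_retry: int = 3, wait: int = 30, **kwargs):
        super().__init__(**kwargs)
        self.max_retry = max_retry
        self.wait = wait

    def execute(self, context):
        last_error = None
        for attempt in range(1, self.max_retry + 1):
            try:
                return super().execute(context)
            except Exception as exc:  # e.g. paramiko authentication failures
                last_error = exc
                self.log.warning("SSH attempt %s/%s failed: %s", attempt, self.max_retry, exc)
                time.sleep(self.wait)
        raise AirflowException(f"SSH command failed after {self.max_retry} attempts") from last_error
```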
**Use case / motivation**
We are using the SSH operator heavily in our production jobs, and what I have noticed is that sometimes the SSH operator fails to authenticate; however, upon re-running the job it runs successfully, and this happens often. We have ended up writing our own custom operator for this. However, if we can implement this here, it could help others as well.
Implement the suggested feature for ssh operator.
**Are you willing to submit a PR?**
I will submit the PR if feature gets approval.
**Related Issues**
N/A
No | https://github.com/apache/airflow/issues/14489 | https://github.com/apache/airflow/pull/19981 | 4a73d8f3d1f0c2cb52707901f9e9a34198573d5e | b6edc3bfa1ed46bed2ae23bb2baeefde3f9a59d3 | "2021-02-26T21:22:34Z" | python | "2022-02-01T09:30:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,486 | ["airflow/www/static/js/tree.js"] | tree view task instances have too much left padding in webserver UI | **Apache Airflow version**: 2.0.1
Here is tree view of a dag with one task:
![image](https://user-images.githubusercontent.com/15932138/109343250-e370b380-7821-11eb-9cff-5e1a5ef5fd44.png)
For some reason the task instances render partially off the page, and there's a large amount of empty space that could have been used instead.
**Environment**
MacOS
Chrome
| https://github.com/apache/airflow/issues/14486 | https://github.com/apache/airflow/pull/14566 | 8ef862eee6443cc2f34f4cc46425357861e8b96c | 3f7ebfdfe2a1fa90b0854028a5db057adacd46c1 | "2021-02-26T19:02:06Z" | python | "2021-03-04T00:00:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,481 | ["airflow/api_connexion/schemas/dag_schema.py", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"] | DAG /details endpoint returning empty array objects | When testing the following two endpoints, I get different results for the array of owners and tags. The former should be identical to the response of the latter endpoint.
`/api/v1/dags/{dag_id}/details`:
```json
{
"owners": [],
"tags": [
{},
{}
],
}
```
`/api/v1/dags/{dag_id}`:
```json
{
"owners": [
"airflow"
],
"tags": [
{
"name": "example"
},
{
"name": "example2"
}
]
}
``` | https://github.com/apache/airflow/issues/14481 | https://github.com/apache/airflow/pull/14490 | 9c773bbf0174a8153720d594041f886b2323d52f | 4424d10f05fa268b54c81ef8b96a0745643690b6 | "2021-02-26T14:59:56Z" | python | "2021-03-03T14:39:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,473 | ["airflow/www/static/js/tree.js"] | DagRun duration not visible in tree view tooltip if not currently running | On airflow 2.0.1
On tree view if dag run is running, duration shows as expected:
![image](https://user-images.githubusercontent.com/15932138/109248646-086e1380-779b-11eb-9d00-8cb785d88299.png)
But if dag run is complete, duration is null:
![image](https://user-images.githubusercontent.com/15932138/109248752-3ce1cf80-779b-11eb-8784-9a4aaed2209b.png)
| https://github.com/apache/airflow/issues/14473 | https://github.com/apache/airflow/pull/14566 | 8ef862eee6443cc2f34f4cc46425357861e8b96c | 3f7ebfdfe2a1fa90b0854028a5db057adacd46c1 | "2021-02-26T02:58:19Z" | python | "2021-03-04T00:00:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,469 | ["setup.cfg"] | Upgrade Flask-AppBuilder to 3.2.0 for improved OAUTH/LDAP | Version `3.2.0` of Flask-AppBuilder added support for LDAP group binding (see PR: https://github.com/dpgaspar/Flask-AppBuilder/pull/1374), we should update mainly for the `AUTH_ROLES_MAPPING` feature, which lets users bind to RBAC roles based on their LDAP/OAUTH group membership.
Here are the docs about Flask-AppBuilder security:
https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-ldap
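For illustration, this is the kind of `webserver_config.py` fragment the upgrade would enable; the group DNs and role names are examples only:
```python
from flask_appbuilder.security.manager import AUTH_LDAP

AUTH_TYPE = AUTH_LDAP
AUTH_ROLES_SYNC_AT_LOGIN = True
# Map LDAP group membership to Airflow RBAC roles (FAB >= 3.2.0 feature).
AUTH_ROLES_MAPPING = {
    "cn=airflow-admins,ou=groups,dc=example,dc=com": ["Admin"],
    "cn=airflow-users,ou=groups,dc=example,dc=com": ["Viewer"],
}
```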
This will resolve https://github.com/apache/airflow/issues/8179 | https://github.com/apache/airflow/issues/14469 | https://github.com/apache/airflow/pull/14665 | b718495e4caecb753742c3eb22919411a715f24a | 97b5e4cd6c001ec1a1597606f4e9f1c0fbea20d2 | "2021-02-25T23:00:08Z" | python | "2021-03-08T17:12:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,460 | ["BREEZE.rst", "breeze", "breeze-complete", "scripts/ci/libraries/_initialization.sh", "scripts/ci/libraries/_verbosity.sh"] | breeze: build-image output docker build parameters to be used in scripts | <!--
-->
**Description**
I would like to see a `--print-docker-args` option for `breeze build-image`
**Use case / motivation**
`breeze build-image --production-image --additional-extras microsoft.mssql --print-docker-args` should print something like
```--build-arg XXXX=YYYY --build-arg WWW=ZZZZ```
I would like to use that output in a script so that I can use `kaniko-executor` to build the image instead of `docker build` .
**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/14460 | https://github.com/apache/airflow/pull/14468 | 4a54292b69bb9a68a354c34246f019331270df3d | aa28e4ed77d8be6558dbeb8161a5af82c4395e99 | "2021-02-25T14:27:09Z" | python | "2021-02-26T20:50:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,422 | ["airflow/jobs/local_task_job.py", "tests/jobs/test_local_task_job.py"] | on_failure_callback does not seem to fire on pod deletion/eviction | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.16.x
**Environment**: KubernetesExecutor with single scheduler pod
**What happened**: On all previous versions we used (from 1.10.x to 2.0.0), evicting or deleting a running task pod triggered the `on_failure_callback` from `BaseOperator`. We use this functionality quite a lot to detect eviction and provide work carry-over and automatic task clear.
We've recently updated our dev environment to 2.0.1 and it seems that now `on_failure_callback` is only fired when pod completes naturally, i.e. not evicted / deleted with kubectl
Everything looks the same on task log level when pod is removed with `kubectl delete pod...`:
```
Received SIGTERM. Terminating subprocesses
Sending Signals.SIGTERM to GPID 16
Received SIGTERM. Terminating subprocesses.
Task received SIGTERM signal
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1315, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/li_operator.py", line 357, in execute
self.operator_task_code(context)
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/yarn_jar_operator.py", line 62, in operator_task_code
ssh_connection=_ssh_con
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/li_mapreduce_cluster_operator.py", line 469, in watch_application
existing_apps=_associated_applications.keys()
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/li_mapreduce_cluster_operator.py", line 376, in get_associated_application_info
logger=self.log
File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/yarn_api/yarn_api_ssh_client.py", line 26, in send_request
_response = requests.get(request)
File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 392, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.7/http/client.py", line 1277, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1323, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1272, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1032, in _send_output
self.send(msg)
File "/usr/local/lib/python3.7/http/client.py", line 972, in send
self.connect()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 187, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1241, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
Marking task as FAILED. dag_id=mock_dag_limr, task_id=SetupMockScaldingDWHJob, execution_date=20190910T000000, start_date=20210224T162811, end_date=20210224T163044
Process psutil.Process(pid=16, status='terminated', exitcode=1, started='16:28:10') (16) terminated with exit code 1
```
But `on_failure_callback` is not triggered. For simplicity, let's assume the callback does this:
```
def act_on_failure(context):
send_slack_message(
message=f"{context['task_instance_key_str']} fired failure callback",
channel=get_stored_variable('slack_log_channel')
)
def get_stored_variable(variable_name, deserialize=False):
try:
return Variable.get(variable_name, deserialize_json=deserialize)
except KeyError:
if os.getenv('PYTEST_CURRENT_TEST'):
_root_dir = str(Path(__file__).parent)
_vars_path = os.path.join(_root_dir, "vars.json")
_vars_json = json.loads(open(_vars_path, 'r').read())
if deserialize:
return _vars_json.get(variable_name, {})
else:
return _vars_json.get(variable_name, "")
else:
raise
def send_slack_message(message, channel):
_web_hook_url = get_stored_variable('slack_web_hook')
post = {
"text": message,
"channel": channel
}
try:
json_data = json.dumps(post)
req = request.Request(
_web_hook_url,
data=json_data.encode('ascii'),
headers={'Content-Type': 'application/json'}
)
request.urlopen(req)
except request.HTTPError as em:
print('Failed to send slack messsage to the hook {hook}: {msg}, request: {req}'.format(
hook=_web_hook_url,
msg=str(em),
req=str(post)
))
```
Scheduler logs related to this event:
```
21-02-24 16:33:04,968] {kubernetes_executor.py:147} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 had an event of type MODIFIED
[2021-02-24 16:33:04,968] {kubernetes_executor.py:202} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 Pending
[2021-02-24 16:33:04,979] {kubernetes_executor.py:147} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 had an event of type DELETED
[2021-02-24 16:33:04,979] {kubernetes_executor.py:197} INFO - Event: Failed to start pod mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3, will reschedule
[2021-02-24 16:33:05,406] {kubernetes_executor.py:354} INFO - Attempting to finish pod; pod_id: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3; state: up_for_reschedule; annotations: {'dag_id': 'mock_dag_limr', 'task_id': 'SetupMockSparkDwhJob', 'execution_date': '2019-09-10T00:00:00+00:00', 'try_number': '9'}
[2021-02-24 16:33:05,419] {kubernetes_executor.py:528} INFO - Changing state of (TaskInstanceKey(dag_id='mock_dag_limr', task_id='SetupMockSparkDwhJob', execution_date=datetime.datetime(2019, 9, 10, 0, 0, tzinfo=tzlocal()), try_number=9), 'up_for_reschedule', 'mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3', 'airflow', '173647183') to up_for_reschedule
[2021-02-24 16:33:05,422] {scheduler_job.py:1206} INFO - Executor reports execution of mock_dag_limr.SetupMockSparkDwhJob execution_date=2019-09-10 00:00:00+00:00 exited with status up_for_reschedule for try_number 9
```
However task stays in failed state (not what scheduler says)
When pod completes on its own (fails, exits with 0), callbacks are triggered correctly
**What you expected to happen**: `on_failure_callback` is called regardless of how pod exists, including SIGTERM-based interruptions: pod eviction, pod deletion
<!-- What do you think went wrong? --> Not sure really. We believe this code is executed since we get full stack trace
https://github.com/apache/airflow/blob/2.0.1/airflow/models/taskinstance.py#L1149
But then it is unclear why `finally` clause here does not run:
https://github.com/apache/airflow/blob/master/airflow/models/taskinstance.py#L1422
**How to reproduce it**:
With Airflow 2.0.1 running KubernetesExecutor, execute `kubectl delete ...` on any running task pod. Task operator should define `on_failure_callback`. In order to check that it is/not called, send data from it to any external logging system
**Anything else we need to know**:
Problem is persistent and only exists in 2.0.1 version
| https://github.com/apache/airflow/issues/14422 | https://github.com/apache/airflow/pull/15172 | e5d69ad6f2d25e652bb34b6bcf2ce738944de407 | def1e7c5841d89a60f8972a84b83fe362a6a878d | "2021-02-24T16:55:21Z" | python | "2021-04-23T22:47:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,421 | ["airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | NULL values in the operator column of task_instance table cause API validation failures | **Apache Airflow version**: 2.0.1
**Environment**: Docker on Linux Mint 20.1, image based on apache/airflow:2.0.1-python3.8
**What happened**:
I'm using the airflow API and the following exception occurred:
```python
>>> import json
>>> import requests
>>> from requests.auth import HTTPBasicAuth
>>> payload = {"dag_ids": ["{my_dag_id}"]}
>>> r = requests.post("https://localhost:8080/api/v1/dags/~/dagRuns/~/taskInstances/list", auth=HTTPBasicAuth('username', 'password'), data=json.dumps(payload), headers={'Content-Type': 'application/json'})
>>> r.status_code
500
>>> print(r.text)
{
"detail": "None is not of type 'string'\n\nFailed validating 'type' in schema['allOf'][0]['properties'][
'task_instances']['items']['properties']['operator']:\n {'type': 'string'}\n\nOn instance['task_instanc
es'][5]['operator']:\n None",
"status": 500,
"title": "Response body does not conform to specification",
"type": "https://airflow.apache.org/docs/2.0.1/stable-rest-api-ref.html#section/Errors/Unknown"
}
None is not of type 'string'
Failed validating 'type' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['ope
rator']:
{'type': 'string'}
On instance['task_instances'][5]['operator']:
None
```
This happens on all the "old" task instances before upgrading to 2.0.0
There is no issue with new task instances created after the upgrade.
**What do you think went wrong?**:
The `operator` column was introduced in 2.0.0. But during migration, all the existing database entries are filled with `NULL` values. So I had to execute this manually in my database
```sql
UPDATE task_instance SET operator = 'NoOperator' WHERE operator IS NULL;
```
**How to reproduce it**:
* Run airflow 1.10.14
* Create a DAG with multiple tasks and run them
* Upgrade airflow to 2.0.0 or 2.0.1
* Make the API call as above
**Anything else we need to know**:
Similar to https://github.com/apache/airflow/issues/13799 but not exactly the same
| https://github.com/apache/airflow/issues/14421 | https://github.com/apache/airflow/pull/16516 | 60925453b1da9fe54ca82ed59889cd65a0171516 | 087556f0c210e345ac1749933ff4de38e40478f6 | "2021-02-24T15:24:05Z" | python | "2021-06-18T07:56:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,417 | ["airflow/providers/apache/druid/operators/druid.py", "tests/providers/apache/druid/operators/test_druid.py"] | DruidOperator failing to submit ingestion tasks : Getting 500 error code from Druid | Issue : When trying to submit an ingestion task using DruidOperator, getting 500 error code in response from Druid. And can see no task submitted in Druid console.
In Airflow 1.10.x, everything is working fine. But when upgraded to 2.0.1, it is failing to submit the task. There is absolutely no change in the code/files except the import statements.
Resolution : I compared DruidOperator code for both Airflow 1.10.x & 2.0.1 and found one line causing the issue.
In Airflow 2.0.x, before submitting the indexing job, the JSON string is converted to a Python object, but it should stay a JSON string.
In Airflow 1.10.x there is no conversion happening and hence it is working fine. (Please see below code snippets.)
I have already tried this change in my setup and re-ran the ingestion tasks. It is all working fine.
~~hook.submit_indexing_job(json.loads(self.json_index_file))~~
**hook.submit_indexing_job(self.json_index_file)**
Airflow 1.10.x - airflow/contrib/operators/druid_operator.py
```
def execute(self, context):
hook = DruidHook(
druid_ingest_conn_id=self.conn_id,
max_ingestion_time=self.max_ingestion_time
)
self.log.info("Submitting %s", self.index_spec_str)
hook.submit_indexing_job(self.index_spec_str)
```
Airflow 2.0.1 - airflow/providers/apache/druid/operators/druid.py
```
def execute(self, context: Dict[Any, Any]) -> None:
hook = DruidHook(druid_ingest_conn_id=self.conn_id, max_ingestion_time=self.max_ingestion_time)
self.log.info("Submitting %s", self.json_index_file)
hook.submit_indexing_job(json.loads(self.json_index_file))
```
**Apache Airflow version**: 2.0.x
**Error Logs**:
```
[2021-02-24 06:42:24,287] {{connectionpool.py:452}} DEBUG - http://druid-master:8081 "POST /druid/indexer/v1/task HTTP/1.1" 500 15714
[2021-02-24 06:42:24,287] {{taskinstance.py:570}} DEBUG - Refreshing TaskInstance <TaskInstance: druid_compact_daily 2021-02-23T01:20:00+00:00 [running]> from DB
[2021-02-24 06:42:24,296] {{taskinstance.py:605}} DEBUG - Refreshed TaskInstance <TaskInstance: druid_compact_daily 2021-02-23T01:20:00+00:00 [running]>
[2021-02-24 06:42:24,298] {{taskinstance.py:1455}} ERROR - Did not get 200 when submitting the Druid job to http://druid-master:8081/druid/indexer/v1/task
```
| https://github.com/apache/airflow/issues/14417 | https://github.com/apache/airflow/pull/14418 | c2a0cb958835d0cecd90f82311e2aa8b1bbd22a0 | 59065400ff6333e3ff085f3d9fe9005a0a849aef | "2021-02-24T11:31:24Z" | python | "2021-03-05T22:48:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,393 | ["scripts/ci/libraries/_build_images.sh"] | breeze: it relies on GNU date which is not the default on macOS | <!--
-->
**Apache Airflow version**:
2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **OS** macOS Catalina
**What happened**:
`breeze build-image` gives
```
Pulling the image apache/airflow:master-python3.6-ci
master-python3.6-ci: Pulling from apache/airflow
Digest: sha256:92351723f04bec6e6ef27c0be2b5fbbe7bddc832911988b2d18b88914bcc1256
Status: Downloaded newer image for apache/airflow:master-python3.6-ci
docker.io/apache/airflow:master-python3.6-ci
date: illegal option -- -
usage: date [-jnRu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
[-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
```
As I explained in [Slack](https://apache-airflow.slack.com/archives/CQ9QHSFQX/p1614092075007500) this is caused by
https://github.com/apache/airflow/blob/b9951279a0007db99a6f4c52197907ebfa1bf325/scripts/ci/libraries/_build_images.sh#L770
```
--build-arg AIRFLOW_IMAGE_DATE_CREATED="$(date --rfc-3339=seconds | sed 's/ /T/')" \
```
`--rfc-3339` is an option supported by GNU date but not by the regular `date` command present macOS.
**What you expected to happen**:
`breeze build-image` to complete on macOS. Instead it fails, because `--rfc-3339` is an option supported by GNU date but not by the regular `date` command present on macOS.
**How to reproduce it**:
on macOS: `breeze build-image`
**Anything else we need to know**:
It happens every time
I think this can be solved either by checking for the presence of `gdate` and using that if present, or by adhering to POSIX date options (I'm not 100% sure, but I do believe the regular POSIX options are available in macOS's `date`)
| https://github.com/apache/airflow/issues/14393 | https://github.com/apache/airflow/pull/14458 | 997a009715fb82c241a47405cc8647d23580af25 | 64cf2aedd94d27be3ab7829b7c92bd6b1866295b | "2021-02-23T15:09:59Z" | python | "2021-02-25T14:57:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,390 | ["BREEZE.rst", "breeze"] | breeze: capture output to a file | Reference: [Slack conversation with Jarek Potiuk](https://apache-airflow.slack.com/archives/CQ9QHSFQX/p1614073317003900) @potiuk
**Description**
Suggestion: it would be great if breeze captured the output into a log file by default (for example when running `breeze build-image`) so that it is easier to review the build process. I have seen at least one error invoking the date utility, and now I would need to run the whole thing again to capture it.
**Use case / motivation**
I want to be able to review what happened during breeze commands that output lots of text
and take a long time, like `breeze build-image`.
Ideally I want this to happen automatically, as it's very time consuming to rerun the `breeze` command just to get the error.
**Are you willing to submit a PR?**
**Related Issues**
| https://github.com/apache/airflow/issues/14390 | https://github.com/apache/airflow/pull/14470 | 8ad2f9c64e9ce89c252cc61f450947d53935e0f2 | 4a54292b69bb9a68a354c34246f019331270df3d | "2021-02-23T14:46:09Z" | python | "2021-02-26T20:49:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,384 | ["airflow/models/dagrun.py", "airflow/www/views.py"] | Scheduler occasionally crashes with a TypeError when updating DagRun state | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**: Python 3.8
**What happened**:
Occasionally, the Airflow scheduler crashes with the following exception:
```
Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1382, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1521, in _do_scheduling
self._schedule_dag_run(dag_run, active_runs_by_dag_id.get(dag_run.dag_id, set()), session)
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1760, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/models/dagrun.py", line 478, in update_state
self._emit_duration_stats_for_finished_state()
File "/usr/local/lib/python3.8/site-packages/airflow/models/dagrun.py", line 615, in _emit_duration_stats_for_finished_state
duration = self.end_date - self.start_date
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'
```
Just before this error surfaces, Airflow typically logs either one of the following messages:
```
Marking run <DagRun sessionization @ 2021-02-15 08:05:00+00:00: scheduled__2021-02-15T08:05:00+00:00, externally triggered: False> failed
```
or
```
Marking run <DagRun sessionization @ 2021-02-16 08:05:00+00:00: scheduled__2021-02-16T08:05:00+00:00, externally triggered: False> successful
```
The cause of this issue appears to be that the scheduler is attempting to update the state of a `DagRun` instance that is in a _running_ state, but does **not** have a `start_date` set. This will eventually cause a `TypeError` to be raised at L615 in `_emit_duration_stats_for_finished_state()` because `None` is subtracted from a `datetime` object.
During my testing I was able to resolve the issue by manually updating any records in the `DagRun` table which are missing a `start_date`.
However, it is a bit unclear to me _how_ it is possible for a DagRun instance to be transitioned into a `running` state, without having a `start_date` set. I spent some time digging through the code, and I believe the only code path that would allow a `DagRun` to end up in such a scenario is the state transition that occurs at L475 in [DagRun](https://github.com/apache/airflow/blob/2.0.1/airflow/models/dagrun.py#L475) where `DagRun.set_state(State.RUNNING)` is invoked without verifying that a `start_date` is set.
**What you expected to happen**:
I expect the Airflow scheduler not to crash, and to handle this scenario gracefully.
I have the impression that this is an edge-case, and even handling a missing `start_date` to be equal to a set `end_date` in `_emit_duration_stats_for_finished_state()` seems like a more favorable solution than raising a `TypeError` in the scheduler to me.
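As a rough, self-contained illustration of that suggestion (only a sketch of the idea, not the actual `DagRun` method):
```python
from datetime import datetime, timezone
from typing import Optional

def finished_run_duration(start_date: Optional[datetime], end_date: Optional[datetime]) -> Optional[float]:
    """Return the run duration in seconds, or None instead of raising
    TypeError when start_date was never set on the DagRun."""
    if start_date is None or end_date is None:
        return None
    return (end_date - start_date).total_seconds()

print(finished_run_duration(None, datetime.now(timezone.utc)))  # None rather than a scheduler crash
```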
**How to reproduce it**:
I haven't been able to figure out a scenario which allows me to reproduce this issue reliably. We've had this issue surface for a fairly complex DAG twice in a time span of 5 days. We run over 25 DAGs on our Airflow instance, and so far the issue seems to be isolated to a single DAG.
**Anything else we need to know**:
While I'm unclear on what exactly causes a DagRun instance to not have a `start_date` set, the problem seems to be isolated to a single DAG on our Airflow instance. This DAG is fairly complex in the sense that it contains a number of root tasks, that have SubDagOperators set as downstream (leaf) dependencies. These SubDagOperators each represent a SubDag containing between 2 and 12 tasks. The SubDagOperator's have `depends_on_past` set to True, and `catchup` is enabled for the parent DAG. The parent DAG also has `max_active_runs` set to limit concurrency.
I also have the impression, that this issue mainly surfaces when there are multiple DagRuns running concurrently for this DAG, but I don't have hard evidence for this. I did at some point clear task instance states, and transition the DagRun's state from `failed` back to `running` through the Web UI around the period that some of these issues arose.
I've also been suspecting that this [block of code](https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L1498-L1511) in `_do_scheduling` may be related to this issue, in the sense that I've been suspecting that there exists an edge case in which Task Instances may be considered active for a particular `execution_date`, but for which the DagRun object itself is not "active". My hypothesis is that this would eventually cause the "inactive" DagRun to be transitioned to `running` in `DagRun.set_state()` without ensuring that a `start_date` was set for the DagRun. I haven't been able to gather strong evidence for this hypothesis yet, though, and I'm hoping that someone more familiar with the implementation will be able to provide some guidance as to whether that hypothesis makes sense or not.
| https://github.com/apache/airflow/issues/14384 | https://github.com/apache/airflow/pull/14452 | 258ec5d95e98eac09ecc7658dcd5226c9afe14c6 | 997a009715fb82c241a47405cc8647d23580af25 | "2021-02-23T11:38:10Z" | python | "2021-02-25T14:40:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,364 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Missing schedule_delay metric |
**Apache Airflow version**: 2.0.0 but applicable to master
**Environment**: Running on ECS but not relevant to question
**What happened**: I am not seeing the metric dagrun.schedule_delay.<dag_id> being reported. A search in the codebase seems to reveal that it no longer exists. It was originally added in https://github.com/apache/airflow/pull/5050.
I suspect either:
1. This metric was intentionally removed, in which case the docs should be updated to remove it.
2. It was unintentionally removed during a refactor, in which case we should add it back.
3. I am bad at searching through code, and someone could hopefully point me to where it is reported from now.
**How to reproduce it**:
https://github.com/apache/airflow/search?q=schedule_delay
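For reference, a rough sketch of how such a timing metric is typically emitted through Airflow's `Stats` client (the dummy values and the exact delay definition are assumptions on my part, not necessarily what #5050 did):
```python
from datetime import datetime, timedelta, timezone

from airflow.stats import Stats

dag_id = "example_dag"                                       # placeholder DAG id
expected_start = datetime(2021, 2, 22, tzinfo=timezone.utc)  # when the run should have started
actual_start = expected_start + timedelta(seconds=42)        # when the scheduler actually started it

Stats.timing(f"dagrun.schedule_delay.{dag_id}", actual_start - expected_start)
```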
| https://github.com/apache/airflow/issues/14364 | https://github.com/apache/airflow/pull/15105 | 441b4ef19f07d8c72cd38a8565804e56e63b543c | ca4c4f3d343dea0a034546a896072b9c87244e71 | "2021-02-22T18:18:44Z" | python | "2021-03-31T12:38:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,363 | ["airflow/providers/google/cloud/hooks/gcs.py"] | Argument order change in airflow.providers.google.cloud.hooks.gcs.GCSHook:download appears to be a mistake. | ### Description
https://github.com/apache/airflow/blob/6019c78cb475800f58714a9dabb747b9415599c8/airflow/providers/google/cloud/hooks/gcs.py#L262-L265
Was this order swap of the `object_name` and `bucket_name` arguments in the `GCSHook.download` a mistake? The `upload` and `delete` methods still use `bucket_name` first and the commit where this change was made `1845cd11b77f302777ab854e84bef9c212c604a0` was supposed to just add strict type checking. The docstring also appears to reference the old order. | https://github.com/apache/airflow/issues/14363 | https://github.com/apache/airflow/pull/14497 | 77f5629a80cfec643bd3811bc92c48ef4ec39ceb | bfef559cf6138eec3ac77c64289fb1d45133d8be | "2021-02-22T17:21:20Z" | python | "2021-02-27T09:03:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,331 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/utils/state.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Airflow stable API taskInstance call fails if a task is removed from running DAG | **Apache Airflow version**: 2.0.1
**Environment**: Docker on Win 10 with WSL, image based on `apache/airflow:2.0.1-python3.8`
**What happened**:
I'm using the airflow API and the following (what I believe to be a) bug popped up:
```Python
>>> import requests
>>> r = requests.get("http://localhost:8084/api/v1/dags/~/dagRuns/~/taskInstances", auth=HTTPBasicAuth('username', 'password'))
>>> r.status_code
500
>>> print(r.text)
{
"detail": "'removed' is not one of ['success', 'running', 'failed', 'upstream_failed', 'skipped', 'up_for_retry', 'up_for_reschedule', 'queued', 'none', 'scheduled']\n\nFailed validating 'enum' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['state']:\n {'description': 'Task state.',\n 'enum': ['success',\n 'running',\n 'failed',\n 'upstream_failed',\n 'skipped',\n 'up_for_retry',\n 'up_for_reschedule',\n 'queued',\n 'none',\n 'scheduled'],\n 'nullable': True,\n 'type': 'string',\n 'x-scope': ['',\n '#/components/schemas/TaskInstanceCollection',\n '#/components/schemas/TaskInstance']}\n\nOn instance['task_instances'][16]['state']:\n 'removed'",
"status": 500,
"title": "Response body does not conform to specification",
"type": "https://airflow.apache.org/docs/2.0.1rc2/stable-rest-api-ref.html#section/Errors/Unknown"
}
>>> print(r.json()["detail"])
'removed' is not one of ['success', 'running', 'failed', 'upstream_failed', 'skipped', 'up_for_retry', 'up_for_reschedule', 'queued', 'none', 'scheduled']
Failed validating 'enum' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['state']:
{'description': 'Task state.',
'enum': ['success',
'running',
'failed',
'upstream_failed',
'skipped',
'up_for_retry',
'up_for_reschedule',
'queued',
'none',
'scheduled'],
'nullable': True,
'type': 'string',
'x-scope': ['',
'#/components/schemas/TaskInstanceCollection',
'#/components/schemas/TaskInstance']}
On instance['task_instances'][16]['state']:
'removed'
```
This happened after I changed a DAG in the corresponding instance, thus a task was removed from a DAG while the DAG was running.
**What you expected to happen**:
Give me all task instances; whether to include the removed ones or not is up to the Airflow team to decide (no preference from my side, though I'd guess it makes more sense to supply all the data that is available).
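Until the response schema accepts `removed`, one possible client-side workaround is to filter by the states the schema does know about. This assumes the endpoint's `state` query parameter accepts a list of states as documented, which I have not verified against 2.0.1:
```python
import requests
from requests.auth import HTTPBasicAuth

known_states = [
    "success", "running", "failed", "upstream_failed", "skipped",
    "up_for_retry", "up_for_reschedule", "queued", "none", "scheduled",
]
r = requests.get(
    "http://localhost:8084/api/v1/dags/~/dagRuns/~/taskInstances",
    params={"state": known_states},  # assumption: repeated state params exclude 'removed' instances
    auth=HTTPBasicAuth("username", "password"),
)
print(r.status_code)
```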
**How to reproduce it**:
- Run airflow
- Create a DAG with multiple tasks
- While the DAG is running, remove one of the tasks (ideally one that did not yet run)
- Make the API call as above | https://github.com/apache/airflow/issues/14331 | https://github.com/apache/airflow/pull/14381 | ea7118316660df43dd0ac0a5e72283fbdf5f2396 | 7418679591e5df4ceaab6c471bc6d4a975201871 | "2021-02-20T13:15:11Z" | python | "2021-03-08T21:24:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,327 | ["airflow/utils/json.py"] | Kubernetes Objects are not serializable and break Graph View in UI |
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.17.12
**Environment**:
- **Cloud provider or hardware configuration**: AWS EKS
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
When you click on Graph view for some DAGs in 2.0.1 the UI errors out (logs below).
**What you expected to happen**:
The Graph view to display
**How to reproduce it**:
It is not clear to me. This is only happening on a handful of our DAGs. Also, the Tree View displays fine.
**Anything else we need to know**:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/www/decorators.py", line 97, in view_func
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/www/views.py", line 2080, in graph
return self.render_template(
File "/usr/local/lib/python3.8/site-packages/airflow/www/views.py", line 396, in render_template
return super().render_template(
File "/usr/local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 280, in render_template
return render_template(
File "/usr/local/lib/python3.8/site-packages/flask/templating.py", line 137, in render_template
return _render(
File "/usr/local/lib/python3.8/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/usr/local/lib/python3.8/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/usr/local/lib/python3.8/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/usr/local/lib/python3.8/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.8/site-packages/airflow/www/templates/airflow/graph.html", line 21, in top-level template code
{% from 'appbuilder/loading_dots.html' import loading_dots %}
File "/usr/local/lib/python3.8/site-packages/airflow/www/templates/airflow/dag.html", line 21, in top-level template code
{% from 'appbuilder/dag_docs.html' import dag_docs %}
File "/usr/local/lib/python3.8/site-packages/airflow/www/templates/airflow/main.html", line 20, in top-level template code
{% extends 'appbuilder/baselayout.html' %}
File "/usr/local/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
{% import 'appbuilder/baselib.html' as baselib %}
File "/usr/local/lib/python3.8/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 60, in top-level template code
{% block tail %}
File "/usr/local/lib/python3.8/site-packages/airflow/www/templates/airflow/graph.html", line 145, in block "tail"
var task_instances = {{ task_instances|tojson }};
File "/usr/local/lib/python3.8/site-packages/flask/json/__init__.py", line 376, in tojson_filter
return Markup(htmlsafe_dumps(obj, **kwargs))
File "/usr/local/lib/python3.8/site-packages/flask/json/__init__.py", line 290, in htmlsafe_dumps
dumps(obj, **kwargs)
File "/usr/local/lib/python3.8/site-packages/flask/json/__init__.py", line 211, in dumps
rv = _json.dumps(obj, **kwargs)
File "/usr/local/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/usr/local/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/json.py", line 74, in _default
raise TypeError(f"Object of type '{obj.__class__.__name__}' is not JSON serializable")
TypeError: Object of type 'V1ResourceRequirements' is not JSON serializable
``` | https://github.com/apache/airflow/issues/14327 | https://github.com/apache/airflow/pull/15199 | 6706b67fecc00a22c1e1d6658616ed9dd96bbc7b | 7b577c35e241182f3f3473ca02da197f1b5f7437 | "2021-02-20T00:17:59Z" | python | "2021-04-05T11:41:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,326 | ["airflow/kubernetes/pod_generator.py", "tests/kubernetes/test_pod_generator.py"] | Task Instances stuck in "scheduled" state |
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.17.12
**Environment**:
- **Cloud provider or hardware configuration**: AWS EKS
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
Several Task Instances get `scheduled` but never move to `queued` or `running`. They then become orphan tasks.
**What you expected to happen**:
Tasks to get scheduled and run :)
**How to reproduce it**:
**Anything else we need to know**:
I believe the issue is caused by this [limit](https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L923). If we have more Task Instances than Pool Slots Free then some Task Instances may never show up in this query.
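A toy illustration of that starvation concern (purely illustrative Python, not the scheduler query): if the LIMIT is tied to free pool slots and the same leading rows keep occupying those slots, the remaining scheduled task instances are never even examined.
```python
scheduled_tis = [f"ti_{i}" for i in range(100)]   # 100 task instances sitting in "scheduled"
free_pool_slots = 10

examined = scheduled_tis[:free_pool_slots]        # what a query with LIMIT free_pool_slots returns
never_examined = scheduled_tis[free_pool_slots:]  # never surface while the head of the list never drains
print(len(examined), len(never_examined))         # 10 90
```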
| https://github.com/apache/airflow/issues/14326 | https://github.com/apache/airflow/pull/14703 | b1ce429fee450aef69a813774bf5d3404d50f4a5 | b5e7ada34536259e21fca5032ef67b5e33722c05 | "2021-02-20T00:11:56Z" | python | "2021-03-26T14:41:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,299 | ["airflow/www/templates/airflow/dag_details.html"] | UI: Start Date is incorrect in "DAG Details" view | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
The start date in the "DAG Details" view `{AIRFLOW_URL}/dag_details?dag_id={dag_id}` is incorrect if there's a schedule for the DAG.
**What you expected to happen**:
Start date should be the same as the specified date in the DAG.
**How to reproduce it**:
For example, I created a DAG with a start date of `2019-07-09` but the DAG details view shows:
![image](https://user-images.githubusercontent.com/12819087/108415107-ee07c900-71e1-11eb-8661-79bd0288291c.png)
Minimal code block to reproduce:
```
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.dummy import DummyOperator
START_DATE = datetime(2019, 7, 9)
DAG_ID = '*redacted*'
dag = DAG(
dag_id=DAG_ID,
description='*redacted*',
catchup=False,
start_date=START_DATE,
schedule_interval=timedelta(weeks=1),
)
start = DummyOperator(task_id='start', dag=dag)
```
| https://github.com/apache/airflow/issues/14299 | https://github.com/apache/airflow/pull/16206 | 78c4f1a46ce74f13a99447207f8cdf0fcfc7df95 | ebc03c63af7282c9d826054b17fe7ed50e09fe4e | "2021-02-18T20:14:50Z" | python | "2021-06-08T14:13:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,279 | ["airflow/providers/amazon/aws/example_dags/example_s3_bucket.py", "airflow/providers/amazon/aws/example_dags/example_s3_bucket_tagging.py", "airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/operators/s3_bucket.py", "airflow/providers/amazon/aws/operators/s3_bucket_tagging.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/operators/s3.rst", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/operators/test_s3_bucket_tagging.py", "tests/providers/amazon/aws/operators/test_s3_bucket_tagging_system.py"] | Add AWS S3 Bucket Tagging Operator | **Description**
Add the missing AWS Operators for the three (get/put/delete) AWS S3 bucket tagging APIs, including testing.
**Use case / motivation**
I am looking to add an Operator that will implement the existing API functionality to manage the tags on an AWS S3 bucket.
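For context, the three operations already exist on the boto3 S3 client, so the new hook/operator methods would mostly wrap calls like the following (the Airflow-side method and parameter names are my assumptions, not a final API):
```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

s3.put_bucket_tagging(Bucket=bucket, Tagging={"TagSet": [{"Key": "team", "Value": "data"}]})
tags = s3.get_bucket_tagging(Bucket=bucket)["TagSet"]
s3.delete_bucket_tagging(Bucket=bucket)
```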
**Are you willing to submit a PR?**
Yes
**Related Issues**
None that I saw
| https://github.com/apache/airflow/issues/14279 | https://github.com/apache/airflow/pull/14402 | f25ec3368348be479dde097efdd9c49ce56922b3 | 8ced652ecf847ed668e5eed27e3e47a51a27b1c8 | "2021-02-17T17:07:01Z" | python | "2021-02-28T20:50:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,270 | ["airflow/task/task_runner/standard_task_runner.py", "tests/task/task_runner/test_standard_task_runner.py"] | Specify that exit code -9 is due to RAM | Related to https://github.com/apache/airflow/issues/9655
It would be nice to add a message with some extra info when you get this error, like 'This is probably because of a lack of RAM' or something like that.
I have found the code where the -9 is assigned but have no idea how to add a logging message.
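A standalone sketch of the kind of hint being asked for (illustrative only, not the actual task-runner code); the snippet where the -9 is assigned is quoted right below:
```python
import logging

log = logging.getLogger(__name__)

def guess_return_code(rc):
    """Sketch: warn about the likely cause when the return code has to be guessed."""
    if rc is None:
        log.warning(
            "Task exited with return code -9; the process was most likely SIGKILLed, "
            "commonly by the OOM killer due to a lack of RAM."
        )
        rc = -9
    return rc

print(guess_return_code(None))  # -9, plus a warning explaining the probable cause
```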
self.process = None
if self._rc is None:
# Something else reaped it before we had a chance, so let's just "guess" at an error code.
self._rc = -9 | https://github.com/apache/airflow/issues/14270 | https://github.com/apache/airflow/pull/15207 | eae22cec9c87e8dad4d6e8599e45af1bdd452062 | 18e2c1de776c8c3bc42c984ea0d31515788b6572 | "2021-02-17T09:01:05Z" | python | "2021-04-06T19:02:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,264 | ["airflow/sensors/external_task.py", "tests/dags/test_external_task_sensor_check_existense.py", "tests/sensors/test_external_task_sensor.py"] | AirflowException: The external DAG was deleted when external_dag_id references zipped DAG | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.16.3
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux airflow 3.10.0-1127.10.1.el7.x86_64 #1 SMP Tue May 26 15:05:43 EDT 2020 x86_64 GNU/Linux
- **Install tools**:
- **Others**:
**What happened**:
ExternalTaskSensor with check_existence=True referencing an external DAG inside a .zip file raises the following exception:
```
ERROR - The external DAG dag_a /opt/airflow-data/dags/my_dags.zip/dag_a.py was deleted.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1086, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1260, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1300, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/sensors/base.py", line 228, in execute
while not self.poke(context):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/sensors/external_task.py", line 159, in poke
self._check_for_existence(session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/sensors/external_task.py", line 184, in _check_for_existence
raise AirflowException(f'The external DAG {self.external_dag_id} was deleted.')
airflow.exceptions.AirflowException: The external DAG dag_a /opt/airflow-data/dags/my_dags.zip/dag_a.py was deleted.
```
**What you expected to happen**:
The existence check should PASS.
**How to reproduce it**:
1. Create a file *dag_a.py* with the following contents:
```
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.timezone import datetime
DEFAULT_DATE = datetime(2015, 1, 1)
with DAG("dag_a", start_date=DEFAULT_DATE, schedule_interval="@daily") as dag:
task_a = DummyOperator(task_id="task_a", dag=dag)
```
2. Create a file *dag_b.py* with contents:
```
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.timezone import datetime
DEFAULT_DATE = datetime(2015, 1, 1)
with DAG("dag_b", start_date=DEFAULT_DATE, schedule_interval="@daily") as dag:
sense_a = ExternalTaskSensor(
task_id="sense_a",
external_dag_id="dag_a",
external_task_id="task_a",
check_existence=True
)
task_b = DummyOperator(task_id="task_b", dag=dag)
sense_a >> task_b
```
3. `zip my_dags.zip dag_a.py dag_b.py`
4. Load *my_dags.zip* into airflow and run *dag_b*
5. Task *sense_a* will fail with exception above.
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14264 | https://github.com/apache/airflow/pull/27056 | 911d90d669ab5d1fe1f5edb1d2353c7214611630 | 99a6bf783412432416813d1c4bb41052054dd5c6 | "2021-02-17T00:16:48Z" | python | "2022-11-16T12:53:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,260 | ["UPDATING.md", "airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/models/dag.py", "airflow/models/taskinstance.py", "tests/sensors/test_external_task_sensor.py"] | Clearing using ExternalTaskMarker will not activate external DagRuns | **Apache Airflow version**:
2.0.1
**What happened**:
When clearing tasks across DAGs using `ExternalTaskMarker`, the state of the external `DagRun` is not set back to active, so cleared tasks in the external DAG will not automatically start if that `DagRun` is in a Failed or Succeeded state.
**What you expected to happen**:
The external `DagRun` should also be set to the Running state.
**How to reproduce it**:
Clear tasks in an external dag using an ExternalTaskMarker.
**Anything else we need to know**:
Looking at the code, it has:
https://github.com/apache/airflow/blob/b23fc137812f5eabf7834e07e032915e2a504c17/airflow/models/dag.py#L1323-L1335
It seems like it intentionally calls the DAG member method `set_dag_run_state` instead of letting the helper function `clear_task_instances` set the `DagRun` state. But the member method will only change the state of `DagRun`s of the DAG where the original task is, while I believe `clear_task_instances` would correctly change the state of all involved `DagRun`s.
| https://github.com/apache/airflow/issues/14260 | https://github.com/apache/airflow/pull/15382 | f75dd7ae6e755dad328ba6f3fd462ade194dab25 | 2bca8a5425c234b04fdf32d6c50ae3a91cd08262 | "2021-02-16T14:06:32Z" | python | "2021-05-29T15:01:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,252 | ["airflow/models/baseoperator.py", "tests/core/test_core.py"] | Unable to clear Failed task with retries |
**Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): NA
**Environment**: Windows WSL2 (Ubuntu) Local
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04
- **Kernel** (e.g. `uname -a`): Linux d255bce4dcd5 5.4.72-microsoft-standard-WSL2
- **Install tools**: Docker -compose
- **Others**:
**What happened**:
I have a DAG with the following tasks:
Task 1 - Get date
Task 2 - Get data from an API call (retries set to 3)
Task 3 - Load data
Task 2 failed after three attempts. I am unable to clear the task instance and get the below error in the UI.
[Dag Code](https://github.com/anilkulkarni87/airflow-docker/blob/master/dags/covidNyDaily.py)
```
Python version: 3.8.7
Airflow version: 2.0.1rc2
Node: d255bce4dcd5
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1547, in clear
return self._clear_dag_tis(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1475, in _clear_dag_tis
count = dag.clear(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dag.py", line 1324, in clear
clear_task_instances(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 160, in clear_task_instances
ti.max_tries = ti.try_number + task_retries - 1
TypeError: unsupported operand type(s) for +: 'int' and 'str'
```
**What you expected to happen**:
I expected to clear the Task Instance so that the task could be scheduled again.
**How to reproduce it**:
1) Clone the repo link shared above
2) Follow instructions to setup cluster.
3) Change code to enforce error in Task 2
4) Execute and try to clear task instance after three attempts.
![Error pops up when clicked on Clear](https://user-images.githubusercontent.com/10644132/107998258-8e1ee180-6f99-11eb-8442-0c0be5b23478.png)
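The `int + str` in the traceback suggests the task's `retries` value ended up as a string; a minimal self-contained sketch of the arithmetic that fails, and the obvious guard (an illustration, not the reporter's DAG):
```python
# What clearing effectively computes: max_tries = try_number + retries - 1
try_number = 4
retries = "3"                                # e.g. read from a Variable or env var as a string

try:
    max_tries = try_number + retries - 1
except TypeError as err:
    print(err)                               # unsupported operand type(s) for +: 'int' and 'str'

max_tries = try_number + int(retries) - 1    # coercing to int avoids the failure
print(max_tries)                             # 6
```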
| https://github.com/apache/airflow/issues/14252 | https://github.com/apache/airflow/pull/16415 | 643f3c35a6ba3def40de7db8e974c72e98cfad44 | 15ff2388e8a52348afcc923653f85ce15a3c5f71 | "2021-02-15T22:27:00Z" | python | "2021-06-13T00:29:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,249 | ["airflow/models/dagrun.py"] | both airflow dags test and airflow backfill cli commands got same error in airflow Version 2.0.1 | **Apache Airflow version: 2.0.1**
**What happened:**
Running the `airflow dags test` or `airflow dags backfill` CLI command shown in the tutorial produces the same error.
**dags test cli command result:**
```
(airflow_venv) (base) app@lunar_01:airflow$ airflow dags test tutorial 2015-06-01
[2021-02-16 04:29:22,355] {dagbag.py:448} INFO - Filling up the DagBag from /home/app/Lunar/src/airflow/dags
[2021-02-16 04:29:22,372] {example_kubernetes_executor_config.py:174} WARNING - Could not import DAGs in example_kubernetes_executor_config.py: No module named 'kubernetes'
[2021-02-16 04:29:22,373] {example_kubernetes_executor_config.py:175} WARNING - Install kubernetes dependencies with: pip install apache-airflow['cncf.kubernetes']
Traceback (most recent call last):
File "/home/app/airflow_venv/bin/airflow", line 10, in <module>
sys.exit(main())
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/commands/dag_command.py", line 389, in dag_test
dag.run(executor=DebugExecutor(), start_date=args.execution_date, end_date=args.execution_date)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/models/dag.py", line 1706, in run
job.run()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/base_job.py", line 237, in run
self._execute()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 805, in _execute
session=session,
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 715, in _execute_for_run_dates
tis_map = self._task_instances_for_dag_run(dag_run, session=session)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 359, in _task_instances_for_dag_run
dag_run.refresh_from_db()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/models/dagrun.py", line 178, in refresh_from_db
DR.run_id == self.run_id,
File "/home/app/airflow_venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3500, in one
raise orm_exc.NoResultFound("No row was found for one()")
sqlalchemy.orm.exc.NoResultFound: No row was found for one()
```
**backfill cli command result:**
```
(airflow_venv) (base) app@lunar_01:airflow$ airflow dags backfill tutorial --start-date 2015-06-01 --end-date 2015-06-07
/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/commands/dag_command.py:62 PendingDeprecationWarning: --ignore-first-depends-on-past is deprecated as the value is always set to True
[2021-02-16 04:30:16,979] {dagbag.py:448} INFO - Filling up the DagBag from /home/app/Lunar/src/airflow/dags
[2021-02-16 04:30:16,996] {example_kubernetes_executor_config.py:174} WARNING - Could not import DAGs in example_kubernetes_executor_config.py: No module named 'kubernetes'
[2021-02-16 04:30:16,996] {example_kubernetes_executor_config.py:175} WARNING - Install kubernetes dependencies with: pip install apache-airflow['cncf.kubernetes']
Traceback (most recent call last):
File "/home/app/airflow_venv/bin/airflow", line 10, in <module>
sys.exit(main())
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/cli/commands/dag_command.py", line 116, in dag_backfill
run_backwards=args.run_backwards,
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/models/dag.py", line 1706, in run
job.run()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/base_job.py", line 237, in run
self._execute()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 805, in _execute
session=session,
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 715, in _execute_for_run_dates
tis_map = self._task_instances_for_dag_run(dag_run, session=session)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/jobs/backfill_job.py", line 359, in _task_instances_for_dag_run
dag_run.refresh_from_db()
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/app/airflow_venv/lib/python3.7/site-packages/airflow/models/dagrun.py", line 178, in refresh_from_db
DR.run_id == self.run_id,
File "/home/app/airflow_venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3500, in one
raise orm_exc.NoResultFound("No row was found for one()")
sqlalchemy.orm.exc.NoResultFound: No row was found for one()
``` | https://github.com/apache/airflow/issues/14249 | https://github.com/apache/airflow/pull/16809 | 2b7c59619b7dd6fd5031745ade7756466456f803 | faffaec73385db3c6910d31ccea9fc4f9f3f9d42 | "2021-02-15T20:42:29Z" | python | "2021-07-07T11:04:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,208 | ["airflow/configuration.py", "docs/apache-airflow/howto/set-up-database.rst"] | Python 3.8 - Sqlite3 version error | python 3.8 centos 7 Docker image
Can't update sqlite3. Tried Airflow 2.0.1 and 2.0.0. Same issue on Python 3.6 with Airflow 2.0.0. I was able to force the install on 2.0.0 but when running a task it failed because of the sqlite3 version mismatch.
Am I just stupid? #13496
```
(app-root) airflow db init
Traceback (most recent call last):
File "/opt/app-root/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/opt/app-root/lib64/python3.8/site-packages/airflow/__init__.py", line 34, in <module>
from airflow import settings
File "/opt/app-root/lib64/python3.8/site-packages/airflow/settings.py", line 37, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/opt/app-root/lib64/python3.8/site-packages/airflow/configuration.py", line 1007, in <module>
conf.validate()
File "/opt/app-root/lib64/python3.8/site-packages/airflow/configuration.py", line 209, in validate
self._validate_config_dependencies()
File "/opt/app-root/lib64/python3.8/site-packages/airflow/configuration.py", line 246, in _validate_config_dependencies
raise AirflowConfigException(f"error: cannot use sqlite version < {min_sqlite_version}")
airflow.exceptions.AirflowConfigException: error: cannot use sqlite version < 3.15.0
(app-root) python -c "import sqlite3; print(sqlite3.sqlite_version)"
3.7.17
(app-root) python --version
Python 3.8.6
(app-root) pip install --upgrade sqlite3
ERROR: Could not find a version that satisfies the requirement sqlite3 (from versions: none)
ERROR: No matching distribution found for sqlite3
``` | https://github.com/apache/airflow/issues/14208 | https://github.com/apache/airflow/pull/14209 | 59c94c679e996ab7a75b4feeb1755353f60d030f | 4c90712f192dd552d1791712a49bcdc810ebe82f | "2021-02-12T15:57:55Z" | python | "2021-02-13T17:46:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,202 | ["chart/templates/scheduler/scheduler-deployment.yaml"] | Scheduler in helm chart cannot access DAG with git sync |
**Apache Airflow version**:
2.0.1
**What happened**:
When dags `git-sync` is `true` and `persistent` is `false`, `airflow dags list` returns nothing and the `DAGS Folder` is empty
**What you expected to happen**:
Scheduler container should still have a volumeMount to read the volume `dags` populated by the `git-sync` container
**How to reproduce it**:
```
--set dags.persistence.enabled=false \
--set dags.gitSync.enabled=true \
```
Scheduler cannot access git-sync DAG as Scheduler's configured `DAGS Folder` path isn't mounted on the volume `dags`
| https://github.com/apache/airflow/issues/14202 | https://github.com/apache/airflow/pull/14203 | 8f21fb1bf77fc67e37dc13613778ff1e6fa87cea | e164080479775aca53146331abf6f615d1f03ff0 | "2021-02-12T06:56:10Z" | python | "2021-02-19T01:03:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,200 | ["docs/apache-airflow/best-practices.rst", "docs/apache-airflow/security/index.rst", "docs/apache-airflow/security/secrets/secrets-backend/index.rst"] | Update Best practises doc | Update https://airflow.apache.org/docs/apache-airflow/stable/best-practices.html#variables to use Secret Backend (especially Environment Variables) as it asks user not to use Variable in top level | https://github.com/apache/airflow/issues/14200 | https://github.com/apache/airflow/pull/17319 | bcf719bfb49ca20eea66a2527307968ff290c929 | 2c1880a90712aa79dd7c16c78a93b343cd312268 | "2021-02-11T19:31:08Z" | python | "2021-08-02T20:43:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,182 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Scheduler dies if executor_config isn't passed a dict when using K8s executor | **Apache Airflow version**: 2.0.1
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.15
**Environment**:
- **Cloud provider or hardware configuration**: k8s on bare metal
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: pip3
- **Others**:
**What happened**:
Scheduler dies with
```
[2021-02-10 21:09:27,469] {scheduler_job.py:1298} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1384, in _run_scheduler_loop
self.executor.heartbeat()
File "/usr/local/lib/python3.8/site-packages/airflow/executors/base_executor.py", line 158, in heartbeat
self.trigger_tasks(open_slots)
File "/usr/local/lib/python3.8/site-packages/airflow/executors/base_executor.py", line 188, in trigger_tasks
self.execute_async(key=key, command=command, queue=None, executor_config=ti.executor_config)
File "/usr/local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 493, in execute_async
kube_executor_config = PodGenerator.from_obj(executor_config)
File "/usr/local/lib/python3.8/site-packages/airflow/kubernetes/pod_generator.py", line 175, in from_obj
k8s_legacy_object = obj.get("KubernetesExecutor", None)
AttributeError: 'V1Pod' object has no attribute 'get'
[2021-02-10 21:09:28,475] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 60
[2021-02-10 21:09:29,222] {process_utils.py:66} INFO - Process psutil.Process(pid=66, status='terminated', started='21:09:27') (66) terminated with exit code None
[2021-02-10 21:09:29,697] {process_utils.py:206} INFO - Waiting up to 5 seconds for processes to exit...
[2021-02-10 21:09:29,716] {process_utils.py:66} INFO - Process psutil.Process(pid=75, status='terminated', started='21:09:28') (75) terminated with exit code None
[2021-02-10 21:09:29,717] {process_utils.py:66} INFO - Process psutil.Process(pid=60, status='terminated', exitcode=0, started='21:09:27') (60) terminated with exit code 0
[2021-02-10 21:09:29,717] {scheduler_job.py:1301} INFO - Exited execute loop
```
**What you expected to happen**:
DAG loading fails, producing an error for just that DAG, instead of crashing the scheduler.
**How to reproduce it**:
Create a task like
```
test = DummyOperator(task_id="new-pod-spec",
executor_config=k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base",
image="myimage",
image_pull_policy="Always"
)
]
)))
```
or
```
test = DummyOperator(task_id="new-pod-spec",
executor_config={"KubernetesExecutor": k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base",
image="myimage",
image_pull_policy="Always"
)
]
))})
```
Essentially, use anything where a dict is expected but something else is passed, and run the scheduler using the Kubernetes executor.
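A sketch of the kind of defensive check that would turn this into a clear per-task error instead of a scheduler crash (`from_obj` below is a stand-in written for illustration, not the actual Airflow function body):
```python
from typing import Any, Optional

def from_obj(obj: Any) -> Optional[dict]:
    """Stand-in for PodGenerator.from_obj: validate before calling .get()."""
    if obj is None:
        return None
    if not isinstance(obj, dict):
        raise TypeError(
            "executor_config must be a dict such as "
            '{"KubernetesExecutor": {...}} or {"pod_override": V1Pod(...)}, '
            f"got {type(obj).__name__}"
        )
    return obj.get("KubernetesExecutor")
```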
| https://github.com/apache/airflow/issues/14182 | https://github.com/apache/airflow/pull/14323 | 68ccda38a7877fdd0c3b207824c11c9cd733f0c6 | e0ee91e15f8385e34e3d7dfc8a6365e350ea7083 | "2021-02-10T21:36:28Z" | python | "2021-02-20T00:46:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,178 | ["chart/templates/configmaps/configmap.yaml", "chart/templates/configmaps/webserver-configmap.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/tests/test_webserver_deployment.py"] | Split out Airflow Configmap (webserver_config.py) | **Description**
Although the changes to FAB that include the [syncing of roles on login](https://github.com/dpgaspar/Flask-AppBuilder/commit/dbe1eded6369c199b777836eb08d829ba37634d7) hasn't been officially released, I'm proposing that we make some changes to the [airflow configmap](https://github.com/apache/airflow/blob/master/chart/templates/configmaps/configmap.yaml) in preparation for it.
Currently, this configmap contains the `airflow.cfg`, `webserver_config.py`, `airflow_local_settings.py`, `known_hosts`, `pod_template_file.yaml`, and the `krb5.conf`. With all of these tied together, changes to any of the contents across the listed files will force a restart for the flower deployment, scheduler deployment, worker deployment, and the webserver deployment through the setting of the `checksum/airflow-config` in each.
The reason I would like to split out at _least_ the `webserver_config.py` from the greater configmap is that I would like to have the opportunity to make incremental changes to the [AUTH_ROLES_MAPPING](https://github.com/dpgaspar/Flask-AppBuilder/blob/dbe1eded6369c199b777836eb08d829ba37634d7/docs/config.rst#configuration-keys) in that config without having to force restarts for all of the previously listed services apart from the webserver. Currently, if I were to add an additional group mapping that has no bearing on the workers/schedulers/flower these services would incur some down time despite not even mounting in this specific file to their pods. | https://github.com/apache/airflow/issues/14178 | https://github.com/apache/airflow/pull/14353 | a48bedf26d0f04901555187aed83296190604813 | 0891a8ea73813d878c0d00fbfdb59fa360e8d1cf | "2021-02-10T17:56:57Z" | python | "2021-02-22T20:17:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,163 | ["airflow/executors/celery_executor.py", "tests/executors/test_celery_executor.py"] | TypeError: object of type 'map' has no len(): When celery executor multi-processes to get Task Instances |
**Apache Airflow version**:
2.0.1
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): "18.04.1 LTS (Bionic Beaver)"
- **Kernel** (e.g. `uname -a`): 4.15.0-130-generic #134-Ubuntu
- **Install tools**:
- **Others**:
**What happened**:
I intermittently observe the following exception in the scheduler:
```
[2021-02-10 03:51:26,651] {scheduler_job.py:1298} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1280, in _execute
self._run_scheduler_loop()
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1354, in _run_scheduler_loop
self.adopt_or_reset_orphaned_tasks()
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper
return func(*args, session=session, **kwargs)
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1837, in adopt_or_reset_orphaned_tasks
for attempt in run_with_db_retries(logger=self.log):
File "/home/foo/bar/.env38/lib/python3.8/site-packages/tenacity/__init__.py", line 390, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/foo/bar/.env38/lib/python3.8/site-packages/tenacity/__init__.py", line 356, in iter
return fut.result()
File "/home/foo/py_src/lib/python3.8/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/home/foo/py_src/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
raise self._exception
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1882, in adopt_or_reset_orphaned_tasks
to_reset = self.executor.try_adopt_task_instances(tis_to_reset_or_adopt)
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 478, in try_adopt_task_instances
states_by_celery_task_id = self.bulk_state_fetcher.get_many(
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 554, in get_many
result = self._get_many_using_multiprocessing(async_results)
File "/home/foo/bar/.env38/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 595, in _get_many_using_multiprocessing
num_process = min(len(async_results), self._sync_parallelism)
TypeError: object of type 'map' has no len()
```
**What you expected to happen**:
I think the `len` should not be called on the `async_results`, or `map` should not be used in `try_adopt_task_instances`.
**How to reproduce it**:
Not sure how I can reproduce it. But, here are the offending lines:
https://github.com/apache/airflow/blob/90ab60bba877c65cb93871b97db13a179820d28b/airflow/executors/celery_executor.py#L479
Then, this branch gets hit:
https://github.com/apache/airflow/blob/90ab60bba877c65cb93871b97db13a179820d28b/airflow/executors/celery_executor.py#L554
Then, we see the failure here:
https://github.com/apache/airflow/blob/90ab60bba877c65cb93871b97db13a179820d28b/airflow/executors/celery_executor.py#L595
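A minimal sketch of the kind of fix I have in mind (function and variable names are simplified stand-ins for the linked lines, not the actual Airflow code): materialize the `map` into a list before calling `len` on it.

```python
# Illustrative sketch only: shows why len() fails on a map object and how
# materializing it into a list avoids the TypeError above.
def get_many_using_multiprocessing(async_results, sync_parallelism=2):
    results = list(async_results)  # a map object has no len(); a list does
    num_process = min(len(results), sync_parallelism)
    return num_process


lazy_results = map(str, [1, 2, 3])  # stand-in for the map built in try_adopt_task_instances
print(get_many_using_multiprocessing(lazy_results))  # prints 2
```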
| https://github.com/apache/airflow/issues/14163 | https://github.com/apache/airflow/pull/14883 | aebacd74058d01cfecaf913c04c0dbc50bb188ea | 4ee442970873ba59ee1d1de3ac78ef8e33666e0f | "2021-02-10T04:13:18Z" | python | "2021-04-06T09:21:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,106 | ["airflow/lineage/__init__.py", "airflow/lineage/backend.py", "docs/apache-airflow/lineage.rst", "tests/lineage/test_lineage.py"] | Lineage Backend removed for no reason |
**Description**
The ability to add a lineage backend was removed in https://github.com/apache/airflow/pull/6564 and was never reintroduced. Now that this code is in 2.0, the lineage information is only available in XComs, and the only way to get it is through an experimental API, which isn't very practical either.
**Use case / motivation**
A custom callback at the time lineage gets pushed is enough to send the lineage information to whatever lineage backend the user has.
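A rough sketch of the kind of callback hook I mean (the class and method names below are my assumption for illustration, not a final API):

```python
# Hypothetical sketch of a pluggable lineage backend; names are illustrative only.
class LineageBackend:
    """Base class a user could subclass and point to from airflow.cfg."""

    def send_lineage(self, operator, inlets=None, outlets=None, context=None):
        raise NotImplementedError


class PrintingLineageBackend(LineageBackend):
    """Toy backend: a real one would forward the data to Atlas, Marquez, etc."""

    def send_lineage(self, operator, inlets=None, outlets=None, context=None):
        print(f"lineage for {operator.task_id}: inlets={inlets} outlets={outlets}")
```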
**Are you willing to submit a PR?**
I'd be willing to make a PR recovering the LineageBackend and add changes if needed, unless there is a different plan for lineage from the maintainers.
| https://github.com/apache/airflow/issues/14106 | https://github.com/apache/airflow/pull/14146 | 9ac1d0a3963b0e152cb2ba4a58b14cf6b61a73a0 | af2d11e36ed43b0103a54780640493b8ae46d70e | "2021-02-05T16:47:46Z" | python | "2021-04-03T08:26:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,104 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | BACKEND: Unbound Variable issue in docker entrypoint | This is NOT a bug in Airflow; I'm writing this issue for documentation in case someone comes across this same issue and needs to identify how to solve it. Please tag as appropriate.
**Apache Airflow version**: Docker 2.0.1rc2
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**: Dockered
- **Cloud provider or hardware configuration**: VMWare VM
- **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS
- **Kernel** (e.g. `uname -a`): 4.15.0-128-generic
- **Install tools**: Just docker/docker-compose
- **Others**:
**What happened**:
Worker, webserver and scheduler docker containers do not start, errors:
<details><summary>/entrypoint: line 71: BACKEND: unbound variable</summary>
af_worker | /entrypoint: line 71: BACKEND: unbound variable
af_worker | /entrypoint: line 71: BACKEND: unbound variable
af_worker exited with code 1
af_webserver | /entrypoint: line 71: BACKEND: unbound variable
af_webserver | /entrypoint: line 71: BACKEND: unbound variable
af_webserver | /entrypoint: line 71: BACKEND: unbound variable
af_webserver | /entrypoint: line 71: BACKEND: unbound variable
</details>
**What you expected to happen**:
Docker containers to start
**How to reproduce it**:
Whatever docker-compose file I was copying has a MySQL connection string that is not compatible with: https://github.com/apache/airflow/blob/bc026cf6961626dd01edfaf064562bfb1f2baf42/scripts/in_container/prod/entrypoint_prod.sh#L58 -- Specifically, the connection string in the docker-compose file did not have a password and no `:` separator for a blank password.
Original Connection String: `mysql://root@mysql/airflow?charset=utf8mb4`
**Anything else we need to know**:
The solution is to use a password, or at the very least add the : to the user:password section
Fixed Connection String: `mysql://root:@mysql/airflow?charset=utf8mb4`
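For illustration, a small Python check of why the `:` matters (the regex below is only an assumption that mimics a `user:password@host` requirement; it is not the actual pattern used by the entrypoint script):

```python
import re

# Hypothetical pattern requiring user:password@host, like the entrypoint's detection.
PATTERN = re.compile(r"^[^:]+://([^:@]+):([^@]*)@([^/:]+)")

for url in (
    "mysql://root@mysql/airflow?charset=utf8mb4",   # original: no ':' separator, no match
    "mysql://root:@mysql/airflow?charset=utf8mb4",  # fixed: blank password but ':' present, matches
):
    print(url, "->", bool(PATTERN.match(url)))
```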
| https://github.com/apache/airflow/issues/14104 | https://github.com/apache/airflow/pull/14124 | d77f79d134e0d14443f75325b24dffed4b779920 | b151b5eea5057f167bf3d2f13a84ab4eb8e42734 | "2021-02-05T15:31:07Z" | python | "2021-03-22T15:42:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,097 | ["UPDATING.md", "airflow/contrib/sensors/gcs_sensor.py", "airflow/providers/google/BACKPORT_PROVIDER_README.md", "airflow/providers/google/cloud/sensors/gcs.py", "tests/always/test_project_structure.py", "tests/deprecated_classes.py", "tests/providers/google/cloud/sensors/test_gcs.py"] | Typo in Sensor: GCSObjectsWtihPrefixExistenceSensor (should be GCSObjectsWithPrefixExistenceSensor) | Typo in Google Cloud Storage sensor: airflow/providers/google/cloud/sensors/gcs/GCSObjectsWithPrefixExistenceSensor
The word _With_ is spelled incorrectly. It should be: GCSObjects**With**PrefixExistenceSensor
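A typical way to fix such a typo without breaking existing DAGs is to rename the class and keep a deprecated alias; a rough sketch (the stand-in base class below is only there to keep the example self-contained):

```python
import warnings


class GCSObjectsWithPrefixExistenceSensor:
    """Stand-in for the real sensor, with the corrected spelling."""

    def __init__(self, *, bucket, prefix, **kwargs):
        self.bucket = bucket
        self.prefix = prefix


class GCSObjectsWtihPrefixExistenceSensor(GCSObjectsWithPrefixExistenceSensor):
    """Deprecated alias so DAGs importing the misspelled name keep working."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "GCSObjectsWtihPrefixExistenceSensor is deprecated; "
            "use GCSObjectsWithPrefixExistenceSensor instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```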
**Apache Airflow version**: 2.0.0
**Environment**:
- **Cloud provider or hardware configuration**: Google Cloud
- **OS** (e.g. from /etc/os-release): Mac OS BigSur
| https://github.com/apache/airflow/issues/14097 | https://github.com/apache/airflow/pull/14179 | 6dc6339635f41a9fa50a987c4fdae5af0bae9fdc | e3bcaa3ba351234effe52ad380345c4e39003fcb | "2021-02-05T12:13:09Z" | python | "2021-02-12T20:14:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,089 | ["airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/log/s3_task_handler.py", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/log/test_s3_task_handler.py"] | S3 Remote Logging Kubernetes Executor worker task keeps waiting to send log: "acquiring 0" |
**Apache Airflow version**: 2.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.16.15
**Environment**:
- **Cloud provider or hardware configuration**: AWS
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**: Airflow Helm Chart
- **Others**:
**What happened**:
A running task in a worker created by the Kubernetes Executor keeps running with no progress being made. I checked the log and I see that it is "stuck" at `[2021-02-05 01:07:17,316] {utils.py:580} DEBUG - Acquiring 0`.
I see it is able to talk to S3; in particular, it does a HEAD request to see if the key exists in S3 and gets a 404, which means the object does not exist in S3. After that, the logs just stop and it seems to be waiting. No more logs show up about what is going on.
I am using an access point for the s3 remote base log folder, and that works in Airflow 1.10.14.
Running the following, a simple DAG that should just print a statement:
```
airflow@testdag2dbdbscouter-b7f961ff64d6490e80c5cfa2fd33a37c:/opt/airflow$ airflow tasks run test_dag-2 dbdb-scouter now --local --pool default_pool --subdir /usr/airflow/dags/monitoring_and_alerts/test_dag2.py
[2021-02-05 02:04:22,962] {settings.py:208} DEBUG - Setting up DB connection pool (PID 185)
[2021-02-05 02:04:22,963] {settings.py:279} DEBUG - settings.prepare_engine_args(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=185
[2021-02-05 02:04:23,164] {cli_action_loggers.py:40} DEBUG - Adding <function default_action_log at 0x7f0984c30290> to pre execution callback
[2021-02-05 02:04:30,379] {cli_action_loggers.py:66} DEBUG - Calling callbacks: [<function default_action_log at 0x7f0984c30290>]
[2021-02-05 02:04:30,499] {settings.py:208} DEBUG - Setting up DB connection pool (PID 185)
[2021-02-05 02:04:30,499] {settings.py:241} DEBUG - settings.prepare_engine_args(): Using NullPool
[2021-02-05 02:04:30,500] {dagbag.py:440} INFO - Filling up the DagBag from /usr/airflow/dags/monitoring_and_alerts/test_dag2.py
[2021-02-05 02:04:30,500] {dagbag.py:279} DEBUG - Importing /usr/airflow/dags/monitoring_and_alerts/test_dag2.py
[2021-02-05 02:04:30,511] {dagbag.py:405} DEBUG - Loaded DAG <DAG: test_dag-2>
[2021-02-05 02:04:30,567] {plugins_manager.py:270} DEBUG - Loading plugins
[2021-02-05 02:04:30,567] {plugins_manager.py:207} DEBUG - Loading plugins from directory: /opt/airflow/plugins
[2021-02-05 02:04:30,567] {plugins_manager.py:184} DEBUG - Loading plugins from entrypoints
[2021-02-05 02:04:30,671] {plugins_manager.py:414} DEBUG - Integrate DAG plugins
Running <TaskInstance: test_dag-2.dbdb-scouter 2021-02-05T02:04:23.265117+00:00 [None]> on host testdag2dbdbscouter-b7f961ff64d6490e80c5cfa2fd33a37c
```
If I check the logs directory and open the log, I see that the log ends with:
```
[2021-02-05 01:07:17,314] {retryhandler.py:187} DEBUG - No retry needed.
[2021-02-05 01:07:17,314] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f3e27182b80>>
[2021-02-05 01:07:17,314] {utils.py:1186} DEBUG - S3 request was previously to an accesspoint, not redirecting.
[2021-02-05 01:07:17,316] {utils.py:580} DEBUG - Acquiring 0
```
If I do a manual keyboard interrupt to terminate the running task, I see the following:
```
[2021-02-05 02:11:30,103] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f097f048110>
[2021-02-05 02:11:30,103] {retryhandler.py:187} DEBUG - No retry needed.
[2021-02-05 02:11:30,103] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f097f0293d0>>
[2021-02-05 02:11:30,103] {utils.py:1187} DEBUG - S3 request was previously to an accesspoint, not redirecting.
[2021-02-05 02:11:30,105] {utils.py:580} DEBUG - Acquiring 0
[2021-02-05 02:11:30,105] {futures.py:277} DEBUG - TransferCoordinator(transfer_id=0) cancel(cannot schedule new futures after interpreter shutdown) called
[2021-02-05 02:11:30,105] {s3_task_handler.py:193} ERROR - Could not write logs to s3://arn:aws:s3:us-west-2:<ACCOUNT>:accesspoint:<BUCKET,PATH>/2021-02-05T02:04:23.265117+00:00/1.log
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/log/s3_task_handler.py", line 190, in s3_write
encrypt=conf.getboolean('logging', 'ENCRYPT_S3_LOGS'),
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 61, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 90, in wrapper
return func(*bound_args.args, **bound_args.kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 547, in load_string
self._upload_file_obj(file_obj, key, bucket_name, replace, encrypt, acl_policy)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/s3.py", line 638, in _upload_file_obj
client.upload_fileobj(file_obj, bucket_name, key, ExtraArgs=extra_args)
File "/home/airflow/.local/lib/python3.7/site-packages/boto3/s3/inject.py", line 538, in upload_fileobj
extra_args=ExtraArgs, subscribers=subscribers)
File "/home/airflow/.local/lib/python3.7/site-packages/s3transfer/manager.py", line 313, in upload
call_args, UploadSubmissionTask, extra_main_kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/s3transfer/manager.py", line 471, in _submit_transfer
main_kwargs=main_kwargs
File "/home/airflow/.local/lib/python3.7/site-packages/s3transfer/futures.py", line 467, in submit
future = ExecutorFuture(self._executor.submit(task))
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 165, in submit
raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
```
My Airflow config:
```
[logging]
base_log_folder = /opt/airflow/logs
remote_logging = True
remote_log_conn_id = S3Conn
google_key_path =
remote_base_log_folder = s3://arn:aws:s3:us-west-2:<ACCOUNT>:accesspoint:<BUCKET>/logs
encrypt_s3_logs = False
logging_level = DEBUG
fab_logging_level = WARN
```
**What you expected to happen**:
I expected the log to be sent to S3, but the task just waits at this point.
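For what it's worth, a single-shot `put_object` call does not go through the multi-threaded transfer manager that produces the `cannot schedule new futures after interpreter shutdown` error shown above. The sketch below (bucket and key are placeholders) is only meant to illustrate that difference, not to claim it is the right fix:

```python
import boto3


def write_log_directly(log_text: str, bucket: str, key: str) -> None:
    """Upload the log in the calling thread, bypassing s3transfer's thread pool."""
    client = boto3.client("s3")
    # upload_fileobj hands the work to a thread pool; put_object does not.
    client.put_object(Bucket=bucket, Key=key, Body=log_text.encode("utf-8"))
```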
**How to reproduce it**:
Extended the docker image, and baked in the test dag:
```
FROM apache/airflow:2.0.0-python3.7
COPY requirements.txt /requirements.txt
RUN pip install --user -r /requirements.txt
ENV AIRFLOW_DAG_FOLDER="/usr/airflow"
COPY --chown=airflow:root ./airflow ${AIRFLOW_DAG_FOLDER}
```
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14089 | https://github.com/apache/airflow/pull/14414 | 3dc762c8177264001793e20543c24c6414c14960 | 0d6cae4172ff185ec4c0fc483bf556ce3252b7b0 | "2021-02-05T02:30:53Z" | python | "2021-02-24T13:42:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,077 | ["airflow/providers/google/marketing_platform/hooks/display_video.py", "airflow/providers/google/marketing_platform/operators/display_video.py", "tests/providers/google/marketing_platform/hooks/test_display_video.py", "tests/providers/google/marketing_platform/operators/test_display_video.py"] | GoogleDisplayVideo360Hook.download_media does not pass the resourceName correctly | **Apache Airflow version**: 1.10.12
**Environment**: Google Cloud Composer 1.13.3
- **Cloud provider or hardware configuration**:
- Google Cloud Composer
**What happened**:
The GoogleDisplayVideo360Hook.download_media hook tries to download media using the "resource_name" argument. However, [per the API spec](https://developers.google.com/display-video/api/reference/rest/v1/media/download), it should pass "resourceName". Thus, it breaks every time and can never download media.
Error: `ERROR - Got an unexpected keyword argument "resource_name"`
**What you expected to happen**: The hook should pass in the correct resourceName and then download the media file.
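A minimal sketch of the point (the fake client below is a stand-in so the example is self-contained; only the camelCase keyword matters):

```python
# Illustrative stand-in, not the real hook or Google client.
class FakeMediaApi:
    def download_media(self, *, resourceName):  # the API expects camelCase
        return f"downloading {resourceName}"


def download_media(api, resource_name: str):
    # Passing resource_name=... here would raise
    # "got an unexpected keyword argument 'resource_name'", as in the error above.
    return api.download_media(resourceName=resource_name)


print(download_media(FakeMediaApi(), "media/placeholder-id"))
```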
**How to reproduce it**: Run any workflow that tries to download any DV360 media.
**Anything else we need to know**: I have written a patch that fixes the issue and will submit it shortly. | https://github.com/apache/airflow/issues/14077 | https://github.com/apache/airflow/pull/20528 | af4a2b0240fbf79a0a6774a9662243050e8fea9c | a6e60ce25d9f3d621a7b4089834ca5e50cd123db | "2021-02-04T16:35:25Z" | python | "2021-12-30T12:48:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,071 | ["airflow/providers/jenkins/operators/jenkins_job_trigger.py", "tests/providers/jenkins/operators/test_jenkins_job_trigger.py"] | Add support for UNSTABLE Jenkins status | **Description**
Don't mark the DAG as `failed` when an `UNSTABLE` status is received from Jenkins.
It can be done by adding an `allow_unstable: bool` or `success_status_values: list` parameter to `JenkinsJobTriggerOperator.__init__`. For now the `SUCCESS` status is hardcoded; any other status leads to failure.
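A rough sketch of the `allow_unstable` variant (parameter and attribute names below are my assumption, not an agreed design):

```python
# Illustrative sketch of the proposed parameter; not the actual operator code.
class JenkinsJobTriggerOperatorSketch:
    def __init__(self, *, job_name: str, allow_unstable: bool = False, **kwargs):
        self.job_name = job_name
        # Statuses treated as success; UNSTABLE is accepted only when opted in.
        self.ok_statuses = {"SUCCESS"} | ({"UNSTABLE"} if allow_unstable else set())

    def check_build_result(self, build_status: str) -> None:
        if build_status not in self.ok_statuses:
            raise RuntimeError(f"Jenkins build finished with status {build_status}")
```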
**Use case / motivation**
I want to restart a job (`retries` parameter) only if I get a `FAILED` status. `UNSTABLE` is okay for me, and there is no need to restart.
**Are you willing to submit a PR?**
Yes
**Related Issues**
No
| https://github.com/apache/airflow/issues/14071 | https://github.com/apache/airflow/pull/14131 | f180fa13bf2a0ffa31b30bb21468510fe8a20131 | 78adaed5e62fa604d2ef2234ad560eb1c6530976 | "2021-02-04T15:20:47Z" | python | "2021-02-08T21:43:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,054 | ["airflow/providers/samba/hooks/samba.py", "docs/apache-airflow-providers-samba/index.rst", "setup.py", "tests/providers/samba/hooks/test_samba.py"] | SambaHook using old unmaintained library |
**Description**
The [SambaHook](https://github.com/apache/airflow/blob/master/airflow/providers/samba/hooks/samba.py#L26) is currently using [pysmbclient](https://github.com/apache/airflow/blob/master/setup.py#L408); this library hasn't been updated since 2017: https://pypi.org/project/PySmbClient/
I think it is worth moving to https://pypi.org/project/smbprotocol/, which is newer and maintained.
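A small sketch of what usage could look like on top of smbprotocol's high-level `smbclient` module (the server, share, and credentials below are placeholders, and I have not checked how this maps onto the existing `SambaHook` API):

```python
import smbclient  # provided by the smbprotocol package

# Placeholder connection details, for illustration only.
smbclient.register_session("fileserver.example.com", username="user", password="secret")

# open_file mirrors the built-in open() interface for UNC paths.
with smbclient.open_file(r"\\fileserver.example.com\share\report.csv", mode="r") as fd:
    print(fd.read())
```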
| https://github.com/apache/airflow/issues/14054 | https://github.com/apache/airflow/pull/17273 | 6cc252635db6af6b0b4e624104972f0567f21f2d | f53dace36c707330e01c99204e62377750a5fb1f | "2021-02-03T23:05:43Z" | python | "2021-08-01T21:38:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,051 | ["docs/build_docs.py", "docs/exts/docs_build/spelling_checks.py", "docs/spelling_wordlist.txt"] | Docs Builder creates SpellingError for Sphinx error unrelated to spelling issues |
**Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**Environment**:
- **Cloud provider or hardware configuration**: n/a
- **OS** (e.g. from /etc/os-release): n/a
- **Kernel** (e.g. `uname -a`): n/a
- **Install tools**: n/a
- **Others**: n/a
**What happened**:
A Sphinx warning unrelated to spelling issues while running `sphinx-build` resulted in an instance of `SpellingError` being raised, causing a docs build failure.
```
SpellingError(
file_path=None,
line_no=None,
spelling=None,
suggestion=None,
context_line=None,
message=(
f"Sphinx spellcheck returned non-zero exit status: {completed_proc.returncode}."
)
)
# sphinx.errors.SphinxWarning: /opt/airflow/docs/apache-airflow-providers-google/_api/drive/index.rst:document isn't included in any toctree
```
The actual issue was that I failed to include an `__init__.py` file in a directory that I created.
**What you expected to happen**:
I think an exception unrelated to spelling should be raised: preferably one that indicates that a directory is missing an `__init__.py` file, but at least a generic error that is not a spelling error.
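For illustration, something along these lines would keep the two failure modes apart (the `DocBuildError` name and the detection condition are just assumptions):

```python
# Sketch only: split generic sphinx-build failures from real spelling errors.
class DocBuildError(Exception):
    """Raised when sphinx-build fails for reasons other than spelling, e.g. a missing __init__.py."""


class SpellingProblem(Exception):
    """Stand-in for the existing SpellingError, kept here so the sketch is self-contained."""


def handle_build_result(returncode: int, stderr: str) -> None:
    if returncode == 0:
        return
    if "Spell check" in stderr:  # placeholder condition, not the real detection logic
        raise SpellingProblem(stderr)
    raise DocBuildError(f"sphinx-build failed with exit status {returncode}: {stderr}")
```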
**How to reproduce it**:
Create a new plugin directory (e.g. `airflow/providers/google/suite/sensors`) and don't include an `__init__.py` file, and run `./breeze build-docs -- --docs-only -v`
**Anything else we need to know**:
I'm specifically referring to lines 139 to 150 in `docs/exts/docs_build/docs_builder.py`
| https://github.com/apache/airflow/issues/14051 | https://github.com/apache/airflow/pull/14196 | e31b27d593f7379f38ced34b6e4ce8947b91fcb8 | cb4a60e9d059eeeae02909bb56a348272a55c233 | "2021-02-03T16:46:25Z" | python | "2021-02-12T23:46:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,050 | ["airflow/jobs/scheduler_job.py", "airflow/serialization/serialized_objects.py", "tests/jobs/test_scheduler_job.py", "tests/serialization/test_dag_serialization.py"] | SLA mechanism does not work | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
I have the following DAG:
```py
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
with DAG(
dag_id="sla_trigger",
schedule_interval="* * * * *",
start_date=datetime(2020, 2, 3),
) as dag:
BashOperator(
task_id="bash_task",
bash_command="sleep 30",
sla=timedelta(seconds=2),
)
```
In my understanding this DAG should result in an SLA miss every time it is triggered (every minute). However, after a few minutes of running I don't see any SLA misses...
**What you expected to happen**:
I expect to see an SLA miss if the task takes longer than expected.
**How to reproduce it**:
Use the dag from above.
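To make missed SLAs easier to observe while debugging, one could also attach an `sla_miss_callback` to the DAG; a sketch (the callback signature below is what I believe the scheduler passes, so treat it as an assumption):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator


def log_sla_miss(dag, task_list, blocking_task_list, slas, blocking_tis):
    # Called by the scheduler when an SLA is missed.
    print(f"SLA missed in {dag.dag_id}: {slas}")


with DAG(
    dag_id="sla_trigger_debug",
    schedule_interval="* * * * *",
    start_date=datetime(2020, 2, 3),
    sla_miss_callback=log_sla_miss,
) as dag:
    BashOperator(task_id="bash_task", bash_command="sleep 30", sla=timedelta(seconds=2))
```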
**Anything else we need to know**:
N/A
| https://github.com/apache/airflow/issues/14050 | https://github.com/apache/airflow/pull/14056 | 914e9ce042bf29dc50d410f271108b1e42da0add | 604a37eee50715db345c5a7afed085c9afe8530d | "2021-02-03T14:58:32Z" | python | "2021-02-04T01:59:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,046 | ["airflow/www/templates/airflow/tree.html"] | Day change flag is in wrong place | **Apache Airflow version**: 2.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
In the tree view, the "day marker" is shifted and the last dagrun of the previous day is included in the new day. See:
<img width="398" alt="Screenshot 2021-02-03 at 14 14 55" src="https://user-images.githubusercontent.com/9528307/106752180-7014c100-662a-11eb-9342-661a237ed66c.png">
The tooltip is on the 4th dagrun, but the day flag is on the same line as the 3rd one.
**What you expected to happen**:
I expect to see the day flag between the two days, not earlier.
**How to reproduce it**:
Create a DAG with `schedule_interval="5 8-23/1 * * *"`
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14046 | https://github.com/apache/airflow/pull/14141 | 0f384f0644c8cfe55ca4c75d08b707be699b440f | 6dc6339635f41a9fa50a987c4fdae5af0bae9fdc | "2021-02-03T13:19:58Z" | python | "2021-02-12T18:50:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,045 | ["docs/apache-airflow/stable-rest-api-ref.rst"] | Inexistant reference in docs/apache-airflow/stable-rest-api-ref.rst |
**Apache Airflow version**: 2.1.0.dev0 (master branch)
**What happened**:
In the file `docs/apache-airflow/stable-rest-api-ref.rst` there is a reference to a file that no longer exists: `/docs/exts/sphinx_redoc.py`.
The whole text:
```
It's a stub file. It will be converted automatically during the build process
to the valid documentation by the Sphinx plugin. See: /docs/exts/sphinx_redoc.py
```
**What you expected to happen**:
A reference to `docs/conf.py`, which I think is where the contents are now replaced during the build process.
**How to reproduce it**:
Go to the file in question.
**Anything else we need to know**:
I would've made a PR, but I'm not 100% sure whether this is wrong or I simply cannot find the referenced file.
| https://github.com/apache/airflow/issues/14045 | https://github.com/apache/airflow/pull/14079 | 2bc9b9ce2b9fdca2d29565fc833ddc3a543daaa7 | e8c7dc3f7a81fb3a7179e154920b2350f4e992c6 | "2021-02-03T11:48:55Z" | python | "2021-02-05T12:50:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 14,010 | ["airflow/www/templates/airflow/task.html"] | Order of items not preserved in Task instance view | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
The order of items is not preserved in Task Instance information:
<img width="542" alt="Screenshot 2021-02-01 at 16 49 09" src="https://user-images.githubusercontent.com/9528307/106482104-6a45a100-64ad-11eb-8d2f-e478c267bce9.png">
<img width="542" alt="Screenshot 2021-02-01 at 16 49 43" src="https://user-images.githubusercontent.com/9528307/106482167-7df10780-64ad-11eb-9434-ba3e54d56dec.png">
**What you expected to happen**:
I expect that the order will always be the same. Otherwise the UX is bad.
**How to reproduce it**:
It seems to happen randomly, but once seen, the order is then consistent for a given TI.
**Anything else we need to know**:
| https://github.com/apache/airflow/issues/14010 | https://github.com/apache/airflow/pull/14036 | 68758b826076e93fadecf599108a4d304dd87ac7 | fc67521f31a0c9a74dadda8d5f0ac32c07be218d | "2021-02-01T15:51:38Z" | python | "2021-02-05T15:38:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 13,989 | ["airflow/providers/telegram/operators/telegram.py", "tests/providers/telegram/operators/test_telegram.py"] | AttributeError: 'TelegramOperator' object has no attribute 'text' | Hi there 👋
I was playing with the **TelegramOperator** and stumbled upon a bug with the `text` field. It is supposed to be a template field, but in reality the instance of the **TelegramOperator** does not have this attribute, so every time I try to execute the code I get the error:
> AttributeError: 'TelegramOperator' object has no attribute 'text'
```python
TelegramOperator(
task_id='send_message_telegram',
telegram_conn_id='telegram_conn_id',
text='Hello from Airflow!'
)
``` | https://github.com/apache/airflow/issues/13989 | https://github.com/apache/airflow/pull/13990 | 9034f277ef935df98b63963c824ba71e0dcd92c7 | 106d2c85ec4a240605830bf41962c0197b003135 | "2021-01-30T19:25:35Z" | python | "2021-02-10T12:06:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 13,988 | ["airflow/www/utils.py", "airflow/www/views.py"] | List and Dict template fields are rendered as JSON. | **Apache Airflow version**: 2.0.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a
**Environment**: Linux
- **Cloud provider or hardware configuration**: amd64
- **OS** (e.g. from /etc/os-release): Centos 7
- **Kernel** (e.g. `uname -a`):
- **Install tools**: pip
- **Others**:
**What happened**:
The field `sql` is rendered as a serialized json `["select 1 from dual", "select 2 from dual"]` instead of a list of syntax-highlighted SQL statements.
![image](https://user-images.githubusercontent.com/5377410/106365382-2f8a1e80-6370-11eb-981a-43bf71e7b396.png)
**What you expected to happen**:
`lists` and `dicts` should be rendered as lists and dicts rather than as serialized JSON, unless the `template_field_renderer` is `json`.
![image](https://user-images.githubusercontent.com/5377410/106365216-f3a28980-636e-11eb-9c48-15deb1fbe0d7.png)
**How to reproduce it**:
```python
import pendulum

from airflow import DAG
from airflow.providers.oracle.operators.oracle import OracleOperator

with DAG("demo", default_args={'owner': 'airflow'}, start_date=pendulum.yesterday(), schedule_interval='@daily') as dag:
OracleOperator(task_id='single', sql='select 1 from dual')
OracleOperator(task_id='list', sql=['select 1 from dual', 'select 2 from dual'])
```
**Anything else we need to know**:
Introduced by #11061.
A quick and dirty work-around:
Edit file [airflow/www/views.py](https://github.com/PolideaInternal/airflow/blob/13ba1ec5494848d4a54b3291bd8db5841bfad72e/airflow/www/views.py#L673)
```
if renderer in renderers:
- if isinstance(content, (dict, list)):
+ if isinstance(content, (dict, list)) and renderer is renderers['json']:
content = json.dumps(content, sort_keys=True, indent=4)
html_dict[template_field] = renderers[renderer](content)
``` | https://github.com/apache/airflow/issues/13988 | https://github.com/apache/airflow/pull/14024 | 84ef24cae657babe3882d7ad6eecc9be9967e08f | e2a06a32c87d99127d098243b311bd6347ff98e9 | "2021-01-30T19:04:12Z" | python | "2021-02-04T08:01:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 13,985 | ["airflow/www/static/js/connection_form.js"] | Can't save any connection if provider-provided connection form widgets have fields marked as InputRequired | **Apache Airflow version**: 2.0.0 with the following patch: https://github.com/apache/airflow/pull/13640
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A
**Environment**:
- **Cloud provider or hardware configuration**: AMD Ryzen 3900X (12C/24T), 64GB RAM
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04.1 LTS
- **Kernel** (e.g. `uname -a`): 5.9.8-050908-generic
- **Install tools**: N/A
- **Others**: N/A
**What happened**:
If there are custom hooks whose `get_connection_form_widgets` method returns fields using the `InputRequired` validator, saving breaks for all types of connections on the "Edit Connections" page.
In Chrome, the following message is logged to the browser console:
```
An invalid form control with name='extra__hook_name__field_name' is not focusable.
```
This happens because the field is marked as `<input required>` but is hidden using CSS when the connection type exposed by the custom hook is not selected.
**What you expected to happen**:
Should be able to save other types of connections.
In particular, either one of the following should happen:
1. The fields not belonging to the currently selected connection type should not just be hidden using CSS, but should be removed from the DOM entirely.
2. Remove the `required` attribute if the form field is hidden.
**How to reproduce it**:
Create a provider, and add a hook with something like:
```python
@staticmethod
def get_connection_form_widgets() -> Dict[str, Any]:
"""Returns connection widgets to add to connection form."""
return {
'extra__my_hook__client_id': StringField(
lazy_gettext('OAuth2 Client ID'),
widget=BS3TextFieldWidget(),
validators=[wtforms.validators.InputRequired()],
),
}
```
Go to the Airflow Web UI, click the "Add" button in the connection list page, then choose a connection type that's not the type exposed by the custom hook. Fill in details and click "Save".
**Anything else we need to know**: N/A
| https://github.com/apache/airflow/issues/13985 | https://github.com/apache/airflow/pull/14052 | f9c9e9c38f444a39987478f3d1a262db909de8c4 | 98bbe5aec578a012c1544667bf727688da1dadd4 | "2021-01-30T16:21:53Z" | python | "2021-02-11T13:59:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 13,924 | ["scripts/in_container/_in_container_utils.sh"] | Improve error messages and propagation in CI builds | Airflow version: dev
The error information in `Backport packages: wheel` is not that easy to find.
Here is the end of the step that failed and the end of its log:
<img width="1151" alt="Screenshot 2021-01-27 at 12 02 01" src="https://user-images.githubusercontent.com/9528307/105982515-aa64e800-6097-11eb-91c8-9911448d1301.png">
but in fact the error happens some 500 lines earlier:
<img width="1151" alt="Screenshot 2021-01-27 at 12 01 47" src="https://user-images.githubusercontent.com/9528307/105982504-a769f780-6097-11eb-8873-02c1d9b2d670.png">
**What do you expect to happen?**
I would expect the error to be at the end of the step. Otherwise the message `The previous step completed with error. Please take a look at output above ` is slightly misleading.
| https://github.com/apache/airflow/issues/13924 | https://github.com/apache/airflow/pull/15190 | 041a09f3ee6bc447c3457b108bd5431a2fd70ad9 | 7c17bf0d1e828b454a6b2c7245ded275b313c792 | "2021-01-27T11:07:09Z" | python | "2021-04-04T20:20:11Z" |