status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 19,901 | ["airflow/models/dagrun.py", "tests/jobs/test_scheduler_job.py"] | No scheduling when max_active_runs is 1 | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
Since version 2.2, the field `DAG.next_dagrun_create_after` is not calculated when `DAG.max_active_runs` is 1.
### What you expected to happen
https://github.com/apache/airflow/blob/fca2b19a5e0c081ab323479e76551d66ab478d07/airflow/models/dag.py#L2466
If this condition is evaluated while the DAG run state is "running", then it is incorrect.
### How to reproduce
Create a DAG with a `schedule_interval` and `max_active_runs=1`.
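For illustration, a minimal DAG matching that description might look like this (the dag id, dates and schedule are assumptions made for the sketch):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="max_active_runs_repro",
    start_date=datetime(2021, 11, 1),
    schedule_interval="@daily",
    max_active_runs=1,
    catchup=True,
) as dag:
    DummyOperator(task_id="dummy")
```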
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19901 | https://github.com/apache/airflow/pull/21413 | 5fbf2471ab4746f5bc691ff47a7895698440d448 | feea143af9b1db3b1f8cd8d29677f0b2b2ab757a | "2021-11-30T18:30:43Z" | python | "2022-02-24T07:12:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,897 | ["airflow/providers/apache/spark/hooks/spark_submit.py", "tests/providers/apache/spark/hooks/test_spark_submit.py"] | Spark driver relaunches if "driverState" is not found in curl response due to transient network issue | ### Apache Airflow Provider(s)
apache-spark
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-spark==2.0.1
### Apache Airflow version
2.1.4
### Operating System
Amazon Linux 2
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
In the file `airflow/providers/apache/spark/hooks/spark_submit.py`, the function [_process_spark_status_log](https://github.com/apache/airflow/blob/main/airflow/providers/apache/spark/hooks/spark_submit.py#L516-L535) iterates through a `curl` response to get the driver state of a `SparkSubmitOperator` task.
If there's a transient network issue and there is no valid response from the cluster (e.g. timeout, etc.), there is no "driverState" in the `curl` response, which makes the driver state "UNKNOWN".
That state [exits the loop](https://github.com/apache/airflow/blob/main/airflow/providers/apache/spark/hooks/spark_submit.py#L573) and then makes the task go on a [retry](https://github.com/apache/airflow/blob/main/airflow/providers/apache/spark/hooks/spark_submit.py#L464-L467), while the original task is actually still in a "RUNNING" state.
### What you expected to happen
I would expect the task not to go on a retry while the original task is running. The function `_process_spark_status_log` should probably ensure the `curl` response is valid before changing the driver state, e.g. check that there is a "submissionId" in the response as well; otherwise leave the state as `None` and continue with the polling loop. A valid response would be something like this (a sketch of such a check follows the example):
```
curl http://spark-host:6066/v1/submissions/status/driver-FOO-BAR
{
"action" : "SubmissionStatusResponse",
"driverState" : "RUNNING",
"serverSparkVersion" : "2.4.6",
"submissionId" : "driver-FOO-BAR,
"success" : true,
"workerHostPort" : "FOO:BAR",
"workerId" : "worker-FOO-BAR-BAZ"
}
```
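A minimal sketch of the kind of validation being suggested (illustrative only; the helper name and the JSON-based parsing are assumptions, not the existing hook code or the upstream fix):
```python
import json
from typing import Iterable, Optional


def parse_driver_state(status_response_lines: Iterable[str]) -> Optional[str]:
    """Return the driver state only when the curl response looks complete."""
    response_text = "".join(status_response_lines).strip()
    if not response_text:
        return None  # empty/timed-out response: keep polling
    try:
        payload = json.loads(response_text)
    except ValueError:
        return None  # transient garbage: keep polling instead of returning UNKNOWN
    # Only trust the state when the response also carries a submissionId
    if "driverState" in payload and "submissionId" in payload:
        return payload["driverState"]
    return None
```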
### How to reproduce
Use any DAG with a `SparkSubmitOperator` task on a Spark Standalone cluster where you can reset the network connection, or modify the `curl` command to return something other than the response above.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19897 | https://github.com/apache/airflow/pull/19978 | 9d5647b91246d9cdc0819de4dd306dc6d5102a2d | a50d2ac872da7e27d4cb32a2eb12cb75545c4a60 | "2021-11-30T16:41:41Z" | python | "2021-12-02T21:12:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,891 | [".github/workflows/ci.yml", "airflow/api/common/experimental/get_dag_runs.py", "airflow/jobs/base_job.py", "airflow/models/base.py", "airflow/models/dagrun.py", "airflow/providers/microsoft/winrm/hooks/winrm.py", "airflow/providers/ssh/hooks/ssh.py", "tests/providers/amazon/aws/hooks/test_eks.py", "tests/providers/google/cloud/hooks/test_gcs.py", "tests/www/api/experimental/test_dag_runs_endpoint.py"] | Re-enable MyPy | # Why Mypy re-enable
For a few weeks MyPy checks have been disabled after the switch to Python 3.7 (per https://github.com/apache/airflow/pull/19317).
We should, however, re-enable it, as it is very useful in catching a number of mistakes.
# How does it work
We've re-added the mypy pre-commit now - with mypy bumped to 0.910. This version detects far more errors and we should fix them all before we switch the CI check back.
* mypy will be running for incremental changes in pre-commit, same as before. This will enable incremental fixes of the code changed by committers who use pre-commits locally
* mypy on CI runs in non-failing mode. When the main pre-commit check is run, mypy is disabled, but then it is run as a separate step (which does not fail but will show the result of running mypy on all our code). This will enable us to track the progress of fixes
# Can I help with the effort, you ask?
We have started a concerted effort now to incrementally fix all the mypy incompatibilities - ideally package-by-package to avoid huge code reviews. We'd really appreciate a number of people contributing, so that we can re-enable mypy fully and quickly :).
# How can I help?
What you need is:
* checkout `main`
* `./breeze build-image`
* `pip install pre-commit`
* `pre-commit install`
This will enable automated checks for when you do a regular contribution. When you make your change, any MyPy issues will be reported and you need to fix them all to commit. You can also commit with the `--no-verify` flag to skip that, but, well, if you can improve Airflow a little - why not?
# How can I help more?
You can add PRs that are fixing whole packages, without contributing features or bugfixes. Please refer to this issue #19891 and ideally comment below in the issue that you want to take care of a package (to avoid duplicate work).
An easy way to run MyPy check for package can be done either from the host:
```
find DIRECTORY -name "*.py" | xargs pre-commit run mypy --files
```
or from ./breeze shell:
```
mypy --namespace-packages DIRECTORY
```
# Current list of mypy PRs:
https://github.com/apache/airflow/pulls?q=is%3Aopen+is%3Apr+label%3Amypy
# Remaining packages
Here is the list of remaining packages to be "mypy compliant" generated with:
```
pre-commit run mypy --all-files 2>&1 | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g" | grep "error:" | sort | awk 'FS=":" { print $1 }' | xargs dirname | sort | uniq -c | xargs -n 2 printf "* [ ] (%4d) %s\n"
```
* [ ] ( 1) airflow/api/common/experimental
* [ ] ( 1) airflow/contrib/sensors
* [ ] ( 1) airflow/example_dags
* [ ] ( 1) airflow/jobs
* [ ] ( 4) airflow/models
* [ ] ( 1) airflow/providers/microsoft/winrm/hooks
* [ ] ( 1) airflow/providers/ssh/hooks
* [ ] ( 1) tests/providers/amazon/aws/hooks
* [ ] ( 1) tests/providers/google/cloud/hooks
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/19891 | https://github.com/apache/airflow/pull/21020 | 07ea9fcaa10fc1a8c43ef5f627360d4adb12115a | 9ed9b5170c8dbb11469a88c41e323d8b61a1e7e6 | "2021-11-30T12:07:37Z" | python | "2022-01-24T21:39:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,888 | ["airflow/task/task_runner/standard_task_runner.py"] | Reference to undeclared variable: "local variable 'return_code' referenced before assignment" | ### Apache Airflow version
2.2.1
### Operating System
Ubuntu 20.04 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.3.0
apache-airflow-providers-apache-cassandra==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.0.0
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-jdbc==2.0.1
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-postgres==2.3.0
apache-airflow-providers-presto==2.0.1
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-sqlite==2.0.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Incorrect "finally" block invokes "UnboundLocalError: local variable 'return_code' referenced before assignment"
Traceback example:
```python
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 88, in _start_by_fork
self.log.exception(
File "/usr/lib/python3.8/logging/__init__.py", line 1481, in exception
self.error(msg, *args, exc_info=exc_info, **kwargs)
File "/usr/lib/python3.8/logging/__init__.py", line 1475, in error
self._log(ERROR, msg, args, **kwargs)
File "/usr/lib/python3.8/logging/__init__.py", line 1589, in _log
self.handle(record)
File "/usr/lib/python3.8/logging/__init__.py", line 1599, in handle
self.callHandlers(record)
File "/usr/lib/python3.8/logging/__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "/usr/lib/python3.8/logging/__init__.py", line 950, in handle
rv = self.filter(record)
File "/usr/lib/python3.8/logging/__init__.py", line 811, in filter
result = f.filter(record)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 167, in filter
self._redact_exception_with_context(exc)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 150, in _redact_exception_with_context
self._redact_exception_with_context(exception.__context__)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 150, in _redact_exception_with_context
self._redact_exception_with_context(exception.__context__)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 148, in _redact_exception_with_context
exception.args = (self.redact(v) for v in exception.args)
AttributeError: can't set attribute
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 121, in _execute_in_fork
args.func(args)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 292, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 105, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 163, in _run_task_by_local_task_job
run_job.run()
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 245, in run
self._execute()
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/jobs/local_task_job.py", line 103, in _execute
self.task_runner.start()
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 41, in start
self.process = self._start_by_fork()
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 98, in _start_by_fork
os._exit(return_code)
UnboundLocalError: local variable 'return_code' referenced before assignment
```
Bug location:
https://github.com/apache/airflow/blob/2.2.1/airflow/task/task_runner/standard_task_runner.py#L84-L98
Explanation:
A nested exception is triggered while logging the original exception, so `return_code` is never assigned before the `finally` block references it.
### What you expected to happen
The `return_code` variable should be declared (initialized) before it can be referenced in the `finally` block.
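For illustration, a minimal sketch of that pattern (the structure and the sentinel value are assumptions, not necessarily the fix adopted upstream):
```python
import os


def start_by_fork_sketch(run_task):
    # Initialize before the try block so the finally block can always reference it,
    # even if logging the original exception raises a second exception.
    return_code = 1  # assumed sentinel meaning "failed before a code was set"
    try:
        run_task()
        return_code = 0
    except Exception:
        try:
            # Logging the failure may itself raise (e.g. inside a logging filter),
            # but return_code is already bound at this point.
            print("Failed to execute task")
        except Exception:
            pass
    finally:
        os._exit(return_code)
```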
### How to reproduce
It is probably hard to reproduce because you need to have an exception in task execution as well as an exception in the logging function.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19888 | https://github.com/apache/airflow/pull/19933 | 2539cb44b47d78e81a88fde51087f4cc77c924c5 | eaa8ac72fc901de163b912a94dbe675045d2a009 | "2021-11-30T10:36:52Z" | python | "2021-12-01T19:29:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,877 | ["airflow/providers/amazon/aws/hooks/emr.py", "tests/providers/amazon/aws/hooks/test_emr_containers.py"] | Refactor poll_query_status in EMRContainerHook | ### Body
The goal is to refactor the code so that we can remove this TODO
https://github.com/apache/airflow/blob/7640ba4e8ee239d6e2bbf950d53d624b9df93059/airflow/providers/amazon/aws/hooks/emr_containers.py#L174-L176
More information about the concerns can be found on https://github.com/apache/airflow/pull/16766#discussion_r668089559
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/19877 | https://github.com/apache/airflow/pull/21423 | 064b39f3faae26e5b1312510142b50765e58638b | c8d49f63ca60fa0fb447768546c2503b746a66dd | "2021-11-29T13:43:21Z" | python | "2022-03-08T12:59:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,869 | ["airflow/serialization/serialized_objects.py"] | Custom Timetable Import Error | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
Darwin Kernel Version 21.1.0 RELEASE_ARM64_T8101 arm64
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
python_version | 3.9.7 (default, Sep 16 2021, 23:53:23) [Clang 12.0.0 ]
### What happened
The following error is displayed in the Web UI:
```
Broken DAG: [<EDITED>/scripts/airflow/dags/master/sample_dag/sample_dag.py] Traceback (most recent call last):
File "<EDITED>/miniconda3/envs/dev-airflow/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 271, in serialize_to_json
serialized_object[key] = _encode_timetable(value)
File "<EDITED>/miniconda3/envs/dev-airflow/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 152, in _encode_timetable
raise _TimetableNotRegistered(importable_string)
airflow.serialization.serialized_objects._TimetableNotRegistered: Timetable class 'workday.AfterWorkdayTimetable' is not registered
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<EDITED>/miniconda3/envs/dev-airflow/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 937, in to_dict
json_dict = {"__version": cls.SERIALIZER_VERSION, "dag": cls.serialize_dag(var)}
File "<EDITED>/miniconda3/envs/dev-airflow/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 849, in serialize_dag
raise SerializationError(f'Failed to serialize DAG {dag.dag_id!r}: {e}')
airflow.exceptions.SerializationError: Failed to serialize DAG 'learning_example_workday_timetable': Timetable class 'workday.AfterWorkdayTimetable' is not registered
```
### What you expected to happen
For the custom timetable to be implemented and used by the DAG.
### How to reproduce
Following instructions from [Custom DAG Scheduling with Timetables](https://airflow.apache.org/docs/apache-airflow/stable/howto/timetable.html) with following new DAG to implement:
```python
import datetime
from airflow import DAG
from airflow import plugins_manager
from airflow.operators.dummy import DummyOperator
from workday import AfterWorkdayTimetable
with DAG(
dag_id="learning_example_workday_timetable",
start_date=datetime.datetime(2021,11,20),
timetable=AfterWorkdayTimetable(),
tags=["example","learning","timetable"],
) as dag:
DummyOperator(task_id="run_this")
```
### Anything else
I have tried digging through the code and believe the issue is in this line:
https://github.com/apache/airflow/blob/fb478c00cdc5e78d5e85fe5ac103707c829be2fb/airflow/serialization/serialized_objects.py#L149
Perhaps the [Custom DAG Scheduling with Timetables](https://airflow.apache.org/docs/apache-airflow/stable/howto/timetable.html) guide expects an `__eq__` implemented in the `AfterWorkdayTimetable` class, but it would appear that the `AfterWorkdayTimetable` class imported through the DAG and the `AfterWorkdayTimetable` class imported through `plugins_manager` have different `id()`s:
https://github.com/apache/airflow/blob/fb478c00cdc5e78d5e85fe5ac103707c829be2fb/airflow/serialization/serialized_objects.py#L129
The only way I could get it to import successfully was via the following sequence of import statements since [_get_registered_timetable](https://github.com/apache/airflow/blob/fb478c00cdc5e78d5e85fe5ac103707c829be2fb/airflow/serialization/serialized_objects.py#L124) uses a lazy import:
```python
import datetime
from airflow import DAG
from airflow import plugins_manager
from airflow.operators.dummy import DummyOperator
plugins_manager.initialize_timetables_plugins()
from workday import AfterWorkdayTimetable
```
I also had the webserver and scheduler restarted and confirmed the plugin is seen via cli:
```bash
airflow plugins ✔ dev-airflow 19:01:43
name | source
=========================+===========================
workday_timetable_plugin | $PLUGINS_FOLDER/workday.py
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19869 | https://github.com/apache/airflow/pull/19878 | 16b3ab5860bc766fa31bbeccfb08ea710ca4bae7 | 7d555d779dc83566d814a36946bd886c2e7468b3 | "2021-11-29T03:20:26Z" | python | "2021-11-29T16:50:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,850 | ["airflow/providers/amazon/aws/hooks/batch_client.py", "airflow/providers/amazon/aws/sensors/batch.py", "airflow/providers/amazon/provider.yaml", "tests/providers/amazon/aws/sensors/test_batch.py"] | AWS Batch Job Sensor | ### Description
Add a sensor for AWS Batch jobs that will poll the job until it reaches a terminal state.
### Use case/motivation
This feature will enable DAGs to track the status of batch jobs that are submitted in an upstream task that doesn't use the BatchOperator. An example use case: the batch job is submitted by an upstream PythonOperator, with its own functional logic, which returns the job_id of the submitted job.
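A hypothetical usage sketch of what such a sensor could look like (the `BatchSensor` class name, import path and parameters are assumptions made for illustration; they did not exist at the time of this request):
```python
import pendulum
from airflow import DAG
from airflow.operators.python import PythonOperator
# hypothetical import path and class name for the proposed sensor
from airflow.providers.amazon.aws.sensors.batch import BatchSensor


def submit_job(**_):
    # Custom submission logic (e.g. boto3 batch.submit_job) would live here;
    # the returned job id is pushed to XCom automatically.
    return "example-batch-job-id"


with DAG(
    dag_id="batch_sensor_example",
    start_date=pendulum.datetime(2021, 11, 1, tz="UTC"),
    schedule_interval=None,
) as dag:
    submit = PythonOperator(task_id="submit_job", python_callable=submit_job)

    wait = BatchSensor(
        task_id="wait_for_batch_job",
        job_id="{{ ti.xcom_pull(task_ids='submit_job') }}",  # assumed templated parameter
        aws_conn_id="aws_default",
    )

    submit >> wait
```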
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19850 | https://github.com/apache/airflow/pull/19885 | 7627de383e5cdef91ca0871d8107be4e5f163882 | af28b4190316401c9dfec6108d22b0525974eadb | "2021-11-27T05:11:15Z" | python | "2021-12-05T21:52:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,832 | ["airflow/models/taskinstance.py"] | Deadlock in worker pod | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Multiple schedulers. Deployed with Kubernetes, and worker pods run with LocalExecutor.
### What happened
The worker pod errored with the log below.
There is a similar PR, but for a scheduler deadlock: https://github.com/apache/airflow/pull/18975/files
```
[2021-11-25 23:08:18,700] {dagbag.py:500} INFO - Filling up the DagBag from /usr/local/spaas-airflow/dags/xxx/xxx_load.py
Running <TaskInstance: xx-x.load.wait_for_data scheduled__2021-11-22T00:00:00+00:00 [running]> on host xxxloadwaitfordata.2388e61b4fd84b6e815f9a0f4e123ad9
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 292, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 105, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/usr/local/lib/python3.7/dist-packages/airflow/cli/commands/task_command.py", line 163, in _run_task_by_local_task_job
run_job.run()
File "/usr/local/lib/python3.7/dist-packages/airflow/jobs/base_job.py", line 245, in run
self._execute()
File "/usr/local/lib/python3.7/dist-packages/airflow/jobs/local_task_job.py", line 97, in _execute
external_executor_id=self.external_executor_id,
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/models/taskinstance.py", line 1176, in check_and_change_state_before_execution
self.refresh_from_db(session=session, lock_for_update=True)
File "/usr/local/lib/python3.7/dist-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/airflow/models/taskinstance.py", line 729, in refresh_from_db
ti: Optional[TaskInstance] = qry.with_for_update().first()
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/orm/query.py", line 3429, in first
ret = list(self[0:1])
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/orm/query.py", line 3203, in __getitem__
return list(res)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/orm/query.py", line 3560, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1130, in _execute_clauseelement
distilled_params,
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1511, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/dist-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.7/dist-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: SELECT task_instance.try_number AS task_instance_try_number, task_instance.task_id AS task_instance_task_id, task_instance.dag_id AS task_instance_dag_id, task_instance.run_id AS task_instance_run_id, task_instance.start_date AS task_instance_start_date, task_instance.end_date AS task_instance_end_date, task_instance.duration AS task_instance_duration, task_instance.state AS task_instance_state, task_instance.max_tries AS task_instance_max_tries, task_instance.hostname AS task_instance_hostname, task_instance.unixname AS task_instance_unixname, task_instance.job_id AS task_instance_job_id, task_instance.pool AS task_instance_pool, task_instance.pool_slots AS task_instance_pool_slots, task_instance.queue AS task_instance_queue, task_instance.priority_weight AS task_instance_priority_weight, task_instance.operator AS task_instance_operator, task_instance.queued_dttm AS task_instance_queued_dttm, task_instance.queued_by_job_id AS task_instance_queued_by_job_id, task_instance.pid AS task_instance_pid, task_instance.executor_config AS task_instance_executor_config, task_instance.external_executor_id AS task_instance_external_executor_id, task_instance.trigger_id AS task_instance_trigger_id, task_instance.trigger_timeout AS task_instance_trigger_timeout, task_instance.next_method AS task_instance_next_method, task_instance.next_kwargs AS task_instance_next_kwargs, dag_run_1.state AS dag_run_1_state, dag_run_1.id AS dag_run_1_id, dag_run_1.dag_id AS dag_run_1_dag_id, dag_run_1.queued_at AS dag_run_1_queued_at, dag_run_1.execution_date AS dag_run_1_execution_date, dag_run_1.start_date AS dag_run_1_start_date, dag_run_1.end_date AS dag_run_1_end_date, dag_run_1.run_id AS dag_run_1_run_id, dag_run_1.creating_job_id AS dag_run_1_creating_job_id, dag_run_1.external_trigger AS dag_run_1_external_trigger, dag_run_1.run_type AS dag_run_1_run_type, dag_run_1.conf AS dag_run_1_conf, dag_run_1.data_interval_start AS dag_run_1_data_interval_start, dag_run_1.data_interval_end AS dag_run_1_data_interval_end, dag_run_1.last_scheduling_decision AS dag_run_1_last_scheduling_decision, dag_run_1.dag_hash AS dag_run_1_dag_hash
FROM task_instance INNER JOIN dag_run AS dag_run_1 ON dag_run_1.dag_id = task_instance.dag_id AND dag_run_1.run_id = task_instance.run_id
WHERE task_instance.dag_id = %s AND task_instance.task_id = %s AND task_instance.run_id = %s
LIMIT %s FOR UPDATE]
[parameters: ('xx-x.load', 'wait_for_data', 'scheduled__2021-11-22T00:00:00+00:00', 1)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
```
### What you expected to happen
Shouldn't have this error.
### How to reproduce
_No response_
### Anything else
This issue happens sometimes, not always.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19832 | https://github.com/apache/airflow/pull/20030 | a80ac1eecc0ea187de7984510b4ef6f981b97196 | 78c815e22b67e442982b53f41d7d899723d5de9f | "2021-11-26T02:01:16Z" | python | "2021-12-07T15:05:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,816 | ["airflow/utils/log/secrets_masker.py"] | logging error | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
Linux-5.4.0-1056-azure-x86_64-with-glibc2.2.5
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
kubernetes 1.21.2 (AKS)
### What happened
**A DAG using the logging class produces a stack overflow and aborts the task.** It seems like a bug in /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py
**ENV:**
AIRFLOW__CORE__HIDE_SENSITIVE_VAR_CONN_FIELDS="False"
**Dag Code fragment:**
```python
import logging
from io import StringIO

log_stream = StringIO()
logging.basicConfig(stream=log_stream, level=logging.DEBUG)
logger = logging.getLogger("airflow.task")
logger.addHandler(logging.StreamHandler(log_stream))
...
logger.info("------------------------------ ------------------- ------------------------------")
```
**Log in worker pod:**
```
Fatal Python error: Cannot recover from stack overflow.
Python runtime state: initialized
Current thread 0x00007fbf8d8190c0 (most recent call first):
File "/usr/local/lib/python3.8/posixpath.py", line 42 in _get_sep
File "/usr/local/lib/python3.8/posixpath.py", line 143 in basename
File "/usr/local/lib/python3.8/logging/__init__.py", line 322 in __init__
File "/usr/local/lib/python3.8/logging/__init__.py", line 1556 in makeRecord
File "/usr/local/lib/python3.8/logging/__init__.py", line 1587 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 950 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/usr/local/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 950 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/usr/local/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 950 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/usr/local/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 950 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/usr/local/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 950 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/usr/local/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 950 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/usr/local/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 950 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/usr/local/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 950 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/usr/local/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/usr/local/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/usr/local/lib/python3.8/logging/__init__.py", line 1458 in warning
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 215 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in <genexpr>
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 208 in _redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 231 in redact
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 164 in filter
File "/usr/local/lib/python3.8/logging/__init__.py", line 811 in filter
```
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19816 | https://github.com/apache/airflow/pull/20039 | ee31f9fb42f08b9d95f26e2b90a5ad6eca134b88 | f44183334fc9c050e176f148461c808992083a37 | "2021-11-24T22:23:50Z" | python | "2021-12-07T21:40:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,801 | ["airflow/sensors/base.py", "tests/sensors/test_base.py"] | Airflow scheduler crashed with TypeError: '>=' not supported between instances of 'datetime.datetime' and 'NoneType' | ### Apache Airflow version
2.1.4
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
The Airflow scheduler crashed with the following exception:
```
[2021-11-23 22:41:16,528] {scheduler_job.py:662} INFO - Starting the scheduler
[2021-11-23 22:41:16,528] {scheduler_job.py:667} INFO - Processing each file at most -1 times
[2021-11-23 22:41:16,639] {manager.py:254} INFO - Launched DagFileProcessorManager with pid: 19
[2021-11-23 22:41:16,641] {scheduler_job.py:1217} INFO - Resetting orphaned tasks for active dag runs
[2021-11-23 22:41:16,644] {settings.py:51} INFO - Configured default timezone Timezone('Etc/GMT-7')
[2021-11-23 22:41:19,016] {scheduler_job.py:711} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 695, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 788, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 901, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 1143, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/airflow/models/dagrun.py", line 438, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/airflow/models/dagrun.py", line 539, in task_instance_scheduling_decisions
schedulable_tis, changed_tis = self._get_ready_tis(scheduleable_tasks, finished_tasks, session)
File "/usr/local/lib/python3.8/dist-packages/airflow/models/dagrun.py", line 565, in _get_ready_tis
if st.are_dependencies_met(
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/airflow/models/taskinstance.py", line 890, in are_dependencies_met
for dep_status in self.get_failed_dep_statuses(dep_context=dep_context, session=session):
File "/usr/local/lib/python3.8/dist-packages/airflow/models/taskinstance.py", line 911, in get_failed_dep_statuses
for dep_status in dep.get_dep_statuses(self, session, dep_context):
File "/usr/local/lib/python3.8/dist-packages/airflow/ti_deps/deps/base_ti_dep.py", line 101, in get_dep_statuses
yield from self._get_dep_statuses(ti, session, dep_context)
File "/usr/local/lib/python3.8/dist-packages/airflow/ti_deps/deps/ready_to_reschedule.py", line 66, in _get_dep_statuses
if now >= next_reschedule_date:
TypeError: '>=' not supported between instances of 'datetime.datetime' and 'NoneType'
[2021-11-23 22:41:20,020] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 19
```
### What you expected to happen
_No response_
### How to reproduce
Define a `BaseSensorOperator` task with a large `poke_interval` in `reschedule` mode:
```
from airflow.sensors.base import BaseSensorOperator

BaseSensorOperator(
    task_id='task',
    poke_interval=863998946,
    mode='reschedule',
    dag=dag
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19801 | https://github.com/apache/airflow/pull/19821 | 9c05a951175c231478cbc19effb0e2a4cccd7a3b | 2213635178ca9d0ae96f5f68c88da48f7f104bf1 | "2021-11-24T03:30:05Z" | python | "2021-12-13T09:38:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,797 | ["airflow/providers/snowflake/hooks/snowflake.py", "docs/apache-airflow-providers-snowflake/connections/snowflake.rst", "tests/providers/snowflake/hooks/test_snowflake.py"] | Support insecure mode in Snowflake provider | ### Description
Snowflake client drivers perform OCSP checking by default when connecting to a service endpoint. In normal circumstances, we should never turn off OCSP checking, and keep the default security posture of OCSP checking always ON. Only in rare emergency situations when Airflow is unable to connect to Snowflake due to OCSP issues should we consider using this workaround temporarily till the issue is resolved. For details, see: [how to: turn off OCSP checking in Snowflake client drivers](https://community.snowflake.com/s/article/How-to-turn-off-OCSP-checking-in-Snowflake-client-drivers)
To do this, we need to add the ability to pass the `insecure_mode=True` flag to the Snowflake client in SnowflakeHook. For now, we support only a limited subset of parameters:
https://github.com/apache/airflow/blob/de9900539c9731325e29fd1bbac37c4bc1363bc4/airflow/providers/snowflake/hooks/snowflake.py#L148-L209
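For illustration, this is the connector-level flag the hook would need to forward (connection values are placeholders; how the hook would expose it, e.g. via a connection extra, is an open design question):
```python
import snowflake.connector

# Only for rare OCSP emergencies: this disables OCSP certificate checking.
conn = snowflake.connector.connect(
    user="USER",
    password="PASSWORD",
    account="ACCOUNT",
    insecure_mode=True,  # keep False (the default) in normal operation
)
```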
### Use case/motivation
As above
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19797 | https://github.com/apache/airflow/pull/20106 | 6174198a3fa3ab7cffa7394afad48e5082210283 | 89a66ae02319a20d6170187527d4535a26078378 | "2021-11-23T23:09:26Z" | python | "2021-12-13T13:31:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,785 | ["airflow/utils/task_group.py", "tests/utils/test_task_group.py"] | Applying labels to task groups shows a cycle in the graph view for the dag | ### Apache Airflow version
2.2.2
### Operating System
Docker (debian:buster)
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
Run Airflow with this DAG:
```python3
from airflow import DAG
from airflow.models.baseoperator import chain
from airflow.operators.dummy import DummyOperator
from airflow.utils.edgemodifier import Label
from airflow.utils.task_group import TaskGroup

with DAG(
    dag_id="label_bug_without_chain"
) as dag:
    with TaskGroup(group_id="group1") as taskgroup1:
        t1 = DummyOperator(task_id="dummy1")
        t2 = DummyOperator(task_id="dummy2")
        t3 = DummyOperator(task_id="dummy3")

    t4 = DummyOperator(task_id="dummy4")

    chain([Label("branch three")], taskgroup1, t4,)
```
### What happened
Expanded task views look like they have cycles:
<img width="896" alt="Screen Shot 2021-11-22 at 2 33 49 PM" src="https://user-images.githubusercontent.com/17841735/143083099-d250fd7e-963f-4b34-b544-405b51ee2859.png">
### What you expected to happen
The task group shouldn't display as if it has loops in it.
### How to reproduce
View the dag shown in the deployment details.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19785 | https://github.com/apache/airflow/pull/24847 | 96b01a8012d164df7c24c460149d3b79ecad3901 | efc05a5f0b3d261293c2efaf6771e4af9a2f324c | "2021-11-23T18:39:01Z" | python | "2022-07-05T15:40:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,783 | ["airflow/config_templates/airflow_local_settings.py", "tests/core/test_logging_config.py"] | Bad parsing for Cloudwatch Group ARN for logging | ### Apache Airflow version
2.1.3
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### What happened
I am deploying Airflow in AWS ECS.
I am trying to send task logs to CloudWatch.
Usually, log groups in AWS have this format: /aws/name_of_service/name_of_component
I configured my env variables as follows:
```
{
name = "AIRFLOW__LOGGING__REMOTE_LOGGING",
value = "true"
},
{
name = "AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER",
value = "cloudwatch://arn:aws:logs:aaaa:bbbbb:log-group:/aws/ecs/ccccc"
},
{
name = "AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID",
value = "aws_default"
},
```
I am getting this error in my task logs:
```
*** Reading remote log from Cloudwatch log_group: log_stream: hello_airflow2/hello_task/2021-11-23T16_29_21.191922+00_00/1.log.
Could not read remote logs from log_group: log_stream: hello_airflow2/hello_task/2021-11-23T16_29_21.191922+00_00/1.log.
```
It's thrown because the log group is empty.
The reason behind this error is this line
https://github.com/apache/airflow/blob/c4fd84accd143977cba57e4daf6daef2af2ff457/airflow/config_templates/airflow_local_settings.py#L202
the result of netloc is "arn:aws:logs:aaaa:bbbbb:log-group:"
```
urlparse("cloudwatch://arn:aws:logs:aaaa:bbbbb:log-group:/aws/ecs/ccccc")
ParseResult(scheme='cloudwatch', netloc='arn:aws:logs:aaaa:bbbbb:log-group:', path='/aws/ecs/ccccc', params='', query='', fragment='')
```
and this line
https://github.com/apache/airflow/blob/86a2a19ad2bdc87a9ad14bb7fde9313b2d7489bb/airflow/providers/amazon/aws/log/cloudwatch_task_handler.py#L53
which will result in an empty log group.
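For illustration, a minimal sketch of a parse that keeps the full log-group path (this is only an assumption about one possible approach, not the change made upstream):
```python
from urllib.parse import urlparse

url = "cloudwatch://arn:aws:logs:aaaa:bbbbb:log-group:/aws/ecs/ccccc"
parsed = urlparse(url)
# netloc alone drops "/aws/ecs/ccccc", so join netloc and path back together
full_log_group_arn = parsed.netloc + parsed.path
print(full_log_group_arn)  # arn:aws:logs:aaaa:bbbbb:log-group:/aws/ecs/ccccc
```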
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19783 | https://github.com/apache/airflow/pull/19700 | 4ac35d723b73d02875d56bf000aafd2235ef0f4a | 43bd6f8db74e4fc0b901ec3c1d23b71fe7ca8eb6 | "2021-11-23T16:48:36Z" | python | "2021-12-19T23:03:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,761 | ["setup.cfg"] | removing upper bound for markupsafe | ### Description
Per discussion and guidance from #19753, I am opening this issue and PR https://github.com/apache/airflow/pull/19760 for review. If all the tests pass, this could be reviewed further.
### Use case/motivation
Currently Jinja2 upper bound is jinja2>=2.10.1,<4 at https://github.com/apache/airflow/blob/main/setup.cfg#L121 as part of #16595
Jinja2 seems to require MarkupSafe>=2.0 at https://github.com/pallets/jinja/blob/main/setup.py#L6 as part of pallets/jinja@e2f673e
Based on this, is it feasible to consider updating the upper bound in Airflow for markupsafe (currently >=1.1.1, <2.0) to allow 2.0 at https://github.com/apache/airflow/blob/main/setup.cfg#L126?
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19761 | https://github.com/apache/airflow/pull/20113 | 139be53f2373b322df1f7c3ca3b3dde64fc55587 | bcacc51a16697a656357c29c7a40240e422e4bf9 | "2021-11-23T03:59:07Z" | python | "2021-12-08T01:26:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,757 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | Change the default for `dag_processor_manager_log_location` | ### Description
Should the default for the [dag_processor_manager_log_location](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-processor-manager-log-location) be `{BASE_LOG_FOLDER}/dag_processor_manager/dag_processor_manager.log` instead of `{AIRFLOW_HOME}/logs/dag_processor_manager/dag_processor_manager.log`?
### Use case/motivation
I'm running the k8s executor and we are changing our security profile on the pods such that the filesystem is readonly except for `/tmp`. I started out by changing `base_log_folder` and I spent a while trying to debug parts of my logging config that were still trying to write to `{AIRFLOW_HOME}/logs`
I found that the processor config was the issue because the default location was `{AIRFLOW_HOME}/logs/dag_processor_manager/dag_processor_manager.log` ([here](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-processor-manager-log-location))
Maybe it is fine as is, but I found it hard to debug.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19757 | https://github.com/apache/airflow/pull/19793 | 6c80149d0abf84caec8f4c1b4e8795ea5923f89a | 00fd3af52879100d8dbca95fd697d38fdd39e60a | "2021-11-22T21:42:23Z" | python | "2021-11-24T18:40:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,754 | ["airflow/sensors/external_task.py", "newsfragments/23647.bugfix.rst", "tests/sensors/test_external_task_sensor.py"] | ExternalTaskSensor should skip if soft_fail=True and external task in one of the failed_states | ### Apache Airflow version
2.1.4
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
I ran into a scenario where I use an `ExternalTaskSensor` with `soft_fail=True` and also `failed_states=['skipped']`. I would expect that if the external task skipped, this sensor would be marked as skipped; however, in the `failed_states` check in the poke method, if the external task is in one of those states the sensor explicitly fails with an `AirflowException`.
Wouldn't it make more sense to skip because of the `soft_fail`?
### What you expected to happen
The `ExternalTaskSensor` task should skip
### How to reproduce
1. Add a DAG with a task that is set to skip, such as this `BashOperator` task set to skip taken from https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/bash.html#skipping:
```
this_will_skip = BashOperator(
    task_id='this_will_skip',
    bash_command='echo "hello world"; exit 99;',
    dag=dag,
)
```
2. Add a second DAG with an `ExternalTaskSensor`
3. Set that sensor to have `external_dag_id` be the other DAG and `external_task_id` be the skipped task in that other DAG, with `failed_states=['skipped']` and `soft_fail=True` (see the sketch after this list)
4. The `ExternalTaskSensor` fails instead of skips
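A minimal sketch of the sensor described in steps 2-3 (the dag id is an assumption; the task id of the skipped task comes from step 1):
```python
from airflow.sensors.external_task import ExternalTaskSensor

wait_for_skip = ExternalTaskSensor(
    task_id="wait_for_this_will_skip",
    external_dag_id="other_dag",        # assumed id of the DAG from step 1
    external_task_id="this_will_skip",
    failed_states=["skipped"],
    soft_fail=True,
    dag=dag,
)
```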
### Anything else
I don't know what is desirable for most Airflow users:
1. To have `soft_fail` to only cause skips if the sensor times out? (like it seems to currently do)
2. To have `ExternalTaskSensor` with `soft_fail` to skip any time it would otherwise fail, such as the external task being in one of the `failed_states`?
3. To have some other way for the `ExternalTaskSensor` to skip if the external task skipped?
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19754 | https://github.com/apache/airflow/pull/23647 | 7de050ceeb381fb7959b65acd7008e85b430c46f | 1b345981f6e8e910b3542ec53829e39e6c9b6dba | "2021-11-22T19:06:38Z" | python | "2022-06-24T13:50:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,716 | ["airflow/macros/__init__.py", "airflow/models/taskinstance.py", "airflow/operators/python.py", "airflow/utils/context.py", "airflow/utils/context.pyi", "airflow/utils/operator_helpers.py", "tests/models/test_taskinstance.py", "tests/operators/test_python.py", "tests/providers/amazon/aws/sensors/test_s3_key.py", "tests/providers/papermill/operators/test_papermill.py"] | [Airflow 2.2.2] execution_date Proxy object - str formatting error | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
Ubuntu 18.04.6
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
The deprecated variable `execution_date` raises an error when used in an f-string template with date string formatting.
```python
In [1]: execution_date
DeprecationWarning: Accessing 'execution_date' from the template is deprecated and will be removed in a future version. Please use 'logical_date' or 'data_interval_start' instead.
Out[1]: <Proxy at 0x7fb6f9af81c0 wrapping DateTime(2021, 11, 18, 0, 0, 0, tzinfo=Timezone('UTC')) at 0x7fb6f9aeff90 with factory <function TaskInstance.get_template_context.<locals>.deprecated_proxy.<locals>.deprecated_func at 0x7fb6f98699d0>>
In [2]: f"{execution_date:%Y-%m-%d}"
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
----> 1 f"{execution_date:%Y-%m-%d}"
TypeError: unsupported format string passed to Proxy.__format__
```
### What you expected to happen
Executing `f"{execution_date:%Y-%m-%d}"` should return a string and not raise an error.
### How to reproduce
```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
def test_str_fmt(execution_date: datetime):
return f"{execution_date:%Y-%m-%d}"
dag = DAG(
dag_id="Test_Date_String",
schedule_interval="@daily",
catchup=False,
default_args={
"depends_on_past": False,
"start_date": datetime(2021, 11, 1),
"email": None,
"email_on_failure": False,
"email_on_retry": False,
"retries": 0,
},
)
with dag:
test_task = PythonOperator(
task_id="test_task",
python_callable=test_str_fmt,
)
```
### Anything else
```python
from datetime import datetime
...
# the deprecated 'next_ds' context value hits a similar problem:
datetime.fromisoformat(next_ds)
# TypeError: fromisoformat: argument must be str
```
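A possible workaround sketch (an assumption on my side, not a fix): rely on the non-deprecated context values, which are not wrapped in the deprecation proxy.
```python
def test_str_fmt(logical_date=None, ds=None):
    # 'logical_date' replaces 'execution_date'; 'ds' is already a plain string
    return f"{logical_date:%Y-%m-%d}"  # or simply: return ds
```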
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19716 | https://github.com/apache/airflow/pull/19886 | f6dca1fa5e70ef08798adeb5a6bfc70f41229646 | caaf6dcd3893bbf11db190f9969af9aacc773255 | "2021-11-19T20:13:25Z" | python | "2021-12-01T07:14:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,699 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/executors/celery_executor.py", "tests/executors/test_celery_executor.py"] | task_instances stuck in "queued" and are missing corresponding celery_taskmeta entries | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
Linux Mint 20.2
### Versions of Apache Airflow Providers
apache-airflow-providers-celery = "2.1.0"
apache-airflow-providers-papermill = "^2.1.0"
apache-airflow-providers-postgres = "^2.2.0"
apache-airflow-providers-google = "^6.1.0"
### Deployment
Docker-Compose
### Deployment details
Docker-compose deploys into our GCP k8s cluster
### What happened
Hi,
we're running Airflow for our ETL pipelines.
Our DAGs run in parallel and we spawn a fair bit of parallel DAGs and tasks every morning for our pipelines.
We run our Airflow in a k8s cluster in GCP and we use Celery for our executors.
And we use autopilot to dynamically scale up and down the cluster as the workload increases or decreases, thereby sometimes tearing down airflow workers.
Ever since upgrading to Airflow 2.0 we've had a lot of problems with tasks getting stuck in "queued" or "running", and we've had to clean it up by manually failing the stuck tasks and re-running the DAGs.
Following the discussions here over the last months it looks we've not been alone :-)
But, after upgrading to Airflow 2.2.1 we saw a significant decrease in the number of tasks getting stuck (yay!), something we hoped for given the bug fixes for the scheduler in that release.
However, we still have a few tasks getting stuck (Stuck = "Task in queued") on most mornings that require the same manual intervention.
I've started digging in the Airflow DB trying to see where there's a discrepancy, and every time a task gets stuck it's missing a correspondning task in the table "celery_taskmeta".
This is a consistent pattern for the tasks that are stuck with us at this point. The task has rows in the tables "task_instance", "job", and "dag_run" with IDs referencing each other.
But the "external_executor_id" in "task_instance" is missing a corresponding entry in the "celery_taskmeta" table. So nothing ever gets executed and the task_instance is forever stuck in "queued" and never cleaned up by the scheduler.
I can see in "dag_run::last_scheduling_decision" that the scheduler is continuously re-evaluating this task as the timestamp is updated, so it's inspecting it at least, but it leaves everything in the "queued" state.
The other day I bumped our Airflow to 2.2.2, but we still get the same behavior.
And finally, whenever we get tasks that are stuck in "Queued" in this way, they usually occur within the same few seconds, and this correlates with a timestamp at which autopilot scaled down the number of airflow workers.
If the tasks end up in this orphaned/queued state then they never get executed and are stuck until we fail them. Longest I've seen so far is a few days in this state until the task was discovered.
Restarting the scheduler does not resolve this issue and tasks are still stuck in "queued" afterwards.
Would it be possible (and a good idea?) to include in the scheduler a check whether a "task_instance" row has a corresponding row in "celery_taskmeta", and if it's missing from "celery_taskmeta" after a given amount of time, clean it up?
After reading about and watching Ash Berlin-Taylor's most excellent deep-dive video into the Airflow scheduler, this does seem like exactly the kind of check we should add to the scheduler?
Also if there's any data I can dig out and provide for this, don't hesitate to let me know.
### What you expected to happen
I expect orphaned tasks that are in the queued state and missing a corresponding entry in celery_taskmeta to be cleaned up and re-executed by the scheduler.
### How to reproduce
Currently no deterministic way to reproduce other than a large amount of tasks and then remove a worker at just the right time.
Occurs every morning in a handful of tasks, but no deterministic way to reproduce it.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19699 | https://github.com/apache/airflow/pull/19769 | 2b4bf7fe67fc656ceb7bdaad36453b7a5b83ef04 | 14ee831c7ad767e31a3aeccf3edbc519b3b8c923 | "2021-11-19T07:00:42Z" | python | "2022-01-14T08:55:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,688 | ["docs/helm-chart/airflow-configuration.rst"] | Airflow does not load default connections | ### Apache Airflow version
2.2.1
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
deploy with helm in EKS.
### What happened
When I deployed Airflow via Helm, I wanted to use the aws_default connection in my DAGs, but when passing this connection Airflow logged that the connection does not exist. So I looked in the Airflow UI and in the Postgres connection table, and indeed this connection did not exist.
After that I checked the AIRFLOW__CORE__LOAD_DEFAULT_CONNECTIONS env var and it was set to true.
### What you expected to happen
I expected the default connections to be created with the initial deploy.
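As a stopgap, a minimal sketch of how they could be seeded manually (an assumption on my side, not something the chart does), run from any pod or environment that can reach the metadata DB:
```python
from airflow.utils.db import create_default_connections
from airflow.utils.session import create_session

with create_session() as session:
    create_default_connections(session)
```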
### How to reproduce
You just need to deploy Airflow via the Helm chart, check that the value of the AIRFLOW__CORE__LOAD_DEFAULT_CONNECTIONS env var is true, then go to the connection table in the database and check whether any connections were created.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19688 | https://github.com/apache/airflow/pull/19708 | d69b4c9dc82b6c35c387bb819b95cf41fb974ab8 | 1983bf95806422146a3750945a65fd71364dc973 | "2021-11-18T18:33:14Z" | python | "2021-11-24T10:46:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,654 | ["chart/templates/_helpers.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_pgbouncer.py"] | Pgbouncer auth type cannot be configured | ### Official Helm Chart version
1.3.0 (latest released)
### Apache Airflow version
2.2.1
### Kubernetes Version
1.20.9
### Helm Chart configuration
_No response_
### Docker Image customisations
_No response_
### What happened
The default pgbouncer config generated by the helm chart looks something like
```
[databases]
...
[pgbouncer]
pool_mode = transaction
listen_port = 6543
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/users.txt
stats_users = <user>
ignore_startup_parameters = extra_float_digits
max_client_conn = 200
verbose = 0
log_disconnections = 0
log_connections = 0
server_tls_sslmode = require
server_tls_ciphers = normal
```
If the database to connect against is Azure PostgreSQL, the auth type `md5` [does not seem](https://github.com/pgbouncer/pgbouncer/issues/325) to be supported. Hence, we need to add a configuration flag in the Helm chart to change the auth type to something else.
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19654 | https://github.com/apache/airflow/pull/21999 | 3c22565ac862cfe3a3a28a097dc1b7c9987c5d76 | f482ae5570b1a3979ee6b382633e7181a533ba93 | "2021-11-17T14:45:33Z" | python | "2022-03-26T19:55:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,647 | ["airflow/www/extensions/init_appbuilder.py"] | Wrong path for assets | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
macOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Running things locally with pyenv as well as Breeze.
### What happened
I pulled the latest version last night, but that doesn't seem to fix the issue, neither with a local pyenv nor with Docker.
I cannot copy individual logs from tmux (iTerm doesn't seem to allow that), but I can show the logs from the boot process of the Breeze env:
```
Good version of docker 20.10.8.
f321cbfd33eeb8db8effc8f9bc4c3d5317758927da26abeb4c08b14fad09ff6b
f321cbfd33eeb8db8effc8f9bc4c3d5317758927da26abeb4c08b14fad09ff6b
No need to pull the image. Yours and remote cache hashes are the same!
The CI image for Python python:3.8-slim-buster image likely needs to be rebuild
The files were modified since last build: setup.py setup.cfg Dockerfile.ci .dockerignore scripts/docker/compile_www_assets.sh scripts/docker/common.sh scripts/docker/install_additional_dependencies.sh scripts/docker/install_airflow.sh scripts/docker/install_airflow_dependencies_from_branch_tip.sh scripts/docker/install_from_docker_context_files.sh scripts/docker/install_mysql.sh airflow/www/package.json airflow/www/yarn.lock airflow/www/webpack.config.js airflow/ui/package.json airflow/ui/yarn.lock
WARNING!!!!:Make sure that you rebased to latest upstream before rebuilding or the rebuild might take a lot of time!
Please confirm pull and rebuild image CI-python3.8 (or wait 4 seconds to skip it). Are you sure? [y/N/q]
The answer is 'no'. Skipping pull and rebuild image CI-python3.8.
@&&&&&&@
@&&&&&&&&&&&@
&&&&&&&&&&&&&&&&
&&&&&&&&&&
&&&&&&&
&&&&&&&
@@@@@@@@@@@@@@@@ &&&&&&
@&&&&&&&&&&&&&&&&&&&&&&&&&&
&&&&&&&&&&&&&&&&&&&&&&&&&&&&
&&&&&&&&&&&&
&&&&&&&&&
&&&&&&&&&&&&
@@&&&&&&&&&&&&&&&@
@&&&&&&&&&&&&&&&&&&&&&&&&&&&& &&&&&&
&&&&&&&&&&&&&&&&&&&&&&&&&&&& &&&&&&
&&&&&&&&&&&&&&&&&&&&&&&& &&&&&&
&&&&&&
&&&&&&&
@&&&&&&&&
@&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
@&&&@ && @&&&&&&&&&&& &&&&&&&&&&&& && &&&&&&&&&& &&& &&& &&&
&&& &&& && @&& &&& && && &&& &&&@ &&& &&&&& &&&
&&& &&& && @&&&&&&&&&&&& &&&&&&&&&&& && && &&& &&& &&& &&@ &&&
&&&&&&&&&&& && @&&&&&&&&& && && &&@ &&& &&@&& &&@&&
&&& &&& && @&& &&&@ && &&&&&&&&&&& &&&&&&&&&&&& &&&& &&&&
&&&&&&&&&&&& &&&&&&&&&&&& &&&&&&&&&&&@ &&&&&&&&&&&& &&&&&&&&&&& &&&&&&&&&&&
&&& &&& && &&& && &&& &&&& &&
&&&&&&&&&&&&@ &&&&&&&&&&&& &&&&&&&&&&& &&&&&&&&&&& &&&& &&&&&&&&&&
&&& && && &&&& && &&& &&&& &&
&&&&&&&&&&&&& && &&&&@ &&&&&&&&&&&@ &&&&&&&&&&&& @&&&&&&&&&&& &&&&&&&&&&&
Use CI image.
Branch name: main
Docker image: ghcr.io/apache/airflow/main/ci/python3.8:latest
Airflow source version: 2.3.0.dev0
Python version: 3.8
Backend: mysql 5.7
####################################################################################################
Airflow Breeze CHEATSHEET
####################################################################################################
Adding breeze to your path:
When you exit the environment, you can add sources of Airflow to the path - you can
run breeze or the scripts above from any directory by calling 'breeze' commands directly
export PATH=${PATH}:"/Users/burakkarakan/Code/anything-else/airflow"
####################################################################################################
Port forwarding:
Ports are forwarded to the running docker containers for webserver and database
* 12322 -> forwarded to Airflow ssh server -> airflow:22
* 28080 -> forwarded to Airflow webserver -> airflow:8080
* 25555 -> forwarded to Flower dashboard -> airflow:5555
* 25433 -> forwarded to Postgres database -> postgres:5432
* 23306 -> forwarded to MySQL database -> mysql:3306
* 21433 -> forwarded to MSSQL database -> mssql:1443
* 26379 -> forwarded to Redis broker -> redis:6379
Here are links to those services that you can use on host:
* ssh connection for remote debugging: ssh -p 12322 airflow@127.0.0.1 pw: airflow
* Webserver: http://127.0.0.1:28080
* Flower: http://127.0.0.1:25555
* Postgres: jdbc:postgresql://127.0.0.1:25433/airflow?user=postgres&password=airflow
* Mysql: jdbc:mysql://127.0.0.1:23306/airflow?user=root
* Redis: redis://127.0.0.1:26379/0
####################################################################################################
You can setup autocomplete by running 'breeze setup-autocomplete'
####################################################################################################
You can toggle ascii/cheatsheet by running:
* breeze toggle-suppress-cheatsheet
* breeze toggle-suppress-asciiart
####################################################################################################
Checking resources.
* Memory available 5.9G. OK.
* CPUs available 3. OK.
WARNING!!!: Not enough Disk space available for Docker.
At least 40 GBs recommended. You have 28G
WARNING!!!: You have not enough resources to run Airflow (see above)!
Please follow the instructions to increase amount of resources available:
Please check https://github.com/apache/airflow/blob/main/BREEZE.rst#resources-required for details
Good version of docker-compose: 1.29.2
WARNING: The ENABLE_TEST_COVERAGE variable is not set. Defaulting to a blank string.
Creating network "docker-compose_default" with the default driver
Creating volume "docker-compose_sqlite-db-volume" with default driver
Creating volume "docker-compose_postgres-db-volume" with default driver
Creating volume "docker-compose_mysql-db-volume" with default driver
Creating volume "docker-compose_mssql-db-volume" with default driver
Creating docker-compose_mysql_1 ... done
Creating docker-compose_airflow_run ... done
Airflow home: /root/airflow
Airflow sources: /opt/airflow
Airflow core SQL connection: mysql://root@mysql/airflow?charset=utf8mb4
Using already installed airflow version
No need for www assets recompilation.
===============================================================================================
Checking integrations and backends
===============================================================================================
MySQL: OK.
-----------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
Starting Airflow
Your dags for webserver and scheduler are read from /files/dags directory
which is mounted from your <AIRFLOW_SOURCES>/files/dags folder
You can add /files/airflow-breeze-config directory and place variables.env
In it to make breeze source the variables automatically for you
You can add /files/airflow-breeze-config directory and place .tmux.conf
in it to make breeze use your local .tmux.conf for tmux
```
The logs say `No need for www assets recompilation.` which signals that the assets are already up-to-date. However, when I visit the page, the files are not there:
<img width="687" alt="image" src="https://user-images.githubusercontent.com/16530606/142189803-92dc5e9f-c940-4272-98a1-e5845344b62d.png">
- When I search the whole directory with `find . -name 'ab_filters.js'`, there's no file, which means the issue is not with the browser cache.
- Just in case there was another race condition, I ran the following command again to see if it'd generate the file for some reason: `./breeze initialize-local-virtualenv --python 3.8`, but the result of `find . -name 'ab_filters.js'` is still empty.
- Then I ran `./airflow/www/compile_assets.sh`, but that didn't make a difference either; `find` is still empty.
Here's the output from the `compile_assets.sh`:
```
❯ ./airflow/www/compile_assets.sh
yarn install v1.22.17
[1/4] 🔍 Resolving packages...
success Already up-to-date.
✨ Done in 0.56s.
yarn run v1.22.17
$ NODE_ENV=production webpack --colors --progress
23% building 26/39 modules 13 active ...www/node_modules/babel-loader/lib/index.js??ref--5!/Users/burakkarakan/Code/anything-else/airflow/airflow/www/static/js/datetime_utils.jspostcss-modules-values: postcss.plugin was deprecated. Migration guide:
https://evilmartians.com/chronicles/postcss-8-plugin-migration
postcss-modules-local-by-default: postcss.plugin was deprecated. Migration guide:
https://evilmartians.com/chronicles/postcss-8-plugin-migration
modules-extract-imports: postcss.plugin was deprecated. Migration guide:
https://evilmartians.com/chronicles/postcss-8-plugin-migration
postcss-modules-scope: postcss.plugin was deprecated. Migration guide:
https://evilmartians.com/chronicles/postcss-8-plugin-migration
postcss-import-parser: postcss.plugin was deprecated. Migration guide:
https://evilmartians.com/chronicles/postcss-8-plugin-migration
postcss-icss-parser: postcss.plugin was deprecated. Migration guide:
https://evilmartians.com/chronicles/postcss-8-plugin-migration
postcss-url-parser: postcss.plugin was deprecated. Migration guide:
https://evilmartians.com/chronicles/postcss-8-plugin-migration
Hash: c303806b9efab4bf5dec
Version: webpack 4.44.2
Time: 4583ms
Built at: 11/17/2021 11:13:58 AM
Asset Size Chunks Chunk Names
../../../../licenses/LICENSES-ui.txt 52.8 KiB [emitted]
airflowDefaultTheme.9ef6a9e2f0de25c0b346.css 102 KiB 0 [emitted] [immutable] airflowDefaultTheme
airflowDefaultTheme.9ef6a9e2f0de25c0b346.js 4.15 KiB 0 [emitted] [immutable] airflowDefaultTheme
bootstrap-datetimepicker.min.css 7.54 KiB [emitted]
bootstrap-datetimepicker.min.js 37.1 KiB [emitted]
bootstrap3-typeahead.min.js 10 KiB [emitted]
calendar.5260e8f126017610ad73.css 1.06 KiB 1 [emitted] [immutable] calendar
calendar.5260e8f126017610ad73.js 15.4 KiB 1 [emitted] [immutable] calendar
codemirror.css 5.79 KiB [emitted]
codemirror.js 389 KiB [emitted] [big]
coffeescript-lint.js 1.43 KiB [emitted]
connectionForm.be3bf4692736d58cfdb0.js 12.8 KiB 2 [emitted] [immutable] connectionForm
css-lint.js 1.28 KiB [emitted]
d3-shape.min.js 29.1 KiB [emitted]
d3-tip.js 8.99 KiB [emitted]
d3.min.js 148 KiB [emitted]
dag.c0b8852bb690f83bb55e.js 20.4 KiB 3 [emitted] [immutable] dag
dagCode.98dce599559f03115f1f.js 6.48 KiB 4 [emitted] [immutable] dagCode
dagDependencies.c2cdb377b2d3b7be7d1b.js 10.4 KiB 5 [emitted] [immutable] dagDependencies
dagre-d3.core.min.js 27.5 KiB [emitted]
dagre-d3.core.min.js.map 26.3 KiB [emitted]
dagre-d3.min.js 708 KiB [emitted] [big]
dagre-d3.min.js.map 653 KiB [emitted] [big]
dags.0ca53db014891875da7d.css 2.59 KiB 6, 3, 18 [emitted] [immutable] dags
dags.0ca53db014891875da7d.js 45.9 KiB 6, 3, 18 [emitted] [immutable] dags
dataTables.bootstrap.min.css 4.13 KiB [emitted]
dataTables.bootstrap.min.js 1.93 KiB [emitted]
durationChart.ca520df04ff71dd5fab9.js 5.11 KiB 7 [emitted] [immutable] durationChart
flash.ab8296a74435427f9b53.css 1.36 KiB 8 [emitted] [immutable] flash
flash.ab8296a74435427f9b53.js 4.12 KiB 8 [emitted] [immutable] flash
gantt.d7989000350b53dc0855.css 1.1 KiB 9, 3, 18 [emitted] [immutable] gantt
gantt.d7989000350b53dc0855.js 42 KiB 9, 3, 18 [emitted] [immutable] gantt
graph.eaba1e30424750441353.css 2.37 KiB 10, 3, 18 [emitted] [immutable] graph
graph.eaba1e30424750441353.js 55.5 KiB 10, 3, 18 [emitted] [immutable] graph
html-lint.js 1.94 KiB [emitted]
ie.fc8f40153cdecb7eb0b3.js 16.4 KiB 11 [emitted] [immutable] ie
javascript-lint.js 2.11 KiB [emitted]
javascript.js 37.9 KiB [emitted]
jquery.dataTables.min.js 81.6 KiB [emitted]
jshint.js 1.22 MiB [emitted] [big]
json-lint.js 1.3 KiB [emitted]
lint.css 2.55 KiB [emitted]
lint.js 8.91 KiB [emitted]
loadingDots.e4fbfc09969e91db1f49.css 1.21 KiB 12 [emitted] [immutable] loadingDots
loadingDots.e4fbfc09969e91db1f49.js 4.13 KiB 12 [emitted] [immutable] loadingDots
main.216f001f0b6da7966a9f.css 6.85 KiB 13 [emitted] [immutable] main
main.216f001f0b6da7966a9f.js 16.4 KiB 13 [emitted] [immutable] main
manifest.json 3.31 KiB [emitted]
materialIcons.e368f72fd0a7e9a40455.css 109 KiB 14 [emitted] [immutable] materialIcons
materialIcons.e368f72fd0a7e9a40455.js 4.13 KiB 14 [emitted] [immutable] materialIcons
moment.f2be510679d38b9c54e9.js 377 KiB 15 [emitted] [immutable] [big] moment
nv.d3.min.css 8.13 KiB [emitted]
nv.d3.min.css.map 3.59 KiB [emitted]
nv.d3.min.js 247 KiB [emitted] [big]
nv.d3.min.js.map 1.86 MiB [emitted] [big]
oss-licenses.json 66.3 KiB [emitted]
redoc.standalone.js 970 KiB [emitted] [big]
redoc.standalone.js.LICENSE.txt 2.75 KiB [emitted]
redoc.standalone.js.map 3.23 MiB [emitted] [big]
switch.3e30e60646cdea5e4216.css 2.04 KiB 16 [emitted] [immutable] switch
switch.3e30e60646cdea5e4216.js 4.12 KiB 16 [emitted] [immutable] switch
task.8082a6cd3c389845ca0c.js 5.33 KiB 17 [emitted] [immutable] task
taskInstances.d758e4920a32ca069541.js 33.1 KiB 18, 3 [emitted] [immutable] taskInstances
tiLog.fc2c3580403a943ccddb.js 23.8 KiB 19 [emitted] [immutable] tiLog
tree.57c43dd706cbd3d74ef9.css 1.31 KiB 20, 3 [emitted] [immutable] tree
tree.57c43dd706cbd3d74ef9.js 1.48 MiB 20, 3 [emitted] [immutable] [big] tree
trigger.57a3ebbaee0f22bd5022.js 5.03 KiB 21 [emitted] [immutable] trigger
variableEdit.45c5312f076fbe019680.js 4.97 KiB 22 [emitted] [immutable] variableEdit
yaml-lint.js 1.23 KiB [emitted]
Entrypoint airflowDefaultTheme = airflowDefaultTheme.9ef6a9e2f0de25c0b346.css airflowDefaultTheme.9ef6a9e2f0de25c0b346.js
Entrypoint connectionForm = connectionForm.be3bf4692736d58cfdb0.js
Entrypoint dag = dag.c0b8852bb690f83bb55e.js
Entrypoint dagCode = dagCode.98dce599559f03115f1f.js
Entrypoint dagDependencies = dagDependencies.c2cdb377b2d3b7be7d1b.js
Entrypoint dags = dags.0ca53db014891875da7d.css dags.0ca53db014891875da7d.js
Entrypoint flash = flash.ab8296a74435427f9b53.css flash.ab8296a74435427f9b53.js
Entrypoint gantt = gantt.d7989000350b53dc0855.css gantt.d7989000350b53dc0855.js
Entrypoint graph = graph.eaba1e30424750441353.css graph.eaba1e30424750441353.js
Entrypoint ie = ie.fc8f40153cdecb7eb0b3.js
Entrypoint loadingDots = loadingDots.e4fbfc09969e91db1f49.css loadingDots.e4fbfc09969e91db1f49.js
Entrypoint main = main.216f001f0b6da7966a9f.css main.216f001f0b6da7966a9f.js
Entrypoint materialIcons = materialIcons.e368f72fd0a7e9a40455.css materialIcons.e368f72fd0a7e9a40455.js
Entrypoint moment [big] = moment.f2be510679d38b9c54e9.js
Entrypoint switch = switch.3e30e60646cdea5e4216.css switch.3e30e60646cdea5e4216.js
Entrypoint task = task.8082a6cd3c389845ca0c.js
Entrypoint taskInstances = taskInstances.d758e4920a32ca069541.js
Entrypoint tiLog = tiLog.fc2c3580403a943ccddb.js
Entrypoint tree [big] = tree.57c43dd706cbd3d74ef9.css tree.57c43dd706cbd3d74ef9.js
Entrypoint calendar = calendar.5260e8f126017610ad73.css calendar.5260e8f126017610ad73.js
Entrypoint durationChart = durationChart.ca520df04ff71dd5fab9.js
Entrypoint trigger = trigger.57a3ebbaee0f22bd5022.js
Entrypoint variableEdit = variableEdit.45c5312f076fbe019680.js
[8] ./static/js/dag.js 8.77 KiB {3} {6} {9} {10} {18} {20} [built]
[16] ./static/js/task_instances.js 4.54 KiB {6} {9} {10} {18} [built]
[69] ./static/css/bootstrap-theme.css 50 bytes {0} [built]
[70] ./static/js/connection_form.js 7.35 KiB {2} [built]
[71] ./static/js/dag_code.js 1.09 KiB {4} [built]
[72] ./static/js/dag_dependencies.js 6.39 KiB {5} [built]
[73] multi ./static/css/dags.css ./static/js/dags.js 40 bytes {6} [built]
[76] ./static/css/flash.css 50 bytes {8} [built]
[77] multi ./static/css/gantt.css ./static/js/gantt.js 40 bytes {9} [built]
[80] multi ./static/css/graph.css ./static/js/graph.js 40 bytes {10} [built]
[83] ./static/js/ie.js 887 bytes {11} [built]
[85] ./static/css/loading-dots.css 50 bytes {12} [built]
[86] multi ./static/css/main.css ./static/js/main.js 40 bytes {13} [built]
[88] ./static/css/material-icons.css 50 bytes {14} [built]
[93] ./static/css/switch.css 50 bytes {16} [built]
+ 425 hidden modules
WARNING in configuration
The 'mode' option has not been set, webpack will fallback to 'production' for this value. Set 'mode' option to 'development' or 'production' to enable defaults for each environment.
You can also set it to 'none' to disable any default behavior. Learn more: https://webpack.js.org/configuration/mode/
WARNING in asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
This can impact web performance.
Assets:
moment.f2be510679d38b9c54e9.js (377 KiB)
tree.57c43dd706cbd3d74ef9.js (1.48 MiB)
nv.d3.min.js (247 KiB)
nv.d3.min.js.map (1.86 MiB)
dagre-d3.min.js (708 KiB)
dagre-d3.min.js.map (653 KiB)
redoc.standalone.js (970 KiB)
redoc.standalone.js.map (3.23 MiB)
codemirror.js (389 KiB)
jshint.js (1.22 MiB)
WARNING in entrypoint size limit: The following entrypoint(s) combined asset size exceeds the recommended limit (244 KiB). This can impact web performance.
Entrypoints:
moment (377 KiB)
moment.f2be510679d38b9c54e9.js
tree (1.48 MiB)
tree.57c43dd706cbd3d74ef9.css
tree.57c43dd706cbd3d74ef9.js
WARNING in webpack performance recommendations:
You can limit the size of your bundles by using import() or require.ensure to lazy load some parts of your application.
For more info visit https://webpack.js.org/guides/code-splitting/
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/bootstrap-theme.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/bootstrap-theme.css 130 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/calendar.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/calendar.css 1.47 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/dags.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/dags.css 3.31 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/flash.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/flash.css 2.25 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/gantt.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/gantt.css 1.58 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/graph.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/graph.css 3.16 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/loading-dots.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/loading-dots.css 1.64 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/main.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/main.css 10.7 KiB {0} [built]
[3] ./static/sort_both.png 307 bytes {0} [built]
[4] ./static/sort_desc.png 251 bytes {0} [built]
[5] ./static/sort_asc.png 255 bytes {0} [built]
+ 2 hidden modules
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/material-icons.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/material-icons.css 110 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/switch.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/switch.css 2.69 KiB {0} [built]
+ 1 hidden module
Child mini-css-extract-plugin node_modules/css-loader/dist/cjs.js!static/css/tree.css:
Entrypoint mini-css-extract-plugin = *
[0] ./node_modules/css-loader/dist/cjs.js!./static/css/tree.css 1.8 KiB {0} [built]
+ 1 hidden module
✨ Done in 7.53s.
```
### What you expected to happen
_No response_
### How to reproduce
Checkout the `main` branch locally and run the project.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19647 | https://github.com/apache/airflow/pull/19661 | 7cda7d4b5e413925bf639976e77ebf2442b4bff9 | a81ae61ecfc4274780b571ff2f599f7c75875e14 | "2021-11-17T13:30:59Z" | python | "2021-11-17T19:02:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,641 | ["airflow/providers/amazon/aws/hooks/eks.py", "airflow/providers/amazon/aws/operators/eks.py", "tests/providers/amazon/aws/hooks/test_eks.py", "tests/providers/amazon/aws/operators/test_eks.py", "tests/providers/amazon/aws/utils/eks_test_constants.py", "tests/providers/amazon/aws/utils/eks_test_utils.py"] | EKSCreateNodegroupOperator - missing kwargs | ### Description
Boto3 / eks / create_nodegroup api supports the following kwargs:
```python
clusterName='string',
nodegroupName='string',
scalingConfig={
'minSize': 123,
'maxSize': 123,
'desiredSize': 123
},
diskSize=123,
subnets=[
'string',
],
instanceTypes=[
'string',
],
amiType='AL2_x86_64'|'AL2_x86_64_GPU'|'AL2_ARM_64'|'CUSTOM'|'BOTTLEROCKET_ARM_64'|'BOTTLEROCKET_x86_64',
remoteAccess={
'ec2SshKey': 'string',
'sourceSecurityGroups': [
'string',
]
},
nodeRole='string',
labels={
'string': 'string'
},
taints=[
{
'key': 'string',
'value': 'string',
'effect': 'NO_SCHEDULE'|'NO_EXECUTE'|'PREFER_NO_SCHEDULE'
},
],
tags={
'string': 'string'
},
clientRequestToken='string',
launchTemplate={
'name': 'string',
'version': 'string',
'id': 'string'
},
updateConfig={
'maxUnavailable': 123,
'maxUnavailablePercentage': 123
},
capacityType='ON_DEMAND'|'SPOT',
version='string',
releaseVersion='string'
```
The current implementation of the operator supports only the following kwargs:
```python
clusterName='string',
nodegroupName='string',
subnets=[ 'string', ],
nodeRole='string',
```
### Use case/motivation
With the current implementation of the operator you can only bring up a basic EKS nodegroup, which is not very useful. I want to fully configure my nodegroup with this operator, like I can with boto3.
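For illustration, a hypothetical usage sketch of what I'd like to be able to write (`create_nodegroup_kwargs` is an illustrative name, not an existing parameter, and the other parameter names are approximate):
```python
from airflow.providers.amazon.aws.operators.eks import EKSCreateNodegroupOperator

create_nodegroup = EKSCreateNodegroupOperator(
    task_id="create_nodegroup",
    cluster_name="my-cluster",
    nodegroup_name="my-nodegroup",
    nodegroup_subnets=["subnet-12345"],
    nodegroup_role_arn="arn:aws:iam::123456789012:role/eks-nodegroup-role",
    create_nodegroup_kwargs={  # everything boto3's create_nodegroup accepts beyond the basics
        "scalingConfig": {"minSize": 1, "maxSize": 5, "desiredSize": 2},
        "instanceTypes": ["m5.large"],
        "capacityType": "SPOT",
        "labels": {"team": "data"},
    },
)
```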
### Related issues
Continue of #19372
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19641 | https://github.com/apache/airflow/pull/20819 | 88814587d451be7493e005e4d477609a39caa1d9 | 307d35651998901b064b02a0748b1c6f96ae3ac0 | "2021-11-17T10:42:50Z" | python | "2022-01-14T17:05:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,638 | ["airflow/models/connection.py", "airflow/operators/sql.py", "airflow/sensors/sql.py"] | Resolve parameter name for SqlSensor, BaseSqlOperator, Connection | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
Ubuntu
### Deployment
Virtualenv installation
### What happened
There are different names for the same keyword argument:
`BaseSqlOperator(hook_params)` - by PR https://github.com/apache/airflow/pull/18718
`Connection.get_hook(hook_kwargs)` - by PR https://github.com/apache/airflow/pull/18718
`SqlSensor(hook_params)` - by PR https://github.com/apache/airflow/pull/18431
Plan:
- Rename `hook_kwargs ` to `hook_params` in Connection.get_hook
https://github.com/apache/airflow/blob/6df69df421dfc6f1485f637aa74fc3dab0e2b9a4/airflow/models/connection.py#L288
- Rename usage in BaseSqlOperator
https://github.com/apache/airflow/blob/6df69df421dfc6f1485f637aa74fc3dab0e2b9a4/airflow/operators/sql.py#L68
- Rename usage in SqlSensor
https://github.com/kazanzhy/airflow/blob/459a652e831785f362fd3443c46b53161f8bfc72/airflow/sensors/sql.py#L87
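After the rename, usage would look consistent across the three call sites, roughly like this sketch (assuming a hook whose constructor accepts a `schema` keyword):
```python
from airflow.sensors.sql import SqlSensor

wait_for_rows = SqlSensor(
    task_id="wait_for_rows",
    conn_id="my_postgres",
    sql="SELECT COUNT(*) FROM my_table",
    hook_params={"schema": "analytics"},  # forwarded as Connection.get_hook(hook_params=...)
)
```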
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19638 | https://github.com/apache/airflow/pull/19849 | e87856df32275768ba72ed85b6ea11dbca4d2f48 | 5a5d50d1e347dfc8d8ee99370c41de3106dea558 | "2021-11-17T09:55:30Z" | python | "2021-11-27T16:56:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,622 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Tasks get stuck in "scheduled" state and starved when dags with huge amount of tasks is scheduled | ### Apache Airflow version
2.0.2
### Operating System
amzn linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
AIRFLOW__CORE__PARALLELISM: "128"
worker_concurrency: 32
2 Celery workers
MySQL 8.0.23 RDS as a DB backend
### What happened
* Tasks get stuck in the "scheduled" state for hours; the task details say "All dependencies are met but the task instance is not running"
* The stuck tasks are executed eventually
* Usually, at the same time, there are DAGs with >100 tasks running
* The big dags are limited by the dag-level `concurrency` parameter to 10 tasks at a time
* Workers and pools have plenty of free slots
* If big dags are switched off, starving tasks are picked up immediately, even if tasks from the big dags are still running
* In scheduler logs, the starving tasks do not appear in the "tasks up for execution" list
* The number of concurrent tasks that are actually running is around 30 total on both executors (out of 64 available slots)
### What you expected to happen
As there are enough slots on the workers & pools, I expect tasks that are ready and actually *can* run to be picked up and moved to queued by the scheduler.
### How to reproduce
This example DAG file should reproduce the problem on an environment with at least 20-25 available slots and a core parallelism of 128. The DAG that will get starved is "tester_multi_load_3". Not every task is starved, but on my env there were gaps of up to 20 minutes between task executions. I guess the starvation time depends on ordering (?), as I'm not adding any weights...
<details><summary>CLICK ME</summary>
<p>
```python
import os
import time
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator
default_args = {
'owner': 'Tester',
'depends_on_past': False,
'start_date': datetime(2021, 7, 17),
'retries': 5
}
def sleep(timer):
if not timer:
timer = 60
print(f'Timer is {str(timer)}')
time.sleep(timer)
with DAG(
dag_id=os.path.basename(__file__).replace('.py', '') + '_1', # name of the dag
default_args=default_args,
concurrency=10,
max_active_runs=5,
schedule_interval='@hourly',
orientation='LR',
tags=['testers']
) as dag1:
for i in range(150):
t = PythonOperator(
task_id=f'python_{i}',
python_callable=sleep,
op_args=[""],
priority_weight=-100,
)
with DAG(
os.path.basename(__file__).replace('.py', '') + '_2', # name of the dag
default_args=default_args,
concurrency=7,
max_active_runs=2,
schedule_interval='@hourly',
orientation='LR',
tags=['testers']
) as dag2:
for i in range(150):
t = PythonOperator(task_id=f'python_{i}',
python_callable=sleep,
op_args=[""],
)
with DAG(
os.path.basename(__file__).replace('.py', '') + '_3', # name of the dag
default_args=default_args,
concurrency=1,
max_active_runs=1,
schedule_interval='@hourly',
orientation='LR',
tags=['testers']
) as dag3:
t1 = PythonOperator(task_id=f'python', python_callable=sleep, op_args=[""])
for i in range(10):
t2 = PythonOperator(task_id=f'python_{i}', python_callable=sleep, op_args=[""])
t1 >> t2
t1 = t2
```
</p>
</details>
### Anything else
Digging around the code, I found that there's a limit on the query the scheduler performs [here](https://github.com/apache/airflow/blob/10023fdd65fa78033e7125d3d8103b63c127056e/airflow/jobs/scheduler_job.py#L928), which comes from [here](https://github.com/apache/airflow/blob/10023fdd65fa78033e7125d3d8103b63c127056e/airflow/jobs/scheduler_job.py#L1141), and actually seems to be calculated overall from the global `parallelism` value.
So what actually happens is that the scheduler queries the DB with a limit, gets back a partial list of tasks that actually cannot be executed because of the dag-level concurrency, and only gets to the other tasks when there's a window between the big DAGs' executions. Increasing the `parallelism` to 1024 solved the issue in our case.
The `parallelism` parameter in this case is very confusing, because it should indicate tasks that can be `run concurrently`, but actually limits the scheduler's ability to move tasks from scheduled to queued...
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19622 | https://github.com/apache/airflow/pull/19747 | 4de9d6622c0cb5899a286f4ec8f131b625eb1d44 | cd68540ef19b36180fdd1ebe38435637586747d4 | "2021-11-16T16:09:11Z" | python | "2022-03-22T17:30:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,618 | ["RELEASE_NOTES.rst"] | Execution_date not rendering after airflow upgrade | ### Apache Airflow version
2.2.2 (latest released)
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.3.0
apache-airflow-providers-apache-spark==2.0.1
apache-airflow-providers-cncf-kubernetes==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-sqlite==2.0.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
Hi,
We recently upgraded airflow from 2.1.0 to 2.2.2 (2.1.0 to 2.2.0 to 2.2.1 to 2.2.2) and DAGs aren't running as expected. All these DAGs were added before the upgrade itself and they were running fine.
We use the execution_date parameter in SparkSubmitOperator; the template below was rendering fine before the upgrade but now fails because the expression evaluates to None:
"{{ (execution_date if execution_date.microsecond > 0 else dag.following_schedule(execution_date)).isoformat() }}"
DAG run fails with the error
jinja2.exceptions.UndefinedError: 'None' has no attribute 'isoformat'
I tried wiping out the database and running it as a fresh DAG, but I still get the same error.
Any help would be appreciated
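A possible workaround sketch (an assumption on my side, not verified): in Airflow 2.2 the end of the data interval plays the role of the old "following schedule", so the template could become:
```python
spark_date_arg = "{{ data_interval_end.isoformat() }}"
```
Note this drops the microsecond special-casing for manually triggered runs from the original template, so it may need adjusting.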
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19618 | https://github.com/apache/airflow/pull/24413 | dfdf8eb28f952bc42d8149043bcace9bfe882d76 | ce48567c54d0124840062b6bd86c2a745d3cc6d0 | "2021-11-16T14:53:15Z" | python | "2022-06-13T14:42:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,613 | ["airflow/www/views.py"] | Can't open task log from Gantt view | ### Apache Airflow version
2.2.1
### Operating System
Linux 5.4.149-73.259.amzn2.x86_64
### Versions of Apache Airflow Providers
default for 2.2.1
### Deployment
Other 3rd-party Helm chart
### Deployment details
aws eks using own-developed helm chart
### What happened
When trying to open a task log from the Gantt view, I receive an exception:
```
File "/home/airflow/.local/lib/python3.9/site-packages/pendulum/parsing/__init__.py", line 177, in _parse_common
return date(year, month, day)
ValueError: year 0 is out of range
```
because the log link is built with no value for the `execution_date` query parameter:
```
/log?dag_id=report_generator_daily&task_id=check_quints_earnings&execution_date=
```
### What you expected to happen
Logs should be available
### How to reproduce
1. Open the DAG's `gantt` chart
2. Click on a task ribbon
3. Click on `log`
4. Observe the error
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19613 | https://github.com/apache/airflow/pull/20121 | b37c0efabd29b9f20ba05c0e1281de22809e0624 | f59decd391b75c509020e603e5857bb63ec891be | "2021-11-16T07:52:23Z" | python | "2021-12-08T16:11:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,581 | ["airflow/hooks/dbapi.py", "airflow/operators/generic_transfer.py", "airflow/providers/google/cloud/hooks/workflows.py", "airflow/providers/google/cloud/operators/workflows.py", "airflow/providers/postgres/hooks/postgres.py", "airflow/providers/sqlite/hooks/sqlite.py", "scripts/in_container/run_generate_constraints.sh"] | Miscellaneous documentation typos | ### Describe the issue with documentation
Recently, starting out with Airflow, I've been reading parts of the documentation quite carefully. There are at least two typos that could be fixed. First, looking at [Module management](https://airflow.apache.org/docs/apache-airflow/stable/modules_management.html) I see: "You can see the `.ariflowignore` file at the root of your folder." I'm rather confused by this, since when looking at [module_management.rst](https://github.com/apache/airflow/blob/main/docs/apache-airflow/modules_management.rst) it seems to say "You can see the `.airflowignore` file at the root of your folder" (no typo). Why is that?
Second, looking at the docs for [GenericTransfer](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/operators/generic_transfer/index.html), the explanation for `destination_conn_id (str)` should, I believe, say e.g. `destination connection` and not `source connection` (compare with `source_conn_id (str)`). However, I'm unable to find the corresponding doc in the repo. When clicking on "Suggest a change on this page" I end up with a 404.
### How to solve the problem
See above for the suggested fixes as well. I'd be happy to submit a PR too (but see some questions above as well).
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19581 | https://github.com/apache/airflow/pull/19599 | b9d31cd44962fc376fcf98380eaa1ea60fb6c835 | 355dec8fea5e2ef1a9b88363f201fce4f022fef3 | "2021-11-14T12:15:15Z" | python | "2021-11-17T18:12:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,570 | ["airflow/providers/google/cloud/hooks/bigquery.py"] | get_records() don't work out for BigQueryHook | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google 4.0.0`
### Apache Airflow version
2.1.2
### Operating System
PRETTY_NAME="Debian GNU/Linux 10 (buster)" NAME="Debian GNU/Linux" VERSION_ID="10" VERSION="10 (buster)" VERSION_CODENAME=buster ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/"
### Deployment
Docker-Compose
### Deployment details
I'm using official docker-compose
### What happened
```bash
[2021-11-12 09:24:21,409] {logging_mixin.py:104} WARNING - /home/***/.local/lib/python3.8/site-packages/***/providers/google/cloud/hooks/bigquery.py:141 DeprecationWarning: This method will be deprecated. Please use `BigQueryHook.get_client` method
[2021-11-12 09:24:23,033] {logging_mixin.py:104} WARNING - /home/***/.local/lib/python3.8/site-packages/***/providers/google/cloud/hooks/bigquery.py:2195 DeprecationWarning: This method is deprecated. Please use `BigQueryHook.insert_job` method.
[2021-11-12 09:24:23,042] {bigquery.py:1639} INFO - Inserting job ***_1636709063039439_14741bf3004db91ee4cbb5eb024ac5ba
[2021-11-12 09:24:27,463] {taskinstance.py:1501} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1157, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1331, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1361, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 150, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 161, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/dags/dags/utils/others/subscription_related.py", line 112, in wrapper
return func(*args, **kwargs)
File "/opt/airflow/dags/dags/utils/extractors/platform_data_extractors/shopify_extractor.py", line 75, in wrapper
return func(*args, **kwargs)
File "/opt/airflow/dags/dags/utils/extractors/platform_data_extractors/shopify_extractor.py", line 1019, in add_abandoned
abandoned_checkouts_of_this_page = _parse_this_page(response_json)
File "/opt/airflow/dags/dags/utils/extractors/platform_data_extractors/shopify_extractor.py", line 980, in _parse_this_page
persons_queried_by_checkout_id = db_hook.get_records(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/hooks/dbapi.py", line 135, in get_records
return cur.fetchall()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2886, in fetchall
one = self.fetchone()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2811, in fetchone
return self.next()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2827, in next
self.service.jobs()
File "/home/airflow/.local/lib/python3.8/site-packages/googleapiclient/_helpers.py", line 134, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/googleapiclient/http.py", line 915, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 404 when requesting https://bigquery.googleapis.com/bigquery/v2/projects/tresl-co-001/queries/airflow_1636709063039439_14741bf3004db91ee4cbb5eb024ac5ba?alt=json returned "Not found: Job tresl-co-001:airflow_1636709063039439_14741bf3004db91ee4cbb5eb024ac5ba". Details: "Not found: Job tresl-co-001:airflow_1636709063039439_14741bf3004db91ee4cbb5eb024ac5ba">
```
### What you expected to happen
`bq_hook.get_records(sql)` should work anyway
### How to reproduce
```python
# 'dags.utils.hooks.bigquery' is a local wrapper; per the traceback it resolves to
# airflow.providers.google.cloud.hooks.bigquery.BigQueryHook
from dags.utils.hooks.bigquery import BigQueryHook

bq_hook = BigQueryHook(gcp_conn_id='xxxx', use_legacy_sql=False)
sql = "SELECT 1"  # any query triggers the error
bq_hook.get_records(sql)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19570 | https://github.com/apache/airflow/pull/19571 | 6579648af2a21aa01cb93f051d091569a03c04a4 | 6bb0857df94c0f959e7ebe421a00b942fd60b199 | "2021-11-13T05:40:03Z" | python | "2022-02-13T19:03:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,569 | ["dev/README_RELEASE_AIRFLOW.md", "scripts/ci/tools/prepare_prod_docker_images.sh"] | The latest docker image is not the "latest" | ### Apache Airflow version
2.2.1 (latest released)
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
The image tags `latest` and `latest-python3.X` point at images released either two months ago or even five months ago.
https://hub.docker.com/r/apache/airflow/tags?name=latest
### What you expected to happen
According to the documentation here [1], it seems they should be aligned with the latest stable version.
BTW, I am willing to submit a PR, but might need some hints on how we manage the docker image tags.
[1] https://airflow.apache.org/docs/docker-stack/index.html
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19569 | https://github.com/apache/airflow/pull/19573 | 1453b959a614ab1ac045e61b9e5939def0ad9dff | 4a072725cbe63bff8f69b05dfb960134783ee17e | "2021-11-13T03:07:10Z" | python | "2021-11-15T21:34:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,538 | [".pre-commit-config.yaml", "airflow/decorators/base.py", "airflow/decorators/python.py", "airflow/decorators/python_virtualenv.py", "tests/decorators/test_python.py"] | TaskFlow API `multiple_outputs` inference not handling all flavors of dict typing | ### Apache Airflow version
2.2.0
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```shell
apache-airflow-providers-amazon 1!2.2.0
apache-airflow-providers-cncf-kubernetes 1!2.0.3
apache-airflow-providers-elasticsearch 1!2.0.3
apache-airflow-providers-ftp 1!2.0.1
apache-airflow-providers-google 1!6.0.0
apache-airflow-providers-http 1!2.0.1
apache-airflow-providers-imap 1!2.0.1
apache-airflow-providers-microsoft-azure 1!3.2.0
apache-airflow-providers-mysql 1!2.1.1
apache-airflow-providers-postgres 1!2.3.0
apache-airflow-providers-redis 1!2.0.1
apache-airflow-providers-slack 1!4.1.0
apache-airflow-providers-sqlite 1!2.0.1
apache-airflow-providers-ssh 1!2.2.0
```
### Deployment
Astronomer
### Deployment details
Local deployment using the Astronomer CLI.
### What happened
Creating a TaskFlow function with a return type annotation of `dict` does not yield `XComs` for each key within the returned dict. Additionally, the inference does not work for both `dict` and `Dict` (without arg annotation) types in Python 3.6.
### What you expected to happen
When creating a TaskFlow function and not explicitly setting `multiple_outputs=True`, the unfurling of `XComs` into separate keys is inferred by the return type annotation (as noted [here](https://airflow.apache.org/docs/apache-airflow/stable/tutorial_taskflow_api.html#multiple-outputs-inference)). When using a return type annotation of `dict`, separate `XComs` should be created. There is an explicit check for this type as well:
https://github.com/apache/airflow/blob/7622f5e08261afe5ab50a08a6ca0804af8c7c7fe/airflow/decorators/base.py#L207
Additionally, on Python 3.6, the inference should handle generating multiple `XComs` for both `dict` and `typing.Dict` return type annotations as expected on other Python versions.
### How to reproduce
This DAG can be used to demonstrate the different results of dict typing:
```python
from datetime import datetime
from typing import Dict
from airflow.decorators import dag, task
from airflow.models.baseoperator import chain
from airflow.models import XCom
@dag(
start_date=datetime(2021, 11, 11),
schedule_interval=None,
)
def __test__():
@task
def func_no_return_anno():
return {"key1": "value1", "key2": "value2"}
@task
def func_with_dict() -> dict:
return {"key1": "value1", "key2": "value2"}
@task
def func_with_typing_dict() -> Dict:
return {"key1": "value1", "key2": "value2"}
@task
def func_with_typing_dict_explicit() -> Dict[str, str]:
return {"key1": "value1", "key2": "value2"}
@task
def get_xcoms(run_id=None):
xcoms = XCom.get_many(
dag_ids="__test__",
task_ids=[
"func_no",
"func_with_dict",
"func_with_typing_dict",
"func_with_typing_dict_explicit",
],
run_id=run_id,
).all()
for xcom in xcoms:
print(f"Task ID: {xcom.task_id} \n", f"Key: {xcom.key} \n", f"Value: {xcom.value}")
chain(
[
func_no_return_anno(),
func_with_dict(),
func_with_typing_dict(),
func_with_typing_dict_explicit(),
],
get_xcoms(),
)
dag = __test__()
```
**Expected `XCom` keys**
- func_no_return_anno
- `return_value`
- func_with_dict
- `return_value`, `key1`, and `key2`
- func_with_typing_dict
- `return_value`, `key1`, and `key2`
- func_with_typing_dict_explicit
- `return_value`, `key1`, and `key2`
Here is the output from the `get_xcoms` task which is gathering all of the `XComs` generated for the run:
![image](https://user-images.githubusercontent.com/48934154/141336206-259bd78b-8ef3-4edb-81a6-b161d783f39f.png)
The `func_with_dict` task does not yield `XComs` for `key1` and `key2`.
### Anything else
The inference also doesn't function as intended on Python 3.6 when using simple `dict` or `Dict` return types.
For example, isolating the existing `TestAirflowTaskDecorator::test_infer_multiple_outputs_using_typing` unit test and adding some parameterization on Python 3.6:
```python
@parameterized.expand(
[
("dict", dict),
("Dict", Dict),
("Dict[str, int]", Dict[str, int]),
]
)
def test_infer_multiple_outputs_using_typing(self, _, test_return_annotation):
@task_decorator
def identity_dict(x: int, y: int) -> test_return_annotation:
return {"x": x, "y": y}
assert identity_dict(5, 5).operator.multiple_outputs is True
```
**Results**
![image](https://user-images.githubusercontent.com/48934154/141338408-5d7f2877-6465-4c81-857f-5ca6d1b612ee.png)
However, since Python 3.6 will reach EOL on 2021-12-23, this _may_ not be an aspect that needs to be fixed.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19538 | https://github.com/apache/airflow/pull/19608 | bdb5ae098ce0b8ae06e0e76d8084817e36a562ae | 4198550bba474e7942705a4c6df2ad916fb76561 | "2021-11-11T17:01:43Z" | python | "2022-01-10T16:14:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,506 | ["airflow/providers/salesforce/hooks/salesforce.py", "tests/providers/salesforce/hooks/test_salesforce.py"] | Salesforce connections should not require all extras | ### Apache Airflow Provider(s)
salesforce
### Versions of Apache Airflow Providers
apache-airflow-providers-salesforce==1!3.2.0
### Apache Airflow version
2.1.4
### Operating System
Debian GNU/Linux 10 (buster)
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
Right now the `SalesforceHook` is requiring all of the extras for a Salesforce connection type in Airflow.
I am assuming this issue was never brought up before because users of this hook have been using the Airflow UI to make connections (which presents all extra fields); however, with things such as secrets backends, it should be possible to set up a Salesforce connection URI without having to explicitly provide all of the extras.
The issue is that the hook's `get_conn` passes in every extra value using an invalid way of defaulting to None: it uses `or None` but still references the extras key directly. I tested this on Python versions as low as 2.7 and as high as 3.7, and in both versions, if any one of these extras is not provided, you will get a `KeyError`.
https://github.com/apache/airflow/blob/e9a72a4e95e6d23bae010ad92499cd7b06d50037/airflow/providers/salesforce/hooks/salesforce.py#L137-L149
### What you expected to happen
Any extras value not provided should just default to None (or to `api.DEFAULT_API_VERSION` for the "version" extra). It should be possible to set up a Salesforce connection URI using a secrets backend without having to explicitly provide all of the extras.
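For illustration, a rough sketch of reading the extras defensively so that missing keys simply default (the connection id and the fallback version below are placeholders, not the provider's actual code):
```python
from airflow.hooks.base import BaseHook

# Sketch only: default missing Salesforce extras instead of indexing them directly.
conn = BaseHook.get_connection("salesforce_default")
extras = conn.extra_dejson

def get_extra(name, default=None):
    # Read the prefixed key; fall back to the given default when it is absent.
    return extras.get(f"extra__salesforce__{name}", default)

security_token = get_extra("security_token")
domain = get_extra("domain")              # None when not provided in the URI
version = get_extra("version", "52.0")    # placeholder default API version
```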
### How to reproduce
Set up a secrets backend (such as via environment variable) and pass in the minimum connection values needed for "Password" connection type:
_(this is an example, no real passwords shown)_
`export AIRFLOW_CONN_SALESFORCE_DEFAULT='http://your_username:your_password@https%3A%2F%2Fyour_host.lightning.force.com?extra__salesforce__security_token=your_token'`
It will error with `KeyError: 'extra__salesforce__domain'` and keep resulting in key errors for each extras key until you finally provide all extras, like so:
_(this is an example, no real passwords shown)_
`export AIRFLOW_CONN_SALESFORCE_DEFAULT='http://your_username:your_password@https%3A%2F%2Fyour_host.lightning.force.com?extra__salesforce__security_token=your_token&extra__salesforce__domain=&extra__salesforce__instance=&extra__salesforce__instance_url=&extra__salesforce__organization_id=&extra__salesforce__version=&extra__salesforce__proxies=&extra__salesforce__client_id=&extra__salesforce__consumer_key=&extra__salesforce__private_key_file_path=&extra__salesforce__private_key='`
### Anything else
In addition to this, the `SalesforceHook` should also accept extras without the need for the `extra__salesforce__` prefix, like many other connections do.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19506 | https://github.com/apache/airflow/pull/19530 | 304e92d887ef9ef6c253767acda14bd7a9e558df | a24066bc4c3d8a218bd29f6c8fef80781488dd55 | "2021-11-10T07:55:58Z" | python | "2021-11-12T03:52:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,503 | ["airflow/providers/snowflake/hooks/snowflake.py"] | SnowflakeHook opening connection twice | ### Apache Airflow Provider(s)
snowflake
### Versions of Apache Airflow Providers
apache-airflow-providers-snowflake==2.2.0
### Apache Airflow version
2.2.0
### Operating System
Linux-5.4.141-67.229.amzn2.x86_64-x86_64-with-glibc2.31
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
The `get_connection()` method is called twice in the SnowflakeHook, initializing the connection twice and slowing the hook down. See the following example logs on a single `run` of the SnowflakeHook:
<img width="1383" alt="Screen Shot 2021-11-09 at 9 59 26 PM" src="https://user-images.githubusercontent.com/30101670/141052877-648a5543-cf9a-4c16-82a6-02f6b62b47ba.png">
### What you expected to happen
The code uses the `get_connection` method on [line 271](https://github.com/apache/airflow/blob/main/airflow/providers/snowflake/hooks/snowflake.py#L271) and then calls it again on [line 272](https://github.com/apache/airflow/blob/main/airflow/providers/snowflake/hooks/snowflake.py#L272). The connection should only be fetched once and the resulting object reused.
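For illustration only, a sketch of resolving the connection a single time and reusing the object (not the actual provider code):
```python
from airflow.hooks.base import BaseHook

def build_conn_params(snowflake_conn_id: str = "snowflake_default") -> dict:
    conn = BaseHook.get_connection(snowflake_conn_id)  # single lookup
    extras = conn.extra_dejson
    return {
        "user": conn.login,
        "password": conn.password or "",
        "schema": conn.schema or "",
        "account": extras.get("extra__snowflake__account", extras.get("account", "")),
    }
```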
### How to reproduce
Run the SnowflakeHook
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19503 | https://github.com/apache/airflow/pull/19543 | 37a12e9c278209d7e8ea914012a31a91a6c6ccff | de9900539c9731325e29fd1bbac37c4bc1363bc4 | "2021-11-10T05:04:32Z" | python | "2021-11-11T23:50:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,490 | ["airflow/providers/databricks/hooks/databricks.py", "tests/providers/databricks/hooks/test_databricks.py"] | provider Databricks : add cluster API | ### Description
Add Databricks Cluster API in Databricks Hook
### Use case/motivation
The Databricks provider lacks access to the [Cluster API](https://docs.databricks.com/dev-tools/api/latest/clusters.html), which would allow CRUD operations on clusters as well as cluster management (start/restart/delete/...).
Those APIs are very convenient for controlling clusters (getting status, forcing shutdown, ...) and for optimizing some specific flows (for example, scaling a cluster up in advance when an upcoming need is detected).
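As a rough illustration of the idea, a hypothetical subclass built on top of the hook's generic `_do_api_call` could look like the sketch below (the class and method names are made up; the endpoints are the documented Databricks REST paths):
```python
from airflow.providers.databricks.hooks.databricks import DatabricksHook

CLUSTER_GET_ENDPOINT = ("GET", "api/2.0/clusters/get")
CLUSTER_START_ENDPOINT = ("POST", "api/2.0/clusters/start")

class DatabricksClusterHook(DatabricksHook):
    """Hypothetical helper exposing a couple of Cluster API calls."""

    def get_cluster_state(self, cluster_id: str) -> str:
        # clusters/get returns the cluster description, including its state
        response = self._do_api_call(CLUSTER_GET_ENDPOINT, {"cluster_id": cluster_id})
        return response["state"]

    def start_cluster(self, cluster_id: str) -> None:
        self._do_api_call(CLUSTER_START_ENDPOINT, {"cluster_id": cluster_id})
```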
### Related issues
none
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19490 | https://github.com/apache/airflow/pull/34643 | 6ba2c4485cb8ff2cf3c2e4d8043e4c7fe5008b15 | 946b539f0dbdc13272a44bdb6f756282f1d373e1 | "2021-11-09T13:33:30Z" | python | "2023-10-12T09:57:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,489 | ["airflow/providers/postgres/hooks/postgres.py", "docs/apache-airflow-providers-postgres/connections/postgres.rst", "tests/providers/postgres/hooks/test_postgres.py"] | PostgresSqlHook needs to override DbApiHook.get_uri to pull in extra for client_encoding=utf-8 during create_engine | ### Description
I got the following error:
```
[2021-11-09, 08:25:30 UTC] {base.py:70} INFO - Using connection to: id: rdb_conn_id. Host: rdb, Port: 5432, Schema: configuration, Login: user, Password: ***, extra: {'sslmode': 'allow', 'client_encoding': 'utf8'}
[2021-11-09, 08:25:30 UTC] {taskinstance.py:1703} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1332, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1458, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1514, in _execute_task
result = execute_callable(context=context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/operators/python.py", line 151, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/operators/python.py", line 162, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/dags/repo/dags/run_configuration.py", line 34, in run_job
dagsUtils.run_step_insert_to_temp_table(tenant, job_name, table_name, job_type)
File "/opt/airflow/dags/rev-e3db01f68e7979d71d12ae24008a97065db2144f/dags/utils/dag_util.py", line 106, in run_step_insert_to_temp_table
for df in df_result:
File "/home/airflow/.local/lib/python3.9/site-packages/pandas/io/sql.py", line 1499, in _query_iterator
data = result.fetchmany(chunksize)
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/result.py", line 1316, in fetchmany
self.connection._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1514, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/result.py", line 1311, in fetchmany
l = self.process_rows(self._fetchmany_impl(size))
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/result.py", line 1224, in _fetchmany_impl
return self.cursor.fetchmany(size)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 12: ordinal not in range(128)
```
I tried to set `extra` in the Airflow connection, but it does not work:
![image](https://user-images.githubusercontent.com/37215642/140909372-20805f0c-aa98-44a7-94c2-419aea176f70.png)
### Use case/motivation
The `airflow/providers/mysql/hooks` package supports getting `extra` configs from the Airflow connection (see https://github.com/apache/airflow/pull/6816), but the PostgreSQL hook does not support this yet.
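Until the provider supports this natively, a workaround sketch along these lines could be used (the subclass name is made up, and it only forwards one extra onto the SQLAlchemy URI):
```python
from airflow.providers.postgres.hooks.postgres import PostgresHook

class PostgresHookWithEncoding(PostgresHook):
    """Hypothetical subclass that forwards client_encoding from the connection extras."""

    def get_uri(self) -> str:
        uri = super().get_uri()
        conn = self.get_connection(getattr(self, self.conn_name_attr))
        client_encoding = conn.extra_dejson.get("client_encoding")
        if client_encoding:
            separator = "&" if "?" in uri else "?"
            uri = f"{uri}{separator}client_encoding={client_encoding}"
        return uri
```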
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19489 | https://github.com/apache/airflow/pull/19827 | a192cecf6bb9b22e058b8c0015c351131185282b | c97a2e8ab84991bb08e811b9d5b6d5f95de150b2 | "2021-11-09T10:44:10Z" | python | "2021-11-26T20:29:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,477 | ["airflow/www/views.py"] | 'NoneType' object has no attribute 'refresh_from_task' Error when manually running task instance | ### Discussed in https://github.com/apache/airflow/discussions/19366
<div type='discussions-op-text'>
<sup>Originally posted by **a-pertsev** November 2, 2021</sup>
### Apache Airflow version
2.2.1 (latest released)
### Operating System
Ubuntu 20.04 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.3.0
apache-airflow-providers-apache-cassandra==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.0.0
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-jdbc==2.0.1
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-postgres==2.3.0
apache-airflow-providers-presto==2.0.1
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-sqlite==2.0.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Got an error when manually running a newly created task in an already finished DAG run (in the UI).
### What you expected to happen
"Run" button should work without exceptions.
### How to reproduce
1. Define a dag with some tasks.
2. Create dag run (manually or by schedule)
3. Add new task into dag, deploy code
4. Select new task in UI and press "Run" button
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/19477 | https://github.com/apache/airflow/pull/19478 | 950a390770b04f8990f6cd962bc9c001124f1903 | 51d61a9ec2ee66a7f1b45901a2bb732786341cf4 | "2021-11-08T18:30:20Z" | python | "2021-11-08T19:42:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,461 | ["airflow/jobs/scheduler_job.py", "airflow/models/dag.py", "tests/jobs/test_scheduler_job.py"] | Missing DagRuns when catchup=True | ### Apache Airflow version
2.2.1 (latest released)
### Operating System
PRETTY_NAME="Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-sqlite==2.0.1
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
Backfilling via catchup=True leads to missing DagRuns.
See reproduction steps for full details
### What you expected to happen
_No response_
### How to reproduce
Note, this is an issue which we have experienced in our production environment, with a much more complicated DAG. Below are the reproduction steps using breeze.
1. Set up the ./breeze environment with the below config modifications
2. Create a simple DAG with dummy tasks in it (see below example)
3. Set a `start_date` in the past
4. Set `catchup=True`
5. Unpause the DAG
6. Catch-up starts, and if you view the tree view, you get the false impression that everything has caught up correctly.
7. However, if you access the calendar view, you can see the missing DagRuns.
**Breeze Config**
```
export DB_RESET="true"
export START_AIRFLOW="true"
export INSTALL_AIRFLOW_VERSION="2.2.1"
export USE_AIRFLOW_VERSION="2.2.1"
```
**Dummy DAG**
```
from datetime import datetime
from airflow import DAG
from airflow.operators.dummy import DummyOperator
dag = DAG(
dag_id="temp_test",
schedule_interval="@daily",
catchup=True,
start_date=datetime(2021, 8, 1),
max_active_tasks=10,
max_active_runs=5,
is_paused_upon_creation=True,
)
with dag:
task1 = DummyOperator(task_id="task1")
task2 = DummyOperator(task_id="task2")
task3 = DummyOperator(task_id="task3")
task4 = DummyOperator(task_id="task4")
task5 = DummyOperator(task_id="task5")
task1 >> task2 >> task3 >> task4 >> task5
```
**Results**
<img width="1430" alt="tree_view" src="https://user-images.githubusercontent.com/10559757/140715465-6bc3831c-d71c-4025-bcde-985010ab31f8.png">
<img width="1435" alt="calendar_view" src="https://user-images.githubusercontent.com/10559757/140715467-1a1a5c9a-3eb6-40ff-8720-ebe6db999028.png">
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19461 | https://github.com/apache/airflow/pull/19528 | 8d63bdf0bb7a8d695cc00f10e4ebba37dea96af9 | 2bd4b55c53b593f2747a88f4c018d7e420460d9a | "2021-11-08T09:23:52Z" | python | "2021-11-11T09:44:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,458 | ["airflow/www/views.py"] | DAG Run Views showing information of DAG duration | ### Description
In the Airflow UI, if we go to Browse --> DAG Runs, the DAG Runs view is displayed.
The table contains a lot of insightful values like DAG execution date, start date, end date, DAG config, etc.
Request: add a DAG duration column to the same table.
DAG duration, i.e. DAG end_date timestamp - DAG start_date timestamp.
### Use case/motivation
Analytics: knowing DAG duration across all DAGs, so that I can easily find out which DAGs have the shortest and longest durations.
### Related issues
Checked open issues. There aren't any
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19458 | https://github.com/apache/airflow/pull/19482 | 7640ba4e8ee239d6e2bbf950d53d624b9df93059 | b5c0158b2eb646eb1db5d2c094d3da8f88a08a8b | "2021-11-08T03:57:41Z" | python | "2021-11-29T14:47:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,432 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "docs/spelling_wordlist.txt", "tests/cli/commands/test_dag_command.py"] | Trigger Reserialization on Demand | ### Description
Dag serialization is currently out of the hands of the user. Whenever dag reserialization is required, I run this python script.
```
from airflow.models.serialized_dag import SerializedDagModel
from airflow.settings import Session
session = Session()
session.query(SerializedDagModel).delete()
session.commit()
```
It would be great to have this available from the CLI and/or UI, or even to be able to do it for individual DAGs without having to delete the whole DAG with the trash-can button (a per-DAG variant of the script is sketched below). It would also be worth considering triggering reserialization as part of `airflow db upgrade` for any version upgrades that affect DAG serialization.
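For illustration, the same idea restricted to a single DAG might look like this (the dag id below is a placeholder):
```python
from airflow.models.serialized_dag import SerializedDagModel
from airflow.settings import Session

session = Session()
# Delete only the serialized representation of one DAG so it gets reserialized.
session.query(SerializedDagModel).filter(
    SerializedDagModel.dag_id == "my_dag"  # placeholder dag id
).delete(synchronize_session=False)
session.commit()
```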
### Use case/motivation
If your serialized dags get broken for any reason, like the incompatibilities with the changes made in 2.2 recently, reserializing will fix the issue, but it's fairly difficult to trigger reserializing.
### Related issues
_No response_
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19432 | https://github.com/apache/airflow/pull/19471 | 7d555d779dc83566d814a36946bd886c2e7468b3 | c4e8959d141512226a994baeea74d5c7e643c598 | "2021-11-05T19:25:37Z" | python | "2021-11-29T16:54:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,416 | ["airflow/timetables/interval.py", "tests/timetables/test_interval_timetable.py"] | A dag's schedule interval can no longer be an instance of dateutils.relativedelta | ### Apache Airflow version
2.2.1 (latest released)
### Operating System
debian
### Versions of Apache Airflow Providers
apache-airflow==2.2.1
apache-airflow-providers-amazon==2.3.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.0.0
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-jira==2.0.1
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-postgres==2.3.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sqlite==2.0.1
apache-airflow-providers-ssh==2.2.0
### Deployment
Other Docker-based deployment
### Deployment details
Dask executor, custom-built Docker images, postgres 12.7 backend
### What happened
I upgraded Airflow from 2.0.2 to 2.2.1, and some DAGs I have that used dateutils.relativedelta objects as schedule intervals stopped running
### What you expected to happen
The [code](https://github.com/apache/airflow/blob/2.2.1/airflow/models/dag.py#L101) for the schedule_interval parameter of the DAG constructor indicates that a relativedelta object is allowed, so I expected the DAG to be correctly parsed and scheduled.
### How to reproduce
Create a DAG that has a relativedelta object as its schedule interval, and it will not appear in the UI or be scheduled.
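A minimal example of such a DAG might look like this (dag id and dates are illustrative):
```python
from datetime import datetime
from dateutil.relativedelta import relativedelta
from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="relativedelta_schedule",
    start_date=datetime(2021, 1, 1),
    schedule_interval=relativedelta(months=1),  # triggers the validate() failure below
    catchup=False,
) as dag:
    DummyOperator(task_id="noop")
```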
### Anything else
Here is the code that causes the failure within the PR where it was introduced: [link](https://github.com/apache/airflow/pull/17414/files#diff-ed37fe966e8247e0bfd8aa28bc2698febeec3807df5f5a00545ca80744f8aff6R267)
Here are the logs for the exception, found in the scheduler logs for the file that contains the offending DAG
<details><pre>
ERROR | {dagbag.py:528} - 'relativedelta' object has no attribute 'total_seconds'
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 515, in collect_dags
found_dags = self.process_file(filepath, only_if_updated=only_if_updated, safe_mode=safe_mode)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 298, in process_file
found_dags = self._process_modules(filepath, mods, file_last_changed_on_disk)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 401, in _process_modules
dag.timetable.validate()
File "/usr/local/lib/python3.9/site-packages/airflow/timetables/interval.py", line 274, in validate
if self._delta.total_seconds() <= 0:
AttributeError: 'relativedelta' object has no attribute 'total_seconds'
</pre></details>
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19416 | https://github.com/apache/airflow/pull/19418 | d3d2a38059a09635ebd0dda83167d404b1860c2a | b590cc8a976da9fa7fe8c5850bd16d3dc856c52c | "2021-11-04T22:55:44Z" | python | "2021-11-05T22:48:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,386 | ["airflow/www/static/js/connection_form.js"] | Relabeling in Connection form causes falsely named fields (UI only) | ### Apache Airflow version
2.2.0
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
airflow standalone
### What happened
Relabeling of fields in **Add Connection** never gets reverted, so switching connection types results in the form showing wrong field names.
### What you expected to happen
Changing connection type while adding a connection should show the fields/fieldnames for that connection.
### How to reproduce
1. Install microsoft-azure and docker provider package (or any provider package that has relabeling)
2. Click **Add a new record** in the Connections UI
3. This will default to connection type Azure
4. Change connection type to HTTP or other connection with Login and/or Password
5. Login and Password fields now show as Azure Client ID and Azure Secret
6. Change connection type to Docker
7. Change connection type to HTTP again
8. Host will now show as Registry URL, Login will show as Username, and the Password field still shows Azure Secret
### Anything else
If I am not mistaken, the issue is with the code in [connection_form.js](https://github.com/apache/airflow/blob/main/airflow/www/static/js/connection_form.js#L87)
```js
if (connection.relabeling) {
Object.keys(connection.relabeling).forEach((field) => {
const label = document.querySelector(`label[for='${field}']`);
label.dataset.origText = label.innerText;
label.innerText = connection.relabeling[field];
});
}
```
and there should probably be a reset to the default label texts before it.
Also, when trying to open a provider issue the newest available Airflow version in the version selector is 2.1.4
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19386 | https://github.com/apache/airflow/pull/19411 | 72679bef3abee141dd20f75d6ff977bd3978ecc8 | e99c14e76c28890f1493ed9804154f1184ae85e4 | "2021-11-03T16:04:45Z" | python | "2021-11-04T20:49:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,384 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/apache/livy/operators/livy.py", "tests/providers/apache/livy/operators/test_livy.py"] | Add retries to LivyOperator polling / LivyHook | ### Description
Add an optional retry loop to LivyOperator.poll_for_termination() or LivyHook.get_batch_state() to improve resiliency against temporary errors. The retry counter should reset with successful requests.
### Use case/motivation
1. Using LivyOperator, we run a Spark Streaming job in a cluster behind Knox with LDAP authentication.
2. While the streaming job is running, LivyOperator keeps polling for termination.
3. In our case, the LDAP service might be unavailable for a few of the polling requests per day, resulting in Knox returning an error.
4. LivyOperator marks the task as failed even though the streaming job should still be running, as subsequent polling requests might have revealed.
5. We would like LivyOperator/LivyHook to retry a number of times in order to overcome those brief availability issues (a rough sketch of this idea is shown below).
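A rough sketch of that retry behaviour, assuming the hook's existing `get_batch_state` and `BatchState` enum (the helper name and parameters are illustrative, not a proposed provider API):
```python
import time

from airflow.providers.apache.livy.hooks.livy import BatchState, LivyHook

def poll_with_retries(hook: LivyHook, batch_id: int, interval: int = 10, max_consecutive_errors: int = 3):
    errors = 0
    while True:
        try:
            state = hook.get_batch_state(batch_id)
            errors = 0  # reset the counter on every successful request
        except Exception:
            errors += 1
            if errors > max_consecutive_errors:
                raise
            time.sleep(interval)
            continue
        if state in (BatchState.SUCCESS, BatchState.ERROR, BatchState.DEAD, BatchState.KILLED):
            return state
        time.sleep(interval)
```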
Workarounds we considered:
- increase polling interval to reduce the chance of running into an error. For reference, we are currently using an interval of 10s
- use BaseOperator retries to start a new job, only send notification email for the final failure. But this would start a new job unnecessarily
- activate knox authentication caching to decrease the chance of errors substantially, but it was causing issues not related to Airflow
### Related issues
No related issues were found
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19384 | https://github.com/apache/airflow/pull/21550 | f6e0ed0dcc492636f6d1a3a413d5df2f9758f80d | 5fdd6fa4796bd52b3ce52ef8c3280999f4e2bb1c | "2021-11-03T15:04:56Z" | python | "2022-02-15T21:59:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,382 | ["airflow/models/taskinstance.py"] | Deferrable Operators don't respect `execution_timeout` after being deferred | ### Apache Airflow version
2.2.0
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
```bash
./breeze start-airflow --python 3.7 --backend postgres
```
### What happened
When a task is resumed after being deferred, its `start_time` is not equal to the original `start_time`, but to the timestamp at which the task resumed.
If a task has `execution_timeout` set and runs longer than that, it might not raise a timeout error, because technically a brand new task instance starts after being deferred.
I know it's expected that it'd be a brand new task instance, but the documentation describes the behaviour with `execution_timeout` set differently (see below in "What you expected to happen").
This is especially true if an Operator needs to be deferred multiple times: every time it continues after `defer`, the time starts counting again.
Some task instance details after an example task has completed:
| Attribute | Value |
| ------------- | ------------- |
| execution_date | 2021-11-03, 14:45:29 |
| trigger_timeout | 2021-11-03, 14:46:30 |
| start_date | 2021-11-03, 14:46:32 |
| end_date | 2021-11-03, 14:47:02 |
| execution_timeout | 0:01:00 |
| duration | 30.140004 |
| state | success |
### What you expected to happen
Task failure with Timeout Exception.
[Documentation](https://airflow.apache.org/docs/apache-airflow/2.2.0/concepts/deferring.html#triggering-deferral) says:
- Note that ``execution_timeout`` on Operators is considered over the *total runtime*, not individual executions in-between deferrals - this means that if ``execution_timeout`` is set, an Operator may fail while it's deferred or while it's running after a deferral, even if it's only been resumed for a few seconds.
Also, I see the [following code part](https://github.com/apache/airflow/blob/94a0a0e8ce4d2b54cd6a08301684e299ca3c36cb/airflow/models/taskinstance.py#L1495-L1500) trying to check the timeout value after the task is coming back from the deferral state:
```python
# If we are coming in with a next_method (i.e. from a deferral),
# calculate the timeout from our start_date.
if self.next_method:
timeout_seconds = (
task_copy.execution_timeout - (timezone.utcnow() - self.start_date)
).total_seconds()
```
But the issue is that `self.start_date` isn't equal to the original task's `start_date`
### How to reproduce
DAG:
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.sensors.time_delta import TimeDeltaSensorAsync
with DAG(
dag_id='time_delta_async_bug',
schedule_interval=None,
start_date=datetime(2021, 1, 1),
catchup=False,
) as dag:
time_delta_async_sensor = TimeDeltaSensorAsync(task_id='time_delta_task_id',
delta=timedelta(seconds=60),
execution_timeout=timedelta(seconds=60),
)
```
Since there aren't many async Operators at the moment, I slightly modified `TimeDeltaSensorAsync` in order to simulate task work after the deferral.
Here is the full code for the `TimeDeltaSensorAsync` class I used to reproduce the issue; the only difference is the line with `time.sleep(30)` to simulate post-processing after the trigger has completed.
```python
class TimeDeltaSensorAsync(TimeDeltaSensor):
"""
A drop-in replacement for TimeDeltaSensor that defers itself to avoid
taking up a worker slot while it is waiting.
:param delta: time length to wait after the data interval before succeeding.
:type delta: datetime.timedelta
"""
def execute(self, context):
target_dttm = context['data_interval_end']
target_dttm += self.delta
self.defer(trigger=DateTimeTrigger(moment=target_dttm), method_name="execute_complete")
def execute_complete(self, context, event=None): # pylint: disable=unused-argument
"""Callback for when the trigger fires - returns immediately."""
time.sleep(30) # Simulate processing event after trigger completed
return None
```
### Anything else
I've checked the "I'm willing to submit a PR" box, but I'm not sure where to start; I'd be happy if someone could give me some guidance on which direction to look in.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19382 | https://github.com/apache/airflow/pull/20062 | 95740a87083c703968ce3da45b15113851ef09f7 | 5cc5f434d745452651bca76ba6d2406167b7e2b9 | "2021-11-03T14:20:22Z" | python | "2022-01-05T18:19:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,369 | ["airflow/providers/cncf/kubernetes/utils/pod_launcher.py", "tests/providers/cncf/kubernetes/utils/test_pod_launcher.py"] | KubernetesPodOperator : TypeError: 'NoneType' object is not iterable | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==2.0.2
### Apache Airflow version
2.1.2
### Operating System
GCP Container-Optimized OS
### Deployment
Composer
### Deployment details
_No response_
### What happened
```
[2021-11-02 14:16:27,086] {pod_launcher.py:193} ERROR - Error parsing timestamp. Will continue execution but won't update timestamp
[2021-11-02 14:16:27,086] {pod_launcher.py:149} INFO - rpc error: code = NotFound desc = an error occurred when try to find container "8f8c2f3dce295f70ba5d60175ff847854e05ab288f7efa3ce6d0bd976d0378ea": not found
[2021-11-02 14:16:28,152] {taskinstance.py:1503} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1158, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1333, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1363, in _execute_task
result = task_copy.execute(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 367, in execute
final_state, remote_pod, result = self.create_new_pod_for_operator(labels, launcher)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 521, in create_new_pod_for_operator
final_state, remote_pod, result = launcher.monitor_pod(pod=self.pod, get_logs=self.get_logs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_launcher.py", line 154, in monitor_pod
if not self.base_container_is_running(pod):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_launcher.py", line 215, in base_container_is_running
status = next(iter(filter(lambda s: s.name == 'base', event.status.container_statuses)), None)
TypeError: 'NoneType' object is not iterable
```
in the GKE logs I see
```
airflow-XXX.XXXXX.b14f7e38312c4e83984d1d7a3987655a
"Pod ephemeral local storage usage exceeds the total limit of containers 10Mi. "
```
So the pod failed because I set too low a limit for local storage, but the Airflow operator should not raise an unhandled exception; it should fail normally.
### What you expected to happen
The KubernetesPodOperator should handle this kind of error and fail the task gracefully.
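For illustration, the kind of defensive check that would avoid this crash might look like the sketch below (based only on the traceback above; the parameter is assumed to be the pod's `V1PodStatus`, which is not the launcher's actual signature):
```python
def base_container_is_running(pod_status) -> bool:
    # pod_status is assumed to be a kubernetes V1PodStatus; container_statuses
    # can be None when the pod was rejected (e.g. by a LimitRange) and never started.
    statuses = pod_status.container_statuses or []
    status = next((s for s in statuses if s.name == "base"), None)
    return bool(status and status.state and status.state.running)
```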
### How to reproduce
Launch a KubernetesPodOperator with a very small ephemeral storage limit and run a container that uses more than this limit:
```python
KubernetesPodOperator(
task_id="XXX",
name="XXXXXXXXX",
namespace="default",
resources={'limit_memory': "512M",
'limit_cpu': "250m",
'limit_ephemeral_storage': "10M"},
is_delete_operator_pod=True,
image="any_image_using_more_than_limit_ephemeral_storage")
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19369 | https://github.com/apache/airflow/pull/19713 | e0b1d6d11858c41eb8d9d336a49968e9c28fa4c9 | 0d60d1af41280d3ee70bf9b1582419ada200e5e3 | "2021-11-02T14:28:49Z" | python | "2021-11-23T10:18:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,365 | ["airflow/cli/commands/standalone_command.py"] | 'NoneType' object has no attribute 'is_alive' - while running 'airflow standalone' | ### Apache Airflow version
2.2.1 (latest released)
### Operating System
Ubuntu (16.04)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-sqlite==2.0.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Got an error while running `$ airflow standalone`.
Stacktrace:
webserver | [2021-11-02 16:59:56 +0530] [15195] [INFO] Booting worker with pid: 15195
Traceback (most recent call last):
File "/home/jonathan/programs/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/jonathan/programs/venv/lib/python3.6/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/home/jonathan/programs/venv/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/jonathan/programs/venv/lib/python3.6/site-packages/airflow/cli/commands/standalone_command.py", line 48, in entrypoint
StandaloneCommand().run()
File "/home/jonathan/programs/venv/lib/python3.6/site-packages/airflow/cli/commands/standalone_command.py", line 95, in run
if not self.ready_time and self.is_ready():
File "/home/jonathan/programs/venv/lib/python3.6/site-packages/airflow/cli/commands/standalone_command.py", line 209, in is_ready
return self.port_open(8080) and self.job_running(SchedulerJob) and self.job_running(TriggererJob)
File "/home/jonathan/programs/venv/lib/python3.6/site-packages/airflow/cli/commands/standalone_command.py", line 231, in job_running
return job.most_recent_job().is_alive()
AttributeError: 'NoneType' object has no attribute 'is_alive'
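For illustration, based only on the traceback above, a guard along these lines in `job_running` would avoid the crash (a sketch, not the actual fix):
```python
def job_running(job_type) -> bool:
    # most_recent_job() can return None before the scheduler/triggerer has ever run.
    recent = job_type.most_recent_job()
    return recent is not None and recent.is_alive()
```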
### What you expected to happen
I expect the server to start without any errors or stacktraces.
### How to reproduce
Follow the installation instructions in the docs and then run `$ airflow standalone`.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19365 | https://github.com/apache/airflow/pull/19380 | 6148ddd365939bb5129b342900a576bd855e9fc4 | 5c157047b2cc887130fb46fd1a9e88a354c1fb19 | "2021-11-02T13:05:46Z" | python | "2021-11-03T07:19:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,357 | ["airflow/providers/databricks/hooks/databricks.py", "tests/providers/databricks/hooks/test_databricks.py"] | DatabricksHook method get_run_state returns error | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==1!2.0.2
### Apache Airflow version
2.1.4 (latest released)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
Running local deployment using Astronomer CLI
### What happened
When calling the `get_run_state` method from the `DatabricksHook`, I get the following error:
`TypeError: Object of type RunState is not JSON serializable`
I think this is due to [the method returning a RunState custom class](https://github.com/apache/airflow/blob/main/airflow/providers/databricks/hooks/databricks.py#L275) as opposed to a `str` like the rest of the methods in the databricks hook (i.e. `get_job_id`, `get_run_page_url`, etc.)
### What you expected to happen
When calling the `get_run_state` method, simply return the `result_state` or `state_message` [variables](https://github.com/apache/airflow/blob/main/airflow/providers/databricks/hooks/databricks.py#L287-L288) instead of the `RunState` class.
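For illustration, a caller-side workaround along those lines (the connection defaults and the run id are placeholders):
```python
from airflow.providers.databricks.hooks.databricks import DatabricksHook

state = DatabricksHook().get_run_state(run_id="<run id>")
result_state = state.result_state      # plain string, e.g. "SUCCESS"
state_message = state.state_message    # plain string, safe to push to XCom
```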
### How to reproduce
Create a dag that references a databricks deployment and use this task to see the error:
```python
from airflow.operators.python import PythonOperator
from airflow.providers.databricks.hooks.databricks import DatabricksHook

run_id = "<insert run id from databricks ui here>"

def get_run_state(run_id: str):
    # Uses the default databricks connection; returning the RunState object
    # is what makes XCom fail to JSON-serialize the result.
    return DatabricksHook().get_run_state(run_id=run_id)

python_get_run_state = PythonOperator(
    task_id="python_get_run_state",
    python_callable=get_run_state,
    op_kwargs={"run_id": str(run_id)},
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19357 | https://github.com/apache/airflow/pull/19723 | 00fd3af52879100d8dbca95fd697d38fdd39e60a | 11998848a4b07f255ae8fcd78d6ad549dabea7e6 | "2021-11-01T22:55:28Z" | python | "2021-11-24T19:08:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,343 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Airflow 2.2.1 : airflow scheduler is not able to start: AttributeError: 'NoneType' object has no attribute 'utcoffset' | ### Apache Airflow version
2.2.1 (latest released)
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
```
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-microsoft-mssql==2.0.1
apache-airflow-providers-microsoft-winrm==2.0.1
apache-airflow-providers-openfaas==2.0.0
apache-airflow-providers-oracle==2.0.1
apache-airflow-providers-samba==3.0.0
apache-airflow-providers-sftp==2.1.1
apache-airflow-providers-sqlite==2.0.1
apache-airflow-providers-ssh==2.2.0
```
### Deployment
Virtualenv installation
### Deployment details
Airflow 2.2.1 on a LXD Container "all in one" (web, scheduler, database == postgres)
### What happened
I don't know if it is related to the daylight saving time change we had in Italy (clocks going back from 03:00 to 02:00) on October 30th.
The result is that during that same day the scheduler became compromised.
The output of the command `airflow scheduler` is:
```
root@new-airflow:~/airflow# airflow scheduler
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2021-11-01 02:00:41,181] {scheduler_job.py:596} INFO - Starting the scheduler
[2021-11-01 02:00:41,181] {scheduler_job.py:601} INFO - Processing each file at most -1 times
[2021-11-01 02:00:41,267] {manager.py:163} INFO - Launched DagFileProcessorManager with pid: 12284
[2021-11-01 02:00:41,268] {scheduler_job.py:1115} INFO - Resetting orphaned tasks for active dag runs
[2021-11-01 02:00:41,269] {settings.py:52} INFO - Configured default timezone Timezone('UTC')
[2021-11-01 02:00:41,332] {celery_executor.py:493} INFO - Adopted the following 7 tasks from a dead executor
<TaskInstance: EXEC_SAVE_ORACLE_SOURCE.PSOFA_PSO_PKG_UTILITY_SP_SAVE_ORACLE_SOURCE scheduled__2021-10-30T17:30:00+00:00 [running]> in state STARTED
<TaskInstance: EXEC_MAIL_VENDUTO_LIMASTE-BALLETTA.PSO_SP_DATI_BB_V4 scheduled__2021-10-30T19:00:00+00:00 [running]> in state STARTED
<TaskInstance: EXEC_MAIL_TASSO_CONV.PSO_SP_DATI_BB_V6 scheduled__2021-10-30T20:35:00+00:00 [running]> in state STARTED
<TaskInstance: EXEC_MAIL_VENDUTO_UNICA_AM.PSO_SP_DATI_BB_V6 scheduled__2021-10-30T19:20:00+00:00 [running]> in state STARTED
<TaskInstance: EXEC_BI_ASYNC.bi_pkg_batch_carica_async_2 scheduled__2021-10-30T23:00:00+00:00 [running]> in state STARTED
<TaskInstance: EXEC_MAIL_INGRESSI_UNICA.PSO_SP_INGRESSI_BB_V4 scheduled__2021-10-30T20:15:00+00:00 [running]> in state STARTED
<TaskInstance: API_REFRESH_PSO_ANALISI_CONS_ORDINE_EXCEL.Refresh_Table scheduled__2021-10-31T07:29:00+00:00 [running]> in state STARTED
[2021-11-01 02:00:41,440] {dagrun.py:511} INFO - Marking run <DagRun EXEC_CALCOLO_FILTRO_RR_INCREMENTALE @ 2021-10-30 18:00:00+00:00: scheduled__2021-10-30T18:00:00+00:00, externally triggered: False> successful
[2021-11-01 02:00:41,441] {dagrun.py:556} INFO - DagRun Finished: dag_id=EXEC_CALCOLO_FILTRO_RR_INCREMENTALE, execution_date=2021-10-30 18:00:00+00:00, run_id=scheduled__2021-10-30T18:00:00+00:00, run_start_date=2021-10-31 09:00:00.440704+00:00, run_end_date=2021-11-01 01:00:41.441139+00:00, run_duration=57641.000435, state=success, external_trigger=False, run_type=scheduled, data_interval_start=2021-10-30 18:00:00+00:00, data_interval_end=2021-10-31 09:00:00+00:00, dag_hash=91db1a3fa29d7dba470ee53feddb124b
[2021-11-01 02:00:41,444] {scheduler_job.py:644} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 628, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 709, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 792, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 1044, in _schedule_dag_run
self._update_dag_next_dagruns(dag, dag_model, active_runs)
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 935, in _update_dag_next_dagruns
data_interval = dag.get_next_data_interval(dag_model)
File "/usr/local/lib/python3.8/dist-packages/airflow/models/dag.py", line 629, in get_next_data_interval
return self.infer_automated_data_interval(dag_model.next_dagrun)
File "/usr/local/lib/python3.8/dist-packages/airflow/models/dag.py", line 667, in infer_automated_data_interval
end = cast(CronDataIntervalTimetable, self.timetable)._get_next(start)
File "/usr/local/lib/python3.8/dist-packages/airflow/timetables/interval.py", line 171, in _get_next
naive = make_naive(current, self._timezone)
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/timezone.py", line 143, in make_naive
if is_naive(value):
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/timezone.py", line 50, in is_naive
return value.utcoffset() is None
AttributeError: 'NoneType' object has no attribute 'utcoffset'
[2021-11-01 02:00:42,459] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 12284
[2021-11-01 02:00:42,753] {process_utils.py:212} INFO - Waiting up to 5 seconds for processes to exit...
[2021-11-01 02:00:42,792] {process_utils.py:66} INFO - Process psutil.Process(pid=12342, status='terminated', started='02:00:41') (12342) terminated with exit code None
[2021-11-01 02:00:42,792] {process_utils.py:66} INFO - Process psutil.Process(pid=12284, status='terminated', exitcode=0, started='02:00:40') (12284) terminated with exit code 0
[2021-11-01 02:00:42,792] {process_utils.py:66} INFO - Process psutil.Process(pid=12317, status='terminated', started='02:00:41') (12317) terminated with exit code None
[2021-11-01 02:00:42,792] {scheduler_job.py:655} INFO - Exited execute loop
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/dist-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.8/dist-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/usr/local/lib/python3.8/dist-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/base_job.py", line 245, in run
self._execute()
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 628, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 709, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 792, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 1044, in _schedule_dag_run
self._update_dag_next_dagruns(dag, dag_model, active_runs)
File "/usr/local/lib/python3.8/dist-packages/airflow/jobs/scheduler_job.py", line 935, in _update_dag_next_dagruns
data_interval = dag.get_next_data_interval(dag_model)
File "/usr/local/lib/python3.8/dist-packages/airflow/models/dag.py", line 629, in get_next_data_interval
return self.infer_automated_data_interval(dag_model.next_dagrun)
File "/usr/local/lib/python3.8/dist-packages/airflow/models/dag.py", line 667, in infer_automated_data_interval
end = cast(CronDataIntervalTimetable, self.timetable)._get_next(start)
File "/usr/local/lib/python3.8/dist-packages/airflow/timetables/interval.py", line 171, in _get_next
naive = make_naive(current, self._timezone)
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/timezone.py", line 143, in make_naive
if is_naive(value):
File "/usr/local/lib/python3.8/dist-packages/airflow/utils/timezone.py", line 50, in is_naive
return value.utcoffset() is None
AttributeError: 'NoneType' object has no attribute 'utcoffset'
```
There was no way to get Airflow started!
I restored the backup from the day before in order to have Airflow up and running again.
Now it works, but at startup Airflow launched all the runs it thought had not been executed, causing some problems on the database due to this unusual load.
Is there a way to avoid this behaviour at startup?
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19343 | https://github.com/apache/airflow/pull/19307 | ddcc84833740a73c6b1bdeaeaf80ce4b354529dd | dc4dcaa9ccbec6a1b1ce84d5ee42322ce1fbb081 | "2021-11-01T02:46:39Z" | python | "2021-11-01T19:53:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,342 | ["airflow/www/static/js/ti_log.js"] | Webserver shows wrong datetime (timezone) in log | ### Apache Airflow version
2.2.1 (latest released)
### Operating System
Ubuntu 20.04.2 (docker)
### Versions of Apache Airflow Providers
- apache-airflow-providers-amazon==2.3.0
- apache-airflow-providers-ftp==2.0.1
- apache-airflow-providers-http==2.0.1
- apache-airflow-providers-imap==2.0.1
- apache-airflow-providers-mongo==2.1.0
- apache-airflow-providers-postgres==2.3.0
- apache-airflow-providers-sqlite==2.0.1
### Deployment
Docker-Compose
### Deployment details
Docker image build on Ubuntu 20.04 -> installed apache airflow via pip.
Localtime in image changed to Europe/Moscow.
Log format airflow.cfg option:
log_format = %%(asctime)s %%(filename)s:%%(lineno)d %%(levelname)s - %%(message)s
### What happened
For my purposes it's more useful to run DAGs when it's midnight in my timezone.
So I changed the _default_timezone_ option in airflow.cfg to "Europe/Moscow" and also changed /etc/localtime in my docker image.
It works nicely:
- DAGs with the _@daily_ schedule_interval run at midnight
- Python's datetime.now() gives me my local time by default
- the Airflow webserver shows all times correctly when I change the timezone in the top right corner
... except for one thing.
The Python logging module saves asctime without a timezone (for example "2021-10-31 18:25:42,550").
And when I open a task's log in the web interface, it shifts this time forward by three hours (for my timezone), but it's **already** in my timezone.
It is a little bit confusing :(
### What you expected to happen
I expected to see my timezone in logs :)
I see several solutions for that:
1) any possibility to turn that shift off?
2) setup logging timezone in airflow.cfg?
That problem goes away when I change the system /etc/localtime (in the container) to UTC.
But this is very problematic because it can affect a lot of Python tasks.
### How to reproduce
1) build docker container with different /etc/localtime
> FROM ubuntu:20.04
>
> ARG DEBIAN_FRONTEND=noninteractive
> RUN apt-get update && apt-get install -y apt-utils locales tzdata \
> && locale-gen en_US.UTF-8 \
> && ln -sf /usr/share/zoneinfo/Europe/Moscow /etc/localtime
> ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en AIRFLOW_GPL_UNIDECODE=yes
>
> RUN apt-get install -y \
> python3-pip \
> && python3 -m pip install --upgrade pip setuptools wheel \
> && pip3 install --no-cache-dir \
> apache-airflow-providers-amazon \
> apache-airflow-providers-mongo \
> apache-airflow-providers-postgres \
> apache-airflow==2.2.1 \
> celery \
> ... anything else
2) run webserver / scheduler / celery worker inside
3) open web page -> trigger dag with python operator which prints something via logging
4) open done dag -> task log -> find asctime mark in log
![image](https://user-images.githubusercontent.com/23495999/139592313-06d6f1a5-0c07-4e73-8a77-d8f858d34e59.png)
5) switch timezone in web interface
6) watch how Airflow treats the asctime in the log as UTC, even though it's not
![image](https://user-images.githubusercontent.com/23495999/139592339-8c09563c-a410-4a8e-ab1a-827390de01c1.png)
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19342 | https://github.com/apache/airflow/pull/19401 | 35f3bf8fb447be0ed581cc317897180b541c63ce | c96789b85cf59ece65f055e158f9559bb1d18faa | "2021-10-31T16:08:56Z" | python | "2021-11-05T16:04:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,320 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Task Instance stuck inside KubernetesExecutor | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
![Screen Shot 2021-10-29 at 10 58 48 AM](https://user-images.githubusercontent.com/5952735/139481226-04b4edad-c0cc-4bc9-9dde-e80fffa4e0f5.png)
### What happened
Task stuck inside the KubernetesExecutor task queue when the pod spec is unresolvable.
```
from datetime import datetime
from airflow.models import DAG
from airflow.operators.bash import BashOperator
from kubernetes.client import models as k8s
with DAG(
dag_id='resource',
catchup=False,
schedule_interval='@once',
start_date=datetime(2020, 1, 1),
) as dag:
op = BashOperator(
task_id='task',
bash_command="sleep 30",
dag=dag,
executor_config={
"pod_override": k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base",
resources=k8s.V1ResourceRequirements(
requests={
"cpu": 0.5,
"memory": "500Mi",
},
limits={
"cpu": 0.5,
"memory": "100Mi"
}
)
)
]
)
)
}
)
```
### What you expected to happen
I expect the pod to be scheduled or at least fail, and not get stuck inside the KubernetesExecutor's task queue / BaseExecutor's running set. Since the pod never spawned, it cannot run and, more importantly, cannot finish, which is when the pod state changes to anything other than RUNNING and the task instance key is removed from the running set.
https://github.com/apache/airflow/blob/2.2.0/airflow/executors/kubernetes_executor.py#L572-L588
On the other hand, I did expect the pod to get stuck in the task queue.
The executor was written to allow pod specs that fail due to resource contention to keep retrying until resources become available. Such a request will resolve eventually, since resources will eventually be freed without human intervention.
In this case, a pod spec whose resource request is greater than its resource limit will never resolve, not because the pod spec is invalid, but because it fails to comply with the Kubernetes cluster's limit range. The original creator of the KubernetesExecutor may not have considered that during the conception of the executor.
### How to reproduce
1. Create a DAG with a task that has a pod spec that is not resolvable.
2. See the Airflow task instance stuck in the queued state because the task is never run, so it never moves to the running state.
### Anything else
<details>
<summary>Full scheduler logs</summary>
```
[2021-10-29 00:15:02,152] {scheduler_job.py:442} INFO - Sending TaskInstanceKey(dag_id='resource', task_id='task', run_id='scheduled__2020-01-01T00:00:00+00:00', try_number=1) to executor with priority 1 and queue celery
[2021-10-29 00:15:02,152] {base_executor.py:82} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'resource', 'task', 'scheduled__2020-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/2.0/resource.py']
[2021-10-29 00:15:02,154] {base_executor.py:150} DEBUG - 0 running task instances
[2021-10-29 00:15:02,154] {base_executor.py:151} DEBUG - 1 in queue
[2021-10-29 00:15:02,154] {base_executor.py:152} DEBUG - 32 open slots
[2021-10-29 00:15:02,155] {kubernetes_executor.py:534} INFO - Add task TaskInstanceKey(dag_id='resource', task_id='task', run_id='scheduled__2020-01-01T00:00:00+00:00', try_number=1) with command ['airflow', 'tasks', 'run', 'resource', 'task', 'scheduled__2020-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/2.0/resource.py'] with executor_config {'pod_override': {'api_version': None,
'kind': None,
'metadata': None,
'spec': {'active_deadline_seconds': None,
'affinity': None,
'automount_service_account_token': None,
'containers': [{'args': None,
'command': None,
'env': None,
'env_from': None,
'image': None,
'image_pull_policy': None,
'lifecycle': None,
'liveness_probe': None,
'name': 'base',
'ports': None,
'readiness_probe': None,
'resources': {'limits': None,
'requests': {'cpu': '0.5',
'memory': '500Mi'}},
'security_context': None,
'stdin': None,
'stdin_once': None,
'termination_message_path': None,
'termination_message_policy': None,
'tty': None,
'volume_devices': None,
'volume_mounts': None,
'working_dir': None}],
'dns_config': None,
'dns_policy': None,
'enable_service_links': None,
'host_aliases': None,
'host_ipc': None,
'host_network': None,
'host_pid': None,
'hostname': None,
'image_pull_secrets': None,
'init_containers': None,
'node_name': None,
'node_selector': None,
'preemption_policy': None,
'priority': None,
'priority_class_name': None,
'readiness_gates': None,
'restart_policy': None,
'runtime_class_name': None,
'scheduler_name': None,
'security_context': None,
'service_account': None,
'service_account_name': None,
'share_process_namespace': None,
'subdomain': None,
'termination_grace_period_seconds': None,
'tolerations': None,
'volumes': None},
'status': None}}
[2021-10-29 00:15:02,157] {base_executor.py:161} DEBUG - Calling the <class 'airflow.executors.kubernetes_executor.KubernetesExecutor'> sync method
[2021-10-29 00:15:02,157] {kubernetes_executor.py:557} DEBUG - self.running: {TaskInstanceKey(dag_id='resource', task_id='task', run_id='scheduled__2020-01-01T00:00:00+00:00', try_number=1)}
[2021-10-29 00:15:02,158] {kubernetes_executor.py:361} DEBUG - Syncing KubernetesExecutor
[2021-10-29 00:15:02,158] {kubernetes_executor.py:286} DEBUG - KubeJobWatcher alive, continuing
[2021-10-29 00:15:02,160] {kubernetes_executor.py:300} INFO - Kubernetes job is (TaskInstanceKey(dag_id='resource', task_id='task', run_id='scheduled__2020-01-01T00:00:00+00:00', try_number=1), ['airflow', 'tasks', 'run', 'resource', 'task', 'scheduled__2020-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/2.0/resource.py'], {'api_version': None, 'kind': None, 'metadata': None, 'spec': {'active_deadline_seconds': None, 'affinity': None, 'automount_service_account_token': None, 'containers': [{'args': None, 'command': None, 'env': None, 'env_from': None, 'image': None, 'image_pull_policy': None, 'lifecycle': None, 'liveness_probe': None, 'name': 'base', 'ports': None, 'readiness_probe': None, 'resources': {'limits': None, 'requests': {'cpu': '0.5', 'memory': '500Mi'}}, 'security_context': None, 'stdin': None, 'stdin_once': None, 'termination_message_path': None, 'termination_message_policy': None, 'tty': None, 'volume_devices': None, 'volume_mounts': None, 'working_dir': None}], 'dns_config': None, 'dns_policy': None, 'enable_service_links': None, 'host_aliases': None, 'host_ipc': None, 'host_network': None, 'host_pid': None, 'hostname': None, 'image_pull_secrets': None, 'init_containers': None, 'node_name': None, 'node_selector': None, 'preemption_policy': None, 'priority': None, 'priority_class_name': None, 'readiness_gates': None, 'restart_policy': None, 'runtime_class_name': None, 'scheduler_name': None, 'security_context': None, 'service_account': None, 'service_account_name': None, 'share_process_namespace': None, 'subdomain': None, 'termination_grace_period_seconds': None, 'tolerations': None, 'volumes': None}, 'status': None}, None)
[2021-10-29 00:15:02,173] {kubernetes_executor.py:330} DEBUG - Kubernetes running for command ['airflow', 'tasks', 'run', 'resource', 'task', 'scheduled__2020-01-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/2.0/resource.py']
[2021-10-29 00:15:02,173] {kubernetes_executor.py:331} DEBUG - Kubernetes launching image registry.gcp0001.us-east4.astronomer.io/quasarian-spectroscope-0644/airflow:deploy-2
[2021-10-29 00:15:02,174] {kubernetes_executor.py:260} DEBUG - Pod Creation Request:
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {
"checksum/airflow-secrets": "70012f356e10dceeb7c58fb0ce05014197b5fa1c1a5d8955ce1a1a4cc7347fa8bc67336041bded0f5bb700f6b5a17c794d7dc1ec00b72e6e98998f1f45efd286",
"dag_id": "resource",
"task_id": "task",
"try_number": "1",
"run_id": "scheduled__2020-01-01T00:00:00+00:00"
},
"labels": {
"tier": "airflow",
"component": "worker",
"release": "quasarian-spectroscope-0644",
"platform": "astronomer",
"workspace": "cki7jmbr53180161pjtfor7aoj1",
"airflow-worker": "8",
"dag_id": "resource",
"task_id": "task",
"try_number": "1",
"airflow_version": "2.2.0-astro.2",
"kubernetes_executor": "True",
"run_id": "scheduled__2020-01-01T0000000000-cc3b7db2b",
"kubernetes-pod-operator": "False"
},
"name": "resourcetask.4c8ffd0238954d27b7891491a8d6da4f",
"namespace": "astronomer-quasarian-spectroscope-0644"
},
"spec": {
"affinity": {
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "astronomer.io/multi-tenant",
"operator": "In",
"values": [
"true"
]
}
]
}
]
}
}
},
"containers": [
{
"args": [
"airflow",
"tasks",
"run",
"resource",
"task",
"scheduled__2020-01-01T00:00:00+00:00",
"--local",
"--subdir",
"DAGS_FOLDER/2.0/resource.py"
],
"command": [
"tini",
"--",
"/entrypoint"
],
"env": [
{
"name": "AIRFLOW__CORE__EXECUTOR",
"value": "LocalExecutor"
},
{
"name": "AIRFLOW__CORE__FERNET_KEY",
"valueFrom": {
"secretKeyRef": {
"key": "fernet-key",
"name": "quasarian-spectroscope-0644-fernet-key"
}
}
},
{
"name": "AIRFLOW__CORE__SQL_ALCHEMY_CONN",
"valueFrom": {
"secretKeyRef": {
"key": "connection",
"name": "quasarian-spectroscope-0644-airflow-metadata"
}
}
},
{
"name": "AIRFLOW_CONN_AIRFLOW_DB",
"valueFrom": {
"secretKeyRef": {
"key": "connection",
"name": "quasarian-spectroscope-0644-airflow-metadata"
}
}
},
{
"name": "AIRFLOW__WEBSERVER__SECRET_KEY",
"valueFrom": {
"secretKeyRef": {
"key": "webserver-secret-key",
"name": "quasarian-spectroscope-0644-webserver-secret-key"
}
}
},
{
"name": "AIRFLOW__ELASTICSEARCH__HOST",
"valueFrom": {
"secretKeyRef": {
"key": "connection",
"name": "quasarian-spectroscope-0644-elasticsearch"
}
}
},
{
"name": "AIRFLOW__ELASTICSEARCH__ELASTICSEARCH_HOST",
"valueFrom": {
"secretKeyRef": {
"key": "connection",
"name": "quasarian-spectroscope-0644-elasticsearch"
}
}
},
{
"name": "AIRFLOW_IS_K8S_EXECUTOR_POD",
"value": "True"
}
],
"envFrom": [
{
"secretRef": {
"name": "quasarian-spectroscope-0644-env"
}
}
],
"image": "registry.gcp0001.us-east4.astronomer.io/quasarian-spectroscope-0644/airflow:deploy-2",
"imagePullPolicy": "IfNotPresent",
"name": "base",
"resources": {
"requests": {
"cpu": "0.5",
"memory": "500Mi"
}
},
"volumeMounts": [
{
"mountPath": "/usr/local/airflow/logs",
"name": "logs"
},
{
"mountPath": "/usr/local/airflow/airflow.cfg",
"name": "config",
"readOnly": true,
"subPath": "airflow.cfg"
},
{
"mountPath": "/usr/local/airflow/config/airflow_local_settings.py",
"name": "config",
"readOnly": true,
"subPath": "airflow_local_settings.py"
}
]
}
],
"imagePullSecrets": [
{
"name": "quasarian-spectroscope-0644-registry"
}
],
"restartPolicy": "Never",
"securityContext": {
"fsGroup": 50000,
"runAsUser": 50000
},
"serviceAccountName": "quasarian-spectroscope-0644-airflow-worker",
"volumes": [
{
"emptyDir": {},
"name": "logs"
},
{
"configMap": {
"name": "quasarian-spectroscope-0644-airflow-config"
},
"name": "config"
}
]
}
}
[2021-10-29 00:15:02,215] {rest.py:228} DEBUG - response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod \"resourcetask.4c8ffd0238954d27b7891491a8d6da4f\" is invalid: [spec.containers[0].resources.requests: Invalid value: \"500m\": must be less than or equal to cpu limit, spec.containers[0].resources.requests: Invalid value: \"500Mi\": must be less than or equal to memory limit]","reason":"Invalid","details":{"name":"resourcetask.4c8ffd0238954d27b7891491a8d6da4f","kind":"Pod","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"500m\": must be less than or equal to cpu limit","field":"spec.containers[0].resources.requests"},{"reason":"FieldValueInvalid","message":"Invalid value: \"500Mi\": must be less than or equal to memory limit","field":"spec.containers[0].resources.requests"}]},"code":422}
[2021-10-29 00:15:02,215] {kubernetes_executor.py:267} ERROR - Exception when attempting to create Namespaced Pod: {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {
"checksum/airflow-secrets": "70012f356e10dceeb7c58fb0ce05014197b5fa1c1a5d8955ce1a1a4cc7347fa8bc67336041bded0f5bb700f6b5a17c794d7dc1ec00b72e6e98998f1f45efd286",
"dag_id": "resource",
"task_id": "task",
"try_number": "1",
"run_id": "scheduled__2020-01-01T00:00:00+00:00"
},
"labels": {
"tier": "airflow",
"component": "worker",
"release": "quasarian-spectroscope-0644",
"platform": "astronomer",
"workspace": "cki7jmbr53180161pjtfor7aoj1",
"airflow-worker": "8",
"dag_id": "resource",
"task_id": "task",
"try_number": "1",
"airflow_version": "2.2.0-astro.2",
"kubernetes_executor": "True",
"run_id": "scheduled__2020-01-01T0000000000-cc3b7db2b",
"kubernetes-pod-operator": "False"
},
"name": "resourcetask.4c8ffd0238954d27b7891491a8d6da4f",
"namespace": "astronomer-quasarian-spectroscope-0644"
},
"spec": {
"affinity": {
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "astronomer.io/multi-tenant",
"operator": "In",
"values": [
"true"
]
}
]
}
]
}
}
},
"containers": [
{
"args": [
"airflow",
"tasks",
"run",
"resource",
"task",
"scheduled__2020-01-01T00:00:00+00:00",
"--local",
"--subdir",
"DAGS_FOLDER/2.0/resource.py"
],
"command": [
"tini",
"--",
"/entrypoint"
],
"env": [
{
"name": "AIRFLOW__CORE__EXECUTOR",
"value": "LocalExecutor"
},
{
"name": "AIRFLOW__CORE__FERNET_KEY",
"valueFrom": {
"secretKeyRef": {
"key": "fernet-key",
"name": "quasarian-spectroscope-0644-fernet-key"
}
}
},
{
"name": "AIRFLOW__CORE__SQL_ALCHEMY_CONN",
"valueFrom": {
"secretKeyRef": {
"key": "connection",
"name": "quasarian-spectroscope-0644-airflow-metadata"
}
}
},
{
"name": "AIRFLOW_CONN_AIRFLOW_DB",
"valueFrom": {
"secretKeyRef": {
"key": "connection",
"name": "quasarian-spectroscope-0644-airflow-metadata"
}
}
},
{
"name": "AIRFLOW__WEBSERVER__SECRET_KEY",
"valueFrom": {
"secretKeyRef": {
"key": "webserver-secret-key",
"name": "quasarian-spectroscope-0644-webserver-secret-key"
}
}
},
{
"name": "AIRFLOW__ELASTICSEARCH__HOST",
"valueFrom": {
"secretKeyRef": {
"key": "connection",
"name": "quasarian-spectroscope-0644-elasticsearch"
}
}
},
{
"name": "AIRFLOW__ELASTICSEARCH__ELASTICSEARCH_HOST",
"valueFrom": {
"secretKeyRef": {
"key": "connection",
"name": "quasarian-spectroscope-0644-elasticsearch"
}
}
},
{
"name": "AIRFLOW_IS_K8S_EXECUTOR_POD",
"value": "True"
}
],
"envFrom": [
{
"secretRef": {
"name": "quasarian-spectroscope-0644-env"
}
}
],
"image": "registry.gcp0001.us-east4.astronomer.io/quasarian-spectroscope-0644/airflow:deploy-2",
"imagePullPolicy": "IfNotPresent",
"name": "base",
"resources": {
"requests": {
"cpu": "0.5",
"memory": "500Mi"
}
},
"volumeMounts": [
{
"mountPath": "/usr/local/airflow/logs",
"name": "logs"
},
{
"mountPath": "/usr/local/airflow/airflow.cfg",
"name": "config",
"readOnly": true,
"subPath": "airflow.cfg"
},
{
"mountPath": "/usr/local/airflow/config/airflow_local_settings.py",
"name": "config",
"readOnly": true,
"subPath": "airflow_local_settings.py"
}
]
}
],
"imagePullSecrets": [
{
"name": "quasarian-spectroscope-0644-registry"
}
],
"restartPolicy": "Never",
"securityContext": {
"fsGroup": 50000,
"runAsUser": 50000
},
"serviceAccountName": "quasarian-spectroscope-0644-airflow-worker",
"volumes": [
{
"emptyDir": {},
"name": "logs"
},
{
"configMap": {
"name": "quasarian-spectroscope-0644-airflow-config"
},
"name": "config"
}
]
}
}
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 262, in run_pod_async
resp = self.kube_client.create_namespaced_pod(
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 6174, in create_namespaced_pod
(data) = self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) # noqa: E501
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 6251, in create_namespaced_pod_with_http_info
return self.api_client.call_api(
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 340, in call_api
return self.__call_api(resource_path, method,
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 172, in __call_api
response_data = self.request(
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 382, in request
return self.rest_client.POST(url,
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/rest.py", line 272, in POST
return self.request("POST", url,
File "/usr/local/lib/python3.9/site-packages/kubernetes/client/rest.py", line 231, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (422)
Reason: Unprocessable Entity
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'be88cd36-49fb-4398-92c4-b50bb80084c8', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 29 Oct 2021 00:15:02 GMT', 'Content-Length': '799'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod \"resourcetask.4c8ffd0238954d27b7891491a8d6da4f\" is invalid: [spec.containers[0].resources.requests: Invalid value: \"500m\": must be less than or equal to cpu limit, spec.containers[0].resources.requests: Invalid value: \"500Mi\": must be less than or equal to memory limit]","reason":"Invalid","details":{"name":"resourcetask.4c8ffd0238954d27b7891491a8d6da4f","kind":"Pod","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"500m\": must be less than or equal to cpu limit","field":"spec.containers[0].resources.requests"},{"reason":"FieldValueInvalid","message":"Invalid value: \"500Mi\": must be less than or equal to memory limit","field":"spec.containers[0].resources.requests"}]},"code":422}
[2021-10-29 00:15:02,308] {kubernetes_executor.py:609} WARNING - ApiException when attempting to run task, re-queueing. Message: Pod "resourcetask.4c8ffd0238954d27b7891491a8d6da4f" is invalid: [spec.containers[0].resources.requests: Invalid value: "500m": must be less than or equal to cpu limit, spec.containers[0].resources.requests: Invalid value: "500Mi": must be less than or equal to memory limit]
[2021-10-29 00:15:02,308] {kubernetes_executor.py:621} DEBUG - Next timed event is in 53.668401
```
</details>
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19320 | https://github.com/apache/airflow/pull/19359 | 490a382ed6ce088bee650751b6409c510e19845a | eb12bb2f0418120be31cbcd8e8722528af9eb344 | "2021-10-29T18:10:44Z" | python | "2021-11-04T00:22:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,313 | ["airflow/providers/jdbc/operators/jdbc.py", "tests/providers/jdbc/operators/test_jdbc.py"] | JdbcOperator should pass handler parameter to JdbcHook.run | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-jdbc==1!2.0.1
### Deployment
Astronomer
### Deployment details
Astro CLI Version: 0.25.4, Git Commit: e95011fbf800fda9fdef1dc1d5149d62bc017aed
### What happened
The execute method calls the [hook run method](https://github.com/apache/airflow/blob/20847fdbf8ecd3be394d24d47ce151c26d018ea1/airflow/providers/jdbc/operators/jdbc.py#L70) without passing any optional `handler` parameter to the DbApiHook parent class of JdbcHook.
### What you expected to happen
Without the handler, the results of the DbApiHook, and the JdbcOperator, [will always be an empty list](https://github.com/apache/airflow/blob/20847fdbf8ecd3be394d24d47ce151c26d018ea1/airflow/hooks/dbapi.py#L206). In the case of the JdbcOperator, the underlying `JayDeBeApi` connection uses a handler of the form `lambda x: x.fetchall()` to return results.
### How to reproduce
1. Using the astro cli
2. Download [postgresql-42.3.0.jar](https://jdbc.postgresql.org/download/postgresql-42.3.0.jar) into the working directory
3. Download [zulu11.52.13-ca-jdk11.0.13-linux_x64.tar.gz](https://cdn.azul.com/zulu/bin/zulu11.52.13-ca-jdk11.0.13-linux_x64.tar.gz) to the working directory
4. Copy the following to the Dockerfile
```
FROM quay.io/astronomer/ap-airflow:2.2.0-buster-onbuild
COPY postgresql-42.3.0.jar /usr/local/airflow/.
USER root
RUN cd /opt && mkdir java
COPY zulu11.52.13-ca-jdk11.0.13-linux_x64.tar.gz /opt/java
RUN cd /opt/java && pwd && ls && tar xfvz ./zulu11.52.13-ca-jdk11.0.13-linux_x64.tar.gz
ENV JAVA_HOME /opt/java/zulu11.52.13-ca-jdk11.0.13-linux_x64
RUN export JAVA_HOME
```
5. Copy the following into the `airflow_settings.yaml` file
```yml
airflow:
connections:
- conn_id: my_jdbc_connection
conn_type: jdbc
conn_host: "jdbc:postgresql://postgres:5432/"
conn_schema:
conn_login: postgres
conn_password: postgres
conn_port:
conn_extra: '{"extra__jdbc__drv_clsname":"org.postgresql.Driver", "extra__jdbc__drv_path":"/usr/local/airflow/postgresql-42.3.0.jar"}'
```
6. Copy the following DAG and run it. No data will be passed to XCOM
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.providers.jdbc.operators.jdbc import JdbcOperator
with DAG(
dag_id='example_jdbc_operator',
schedule_interval=None,
start_date=datetime(2021, 1, 1),
dagrun_timeout=timedelta(minutes=60),
tags=['example'],
catchup=False,
) as dag:
run_this_last = DummyOperator(task_id='run_this_last')
query_dags_data = JdbcOperator(
task_id='query_dags_data',
sql='SELECT dag_id, is_active FROM dag',
jdbc_conn_id='my_jdbc_connection',
autocommit=True,
)
query_dags_data >> run_this_last
```
7. Copy the following DAG and run it. All DAGs and their active status will be put onto XCOM
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.providers.jdbc.operators.jdbc import JdbcOperator
from airflow.providers.jdbc.hooks.jdbc import JdbcHook
class JdbcHandlerOperator(JdbcOperator):
def execute(self, context) -> None:
self.log.info('Executing: %s', self.sql)
hook = JdbcHook(jdbc_conn_id=self.jdbc_conn_id)
return hook.run(
self.sql,
self.autocommit,
parameters=self.parameters,
# Defined by how JayDeBeApi operates
handler=lambda x: x.fetchall()
)
with DAG(
dag_id='example_jdbc_operator',
schedule_interval=None,
start_date=datetime(2021, 1, 1),
dagrun_timeout=timedelta(minutes=60),
tags=['example'],
catchup=False,
) as dag:
run_this_last = DummyOperator(task_id='run_this_last')
query_dags_data = JdbcHandlerOperator(
task_id='query_dags_data',
sql='SELECT dag_id, is_active FROM dag',
jdbc_conn_id='my_jdbc_connection',
autocommit=True,
)
query_dags_data >> run_this_last
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19313 | https://github.com/apache/airflow/pull/25412 | 3dfa44566c948cb2db016e89f84d6fe37bd6d824 | 1708da9233c13c3821d76e56dbe0e383ff67b0fd | "2021-10-29T14:18:45Z" | python | "2022-08-07T09:18:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,304 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | "Not a valid timetable" when returning None from next_dagrun_info in a custom timetable | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Mac
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
I am getting an exception when returning None from `next_dagrun_info` in a custom timetable. The timetable protocol says that when None is returned no DagRun should be scheduled, but right now the scheduler throws an exception instead.
Exception:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 623, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 704, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 787, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1039, in _schedule_dag_run
self._update_dag_next_dagruns(dag, dag_model, active_runs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 930, in _update_dag_next_dagruns
data_interval = dag.get_next_data_interval(dag_model)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 629, in get_next_data_interval
return self.infer_automated_data_interval(dag_model.next_dagrun)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 671, in infer_automated_data_interval
raise ValueError(f"Not a valid timetable: {self.timetable!r}")
ValueError: Not a valid timetable: <my_timetables.workday_timetable.WorkdayTimetable object at 0x7f42b1f02430>
```
### What you expected to happen
DagRun to not happen.
### How to reproduce
Create a custom timetable and return None in next_dagrun_info after a few DagRuns are created by that timetable
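For illustration, a minimal hypothetical timetable that reproduces this (class and plugin names are made up; the `Timetable` API used is the one documented for 2.2):
```python
from datetime import timedelta

from pendulum import DateTime

from airflow.plugins_manager import AirflowPlugin
from airflow.timetables.base import DagRunInfo, DataInterval, TimeRestriction, Timetable


class StopAfterFirstRunTimetable(Timetable):
    """Schedules exactly one run, then returns None (which should mean "no more runs")."""

    def infer_manual_data_interval(self, *, run_after: DateTime) -> DataInterval:
        return DataInterval(start=run_after - timedelta(days=1), end=run_after)

    def next_dagrun_info(self, *, last_automated_data_interval, restriction: TimeRestriction):
        if last_automated_data_interval is not None:
            # After the first automated run, returning None triggers the traceback above.
            return None
        if restriction.earliest is None:
            return None
        start = restriction.earliest
        return DagRunInfo.interval(start=start, end=start + timedelta(days=1))


class StopAfterFirstRunPlugin(AirflowPlugin):
    name = "stop_after_first_run_timetable"
    timetables = [StopAfterFirstRunTimetable]
```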
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19304 | https://github.com/apache/airflow/pull/19307 | ddcc84833740a73c6b1bdeaeaf80ce4b354529dd | dc4dcaa9ccbec6a1b1ce84d5ee42322ce1fbb081 | "2021-10-29T09:48:42Z" | python | "2021-11-01T19:53:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,293 | ["BREEZE.rst", "breeze", "breeze-complete"] | Disable MSSQL Flag is not supported in Breeze | ### Apache Airflow version
main (development)
### Operating System
macOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
I am building a custom docker image for production using Breeze, and want to disable MSSQL client installation in a similar manner to disabling MySQL client installation (`--disable-mysql-client-installation`). The flag `--disable-mssql-client-installation` exists, but the error `breeze: unrecognized option '--disable-mssql-client-installation'` is thrown when it is applied.
### What you expected to happen
`--disable-mssql-client-installation` should not throw an error when applied.
### How to reproduce
- Clone this repository
```
./breeze build-image \
--production-image \
--disable-mssql-client-installation \
--dry-run-docker
```
### Anything else
I have been able to fix this by editing `breeze` and `breeze-complete`, bringing `--disable-mssql-client-installation` to parity with `--disable-mysql-client-installation`.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19293 | https://github.com/apache/airflow/pull/19295 | 96dd70348ad7e31cfeae6d21af70671b41551fe9 | c6aed34feb9321757eeaaaf2f3c055d51786a4f9 | "2021-10-28T20:55:12Z" | python | "2021-11-04T07:26:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,272 | ["airflow/providers/amazon/aws/hooks/glue_catalog.py", "tests/providers/amazon/aws/hooks/test_glue_catalog.py"] | Create get_partition, create_partition methods in AWS Glue Catalog Hook | ### Description
The current AWS Glue Catalog Hook does not expose all of the available Glue catalog API methods.
If possible, we could add `get_partition` and `create_partition` methods to the existing Glue Catalog Hook.
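For illustration, a rough sketch of what these could look like, using the boto3 Glue client that the hook already wraps (the hook method signatures here are assumptions, not a final API):
```python
from airflow.providers.amazon.aws.hooks.glue_catalog import AwsGlueCatalogHook


class GlueCatalogPartitionHook(AwsGlueCatalogHook):
    """Hypothetical extension adding single-partition helpers."""

    def get_partition(self, database_name: str, table_name: str, partition_values: list) -> dict:
        # boto3: glue.get_partition returns {"Partition": {...}}
        response = self.get_conn().get_partition(
            DatabaseName=database_name, TableName=table_name, PartitionValues=partition_values
        )
        return response["Partition"]

    def create_partition(self, database_name: str, table_name: str, partition_input: dict) -> dict:
        # boto3: glue.create_partition, e.g.
        # partition_input={"Values": ["2021-10-28"], "StorageDescriptor": {...}}
        return self.get_conn().create_partition(
            DatabaseName=database_name, TableName=table_name, PartitionInput=partition_input
        )
```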
### Use case/motivation
I work at Amazon.com. Our team is using Airflow to orchestrate data builds for our services.
In the process of these data builds, we need to create new Glue partitions for the generated data and also retrieve a particular Glue partition. The existing Glue Catalog Hook does not contain all the available methods.
### Related issues
I was not able to find any related existing issues or feature requests.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19272 | https://github.com/apache/airflow/pull/23857 | 8f3a9b8542346c35712cba373baaafb518503562 | 94f2ce9342d995f1d2eb00e6a9444e57c90e4963 | "2021-10-28T03:18:42Z" | python | "2022-05-30T19:26:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,269 | ["airflow/providers/facebook/ads/hooks/ads.py", "airflow/providers/google/cloud/transfers/facebook_ads_to_gcs.py", "tests/providers/facebook/ads/hooks/test_ads.py", "tests/providers/google/cloud/transfers/test_facebook_ads_to_gcs.py"] | Add support for multiple AdAccount when getting Bulk Facebook Report from Facebook Ads Hook | ### Description
Currently, the Facebook Ads Hook only supports one `account_id` per hook (`facebook_conn_id`). It would be great if we could pass multiple `account_id` values from the Facebook connection (`facebook_conn_id`). When multiple `account_id` values are provided in the connection, we could support getting insights from one `facebook_conn_id` for all of them rather than creating a Facebook connection for every `account_id`.
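For illustration, one possible shape of the connection extra, where the existing `account_id` field would also accept a list (field names as I understand the current Facebook connection extras, values are placeholders, and list support is the proposal, not existing behaviour):
```json
{
  "app_id": "123456789",
  "app_secret": "my-app-secret",
  "access_token": "my-access-token",
  "account_id": ["act_1111111111111", "act_2222222222222"]
}
```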
### Use case/motivation
We are using FacebookAdsReportToGcsOperator to export our Facebook Ads data, since our infrastructure is built on Google Cloud Platform products and services. This feature would save us from creating and maintaining a connection per account id, and would let us export multiple CSV files from one connection (one task/operator in a DAG) with a single operator. In addition, this feature should not prevent adding multiple Facebook connections, one per account id, to be used as they are today.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19269 | https://github.com/apache/airflow/pull/19377 | 704ec82404dea0001e67a74596d82e138ef0b7ca | ed8b63ba2460f47744f4dcf40019592816bb89b5 | "2021-10-28T01:33:54Z" | python | "2021-12-08T09:13:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,260 | ["airflow/cli/commands/triggerer_command.py"] | Airflow standalone command does not exit gracefully | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
macOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
run `airflow standalone`
enter `ctrl+c`
hangs here:
```
webserver | 127.0.0.1 - - [27/Oct/2021:09:30:57 -0700] "GET /static/pin_32.png HTTP/1.1" 304 0 "http://localhost:8080/home" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36"
^Cstandalone | Shutting down components
```
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19260 | https://github.com/apache/airflow/pull/23274 | 6cf0176f2a676008a6fbe5b950ab2e3231fd1f76 | 6bdbed6c43df3c5473b168a75c50e0139cc13e88 | "2021-10-27T16:34:33Z" | python | "2022-04-27T16:18:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,252 | ["airflow/www/views.py"] | Task modal links are broken in the dag gantt view | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
N/A
### Deployment
Other Docker-based deployment
### Deployment details
CeleryExecutor / ECS / Postgres
### What happened
![image](https://user-images.githubusercontent.com/160865/139075540-25fca98f-a858-49ce-8f3f-1b9a145a6853.png)
Clicking on logs / instance details on the following dialog causes an exception:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.9/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.9/site-packages/airflow/www/auth.py", line 51, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/www/decorators.py", line 63, in wrapper
log.execution_date = pendulum.parse(execution_date_value, strict=False)
File "/usr/local/lib/python3.9/site-packages/pendulum/parser.py", line 29, in parse
return _parse(text, **options)
File "/usr/local/lib/python3.9/site-packages/pendulum/parser.py", line 45, in _parse
parsed = base_parse(text, **options)
File "/usr/local/lib/python3.9/site-packages/pendulum/parsing/__init__.py", line 74, in parse
return _normalize(_parse(text, **_options), **_options)
File "/usr/local/lib/python3.9/site-packages/pendulum/parsing/__init__.py", line 120, in _parse
return _parse_common(text, **options)
File "/usr/local/lib/python3.9/site-packages/pendulum/parsing/__init__.py", line 177, in _parse_common
return date(year, month, day)
ValueError: year 0 is out of range
```
This is because the execution_date in the query param of the url is empty i.e:
`http://localhost:50008/log?dag_id=test_logging&task_id=check_exception_to_sentry&execution_date=`
### What you expected to happen
The logs to load / task instance detail page to load
### How to reproduce
See above
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19252 | https://github.com/apache/airflow/pull/19258 | e0aa36ead4bb703abf5702bb1c9b105d60c15b28 | aa6c951988123edc84212d98b5a2abad9bd669f9 | "2021-10-27T13:32:24Z" | python | "2021-10-29T06:29:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,241 | ["airflow/macros/__init__.py", "tests/macros/test_macros.py"] | next_ds changed to proxy and it cannot be used in ds_add macro function | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
Tried to use this code:
`some_variable='{{macros.ds_format(macros.ds_add(next_ds, '
'(ti.start_date - ti.execution_date).days), '
'"%Y-%m-%d", "%Y-%m-%d 21:00:00")}}')`
but got this error:
`strptime() argument 1 must be str, not Proxy`
because the `next_ds` variable was changed to a proxy object.
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19241 | https://github.com/apache/airflow/pull/19592 | 0da54f1dbe65f55316d238308155103f820192a8 | fca2b19a5e0c081ab323479e76551d66ab478d07 | "2021-10-27T06:34:57Z" | python | "2021-11-24T23:12:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,223 | ["airflow/providers/mongo/sensors/mongo.py", "tests/providers/mongo/sensors/test_mongo.py"] | Add mongo_db to MongoSensor Operator | ### Description
[MongoSensor](https://airflow.apache.org/docs/apache-airflow-providers-mongo/2.1.0/_api/airflow/providers/mongo/sensors/mongo/index.html) does not have a DB in the Python API. It has to use the schema from Mongo Connection.
### Use case/motivation
In Mongo, the connection schema is usually the auth DB and not the data DB, so the user does not have the option to switch databases at the DAG level.
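As a stopgap, something along these lines could work today: an untested sketch of a sensor subclass that passes a database name through to the hook (attribute names are assumed from the current provider code):
```python
from airflow.providers.mongo.hooks.mongo import MongoHook
from airflow.providers.mongo.sensors.mongo import MongoSensor


class MongoSensorWithDb(MongoSensor):
    """Hypothetical workaround: poke a collection in an explicit database."""

    def __init__(self, *, mongo_db: str, **kwargs) -> None:
        super().__init__(**kwargs)
        self.mongo_db = mongo_db

    def poke(self, context) -> bool:
        hook = MongoHook(self.mongo_conn_id)
        return hook.find(self.collection, self.query, mongo_db=self.mongo_db, find_one=True) is not None
```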
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19223 | https://github.com/apache/airflow/pull/19276 | c955078b22ad797a48baf690693454c9b8ba516d | fd569e714403176770b26cf595632812bd384bc0 | "2021-10-26T11:19:08Z" | python | "2021-10-28T09:32:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,222 | ["airflow/models/dag.py", "tests/jobs/test_local_task_job.py"] | none_failed_min_one_success trigger rule not working with BranchPythonOperator in certain cases. | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
CentOS Linux 7 (Core)
### Versions of Apache Airflow Providers
apache-airflow-providers-ftp 2.0.1
apache-airflow-providers-http 2.0.1
apache-airflow-providers-imap 2.0.1
apache-airflow-providers-sqlite 2.0.1
### Deployment
Other
### Deployment details
centos 7.3
postgres - 12.2
### What happened
### DAG 1
In this DAG, I am expecting task_6 to run even when one of task_4 or task_5 gets skipped. But as soon as task_5 skips, task_6 also gets skipped. Ideally, task_6 should have waited for task_4 to finish, and only then decided whether to run or skip.
```
import datetime as dt
import time
from airflow import DAG
from airflow.operators.python_operator import BranchPythonOperator
from airflow.operators.dummy_operator import DummyOperator
dag = DAG(
dag_id="test_non_failed_min_one_success",
schedule_interval="@once",
start_date=dt.datetime(2019, 2, 28),
)
def sleep(seconds, return_val=None):
time.sleep(seconds)
return return_val
op1 = DummyOperator(task_id="task_1", dag=dag)
op2 = BranchPythonOperator(
task_id="task_2", python_callable=sleep, op_args=[30, ["task_4"]], dag=dag
)
op3 = BranchPythonOperator(task_id="task_3", python_callable=sleep, op_args=[10, []], dag=dag)
op4 = DummyOperator(task_id="task_4", dag=dag)
op5 = DummyOperator(task_id="task_5", dag=dag)
op6 = DummyOperator(task_id="task_6", dag=dag, trigger_rule="none_failed_min_one_success")
op1 >> [op2, op3]
op2 >> op4
op3 >> op5
[op4, op5] >> op6
```
![image](https://user-images.githubusercontent.com/20776426/138857168-b4cd7fb3-80a4-4989-bc32-981db0938587.png)
![image](https://user-images.githubusercontent.com/20776426/138857227-2fcb8aad-9933-49ef-9e4c-3f9d67d6e644.png)
### DAG 2
This is just a modification of DAG 1 where I have created two more dummy tasks between task_5 and task_6. Now, I get the desired behaviour, i.e. task_6 waits for both task_4 and dummy_2 to finish before taking the decision of whether to run or skip.
```
import datetime as dt
import time
from airflow import DAG
from airflow.operators.python_operator import BranchPythonOperator
from airflow.operators.dummy_operator import DummyOperator
dag = DAG(
dag_id="test_non_failed_min_one_success",
schedule_interval="@once",
start_date=dt.datetime(2019, 2, 28),
)
def sleep(seconds, return_val=None):
time.sleep(seconds)
return return_val
op1 = DummyOperator(task_id="task_1", dag=dag)
op2 = BranchPythonOperator(
task_id="task_2", python_callable=sleep, op_args=[30, ["task_4"]], dag=dag
)
op3 = BranchPythonOperator(task_id="task_3", python_callable=sleep, op_args=[10, []], dag=dag)
op4 = DummyOperator(task_id="task_4", dag=dag)
op5 = DummyOperator(task_id="task_5", dag=dag)
dummy1 = DummyOperator(task_id="dummy1", dag=dag)
dummy2 = DummyOperator(task_id="dummy2", dag=dag)
op6 = DummyOperator(task_id="task_6", dag=dag, trigger_rule="none_failed_min_one_success")
op1 >> [op2, op3]
op2 >> op4
op3 >> op5
op5 >> dummy1 >> dummy2
[op4, dummy2] >> op6
```
![image](https://user-images.githubusercontent.com/20776426/138857664-5d3d03c3-3fd7-4831-9cf5-b8a19ffb8b42.png)
![image](https://user-images.githubusercontent.com/20776426/138856704-17654f90-f867-4289-80cf-dc5756a619b2.png)
### What you expected to happen
I expected task_6 in DAG 1 to wait for both the parent tasks to finish their runs and then run. This is because I have set trigger rule = "none_failed_min_one_success" in task_6. But, this trigger rule is not being respected in DAG 1.
### How to reproduce
The above code can be used to reproduce the issue.
### Anything else
This works fine in version 2.0.2 with trigger rule = "none_failed_or_skipped".
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19222 | https://github.com/apache/airflow/pull/23181 | 2bb1cd2fec4e2069cb4bbb42d1a880e905d9468e | b4c88f8e44e61a92408ec2cb0a5490eeaf2f0dba | "2021-10-26T10:31:46Z" | python | "2022-04-26T10:53:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,207 | ["airflow/providers/amazon/aws/hooks/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py"] | AWSGlueJobOperator using 'AllocatedCapacity' variable even after specifying the "WorkerType" and "NumberOfWorkers" causing issues with the AWS Glue version 2.0 and 3.0 | ### Apache Airflow version
2.0.2
### Operating System
Linux (AWS MWAA)
### Versions of Apache Airflow Providers
Version 2.0.2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
While running an Airflow task that uses the AWSGlueJobOperator to run a Glue script, these were the job args given:
```
"GlueVersion": "3.0",
"WorkerType": "G.2X",
"NumberOfWorkers": 60,
```
At this point, this is the error encountered:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1138, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1311, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1341, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/glue.py", line 121, in execute
glue_job_run = glue_job.initialize_job(self.script_args)
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 108, in initialize_job
job_name = self.get_or_create_glue_job()
File "/usr/local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 186, in get_or_create_glue_job
**self.create_job_kwargs,
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.InvalidInputException: An error occurred (InvalidInputException) when calling the CreateJob operation: Please do not set Allocated Capacity if using Worker Type and Number of Workers.
```
This issue is because AWSGlueJobHook (when called by AWSGlueJobOperator) assigns num_of_dpus (defaulted to 6 by the init method) to the AllocatedCapacity variable as shown in the screenshot below (taken from the AWSGlueJobHook class)
![image](https://user-images.githubusercontent.com/36181425/138749154-d0c715c6-5f8c-426d-ae56-0aae684b04f3.png)
The links to the source code are:
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/_modules/airflow/providers/amazon/aws/hooks/glue.html#AwsGlueJobHook.initialize_job
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/_modules/airflow/providers/amazon/aws/operators/glue.html#AwsGlueJobOperator.template_fields
So, there is no way for the user to proceed forward by specifying the 'WorkerType' and 'NumberOfWorkers' and not encountering the error above.
This is because AWS Glue API does not allow to use "AllocatedCapacity" or "MaxCapacity" parameters when the 'WorkerType' and 'NumberOfWorkers' are being assigned. Here is the link to the AWS documentation for the same: https://docs.aws.amazon.com/en_us/glue/latest/dg/aws-glue-api-jobs-job.html
### What you expected to happen
The expected outcome is that Airflow runs the Glue job by taking "WorkerType" and "NumberOfWorkers" as the parameter for Glue version 3.0.
### How to reproduce
This issue can be reproduced by the following steps:
1. Set the job args dict to include the following keys and values.
` "GlueVersion": "3.0",
"WorkerType": "G.2X",
"NumberOfWorkers": 60,`
2. Create a dag with one step using AwsGlueJobOperator and assign the job_args dict to the `create_job_kwargs` parameter.
3. Run the DAG and this issue will be encountered.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19207 | https://github.com/apache/airflow/pull/19787 | 374574b8d0ef795855f8d2bb212ba6d653e62727 | 6e15e3a65ec15d9b52abceed36da9f8cccee72d9 | "2021-10-25T18:34:35Z" | python | "2021-12-06T01:38:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,206 | ["airflow/www/app.py"] | Webserver uses /tmp and does not respect user-configured tempdir path | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Red Hat Enterprise Linux Server 7.6 (Maipo)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
For this use case, I am testing with a non-root user on a machine where the user does not have write access to /tmp.
The `TMPDIR` environment variable is set to a different directory.
### What happened
When running the webserver, errors like this are shown and they slow down the webserver.
```
[2021-10-25 13:46:51,164] {filesystemcache.py:224} ERROR - set key '\x1b[01m__wz_cache_count\x1b[22m' -> [Errno 1] Operation not permitted: '/tmp/tmpbw3h3p93.__wz_cache' -> '/tmp/2029240f6d1128be89ddc32729463129'
```
### What you expected to happen
Airflow should respect the `TMPDIR` environment variable and use a different temp dir
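For reference, Python's standard `tempfile` module already resolves the right base directory, so something along these lines (a sketch, not the actual webserver code) would honour `TMPDIR`:
```python
import tempfile

# tempfile honours TMPDIR/TEMP/TMP, unlike a hard-coded "/tmp" path.
cache_dir = tempfile.mkdtemp(prefix="airflow-webserver-cache-")
print(cache_dir)  # e.g. $TMPDIR/airflow-webserver-cache-xxxxxxxx when TMPDIR is set
```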
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19206 | https://github.com/apache/airflow/pull/19208 | 726a1517ec368e0f5906368350d6fa96836943ae | 77c92687e613c5648303acd7cebfb89fa364fc94 | "2021-10-25T17:49:10Z" | python | "2021-10-26T16:11:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,204 | ["UPDATING.md"] | TaskInstance.execution_date usage in query creates incorrect SQL syntax | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Red Hat Enterprise Linux Server 7.6 (Maipo)
### Versions of Apache Airflow Providers
apache-airflow-providers-postgres==2.3.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
A `psycopg2.errors.SyntaxError` is raised when using a custom "external task sensor" with the following code:
```python
TI = TaskInstance
DR = DagRun
if self.external_task_id:
last_instance = session.query(TI).filter(
TI.dag_id == self.external_dag_id,
TI.task_id == self.external_task_id,
TI.execution_date.in_(dttm_filter)
).order_by(TI.execution_date.desc()).first()
else:
last_instance = session.query(DR).filter(
DR.dag_id == self.dag_id,
DR.execution_date.in_(dttm_filter)
).order_by(DR.execution_date.desc()).first()
return last_instance.state
```
This code was modified from https://github.com/apache/airflow/blob/2.2.0/airflow/sensors/external_task.py#L231 and worked on Airflow 1.10.7 - 2.1.4.
Error details:
```
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "DESC"
LINE 7: ..._id = task_instance.run_id AND dag_run.execution_date DESC)
^
[SQL: SELECT task_instance.try_number AS task_instance_try_number, task_instance.task_id AS task_instance_task_id, task_instance.dag_id AS task_instance_dag_id, task_instance.run_id AS task_instance_run_id, task_instance.start_date AS task_instance_start_date, task_instance.end_date AS task_instance_end_date, task_instance.duration AS task_instance_duration, task_instance.state AS task_instance_state, task_instance.max_tries AS task_instance_max_tries, task_instance.hostname AS task_instance_hostname, task_instance.unixname AS task_instance_unixname, task_instance.job_id AS task_instance_job_id, task_instance.pool AS task_instance_pool, task_instance.pool_slots AS task_instance_pool_slots, task_instance.queue AS task_instance_queue, task_instance.priority_weight AS task_instance_priority_weight, task_instance.operator AS task_instance_operator, task_instance.queued_dttm AS task_instance_queued_dttm, task_instance.queued_by_job_id AS task_instance_queued_by_job_id, task_instance.pid AS task_instance_pid, task_instance.executor_config AS task_instance_executor_config, task_instance.external_executor_id AS task_instance_external_executor_id, task_instance.trigger_id AS task_instance_trigger_id, task_instance.trigger_timeout AS task_instance_trigger_timeout, task_instance.next_method AS task_instance_next_method, task_instance.next_kwargs AS task_instance_next_kwargs, dag_run_1.state AS dag_run_1_state, dag_run_1.id AS dag_run_1_id, dag_run_1.dag_id AS dag_run_1_dag_id, dag_run_1.queued_at AS dag_run_1_queued_at, dag_run_1.execution_date AS dag_run_1_execution_date, dag_run_1.start_date AS dag_run_1_start_date, dag_run_1.end_date AS dag_run_1_end_date, dag_run_1.run_id AS dag_run_1_run_id, dag_run_1.creating_job_id AS dag_run_1_creating_job_id, dag_run_1.external_trigger AS dag_run_1_external_trigger, dag_run_1.run_type AS dag_run_1_run_type, dag_run_1.conf AS dag_run_1_conf, dag_run_1.data_interval_start AS dag_run_1_data_interval_start, dag_run_1.data_interval_end AS dag_run_1_data_interval_end, dag_run_1.last_scheduling_decision AS dag_run_1_last_scheduling_decision, dag_run_1.dag_hash AS dag_run_1_dag_hash
FROM task_instance JOIN dag_run AS dag_run_1 ON dag_run_1.dag_id = task_instance.dag_id AND dag_run_1.run_id = task_instance.run_id
WHERE task_instance.dag_id = %(dag_id_1)s AND task_instance.task_id = %(task_id_1)s AND (EXISTS (SELECT 1
FROM dag_run
WHERE dag_run.dag_id = task_instance.dag_id AND dag_run.run_id = task_instance.run_id AND dag_run.execution_date IN (%(execution_date_1)s))) ORDER BY EXISTS (SELECT 1
FROM dag_run
WHERE dag_run.dag_id = task_instance.dag_id AND dag_run.run_id = task_instance.run_id AND dag_run.execution_date DESC)
LIMIT %(param_1)s]
```
### What you expected to happen
Either the query should work (best), or `TI.execution_date` should be more strictly checked and documented to only be usable for some actions.
Although a deprecation warning is raised when accessing `TI.execution_date`, the ExternalTaskSensor uses it and it is part of the TaskInstance model.
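For what it's worth, an untested variant of the snippet from "What happened" that avoids ordering on the association proxy by joining `DagRun` explicitly (same variable names as above):
```python
last_instance = (
    session.query(TI)
    .join(DR, (DR.dag_id == TI.dag_id) & (DR.run_id == TI.run_id))
    .filter(
        TI.dag_id == self.external_dag_id,
        TI.task_id == self.external_task_id,
        DR.execution_date.in_(dttm_filter),
    )
    .order_by(DR.execution_date.desc())
    .first()
)
```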
### How to reproduce
Reproduces consistently.
See the code in "What happened"; I can provide the full sensor code if needed.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19204 | https://github.com/apache/airflow/pull/19593 | 48f228cf9ef7602df9bea6ce20d663ac0c4393e1 | 1ee65bb8ae9f98233208ebb7918cf9aa1e01823e | "2021-10-25T17:27:56Z" | python | "2021-11-15T21:41:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,161 | ["airflow/cli/cli_parser.py"] | airflow-triggerer service healthcheck broken | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Windows 10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Via docker-compose
### What happened
In using the [docker-compose.yml](https://airflow.apache.org/docs/apache-airflow/stable/docker-compose.yaml) example file, the service for `airflow-triggerer` shows up in an unhealthy state. It's the only service that shows up this way. I *have* modified this file to add certificates and fix other health checks so I'm not sure if it's something I did.
```
8ef17f8a28d7 greatexpectations_data_quality_airflow-triggerer "/usr/bin/dumb-init …" 14 minutes ago Up 14 minutes (unhealthy)
```
The healthcheck portion of the `docker-compose.yml` for this service shows:
```
airflow-triggerer:
<<: *airflow-common
command: triggerer
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
```
When I shell into the container and run this command I get:
```
airflow@8ef17f8a28d7:/opt/airflow$ airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"
WARNING:root:/opt/airflow/logs/scheduler/latest already exists as a dir/file. Skip creating symlink.
usage: airflow jobs check [-h] [--allow-multiple] [--hostname HOSTNAME] [--job-type {BackfillJob,LocalTaskJob,SchedulerJob}] [--limit LIMIT]
Checks if job(s) are still alive
optional arguments:
-h, --help show this help message and exit
--allow-multiple If passed, this command will be successful even if multiple matching alive jobs are found.
--hostname HOSTNAME The hostname of job(s) that will be checked.
--job-type {BackfillJob,LocalTaskJob,SchedulerJob}
The type of job(s) that will be checked.
--limit LIMIT The number of recent jobs that will be checked. To disable limit, set 0.
examples:
To check if the local scheduler is still working properly, run:
$ airflow jobs check --job-type SchedulerJob --hostname "$(hostname)"
To check if any scheduler is running when you are using high availability, run:
$ airflow jobs check --job-type SchedulerJob --allow-multiple --limit 100
```
From the description, it appears the `job-type` of `TriggererJob` is not a valid parameter to this call.
### What you expected to happen
The service should show up as "healthy".
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19161 | https://github.com/apache/airflow/pull/19179 | 10e2a88bdc9668931cebe46deb178ab2315d6e52 | d3ac01052bad07f6ec341ab714faabed913169ce | "2021-10-22T14:17:04Z" | python | "2021-10-22T23:32:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,154 | ["airflow/api_connexion/openapi/v1.yaml"] | Add Release date for when an endpoint/field was added in the REST API | ### Description
This will help users know the Airflow version in which a field or an endpoint was added.
We could do this by checking the changelogs of previous Airflow versions and adding a release date to new fields/endpoints corresponding to that version.
Take a look at the dag_run_id below
![openapi](https://user-images.githubusercontent.com/4122866/138428156-e2690c7b-1c17-453c-9086-e6f6a13ac529.PNG)
### Use case/motivation
We recently added the feature here: https://github.com/apache/airflow/pull/19105/files#diff-191056f40fba6bf5886956aa281e0e0d2bb4ddaa380beb012d922a25f5c65750R2305
If you decide to do this, take the opportunity to update the above to 2.3.0.
### Related issues
Here is an issue a user created that could have been avoided: https://github.com/apache/airflow/issues/19101
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19154 | https://github.com/apache/airflow/pull/19203 | df465497ad59b0a2f7d3fd0478ea446e612568bb | 8dfc3cab4bf68477675c901e0678ca590684cfba | "2021-10-22T09:15:47Z" | python | "2021-10-27T17:51:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,143 | ["airflow/models/taskinstance.py"] | TypeError: unsupported operand type(s) for +=: 'NoneType' and 'datetime.timedelta' | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
I've upgraded to Airflow 2.2.0 and added a new operator to the existing DAG:
```
delay_sensor = TimeDeltaSensorAsync(task_id="wait", delta=timedelta(hours=24))
```
I've started getting an error:
```
[2021-10-21, 20:01:17 UTC] {taskinstance.py:1686} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1324, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1443, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1499, in _execute_task
result = execute_callable(context=context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/sensors/time_delta.py", line 54, in execute
target_dttm += self.delta
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'datetime.timedelta'
[2021-10-21, 20:01:17 UTC] {taskinstance.py:1270} INFO - Marking task as UP_FOR_RETRY. dag_id=integration-ups-mpos-2215, task_id=wait, execution_date=20211020T070000, start_date=20211021T200117, end_date=20211021T200117
[2021-10-21, 20:01:17 UTC] {standard_task_runner.py:88} ERROR - Failed to execute job 1458 for task wait
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 292, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1324, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1443, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1499, in _execute_task
result = execute_callable(context=context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/sensors/time_delta.py", line 54, in execute
target_dttm += self.delta
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'datetime.timedelta'
```
It looks like this is related to two new fields that were introduced in the "dag_run" table:
* data_interval_start
* data_interval_end
These fields are populated correctly for new DAG runs, but for runs of pre-existing DAGs the data is not backfilled.
![image](https://user-images.githubusercontent.com/131281/138353823-aaede2dc-47fd-4caf-a519-ff758ead8943.png)
### What you expected to happen
The `TimeDeltaSensorAsync` sensor should work correctly even in older DAGs.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19143 | https://github.com/apache/airflow/pull/19148 | 6deebec04c71373f5f99a14a3477fc4d6dc9bcdc | 1159133040b3513bcc88921823fa001e9773276d | "2021-10-21T20:44:19Z" | python | "2021-10-22T06:50:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,138 | ["airflow/models/baseoperator.py"] | BaseOperator type hints for retry_delay and max_retry_delay should reveal float option | ### Describe the issue with documentation
`BaseOperator` type hints for `retry_delay` and `max_retry_delay` show `timedelta` only; however, the params also accept `float` values (in seconds).
Also, the type hint for the `dag` param is missing.
More precise type hints and param descriptions in the docs would make the code behavior easier to understand.
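For illustration, the hints could look roughly like this (a sketch of the annotations only; the defaults shown are assumptions, not copied from the source):
```python
from datetime import timedelta
from typing import Optional, Union

# Sketch of the more precise annotations this report asks for:
retry_delay: Union[timedelta, float] = timedelta(seconds=300)
max_retry_delay: Optional[Union[timedelta, float]] = None
dag: Optional["DAG"] = None  # forward reference to airflow.models.DAG
```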
### How to solve the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19138 | https://github.com/apache/airflow/pull/19142 | aa6c951988123edc84212d98b5a2abad9bd669f9 | 73b0ea18edb2bf8df79f11c7a7c746b2dc510861 | "2021-10-21T19:06:25Z" | python | "2021-10-29T07:33:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,135 | ["airflow/decorators/__init__.pyi", "airflow/providers/cncf/kubernetes/decorators/__init__.py", "airflow/providers/cncf/kubernetes/decorators/kubernetes.py", "airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/provider.yaml", "airflow/providers/cncf/kubernetes/python_kubernetes_script.jinja2", "airflow/providers/cncf/kubernetes/python_kubernetes_script.py", "tests/providers/cncf/kubernetes/decorators/__init__.py", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py", "tests/system/providers/cncf/kubernetes/example_kubernetes_decorator.py"] | PythonKubernetesOperator and kubernetes taskflow decorator | ### Description
After the implementation of the Docker taskflow decorator in 2.2, running a simple function on Kubernetes should be quite simple to accomplish.
One major difference I see is how to get the code into the pod.
My preferred way here would be to add an init container which takes care of this, much like how we use a sidecar container to extract XCom.
I would also prefer to add more defaults than the KubernetesPodOperator requires, to keep the call as simple and straightforward as possible. We could for example default the namespace to the namespace Airflow is running in if it is a Kubernetes deployment. Also, we could generate a reasonable pod name from the DAG ID and task ID.
### Use case/motivation
Being able to run a task as simply as:
```python
@task.kubernetes(image='my_image:1.1.0')
def sum(a, b):
from my_package_in_version_1 import complex_sum
return complex_sum(a,b)
```
would be awesome!
It would cleanly separate environments and resources between tasks just as the Docker or Virtualenv Operators do - and unlinke the `@task.python` decorator. We could specify the resources and environment of each task freely by specifying a `resources` Argument.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19135 | https://github.com/apache/airflow/pull/25663 | a7cc4678177f25ce2899da8d96813fee05871bbb | 0eb0b543a9751f3d458beb2f03d4c6ff22fcd1c7 | "2021-10-21T16:08:46Z" | python | "2022-08-22T17:54:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,103 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py", "tests/api_connexion/schemas/test_task_instance_schema.py"] | REST API: taskInstances and taskInstances/list not returning dag_run_id | ### Description
Hi,
In order to avoid extra calls to the REST API to figure out the dag_run_id linked to a task instance, it would be great to have that information in the response of the following methods of the REST API:
- `/api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances`
- `/api/v1/dags/~/dagRuns/~/taskInstances/list`
The current response returns a list of task instances without the `dag_run_id`:
```json
{
"task_instances": [
{
"task_id": "string",
"dag_id": "string",
"execution_date": "string",
"start_date": "string",
"end_date": "string",
"duration": 0,
"state": "success",
"try_number": 0,
"max_tries": 0,
"hostname": "string",
"unixname": "string",
"pool": "string",
"pool_slots": 0,
"queue": "string",
"priority_weight": 0,
"operator": "string",
"queued_when": "string",
"pid": 0,
"executor_config": "string",
"sla_miss": {
"task_id": "string",
"dag_id": "string",
"execution_date": "string",
"email_sent": true,
"timestamp": "string",
"description": "string",
"notification_sent": true
}
}
],
"total_entries": 0
}
```
Our proposal is to add the dag_run_id in the response:
```diff
{
"task_instances": [
{
"task_id": "string",
"dag_id": "string",
"execution_date": "string",
+ "dag_run_id": "string",
"start_date": "string",
"end_date": "string",
"duration": 0,
"state": "success",
"try_number": 0,
"max_tries": 0,
"hostname": "string",
"unixname": "string",
"pool": "string",
"pool_slots": 0,
"queue": "string",
"priority_weight": 0,
"operator": "string",
"queued_when": "string",
"pid": 0,
"executor_config": "string",
"sla_miss": {
"task_id": "string",
"dag_id": "string",
"execution_date": "string",
"email_sent": true,
"timestamp": "string",
"description": "string",
"notification_sent": true
}
}
],
"total_entries": 0
}
```
Thanks!
### Use case/motivation
Having the dag_run_id when we get a list of task instances.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19103 | https://github.com/apache/airflow/pull/19105 | d75cf4d60ddbff5b88bfe348cb83f9d173187744 | cc3b062a2bdca16a7b239e73c4dc9e2a3a43c4f0 | "2021-10-20T11:32:02Z" | python | "2021-10-20T18:56:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,098 | ["airflow/www/static/js/dag.js", "airflow/www/static/js/dags.js", "airflow/www/static/js/datetime_utils.js"] | Showing approximate Time until next dag run in Airflow UI | ### Description
It would be really helpful if we could add a dynamic message/tooltip which shows the time remaining until the next DAG run in the UI.
### Use case/motivation
Although we have next_run available in the UI, the user has to look at the schedule and work out the time difference between the schedule and the current time. It would be really convenient to have that information available at their fingertips.
### Related issues
None
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19098 | https://github.com/apache/airflow/pull/20273 | c4d2e16197c5f49493c142bfd9b754ea3c816f48 | e148bf6b99b9b62415a7dd9fbfa594e0f5759390 | "2021-10-20T06:11:16Z" | python | "2021-12-16T17:17:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,080 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "airflow/utils/dot_renderer.py", "tests/cli/commands/test_dag_command.py", "tests/utils/test_dot_renderer.py"] | Display DAGs dependencies in CLI | ### Description
Recently, we added [a new DAG dependencies view](https://github.com/apache/airflow/pull/13199) to the webserver. It would be helpful if a similar diagram could also be displayed/generated using the CLI. For now, only [one DAG can be displayed](http://airflow.apache.org/docs/apache-airflow/stable/usage-cli.html#exporting-dag-structure-as-an-image) using the CLI.
![image](https://user-images.githubusercontent.com/12058428/137945580-6a33f919-2648-4980-bd2c-b40cfcacc9fe.png)
If anyone is interested, I will be happy to help with the review.
### Use case/motivation
* Keep parity between the CLI and the webserver.
* Enable the writing of scripts that use these diagrams, e.g. for attaching in the documentation.
### Related issues
https://github.com/apache/airflow/pull/13199/files
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19080 | https://github.com/apache/airflow/pull/19985 | 728e94a47e0048829ce67096235d34019be9fac7 | 498f98a8fb3e53c9323faeba1ae2bf4083c28e81 | "2021-10-19T15:48:30Z" | python | "2021-12-05T22:11:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,078 | ["airflow/www/static/js/graph.js", "airflow/www/views.py"] | TaskGroup tooltip missing actual tooltip and default_args issue | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Just used `airflow standalone`
### What happened
First:
TaskGroup does not show the tooltip that is defined for that group on hover
![2021-10-19_17h00_18](https://user-images.githubusercontent.com/35238779/137937706-376922b8-2f79-47ac-9789-6007d135f855.png)
Second:
Providing an operator with `task_group=` instead of using the TaskGroup as a context manager ignores that TaskGroup's default_args. Using the code I provided below, the task "end" in section_3 ends up with the wrong owner.
### What you expected to happen
First:
If a TaskGroup is used and the tooltip is defined like so:
```python
with TaskGroup("section_1", tooltip="Tasks for section_1") as section_1:
```
the information provided with `tooltip="..."` should show on hover, as shown in the official documentation at https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html#taskgroups
Second:
Defining a TaskGroup with default_args like
```python
section_3 = TaskGroup("section_3", tooltip="Tasks for section_3", default_args={"owner": "bug"})
```
should result in those default_args being used by tasks defined like so:
```python
end = DummyOperator(task_id="end", task_group=section_3)
```
### How to reproduce
I have tested this with the below code
```python
from datetime import datetime
from airflow.models.dag import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup
with DAG(
dag_id="task_group_test",
start_date=datetime(2021, 1, 1),
catchup=False,
tags=["example"],
default_args={"owner": "test"},
) as dag:
start = DummyOperator(task_id="start")
with TaskGroup("section_1", tooltip="Tasks for section_1") as section_1:
task_1 = DummyOperator(task_id="task_1")
task_2 = BashOperator(task_id="task_2", bash_command="echo 1")
task_3 = DummyOperator(task_id="task_3")
task_1 >> [task_2, task_3]
with TaskGroup(
"section_2", tooltip="Tasks for section_2", default_args={"owner": "overwrite"}
) as section_2:
task_1 = DummyOperator(task_id="task_1")
section_3 = TaskGroup("section_3", tooltip="Tasks for section_3", default_args={"owner": "bug"})
end = DummyOperator(task_id="end", task_group=section_3)
start >> section_1 >> section_2 >> section_3
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19078 | https://github.com/apache/airflow/pull/19083 | d5a029e119eb50e78b5144e5405f2b249d5e4435 | 8745fb903069ac6174134d52513584538a2b8657 | "2021-10-19T15:23:04Z" | python | "2021-10-19T18:08:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,077 | ["airflow/providers/microsoft/azure/hooks/data_factory.py", "docs/apache-airflow-providers-microsoft-azure/connections/adf.rst", "tests/providers/microsoft/azure/hooks/test_azure_data_factory.py"] | Add support for more authentication types in Azure Data Factory hook | ### Description
At the moment the Azure Data Factory hook only supports `TokenCredential` (service principal client ID and secret). It would be very useful to have support for other authentication types like managed identity. Preferably, we would create a `DefaultTokenCredential` if the client ID and secret are not provided in the connection.
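A minimal sketch of that fallback, assuming the hook keeps its current login/password connection fields and a tenant ID extra (field and function names here are illustrative):
```python
from azure.identity import ClientSecretCredential, DefaultAzureCredential


def get_credential(login, password, tenant_id):
    # Keep today's behaviour when a service principal is fully configured ...
    if login and password:
        return ClientSecretCredential(
            tenant_id=tenant_id, client_id=login, client_secret=password
        )
    # ... otherwise fall back to the ambient identity
    # (managed identity, workload identity, Azure CLI, environment variables).
    return DefaultAzureCredential()
```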
### Use case/motivation
We're using Airflow on Azure Kubernetes Service and this would allow us to use the pod identity for authentication which is a lot cleaner than creating a service principal.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19077 | https://github.com/apache/airflow/pull/19079 | 9efb989d19e657a2cde2eef98804c5007f148ee1 | ca679c014cad86976c1b2e248b099d9dc9fc99eb | "2021-10-19T14:42:14Z" | python | "2021-11-07T16:42:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,071 | ["airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | Error in SSHOperator "'XComArg' object has no attribute 'startswith'" | ### Apache Airflow Provider(s)
ssh
### Versions of Apache Airflow Providers
apache-airflow-providers-ssh 2.2.0
### Apache Airflow version
2.1.3
### Operating System
Ubuntu 20.04.2 LTS (Focal Fossa)
### Deployment
Other
### Deployment details
_No response_
### What happened
Setting the `command` of SSHOperator to the return value of a @task function raised AttributeError "'XComArg' object has no attribute 'startswith'".
- DAG
```python
from airflow.decorators import dag, task
from airflow.providers.ssh.operators.ssh import SSHOperator
from airflow.utils.dates import days_ago
DEFAULT_ARGS = {"owner": "airflow"}
# [START example_dag_decorator_ssh_usage]
@dag(default_args=DEFAULT_ARGS, schedule_interval=None, start_date=days_ago(2), tags=['example'])
def example_dag_decorator_ssh():
@task
def build_command() -> str:
return 'whoami'
command = build_command()
SSHOperator(
task_id='run_my_command',
command=command,
ssh_conn_id='default_conn',
retries=0
)
dag = example_dag_decorator_ssh()
# [END example_dag_decorator_ssh_usage]
```
- Test
```python
import os
import unittest
from glob import glob
from airflow.models import DagBag
ROOT_FOLDER = os.path.realpath(
os.path.join(os.path.dirname(os.path.realpath(__file__)), os.pardir, os.pardir)
)
class TestExampleDagDecoratorSsh(unittest.TestCase):
def test_should_be_importable(self):
dagbag = DagBag(
dag_folder=f"{ROOT_FOLDER}/airflow/example_dags",
include_examples=False,
)
assert 0 == len(dagbag.import_errors), f"import_errors={str(dagbag.import_errors)}"
```
- Result
```
------------------------------------------------------------------------------------------ Captured log call ------------------------------------------------------------------------------------------
INFO airflow.models.dagbag.DagBag:dagbag.py:496 Filling up the DagBag from /home/masayuki/git/airflow/airflow/example_dags
ERROR airflow.models.dagbag.DagBag:dagbag.py:329 Failed to import: /home/masayuki/git/airflow/airflow/example_dags/example_dag_decorator_ssh.py
Traceback (most recent call last):
File "/home/masayuki/git/airflow/airflow/models/dagbag.py", line 326, in _load_modules_from_file
loader.exec_module(new_module)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/masayuki/git/airflow/airflow/example_dags/example_dag_decorator_ssh.py", line 26, in <module>
dag = example_dag_decorator_ssh()
File "/home/masayuki/git/airflow/airflow/models/dag.py", line 2351, in factory
f(**f_kwargs)
File "/home/masayuki/git/airflow/airflow/example_dags/example_dag_decorator_ssh.py", line 18, in example_dag_decorator_ssh
SSHOperator(
File "/home/masayuki/git/airflow/airflow/models/baseoperator.py", line 178, in apply_defaults
result = func(self, *args, **kwargs)
File "/home/masayuki/git/airflow/airflow/providers/ssh/operators/ssh.py", line 81, in __init__
self.get_pty = (self.command.startswith('sudo') or get_pty) if self.command else get_pty
AttributeError: 'XComArg' object has no attribute 'startswith'
```
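For context, the traceback shows the constructor calling `startswith` on `self.command` before templates are rendered. Purely as an illustration (not the actual provider fix), a guard of this shape avoids eager string inspection of an unrendered `XComArg`:
```python
def needs_pty(command, get_pty: bool = False) -> bool:
    # Only a plain string can be inspected eagerly; an XComArg is a lazy
    # reference that is only rendered at execute time, so fall back to the flag.
    if isinstance(command, str):
        return command.startswith("sudo") or get_pty
    return get_pty


assert needs_pty("sudo whoami") is True
assert needs_pty(object()) is False  # stands in for an unrendered XComArg
```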
### What you expected to happen
No error
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19071 | https://github.com/apache/airflow/pull/19323 | 9929fb952352110b3d2a9429aff9a6a501be08ef | 2197e4b59a7cf859eff5969b5f27b5e4f1084d3b | "2021-10-19T13:29:44Z" | python | "2021-10-29T21:16:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,028 | ["airflow/task/task_runner/base_task_runner.py", "tests/jobs/test_local_task_job.py", "tests/task/task_runner/test_base_task_runner.py"] | PermissionError when `core:default_impersonation` is set | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
```
apache-airflow-providers-celery | 2.1.0
apache-airflow-providers-ftp | 2.0.1
apache-airflow-providers-http | 2.0.1
apache-airflow-providers-imap | 2.0.1
apache-airflow-providers-postgres | 2.3.0
apache-airflow-providers-redis | 2.0.1
apache-airflow-providers-sqlite | 2.0.1
```
### Deployment
Virtualenv installation
### Deployment details
I closely followed the installation instructions given at [Installation from PyPI](https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html), with `AIRFLOW_VERSION=2.2.0` and `PYTHON_VERSION=3.9`. The installation was conducted using the initial user created by the Ubuntu installer, i.e. `uid=1000(ubuntu) gid=1000(ubuntu)`.
Airflow is running locally on a single machine, which hosts the scheduler, webserver and worker. The database backend used is [PostgreSQL](https://airflow.apache.org/docs/apache-airflow/stable/howto/set-up-database.html#setting-up-a-postgresql-database), and the chosen executor is the [Celery executor](https://airflow.apache.org/docs/apache-airflow/stable/executor/celery.html).
Airflow is running with systemd, using the instructions given [here](https://airflow.apache.org/docs/apache-airflow/stable/howto/run-with-systemd.html) and the unit files found [here](https://github.com/apache/airflow/tree/main/scripts/systemd). The only changes made are regarding the `User=` and `Group=` settings, as follows:
```
User=ubuntu
Group=ubuntu
```
As stated in the [Ubuntu Server Guide](https://ubuntu.com/server/docs/security-users):
> By default, the initial user created by the Ubuntu installer is a member of the group `sudo` which is added to the file `/etc/sudoers` as an authorized `sudo` user.
Additionally, the requirements for impersonation to work, given [here](https://ubuntu.com/server/docs/security-users), are strictly followed:
```
$ sudo cat /etc/sudoers.d/90-cloud-init-users
# Created by cloud-init v. 20.1-10-g71af48df-0ubuntu5 on Wed, 13 May 2020 14:17:09 +0000
# User rules for ubuntu
ubuntu ALL=(ALL) NOPASSWD:ALL
```
Since the requirements are met, the config `core:default_impersonation` is set as follows:
```
# If set, tasks without a ``run_as_user`` argument will be run with this user
# Can be used to de-elevate a sudo user running Airflow when executing tasks
default_impersonation = produser
```
Here, `produser` is an existing Unix user on the host machine:
```
$ id produser
uid=115(produser) gid=122(produser) groups=122(produser)
```
### What happened
After finishing installation and opening the Airflow UI, I tested the installation by manually triggering the `tutorial_etl_dag` example DAG. The execution immediately failed. Here is what the worker logs show (the worker was immediately stopped after the error in order to keep the logs short):
<details><summary>journalctl -u airflow-worker.service -b</summary>
```
-- Logs begin at Wed 2020-05-13 14:16:58 UTC, end at Sun 2021-10-17 00:25:01 UTC. --
Oct 16 23:39:54 ip-172-19-250-178 systemd[1]: Started Airflow celery worker daemon.
Oct 16 23:39:57 ip-172-19-250-178 airflow[4184]: [2021-10-16 23:39:57 +0000] [4184] [INFO] Starting gunicorn 20.1.0
Oct 16 23:39:57 ip-172-19-250-178 airflow[4184]: [2021-10-16 23:39:57 +0000] [4184] [INFO] Listening at: http://0.0.0.0:8793 (4184)
Oct 16 23:39:57 ip-172-19-250-178 airflow[4184]: [2021-10-16 23:39:57 +0000] [4184] [INFO] Using worker: sync
Oct 16 23:39:57 ip-172-19-250-178 airflow[4189]: [2021-10-16 23:39:57 +0000] [4189] [INFO] Booting worker with pid: 4189
Oct 16 23:39:57 ip-172-19-250-178 airflow[4199]: [2021-10-16 23:39:57 +0000] [4199] [INFO] Booting worker with pid: 4199
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]:
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: -------------- celery@ip-172-19-250-178 v5.1.2 (sun-harmonics)
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: --- ***** -----
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: -- ******* ---- Linux-5.11.0-1019-aws-x86_64-with-glibc2.31 2021-10-16 23:39:58
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: - *** --- * ---
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: - ** ---------- [config]
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: - ** ---------- .> app: airflow.executors.celery_executor:0x7f4f6c0350a0
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: - ** ---------- .> transport: redis://localhost:6379/0
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: - ** ---------- .> results: postgresql://airflow:**@localhost:5432/airflow
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: - *** --- * --- .> concurrency: 16 (prefork)
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: --- ***** -----
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: -------------- [queues]
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: .> default exchange=default(direct) key=default
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]:
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: [tasks]
Oct 16 23:39:58 ip-172-19-250-178 airflow[4020]: . airflow.executors.celery_executor.execute_command
Oct 16 23:39:59 ip-172-19-250-178 airflow[4020]: [2021-10-16 23:39:59,748: INFO/MainProcess] Connected to redis://localhost:6379/0
Oct 16 23:39:59 ip-172-19-250-178 airflow[4020]: [2021-10-16 23:39:59,756: INFO/MainProcess] mingle: searching for neighbors
Oct 16 23:40:00 ip-172-19-250-178 airflow[4020]: [2021-10-16 23:40:00,774: INFO/MainProcess] mingle: all alone
Oct 16 23:40:00 ip-172-19-250-178 airflow[4020]: [2021-10-16 23:40:00,790: INFO/MainProcess] celery@ip-172-19-250-178 ready.
Oct 16 23:40:02 ip-172-19-250-178 airflow[4020]: [2021-10-16 23:40:02,817: INFO/MainProcess] Events of group {task} enabled by remote.
Oct 16 23:42:10 ip-172-19-250-178 airflow[4020]: [2021-10-16 23:42:10,310: INFO/MainProcess] Task airflow.executors.celery_executor.execute_command[400d064a-4849-4203-95ba-f5744bd3313b] received
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: [2021-10-16 23:42:10,355: INFO/ForkPoolWorker-15] Executing command in Celery: ['airflow', 'tasks', 'run', 'tutorial_etl_dag', 'extract', 'manual__2021-10-16T23:42:09.773269+00:00', '--local', '--subdir', '/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/example_dags/tutorial_etl_dag.py']
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: [2021-10-16 23:42:10,355: INFO/ForkPoolWorker-15] Celery task ID: 400d064a-4849-4203-95ba-f5744bd3313b
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: [2021-10-16 23:42:10,413: INFO/ForkPoolWorker-15] Filling up the DagBag from /opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/example_dags/tutorial_etl_dag.py
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: [2021-10-16 23:42:10,479: WARNING/ForkPoolWorker-15] Could not import DAGs in example_kubernetes_executor_config.py: No module named 'kubernetes'
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: [2021-10-16 23:42:10,479: WARNING/ForkPoolWorker-15] Install kubernetes dependencies with: pip install apache-airflow['cncf.kubernetes']
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: [2021-10-16 23:42:10,552: WARNING/ForkPoolWorker-15] Running <TaskInstance: tutorial_etl_dag.extract manual__2021-10-16T23:42:09.773269+00:00 [queued]> on host ip-172-19-250-178
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: [2021-10-16 23:42:10,552: WARNING/ForkPoolWorker-15]
Oct 16 23:42:10 ip-172-19-250-178 sudo[4671]: ubuntu : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/chown produser /tmp/tmp51f_x5wa
Oct 16 23:42:10 ip-172-19-250-178 sudo[4671]: pam_unix(sudo:session): session opened for user root by (uid=0)
Oct 16 23:42:10 ip-172-19-250-178 sudo[4671]: pam_unix(sudo:session): session closed for user root
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: [2021-10-16 23:42:10,626: ERROR/ForkPoolWorker-15] Failed to execute task [Errno 1] Operation not permitted: '/tmp/tmpsa9pohg6'.
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: Traceback (most recent call last):
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/executors/celery_executor.py", line 121, in _execute_in_fork
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: args.func(args)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 48, in command
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: return func(*args, **kwargs)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/utils/cli.py", line 92, in wrapper
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: return f(*args, **kwargs)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 292, in task_run
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: _run_task_by_selected_method(args, dag, ti)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 105, in _run_task_by_selected_method
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: _run_task_by_local_task_job(args, ti)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 163, in _run_task_by_local_task_job
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: run_job.run()
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 245, in run
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: self._execute()
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 78, in _execute
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: self.task_runner = get_task_runner(self)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/task/task_runner/__init__.py", line 63, in get_task_runner
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: task_runner = task_runner_class(local_task_job)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/task/task_runner/standard_task_runner.py", line 35, in __init__
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: super().__init__(local_task_job)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/task/task_runner/base_task_runner.py", line 91, in __init__
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: os.chown(self._error_file.name, getpwnam(self.run_as_user).pw_uid, -1)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: PermissionError: [Errno 1] Operation not permitted: '/tmp/tmpsa9pohg6'
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: [2021-10-16 23:42:10,645: ERROR/ForkPoolWorker-15] Task airflow.executors.celery_executor.execute_command[400d064a-4849-4203-95ba-f5744bd3313b] raised unexpected: AirflowException('Celery command failed on host: ip-172-19-250-178')
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: Traceback (most recent call last):
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/celery/app/trace.py", line 450, in trace_task
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: R = retval = fun(*args, **kwargs)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/celery/app/trace.py", line 731, in __protected_call__
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: return self.run(*args, **kwargs)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/executors/celery_executor.py", line 90, in execute_command
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: _execute_in_fork(command_to_exec, celery_task_id)
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: File "/opt/conda/envs/airflow/lib/python3.9/site-packages/airflow/executors/celery_executor.py", line 101, in _execute_in_fork
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: raise AirflowException('Celery command failed on host: ' + get_hostname())
Oct 16 23:42:10 ip-172-19-250-178 airflow[4284]: airflow.exceptions.AirflowException: Celery command failed on host: ip-172-19-250-178
Oct 16 23:42:39 ip-172-19-250-178 airflow[4189]: [2021-10-16 23:42:39 +0000] [4189] [INFO] Worker exiting (pid: 4189)
Oct 16 23:42:39 ip-172-19-250-178 systemd[1]: Stopping Airflow celery worker daemon...
Oct 16 23:42:39 ip-172-19-250-178 airflow[4020]: worker: Warm shutdown (MainProcess)
Oct 16 23:42:39 ip-172-19-250-178 airflow[4199]: [2021-10-16 23:42:39 +0000] [4199] [INFO] Worker exiting (pid: 4199)
Oct 16 23:42:39 ip-172-19-250-178 airflow[5006]: [2021-10-16 23:42:39 +0000] [5006] [INFO] Booting worker with pid: 5006
Oct 16 23:42:39 ip-172-19-250-178 airflow[4184]: [2021-10-16 23:42:39 +0000] [4184] [INFO] Handling signal: term
Oct 16 23:42:39 ip-172-19-250-178 airflow[5006]: [2021-10-16 23:42:39 +0000] [5006] [INFO] Worker exiting (pid: 5006)
Oct 16 23:42:39 ip-172-19-250-178 airflow[4184]: [2021-10-16 23:42:39 +0000] [4184] [INFO] Shutting down: Master
Oct 16 23:42:41 ip-172-19-250-178 systemd[1]: airflow-worker.service: Succeeded.
Oct 16 23:42:41 ip-172-19-250-178 systemd[1]: Stopped Airflow celery worker daemon.
```
</details>
It seems that, although `sudo` is correctly used to set file ownership:
```
Oct 16 23:42:10 ip-172-19-250-178 sudo[4671]: ubuntu : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/chown produser /tmp/tmp51f_x5wa
Oct 16 23:42:10 ip-172-19-250-178 sudo[4671]: pam_unix(sudo:session): session opened for user root by (uid=0)
Oct 16 23:42:10 ip-172-19-250-178 sudo[4671]: pam_unix(sudo:session): session closed for user root
```
a `PermissionError` is being raised immediately thereafter:
```
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: [2021-10-16 23:42:10,626: ERROR/ForkPoolWorker-15] Failed to execute task [Errno 1] Operation not permitted: '/tmp/tmpsa9pohg6'.
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: Traceback (most recent call last):
.
.
.
Oct 16 23:42:10 ip-172-19-250-178 airflow[4664]: PermissionError: [Errno 1] Operation not permitted: '/tmp/tmpsa9pohg6'
```
Note that the file names given above are different:
```
$ /bin/ls -l /tmp/tmp51f_x5wa /tmp/tmpsa9pohg6
-rw------- 1 produser ubuntu 8984 Oct 16 23:42 /tmp/tmp51f_x5wa
-rw------- 1 ubuntu ubuntu 0 Oct 16 23:42 /tmp/tmpsa9pohg6
```
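The two code paths also differ in privilege. A simplified sketch of the difference (the user name is taken from this setup; the real code lives in `airflow/task/task_runner/base_task_runner.py`):
```python
import os
import subprocess
import tempfile
from pwd import getpwnam

run_as_user = "produser"  # core.default_impersonation in this deployment

# Path taken for the first temp file: chown is elevated via sudo, so it succeeds.
cfg_file = tempfile.NamedTemporaryFile(delete=False)
subprocess.check_call(["sudo", "chown", run_as_user, cfg_file.name])

# Path taken for the error file: plain os.chown as the unprivileged 'ubuntu'
# process, which raises PermissionError because only root may give a file away.
error_file = tempfile.NamedTemporaryFile(delete=False)
os.chown(error_file.name, getpwnam(run_as_user).pw_uid, -1)
```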
### What you expected to happen
The example DAG was expected to run successfully when the config `core:default_impersonation` is set.
### How to reproduce
In an installation following the deployment details given above, with the config `core:default_impersonation` set, the example DAG (or any other DAG, for that matter) fails every time. Commenting out the `core:default_impersonation` config from `airflow.cfg` **will not** trigger the error.
### Anything else
I would like to know whether this behavior is indeed a bug or the result of an installation oversight on my part. If the latter, what should I have done differently?
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19028 | https://github.com/apache/airflow/pull/20114 | 35be8bdee0cdd3e5c73270b0b65e0552fb9d9946 | b37c0efabd29b9f20ba05c0e1281de22809e0624 | "2021-10-17T01:33:26Z" | python | "2021-12-08T16:00:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 19,001 | ["chart/values.schema.json", "chart/values.yaml"] | Slow liveness probe causes frequent restarts (scheduler and triggerer) | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
official docker image
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
I noticed the scheduler was restarting a lot and often ended up in a CrashLoopBackOff state, apparently due to a failed liveness probe:
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 19m (x25 over 24m) kubelet Back-off restarting failed container
Warning Unhealthy 4m15s (x130 over 73m) kubelet Liveness probe failed:
```
The triggerer has this issue as well and also enters the CrashLoopBackOff state frequently.
e.g.
```
NAME READY STATUS RESTARTS AGE
airflow-prod-redis-0 1/1 Running 0 2d7h
airflow-prod-scheduler-75dc64bc8-m8xdd 2/2 Running 14 77m
airflow-prod-triggerer-7897c44dd4-mtnq9 1/1 Running 126 12h
airflow-prod-webserver-7bdfc8ff48-gfnvs 1/1 Running 0 12h
airflow-prod-worker-659b566588-w8cd2 1/1 Running 0 147m
```
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 18m (x398 over 11h) kubelet Back-off restarting failed container
Warning Unhealthy 3m32s (x1262 over 12h) kubelet Liveness probe failed:
```
It turns out the liveness probe takes too long to run, so it failed continuously and the scheduler would just restart every 10 minutes.
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
I ran the liveness probe code in a container on k8s and found that it generally takes longer than 5 seconds.
Probably we should increase the default timeout to 10 seconds and possibly reduce the frequency so that it's not wasting as much CPU.
```
❯ keti airflow-prod-scheduler-6956684c7f-swfgb -- bash
Defaulted container "scheduler" out of: scheduler, scheduler-log-groomer, wait-for-airflow-migrations (init)
airflow@airflow-prod-scheduler-6956684c7f-swfgb:/opt/airflow$ time /entrypoint python -Wignore -c "import os
> os.environ['AIRFLOW__CORE__LOGGING_LEVEL'] = 'ERROR'
> os.environ['AIRFLOW__LOGGING__LOGGING_LEVEL'] = 'ERROR'
>
> from airflow.jobs.scheduler_job import SchedulerJob
> from airflow.utils.db import create_session
> from airflow.utils.net import get_hostname
> import sys
>
> with create_session() as session:
> job = session.query(SchedulerJob).filter_by(hostname=get_hostname()).order_by(
> SchedulerJob.latest_heartbeat.desc()).limit(1).first()
>
> print(0 if job.is_alive() else 1)
> "
0
real 0m5.696s
user 0m4.989s
sys 0m0.375s
airflow@airflow-prod-scheduler-6956684c7f-swfgb:/opt/airflow$ time /entrypoint python -Wignore -c "import os
os.environ['AIRFLOW__CORE__LOGGING_LEVEL'] = 'ERROR'
os.environ['AIRFLOW__LOGGING__LOGGING_LEVEL'] = 'ERROR'
from airflow.jobs.scheduler_job import SchedulerJob
from airflow.utils.db import create_session
from airflow.utils.net import get_hostname
import sys
with create_session() as session:
job = session.query(SchedulerJob).filter_by(hostname=get_hostname()).order_by(
SchedulerJob.latest_heartbeat.desc()).limit(1).first()
print(0 if job.is_alive() else 1)
"
0
real 0m7.261s
user 0m5.273s
sys 0m0.411s
airflow@airflow-prod-scheduler-6956684c7f-swfgb:/opt/airflow$
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/19001 | https://github.com/apache/airflow/pull/19003 | b814ab43d62fad83c1083a7bc3a8d009c6103213 | 866c764ae8fc17c926e245421d607e4e84ac9ec6 | "2021-10-15T04:33:41Z" | python | "2021-10-15T14:32:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,969 | ["airflow/providers/trino/hooks/trino.py", "docs/apache-airflow-providers-trino/connections.rst", "docs/apache-airflow-providers-trino/index.rst", "tests/providers/trino/hooks/test_trino.py"] | Trino JWT Authentication Support | ### Description
Would be great to support JWT Authentication in Trino Hook (which also can be used for presto hook).
For example like this
```python
elif extra.get('auth') == 'jwt':
auth = trino.auth.JWTAuthentication(
token=extra.get('jwt__token')
)
```
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18969 | https://github.com/apache/airflow/pull/23116 | 6065d1203e2ce0aeb19551c545fb668978b72506 | ccb5ce934cd521dc3af74b83623ca0843211be62 | "2021-10-14T05:10:17Z" | python | "2022-05-06T19:45:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,967 | ["airflow/hooks/dbapi.py", "airflow/providers/oracle/hooks/oracle.py", "tests/providers/oracle/hooks/test_oracle.py"] | DbApiHook.test_connection() does not work with Oracle db | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-oracle==2.0.1
### Deployment
Other
### Deployment details
![Screen Shot 2021-10-14 at 06 27 02](https://user-images.githubusercontent.com/17742862/137252045-64393b28-2287-499d-a596-e56542acf54e.png)
### What happened
The title and screenshot are self-explanatory.
### What you expected to happen
To get a success message similar to what I got with SQL Server.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18967 | https://github.com/apache/airflow/pull/21699 | a1845c68f9a04e61dd99ccc0a23d17a277babf57 | 900bad1c67654252196bb095a2a150a23ae5fc9a | "2021-10-14T04:30:00Z" | python | "2022-02-26T23:56:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,963 | ["airflow/sentry.py", "tests/core/test_sentry.py"] | Missing value for new before_send config causes airflow.exceptions.AirflowConfigException: section/key [sentry/before_send] not found in config | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.3.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==2.0.3
apache-airflow-providers-docker==2.2.0
apache-airflow-providers-elasticsearch==2.0.3
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.0.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-microsoft-azure==3.2.0
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==2.3.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.1.1
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-sqlite==2.0.1
apache-airflow-providers-ssh==2.2.0
### Deployment
Other Docker-based deployment
### Deployment details
Airflow is deployed through Kubernetes and configured via environment variables.
### What happened
Enabling airflow sentry via these env vars:
```
AIRFLOW__SENTRY__SENTRY_ON="True"
SENTRY_DSN=<dsn>
SENTRY_ENVIRONMENT=<env>
```
Worked fine in airflow 2.1.4.
Upgrading to Airflow 2.2 and starting up any container fails with a warning and then the exception "airflow.exceptions.AirflowConfigException: section/key [sentry/before_send] not found in config".
Adding the following env var fixes it:
```
AIRFLOW__SENTRY__BEFORE_SEND=""
```
### What you expected to happen
The new env var should not be required and should have a reasonable fallback.
I believe the code change in #18261 should have had a `fallback=None` arg passed in to the `config.getimport` call, like I see in other calls to `getimport`.
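For illustration, the suggested change amounts to something like this (mirroring the other call sites):
```python
from airflow.configuration import conf

# fallback=None keeps Sentry's default before_send instead of raising
# AirflowConfigException when the option is absent.
before_send = conf.getimport("sentry", "before_send", fallback=None)
```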
### How to reproduce
Set up env vars
```
AIRFLOW__SENTRY__SENTRY_ON="True"
SENTRY_DSN=<dsn>
SENTRY_ENVIRONMENT=<env>
```
Run any airflow command.
Tail the logs
```
WARNING - section/key [sentry/before_send] not found in config
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 47, in command
func = import_string(import_path)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 24, in <module>
from airflow.utils import cli as cli_utils, db
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 27, in <module>
from airflow.jobs.base_job import BaseJob # noqa: F401
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/__init__.py", line 19, in <module>
import airflow.jobs.backfill_job
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/backfill_job.py", line 28, in <module>
from airflow import models
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/__init__.py", line 20, in <module>
from airflow.models.baseoperator import BaseOperator, BaseOperatorLink
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 61, in <module>
from airflow.models.taskinstance import Context, TaskInstance, clear_task_instances
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 84, in <module>
from airflow.sentry import Sentry
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/sentry.py", line 190, in <module>
Sentry = ConfiguredSentry()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/sentry.py", line 112, in __init__
sentry_config_opts['before_send'] = conf.getimport('sentry', 'before_send')
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py", line 477, in getimport
full_qualified_path = conf.get(section=section, key=key, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py", line 373, in get
return self._get_option_from_default_config(section, key, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py", line 383, in _get_option_from_default_config
raise AirflowConfigException(f"section/key [{section}/{key}] not found in config")
airflow.exceptions.AirflowConfigException: section/key [sentry/before_send] not found in config
```
### Anything else
This is only hit by users that enable sentry, as in the default airflow configuration this code is not executed.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18963 | https://github.com/apache/airflow/pull/18980 | d5aebf149ac81e3e81a903762a4a153568e67728 | 1edcd420eca227dadf8653917772ee27de945996 | "2021-10-13T22:45:01Z" | python | "2021-10-14T14:33:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,925 | ["airflow/providers/databricks/operators/databricks.py"] | Add template_ext = ('.json') to databricks operators | ### Description
Add `template_ext = ('.json',)` to the Databricks operators. It will improve debugging and the way parameters are organized.
### Use case/motivation
Bigger parameter payloads can live in a file, which is easier to debug and maintain.
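For illustration, with `'.json'` in `template_ext` a job definition could live in a templated file instead of an inline dict (the task id, connection id and file path below are made up):
```python
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

# 'jobs/notebook_task.json' would be read from the DAG folder and rendered by the
# templating engine because its extension matches template_ext.
notebook_run = DatabricksSubmitRunOperator(
    task_id="notebook_run",
    databricks_conn_id="databricks_default",
    json="jobs/notebook_task.json",
)
```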
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18925 | https://github.com/apache/airflow/pull/21530 | 5590e98be30d00ab8f2c821b2f41f524db8bae07 | 0a2d0d1ecbb7a72677f96bc17117799ab40853e0 | "2021-10-12T22:42:56Z" | python | "2022-02-12T12:52:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,921 | ["airflow/providers/google/cloud/transfers/cassandra_to_gcs.py", "tests/providers/google/cloud/transfers/test_cassandra_to_gcs.py"] | Add query timeout as an argument in CassandraToGCSOperator | ### Description
In the current [CassandraToGCSOperator](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/cassandra_to_gcs.py#L40), [session.execute()](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/cassandra_to_gcs.py#L165) is only able to use the default timeout set by the Cassandra driver.
This GH issue proposes supporting a timeout as an argument in CassandraToGCSOperator so that the session can execute a query with a custom timeout.
reference: [execute()](https://github.com/datastax/python-driver/blob/master/cassandra/cluster.py#L2575) in cassandra_driver
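For illustration, the driver already accepts a per-call timeout, so the operator mainly needs to pass it through (the contact point, keyspace and argument name below are placeholders):
```python
from cassandra.cluster import Cluster

cluster = Cluster(["cassandra-host"])      # placeholder contact point
session = cluster.connect("my_keyspace")   # placeholder keyspace

query_timeout = 60.0                       # seconds, supplied via the new operator argument
rows = session.execute("SELECT * FROM my_table", timeout=query_timeout)
```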
### Use case/motivation
This support makes query execution in the Cassandra operator more flexible for use cases where a query is expected to require a different timeout.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18921 | https://github.com/apache/airflow/pull/18927 | fd569e714403176770b26cf595632812bd384bc0 | 55abc2f620a96832661d1797442a834bf958bb3e | "2021-10-12T21:25:02Z" | python | "2021-10-28T11:14:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,917 | ["docs/apache-airflow/concepts/xcoms.rst"] | Document xcom clearing behaviour on task retries | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Debian Buster (in Docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
Dockerfile:
```
FROM quay.io/astronomer/ap-airflow-dev:2.2.0-buster-onbuild-44762
```
Launched via `astro dev start`
**DAG**
```python
from datetime import timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago


def fail_then_succeed(**kwargs):
val = kwargs["ti"].xcom_pull(key="foo")
if not val:
kwargs["ti"].xcom_push(key="foo", value="bar")
raise Exception("fail")
with DAG(
dag_id="fail_then_succeed",
start_date=days_ago(1),
schedule_interval=None,
) as dag:
PythonOperator(
task_id="succeed_second_try",
python_callable=fail_then_succeed,
retries=2,
retry_delay=timedelta(seconds=15)
)
```
### What happened
The pushed value appears in XCom, but even on successive retries `xcom_pull` returned `None`.
### What you expected to happen
On the second try, the XCOM value pushed by the first try is available, so the task succeeds after spending some time in "up for retry".
### How to reproduce
Run the DAG shown above and notice that it fails.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18917 | https://github.com/apache/airflow/pull/19968 | e8b1120f26d49df1a174d89d51d24c6e7551bfdf | 538612c3326b5fd0be4f4114f85e6f3063b5d49c | "2021-10-12T18:54:45Z" | python | "2021-12-05T23:03:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,913 | ["docs/docker-stack/docker-images-recipes/gcloud.Dockerfile"] | Better "recipe" for image with gcloud | ### Description
Here is an alternative to the `Dockerfile` at https://github.com/apache/airflow/blob/main/docs/docker-stack/docker-images-recipes/gcloud.Dockerfile that produces a much smaller image with the `gcloud` SDK.
It has **_not_** been tested with all relevant operators, but it meets our needs and possibly yours if you use the `gcloud`, `gsutil`, `bq` and `kubectl` utils.
It leverages a multi-stage build, removes the duplicated `kubectl` binaries and removes the Anthos components. Also, there is no need to pin the SDK version; the latest will be used & copied.
```
ARG VERSION
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine as gcloud-image
RUN gcloud components install alpha beta kubectl --quiet \
&& rm -rf \
google-cloud-sdk/bin/anthoscli \
google-cloud-sdk/bin/kubectl.* \
$(find google-cloud-sdk/ -regex ".*/__pycache__") \
google-cloud-sdk/.install/.backup
FROM apache/airflow:${VERSION}
ENV GCLOUD_HOME=/opt/google-cloud-sdk
COPY --from=gcloud-image /google-cloud-sdk ${GCLOUD_HOME}
ENV PATH="${GCLOUD_HOME}/bin/:${PATH}"
USER ${AIRFLOW_UID}
```
### Use case/motivation
Efficiency.
Building based on AF `2.1.4-python3.9`, the compressed image size is **40%** smaller than using the original recipe.
That's **327 MB** vs **567 MB** ...
Give it a try and let me know!
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18913 | https://github.com/apache/airflow/pull/21268 | 0ae31e9cb95e5061a23c2f397ab9716391c1a488 | 874a22ee9b77f8f100736558723ceaf2d04b446b | "2021-10-12T17:13:56Z" | python | "2022-02-03T18:20:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,901 | ["setup.py"] | Pandas Not Installed with Pip When Required for Providers Packages | ### Apache Airflow version
2.2.0 (latest released)
### Operating System
Ubuntu 20.04.2 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-hive==2.0.2
### Deployment
Virtualenv installation
### Deployment details
Commands used to install:
```
python3.8 -m venv ./airflow-2.2.0
source airflow-2.2.0/bin/activate
pip install --upgrade pip
export AIRFLOW_VERSION=2.2.0
export PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
export CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
pip install wheel
pip install --upgrade "apache-airflow[amazon,apache.druid,apache.hdfs,apache.hive,async,celery,http,jdbc,mysql,password,redis,ssh]==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
```
### What happened
After installing, Airflow started throwing these errors:
```
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveCliHook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveServer2Hook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveMetastoreHook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveCliHook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveServer2Hook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
WARNI [airflow.providers_manager] Exception when importing 'airflow.providers.apache.hive.hooks.hive.HiveMetastoreHook' from 'apache-airflow-providers-apache-hive' package: No module named 'pandas'
```
### What you expected to happen
I expected required packages (pandas) to be installed with the pip install command
### How to reproduce
Install using the commands in the deployment details.
### Anything else
This seems to be related to pandas being made optional in 2.2.0 without accounting for provider packages that still require it.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18901 | https://github.com/apache/airflow/pull/18997 | ee8e0b3a2a87a76bcaf088960bce35a6cee8c500 | de98976581294e080967e2aa52043176dffb644f | "2021-10-12T08:23:28Z" | python | "2021-10-16T18:21:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,894 | ["airflow/settings.py", "airflow/utils/db.py", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "tests/www/views/test_views_base.py"] | Migrate from 2.1.4 to 2.2.0 | ### Apache Airflow version
2.2.0
### Operating System
Linux
### Versions of Apache Airflow Providers
default.
### Deployment
Docker-Compose
### Deployment details
Using the airflow 2.2.0-python3.7 image
### What happened
Upgrading the image from apache/airflow:2.1.4-python3.7 to apache/airflow:2.2.0-python3.7 causes this inside the scheduler, which does not start:
```
Python version: 3.7.12
Airflow version: 2.2.0
Node: 6dd55b0a5dd7
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedColumn: column dag.max_active_tasks does not exist
LINE 1: ..., dag.schedule_interval AS dag_schedule_interval, dag.max_ac...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/auth.py", line 51, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/views.py", line 588, in index
filter_dag_ids = current_app.appbuilder.sm.get_accessible_dag_ids(g.user)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/security.py", line 377, in get_accessible_dag_ids
return {dag.dag_id for dag in accessible_dags}
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
return self._execute_and_instances(context)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3560, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1130, in _execute_clauseelement
distilled_params,
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
e, statement, parameters, cursor, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1511, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column dag.max_active_tasks does not exist
LINE 1: ..., dag.schedule_interval AS dag_schedule_interval, dag.max_ac...
^
[SQL: SELECT dag.dag_id AS dag_dag_id, dag.root_dag_id AS dag_root_dag_id, dag.is_paused AS dag_is_paused, dag.is_subdag AS dag_is_subdag, dag.is_active AS dag_is_active, dag.last_parsed_time AS dag_last_parsed_time, dag.last_pickled AS dag_last_pickled, dag.last_expired AS dag_last_expired, dag.scheduler_lock AS dag_scheduler_lock, dag.pickle_id AS dag_pickle_id, dag.fileloc AS dag_fileloc, dag.owners AS dag_owners, dag.description AS dag_description, dag.default_view AS dag_default_view, dag.schedule_interval AS dag_schedule_interval, dag.max_active_tasks AS dag_max_active_tasks, dag.max_active_runs AS dag_max_active_runs, dag.has_task_concurrency_limits AS dag_has_task_concurrency_limits, dag.next_dagrun AS dag_next_dagrun, dag.next_dagrun_data_interval_start AS dag_next_dagrun_data_interval_start, dag.next_dagrun_data_interval_end AS dag_next_dagrun_data_interval_end, dag.next_dagrun_create_after AS dag_next_dagrun_create_after
FROM dag]
(Background on this error at: http://sqlalche.me/e/13/f405)
```
### What you expected to happen
Automatic database migration and properly working scheduler.
### How to reproduce
Upgrade from 2.1.4 to 2.2.0 with some existing DAG history.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18894 | https://github.com/apache/airflow/pull/18953 | 2ba722db4857db2881ee32c1b2e9330bc7163535 | f967ca91058b4296edb507c7826282050188b501 | "2021-10-11T19:18:28Z" | python | "2021-10-14T17:09:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,878 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/sensors/base.py", "tests/core/test_config_templates.py"] | Change the Sensors default timeout using airflow configs | ### Description
Add a configuration option to set the default `timeout` on `BaseSensorOperator`.
```
[sensors]
default_timeout=
```
### Use case/motivation
By default the sensor timeout is 7 days, which is too much time for some environments; it could be reduced so that worker slots are released faster when a sensor never matches its result and has no custom timeout.
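A minimal sketch of how such a default could be picked up, assuming the `[sensors] default_timeout` option proposed above (this is an illustration, not the eventual implementation; the 7-day fallback mirrors the current hard-coded default):
```python
from airflow.configuration import conf
from airflow.sensors.base import BaseSensorOperator


class ConfigurableTimeoutSensor(BaseSensorOperator):
    """Hypothetical sensor that falls back to [sensors] default_timeout from airflow.cfg."""

    def __init__(self, *, timeout=None, **kwargs):
        if timeout is None:
            # Read the proposed option, falling back to the current 7-day default
            timeout = conf.getfloat("sensors", "default_timeout", fallback=60 * 60 * 24 * 7)
        super().__init__(timeout=timeout, **kwargs)

    def poke(self, context):
        return True
```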
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18878 | https://github.com/apache/airflow/pull/19119 | 0a6850647e531b08f68118ff8ca20577a5b4062c | 34e586a162ad9756d484d17b275c7b3dc8cefbc2 | "2021-10-11T01:12:50Z" | python | "2021-10-21T19:33:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,874 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | DockerOperator xcom is broken | ### Apache Airflow version
2.0.2
### Operating System
ubuntu 20.04
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==1.3.0
apache-airflow-providers-celery==1.0.1
apache-airflow-providers-databricks==1.0.1
apache-airflow-providers-docker==1.2.0
apache-airflow-providers-ftp==1.0.1
apache-airflow-providers-http==1.1.1
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-oracle==1.1.0
apache-airflow-providers-postgres==1.0.2
apache-airflow-providers-presto==1.0.2
apache-airflow-providers-sftp==1.2.0
apache-airflow-providers-sqlite==1.0.2
apache-airflow-providers-ssh==1.3.0
apache-airflow-providers-tableau==1.0.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using DockerOperator with 'do_xcom_push' set to True,
the Airflow documentation states that, unless 'xcom_all' is also True, the default behavior is to push only the last line of the docker output into the xcom return_value, but this statement is false:
do_xcom_push simply returns the entire (chunked) docker output as a single merged string,
all the lines, as far as I can tell.
See code lines:
https://github.com/apache/airflow/blob/10023fdd65fa78033e7125d3d8103b63c127056e/airflow/providers/docker/operators/docker.py#L258
lines is the generator that cli.attach returns earlier in the code, the generator comes from docker py, see here:
https://github.com/docker/docker-py/blob/7172269b067271911a9e643ebdcdca8318f2ded3/docker/api/client.py#L418
Following the API code for docker-py down into the generator logic (socket reads and so on),
we can conclude that the assumption that 'cli.attach' returns lines is wrong. Therefore do_xcom_push breaks the 'contract' set by the documentation of the parameter: it never actually returns only the last line.
So until this issue is fixed it is probably better to also use 'xcom_all', because that at least uses cli.logs, which is a safer interface. We should probably open a matching issue in the docker-py repo, but maybe Airflow should just handle the stream itself, split it into lines, and return the last line. See this example, modified from another issue on xcoms:
```python
from docker import APIClient
d = APIClient()
c = d.create_container(image='ubuntu:20.04', name='TEST', command="""bash -c "echo 'test' && echo 'test2'" """, host_config=d.create_host_config(auto_remove=False, network_mode='bridge'))
gen=d.attach(c['Id'], stderr=True, stdout=True, stream=True)
d.start(c['Id'])
print( [i for i in gen] )
print([i for i in d.logs(c['Id'], stream=True, stdout=True, stderr=True)])
d.remove_container(c['Id'])
```
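Until this is fixed, here is a rough sketch of the workaround idea described above, modeled on the example (an assumption about how the chunked attach stream could be joined and split into lines, not the actual fix):
```python
from docker import APIClient

d = APIClient()
c = d.create_container(
    image="ubuntu:20.04",
    command="""bash -c "echo 'test' && echo 'test2'" """,
    host_config=d.create_host_config(auto_remove=False, network_mode="bridge"),
)
gen = d.attach(c["Id"], stderr=True, stdout=True, stream=True)
d.start(c["Id"])

# Join the raw chunks, then split into lines so only the last line would be pushed to XCom
chunks = [b.decode("utf-8", errors="ignore") if isinstance(b, bytes) else b for b in gen]
lines = "".join(chunks).splitlines()
print(lines[-1] if lines else "")  # expected: test2

d.remove_container(c["Id"])
```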
Thank you @nullhack
Related issues:
https://github.com/apache/airflow/issues/15952
https://github.com/apache/airflow/issues/9164
https://github.com/apache/airflow/issues/14809 - This is a really nasty issue that I am also hitting
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
Edits:
- airflow version correction
- Code chunk formatting | https://github.com/apache/airflow/issues/18874 | https://github.com/apache/airflow/pull/21175 | b8564daf50e049bdb27971104973b8981b7ea121 | 2f4a3d4d4008a95fc36971802c514fef68e8a5d4 | "2021-10-10T16:12:56Z" | python | "2022-02-01T17:27:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,867 | ["airflow/providers/docker/operators/docker_swarm.py", "setup.py", "tests/providers/docker/operators/test_docker_swarm.py"] | Fix DockerSwarmOperator timeout | ### Body
See discussion in https://github.com/apache/airflow/discussions/18270
`docker-py` released version 5.0.3 https://github.com/docker/docker-py/releases/tag/5.0.3 which fixed https://github.com/docker/docker-py/issues/931
**Task:**
Update `DockerSwarmOperator`
https://github.com/apache/airflow/blob/4da4c186ecdcdae308fe8b4a7994c21faf42bc96/airflow/providers/docker/operators/docker_swarm.py#L205-L215
as we don't need the workaround any longer.
Note: this also requires setting the minimum version of the package to 5.0.3 in [setup.py](https://github.com/apache/airflow/blob/fd45f5f3e38b80993d5624480a793be381194f04/setup.py#L263)
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/18867 | https://github.com/apache/airflow/pull/18872 | 7d3b6b51c0227f6251fd5b0023970c19fcc3c402 | 3154935138748a8ac89aa4c8fde848e31610941b | "2021-10-10T07:34:38Z" | python | "2021-10-12T11:54:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,851 | ["breeze"] | command not found: complete when using bash completion for breeze setup-autocomplete | ### Describe the issue with documentation
OS: MacOS Big Sur (M1 chip)
Shell used: zsh
I am trying to set up the airflow local development setup using Breeze as described in the video in the link https://github.com/apache/airflow/blob/main/BREEZE.rst
I forked and cloned the repo, checked the docker and docker-compose version, and have given the below command
```./breeze setup-autocomplete```
At the end of the above command, I got the result like below,
```Breeze completion is installed to ~/.bash_completion.d/breeze-complete
Please exit and re-enter your shell or run:
source ~/.bash_completion.d/breeze-complete```
I entered the below command in prompt to finish the process:
```$source ~/.bash_completion.d/breeze-complete```
and got the below response:
```/Users/sk/.bash_completion.d/breeze-complete:466: command not found: complete
/Users/sk/.bash_completion.d/breeze-complete:467: command not found: complete
```
### How to solve the problem
I have to execute:
```source ~/.zshrc``` first and then execute the command
``` source ~/.bash_completion.d/breeze-complete ```
We can either add instructions to execute both of the above commands or make them part of breeze.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18851 | https://github.com/apache/airflow/pull/18893 | d425f894101c1462930c66cde3499fb70941c1bc | ec31b2049e7c3b9f9694913031553f2d7eb66265 | "2021-10-09T14:15:29Z" | python | "2021-10-11T20:54:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,845 | ["airflow/providers/facebook/ads/hooks/ads.py"] | Facebook Ads Provider uses a deprecated version of the API | ### Apache Airflow Provider(s)
facebook
### Versions of Apache Airflow Providers
2.0.1
### Apache Airflow version
2.1.1
### Operating System
Ubuntu 20.04
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
Task fails because the hook uses a deprecated Facebook API version. The hook is calling v6.0, which is no longer supported.
### What you expected to happen
I expected this task to connect to the Facebook API and fetch the requested data.
My log files for the failed task output the following message:
```
facebook_business.exceptions.FacebookRequestError:
Message: Call was not successful
Method: POST
Path: https://graph.facebook.com/v6.0/act_1210763848963620/insights
Params: {'level': 'ad', 'date_preset': 'yesterday', 'fields': '["campaign_name","campaign_id","ad_id","clicks","impressions"]'}
Status: 400
Response:
{
"error": {
"message": "(#2635) You are calling a deprecated version of the Ads API. Please update to the latest version: v11.0.",
"type": "OAuthException",
"code": 2635,
"fbtrace_id": "AGRidwR5VhjU3kAJVUSkvuz"
}
}
```
Line 69 of https://github.com/apache/airflow/blob/main/airflow/providers/facebook/ads/hooks/ads.py should be changed to a newer API version.
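Until the provider's default is bumped, a possible stopgap is to pin a supported version per hook. This sketch assumes the hook's `api_version` constructor argument and is only an illustration, not the official fix:
```python
from airflow.providers.facebook.ads.hooks.ads import FacebookAdsReportingHook

# Pin a still-supported Graph API version instead of relying on the deprecated v6.0 default
hook = FacebookAdsReportingHook(facebook_conn_id="facebook_default", api_version="v11.0")
```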
### How to reproduce
Run the sample DAG posted here: https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_modules/airflow/providers/google/cloud/example_dags/example_facebook_ads_to_gcs.html
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18845 | https://github.com/apache/airflow/pull/18883 | 0a82a422e42072db459f527db976e0621ccab9fb | d5aebf149ac81e3e81a903762a4a153568e67728 | "2021-10-08T21:36:03Z" | python | "2021-10-14T14:29:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,843 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | SerializedDagNotFound: DAG not found in serialized_dag table | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
Linux 5.4.149-73.259.amzn2.x86_64
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
AWS EKS over own helm chart
### What happened
We have an issue back from 2.0.x #13504
Each time the scheduler is restarted it deletes all DAGs from the serialized_dag table and tries to serialize them again from scratch. Afterwards the scheduler pod fails with this error:
```
[2021-10-08 20:19:40,683] {kubernetes_executor.py:761} INFO - Shutting down Kubernetes executor
[2021-10-08 20:19:41,705] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 32
[2021-10-08 20:19:42,207] {process_utils.py:207} INFO - Waiting up to 5 seconds for processes to exit...
[2021-10-08 20:19:42,223] {process_utils.py:66} INFO - Process psutil.Process(pid=32, status='terminated', exitcode=0, started='20:19:40') (32) terminated with exit code 0
[2021-10-08 20:19:42,225] {process_utils.py:66} INFO - Process psutil.Process(pid=40, status='terminated', started='20:19:40') (40) terminated with exit code None
[2021-10-08 20:19:42,226] {process_utils.py:66} INFO - Process psutil.Process(pid=36, status='terminated', started='20:19:40') (36) terminated with exit code None
[2021-10-08 20:19:42,226] {scheduler_job.py:722} INFO - Exited execute loop
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 70, in scheduler
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 245, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 695, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 788, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 927, in _do_scheduling
num_queued_tis = self._critical_section_execute_task_instances(session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 551, in _critical_section_execute_task_instances
queued_tis = self._executable_task_instances_to_queued(max_tis, session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 431, in _executable_task_instances_to_queued
serialized_dag = self.dagbag.get_dag(dag_id, session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 67, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 186, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 258, in _add_dag_from_db
raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
airflow.exceptions.SerializedDagNotFound: DAG 'aws_transforms_player_hourly' not found in serialized_dag table
```
causing all DAGs to be absent from the serialized_dag table:
```
Python version: 3.9.7
Airflow version: 2.1.4
Node: airflow-webserver-7b45758f99-rk8dg
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.9/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/auth.py", line 49, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/decorators.py", line 97, in view_func
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/decorators.py", line 60, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/views.py", line 2027, in tree
dag = current_app.dag_bag.get_dag(dag_id)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 186, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 258, in _add_dag_from_db
raise SerializedDagNotFound(f"DAG '{dag_id}' not found in serialized_dag table")
airflow.exceptions.SerializedDagNotFound: DAG 'canary_dag' not found in serialized_dag table
```
### What you expected to happen
Scheduler shouldn't fail
### How to reproduce
1. Restart the scheduler pod.
2. Observe its failure.
3. Open a DAG in the webserver.
4. Observe the error.
### Anything else
The issue temporarily goes away (until the next scheduler restart) when I run this "serialize" script from the webserver pod:
```python
from airflow.models import DagBag
from airflow.models.serialized_dag import SerializedDagModel
dag_bag = DagBag()
# Check DB for missing serialized DAGs, and add them if missing
for dag_id in dag_bag.dag_ids:
if not SerializedDagModel.get(dag_id):
dag = dag_bag.get_dag(dag_id)
SerializedDagModel.write_dag(dag)
```
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18843 | https://github.com/apache/airflow/pull/19113 | ceb2b53a109b8fdd617f725a72c6fdb9c119550b | 5dc375aa7744f37c7a09f322cd9f4a221aa4ccbe | "2021-10-08T20:31:54Z" | python | "2021-10-20T20:25:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,799 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | Add test for https://github.com/apache/airflow/pull/17305 | ### Body
Add unit test for changes in https://github.com/apache/airflow/pull/17305
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/18799 | https://github.com/apache/airflow/pull/18806 | e0af0b976c0cc43d2b1aa204d047fe755e4c5be7 | e286ee64c5c0aadd79a5cd86f881fb1acfbf317e | "2021-10-07T09:53:33Z" | python | "2021-10-08T03:24:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,789 | ["airflow/providers/google/cloud/_internal_client/secret_manager_client.py", "tests/providers/google/cloud/_internal_client/test_secret_manager_client.py"] | Google Cloud Secret Manager Backend fail on variable with a non allow character | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
5.1.0
### Apache Airflow version
2.1.2
### Operating System
GCP Container-Optimized OS
### Deployment
Composer
### Deployment details
_No response_
### What happened
When a variable has a disallowed character such as a `.`, the secret manager backend fails, even if the variable exists in the Airflow database itself (i.e. it is registered as an Airflow variable).
### What you expected to happen
The secret manager should catch the error and say that it can't find it
```log
[2021-10-07 06:22:09,330] {secret_manager_client.py:93} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID example-variables-prefix-toto.tata.
```
and then fall back to looking up the Airflow variable.
### How to reproduce
```python
[secrets]
backend = airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
backend_kwargs = {"connections_prefix": "example-connections-prefix", "variables_prefix": "example-variables-prefix"}
```
```python
"{{ var.value.get('toto.tata') }}"
```
throws this error:
```
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1158, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1296, in _prepare_and_execute_task_with_callbacks
self.render_templates(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1796, in render_templates
self.task.render_template_fields(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 992, in render_template_fields
self._do_render_template_fields(self, self.template_fields, context, jinja_env, set())
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 1005, in _do_render_template_fields
rendered_content = self.render_template(content, context, jinja_env, seen_oids)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/baseoperator.py", line 1042, in render_template
return jinja_env.from_string(content).render(**context)
File "/opt/python3.8/lib/python3.8/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/opt/python3.8/lib/python3.8/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/opt/python3.8/lib/python3.8/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "<template>", line 1, in top-level template code
File "/opt/python3.8/lib/python3.8/site-packages/jinja2/sandbox.py", line 462, in call
return __context.call(__obj, *args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1682, in get
return Variable.get(item, default_var=default_var)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/variable.py", line 135, in get
var_val = Variable.get_variable_from_secrets(key=key)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/variable.py", line 204, in get_variable_from_secrets
var_val = secrets_backend.get_variable(key=key)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/secrets/secret_manager.py", line 154, in get_variable
return self._get_secret(self.variables_prefix, key)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/secrets/secret_manager.py", line 178, in _get_secret
return self.client.get_secret(secret_id=secret_id, project_id=self.project_id)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/_internal_client/secret_manager_client.py", line 86, in get_secret
response = self.client.access_secret_version(name)
File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/secretmanager_v1/gapic/secret_manager_service_client.py", line 967, in access_secret_version
return self._inner_api_calls["access_secret_version"](
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
return retry_target(
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/retry.py", line 189, in retry_target
return target()
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 69, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 The provided Secret Version ID [projects/XXXXXXXXXXX/secrets/example-variables-prefix-toto.tata/versions/latest] does not match the expected format [projects/*/secrets/*/versions/*]
```
### Anything else
This line https://github.com/apache/airflow/blob/8505d2f0a4524313e3eff7a4f16b9a9439c7a79f/airflow/providers/google/cloud/_internal_client/secret_manager_client.py#L85
should apparently catch the `InvalidArgument` exception and handle it the same way as the other caught exceptions.
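A minimal sketch of the kind of handling meant here; the call mirrors the traceback above (and assumes the same library version), but the surrounding names are assumptions rather than the actual patch:
```python
import logging

from google.api_core.exceptions import InvalidArgument, NotFound, PermissionDenied
from google.cloud.secretmanager_v1 import SecretManagerServiceClient

log = logging.getLogger(__name__)


def get_secret(client: SecretManagerServiceClient, project_id: str, secret_id: str):
    """Return the latest secret value, or None when it is missing or the name is invalid."""
    name = client.secret_version_path(project_id, secret_id, "latest")
    try:
        response = client.access_secret_version(name)
        return response.payload.data.decode("UTF-8")
    except (NotFound, PermissionDenied):
        log.error("No access for Secret ID %s, or it does not exist.", secret_id)
        return None
    except InvalidArgument:
        # e.g. a key like "toto.tata" that does not match [projects/*/secrets/*/versions/*]
        log.error("Secret ID %s is not a valid secret name.", secret_id)
        return None
```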
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18789 | https://github.com/apache/airflow/pull/18790 | 88583095c408ef9ea60f793e7072e3fd4b88e329 | 0e95b5777242b00f41812c099f1cf8e2fc0df40c | "2021-10-07T06:36:36Z" | python | "2021-10-19T06:25:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,777 | ["tests/jobs/test_scheduler_job.py", "tests/test_utils/asserts.py"] | [Quarantine] test_no_orphan_process_will_be_left | ### Body
The `test_no_orphan_process_will_be_left` in `TestSchedulerJob` is really flaky.
Before we know how to fix it, I will quarantine it:
https://github.com/apache/airflow/pull/18691/checks?check_run_id=3811660138
```
=================================== FAILURES ===================================
_____________ TestSchedulerJob.test_no_orphan_process_will_be_left _____________
self = <tests.jobs.test_scheduler_job.TestSchedulerJob object at 0x7fc2f91d2ac8>
def test_no_orphan_process_will_be_left(self):
empty_dir = mkdtemp()
current_process = psutil.Process()
old_children = current_process.children(recursive=True)
self.scheduler_job = SchedulerJob(
subdir=empty_dir, num_runs=1, executor=MockExecutor(do_update=False)
)
self.scheduler_job.run()
shutil.rmtree(empty_dir)
# Remove potential noise created by previous tests.
current_children = set(current_process.children(recursive=True)) - set(old_children)
> assert not current_children
E AssertionError: assert not {psutil.Process(pid=2895, name='pytest', status='running', started='06:53:45')}
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/18777 | https://github.com/apache/airflow/pull/19860 | d1848bcf2460fa82cd6c1fc1e9e5f9b103d95479 | 9b277dbb9b77c74a9799d64e01e0b86b7c1d1542 | "2021-10-06T16:04:33Z" | python | "2021-12-13T17:55:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,771 | ["airflow/providers/pagerduty/hooks/pagerduty.py", "airflow/providers/pagerduty/hooks/pagerduty_events.py", "airflow/providers/pagerduty/provider.yaml", "tests/providers/pagerduty/hooks/test_pagerduty.py", "tests/providers/pagerduty/hooks/test_pagerduty_events.py"] | Current implementation of PagerdutyHook requires two tokens to be present in connection | ### Apache Airflow Provider(s)
pagerduty
### Versions of Apache Airflow Providers
apache-airflow-providers-pagerduty==2.0.1
### Apache Airflow version
2.1.4 (latest released)
### Operating System
macOS catalina
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Currently, the PagerdutyHook can be used to access two APIs:
- Events API: interface to send alerts. Needs only an integration key/routing key/Events API key. (https://developer.pagerduty.com/docs/get-started/getting-started/#events-api)
- Pagerduty API: interface to interact with account data, project setup, etc. Needs only a Pagerduty REST API key or account token. (https://developer.pagerduty.com/docs/get-started/getting-started/#rest-api)
In order to interact with the APIs, the PagerdutyHook uses two attributes that refer to these API keys:
- `token`: Refers to the account token/REST API key. This attribute is retrieved from the `Password` field of the connection, or can be set when a class instance is created by passing it into the `__init__` method, where its value is asserted to be not `None`. The token is **not used** for sending alerts to Pagerduty.
- `routing_key`: Refers to the integration key/Events API key. This attribute is retrieved from the `Extra` field and is used in the `create_event` method to send requests to the Events API; in `create_event` its value is asserted to be not `None`.
As a result, if users want to use the hook only to send events, they need to provide a connection with a random string as the password (which can't be None) and the actual integration key in the `Extra` field. This makes no sense to me.
**Proposed solution:** Rather than handling the two APIs in a single hook, separate them into two different hooks. This increases code simplicity and maintainability.
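As a rough illustration of why only the integration key should be needed for alerts, here is a hypothetical standalone helper hitting the public Events API v2 endpoint directly (not the provider's actual code):
```python
import requests

EVENTS_API_URL = "https://events.pagerduty.com/v2/enqueue"


def send_alert(routing_key: str, summary: str, source: str, severity: str = "critical") -> dict:
    """Trigger a PagerDuty alert using only the integration (routing) key."""
    payload = {
        "routing_key": routing_key,
        "event_action": "trigger",
        "payload": {"summary": summary, "source": source, "severity": severity},
    }
    response = requests.post(EVENTS_API_URL, json=payload)
    response.raise_for_status()
    return response.json()
```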
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18771 | https://github.com/apache/airflow/pull/18784 | 5d9e5f69b9d9c7d4f4e5e5c040ace0589b541a91 | 923f5a5912785649be7e61c8ea32a0bd6dc426d8 | "2021-10-06T12:00:57Z" | python | "2021-10-12T18:00:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,770 | ["airflow/providers/trino/hooks/trino.py", "setup.py", "tests/providers/trino/hooks/test_trino.py"] | Properly handle verify parameter in TrinoHook | ### Body
This task refers to a comment left in the hook:
https://github.com/apache/airflow/blob/c9bf5f33e5d5bcbf7d31663a8571628434d7073f/airflow/providers/trino/hooks/trino.py#L96-L100
The change of https://github.com/trinodb/trino-python-client/pull/31 was released in trino version 0.301.0
**The task:**
Change the code + [tests](https://github.com/apache/airflow/blob/main/tests/providers/trino/hooks/test_trino.py) to work with the verify parameter directly. It will also require setting the minimum trino version in `setup.py`.
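A rough sketch of the intended usage, assuming the client accepts `verify` directly as of 0.301.0 (which is the premise of this task); host, port and the CA path are placeholders:
```python
import trino

conn = trino.dbapi.connect(
    host="localhost",
    port=8080,
    user="airflow",
    http_scheme="https",
    verify="/path/to/ca-bundle.pem",  # assumption: True/False or a CA bundle path
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchall())
```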
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/18770 | https://github.com/apache/airflow/pull/18791 | cfa8fe26faf4b1ab83b4ff5060905a1c8efdb58e | 6bc0f87755e3f9b3d736d7c1232b7cd93001ad06 | "2021-10-06T11:20:56Z" | python | "2021-10-07T11:15:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,732 | ["airflow/www/static/css/main.css"] | Tooltip element is not removed and overlays another clickable elements | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
Debian GNU/Linux 11 rodete
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
On a clean Airflow install, sometimes there is a problem with accessing clickable elements like the 'Airflow Home' button (logo with name) or the 'DAGs' menu button. This problem can also be replicated in older versions of Airflow.
The issue is caused by tooltip elements that are created each time another element is hovered; they remain as transparent elements at the (0,0) point, which is the top left corner of the interface. That's why they cover the Airflow Home button and sometimes even the DAGs menu button (depending on the size of the tooltip).
This makes some of the elements located near the top left corner not clickable (e.g. the Airflow Home button is clickable, but only its bottom part).
This is a screenshot of the interface with the area where the redundant tooltips appear highlighted. I also show the tooltip elements in the HTML code here.
![image](https://user-images.githubusercontent.com/7412964/135996630-01640dac-6433-405c-8e7c-398091dbfe34.png)
### What you expected to happen
I expect that the tooltips disappear when the element triggering them is not hovered anymore. The Airflow Home button and DAGs button should be clickable.
### How to reproduce
1. Hover over Airflow Home button and see that all of the element can be clicked (note the different pointer).
2. Now hover over any other element that shows a tooltip, e.g. filters for All/Active/Paused DAGs just a bit below.
3. Hover again over Airflow Home button and see that part of it is not clickable.
4. Open devtools and inspect the element that covers the top left corner of the interface.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18732 | https://github.com/apache/airflow/pull/19261 | 7b293c548a92d2cd0eea4f9571c007057aa06482 | 37767c1ba05845266668c84dec7f9af967139f42 | "2021-10-05T09:25:43Z" | python | "2021-11-02T17:09:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,716 | ["scripts/in_container/prod/entrypoint_prod.sh"] | /entrypoint Environment Variable Typo "_AIRFLOW_WWW_USER_LASTNME" | ### Apache Airflow version
2.1.4 (latest released)
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Docker
### What happened
The environment variable `_AIRFLOW_WWW_USER_LASTNAME` is not recognized by the /entrypoint script. I think there is a typo in the environment variable here: https://github.com/apache/airflow/blob/main/scripts/in_container/prod/entrypoint_prod.sh#L144 , i.e. `LASTNME` should be `LASTNAME`, as shown in the [docs](https://github.com/apache/airflow/blob/main/scripts/in_container/prod/entrypoint_prod.sh#L144)
### What you expected to happen
Environment Variable `_AIRFLOW_WWW_USER_LASTNAME` should populate the `--lastname` field of the command `airflow users create ...`
### How to reproduce
_No response_
### Anything else
Thank you
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18716 | https://github.com/apache/airflow/pull/18727 | c596ef43456429d80bef24ff3755b1c1bc31bc1c | cc5254841a9e8c1676fb5d0e43410a30973f0210 | "2021-10-04T17:36:46Z" | python | "2021-10-05T07:07:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,703 | ["airflow/api_connexion/endpoints/user_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_user_endpoint.py"] | "Already exists." message is missing while updating user email with existing email id through API | ### Apache Airflow version
2.2.0b2 (beta snapshot)
### Operating System
Debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro dev start
### What happened
"Already exists." error message is missing while updating user email with existing email id through API.
### What you expected to happen
"Email id already exists" error message should appear
### How to reproduce
1. Use a patch request with the below URL.
`{{url}}/users/{{username}}`
2. In the payload, use an existing email id
```
{
"username": "{{username}}",
"password": "password1",
"email": "{{exiting_email}}@example.com",
"first_name": "{{$randomFirstName}}",
"last_name": "{{$randomLastName}}",
"roles":[{ "name": "Op"}]
}
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18703 | https://github.com/apache/airflow/pull/18757 | cf27419cfe058750cde4247935e20deb60bda572 | a36e7ba4176eeacab1aeaf72ce452d3b30f4de3c | "2021-10-04T11:48:44Z" | python | "2021-10-06T15:17:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,702 | ["airflow/api_connexion/endpoints/user_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_user_endpoint.py"] | Unable to change username while updating user information through API | ### Apache Airflow version
2.2.0b2 (beta snapshot)
### Operating System
Debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro dev start
### What happened
Unable to change the username while updating user information through the API, though it is possible from the UI.
### What you expected to happen
Either the username should not be editable from the UI, or it should be editable from the API.
### How to reproduce
1. Use a patch request with the below URL.
`{{url}}/users/{{username}}`
2. In the payload, use a different username
```
{
"username": "{{username1}}",
"password": "password1",
"email": "{{email}}@example.com",
"first_name": "{{$randomFirstName}}",
"last_name": "{{$randomLastName}}",
"roles":[{ "name": "Op"}]
}
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18702 | https://github.com/apache/airflow/pull/18757 | cf27419cfe058750cde4247935e20deb60bda572 | a36e7ba4176eeacab1aeaf72ce452d3b30f4de3c | "2021-10-04T11:39:13Z" | python | "2021-10-06T15:17:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,664 | ["airflow/providers/oracle/hooks/oracle.py", "tests/providers/oracle/hooks/test_oracle.py"] | [Oracle] Oracle Hook - make it possible to define a schema in the connection parameters | ### Description
Currently the Oracle hook does not set a CURRENT_SCHEMA after connecting to the database.
In a lot of use cases we have production and test databases with separate connections and database schemas, e.g. TEST.Table1, PROD.Table1.
"Hard-coding" the database schema in SQL scripts is not elegant when there are different Airflow instances for development and production.
An option would be to store the database schema in an Airflow Variable and get it into the SQL script with Jinja.
In large SQL files with several tables that is not elegant either, because a query to the metadata database is made for every table.
Why not use the Schema parameter of the Airflow Connection and execute
`ALTER SESSION SET CURRENT_SCHEMA = SCHEMA`
right after successfully connecting to the database?
An alternative would be to use the `Connection.current_schema` option of the cx_Oracle library.
https://cx-oracle.readthedocs.io/en/6.4.1/connection.html#Connection.current_schema
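For illustration, a minimal cx_Oracle sketch of both options (connection details are placeholders; this shows the idea rather than the eventual hook change):
```python
import cx_Oracle

conn = cx_Oracle.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")

# Option 1: run ALTER SESSION right after connecting
cursor = conn.cursor()
cursor.execute("ALTER SESSION SET CURRENT_SCHEMA = TEST")
cursor.close()

# Option 2: use the attribute exposed by cx_Oracle
conn.current_schema = "TEST"
```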
### Use case/motivation
It makes query development much easier by storing environment attributes directly in the Airflow connection.
You have full flexibility without touching your SQL scripts.
It makes separation of test and production environments and connections possible.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18664 | https://github.com/apache/airflow/pull/19084 | 6d110b565a505505351d1ff19592626fb24e4516 | 471e368eacbcae1eedf9b7e1cb4290c385396ea9 | "2021-10-01T09:21:06Z" | python | "2022-02-07T20:37:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 18,634 | ["docs/apache-airflow/upgrading-from-1-10/index.rst"] | Pendulum 1.x -> 2.x with Airflow 1.x -> 2.x Documentation Updates | ### Describe the issue with documentation
With the upgrade from Pendulum 1.x to 2.x that coincides with upgrading from Airflow 1.x to 2.x, there are some breaking changes that aren't mentioned here:
[Upgrading from 1.x to 2.x](https://airflow.apache.org/docs/apache-airflow/stable/upgrading-from-1-10/index.html)
The specific breaking change I experienced is that the .copy() method is completely removed in Pendulum 2.x. It turns out this isn't needed in my case, since each macro that provides a pendulum.Pendulum returns a brand new instance, but it still caught me by surprise.
I also noticed that the stable documentation (2.1.4 at time of writing) for macros still links to Pendulum 1.x:
[Macros reference](https://airflow.apache.org/docs/apache-airflow/stable/macros-ref.html)
Specifically the macros: execution_date, prev_execution_date, prev_execution_date_success, prev_start_date_success, next_execution_date
**EDIT**
Another breaking change: in 2.x, .format() uses the (formerly) alternative formatter by default and no longer accepts the `formatter` keyword, meaning
`execution_date.format('YYYY-MM-DD HH:mm:ss', formatter='alternative')`
now throws errors.
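For illustration, a small sketch of the old call next to a 2.x-compatible equivalent (the date is a placeholder):
```python
import pendulum

execution_date = pendulum.datetime(2021, 10, 1, 12, 30, 0)

# Pendulum 1.x (fails on 2.x because the `formatter` keyword no longer exists):
# execution_date.format('YYYY-MM-DD HH:mm:ss', formatter='alternative')

# Pendulum 2.x: the token-based formatter is now the default
print(execution_date.format('YYYY-MM-DD HH:mm:ss'))  # 2021-10-01 12:30:00
```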
### How to solve the problem
The macros reference links definitely need to be changed to Pendulum 2.x. Mentioning .copy() in the 1.x -> 2.x upgrade documentation would be a nice-to-have, but I was effectively using .copy() in a misguided way, so I wouldn't say it's strictly necessary.
**EDIT**
I think it's worth mentioning in the 1.x -> 2.x upgrade documentation that
`execution_date.format('YYYY-MM-DD HH:mm:ss', formatter='alternative')`
will now throw an error.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/18634 | https://github.com/apache/airflow/pull/18955 | f967ca91058b4296edb507c7826282050188b501 | 141d9f2d5d3e47fe7beebd6a56953df1f727746e | "2021-09-30T11:49:40Z" | python | "2021-10-14T17:56:28Z" |