status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | apache/airflow | https://github.com/apache/airflow | 27,512 | ["airflow/www/static/js/dag/Main.tsx", "airflow/www/static/js/dag/details/Dag.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Nav.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "airflow/www/static/js/dag/grid/index.tsx", "airflow/www/static/js/datasets/index.tsx", "airflow/www/static/js/utils/useOffsetHeight.tsx"] | Resizable grid view components | ### Description
~1. Ability to change the split ratio of the grid section and the task details section.~ - already done in #27273
![resizable_horizontal](https://user-images.githubusercontent.com/10968348/200072881-0f0cd1f0-0b91-46fa-8d6d-6c72d9ff6a97.jpg)
2. Ability for the log window to be resized.
![resizable_vertical](https://user-images.githubusercontent.com/10968348/200073200-412c9637-d537-4749-8ad0-0fe50a8df6a3.jpg)
3. Would love if the choices stuck between reloads as well.
### Use case/motivation
I love the new grid view and use it day to day to check logs quickly. It would be easier to do so without having to scroll within the text box if you could resize the grid view to accommodate a larger view of the logs.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27512 | https://github.com/apache/airflow/pull/27560 | 7ea8475128009b348a82d122747ca1df2823e006 | 65bfea2a20830baa10d2e1e8328c07a7a11bbb0c | "2022-11-04T21:09:12Z" | python | "2022-11-17T20:10:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,509 | ["airflow/models/dataset.py", "tests/models/test_taskinstance.py"] | Removing DAG dataset dependency when it is already ready results in SQLAlchemy cascading delete error | ### Apache Airflow version
2.4.2
### What happened
I have a DAG that is triggered by three datasets. When I remove one or more of these datasets, the web server fails to update the DAG, and `airflow dags reserialize` fails with an `AssertionError` within SQLAlchemy. Full stack trace below:
```
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
docker-airflow-scheduler-1 | return func(*args, session=session, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/dag_processing/processor.py", line 781, in process_file
docker-airflow-scheduler-1 | dagbag.sync_to_db(processor_subdir=self._dag_directory, session=session)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
docker-airflow-scheduler-1 | return func(*args, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 644, in sync_to_db
docker-airflow-scheduler-1 | for attempt in run_with_db_retries(logger=self.log):
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __iter__
docker-airflow-scheduler-1 | do = self.iter(retry_state=retry_state)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 349, in iter
docker-airflow-scheduler-1 | return fut.result()
docker-airflow-scheduler-1 | File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
docker-airflow-scheduler-1 | return self.__get_result()
docker-airflow-scheduler-1 | File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
docker-airflow-scheduler-1 | raise self._exception
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 658, in sync_to_db
docker-airflow-scheduler-1 | DAG.bulk_write_to_db(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
docker-airflow-scheduler-1 | return func(*args, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2781, in bulk_write_to_db
docker-airflow-scheduler-1 | session.flush()
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
docker-airflow-scheduler-1 | self._flush(objects)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
docker-airflow-scheduler-1 | transaction.rollback(_capture_exception=True)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
docker-airflow-scheduler-1 | compat.raise_(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
docker-airflow-scheduler-1 | raise exception
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
docker-airflow-scheduler-1 | flush_context.execute()
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
docker-airflow-scheduler-1 | rec.execute(self)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
docker-airflow-scheduler-1 | self.dependency_processor.process_deletes(uow, states)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
docker-airflow-scheduler-1 | self._synchronize(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
docker-airflow-scheduler-1 | sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
docker-airflow-scheduler-1 | raise AssertionError(
docker-airflow-scheduler-1 | AssertionError: Dependency rule tried to blank-out primary key column 'dataset_dag_run_queue.dataset_id' on instance '<DatasetDagRunQueue at 0xffff5d213d00>'
```
### What you think should happen instead
The DAG does not properly load in the UI, and no error is displayed. Instead, the old datasets that have been removed should be removed as dependencies and the DAG should be updated with the new dataset dependencies.
### How to reproduce
Initial DAG:
```python
def foo():
pass
@dag(
dag_id="test",
start_date=pendulum.datetime(2022, 1, 1),
catchup=False,
schedule=[
Dataset('test/1'),
Dataset('test/2'),
Dataset('test/3'),
]
)
def test_dag():
@task
def test_task():
foo()
test_task()
test_dag()
```
At least one of the datasets should be 'ready'. Now `dataset_dag_run_queue` will look something like below:
```
airflow=# SELECT * FROM dataset_dag_run_queue ;
dataset_id | target_dag_id | created_at
------------+-------------------------------------+-------------------------------
16 | test | 2022-11-02 19:47:53.938748+00
(1 row)
```
Then, update the DAG with new datasets:
```python
def foo():
pass
@dag(
dag_id="test",
start_date=pendulum.datetime(2022, 1, 1),
catchup=False,
schedule=[
Dataset('test/new/1'), # <--- updated
Dataset('test/new/2'),
Dataset('test/new/3'),
]
)
def test_dag():
@task
def test_task():
foo()
test_task()
test_dag()
```
Now you will observe the error in the web server logs or when running `airflow dags reserialize`.
I suspect this issue is related to handling of cascading deletes on the `dataset_id` foreign key for the run queue table. Dataset `id = 16` is one of the datasets that has been renamed.
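If the cascade handling is indeed the culprit, one possible direction (a hedged sketch only, not necessarily the fix that was merged) is to let the ORM delete the queued rows together with the dataset instead of trying to blank out their primary key:

```python
# Hypothetical SQLAlchemy sketch, not Airflow's actual models: a parent/child pair
# mirroring dataset -> dataset_dag_run_queue, where deleting the parent also
# deletes the queue rows rather than NULL-ing their primary-key column.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Dataset(Base):
    __tablename__ = "dataset"
    id = Column(Integer, primary_key=True)
    uri = Column(String, unique=True)
    # delete-orphan makes SQLAlchemy emit DELETEs for queue rows when the
    # dataset is removed, avoiding the "blank-out primary key" error above.
    queue_records = relationship(
        "DatasetDagRunQueue", cascade="all, delete-orphan", backref="dataset"
    )


class DatasetDagRunQueue(Base):
    __tablename__ = "dataset_dag_run_queue"
    dataset_id = Column(Integer, ForeignKey("dataset.id", ondelete="CASCADE"), primary_key=True)
    target_dag_id = Column(String, primary_key=True)
```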
### Operating System
docker image - apache/airflow:2.4.2-python3.9
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
```
### Deployment
Docker-Compose
### Deployment details
Running using docker-compose locally.
### Anything else
To trigger this problem the dataset to be removed must be in the "ready" state so that there is an entry in `dataset_dag_run_queue`.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27509 | https://github.com/apache/airflow/pull/27538 | 7297892558e94c8cc869b175e904ca96e0752afe | fc59b02cfac7fd691602edc92a7abac38ed51531 | "2022-11-04T16:21:02Z" | python | "2022-11-07T13:03:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,507 | ["airflow/providers/http/hooks/http.py"] | Making logging for HttpHook optional | ### Description
In tasks that perform multiple requests, the log file gets cluttered by the logging in `run` (line 129).
I propose that we add a kwarg `log_request` with default value `True` to control this behavior.
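A minimal sketch of what this could look like, assuming the flag is checked right before the existing log call in the hook's `run` method (the `log_request` name and the stand-in class below are my suggestion, not existing API):

```python
# Hypothetical sketch of the proposed log_request flag; MyHttpHook is a stand-in,
# not the real airflow.providers.http.hooks.http.HttpHook.
import logging

log = logging.getLogger(__name__)


class MyHttpHook:
    def __init__(self, method: str = "POST", log_request: bool = True):
        self.method = method
        self.log_request = log_request

    def run(self, endpoint: str, data=None):
        # Only emit the per-request log line when the caller asked for it.
        if self.log_request:
            log.info("Sending '%s' to url: %s", self.method, endpoint)
        # ... perform the actual HTTP request here ...
```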
### Use case/motivation
reduce unnecessary entries in log files
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27507 | https://github.com/apache/airflow/pull/28911 | 185faab2112c4d3f736f8d40350401d8c1cac35b | a9d5471c66c788d8469ca65556e5820f1e96afc1 | "2022-11-04T16:04:07Z" | python | "2023-01-13T21:09:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,483 | ["airflow/www/views.py"] | DAG loading very slow in Graph view when using Dynamic Tasks | ### Apache Airflow version
2.4.2
### What happened
The web UI is very slow when loading the Graph view on DAGs that have a large number of expansions in the mapped tasks.
The problem is very similar to the one described in #23786 (resolved), but for the Graph view instead of the grid view.
It takes around 2-3 minutes to load DAGs that have ~1k expansions, with the default Airflow settings the web server worker will timeout. One can configure [web_server_worker_timeout](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#web-server-worker-timeout) to increase the timeout wait time.
### What you think should happen instead
The Web UI takes a reasonable amount of time to load the Graph view after the dag run is finished.
### How to reproduce
Same way as in #23786: you can create a mapped task that spans a large number of expansions; then, when you run it, the Graph view will take a very long time to load and eventually time out.
You can use this code to generate multiple dags with `2^x` expansions. After running the DAGs you should notice how slow it is when attempting to open the Graph view of the DAGs with the largest number of expansions.
```python
from datetime import datetime
from airflow.models import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.python import PythonOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email_on_failure': False,
'email_on_retry': False,
}
initial_scale = 7
max_scale = 12
scaling_factor = 2
for scale in range(initial_scale, max_scale + 1):
dag_id = f"dynamic_task_mapping_{scale}"
with DAG(
dag_id=dag_id,
default_args=default_args,
catchup=False,
schedule_interval=None,
start_date=datetime(1970, 1, 1),
render_template_as_native_obj=True,
) as dag:
start = EmptyOperator(task_id="start")
mapped = PythonOperator.partial(
task_id="mapped",
python_callable=lambda m: print(m),
).expand(
op_args=[[x] for x in list(range(2**scale))]
)
end = EmptyOperator(task_id="end")
start >> mapped >> end
globals()[dag_id] = dag
```
### Operating System
MacOS Version 12.6 (Apple M1)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==4.0.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-sqlite==3.2.1
```
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27483 | https://github.com/apache/airflow/pull/29791 | 0db38ad1a2cf403eb546f027f2e5673610626f47 | 60d98a1bc2d54787fcaad5edac36ecfa484fb42b | "2022-11-03T08:46:08Z" | python | "2023-02-28T05:15:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,479 | ["airflow/www/fab_security/views.py"] | webserver add role to an existing user -> KeyError: 'userinfoedit' | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
airflow 2.4.1
an existing user only has the role Viewer
I add the role Admin via the UI
click on the save button
then error ->
```log
[03/Nov/2022:01:28:08 +0000] "POST /XXXXXXXX/users/edit/2 HTTP/1.1" 302 307 "https://XXXXXXXXXXXXX.net/XXXXXXXX/users/edit/2" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0"
[2022-11-03T01:28:09.014+0000] {app.py:1742} ERROR - Exception on /users/show/1 [GET]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/security/decorators.py", line 133, in wraps
return f(self, *args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/fab_security/views.py", line 222, in show
widgets['show'].template_args['actions'].pop('userinfoedit')
KeyError: 'userinfoedit'
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
1.7.0
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27479 | https://github.com/apache/airflow/pull/27537 | 9409293514cef574179a5320ed3ed50881064423 | 6434b5770877d75fba3c0c49fd808d6413367ab4 | "2022-11-03T01:33:41Z" | python | "2022-11-08T13:45:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,478 | ["airflow/models/dagrun.py", "airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Scheduler crash when clear a previous run of a normal task that is now a mapped task | ### Apache Airflow version
2.4.2
### What happened
I have cleared a task A that was a normal task but that is now a mapped task
```log
[2022-11-02 23:33:20 +0000] [17] [INFO] Worker exiting (pid: 17)
2022-11-02T23:33:20.390911528Z Traceback (most recent call last):
2022-11-02T23:33:20.390935788Z File "/usr/local/bin/airflow", line 8, in <module>
2022-11-02T23:33:20.390939798Z sys.exit(main())
2022-11-02T23:33:20.390942302Z File "/usr/local/lib/python3.10/site-packages/airflow/__main__.py", line 39, in main
2022-11-02T23:33:20.390944924Z args.func(args)
2022-11-02T23:33:20.390947345Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 52, in command
2022-11-02T23:33:20.390949893Z return func(*args, **kwargs)
2022-11-02T23:33:20.390952237Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/cli.py", line 103, in wrapper
2022-11-02T23:33:20.390954862Z return f(*args, **kwargs)
2022-11-02T23:33:20.390957163Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 85, in scheduler
2022-11-02T23:33:20.390959672Z _run_scheduler_job(args=args)
2022-11-02T23:33:20.390961979Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 50, in _run_scheduler_job
2022-11-02T23:33:20.390964496Z job.run()
2022-11-02T23:33:20.390966931Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/base_job.py", line 247, in run
2022-11-02T23:33:20.390969441Z self._execute()
2022-11-02T23:33:20.390971778Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 746, in _execute
2022-11-02T23:33:20.390974368Z self._run_scheduler_loop()
2022-11-02T23:33:20.390976612Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 866, in _run_scheduler_loop
2022-11-02T23:33:20.390979125Z num_queued_tis = self._do_scheduling(session)
2022-11-02T23:33:20.390981458Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 946, in _do_scheduling
2022-11-02T23:33:20.390984819Z callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
2022-11-02T23:33:20.390988440Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/retries.py", line 78, in wrapped_function
2022-11-02T23:33:20.390991893Z for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
2022-11-02T23:33:20.391008515Z File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 384, in __iter__
2022-11-02T23:33:20.391012668Z do = self.iter(retry_state=retry_state)
2022-11-02T23:33:20.391016220Z File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 351, in iter
2022-11-02T23:33:20.391019633Z return fut.result()
2022-11-02T23:33:20.391022534Z File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
2022-11-02T23:33:20.391025820Z return self.__get_result()
2022-11-02T23:33:20.391029555Z File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
2022-11-02T23:33:20.391033787Z raise self._exception
2022-11-02T23:33:20.391037611Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/retries.py", line 87, in wrapped_function
2022-11-02T23:33:20.391040339Z return func(*args, **kwargs)
2022-11-02T23:33:20.391042660Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 1234, in _schedule_all_dag_runs
2022-11-02T23:33:20.391045166Z for dag_run in dag_runs:
2022-11-02T23:33:20.391047413Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2887, in __iter__
2022-11-02T23:33:20.391049815Z return self._iter().__iter__()
2022-11-02T23:33:20.391052252Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2894, in _iter
2022-11-02T23:33:20.391054786Z result = self.session.execute(
2022-11-02T23:33:20.391057119Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1688, in execute
2022-11-02T23:33:20.391059741Z conn = self._connection_for_bind(bind)
2022-11-02T23:33:20.391062247Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1529, in _connection_for_bind
2022-11-02T23:33:20.391065901Z return self._transaction._connection_for_bind(
2022-11-02T23:33:20.391069140Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
2022-11-02T23:33:20.391078064Z self._assert_active()
2022-11-02T23:33:20.391081939Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
2022-11-02T23:33:20.391085250Z raise sa_exc.PendingRollbackError(
2022-11-02T23:33:20.391087747Z sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_fail_ti_fkey" on table "task_fail"
2022-11-02T23:33:20.391091226Z DETAIL: Key (dag_id, task_id, run_id, map_index)=(kubernetes_dag, task-one, scheduled__2022-11-01T00:00:00+00:00, -1) is still referenced from table "task_fail".
2022-11-02T23:33:20.391093987Z
2022-11-02T23:33:20.391102116Z [SQL: UPDATE task_instance SET map_index=%(map_index)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s]
2022-11-02T23:33:20.391105554Z [parameters: {'map_index': 0, 'task_instance_dag_id': 'kubernetes_dag', 'task_instance_task_id': 'task-one', 'task_instance_run_id': 'scheduled__2022-11-01T00:00:00+00:00', 'task_instance_map_index': -1}]
2022-11-02T23:33:20.391108241Z (Background on this error at: https://sqlalche.me/e/14/gkpj) (Background on this error at: https://sqlalche.me/e/14/7s2a)
2022-11-02T23:33:20.489698500Z [2022-11-02 23:33:20 +0000] [7] [INFO] Shutting down: Master
```
### What you think should happen instead
Airflow should treat the existing and previous runs as runs of the same (now mapped) task,
because currently I can't see the logs anymore of a task that is now a mapped task
### How to reproduce
dag with a normal task A
run dag
task A success
edit the dag to make task A a mapped task (without changing the name of the task; see the sketch after these steps)
clear task
scheduler crash
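For illustration, a minimal before/after of that edit (hypothetical operator and commands, not taken from the reporter's DAG):

```python
# Hypothetical illustration of step 4: the same task_id ("task-one") changes
# from a normal task into a mapped task. Both versions are assumed to live
# inside a DAG context.
from airflow.operators.bash import BashOperator

# Before: a single, non-mapped task instance (map_index = -1 in the metadata DB).
task_one = BashOperator(task_id="task-one", bash_command="echo 1")

# After: the same task_id, now expanded into several mapped task instances
# (map_index = 0, 1, 2).
task_one = BashOperator.partial(task_id="task-one").expand(
    bash_command=["echo 1", "echo 2", "echo 3"]
)
```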
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27478 | https://github.com/apache/airflow/pull/29645 | e02bfc870396387ef2052ab375cdd2a54e704ae2 | a770edfac493f3972c10a43e45bcd0e7cfaea65f | "2022-11-02T23:43:43Z" | python | "2023-02-20T19:45:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,462 | ["airflow/models/dag.py", "tests/sensors/test_external_task_sensor.py"] | Clearing the parent dag will not clear child dag's mapped tasks | ### Apache Airflow version
2.4.2
### What happened
In the scenario where we have 2 dags, 1 dag dependent on the other by having an ExternalTaskMarker on the parent dag pointing to the child dag and we have some number of mapped tasks in the child dag that have been expanded (map_index is not -1).
If we were to clear the parent dag, the child dag's mapped tasks will NOT be cleared. It will not appear in the "Task instances to be cleared" list
### What you think should happen instead
I believe the behaviour should be having the child dag's mapped tasks cleared when the parent dag is cleared.
### How to reproduce
1. Create a parent dag with an ExternalTaskMarker
2. Create a child dag which has some ExternalTaskSensor that the ExternalTaskMarker is pointing to
3. Add any number of mapped tasks downstream of that ExternalTaskSensor
4. Clear the parent dag's ExternalTaskMarker (or any task upstream of it)
### Operating System
Mac OS Monterey 12.6
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27462 | https://github.com/apache/airflow/pull/27501 | bc0063af99629e6b3eb5c76c88ac5bfaf92afaaf | 5ce9c827f7bcdef9c526fd4416533fc481de4675 | "2022-11-02T05:55:29Z" | python | "2022-11-17T01:54:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,449 | ["airflow/jobs/local_task_job.py", "airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "tests/jobs/test_local_task_job.py", "tests/models/test_taskinstance.py"] | Dynamic tasks marked as `upstream_failed` when none of their upstream tasks are `failed` or `upstream_failed` | ### Apache Airflow version
2.4.2
### What happened
A mapped task is getting marked as `upstream_failed` when none of its upstream tasks are `failed` or `upstream_failed`.
![image](https://user-images.githubusercontent.com/10802053/199615824-c44e7e2f-ed05-49c6-a225-e054836466e1.png)
In the above graph view, if `first_task` finishes before `second_task`, `first_task` immediately tries to expand `middle_task`. **Note: this is an important step to reproduce; the order in which the tasks finish matters.**
Note that the value of the Airflow configuration variable [`schedule_after_task_execution`](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#schedule-after-task-execution) must be `True` (the default) for this to occur.
The expansion occurs when the Task supervisor performs the "mini scheduler", in [this line in `dagrun.py`](https://github.com/apache/airflow/blob/63638cd2162219bd0a67caface4b1f1f8b88cc10/airflow/models/dagrun.py#L749).
Which then marks `middle_task` as `upstream_failed` in [this line in `mappedoperator.py`](https://github.com/apache/airflow/blob/63638cd2162219bd0a67caface4b1f1f8b88cc10/airflow/models/mappedoperator.py#L652):
```
# If the map length cannot be calculated (due to unavailable
# upstream sources), fail the unmapped task.
```
I believe this was introduced by the PR [Fail task if mapping upstream fails](https://github.com/apache/airflow/pull/25757).
### What you think should happen instead
The dynamic tasks should successfully execute. I don't think the mapped task should expand because its upstream task hasn't completed at the time it's expanded. If the upstream task were to complete earlier, it would expand successfully.
### How to reproduce
Execute this DAG, making sure Airflow configuration `schedule_after_task_execution` is set to default value `True`.
```
from datetime import datetime, timedelta
import time
from airflow import DAG, XComArg
from airflow.operators.python import PythonOperator
class PrintIdOperator(PythonOperator):
def __init__(self, id, **kwargs) -> None:
super().__init__(**kwargs)
self.op_kwargs["id"] = id
DAG_ID = "test_upstream_failed_on_mapped_operator_expansion"
default_args = {
"owner": "airflow",
"depends_on_past": False,
"retry_delay": timedelta(minutes=1),
"retries": 0,
}
def nop(id):
print(f"{id=}")
def get_ids(delay: int = 0):
print(f"Delaying {delay} seconds...")
time.sleep(delay)
print("Done!")
return [0, 1, 2]
with DAG(
dag_id=DAG_ID,
default_args=default_args,
start_date=datetime(2022, 8, 3),
catchup=False,
schedule=None,
max_active_runs=1,
) as dag:
second_task = PythonOperator(
task_id="second_task",
python_callable=get_ids,
op_kwargs={"delay": 10}
)
first_task = PythonOperator(
task_id="first_task",
python_callable=get_ids,
)
middle_task = PrintIdOperator.partial(
task_id="middle_task",
python_callable=nop,
).expand(id=XComArg(second_task))
last_task = PythonOperator(
task_id="last_task",
python_callable=nop,
op_kwargs={"id": 1},
)
[first_task, middle_task] >> last_task
```
### Operating System
debian buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27449 | https://github.com/apache/airflow/pull/27506 | 47a2b9ee7f1ff2cc1cc1aa1c3d1b523c88ba29fb | ed92e5d521f958642615b038ec13068b527db1c4 | "2022-11-01T18:00:20Z" | python | "2022-11-09T14:05:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,429 | ["BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/setup_commands_config.py", "dev/breeze/src/airflow_breeze/utils/reinstall.py"] | Incorrect command displayed in warning when breeze dependencies are changed | ### Apache Airflow version
main (development)
### What happened
While switching between some old branches I noticed the warning below. The `--force` option seems to have been removed in 3dfa44566c948cb2db016e89f84d6fe37bd6d824 and is now the default. This message could be updated in the places below:
https://github.com/apache/airflow/blob/b29ca4e77d4d80fb1f4d6d4b497a3a14979dd244/dev/breeze/src/airflow_breeze/utils/reinstall.py#L50
https://github.com/apache/airflow/blob/b29ca4e77d4d80fb1f4d6d4b497a3a14979dd244/dev/breeze/src/airflow_breeze/utils/reinstall.py#L59
Also here I guess
https://github.com/apache/airflow/blob/b29ca4e77d4d80fb1f4d6d4b497a3a14979dd244/dev/breeze/src/airflow_breeze/utils/path_utils.py#L232
```
$ breeze shell
Breeze dependencies changed since the installation!
This might cause various problems!!
If you experience problems - reinstall Breeze with:
breeze setup self-upgrade --force
This should usually take couple of seconds.
```
```
breeze setup self-upgrade --force
Usage: breeze setup self-upgrade [OPTIONS]
Try running the '--help' flag for more information.
╭─ Error ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ No such option: --force │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
To find out more, visit https://github.com/apache/airflow/blob/main/BREEZE.rst
```
### What you think should happen instead
The warning message could be updated with the correct option.
### How to reproduce
_No response_
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27429 | https://github.com/apache/airflow/pull/27438 | 98bd9b3d6b58bac3d019d3c7f8c6983a9dee463e | 10d2a71073a23b0b8c9fae0de5a79fb4f3ac1935 | "2022-11-01T04:56:29Z" | python | "2022-11-01T11:56:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,409 | ["airflow/models/skipmixin.py", "airflow/operators/python.py", "tests/models/test_skipmixin.py", "tests/operators/test_python.py"] | Improve error message when branched task_id does not exist | ### Body
This issue is to handle the TODO left in the code:
https://github.com/apache/airflow/blob/64174ce25ae800a38e712aa0bd62a5893ea2ff99/airflow/operators/python.py#L211
Related: https://github.com/apache/airflow/pull/18471/files#r716030104
Currently only BranchPythonOperator shows an informative error message when the requested branch task_id does not exist; other branch operators will show:
```
File "/usr/local/lib/python3.9/site-packages/airflow/models/skipmixin.py", line 147, in skip_all_except
branch_task_ids = set(branch_task_ids)
TypeError: 'NoneType' object is not iterable
```
This task is to generalize the solution of https://github.com/apache/airflow/pull/18471/ to all branch operators.
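A rough sketch of the kind of validation this could add before `skip_all_except` is called, so the error names the offending task_id instead of failing with a `TypeError` on `None` (names below are illustrative, not the final implementation):

```python
# Illustrative helper: validate the result of a branch callable against the DAG.
from airflow.exceptions import AirflowException


def validate_branch_task_ids(dag, branch_task_ids):
    """Return the branch task_ids as a list, or raise an informative error."""
    if branch_task_ids is None:
        raise AirflowException(
            "Branch callable must return a task_id or a list of task_ids, got None"
        )
    if isinstance(branch_task_ids, str):
        branch_task_ids = [branch_task_ids]
    invalid = set(branch_task_ids) - set(dag.task_ids)
    if invalid:
        raise AirflowException(f"Branch callable returned invalid task_ids: {sorted(invalid)}")
    return branch_task_ids
```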
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/27409 | https://github.com/apache/airflow/pull/27434 | fc59b02cfac7fd691602edc92a7abac38ed51531 | baf2f3fc329d5b4029d9e17defb84cefcd9c490a | "2022-10-31T14:01:34Z" | python | "2022-11-07T13:38:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,402 | ["chart/values.yaml", "helm_tests/airflow_aux/test_configmap.py"] | #26415 Broke flower dashboard | ### Discussed in https://github.com/apache/airflow/discussions/27401
<div type='discussions-op-text'>
<sup>Originally posted by **Flogue** October 25, 2022</sup>
### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.4.1
### Kubernetes Version
1.24.6
### Helm Chart configuration
```
flower:
enabled: true
```
### Docker Image customisations
None
### What happened
Flower dashboard is unreachable.
"Failed to load resource: net::ERR_CONNECTION_RESET" in browser console
### What you think should happen instead
Dashboard should load.
### How to reproduce
Just enable flower:
```
helm install airflow-rl apache-airflow/airflow --namespace airflow-np --set flower.enabled=true
kubectl port-forward svc/airflow-rl-flower 5555:5555 --namespace airflow-np
```
### Anything else
A quick fix for this is:
```
config:
celery:
flower_url_prefix: ''
```
Basically, the new default value '/' makes it so the scripts and links read:
`<script src="//static/js/jquery....`
where it should be:
`<script src="/static/js/jquery....`
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/27402 | https://github.com/apache/airflow/pull/33134 | ca5acda1617a5cdb1d04f125568ffbd264209ec7 | 6e4623ab531a1b6755f6847d2587d014a387560d | "2022-10-31T03:49:04Z" | python | "2023-08-07T20:04:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,396 | ["airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py"] | CloudWatch task handler doesn't fall back to local logs when Amazon CloudWatch logs aren't found | This is really a CloudWatch handler issue - not "airflow" core.
### Discussed in https://github.com/apache/airflow/discussions/27395
<div type='discussions-op-text'>
<sup>Originally posted by **matthewblock** October 24, 2022</sup>
### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We recently activated AWS Cloudwatch logs. We were hoping the logs server would gracefully handle task logs that previously existed but were not written to Cloudwatch, but when fetching the remote logs failed (expected), the logs server didn't fall back to local logs.
```
*** Reading remote log from Cloudwatch log_group: <our log group> log_stream: <our log stream>
```
### What you think should happen instead
According to documentation [Logging for Tasks](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/logging-tasks.html#writing-logs-locally), when fetching remote logs fails, the logs server should fall back to looking for local logs:
> In the Airflow UI, remote logs take precedence over local logs when remote logging is enabled. If remote logs can not be found or accessed, local logs will be displayed.
This should be indicated by the message `*** Falling back to local log`.
If this is not the intended behavior, the documentation should be modified to reflect the intended behavior.
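For illustration, the documented behaviour corresponds to a pattern like the one below in the remote handler's read path (a simplified, generic sketch, not the actual `CloudwatchTaskHandler` code):

```python
# Simplified sketch of the "fall back to local log" behaviour the docs describe.
def read_task_log(remote_read, local_read, stream_name: str) -> str:
    try:
        remote_log = remote_read(stream_name)
        return f"*** Reading remote log from Cloudwatch log_stream: {stream_name}\n{remote_log}"
    except Exception as err:  # remote logs missing or inaccessible
        local_log = local_read(stream_name)
        return (
            f"*** Unable to read remote logs from Cloudwatch (error: {err})\n"
            f"*** Falling back to local log\n{local_log}"
        )
```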
### How to reproduce
1. Run a test DAG without [AWS CloudWatch logging configured](https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/logging/cloud-watch-task-handlers.html)
2. Configure AWS CloudWatch remote logging and re-run a test DAG
### Operating System
Debian buster-slim
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/27396 | https://github.com/apache/airflow/pull/27564 | 3aed495f50e8bc0e22ff90efee7671a73168b19e | c490a328f4d0073052d8b5205c7c4cab96c3d559 | "2022-10-31T02:25:54Z" | python | "2022-11-11T00:40:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,358 | ["docs/apache-airflow/executor/kubernetes.rst"] | Airflow 2.2.2 pod_override does not override `args` of V1Container | ### Apache Airflow version
2.2.2
### What happened
I have a bash sensor defined as follows:
```python
foo_sensor_task = BashSensor(
task_id="foo_task",
poke_interval=3600,
bash_command="python -m foo.run",
retries=0,
executor_config={
"pod_template_file: "path-to-file-yaml",
"pod_override": k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(name="base, image="foo-image", args=["abc"])
]
)
)
}
)
```
Entrypoint command in the `foo-image` is `python -m foo.run`. However, when I deploy the image onto Openshift (Kubernetes), the command somehow turns out to be the following:
```bash
python -m foo.run airflow tasks run foo_dag foo_sensor_task manual__2022-10-28T21:08:39+00:00 ...
```
which is wrong.
### What you think should happen instead
I assume the expected command should override `args` (see V1Container `args` value above) and therefore should be:
```bash
python -m foo.run abc
```
and **not**:
```bash
python -m foo.run airflow tasks run foo_dag foo_sensor_task manual__2022-10-28T21:08:39+00:00 ...
```
### How to reproduce
To reproduce the above issue, create a simple DAG and a sensor as defined above. Use a sample image and try to override the args. I cannot provide the same code due to NDA.
### Operating System
RHLS 7.9
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
apache-airflow-providers-cncf-kubernetes==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-sqlite==2.0.1
### Deployment
Other
### Deployment details
N/A
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27358 | https://github.com/apache/airflow/pull/27450 | aa36f754e2307ccd8a03987b81ea1e1a04b03c14 | 8f5e100f30764e7b1818a336feaa8bb390cbb327 | "2022-10-29T01:08:10Z" | python | "2022-11-02T06:08:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,345 | ["airflow/utils/log/file_task_handler.py", "airflow/utils/log/logging_mixin.py", "tests/utils/test_logging_mixin.py"] | Duplicate log lines in CloudWatch after upgrade to 2.4.2 | ### Apache Airflow version
2.4.2
### What happened
We upgraded airflow from 2.4.1 to 2.4.2 and immediately notice that every task log line is duplicated _into_ CloudWatch. Comparing logs from tasks run before upgrade and after upgrade indicates that the issue is not in how the logs are displayed in Airflow, but rather that it now produces two log lines instead of one.
When observing both the CloudWatch log streams and the Airflow UI, we can see duplicate log lines for ~_all_~ most log entries post upgrade, whilst seeing single log lines in tasks before upgrade.
This happens _both_ for tasks ran in a remote `EcsRunTaskOperator`'s as well as in regular `PythonOperator`'s.
### What you think should happen instead
A single non-duplicate log line should be produced into CloudWatch.
### How to reproduce
From my understanding now, any setup on 2.4.2 that uses CloudWatch remote logging will produce duplicate log lines. (But I have not been able to confirm other setups)
### Operating System
Docker: `apache/airflow:2.4.2-python3.9` - Running on AWS ECS Fargate
### Versions of Apache Airflow Providers
```
apache-airflow[celery,postgres,apache.hive,jdbc,mysql,ssh,amazon,google,google_auth]==2.4.2
apache-airflow-providers-amazon==6.0.0
```
### Deployment
Other Docker-based deployment
### Deployment details
We are running a docker inside Fargate ECS on AWS.
The following environment variables + config in CloudFormation control remote logging:
```
- Name: AIRFLOW__LOGGING__REMOTE_LOGGING
Value: True
- Name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
Value: !Sub "cloudwatch://${TasksLogGroup.Arn}"
```
### Anything else
We did not change any other configuration during the upgrade, simply bumped the requirements for provider list + docker image from 2.4.1 to 2.4.2.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27345 | https://github.com/apache/airflow/pull/27591 | 85ec17fbe1c07b705273a43dae8fbdece1938e65 | 933fefca27a5cd514c9083040344a866c7f517db | "2022-10-28T10:32:13Z" | python | "2022-11-10T17:58:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,328 | ["airflow/providers/sftp/operators/sftp.py", "tests/providers/sftp/operators/test_sftp.py"] | SFTPOperator throws object of type 'PlainXComArg' has no len() when using with Taskflow API | ### Apache Airflow Provider(s)
sftp
### Versions of Apache Airflow Providers
apache-airflow-providers-sftp==4.1.0
### Apache Airflow version
2.4.2 Python 3.10
### Operating System
Debian 11 (Official docker image)
### Deployment
Docker-Compose
### Deployment details
Base image is apache/airflow:2.4.2-python3.10
### What happened
When combining Taskflow API and SFTPOperator, it throws an exception that didn't happen with apache-airflow-providers-sftp 4.0.0
### What you think should happen instead
The DAG should work as expected
### How to reproduce
```python
import pendulum
from airflow import DAG
from airflow.decorators import task
from airflow.providers.sftp.operators.sftp import SFTPOperator
with DAG(
"example_sftp",
schedule="@once",
start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
catchup=False,
tags=["example"],
) as dag:
@task
def get_file_path():
return "test.csv"
local_filepath = get_file_path()
upload = SFTPOperator(
task_id=f"upload_file_to_sftp",
ssh_conn_id="sftp_connection",
local_filepath=local_filepath,
remote_filepath="test.csv",
)
```
### Anything else
```logs
[2022-10-27T15:21:38.106+0000] {logging_mixin.py:120} INFO - [2022-10-27T15:21:38.102+0000] {dagbag.py:342} ERROR - Failed to import: /opt/airflow/dags/test.py
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/dagbag.py", line 338, in parse
loader.exec_module(new_module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/opt/airflow/dags/test.py", line 21, in <module>
upload = SFTPOperator(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 408, in apply_defaults
result = func(self, **kwargs, default_args=default_args)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/sftp/operators/sftp.py", line 116, in __init__
if len(self.local_filepath) != len(self.remote_filepath):
TypeError: object of type 'PlainXComArg' has no len()
```
It looks like the offending code was introduced in commit 5f073e38dd46217b64dbc16d7b1055d89e8c3459
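One way the operator could stay compatible with XComArg inputs (a sketch under the assumption that the check is only needed at runtime) is to postpone the length validation from `__init__` to `execute`, once the values have been resolved to concrete lists:

```python
# Illustrative sketch: validate path counts only after templated/XCom values resolve.
def validate_filepaths(local_filepath, remote_filepath):
    local = [local_filepath] if isinstance(local_filepath, str) else list(local_filepath)
    remote = [remote_filepath] if isinstance(remote_filepath, str) else list(remote_filepath)
    if len(local) != len(remote):
        raise ValueError(
            f"Got {len(local)} local and {len(remote)} remote filepaths; counts must match"
        )
    return local, remote
```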
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27328 | https://github.com/apache/airflow/pull/29068 | 10f0f8bc4be521fd8c6cdd057cc02b6ea2c2d5c1 | bac7b3027d57d2a31acb9a2d078c6af4dc777162 | "2022-10-27T15:45:48Z" | python | "2023-01-20T19:32:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,290 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Publish a container's port(s) to the host with DockerOperator | ### Description
[`create_container` method](https://github.com/docker/docker-py/blob/bc0a5fbacd7617fd338d121adca61600fc70d221/docker/api/container.py#L370) has a `ports` param to open inside the container, and the `host_config` to [declare port bindings](https://github.com/docker/docker-py/blob/bc0a5fbacd7617fd338d121adca61600fc70d221/docker/api/container.py#L542).
We can learn from [Expose port using DockerOperator](https://stackoverflow.com/questions/65157416/expose-port-using-dockeroperator) for this feature on DockerOperator. I have already tested it and it works, and I have also created a custom docker decorator based on this DockerOperator extension.
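For reference, these are the docker-py calls involved, shown as a standalone sketch rather than the DockerOperator change itself (image name and socket URL are placeholders):

```python
# Hedged sketch at the docker-py level: open port 8080 in the container and
# bind it to port 8080 on the host when creating the container.
from docker import APIClient

client = APIClient(base_url="unix://var/run/docker.sock")
host_config = client.create_host_config(port_bindings={8080: 8080})
container = client.create_container(
    image="my-image:latest",   # placeholder image
    ports=[8080],              # port opened inside the container
    host_config=host_config,   # binding of the container port to the host
)
client.start(container["Id"])
```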
### Use case/motivation
I would like to publish the container's port(s) created with DockerOperator to the host. These changes should also be applied to the Docker decorator.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27290 | https://github.com/apache/airflow/pull/30730 | cb1ecb0647d459999041ee6018f8f282fc25b09b | d8c0e3009a649ce057595539b96a566b7faa5584 | "2022-10-26T07:56:51Z" | python | "2023-05-17T09:03:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,282 | ["airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | KubernetesPodOperator: Option to show logs from all containers in a pod | ### Description
Currently, KubernetesPodOperator fetches logs using
```
self.pod_manager.fetch_container_logs(
pod=self.pod,
container_name=self.BASE_CONTAINER_NAME,
follow=True,
)
```
and so only shows logs from the main container in a pod. It would be very useful/helpful to have the possibility to fetch logs for all the containers in a pod.
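A rough sketch of the requested behaviour, assuming access to the pod spec and the existing `fetch_container_logs` helper (illustrative only):

```python
# Illustrative sketch: fetch logs for every container in the pod, not just "base".
def fetch_all_container_logs(pod_manager, pod, follow=True):
    for container in pod.spec.containers:
        pod_manager.fetch_container_logs(
            pod=pod,
            container_name=container.name,
            follow=follow,
        )
```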
### Use case/motivation
Making the cause of failed KubernetesPodOperator tasks a lot more visible.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27282 | https://github.com/apache/airflow/pull/31663 | e7587b3369af30848c3cf1c7eff9e801b1440793 | 9a0f41ba53185031bc2aa56ead2928ae4b20de99 | "2022-10-25T23:29:19Z" | python | "2023-07-06T09:49:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,237 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"] | BigQueryCheckOperator fail in deferrable mode even if col val 0 | ### Apache Airflow version
main (development)
### What happened
The BigQuery hook `get_records` always returns a list of strings irrespective of the BigQuery table column type. So even if my table column has value 0 the task succeeds, since `bool("0")` returns `True`.
https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/hooks/bigquery.py#L3062
### What you think should happen instead
The BigQuery hook `get_records` should return values with the correct column types.
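For illustration, the kind of type-aware conversion that would make the check behave as expected (a generic sketch over stringified values and their BigQuery field types, not the hook's actual code):

```python
# Hedged sketch: cast stringified BigQuery values back to their column types,
# so a numeric 0 stays falsy instead of becoming the truthy string "0".
def cast_row(values, field_types):
    casters = {
        "INTEGER": int, "INT64": int,
        "FLOAT": float, "FLOAT64": float,
        "BOOLEAN": lambda v: v.lower() == "true", "BOOL": lambda v: v.lower() == "true",
    }
    return [
        casters.get(t, str)(v) if v is not None else None
        for v, t in zip(values, field_types)
    ]


print(cast_row(["0"], ["INTEGER"]))  # [0]; bool(0) is False, so the check would fail as expected
```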
### How to reproduce
create an Airflow google cloud connection `google_cloud_default` and try the below DAG.
Make sure to update the DATASET and Table name.
Your table's first row should contain a 0 value; in this case the expected behaviour is that the DAG should fail, but it passes.
```
from datetime import datetime, timedelta
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator
default_args = {
"execution_timeout": timedelta(minutes=30),
}
with DAG(
dag_id="bq_check_op",
start_date=datetime(2022, 8, 22),
schedule_interval=None,
catchup=False,
default_args=default_args,
tags=["example", "async", "bigquery"],
) as dag:
check_count = BigQueryCheckOperator(
task_id="check_count",
sql=f"SELECT * FROM DATASET.TABLE",
use_legacy_sql=False,
deferrable=True
)
```
### Operating System
Mac
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27237 | https://github.com/apache/airflow/pull/27236 | 95e5675714f12c177e30d83a14d28222b06d217b | 1447158e690f3d63981b3d8ec065665ec91ca54e | "2022-10-24T20:10:28Z" | python | "2022-10-31T04:21:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,232 | ["airflow/operators/python.py"] | ExternalPythonOperator: AttributeError: 'python_path' is configured as a template field but ExternalPythonOperator does not have this attribute. | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using the ExternalPythonOperator directly in v2.4.2 as opposed to via the @task.external decorator described in https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html#externalpythonoperator causes the following error:
```
AttributeError: 'python_path' is configured as a template field but ExternalPythonOperator does not have this attribute.
```
This seems to be due to https://github.com/apache/airflow/blob/main/airflow/operators/python.py#L624 having 'python_path' as an additional template field, instead of 'python', which is the correct additional keyword argument for the operator
### What you think should happen instead
We should change https://github.com/apache/airflow/blob/main/airflow/operators/python.py#L624 to
read:
```
template_fields: Sequence[str] = tuple({'python'} | set(PythonOperator.template_fields))
```
instead of
```
template_fields: Sequence[str] = tuple({'python_path'} | set(PythonOperator.template_fields))
```
This has been verified by adding:
```
ExternalPythonOperator.template_fields = tuple({'python'} | set(PythonOperator.template_fields))
```
in the sample DAG code below, which causes the DAG to run successfully
### How to reproduce
```
import airflow
from airflow.models import DAG
from airflow.operators.python import ExternalPythonOperator
args = dict(
start_date=airflow.utils.dates.days_ago(3),
email=["x@y.com"],
email_on_failure=False,
email_on_retry=False,
retries=0
)
dag = DAG(
dag_id='test_dag',
default_args=args,
schedule_interval='0 20 * * *',
catchup=False,
)
def print_kwargs(*args, **kwargs):
print('args', args)
print('kwargs', kwargs)
with dag:
def print_hello():
print('hello')
# Due to a typo in the airflow library :(
# ExternalPythonOperator.template_fields = tuple({'python'} | set(PythonOperator.template_fields))
t1 = ExternalPythonOperator(
task_id='test_task',
python='/opt/airflow/miniconda/envs/nexus/bin/python',
python_callable=print_kwargs
)
```
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27232 | https://github.com/apache/airflow/pull/27256 | 544c93f0a4d2673c8de64d97a7a8128387899474 | 27a92fecc9be30c9b1268beb60db44d2c7b3628f | "2022-10-24T16:18:43Z" | python | "2022-10-31T04:34:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,228 | ["airflow/serialization/serialized_objects.py", "tests/www/views/test_views_trigger_dag.py"] | Nested Parameters Break for DAG Run Configurations | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow Version Used: 2.3.3
This bug report is being created out of the following discussion - https://github.com/apache/airflow/discussions/25064
With the following DAG definition (with nested params):
```
DAG(
dag_id="some_id",
start_date=datetime(2021, 1, 1),
catchup=False,
doc_md=__doc__,
schedule_interval=None,
params={
"accounts": Param(
[{'name': 'account_name_1', 'country': 'usa'}],
schema = {
"type": "array",
"minItems": 1,
"items": {
"type": "object",
"default": {"name": "account_name_1", "country": "usa"},
"properties": {
"name": {"type": "string"},
"country": {"type": "string"},
},
"required": ["name", "country"]
},
}
),
}
)
```
**Note:** It does not matter whether `Param` and JSON Schema are used or not; you can also try putting in a simple nested object.
Then the UI displays the following:
```
{
"accounts": null
}
```
### What you think should happen instead
Following is what the UI should display instead:
```
{
"accounts": [
{
"name": "account_name_1",
"country": "usa"
}
]
}
```
### How to reproduce
_No response_
### Operating System
Debian Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
Although I am personally using Composer, it is most likely related to Airflow only given there are more non-Composer folks facing this (from the discussion's original author and the Slack community).
### Anything else
I have put some more explanation and a quick way to reproduce this [as a comment in the discussion](https://github.com/apache/airflow/discussions/25064#discussioncomment-3907974) linked.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27228 | https://github.com/apache/airflow/pull/27482 | 2d2f0daad66416d565e874e35b6a487a21e5f7b1 | 9409293514cef574179a5320ed3ed50881064423 | "2022-10-24T09:58:34Z" | python | "2022-11-08T13:43:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,225 | ["airflow/www/templates/analytics/google_analytics.html"] | Tracking User Activity Issue: Google Analytics tag version is not up-to-date | ### Apache Airflow version
2.4.1
### What happened
Airflow uses the previous Google Analytics tag version, so Google Analytics does not collect user activity metrics from Airflow.
### What you think should happen instead
The Tracking User Activity feature should work properly with Google Analytics
### How to reproduce
- Configure to use Google Analytics with Airflow
- Google Analytics does not collect User Activity Metric from Airflow
Note: with the upgraded Google Analytics tag it works properly
https://support.google.com/analytics/answer/9304153#add-tag&zippy=%2Cadd-your-tag-using-google-tag-manager%2Cfind-your-g--id-for-any-platform-that-accepts-a-g--id%2Cadd-the-google-tag-directly-to-your-web-pages
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27225 | https://github.com/apache/airflow/pull/27226 | 55f8a63d012d4ca5ca726195bed4b38e9b1a05f9 | 5e6cec849a5fa90967df1447aba9521f1cfff3d0 | "2022-10-24T09:00:49Z" | python | "2022-10-27T13:25:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,200 | ["airflow/models/serialized_dag.py"] | Handle TODO: .first() is not None can be changed to .scalar() | ### Body
The TODO part of:
https://github.com/apache/airflow/blob/d67ac5932dabbf06ae733fc57b48491a8029b8c2/airflow/models/serialized_dag.py#L156-L158
can now be addressed since we are on SQLAlchemy 1.4+ and https://github.com/sqlalchemy/sqlalchemy/issues/5481 is resolved.
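For reference, a minimal sketch of the kind of change the TODO describes (the model and method names here are illustrative assumptions, not the exact code at the linked lines):
```python
from sqlalchemy import literal

def has_dag(session, dag_id):
    query = session.query(literal(True)).filter(SerializedDagModel.dag_id == dag_id)
    # Before: .first() returns a Row or None, so the existence check needs `is not None`.
    # return query.first() is not None
    # After: on SQLAlchemy 1.4+, .scalar() returns True (row exists) or None directly.
    return query.scalar()
```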
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/27200 | https://github.com/apache/airflow/pull/27323 | 6f20d4d3247e44ea04558226aeeed09bf8379173 | 37c0038a18ace092079d23988f76d90493ff294c | "2022-10-22T17:01:34Z" | python | "2022-10-31T02:31:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,182 | ["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | SSHOperator ignores cmd_timeout | ### Apache Airflow Provider(s)
ssh
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.4.1
### Operating System
linux
### Deployment
Other
### Deployment details
_No response_
### What happened
Hi,
SSHOperator documentation states that we should be using cmd_timeout instead of timeout
```
:param timeout: (deprecated) timeout (in seconds) for executing the command. The default is 10 seconds.
Use conn_timeout and cmd_timeout parameters instead.
```
But the code doesn't use cmd_timeout at all - and it's still passing `self.timeout` when running the ssh command:
```
return self.ssh_hook.exec_ssh_client_command(
ssh_client, command, timeout=self.timeout, environment=self.environment, get_pty=self.get_pty
)
```
It seems to me that we should use `self.cmd_timeout` here instead. When creating the hook, it correctly uses `self.conn_timeout`.
I'll try to work on a PR for this.
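A minimal sketch of the change this suggests (just the report's idea, not necessarily the final PR, which may also need to handle the deprecated `timeout` default):
```python
# In SSHOperator.execute(), pass the command timeout instead of the deprecated one.
return self.ssh_hook.exec_ssh_client_command(
    ssh_client,
    command,
    timeout=self.cmd_timeout,  # was: self.timeout
    environment=self.environment,
    get_pty=self.get_pty,
)
```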
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27182 | https://github.com/apache/airflow/pull/27184 | cfd63df786e0c40723968cb8078f808ca9d39688 | dc760b45eaeccc3ff35a5acdfe70968ca0451331 | "2022-10-21T12:29:48Z" | python | "2022-11-07T02:07:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,166 | ["airflow/www/static/css/flash.css", "airflow/www/static/js/dag/grid/TaskName.test.tsx", "airflow/www/static/js/dag/grid/TaskName.tsx", "airflow/www/static/js/dag/grid/index.test.tsx"] | Carets in Grid view are the wrong way around | ### Apache Airflow version
main (development)
### What happened
When expanding tasks to see sub-tasks in the Grid UI, the carets to expand the task are pointing the wrong way.
### What you think should happen instead
Can you PLEASE use the accepted Material UI standard for expansion & contraction - https://mui.com/material-ui/react-list/#nested-list
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27166 | https://github.com/apache/airflow/pull/28624 | 69ab7d8252f830d8c1a013d34f8305a16da26bcf | 0ab881a4ab78ca7d30712c893a6f01b83eb60e9e | "2022-10-20T15:52:50Z" | python | "2023-01-02T21:01:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,165 | ["airflow/providers/google/cloud/hooks/workflows.py", "tests/providers/google/cloud/hooks/test_workflows.py"] | WorkflowsCreateExecutionOperator execution argument only receive bytes | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google==7.0.0`
### Apache Airflow version
2.3.2
### Operating System
Ubuntu 20.04.5 LTS (Focal Fossa)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
WorkflowsCreateExecutionOperator triggers Google Cloud Workflows, and its `execution` param receives the arguments as `{"argument": {"key": "val", ...}}`.
But when I pass the argument as a dict using `render_template_as_native_obj=True`, a protobuf error occurs: `TypeError: {'projectId': 'project-id', 'location': 'us-east1'} has type dict, but expected one of: bytes, unicode`.
When I pass the argument as bytes, e.g. `{"argument": b'{\n "projectId": "project-id",\n "location": "us-east1"\n}'}`, it works.
### What you think should happen instead
The `execution` argument should accept a dict instead of requiring bytes.
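One possible approach, sketched here purely for illustration (this is not the provider's actual code), would be for the hook or operator to serialize a dict argument before handing it to the protobuf `Execution` message:
```python
import json

argument = execution.get("argument")
if isinstance(argument, dict):
    # Protobuf expects bytes/str for `argument`, so dump the dict to JSON first.
    execution["argument"] = json.dumps(argument).encode("utf-8")
```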
### How to reproduce
Not working:
```python
from airflow import DAG
from airflow.models.param import Param
from airflow.operators.dummy_operator import DummyOperator
from airflow.providers.google.cloud.operators.workflows import WorkflowsCreateExecutionOperator
with DAG(
dag_id="continual_learning_deid_norm_h2h_test",
params={
"location": Param(type="string", default="us-east1"),
"project_id": Param(type="string", default="project-id"),
"workflow_id": Param(type="string", default="orkflow"),
"workflow_execution_info": {
"argument": {
"projectId": "project-id",
"location": "us-east1"
}
}
},
render_template_as_native_obj=True
) as dag:
execution = "{{ params.workflow_execution_info }}"
create_execution = WorkflowsCreateExecutionOperator(
task_id="create_execution",
location="{{ params.location }}",
project_id="{{ params.project_id }}",
workflow_id="{{ params.workflow_id }}",
execution="{{ params.workflow_execution_info }}"
)
start_operator = DummyOperator(task_id='test_task')
start_operator >> create_execution
```
Working:
```python
from airflow import DAG
from airflow.models.param import Param
from airflow.operators.dummy_operator import DummyOperator
from airflow.providers.google.cloud.operators.workflows import WorkflowsCreateExecutionOperator
with DAG(
dag_id="continual_learning_deid_norm_h2h_test",
params={
"location": Param(type="string", default="us-east1"),
"project_id": Param(type="string", default="project-id"),
"workflow_id": Param(type="string", default="orkflow"),
"workflow_execution_info": {
"argument": b'{\n "projectId": "project-id",\n "location": "us-east1"\n}'
}
},
render_template_as_native_obj=True
) as dag:
execution = "{{ params.workflow_execution_info }}"
create_execution = WorkflowsCreateExecutionOperator(
task_id="create_execution",
location="{{ params.location }}",
project_id="{{ params.project_id }}",
workflow_id="{{ params.workflow_id }}",
execution="{{ params.workflow_execution_info }}"
)
start_operator = DummyOperator(task_id='test_task')
start_operator >> create_execution
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27165 | https://github.com/apache/airflow/pull/27361 | 9c41bf35e6149d4edfc585d97c348a4f864e7973 | 332c01d6e0bef41740e8fbc2c9600e7b3066615b | "2022-10-20T14:50:46Z" | python | "2022-10-31T05:35:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,146 | ["airflow/providers/dbt/cloud/hooks/dbt.py", "docs/apache-airflow-providers-dbt-cloud/connections.rst", "tests/providers/dbt/cloud/hooks/test_dbt_cloud.py"] | dbt Cloud Provider Not Compatible with emea.dbt.com | ### Apache Airflow Provider(s)
dbt-cloud
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.3
### Operating System
Linux
### Deployment
Composer
### Deployment details
_No response_
### What happened
Trying to use the provider with dbt Cloud's new EMEA region (https://docs.getdbt.com/docs/deploy/regions), I am not able to use `emea.dbt.com` as a tenant, because `.getdbt.com` is automatically appended to the tenant value.
### What you think should happen instead
We should be able to change the entire URL, and it could still default to `cloud.getdbt.com`.
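Roughly, the limitation and the desired behaviour look like this (a simplified illustration only, not the provider's exact code):
```python
# Today the connection host is effectively treated as a tenant name:
#   "my-tenant" -> "https://my-tenant.getdbt.com/..."
# so a full domain such as "emea.dbt.com" cannot be expressed.
host = conn.host or "cloud"
base_url = f"https://{host}.getdbt.com/api/v2/accounts/"

# Desired: accept a full domain as-is, e.g.
#   "emea.dbt.com" -> "https://emea.dbt.com/api/v2/accounts/"
if "." in host:
    base_url = f"https://{host}/api/v2/accounts/"
```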
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27146 | https://github.com/apache/airflow/pull/28890 | ed8788bb80764595ba2872cba0d2da9e4b137e07 | 141338b24efeddb9460b53b8501654b50bc6b86e | "2022-10-19T15:41:37Z" | python | "2023-01-12T19:25:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,140 | ["airflow/cli/commands/dag_processor_command.py", "airflow/jobs/dag_processor_job.py", "tests/cli/commands/test_dag_processor_command.py"] | Invalid livenessProbe for Standalone DAG Processor | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.3.4
### Kubernetes Version
1.22.12-gke.1200
### Helm Chart configuration
```yaml
dagProcessor:
enabled: true
```
### Docker Image customisations
```dockerfile
FROM apache/airflow:2.3.4-python3.9
USER root
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
RUN apt-get update && apt-get install -y google-cloud-cli
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
RUN sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
USER airflow
```
### What happened
Current DAG Processor livenessProbe is the following:
```
CONNECTION_CHECK_MAX_COUNT=0 AIRFLOW__LOGGING__LOGGING_LEVEL=ERROR exec /entrypoint \
airflow jobs check --hostname $(hostname)
```
This command checks the metadata DB, searching for an active job whose hostname matches the current pod's hostname (_airflow-dag-processor-xxxx_).
However, after running the dag-processor pod for more than 1 hour, there are no jobs with the processor hostname in the jobs table.
![image](https://user-images.githubusercontent.com/28935464/196711859-98dadb8f-3273-42ec-a4db-958890db34b7.png)
![image](https://user-images.githubusercontent.com/28935464/196711947-5a0fc5d7-4b91-4e82-9ff0-c721e6a4c1cd.png)
As a consequence, the livenessProbe fails and the pod is constantly restarting.
After investigating the code, I found out that DagFileProcessorManager is not creating jobs in the metadata DB, so the livenessProbe is not valid.
### What you think should happen instead
A new job should be created for the Standalone DAG Processor.
By doing that, the _airflow jobs check --hostname <hostname>_ command would work correctly and the livenessProbe wouldn't fail
### How to reproduce
1. Deploy airflow with a standalone dag-processor.
2. Wait for ~ 5 minutes
3. Check that the livenessProbe has been failing for 5 minutes and the pod has been restarted.
### Anything else
I think this behavior is inherited from the non-standalone dag-processor mode, where the livenessProbe checks for a SchedulerJob, which in fact contains the "DagProcessorJob".
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27140 | https://github.com/apache/airflow/pull/28799 | 1edaddbb1cec740db2ff2a86fb23a3a676728cb0 | 0018b94a4a5f846fc87457e9393ca953ba0b5ec6 | "2022-10-19T14:02:51Z" | python | "2023-02-21T09:54:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,107 | ["airflow/providers/dbt/cloud/operators/dbt.py", "tests/providers/dbt/cloud/operators/test_dbt_cloud.py"] | Dbt cloud download artifact to a path not present fails | ### Apache Airflow Provider(s)
dbt-cloud
### Versions of Apache Airflow Providers
```
apache-airflow-providers-dbt-cloud==2.2.0
```
### Apache Airflow version
2.2.5
### Operating System
Ubuntu 18.04
### Deployment
Composer
### Deployment details
_No response_
### What happened
Instructing `DbtCloudGetJobRunArtifactOperator` to save results to a path like `{{ var.value.base_path }}/dbt_run_warehouse/{{ run_id }}/run_results.json` fails because the destination directory has not been created yet.
```
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/dbt/cloud/operators/dbt.py", line 216, in execute
with open(self.output_file_name, "w") as file:
FileNotFoundError: [Errno 2] No such file or directory: '/home/airflow/gcs/data/dbt/dbt_run_warehouse/manual__2022-10-17T22:18:25.469526+00:00/run_results.json'
```
### What you think should happen instead
It should create the missing directories and dump the content of the requested artifact.
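A rough sketch of what that could look like inside the operator's `execute` method (variable names here are assumed for illustration, not the operator's actual ones):
```python
from pathlib import Path

output_path = Path(self.output_file_name)
# Create any missing parent directories before writing the artifact.
output_path.parent.mkdir(parents=True, exist_ok=True)
with output_path.open("w") as file:
    json.dump(response.json(), file)
```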
### How to reproduce
```python
from airflow import DAG
from airflow.providers.dbt.cloud.operators.dbt import DbtCloudGetJobRunArtifactOperator

with DAG('test') as dag:
    DbtCloudGetJobRunArtifactOperator(
        task_id='dbt_run_warehouse',
        run_id=12341234,
        path='run_results.json',
        output_file_name='{{ var.value.dbt_base_target_folder }}/dbt_run_warehouse/{{ run_id }}/run_results.json',
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27107 | https://github.com/apache/airflow/pull/29048 | 6190e34388394b0f8b0bc01c66d56a0e8277fe6c | f805b4154a8155823d7763beb9b6da76889ebd62 | "2022-10-18T10:35:01Z" | python | "2023-01-23T17:08:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,096 | ["airflow/providers/amazon/aws/hooks/rds.py", "airflow/providers/amazon/aws/operators/rds.py", "airflow/providers/amazon/aws/sensors/rds.py", "tests/providers/amazon/aws/hooks/test_rds.py", "tests/providers/amazon/aws/operators/test_rds.py", "tests/providers/amazon/aws/sensors/test_rds.py"] | Use Boto waiters instead of customer _await_status method for RDS Operators | ### Description
Currently some code in the RDS Operators uses boto waiters and some uses a custom `_await_status`; the former is preferred over the latter (waiters are vetted code provided by the boto SDK, with features like exponential backoff, etc.). See [this discussion thread](https://github.com/apache/airflow/pull/27076#discussion_r997325535) for more details/context.
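For illustration, this is the kind of SDK-provided waiter that could replace a hand-rolled polling loop (a generic boto3 example, not the operators' actual code; the instance identifier is hypothetical):
```python
import boto3

client = boto3.client("rds")
# The SDK ships vetted waiters with built-in polling and retry behaviour.
waiter = client.get_waiter("db_instance_available")
waiter.wait(
    DBInstanceIdentifier="my-db-instance",
    WaiterConfig={"Delay": 30, "MaxAttempts": 60},
)
```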
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27096 | https://github.com/apache/airflow/pull/27410 | b717853e4c17d67f8ea317536c98c7416eb080ca | 2bba98f109cc7737f4293a195e03a0cc21a624cb | "2022-10-17T17:46:53Z" | python | "2022-11-17T17:02:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,079 | ["airflow/macros/__init__.py", "tests/macros/test_macros.py"] | Option to deserialize JSON from last log line in BashOperator and DockerOperator before sending to XCom | ### Description
In order to create an XCom value with a BashOperator or a DockerOperator, we can use the option `do_xcom_push` that pushes to XCom the last line of the command logs.
It would be interesting to provide an option `xcom_json` to deserialize this last log line, in case it's a JSON string, before sending it as an XCom. This would allow other tasks to access its attributes later with the `xcom_pull()` method.
### Use case/motivation
See my StackOverflow post : https://stackoverflow.com/questions/74083466/how-to-deserialize-xcom-strings-in-airflow
Consider a DAG containing two tasks: `DAG: Task A >> Task B` (BashOperators or DockerOperators). They need to communicate through XComs.
- `Task A` outputs the information as a one-line JSON string on stdout, which can then be retrieved in the logs of `Task A`, and therefore in its *return_value* XCom key if `xcom_push=True`. For instance: `{"key1":1,"key2":3}`
- `Task B` only needs the `key2` information from `Task A`, so we need to deserialize the *return_value* XCom of `Task A` to extract only this value and pass it directly to `Task B`, using the Jinja template `{{xcom_pull('task_a')['key2']}}`. Using it as-is results in `jinja2.exceptions.UndefinedError: 'str object' has no attribute 'key2'` because *return_value* is just a string.
For example, we can already deserialize Airflow Variables in Jinja templates (e.g. `{{ var.json.my_var.path }}`). I would like to do the same thing with XComs; a usage sketch follows below.
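Purely hypothetical usage of the proposed option (the `xcom_json` parameter does not exist today; the name is just the one suggested above):
```python
from airflow.operators.bash import BashOperator

task_a = BashOperator(
    task_id="task_a",
    bash_command="echo '{\"key1\": 1, \"key2\": 3}'",
    do_xcom_push=True,
    xcom_json=True,  # would deserialize the last log line before pushing it to XCom
)
task_b = BashOperator(
    task_id="task_b",
    bash_command="echo {{ xcom_pull('task_a')['key2'] }}",
)
task_a >> task_b
```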
**Current workaround**:
We can create a custom Operator (inherited from BashOperator or DockerOperator) and augment the `execute` method:
1. execute the original `execute` method
2. intercept the last log line of the task
3. try to `json.loads()` it into a Python dictionary
4. finally return the output (which is now a dictionary, not a string)
The previous Jinja template `{{ xcom_pull('task_a')['key2'] }}` then works in `task B`, since the XCom value is now a Python dictionary.
```python
import json

from airflow.operators.bash import BashOperator
from airflow.providers.docker.operators.docker import DockerOperator


class BashOperatorExtended(BashOperator):
def execute(self, context):
output = BashOperator.execute(self, context)
try:
output = json.loads(output)
except:
pass
return output
class DockerOperatorExtended(DockerOperator):
def execute(self, context):
output = DockerOperator.execute(self, context)
try:
output = json.loads(output)
except:
pass
return output
```
But creating a new operator just for that purpose is not really satisfying.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27079 | https://github.com/apache/airflow/pull/28930 | d20300018a38159f5452ae16bc9df90b1e7270e5 | ffdc696942d96a14a5ee0279f950e3114817055c | "2022-10-16T20:14:05Z" | python | "2023-02-19T14:41:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,069 | ["tests/jobs/test_local_task_job.py"] | test_heartbeat_failed_fast not failing fast enough. | ### Apache Airflow version
main (development)
### What happened
In the #14915 change to make tests run in parallel, the heartbeat interval threshold was raised an order of magnitude, from 0.05 to 0.5. However, I frequently see tests failing in PRs due to breaching that threshold by a tiny amount. Do we need to increase that threshold again? CC @potiuk
Example below where the time was `0.5193889999999999`, `0.0193...` past the threshold for the test:
```
=================================== FAILURES ===================================
_________________ TestLocalTaskJob.test_heartbeat_failed_fast __________________
self = <tests.jobs.test_local_task_job.TestLocalTaskJob object at 0x7f4088400950>
def test_heartbeat_failed_fast(self):
"""
Test that task heartbeat will sleep when it fails fast
"""
self.mock_base_job_sleep.side_effect = time.sleep
dag_id = 'test_heartbeat_failed_fast'
task_id = 'test_heartbeat_failed_fast_op'
with create_session() as session:
dag_id = 'test_heartbeat_failed_fast'
task_id = 'test_heartbeat_failed_fast_op'
dag = self.dagbag.get_dag(dag_id)
task = dag.get_task(task_id)
dr = dag.create_dagrun(
run_id="test_heartbeat_failed_fast_run",
state=State.RUNNING,
execution_date=DEFAULT_DATE,
start_date=DEFAULT_DATE,
session=session,
)
ti = dr.task_instances[0]
ti.refresh_from_task(task)
ti.state = State.QUEUED
ti.hostname = get_hostname()
ti.pid = 1
session.commit()
job = LocalTaskJob(task_instance=ti, executor=MockExecutor(do_update=False))
job.heartrate = 2
heartbeat_records = []
job.heartbeat_callback = lambda session: heartbeat_records.append(job.latest_heartbeat)
job._execute()
assert len(heartbeat_records) > 2
for i in range(1, len(heartbeat_records)):
time1 = heartbeat_records[i - 1]
time2 = heartbeat_records[i]
# Assert that difference small enough
delta = (time2 - time1).total_seconds()
> assert abs(delta - job.heartrate) < 0.5
E assert 0.5193889999999999 < 0.5
E + where 0.5193889999999999 = abs((2.519389 - 2))
E + where 2 = <airflow.jobs.local_task_job.LocalTaskJob object at 0x7f408835a7d0>.heartrate
tests/jobs/test_local_task_job.py:312: AssertionError
```
[source](https://github.com/apache/airflow/actions/runs/3253568905/jobs/5341352671)
### What you think should happen instead
Tests should not be flaky and should pass reliably :)
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27069 | https://github.com/apache/airflow/pull/27397 | f35b41e7533b09052dfcc591ec25c58207f1518c | 594c6eef6938d7a4975a0d87003160c2390d7ebb | "2022-10-15T02:17:24Z" | python | "2022-10-31T16:54:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,065 | ["airflow/config_templates/airflow_local_settings.py", "airflow/utils/log/non_caching_file_handler.py", "newsfragments/27065.misc.rst"] | Log files are still being cached causing ever-growing memory usage when scheduler is running | ### Apache Airflow version
2.4.1
### What happened
My Airflow scheduler memory usage started to grow after I turned on the `dag_processor_manager` log by doing
```bash
export CONFIG_PROCESSOR_MANAGER_LOGGER=True
```
see the red arrow below
![2022-10-11_12-06 (1)](https://user-images.githubusercontent.com/14293802/195940156-3248f68a-656c-448a-9140-e50cfa3a8311.png)
By looking closely at the memory usage as mentioned in https://github.com/apache/airflow/issues/16737#issuecomment-917677177, I discovered that it was the cache memory that keeps growing:
![2022-10-12_14-42 (1)](https://user-images.githubusercontent.com/14293802/195940416-1da0ab08-a3b4-4f72-b35b-fba86918cdbc.png)
Then I turned off the `dag_processor_manager` log, memory usage returned to normal (not growing anymore, steady at ~400 MB)
This issue is similar to #14924 and #16737. This time the culprit is the rotating logs under `~/logs/dag_processor_manager/dag_processor_manager.log*`.
### What you think should happen instead
Cache memory shouldn't keep growing like this
### How to reproduce
Turn on the `dag_processor_manager` log by doing
```bash
export CONFIG_PROCESSOR_MANAGER_LOGGER=True
```
in the `entrypoint.sh` and monitor the scheduler memory usage
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
k8s
### Anything else
I'm not sure why the previous fix https://github.com/apache/airflow/pull/18054 has stopped working :thinking:
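For context, one common way to keep file I/O out of the page cache (roughly the idea behind that earlier fix, sketched here from memory rather than copied from it) is to advise the kernel that the data will not be reused:
```python
import os

def make_file_io_non_caching(file_obj):
    """Best-effort hint that the file's pages should not stay in the page cache."""
    try:
        os.posix_fadvise(file_obj.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
    except (AttributeError, OSError):
        pass  # not supported on this platform or file object
    return file_obj
```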
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27065 | https://github.com/apache/airflow/pull/27223 | 131d339696e9568a2a2dc55c2a6963897cdc82b7 | 126b7b8a073f75096d24378ffd749ce166267826 | "2022-10-14T20:50:24Z" | python | "2022-10-25T08:38:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,057 | ["airflow/models/trigger.py"] | Race condition in multiple triggerer process can lead to both picking up same trigger. | ### Apache Airflow version
main (development)
### What happened
Currently the Airflow triggerer loop picks triggers to process with the following steps:
1. query unassigned triggers
2. update the triggers returned above, assigning them to the current triggerer
3. query which triggers are assigned to the current process
If two triggerer processes interleave these queries (both run the unassigned-triggers query and get all triggers, then one triggerer completes steps 2 and 3 before the second triggerer runs step 2), both end up running the same triggers.
There is a sync happening after that, but unnecessary cleanup operations are done in that case.
### What you think should happen instead
There should be row-level locking on the rows that are being updated.
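For illustration, a simplified sketch of what claiming triggers under a row-level lock could look like (not the actual `Trigger` model code):
```python
unassigned = (
    session.query(Trigger)
    .filter(Trigger.triggerer_id.is_(None))
    .with_for_update(skip_locked=True)  # requires a backend that supports SKIP LOCKED
    .limit(capacity)
    .all()
)
for trigger in unassigned:
    trigger.triggerer_id = triggerer_id  # claim while still holding the lock
session.commit()
```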
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
An HA setup with multiple triggerers can hit this issue.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27057 | https://github.com/apache/airflow/pull/27072 | 4e55d7fa2b7d5f8d63465d2c5270edf2d85f08c6 | 9c737f6d192ef864dd4cde89a0a90c53f5336566 | "2022-10-14T11:29:13Z" | python | "2022-10-31T01:30:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,029 | ["airflow/providers/apache/druid/hooks/druid.py"] | Druid Operator is not getting host | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We use Airflow 2.3.3. I see that this test is successful, but I get this error. This is the picture:
```
File "/home/airflow/.local/lib/python3.7/site-packages/requests/sessions.py", line 792, in get_adapter
raise InvalidSchema(f"No connection adapters were found for {url!r}")
```
<img width="1756" alt="Screen Shot 2022-10-12 at 15 34 40" src="https://user-images.githubusercontent.com/47830986/195560866-0527c5f6-3795-460b-b78b-2488e2a77bfb.png">
<img width="1685" alt="Screen Shot 2022-10-12 at 15 37 27" src="https://user-images.githubusercontent.com/47830986/195560954-f5604d10-eb7d-4bab-b10b-2684d8fbe4a2.png">
My DAG looks like this:
![Screen Shot 2022-10-13 at 12 36 25](https://user-images.githubusercontent.com/47830986/195561373-8bc4fd37-4f22-4a40-8b71-52efa10d622d.png)
![Screen Shot 2022-10-13 at 12 37 15](https://user-images.githubusercontent.com/47830986/195561566-9a911dd5-cdb2-4b42-98d2-214ed944a4c5.png)
I also tried this approach, but it failed:
```python
ingestion_2 = SimpleHttpOperator(
task_id='test_task',
method='POST',
http_conn_id=DRUID_CONN_ID,
endpoint='/druid/indexer/v1/task',
data=json.dumps(read_file),
dag=dag,
do_xcom_push=True,
headers={
'Content-Type': 'application/json'
},
response_check=lambda response: response.json()['Status'] == 200)
```
I get this log
```
[2022-10-13, 06:16:46 UTC] {http.py:143} ERROR - {"error":"Missing type id when trying to resolve subtype of [simple type, class org.apache.druid.indexing.common.task.Task]: missing type id property 'type'\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 1]"}
```
I don't know whether this is a bug, a configuration issue, or a networking problem, but can we check this?
P.S. We run Airflow on Kubernetes, so we cannot easily debug it.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27029 | https://github.com/apache/airflow/pull/27174 | 7dd7400dd4588e063078986026e14ea606a55a76 | 8b5f1d91936bb87ba9fa5488715713e94297daca | "2022-10-13T09:42:34Z" | python | "2022-10-31T10:19:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,010 | ["airflow/dag_processing/manager.py", "tests/dag_processing/test_manager.py"] | DagProcessor doesnt pick new files until queued file parsing completes | ### Apache Airflow version
2.4.1
### What happened
When there is a large number of DAG files, let's say 10K, and each takes some time to parse, the DAG parser doesn't pick up any newly created files until all 10K files have finished parsing.
```python
if not self._file_path_queue:
    self.emit_metrics()
    self.prepare_file_path_queue()
```
The above logic only adds new files to the queue when the queue is empty.
### What you think should happen instead
Every loop of the DAG processor should pick up new files and add them to the parsing queue; a rough sketch follows below.
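A simplified illustration of the suggested behaviour (attribute names other than `_file_path_queue` are assumptions, not the real manager code):
```python
# On every loop, enqueue newly discovered files instead of waiting for the queue to drain.
known = set(self._file_path_queue) | set(files_currently_being_parsed)
new_files = [path for path in current_file_paths if path not in known]
self._file_path_queue.extend(new_files)
```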
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27010 | https://github.com/apache/airflow/pull/27060 | fb9e5e612e3ddfd10c7440b7ffc849f0fd2d0b09 | 65b78b7dbd1d824d2c22b65922149985418acbc8 | "2022-10-12T11:34:30Z" | python | "2022-11-14T01:43:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,987 | ["airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | DataprocLink is not available for dataproc workflow operators | ### Apache Airflow version
main (development)
### What happened
For DataprocInstantiateInlineWorkflowTemplateOperator and DataprocInstantiateWorkflowTemplateOperator, the Dataproc link is available only for jobs that have succeeded. In case of failure, the DataprocLink is not available.
### What you think should happen instead
Like other dataproc operators, this should be available for workflow operators as well
### How to reproduce
_No response_
### Operating System
MacOS Monterey
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.5.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26987 | https://github.com/apache/airflow/pull/26986 | 7cfa1be467b995b886a97b68498137a76a31f97c | 0cb6450d6df853e1061dbcafbc437c07a8e0e555 | "2022-10-11T09:17:26Z" | python | "2022-11-16T21:30:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,984 | ["dev/breeze/src/airflow_breeze/utils/path_utils.py"] | Running pre-commit without installing breeze errors out | ### Apache Airflow version
main (development)
### What happened
Running pre-commit without installing `apache-airflow-breeze` errors out
```
Traceback (most recent call last):
File "/Users/bhavaniravi/projects/airflow/./scripts/ci/pre_commit/pre_commit_flake8.py", line 39, in <module>
from airflow_breeze.global_constants import MOUNT_SELECTED
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/global_constants.py", line 30, in <module>
from airflow_breeze.utils.path_utils import AIRFLOW_SOURCES_ROOT
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 240, in <module>
AIRFLOW_SOURCES_ROOT = find_airflow_sources_root_to_operate_on().resolve()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 235, in find_airflow_sources_root_to_operate_on
reinstall_if_setup_changed()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 148, in reinstall_if_setup_changed
if sources_hash != package_hash:
UnboundLocalError: local variable 'package_hash' referenced before assignment
```
And to understand the error better, I commented out the exception handling code.
```
try:
package_hash = get_package_setup_metadata_hash()
except ModuleNotFoundError as e:
if "importlib_metadata" in e.msg:
return False
```
It returned
```
Traceback (most recent call last):
File "/Users/bhavaniravi/projects/airflow/./scripts/ci/pre_commit/pre_commit_flake8.py", line 39, in <module>
from airflow_breeze.global_constants import MOUNT_SELECTED
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/global_constants.py", line 30, in <module>
from airflow_breeze.utils.path_utils import AIRFLOW_SOURCES_ROOT
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 240, in <module>
AIRFLOW_SOURCES_ROOT = find_airflow_sources_root_to_operate_on().resolve()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 235, in find_airflow_sources_root_to_operate_on
reinstall_if_setup_changed()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 141, in reinstall_if_setup_changed
package_hash = get_package_setup_metadata_hash()
File "/Users/bhavaniravi/projects/airflow/dev/breeze/src/airflow_breeze/utils/path_utils.py", line 86, in get_package_setup_metadata_hash
for line in distribution('apache-airflow-breeze').metadata.as_string().splitlines(keepends=False):
File "/opt/homebrew/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/metadata.py", line 524, in distribution
return Distribution.from_name(distribution_name)
File "/opt/homebrew/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/metadata.py", line 187, in from_name
raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: apache-airflow-breeze
```
### What you think should happen instead
The error should be handled gracefully, and print out the command to install `apache-airflow-breeze`
### How to reproduce
1. Here is the weird part: after I run `pip install -e ./dev/breeze`, the breeze error vanishes.
But when I uninstall it with `pip uninstall apache-airflow-breeze`, the error doesn't re-appear.
2. The error occurred again after stopping the docker desktop. Running `pip install -e ./dev/breeze` fixed it.
### Operating System
macOS Monterey
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26984 | https://github.com/apache/airflow/pull/26985 | 58378cfd42b137a31032404783b2957284a1e538 | ee3625540ff3712bbc6215214e4534d7e91c45fa | "2022-10-11T07:24:07Z" | python | "2022-10-22T23:35:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,960 | ["airflow/api/common/mark_tasks.py", "airflow/models/taskinstance.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/log_reader.py", "airflow/utils/state.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/views/test_views_grid.py"] | can't see failed sensor task log on webpage | ### Apache Airflow version
2.4.1
### What happened
![image](https://user-images.githubusercontent.com/24224756/194797283-6c26ad63-d432-4c41-9f91-2dbc47417ec7.png)
When the sensor is running, I can see the log above, but when I manually set the task state to failed, or the task fails for another reason, I can't see the log here:
![image](https://user-images.githubusercontent.com/24224756/194797633-0c7bf825-8e83-42af-8298-03491a26b7c9.png)
This also happens in other Airflow versions, such as 2.3/2.2.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.4 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26960 | https://github.com/apache/airflow/pull/26993 | ad7f8e09f8e6e87df2665abdedb22b3e8a469b49 | f110cb11bf6fdf6ca9d0deecef9bd51fe370660a | "2022-10-10T06:42:09Z" | python | "2023-01-05T16:42:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,912 | ["airflow/www/static/js/api/useTaskLog.ts", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx"] | Log-tab under grid view is automatically re-fetching completed logs every 3 sec. | ### Apache Airflow version
2.4.1
### What happened
The new inline log-tab under grid view is fantastic.
What's not so great, though, is that it automatically reloads the logs from the `/api/v1/dags/.../dagRuns/.../taskInstances/.../logs/1` API endpoint every 3 seconds, seemingly the same interval as the reload of the grid status.
This:
* Makes it difficult for users to scroll in the log panel and to select text in the log panel, because it is replaced all the time
* Puts unnecessary load on the client and on the link between the client and the webserver.
* Puts unnecessary load on the webserver and on the logging backend; in our case it involves queries to an external Loki server.
This happens even if the TaskLogReader has set `metadata["end_of_log"] = True`
### What you think should happen instead
Logs should not automatically be reloaded if `end_of_log=True`
For logs which have not yet reached the end, a slower reload or a more incremental query/streaming approach would be preferable.
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.2.0
apache-airflow-providers-sqlite==3.1.0
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26912 | https://github.com/apache/airflow/pull/27233 | 8d449ae04aa67ecbabf84f35a34fc2e53665ee17 | e73e90e388f7916ae5eea48ba39687d99f7a94b1 | "2022-10-06T12:38:34Z" | python | "2022-10-25T14:26:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,910 | ["dev/provider_packages/MANIFEST_TEMPLATE.in.jinja2", "dev/provider_packages/SETUP_TEMPLATE.py.jinja2"] | python_kubernetes_script.jinja2 file missing from apache-airflow-providers-cncf-kubernetes==4.4.0 release | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
```
$ pip freeze | grep apache-airflow-providers
apache-airflow-providers-cncf-kubernetes==4.4.0
```
### Apache Airflow version
2.4.1
### Operating System
macos-12.6
### Deployment
Other Docker-based deployment
### Deployment details
Using the astro cli.
### What happened
Trying to test the `@task.kubernetes` decorator with Airflow 2.4.1 and the `apache-airflow-providers-cncf-kubernetes==4.4.0` package, I get the following error:
```
[2022-10-06, 10:49:01 UTC] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/decorators/kubernetes.py", line 95, in execute
write_python_script(jinja_context=jinja_context, filename=script_filename)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/python_kubernetes_script.py", line 79, in write_python_script
template = template_env.get_template('python_kubernetes_script.jinja2')
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/environment.py", line 1010, in get_template
return self._load_template(name, globals)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/environment.py", line 969, in _load_template
template = self.loader.load(self, name, self.make_globals(globals))
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/loaders.py", line 126, in load
source, filename, uptodate = self.get_source(environment, name)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/loaders.py", line 218, in get_source
raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: python_kubernetes_script.jinja2
```
Looking the [source file](https://files.pythonhosted.org/packages/5d/54/0ea031a9771ded6036d05ad951359f7361995e1271a302ba2af99bdce1af/apache-airflow-providers-cncf-kubernetes-4.4.0.tar.gz) for the `apache-airflow-providers-cncf-kubernetes==4.4.0` package, I can see that `python_kubernetes_script.py` is there but not `python_kubernetes_script.jinja2`
```
$ tar -ztvf apache-airflow-providers-cncf-kubernetes-4.4.0.tar.gz 'apache-airflow-providers-cncf-kubernetes-4.4.0/airflow/providers/cncf/kubernetes/py*'
-rw-r--r-- 0 root root 2949 Sep 22 15:25 apache-airflow-providers-cncf-kubernetes-4.4.0/airflow/providers/cncf/kubernetes/python_kubernetes_script.py
```
### What you think should happen instead
The `python_kubernetes_script.jinja2` file that is available here https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/python_kubernetes_script.jinja2 should be included in the `apache-airflow-providers-cncf-kubernetes==4.4.0` pypi package.
### How to reproduce
With a default installation of `apache-airflow==2.4.1` and `apache-airflow-providers-cncf-kubernetes==4.4.0`, running the following DAG will reproduce the issue.
```
import pendulum
from airflow.decorators import dag, task
@dag(
schedule_interval=None,
start_date=pendulum.datetime(2022, 7, 20, tz="UTC"),
catchup=False,
tags=['xray_classifier'],
)
def k8s_taskflow():
@task.kubernetes(
image="python:3.8-slim-buster",
name="k8s_test",
namespace="default",
in_cluster=False,
config_file="/path/to/config"
)
def execute_in_k8s_pod():
import time
print("Hello from k8s pod")
time.sleep(2)
execute_in_k8s_pod_instance = execute_in_k8s_pod()
k8s_taskflow_dag = k8s_taskflow()
```
### Anything else
If I manually add the `python_kubernetes_script.jinja2` into my `site-packages/airflow/providers/cncf/kubernetes/` folder, then it works as expected.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26910 | https://github.com/apache/airflow/pull/27451 | 4cdea86d4cc92a51905aa44fb713f530e6bdadcf | 8975d7c8ff00841f4f2f21b979cb1890e6d08981 | "2022-10-06T11:33:31Z" | python | "2022-11-01T20:31:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,905 | ["airflow/www/static/js/api/index.ts", "airflow/www/static/js/api/useTaskXcom.ts", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Nav.tsx", "airflow/www/static/js/dag/details/taskInstance/Xcom/XcomEntry.tsx", "airflow/www/static/js/dag/details/taskInstance/Xcom/index.tsx", "airflow/www/templates/airflow/dag.html"] | Display selected task outputs (xcom) in task UI | ### Description
I often find myself checking the stats of a completed task, e.g. "inserted 3 new rows" or "discovered 4 new files", in the task logs. It would be very handy to see these on the UI directly, as part of the task details or elsewhere.
One idea would be to choose in the Task definition, which XCOM keys should be output in the task details, like so:
![image](https://user-images.githubusercontent.com/97735/194236391-9a8b4d97-9523-4461-a49f-182442d2727f.png)
### Use case/motivation
As a developer, I want to better monitor the results of my tasks in terms of key metrics, so I can see the data processed by them. While for production this can be achieved by forwarding metrics to other systems, like notification hooks, or ingesting them into e.g. Grafana, I would like to do this in Airflow itself to a certain extent. This would certainly cut down on my clicks while running beta DAGs.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26905 | https://github.com/apache/airflow/pull/35719 | d0f4512ecb9c0683a60be7b0de8945948444df8e | 77c01031d6c569d26f6fabd331597b7e87274baa | "2022-10-06T07:05:39Z" | python | "2023-12-04T15:59:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,892 | ["airflow/www/views.py"] | Dataset Next Trigger Modal Not Populating Latest Update | ### Apache Airflow version
2.4.1
### What happened
When using dataset scheduling, it isn't obvious which datasets a downstream dataset consumer is awaiting in order for the DAG to be scheduled.
I would assume that this is supposed to be solved by the `Latest Update` column in the modal that opens when selecting `x of y datasets updated`, but it appears that the data isn't being populated.
<img width="601" alt="image" src="https://user-images.githubusercontent.com/5778047/194116186-d582cede-c778-47f7-8341-fc13a69a2358.png">
Although one of the datasets has been produced, there is no data in the `Latest Update` column of the modal.
In the above example, both datasets have been produced > 1 time.
<img width="581" alt="image" src="https://user-images.githubusercontent.com/5778047/194116368-ceff241f-a623-4893-beb7-637b821c4b53.png">
<img width="581" alt="image" src="https://user-images.githubusercontent.com/5778047/194116410-19045f5a-8400-47b0-afcb-9fbbffca26ee.png">
### What you think should happen instead
The `Latest Update` column should be populated with the latest update timestamp for each dataset required to schedule a downstream, dataset consuming DAG.
Ideally there would be some form of highlighting on the "missing" datasets for quick visual feedback when DAGs have a large number of datasets required for scheduling.
### How to reproduce
1. Create a DAG (or 2 individual DAGs) that produces 2 datasets
2. Produce both datasets
3. Then produce _only one_ dataset
4. Check the modal by clicking from the home screen on the `x of y datasets updated` button.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26892 | https://github.com/apache/airflow/pull/29441 | 0604033829787ebed59b9982bf08c1a68d93b120 | 6f9efbd0537944102cd4a1cfef06e11fe0a3d03d | "2022-10-05T16:51:49Z" | python | "2023-02-20T08:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,875 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py"] | SQLToGCSOperators Add Support for Dumping Json value in CSV | ### Description
If output format is `CSV`, then any "dict" type object returned from a database, for example a Postgres JSON column, is not dumped to a string and is kept as a dict object.
### Use case/motivation
Currently, if `export_format` is `CSV` and a data column in Postgres is defined with the `json` or `jsonb` data type, the `stringify_dict` param passed to the abstract method `convert_type` is hardcoded to `False`, which means the `stringify_dict` param in subclasses, for instance `PostgresToGCSOperator`, cannot be customised.
Function `convert_types` in base class `BaseSQLToGCSOperator`:
```
def convert_types(self, schema, col_type_dict, row, stringify_dict=False) -> list:
"""Convert values from DBAPI to output-friendly formats."""
return [
self.convert_type(value, col_type_dict.get(name), stringify_dict=stringify_dict)
for name, value in zip(schema, row)
]
```
Function `convert_type` in subclass `PostgresToGCSOperator`:
```
def convert_type(self, value, schema_type, stringify_dict=True):
"""
Takes a value from Postgres, and converts it to a value that's safe for
JSON/Google Cloud Storage/BigQuery.
Timezone aware Datetime are converted to UTC seconds.
Unaware Datetime, Date and Time are converted to ISO formatted strings.
Decimals are converted to floats.
:param value: Postgres column value.
:param schema_type: BigQuery data type.
:param stringify_dict: Specify whether to convert dict to string.
"""
if isinstance(value, datetime.datetime):
iso_format_value = value.isoformat()
if value.tzinfo is None:
return iso_format_value
return pendulum.parse(iso_format_value).float_timestamp
if isinstance(value, datetime.date):
return value.isoformat()
if isinstance(value, datetime.time):
formatted_time = time.strptime(str(value), "%H:%M:%S")
time_delta = datetime.timedelta(
hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec
)
return str(time_delta)
if stringify_dict and isinstance(value, dict):
return json.dumps(value)
if isinstance(value, Decimal):
return float(value)
return value
```
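A simplified sketch of the kind of change this asks for (illustrative only, not the merged implementation): derive `stringify_dict` from the export format in the base operator instead of hardcoding `False`:
```python
# Stringify dicts when writing CSV, since a raw Python dict cannot be written to a CSV cell.
stringify_dict = self.export_format == "csv"
converted = self.convert_types(schema, col_type_dict, row, stringify_dict=stringify_dict)
```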
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26875 | https://github.com/apache/airflow/pull/26876 | bab6dbec3883084e5872123b515c2a8491c32380 | a67bcf3ecaabdff80c551cff1f987523211e7af4 | "2022-10-04T23:21:37Z" | python | "2022-10-06T08:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,812 | ["chart/values.schema.json", "tests/charts/test_webserver.py"] | Add NodePort Option to the values schema | ### Official Helm Chart version
1.6.0 (latest released)
### Apache Airflow version
2.3.4
### Kubernetes Version
1.23
### Helm Chart configuration
```
# shortened values.yaml file
webserver:
service:
type: NodePort
ports:
- name: airflow-ui
port: 80
targetPort: airflow-ui
nodePort: 8081 # Note this line does not work, this is what'd be nice to have for defining nodePort
```
### Docker Image customisations
_No response_
### What happened
Supplying nodePort like in the above Helm Chart Configuration example fails with an error saying a value of nodePort is not supported.
### What you think should happen instead
It'd be nice if we could define the nodePort we want the airflow-webserver service to listen on at launch. As it currently stands, supplying nodePort like in the above values.yaml example will fail, saying nodePort cannot be supplied. The workaround is to manually edit the webserver service post-deployment and specify the desired port for nodePort.
I looked at the way [the webserver service template file](https://github.com/apache/airflow/blob/main/chart/templates/webserver/webserver-service.yaml#L44-L50) is set up, and the logic there should allow this, but I believe the missing definition in the [schema.json](https://github.com/apache/airflow/blob/main/chart/values.schema.json#L3446-L3472) file is causing this to error out.
### How to reproduce
Attempt to install the Airflow Helm chart using the above values.yaml config
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26812 | https://github.com/apache/airflow/pull/26945 | be8a62e596a0dc0f935114a9d585007b497312a2 | cc571e8e0e27420d179870c8ddc7274c4f47ba44 | "2022-09-30T22:31:13Z" | python | "2022-11-14T00:57:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,802 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | pdb no longer works with airflow test command since 2.3.3 | Converted back to issue as I've reproduced it and traced the issue back to https://github.com/apache/airflow/pull/24362
### Discussed in https://github.com/apache/airflow/discussions/26352
Originally posted by **GuruComposer** on September 12, 2022.
### Apache Airflow version
2.3.4
### What happened
I used to be able to use ipdb to debug DAGs by running `airflow tasks test <dag_id> <task_id>` and hitting an ipdb breakpoint (`ipdb.set_trace()`).
This no longer works. I get a strange type error:
```
  File "/usr/local/lib/python3.9/bdb.py", line 88, in trace_dispatch
return self.dispatch_line(frame)
File "/usr/local/lib/python3.9/bdb.py", line 112, in dispatch_line
self.user_line(frame)
File "/usr/local/lib/python3.9/pdb.py", line 262, in user_line
self.interaction(frame, None)
File "/home/astro/.local/lib/python3.9/site-packages/IPython/core/debugger.py", line 336, in interaction
OldPdb.interaction(self, frame, traceback)
File "/usr/local/lib/python3.9/pdb.py", line 357, in interaction
self._cmdloop()
File "/usr/local/lib/python3.9/pdb.py", line 322, in _cmdloop
self.cmdloop()
File "/usr/local/lib/python3.9/cmd.py", line 126, in cmdloop
line = input(self.prompt)
TypeError: an integer is required (got type NoneType)
```
### What you think should happen instead
I should get the ipdb shell.
### How to reproduce
1. Add an ipdb breakpoint anywhere in an Airflow task: `import ipdb; ipdb.set_trace()`
2. Run that task with `airflow tasks test <dag_id> <task_id>` and hit the breakpoint.
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
2.3.4 | https://github.com/apache/airflow/issues/26802 | https://github.com/apache/airflow/pull/26806 | 677df102542ab85aab4efbbceb6318a3c7965e2b | 029ebacd9cbbb5e307a03530bdaf111c2c3d4f51 | "2022-09-30T13:51:53Z" | python | "2022-09-30T17:46:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,796 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py"] | Incorrect await_container_completion in KubernetesPodOperator | ### Apache Airflow version
2.3.4
### What happened
The [await_container_completion](https://github.com/apache/airflow/blob/2.4.0/airflow/providers/cncf/kubernetes/utils/pod_manager.py#L259) function has a negated condition
```
while not self.container_is_running(pod=pod, container_name=container_name):
```
that causes our Airflow tasks that run for less than 1 s to never be marked as completed, resulting in an infinite loop.
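To illustrate the point: with the negated check, a container that has already terminated is never observed as running, so the loop waits forever. Inverting the condition, roughly as below, makes the function wait only while the container is still running (a sketch of the idea, not a quote of any specific PR):
```python
while self.container_is_running(pod=pod, container_name=container_name):
    time.sleep(1)  # container still running; keep polling until it completes
```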
I see this was addressed and released in https://github.com/apache/airflow/pull/23883, but later reverted in https://github.com/apache/airflow/pull/24474. How come it was reverted? The thread on that revert PR with comments from @jedcunningham and @potiuk didn't really address why the fix was reverted.
### What you think should happen instead
Pods finishing within 1s should be properly handled.
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==4.3.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26796 | https://github.com/apache/airflow/pull/28771 | 4f603364d364586a2062b061ddac18c4b58596d2 | ce677862be4a7de777208ba9ba9e62bcd0415393 | "2022-09-30T07:47:04Z" | python | "2023-01-07T18:17:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,785 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | BigQueryHook Requires Optional Field When Parsing Results Schema | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google 8.3.0
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When querying BigQuery using the BigQueryHook, sometimes the following error is returned:
```
[2022-09-29, 19:04:57 UTC] {{taskinstance.py:1902}} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/operators/python.py", line 171, in execute
return_value = self.execute_callable()
File "/usr/local/lib/python3.9/site-packages/airflow/operators/python.py", line 189, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/usr/src/app/dags/test.py", line 23, in curried_bigquery
cursor.execute(UPDATE_HIGHWATER_MARK_QUERY)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2700, in execute
self.description = _format_schema_for_description(query_results["schema"])
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 3009, in _format_schema_for_description
field["mode"] == "NULLABLE",
KeyError: 'mode'
```
### What you think should happen instead
The schema should be returned without error.
### How to reproduce
_No response_
### Anything else
According to the official docs, only `name` and `type` are required to be present. `mode` is listed as optional.
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#TableFieldSchema
The code currently expects `mode` to be present.
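A defensive-parsing sketch of what the hook could do instead (illustrative only, not the actual `_format_schema_for_description` implementation):
```python
def format_field(field: dict) -> tuple:
    # Treat a missing "mode" as NULLABLE, mirroring the BigQuery default,
    # instead of assuming the key is always present.
    return (
        field["name"],
        field["type"],
        field.get("mode", "NULLABLE") == "NULLABLE",
    )


# A field without a "mode" key still parses:
print(format_field({"name": "id", "type": "INTEGER"}))
```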
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26785 | https://github.com/apache/airflow/pull/26786 | b7203cd36eef20de583df3e708f49073d689ac84 | cee610ae5cf14c117527cdfc9ac2ef0ddb5dcf3b | "2022-09-29T19:05:19Z" | python | "2022-10-01T13:47:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,774 | ["airflow/providers/trino/provider.yaml", "generated/provider_dependencies.json"] | Trino and Presto hooks do not properly execute statements other than SELECT | ### Apache Airflow Provider(s)
presto, trino
### Versions of Apache Airflow Providers
apache-airflow-providers-trino==4.0.1
apache-airflow-providers-presto==4.0.1
### Apache Airflow version
2.4.0
### Operating System
macOS 12.6
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When using the TrinoHook (the PrestoHook behaves the same way), only the `get_records()` and `get_first()` methods work as expected; `run()` and `insert_rows()` do not.
The SQL statements sent by the problematic methods reach the database (visible in logs and UI), but they don't get executed.
The issue is caused by the hook not making the required subsequent requests to the Trino HTTP endpoints after the first request. More info [here](https://trino.io/docs/current/develop/client-protocol.html#overview-of-query-processing):
> If the JSON document returned by the POST to /v1/statement does not contain a nextUri link, the query has completed, either successfully or unsuccessfully, and no additional requests need to be made. If the nextUri link is present in the document, there are more query results to be fetched. The client should loop executing a GET request to the nextUri returned in the QueryResults response object until nextUri is absent from the response.
SQL statements made by methods like `get_records()` do get executed because internally they call `fetchone()` or `fetchmany()` on the cursor, which do make the subsequent requests.
### What you think should happen instead
The Hook is able to execute SQL statements other than SELECT.
### How to reproduce
Connect to a Trino or Presto instance and execute any SQL statement (INSERT or CREATE TABLE) using `TrinoHook.run()`: the statement will reach the API but will not actually be executed.
Then, provide a dummy handler function like this:
`TrinoHook.run(..., handler=lambda cur: cur.description)`
The `description` property internally iterates over the cursor's requests, which causes the statement to actually be executed.
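A sketch of both calls (the connection id, catalog, and table names are assumptions for illustration):
```python
from airflow.providers.trino.hooks.trino import TrinoHook

hook = TrinoHook(trino_conn_id="trino_default")

# Reaches the server but never completes, because no follow-up requests are made:
hook.run("CREATE TABLE memory.default.example (x int)")

# Workaround: forcing the cursor to be consumed makes the statement complete.
hook.run(
    "CREATE TABLE memory.default.example (x int)",
    handler=lambda cur: cur.description,
)
```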
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26774 | https://github.com/apache/airflow/pull/27168 | e361be74cd800efe1df9fa5b00a0ad0df88fcbfb | 09c045f081feeeea09e4517d05904b38660f525c | "2022-09-29T11:32:29Z" | python | "2022-10-26T03:13:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,767 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | MaxID logic for GCSToBigQueryOperator Causes XCom Serialization Error | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google 8.4.0rc1
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
The Max ID parameter, when used, causes an XCom serialization failure when trying to retrieve the value back out of XCom.
### What you think should happen instead
Max ID value is returned from XCom call
### How to reproduce
Set `max_id_key` to a column name on the operator, then check the operator's XCom after it runs.
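For example (bucket, object, table, and column names are made up):
```python
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load_with_max_id = GCSToBigQueryOperator(
    task_id="load_with_max_id",
    bucket="example-bucket",
    source_objects=["data/part-*.csv"],
    destination_project_dataset_table="example_dataset.example_table",
    write_disposition="WRITE_APPEND",
    max_id_key="id",  # the max value of this column is pushed to XCom after the load
)
```
After the task runs, opening its XCom in the UI triggers the serialization failure.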
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26767 | https://github.com/apache/airflow/pull/26768 | 9a6fc73aba75a03b0dd6c700f0f8932f6a474ff7 | b7203cd36eef20de583df3e708f49073d689ac84 | "2022-09-29T03:03:25Z" | python | "2022-10-01T13:39:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,754 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "airflow/providers/amazon/aws/utils/connection_wrapper.py", "tests/providers/amazon/aws/utils/test_connection_wrapper.py"] | Cannot retrieve config from alternative secrets backend. | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
Providers included in apache/airflow:2.4.0 docker image:
```
apache-airflow==2.4.0
apache-airflow-providers-amazon==5.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.3.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-docker==3.1.0
apache-airflow-providers-elasticsearch==4.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.3.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.2.0
apache-airflow-providers-mysql==3.2.0
apache-airflow-providers-odbc==3.1.1
apache-airflow-providers-postgres==5.2.1
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.0.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.1.0
```
### Apache Airflow version
2.4
### Operating System
AWS Fargate
### Deployment
Docker-Compose
### Deployment details
We have configured the alternative secrets backend to use the AWS SSM Parameter Store:
```
[secrets]
backend = airflow.providers.amazon.aws.secrets.systems_manager.SystemsManagerParameterStoreBackend
backend_kwargs = {"config_prefix": "/airflow2/config", "connections_prefix": "/airflow2/connections", "variables_prefix": "/airflow2/variables"}
```
### What happened
All processes fail with:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 178, in _get_secret
response = self.client.get_parameter(Name=ssm_path, WithDecryption=True)
File "/home/airflow/.local/lib/python3.7/site-packages/cached_property.py", line 36, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 98, in client
from airflow.providers.amazon.aws.hooks.base_aws import SessionFactory
ImportError: cannot import name 'SessionFactory' from 'airflow.providers.amazon.aws.hooks.base_aws' (/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 122, in _get_config_value_from_secret_backend
return secrets_client.get_config(config_key)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 167, in get_config
return self._get_secret(self.config_prefix, key)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 180, in _get_secret
except self.client.exceptions.ParameterNotFound:
File "/home/airflow/.local/lib/python3.7/site-packages/cached_property.py", line 36, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 98, in client
from airflow.providers.amazon.aws.hooks.base_aws import SessionFactory
ImportError: cannot import name 'SessionFactory' from 'airflow.providers.amazon.aws.hooks.base_aws' (/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 178, in _get_secret
response = self.client.get_parameter(Name=ssm_path, WithDecryption=True)
File "/home/airflow/.local/lib/python3.7/site-packages/cached_property.py", line 36, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/secrets/systems_manager.py", line 98, in client
from airflow.providers.amazon.aws.hooks.base_aws import SessionFactory
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py", line 49, in <module>
from airflow.models.connection import Connection
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/connection.py", line 32, in <module>
from airflow.models.base import ID_LEN, Base
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/base.py", line 76, in <module>
COLLATION_ARGS = get_id_collation_args()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/base.py", line 70, in get_id_collation_args
conn = conf.get('database', 'sql_alchemy_conn', fallback='')
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 557, in get
option = self._get_option_from_secrets(deprecated_key, deprecated_section, key, section)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 577, in _get_option_from_secrets
option = self._get_secret_option(section, key)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 502, in _get_secret_option
return _get_config_value_from_secret_backend(secrets_path)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/configuration.py", line 125, in _get_config_value_from_secret_backend
'Cannot retrieve config from alternative secrets backend. '
airflow.exceptions.AirflowConfigException: Cannot retrieve config from alternative secrets backend. Make sure it is configured properly and that the Backend is accessible.
cannot import name 'SessionFactory' from 'airflow.providers.amazon.aws.hooks.base_aws' (/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/hooks/base_aws.py)
```
### What you think should happen instead
Airflow 2.3.4 was using amazon provider 5.0.0 and everything was working fine. Looking at the SystemsManagerParameterStoreBackend class, the client method changed in amazon 5.1.0 (coming with AF 2.4). There used to be a simple boto3.session call; the new code imports SessionFactory instead. I do not understand why this import fails, though.
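As a stop-gap, one could subclass the backend and pin `client` back to a plain boto3 session (a sketch only; the class name is made up and this bypasses whatever `SessionFactory` adds, e.g. assume-role support):
```python
import boto3

from airflow.providers.amazon.aws.secrets.systems_manager import (
    SystemsManagerParameterStoreBackend,
)


class PlainBotoSsmBackend(SystemsManagerParameterStoreBackend):
    @property
    def client(self):
        # Assumes default credential/region resolution is enough for this deployment.
        return boto3.session.Session().client("ssm")
```
The `[secrets] backend` option would then point at this class instead of the provider's.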
### How to reproduce
I assume anyone who sets up the Parameter Store as the secrets backend and tries to use the docker image (FROM apache/airflow:2.4.0) will run into this issue.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26754 | https://github.com/apache/airflow/pull/26784 | 8898db999c88c98b71f4a5999462e6858aab10eb | 8a1bbcfcb31c1adf5c0ea2dff03b507f584ad1f3 | "2022-09-28T15:37:01Z" | python | "2022-10-06T16:38:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,607 | ["airflow/www/views.py"] | resolve web ui views warning re DISTINCT ON | ### Body
Got this warning in the webserver output when loading the home page:
```
/Users/dstandish/code/airflow/airflow/www/views.py:710 SADeprecationWarning: DISTINCT ON is currently supported only by the PostgreSQL dialect. Use of DISTINCT ON for other backends is currently silently ignored, however this usage is deprecated, and will raise CompileError in a future release for all backends that do not support this syntax.
```
It looks like it's this line:
```
dagtags = session.query(DagTag.name).distinct(DagTag.name).all()
```
We may be able to change it to `func.distinct`.
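Something along these lines might work (untested sketch, same `session` and `DagTag` as in the line quoted above):
```python
from sqlalchemy import func

dagtags = session.query(func.distinct(DagTag.name)).all()
```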
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26607 | https://github.com/apache/airflow/pull/26608 | 7179eba69e9cb40c4122f3589c5977e536469b13 | 55d11464c047d2e74f34cdde75d90b633a231df2 | "2022-09-22T21:05:43Z" | python | "2022-09-23T08:52:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,571 | ["airflow/providers/amazon/aws/secrets/secrets_manager.py", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager-json.png", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager-uri.png", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager.png", "docs/apache-airflow-providers-amazon/secrets-backends/aws-secrets-manager.rst", "tests/providers/amazon/aws/secrets/test_secrets_manager.py"] | Migrate Amazon Provider Package's `SecretsManagerBackend`'s `full_url_mode=False` implementation. | # Objective
I am following up on all the changes I've made in PR #25432 and which were originally discussed in issue #25104.
The objective of the deprecations introduced in #25432 is to flag and remove "odd" behaviors in the `SecretsManagerBackend`.
The objective of _this issue_ being opened is to discuss them, and hopefully reach a consensus on how to move forward implementing the changes.
I realize that a lot of the changes I made and their philosophy were under-discussed, so I will place the discussion here.
## What does it mean for a behavior to be "odd"?
You can think of the behaviors of `SecretsManagerBackend`, and which secret encodings it supports, as a Venn diagram.
Ideally, `SecretsManagerBackend` supports _at least_ everything the base API supports. This is a pretty straightforward "principle of least astonishment" requirement.
For example, it would be "astonishing" if copy+pasting a secret that works with the base API did not work in the `SecretsManagerBackend`.
To be clear, it would also be "astonishing" if the reverse were not true-- i.e. copy+pasting a valid secret from `SecretsManagerBackend` doesn't work with, say, environment variables. That said, adding new functionality is less astonishing than when a subclass doesn't inherit 100% of the supported behaviors of what it is subclassing. So although adding support for new secret encodings is arguably not desirable (all else equal), I think we can all agree it's not as bad as the reverse.
## Examples
I will cover two examples where we can see the "Venn diagram" nature of the secrets backend, and how some behaviors that are supported in one implementation are not supported in another:
### Example 1
Imagine the following environment variable secret:
```shell
export AIRFLOW_CONN_POSTGRES_DEFAULT='{
"conn_type": "postgres",
"login": "usr",
"password": "not%url@encoded",
"host": "myhost"
}'
```
Prior to #25432, this was _**not**_ a secret that worked with the `SecretsManagerBackend`, even though it did work with base Airflow's `EnvironmentVariablesBackend`(as of 2.3.0) due to the values not being URL-encoded.
Additionally, the `EnvironmentVariablesBackend` is smart enough to choose whether a secret should be treated as a JSON or a URI _without having to be explicitly told_. In a sense, this is also an incompatibility-- why should the `EnvironmentVariablesBackend` be "smarter" than the `SecretsManagerBackend` when it comes to discerning JSONs from URIs, and supporting both at the same time rather than needing secrets to be always one type of serialization?
### Example 2
Imagine the following environment variable secret:
```shell
export AIRFLOW_CONN_POSTGRES_DEFAULT="{
'conn_type': 'postgres',
'user': 'usr',
'pass': 'is%20url%20encoded',
'host': 'myhost'
}"
```
This is _not_ a valid secret in Airflow's base `EnvironmentVariablesBackend` implementation, although it _is_ a valid secret in `SecretsManagerBackend`.
There are two things that make it invalid in the `EnvironmentVariablesBackend` but valid in `SecretsManagerBackend`:
- `ast.litera_eval` in `SecretsManagerBackend` means that a Python dict repr is allowed to be read in.
- `user` and `pass` are invalid field names in the base API; these should be `login` and `password`, respectively. But the `_standardize_secret_keys()` method in the `SecretsManagerBackend` implementation makes them valid.
Additionally, note that this secret also parses differently in the `SecretsManagerBackend` than the `EnvironmentVariablesBackend`: the password `"is%20url%20encoded"` renders as `"is url encoded"` in the `SecretsManagerBackend`, but is left untouched by the base API when using a JSON.
## List of odd behaviors
Prior to #25432, the following behaviors were a part of the `SecretsManagerBackend` specification that I would consider "odd" because they are not part of the base API implementation:
1. `full_url_mode=False` still required URL-encoded parameters for JSON values.
2. `ast.literal_eval` was used instead of `json.dumps`, which means that the `SecretsManagerBackend` on `full_url_mode=False` was supporting Python dict reprs and other non-JSONs.
3. The Airflow config required setting `full_url_mode=False` to determine what is a JSON or URI.
4. `get_conn_value()` always must return a URI.
5. The `SecretsManagerBackend` allowed for atypical / flexible field names (such as `user` instead of `login`) via the `_standardize_secret_keys()` method.
We also introduced a new odd behavior in order to assist with the migration effort, which is:
6. New kwarg called `are_secret_values_urlencoded` to support secrets whose encodings are "non-idempotent".
In the below sections, I discuss each behavior in more detail, and I've added my own opinions about how odd these behaiors are and the estimated adverse impact of removing the behaviors.
### Behavior 1: URL-encoding JSON values
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated|High|High|
This was the original behavior that frustrated me and motivated me to open issues + submit PRs.
With the "idempotency" check, I've done my best to smooth out the transition away from URL-encoded JSON values.
The requirement is now _mostly_ removed, to the extent that the removal of this behavior can be backwards compatible and as smooth as possible:
- Users whose secrets do not contain special characters will not have even noticed a change took place.
- Users who _do_ have secrets with special characters hopefully are checking their logs and will have seen a deprecation warning telling them to remove the URL-encoding.
- In a _select few rare cases_ where a user has a secret with a "non-idempotent" encoding, the user will have to reconfigure their `backend_kwargs` to have `are_secret_values_urlencoded` set.
I will admit that I was surprised at how smooth we could make the developer experience around migrating this behavior for the majority of use cases.
When you consider...
- How smooth migrating is (just remove the URL-encoding! In most cases you don't need to do anything else!), and
- How disruptive full removal of URL-encoding is (to people who have not migrated yet),
.. it makes me almost want to hold off on fully removing this behavior for a little while longer, just to make sure developers read their logs and see the deprecation warning.
### Behavior 2: `ast.literal_eval` for deserializing JSON secrets
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated|High|Low|
It is hard to feel bad for anyone who is adversely impacted by this removal:
- This behavior should never have been introduced
- This behavior was never a documented behavior
- A reasonable and educated user will have known better than to rely on non-JSONs.
Providing a `DeprecationWarning` for this behavior was already going above and beyond, and we can definitely remove this soon.
### Behavior 3: `full_url_mode=False` is required for JSON secrets
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Active|Medium|Low|
This behavior is odd because the base API does not require such a thing-- whether it is a JSON or a URI can be inferred by checking whether the first character of the string is `{`.
Because it is possible to modify this behavior without introducing breaking changes, moving from _lack_ of optionality for the `full_url_mode` kwarg can be considered a feature addition.
Ultimately, we would want to switch from `full_url_mode: bool = True` to `full_url_mode: bool | None = None`.
In the proposed implementation, when `full_url_mode is None`, we just use whether the value starts with `{` to check if it is a JSON. _Only when_ `full_url_mode` is a `bool` would we explicitly raise errors e.g. if a JSON was given when `full_url_mode=True`, or a URI was given when `full_url_mode=False`.
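A sketch of that inference (function and variable names are illustrative):
```python
import json
from typing import Optional, Union


def deserialize_connection_value(value: str, full_url_mode: Optional[bool] = None) -> Union[str, dict]:
    if full_url_mode is None:
        # Infer the format instead of requiring configuration.
        full_url_mode = not value.strip().startswith("{")
    if full_url_mode:
        return value  # treat as a URI
    return json.loads(value)  # treat as a JSON document
```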
### Behavior 4: `get_conn_value()` must return URI
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated + Active (until at least October 11th)|Low|Medium|
The idea that the callback invoked by `get_connection()` (now called `get_conn_value()`, but previously called `get_conn_uri()`) can return a JSON is a new Airflow 2.3.0 behavior.
This behavior _**cannot**_ change until at least October 11th, because it is required for `2.2.0` backwards compatibility. Via Airflow's `README.md`:
> [...] by default we upgrade the minimum version of Airflow supported by providers to 2.3.0 in the first Provider's release after 11th of October 2022 (11th of October 2021 is the date when the first PATCHLEVEL of 2.2 (2.2.0) has been released.
Changing this behavior _after_ October 11th is just a matter of whether maintainers are OK with introduce a breaking change to the `2.2.x` folks who are relying on JSON secrets.
Note that right now, `get_conn_value()` is avoided _entirely_ when `full_url_mode=False` and `get_connection()` is called. So although there is a deprecation warning, it is almost never hit.
```python
if self.full_url_mode:
    return self._get_secret(self.connections_prefix, conn_id)
else:
    warnings.warn(
        f'In future versions, `{type(self).__name__}.get_conn_value` will return a JSON string when'
        ' full_url_mode is False, not a URI.',
        DeprecationWarning,
    )
```
### Behavior 5: Flexible field names via `_standardize_secret_keys()`
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Active|Medium|High|
This is one of those things that is very hard to remove. Removing it can be quite disruptive!
It is also a low priority to remove because unlike some other behaviors, it does not detract from `SecretsManagerBackend` being a "strict superset" with the base API.
Maybe it will just be the case that `SecretsManagerBackend` has this incompatibility with the base API going forward indefinitely?
Even still, we should consider the two following proposals:
1. The default field name of `user` should probably be switched to `login`, both in the `dict[str, list[str]]` used to implement the find+replace (an illustrative mapping is sketched at the end of this section), and also in the documentation. I do not foresee any issues with doing this.
2. Remove documentation for this feature if we think it is "odd" enough to warrant discouraging users from seeking it out.
I think # 1 should be uncontroversial, but # 2 may be more controversial. I do not want to detract from my other points by staking out too firm an opinion on this one, so the best solution may simply be to not touch this for now. In fact, not touching this is exactly what I did with the original PR.
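For reference, proposal # 1 amounts to changing the canonical key in a mapping shaped roughly like this (illustrative shape only, not the exact provider code):
```python
possible_words_for_conn_fields = {
    "login": ["login", "user", "username", "user_name"],  # today the canonical key is "user"
    "password": ["password", "pass", "key"],
    "host": ["host", "remote_host", "server"],
    "port": ["port"],
    "schema": ["schema", "database"],
}
```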
### Behavior 6: `are_secret_values_urlencoded` kwarg
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Pending Deprecation|Medium|Medium|
In the original discussion #25104, @potiuk told me to add something like this. I tried my best to avoid users needing to do this, hence the "idempotency" check. So only a few users actually need to specify this to assist in the migration of their secrets.
This was introduced as a "pending" deprecation because, frankly, URL-encoding these JSON values was an odd behavior to begin with, and the kwarg only exists to aid in migrating secrets. In our ideal end state, this doesn't exist.
Eventually when it comes time, removing this will not be _all_ that disruptive:
- This only impacts users who have `full_url_mode=False`
- This only impacts users with secrets that have non-idempotent encodings.
- `are_secret_values_urlencoded` should be set to `False`. Users should never be manually setting to `True`!
So we're talking about a small percent of a small minority of users who will ever see or need to set this `are_secret_values_urlencoded` kwarg. And even then, they should have set `are_secret_values_urlencoded` to `False` to assist in migrating.
# Proposal for Next Steps
All three steps require breaking changes.
## Proposed Step 1
- Remove: **Behavior 2: `ast.literal_eval` for deserializing JSON secrets**
- Remove: **Behavior 3: `full_url_mode=False` is required for JSON secrets**
- Remove: **Behavior 4: `get_conn_value()` must return URI**
- Note: Must wait until at least October 11th!
Right now the code is frankly a mess. I take some blame for that, as I did introduce the mess. But the mess is all inservice of backwards compatibility.
Removing Behavior 4 _**vastly**_ simplifies the code, and means we don't need to continue overriding the `get_connection()` method.
Removing Behavior 2 also simplifies the code, and is a fairly straightforward change.
Removing Behavior 3 is fully backwards compatible (so no deprecation warnings required) and provides a much nicer user experience overall.
The main thing blocking "Proposed Step 1" is the requirement that `2.2.x` be supported until at least October 11th.
### Alternative to Proposed Step 1
It _is_ possible to remove Behavior 2 and Behavior 3 without removing Behavior 4, and do so in a way that keeps `2.2.x` backwards compatibility.
It will still however leave a mess of the code.
I am not sure how eager the Amazon Provider Package maintainers are to keep backwards compatibility here. Between the 1 year window, plus the fact that this can only possibly impact people using both the `SecretsManagerBackend` _and_ who have `full_url_mode=False` turned on, it seems like not an incredibly huge deal to just scrap support for `2.2.x` here when the time comes. But it is not appropriate for me to decide this, so I must be clear and say that we _can_ start trimming away some of the odd behaviors _without_ breaking Airflow `2.2.x` backwards compatibility, and that the main benefit of breaking backwards compatibility is the source code becoming way simpler.
## Proposed Step 2
- Remove: **Behavior 1: URL-encoding JSON values**
- Switch status from Pending Deprecation to Deprecation: **Behavior 6: `are_secret_values_urlencoded` kwarg**
Personally, I don't think we should rush on this. The reason I think we should take our time here is because the current way this works is surprisingly non-disruptive (i.e. no config changes required to migrate for most users), but fully removing the behavior may be pretty disruptive, especially to people who don't read their logs carefully.
### Alternative to Proposed Step 2
The alternative to this step is to combine this step with step 1, instead of holding off for a future date.
The main arguments in favor of combining with step 1 are:
- Reducing the number of version bumps that introduce breaking changes by simply combining all breaking changes into one step. It's unclear how many users even use `full_url_mode` and it is arguable that we're being too delicate with what was arguably a semi-experimental and odd feature to begin with; it's only become less experimental by the stroke of luck that Airflow 2.3.0 supports JSON-encoded secrets in the base API.
- A sort of "rip the BandAid" ethos, or a "get it done and over with" ethos. I don't think this is very nice to users, but I see the appeal of not keeping odd code around for a while.
## Proposed Step 3
- Remove: **Behavior 6: `are_secret_values_urlencoded` kwarg**
Once URL-encoding is no longer happening for JSON secrets, and all non-idempotent secrets have been cast or explicitly handled, and we've deprecated everything appropriately, we can finally remove `are_secret_values_urlencoded`.
# Conclusion
The original deprecations introduced were under-discussed, but hopefully now you both know where I was coming from, and also agree with the changes I made.
If you _disagree_ with the deprecations that I introduced, I would also like to hear about that and why, and we can see about rolling any of them back.
Please let me know what you think about the proposed steps for changes to the code base.
Please also let me know what you think an appropriate schedule is for introducing the changes, and whether you think I should consider one of the alternatives (both considered and otherwise) to the steps I outlined in the penultimate section.
# Other stuff
### Use case/motivation
(See above)
### Related issues
- #25432
- #25104
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26571 | https://github.com/apache/airflow/pull/27920 | c8e348dcb0bae27e98d68545b59388c9f91fc382 | 8f0265d0d9079a8abbd7b895ada418908d8b9909 | "2022-09-21T18:31:22Z" | python | "2022-12-05T19:21:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,566 | ["docs/apache-airflow/concepts/tasks.rst"] | Have SLA docs reflect reality | ### What do you see as an issue?
The [SLA documentation](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html#slas) currently states the following:
> An SLA, or a Service Level Agreement, is an expectation for the maximum time a Task should take. If a task takes longer than this to run...
However this is not how SLAs currently work in Airflow, the SLA time is calculated from the start of the DAG not from the start of the task.
For example, with a DAG like the one below, the SLA always triggers 5 minutes after the DAG run starts, even though the task itself never takes 5 minutes to run:
```python
import datetime
from airflow import DAG
from airflow.sensors.time_sensor import TimeSensor
from airflow.operators.python import PythonOperator
with DAG(dag_id="my_dag", start_date=datetime.datetime(2022, 1, 1), schedule_interval="0 0 * * *") as dag:
    wait_time_mins = TimeSensor(task_id="wait_time_mins", target_time=datetime.time(minute=10))
    run_fast = PythonOperator(
        task_id="run_fast",
        python_callable=lambda *a, **kw: True,
        sla=datetime.timedelta(minutes=5),
    )
    run_fast.set_upstream(wait_time_mins)
```
### Solving the problem
Update the docs to explain how SLAs work in reality.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26566 | https://github.com/apache/airflow/pull/27111 | 671029bebc33a52d96f9513ae997e398bd0945c1 | 639210a7e0bfc3f04f28c7d7278292d2cae7234b | "2022-09-21T16:00:36Z" | python | "2022-10-27T14:34:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,565 | ["docs/apache-airflow/core-concepts/executor/local.rst"] | Documentation unclear about multiple LocalExecutors on HA Scheduler deployment | ### What do you see as an issue?
According to Airflow documentation, it's now possible to run multiple Airflow Schedulers starting with Airflow 2.x.
What's not clear from the documentation is what happens if each of the machines running the Scheduler has `executor = LocalExecutor` in the `[core]` section of `airflow.cfg`. In this context, if I have the Airflow Scheduler running on 3 machines, does this mean that there will also be 3 LocalExecutors processing tasks in a distributed fashion?
### Solving the problem
Enhancing documentation to clarify the details about multiple LocalExecutors on HA Scheduler deployment
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26565 | https://github.com/apache/airflow/pull/32310 | 61f33304d587b3b0a48a876d3bfedab82e42bacc | e53320d62030a53c6ffe896434bcf0fc85803f31 | "2022-09-21T15:53:02Z" | python | "2023-07-05T09:22:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,555 | ["airflow/cli/commands/task_command.py", "tests/cli/commands/test_task_command.py"] | "airflow tasks render/state" cli commands do not work for mapped task instances | ### Apache Airflow version
Other Airflow 2 version
### What happened
Running the following CLI command:
```
airflow tasks render test-dynamic-mapping consumer scheduled__2022-09-18T15:14:15.107780+00:00 --map-index
```
fails with this exception:
```
Traceback (most recent call last):
File "/opt/python3.8/bin/airflow", line 8, in <module>
sys.exit(main())
File "/opt/python3.8/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 101, in wrapper
return f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 337, in _wrapper
f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 576, in task_render
for attr in task.__class__.template_fields:
TypeError: 'member_descriptor' object is not iterable
```
Running the following CLI command:
```
airflow tasks state test-dynamic-mapping consumer scheduled__2022-09-18T15:14:15.107780+00:00 --map-index
```
fails with this exception:
```
Traceback (most recent call last):
File "/opt/python3.8/bin/airflow", line 8, in <module>
sys.exit(main())
File "/opt/python3.8/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 101, in wrapper
return f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/cli.py", line 337, in _wrapper
f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 422, in task_state
print(ti.current_state())
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 849, in current_state
session.query(TaskInstance.state)
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2879, in scalar
ret = self.one()
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2856, in one
return self._iter().one()
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 1190, in one
return self._only_one_row(
File "/opt/python3.8/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 613, in _only_one_row
raise exc.MultipleResultsFound(
sqlalchemy.exc.MultipleResultsFound: Multiple rows were found when exactly one was required
```
### What you think should happen instead
Command successfully executed
### How to reproduce
_No response_
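A minimal DAG of the shape implied by the failing commands above (assumed reproduction; the report leaves this section blank):
```python
from datetime import datetime

from airflow import DAG
from airflow.decorators import task

with DAG(dag_id="test-dynamic-mapping", start_date=datetime(2022, 9, 1), schedule_interval=None):

    @task
    def make_values():
        return [1, 2, 3]

    @task
    def consumer(value):
        print(value)

    consumer.expand(value=make_values())
```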
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26555 | https://github.com/apache/airflow/pull/28698 | a7e1cb2fbfc684508f4b832527ae2371f99ad37d | 1da17be37627385fed7fc06584d72e0abda6a1b5 | "2022-09-21T13:56:19Z" | python | "2023-01-04T20:43:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,548 | ["airflow/models/renderedtifields.py", "airflow/utils/sqlalchemy.py"] | Resolve warning about renderedtifields query | ### Body
This warning is emitted when running a task instance, at least on mysql:
```
[2022-09-21, 05:22:56 UTC] {logging_mixin.py:117} WARNING -
/home/airflow/.local/lib/python3.8/site-packages/airflow/models/renderedtifields.py:258
SAWarning: Coercing Subquery object into a select() for use in IN();
please pass a select() construct explicitly
```
Need to resolve.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26548 | https://github.com/apache/airflow/pull/26667 | 22d52c00f6397fde8d97cf2479c0614671f5b5ba | 0e79dd0b1722a610c898da0ba8557b8a94da568c | "2022-09-21T05:26:52Z" | python | "2022-09-26T13:49:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,544 | ["airflow/utils/db.py"] | Choose setting for sqlalchemy SQLALCHEMY_TRACK_MODIFICATIONS | ### Body
We need to determine what to do about this warning:
```
/Users/dstandish/.virtualenvs/2.4.0/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py:872 FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
```
Should we set it to true or false?
@ashb @potiuk @jedcunningham @uranusjr
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26544 | https://github.com/apache/airflow/pull/26617 | 3396d1f822caac7cbeb14e1e67679b8378a84a6c | 051ba159e54b992ca0111107df86b8abfd8b7279 | "2022-09-21T00:57:27Z" | python | "2022-09-23T07:18:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,529 | ["airflow/serialization/serialized_objects.py", "docs/apache-airflow/best-practices.rst", "docs/apache-airflow/concepts/timetable.rst", "tests/serialization/test_dag_serialization.py"] | Variable.get inside of a custom Timetable breaks the Scheduler | ### Apache Airflow version
2.3.4
### What happened
If you try to use `Variable.get` from inside of a custom Timetable, the Scheduler will break with errors like:
```
scheduler | [2022-09-20 10:19:36,104] {variable.py:269} ERROR - Unable to retrieve variable from secrets backend (MetastoreBackend). Checking subsequent secrets backend.
scheduler | Traceback (most recent call last):
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/variable.py", line 265, in get_variable_from_secrets
scheduler | var_val = secrets_backend.get_variable(key=key)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
scheduler | return func(*args, session=session, **kwargs)
scheduler | File "/opt/conda/envs/production/lib/python3.9/contextlib.py", line 126, in __exit__
scheduler | next(self.gen)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 33, in create_session
scheduler | session.commit()
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1435, in commit
scheduler | self._transaction.commit(_to_root=self.future)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 829, in commit
scheduler | self._prepare_impl()
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 797, in _prepare_impl
scheduler | self.session.dispatch.before_commit(self.session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/sqlalchemy/event/attr.py", line 343, in __call__
scheduler | fn(*args, **kw)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/sqlalchemy.py", line 341, in _validate_commit
scheduler | raise RuntimeError("UNEXPECTED COMMIT - THIS WILL BREAK HA LOCKS!")
scheduler | RuntimeError: UNEXPECTED COMMIT - THIS WILL BREAK HA LOCKS!
scheduler | [2022-09-20 10:19:36,105] {plugins_manager.py:264} ERROR - Failed to import plugin /home/tsanders/airflow_standalone_sqlite/plugins/custom_timetable.py
scheduler | Traceback (most recent call last):
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/plugins_manager.py", line 256, in load_plugins_from_plugin_directory
scheduler | loader.exec_module(mod)
scheduler | File "<frozen importlib._bootstrap_external>", line 850, in exec_module
scheduler | File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
scheduler | File "/home/tsanders/airflow_standalone_sqlite/plugins/custom_timetable.py", line 9, in <module>
scheduler | class CustomTimetable(CronDataIntervalTimetable):
scheduler | File "/home/tsanders/airflow_standalone_sqlite/plugins/custom_timetable.py", line 10, in CustomTimetable
scheduler | def __init__(self, *args, something=Variable.get('something'), **kwargs):
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/variable.py", line 138, in get
scheduler | raise KeyError(f'Variable {key} does not exist')
scheduler | KeyError: 'Variable something does not exist'
scheduler | [2022-09-20 10:19:36,179] {scheduler_job.py:769} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
scheduler | Traceback (most recent call last):
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 752, in _execute
scheduler | self._run_scheduler_loop()
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 840, in _run_scheduler_loop
scheduler | num_queued_tis = self._do_scheduling(session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 914, in _do_scheduling
scheduler | self._start_queued_dagruns(session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1086, in _start_queued_dagruns
scheduler | dag = dag_run.dag = self.dagbag.get_dag(dag_run.dag_id, session=session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
scheduler | return func(*args, **kwargs)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/dagbag.py", line 179, in get_dag
scheduler | self._add_dag_from_db(dag_id=dag_id, session=session)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/dagbag.py", line 254, in _add_dag_from_db
scheduler | dag = row.dag
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/models/serialized_dag.py", line 209, in dag
scheduler | dag = SerializedDAG.from_dict(self.data) # type: Any
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1099, in from_dict
scheduler | return cls.deserialize_dag(serialized_obj['dag'])
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1021, in deserialize_dag
scheduler | v = _decode_timetable(v)
scheduler | File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 189, in _decode_timetable
scheduler | raise _TimetableNotRegistered(importable_string)
scheduler | airflow.serialization.serialized_objects._TimetableNotRegistered: Timetable class 'custom_timetable.CustomTimetable' is not registered
```
Note that in this case, the Variable in question *does* exist, and the `KeyError` is a red herring.
If you add a `default_var`, things seem to work, though I wouldn't trust it since there is clearly some context where it will fail to load the Variable and will always fall back to the default. Additionally, this still raises the `UNEXPECTED COMMIT - THIS WILL BREAK HA LOCKS!` error, which I assume is a bad thing.
### What you think should happen instead
I'm not sure whether or not this should be allowed. In my case, I was able to work around the error by making all Timetable initializer args required (no default values) and pulling the `Variable.get` out into a wrapper function.
### How to reproduce
`custom_timetable.py`
```
#!/usr/bin/env python3
from __future__ import annotations
from airflow.models.variable import Variable
from airflow.plugins_manager import AirflowPlugin
from airflow.timetables.interval import CronDataIntervalTimetable
class CustomTimetable(CronDataIntervalTimetable):
def __init__(self, *args, something=Variable.get('something'), **kwargs):
self._something = something
super().__init__(*args, **kwargs)
class CustomTimetablePlugin(AirflowPlugin):
name = 'custom_timetable_plugin'
timetables = [CustomTimetable]
```
`test_custom_timetable.py`
```
#!/usr/bin/env python3
import datetime
import pendulum
from airflow.decorators import dag, task
from custom_timetable import CustomTimetable
@dag(
start_date=datetime.datetime(2022, 9, 19),
timetable=CustomTimetable(cron='0 0 * * *', timezone=pendulum.UTC),
)
def test_custom_timetable():
@task
def a_task():
print('hello')
a_task()
dag = test_custom_timetable()
if __name__ == '__main__':
dag.cli()
```
```
airflow variables set something foo
airflow dags trigger test_custom_timetable
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
I was able to reproduce this with:
* Standalone mode, SQLite DB, SequentialExecutor
* Self-hosted deployment, Postgres DB, CeleryExecutor
### Anything else
Related: #21895
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26529 | https://github.com/apache/airflow/pull/26649 | 26f94c5370587f73ebd935cecf208c6a36bdf9b6 | 37c0cb6d3240062106388449cf8eed9c948fb539 | "2022-09-20T16:02:09Z" | python | "2022-09-26T22:01:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,499 | ["airflow/models/xcom_arg.py"] | Dynamic task mapping zip() iterates unexpected number of times | ### Apache Airflow version
2.4.0
### What happened
When running `zip()` with different-length lists, I get an unexpected result:
```python
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
with DAG(
dag_id="demo_dynamic_task_mapping_zip",
start_date=datetime(2022, 1, 1),
schedule=None,
):
@task
def push_letters():
return ["a", "b", "c"]
@task
def push_numbers():
return [1, 2, 3, 4]
@task
def pull(value):
print(value)
pull.expand(value=push_letters().zip(push_numbers()))
```
Iterates over `[("a", 1), ("b", 2), ("c", 3), ("a", 1)]`, so it iterates for the length of the longest collection, but restarts iterating elements when reaching the length of the shortest collection.
I would expect it to behave like Python's builtin `zip` and iterate for the length of the shortest collection, so 3x in the example above, i.e. `[("a", 1), ("b", 2), ("c", 3)]`.
Additionally, I went digging in the source code and found the `fillvalue` argument which works as expected:
```python
pull.expand(value=push_letters().zip(push_numbers(), fillvalue="foo"))
```
Iterates over `[("a", 1), ("b", 2), ("c", 3), ("foo", 4)]`.
However, with `fillvalue` not set, I would expect it to iterate only for the length of the shortest collection.
### What you think should happen instead
I expect `zip()` to iterate over the number of elements of the shortest collection (without `fillvalue` set).
### How to reproduce
See above.
### Operating System
MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
OSS Airflow
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26499 | https://github.com/apache/airflow/pull/26636 | df3bfe3219da340c566afc9602278e2751889c70 | f219bfbe22e662a8747af19d688bbe843e1a953d | "2022-09-19T18:51:49Z" | python | "2022-09-26T09:02:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,497 | ["airflow/migrations/env.py", "airflow/migrations/versions/0118_2_4_2_add_missing_autoinc_fab.py", "airflow/migrations/versions/0119_2_5_0_add_updated_at_to_dagrun_and_ti.py", "airflow/settings.py", "airflow/utils/db.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/migrations-ref.rst"] | Upgrading to airflow 2.4.0 from 2.3.4 causes NotNullViolation error | ### Apache Airflow version
2.4.0
### What happened
Stopped existing processes, upgraded from airflow 2.3.4 to 2.4.0, and ran airflow db upgrade successfully. Upon restarting the services, I'm not seeing any dag runs from the past 10 days. I kick off a new job, and I don't see it show up in the grid view. Upon checking the systemd logs, I see that there are a lot of Postgres errors from the webserver. Below is a sample of such errors.
```
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'Datasets'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,183] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 13, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 13, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,209] {manager.py:420} ERROR - Add View Menu Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, Datasets).
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'Datasets'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,212] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 17, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 17, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,229] {manager.py:420} ERROR - Add View Menu Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, DAG Warnings).
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'DAG Warnings'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,232] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 17, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 17, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,250] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 13, 23).
```
I tried running `airflow db check`, `init`, `check-migration`, and `upgrade`; they all completed without errors, but the webserver errors remain.
Please let me know if I missed any steps during the upgrade, or if this is a known issue with a workaround.
### What you think should happen instead
All dag runs should be visible
### How to reproduce
upgrade airflow, upgrade db, restart the services
### Operating System
Ubuntu 18.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26497 | https://github.com/apache/airflow/pull/26885 | 2f326a6c03efed8788fe0263df96b68abb801088 | 7efdeed5eccbf5cb709af40c8c66757e59c957ed | "2022-09-19T18:13:02Z" | python | "2022-10-07T16:37:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,492 | ["airflow/utils/log/file_task_handler.py"] | Cannot fetch log from Celery worker | ### Discussed in https://github.com/apache/airflow/discussions/26490
<div type='discussions-op-text'>
<sup>Originally posted by **emredjan** September 19, 2022</sup>
### Apache Airflow version
2.4.0
### What happened
When running tasks on a remote Celery worker, the webserver fails to fetch logs from that machine, giving a '403 - Forbidden' error on version 2.4.0. This behavior does not happen on 2.3.3, where the remote logs are retrieved and displayed successfully.
The `webserver / secret_key` configuration is the same in all nodes (the config files are synced), and their time is synchronized using a central NTP server, making the solution in the warning message not applicable.
My limited analysis pointed to the `serve_logs.py` file, and the flask request object that's passed to it, but couldn't find the root cause.
### What you think should happen instead
It should fetch and show remote celery worker logs on the webserver UI correctly, as it did in previous versions.
### How to reproduce
Use airflow version 2.4.0
Use CeleryExecutor with RabbitMQ
Use a separate Celery worker machine
Run a dag/task on the remote worker
Try to display task log on the web UI
### Operating System
Red Hat Enterprise Linux 8.6 (Ootpa)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-hashicorp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-mssql==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-odbc==3.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
```
### Deployment
Virtualenv installation
### Deployment details
Using CeleryExecutor / rabbitmq with 2 servers
### Anything else
All remote task executions have the same problem.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/26492 | https://github.com/apache/airflow/pull/26493 | b9c4e98d8f8bcc129cbb4079548bd521cd3981b9 | 52560b87c991c9739791ca8419219b0d86debacd | "2022-09-19T14:10:25Z" | python | "2022-09-19T16:37:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,425 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | get_dags does not fetch more than 100 dags. | Hi,
The function does not return more than 100 DAGs even when setting the limit to more than 100. So `get_dags(limit=500)` will only return a maximum of 100 DAGs.
I have to do the hack to mitigate this problem.
```
def _get_dags(self, max_dags: int = 500):
    i = 0
    responses = []
    while i <= max_dags:
        response = self._api.get_dags(offset=i)
        responses += response['dags']
        i = i + 100
    return [dag['dag_id'] for dag in responses]
```
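For context, this cap most likely comes from the REST API's `maximum_page_limit` setting, which defaults to 100 and silently clamps any larger `limit`. Raising it (or paging as in the hack above) works around the behavior; a minimal sketch, assuming the standard config section:
```
# airflow.cfg on the webserver -- or set AIRFLOW__API__MAXIMUM_PAGE_LIMIT=500
[api]
maximum_page_limit = 500
```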
Versions I am using are:
```
apache-airflow==2.3.2
apache-airflow-client==2.3.0
```
and
```
apache-airflow==2.2.2
apache-airflow-client==2.1.0
```
Best,
Hamid | https://github.com/apache/airflow/issues/27425 | https://github.com/apache/airflow/pull/29773 | a0e13370053452e992d45e7956ff33290563b3a0 | 228d79c1b3e11ecfbff5a27c900f9d49a84ad365 | "2022-09-16T22:11:08Z" | python | "2023-02-26T16:19:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,427 | ["airflow/www/static/js/main.js", "airflow/www/utils.py"] | Can not get task which status is null | ### Apache Airflow version
Other Airflow 2 version
### What happened
In the List Task Instance view of the Airflow web UI, when we search for task instances whose state is null, the result is: no records found.
### What you think should happen instead
It should list the task instances whose state is null.
### How to reproduce
use airflow webui
List Task Instance
add filter
state equal to null
### Operating System
oracle linux
### Versions of Apache Airflow Providers
2.2.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26427 | https://github.com/apache/airflow/pull/26584 | 64622929a043436b235b9fb61fb076c5d2e02124 | 8e2e80a0ce0e1819874e183fb1662e879cdd8a08 | "2022-09-16T06:41:55Z" | python | "2022-10-11T19:31:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,424 | ["airflow/www/extensions/init_views.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | `POST /taskInstances/list` with wildcards returns unhelpful error | ### Apache Airflow version
2.3.4
### What happened
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_task_instances_batch
fails with an error with wildcards while
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_task_instances
succeeds with wildcards
Error:
```
400
"None is not of type 'object'"
```
### What you think should happen instead
_No response_
### How to reproduce
1) `astro dev init`
2) `astro dev start`
3) `test1.py` and `python test1.py`
```
import requests
host = "http://localhost:8080/api/v1"
kwargs = {
'auth': ('admin', 'admin'),
'headers': {'content-type': 'application/json'}
}
r = requests.post(f'{host}/dags/~/dagRuns/~/taskInstances/list', **kwargs, timeout=10)
print(r.url, r.text)
```
output
```
http://localhost:8080/api/v1/dags/~/dagRuns/~/taskInstances/list
{
"detail": "None is not of type 'object'",
"status": 400,
"title": "Bad Request",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
4) `test2.py` and `python test2.py`
```
import requests
host = "http://localhost:8080/api/v1"
kwargs = {
'auth': ('admin', 'admin'),
'headers': {'content-type': 'application/json'}
}
r = requests.get(f'{host}/dags/~/dagRuns/~/taskInstances', **kwargs, timeout=10) # change here
print(r.url, r.text)
```
```
<correct output>
```
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26424 | https://github.com/apache/airflow/pull/30596 | c2679c57aa0281dd455c6a01aba0e8cfbb6a0e1c | e89a7eeea6a7a5a5a30a3f3cf86dfabf7c343412 | "2022-09-15T22:52:20Z" | python | "2023-04-12T12:40:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,399 | ["airflow/providers/google/cloud/hooks/kubernetes_engine.py", "tests/providers/google/cloud/hooks/test_kubernetes_engine.py"] | GKEHook.create_cluster is not wait_for_operation using the input project_id parameter | ### Apache Airflow version
main (development)
### What happened
In the GKEHook, the `create_cluster` method creates a GKE cluster in the project_id specified by the input, but in `wait_for_operation` it waits for the operation to appear in the default project_id (because no project_id is explicitly provided):
https://github.com/apache/airflow/blob/f6c579c1c0efb8cdd2eaf905909cda7bc7314f88/airflow/providers/google/cloud/hooks/kubernetes_engine.py#L231-L237
This causes an error when we are trying to create clusters under a different project_id (compared to the default project_id).
### What you think should happen instead
instead it should be
```python
resource = self.wait_for_operation(resource, project_id)
```
so that we won't get errors when trying to create a cluster under a different project_id
### How to reproduce
```python
create_cluster = GKECreateClusterOperator(
task_id="create_cluster",
project_id=GCP_PROJECT_ID,
location=GCP_LOCATION,
body=CLUSTER,
)
```
and make sure the GCP_PROJECT_ID is not the same as the default project_id used by the default google service account
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26399 | https://github.com/apache/airflow/pull/26418 | 0bca962cd2c9671adbe68923e17ebecf66a0c6be | e31590039634ff722ad005fe9f1fc02e5a669699 | "2022-09-14T17:15:25Z" | python | "2022-09-20T07:46:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,380 | ["airflow/datasets/__init__.py", "tests/datasets/test_dataset.py", "tests/models/test_dataset.py"] | UI doesn't handle whitespace/empty dataset URI's well | ### Apache Airflow version
main (development)
### What happened
Here are some poor choices for dataset URI's:
```python3
empty = Dataset("")
colons = Dataset("::::::")
whitespace = Dataset("\t\n")
emoji = Dataset("😊")
long = Dataset(5000 * "x")
injection = Dataset("105'; DROP TABLE 'dag")
```
And a dag file which replicates the problems mentioned below: https://gist.github.com/MatrixManAtYrService/a32bba5d382cd9a925da72571772b060 (full tracebacks included as comments)
Here's how they did:
|dataset|behavior|
|:-:|:--|
|empty| dag triggered with no trouble, not selectable in the datasets UI|
|emoji| `airflow dags reserialize`: `UnicodeEncodeError: 'ascii' codec can't encode character '\U0001f60a' in position 0: ordinal not in range(128)`|
|colons| no trouble|
|whitespace| dag triggered with no trouble, selectable in the datasets UI, but shows no history|
|long|sqlalchemy error during serialization|
|injection| no trouble|
Finally, here's a screenshot:
<img width="1431" alt="Screen Shot 2022-09-13 at 11 29 02 PM" src="https://user-images.githubusercontent.com/5834582/190069341-dc17c66a-f941-424d-a455-cd531580543a.png">
Notice that there are two empty rows in the datasets list, one for `empty`, the other for `whitespace`. Only `whitespace` is selectable, both look weird.
### What you think should happen instead
I propose that we add a uri sanity check during serialization and just reject dataset URI's that are:
- only whitespace
- empty
- long enough that they're going to cause a database problem
The `emoji` case failed in a nice way. Ideally `whitespace`, `long` and `empty` can fail in the same way. If implemented, this would prevent any of the weird cases above from making it to the UI in the first place.
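A minimal sketch of what such a sanity check could look like, purely illustrative (the function name and the length cap are assumptions, not the actual implementation):
```python
def validate_dataset_uri(uri: str) -> str:
    # Reject values that render badly in the UI or overflow the database column.
    if not uri.strip():
        raise ValueError("Dataset URI must not be empty or whitespace-only")
    if len(uri) > 3000:  # assumed database column limit
        raise ValueError("Dataset URI is too long")
    try:
        uri.encode("ascii")
    except UnicodeEncodeError:
        raise ValueError("Dataset URI must be ASCII-only") from None
    return uri
```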
### How to reproduce
Unpause the above dags
### Operating System
Docker/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26380 | https://github.com/apache/airflow/pull/26389 | af39faafb7fdd53adbe37964ba88a3814f431cd8 | bd181daced707680ed22f5fd74e1e13094f6b164 | "2022-09-14T05:53:23Z" | python | "2022-09-14T16:11:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,375 | ["airflow/www/extensions/init_views.py", "airflow/www/templates/airflow/error.html", "airflow/www/views.py", "tests/api_connexion/test_error_handling.py"] | Airflow Webserver returns incorrect HTTP Error Response for custom REST API endpoints | ### Apache Airflow version
Other Airflow 2 version
### What happened
We are using Airflow 2.3.1 Version. Apart from Airflow provided REST endpoints, we are also using the airflow webserver to host our own application REST API endpoints. We are doing this by loading our own blueprints and registering Flask Blueprint routes within the airflow plugin.
Issue: Our custom REST API endpoints are returning an incorrect HTTP error response code of 404 when 405 is expected (invoke the REST API endpoint with an incorrect HTTP method, say POST instead of PUT). This was working in Airflow 1.x but is giving an issue with Airflow 2.x.
Here is a sample airflow plugin code. If the '/sample-app/v1' API below is invoked with the POST method, I would expect a 405 response. However, it returns a 404.
I tried registering a blueprint error handler for 405 inside the plugin, but that did not work.
```
import flask

from airflow import plugins_manager

test_bp = flask.Blueprint('test_plugin', __name__)
@test_bp.route(
'/sample-app/v1/tags/<tag>', methods=['PUT'])
def initialize_deployment(tag):
"""
Initialize the deployment of the metadata tag
:rtype: flask.Response
"""
return 'Hello, World'
class TestPlugin(plugins_manager.AirflowPlugin):
name = 'test_plugin'
flask_blueprints = [test_bp]
```
### What you think should happen instead
Correct HTTP Error response code should be returned.
### How to reproduce
Issue the following curl request after loading the plugin -
curl -X POST "http://localhost:8080/sample-app/v1/tags/abcd" -d ''
The response will be 404 instead of 405.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26375 | https://github.com/apache/airflow/pull/26880 | ea55626d79fdbd96b6d5f371883ac1df2a6313ec | 8efb678e771c8b7e351220a1eb7eb246ae8ed97f | "2022-09-13T21:56:54Z" | python | "2022-10-18T12:50:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,367 | ["airflow/providers/google/cloud/operators/bigquery.py", "docs/apache-airflow-providers-google/operators/cloud/bigquery.rst", "tests/system/providers/google/cloud/bigquery/example_bigquery_queries.py"] | Add SQLColumnCheck and SQLTableCheck Operators for BigQuery | ### Description
New operators under the Google provider for table and column data quality checking that is integrated with OpenLineage.
### Use case/motivation
Allow OpenLineage support for BigQuery when using column and table check operators.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26367 | https://github.com/apache/airflow/pull/26368 | 3cd4df16d4f383c27f7fc6bd932bca1f83ab9977 | c4256ca1a029240299b83841bdd034385665cdda | "2022-09-13T15:21:52Z" | python | "2022-09-21T08:49:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,360 | ["airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | dynamic dataset ref breaks when viewed in UI or when triggered (dagbag.py:_add_dag_from_db) | ### Apache Airflow version
2.4.0b1
### What happened
Here's a file which defines three dags. "source" uses `Operator.partial` to reference either "sink". I'm not sure if it's supported to do so, but airflow should at least fail more gracefully than it does.
```python3
from datetime import datetime, timedelta
from time import sleep
from airflow import Dataset
from airflow.decorators import dag
from airflow.operators.dummy import DummyOperator
from airflow.operators.python import PythonOperator
ps = Dataset("partial_static")
p1 = Dataset("partial_dynamic_1")
p2 = Dataset("partial_dynamic_2")
p3 = Dataset("partial_dynamic_3")
def sleep_n(n):
sleep(n)
@dag(start_date=datetime(1970, 1, 1), schedule=timedelta(days=365 * 30))
def two_kinds_dynamic_source():
# dataset ref is not dynamic
PythonOperator.partial(
task_id="partial_static", python_callable=sleep_n, outlets=[ps]
).expand(op_args=[[1], [20], [40]])
# dataset ref is dynamic
PythonOperator.partial(
task_id="partial_dynamic", python_callable=sleep_n
).expand_kwargs(
[
{"op_args": [1], "outlets": [p1]},
{"op_args": [20], "outlets": [p2]},
{"op_args": [40], "outlets": [p3]},
]
)
two_kinds_dynamic_source()
@dag(schedule=[ps], start_date=datetime(1970, 1, 1))
def two_kinds_static_sink():
DummyOperator(task_id="dummy")
two_kinds_static_sink()
@dag(schedule=[p1, p2, p3], start_date=datetime(1970, 1, 1))
def two_kinds_dynamic_sink():
DummyOperator(task_id="dummy")
two_kinds_dynamic_sink()
```
Tried to trigger the dag in the browser, saw this traceback instead:
```
Python version: 3.9.13
Airflow version: 2.4.0.dev1640+astro.1
Node: airflow-webserver-6b969cbd87-4q5kh
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/usr/local/lib/python3.9/site-packages/airflow/www/auth.py", line 46, in decorated
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/www/decorators.py", line 117, in view_func
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/www/decorators.py", line 80, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/www/views.py", line 2532, in grid
dag = get_airflow_app().dag_bag.get_dag(dag_id, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 176, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 251, in _add_dag_from_db
dag = row.dag
File "/usr/local/lib/python3.9/site-packages/airflow/models/serialized_dag.py", line 223, in dag
dag = SerializedDAG.from_dict(self.data) # type: Any
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1220, in from_dict
return cls.deserialize_dag(serialized_obj['dag'])
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1197, in deserialize_dag
setattr(task, k, kwargs_ref.deref(dag))
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 224, in deref
value = {k: v.deref(dag) if isinstance(v, _XComRef) else v for k, v in self.value.items()}
AttributeError: 'list' object has no attribute 'items'
```
I can also summon a similar traceback by just trying to view the dag in the grid view, or when running `airflow dags trigger`
### What you think should happen instead
If there's something invalid about this dag, it should fail to parse--rather than successfully parsing and then breaking the UI.
I'm a bit uncertain about what should happen in the dag dependency graph when the source dag runs. The dynamic outlets are not known until runtime, so it's reasonable that they don't show up in the graph. But what about after the dag runs?
- do they still trigger the "sink" dag even though we didn't know about the dependency up front?
- do we update the dependency graph now that we know about the dynamic dependency?
Because of this error, we don't get far enough to find out.
### How to reproduce
Include the dag above, try to display it in the grid view.
### Operating System
kubernetes-in-docker on MacOS via helm
### Versions of Apache Airflow Providers
n/a
### Deployment
Other 3rd-party Helm chart
### Deployment details
Deployed using [the astronomer helm chart ](https://github.com/astronomer/airflow-chart)and these values:
```yaml
airflow:
airflowHome: /usr/local/airflow
airflowVersion: $VERSION
defaultAirflowRepository: img
defaultAirflowTag: $TAG
executor: KubernetesExecutor
gid: 50000
images:
airflow:
repository: img
logs:
persistence:
enabled: true
size: 2Gi
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26360 | https://github.com/apache/airflow/pull/26369 | 5e9589c685bcec769041e0a1692035778869f718 | b816a6b243d16da87ca00e443619c75e9f6f5816 | "2022-09-13T06:54:16Z" | python | "2022-09-14T10:01:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,283 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator max_id_key Not Written to XCOM | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.3.0
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
`max_id` is not returned through XCOM when `max_id_key` is set.
### What you think should happen instead
When `max_id_key` is set, the `max_id` value should be returned as the default XCOM value.
This is based off of the parameter description:
```
The results will be returned by the execute() command, which in turn gets stored in XCom for future operators to use.
```
### How to reproduce
Execute the `GCSToBigQueryOperator` operator with a `max_id_key` parameter set. No XCOM value is set.
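For example, something along these lines (bucket, object, and table names are placeholders):
```python
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load_events = GCSToBigQueryOperator(
    task_id="load_events",
    bucket="my-bucket",  # placeholder
    source_objects=["events/*.csv"],  # placeholder
    destination_project_dataset_table="my_project.my_dataset.events",  # placeholder
    max_id_key="event_id",  # expected to push the max value of this column to XCom
    write_disposition="WRITE_APPEND",
)
```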
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26283 | https://github.com/apache/airflow/pull/26285 | b4f8a069f07b18ce98c9b1286da5a5fcde2bff9f | 07fe356de0743ca64d936738b78704f7c05774d1 | "2022-09-09T20:01:59Z" | python | "2022-09-18T20:12:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,279 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator `max_id_key` Feature Throws Error | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
8.3.0
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using the `max_id_key` feature of `GCSToBigQueryOperator` it fails with the error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 312, in execute
row = list(bq_hook.get_job(job_id).result())
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/common/hooks/base_google.py", line 444, in inner_wrapper
raise AirflowException(
airflow.exceptions.AirflowException: You must use keyword arguments in this methods rather than positional
```
### What you think should happen instead
The max id value for the key should be returned.
### How to reproduce
Any use of this column fails, since the error is related to retrieving the job result.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26279 | https://github.com/apache/airflow/pull/26285 | b4f8a069f07b18ce98c9b1286da5a5fcde2bff9f | 07fe356de0743ca64d936738b78704f7c05774d1 | "2022-09-09T17:47:29Z" | python | "2022-09-18T20:12:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,273 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py"] | SQLToGCSOperators Add Support for Dumping JSON | ### Description
If your output format for a SQLToGCSOperator is `json`, then any "dict" type object returned from a database, for example a Postgres JSON column, is not dumped to a string and is kept as a nested JSON object.
Add option to dump `dict` objects to string in JSON exporter.
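Illustratively, the JSON exporter could serialize such values before writing each row; a rough sketch of the idea, not the provider's actual code:
```python
import json

def convert_value(value):
    # Dump nested dict/list values to a JSON string so they can be loaded
    # into a plain STRING column instead of requiring a RECORD schema.
    if isinstance(value, (dict, list)):
        return json.dumps(value)
    return value
```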
### Use case/motivation
Currently JSON type columns are hard to ingest into BQ since a JSON field in a source database does not enforce a schema, and we can't reliably generate a `RECORD` schema for the column.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26273 | https://github.com/apache/airflow/pull/26277 | 706a618014a6f94d5ead0476f26f79d9714bf93d | b4f8a069f07b18ce98c9b1286da5a5fcde2bff9f | "2022-09-09T15:25:54Z" | python | "2022-09-18T20:11:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,262 | ["docs/helm-chart/manage-dags-files.rst"] | helm chart doc Manage DAGs files recommended Bake DAGs in Docker image need improvement. | ### What do you see as an issue?
https://airflow.apache.org/docs/helm-chart/1.6.0/manage-dags-files.html#bake-dags-in-docker-image
> The recommended way to update your DAGs with this chart is to build a new Docker image with the latest DAG code:
In this doc, the recommended way for users to manage DAGs is to bake them into the Docker image.
But, referring to this issue:
https://github.com/airflow-helm/charts/issues/211#issuecomment-859678503
> but having the scheduler being restarted and not scheduling any task each time you do a change that is not even scheduler related (just to deploy a new DAG!!)
> Helm Chart should be used to deploy "application" not to deploy another version of DAGs.
So, I think baking DAGs into the Docker image should not be the most recommended way.
At the very least, we should mention this approach's weaknesses in the docs (all components are restarted just to deploy a new DAG!).
### Solving the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26262 | https://github.com/apache/airflow/pull/26401 | 2382c12cc3aa5d819fd089c73e62f8849a567a0a | 11f8be879ba2dd091adc46867814bcabe5451540 | "2022-09-09T08:11:29Z" | python | "2022-09-15T21:09:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,259 | ["airflow/models/dag.py", "airflow/models/dagrun.py", "airflow/www/views.py", "tests/models/test_dag.py"] | should we limit max queued dag runs for dataset-triggered dags | if a dataset-triggered dag is running, and upstreams are updated multiple times, many dag runs will be queued up because the scheduler checks frequently for new dag runs needed.
you can easily limit max active dag runs but cannot easily limit max queued dag runs. in the dataset case this represents a meaningful difference in behavior and seems undesirable.
i think it may make sense to limit max queued dag runs (for datasets) to 1. cc @ash @jedcunningham @uranusjr @blag @norm
the graph below illustrates what happens in this scenario. you can reproduce with the example datasets dag file. change consumes 1 to be `sleep 60` , produces 1 to be `sleep 1`, then trigger producer repeatedly.
![image](https://user-images.githubusercontent.com/15932138/189264897-bbb6abba-9cea-4307-b17b-554599a03821.png)
| https://github.com/apache/airflow/issues/26259 | https://github.com/apache/airflow/pull/26348 | 9444d9789bc88e1063d81d28e219446b2251c0e1 | b99d1cd5d32aea5721c512d6052b6b7b3e0dfefb | "2022-09-09T03:15:54Z" | python | "2022-09-14T12:28:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,256 | ["airflow/datasets/manager.py", "airflow/jobs/scheduler_job.py", "tests/models/test_taskinstance.py"] | "triggered runs" dataset counter doesn't update until *next* run and never goes above 1 | ### Apache Airflow version
2.4.0b1
### What happened
I have [this test dag](https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58) which I created to report [this issue](https://github.com/apache/airflow/issues/25210). The idea is that if you unpause "sink" and all of the "sources" then the sources will wait until the clock is like \*:\*:00 and they'll terminate at the same time.
Since each source triggers the sink with a dataset called "counter", the "sink" dag will run just once, and it will have output like: `INFO - [(16, 1)]`, that's 16 sources and 1 sink that ran.
At this point, you can look at the dataset history for "counter" and you'll see this:
<img width="524" alt="Screen Shot 2022-09-08 at 6 07 44 PM" src="https://user-images.githubusercontent.com/5834582/189248999-d31141a4-2d0b-4ec2-9ea5-c4c3536b3a28.png">
So we've got a timestamp, but the "triggered runs" count is empty. That's weird. One run was triggered (and it finished by the time the screenshot was taken), so why doesn't it say `1`?
So I redeploy and try it again, except this time I wait several seconds between each "unpause" click, the idea being that maybe some of them fire at 07:16:00 and the others fire at 07:17:00. I end up with this:
<img width="699" alt="Screen Shot 2022-09-08 at 6 19 12 PM" src="https://user-images.githubusercontent.com/5834582/189252116-69067189-751d-40e7-89c5-8d1da1720237.png">
So fifteen of them finished at once and caused the dataset to update, and then just one straggler (number 9) is waiting for an additional minute. I wait for the straggler to complete and go back to the dataset view:
<img width="496" alt="Screen Shot 2022-09-08 at 6 20 41 PM" src="https://user-images.githubusercontent.com/5834582/189253874-87bb3eb3-2237-42a1-bc3f-9fc210419f1a.png">
Now it's the straggler that is blank, but the rest of them are populated. Continuing to manually run these, I find that whichever one I have run most recently is blank, and all of the others are 1, even if this is the second or third time I've run them
### What you think should happen instead
- The triggered runs counter should increment beyond 1
- It should increment immediately after the dag was triggered, not wait until after the *next* dag gets triggered.
### How to reproduce
See dags in this gist: https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58
1. unpause "sink"
2. unpause half of sources
3. wait one minute
4. unpause the other half of the sources
5. wait for "sink" to run a second time
6. view the dataset history for "counter"
7. ask why only half of the counts are populated
8. manually trigger some sources, wait for them to trigger sink
9. view the dataset history again
10. ask why none of them show more than 1 dagrun triggered
### Operating System
Kubernetes in Docker, deployed via helm
### Versions of Apache Airflow Providers
n/a
### Deployment
Other 3rd-party Helm chart
### Deployment details
see "deploy.sh" in the gist:
https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58
It's just a fresh install into a k8s cluster
### Anything else
n/a
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26256 | https://github.com/apache/airflow/pull/26276 | eb03959e437e11891b8c3696b76f664a991a37a4 | 954349a952d929dc82087e4bb20d19736f84d381 | "2022-09-09T01:45:19Z" | python | "2022-09-09T20:15:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,215 | ["airflow/example_dags/example_params_ui_tutorial.py", "airflow/www/static/js/trigger.js"] | Trigger DAG UI Extension w/ Flexible User Form Concept | ### Description
Proposal for Contribution for an extensible Trigger UI feature in Airflow.
## Design proposal (Feedback welcome)
### Part 1) Specifying Trigger UI on DAG Level
We propose to extend the DAG class with an additional attribute so that UI(s) (one or multiple per DAG) can be specified in the DAG.
* Attribute name proposal: `trigger_ui`
* Type proposal: `Union[TriggerUIBase, List[TriggerUIBase]]` (one UI definition, or a list of them, inherited from an abstract UI class which implements the trigger UI)
* Default value proposal: `[TriggerNoUI(), TriggerJsonUI()]` (Means the current/today's state, user can pick to trigger with or without parameters)
With this extension the current behavior is continued and users can specify if a specific or multiple UIs are offered for the Trigger DAG option.
### Part 2) UI Changes for Trigger Button
The function of the trigger DAG button in DAG overview landing ("Home" / `templates/airflow/dags.html`) as well as DAG detail pages (grid, graph, ... view / `templates/airflow/dag.html`) is adjusted so that:
1) If there is a single Trigger UI specified for the DAG, the button directly opens the form on click
2) If a list of Trigger UIs is defined for the DAG, then a list of UI's is presented, similar like today's drop-down with the today's two options (with and without parameters).
Menu names for (2) and URLs are determined by the UI class members linked to the DAG.
### Part 3) Standard implementations for TriggerNoUI, TriggerJsonUI
Two implementations for triggering w/o UI and parameters and the current JSON entry form will be migrated to the new UI structure, so that users can define that one, the other or both can be used for DAGs.
Name proposals:
0) TriggerUIBase: Base class for any Trigger UI, defines the base parameters and defaults which every Trigger UI is expected to provide:
* `url_template`: URL template (into which the DAG name is inserted to direct the user to)
* `name`: Name of the trigger UI to display in the drop-down
* `description`: Optional descriptive text to supply as a hover-over/tool-tip
1) TriggerNoUI (inherits TriggerUIBase): Skips a user confirmation and entry form and upon call runs the DAG w/o parameters (`DagRun.conf = {}`)
2) TriggerJsonUI (inherits TriggerUIBase): Same like the current UI to enter a JSON into a text box and trigger the DAG. Any valid JSON accepted.
### Part 4) Standard Implementation for Simple Forms (Actually core new feature)
Implement/Contribute a user-definable key/value entry form named `TriggerFormUI` (inherits TriggerUIBase) which allows the user to easily enter parameters for triggering a DAG. Form could look like:
```
Parameter 1: <HTML input box for entering a value>
(Optional Description and hints)
Parameter 2: <HTML Select box of options>
(Optional Description and hints)
Parameter 3: <HTML Checkbox on/off>
(Optional Description and hints)
<Trigger DAG Button>
```
The resulting JSON would use the parameter keys and values and render the following `DagRun.conf` and trigger the DAG:
```
{
"parameter_1": "user input",
"parameter_2": "user selection",
"parameter_3": true/false value
}
```
The number of form values, parameter names, parameter types, options, order and descriptions should be freely configurable in the DAG definition.
The trigger form should provide the following general parameters (at least):
* `name`: The name of the form to be used in pick lists and in the headline
* `description`: Descriptive text which is printed on hover-over of menus and which will be rendered as a description between the headline and the form start
* (Implicitly the DAG to which the form is linked to which will be triggered)
The trigger form elements (list of elements can be picked freely):
* General options of each form element (Base class `TriggerFormUIElement`):
* `name` (str): Name of the parameter, used as technical key in the JSON, must be unique per form (e.g. "param1")
* `display` (str): Label which is displayed on left side of entry field (e.g. "Parameter 1")
* `help` (Optional[str]=Null): Descriptive help text which is optionally rendered below the form element, might contain HTML formatting code
* `required` (Optional[bool]=False): Flag if the user is required to enter/pick a value before submission is possible
* `default` (Optional[str]=Null): Default value to present when the user opens the form
* Element types provided in the base implementation
* `TriggerFormUIString` (inherits `TriggerFormUIElement`): Provides a simple HTML string input box.
* `TriggerFormUISelect` (inherits `TriggerFormUIElement`): Provides a HTML select box with a list of pre-defined string options. Options are provided static as array of strings.
* `TriggerFormUIArray` (inherits `TriggerFormUIElement`): Provides a simple HTML text area allowing to enter multiple lines of text. Each line entered will be converted to a string and the strings will be used as value array.
* `TriggerFormUICheckbox` (inherits `TriggerFormUIElement`): Provides a HTML Checkbox to select on/off, will be converted to true/false as value
* Other element types (optionally, might be added later?) for making further cool features - depending on how much energy is left
* `TriggerFormUIHelp` (inherits `TriggerFormUIElement`): Provides no actual parameter value but allows to add a HTML block of help
* `TriggerFormUIBreak` (inherits `TriggerFormUIElement`): Provides no actual parameter value but adds a horizontal splitter
* Adding the options to validate string values e.g. with a RegEx
* Allowing to provide int values (besides just strings)
* Allowing to have an "advanced" section for more options which the user might not need in all cases
* Allowing to view the generated `DagRun.conf` so that a user can copy/paste as well
* Allowing to user extend the form elements...
### Part 5) (Optional) Extended for Templated Form based on the Simple form but uses fields to run a template through Jinja
Implement (optionally, might be future extension as well?) a `TriggerTemplateFormUI` (inherits TriggerFormUI) which adds a Jinja2 JSON template which will be templated with the collected form fields so that more complex `DagRun.conf` parameter structures can be created on top of just key/value
### Part 6) Examples
Provide 1-2 example DAGs which show how the trigger forms can be used. Adjust existing examples as needed.
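For instance, a DAG-level declaration could look roughly like this; the classes and argument names below follow the proposal above and do not exist yet:
```python
from airflow.decorators import dag

form = TriggerFormUI(
    name="Start backfill",
    description="Collects the parameters for a manual backfill run",
    elements=[
        TriggerFormUIString(name="country", display="Country", required=True),
        TriggerFormUISelect(name="mode", display="Mode", options=["full", "delta"], default="delta"),
        TriggerFormUICheckbox(name="dry_run", display="Dry run"),
    ],
)

@dag(trigger_ui=[form, TriggerJsonUI()])
def backfill_pipeline():
    ...
```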
### Part 7) Documentation
Provide needed documentation to describe the feature and options. This would include a description of how to add custom forms on top of the standard ones via Airflow Plugins and custom Python code.
### Use case/motivation
As user of Airflow for our custom workflows we often use `DagRun.conf` attributes to control content and flow. Current UI allows (only) to launch via REST API with given parameters or using a JSON structure in the UI to trigger with parameters. This is technically feasible but not user friendly. A user needs to model, check and understand the JSON and enter parameters manually without the option to validate before trigger.
Similar like Jenkins or Github/Azure pipelines we desire an UI option to trigger with a UI and specifying parameters. We'd like to have a similar capability in Airflow.
Current workarounds used in multiple places are:
1) Implementing a custom (additional) Web UI which implements the required forms outside/on top of Airflow. This UI accepts user input and in the back-end triggers Airflow via REST API. This is flexible but replicates the efforts for operation, deployment, release as well and redundantly need to implement access control, logging etc.
2) Implementing an custom Airflow Plugin which hosts additional launch/trigger UIs inside Airflow. We are using this but it is actually a bit redundant to other trigger options and is only 50% user friendly
I/we propose this as a feature and would like to contribute this with a following PR - would this be supported if we contribute this feature to be merged?
### Related issues
Note: This proposal is similar and/or related to #11054 but a bit more detailed and concrete. Might be also related to #22408 and contribute to AIP-38 (https://github.com/apache/airflow/projects/9)?
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26215 | https://github.com/apache/airflow/pull/29376 | 7ee1a5624497fc457af239e93e4c1af94972bbe6 | 9c6f83bb6f3e3b57ae0abbe9eb0582fcde265702 | "2022-09-07T14:36:30Z" | python | "2023-02-11T14:38:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,194 | ["airflow/www/static/js/dag/details/taskInstance/Logs/index.test.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx"] | Extra entry for logs generated with 0 try number when clearing any task instances | ### Apache Airflow version
main (development)
### What happened
When clearing any task instance, an extra log entry is generated with a zero try number.
<img width="1344" alt="Screenshot 2022-09-07 at 1 06 54 PM" src="https://user-images.githubusercontent.com/88504849/188819289-13dd4936-cd03-48b6-8406-02ee5fbf293f.png">
### What you think should happen instead
It should not create an entry with a zero try number.
### How to reproduce
Clear a task instance by hitting the clear button in the UI and then observe the extra entry in the logs tab.
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26194 | https://github.com/apache/airflow/pull/26556 | 6f1ab37d2091e26e67717d4921044029a01d6a22 | 6a69ad033fdc224aee14b8c83fdc1b672d17ac20 | "2022-09-07T07:43:59Z" | python | "2022-09-22T19:39:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,189 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py"] | GCSToBigQueryOperator Schema in Alternate GCS Bucket | ### Description
Currently the `GCSToBigQueryOperator` requires that a schema object stored in GCS be located in the same bucket as the source object(s). I'd like an option to have it located in a different bucket.
### Use case/motivation
I have a GCS bucket where I store files with a 90 day auto-expiration on the whole bucket. I want to be able to store a fixed schema in GCS, but since this bucket has an auto-expiration of 90 days the schema is auto deleted at that time.
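A possible shape for this, where the `schema_object_bucket` argument is only a suggestion and does not exist today (other names are placeholders):
```python
GCSToBigQueryOperator(
    task_id="load_with_fixed_schema",
    bucket="daily-exports-90d",  # auto-expiring data bucket (placeholder)
    source_objects=["exports/*.json"],
    schema_object="schemas/events.json",
    schema_object_bucket="long-lived-schemas",  # suggested: read the schema from a different bucket
    destination_project_dataset_table="my_project.my_dataset.events",
)
```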
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26189 | https://github.com/apache/airflow/pull/26190 | 63562d7023a9d56783f493b7ea13accb2081121a | 8cac96918becf19a4a04eef1e5bcf175f815f204 | "2022-09-07T01:50:01Z" | python | "2022-09-07T20:26:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,185 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Webserver fails to pull secrets from Hashicorp Vault on start up | ### Apache Airflow version
2.3.4
### What happened
Since upgrading to Airflow 2.3.4 our webserver fails on start up to pull secrets from our Vault instance. Setting `AIRFLOW__WEBSERVER__WORKERS=1` allowed the webserver to start up successfully, but reverting the change added in https://github.com/apache/airflow/pull/25556 was the only way we found to fix the issue without adjusting the webserver's worker count.
### What you think should happen instead
The airflow webserver should be able to successfully read from Vault with `AIRFLOW__WEBSERVER__WORKERS` > 1.
### How to reproduce
Start a webserver instance set to authenticate with Vault using the approle method and with AIRFLOW__DATABASE__SQL_ALCHEMY_CONN_SECRET and AIRFLOW__WEBSERVER__SECRET_KEY_SECRET set. The webserver should fail to initialize all of the gunicorn workers and exit.
### Operating System
Fedora 29
### Versions of Apache Airflow Providers
apache-airflow-providers-hashicorp==3.1.0
### Deployment
Docker-Compose
### Deployment details
Python 3.9.13
Vault 1.9.4
### Anything else
None
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26185 | https://github.com/apache/airflow/pull/26223 | ebef9ed3fa4a9a1e69b4405945e7cd939f499ee5 | c63834cb24c6179c031ce0d95385f3fa150f442e | "2022-09-06T21:36:02Z" | python | "2022-09-08T00:35:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,174 | ["airflow/api_connexion/endpoints/xcom_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_xcom_endpoint.py"] | API Endpoints - /xcomEntries/{xcom_key} cannot deserialize customized xcom backend | ### Description
We use S3 as our XCom backend and wrote serialize/deserialize methods for XComs.
However, when we want to access xcom through REST API, it returns the s3 file url instead of the deserialized value. Could you please add the feature to support customized xcom backend for REST API access?
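For reference, the kind of backend being described overrides the (de)serialization hooks on `BaseXCom`, roughly like this sketch (the bucket name and helper functions are made up):
```python
from airflow.models.xcom import BaseXCom

class S3XComBackend(BaseXCom):
    BUCKET = "my-xcom-bucket"  # placeholder

    @staticmethod
    def serialize_value(value, **kwargs):
        # Upload the real value to S3 and store only the object URL in the metadata DB.
        url = upload_to_s3(S3XComBackend.BUCKET, value)  # hypothetical helper
        return BaseXCom.serialize_value(url)

    @staticmethod
    def deserialize_value(result):
        url = BaseXCom.deserialize_value(result)
        return download_from_s3(url)  # hypothetical helper
```
This matches the reported behavior: the REST endpoint returns the stored S3 URL rather than the deserialized value.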
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26174 | https://github.com/apache/airflow/pull/26343 | 3c9c0f940b67c25285259541478ebb413b94a73a | ffee6bceb32eba159a7a25a4613d573884a6a58d | "2022-09-06T09:35:30Z" | python | "2022-09-12T21:05:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,155 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/role_command.py", "tests/cli/commands/test_role_command.py"] | Add CLI to add/remove permissions from existed role | ### Body
Followup on https://github.com/apache/airflow/pull/25854
[Roles CLI](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#roles) currently supports create, delete, export, import, and list.
It would be useful to have the ability to add/remove permissions from an existing role.
This has also been asked in https://github.com/apache/airflow/issues/15318#issuecomment-872496184
cc @chenglongyan
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26155 | https://github.com/apache/airflow/pull/26338 | e31590039634ff722ad005fe9f1fc02e5a669699 | 94691659bd73381540508c3c7c8489d60efb2367 | "2022-09-05T08:01:19Z" | python | "2022-09-20T08:18:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,130 | ["Dockerfile.ci", "airflow/serialization/serialized_objects.py", "setup.cfg"] | Remove `cattrs` from project | Cattrs is currently only used in two places: Serialization for operator extra links, and for Lineage.
However cattrs is not a well maintained project and doesn't support many features that attrs itself does; in short, it's not worth the brain cycles to keep cattrs. | https://github.com/apache/airflow/issues/26130 | https://github.com/apache/airflow/pull/34672 | 0c8e30e43b70e9d033e1686b327eb00aab82479c | e5238c23b30dfe3556fb458fa66f28e621e160ae | "2022-09-02T12:15:18Z" | python | "2023-10-05T07:34:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,101 | ["airflow/utils/sqlalchemy.py", "tests/utils/test_sqlalchemy.py"] | Kubernetes Invalid executor_config, pod_override filled with Encoding.VAR | ### Apache Airflow version
2.3.4
### What happened
Trying to start Kubernetes tasks using a `pod_override` results in pods not starting after upgrading from 2.3.2 to 2.3.4
The pod_override looks very odd, filled with many Encoding.VAR objects; see the following scheduler log:
```
{kubernetes_executor.py:550} INFO - Add task TaskInstanceKey(dag_id='commit_check', task_id='sync_and_build', run_id='5776-2-1662037155', try_number=1, map_index=-1) with command ['airflow', 'tasks', 'run', 'commit_check', 'sync_and_build', '5776-2-1662037155', '--local', '--subdir', 'DAGS_FOLDER/dag_on_commit.py'] with executor_config {'pod_override': {'Encoding.VAR': {'Encoding.VAR': {'Encoding.VAR': {'metadata': {'Encoding.VAR': {'annotations': {'Encoding.VAR': {}, 'Encoding.TYPE': 'dict'}}, 'Encoding.TYPE': 'dict'}, 'spec': {'Encoding.VAR': {'containers': REDACTED 'Encoding.TYPE': 'k8s.V1Pod'}, 'Encoding.TYPE': 'dict'}}
{kubernetes_executor.py:554} ERROR - Invalid executor_config for TaskInstanceKey(dag_id='commit_check', task_id='sync_and_build', run_id='5776-2-1662037155', try_number=1, map_index=-1)
```
Looking in the UI, the task gets stuck in the scheduled state forever. Clicking instance details shows a similar state of the pod_override with many Encoding.VAR.
This appears like a recent addition, in 2.3.4 via https://github.com/apache/airflow/pull/24356.
@dstandish do you understand if this is connected?
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.3.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-docker==3.1.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.2.0
apache-airflow-providers-sqlite==3.2.0
kubernetes==23.6.0
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26101 | https://github.com/apache/airflow/pull/26191 | af3a07427023d7089f3bc74a708723d13ce3cf73 | 87108d7b62a5c79ab184a50d733420c0930fdd93 | "2022-09-01T13:26:56Z" | python | "2022-09-07T22:44:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,099 | ["airflow/models/baseoperator.py", "airflow/ti_deps/deps/trigger_rule_dep.py", "airflow/utils/trigger_rule.py", "docs/apache-airflow/concepts/dags.rst", "tests/ti_deps/deps/test_trigger_rule_dep.py", "tests/utils/test_trigger_rule.py"] | Add one_done trigger rule | ### Body
Action: trigger as soon as one upstream task is in success or failure.
This has been requested in https://stackoverflow.com/questions/73501232/how-to-implement-the-one-done-trigger-rule-for-airflow
I think this can be useful for the community.
**The Task:**
Add support for new trigger rule `one_done`
You can use as reference previous PRs that added other trigger rules for example: https://github.com/apache/airflow/pull/21662
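If added, usage would presumably mirror the existing rules, e.g.:
```python
from airflow.operators.empty import EmptyOperator

# Proposed semantics: fire as soon as any one upstream task finishes, whether it succeeded or failed.
notify = EmptyOperator(task_id="notify", trigger_rule="one_done")
```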
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26099 | https://github.com/apache/airflow/pull/26146 | 55d11464c047d2e74f34cdde75d90b633a231df2 | baaea097123ed22f62c781c261a1d9c416570565 | "2022-09-01T07:27:12Z" | python | "2022-09-23T17:05:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,097 | ["airflow/providers/microsoft/azure/operators/container_instances.py"] | Add the parameter `network_profile` in `AzureContainerInstancesOperator` | ### Description
[apache-airflow-providers-microsoft-azure](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/index.html) uses `azure-mgmt-containerinstance>=1.5.0,<2.0`.
In `azure-mgmt-containerinstance==1.5.0`, [ContainerGroup](https://github.com/Azure/azure-sdk-for-python/blob/azure-mgmt-containerinstance_1.5.0/sdk/containerinstance/azure-mgmt-containerinstance/azure/mgmt/containerinstance/models/container_group_py3.py) accepts a parameter called `network_profile`, which is expecting a [ContainerGroupNetworkProfile](https://github.com/Azure/azure-sdk-for-python/blob/azure-mgmt-containerinstance_1.5.0/sdk/containerinstance/azure-mgmt-containerinstance/azure/mgmt/containerinstance/models/container_group_network_profile_py3.py).
### Use case/motivation
I received the following error when I provide value to `IpAddress` in the `AzureContainerInstancesOperator`.
```
msrestazure.azure_exceptions.CloudError: Azure Error: PrivateIPAddressNotSupported
Message: IP Address type in container group 'data-quality-test' is invalid. Private IP address is only supported when network profile is defined.
```
I would like to pass a ContainerGroupNetworkProfile object through so AzureContainerInstancesOperator can use it in the [Container Group instantiation](https://github.com/apache/airflow/blob/main/airflow/providers/microsoft/azure/operators/container_instances.py#L243-L254).
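The desired usage might look roughly like the following; the operator does not accept such an argument today, and the resource ID is a placeholder:
```python
from azure.mgmt.containerinstance.models import ContainerGroupNetworkProfile

network_profile = ContainerGroupNetworkProfile(
    id="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/networkProfiles/<profile>"
)
# Hypothetical: forwarded by AzureContainerInstancesOperator into the ContainerGroup it creates.
```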
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26097 | https://github.com/apache/airflow/pull/26117 | dd6b2e4e6cb89d9eea2f3db790cb003a2e89aeff | 5060785988f69d01ee2513b1e3bba73fbbc0f310 | "2022-08-31T23:41:27Z" | python | "2022-09-09T02:50:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,095 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | Creative use of BigQuery Hook Leads to Exception | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
8.3.0
### Apache Airflow version
2.3.4
### Operating System
Debian
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When executing a query through a BigQuery Hook Cursor that does not have a schema, an exception is thrown.
### What you think should happen instead
If a cursor does not contain a schema, revert to a `self.description` of None, like before the update.
### How to reproduce
Execute an `UPDATE` sql statement using a cursor.
```
conn = bigquery_hook.get_conn()
cursor = conn.cursor()
cursor.execute(sql)
```
### Anything else
I'll be the first to admit that my users are slightly abusing cursors in BigQuery by running all statement types through them, but BigQuery doesn't care and lets you.
Ref: https://github.com/apache/airflow/issues/22328
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26095 | https://github.com/apache/airflow/pull/26096 | b7969d4a404f8b441efda39ce5c2ade3e8e109dc | 12cbc0f1ddd9e8a66c5debe7f97b55a2c8001502 | "2022-08-31T21:43:47Z" | python | "2022-09-07T15:56:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,071 | ["airflow/example_dags/example_branch_day_of_week_operator.py", "airflow/operators/weekday.py", "airflow/sensors/weekday.py"] | BranchDayOfWeekOperator documentation don't mention how to use parameter use_taks_execution_day or how to use WeekDay | ### What do you see as an issue?
The constructor snippet clearly shows that there's a keyword parameter `use_task_execution_day=False`, but the doc does not explain how to use it. It also has `{WeekDay.TUESDAY}, {WeekDay.SATURDAY, WeekDay.SUNDAY}` as options for `week_day` but does not clarify how to import WeekDay. The tutorial is also very basic and only shows one use case. The sensor has the same issues.
### Solving the problem
I think docs should be added for `use_task_execution_day`, and there should be a mention of how one uses the `WeekDay` class and where to import it from. The tutorial is also incomplete there. I would like to see examples for, say, multiple different workday branches and/or some graph for the resulting dags.
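For example, a doc snippet along these lines would cover both points (task ids are arbitrary):
```python
from airflow.operators.weekday import BranchDayOfWeekOperator
from airflow.utils.weekday import WeekDay

branch = BranchDayOfWeekOperator(
    task_id="branch_on_weekday",
    follow_task_ids_if_true="weekend_task",
    follow_task_ids_if_false="weekday_task",
    week_day={WeekDay.SATURDAY, WeekDay.SUNDAY},
    use_task_execution_day=True,  # compare against the task's execution date rather than the current system day
)
```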
### Anything else
I feel like BranchDayOfWeekOperator is tragically underrepresented and hard to find, and I hope that improving docs would help make its use more common
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26071 | https://github.com/apache/airflow/pull/26098 | 4b26c8c541a720044fa96475620fc70f3ac6ccab | dd6b2e4e6cb89d9eea2f3db790cb003a2e89aeff | "2022-08-30T16:30:15Z" | python | "2022-09-09T02:05:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,067 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Include external_executor_id in zombie detection method | ### Description
Adjust the SimpleTaskInstance to include the external_executor_id so that it shows up when the zombie detection method prints the SimpleTaskInstance to logs.
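Roughly, the shape of the proposed change (heavily abbreviated; the real `SimpleTaskInstance` carries many more fields):
```
from typing import Optional


class SimpleTaskInstanceSketch:
    """Illustrative subset of SimpleTaskInstance with the proposed extra field."""

    def __init__(self, dag_id: str, task_id: str, run_id: str, map_index: int,
                 external_executor_id: Optional[str] = None):
        self.dag_id = dag_id
        self.task_id = task_id
        self.run_id = run_id
        self.map_index = map_index
        # Proposed addition: copied over from the TaskInstance so the zombie warning logged by
        # the dag file processor includes, for example, the Celery task id.
        self.external_executor_id = external_executor_id
```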
### Use case/motivation
Since the zombie detection message originates in the dag file processor, further troubleshooting of the zombie task requires figuring out which worker was actually responsible for the task. Printing the external_executor_id makes it easier to find the task in a log aggregator like Kibana or Splunk than it is when using the combination of dag_id, task_id, logical_date, and map_index, at least for executors like Celery.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26067 | https://github.com/apache/airflow/pull/26141 | b6ba11ebece2c3aaf418738cb157174491a1547c | ef0b97914a6d917ca596200c19faed2f48dca88a | "2022-08-30T13:27:51Z" | python | "2022-09-03T13:23:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,059 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | [Graph view] After clearing the task (and its downstream tasks) in a task group the task group becomes disconnected from the dag | ### Apache Airflow version
2.3.4
### What happened
In the graph view of the dag, after clearing the task (and its downstream tasks) in a task group and refreshing the page in the browser, the task group becomes disconnected from the dag. See attached gif.
![airflow_2_3_4_task_group_bug](https://user-images.githubusercontent.com/6542519/187409008-767e13e6-ab91-4875-9f3e-bd261b346d0f.gif)
The issue is intermittent: the graph view only becomes disconnected from time to time, as you can see in the attached video.
### What you think should happen instead
The graph should be rendered properly and consistently.
### How to reproduce
1. Add the following dag to the dag folder:
```
import logging
import time
from typing import List

import pendulum

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.task_group import TaskGroup


def log_function(message: str, **kwargs):
    logging.info(message)
    time.sleep(3)


def create_file_handling_task_group(supplier):
    with TaskGroup(group_id=f"file_handlig_task_group_{supplier}", ui_color='#666666') as file_handlig_task_group:
        entry = PythonOperator(
            task_id='entry',
            python_callable=log_function,
            op_kwargs={'message': 'create_file_handlig_task_group-Entry-task'}
        )
        with TaskGroup(group_id=f"file_handling_task_sub_group-{supplier}",
                       ui_color='#666666') as file_handlig_task_sub_group:
            sub_group_submit = PythonOperator(
                task_id='sub_group_submit',
                python_callable=log_function,
                op_kwargs={'message': 'create_file_handlig_sub_group_submit'}
            )
            sub_group_monitor = PythonOperator(
                task_id='sub_group_monitor',
                python_callable=log_function,
                op_kwargs={'message': 'create_file_handlig_sub_group_monitor'}
            )
            sub_group_submit >> sub_group_monitor
        entry >> file_handlig_task_sub_group
    return file_handlig_task_group


def get_stage_1_taskgroups(supplierlist: List) -> List[TaskGroup]:
    return [create_file_handling_task_group(supplier) for supplier in supplierlist]


def connect_stage1_to_stage2(self, stage1_tasks: List[TaskGroup], stage2_tasks: List[TaskGroup]) -> None:
    if stage2_tasks:
        for stage1_task in stage1_tasks:
            supplier_code: str = self.get_supplier_code(stage1_task)
            stage2_task = self.get_suppliers_tasks(supplier_code, stage2_tasks)
            stage1_task >> stage2_task


def get_stage_2_taskgroup(taskgroup_id: str):
    with TaskGroup(group_id=taskgroup_id, ui_color='#666666') as stage_2_taskgroup:
        sub_group_submit = PythonOperator(
            task_id='sub_group_submit',
            python_callable=log_function,
            op_kwargs={'message': 'create_file_handlig_sub_group_submit'}
        )
        sub_group_monitor = PythonOperator(
            task_id='sub_group_monitor',
            python_callable=log_function,
            op_kwargs={'message': 'create_file_handlig_sub_group_monitor'}
        )
        sub_group_submit >> sub_group_monitor
    return stage_2_taskgroup


def create_dag():
    with DAG(
        dag_id="horizon-task-group-bug",
        start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
        catchup=False,
        description="description"
    ) as dag:
        start = PythonOperator(
            task_id='start_main',
            python_callable=log_function,
            op_kwargs={'message': 'Entry-task'}
        )
        end = PythonOperator(
            task_id='end_main',
            python_callable=log_function,
            op_kwargs={'message': 'End-task'}
        )
        with TaskGroup(group_id=f"main_file_task_group", ui_color='#666666') as main_file_task_group:
            end_main_file_task_stage_1 = PythonOperator(
                task_id='end_main_file_task_stage_1',
                python_callable=log_function,
                op_kwargs={'message': 'end_main_file_task_stage_1'}
            )
            first_stage = get_stage_1_taskgroups(['9001', '9002'])
            first_stage >> get_stage_2_taskgroup("stage_2_1_taskgroup")
            first_stage >> get_stage_2_taskgroup("stage_2_2_taskgroup")
            first_stage >> end_main_file_task_stage_1
        start >> main_file_task_group >> end
    return dag


dag = create_dag()
```
2. Go to the graph view of the dag.
3. Run the dag.
4. After the dag run has finished, clear the "sub_group_submit" task within the "stage_2_1_taskgroup" with downstream tasks.
5. Refresh the page multiple times and notice how from time to time the "stage_2_1_taskgroup" becomes disconnected from the dag.
6. Clear the "sub_group_submit" task within the "stage_2_2_taskgroup" with downstream tasks.
7. Refresh the page multiple times and notice how from time to time the "stage_2_2_taskgroup" becomes disconnected from the dag.
### Operating System
Mac OS, Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Custom docker image based on apache/airflow:2.3.4-python3.10
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26059 | https://github.com/apache/airflow/pull/30129 | 4dde8ececf125abcded5910817caad92fcc82166 | 76a884c552a78bfb273fe8b65def58125fc7961a | "2022-08-30T10:12:04Z" | python | "2023-03-15T20:05:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,046 | ["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"] | `isinstance()` check in `_hook()` breaking provider hook usage | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
Using `apache-airflow-providers-common-sql==1.1.0`
### Apache Airflow version
2.3.2
### Operating System
Debian GNU/Linux 11 bullseye
### Deployment
Astronomer
### Deployment details
astro-runtime:5.0.5
### What happened
The `isinstance()` check that verifies the hook is a `DbApiHook` breaks when a Snowflake connection is passed to an operator's `conn_id` parameter, because the check sees an instance of `SnowflakeHook` and not `DbApiHook`.
### What you think should happen instead
There should not be an error when subclasses of `DbApiHook` are used. This can be fixed by replacing `isinstance()` with something that checks the inheritance hierarchy.
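A sketch of the kind of hierarchy-aware check meant here (purely illustrative; the real change would live in the provider's `_hook()` property):
```
def looks_like_dbapi_hook(hook) -> bool:
    """Accept any hook whose class hierarchy contains a class named DbApiHook.

    This passes for subclasses such as SnowflakeHook even if they inherit from a DbApiHook
    imported from a different module than the one handed to isinstance().
    """
    return any(base.__name__ == "DbApiHook" for base in type(hook).__mro__)
```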
### How to reproduce
Run an operator from the common-sql provider with a Snowflake connection passed to `conn_id`.
### Anything else
Occurs every time.
Log:
```
[2022-08-29, 19:10:42 UTC] {manager.py:49} ERROR - Failed to extract metadata The connection type is not supported by SQLColumnCheckOperator. The associated hook should be a subclass of `DbApiHook`. Got SnowflakeHook task_type=SQLColumnCheckOperator airflow_dag_id=complex_snowflake_transform task_id=quality_check_group_forestfire.forestfire_column_checks airflow_run_id=manual__2022-08-29T19:04:54.998289+00:00
Traceback (most recent call last):
File "/usr/local/airflow/include/openlineage/airflow/extractors/manager.py", line 38, in extract_metadata
task_metadata = extractor.extract_on_complete(task_instance)
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_check_extractors.py", line 26, in extract_on_complete
return super().extract()
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_extractor.py", line 50, in extract
authority=self._get_authority(),
File "/usr/local/airflow/include/openlineage/airflow/extractors/snowflake_extractor.py", line 57, in _get_authority
return self.conn.extra_dejson.get(
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_extractor.py", line 102, in conn
self._conn = get_connection(self._conn_id())
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_extractor.py", line 91, in _conn_id
return getattr(self.hook, self.hook.conn_name_attr)
File "/usr/local/airflow/include/openlineage/airflow/extractors/sql_extractor.py", line 96, in hook
self._hook = self._get_hook()
File "/usr/local/airflow/include/openlineage/airflow/extractors/snowflake_extractor.py", line 63, in _get_hook
return self.operator.get_db_hook()
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 112, in get_db_hook
return self._hook
File "/usr/local/lib/python3.9/functools.py", line 969, in __get__
val = self.func(instance)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 95, in _hook
raise AirflowException(
airflow.exceptions.AirflowException: The connection type is not supported by SQLColumnCheckOperator. The associated hook should be a subclass of `DbApiHook`. Got SnowflakeHook
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26046 | https://github.com/apache/airflow/pull/26051 | d356560baa5a41d4bda87e4010ea6d90855d25f3 | 27e2101f6ee5567b2843cbccf1dca0b0e7c96186 | "2022-08-29T19:58:59Z" | python | "2022-08-30T17:05:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,019 | ["dev/breeze/src/airflow_breeze/commands/release_management_commands.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output_release-management_generate-constraints.svg", "scripts/in_container/_in_container_script_init.sh", "scripts/in_container/_in_container_utils.sh", "scripts/in_container/in_container_utils.py", "scripts/in_container/install_airflow_and_providers.py", "scripts/in_container/run_generate_constraints.py", "scripts/in_container/run_generate_constraints.sh", "scripts/in_container/run_system_tests.sh"] | Rewrite the in-container scripts in Python | We have a number of "in_container" scripts written in Bash. They do a number of housekeeping tasks, but since we already have Python 3.7+ inside the CI image, we could modularise them more, make them runnable externally, and simplify entrypoint_ci (for example, a separate script for tests). | https://github.com/apache/airflow/issues/26019 | https://github.com/apache/airflow/pull/36158 | 36010f6d0e3231081dbae095baff5a5b5c5b34eb | f39cdcceff4fa64debcaaef6e30f345b7b21696e | "2022-08-28T09:23:08Z" | python | "2023-12-11T07:02:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,013 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | schedule_interval does not respect the assigned value; it is either one day or none | ### Apache Airflow version
main (development)
### What happened
The stored `schedule_interval` ends up as `None` even when `timedelta(days=365, hours=6)` is assigned, and it is one day for both `schedule_interval=None` and `schedule_interval=timedelta(days=3)`.
### What you think should happen instead
It should respect the value assigned to it.
### How to reproduce
Create a dag with `schedule_interval=None` or `schedule_interval=timedelta(days=5)` and observe the behaviour.
![2022-08-27 17 42 07](https://user-images.githubusercontent.com/88504849/187039335-90de6855-b674-47ba-9c03-3c437722bae5.gif)
**DAG-**
```
with DAG(
    dag_id="branch_python_operator",
    start_date=days_ago(1),
    schedule_interval="* * * * *",
    doc_md=docs,
    tags=['core']
) as dag:
```
**DB Results-**
```
postgres=# select schedule_interval from dag where dag_id='branch_python_operator';
schedule_interval
------------------------------------------------------------------------------
{"type": "timedelta", "attrs": {"days": 1, "seconds": 0, "microseconds": 0}}
(1 row)
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26013 | https://github.com/apache/airflow/pull/26082 | d4db9aecc3e534630c76e59c54d90329ed20a6ab | c982080ca1c824dd26c452bcb420df0f3da1afa8 | "2022-08-27T16:35:56Z" | python | "2022-08-31T09:09:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,000 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | `start_date` for an existing dagrun is not set when run with the backfill flags `--reset-dagruns --yes` | ### Apache Airflow version
2.3.4
### What happened
When the dagrun already exists and is backfilled with the flags `--reset-dagruns --yes`, the dag run will not have a start date. This is because reset_dagruns calls [clear_task_instances](https://github.com/apache/airflow/blob/main/airflow/models/dag.py#L2020) which [sets the dagrun start date to None](https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L286-L291).
Since the dagrun goes into running via BackfillJob rather than the SchedulerJob, the start date is not set. This doesn't happen to a new dagrun created by a BackfillJob because the [start date is determined at creation](https://github.com/apache/airflow/blob/main/airflow/jobs/backfill_job.py#L310-L320).
Here is a recreation of the behaviour.
First run of the backfill dagrun. No odd warnings and start date exists for Airflow to calculate the duration.
```
astro@75512ab5e882:/usr/local/airflow$ airflow dags backfill -s 2021-12-01 -e 2021-12-01 test_module
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528: DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
option = self._get_environment_variables(deprecated_key, deprecated_section, key, section)
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
/usr/local/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py:57 PendingDeprecationWarning: --ignore-first-depends-on-past is deprecated as the value is always set to True
[2022-08-25 21:29:55,574] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
Nothing to clear.
[2022-08-25 21:29:55,650] {executor_loader.py:105} INFO - Loaded executor: LocalExecutor
[2022-08-25 21:29:55,896] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'test_module', 'run_python', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp_nuoic9m']
[2022-08-25 21:30:00,665] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'test_module', 'run_python', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp_nuoic9m']
[2022-08-25 21:30:00,679] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
[2022-08-25 21:30:00,695] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags/test_module.py
[2022-08-25 21:30:00,759] {task_command.py:371} INFO - Running <TaskInstance: test_module.run_python backfill__2021-12-01T00:00:00+00:00 [queued]> on host 75512ab5e882
[2022-08-25 21:30:05,686] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 1 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
[2022-08-25 21:30:05,709] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'test_module', 'test', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp3w9pm1jj']
[2022-08-25 21:30:10,659] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'test_module', 'test', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp3w9pm1jj']
[2022-08-25 21:30:10,668] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
[2022-08-25 21:30:10,693] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags/test_module.py
[2022-08-25 21:30:10,765] {task_command.py:371} INFO - Running <TaskInstance: test_module.test backfill__2021-12-01T00:00:00+00:00 [queued]> on host 75512ab5e882
[2022-08-25 21:30:15,678] {dagrun.py:564} INFO - Marking run <DagRun test_module @ 2021-12-01T00:00:00+00:00: backfill__2021-12-01T00:00:00+00:00, externally triggered: False> successful
[2022-08-25 21:30:15,679] {dagrun.py:609} INFO - DagRun Finished: dag_id=test_module, execution_date=2021-12-01T00:00:00+00:00, run_id=backfill__2021-12-01T00:00:00+00:00, run_start_date=2022-08-25 21:29:55.815199+00:00, run_end_date=2022-08-25 21:30:15.679256+00:00, run_duration=19.864057, state=success, external_trigger=False, run_type=backfill, data_interval_start=2021-12-01T00:00:00+00:00, data_interval_end=2021-12-02T00:00:00+00:00, dag_hash=None
[2022-08-25 21:30:15,680] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 2 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
[2022-08-25 21:30:15,684] {local_executor.py:390} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait.
[2022-08-25 21:30:15,829] {backfill_job.py:879} INFO - Backfill done. Exiting.
```
Second run of the backfill dagrun with the flags `--reset-dagruns --yes`. There is a warning that start_date is not set.
```
astro@75512ab5e882:/usr/local/airflow$ airflow dags backfill -s 2021-12-01 -e 2021-12-01 --reset-dagruns --yes test_module
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528: DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
option = self._get_environment_variables(deprecated_key, deprecated_section, key, section)
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
/usr/local/lib/python3.9/site-packages/airflow/cli/commands/dag_command.py:57 PendingDeprecationWarning: --ignore-first-depends-on-past is deprecated as the value is always set to True
[2022-08-25 21:30:46,895] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dag
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:528 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
[2022-08-25 21:30:46,996] {executor_loader.py:105} INFO - Loaded executor: LocalExecutor
[2022-08-25 21:30:47,275] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'test_module', 'run_python', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp3s_3bn80']
[2022-08-25 21:30:52,010] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'test_module', 'run_python', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmp3s_3bn80']
[2022-08-25 21:30:52,029] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
[2022-08-25 21:30:52,045] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags/test_module.py
[2022-08-25 21:30:52,140] {task_command.py:371} INFO - Running <TaskInstance: test_module.run_python backfill__2021-12-01T00:00:00+00:00 [queued]> on host 75512ab5e882
[2022-08-25 21:30:57,028] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 1 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
[2022-08-25 21:30:57,048] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'test_module', 'test', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmprxg7g5o8']
[2022-08-25 21:31:02,024] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'test_module', 'test', 'backfill__2021-12-01T00:00:00+00:00', '--ignore-depends-on-past', '--local', '--pool', 'default_pool', '--subdir', 'DAGS_FOLDER/test_module.py', '--cfg-path', '/tmp/tmprxg7g5o8']
[2022-08-25 21:31:02,032] {backfill_job.py:367} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
[2022-08-25 21:31:02,085] {dagbag.py:508} INFO - Filling up the DagBag from /usr/local/airflow/dags/test_module.py
[2022-08-25 21:31:02,178] {task_command.py:371} INFO - Running <TaskInstance: test_module.test backfill__2021-12-01T00:00:00+00:00 [queued]> on host 75512ab5e882
[2022-08-25 21:31:07,039] {dagrun.py:564} INFO - Marking run <DagRun test_module @ 2021-12-01 00:00:00+00:00: backfill__2021-12-01T00:00:00+00:00, externally triggered: False> successful
[2022-08-25 21:31:07,039] {dagrun.py:609} INFO - DagRun Finished: dag_id=test_module, execution_date=2021-12-01 00:00:00+00:00, run_id=backfill__2021-12-01T00:00:00+00:00, run_start_date=None, run_end_date=2022-08-25 21:31:07.039737+00:00, run_duration=None, state=success, external_trigger=False, run_type=backfill, data_interval_start=2021-12-01 00:00:00+00:00, data_interval_end=2021-12-02 00:00:00+00:00, dag_hash=None
[2022-08-25 21:31:07,040] {dagrun.py:795} WARNING - Failed to record duration of <DagRun test_module @ 2021-12-01 00:00:00+00:00: backfill__2021-12-01T00:00:00+00:00, externally triggered: False>: start_date is not set.
[2022-08-25 21:31:07,040] {backfill_job.py:367} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 2 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
[2022-08-25 21:31:07,043] {local_executor.py:390} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait.
[2022-08-25 21:31:07,177] {backfill_job.py:879} INFO - Backfill done. Exiting.
```
### What you think should happen instead
When the BackfillJob fetches the dagrun, it will also need to set the start date.
It can be done right after setting the run variable. ([source](https://github.com/apache/airflow/blob/main/airflow/jobs/backfill_job.py#L310-L320))
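In other words, something along these lines right after the run is fetched (a minimal sketch, not the actual BackfillJob code):
```
from airflow.utils import timezone


def ensure_backfill_run_has_start_date(dag_run) -> None:
    """Sketch: give a cleared DagRun that a backfill re-uses its start date back."""
    if dag_run.start_date is None:
        dag_run.start_date = timezone.utcnow()
```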
### How to reproduce
Run the backfill command first without `--reset-dagruns --yes` flags.
```
airflow dags backfill -s 2021-12-01 -e 2021-12-01 test_module
```
Run the backfill command with the `--reset-dagruns --yes` flags.
```
airflow dags backfill -s 2021-12-01 -e 2021-12-01 --reset-dagruns --yes test_module
```
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26000 | https://github.com/apache/airflow/pull/26135 | 4644a504f2b64754efb40f4c61f8d050f3e7b1b7 | 2d031ee47bc7af347040069a3162273de308aef6 | "2022-08-27T01:00:57Z" | python | "2022-09-02T16:14:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,976 | ["airflow/api_connexion/schemas/pool_schema.py", "airflow/models/pool.py", "airflow/www/views.py", "tests/api_connexion/endpoints/test_pool_endpoint.py", "tests/api_connexion/schemas/test_pool_schemas.py", "tests/api_connexion/test_auth.py", "tests/www/views/test_views_pool.py"] | Include "Scheduled slots" column in Pools view | ### Description
It would be nice to have a "Scheduled slots" column to see how many slots want to enter each pool. Currently we are only displaying the running and queued slots.
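For illustration, the kind of aggregation such a column would surface (a sketch only; the function name is made up and this is not the actual implementation):
```
from sqlalchemy import func

from airflow.models.taskinstance import TaskInstance
from airflow.utils.session import provide_session
from airflow.utils.state import TaskInstanceState


@provide_session
def scheduled_slots(pool_name: str, session=None) -> int:
    """Sum of pool_slots requested by task instances currently in the SCHEDULED state."""
    total = (
        session.query(func.sum(TaskInstance.pool_slots))
        .filter(TaskInstance.pool == pool_name, TaskInstance.state == TaskInstanceState.SCHEDULED)
        .scalar()
    )
    return int(total or 0)
```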
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25976 | https://github.com/apache/airflow/pull/26006 | 1c73304bdf26b19d573902bcdfefc8ca5160511c | bcdc25dd3fbda568b5ff2c04701623d6bf11a61f | "2022-08-26T07:53:27Z" | python | "2022-08-29T06:31:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,968 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Unable to configure Google Secrets Manager in 2.3.4 | ### Apache Airflow version
2.3.4
### What happened
I am attempting to configure a Google Secrets Manager secrets backend using the `gcp_keyfile_dict` param in a `.env` file with the following ENV Vars:
```
AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
AIRFLOW__SECRETS__BACKEND_KWARGS='{"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "gcp_keyfile_dict": <json-keyfile>}'
```
In previous versions, including 2.3.3, this worked without issue.
After upgrading to Astro Runtime 5.0.8 I get the following error, taken from the scheduler container logs. The scheduler, webserver, and triggerer are continually restarting:
```
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/usr/local/lib/python3.9/site-packages/airflow/__init__.py", line 35, in <module>
from airflow import settings
File "/usr/local/lib/python3.9/site-packages/airflow/settings.py", line 35, in <module>
from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401
File "/usr/local/lib/python3.9/site-packages/airflow/configuration.py", line 1618, in <module>
secrets_backend_list = initialize_secrets_backends()
File "/usr/local/lib/python3.9/site-packages/airflow/configuration.py", line 1540, in initialize_secrets_backends
custom_secret_backend = get_custom_secret_backend()
File "/usr/local/lib/python3.9/site-packages/airflow/configuration.py", line 1523, in get_custom_secret_backend
return _custom_secrets_backend(secrets_backend_cls, **alternative_secrets_config_dict)
TypeError: unhashable type: 'dict'
```
### What you think should happen instead
Containers should remain healthy and the secrets backend should successfully be added
### How to reproduce
`astro dev init` a fresh project
Dockerfile:
`FROM quay.io/astronomer/astro-runtime:5.0.8`
`.env` file:
```
AIRFLOW__SECRETS__BACKEND=airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
AIRFLOW__SECRETS__BACKEND_KWARGS='{"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "gcp_keyfile_dict": <service-acct-json-keyfile>}'
```
`astro dev start`
### Operating System
macOS 11.6.8
### Versions of Apache Airflow Providers
[apache-airflow-providers-google](https://airflow.apache.org/docs/apache-airflow-providers-google/8.1.0/) 8.1.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25968 | https://github.com/apache/airflow/pull/25970 | 876536ea3c45d5f15fcfbe81eda3ee01a101faa3 | aa877637f40ddbf3b74f99847606b52eb26a92d9 | "2022-08-25T22:01:21Z" | python | "2022-08-26T09:24:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,952 | ["airflow/providers/amazon/aws/operators/rds.py", "airflow/providers/amazon/aws/sensors/rds.py", "docs/apache-airflow-providers-amazon/operators/rds.rst", "tests/providers/amazon/aws/operators/test_rds.py", "tests/system/providers/amazon/aws/example_rds_instance.py"] | Add RDS operators/sensors | ### Description
I think adding the following operators/sensors would benefit companies that need to start/stop RDS instances programmatically.
Name | Description | PR
:- | :- | :-
`RdsStartDbOperator` | Start an instance, and optionally wait for it to enter the "available" state | #27076
`RdsStopDbOperator` | Stop an instance, and optionally wait for it to enter the "stopped" state | #27076
`RdsDbSensor` | Wait for the requested status (eg. available, stopped) | #26003
Is this something that would be accepted into the codebase?
Please let me know.
### Use case/motivation
#### 1. Saving money
RDS is expensive. To save money, a company keeps test/dev environment relational databases shut down until it needs to use them. With Airflow, they can start a database instance before running a workload, then turn it off after the workload finishes (or errors).
#### 2. Force RDS to stay shut down
RDS automatically starts a database after 1 week of downtime. A company does not need this feature. They can create a DAG that regularly runs the shutdown command on a list of database instance ids stored in a `Variable`. The alternative is to write a shell script or log in to the console and manually shut down each database every week.
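For context, roughly what teams do today without dedicated operators, calling boto3 directly from a task (a sketch; the Variable name is made up):
```
import boto3

from airflow.decorators import task
from airflow.models import Variable


@task
def stop_rds_instances():
    """Sketch: keep dev/test instances stopped, with the instance ids read from an Airflow Variable."""
    instance_ids = Variable.get("rds_instances_to_keep_stopped", deserialize_json=True)
    rds = boto3.client("rds")
    for db_id in instance_ids:
        status = rds.describe_db_instances(DBInstanceIdentifier=db_id)["DBInstances"][0]["DBInstanceStatus"]
        if status == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db_id)
```
A dedicated `RdsStopDbOperator` would replace this kind of boilerplate with a single task definition.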
#### 3. Making sure a database is running before scheduling workload
A company programmatically starts/stops its RDS instances. Before they run a workload, they want to make sure it's running. They can use a sensor to make sure a database is available before attempting to run any jobs that require access.
Also, during maintenance windows, RDS instances may be taken offline. Rather than tuning each DAG schedule to run outside of this window, a company can use a sensor to wait until the instance is available. (Yes, the availability check could also take place immediately before the maintenance window.)
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25952 | https://github.com/apache/airflow/pull/27076 | d4bfccb3c90d889863bb1d1500ad3158fc833aae | a2413cf6ca8b93e491a48af11d769cd13bce8884 | "2022-08-25T08:51:53Z" | python | "2022-10-19T05:36:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,949 | ["airflow/www/static/js/api/useGridData.test.js", "airflow/www/static/js/api/useGridData.ts"] | Auto-refresh is broken in 2.3.4 | ### Apache Airflow version
2.3.4
### What happened
In PR #25042 a bug was introduced that prevents auto-refresh from working when tasks of type `scheduled` are running.
### What you think should happen instead
Auto-refresh should work for any running or queued task, rather than only manually-scheduled tasks.
### How to reproduce
_No response_
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25949 | https://github.com/apache/airflow/pull/25950 | e996a88c7b19a1d30c529f5dd126d0a8871f5ce0 | 37ec752c818d4c42cba6e7fdb2e11cddc198e810 | "2022-08-25T03:39:42Z" | python | "2022-08-25T11:46:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,937 | ["airflow/providers/common/sql/hooks/sql.py", "airflow/providers/common/sql/provider.yaml", "airflow/providers/presto/hooks/presto.py", "airflow/providers/presto/provider.yaml", "airflow/providers/sqlite/hooks/sqlite.py", "airflow/providers/sqlite/provider.yaml", "airflow/providers/trino/hooks/trino.py", "airflow/providers/trino/provider.yaml", "generated/provider_dependencies.json"] | TrinoHook uses wrong parameter representation when inserting rows | ### Apache Airflow Provider(s)
trino
### Versions of Apache Airflow Providers
apache-airflow-providers-trino==4.0.0
### Apache Airflow version
2.3.3
### Operating System
macOS 12.5.1 (21G83)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
`TrinoHook.insert_rows()` throws a syntax error due to the underlying prepared statement using "%s" as representation for parameters, instead of "?" [which Trino uses](https://trino.io/docs/current/sql/prepare.html#description).
### What you think should happen instead
`TrinoHook.insert_rows()` should insert rows using Trino-compatible SQL statements.
The following exception is raised currently:
`trino.exceptions.TrinoUserError: TrinoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:88: mismatched input '%'. Expecting: ')', <expression>, <query>", query_id=xxx)`
### How to reproduce
Instantiate an `airflow.providers.trino.hooks.trino.TrinoHook` instance and use its `insert_rows()` method.
Operators using this method internally are also broken: e.g. `airflow.providers.trino.transfers.gcs_to_trino.GCSToTrinoOperator`
### Anything else
The issue seems to come from `TrinoHook.insert_rows()` relying on `DbApiHook.insert_rows()`, which uses "%s" to represent query parameters.
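For illustration, the two placeholder styles side by side (this is not the provider's code, just the mismatch described above):
```
values = ("some_name", 42)
dbapi_style = f"INSERT INTO my_table VALUES ({', '.join(['%s'] * len(values))})"
trino_style = f"INSERT INTO my_table VALUES ({', '.join(['?'] * len(values))})"
print(dbapi_style)  # INSERT INTO my_table VALUES (%s, %s) -- the DbApiHook-style statement
print(trino_style)  # INSERT INTO my_table VALUES (?, ?)   -- what Trino's PREPARE expects
```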
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25937 | https://github.com/apache/airflow/pull/25939 | 4c3fb1ff2b789320cc2f19bd921ac0335fc8fdf1 | a74d9349919b340638f0db01bc3abb86f71c6093 | "2022-08-24T14:02:00Z" | python | "2022-08-27T01:15:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,926 | ["docs/apache-airflow-providers-docker/decorators/docker.rst", "docs/apache-airflow-providers-docker/index.rst"] | How to guide for @task.docker decorator | ### Body
Hi.
[The documentation for apache-airflow-providers-docker](https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/index.html) does not provide information on how to use the `@task.docker` decorator. We have this decorator described only in the API reference for this provider and in the documentation for the apache-airflow package.
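For reference, the kind of minimal snippet such a how-to guide could include (a sketch; it assumes the docker provider is installed and uses only the decorator's `image` argument):
```
from airflow.decorators import task


@task.docker(image="python:3.9-slim-bullseye")
def double(x: int) -> int:
    # The function body is executed inside a container started from the given image.
    return x * 2
```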
Best regards,
Kamil
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/25926 | https://github.com/apache/airflow/pull/28251 | fd5846d256b6d269b160deb8df67cd3d914188e0 | 74b69030efbb87e44c411b3563989d722fa20336 | "2022-08-24T04:39:14Z" | python | "2022-12-14T08:48:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,851 | ["airflow/providers/common/sql/hooks/sql.py", "tests/providers/common/sql/hooks/test_sqlparse.py", "tests/providers/databricks/hooks/test_databricks_sql.py", "tests/providers/oracle/hooks/test_oracle.py"] | PL/SQL statements stop working after upgrading common-sql to 1.1.0 | ### Apache Airflow Provider(s)
common-sql, oracle
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-oracle==3.3.0
### Apache Airflow version
2.3.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
After upgrading the common-sql provider from 1.0.0 to 1.1.0, SQL statements with DECLARE stop working.
Using OracleProvider 3.2.0 with common-sql 1.0.0:
```
[2022-08-19, 13:16:46 -04] {oracle.py:66} INFO - Executing: DECLARE
v_sql LONG;
BEGIN
v_sql := '
create table usr_bi_cgj.dim_tarefa
(
id_tarefa NUMBER(22) not null primary key,
ds_tarefa VARCHAR2(4000) not NULL
);
';
EXECUTE IMMEDIATE v_sql;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN EXECUTE IMMEDIATE 'TRUNCATE TABLE usr_bi_cgj.dim_tarefa';
COMMIT;
END;
[2022-08-19, 13:16:46 -04] {base.py:68} INFO - Using connection ID 'bitjro' for task execution.
[2022-08-19, 13:16:46 -04] {sql.py:255} INFO - Running statement: DECLARE
v_sql LONG;
BEGIN
v_sql := '
create table usr_bi_cgj.dim_tarefa
(
id_tarefa NUMBER(22) not null primary key,
ds_tarefa VARCHAR2(4000) not NULL
);
';
EXECUTE IMMEDIATE v_sql;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN EXECUTE IMMEDIATE 'TRUNCATE TABLE usr_bi_cgj.dim_tarefa';
COMMIT;
END;, parameters: None
[2022-08-19, 13:16:46 -04] {sql.py:264} INFO - Rows affected: 0
[2022-08-19, 13:16:46 -04] {taskinstance.py:1420} INFO - Marking task as SUCCESS. dag_id=caixa_tarefa_pje, task_id=cria_temp_dim_tarefa, execution_date=20220819T080000, start_date=20220819T171646, end_date=20220819T171646
[2022-08-19, 13:16:46 -04] {local_task_job.py:156} INFO - Task exited with return code 0
```
![image](https://user-images.githubusercontent.com/226773/185792377-2c0f9190-e315-4b9c-9731-c8e57aea282c.png)
After upgrading the Oracle provider to 3.3.0 with common-sql 1.1.0, the same statement now throws an exception:
```
[2022-08-20, 14:58:14 ] {sql.py:315} INFO - Running statement: DECLARE
v_sql LONG;
BEGIN
v_sql := '
create table usr_bi_cgj.dim_tarefa
(
id_tarefa NUMBER(22) not null primary key,
ds_tarefa VARCHAR2(4000) not NULL
);
';
EXECUTE IMMEDIATE v_sql;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN EXECUTE IMMEDIATE 'TRUNCATE TABLE usr_bi_cgj.dim_tarefa';
COMMIT;
END, parameters: None
[2022-08-20, 14:58:14 ] {taskinstance.py:1909} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/oracle/operators/oracle.py", line 69, in execute
hook.run(self.sql, autocommit=self.autocommit, parameters=self.parameters)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/common/sql/hooks/sql.py", line 295, in run
self._run_command(cur, sql_statement, parameters)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/common/sql/hooks/sql.py", line 320, in _run_command
cur.execute(sql_statement)
File "/home/airflow/.local/lib/python3.7/site-packages/oracledb/cursor.py", line 378, in execute
impl.execute(self)
File "src/oracledb/impl/thin/cursor.pyx", line 121, in oracledb.thin_impl.ThinCursorImpl.execute
File "src/oracledb/impl/thin/protocol.pyx", line 375, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 376, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 369, in oracledb.thin_impl.Protocol._process_message
oracledb.exceptions.DatabaseError: ORA-06550: linha 17, coluna 3:
PLS-00103: Encontrado o símbolo "end-of-file" quando um dos seguintes símbolos era esperado:
; <um identificador>
<um identificador delimitado por aspas duplas>
O símbolo ";" foi substituído por "end-of-file" para continuar.
```
![image](https://user-images.githubusercontent.com/226773/185762143-4f96e425-7eda-4140-a281-e096cc7d3148.png)
### What you think should happen instead
I think stripping `;` from statement is causing this error
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25851 | https://github.com/apache/airflow/pull/25855 | ccdd73ec50ab9fb9d18d1cce7a19a95fdedcf9b9 | 874a95cc17c3578a0d81c5e034cb6590a92ea310 | "2022-08-21T13:19:59Z" | python | "2022-08-21T23:51:11Z" |