<@U03APPBC2SC>, were you able to figure this out?
| Sorry haven’t had time to look at it yet - I reverted it to 1.4.2 for now
Will let you know when I do
|
I encountered an issue with task retry; somehow the following task is not retried as per the configuration.
```from flytekit import task, workflow

@task(retries=2)
def should_fail():
    raise ValueError("fail")

@workflow
def multi_task_pipeline_workflow():
    should_fail()```
Is it a known issue, or is there some admin configuration that needs to be updated?
NVM, I missed this part.
| this is still an interesting topic tho <@U02B12QHY9J>, I feel like from a UX perspective exceptions raised explicitly in user-defined code should be recoverable. <@U0265RTUJ5B> <@UNR3C6Y4T> thoughts?
looking at some of the workflows I’ve been playing with, I think this is a bug
<@U02B12QHY9J> would you mind copy-pasting a screenshot when you click on “Show Error”?
| > I feel like from a UX perspective exceptions raised explicitly in user-defined code should be recoverable.
Agree with this. A user-space exception might be thrown by some library that a task depends on; explicitly catching it and converting it to `FlyteRecoverableException` might not be a lot of code, but it feels verbose, especially if we have to do it for every task.
The error
```Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flytekit/exceptions/scopes.py", line 203, in user_entry_point
return wrapped(*args, **kwargs)
File "/root/retry_workflows/multi_task_pipeline_workflow.py", line 57, in should_fail
raise ValueError("fail")
Message:
fail
User error.```
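For reference, the explicit-conversion workaround being discussed looks roughly like this - a minimal sketch assuming `FlyteRecoverableException` from `flytekit.exceptions.user`:
```from flytekit import task
from flytekit.exceptions.user import FlyteRecoverableException

@task(retries=2)
def should_fail():
    try:
        raise ValueError("fail")
    except ValueError as e:
        # re-raise as recoverable so the configured retries are actually attempted
        raise FlyteRecoverableException(str(e)) from e```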
|
How does Flyte identify a system error? I was trying ShellTask, and the script I am providing is invalid, but instead of returning a user error it's classified as a system error.
| The type of exception raised.
For all unknown exceptions it reports a system error.
Please open an issue and ideally a PR
|
Is it possible to retrieve the workflow name from within a task? I am browsing through `FlyteContext` but can't see any suitable property available.
| Not today; it could be added, but what if it's a single task execution?
Also, maybe the launch plan would be better?
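Not the workflow name, but the execution identifier is available from the task context today - a small sketch assuming `flytekit.current_context()`:
```import flytekit
from flytekit import task

@task
def show_execution_info() -> str:
    ctx = flytekit.current_context()
    # execution_id carries project / domain / name of the execution, not the workflow name
    ex_id = ctx.execution_id
    return f"{ex_id.project}/{ex_id.domain}/{ex_id.name}"```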
| Yeah, I think launchplan information is also useful. I have a use-case to calculate the time boundary in a workflow based on current execution time and its schedule. E.g. to determine `start_time` and `end_time` of a query. `end_time` is retrieved based on the `kickoff_time` and `start_time` should be the previous scheduled execution. Currently, I have to pass the cron schedule as workflow input and figure out the previous scheduled execution based on that.
| IMO you should add a duration/interval as an argument that you can set.
But if the question is how this can be automatic - atm I do not have an idea
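A rough sketch of that suggestion, assuming a cron launch plan that injects the scheduled time via `kickoff_time_input_arg` and takes the interval as a workflow input (names here are illustrative):
```from datetime import datetime, timedelta
from flytekit import CronSchedule, LaunchPlan, task, workflow

@task
def query_window(kickoff_time: datetime, interval_hours: int) -> str:
    # the window ends at the scheduled kickoff time and starts one interval earlier
    start_time = kickoff_time - timedelta(hours=interval_hours)
    return f"{start_time.isoformat()} .. {kickoff_time.isoformat()}"

@workflow
def scheduled_wf(kickoff_time: datetime, interval_hours: int = 24) -> str:
    return query_window(kickoff_time=kickoff_time, interval_hours=interval_hours)

daily_lp = LaunchPlan.get_or_create(
    workflow=scheduled_wf,
    name="scheduled_wf_daily",
    schedule=CronSchedule(schedule="0 0 * * *", kickoff_time_input_arg="kickoff_time"),
)```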
|
I am trying something a bit unconventional. Several tasks modify a `FlyteDirectory` and output the same `FlyteDirectory`. Is it possible to force all of them to be backed by the same GCS path? Essentially sharing the directory contents across tasks.
```from flytekit import task, workflow
from flytekit.types.directory import FlyteDirectory

@task
def modify_workdir(work_dir: FlyteDirectory) -> FlyteDirectory:
    # do something in work_dir
    # potentially add files
    return work_dir

@workflow
def my_workflow():
    work_dir = init_workdir()  # a task (not shown) that creates the initial directory
    work_dir = modify_workdir(work_dir=work_dir)
    work_dir = modify_workdir(work_dir=work_dir)
    work_dir = modify_workdir(work_dir=work_dir)```
| This is what will happen
Every downstream task will use the same location
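If you want to check that, a small sketch (assuming the directory is backed by a remote source) that prints the backing location inside a task:
```from flytekit import task
from flytekit.types.directory import FlyteDirectory

@task
def show_backing_location(work_dir: FlyteDirectory) -> FlyteDirectory:
    # remote_source is the remote (e.g. GCS) prefix this directory is backed by;
    # returning the same object keeps downstream tasks pointed at that prefix
    print(work_dir.remote_source, work_dir.path)
    return work_dir```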
| Oh you are right!
|
hi folks! looking for some documentation on ray integration - esp as it pertains to persistent clusters. can't find my last question on this since it was more than 90d ago. even the original RFC would suffice, the public docs don't go into detail on limitations/architecture
| we do not support persistent clusters today
| any timeline on that?
| <https://docs.google.com/document/d/1-695lxz8a-GFz4cFamGkF1NhspKMkDGIDdkY9EDLMz8/edit#heading=h.xqpnlyepejm>
nothing we can commit - please contribute :slightly_smiling_face:
TBH no one asked for it
even folks at spotify found corruption issues
| last i heard it was a blocker on ray's side
yea could use more info on that. is there a ticket out there?
| ya ray would get corrupted
i would not know
| someone on our side? :slightly_smiling_face:
| sorry <@U01E1L78CLA> / but, this was tried by folks at spotify and did not really tell us - as this is a big deal
yup :slightly_smiling_face:
| cool. will check there. but from your perspective, do persistent resources across tasks work? or are you waiting on an implementation
| no there is no implementation in flyte today
we need better requirements
|
I'm trying to use datetimes in a dataclass as input to a workflow like this:
```@dataclass_json
@dataclass
class TrainingArgs:
    latest_timestamps: List[datetime.datetime] = field(default_factory=lambda: [])
```
But now I'm getting a schema warning (I haven't tried to run it yet):
```{"asctime": "2023-03-15 15:13:19,393", "name": "flytekit", "levelname": "WARNING", "message": "Failed to extract schema for object <class 'TrainingArgs'>, (will run schemaless) error: unsupported field type <fields._TimestampField(dump_default=datetime.datetime(2023, 3, 15, 15, 13, 19, 383752), attribute=None, validate=None, required=False, load_only=False, dump_only=False, load_default=<marshmallow.missing>, allow_none=False, error_messages={'required': 'Missing data for required field.', 'null': 'Field may not be null.', 'validator_failed': 'Invalid value.'})>If you have postponed annotations turned on (PEP 563) turn it off please. Postponedevaluation doesn't work with json dataclasses"}
```
Does anyone have an idea how to get this working?
| Try
```latest_timestamps: typing.List[datetime.datetime] = field(metadata=config(mm_field=fields.List(fields.DateTime())))```
| Thanks <@USU6W5ATA> that's working great. I think I'm still confused sometimes by the fact that dataclasses are converted to JSON instead of "native" Literals like top-level Parameters.
For reference, here's the full example in the `dataclasses_json` docs: <https://lidatong.github.io/dataclasses-json/#extending>
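For anyone landing here later, the full field definition with the imports it assumes (`config` from `dataclasses_json`, `fields` from `marshmallow`) would look roughly like:
```import datetime
import typing
from dataclasses import dataclass, field

from dataclasses_json import config, dataclass_json
from marshmallow import fields

@dataclass_json
@dataclass
class TrainingArgs:
    # tell marshmallow explicitly how to (de)serialize the datetime list
    latest_timestamps: typing.List[datetime.datetime] = field(
        default_factory=list,
        metadata=config(mm_field=fields.List(fields.DateTime())),
    )```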
|
Hey,
Do you have an idea of what has changed in the way we use OAuth2 with Flytekit? I saw that it was reworked recently and it seems like it now takes the `scopes` defined in FlyteAdmin correctly. I am asking because when I use Flytekit `v1.3.1`, I only get the scope `['all']` for the `PlatformConfig`
```PlatformConfig(endpoint='<http://flyteurl.com|flyteurl.com>', insecure=False, insecure_skip_verify=False, console_endpoint=None, command=None, client_id='github-client', client_credentials_secret='nice_secret', scopes=['all'], auth_mode='ClientSecret')```
And when I use Flytekit `v1.4.1` then I get different scopes
```/.venv/lib/python3.9/site-packages/flytekit/clients/auth/authenticator.py(193)get_token()
-> if scopes is not None:
(Pdb) scopes
['offline', 'all']```
Flytekit `v1.4.1` correctly reflects the scopes we have but I was wondering how come `v1.3.1` has different scopes?
| Ohh I did a complete overhaul of the auth system in flytekit
Check it out, it is actually now designed as a usable library
| So I guess it’s why it’s actually reflecting the correct scopes now and before it was some kind of bug I guess?
| I think so
| Hey <@UNZB4NW3S>, how are you doing? :slightly_smiling_face:
<https://github.com/flyteorg/flytekit/pull/1458/files#diff-68068a13c159b1b1618b3fb6587cbec5fe18f4637414359b8f7580e03e19b536R22|This> is the remote client config being used which is called <https://github.com/flyteorg/flytekit/pull/1458/files#diff-68068a13c159b1b1618b3fb6587cbec5fe18f4637414359b8f7580e03e19b536R95|here> (and maybe in some other places)
When <https://github.com/flyteorg/flytekit/pull/1458/files#diff-68068a13c159b1b1618b3fb6587cbec5fe18f4637414359b8f7580e03e19b536R47|this> function is called, it receives 2 configs. The one we define (the PlatformConfig), and that RemoteStoreConfig I just linked. `scopes` passed from the PlatformConfig are 100% ignored, and the ones defined in that `RemoteClientConfigStore.get_client_config` are used, which are `[offline, all]` . Shouldn’t there be something like a merge if those values are defined by us?
I was about to open an issue for this but wanted to double check before doing so
| <@U04P5R3PJ2J> is this causing a bug? I am not sure a merge is the solution either.
Cc <@UNW4VP36V>, can you help here?
| Hey! Yes, we have a function:
```from flytekit.configuration import Config
from flytekit.remote import FlyteRemote

def get_latest_workflow_version(project, domain, workflow):
    cfg = Config.auto(config_file=path)
    remote = FlyteRemote(cfg)
    wf_version = remote.fetch_workflow(project, domain, workflow).id.version
    return wf_version```
with a config like:
```admin:
  endpoint: dns:///console.flyte.dev.foo.bar.com
  insecure: false
  authType: ClientSecret
  clientId: github-client
  clientSecretLocation: /etc/secrets/client_secret
  scopes: ["all"]```
and that “scopes” is ignored, `['offline', 'all']` are used, and causes an error authenticating
The error only takes place after upgrading `flytekit` . With the `1.3` version it works okay
| I guess then please contribute
Sorry for the confusion
I thought no one was using locally defined scopes
| <https://github.com/flyteorg/flyte/issues/3486>
| I agree this is a bug... the reference implementation is here: <https://github.com/flyteorg/flyteidl/blob/e1a667ef2536bbfc61331490b5417404f45ddc0b/clients/go/admin/token_source_provider.go#L67>
And it should indeed use the locally defined scopes (if they exist) otherwise, it should read from the admin defined ones...
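To make the intended precedence concrete, a rough sketch (not the actual flytekit code):
```def resolve_scopes(platform_scopes, admin_client_config_scopes):
    # prefer scopes configured locally (PlatformConfig / config.yaml);
    # otherwise fall back to what FlyteAdmin's public client config advertises
    if platform_scopes:
        return list(platform_scopes)
    return list(admin_client_config_scopes)```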
| Just to kick this off. Maybe something in this direction could help?
<https://github.com/flyteorg/flytekit/pull/1553>
|
Seeing a somewhat weird UI/caching bug. I have a workflow that launches a dynamic with cache enabled on the tasks that the dynamic runs. Cache works properly when launching fresh from console, but when I click the "Relaunch" button, cache is hit for the first few tasks (outside of the dynamic), but not for any of the tasks in the dynamic :thinking_face:
On closer inspection, the inputs are identical for the dynamic in both executions (where cache worked, and where it did not). I then looked at the inputs to the workflow in both executions, and it looks like when I click "Relaunch" and copy the inputs, the order of the keys is different, but their contents are identical to the original run. Would that somehow mess up the cache?
| what type is the input
| These are the inputs
``` a: str,
b: str,
c: list[str] = [],
d: list[str] = [],
e: Optional[str] = None,
f: Optional[str] = None,
g: bool = False,```
| not sure if it matters but this is one dynamic task with seven different inputs?
can you copy paste the task signature?
could you also go to the inputs tab of both runs and copy the literal here (redact as needed of course)
| Sorry this was the input to the workflow. The inputs to the dynamic are:
``` datasets_with_paths: DatasetsWithPaths,
reference_file: ReferenceFile,
region_file: FlyteFile,
gcs_output_dir: str,
keep_consensus_bams: bool,```
Inputs to dynamic that successfully used cache:
```{"keep_consensus_bams":true,"reference_file":{"type":"single blob","uri":"REDACTED"},"region_file":{"type":"single blob","uri":"REDACTED"},"datasets_with_paths":{"type":"single (yaml) blob","uri":"REDACTED"},"gcs_output_dir":"REDACTED"}```
Inputs to dynamic that did not use cache (when using Relaunch)
```{"gcs_output_dir":"REDACTED","keep_consensus_bams":true,"reference_file":{"type":"single blob","uri":"REDACTED"},"region_file":{"type":"single blob","uri":"REDACTED"},"datasets_with_paths":{"type":"single (yaml) blob","uri":"REDACTED"}}```
All values were the same, just order of the keys are different
I think I got lucky when I hit cache..
if I try again, I don't hit it - seems like I have to get lucky and have the keys match :sob:
or it's a UI bug
but given there are logs, it looks like it's running everything
<@UNR3C6Y4T> any other ideas here?
| and all the paths are exactly the same?
what types do `DatasetsWithPaths` and `ReferenceFile` resolve to on the flyte side?
| all paths are identical
they resolve to blob
| also maybe worth it to check the data catalog logs (if you're running it separately)
this has been mostly resolved, or at least understood. The confusion stemmed from the fact that the console does not show the version of the tasks that actually ran, instead pulling them from admin.
debugged by downloading the futures file and inspecting the <https://github.com/flyteorg/flyteidl/blob/3dfcaf6671d85ee72c1ce00961c17421c5c91111/protos/flyteidl/core/dynamic_job.proto#L28|tasks> included in the dynamic job spec.
the version shown in the console for child tasks had one discovery version but they actually ran with another.
command used to inspect the `futures.pb` file was
```flyte-cli parse-proto -f futures.pb -p flyteidl.core.dynamic_job_pb2.DynamicJobSpec```
<@U0231BEP02E> can you confirm the source for the Task tab for children of dynamic tasks?
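A Python equivalent of that `flyte-cli parse-proto` invocation, assuming the `flyteidl` package is installed:
```from flyteidl.core import dynamic_job_pb2

spec = dynamic_job_pb2.DynamicJobSpec()
with open("futures.pb", "rb") as f:
    spec.ParseFromString(f.read())

# the task templates the dynamic node actually ran, with their registered versions
for t in spec.tasks:
    print(t.id.name, t.id.version)```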
|
:wave: Can someone help review <https://github.com/flyteorg/flytekit/pull/1545>? We use a lot of remote entities, so this is a blocking issue to us. Thank you. :pray:
<@UNR3C6Y4T> Thank you for reviewing and merging this. Could you please make a new release? Also, do you think we should backport this to 1.3 and 1.2?
| yes we should backport.
<@U0265RTUJ5B> i will do this in a bit
we’re also cutting two beta releases today for 1.5
actually can you bump to 1.4?
| <@UNR3C6Y4T> We tried 1.4 and so far so good. Is there any known problem? If no, let's make a new 1.4.x release to include this PR?
| on it.
<https://github.com/flyteorg/flytekit/pull/1551> <@U0265RTUJ5B>
things are failing but they’re intermittent… will re-run in a sec. this is a straight fast-forward of master basically, should work fine unless dependencies have changed
<https://github.com/flyteorg/flytekit/releases/tag/v1.4.2>
|
Hi, I came across the `with_overrides` feature in <https://docs.flyte.org/projects/cookbook/en/latest/auto/deployment/customizing_resources.html#using-with-overrides|flytekit>. The example demonstrates overriding a task's cpu & memory limits.
```@workflow
def my_pipeline(x: typing.List[int]) -> int:
    return square_1(x=count_unique_numbers_1(x=x)).with_overrides(
        limits=Resources(cpu="6", mem="500Mi")
    )```
Can it be done using a workflow input? E.g.
```@workflow
def my_pipeline(x: typing.List[int], cpu: str, mem: str) -> int:
    return square_1(x=count_unique_numbers_1(x=x)).with_overrides(
        limits=Resources(cpu=cpu, mem=mem)
    )```
?
| Hey <@U02B12QHY9J>, we plan to support generic resource overrides, so most of the resource config can be overridden at execution time.
Just like overriding the role and service account. Here is an <https://github.com/flyteorg/flyte/issues/475#issuecomment-1430501231|example>
| Oh ok, I thought the word `dynamically` here refers to execution overrides. Thanks for clarifying!
| You can do it in a dynamic workflow
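A rough sketch of that, assuming a simple `square_1` task and a `@dynamic` workflow so the override values are real Python values at execution time:
```from flytekit import Resources, dynamic, task, workflow

@task
def square_1(x: int) -> int:
    return x * x

@dynamic
def square_with_limits(x: int, cpu: str, mem: str) -> int:
    # inside a dynamic workflow the inputs are materialized Python values,
    # so they can be used to build the resource override at execution time
    return square_1(x=x).with_overrides(limits=Resources(cpu=cpu, mem=mem))

@workflow
def my_pipeline(x: int, cpu: str = "2", mem: str = "500Mi") -> int:
    return square_with_limits(x=x, cpu=cpu, mem=mem)```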
| We’d love the functionality described above as well! (also described in the feature request)!
| Please help with speccing it out
|
Hi, I get these errors on my Mac M1 when following the quickstart tutorial.
Thank you for the tool and your time.
`"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minio\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minio pod=flyte-sandbox-minio-645c8ddf7c-rrj5x_flyte(9d8637fe-a97e-40e7-a5a6-985bc0fbc21b)\"" pod="flyte/flyte-sandbox-minio-645c8ddf7c-rrj5x" podUID=9d8637fe-a97e-40e7-a5a6-985bc0fbc21b`
`I0519 19:26:18.275183 59 scope.go:110] "RemoveContainer" containerID="38a7520207d8aaaa55ca89553770b83c086e9d2e671580fbbe85f0b34cc8608a"`
`E0519 19:26:18.276055 59 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"flyte\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=flyte pod=flyte-sandbox-b789778f6-c6lxd_flyte(94b613f8-e323-4cf8-b250-bfcfc1bb1f4a)\"" pod="flyte/flyte-sandbox-b789778f6-c6lxd" podUID=94b613f8-e323-4cf8-b250-bfcfc1bb1f4a`
| what was the command you ran to start everything?
| this one: `flytectl demo start`
this is the tutorial I follow: <https://docs.flyte.org/projects/cookbook/en/latest/index.html>
also have no_proxy=localhost,127.0.0.1
i see. this too
```INFO[0000] [0] Couldn't find a config file []. Relying on env vars and pflags.```
| i don’t understand the significance of the no_proxy… where is that set?
you mean as env var?
on host or in the container?
can you `kubectl -n flyte get pod`
and `kubectl -n flyte describe <all failing pods>`
| well, I don't know how or why, but after deleting everything and pulling a new image I can see the flyte console.
when executing the example I get an error
| i see
what’s the error?
| pyflyte run --remote wine_flyte.py training_workflow --hyperparameters '{"C": 0.1}'
Failed with Unknown Exception <class 'requests.exceptions.ConnectionError'> Reason: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
| kubectl get pods
-n flyte
anything failing?
| NAME READY STATUS RESTARTS AGE
flyte-sandbox-docker-registry-7744c9999-lbxzc 1/1 Running 0 5m51s
flyte-sandbox-proxy-d95874857-r5lzh 1/1 Running 0 5m51s
flyte-sandbox-kubernetes-dashboard-6757db879c-6t2rf 1/1 Running 0 5m51s
flyte-sandbox-b789778f6-hw96t 1/1 Running 0 5m51s
flyte-sandbox-postgresql-0 1/1 Running 1 (77s ago) 5m51s
flyte-sandbox-minio-645c8ddf7c-cchgz 0/1 CrashLoopBackOff 5 (23s ago) 5m51s
| i see
| i think it's my resources in the cluster?
| yeah minio shouldn’t be crashing
`describe` it
and `logs -p`
it probably doesn’t have logs if it’s crashing but it might
| no logs
I think I will give the cluster more resources and space, restart it, and let you know
| k
describe has more info too often
in the events section
| by the way, why do I see this: INFO[0000] [0] Couldn't find a config file []. Relying on env vars and pflags.
| depends on where you’re seeing it
| even though I have set up the config file correctly
| copy paste the full command/stacktrace
| flytectl demo start
INFO[0000] [0] Couldn’t find a config file []. Relying on env vars and pflags.
:factory_worker: Bootstrapping a brand new flyte cluster... :hammer: :wrench:
:whale2: Going to use Flyte v1.6.0 release with image <http://cr.flyte.org/flyteorg/flyte-sandbox-bundled:sha-d391691c6db314da7298520e4fc83b2f5fe01eb9|cr.flyte.org/flyteorg/flyte-sandbox-bundled:sha-d391691c6db314da7298520e4fc83b2f5fe01eb9>
:whale2: pulling docker image for release <http://cr.flyte.org/flyteorg/flyte-sandbox-bundled:sha-d391691c6db314da7298520e4fc83b2f5fe01eb9|cr.flyte.org/flyteorg/flyte-sandbox-bundled:sha-d391691c6db314da7298520e4fc83b2f5fe01eb9>
| oh i think that’s fine actually
shouldn’t matter for that command
but it’s looking for a file at `~/.flyte/config.yaml`
| waiting to start ….
now i have a problem with the flyte console
flyte-sandbox-proxy-d95874857-6hqf2 1/1 Running 0 9m2s
flyte-sandbox-kubernetes-dashboard-6757db879c-txnwr 1/1 Running 0 9m2s
flyte-sandbox-docker-registry-7744c9999-kw497 1/1 Running 0 9m2s
flyte-sandbox-minio-645c8ddf7c-q8492 1/1 Running 5 (3m44s ago) 9m2s
flyte-sandbox-postgresql-0 1/1 Running 2 (2m59s ago) 9m2s
flyte-sandbox-b789778f6-fwkvq 0/1 CrashLoopBackOff 6 (40s ago) 9m2s
| logs?
| {“json”:{“src”:“composite_workqueue.go:88”},“level”:“debug”,“msg”:“Subqueue handler batch round”,“ts”:“2023-05-19T21:53:48Z”}
{“json”:{“src”:“composite_workqueue.go:98"},“level”:“debug”,“msg”:“Dynamically configured batch size [-1]“,”ts”:“2023-05-19T21:53:48Z”}
{“json”:{“src”:“composite_workqueue.go:129"},“level”:“debug”,“msg”:“Exiting SubQueue handler batch round”,“ts”:“2023-05-19T21:53:48Z”}
{“json”:{“src”:“composite_workqueue.go:88”},“level”:“debug”,“msg”:“Subqueue handler batch round”,“ts”:“2023-05-19T21:53:49Z”}
{“json”:{“src”:“composite_workqueue.go:98"},“level”:“debug”,“msg”:“Dynamically configured batch size [-1]“,”ts”:“2023-05-19T21:53:49Z”}
{“json”:{“src”:“composite_workqueue.go:129"},“level”:“debug”,“msg”:“Exiting SubQueue handler batch round”,“ts”:“2023-05-19T21:53:49Z”}
{“json”:{“src”:“composite_workqueue.go:88”},“level”:“debug”,“msg”:“Subqueue handler batch round”,“ts”:“2023-05-19T21:53:50Z”}
{“json”:{“src”:“composite_workqueue.go:98"},“level”:“debug”,“msg”:“Dynamically configured batch size [-1]“,”ts”:“2023-05-19T21:53:50Z”}
{“json”:{“src”:“composite_workqueue.go:129"},“level”:“debug”,“msg”:“Exiting SubQueue handler batch round”,“ts”:“2023-05-19T21:53:50Z”}
{“json”:{“src”:“composite_workqueue.go:88”},“level”:“debug”,“msg”:“Subqueue handler batch round”,“ts”:“2023-05-19T21:53:51Z”}
{“json”:{“src”:“composite_workqueue.go:98"},“level”:“debug”,“msg”:“Dynamically configured batch size [-1]“,”ts”:“2023-05-19T21:53:51Z”}
{“json”:{“src”:“composite_workqueue.go:129"},“level”:“debug”,“msg”:“Exiting SubQueue handler batch round”,“ts”:“2023-05-19T21:53:51Z”}
| reload the web page?
| kk
```upstream request timeout```
in the cluster is like this E0519 21:58:50.734034 57 pod_workers.go:951] “Error syncing pod, skipping” err=“failed to \“StartContainer\” for \“flyte\” with CrashLoopBackOff: \“back-off 5m0s restarting failed container=flyte pod=flyte-sandbox-b789778f6-fwkvq_flyte(ce5aaf96-ef0e-4985-8d9a-7b1a75d33d73)\“” pod=“flyte/flyte-sandbox-b789778f6-fwkvq” podUID=ce5aaf96-ef0e-4985-8d9a-7b1a75d33d73
the same message as before
| the pod is still crashing?
can you describe?
and copy paste
| % kubectl describe pods
Name: py39-cacher
Namespace: default
Priority: 0
Service Account: default
Node: 856e6b497fcd/172.17.0.2
Start Time: Sat, 20 May 2023 00:42:55 +0300
Labels: <none>
Annotations: <none>
Status: Succeeded
IP: 10.42.0.11
IPs:
IP: 10.42.0.11
Containers:
flytekit:
Container ID: <containerd://837ac176431d8b34f990a0ee037272059f55ad30f4361ccb94b86bbc1eaa085>c
Image: <http://ghcr.io/flyteorg/flytekit:py3.9-latest|ghcr.io/flyteorg/flytekit:py3.9-latest>
Image ID: <http://ghcr.io/flyteorg/flytekit@sha256:757c05c2b8cfea93ba9b0952a7cdd1adf6952e58dfab62993aaceee8b8357beb|ghcr.io/flyteorg/flytekit@sha256:757c05c2b8cfea93ba9b0952a7cdd1adf6952e58dfab62993aaceee8b8357beb>
Port: <none>
Host Port: <none>
Command:
echo
Args:
Flyte
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 20 May 2023 00:47:43 +0300
Finished: Sat, 20 May 2023 00:47:43 +0300
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mxptd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-mxptd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <http://node.kubernetes.io/not-ready:NoExecute|node.kubernetes.io/not-ready:NoExecute> op=Exists for 300s
<http://node.kubernetes.io/unreachable:NoExecute|node.kubernetes.io/unreachable:NoExecute> op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/py39-cacher to 856e6b497fcd
Normal Pulling 18m kubelet Pulling image “<http://ghcr.io/flyteorg/flytekit:py3.9-latest|ghcr.io/flyteorg/flytekit:py3.9-latest>”
Normal Pulled 13m kubelet Successfully pulled image “<http://ghcr.io/flyteorg/flytekit:py3.9-latest|ghcr.io/flyteorg/flytekit:py3.9-latest>” in 4m46.943337882s
Normal Created 13m kubelet Created container flytekit
Normal Started 13m kubelet Started container flytekit
on image is still on pulling phase
| `kubectl -n flyte describe pod flyte-sandbox-xyzxyz-xyzxyz`
| kubectl -n flyte describe pod flyte-sandbox-b789778f6-fwkvq
Name: flyte-sandbox-b789778f6-fwkvq
Namespace: flyte
Priority: 0
Service Account: flyte-sandbox
Node: 856e6b497fcd/172.17.0.2
Start Time: Sat, 20 May 2023 00:41:50 +0300
Labels: <http://app.kubernetes.io/instance=flyte-sandbox|app.kubernetes.io/instance=flyte-sandbox>
<http://app.kubernetes.io/name=flyte-sandbox|app.kubernetes.io/name=flyte-sandbox>
pod-template-hash=b789778f6
Annotations: checksum/cluster-resource-templates: 6fd9b172465e3089fcc59f738b92b8dc4d8939360c19de8ee65f68b0e7422035
checksum/configuration: 7ef5ef618ebb04f965552e2e4814dc053ef5338fee3ada32517e4e4b1695989b
checksum/db-password-secret: 669e1cdf4633c6dd40085f78d1bb6b9672d8120ff1f62077a879a4d46db133e2
Status: Running
IP: 10.42.0.4
IPs:
IP: 10.42.0.4
Controlled By: ReplicaSet/flyte-sandbox-b789778f6
Init Containers:
wait-for-db:
Container ID: <containerd://2b824dc0bc7c10d47400bd68e59b0c9fd20734d3b37ed811475509f408c0a7c>8
Image: bitnami/postgresql:sandbox
Image ID: sha256:a729f5f0de5fa39ba4d649e7366d499299304145d2456d60a16b0e63395bd61a
Port: <none>
Host Port: <none>
Command:
sh
-ec
Args:
until pg_isready \
-h flyte-sandbox-postgresql \
-p 5432 \
-U postgres
do
echo waiting for database
sleep 0.1
done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 20 May 2023 00:41:54 +0300
Finished: Sat, 20 May 2023 00:42:24 +0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9js4w (ro)
Containers:
flyte:
Container ID: <containerd://8d9c7124a05059352065eec0dc9b9a68144e34d6a05e8747f8de1f54989c049>3
Image: flyte-binary:sandbox
Image ID: sha256:b26c073652ff86b27f03a534177274b17dcb45f5ced24987c11282e5ddd7f110
Ports: 8088/TCP, 8089/TCP, 9443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
start
--config
/etc/flyte/config.d/*.yaml
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 20 May 2023 01:00:02 +0300
Finished: Sat, 20 May 2023 01:00:02 +0300
Ready: False
Restart Count: 9
Liveness: http-get http://:http/healthcheck delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/healthcheck delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: flyte-sandbox-b789778f6-fwkvq (v1:metadata.name)
POD_NAMESPACE: flyte (v1:metadata.namespace)
Mounts:
/etc/flyte/cluster-resource-templates from cluster-resource-templates (rw)
/etc/flyte/config.d from config (rw)
/var/run/flyte from state (rw)
/var/run/secrets/flyte/db-pass from db-pass (rw,path=“db-pass”)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9js4w (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
cluster-resource-templates:
Type: Projected (a volume that contains injected data from multiple sources)
ConfigMapName: flyte-sandbox-cluster-resource-templates
ConfigMapOptional: <nil>
ConfigMapName: flyte-sandbox-extra-cluster-resource-templates
ConfigMapOptional: <nil>
config:
Type: Projected (a volume that contains injected data from multiple sources)
ConfigMapName: flyte-sandbox-config
ConfigMapOptional: <nil>
ConfigMapName: flyte-sandbox-extra-config
ConfigMapOptional: <nil>
db-pass:
Type: Secret (a volume populated by a Secret)
SecretName: flyte-sandbox-db-pass
Optional: false
state:
Type: EmptyDir (a temporary directory that shares a pod’s lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-9js4w:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <http://node.kubernetes.io/not-ready:NoExecute|node.kubernetes.io/not-ready:NoExecute> op=Exists for 300s
<http://node.kubernetes.io/unreachable:NoExecute|node.kubernetes.io/unreachable:NoExecute> op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned flyte/flyte-sandbox-b789778f6-fwkvq to 856e6b497fcd
Normal Pulled 22m kubelet Container image “bitnami/postgresql:sandbox” already present on machine
Normal Created 22m kubelet Created container wait-for-db
Normal Started 22m kubelet Started container wait-for-db
Warning Unhealthy 21m (x3 over 21m) kubelet Liveness probe failed: Get “<http://10.42.0.4:8088/healthcheck>”: dial tcp 10.42.0.4:8088: connect: connection refused
Normal Killing 21m kubelet Container flyte failed liveness probe, will be restarted
Normal Pulled 20m (x2 over 21m) kubelet Container image “flyte-binary:sandbox” already present on machine
Normal Created 20m (x2 over 21m) kubelet Created container flyte
Normal Started 20m (x2 over 21m) kubelet Started container flyte
Warning Unhealthy 17m (x46 over 21m) kubelet Readiness probe failed: Get “<http://10.42.0.4:8088/healthcheck>”: dial tcp 10.42.0.4:8088: connect: connection refused
Warning BackOff 2m23s (x59 over 16m) kubelet Back-off restarting failed container
| can you get all the logs for flyte binary container as well?
`{"json":{"src":"composite_workqueue.go:88"},"level":"debug","msg":"Subqueue handler batch round","ts":"2023-05-19T21:53:48Z"}` isn’t quite enough - need the messages from startup
| need to go now unfortunately
<@UNR3C6Y4T> thanks for your help last night. Today I ran:
1. docker system prune -a
2. started docker again
and it worked
not sure exactly what the problem was last night, perhaps a resource issue
it worked in the sense that I could access the UI and run a workflow on the demo cluster. The execution of the wine dataset workflow fails though
i think the problem was with the db: 2023-05-20 19:22:36.738 GMT [266] LOG: checkpoint starting: end-of-recovery immediate wait
2023-05-20 19:22:36.752 GMT [266] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.003 s, sync=0.002 s, total=0.014 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB
2023-05-20 19:22:36.774 GMT [1] LOG: database system is ready to accept connections
2023-05-20 19:27:10.073 GMT [1] LOG: received smart shutdown request
2023-05-20 19:27:12.860 GMT [831] FATAL: the database system is shutting down
2023-05-20 19:27:16.648 GMT [841] FATAL: the database system is shutting down
2023-05-20 19:27:26.640 GMT [851] FATAL: the database system is shutting down
2023-05-20 19:27:36.586 GMT [860] FATAL: the database system is shutting down
FYI <@UNR3C6Y4T>
| why is the database shutting down? can you `describe` the postgres pod?
something feels wrong outside of flyte. what type of system are you on? how many resources have you given docker?
may need to bump it up a bit.
|
Is there a way to add a custom toleration to the flag for spot nodes?
We need both the affinity for spot, as well as a spot toleration to enable scheduling - however I can’t find where this would be done via the `flyte-binary` helm chart? I’ve seen <https://github.com/flyteorg/flyte/blob/master/rsts/deployment/configuration/generated/flyteadmin_config.rst#interruptible-tolerations-v1toleration|this> but I’m unsure how that is surfaced via the helm chart?
| via values.yaml you should be able to set:
```configuration:
inline:
plugins:
k8s:
interruptible-tolerations:
- <http://my.toleration.io|my.toleration.io>
- <http://my.toleration.io/some=value|my.toleration.io/some=value>```
I’ve not tried this; we use the `inline` block to insert `default-pod-template-name` which is the same level as `interruptible-tolerations` and that works for us.
Whatever is in `inline` gets inserted into this portion of the config map: <https://github.com/flyteorg/flyte/blob/master/charts/flyte-binary/templates/configmap.yaml#L198-L200>
| Thanks <@U036GA8565T>. Note that interruptible-tolerations is a list. Does that work for you <@U04RHGT28F2>?
| (updated, thanks for the catch jeev)
| Yes! Thank you both :pray:
|
I have deployed flyte on k8s using the flyte-binary helm chart. I set create=true for ingress in values.yaml and it created the dns records and 2 ingresses, one for http and one for grpc.
```NAME CLASS HOSTS ADDRESS PORTS AGE
orchestrator-grpc <none> <http://orchestrator.playground.cloud.abc.com|orchestrator.playground.cloud.abc.com> <http://b37bbe12aafaf42a7b06f611f91b07bd-7362ba7e14b8b74e.elb.eu-central-1.amazonaws.com|b37bbe12aafaf42a7b06f611f91b07bd-7362ba7e14b8b74e.elb.eu-central-1.amazonaws.com> 80 3h41m
orchestrator-http <none> <http://orchestrator.playground.cloud.abc.com|orchestrator.playground.cloud.abc.com> <http://b37bbe12aafaf42a7b06f611f91b07bd-7362ba7e14b8b74e.elb.eu-central-1.amazonaws.com|b37bbe12aafaf42a7b06f611f91b07bd-7362ba7e14b8b74e.elb.eu-central-1.amazonaws.com> 80 3h41m```
1. When I list the ingresses, I get this. As you can see, the hostnames for both ingresses are the same. Is this expected? I am told that this could be a problem. Also, is there anything that needs to be done for the webhook service, like setting up an ingress or anything to handle https traffic?
2. We are using nginx ingress controller and cert-manager to provision certificates in k8s. To set up nginx to use the certificate provided by cert-manager, 2 things are required - a) annotation specifying the cert-manager / issuer - `<http://cert-manager.io/cluster-issuer|cert-manager.io/cluster-issuer>: "abc-issuer"` and b) in the ingress spec, a tls section needs to be added like so -
```  tls:
    - hosts:
        - <http://echo1.example.com|echo1.example.com>
        - <http://echo2.example.com|echo2.example.com>
      secretName: echo-tls```
As shown <https://cert-manager.io/docs/usage/ingress/|here> in the cert-manager docs. However, the flyte-binary <https://github.com/flyteorg/flyte/blob/master/charts/flyte-binary/templates/ingress/grpc.yaml|ingress template> does not have these elements to be overridden from values.yaml. I tried adding the spec section in values.yaml, but this does not work.
```spec:
  tls:
    - hosts:
        - "${app_name}.${env}.<http://cloud.abc.com|cloud.abc.com>"
      secretName: "${app_name}.${env}.<http://cloud.abc.com|cloud.abc.com>"```
At this point, with the dns configured, the UI works, but without setting up SSL, grpc will not work, so we are not able to register workflow. Please let me know if anyone has faced this issue and found a resolution. Thanks!
| HI <@U056YP7TQ5A>
1. Yes, it's expected. If you `describe` each one of the Ingress resources you'll see that, despite using the same host name, the controller will route traffic to the corresponding service depending on the path (multiplexor pattern). I don't think there's anything you need to do for the `webhook` service
2. In `values.yaml` under `commonAnnotations` you can add the required annotations for, in this case, Ingress resource:
```commonAnnotations:
ingress:
<http://cert-manager.io/cluster-issuer|cert-manager.io/cluster-issuer>: nameOfClusterIssuer```
...
regarding the `tls`section, I'm looking at what can be done besides patching the deployed Ingress
definitely would be a great addition to <https://github.com/davidmirror-ops/flyte-the-hard-way|Flyte the Hard Way> (currently uses only ACM/Route53)
|
HI <@U056YP7TQ5A>
1. Yes, it's expected. If you `describe` each one of the Ingress resources you'll see that, despite using the same host name, the controller will route traffic to the corresponding service depending on the path (multiplexor pattern). I don't think there's anything you need to do for the `webhook` service
2. In `values.yaml` under `commonAnnotations` you can add the required annotations for, in this case, Ingress resource:
```commonAnnotations:
ingress:
<http://cert-manager.io/cluster-issuer|cert-manager.io/cluster-issuer>: nameOfClusterIssuer```
...
Regarding the `tls` section, I'm looking at what can be done besides patching the deployed Ingress
definitely would be a great addition to <https://github.com/davidmirror-ops/flyte-the-hard-way|Flyte the Hard Way> (currently uses only ACM/Route53)
| <@U04H6UUE78B> Yes please! That will be helpful. There is a root commonAnnotations and there is an ingress.commonAnnotations. I have added the `<http://cert-manager.io/cluster-issuer|cert-manager.io/cluster-issuer>: nameOfClusterIssuer` in the latter and it did not work. That is why I am thinking it might need the `tls` section because it said so in the cert-manager docs. Should it be added in the root commonAnnotations as you have shown above?
|
<@U04H6UUE78B> Yes please! That will be helpful. There is a root commonAnnotations and there is an ingress.commonAnnotations. I have added the `<http://cert-manager.io/cluster-issuer|cert-manager.io/cluster-issuer>: nameOfClusterIssuer` in the latter and it did not work. That is why I am thinking it might need the `tls` section because it said so in the cert-manager docs. Should it be added in the root commonAnnotations as you have shown above?
| I don't think so. Whatever you put as annotation there will be merged with the base config as an annotation, not a new `spec` section
|
I don't think so. Whatever you put as annotation there will be merged with the base config as an annotation, not a new `spec` section
| Hi <@U04H6UUE78B> / <@U017K8AJBAN>, any ideas on how to inject the tls section?
|
Hi <@U04H6UUE78B> / <@U017K8AJBAN>, any ideas on how to inject the tls section?
| The cleanest way will be to extend the helm chart with support for specifying the tls block
<@U056YP7TQ5A> spent a bit more time reading through the thread. are you terminating SSL at the nginx ingress controller instead of ELB?
if terminating at ELB, you shouldn’t need cert-manager or the TLS block on the ingress object.
Also, for my own clarity, is the nginx ingress controller just for Flyte, or are there other applications served through the same controller?
|
The cleanest way will be to extend the helm chart with support for specifying the tls block
<@U056YP7TQ5A> spent a bit more time reading through the thread. are you terminating SSL at the nginx ingress controller instead of ELB?
if terminating at ELB, you shouldn’t need cert-manager or the TLS block on the ingress object.
Also, for my own clarity, is the nginx ingress controller just for Flyte, or are there other applications served through the same controller?
| We are terminating SSL at nginx ingress controller and the controller serves other applications as well.
|
We are terminating SSL at nginx ingress controller and the controller serves other applications as well.
| Ok then we should add a TLS block to the helm chart ingress template. Would you be open to creating a PR with the change? :)
|
Ok then we should add a TLS block to the helm chart ingress template. Would you be open to creating a PR with the change? :)
| Sure :slightly_smiling_face: I just did that on our side to get it working!
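For reference, roughly what we added on our side - the value names are ours, not necessarily what the upstream chart will end up using:
```# values.yaml
ingress:
  tls:
    - hosts:
        - orchestrator.playground.cloud.abc.com
      secretName: orchestrator-tls

# templates/ingress/grpc.yaml (and http.yaml), inside spec:
  {{- with .Values.ingress.tls }}
  tls:
    {{- toYaml . | nindent 4 }}
  {{- end }}```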
|
Hi everyone. I am trying to deploy Flyte in an Azure AKS cluster. Pods are up and running after running the helm command for single-cluster deployment. I am able to access the Flyte service without authentication through port-forward. As per the documentation, I am trying to test a workflow by cloning the flytesnacks repo. This requires the pyflyte CLI, but I am unable to find any link or reference for where to download the pyflyte binary from. Can somebody please suggest?
| Hi <@U056CHZ05A4>! `pyflyte` gets automatically installed when you install flytekit.
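So something like this is all you should need, either locally or in your image:
```pip install flytekit
pyflyte --help```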
|
Hi <@U056CHZ05A4>! `pyflyte` gets automatically installed when you install flytekit.
Hi <@U01J90KBSU9>, thanks. I am now trying to test the workflow in the Azure AKS cluster where Flyte is installed in the flyte namespace, following the single-cluster deployment mode. Could somebody please suggest if I am doing anything wrong here? I have brought up another pod with Python and flytekit installed in it in the default namespace.
Then I am running the below commands:
git clone <https://github.com/flyteorg/flytesnacks|https://github.com/flyteorg/flytesnacks>
cd flytesnacks/cookbook
pyflyte --config $HOME/.flyte/config.yaml run --remote core/flyte_basics/hello_world.py my_wf
Content of config.yaml is:-
admin:
# For GRPC endpoints you might want to use dns:///flyte.myexample.com
endpoint: dns:///flyte-flyte-binary-grpc.flyte.svc.cluater.local:8089
authType: Pkce
insecure: true
logger:
show-source: true
level: 0
It is erroring out with below error:
root@python-flytekit:~/flytesnacks/cookbook# pyflyte --config $HOME/.flyte/config.yaml run --remote core/flyte_basics/hello_world.py my_wf
Failed with Exception Code: SYSTEM:Unknown
RPC Failed, with Status: StatusCode.INTERNAL
details: failed to create a signed url. Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Debug string UNKNOWN:Error received from peer {grpc_message:"failed to create a signed url. Error: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors", grpc_status:13, created_time:"2023-05-19T17:32:09.146454034+00:00"}
Can somebody please suggest how to fix this?
|
Hi <@U01J90KBSU9>, thanks. I am now trying to test the workflow in the Azure AKS cluster where Flyte is installed in the flyte namespace, following the single-cluster deployment mode. Could somebody please suggest if I am doing anything wrong here? I have brought up another pod with Python and flytekit installed in it in the default namespace.
Then I am running the below commands:
git clone <https://github.com/flyteorg/flytesnacks|https://github.com/flyteorg/flytesnacks>
cd flytesnacks/cookbook
pyflyte --config $HOME/.flyte/config.yaml run --remote core/flyte_basics/hello_world.py my_wf
Content of config.yaml is:-
admin:
# For GRPC endpoints you might want to use dns:///flyte.myexample.com
endpoint: dns:///flyte-flyte-binary-grpc.flyte.svc.cluater.local:8089
authType: Pkce
insecure: true
logger:
show-source: true
level: 0
It is erroring out with below error:
root@python-flytekit:~/flytesnacks/cookbook# pyflyte --config $HOME/.flyte/config.yaml run --remote core/flyte_basics/hello_world.py my_wf
Failed with Exception Code: SYSTEM:Unknown
RPC Failed, with Status: StatusCode.INTERNAL
details: failed to create a signed url. Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Debug string UNKNOWN:Error received from peer {grpc_message:"failed to create a signed url. Error: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors", grpc_status:13, created_time:"2023-05-19T17:32:09.146454034+00:00"}
Can somebody please suggest how to fix this?
| I think you need to set `insecure` to false in your config.yaml file.
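i.e. something along these lines, assuming your gRPC endpoint is actually served behind TLS (endpoint shown here is just an example):
```admin:
  endpoint: dns:///flyte-flyte-binary-grpc.flyte.svc.cluster.local:8089
  insecure: false```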
|
I was able to install a helm release via terraform to deploy Flyte on our EKS cluster. I am seeing a couple of issues; if someone has seen these, please share the possible issue / fix -
1. I cannot see any pods running when I run `kubectl get pods -n <my-namespace>`
2. When I run `kubectl get svc -n <my-namespace>` I see three services instead of one flyte-binary service -
```NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
flyte-backend-flyte-binary-grpc ClusterIP 10.100.105.242 <none> 8089/TCP 33m
flyte-backend-flyte-binary-http ClusterIP 10.100.202.80 <none> 8088/TCP 33m
flyte-backend-flyte-binary-webhook ClusterIP 10.100.196.242 <none> 443/TCP 33m```
I am following <https://docs.flyte.org/en/latest/deployment/deployment/cloud_simple.html|this> simple cloud deployment. When I try to port-forward any of the three services, the kubectl command times out. `kubectl get events` does not show anything related to flyte.
| if you’re getting a timeout, i would try port forwarding from the node backing the service first.
i don’t think i’ve seen a timeout before
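one way to narrow it down is forwarding directly to the backing pod rather than the service, e.g. (placeholder names):
```kubectl -n <my-namespace> get pods -o wide
kubectl -n <my-namespace> port-forward pod/<flyte-binary-pod> 8088:8088```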
|
if you’re getting a timeout, i would try port forwarding from the node backing the service first.
i don’t think i’ve seen a timeout before
| <@U056YP7TQ5A> you mean no pods in your namespace? or is there any in a state different than `Running`? It's weird that it created the services and not the Pod.
Regarding #2: yes, recent chart versions create separate services; this was needed to meet the requirements of some ingress controllers that have specific annotations for each type of traffic (gRPC, http, etc.)
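For example, with the nginx controller the gRPC Ingress usually needs an annotation like this, while the http one doesn't:
```<http://nginx.ingress.kubernetes.io/backend-protocol|nginx.ingress.kubernetes.io/backend-protocol>: "GRPC"```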
|
<@U056YP7TQ5A> you mean no pods in your namespace? or is there any in a state different than `Running`? It's weird that it created the services and not the Pod.
Regarding #2: yes, recent chart versions create separate services; this was needed to meet the requirements of some ingress controllers that have specific annotations for each type of traffic (gRPC, http, etc.)
| To close this out: 1. was not an issue; there was unscheduled maintenance on the cluster, due to which pods were not allowed to run.
2. I have some additional questions regarding the split services, ingress and SSL. I'll post a new question for that. Thanks!
|