From my perspective, being able to update our helm values.yaml and pass values into the admin_config has been quite nice.
right
like the <https://github.com/flyteorg/flyte/blob/507a466077883bd2c34677c928ac277be7be9c28/cmd/single/start.go#L154|cfg.Admin> block here
<https://github.com/flyteorg/flyte/blob/507a466077883bd2c34677c928ac277be7be9c28/cmd/single/config.go#L23|https://github.com/flyteorg/flyte/blob/507a466077883bd2c34677c928ac277be7be9c28/cmd/single/config.go#L23>
yeah right?
that’s what i was thinking too. then some plumbing from helm values into the configmap?
yeah mike you want to do this? or jeev?
<@U036GA8565T> up to you
I'm happy to take a look at it. It gives me a good reason to better familiarize myself with your configmap.
awesome :slightly_smiling_face:
is flytestdlib/config just a light wrapper over pflag? (lazy web, I'm looking now :))
yeah
If I create an issue to track this, can one of you assign it to me?
yup thanks
:+1::skin-tone-3: <https://github.com/flyteorg/flyte/issues/3545> <@UNR3C6Y4T> ty
thank you
<@U017K8AJBAN> & <@UNR3C6Y4T> I (finally!!) got around to implementing this. <https://github.com/flyteorg/flyte/pull/3631> I'm not sure where is best to document this change; I'm happy to do so, but I'll need a pointer to where best to put those docs.
oh nice thanks. not to pile on, but would you be up for adding to the flyte-binary helm chart as well? i think it’s okay to keep as seed projects
<@U036GA8565T> looks like the PR is close and just needs some final touches :) wanna talk about the names and configmap structure here?
hey <@U017K8AJBAN> :wave::skin-tone-3: just looking over the changes; there are three outstanding:
• <https://github.com/flyteorg/flyte/pull/3631/files/d20b1d154656080bac9bad846fb4e7214b0fe616#diff-b99944585db983fbc60a5dfb9f598ca08dc213f6a59c739a3c4bcacdc8f41fbcR10|not bumping the chart number> - I’ve rolled this back
• <https://github.com/flyteorg/flyte/pull/3631/files/d20b1d154656080bac9bad846fb4e7214b0fe616#diff-e7e819bf9787aeb88715055a4f872f9ab0f11f48d8e9f68a70a1b09d1ac3512aR193|moving the flyte args up to 000-core.yaml> - done
• <https://github.com/flyteorg/flyte/pull/3631/files/d20b1d154656080bac9bad846fb4e7214b0fe616#diff-cbcadfa3abd9e4771deaab1b0f28d2c7f6a67f3ebd9b1b1ec6a0eebb38a718e4R122|what to call the helm chart key> - I’m happy to take direction on this.
<@U036GA8565T>: still seeing the version changes, but that probably needs `make helm`. as far as the helm chart key goes: do we even need it? or can we just expose directly under the `configuration` key? im not super opinionated on this. will defer to you and <@UNR3C6Y4T>. everything else lgtm! thanks for the valuable contribution!
<@U017K8AJBAN> good spot with the `make helm`; I forgot that. I’ve run it and the version changes have bumped. re: the configuration key, you’re suggesting:
• `configuration.admin`
• `configuration.propeller`
• `configuration.dataCatalog`
instead of `configuration.flyte-binary.[admin,propeller,dataCatalog]`. I’m fine with this personally; it’s sort of the same thing from my POV :slightly_smiling_face: so I’m happy to do whatever you and <@UNR3C6Y4T> suggest I do.
no strong opinions from me. i was suggesting that because all of configuration is technically already for `flyte-binary`
<@U017K8AJBAN> / <@UNR3C6Y4T> here’s a diff that demonstrates the suggestion above: <https://github.com/ossareh/flyte/pull/1/files> some things to consider:
1. `000-core.yaml` already has a top level `admin` entry
2. the args for `flyte-binary` are in a `Section` called `flyte`
   a. we either change `cmd/single/config.go` to use a different section name
   b. or we accept that 000-core.yaml cannot have a top level `admin` key and we put the flyte-binary args under `flyte:` in the ConfigMap.
If the diff above is acceptable, I’ll cherry-pick the commit onto the existing PR in the flyte repo - the PR above is just for demonstration purposes. Here’s another example: <https://github.com/ossareh/flyte/pull/2> This is based on <https://github.com/flyteorg/flyte/pull/3631#issuecomment-1539390784|this feedback> from Yee.
i like that last one. jeev if you don’t mind can we just merge that?
yea feel free!
OK; I'll migrate the commit over to the main PR branch and then ping y'all here when it's pushed. <@UNR3C6Y4T> I've pushed the update with the new values.yaml - as per above. LMK if you see anything odd.
Hello everyone, I followed the installation document from this link <https://docs.flyte.org/en/v1.0.0/deployment/aws/manual.html#prerequisites>. All the outputs were as per the documentation, but on the final step where I try to login, "endpoint/console", I get 503 Service Temporarily Unavailable. Any help on this? thanks in advance
Hi <@U04UG7X2Y69> Could you share the content of `config.yaml` ?
server .yaml? follow
thanks. I mean the output of `cat $HOME/.flyte/config.yaml`
which pod? admin? on admin pod i have only these files
nope, just on your system. The guide you followed asks you to update the `~/.flyte/config.yaml` before attempting to connect
I installed flyte by helm. There is no config.yaml. `helm repo add flyteorg <https://flyteorg.github.io/flyte>`
anyways, that guide is helpful to prepare the prerequisites, but it's a bit old. I'm working on documenting the EKS deployment process and will share it soon. Once your prereqs are in place, you could use the new charts, specifically the `flyte-binary` one as documented here: <https://docs.flyte.org/en/latest/deployment/deployment/cloud_simple.html>
thank you <@U04H6UUE78B>. Let me know when it's ready
Hi Folks, We need to be able to provide different images for different tasks in a workflow, so I am testing the Multiple Container Images in a Single Workflow feature. I am using the <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/whylogs_examples/index.html|Whylogs> example. It won't work as is in a remote cluster. So, I added the flytecookbook whylogs container image to each task to be able to run the workflow successfully in a remote cluster with `pyflyte run --remote whylogs_example wf`
```
@task(container_image="ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest")
def get_reference_data() -> pd.DataFrame: ...

@task(container_image="ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest")
def get_target_data() -> pd.DataFrame: ...

@task(container_image="ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest")
def create_profile_view(df: pd.DataFrame) -> DatasetProfileView: ...

@task(container_image="ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest")
def constraints_report(profile_view: DatasetProfileView) -> bool: ...
```
However, `get_reference_data` and `get_target_data` should not need whylogs. They just work with pandas and scikit-learn. We should be able to run those tasks with the `@task(container_image="ghcr.io/flyteorg/flytecookbook:core-latest")` image. I did try that, but it fails; the k8s logs say:
```
  File "/opt/venv/lib/python3.8/site-packages/flytekit/core/python_auto_container.py", line 279, in load_task
    task_module = importlib.import_module(name=task_module)  # type: ignore
  ...
  File "/root/whylogs_example.py", line 17, in <module>
    import whylogs as why
ModuleNotFoundError: No module named 'whylogs'
Traceback (most recent call last):
```
Every task container is trying to parse the entire whylogs_example.py file, and since `flytecookbook:core-latest` does not have whylogs it fails. What is the best design pattern or strategy to follow in such cases? How can I make it work remotely? I read <https://docs.flyte.org/projects/cookbook/en/latest/auto/core/containerization/multi_images.html|containerization/multi_images.html>; that example has two methods, `svm_trainer` and `svm_predictor`, but both end up using the same image. All the examples I see in <https://github.com/flyteorg/flytesnacks/tree/master/cookbook/case_studies/ml_training> also have only one custom docker file. Is there a production-grade example workflow with tasks taking different images which are significantly different from each other? Looking for a reference complex workflow that talks about these nuances of organizing the pieces together with a different custom image for each task.
<@U01J90KBSU9> can you take this one when you get a chance Cc <@U0265RTUJ5B> fyi <@U04V98CBRBJ> sorry i just read this. So the problem is you have a module-level import for WhyLogs. In Python you cannot simply load a module partially (at least I am not aware of a technique). One idea is to avoid having module level imports and hide them in the task function itself - example
```
@task
def foo():
    import my_module1
    my_module1.xyz()
    ...

@task
def foo2():
    import my_module2
    my_module2.abc()
    ...
```
This may be cumbersome and not the best - but this is the reason why there are so many platforms out there that in fact show this as the default way of working. This is lazy loading of the module
About to say the same! Let me know if you're still seeing issues, Anindya.
Also for different images we use the following convention
```
@task(container_image="{{.image.my_image1}}")
def foo(): ...

@task(container_image="{{.image.my_image2}}")
def foo2(): ...

@task
def foo3(): ...
```
Then to invoke
```
pyflyte run --image my_image1="image-uri" --image my_image2="image-uri"
```
Note that `foo3` will use the default base image. More docs on this <https://docs.flyte.org/projects/cookbook/en/latest/auto/core/containerization/multi_images.html#sphx-glr-auto-core-containerization-multi-images-py>
~Yes, I thought about it already. But, I think this particular example will still not work, because we cannot put all imports inside the method. The reason being these two methods which have typed inputs and outputs (which is an amazing feature of Flyte)~
```
def constraints_report(profile_view: DatasetProfileView) -> bool:
```
```
def create_profile_view(df: pd.DataFrame) -> DatasetProfileView:
```
~so, the imports need to be at top level.~ Scratch my comment above. I see what you are suggesting. Break the tasks into separate modules of py files and import them in the workflow. Let me try that.
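(For illustration, a minimal sketch of the multi-module split being suggested here - the file names `light_tasks.py` / `heavy_tasks.py` / `wf.py` are made up, and the images are the cookbook ones used above; as noted further down in the thread, registering multiple files needs `pyflyte register` rather than single-file `pyflyte run`.)
```
# --- light_tasks.py: only needs pandas/scikit-learn, so it can run on the core image ---
import pandas as pd
from flytekit import task
from sklearn.datasets import load_diabetes

@task(container_image="ghcr.io/flyteorg/flytecookbook:core-latest")
def get_target_data() -> pd.DataFrame:
    diabetes = load_diabetes()
    return pd.DataFrame(diabetes.data, columns=diabetes.feature_names)

# --- heavy_tasks.py: the only module that imports whylogs, so only its image needs it ---
import pandas as pd
from flytekit import task
from flytekitplugins.whylogs.schema import WhylogsDatasetProfileTransformer  # registers the whylogs type, as in the cookbook example
from whylogs.core import DatasetProfileView

@task(container_image="ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest")
def create_profile_view(df: pd.DataFrame) -> DatasetProfileView:
    import whylogs as why
    return why.log(df).view()

# --- wf.py: imports both task modules; every dependency is needed locally at registration
# --- time, but each task pod only has to import the module its own task lives in
from flytekit import workflow
from heavy_tasks import create_profile_view
from light_tasks import get_target_data
from whylogs.core import DatasetProfileView

@workflow
def wf() -> DatasetProfileView:
    return create_profile_view(df=get_target_data())
```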
yup aah nm, at "launch time" you will need all dependencies, but at runtime you won't. but this is a good thing to capture - cc <@U01J90KBSU9> how about adding to the docs an example that uses different modules?
Yeah, <@U01J90KBSU9> could you please try that on your end and see if that works? I am happy to work with you. It would be good to have a working example for everyone. What is the difference between "launch time" and "runtime"? Also, if I break into multiple task .py files, when the workflow is run on remote will it be able to pack all related submodule .py files along with the workflow_example.py and ship it to the remote cluster? Or how can Flyte understand when executing each task that it does not need the other task files, so that whylogs is not needed in all cases?
Sure thing. I'll create a docs issue and work on it.
> What is the difference between "launch time" and "runtime" ?
I think what Ketan's referring to is, "launch time" meaning when you are serializing and registering your workflows. "runtime" meaning when the workflow is running. Anindya, can you register your workflow with a default image (the whylogs one) by sending an image argument to `pyflyte run` (`pyflyte run --image <your-whylogs-image>`) and use the `core` image for the tasks that aren't dependent on whylogs?
Yes, I already tried on the whylogs_example.py but it would not work in the remote as we discussed. I ran this:
```
@task(container_image="ghcr.io/flyteorg/flytecookbook:core-latest")
def get_reference_data() -> pd.DataFrame: ...

@task(container_image="ghcr.io/flyteorg/flytecookbook:core-latest")
def get_target_data() -> pd.DataFrame: ...

@task(container_image="ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest")
def create_profile_view(df: pd.DataFrame) -> DatasetProfileView: ...

@task(container_image="ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest")
def constraints_report(profile_view: DatasetProfileView) -> bool: ...
```
Which should be the same as what you mentioned above. Is my understanding correct?
I don't think it should be. Can you give my suggestion a try?
Okay. Let me try that
<@U04V98CBRBJ> currently `pyflyte run` does not work with more than 1 file. this is in progress (coming soon). But, you can use the traditional route of `pyflyte register` OR `pyflyte package` >> `flytectl register`. this can use fast registration, which does not need image builds
Actual workflow code with only flytecookbook:core-latest overridden <@U01J90KBSU9> You mean like this `pyflyte run --image ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest --remote whylogs_example wf` ? I changed the code as in the above snippet to override with flytecookbook:core-latest where needed.
Is `whylogs_example` a file or a directory? If a file, please make it `whylogs_example.py`
That was my typo in slack. Please see image
What's your flytekit version? The command works for me.
It shows 1.4.2. Is that okay ?
Yes. 1.4.2 is the latest. Can you reinitialize your virtual env and install the requirements again?
Is your whylogs_example a directory or file ?
Oh it's a file. With and without .py are working for me.
Is the workflow passing successfully? And did you add the flytecookbook core container image in the code? Can you pls share the file?
I haven't modified the file at all. Let me try that too. Did you intentionally remove the task decorators for `create_profile_view` and `constraints_report`? You shouldn't see that error if you add the decorators. Also, it won't work if they aren't tasks because promises are sent to the tasks.
Thanks. I fixed the task decorator. That was the cause of the not implemented error :pray: Please run `pyflyte run --image ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest --remote whylogs_example.py wf` Can you please check if your code matches this code:
```
"""
whylogs Example
---------------

This examples shows users how to profile pandas DataFrames with whylogs,
pass them within tasks and also use our renderers to create a SummaryDriftReport
and a ConstraintsReport with failed and passed constraints.
"""
# %%
# First, let's make all the necessary imports for our example to run properly
import os

import flytekit
import numpy as np
import pandas as pd
import whylogs as why
from flytekit import conditional, task, workflow
from flytekitplugins.whylogs.renderer import WhylogsConstraintsRenderer, WhylogsSummaryDriftRenderer
from flytekitplugins.whylogs.schema import WhylogsDatasetProfileTransformer
from sklearn.datasets import load_diabetes
from whylogs.core import DatasetProfileView
from whylogs.core.constraints import ConstraintsBuilder
from whylogs.core.constraints.factories import (
    greater_than_number,
    mean_between_range,
    null_percentage_below_number,
    smaller_than_number,
)

# %%
# Next thing is defining a task to read our reference dataset.
# For this, we will take scikit-learn's entire example Diabetes dataset
@task(container_image="ghcr.io/flyteorg/flytecookbook:core-latest")
def get_reference_data() -> pd.DataFrame:
    diabetes = load_diabetes()
    df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
    df["target"] = pd.DataFrame(diabetes.target)
    return df

# %%
# To some extent, we wanted to show kinds of drift in our example,
# so in order to reproduce some of what real-life data behaves
# we will take an arbitrary subset of the reference dataset
@task(container_image="ghcr.io/flyteorg/flytecookbook:core-latest")
def get_target_data() -> pd.DataFrame:
    diabetes = load_diabetes()
    df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
    df["target"] = pd.DataFrame(diabetes.target)
    return df.mask(df["age"] < 0.0).dropna(axis=0)

# %%
# Now we will define a task that can take in any pandas DataFrame
# and return a ``DatasetProfileView``, which is our data profile.
# With it, users can either visualize and check overall statistics
# or even run a constraint suite on top of it.
@task
def create_profile_view(df: pd.DataFrame) -> DatasetProfileView:
    result = why.log(df)
    return result.view()

# %%
# And we will also define a constraints report task
# that will run some checks in our existing profile.
@task
def constraints_report(profile_view: DatasetProfileView) -> bool:
    builder = ConstraintsBuilder(dataset_profile_view=profile_view)
    builder.add_constraint(greater_than_number(column_name="age", number=45.0))
    builder.add_constraint(smaller_than_number(column_name="bp", number=20.0))
    builder.add_constraint(mean_between_range(column_name="s3", lower=-1.5, upper=1.5))
    builder.add_constraint(null_percentage_below_number(column_name="sex", number=0.0))
    constraints = builder.build()
    renderer = WhylogsConstraintsRenderer()
    flytekit.Deck("constraints", renderer.to_html(constraints=constraints))
    return constraints.validate()

# %%
# This is a representation of a prediction task. Since we are looking
# to take some of the complexity away from our demonstrations,
# our model prediction here will be represented by generating a bunch of
# random numbers with numpy. This task will take place if we pass our
# constraints suite.
@task(container_image="ghcr.io/flyteorg/flytecookbook:core-latest")
def make_predictions(input_data: pd.DataFrame, output_path: str) -> str:
    input_data["predictions"] = np.random.random(size=len(input_data))
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    input_data.to_csv(os.path.join(output_path, "predictions.csv"))
    return f"wrote predictions successfully to {output_path}"

# %%
# Lastly, if the constraint checks fail, we will create a FlyteDeck
# with the Summary Drift Report, which can provide further intuition into
# whether there was a data drift to the failed constraint checks.
@task
def summary_drift_report(new_data: pd.DataFrame, reference_data: pd.DataFrame) -> str:
    renderer = WhylogsSummaryDriftRenderer()
    report = renderer.to_html(target_data=new_data, reference_data=reference_data)
    flytekit.Deck("summary drift", report)
    return f"reported summary drift for target dataset with n={len(new_data)}"

# %%
# Finally, we can then create a Flyte workflow that will
# chain together our example data pipeline
@workflow
def wf() -> str:
    # 1. Read data
    target_df = get_target_data()

    # 2. Profile data and validate it
    profile_view = create_profile_view(df=target_df)
    validated = constraints_report(profile_view=profile_view)

    # 3. Conditional actions if data is valid or not
    return (
        conditional("stop_if_fails")
        .if_(validated.is_false())
        .then(
            summary_drift_report(
                new_data=target_df,
                reference_data=get_reference_data(),
            )
        )
        .else_()
        .then(make_predictions(input_data=target_df, output_path="./data"))
    )

# %%
if __name__ == "__main__":
    wf()
```
Sure, I will. Are you still seeing any errors?
The Flyte Workflow is stuck at the first get_target_data step for over 13 mins in my Flyte console. And I forgot to bookmark the flyte K8s dashboard console, so I'm searching for the k8s dashboard link to see the pod status.
Please check. I see the same error with this command. So it only works if the libraries are imported within a task.
Yes, it should error out as we discussed.
So if you don't use a whylogs type at the task boundary, it should work because then you can import whylogs within the relevant tasks. If not, I'm not sure this is possible.
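(As an illustration of that, a minimal sketch that folds the profiling and the constraint check into one whylogs-image task, so only a `bool` crosses the task boundary and whylogs is imported lazily inside the task; the task names here are made up, and the calls mirror the cookbook example above.)
```
import pandas as pd
from flytekit import task, workflow
from sklearn.datasets import load_diabetes

@task(container_image="ghcr.io/flyteorg/flytecookbook:core-latest")
def get_target_data() -> pd.DataFrame:
    diabetes = load_diabetes()
    return pd.DataFrame(diabetes.data, columns=diabetes.feature_names)

@task(container_image="ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest")
def profile_and_validate(df: pd.DataFrame) -> bool:
    # whylogs only appears inside this task, so no whylogs type is exchanged between tasks
    import whylogs as why
    from whylogs.core.constraints import ConstraintsBuilder
    from whylogs.core.constraints.factories import greater_than_number

    view = why.log(df).view()
    builder = ConstraintsBuilder(dataset_profile_view=view)
    builder.add_constraint(greater_than_number(column_name="age", number=45.0))
    return builder.build().validate()

@workflow
def wf() -> bool:
    return profile_and_validate(df=get_target_data())
```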
Yeah. Would you be creating the docs issue you mentioned before and helping to create a working example? It would be beneficial for all.
Of course! I already created an issue: <https://github.com/flyteorg/flyte/issues/3496>
<@U04V98CBRBJ> is this the getting started? can you tell me `pip show flytekit`? <@U04V98CBRBJ> thank you for pointing this out - there is indeed a problem in 1.2.11 of flytekit (the docker container was not published). we are publishing it right now - the github workflow silently failed :disappointed: cc <@U0265RTUJ5B>
Hi <@UNZB4NW3S> It is so great to hear from you.
```
(flyteorg) asaha@asaha-mbp182 flyteorg % pip show flytekit
Name: flytekit
Version: 1.2.11
```
Ok. I can downgrade.
<@U04V98CBRBJ>, we just published that image. Can you retry? (No need to downgrade) Also, any reason why you're running that specific version? Curious to know why you're not running any of the 1.3.x or 1.4.x releases.
Yes, I had that question as well. I have a python 3.10 conda env. When I install flytekit it by default installs 1.2.11, but I checked in pypi it is already 1.4.x. Not sure why it auto downloaded a lower version. This is the reason
```
Requirement already satisfied: wheel<1.0.0,>=0.30.0 in /opt/anaconda3/envs/flyteorg/lib/python3.10/site-packages (from flytekit==1.4.2) (0.38.4)
ERROR: Could not find a version that satisfies the requirement grpcio<2.0,>=1.50.0 (from flytekit) (from versions: 1.41.0rc2, 1.41.0, 1.41.1, 1.42.0rc1, 1.42.0, 1.43.0rc1, 1.43.0, 1.44.0rc1, 1.44.0rc2, 1.44.0, 1.45.0rc1, 1.45.0, 1.46.0rc1, 1.46.0rc2, 1.46.0, 1.46.1, 1.46.3, 1.46.5, 1.47.0rc1, 1.47.0, 1.47.2, 1.48.0rc1, 1.48.0, 1.48.1, 1.48.2, 1.49.0rc1, 1.49.0rc3, 1.49.0, 1.49.1, 1.50.0rc1)
ERROR: No matching distribution found for grpcio<2.0,>=1.50.0
(flyteorg) asaha@asaha-mbp182 flyteorg % pip freeze | grep grpcio
grpcio==1.49.0
grpcio-status==1.48.2
(flyteorg) asaha@asaha-mbp182 flyteorg %
```
are you using conda install or pip?
:white_check_mark: Fixed it. Overriding the index-url. Lyft artifactory is points to earlier grpcio ```pip install flytekit scikit-learn --index-url <https://pypi.org/simple> ``` ```(flyteorg) asaha@asaha-mbp182 flyteorg % (flyteorg) asaha@asaha-mbp182 flyteorg % pip freeze | grep flytekit flytekit==1.4.2 flytekitplugins-whylogs==1.4.2 (flyteorg) asaha@asaha-mbp182 flyteorg %``` Installed `pip install flytekitplugins.whylogs` Jumped to try out <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/whylogs_examples/index.html> K8s logs shows ```{"asctime": "2023-03-18 00:29:38,136", "name": "flytekit", "levelname": "WARNING", "message": "FlyteSchema is deprecated, use Structured Dataset instead."} Matplotlib created a temporary config/cache directory at /tmp/matplotlib-masd6q2b because the default path (/home/flytekit/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing. tar: Removing leading `/' from member names {"asctime": "2023-03-18 00:29:45,494", "name": "flytekit", "levelname": "WARNING", "message": "FlyteSchema is deprecated, use Structured Dataset instead."} Traceback (most recent call last): File "/usr/local/bin/pyflyte-execute", line 8, in &lt;module&gt; sys.exit(execute_task_cmd()) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py", line 476, in execute_task_cmd _execute_task( File "/usr/local/lib/python3.10/site-packages/flytekit/exceptions/scopes.py", line 160, in system_entry_point return wrapped(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py", line 348, in _execute_task _task_def = resolver_obj.load_task(loader_args=resolver_args) File "/usr/local/lib/python3.10/site-packages/flytekit/core/python_auto_container.py", line 279, in load_task task_module = importlib.import_module(name=task_module) # type: ignore File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 1050, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 1027, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 1006, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 688, in _load_unlocked File "&lt;frozen importlib._bootstrap_external&gt;", line 883, in exec_module File "&lt;frozen importlib._bootstrap&gt;", line 241, in _call_with_frames_removed File "/root/whylogs_example.py", line 17, in &lt;module&gt; import whylogs as why ModuleNotFoundError: No module named 'whylogs' Traceback (most recent call last): File "/usr/local/bin/pyflyte-fast-execute", line 8, in &lt;module&gt; sys.exit(fast_execute_task_cmd()) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 
1055, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py", line 513, in fast_execute_task_cmd subprocess.run(cmd, check=True) File "/usr/local/lib/python3.10/subprocess.py", line 526, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['pyflyte-execute', '--inputs', '<s3://my-s3-bucket/metadata/propeller/flytesnacks-development-f40b03d9ec1d846548ea/n0/data/inputs.pb>', '--output-prefix', '<s3://my-s3-bucket/metadata/propeller/flytesnacks-development-f40b03d9ec1d846548ea/n0/data/0>', '--raw-output-data-prefix', '<s3://my-s3-bucket/data/vh/f40b03d9ec1d846548ea-n0-0>', '--checkpoint-path', '<s3://my-s3-bucket/data/vh/f40b03d9ec1d846548ea-n0-0/_flytecheckpoints>', '--prev-checkpoint', '""', '--dynamic-addl-distro', '<s3://my-s3-bucket/flytesnacks/development/2TMTB4SERDGT2U2IPM7HIKMSBU======/scriptmode.tar.gz>', '--dynamic-dest-dir', '/root', '--resolver', 'flytekit.core.python_auto_container.default_task_resolver', '--', 'task-module', 'whylogs_example', 'task-name', 'get_target_data']' returned non-zero exit status 1.``` Clearly I need to go slow. I believe minio is emulating the s3 path. I guess I need to provide a container_image with whylogs for the whylogs task. Will continue exploring. Is there a way I can find out all the pre-built docker containers flyte has ? I can reuse if there is already one for whylogs. NVM. I found them here <https://github.com/flyteorg/flytesnacks/pkgs/container/flytecookbook/versions?filters%5Bversion_type%5D=tagged>
:white_check_mark: Fixed it. Overriding the index-url. Lyft artifactory is points to earlier grpcio ```pip install flytekit scikit-learn --index-url <https://pypi.org/simple> ``` ```(flyteorg) asaha@asaha-mbp182 flyteorg % (flyteorg) asaha@asaha-mbp182 flyteorg % pip freeze | grep flytekit flytekit==1.4.2 flytekitplugins-whylogs==1.4.2 (flyteorg) asaha@asaha-mbp182 flyteorg %``` Installed `pip install flytekitplugins.whylogs` Jumped to try out <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/whylogs_examples/index.html> K8s logs shows ```{"asctime": "2023-03-18 00:29:38,136", "name": "flytekit", "levelname": "WARNING", "message": "FlyteSchema is deprecated, use Structured Dataset instead."} Matplotlib created a temporary config/cache directory at /tmp/matplotlib-masd6q2b because the default path (/home/flytekit/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing. tar: Removing leading `/' from member names {"asctime": "2023-03-18 00:29:45,494", "name": "flytekit", "levelname": "WARNING", "message": "FlyteSchema is deprecated, use Structured Dataset instead."} Traceback (most recent call last): File "/usr/local/bin/pyflyte-execute", line 8, in &lt;module&gt; sys.exit(execute_task_cmd()) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py", line 476, in execute_task_cmd _execute_task( File "/usr/local/lib/python3.10/site-packages/flytekit/exceptions/scopes.py", line 160, in system_entry_point return wrapped(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py", line 348, in _execute_task _task_def = resolver_obj.load_task(loader_args=resolver_args) File "/usr/local/lib/python3.10/site-packages/flytekit/core/python_auto_container.py", line 279, in load_task task_module = importlib.import_module(name=task_module) # type: ignore File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 1050, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 1027, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 1006, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 688, in _load_unlocked File "&lt;frozen importlib._bootstrap_external&gt;", line 883, in exec_module File "&lt;frozen importlib._bootstrap&gt;", line 241, in _call_with_frames_removed File "/root/whylogs_example.py", line 17, in &lt;module&gt; import whylogs as why ModuleNotFoundError: No module named 'whylogs' Traceback (most recent call last): File "/usr/local/bin/pyflyte-fast-execute", line 8, in &lt;module&gt; sys.exit(fast_execute_task_cmd()) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 
1055, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/flytekit/bin/entrypoint.py", line 513, in fast_execute_task_cmd subprocess.run(cmd, check=True) File "/usr/local/lib/python3.10/subprocess.py", line 526, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['pyflyte-execute', '--inputs', '<s3://my-s3-bucket/metadata/propeller/flytesnacks-development-f40b03d9ec1d846548ea/n0/data/inputs.pb>', '--output-prefix', '<s3://my-s3-bucket/metadata/propeller/flytesnacks-development-f40b03d9ec1d846548ea/n0/data/0>', '--raw-output-data-prefix', '<s3://my-s3-bucket/data/vh/f40b03d9ec1d846548ea-n0-0>', '--checkpoint-path', '<s3://my-s3-bucket/data/vh/f40b03d9ec1d846548ea-n0-0/_flytecheckpoints>', '--prev-checkpoint', '""', '--dynamic-addl-distro', '<s3://my-s3-bucket/flytesnacks/development/2TMTB4SERDGT2U2IPM7HIKMSBU======/scriptmode.tar.gz>', '--dynamic-dest-dir', '/root', '--resolver', 'flytekit.core.python_auto_container.default_task_resolver', '--', 'task-module', 'whylogs_example', 'task-name', 'get_target_data']' returned non-zero exit status 1.``` Clearly I need to go slow. I believe minio is emulating the s3 path. I guess I need to provide a container_image with whylogs for the whylogs task. Will continue exploring. Is there a way I can find out all the pre-built docker containers flyte has ? I can reuse if there is already one for whylogs. NVM. I found them here <https://github.com/flyteorg/flytesnacks/pkgs/container/flytecookbook/versions?filters%5Bversion_type%5D=tagged>
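Regarding the `ModuleNotFoundError: No module named 'whylogs'`: the usual fix is exactly what you guessed, point the task at an image that already has whylogs installed via `container_image`. A minimal sketch (the image reference below is hypothetical; any of the tagged flytecookbook images that bundle whylogs would do):
```python
import pandas as pd
import whylogs as why
from flytekit import task

# Hypothetical image reference; substitute any image that has flytekit + whylogs installed,
# e.g. one of the tagged flytecookbook images linked above.
WHYLOGS_IMAGE = "ghcr.io/flyteorg/flytecookbook:whylogs_examples-latest"

@task(container_image=WHYLOGS_IMAGE)
def profile_data(df: pd.DataFrame) -> int:
    # Build a whylogs profile of the dataframe and return the number of profiled columns.
    profile_view = why.log(df).view()
    return len(profile_view.get_columns())
```
Tasks that don't set `container_image` keep using the default image passed at registration, so only the whylogs tasks need the heavier image.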
You are going fast. This is what I was saying, cc <@U01J90KBSU9> / <@U0265RTUJ5B> / <@U01DYLVUNJE> (have a default container image per plugin that needs the library).
Could you elaborate on what you mean by default container image, <@UNZB4NW3S>? Do you mean we need to modify our integration examples to have a container image per task or should we somehow enforce the image when registering the workflow?
I will create an issue about this today. This is a feature.
hi guys, i’m having issues w flyte sandbox deployment. i’ve tried deploying the demo sandbox first on my mac, and when that failed, on a dev ec2 instance. both are exhibiting the same issues: on the flyte console page, i no longer get the standard page with the flytesnacks`development`, `staging`, `production` options (see screenshots). on clicking `login`, i get `Not Found`. i’ve been running a bunch of POCs over the past few weeks on a local cluster, and this hasn’t been an issue. tail’ed logs in the parent container ```E0317 05:57:50.613939 59 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition E0317 05:57:50.614012 59 nestedpendingoperations.go:335] Operation for "{volumeName:<http://kubernetes.io/configmap/a03a0b9b-1dfe-48da-9e41-e34652633335-config-volume|kubernetes.io/configmap/a03a0b9b-1dfe-48da-9e41-e34652633335-config-volume> podName:a03a0b9b-1dfe-48da-9e41-e34652633335 nodeName:}" failed. No retries permitted until 2023-03-17 05:57:51.113991707 +0000 UTC m=+33.777193887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "<http://kubernetes.io/configmap/a03a0b9b-1dfe-48da-9e41-e34652633335-config-volume|kubernetes.io/configmap/a03a0b9b-1dfe-48da-9e41-e34652633335-config-volume>") pod "coredns-b96499967-n7rw8" (UID: "a03a0b9b-1dfe-48da-9e41-e34652633335") : failed to sync configmap cache: timed out waiting for the condition I0317 05:57:50.666398 59 request.go:601] Waited for 1.184759686s due to client-side throttling, not priority and fairness, request: PATCH:<https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/local-path-provisioner-7b7dc8d6f5-4jzfv/status> I0317 05:57:50.969983 59 node_lifecycle_controller.go:1192] Controller detected that some Nodes are Ready. Exiting master disruption mode. 
I0317 05:57:52.541998 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-certs\" (UniqueName: \"<http://kubernetes.io/secret/c20e7c75-ff30-48c7-9dc8-a778c7adb611-kubernetes-dashboard-certs\|kubernetes.io/secret/c20e7c75-ff30-48c7-9dc8-a778c7adb611-kubernetes-dashboard-certs\>") pod \"flyte-sandbox-kubernetes-dashboard-6757db879c-cgj7d\" (UID: \"c20e7c75-ff30-48c7-9dc8-a778c7adb611\") " pod="flyte/flyte-sandbox-kubernetes-dashboard-6757db879c-cgj7d" I0317 05:57:52.542062 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flyte-sandbox-minio-storage\" (UniqueName: \"<http://kubernetes.io/host-path/e5685a57-1387-4e7e-b8fc-4a0aa4249465-flyte-sandbox-minio-storage\|kubernetes.io/host-path/e5685a57-1387-4e7e-b8fc-4a0aa4249465-flyte-sandbox-minio-storage\>") pod \"flyte-sandbox-minio-645c8ddf7c-zzsnj\" (UID: \"e5685a57-1387-4e7e-b8fc-4a0aa4249465\") " pod="flyte/flyte-sandbox-minio-645c8ddf7c-zzsnj" I0317 05:57:52.542147 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fjw5\" (UniqueName: \"<http://kubernetes.io/projected/72166ae0-0f86-4e1b-8cae-ad1079386c98-kube-api-access-9fjw5\|kubernetes.io/projected/72166ae0-0f86-4e1b-8cae-ad1079386c98-kube-api-access-9fjw5\>") pod \"metrics-server-668d979685-9dvnv\" (UID: \"72166ae0-0f86-4e1b-8cae-ad1079386c98\") " pod="kube-system/metrics-server-668d979685-9dvnv" I0317 05:57:52.542400 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"<http://kubernetes.io/empty-dir/c20e7c75-ff30-48c7-9dc8-a778c7adb611-tmp-volume\|kubernetes.io/empty-dir/c20e7c75-ff30-48c7-9dc8-a778c7adb611-tmp-volume\>") pod \"flyte-sandbox-kubernetes-dashboard-6757db879c-cgj7d\" (UID: \"c20e7c75-ff30-48c7-9dc8-a778c7adb611\") " pod="flyte/flyte-sandbox-kubernetes-dashboard-6757db879c-cgj7d" I0317 05:57:52.542462 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xlg\" (UniqueName: \"<http://kubernetes.io/projected/e5685a57-1387-4e7e-b8fc-4a0aa4249465-kube-api-access-n9xlg\|kubernetes.io/projected/e5685a57-1387-4e7e-b8fc-4a0aa4249465-kube-api-access-n9xlg\>") pod \"flyte-sandbox-minio-645c8ddf7c-zzsnj\" (UID: \"e5685a57-1387-4e7e-b8fc-4a0aa4249465\") " pod="flyte/flyte-sandbox-minio-645c8ddf7c-zzsnj" I0317 05:57:52.542596 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n22j8\" (UniqueName: \"<http://kubernetes.io/projected/c20e7c75-ff30-48c7-9dc8-a778c7adb611-kube-api-access-n22j8\|kubernetes.io/projected/c20e7c75-ff30-48c7-9dc8-a778c7adb611-kube-api-access-n22j8\>") pod \"flyte-sandbox-kubernetes-dashboard-6757db879c-cgj7d\" (UID: \"c20e7c75-ff30-48c7-9dc8-a778c7adb611\") " pod="flyte/flyte-sandbox-kubernetes-dashboard-6757db879c-cgj7d" I0317 05:57:52.542704 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"<http://kubernetes.io/empty-dir/72166ae0-0f86-4e1b-8cae-ad1079386c98-tmp-dir\|kubernetes.io/empty-dir/72166ae0-0f86-4e1b-8cae-ad1079386c98-tmp-dir\>") pod \"metrics-server-668d979685-9dvnv\" (UID: \"72166ae0-0f86-4e1b-8cae-ad1079386c98\") " pod="kube-system/metrics-server-668d979685-9dvnv" E0317 05:57:53.170755 59 event_broadcaster.go:253] Server rejected event '&amp;v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"flyte-sandbox-postgresql-0.174d1f5176bad15e", GenerateName:"", Namespace:"flyte", SelfLink:"", UID:"f77f1e93-6f61-4ae2-a3df-c5c8407fbc73", ResourceVersion:"564", Generation:0, CreationTimestamp:time.Date(2023, time.March, 17, 5, 57, 50, 0, time.Local), DeletionTimestamp:&lt;nil&gt;, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"k3s", Operation:"Update", APIVersion:"<http://events.k8s.io/v1|events.k8s.io/v1>", Time:time.Date(2023, time.March, 17, 5, 57, 50, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc010b6e0c0), Subresource:""}}}, EventTime:time.Date(2023, time.March, 17, 5, 57, 50, 823827754, time.Local), Series:(*v1.EventSeries)(0xc006365b20), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-63438637467c", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"flyte", Name:"flyte-sandbox-postgresql-0", UID:"73ecc8eb-18ef-4ee3-985a-86f935dbe63a", APIVersion:"v1", ResourceVersion:"531", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedLastTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedCount:0}': 'Event "flyte-sandbox-postgresql-0.174d1f5176bad15e" is invalid: [series.count: Invalid value: "": should be at least 2, eventTime: Invalid value: 2023-03-17 05:57:50.823827 +0000 UTC: field is immutable]' (will not retry!) 
E0317 05:58:06.079882 59 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: <http://metrics.k8s.io/v1beta1|metrics.k8s.io/v1beta1>: the server is currently unable to handle the request W0317 05:58:06.527593 59 garbagecollector.go:747] failed to discover some groups: map[<http://metrics.k8s.io/v1beta1:the|metrics.k8s.io/v1beta1:the> server is currently unable to handle the request] I0317 05:58:14.187196 59 topology_manager.go:200] "Topology Admit Handler" I0317 05:58:14.381395 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flyte-sandbox-db-storage\" (UniqueName: \"<http://kubernetes.io/host-path/73ecc8eb-18ef-4ee3-985a-86f935dbe63a-flyte-sandbox-db-storage\|kubernetes.io/host-path/73ecc8eb-18ef-4ee3-985a-86f935dbe63a-flyte-sandbox-db-storage\>") pod \"flyte-sandbox-postgresql-0\" (UID: \"73ecc8eb-18ef-4ee3-985a-86f935dbe63a\") " pod="flyte/flyte-sandbox-postgresql-0" I0317 05:58:14.381427 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rghms\" (UniqueName: \"<http://kubernetes.io/projected/73ecc8eb-18ef-4ee3-985a-86f935dbe63a-kube-api-access-rghms\|kubernetes.io/projected/73ecc8eb-18ef-4ee3-985a-86f935dbe63a-kube-api-access-rghms\>") pod \"flyte-sandbox-postgresql-0\" (UID: \"73ecc8eb-18ef-4ee3-985a-86f935dbe63a\") " pod="flyte/flyte-sandbox-postgresql-0" I0317 05:58:29.845273 59 scope.go:110] "RemoveContainer" containerID="7faac34fa644246cdbba57411191fae1e37b9365a6212e65ec49e59ff6d4a36d" I0317 05:58:34.334500 59 event.go:294] "Event occurred" object="flyte-sandbox-webhook" fieldPath="" kind="MutatingWebhookConfiguration" apiVersion="<http://admissionregistration.k8s.io/v1|admissionregistration.k8s.io/v1>" type="Warning" reason="OwnerRefInvalidNamespace" message="ownerRef [apps/v1/ReplicaSet, namespace: , name: flyte-sandbox-75c5d88454, uid: 185cb389-333f-4252-a82d-2a0300ee0c6a] does not exist in namespace \"\"" I0317 05:58:36.093738 59 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for <http://flyteworkflows.flyte.lyft.com|flyteworkflows.flyte.lyft.com> I0317 05:58:36.093793 59 shared_informer.go:255] Waiting for caches to sync for resource quota I0317 05:58:36.194168 59 shared_informer.go:262] Caches are synced for resource quota I0317 05:58:36.547321 59 shared_informer.go:255] Waiting for caches to sync for garbage collector I0317 05:58:36.547369 59 shared_informer.go:262] Caches are synced for garbage collector I0317 05:58:41.788357 59 topology_manager.go:200] "Topology Admit Handler" I0317 05:58:41.949892 59 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mddt\" (UniqueName: \"<http://kubernetes.io/projected/b8c597dc-2a26-427e-954c-59b097e2e433-kube-api-access-4mddt\|kubernetes.io/projected/b8c597dc-2a26-427e-954c-59b097e2e433-kube-api-access-4mddt\>") pod \"py39-cacher\" (UID: \"b8c597dc-2a26-427e-954c-59b097e2e433\") " pod="default/py39-cacher" I0317 05:59:18.165852 59 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mddt\" (UniqueName: \"<http://kubernetes.io/projected/b8c597dc-2a26-427e-954c-59b097e2e433-kube-api-access-4mddt\|kubernetes.io/projected/b8c597dc-2a26-427e-954c-59b097e2e433-kube-api-access-4mddt\>") pod \"b8c597dc-2a26-427e-954c-59b097e2e433\" (UID: \"b8c597dc-2a26-427e-954c-59b097e2e433\") " I0317 05:59:18.166966 59 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume 
"<http://kubernetes.io/projected/b8c597dc-2a26-427e-954c-59b097e2e433-kube-api-access-4mddt|kubernetes.io/projected/b8c597dc-2a26-427e-954c-59b097e2e433-kube-api-access-4mddt>" (OuterVolumeSpecName: "kube-api-access-4mddt") pod "b8c597dc-2a26-427e-954c-59b097e2e433" (UID: "b8c597dc-2a26-427e-954c-59b097e2e433"). InnerVolumeSpecName "kube-api-access-4mddt". PluginName "<http://kubernetes.io/projected|kubernetes.io/projected>", VolumeGidValue "" I0317 05:59:18.266413 59 reconciler.go:384] "Volume detached for volume \"kube-api-access-4mddt\" (UniqueName: \"<http://kubernetes.io/projected/b8c597dc-2a26-427e-954c-59b097e2e433-kube-api-access-4mddt\|kubernetes.io/projected/b8c597dc-2a26-427e-954c-59b097e2e433-kube-api-access-4mddt\>") on node \"63438637467c\" DevicePath \"\"" I0317 05:59:18.940441 59 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4b4ce53ce5be1b58135823754b6e3b93afaae44b8b683c7918ba27aae05d3a48" W0317 06:02:38.683970 59 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"``` flytectl version: ```(flyte) ubuntu@ip-172-31-92-199:~$ flytectl version { "App": "flytectl", "Build": "29da288", "Version": "0.6.34", "BuildTime": "2023-03-17 09:17:23.237655216 +0000 UTC m=+0.024939893" }{ "App": "controlPlane", "Build": "unknown", "Version": "unknown", "BuildTime": "2023-03-17 05:58:29.976870718 +0000 UTC m=+0.033316151"``` sandbox cluster version `1.4.1` replication: i had a fresh install of flytectl on a new server, and ran `flytectl demo start`
same or similar issue- i have the same screen appearing. I did a fresh install too. <https://flyte-org.slack.com/archives/CP2HDHKE1/p1679045708684189?thread_ts=1678991494.797069&amp;cid=CP2HDHKE1>
I think this is an issue with flyteconsole `&gt;=1.4.8`. I'm seeing the same issue after upgrading our staging cluster to the Flyte release `1.4.1`. Downgrading flyteconsole to `1.4.7` helps. Issue to track this: <https://github.com/flyteorg/flyte/issues/3485>
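If you need to stay on the 1.4.1 release for a bit, pinning the console back is a one-line values change. A hedged sketch assuming a flyte-core style chart where the console image lives under `flyteconsole.image` (check your chart's values.yaml for the exact key):
```yaml
# Hedged sketch: pin flyteconsole below the regressed 1.4.8 release.
flyteconsole:
  image:
    repository: cr.flyte.org/flyteorg/flyteconsole  # assumed default repository
    tag: v1.4.7
```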
Cc <@U0265RTUJ5B> / <@U0231BEP02E>
Hey <@U0165SDP3BQ> - unfortunately yeah we had a regression regarding material-ui. But we have the fix out in <https://github.com/flyteorg/flyteconsole/releases/tag/v1.5.1> cc: <@U04NPJA96RW> <@U039ABA6BAR>
Hey <@U04FL09B6EA>, we've been trying to replicate the issue with 1.4.8 and we did see an error that caught our eye, but it wasn't this screen. Just curious, what is your stack configuration (flyte config) like?
<@U04FL09B6EA>, <@U0165SDP3BQ>, we just released Flyte 1.4.2 that contains a fix for this. Sorry for the regression, we're investing in testing+automation to ensure that this doesn't happen in the future.
1.4.2 looks good - thanks for the quick turnaround!
This message was deleted.
It should be possible- there is already a reverse proxy inside imo
<@UNZB4NW3S> , this doesn’t seem to work. I try to navigate to: `&lt;public_ip&gt;:30080/console` and nothing ends up loading despite the demo/sandbox running properly. I also curled the localhost:30089/console from inside the server and I get the html response back.
Hey folks, Apologies if I missed the solution to this in the documentation somewhere. I’m trying to deploy Flyte onto AWS EKS and enable the AWS Batch plug-in. So far I’m using helm with a `values.yaml` listed below and I can’t seem to figure out how to get the right configuration into the flyte admin config. # Helm command ```helm install flyteorg/flyte-binary \ --generate-name \ --kube-context=&lt;context&gt; \ --namespace flyte \ --values flyte-binary/flyte-binary-eks-values.yaml``` # flyte-binary-eks-values.yaml ```configuration: database: password:&lt;RD Password&gt; host: &lt;DB Host URI&gt; dbname: app storage: metadataContainer: &lt;bucket&gt; userDataContainer: &lt;bucket&gt; provider: s3 providerConfig: s3: region: "us-west-2" authType: "iam" logging: level: 1 plugins: cloudwatch: enabled: true templateUri: |- <https://console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/eks/opta-development/cluster;stream=var.log.containers.{{> .podName }}_{{ .namespace }}_{{ .containerName }}-{{ .containerId }}.log inline: plugins: aws: batch: roleAnnotationKey: &lt;Redacted&gt; region: us-west-2 tasks: task-plugins: enabled-plugins: - container - sidecar - aws_array default-for-task-types: - container_array: aws_array - aws-batch: aws_array - container: container serviceAccount: create: true annotations: <http://eks.amazonaws.com/role-arn|eks.amazonaws.com/role-arn>: &lt;Redacted&gt; # Where should this go? configMaps: adminServer: flyteadmin: roleNameKey: &lt;Redacted&gt; queues: executionQueues: - dynamic: &lt;JobQueueName&gt; attributes: - default workflowConfigs: - tags: - default``` And when I try and run a workflow with Batch tasks I get this error: ```Workflow[flytesnacks:development:<http://workflows.example.wf|workflows.example.wf>] failed. RuntimeExecutionError: max number of system retry attempts [11/10] exhausted. Last known status message: failed at Node[n0]. RuntimeExecutionError: failed during plugin execution, caused by: failed to execute handle for plugin [aws_array]: [BadTaskSpecification] config[dynamic_queue] is missing``` Thanks for reading this far :slightly_smiling_face: :pray: Update. I was able to get this to work by manually editing the configmap generated by helm, but is there a better way?
Have you looked at <https://docs.flyte.org/en/latest/deployment/plugins/aws/batch.html#deployment-plugin-setup-aws-array> doc?
what update did you make manually? oh, the bit marked “# Where should this go?”? Try under `configuration`: ```inline: flyteadmin: roleNameKey: ... ...```
Hey <@U01J90KBSU9>. Thanks for answering! I have looked at that <https://docs.flyte.org/en/latest/deployment/plugins/aws/batch.html#deployment-plugin-setup-aws-array|AWS Batch Setup doc> you linked. The helm blocks are helpful but I struggle to know what their proper context is. I ended up adapting <https://github.com/flyteorg/flyte/blob/master/charts/flyte-binary/eks-production.yaml|flyte-binary/eks-production.yaml> and mostly got it working. Once I get things working I’ll see if I can write up where I went wrong in a helpful way. Thanks, <@UNR3C6Y4T>! I’ll give that a try :pray: Adding that block to `configuration.inline` doesn’t seem to solve it. This is the ConfigMap manifest I edited to solve the issue but I’d prefer to do it in helm. ```# Source: flyte-binary/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: flyte-binary-1678743094-config namespace: "flyte" labels: <http://helm.sh/chart|helm.sh/chart>: flyte-binary-v1.3.0 <http://app.kubernetes.io/name|app.kubernetes.io/name>: flyte-binary <http://app.kubernetes.io/instance|app.kubernetes.io/instance>: flyte-binary-1678743094 <http://app.kubernetes.io/version|app.kubernetes.io/version>: "1.16.0" <http://app.kubernetes.io/managed-by|app.kubernetes.io/managed-by>: Helm annotations: data: 000-core.yaml: | admin: endpoint: localhost:8089 insecure: true catalog-cache: endpoint: localhost:8081 insecure: true type: datacatalog cluster_resources: standaloneDeployment: false templatePath: /etc/flyte/cluster-resource-templates logger: show-source: true level: 1 propeller: create-flyteworkflow-crd: true &gt;&gt;&gt;&gt; Added this flyteadmin: roleNameKey: &lt;REDACTED&gt; queues: # A list of items, one per AWS Batch Job Queue. executionQueues: # The name of the job queue from AWS Batch - dynamic: "default_EC2_job_queue" # A list of tags/attributes that can be used to match workflows to this queue. attributes: - default # A list of configs to match project and/or domain and/or workflows to job queues using tags. 
workflowConfigs: # An empty rule to match any workflow to the queue tagged as "default" - tags: - default &lt;&lt;&lt;&lt; webhook: certDir: /var/run/flyte/certs localCert: true secretName: flyte-binary-1678743094-webhook-secret serviceName: flyte-binary-1678743094-webhook servicePort: 443 001-plugins.yaml: | tasks: task-plugins: enabled-plugins: - container - sidecar - K8S-ARRAY default-for-task-types: - container: container - container_array: K8S-ARRAY plugins: logs: kubernetes-enabled: false cloudwatch-enabled: true cloudwatch-template-uri: <https://console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/eks/opta-development/cluster;stream=var.log.containers.{{> .podName }}_{{ .namespace }}_{{ .containerName }}-{{ .containerId }}.log stackdriver-enabled: false k8s-array: logs: config: kubernetes-enabled: false cloudwatch-enabled: true cloudwatch-template-uri: <https://console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/eks/opta-development/cluster;stream=var.log.containers.{{> .podName }}_{{ .namespace }}_{{ .containerName }}-{{ .containerId }}.log stackdriver-enabled: false 002-database.yaml: | database: postgres: username: postgres passwordPath: /var/run/secrets/flyte/db-pass host: &lt;REDACTED&gt; port: 5432 dbname: app options: "sslmode=disable" 003-storage.yaml: | propeller: rawoutput-prefix: &lt;REDACTED&gt; storage: type: stow stow: kind: s3 config: region: us-west-2 disable_ssl: false v2_signing: false auth_type: iam container: &lt;REDACTED&gt; 010-inline-config.yaml: | plugins: aws: batch: roleAnnotationKey: &lt;REDACTED&gt; region: us-west-2 tasks: task-plugins: default-for-task-types: - container_array: aws_array - aws-batch: aws_array - container: container enabled-plugins: - container - sidecar - aws_array```
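For reference, here is roughly how that hand-edited block would be expressed on the helm side, as a hedged sketch of a flyte-binary `values.yaml` fragment (placeholders are deliberately left as placeholders; whether `configuration.inline` ends up in the file flyteadmin actually reads for queue config is exactly the open question above):
```yaml
# Hedged sketch: flyte-binary values fragment mirroring the queues block
# that was patched into the generated ConfigMap by hand.
configuration:
  inline:
    flyteadmin:
      roleNameKey: REDACTED_ROLE_NAME_KEY   # placeholder
    queues:
      executionQueues:
        - dynamic: "default_EC2_job_queue"  # AWS Batch job queue name
          attributes:
            - default
      workflowConfigs:
        - tags:
            - default
```
If the inline route still doesn't surface the config where the admin server expects it, patching the rendered ConfigMap as above remains the workaround.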
Hey! I have a question regarding how the role permission setup is supposed to work on the multi cluster setup. What i would like to run is basically one control-plane with multiple data-planes in different AWS Accounts. Every data-plane should have a dedicated `flyte-user-roles` for accessing stuff in their account. What is currently happening when following the official documentation is that after exchanging secret, token and adjusting the cluster config on the control plane, the data-plane retrieves all the namespaces, quotas and Service Accounts from the control plane (which are created by the `cluster_resource_manager` i guess). This leaves me with default Service Accounts in the data-plane for all the projects/domains where the `flyte-user-role` of the control plane is annotated. Obviously i want the `flyte-user-roles` of the data-planes in there, which are completely unused so far in my setup. One way would be to just replace the default Service Account annotation with the correct flyte-user-role in the project/domains i need them. Is there a better or correct way of doing this?
<@UNR3C6Y4T> <@UNZB4NW3S>, can one of you please help Jan?
I went with overwriting the default service account in the data-plane for now: `kubectl annotate serviceaccount -n $FLYTE_PROJECT_NAME-$domain default <http://eks.amazonaws.com/role-arn=arn:aws:iam::$ACCOUNT_ID:role/$FLYTE_USER_ROLE|eks.amazonaws.com/role-arn=arn:aws:iam::$ACCOUNT_ID:role/$FLYTE_USER_ROLE> --overwrite` Still happy to hear if there is a correct way of doing this :slightly_smiling_face:
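If you'd rather not annotate by hand, the cluster resource templates are the usual hook for this: the per-project/domain service accounts can be stamped out with the right role already attached. A hedged sketch of such a template (the `{{ namespace }}` / `{{ defaultIamRole }}` variables are assumed to be filled from the chart's templateData/customData, as in the EKS example values; adjust to your setup):
```yaml
# Hedged sketch of a cluster-resource ServiceAccount template.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: '{{ namespace }}'
  annotations:
    eks.amazonaws.com/role-arn: '{{ defaultIamRole }}'
```
Note the sync still runs from the control plane, so getting a different role per data-plane account means either distinct customData per domain or running the cluster resource controller standalone per cluster; annotating by hand, as you did, is a legitimate stopgap.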
Hey all, I put together this repo with a bunch of tooling to get the multi-cluster deployment operational on EKS. <https://github.com/alexifm/flyte-eks-deployment> I actually haven’t tested it much so it’s probably riddled with bugs. It’s a cleanup and refactoring of what we’ve had success with so I expect flaws to be minor and not major issues with the logic.
This is awesome. Thanks for sharing
no prob. hopefully it helps people wade through it a little easier.
Wow! Thanks for putting together a guide, Alex! Really appreciate it. <@U01DYLVUNJE>, shall we add a link to this in our deployment guide somewhere?
I’m commenting on this PR now: <https://github.com/flyteorg/flyte/pull/3363> I think there are a lot of disparate sources that have shades of correctness, or were correct at some point in time. For example, I found the lines below to be incorrect (I searched the `values.yaml` and the referenced part didn’t exist). Also, if you look at the `values-auth.yaml` file I constructed, this part of the PR appears inadequate for getting auth working. <https://github.com/flyteorg/flyte/pull/3363/files#diff-ee77fbe074a2e541ba69e70ac154aa8fcffcdc3699cb4f0abc8379e4ee2a3b9bR191-R195>
Oh okay. Please leave a comment on the PR. We'll make sure to incorporate your suggestion!
Also, if you see in my repo, I will need to submit a PR to the main flyte repo because the helm chart was insufficient (maybe it’s been updated, haven’t checked in the last week or two) <https://github.com/alexifm/flyte-eks-deployment/commit/69e327734acf0bca67cd89dc62e7c26e8ca7a9c9>
The extra lines of config you added to your repo aren't present in the flyte helm chart. <@UNR3C6Y4T>, is this something we need to add to the flyte helm chart? <@U04P6HHMCG0>, please feel free to create a PR meanwhile.
Right. That commit is a quick hack I put in because I was finding that the `additionalVolumes` and `additionalVolumeMounts` described in the `values-override.yaml` file in <https://docs.flyte.org/en/latest/deployment/deployment/multicluster.html#user-and-control-plane-deployment|this section> weren’t working out of the box. I was getting errors on some of the containers/init-containers for `flyteadmin` and `clusterresourcesync`, and it was due to the secrets not being mounted in enough places.
this is great <@U04P6HHMCG0> thank you. I will use it and report back any findings
<@U029U35LRDJ> remind me again where we can control the max futures file size? cc <@U019PBV483E> is it <https://github.com/flyteorg/flytepropeller/blob/95a4791f59452845714bb22252c77dee57ac38c1/pkg/controller/config/config.go#L64>?
yes. that is exactly the configuration option.
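For anyone searching later, bumping it would look roughly like this; a hedged sketch assuming the `max-output-size-bytes` knob from the propeller config is the one being pointed at (the key name may differ by version, so verify against the config.go linked above):
```yaml
propeller:
  # Assumed key: caps the size (in bytes) of generated outputs, including futures files.
  # 10 MiB is the usual default; this doubles it.
  max-output-size-bytes: 20971520
```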
Hello :slightly_smiling_face: I am currently following the multi-cluster deployment. At the moment, I have a problem when applying the values-override.yaml with the correct entries for my data-plane cluster on the control plane. Every deployment is healthy except flyteadmin, which fails with the following logs: ```{"json":{},"level":"warning","msg":"stow configuration section missing, defaulting to legacy s3/minio connection config","ts":"2023-03-01T21:25:00Z"} {"json":{},"level":"fatal","msg":"caught panic: entries is empty [goroutine 1 [running]:\nruntime/debug.Stack()\n\t/usr/local/go/src/runtime/debug/stack.go:24``` Can anyone point me in the right direction as to what might be missing here?
what does your storage config look like? this is using minio but this is the demo environment config <https://github.com/flyteorg/flyte/blob/7f66e475b522745f3721170906db32f61d9af357/docker/sandbox-bundled/manifests/complete.yaml#L463>
This is my storage config: ```storage.yaml: | storage: type: s3 container: "control-dev-playground-cluster-bucket" connection: auth-type: iam region: eu-central-1 enable-multicontainer: false limits: maxDownloadMBs: 10``` Pretty sure this error is misleading, since an enabled flytepropeller on the control plane is not causing these issues. Nevermind! I was missing the ```enabled: true``` in my flyte-admin-clusters-config. Thank you for helping me out :slightly_smiling_face:
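For anyone who hits the same `caught panic: entries is empty`, the block that was missing `enabled: true` is the flyteadmin clusters config on the control plane. A hedged sketch following the multicluster docs (name, endpoint, and credential paths are placeholders):
```yaml
# Hedged sketch of the control plane's cluster configuration.
clusters:
  clusterConfigs:
    - name: "dataplane_1"
      endpoint: "your-dataplane-endpoint:443"   # placeholder
      enabled: true                             # the piece that was missing
      auth:
        type: "file_path"
        tokenPath: "/var/run/credentials/dataplane_1_token"
        certPath: "/var/run/credentials/dataplane_1_cacert"
```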
Hey folks, deploying Flyte on a single EKS cluster, is it possible to deploy Flyte's binary in an on-demand nodegroup, and then have tasks run in a spot nodegroup?
Yup
like a scale to zero node group?
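One way to wire that up (a hedged sketch, assuming the propeller k8s plugin's interruptible settings and the standard EKS managed-nodegroup capacity label; the taint key is made up, match it to whatever taint you put on the spot group): keep the flyte-binary deployment on the on-demand group with a normal nodeSelector, and let tasks declared `interruptible=True` carry a selector/toleration for the spot group, which only needs to scale up when such pods are pending.
```yaml
# Hedged sketch: propeller k8s plugin config (e.g. under the chart's inline plugin config).
plugins:
  k8s:
    interruptible-node-selector:
      eks.amazonaws.com/capacityType: SPOT   # EKS managed nodegroup capacity label
    interruptible-tolerations:
      - key: "spot"                          # hypothetical taint on the spot nodegroup
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
```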