Sören Brunk I hear you, and we would love to work with you on this. Yup, so today tasks need to complete. No hard requirement really; you can actually run a streaming job and set the platform timeout to infinity :joy: So it should run today, but I want to qualify these as special task types in the future, just for correctness.
Yeah, even a streaming job that terminates after, say, 24h and is then rescheduled as a new task execution would be totally fine, for now.
Ya, this should all work. There is no need for a task to terminate; it’s a logical requirement, u see what I mean?
yes I get it
Awesome, would love to help with trying it out. Just want to see if there are differences in streaming.
Great! I’ll get back to you when I have the chance to give it a try
yup, in the Spark operator CRD I mean; if there are, adding it should be simple
IMHO there’s not really much of a difference from the Spark operator point of view. Probably <https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#using-container-lifecycle-hooks|graceful shutdown> triggered from the outside is a more common requirement for a streaming job.
ohh, we trigger that, so this should just work for now haha. ya, then that would be one of the future items: supporting forever-running functions (streaming functions), and we will be adding support for the Flink operator as well
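To make the graceful-shutdown idea above concrete, here is a minimal sketch (the rate source, console sink, and names are illustrative, not from this thread) of a PySpark structured-streaming driver that traps SIGTERM, which is what a preStop hook or pod deletion ultimately delivers, and stops the query cleanly instead of being killed mid-batch:
```
import signal

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-shutdown-demo").getOrCreate()

# Toy source/sink; a real job would read from Kafka/Kinesis and write somewhere durable.
query = (
    spark.readStream.format("rate").option("rowsPerSecond", 10).load()
    .writeStream.format("console").start()
)

def _graceful_stop(signum, frame):
    # Stop the streaming query so in-flight micro-batches can finish before the driver exits.
    query.stop()

signal.signal(signal.SIGTERM, _graceful_stop)
query.awaitTermination()
```
With something like this in the driver, an externally triggered shutdown lets the job finish in-flight work and exit cleanly rather than relying only on a platform timeout.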
Flyte supports SageMaker built-in algorithms today; the interface is where we have really innovated IMO. Custom models are coming soon, slated for this month. When you use SageMaker through Flyte, you are still interacting with the Flyte control plane, so it will manage the executions, queue them up if needed, and also capture/record all information. We do not really use a lot of SageMaker at Lyft *yet*, but we would love to know your problems, like resource management, and we can tackle them within Flyte easily. Fredrik Sannholm ^
So like I said, our main gripe with SM is the lack of resource management, no job queues like on AWS Batch. Looks like Flyte will help us out
yup, we have a built-in resource pooling thing; now that you brought it up, i will add support for that into the SageMaker plugin :slightly_smiling_face: Chang-Hong Hsu Haytham Abuelfutuh ^
So based on <https://docs.aws.amazon.com/general/latest/gr/sagemaker.html|this>, you can run 20 concurrent training jobs, which is a ridiculously low number for a company of any size. I haven’t discussed this with our AWS guy; I assume we could get a higher limit with :moneybag: but the real problem is the lack of a queue
yup i agree, this is easily supportable with Flyte
If you exceed your limit it just returns an error, and you have to try again :slightly_smiling_face:
this is our problem with Hive, Presto, AWS Batch (has a rate limit) and Sagemaker
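For illustration, this is roughly the retry loop callers end up writing by hand when there is no queue in front of SageMaker. The job spec and backoff policy here are hypothetical; `ResourceLimitExceeded` is the error CreateTrainingJob returns when the concurrency limit is hit:
```
import time

import boto3
from botocore.exceptions import ClientError

sagemaker = boto3.client("sagemaker")

def submit_with_retry(job_spec, max_attempts=10, base_delay_s=60):
    """Re-submit a training job until SageMaker has capacity, with linear backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return sagemaker.create_training_job(**job_spec)
        except ClientError as err:
            code = err.response.get("Error", {}).get("Code", "")
            if code != "ResourceLimitExceeded":
                raise  # a real failure, not a capacity problem
            # No queue and no ordering guarantees: just back off and try again.
            time.sleep(base_delay_s * attempt)
    raise RuntimeError("gave up waiting for SageMaker training-job capacity")
```
A queue in front of submission (what Flyte's pooling/queueing aims to provide) removes the need for every caller to carry this loop.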
Awesomeness! :heart_eyes:
yup, and you do see in the UI that you are waiting in line. thank you for the awesome suggestion
Yes, we have a pooling mechanism to do rate limiting of a sort. It is not backed by a queue yet, so no ordering is preserved when a job is rejected due to the resource limit. But basically it does what you described here:
> If you exceed your limit it just returns an error, and you have to try again
I created an issue to track this: <https://github.com/lyft/flyte/issues/460>
thank you Chang-Hong Hsu. Chang-Hong Hsu we don’t need it right now, as it would automatically queue because the CRD would naturally serve as a queuing mechanism :slightly_smiling_face:
Another question (sorry if this is in the docs)! Lyft uses Amundsen/ <http://atlas.apache.org/#/|Apache Atlas> to track lineage, right? How well would it integrate with Flyte?
thank you for the question. I do not think we use Atlas, but yes we do use Amundsen. Amundsen only provides a flat view in time for any metadata; Flyte, on the other hand, has a timeline of data generation (so 3D lineage, available through DataCatalog). That being said, we will be working on integrating Amundsen and Flyte to index the latest versions of “workflows”, “tasks” and “datasets” into Amundsen. The plan was to use the event stream from Flyte to build the current view in Amundsen, but this is not available yet and would probably land early next year, based on priorities
ok! Atlas is one of the alternatives to power the metadata service of Amundsen, as per: <https://github.com/lyft/amundsen>.
George Snelling: silly qn, how does one join the group to get future notifications?
We should probably open it up for users to sign up with an approval
Jeev B not a silly qn at all. We use gsuite for the domain <http://flyte.org|flyte.org>. The group was locked down too tight. I just relaxed the permissions and exposed it to public search, but it might take a while before the indexing catches up and it shows up in search. For posterity the URL is <https://groups.google.com/a/flyte.org/forum/#!forum/users> Also, we’ve been treating slack as our primary communication channel out of habit, but we’ll try to do better about cross-posting important notices to <mailto:users@flyte.org|users@flyte.org> since google hosts those messages indefinitely for a modest fee.
hi all! i have a draft PR here for dynamic sidecar tasks. I was hoping to get some thoughts on how it’s been implemented before I go through with writing tests/docs: <https://github.com/lyft/flytekit/pull/152> I tested this with the following workflow:
```
from flytekit.sdk.tasks import dynamic_sidecar_task, python_task, inputs, outputs
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import workflow_class, Output
from k8s.io.api.core.v1 import generated_pb2


def _task_pod_spec():
    pod_spec = generated_pb2.PodSpec()
    cnt = generated_pb2.Container(name="main")
    pod_spec.volumes.extend(
        [
            generated_pb2.Volume(
                name="dummy-configmap",
                volumeSource=generated_pb2.VolumeSource(
                    configMap=generated_pb2.ConfigMapVolumeSource(
                        localObjectReference=generated_pb2.LocalObjectReference(
                            name="dummy-configmap"
                        )
                    )
                ),
            )
        ]
    )
    cnt.volumeMounts.extend(
        [
            generated_pb2.VolumeMount(
                name="dummy-configmap",
                mountPath="/data",
                readOnly=True,
            )
        ]
    )
    pod_spec.containers.extend([cnt])
    return pod_spec


@inputs(input_config=Types.String)
@outputs(output_config=Types.String)
@python_task
def passthrough(wf_params, input_config, output_config):
    output_config.set(input_config)


@outputs(config=Types.String)
@dynamic_sidecar_task(pod_spec=_task_pod_spec(), primary_container_name="main")
def get_config(wf_params, config):
    with open("/data/dummy.cfg") as handle:
        task = passthrough(input_config=handle.read())
        config.set(task.outputs.output_config)


@workflow_class
class DynamicPodCustomizationWF:
    config_getter = get_config()
    config = Output(config_getter.outputs.config, sdk_type=Types.String)
```
and it correctly outputs the contents of the configmap, which only works if the pod customization is working as intended. this might be too simple, and i may have glossed over some important details. just wanted to get some expert opinion on it! :slightly_smiling_face:
Haytham Abuelfutuh ^^ Thank you Jeev for doing this
Ketan Umare: It’s not complete. still needs tests/docs, but I wanted to make sure that I was on the right track, and that this is worth pursuing!
will look into it in a bit maybe tomorrow
yea no rush at all! thanks!
btw, did you get a chance to try the “blanket tolerations”?
nope not yet. i'm actually doing some handover stuff before i go on family leave anytime now.... not much time to work on new stuff yet.
ohh family leave? …
i do have a sandbox env that i can mess around on, so will try to play around with it!
ok no hurries
thanks for putting this in jeev! i’ll take a deeper look at this tomorrow. we’re also in the middle of a bit of a refactor that I’m really hoping to get in by the end of next week. but it shouldn’t really affect this change in particular.
thanks Yee
oh hi. sorry, yes. Katrina Rogan and i both took a look. fill out tests and we can merge? also bump the version
sounds good thanks!
and added a couple random comments.
Hey everyone! Is it possible to run a Spark task conditionally, for example, only if one of the workflow’s inputs was filled? Maybe you have an example where I can look
Yee another example of branch
hey Artem Osipov! This is natively supported in the IDL but not yet implemented. For now though, you can achieve a similar behavior by using dynamic tasks, though it’s not exactly the same. for example, here is a dynamic task that yields tasks from within a for block: <https://github.com/lyft/flytekit/blob/master/tests/flytekit/common/workflows/batch.py#L58> you could just as easily make that an if block. but the interface isn’t nearly as clean since it means you’ll be running an entire task (replete with container loading and such), just to run an if statement. the native branching that ketan’s referring to will fix that.
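To sketch what that looks like with the legacy flytekit decorators used elsewhere in this thread (the task bodies and names are made up; only the dynamic-task pattern mirrors the linked batch.py example):
```
from flytekit.sdk.tasks import dynamic_task, inputs, outputs, python_task
from flytekit.sdk.types import Types

@inputs(value=Types.Integer)
@outputs(doubled=Types.Integer)
@python_task
def double(wf_params, value, doubled):
    doubled.set(value * 2)

@inputs(run_heavy_step=Types.Boolean, value=Types.Integer)
@outputs(result=Types.Integer)
@dynamic_task
def maybe_double(wf_params, run_heavy_step, value, result):
    # Plain Python control flow decides whether the sub-task is yielded at all,
    # so the "branch" costs one extra container spin-up for this dynamic task.
    if run_heavy_step:
        sub = double(value=value)
        result.set(sub.outputs.doubled)
    else:
        result.set(value)
```
The same shape works for conditionally yielding a Spark task instead of `double`.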
Yee - Vrinda Vasavada wants to know if we can have 2 different docker images in the same repo. Note: it’s not the same workflow, but basically different workflows with different images. Yee, this is not indicated in any examples, and the default examples don’t really help understand this. Can we help Vrinda Vasavada? Vrinda Vasavada, just for me to understand, can you have these different workflows in different python modules / folders?
yes we can!
that should make it simple. so if you actually look Vrinda Vasavada, <https://github.com/lyft/flytesnacks> follows this pattern exactly. like, look at this: <https://github.com/lyft/flytesnacks/tree/master/plugins/spark> <https://github.com/lyft/flytesnacks/tree/master/plugins/pytorch> - different, yet in the same repo
basically just having two repos in one repo? sure
but these are almost logically separated; you can do better than that Yee. it is not 2 repos, it’s discrete use cases
yeah that’s fine, as long as you can have different top level dirs
Okay gotcha this makes sense, and in the case that we want to have the same top level directory is that also possible?
I was just going to type that; that should be possible as well. it becomes complicated Vrinda Vasavada if you try to import without the right dependencies installed across the 2 files, otherwise it is entirely possible. this is the registration command - <https://github.com/lyft/flytesnacks/blob/master/cookbook/Makefile#L30>
so long as there is a distinct break somewhere, some parent that bifurcates all the tasks/workflows, it’ll be fine, it doesn’t have to be at the root of the repo.
yup, Yee. so the real thing that drives the “discovery” of all tasks and workflows is the “config” file. for example, we will find all workflows in <https://github.com/lyft/flytesnacks/blob/master/cookbook/sandbox.config#L2> recursively. Vrinda Vasavada if you have any questions, let us know; we should probably add an example in this repo
okay so there should be some split in the directories and a separate config file for each, and then the different docker images in each?
ideally yes.
gotcha, thank you!! :slightly_smiling_face: I'll let you know if I have any more questions as I implement
just for background, the link between a task and the image happens inside the task definition. currently at registration, the task will be populated with the value of the `FLYTE_INTERNAL_IMAGE` environment variable (which is set by the build script provided by flytekit). It’s overridable ofc. Figuring out how to allow users to specify arbitrary images for arbitrary tasks in a workflow is mostly a ux question i feel, and one that’s hard to do in a non-confusing way. But it’s definitely something we’ve been thinking about. but not relevant for your case right now
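Not flytekit's actual internals, just a sketch of the mechanism described above in case it helps: at registration the task's image defaults to whatever `FLYTE_INTERNAL_IMAGE` holds inside the container being registered, and an explicit override wins if one is supplied.
```
import os
from typing import Optional

def resolve_task_image(explicit_image: Optional[str] = None) -> str:
    # The flytekit build scripts set FLYTE_INTERNAL_IMAGE inside the container being
    # registered, which is why each config/top-level dir can carry its own image.
    if explicit_image:
        return explicit_image
    env_image = os.environ.get("FLYTE_INTERNAL_IMAGE")
    if env_image:
        return env_image
    raise RuntimeError("no container image configured for this task")
```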
Anand Swaminathan Katrina Rogan - Hongxin Liang seems to have found a bug <https://github.com/lyft/flyteadmin/pull/71/files#r475657367>
Yeah, just saw that. Wondering how I missed it, I thought I had a test to catch it.
do you understand the impact? It seems that every time there are missing resources, it will fail?
Exactly. Hongxin Liang’s fix I believe is correct. In fact, if you scroll a few lines back, I have a debug statement for this case.
but then how can this work for sandbox? I think we should fix this ASAP
Ketan Umare maybe sandbox does not use multi-cluster?
yeah i thought we only used this in our actual lyft deployment
This PR is correct <https://github.com/lyft/flyteadmin/pull/117/files>
yeah we check if the resource is nil below before dereferencing
Thanks for approving the PR. I will create a corresponding issue for tracking purposes and then merge it. BTW this fails for a non-in-cluster setup, so it doesn’t need to be multiple clusters.
it looks like admin isn't using the flyteadmin role
Katrina Rogan - Meet emiliza (from unity) she is trying to get Flyte up in GCP she is failing to start `sync resources` because of this error ```{"json":{"src":"clusterresource.go:98"},"level":"fatal","msg":"Failed to sync cluster resources [Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flyteexamples-development] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope, Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flyteexamples-staging] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope, Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flyteexamples-production] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope, Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flytetester-development] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope, Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flytetester-staging] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope, Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flytetester-production] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope, Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flytesnacks-development] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope, Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flytesnacks-staging] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope, Failed to create kubernetes object from config template [aa_namespace.yaml] for namespace [flytesnacks-production] with err: namespaces is forbidden: User \"system:serviceaccount:flyte:default\" cannot create resource \"namespaces\" in API group \"\" at the cluster scope]","ts":"2020-08-24T20:24:21Z"}``` it seems her RBAC is wrongly configured for Admin
ya
should be this one <https://github.com/lyft/flyte/blob/master/kustomize/base/adminserviceaccount/adminserviceaccount.yaml#L6>
this is what i’ve got set for rbac:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: flyteadmin
  namespace: flyte
rules:
- apiGroups:
  - ""
  - flyte.lyft.com
  - rbac.authorization.k8s.io
  resources:
  - configmaps
  - flyteworkflows
  - namespaces
  - pods
  - resourcequotas
  - roles
  - rolebindings
  - secrets
  - services
  - serviceaccounts
  - spark-role
  verbs:
  - '*'
```
emiliza it is what Katrina Rogan is saying: you need to have the role specified. i don’t know how, but somehow it was overridden in your kustomize?
huh i wonder if it's because we specify the service account name here <https://github.com/lyft/flyte/blob/d3660d4a1a8f46d0246b8798a7e6fbbc58f7820a/kustomize/overlays/eks/admindeployment/cron.yaml#L12> but not here <https://github.com/lyft/flyte/blob/d3660d4a1a8f46d0246b8798a7e6fbbc58f7820a/kustomize/overlays/eks/admindeployment/admindeployment.yaml#L30> ? emiliza is this failing on start-up or when the cron runs?
on start-up
ooh okay i bet that could be it, since the cron has the correct service role - let me make a pr
i’ll add to my deployment and check it out
Katrina Rogan she is using the GCP overlay that Spotify built, it might have a bug.
can you link me?
<https://github.com/lyft/flyte/pull/185/files> yup it is missing in both EKS and in GCP
the gcp admindeployment also appears to be missing the serviceaccount
Thank you emiliza!
i can update this pr instead, thanks for pointing this out
hmm, recreated the admin deployment and killed the `syncresources` pods but i’m still getting the same issue
oh sorry i had a meeting and wasn't yet able to update :sweat_smile: mind pulling the latest and trying again emiliza
works now, thanks Katrina Rogan!
great, glad to hear it!
Thank you Katrina
completely different question: is there something else i’m supposed to install to get FlyteHub showing up in my flyte console? looking at importing <https://flytehub.org/objectdetector|ObjectDetector> and can’t find this Hub :joy:
That is not supported in the mainline fork of Flyte. Why do you want FlyteHub?
just to import that objectdetector workflow
That example is available in flytesnacks. Just use that, it’s the fastest way to get started
hmm, i wasn’t actually able to get <https://lyft.github.io/flyte/user/getting_started/examples.html|flytesnacks> loaded properly. followed the example and got some errors:
```
$ docker run --network host -e FLYTE_PLATFORM_URL='http://<redacted admin ip>/' lyft/flytesnacks:v0.1.0 pyflyte -p flytesnacks -d development -c sandbox.config register workflows
Using configuration file at /app/sandbox.config
Flyte Admin URL http://<redacted admin ip>/
Running task, workflow, and launch plan registration for flytesnacks, development, ['workflows'] with version 46045e6383611da1cb763d64d846508806fce1a4
Registering Task: workflows.edges.edge_detection_canny
Traceback (most recent call last):
  File "/app/venv/bin/pyflyte", line 11, in <module>
    sys.exit(main())
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 86, in workflows
    register_all(project, domain, pkgs, test, version)
  File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 24, in register_all
    o.register(project, domain, name, version)
  File "/app/venv/lib/python3.6/site-packages/flytekit/common/exceptions/scopes.py", line 158, in system_entry_point
    return wrapped(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/common/tasks/task.py", line 141, in register
    _engine_loader.get_engine().get_task(self).register(id_to_register)
  File "/app/venv/lib/python3.6/site-packages/flytekit/engines/flyte/engine.py", line 234, in register
    self.sdk_task
  File "/app/venv/lib/python3.6/site-packages/flytekit/clients/friendly.py", line 50, in create_task
    spec=task_spec.to_flyte_idl()
  File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 12, in handler
    return fn(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 77, in create_task
    return self._stub.CreateTask(task_create_request)
  File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 604, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 506, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
  status = StatusCode.UNAVAILABLE
  details = "DNS resolution failed"
  debug_error_string = "{"created":"@1598317962.581326500","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1598317962.581295600","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/resolving_lb_policy.cc","file_line":263,"referenced_errors":[{"created":"@1598317962.581287600","description":"DNS resolution failed","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":357,"grpc_status":14,"referenced_errors":[{"created":"@1598317962.581256400","description":"C-ares status is not ARES_SUCCESS: Domain name not found","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244,"referenced_errors":[{"created":"@1598317962.581099300","description":"C-ares status is not ARES_SUCCESS: Domain name not found","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244}]}]}]}]}"
```
Hi all, video from today’s bi-weekly sync posted here: <https://drive.google.com/file/d/1V8IEpxS8sHtvDjkk7W-7BGNkOy_xB0sn/view?usp=sharing> Yee showed some alternative Python signatures for task execution with more or less syntax and implied context that he is considering for the next update of the Flytekit SDK. If you are an opinionated Python jock please take a look and comment, or better, submit an alternative PR. It’s important that we get this right because we’re going to be living with it for a long, long time.
FYI Steven Kothen-Hill Marvin Bertin
Hello Everyone! :wave: I’d like to ask advice about using `storage_request` / `storage_limit` for `python_task` tasks. I thought this value should appear in the Pod spec in the `resources.*.ephemeral-storage` section, but my assumption was wrong, and I see in the Flytepropeller sources that it is a different k8s core type. Could you please advise how to use this properly? Thank you in advance!
Ruslan Stanevich what’s the use case?
Looks like it should be PVC storage but I missed how I can use it :thinking_face: Hi Ketan, at the moment there is no use case that we could not solve, just curiosity when I saw this in the documentation :slightly_smiling_face:
thank you, yes. so `storage-limits` are available only in the newer Kubernetes versions, and you are right, we are not patching it to ephemeral storage; we should. if you want to use a PVC you have to use sidecar tasks (bad name)
Yeah, I agree. About the use cases: in fact, we have an example of a task where we are expecting a huge file with geographic data (which is very problematic to split). And yes, we used sidecar_task for this as a workaround. We are on EKS v1.17 now
ohh awesome, let’s file an issue for storage support and you can help us validate it
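For reference, a minimal sketch of how `storage_request` / `storage_limit` attach to a task with the legacy flytekit decorators (the task body and names are made up); as discussed above, these values are recorded on the task but not yet patched through to the pod's ephemeral-storage resources:
```
from flytekit.sdk.tasks import inputs, outputs, python_task
from flytekit.sdk.types import Types

@inputs(source_url=Types.String)
@outputs(row_count=Types.Integer)
@python_task(storage_request="10Gi", storage_limit="20Gi", memory_request="4Gi")
def load_geo_file(wf_params, source_url, row_count):
    # Download the large geographic file to local scratch disk and process it here.
    # Until ephemeral-storage is wired up, a sidecar task with an explicit volume
    # (as mentioned above) is the reliable way to guarantee the disk.
    row_count.set(0)
```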
Hi, it seems we are missing a few release tags here: <https://github.com/lyft/flyte/releases>.
Which one?
0.6 and 0.7 are missing
that is my bad. the release has no real meaning here, it’s just a tag. i will add the latest, Sören Brunk / Hongxin Liang - created 0.7.0. i will go back in time and tag 0.6.0 later, unless you want it now
Cool. Thanks. I often refer to release page for latest stuff. :)
Igor Valko question about the pytorch example <https://github.com/lyft/flytesnacks/blob/master/plugins/pytorch/workflows/mnist.py#L148-L152> I think this will not work for population size (mpi group size) greater than 1
Hey Ketan! What exact issue do you observe? I’ve found in <https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-torch-nn-dataparallel-models|this tutorial> that a model wrapped in DataParallel (i.e. a distributed model) should be saved the following way: ```torch.save(model.module.state_dict(), PATH)``` Is that the thing you’re talking about?
so in Flyte, when you run the task, each task will start up with all the inputs and is expected to produce all the outputs. but in the distributed training case, only one job really produces the output, so we need to ensure that only the master produces outputs (the model) and for all others, outputs are ignored and/or not required. Chang-Hong Hsu ^ Kevin Su ^
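A small sketch of that point (it assumes torch.distributed has already been initialized by the training launcher; the path and function name are arbitrary): only rank 0 unwraps and saves the model, so only one replica actually materializes the task's output.
```
import torch
import torch.distributed as dist

def save_model_if_master(model, path="/tmp/model.pt"):
    rank = dist.get_rank() if dist.is_available() and dist.is_initialized() else 0
    if rank == 0:
        # Unwrap DataParallel / DistributedDataParallel before saving,
        # per the PyTorch tutorial linked above.
        to_save = model.module if hasattr(model, "module") else model
        torch.save(to_save.state_dict(), path)
    return rank == 0
```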
All, I am working on cleaning up our Kustomize, do you think this will impact your deployments? We will try to release it in the next version. This should make it easier to configure and manage your overlays, but it may break any existing overlays. Is that ok? I will share the PR. Jeev B Hongxin Liang emiliza Deepen Mehta Sören Brunk ^?
No worries. We pin our kustomize bases by commit anyway so this shouldn’t affect us. Thanks for the heads up!
ok that is awesome. ya again i will share the PR. But I think this would clean up the kustomize quite a bit
Ketan Umare Please go ahead. We only use the Kustomize setup for testing cluster. Production is different as you know.