ya so do we want to catch up at 10? i can, just let me know
no rush for the docs. we don't have to meet today!
ok, i thought you wanted to do stress testing. i just wanted to give you the perf things before you start stress testing, so sure, let's catch up later
Hi everyone, we are looking at updating the SparkOperator/CRD version used in Flyte. Currently, we use the `v1beta1` version of the SparkOperator, which is deployed as part of the Flyte deploy. Similarly, the Spark plugin currently creates `v1beta1` Spark resources. We are looking to move to `v1beta2`, which has been the stable version supported by the GCP SparkOperator and has multiple new features. This might need some manual cluster clean-up and can have a potential impact on existing running jobs. If you are using Spark in production, please review the details in <https://github.com/lyft/flyte/issues/573> on how this can potentially impact you and let us know if there are any concerns. Ruslan Stanevich I believe you use Spark in production. Also Ketan Umare please add anyone else who might be using Spark
Thank you Anmol Khurana Yes, we use Spark for production workflows.
Jeev B Nelson Arapé I don’t think you guys use spark right? Deepen Mehta I know you use your own spark right?
George Snelling Ketan Umare are we having the usual open-source meeting tuesday? Or are we skipping this one on account of the election?
It's a holiday/day off at Lyft and a few other companies too
What do you guys think? If so, we should cancel now
I'm fine to cancel. Want me to do it?
my vote is to cancel now… and we can do the fast register demo and also present the alpha release together (though I’m not sure we’ll get fast-register working with the new code)
Ketan Umare?
hey, not sure if this is the right channel, lmk and i can move it. I seem to be unable to fetch and run remote workflows/tasks from the python API:

1. If i try to register a workflow that fetches a remote workflow i get `An entity was not found in modules accessible from the workflow packages configuration`. I traced it to an arg `detect_unreferenced_entities` which is hardcoded to `True`. I can expose this as a cmdline arg in a PR but i'm a little confused about the intended functionality here
2. If i fork this code to get around that and register, the code fails at runtime to fetch the task because `FLYTE_PLATFORM_URL` is not set or is wrong. My understanding is that this code shouldn't be running at all and it's a fluke of python class loading. What's the correct workaround?

Also, I understand that this is a product of the old flytekit and point (2) in particular should be fixed in the new version. Example code that fails:
```
@workflow_class
class RoyaltiesForecast:
    # System workflow parameters
    parameters = Input(Types.Generic, default={})
    consumption = Input(BQDataset)

    ads_revenue = SdkWorkflow.fetch(
        "flytesnacks",
        "production",
        "workflows.ads_revenue.workflow.AdsRevenueForecast",
        "v4",
    )
    ads_revenue = ads_revenue(
        parameters=parameters,
        consumption=consumption,
    )
```
cc Gleb Kanterov Ketan Umare Yee as mentioned, this is our main blocker
Are the project and domain attributes correct?
hmm yes, but would that be related? the code seems to be explicitly preventing this behavior <https://github.com/lyft/flytekit/blob/master/flytekit/tools/module_loader.py#L72>
i just looked at the sample and tried to guess, let me check
code is a little opaque, still trying to parse but i think that's what it's doing. setting that value to false fixes this issue for sure
Dylan Wilder can you try it this way - <https://github.com/lyft/flytesnacks/blob/master/cookbook/recipes/shared/sharing.ipynb> basically factoring out the task outside the class? I think it's assuming that the task exists. i know this example works, we can debug once you have the code working (so that we can unblock you). Yee what do you think ^ Dylan Wilder any luck?
Yee is helping me :slightly_smiling_face: on meets
I thought that the problem was with how Python loads the code. One solution can be separating workflows and tasks into different modules
problem (2) is, but problem (1) is different
(1) don’t know :slightly_smiling_face:
yea yee and i looked into it, so we're working on it. will let you guys know!
if you wouldn’t mind trying, i’d be curious to see what this value is at the point of failure
```
from flytekit.configuration.sdk import WORKFLOW_PACKAGES
WORKFLOW_PACKAGES.get()
```
ya, Yee wouldn’t creating a module level variable called the task solve it?
it can be that (2) can be fixed by creating a wrapper that will return None if the code is running inside a container in a Flyte task. Not sure how much harm this magic is going to bring :slightly_smiling_face:
as it gets added to the instance tracker
```
WORKFLOW_PACKAGES
['workflows.onemodel.royalties']
```
Gleb Kanterov we are also in the new flytekit api, overhauling how remote tasks should work, we should discuss that, let me start a thread in <#CREL4QVAQ|flytekit>
sorry Yee if i unset the env var i get ```WORKFLOW_PACKAGES []``` but it errors either way
this line needs to be moved to module level:
```
ads_revenue = SdkWorkflow.fetch(
    "flytesnacks",
    "production",
    "workflows.ads_revenue.workflow.AdsRevenueForecast",
    "v4",
)
```
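A minimal sketch of the module-level pattern being suggested, assuming the legacy flytekit 0.x API from the example above (the import paths are a best guess and may vary by version):
```
# sketch only: legacy flytekit 0.x style, import paths may differ by version
from flytekit.common.workflow import SdkWorkflow
from flytekit.sdk.workflow import workflow_class, Input
from flytekit.sdk.types import Types

# fetched at module level, so the registration-time module walk sees it as a
# module attribute (a key in dir(module)) instead of a class-body temporary
ads_revenue_wf = SdkWorkflow.fetch(
    "flytesnacks",
    "production",
    "workflows.ads_revenue.workflow.AdsRevenueForecast",
    "v4",
)

@workflow_class
class RoyaltiesForecast:
    parameters = Input(Types.Generic, default={})
    # the class body only *calls* the already-fetched workflow
    ads_revenue = ads_revenue_wf(parameters=parameters)
```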
doesn't fix it if by module you mean file
yeah it needs to be assigned to a key returned in `dir(module)`. same error if you move it?
yup :confused: Yee can testify
hmmm unfortunately i can’t dig into this right now, but my best guess would be there is some slight misalignment between workflow and task registerable entities, since that path for tasks is pretty well-exercised and hasn’t been creating issues. and catching up, looks like yee already found it and fixed it
I have some questions about ser/de and caching behavior. I think i understand, so stop me when i'm wrong:

1. caching is based on the _materialized_ inputs to each task. eg if task B depends on output from A, and A is rerun due to changed inputs _but the output does not change_, task B will not be rerun
2. The equality of output is determined by the proto representation of that type. eg if it's a string type then this process is straightforward, however if it's a CSV or dataframe then even though the data output by A may be the same, the file location in the proto may change and _so it will be a cache miss_

1. correct. 2. correct.
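To make point (2) concrete, here's a toy illustration (not Flyte internals, just the shape of the problem): the cache key covers the input literals, and for file-backed types the literal carries a URI rather than the bytes, so a rerun that writes the same data to a new location produces a new key.
```
import hashlib
import json

# toy cache key: hash of the task name plus its serialized input literals
def cache_key(task_name, inputs):
    blob = json.dumps({"task": task_name, "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

cache_key("B", {"rate": "1.07"})                       # stable across runs
cache_key("B", {"csv": "s3://bucket/run-41/out.csv"})  # rerunning A writes the
cache_key("B", {"csv": "s3://bucket/run-42/out.csv"})  # same data to a new URI -> miss
```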
I have an interesting use case that i'm trying to solve where this behavior is a blocker. would like to get your input. we're basically building a scenario analysis tool which involves a bunch of interconnected models. users provide assumptions (such as future fx rates, product pricing, etc) and kick off the system to make predictions. because of the scale/complexity of the parameters, we cannot model each as a flyte type, so we need an external UI/database etc. the plan was to have a user create immutable copies of possible parameters and then pass a pointer to the params as the main arg to the workflow. unfortunately, since that pointer will change every run, it basically invalidates the entire cache, which will be a bad user experience
Hey Dylan Wilder. That's an interesting use case! DataCatalog (the thing that provides caching behavior) has public and documented APIs... We have customers who use it outside the typical behavior of Flyte caching. One possibility I see here is that you can compute your own provenance (the thing you want to use as the lookup key) and use that directly to look up from DataCatalog; if an existing artifact exists (in this case it'll be a pointer to the real dataset), return it... This way, subsequent tasks will continue to behave the same way (caching will work as expected... etc.). you can compute the provenance based on hashing of the real data generated (might be expensive, depending on the size)... Does that make sense?
it does! this is the kind of thing i was looking for
alright, let me write up a gist with an example... Some very rough idea here: <https://gist.github.com/EngHabu/79da5071a4f2715811dec55cc8f5961a>
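Roughly the shape of the idea, as a self-contained sketch: the dict below is a stand-in for DataCatalog (a real setup would go through its APIs), and `materialize` is a hypothetical callback that does the expensive work.
```
import hashlib

_CATALOG = {}  # stand-in for DataCatalog; a real setup would use its APIs

def provenance_key(scenario_params: bytes) -> str:
    # hash the *content* of the immutable parameter set, not its pointer,
    # so two runs over identical params map to the same lookup key
    return hashlib.sha256(scenario_params).hexdigest()

def resolve_dataset(scenario_params: bytes, materialize) -> str:
    key = provenance_key(scenario_params)
    if key in _CATALOG:
        return _CATALOG[key]            # hit: pointer to the real dataset
    uri = materialize(scenario_params)  # miss: do the expensive work once
    _CATALOG[key] = uri
    return uri

# identical params resolve to the first artifact even if the pointer differs
uri1 = resolve_dataset(b'{"fx": 1.07}', lambda p: "s3://bucket/run-1/out")
uri2 = resolve_dataset(b'{"fx": 1.07}', lambda p: "s3://bucket/run-2/out")
assert uri1 == uri2
```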
sorry, what's going on with `B` here? ostensibly it should be written to storage somewhere and the reference saved in DataCatalog? is that storage external, or is there a way to store the object directly in flyte? and how is this different than storing anywhere deterministic and returning the directory as a string (ie eliding datacatalog)?

also a follow-on question: in many cases _some_ parameters have changed but many haven't, and not all downstreams depend on every parameter, so ideally we would just rerun those that are necessary, but again we _don't want to expose everything as a top level flyte output_ since this would be unmanageable. think a big dict of key to dataset values. is it possible to depend on one of the keys only, or in general on some sub-element of an output? In the old api i'd guess no because outputs aren't materialized, but maybe in the new one?
the python sdk shouldn’t impact this since it really comes down to the IDL definition. Having sub-references into the IDL for structured outputs is a really interesting thought though. one idea that could work today (not sure how extensible it is to your setup): you could probably achieve this in a fairly elegant way in python by declaring a dict with individual keys pointing to task outputs or workflow inputs, then `**` exploding the dict as input to downstream nodes. Then you need to implement a way to drop out unnecessary keys, which can probably be achieved by making a wrapper around the task object which drops extra kwargs prior to attempting to construct the node. basically, use python code to aggregate the references into something manageable
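A plain-python sketch of that wrapper idea (illustrative only, not flytekit API; the names are made up):
```
import inspect

# keep one dict of references, ** explode it into every downstream node, and
# let a wrapper drop the kwargs a given task doesn't declare
def drops_extra_kwargs(task_fn):
    wanted = set(inspect.signature(task_fn).parameters)
    def node(**kwargs):
        return task_fn(**{k: v for k, v in kwargs.items() if k in wanted})
    return node

refs = {
    "fx_rates": "s3://params/fx",
    "pricing": "s3://params/pricing",
    "volumes": "s3://params/volumes",
}

@drops_extra_kwargs
def forecast_revenue(fx_rates, pricing):  # depends on only two of the keys
    return f"revenue({fx_rates}, {pricing})"

forecast_revenue(**refs)  # 'volumes' is silently dropped by the wrapper
```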
one scenario to consider is having messages that are delivered at least once trigger workflows...
Hi Jeev B, execution ID is designed for this scenario. We have many cases at lyft where users launch executions, and use of the ID is a way to achieve idempotency. Now, retries on failures: what are the failure scenarios? A workflow is just a meta entity and should never fail, but a task fails, right? And if that is the case one should introduce retries for task nodes. Please indicate cases in which you see workflows failing that would need an entire workflow retry
you are right. we want to retry on task failures. but would it be possible to intervene manually to “resume” workflows after tasks have failed? in case of a disastrous infra failure, for instance, so it picks up from the last successful tasks and proceeds to completion
Hmm, so today that can be achieved if you use memoization. If not, we don’t have a resume, but let’s add that as a feature request. That can be built, as we can completely recreate the state
right, but memoization works with a new execution ID right? the difference is that we’d like to resume the existing execution ID. that makes sense, we’ll add a feature request! at lyft, when users use idempotent execution IDs, how do they handle failures outside of the workflow? for instance if kiam fails to provide credentials to S3? do they just relaunch and leverage memoization? does that make sense?
Ya they do, leverage memoization
sounds good. we’ll go down this path for now and create a wrapping service that will handle launches/relaunches for us.
Hmm, that does not sound good - more work? So to understand: you need the same execution ID to be repeated?
yea, a bit. yes that’s one option. what we need is to have a machine idempotently kick off workflows while handling the case of “resuming” workflows with failed tasks. for context, we have a controller that responds to object storage events and kicks off workflows, and gcp pubsub is an at-least-once delivery system. but in the event of task failures due to infra failures we want to be able to intervene and “touch” files to “retrigger” and push these workflows through. does that make sense Ketan Umare?
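For the idempotent-kickoff half of this, a sketch of one way to derive the execution ID, assuming the trigger event carries the GCS bucket, object name, and generation (the naming scheme is illustrative, not a Flyte convention):
```
import hashlib

# derive a deterministic execution name from the triggering GCS event, so an
# at-least-once redelivery from pubsub maps to the *same* execution ID, while
# "touching" the object bumps its generation and yields a fresh execution
def execution_name(bucket: str, obj: str, generation: int) -> str:
    digest = hashlib.sha1(f"{bucket}/{obj}@{generation}".encode()).hexdigest()
    return f"exec-{digest[:20]}"  # short and kubernetes-name friendly

execution_name("forecasts", "params/run.json", 1604000000000000)
```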
give me some time, I will comment.
Ketan Umare: i might be able to leverage the `FlyteWorkflow` CRD with a custom label override as the idempotency key without any other additional work.
hmm, i am a little confused and intrigued, especially by the controller that uses events to kick off workflows. I am extremely interested in that, to see if you guys want to open source it at some point as a Flyte module
yea, the idea itself was inspired by this: <https://github.com/argoproj/argo-events> except we only really care about GCS object created events or webhooks. our current implementation isn’t as extensible... yet, but it has lots of potential!
<!here> Reminder everybody: community zoom meet tomorrow, Tuesday 9/17, 9am Pacific Time, 5pm UTC. Katrina Rogan will demo dramatic reductions in workflow registration times, improving interaction speed. Ketan Umare will demo the new Flytekit SDK alpha (built by the inimitable but shy Yee), available now for community feedback. Play with Flyte workflows locally before containerizing.
Thomas Vetterli: fyi
Yee: just noticed that setting `workflow_packages` to point to a single python file fails with: `has no attribute '__path__'` probably because:
```
def iterate_modules(pkgs):
    for package_name in pkgs:
        package = importlib.import_module(package_name)
        yield package
        for _, name, _ in pkgutil.walk_packages(package.__path__, prefix="{}.".format(package_name)):
            yield importlib.import_module(name)
```
assumes that all packages are directories with `__init__.py` files.
```
>>> importlib.import_module("flytekit.models.admin.common")
<module 'flytekit.models.admin.common' from '/Users/jeev/Workspace/repos/flytekit/flytekit/models/admin/common.py'>
>>> importlib.import_module("flytekit.models.admin")
<module 'flytekit.models.admin' from '/Users/jeev/Workspace/repos/flytekit/flytekit/models/admin/__init__.py'>
```
both of these are valid imports, but with the former we don't need to walk. we can use `if hasattr(package, "__path__")` as a check to see if a package is walkable. PR here: <https://github.com/lyft/flytekit/pull/259>
thank you! yeah we’ve never set these to single files before. not sure if there was a reason for that.
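A sketch of the fix being described (the real change is in the linked PR; this just shows the `hasattr` guard in context):
```
import importlib
import pkgutil

def iterate_modules(pkgs):
    for package_name in pkgs:
        package = importlib.import_module(package_name)
        yield package
        # a single-file module has no __path__, so there is nothing to walk
        if not hasattr(package, "__path__"):
            continue
        for _, name, _ in pkgutil.walk_packages(package.__path__, prefix="{}.".format(package_name)):
            yield importlib.import_module(name)
```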
Jeev B / Gleb Kanterov are you guys presenting this Tuesday at the OSS sync? Jeev B we would love to see what you have for the reactive work, and Gleb Kanterov i think your Java SDK is ready :stuck_out_tongue_winking_eye:
we don’t have a lot of substance - some python code that I can demo and perhaps that will inspire a discussion. :)
that is exactly the point: what the usecase is, and how it helps you. we want to actually plan for it in 2021, and whatever you have could be a great starting point. Thank you buddy!
We should use <https://gitbook.com/|https://gitbook.com/> for docs
How did you arrive at this conclusion? I’m curious to know, because just yesterday I told my team “we should use <https://squidfunk.github.io/mkdocs-material/> for docs” :sweat_smile:
Why do you think so, Fred? Have you done some homework? If so, do you want to share?
Did some googling, but I liked how simple it is: open source and a big community, a lot of plugins, and for someone who has zero frontend skills it’s easy to make it look nice :smile: Also, <https://fastapi.tiangolo.com> is my all-time favourite SW doc and it’s built with Material for MkDocs, so that might have affected my opinion a little bit.
Ohh, I just saw that gitbook is very well integrated with git, but I love suggestions. I should have said - we should evaluate this :blush:
Is there an opensource/selfhosted version of gitbook? I only looked at it briefly because it looked proprietary.
True
It's free for open source software
Aaha that’s why I see a lot of oss software using it
oh, well thats nice!
Fredrik Sannholm is your usecase private code? Also, I think one of the most important questions is source code documentation, especially for open source projects
yes, internal docs and tools. Currently we have them in Confluence, which sort of works, but I’m not a fan
We use mkdocs as well, but I don’t mind using anything that works
Gleb Kanterov so does mkdocs generate Java and python docs, or are there other libraries that do it and you plug in?
My understanding is that mkdocs turns markdown into html or whatever. There are plugins that turn python docstrings into markdown. Not sure about java though
It’s mostly for hand-written markdown docs
Channel and Community, I have some information to share. Lyft has decided to donate Flyte to the Linux Foundation. With this donation we will be creating a standalone neutral entity under the Linux Foundation. We feel that this will help in fostering a better community and a better open source product. Thank you for all the support, we will hopefully take the product higher. And with features like flytekit (native typing) and others coming, we are sure we can help the community move from prototyping to production for their pipelines very quickly and efficiently. Gleb Kanterov Hongxin Liang Nelson Arapé Jeev B Fredrik Sannholm Sören Brunk Yuvraj (union.ai) Ruslan Stanevich Niels Bantilan Tim Chan ^ Yosi Taguri
:tada:
What I would love to know from all of you: we might break the import statements for you folks, as the code will be in a new organization
when is this move taking place Ketan Umare? is it going into incubation first? not sure how linux foundation works.
We will do it in the next couple weeks only after all of you confirm
none of the old stuff will break though right? or why do you think it will?
nope only if you directly import the go-code
ah right ok
we might break it again, we don’t know. if there is a redirect created, then that would not break either
i see
so we are figuring out the mechanics, but this is a one-time cost. the home will be `<http://github.com/flyteorg|github.com/flyteorg>`
is the plan to move everything over as is, so that we can just update our docker image paths to "switch" over the deployment?
we as a community will own this space and we can easily add new projects there. ohh yes, we have already started publishing all images to the github docker registry
oh cool
we will be updating the base kustomize soon and anyone can now build using github workflows (part of the core) too
:thumbsup: very exciting
thank you again, we hope to begin the new year with a much grander vision
congrats team, looking forward to the future of Flyte!
That’s very exciting news! I’m sure it will help Flyte to grow as an open-source project and also make it visible to a wider audience. Looking forward too!
Sounds cool! New chapter in the book of flyte!
Big congrats! Looking forward to the new era.
Jeev B all images have moved to github container registry. <https://github.com/lyft/flyte/blob/master/deployment/sandbox/flyte_generated.yaml#L8861|Example> They should follow exactly the same pattern, except that you use the `<http://ghcr.io/|ghcr.io/>` prefix instead of `<http://docker.pkg.github.com/|docker.pkg.github.com/>`...
Also, we are waiting for all of you guys to ok this, then we can move all the code to the flyteorg github organization
Yes. Basically the question is: *Do you reference <http://github.com/lyft/flyte*|github.com/lyft/flyte*> anywhere in your repos?* particularly `golang` repos... or if you do any scripting around installing flytekit from source... etc. A quick (:thumbsup: for Yes) and (:thumbsdown: for No) would be great! As part of the Linux Foundation Donation ^, we will need to transfer ownership of flyte repos to a different org (<http://github.com/flyteorg|github.com/flyteorg>)... there are a couple of options (if you can think of more, plz feel free to add)... Can I get a couple of eyes on these options? <https://docs.google.com/document/d/1nmS6yyF8uVZ4nlkD9o5JrwYehYSBYRlc4yIeh8GDbvI> Trying to enumerate the tactical process of moving these repos... and decide by this week on how to move forward...
Haytham Abuelfutuh we build propeller internally with our own plugins, so there is the reference, but should be very easy to fix.
Github establishes a redirect for the moved repos... `go get` is fine using that... However, after the move, when we actually do the code change to use vanity domains, yes you will need to change your dependency... hopefully we can coordinate that.. will keep you updated for sure
Where in the Admin database is the task type available?
<https://github.com/lyft/flyteidl/blob/master/protos/flyteidl/core/tasks.proto#L103> <https://github.com/lyft/flyteadmin/blob/master/pkg/repositories/models/task.go#L22>
Yee That's from the user, but is that what is used by the UI? I was hoping it would be somewhere in executions. My intention is to find what type a task execution is.
Randy Schott would know this right?
Yeah. Just checked, it's part of the identifier. No way to do SQL, but it's there
Well, there is a way to do sql, but it’s a join?
Nope. They are all protobuf :disappointed: