Here is something I do in personal projects sometimes: I place my task logic in a separate python module. My Flyte workflow mostly handles passing the inputs to those module functions. If I have issues, I just `docker run -it <mydockerimage>` then use a python interpreter and manually test the python module.
That process makes sense. I'd like more details on the debugging side. I'd assume that most of the time people are just logging what seems to be important. When something goes wrong, they look at the output and add extra prints to chase down the issue.
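A minimal sketch of that module-separation layout, assuming the legacy flytekit SDK decorators used elsewhere in this thread; the module and function names are made up for illustration:
```
# logic.py -- plain Python, no Flyte imports, so it can be exercised in a bare interpreter
def transform(text):
    return text.upper()


# workflows.py -- a thin Flyte wrapper that only forwards inputs to the module function
from flytekit.sdk.tasks import inputs, outputs, python_task
from flytekit.sdk.types import Types

from logic import transform


@inputs(text=Types.String)
@outputs(out=Types.String)
@python_task
def transform_task(wf_params, text, out):
    out.set(transform(text))
```
Debugging then happens against the plain module: `docker run -it <mydockerimage> python`, then `from logic import transform` and poke at it interactively, with no Flyte machinery involved.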
Joseph Winston also, Flyte has a way to debug locally from a remote execution. It allows you to load data from partially completed remote steps and restart an execution, or maybe just a task, locally. I use this to debug, e.g. in a Jupyter notebook.
Ketan Umare, can you please point me to the documentation? Thanks.
here is some private documentation… the earlier portions contain lyft-specific stuff and we just haven’t had time to port things over yet. you’ll need to make sure the correct settings are set as well.
Thank you Yee - Joseph Winston, we will create an issue to port over this documentation
so perhaps start your python session with something like `FLYTE_PLATFORM_URL=<http://blah.net|blah.net>`
Let me try this.
When I submit my workflow, I receive the following trace back: ``` raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = "workflow with different structure already exists with id resource_type:WORKFLOW project:"afi" domain:"development" name:"empty-afi.empty.AFI_WF" version:"3c8408be6ab9eb1736d237ce3e71e7dbd2f5eff8" " debug_error_string = "{"created":"@1588358901.586477489","description":"Error received from peer ipv4:172.17.252.205:30081","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"workflow with different structure already exists with id resource_type:WORKFLOW project:"afi" domain:"development" name:"empty-afi.empty.AFI_WF" version:"3c8408be6ab9eb1736d237ce3e71e7dbd2f5eff8" ","grpc_status":3}"``` How do I delete workflows? Or projects? Or tasks? This fails ```curl -X DELETE ${FLYTE_PLATFORM_URL}/api/v1/projects -d '{"project": {"id": "afi", "name": "afi"} }'```
you can't delete workflows atm but you can always register with a different version if the workflow structure has changed in the meantime
At the moment, no entities are deletable, short of directly modifying the database used by Flyte Admin. Workflow IDs are a unique combination of project/domain/name/version. A new version should exist any time the underlying code changes (it’s common practice for the version to be defined by the current git sha). So if you’re seeing that error, it should mean you don’t need to register it again.
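A sketch of that pattern with the legacy `pyflyte` CLI (the project/domain values reuse the ones from the error above; the exact registration flags may differ by flytekit version, so treat this as an outline rather than the canonical commands):
```
# Version every registration with the current git sha so changed code never
# collides with an already-registered workflow id.
VERSION=$(git rev-parse HEAD)
docker build -t <registry>/afi:${VERSION} .
docker push <registry>/afi:${VERSION}
pyflyte -p afi -d development register workflows --version ${VERSION}
```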
Joseph Winston all entities in Flyte are immutable, barring a few aesthetic attributes. You shouldn't worry about wrong registrations; just register new ones
Does it matter what order you stack the decorators in flytekit? In other words could you specify inputs, outputs, and python_task in any order?
the task decorator needs to be applied first (i.e. sit closest to the function), but there's no required order for inputs/outputs. you might even be able to apply input and output decorators multiple times. is there something specific you are trying to implement? if the decorators are making it difficult, i might be able to suggest a cleaner way.
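A sketch of the ordering that works, assuming the legacy flytekit SDK decorators (the task and variable names are made up):
```
from flytekit.sdk.tasks import inputs, outputs, python_task
from flytekit.sdk.types import Types


# @python_task sits closest to the function so it is applied first;
# @inputs and @outputs can be stacked above it in either order.
@inputs(x=Types.Integer)
@outputs(y=Types.Integer)
@python_task
def double(wf_params, x, y):
    y.set(2 * x)
```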
just curious. As you know I've been building a dagster-to-flyte compiler. Now I want to include Flyte's typed inputs and outputs. I am programmatically constructing SdkRunnableTasks currently. Now I am trying to figure out the architecture for constructing Inputs and Outputs as well.
cool! i know parts of dagster are open-source, do you have an example of what you have currently?
yes! <https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-flyte/dagster_flyte/flyte_compiler.py#L74> This currently bypasses inputs/outputs in flyte, and relies on dagster for everything.
awesome, and what does passing inputs and outputs look like in dagster, is it by file(s)?
so there's a few things. The fields in the function sig can have typehints and dagster will utilize them. Additionally you can pass input configs and output configs to the `@solid` decorator, and create InputDefinition/OutputDefinition. Let me show you some docs and examples here: <https://docs.dagster.io/docs/tutorial/basics#providing-input-values-for-custom-types-in-config> <https://docs.dagster.io/docs/apidocs/solids#dagster.InputDefinition>
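A sketch of the two dagster styles described above, assuming the dagster API of that era (`@solid`, `InputDefinition`, `OutputDefinition`); the solid names are made up:
```
from dagster import InputDefinition, OutputDefinition, solid


# 1. Type hints on the function signature -- dagster infers the input/output types.
@solid
def add_one(context, num: int) -> int:
    return num + 1


# 2. Explicit InputDefinition/OutputDefinition passed to the @solid decorator.
@solid(
    input_defs=[InputDefinition("num", int)],
    output_defs=[OutputDefinition(int)],
)
def add_two(context, num):
    return num + 2
```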
ok sweet! i’ll take a read of these--it might take me a bit to process. then i’ll try to get back to you with: 1. a high-level overview of how our i/o passing and type system works currently. 2. maybe a couple hints/code snippets i can give as a starting point of how to think about our system in a way that is relevant to you
Amazing, that would be greatly appreciated. looking at the definitions of inputs and outputs, this actually looks pretty straightforward. Definitely interested in your advice here though. the outputs might be slightly confusing however.
Hi :hand: Do you have experience running flyteadmin in multiple replicas? considering if we have enabled workflow scheduling and notifications. Thank you in advance!
Yes. We run several Admin services (or replicas/pods) always. Admin is stateless. Workflow schedules and notifications should all work independent of the replica size. In fact I encourage you to run multiple replicas for better availability.
Hello Everyone! Could you please advise about `Spark` in Flyte? The question relates to the `aws-java-sdk` version <https://github.com/lyft/flytekit/blob/master/scripts/flytekit_install_spark.sh#L48> Do you use a newer `hadoop` version with a newer `aws java sdk` version? Actually, there is no problem with hadoop, but I just care about a custom location for the "mounted" aws credentials file: `AWS_CREDENTIAL_PROFILES_FILE`. It is maintained only by the new aws sdk as far as I see. Thank you in advance! So, there is no problem with it now ) Because we can mount a file like /path/to/envrc into the container with content like ```AWS_ACCESS_KEY=bla-bla AWS_SECRET_KEY=bla-bla-bla``` and add ```source /path/to/envrc``` to the entrypoint. Sorry for the mess :slightly_smiling_face:
hey Ruslan Stanevich you should be able to update any hadoop jars. I think that script should be taken as a starting point only. Also, internally we are now using hadoop 3.0 jars because the output committer is better
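The `envrc` workaround described above might look like this as a sketch (paths mirror the message, values are placeholders; the variables are exported so the Spark/JVM process actually inherits them):
```
# /path/to/envrc -- mounted into the task container
export AWS_ACCESS_KEY=...
export AWS_SECRET_KEY=...

# near the top of the container entrypoint script:
source /path/to/envrc
```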
Hello! We are currently considering using Pachyderm in our company for running our workflows. We recently heard of Flyte and it looks like an interesting alternative. It has some very similar concepts and seems to try to solve some similar pain points. I wonder if someone has compared both solutions, and which conclusions they came to (e.g. pros and cons when comparing them). Thanks for any input!
Hi Guilherme, welcome! Let me formulate a response and I will post back.
interesting. I don’t overly think of Pachyderm and Flyte as exclusive/alternatives (either/or)
Pachyderm: Cool - they have a git-like data management primitive. This implies all data has to be stored in their own file system. They do have a workflow engine, but it is JSON-driven and does not really work with other open source projects (AFAIK) like Spark, Flink, distributed training, etc. They use some custom stuff in Kubernetes, and everything used to be driven by pachctl. Flyte: Do not re-invent. Work is done by plugins; plugins can use k8s, AWS Batch, Spark, distributed training, Hive, Presto, etc. (you can add your own backend plugins). Tasks and workflows are at the core. We do not have a file system, but we understand data using our custom type system; every task has to have an input/output interface, which allows the Flyte engine to create a lineage graph of data dependencies. Tasks are treated as first-class citizens and hence we can cache the artifacts and de-dupe them across executions without having our own file system. Data can be stored in any cloud store, across many buckets, in their own native formats. Tasks are language independent and can be written in any language, but we come with a Python SDK; a Java/Scala SDK is being worked on by Spotify. We use Flyte in production at Lyft for very large scale workloads
interesting. I don’t overly think of Pachyderm and Flyte as exclusive/alternatives (either/or)
<!here> Hello Flyers, reminder that our bi-weekly Flyte OSS sync is tomorrow, 6/30 at 9am PDT. Ketan and Yee may have a demo of some exciting ML integration if the stars align. Zoom: <https://us04web.zoom.us/j/71298741279?pwd=TDR1RUppQmxGaDRFdzBOa2lHN1dsZz09> Meeting ID: 712 9874 1279 Password: 7stPGd -g
George Snelling I think Chang-Hong Hsu would love to demo it in the next meeting. It's his work :slightly_smiling_face:
Ketan Umare thank you for the credits. George Snelling I'd be more than happy to do it in the next meeting and I believe we will have an even more robust integration to show then :)
Hello Everyone! Could you please advise about running a specific pyflyte workflow on a dedicated EKS node? Is using the @`sidecar_task` decorator and `pod_spec` the only and common approach for setting `nodeSelector` and `tolerations` on running Pods? Thank you in advance! It just needs an isolated node (with no other pods) with a big disk.
hey Ruslan Stanevich do you want this at execution time or on the launch plan? We have annotations and labels there; otherwise you will have to set it on the sidecar today. AFAIK we don't have specific node selector attributes. What is the use case, Ruslan Stanevich? The reason I ask is that I don't think node affinity is good for the workflow itself; maybe we can have it for an execution
You can pin by requesting the whole machine, very inelegant though and only works for the biggest machine
Thank you very much for your response! Most of our workflows run as `pyspark` jobs. And, yes, we manage the node selector and annotations for SparkApplication with our tool. We have several `non-spark` tasks, and basically they are quite lightweight. But there is a workflow that downloads a large CSV file and processes it (100Gb for now). > _(Maybe Spark would be better for this, but this task is based on a "ready-made" solution (afaik))._ And it would be nice to be able to separately configure the parameters of this EKS node (node group), for example, `increase the capacity of the attached volume` up to several hundred GB or more. Basically, other nodes do not need such parameters. And honestly, I would like to ask the team to discuss these details on this channel. :thinking_face: I'm just trying to approach it from an infra perspective.
Arun Kevin Su Welcome to the community. One more member will be joining soon. All of Flyte is in golang. There are a couple of completely parallel projects in progress - flytectl and a golang SDK for Flyte. flytectl is worked on by austin; we could also bootstrap a golang SDK for Flyte, and Kevin Su is helping with TFoperator support
:+1: Welcome!
We would also love some help in understanding how we could add Flyte as a target to TFX - <https://www.tensorflow.org/tfx/api_docs/python/tfx/orchestration>
:+1:
Ketan Umare If I understand correctly, TFX runs on some workflow orchestrator like Airflow or Kubeflow. If we implement the TFX workflow orchestrator interface, we could run TFX on Flyte. btw, LinkedIn has run TFX on Azkaban. Any WIP issue about this? I'd like to join the thread.
No issue yet, we know that tfx can be targeted to an orchestrator. We want to use flyte and in there add flink/spark as the beam runners
Hi Everyone! We're currently looking into Flyte and the design around data awareness via a type system looks very interesting. I also like the idea of a language independent declarative workflow spec and that you're building on FP concepts like composition and immutability for caching and repeatability. Playing with Flyte, I'm still a bit confused about task/workflow registration and I couldn't find too much information about it in the docs. The recommended way seems to be to build a docker container with the workflow code and flytekit. Then I run pyflyte from within that container to register the workflow. Is registering from inside the container the only way? What happens if I have a workflow that requires different containers, i.e. a workflow that contains a regular Python task and a Spark task, or even just conflicting Python dependencies for different tasks. How would I usually do the workflow registration in such a case? I've also seen that there's some work about <https://github.com/lyft/flyte/issues/297|raw-containers> that might be related. Thanks, Sören
hi Sören Brunk first of all welcome. Love that you have been digging into Flyte (the more eyes the better). Give me a few minutes and I would love to answer all your questions :slightly_smiling_face: Also would love to jump on a call and discuss more at length
Thanks Ketan Umare and no hurries since I’ll try to get some sleep now (European timezone). :slightly_smiling_face: I’d be happy to talk. I’ll PM you tomorrow if that’s ok.
yes please PM me whenever. The points you have brought up are great, and to your question ```bit confused about task/workflow registration and I couldn't find too much information about it in the docs. The recommended way seems to build a docker container with the workflow code and flytekit.``` Short answer: it is just documentation. Flyte absolutely supports a separate container per task. But doing that cleanly in flytekit (python) needs some more work. Workflow registration is actually a 2-step process - step 1: task registration (which should be tied to the container); step 2: workflow registration. Simplifying this for the user is the challenge, and I can say we have not really crossed it
Ok that makes sense, thanks for the explanation. I think I need to get a better feeling for flytekit so I’ll try to build a few example workflows that reflect our use cases. If I hit any major roadblocks I’ll ask here again.
Hello Everyone! Does anybody work with Dynamic tasks? I'm trying to run dynamic tasks sequentially, but it always runs in parallel even if I set `max_concurrency=1`. The sample workflow:
```
from __future__ import absolute_import, division, print_function

import time

from flytekit.sdk.tasks import dynamic_task, inputs, outputs, python_task
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import Input, Output, workflow_class


@inputs(command=Types.String)
@outputs(out_str=Types.String)
@python_task
def other_task(wf_params, command, out_str):
    time.sleep(60)
    out_str.set(command)


@inputs(in_str=Types.String)
@outputs(out_str=Types.String)
@dynamic_task(max_concurrency=1)
def str_inc(wf_params, in_str, out_str):
    res = []
    for s in in_str:
        task = other_task(command=s)
        yield task
        res.append(task.outputs.out_str)
    out_str.set(str(res))


@workflow_class
class DummyWf(object):
    in_str = Input(Types.String, required=True, help="in_str")
    run_str_inc = str_inc(in_str=in_str)
    edges = Output(run_str_inc.outputs.out_str, sdk_type=Types.String)
```
So when I'm running the workflow with `123456` input I'm expecting that execution should take at least 6 minutes (because each task sleeps 60 seconds), but it takes about 2. I'd much appreciate it if somebody knows how to solve this issue
Yee / Haytham Abuelfutuh if you are near a computer. Else I will answer in a bit
here... answering... Hey Aleksandr, yes max concurrency is not yet implemented unfortunately. However, you can achieve what you want by generating a workflow with node dependencies between the nodes in the `@dynamic_task` . Let me try to write you an example: <https://github.com/lyft/flytesnacks/blob/master/cookbook/workflows/recipe_2/dynamic.py#L15-L42|This> is an example of generating a dynamic workflow through `@dynamic_task` . <https://gist.github.com/EngHabu/d1faab2a9088434aec3ea467b5dcf690|Here> is an example that will do what you are trying to do. Note that it doesn't collect the outputs of these tasks. Please let me know if it's not obvious how to achieve that last bit..
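As a sketch of that dependency-chaining idea (not the linked gist itself), one way to force sequential execution today is to thread the previous task's output through as an extra input, so the engine sees a data dependency between the yielded tasks. It reuses the imports from the example above; the names `other_task_chained` / `str_inc_sequential` are made up:
```
@inputs(command=Types.String, previous=Types.String)
@outputs(out_str=Types.String)
@python_task
def other_task_chained(wf_params, command, previous, out_str):
    time.sleep(60)
    out_str.set(command)


@inputs(in_str=Types.String)
@outputs(out_str=Types.String)
@dynamic_task
def str_inc_sequential(wf_params, in_str, out_str):
    res = []
    previous = ""  # nothing to wait on for the first task
    for s in in_str:
        # Passing the previous task's output as an (otherwise unused) input creates
        # a data dependency, so the yielded tasks run one after another.
        task = other_task_chained(command=s, previous=previous)
        yield task
        previous = task.outputs.out_str
        res.append(task.outputs.out_str)
    out_str.set(str(res))
```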
many thanks, looks like it should help. Haytham Abuelfutuh could you share any timelines for when `max_concurrency` might be implemented?
I’m trying to run Spark tasks to run in a Flyte sandbox installation but I’m running into issues. I managed to configure everything so that Flyte starts the driver pod but now I’m stuck with the following error (from the pod logs): ```Usage: entrypoint.py [OPTIONS] COMMAND [ARGS]... Try 'entrypoint.py --help' for help. Error: No such command 'pyflyte-execute --task-module single_step.spark --task-name hello_spark --inputs <s3://my-s3-bucket/metadata/propeller/myflyteproject-development-r1ik65bysr/spark-task/data/inputs.pb> --output-prefix <s3://my-s3-bucket/metadata/propeller/myflyteproject-development-r1ik65bysr/spark-task/data/0>'.``` My guess is that `pyflyte-execute ...` should not be in single quotes. If I call `entrypoint.py pyflyte-execute …` manually it seems to work better. I have no idea how to configure it correctly though. What I’m currently doing is adding the following entrypoint in my Dockerfile: ```ENTRYPOINT ["/opt/flytekit_venv", "flytekit_spark_entrypoint.sh"]``` Does anyone have an idea what I’m doing wrong? Thanks, Sören
Sören Brunk (in a meeting) will brb
hi. this is installed by flytekit <https://github.com/lyft/flytekit/blob/eec85fb35e5dd975840aa0019dfdc167af1e4f29/setup.py#L55> so please make sure in your pip requirements file that you install `flytekit[spark]` or better yet `flytekit[all]` instead of just `flytekit`
Sören Brunk
Yee thanks. Yes I’m installing `flytekit[spark]` (haven’t tried `flytekit[all]` yet). Essentially, I’ve taken the <https://github.com/lyft/flytesnacks/blob/master/python/Dockerfile|Dockerfile of the python example from flytesnacks>, then I’ve added these lines: ```RUN ${VENV}/bin/pip install flytekit[spark] RUN ${VENV}/bin/flytekit_install_spark.sh ENV SPARK_HOME /opt/spark``` When I run a Spark task in this container I get the following error: ```/opt/flytekit_venv: line 10: exec: driver-py: not found``` So I modified the docker entrypoint to run the spark entrypoint, resulting in the entrypoint.py error (yes, three different kinds of entrypoints). ```ENTRYPOINT ["/opt/flytekit_venv", "flytekit_spark_entrypoint.sh"]``` I also tried activating the flytekit venv inside `flytekit_spark_entrypoint.sh` directly instead but it’s giving me the same result. Once I get the Spark task to run I’d be happy to contribute the full example to flytesnacks :grin:
what happens if you just ```ENTRYPOINT [ "/opt/flytekit_spark_entrypoint.sh" ]``` (after copying the file there ofc)
Same error (I have to add `. ${VENV}/bin/activate` to flytekit_spark_entrypoint.sh in this case because otherwise I can’t register tasks).
sorry yeah
Anmol Khurana can probably provide the most context here. but i believe the way spark assumes entrypoints wreaks havoc on venvs. i would suggest the following: 1. make `ENTRYPOINT ["/opt/flytekit_spark_entrypoint.sh" ]` 2. then make an executable script of your own which activates venv and then passes along the args. something like this: <https://github.com/lyft/flytekit/blob/master/scripts/flytekit_venv> 3. in your flytekit.config file, reference that script: <https://github.com/lyft/flytekit/blob/master/tests/flytekit/common/configs/local.config#L3> that will result in flyte being able to enter your venv after going through the spark entrypoint and before executing your actual code you could also re-use our flytekit_venv script (we install it with flytekit) and put all your flyte-specific python dependencies in there `flytekit_venv pip install …`
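A rough sketch of how those three pieces might fit together; the paths are assumptions based on the flytesnacks Python example and the config key is taken from the linked local.config, so double-check both against your flytekit version (this is not the final fix referenced later in the thread):
```
# Dockerfile sketch
COPY flytekit_spark_entrypoint.sh /opt/flytekit_spark_entrypoint.sh
COPY flytekit_venv /opt/flytekit_venv
RUN chmod a+x /opt/flytekit_spark_entrypoint.sh /opt/flytekit_venv
ENTRYPOINT ["/opt/flytekit_spark_entrypoint.sh"]

# flytekit.config sketch -- point execution back through the venv wrapper
# [sdk]
# python_venv=/opt/flytekit_venv
```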
Thanks Matt Smith I’ll try your suggestions tomorrow and report back.
also I can help with any of this Sören Brunk - I was pretty busy this week (I am oncall :D)
Matt Smith I’ve changed the docker entrypoint to flytekit_spark_entrypoint.sh as you suggested. I already had 2. and 3. in place using your flytekit_venv script because I’ve derived it from the Python flytesnacks example. I’m still getting the same error though. Ok this seems indeed to be an issue with single quotes around `pyflyte-execute ...` in `flytekit_spark_entrypoint.sh` When I use `$PYSPARK_APP_ARGS` directly as an argument to spark-submit instead of `$PYSPARK_ARGS` it works. I haven’t figured out why exactly, because you know, bash… but it should be fixable. Now the driver is running but the Spark executors are failing. I can’t really figure out what’s going on because the driver or the Spark operator immediately removes the failed executors. Does anyone have an idea how to keep them around for debugging?
Ohh, sorry about this. Anmol Khurana or I can help in a bit. We should have an example for this, terribly sorry. OK, let me create an example and share it with you - is that ok? The way this works is that both the Spark executor and driver use the same entrypoint. We also want other Flyte tasks to use that entrypoint script, so we provide an option to switch. I think the open source one might not be the correct one, as internally we have a base image that users use.
Yes, if you could create an example that would be awesome! I think it would also be useful for flytesnacks.
Yup, I am doing that now. You can see it tomorrow or Monday.
Looking into this as well. will update and make sure we have examples as well. Sorry about this. Sören Brunk are you setting `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` . If not, can you try adding those to the dockerfile and see if it helps ? Something like: ```ARG PYTHON_EXEC=.../venv/bin/python3 ENV PYSPARK_PYTHON ${PYTHON_EXEC} ENV PYSPARK_DRIVER_PYTHON ${PYTHON_EXEC}``` Meanwhile I am working on adding an example in flytesnacks as well to capture all of this.
Anmol Khurana just tried your suggestion but unfortunately the executors are still failing. I think I’ll just wait until you’ve added the example now. Thanks for all your help guys!
No, thank you for the patience, Sören Brunk. Thanks to Anmol (and previous problems), he fixed the script here - <https://github.com/lyft/flytekit/pull/132>
Thanks Ketan for sharing this. Just to expand a bit, the error message was pretty much what was happening. The `entrypoint.sh` in flytekit had additional quotes which aren’t in the internal version we use (or in the open-source spark one) and were causing issues. In-addition to <https://github.com/lyft/flytekit/pull/132>, <https://github.com/lyft/flytesnacks/pull/15/files> has the dockerfile and an example workflow which I used to build/test locally. I need to clean-up these PRs a bit/add docs etc and I plan to have these checked-in by early next week.
hey Sören Brunk, I think you will wake up soon, so: Anmol Khurana decided to remove the default script in “flytekit”, as it seems upstream Spark has a script that we can use. He has this issue - <https://github.com/lyft/flyte/issues/409> and he is about to merge the changes. Also he has this example which works - <https://github.com/lyft/flytesnacks/pull/15> But he is improving it a little bit
Released <https://github.com/lyft/flytekit/releases/tag/v0.10.3> with the fix. Updated <https://github.com/lyft/flytesnacks/pull/15> as well to refer to the new flytekit.
Thanks Anmol Khurana and Ketan Umare I’ve just tried to run the version from that PR and success! :tada::tada: I just had to set `FLYTE_INTERNAL_IMAGE` to use my locally built Docker image like so: ```ENV FLYTE_INTERNAL_IMAGE $IMAGE_TAG```
Does Flyte support interactive iterative ML research workloads on GPUs? Or is it more for well-defined scheduled workloads?
Oliver Mannion depends on what you mean by “interactive”. If you mean “spark”-like interactive, where you write code in one cell, get results, and so on - then at the moment, no. But if you want to iterate on a pipeline then it does, and I will explain how: 1. We recently added support for “single task execution”, where you can execute one task of a pipeline. 2. Flyte has always supported requesting an execution and getting results back through the API, so you will be able to retrieve the results in a Jupyter notebook. 3. We also recently added a pre-alpha version of “raw container support” - you will see more examples on this soon. This allows you to get away from building a container - the biggest problem today in interactive execution. One problem I see with Flyte and iterative development: Flyte is multi-tenant by design, which means that if a large set of production workloads are running, users’ interactive requests will potentially get queued up; but if you have enough machines then this should work. As a last point, we are exploring additional interactive ideas that could mitigate the largest pain today - building a container. But we do not feel very comfortable completely taking away containers, as they provide strong reproducibility guarantees - which I think is a cornerstone of Flyte. Hope this answers. I would love to discuss more Oliver Mannion ^ any more questions? also WIP - <https://github.com/lyft/flytesnacks/>
Using a Jupyter notebook to describe and trigger a task and then inspect the results might be what I’m thinking of here. Do you have any examples of that?
I am writing one, will share. hey Oliver Mannion / Joseph Winston here is the updated cookbook - <https://github.com/lyft/flytesnacks/tree/master/cookbook> It is not yet complete, but many parts are; the simple ones (1/3/4/5/9) are still WIP. Oliver Mannion I have a lot more examples - check them out and let me know
Hello Everyone! My question relates to the possibility of extending `sidecar_task` to specify a `tolerations` section in `pod_spec`. I noticed that the `tolerations` field in `pod_spec` doesn’t appear in the Pod manifest in Kubernetes. <https://lyft.github.io/flyte/user/tasktypes/sidecar.html?highlight=pod_spec#working-example> So, my workaround is to use the documented approach with `resource-tolerations` in the k8s propeller plugin and extended k8s resources: <https://github.com/lyft/flyteplugins/blob/master/go/tasks/testdata/config.yaml#L15> and <https://kubernetes.io/docs/tasks/configure-pod-container/extended-resource/> Q: does it make sense to not drop the `tolerations` field in pod_spec? Or is it dropped for some other reason? P.S.: My case: a very specific python task which requires up to a terabyte of disk and should run on a dedicated ec2 node group. Thank you in advance!
<https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L2785> is this not it?
Hi Yee! Yes, we tried to put this object into pod_spec
oooh, you mean it doesn’t make its way into the actual Pod - sorry. Katrina Rogan can take a look if she wants; if not, I can do it later
we shouldn't be deliberately dropping anything from the podspec <https://github.com/lyft/flyteplugins/blob/master/go/tasks/plugins/k8s/sidecar/sidecar.go#L64> ah, we do - we should be appending to the user pod spec definition. Ruslan Stanevich do you mind filing an issue to track?
Thank you Katrina Rogan for checking! :pray: Should I report it as feature request or issue?
issue/bug is fine :D
thank you for your help :pray: issue has been created <https://github.com/lyft/flyte/issues/417> have a nice day!
Katrina Rogan I think the same is happening for volumes. Ruslan Stanevich you should also look at platform-specific tolerations, configuring them in the backend
Hi, Ketan, do you mean specific tolerations for requested resources? Or is it like common tolerations for all workflows? I’m not sure I understood this correctly :slightly_smiling_face:
Ketan Umare why do you say the same is happening for volume? the original use case for sidecar tasks was to support shared volume mounts and i don't see us overriding it
Katrina Rogan some users in Minsk tried it and failed. It seems it was getting overwritten
hm, maybe something got refactored - I'll take a look in the same PR. Ketan Umare I don't see volume being overwritten in the plugins code; do you have more details about the failure the Minsk folks saw?
Katrina Rogan I don't, I just got that as a comment; we can ping them and get more details. They said it was `/dev/shm`
and they were also creating a corresponding volume mount?
Derek Schaller, can you help austin with the CLA issue? It seems bruce used his github account to sign the CLA
What needs to be done here? I have my git.name and git.email matching what I (believe) is on github. And github credentials were used to sign the CLA. How do I sign a new CLA - is there a link to a general form? And what address/info needs to be on it? Every time I click in the PR about the CLA, it pulls up the already signed CLA, which says it is signed by my Github user. It seems I'm missing something obvious here? Or did we find a really broken edge case?
Ok, let’s look at your PR. Something is wrong. So I don’t have admin on the CLA; I can probably reset it
“brucearctor” — seems to be both my git user.name, and github name. “An author in one of the commits has no associated github name” — so not sure how that gets linked? it’s still not clear what the email is in the current CLA? Or how I’d sign a new one. If those would solve the issue
Hello Everyone. Could you advise me on the best way to deploy Flyte to EKS?
We have 2 groups within Lyft running Flyte on EKS. Having said that, our instructions on EKS are not complete. We will have to guide you a bit manually on this and then add it to the docs. Here is a starting point - <https://github.com/lyft/flyte/blob/master/eks/README.md> cc Yee Ketan Umare Can you tag any L5/Minsk folks who are active here.
and welcome!
<https://github.com/lyft/flyte/issues/299>
Welcome Yiannis, both Yee and I can work with you to run it on EKS. We have a sample that is close to complete <https://github.com/lyft/flyte/tree/master/eks> Ruslan Stanevich / <@UP23UL29J> should also be able to help, they run Flyte on EKS
thanks all. I just found Kustomize and TF in the repository. I would suppose the tf would be recommended.
so Yiannis, the TF is only to set up the EKS cluster, S3 bucket, Postgres Aurora db, etc. (the infra); the kustomize is what sets up Flyte. So to run Flyte you need 1. an EKS cluster, 2. a Postgres DB (you can run it in the cluster, but that's not recommended), 3. an S3 bucket, and then to access the UI/Admin API you need 4. an ELB. hope this helps
Thank you!
the Terraform will help create 1/2/3/4
I already have an EKS cluster
ohh that is awesome. do you have a postgres DB? you can just create one in the console if you have the perms
not yet, trying to get it now
ok and one s3 bucket
no permissions yet
this is where Flyte will store metadata and intermediate data. Awesome
ok cool. So I'm close, thank you very much! :smile:
once you have that we can help you with the kustomize - it should not be much, just a couple of changes. ya, this is actually great, it's been a while since we helped someone set up from scratch on EKS - it helps us improve the docs too. Yiannis let me know if you need help
Hello Flyte Friends! I have a question for the experts once again. Does the Flyte design allow/work well with really long running tasks? I.e. is it possible/does it make sense to deploy something like a Spark Streaming job that basically runs continuously until it fails or someone stops it?
Sören Brunk, great question. Today the tasks are designed to complete; eventually we want to support streaming tasks, as I think streaming as a service is pretty cool. But we do have a Flink k8s operator and a Spark operator - you can just deploy using them, or we can add a task type that launches and releases. Sören Brunk let me know, is this something that you are needing on day 1? I would definitely share it - on the Flyte homepage on GitHub you will see the other 2 operators
It’s not really a hard requirement for us right now. But we have use cases where we continuously receive smaller batches of machine data, so the overhead of spinning up a new Spark job every time is quite high. A Spark streaming job would be much more suitable here. Of course we could just deploy that job using the Spark operator directly, but it would be much nicer to describe it as a Flyte Spark task. It would basically be a simple single-task workflow.
Sören Brunk I hear you, and we would love to work with you on this. Yup, so today tasks need to complete - no hard requirement really. You can actually run a streaming job and set the platform timeout to infinity :joy: so it should run today. But I want to qualify these as special task types in the future, just for correctness