jinx
| <#CP2HDHKE1|onboarding>
|
oh you beat me to it :slightly_smiling_face:
| jinx
|
Hello Sergio! thank you for joining
| Hello
|
glad to hear it! welcome and please don’t hesitate to ask any questions!!
| Hello everyone, I just started playing with Flyte and I'm very excited, it looks like a very promising project
|
Thank you Matt, as a matter of fact I do have a few…what would be the best channel to discuss them?
| glad to hear it! welcome and please don’t hesitate to ask any questions!!
|
I think you can ask here, and if it's better suited to another channel we can help re-route it for you. Though I'm about to head off to bed, so someone will have to pick it up tomorrow morning.
| Thank you Matt, as a matter of fact I do have a few…what would be the best channel to discuss them?
|
Hi Giordano, sorry for the late reply. In the local sandbox cluster mode we do not have a scheduler today. When you deploy to a cloud environment we use either AWS CloudWatch schedules or, on GCP, Cloud Scheduler. This is an area of active interest: we would prefer to either write a simple scheduler using a k8s operator or, better, work with our partner Spotify to use the Styx scheduler to run the cron schedules.
So to clarify, we just don't have the cron runner; the runner just triggers the Flyte control plane and we do the rest. Let me know if you would love to contribute or have any other questions.
Let me know if you need help with anything else
| ok I got a couple of questions:
1)
I followed the examples in the flytesnacks repository and created a simple workflow with a few tasks. Now I am trying to schedule it to run every 5 minutes using this code that I found in the docs (<https://lyft.github.io/flyte/user/features/lanuchplans.html>), which I added after the workflow definition
```my_fixed_rate_launch_plan = myworkflow.create_launch_plan(
default_inputs={'string_in': Input(Types.String, default="aaaa")},
schedule=schedules.FixedRate(datetime.timedelta(minutes=5)),
)```
when I rebuild the container and re-register the workflow, I see that the launch plan gets registered
```Flyte Admin URL 127.0.0.1:30081
Running task, workflow, and launch plan registration for flytedemo, development, ['fk_tasks'] with version 62ffc8b36883d18f4cf424f8d09510a5dd3db46d
Registering Task: fk_tasks.workflows.download_dataset
Registering Task: fk_tasks.workflows.read_pickle
Registering Workflow: fk_tasks.workflows.myworkflow
Registering Launch Plan: fk_tasks.workflows.my_fixed_rate_launch_plan
Registering Launch Plan: fk_tasks.workflows.myworkflow
Registering Task: fk_tasks.workflows.reverse_task
Registering Task: fk_tasks.workflows.uppercase_task```
unfortunately, in the GUI I get a “This workflow has no schedules.”
What am I missing?
|
Hi Ketan Umare, thanks for the info. I was looking at Flyte to replace Airflow running on a bare-metal k8s cluster, so I guess I'll have to manage without the scheduling part
| Hi Giordano, sorry for the late reply. In the local sandbox cluster mode we do not have a scheduler today. When you deploy to a cloud environment we use either AWS CloudWatch schedules or, on GCP, Cloud Scheduler. This is an area of active interest: we would prefer to either write a simple scheduler using a k8s operator or, better, work with our partner Spotify to use the Styx scheduler to run the cron schedules.
So to clarify, we just don't have the cron runner; the runner just triggers the Flyte control plane and we do the rest. Let me know if you would love to contribute or have any other questions.
Let me know if you need help with anything else
|
Giordano, we would love to work with you on a replacement for the cloud scheduler
Try out the other features and see if you like everything else
The reason for not packaging a scheduler is to avoid reinventing the wheel, and building a scheduler for our scale (100k+ executions per day) will take some time
| Hi Ketan Umare, thanks for the info. I was looking at Flyte to replace Airflow running on a bare-metal k8s cluster, so I guess I'll have to manage without the scheduling part
|
There is documentation for this, will share a link. It's as simple as importing the workflow, and there are a bunch of APIs on them
Also you can use flyte-cli
| 2) are there any examples on how to connect to flyteadmin and launch/manage and query workflow results via the flytesdk?
|
great thanks looking forward to the link.
| There is documentation for this, will share a link. It's as simple as importing the workflow, and there are a bunch of APIs on them
Also you can use flyte-cli
|
<https://lyft.github.io/flyte/user/features/flytecli.html> (Flyte CLI)
<https://lyft.github.io/flyte/flytekit/flytekit.common.html#module-flytekit.common.workflow_execution>
We don't have better docs on the interactive part yet
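but roughly, the interactive flow looks something like this (names and version are illustrative and this assumes a configured flytekit, so treat it as a hint rather than a tested recipe):
```from flytekit.common.launch_plan import SdkLaunchPlan

# Hypothetical project/domain/name/version: fetch a registered launch plan and execute it.
lp = SdkLaunchPlan.fetch("flytedemo", "development", "fk_tasks.workflows.myworkflow", "<version>")
ex = lp.execute("flytedemo", "development", inputs={"string_in": "hello"})```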
| great thanks looking forward to the link.
|
is your code pushed anywhere we can take a look?
| ok one more question: I was trying to create a workflow that would call some tasks already defined in another workflow, using an example that I found in the KubeCon '19 presentation
```from flytekit.models.launch_plan import LaunchPlanState
from flytekit.common import utils, schedules, tasks
from flytekit.sdk.tasks import python_task, outputs, inputs
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import workflow_class, Output, Input
from flytekit.common.tasks.task import SdkTask
@workflow_class
class myworkflow(object):
    string_in = Input(Types.String, required=True, help="input string")
    dataset = Input(Types.CSV, default=Types.CSV.create_at_known_location(
        "http://172.16.140.171:8000/label_summary.csv"),
        help="A CSV File")
    return_dataset = SdkTask.fetch("flytedemo", "development", "fk_tasks.tasks.download_dataset", "470cae526836f7a05d41ef81faa07a3b275b9de9")(dataset=dataset)
    return_pickle = SdkTask.fetch("flytedemo", "development", "fk_tasks.tasks.read_pickle", "470cae526836f7a05d41ef81faa07a3b275b9de9")(dataset=return_dataset.outputs.out)
    myoutput = Output(return_pickle.outputs.csv_head, sdk_type=Types.String)```
unfortunately when I register I get an error
```flytekit.common.exceptions.user.FlyteAssertion: An entity was not found in modules accessible from the workflow packages configuration. Please ensure that entities in 'fk_workflow.workflow' are moved to a configured packaged, or adjust the configuration.```
Does my code look ok? am I missing something?
What I am trying to achieve is to de-couple the tasks from the workflow so that I can dynamically create a workflow from pre-existing tasks. My use case would be an application where a user chooses different pre-built tasks, builds a workflow definition through some kind of GUI, and then runs a script that dynamically creates the workflow with those tasks. I understand that the tasks need to be created inside a container, but from what I understand a workflow does not need to be tied to a specific container build; is my assumption correct?
|
No but i can create a quick repo
| is your code pushed anywhere we can take a look?
|
sure, if you don’t mind.
always helpful
| No but i can create a quick repo
|
hmm yeh, i’m not seeing where `fk_workflow.workflow` is referenced
(the code you copied originally looks great!)
| sure, if you don’t mind.
always helpful
|
hi Matt, thanks. i just pushed my example to the repo
<https://github.com/giordyb/flyte_demo>
the workflow inside the fk_tasks folder works just fine (it's 4 tasks and 1 workflow)
in the workflow inside the fk_workflow folder I'm trying to fetch the tasks created with the previous workflow, but it doesn't work
| hmm yeh, i’m not seeing where `fk_workflow.workflow` is referenced
(the code you copied originally looks great!)
|
what happens if you run `flyte-cli -h <your-url> list-task-names -p flytedemo -d development`
| hi Matt, thanks. i just pushed my example to the repo
<https://github.com/giordyb/flyte_demo>
the workflow inside the fk_tasks folder works just fine (it's 4 tasks and 1 workflow)
in the workflow inside the fk_workflow folder I'm trying to fetch the tasks created with the previous workflow, but it doesn't work
|
Welcome to Flyte CLI! Version: 0.4.4
Task Names Found in flytedemo:development
fk_tasks.tasks_and_workflow.download_dataset
fk_tasks.tasks_and_workflow.read_pickle
fk_tasks.tasks_and_workflow.reverse_task
fk_tasks.tasks_and_workflow.uppercase_task
btw I also tried fetch_latest without the version number but i get a
```AttributeError: type object 'SdkTask' has no attribute 'fetch_latest'```
| what happens if you run `flyte-cli -h <your-url> list-task-names -p flytedemo -d development`
|
cool and also: `flyte-cli -h <your-url> list-task-versions -p flytedemo -d development -n fk_tasks.tasks_and_workflow.download_dataset`
| Welcome to Flyte CLI! Version: 0.4.4
Task Names Found in flytedemo:development
fk_tasks.tasks_and_workflow.download_dataset
fk_tasks.tasks_and_workflow.read_pickle
fk_tasks.tasks_and_workflow.reverse_task
fk_tasks.tasks_and_workflow.uppercase_task
btw I also tried fetch_latest without the version number but i get a
```AttributeError: type object 'SdkTask' has no attribute 'fetch_latest'```
|
Welcome to Flyte CLI! Version: 0.4.4
Task Versions Found for flytedemo:development:fk_tasks.tasks_and_workflow.download_dataset
Version Urn
1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216 tsk:flytedemo:development:fk_tasks.tasks_and_workflow.download_dataset:1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216
| cool and also: `flyte-cli -h <your-url> list-task-versions -p flytedemo -d development -n fk_tasks.tasks_and_workflow.download_dataset`
|
ok cool, the SHA `470cae526836f7a05d41ef81faa07a3b275b9de9` doesn’t look like it was ever registered
if you swapped that with `1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216`, it should work
| Welcome to Flyte CLI! Version: 0.4.4
Task Versions Found for flytedemo:development:fk_tasks.tasks_and_workflow.download_dataset
Version Urn
1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216 tsk:flytedemo:development:fk_tasks.tasks_and_workflow.download_dataset:1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216
|
sorry i had rebuilt the deployment in the meantime
just tried with that version and it still throws
```flytekit.common.exceptions.user.FlyteAssertion: An entity was not found in modules accessible from the workflow packages configuration. Please ensure that entities in 'fk_workflow.workflow' are moved to a configured packaged, or adjust the configuration.```
| ok cool, the SHA `470cae526836f7a05d41ef81faa07a3b275b9de9` doesn’t look like it was ever registered
if you swapped that with `1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216`, it should work
|
```read_task = SdkTask.fetch("flytedemo", "development", "fk_tasks.tasks.read_pickle", "1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216")
download_task = SdkTask.fetch("flytedemo", "development", "fk_tasks.tasks.download_dataset", "1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216")

@workflow_class
class myworkflow(object):
    string_in = Input(Types.String, required=True, help="input string")
    dataset = Input(Types.CSV, default=Types.CSV.create_at_known_location(
        "http://172.16.140.171:8000/label_summary.csv"),
        help="A CSV File")
    return_dataset = download_task(dataset=dataset)
    return_pickle = read_task(dataset=return_dataset.outputs.out)
    myoutput = Output(return_pickle.outputs.csv_head, sdk_type=Types.String)```
hmm, that might be a bug in the registration tool.
…sorry, still typing
can you try the above? i think there is a bug which incorrectly triggers a sanity-check mechanism
anyway, this is actually easier in a 'pure client' case. Where you have a service that is dynamically generating workflows, you can use the client to find task versions, weave them into a workflow, and then simply call register in a pure script like:
```t = SdkTask.fetch(...)
wf = workflow(nodes={'n1': t()}, inputs=...)
lp = wf.create_launch_plan()
wf.register(..)
lp.register(..)```
| sorry i had rebuilt the deployment in the meantime
just tried with that version and it still throws
```flytekit.common.exceptions.user.FlyteAssertion: An entity was not found in modules accessible from the workflow packages configuration. Please ensure that entities in 'fk_workflow.workflow' are moved to a configured packaged, or adjust the configuration.```
|
```Running task, workflow, and launch plan registration for flytedemo, development, ['fk_workflow'] with version 1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216
Traceback (most recent call last):
File "/app/venv/bin/pyflyte", line 11, in <module>
sys.exit(main())
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 97, in workflows
register_all(project, domain, pkgs, test, version)
File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 21, in register_all
for m, k, o in iterate_registerable_entities_in_order(pkgs):
File "/app/venv/lib/python3.6/site-packages/flytekit/tools/module_loader.py", line 112, in iterate_registerable_entities_in_order
for m in iterate_modules(pkgs):
File "/app/venv/lib/python3.6/site-packages/flytekit/tools/module_loader.py", line 16, in iterate_modules
yield importlib.import_module(name)
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/app/fk_workflow/workflow.py", line 9, in <module>
read_task = SdkTask.fetch("flytedemo","development","fk_tasks.tasks.read_pickle","1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216")
File "/app/venv/lib/python3.6/site-packages/flytekit/common/exceptions/scopes.py", line 161, in system_entry_point
return wrapped(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/common/tasks/task.py", line 159, in fetch
admin_task = _engine_loader.get_engine().fetch_task(task_id=task_id)
File "/app/venv/lib/python3.6/site-packages/flytekit/engines/flyte/engine.py", line 108, in fetch_task
).client.get_task(task_id)
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/friendly.py", line 162, in get_task
id=id.to_flyte_idl()
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 12, in handler
return fn(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 136, in get_task
return self._stub.GetTask(get_object_request)
File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.NOT_FOUND
details = "entry not found"
debug_error_string = "{"created":"@1579804960.139722800","description":"Error received from peer ipv4:127.0.0.1:30081","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"entry not found","grpc_status":5}"
>
make: *** [register_workflow] Error 1```
nope, still throws an error
the pure client solution would be even better, i think
there is something that I haven't quite understood yet, maybe you could shed some light:
I understand that in order to register a task I need to do it from a container (or at least set an env variable that points to the container that will run it)
do I still need to reference a container when creating a workflow?
| ```read_task = SdkTask.fetch("flytedemo", "development", "fk_tasks.tasks.read_pickle", "1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216")
download_task = SdkTask.fetch("flytedemo", "development", "fk_tasks.tasks.download_dataset", "1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216")

@workflow_class
class myworkflow(object):
    string_in = Input(Types.String, required=True, help="input string")
    dataset = Input(Types.CSV, default=Types.CSV.create_at_known_location(
        "http://172.16.140.171:8000/label_summary.csv"),
        help="A CSV File")
    return_dataset = download_task(dataset=dataset)
    return_pickle = read_task(dataset=return_dataset.outputs.out)
    myoutput = Output(return_pickle.outputs.csv_head, sdk_type=Types.String)```
hmm, that might be a bug in the registration tool.
…sorry, still typing
can you try the above? i think there is a bug which incorrectly triggers a sanity-check mechanism
anyway, this is actually easier in a 'pure client' case. Where you have a service that is dynamically generating workflows, you can use the client to find task versions, weave them into a workflow, and then simply call register in a pure script like:
```t = SdkTask.fetch(...)
wf = workflow(nodes={'n1': t()}, inputs=...)
lp = wf.create_launch_plan()
wf.register(..)
lp.register(..)```
|
no, definitely not for a workflow… and you don't actually technically need to be in a container for a task either. Being in a container that is configured in a certain way just makes it easy to auto-fill information when creating tasks.
| ```Running task, workflow, and launch plan registration for flytedemo, development, ['fk_workflow'] with version 1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216
Traceback (most recent call last):
File "/app/venv/bin/pyflyte", line 11, in <module>
sys.exit(main())
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 97, in workflows
register_all(project, domain, pkgs, test, version)
File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 21, in register_all
for m, k, o in iterate_registerable_entities_in_order(pkgs):
File "/app/venv/lib/python3.6/site-packages/flytekit/tools/module_loader.py", line 112, in iterate_registerable_entities_in_order
for m in iterate_modules(pkgs):
File "/app/venv/lib/python3.6/site-packages/flytekit/tools/module_loader.py", line 16, in iterate_modules
yield importlib.import_module(name)
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/app/fk_workflow/workflow.py", line 9, in <module>
read_task = SdkTask.fetch("flytedemo","development","fk_tasks.tasks.read_pickle","1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216")
File "/app/venv/lib/python3.6/site-packages/flytekit/common/exceptions/scopes.py", line 161, in system_entry_point
return wrapped(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/common/tasks/task.py", line 159, in fetch
admin_task = _engine_loader.get_engine().fetch_task(task_id=task_id)
File "/app/venv/lib/python3.6/site-packages/flytekit/engines/flyte/engine.py", line 108, in fetch_task
).client.get_task(task_id)
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/friendly.py", line 162, in get_task
id=id.to_flyte_idl()
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 12, in handler
return fn(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 136, in get_task
return self._stub.GetTask(get_object_request)
File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.NOT_FOUND
details = "entry not found"
debug_error_string = "{"created":"@1579804960.139722800","description":"Error received from peer ipv4:127.0.0.1:30081","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"entry not found","grpc_status":5}"
>
make: *** [register_workflow] Error 1```
nope, still throws an error
the pure client solution would be even better, i think
there is something that I haven't quite understood yet, maybe you could shed some light:
I understand that in order to register a task I need to do it from a container (or at least set an env variable that points to the container that will run it)
do I still need to reference a container when creating a workflow?
|
right that’s what I figured
if you could provide some simple example of a workflow created with the pure client I would appreciate it
another thing that’s not very clear from the flytekit documentation is how to connect to the admin service
i know that with flyte-cli I can just pass the host parameters but I couldn’t find the equivalent for the flytekit
i think it’s looking for some kind of configuration file because I get an error similar to this
```FlyteAssertion: No configuration set for [platform] url. This is a required configuration.```
| no, definitely not for a workflow… and you don't actually technically need to be in a container for a task either. Being in a container that is configured in a certain way just makes it easy to auto-fill information when creating tasks.
|
do you get this error when using the pyflyte tool? or just when writing a script yourself?
if the latter, take a look at this method: <https://github.com/lyft/flytekit/blob/master/flytekit/configuration/__init__.py#L11>
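for a quick sketch of using it (the path is just an example):
```# Hypothetical path; point flytekit at your config before any client calls.
from flytekit.configuration import set_flyte_config_file

set_flyte_config_file("/path/to/flytekit.conf")```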
| right that’s what I figured
if you could provide some simple example of a workflow created with the pure client I would appreciate it
another thing that’s not very clear from the flytekit documentation is how to connect to the admin service
i know that with flyte-cli I can just pass the host parameters but I couldn’t find the equivalent for the flytekit
i think it’s looking for some kind of configuration file because I get an error similar to this
```FlyteAssertion: No configuration set for [platform] url. This is a required configuration.```
|
when creating a script myself
| do you get this error when using the pyflyte tool? or just when writing a script yourself?
if the latter, take a look at this method: <https://github.com/lyft/flytekit/blob/master/flytekit/configuration/__init__.py#L11>
|
alternatively, you can set env vars that follow the format `FLYTE_{SECTION}_{KEY}`, so in this case `FLYTE_PLATFORM_URL`, but i recommend creating a `flytekit.conf` file
and i will find an example of a pure client use case
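for reference, a minimal `flytekit.conf` might look like this (values are placeholders for your deployment; the `insecure` flag is an assumption for a local sandbox without TLS):
```[platform]
url = 127.0.0.1:30081
insecure = True```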
| when creating a script myself
|
is the flytekit.conf supposed to be created when running flyte-cli setup-config?
| alternatively, you can set env vars that follow the format `FLYTE_{SECTION}_{KEY}`, so in this case `FLYTE_PLATFORM_URL`, but i recommend creating a `flytekit.conf` file
and i will find an example of a pure client use case
|
it is the same config file format, so it can be.
| is the flytekit.conf supposed to be created when running flyte-cli setup-config?
|
because I tried running
```flyte-cli setup-config -h localhost:30081 -i```
but I got a JSONDecodeError… should I open an issue about that?
| it is the same config file format, so it can be.
|
yes please. cc: Yee
| because I tried running
```flyte-cli setup-config -h localhost:30081 -i```
but I got a JSONDecodeError… should I open an issue about that?
|
:+1:
| yes please. cc: Yee
|
there should be a barebones config in the demo repository you can copy
| :+1:
|
ok thanks I’ll take a look
| there should be a barebones config in the demo repository you can copy
|
here is an example script for a pure client approach. note: i copy-pasted different elements so there _might_ be a copy-paste error (and the interface definitions for your tasks/workflows will be different):
```my_task = SdkTask.fetch("flytedemo","development","fk_tasks.tasks.download_dataset", "1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216")
input_dict = {
'input_1': Input(Types.Integer, default=10, help='Not required input.'),
'input_2': Input(Types.Integer, help='Required.')
}
nodes = {
'a': my_task(a=input_dict['input_1']),
'b': my_task(a=input_dict['input_2']),
'c': my_task(a=1000)
}
outputs = {
'wf_out': Output(
[
nodes['a'].outputs.b,
nodes['b'].outputs.b,
nodes['c'].outputs.b,
],
sdk_type=[Types.Integer]
)
}
w = workflow(inputs=input_dict, outputs=outputs, nodes=nodes)
w.register('flytedemo', 'development', 'simple_functional', version)
lp = w.create_launch_plan(
    fixed_inputs={'input_2': 100},
    schedule=schedules.CronSchedule("0/15 * * * ? *")
)
lp.register('flytedemo', 'development', 'simple_functional', version)
ex = lp.execute('flytedemo', 'development', inputs={'input_1': 500})
print("Execution URN: {}".format(ex.id))
# wait for execution to complete, then check out the status, inputs, and outputs.
ex.wait_for_completion(timeout=datetime.timedelta(minutes=10))
assert ex.closure.phase == _core_execution.WorkflowExecutionPhase.SUCCEEDED
assert ex.error is None
assert len(ex.inputs) == 2
assert len(ex.outputs) == 1
assert ex.inputs['input_1'] == 500
assert ex.inputs['input_2'] == 100
assert ex.outputs['wf_out'] == [501, 101, 1001]
for k, ne in six.iteritems(ex.node_executions):
if k in {'start-node', 'end-node'}:
continue
ne.sync()
assert ne.closure.phase == _core_execution.NodeExecutionPhase.SUCCEEDED
assert len(ne.inputs) == 1
assert len(ne.outputs) == 1
assert len(ne.executions) == 1
assert len(ne.task_executions) == 1
assert ne.error is None
ne.executions[0].sync()
assert ne.executions[0].error is None
assert len(ne.executions[0].inputs) == 1
assert len(ne.executions[0].outputs) == 1
assert ne.task_executions[0].closure.phase == _core_execution.TaskExecutionPhase.SUCCEEDED
assert len(ne.task_executions[0].inputs) == 1
assert len(ne.task_executions[0].outputs) == 1
assert ne.task_executions[0].error is None
assert ex.node_executions['a'].inputs['a'] == 500
assert ex.node_executions['a'].outputs['b'] == 501
assert ex.node_executions['b'].inputs['a'] == 100
assert ex.node_executions['b'].outputs['b'] == 101
assert ex.node_executions['c'].inputs['a'] == 1000
assert ex.node_executions['c'].outputs['b'] == 1001```
important imports:
```import datetime
import six

from flytekit.common import schedules
from flytekit.common.tasks.task import SdkTask
from flytekit.configuration import TemporaryConfiguration
from flytekit.models import launch_plan as _launch_plan
from flytekit.models.core import execution as _core_execution
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import workflow, Input, Output```
and you’ll want to set the config somewhere beforehand too
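one way, as a sketch (the path is illustrative, and I'm going from memory on the `TemporaryConfiguration` signature in flytekit 0.x, so double-check it):
```# Hypothetical: scope the platform config to this script run.
with TemporaryConfiguration('/path/to/flytekit.conf'):
    pass  # fetch/register/execute calls go here```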
| ok thanks I’ll take a look
|
hi Matt, I got it to work after all
one of the issues was that I renamed the tasks but didn't rename them in the code
the other issue is that the Docker image in the flytesnacks example has flytekit pinned to an older version
that's why I got the error with fetch_latest
thank you for your help!
| here is an example script for a pure client approach. note: i copy-pasted different elements so there _might_ be a copy-paste error (and the interface definitions for your tasks/workflows will be different):
```my_task = SdkTask.fetch("flytedemo","development","fk_tasks.tasks.download_dataset", "1e0d95cc82b85cdd96feab3c6f9b6e2f7baba216")
input_dict = {
'input_1': Input(Types.Integer, default=10, help='Not required input.'),
'input_2': Input(Types.Integer, help='Required.')
}
nodes = {
'a': my_task(a=input_dict['input_1']),
'b': my_task(a=input_dict['input_2']),
'c': my_task(a=1000)
}
outputs = {
'wf_out': Output(
[
nodes['a'].outputs.b,
nodes['b'].outputs.b,
nodes['c'].outputs.b,
],
sdk_type=[Types.Integer]
)
}
w = workflow(inputs=input_dict, outputs=outputs, nodes=nodes)
w.register('flytedemo', 'development', 'simple_functional', version)
lp = w.create_launch_plan(
    fixed_inputs={'input_2': 100},
    schedule=schedules.CronSchedule("0/15 * * * ? *")
)
lp.register('flytedemo', 'development', 'simple_functional', version)
ex = lp.execute('flytedemo', 'development', inputs={'input_1': 500})
print("Execution URN: {}".format(ex.id))
# wait for execution to complete, then check out the status, inputs, and outputs.
ex.wait_for_completion(timeout=datetime.timedelta(minutes=10))
assert ex.closure.phase == _core_execution.WorkflowExecutionPhase.SUCCEEDED
assert ex.error is None
assert len(ex.inputs) == 2
assert len(ex.outputs) == 1
assert ex.inputs['input_1'] == 500
assert ex.inputs['input_2'] == 100
assert ex.outputs['wf_out'] == [501, 101, 1001]
for k, ne in six.iteritems(ex.node_executions):
if k in {'start-node', 'end-node'}:
continue
ne.sync()
assert ne.closure.phase == _core_execution.NodeExecutionPhase.SUCCEEDED
assert len(ne.inputs) == 1
assert len(ne.outputs) == 1
assert len(ne.executions) == 1
assert len(ne.task_executions) == 1
assert ne.error is None
ne.executions[0].sync()
assert ne.executions[0].error is None
assert len(ne.executions[0].inputs) == 1
assert len(ne.executions[0].outputs) == 1
assert ne.task_executions[0].closure.phase == _core_execution.TaskExecutionPhase.SUCCEEDED
assert len(ne.task_executions[0].inputs) == 1
assert len(ne.task_executions[0].outputs) == 1
assert ne.task_executions[0].error is None
assert ex.node_executions['a'].inputs['a'] == 500
assert ex.node_executions['a'].outputs['b'] == 501
assert ex.node_executions['b'].inputs['a'] == 100
assert ex.node_executions['b'].outputs['b'] == 101
assert ex.node_executions['c'].inputs['a'] == 1000
assert ex.node_executions['c'].outputs['b'] == 1001```
important imports:
```import datetime
import six

from flytekit.common import schedules
from flytekit.common.tasks.task import SdkTask
from flytekit.configuration import TemporaryConfiguration
from flytekit.models import launch_plan as _launch_plan
from flytekit.models.core import execution as _core_execution
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import workflow, Input, Output```
and you’ll want to set the config somewhere beforehand too
|
Awesome!!! thank you both
| Hongxin Liang and Haytham Abuelfutuh
I've merged the boilerplate change
and updated the IDL PR <https://github.com/lyft/flyteidl/pull/27>
thanks honnix for adding me as a collaborator
mind taking a look one last time before we merge?
|
I will take care of the rest of the repos.
Thanks.
| Awesome!!! thank you both
|
good morning
thank you!
i'll approve everything tomorrow morning.
will then test, and then post an announcement to the issue and this channel
| I will take care of the rest of the repos.
Thanks.
|
None that we know of
what is not playing well with flytekit?
| Hello everyone… are there any known issues with running a script that uses the “hyperopt” python library under flyte?
|
I have been trying to run a script that uses it under a workflow
| None that we know of
what is not playing well with flytekit?
|
hmmm
and you are using the python bindings (flytekit), right?
| I have been trying to run a script that uses it under a workflow
|
i don’t even have to actually use the library, as soon as I import it in the script i get this
| hmmm
and you are using the python bindings (flytekit), right?
|
are you pasting the error?
| i don’t even have to actually use the library, as soon as I import it in the script i get this
|
```ERROR:root:Error from command '['aws', '--endpoint-url', 'http://minio.yolotrainframework.svc.cluster.local:9000', 's3', 'cp', 's3://my-s3-bucket/metadata/propeller/flytedemo-d
b''
Traceback (most recent call last):
  File "/app/venv/lib/python3.6/site-packages/flytekit/interfaces/data/data_proxy.py", line 127, in get_data
    proxy.download(remote_path, local_path)
  File "/app/venv/lib/python3.6/site-packages/flytekit/interfaces/data/s3/s3proxy.py", line 109, in download
    return _update_cmd_config_and_execute(cmd)
  File "/app/venv/lib/python3.6/site-packages/flytekit/interfaces/data/s3/s3proxy.py", line 36, in _update_cmd_config_and_execute
    return _subprocess.check_call(cmd, env=env)
  File "/app/venv/lib/python3.6/site-packages/flytekit/tools/subprocess.py", line 34, in check_call
    "Called process exited with error code: {}. Stderr dump:\n\n{}".format(ret_code, err_str)
Exception: Called process exited with error code: -9. Stderr dump:
b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/app/venv/bin/pyflyte-execute", line 11, in <module>
    sys.exit(execute_task_cmd())
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/bin/entrypoint.py", line 104, in execute_task_cmd
    _execute_task(task_module, task_name, inputs, output_prefix, test)
  File "/app/venv/lib/python3.6/site-packages/flytekit/common/exceptions/scopes.py", line 161, in system_entry_point
    return wrapped(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/bin/entrypoint.py", line 83, in _execute_task
    _data_proxy.Data.get_data(inputs, local_inputs_file)```
this is from inside the container
if I take out the hyperopt import everything works
| are you pasting the error?
|
hmmm
exit code -9
let me check
Giordano, can you share your code
if you don't mind?
| ```ERROR:root:Error from command '['aws', '--endpoint-url', 'http://minio.yolotrainframework.svc.cluster.local:9000', 's3', 'cp', 's3://my-s3-bucket/metadata/propeller/flytedemo-d
b''
Traceback (most recent call last):
  File "/app/venv/lib/python3.6/site-packages/flytekit/interfaces/data/data_proxy.py", line 127, in get_data
    proxy.download(remote_path, local_path)
  File "/app/venv/lib/python3.6/site-packages/flytekit/interfaces/data/s3/s3proxy.py", line 109, in download
    return _update_cmd_config_and_execute(cmd)
  File "/app/venv/lib/python3.6/site-packages/flytekit/interfaces/data/s3/s3proxy.py", line 36, in _update_cmd_config_and_execute
    return _subprocess.check_call(cmd, env=env)
  File "/app/venv/lib/python3.6/site-packages/flytekit/tools/subprocess.py", line 34, in check_call
    "Called process exited with error code: {}. Stderr dump:\n\n{}".format(ret_code, err_str)
Exception: Called process exited with error code: -9. Stderr dump:
b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/app/venv/bin/pyflyte-execute", line 11, in <module>
    sys.exit(execute_task_cmd())
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/bin/entrypoint.py", line 104, in execute_task_cmd
    _execute_task(task_module, task_name, inputs, output_prefix, test)
  File "/app/venv/lib/python3.6/site-packages/flytekit/common/exceptions/scopes.py", line 161, in system_entry_point
    return wrapped(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/bin/entrypoint.py", line 83, in _execute_task
    _data_proxy.Data.get_data(inputs, local_inputs_file)```
this is from inside the container
if I take out the hyperopt import everything works
|
it seems like importing hyperopt creates some issues with connecting to s3
sure
gimme a sec
| hmmm
exit code -9
let me check
Giordano, can you share your code
if you don't mind?
|
ya this is actually connecting to minio in your sandbox cluster
| it seems like importing hyperopt creates some issues with connecting to s3
sure
gimme a sec
|
Looks like the subprocess which was trying to copy from s3 got a SIGKILL signal.
The most common reason for that would be OOM. `aws s3 cp` shouldn't take too much memory, but it might be worth trying to bump the memory in your container using something like:
```@python_task(cpu_limit="10000m", memory_limit="10000Mi")```
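for context, a fuller sketch of a task with raised resources (task name, inputs/outputs, and values are all illustrative; I'm assuming `cpu_request`/`memory_request` are available alongside the limits in your flytekit version, so verify that):
```from flytekit.sdk.tasks import python_task, inputs, outputs
from flytekit.sdk.types import Types

# Hypothetical task: bump requests/limits so the input download isn't OOM-killed.
@inputs(dataset=Types.CSV)
@outputs(status=Types.String)
@python_task(cpu_request="500m", cpu_limit="2", memory_request="2Gi", memory_limit="4Gi")
def heavy_task(wf_params, dataset, status):
    dataset.download()  # the s3 copy step that previously got SIGKILLed
    status.set("ok")```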
| ya this is actually connecting to minio in your sandbox cluster
|
Johnny Burns Ketan Umare just tried what you suggested and it's working now, thanks!
It was indeed a resource issue
| Looks like the subprocess which was trying to copy from s3 got a SIGKILL signal.
The most common reason for that would be OOM. `aws s3 cp` shouldn't take too much memory, but it might be worth trying to bump the memory in your container using something like:
```@python_task(cpu_limit="10000m", memory_limit="10000Mi")```
|
Wow awesome!! Way to finish it off—I know it wasn’t easy! :p
| As of today, we have migrated all golang Flyte repos to use go mod. Thanks to Hongxin Liang for driving this change.
Boilerplate changes: <https://github.com/lyft/boilerplate/pull/4>
The boilerplate code will now default to go.mod.
<https://github.com/lyft/flytestdlib/releases/tag/v0.3.0> (was already on go mod as of v0.2.29)
<https://github.com/lyft/flyteidl/releases/tag/v0.17.0>
<https://github.com/lyft/flyteplugins/releases/tag/v0.3.0>
<https://github.com/lyft/flytepropeller/releases/tag/v0.2.0>
<https://github.com/lyft/flyteadmin/releases/tag/v0.2.0>
<https://github.com/lyft/datacatalog/releases/tag/0.2.0>
We’ve introduced a minor workaround to the boilerplate repo (and hence to all other go repos). Please read through the thorough background discussion that Hongxin Liang posted around go modules in the GitHub issue (<https://github.com/lyft/flyte/issues/129>). In order to persist and standardize on a version of the common golang tools, we’ve added a second set of go.mod/sum files. These are listed in the boilerplate PR and basically separate your code from these tools. That is, your code likely uses golangci-lint but that doesn’t mean it should be part of your project’s go.mod files. Honnix’s comment has more information. This dual go.mod approach is also slated to be introduced more formally and rigorously in go 1.14.
This is a good overview of the new commands:
<https://github.com/golang/go/wiki/Modules#daily-workflow>
Note also that in most repos, we’re now pinning the version of client-go to something compatible with the Lyft fork of K8s (see <https://github.com/lyft/datacatalog/pull/21#issuecomment-579411660> for more information).
|
Thank you both! This not only aligns the Flyte repos with what's becoming the standard, but also improves our tooling in the process (consistent build tool versions). Big shout out to Hongxin Liang for going above and beyond to deliver this across all Flyte repos.
| Wow awesome!! Way to finish it off—I know it wasn’t easy! :p
|
Great job everyone! It was a good learning process for me to get a better understanding of the go ecosystem, and I enjoyed working on this. Super excited to see everything in place. :thumbsup:
| Thank you both! This not only aligns the Flyte repos with what's becoming the standard, but also improves our tooling in the process (consistent build tool versions). Big shout out to Hongxin Liang for going above and beyond to deliver this across all Flyte repos.
|
This is a good place to start learning about notifications:
<https://lyft.github.io/flyte/user/features/notifications.html>
As far as logging/monitoring, there are two "stories" there:
1. As an administrator of the Flyte system, you want logging and monitoring for the health of your Flyte cluster.
2. As a user (workflow writer), you want logs showing what your workflow is doing, so you can debug your workflow code.
| I want to understand:
• Flyte's Monitoring
• Logging
• Alerting
• Integration with Slack or something: say this job was started by user X, job X is finished, etc.
|
Email and Slack are good enough
So what about logging and monitoring?
| This is a good place to start learning about notifications:
<https://lyft.github.io/flyte/user/features/notifications.html>
As far as logging/monitoring, there are two "stories" there:
1. As an administrator of the Flyte system, you want logging and monitoring for the health of your Flyte cluster.
2. As a user (workflow writer), you want logs showing what your workflow is doing, so you can debug your workflow code.
|
<https://lyft.github.io/flyte/user/features/observability.html>
Flyte system components use Prometheus (they expose metrics through the prometheus port).
We found that Prometheus can be hit or miss for user containers (since it's a pull model, it can miss data points). So for metrics emitted from user containers, you will need to set up <https://github.com/statsd/statsd> in the cluster and configure this (default env vars): <https://github.com/lyft/flyte/blob/master/deployment/sandbox/flyte_generated.yaml#L636>
Add something like this:
`- FLYTE_STATSD_HOST: stats.statsagent`
where stats.statsagent is the DNS name for wherever you deployed statsd within the cluster...
having said that, I think there is an obvious gap in the documentation on how to set this up :slightly_smiling_face:
| Email and Slack are good enough
So what about logging and monitoring?
|
Hmmmm.... Looks like the docs for that are missing :disappointed:
The notification code should look something like this:
```@workflow(notify=[Email(when=[States.FAILED, States.TIMED_OUT], who=["me@email.com"],
                        service_instances=["staging", "production"]),
                  Email(when=[States.SUCCESS], who=["myteam@email.com", "me@email.com"])])```
| <https://lyft.github.io/flyte/user/features/notifications.html#howto-slack> link is broken?
|
Nice! TWiML is my favorite podcast! (went to Twimlcon recently too). Will listen to it this evening.
Great interview!
| <!here> Ketan Umare and myself on TWIML, talking about Flyte, giving a bit more history about how it all started, where it's going and key differentiators: <https://twimlai.com/twiml-talk-343-scalable-and-maintainable-workflows-at-lyft-with-flyte-w-haytham-abuelfutuh-and-ketan-umare/>
|
Thank you Richard
| Nice! TWiML is my favorite podcast! (went to Twimlcon recently too). Will listen to it this evening.
Great interview!
|
Awesome! Congrats!
| Thank you Richard
|
Good listen :+1:. Wish you got more time to talk about the caching and cataloguing stuff though as I think that’s a big differentiator from say Argo and Airflow.
| Awesome! Congrats!
|
Thank you Jonathon, that's certainly true. Stay tuned for more :wink:
| Good listen :+1:. Wish you got more time to talk about the caching and cataloguing stuff though as I think that’s a big differentiator from say Argo and Airflow.
|
Memoization part sounded really exciting !
| Thank you Jonathon, that's certainly true. Stay tuned for more :wink:
|
Adhita Selvaraj, how's the TF operator stuff going?
| Memoization part sounded really exciting !
|
Are you able to run just the raw container that contains the task in question?
| Hi guys, I've been working for the past few days on implementing some DL pipelines that I use at work on Flyte, and I've been spending a lot of time debugging... right now I'm working locally on my docker-desktop k8s environment and the workflow is as follows:
edit code -> build container -> register workflow -> launch workflow -> check errors -> repeat
Some code I can debug locally, but the actual task code, for example, needs to run inside Flyte. Is there a way to "mock" the inputs and outputs, or some way to debug it with an editor (I'm using VSCode)?
Perhaps there is a way to attach to the task & workflow processes with ptvsd?
|
Yes, of course
| Are you able to run just the raw container that contains the task in question?
|
what are the types of these inputs?
the command that is passed to the container is in the task spec. it'll look something like this.
the braces are filled in by Propeller before execution with the location of the s3 inputs file and the path that the outputs should be written to.
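as a sketch, the command template tends to look like this (module/task names are from your example; the exact flags and templating are an assumption based on the pyflyte-execute entrypoint, so verify against your actual task spec):
```pyflyte-execute --task-module fk_tasks.workflows --task-name download_dataset --inputs {{.input}} --output-prefix {{.outputPrefix}}```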
| Yes, of course
|
Ok thanks I'll try that
I didn't think I could run the command directly
| what are the types of these inputs?
the command that is passed to the container is in the task spec. it'll look something like this.
the braces are filled in by Propeller before execution with the location of the s3 inputs file and the path that the outputs should be written to.
|
yup you absolutely can…. this is probably the most black-box way of debugging. you can also run the task outside of the container entirely, just in a virtualenv (assuming this is python).
this isn’t yet sanitized for outside lyft so i’m gonna copy paste some
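in the meantime, one sketch of exercising a task in-process without a cluster (this assumes the unit-test helpers in flytekit 0.x; the `flyte_test`/`unit_test` names are from memory, so double-check them in your flytekit version, and the task/output names are from your repo):
```from flytekit.sdk.test_utils import flyte_test

# Hypothetical: call the task like a function and inspect outputs locally.
@flyte_test
def test_uppercase_task():
    out = uppercase_task.unit_test(in_string="hello")
    assert out["out_string"] == "HELLO"```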
| Ok thanks I'll try that
I didn't think I could run the command directly
|
Nice catch. Yeah, my impression is that this is a bug.
My best guess is that it's related to this:
<https://github.com/lyft/flyteadmin/blob/master/cmd/entrypoints/root.go#L80>
Both Flyteadmin and Datacatalog do this, flytepropeller does not.
Haytham Abuelfutuh, can you confirm?
| Hello Everyone! :hand: I am interested in a small part of `flytepropeller` in `flyte`.
By default, the Propeller container is started with the command `flytepropeller --config "/etc/flyte/config*/config.yaml"`
Source: <https://github.com/lyft/flyte/blob/master/kustomize/base/propeller/deployment.yaml#L35-L39> It works great!
In the next step, I just removed the `access-key` and `secret-key` from `config.yaml` and changed the command as follows:
```flytepropeller \
  --config "/etc/flyte/config*/config.yaml" \
  --storage.connection.access-key 'minio' \
  --storage.connection.secret-key 'miniostorage'```
In my opinion, these flags should update the configuration for propeller. (then, I'd like to use keys from environment vars)
At least, this approach works for `flyteadmin` and `datacatalog`. But using this approach with `flytepropeller`, I get the error below:
```{"json":{"src":"root.go:221"},"level":"fatal","msg":"Failed to start Controller - [Failed to create Metadata storage: unable to configure the storage for s3. Error: missing Access Key ID]","ts":"2020-02-05T12:19:16Z"}```
Looks like my flags were ignored. Could you advise what my mistake is?
Thanks in advance and have a great day! :slightly_smiling_face:
sorry if I used the wrong channel for this question :pray:
|
Can you show me the storage section in config?
| Nice catch. Yeah, my impression is that this is a bug.
My best guess is that it's related to this:
<https://github.com/lyft/flyteadmin/blob/master/cmd/entrypoints/root.go#L80>
Both Flyteadmin and Datacatalog do this, flytepropeller does not.
Haytham Abuelfutuh, can you confirm?
|
Thank you for the quick response!
I only made these small changes compared with the sandbox example <https://github.com/lyft/flyte/blob/master/kustomize/overlays/sandbox/propeller/config.yaml#L40-L49>
just commented out the two lines with creds:
| Can you show me the storage section in config?
|
I also reproduced the issue locally (removed the `endpoint` config from the file, added it as a command line arg).
| Thank you for the quick response!
I only made these small changes compared with the sandbox example <https://github.com/lyft/flyte/blob/master/kustomize/overlays/sandbox/propeller/config.yaml#L40-L49>
just commented out the two lines with creds:
|
Awesome! This should work. I agree this is a bug; mind filing an issue on <http://github.com/lyft/flyte> and assigning it to me (@enghabu), Ruslan Stanevich?
| I also reproduced the issue locally (removed the `endpoint` config from the file, added it as a command line arg).
|
oh of course!
thanks for your responsiveness
| Awesome! This should work. I agree this is a bug; mind filing an issue on <http://github.com/lyft/flyte> and assigning it to me (@enghabu), Ruslan Stanevich?
|
Haytham Abuelfutuh haven't tested this yet, but if you like how it looks, I can give it a whirl.
<https://github.com/lyft/flytepropeller/pull/63/files>
^ Guess it's never that simple. Looks like this results in a whole new problem:
```* '' has invalid keys: qubolelimit, quboletokenpath, redishostkey, redishostpath, resourcemanagertype
1 error(s) decoding:```
| oh of course!
thanks for your responsiveness
|
<https://github.com/lyft/flyte/issues/167>
hope I didn't miss anything
Johnny, I don't know exactly, but from my experience, at least this error:
```1 error(s) decoding:
* '' has invalid keys: qubolelimit```
can be related to using flytepropeller:v0.2* with old plugin configs
<https://github.com/lyft/flyte/commit/b6c38aed5019677e4fc83b4c160fa3daca29cbc0#diff-9587455136ef88535397e7e8006e0dde>
| Haytham Abuelfutuh haven't tested this yet, but if you like how it looks, I can give it a whirl.
<https://github.com/lyft/flytepropeller/pull/63/files>
^ Guess it's never that simple. Looks like this results in a whole new problem:
```* '' has invalid keys: qubolelimit, quboletokenpath, redishostkey, redishostpath, resourcemanagertype
1 error(s) decoding:```
|
Ah, you might be right! Maybe my fix is working. Let me look.
That helps, but doesn't totally fix the issue. Seems some work will need to go into fixing it
| <https://github.com/lyft/flyte/issues/167>
hope I didn't miss anything
Johnny, I don't know exactly, but from my experience, at least this error:
```1 error(s) decoding:
* '' has invalid keys: qubolelimit```
can be related to using flytepropeller:v0.2* with old plugin configs
<https://github.com/lyft/flyte/commit/b6c38aed5019677e4fc83b4c160fa3daca29cbc0#diff-9587455136ef88535397e7e8006e0dde>
|
oh, as I see there is a `secrets` section in propeller's config
<https://github.com/lyft/flytepropeller/blob/b1595306d38404c41eb3e6bf7dbabd8c8978544b/pkg/controller/nodes/task/secretmanager/config.go>
does it mean we are able to use it in workflow tasks? Sorry, maybe a silly question; I didn't manage to find it in the docs )
A few use cases would be extremely useful
Thanks in advance!
| Ah, you might be right! Maybe my fix is working. Let me look.
That helps, but doesn't totally fix the issue. Seems some work will need to go into fixing it
|
Great question.
The secret manager in propeller is designed for the secrets of `flytepropeller` (and its plugins). Not so much for individual tasks.
That might be a bit confusing, so I can elaborate a bit.
Flytepropeller launches tasks based on their task "type". In order to launch those tasks, propeller might need access to secrets.
For example, one of our task types can launch hive queries in a remote (3rd party) cluster.
In order for flytepropeller to access that remote cluster (to launch the query), propeller needs a secret "access" token. All tasks of this type need that token.
This plugin uses the secret manager to retrieve the token, and uses that token to make the query.
<https://github.com/lyft/flyteplugins/blob/e5ab7319502a0a69d4825b5abe08764d24133811/go/tasks/plugins/hive/executor.go#L79>
| oh, as I see there is a `secrets` section in propeller's config
<https://github.com/lyft/flytepropeller/blob/b1595306d38404c41eb3e6bf7dbabd8c8978544b/pkg/controller/nodes/task/secretmanager/config.go>
does it mean we are able to use it in workflow tasks? Sorry, maybe a silly question; I didn't manage to find it in the docs )
A few use cases would be extremely useful
Thanks in advance!
|
Oh, cool
It makes sense!
Thank you!
| Great question.
The secret manager in propeller is designed for the secrets of `flytepropeller` (and its plugins). Not so much for individual tasks.
That might be a bit confusing, so I can elaborate a bit.
Flytepropeller launches tasks based on their task "type". In order to launch those tasks, propeller might need access to secrets.
For example, one of our task types can launch hive queries in a remote (3rd party) cluster.
In order for flytepropeller to access that remote cluster (to launch the query), propeller needs a secret "access" token. All tasks of this type need that token.
This plugin uses the secret manager to retrieve the token, and uses that token to make the query.
<https://github.com/lyft/flyteplugins/blob/e5ab7319502a0a69d4825b5abe08764d24133811/go/tasks/plugins/hive/executor.go#L79>
|
If you'd like to have secrets on a per-task basis, we can discuss that too. We definitely do that here at Lyft. You should be able to use "vault" or any other secret manager to get those secrets into your container.
| Oh, cool
It makes sense!
Thank you!
|
Yeah, currently we use Vault with serviceaccount token authentication.
Thank you for the advice! We were just looking for the most "native" approach to using it in Flyte
| If you'd like to have secrets on a per-task basis, we can discuss that too. We definitely do that here at Lyft. You should be able to use "vault" or any other secret manager to get those secrets into your container.
|
So Flyte doesn't really prescribe a way to handle secrets (it's up to you), but here is one way you might do it with Vault (we're not using Vault, so I could be overlooking some details):
Vault has a secret injector:
<https://www.vaultproject.io/docs/platform/k8s/injector/index.html>
If you annotate your pods correctly, the secrets should get injected into the pod.
Flyte allows you to add annotations via launch plans:
```annotations=Annotations({"vault.hashicorp.com/agent-inject-secret-foo": "bar/baz"}),```
I'm sure there are other ways of doing this as well. Just one example.
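to round that out, a sketch of wiring the annotation into a launch plan (the workflow name and secret path are illustrative, and the `Annotations` import path is my assumption for flytekit 0.x):
```from flytekit.models.common import Annotations

# Hypothetical: pods launched for this launch plan get the Vault injector annotation.
lp = my_workflow.create_launch_plan(
    annotations=Annotations({"vault.hashicorp.com/agent-inject-secret-foo": "bar/baz"}),
)```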
| Yeah, currently we use Vault with serviceaccount token authentication.
Thank you for the advice! We were just looking for the most "native" approach to using it in Flyte
|
fetching of an SdkWorkflow object is being worked on in a PR. it's not quite ready yet. hopefully in a week it'll be done, along with some other features we've been meaning to push out. <https://github.com/lyft/flytekit/pull/75/files>
| hello! what would be the best way with the SDK to “fetch” an existing workflow, create a new launch plan with new inputs and execute it? i tried something like this:
```import flytekit.configuration
import flytekit.common.workflow
flytekit.configuration.set_flyte_config_file("flytekit.conf")
wf = flytekit.common.workflow.SdkWorkflow.fetch(
project="yolotrain",
domain="development",
name="train.single.workflow_yolo_single",
version="71a60ca9fa75497968bb09fe8c4ba8d3aee042cb",
)```
but I’m getting
```FlyteAssertion: An SDK node must have one underlying entity specified at once. Received the following entities: []```
|
Ok thanks, looking forward to the pr :+1:
| fetching of an SdkWorkflow object is being worked on in a PR. it's not quite ready yet. hopefully in a week it'll be done, along with some other features we've been meaning to push out. <https://github.com/lyft/flytekit/pull/75/files>
|
i think there's a way to create the launch plan without the class code present as well
but will wait for Matt Smith to answer that (he should be back today)
| Ok thanks, looking forward to the pr :+1:
|
Yee, there is a way
Giordano, you can use flyte-cli
that is the simplest way to create a launch plan
or am i still jet-lagged :slightly_smiling_face:
| i think there's a way to create the launch plan without the class code present as well
but will wait for Matt Smith to answer that (he should be back today)
|
Hey Giordano, quick question: are you trying to execute the workflow here? Or trying to 'specialize' the workflow interface by creating defaults, schedules, different service accounts, etc.?
for more context: a launch plan can be thought of as a specialized 'executable' for a workflow. A launch plan can then be executed as many times as one wants. For example, if i had a workflow that takes inputs `country` and `time`, I could use the same workflow to create two launch plans. One launch plan could freeze `country='USA'` and be scheduled to run daily with an IAM role given write access to `s3://USA-data`. The other could freeze `country='Canada'` and be scheduled to run weekly with an IAM role that only accesses `s3://Canada-data`. This way a pipeline (workflow) can be generalized, but then specialized at execution time to provide data protections, etc.
Executions to launch plans are many-to-one. So generally, when you are creating a new execution, you don't need to create a new launch plan; you only need to retrieve an existing one and call execute on it.
So if you are trying to launch an execution, the simplest way is:
```lp = SdkLaunchPlan.fetch('project', 'domain', 'name', 'version')
ex = lp.execute('project', 'domain', inputs={'a': 1, 'b': 'hello'}, <name='optional idempotency string'>)```
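and to make the country example concrete, a sketch of the 'specialize' side (workflow name and cron expressions are illustrative):
```# Hypothetical: two specialized launch plans cut from one workflow.
usa_lp = my_workflow.create_launch_plan(
    fixed_inputs={'country': 'USA'},
    schedule=schedules.CronSchedule("0 0 * * ? *"),   # daily
)
canada_lp = my_workflow.create_launch_plan(
    fixed_inputs={'country': 'Canada'},
    schedule=schedules.CronSchedule("0 0 ? * 1 *"),   # weekly
)```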
| Yee, there is a way
Giordano, you can use flyte-cli
that is the simplest way to create a launch plan
or am i still jet-lagged :slightly_smiling_face:
|
Hi Matt, right now I’m not looking to customize the workflow, just be able to launch it with different inputs so your suggestion might actually do the trick
| Hey Giordano, quick question: are you trying to execute the workflow here? Or trying to 'specialize' the workflow interface by creating defaults, schedules, different service accounts, etc.?
for more context: a launch plan can be thought of as a specialized 'executable' for a workflow. A launch plan can then be executed as many times as one wants. For example, if i had a workflow that takes inputs `country` and `time`, I could use the same workflow to create two launch plans. One launch plan could freeze `country='USA'` and be scheduled to run daily with an IAM role given write access to `s3://USA-data`. The other could freeze `country='Canada'` and be scheduled to run weekly with an IAM role that only accesses `s3://Canada-data`. This way a pipeline (workflow) can be generalized, but then specialized at execution time to provide data protections, etc.
Executions to launch plans are many-to-one. So generally, when you are creating a new execution, you don't need to create a new launch plan; you only need to retrieve an existing one and call execute on it.
So if you are trying to launch an execution, the simplest way is:
```lp = SdkLaunchPlan.fetch('project', 'domain', 'name', 'version')
ex = lp.execute('project', 'domain', inputs={'a': 1, 'b': 'hello'}, <name='optional idempotency string'>)```
|
cool then it’s an easier answer :slightly_smiling_face:
if you use `pyflyte` to register your workflows, there should be a default launch plan created for each workflow with the same name as the workflow
| Hi Matt, right now I’m not looking to customize the workflow, just be able to launch it with different inputs so your suggestion might actually do the trick
|
One thing that is not clear to me is the difference between an active and a non-active launch plan
for example, when I look for "active" launch plans in my environment i don't see any
| cool then it’s an easier answer :slightly_smiling_face:
if you use `pyflyte` to register your workflows, there should be a default launch plan created for each workflow with the same name as the workflow
|
ah yes, so an active launch plan can be thought of as a representation of the 'tip of your deployment'
| One thing that is not clear to me is the difference between an active and a non-active launch plan
for example, when I look for "active" launch plans in my environment i don't see any
|
kind of like the latest version, is that correct?
| ah yes, so an active launch plan can be thought of as a representation of the 'tip of your deployment'
|
yes exactly, except making it a bit easier to roll back if necessary
it's important for two major cases:
| kind of like the latest version, is that correct?
|
ok so how do I make a launch plan active then? i couldn’t find it in the docs…
i know that there is a “fetch_latest” for tasks…
| yes exactly, except making it a bit easier to roll back if necessary
it's important for two major cases:
|
yeah, that’s a topic of ongoing debate…perhaps the active tag should be applied to all tasks and workflows. or we should go with a more generalized solution where workflows and tasks can be custom tagged and labeled and then have methods like `fetch_tag`.
The reason why active is specifically important for launch plans is because of schedules. The admin service needs to know to change the schedule configuration for a launch plan based on which one is active.
anywho, to make your launch plan active:
`pyflyte -p project -d domain -c flyte.config lp activate-all [--ignore-schedules]`
| ok so how do I make a launch plan active then? i couldn’t find it in the docs…
i know that there is a “fetch_latest” for tasks…
|