Oh great! thank you!
We also would like to manage the K8s scheduler to `not schedule` other pods on expensive `GPU` nodes.
So our GPU nodes should have `taints`, and Pods destined for GPU nodes should have `tolerations`.
Moreover, in the future we’d like to set `nodeSelector` to choose different types of nodes for some tasks (default/compute optimized/GPU/etc.)
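To illustrate the combination (a sketch only, not our actual manifests; the taint key/value mirror the tolerations in our annotation example, while the pod name, image, and instance-type label are made up):

```yaml
# Taint the expensive node so the scheduler keeps ordinary pods off it:
#   kubectl taint nodes <gpu-node> flyte/gpu=dedicated:NoSchedule
# A pod that is allowed (and steered) onto such nodes would carry:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-task-pod                              # illustrative name
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: p2.xlarge   # illustrative label
  tolerations:
    - key: flyte/gpu
      value: dedicated
      operator: Equal
      effect: NoSchedule
  containers:
    - name: task
      image: myimage:latest                       # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1
```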
And we looked at `LaunchPlan Annotations` in Python code and how we can use them for managing Spark tasks.
For now we implemented a quick-n-dirty solution:
we wrote a simple `k8s webhook controller` which modifies the `SparkApplication` CRD based on annotation values.
Example of annotations:
```...
annotations:
  jsonPatchSpark.lyft-minsk.com/patches: |
    - op: add
      path: /spec/executor/gpu
      value:
        name: nvidia.com/gpu
        quantity: 1
    - op: add
      path: /spec/executor/tolerations
      value:
        - key: flyte/gpu
          value: dedicated
          operator: Equal
          effect: NoSchedule
...```
This applies the necessary patches to the `SparkApplication` so the task runs on a GPU node.
Yes, it is a rather low-level definition with jsonPatches...
but it allowed us to fine-tune the requested resources.
So far we have good results on the DEV cluster.
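For the curious, the core of what such a webhook does can be sketched in a few lines of Python (a toy sketch assuming plain dicts and only `add` ops — this is not our actual controller code; a real mutating webhook receives an AdmissionReview and returns a base64-encoded JSONPatch, but the patching semantics are the same):

```python
import json

def apply_add_ops(doc, ops):
    """Apply JSON-Patch 'add' operations (a subset of RFC 6902) to a nested dict."""
    for op in ops:
        if op["op"] != "add":
            raise NotImplementedError("only 'add' is sketched here")
        keys = [k for k in op["path"].split("/") if k]
        target = doc
        for key in keys[:-1]:
            target = target.setdefault(key, {})
        target[keys[-1]] = op["value"]
    return doc

# The same patches as in the annotation above, applied to a minimal spec:
spark_app = {"spec": {"executor": {"cores": 2}}}
patches = [
    {"op": "add", "path": "/spec/executor/gpu",
     "value": {"name": "nvidia.com/gpu", "quantity": 1}},
    {"op": "add", "path": "/spec/executor/tolerations",
     "value": [{"key": "flyte/gpu", "value": "dedicated",
                "operator": "Equal", "effect": "NoSchedule"}]},
]
patched = apply_add_ops(spark_app, patches)
print(json.dumps(patched, indent=2))
```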
Ruslan Stanevich want to help us launch this in open source? contributions welcome
this is awesome
And yes, an important note:
some time ago we had to activate webhooks for the Spark operator (which are not enabled in the Flyte default) in our EKS cluster with Flyte, because some teams wanted to use regular scheduled SparkApplications for data syncs.
hmm why not use scheduled jobs through flyte?
Ruslan Stanevich, would love to understand what's happening on your side. Can we sync up sometime?
Sure, we prefer using scheduled launch plans in Flyte for all Python jobs! It works great!
But there are some DWH team jobs written in Scala, and those run as `scheduledSparkApplication`s.
So, we use native Flyte features where possible :+1:
Ruslan Stanevich actually we just added support for scala spark jobs in python
Sorry, maybe I was being confusing.
I mean the code is written in the Scala language.
So, if I understood correctly, we cannot run Scala code in Flyte.
As for spark_jobs (written in Python), yes, they are also scheduled via the native Flyte approach (CloudWatch/SQS)
You can run any language in Flyte; Spotify is actually writing a Java SDK. But Scala Spark job support is already available in flytekit Python.
The only thing is you use a jar
This is exciting!
But I'm not sure that I understand how to run it. Let me check.
Can I find some examples of running other languages in the flytekit repo?
Hmm let me try and share, Anmol Khurana do you have this example checked in?
I don’t have that right now. I am planning to work on it this week. Will share once it’s ready.
Ruslan Stanevich this was recently checked in and is available now in beta: https://github.com/lyft/flytekit/releases/tag/v0.7.0b4
will add docs, but https://github.com/lyft/flytekit/blob/master/tests/flytekit/common/workflows/scala_spark.py#L10 is an example of how a Scala Spark job can be integrated as a task in Flyte
Anmol Khurana can you close the issue too?
done
Hey Johnny, having some issues building Flyte on macOS with minikube.
minikube has had a regression that has broken `minikube tunnel`.
I was able to work around this using: `minikube service --alsologtostderr -v=3 contour -n heptio-contour`
which has given me a mapping that I can access the console with. However, when trying to run a docker container for the myflyteproject example, I am running into networking errors with gRPC.
Johnny Burns can you please help Jordan Bramble
This is the most common problem everyone has
I think i should install minikube to really answer it
he is having a problem connecting to admin
Hey Jordan. Glad to help :wave:
There is a lot to unpack with your question. I don't think you should need minikube tunnel, but let's talk about your problem before I say that for sure.
When you say that you're trying to run a docker container, you mean that you've built a container for a flyte workflow?
Are you having trouble registering that? or running it?
yes, so I went through the install and setup, which implied I needed `minikube tunnel`. I was able to use `minikube service` instead to get to the console. Now I am attempting to run a workflow, based on this:
https://lyft.github.io/flyte/user/getting_started/create_first.html
I am able to build the container.
Now I am attempting to run it
using the command at the bottom, alongside the URL and port I received after running `minikube service --alsologtostderr -v=3 contour -n heptio-contour`
Ah. Yeah.
I'm guessing you built that container with a `docker` command on your mac?
yes.
Minikube is a VM, so we can think of it kind of like a remote machine.
When you built your docker container locally, that image lives on your mac.
Flyte is running inside your Minikube VM, which is a totally separate machine.
So when Flyte launches your task, it's looking for this docker image, but that docker image does not exist on the machine (the minikube VM).
Make sense? If so, I can propose a few solutions.
yes that makes sense.
Cool. So, here are 3 ways you can solve this issue:
1. Build the docker container _inside_ the minikube VM. By that, I mean use `minikube ssh` and treat that as your development environment. Run your same `docker build` command there. Then, your VM will have the image.
2. Push your image to a remote datastore like dockerhub. Your VM will be able to pull the remote image from dockerhub.
3. Use docker-for-mac instead of minikube. This will run K8s on your mac, so Flyte will be running on the same machine you built the image on.
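For option 1, the flow would look roughly like this (standard minikube commands; the image tag here is illustrative, not from this thread):

```shell
# Either open a shell inside the minikube VM and run your build there...
minikube ssh
# ...or, from the mac, point the local docker CLI at the VM's Docker daemon:
eval $(minikube docker-env)
docker build -t myflyteproject:latest .    # illustrative tag
docker images | grep myflyteproject        # confirm the VM's daemon now has it
```

Either way, the image ends up in the Docker daemon that Flyte's kubelet pulls from, so no registry push is needed.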
what do you recommend here? my goal is to get a working environment up ASAP, so I can build some test workflows locally. They won't require processing large datasets at this time.
I would do #1, tbh
Are you using the `flytesnacks` repo to register your first image?
yes, I am running `make docker_build` from inside of the repo
and then attempting to run the container
Cool. So I would git pull that from within your minikube VM.
There is one other small snag you're going to hit.
You'll need to change this line:
https://github.com/lyft/flytesnacks/blob/master/python/Dockerfile#L33
I think the image name is incorrect. When you're done building the image, run `docker images`
and verify the name of the docker image.
I think you probably just need to remove `docker.io` from that line. If so, LMK because I'm going to put in a PR to fix it in the flytesnacks repo.
Thanks, I've got this built inside of minikube and registered the xgboost tasks. However, now when I look at my Flyte console, which I have a tunnel to from my local browser, I don't see any tasks under flytesnacks.
Did your `register` work successfully?
Are you registering under the `flytesnacks` project? if not, change the `-p` in your registration command to `flytesnacks`
yes to both.
no tasks or workflows?
0.5.3 should be fine
this honestly sounds like a mismatch internally in admin.
assuming this is a local installation, could you kill the admin pod and let a new one come up and then try again?
Hello Everyone!
could you please advise what is the minimal required `Flytekit` version for the new `Flyte release`?
I think I use an old version
I got this error when registering a workflow built with `flytekit==0.5.3`
```  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 729, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNKNOWN
	details = "failed database operation with column "type" of relation "tasks" does not exist"
	debug_error_string = "{"created":"@1586327555.355034200","description":"Error received from peer ipv4:10.200.63.62:80","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"failed database operation with column "type" of relation "tasks" does not exist","grpc_status":2}"```
hmm, when we roll back to the previous Flyte release, everything works fine …
it is in AWS EKS
which version worked?
docker.io/lyft/flyteadmin:v0.2.1 works
docker.io/lyft/flyteadmin:v0.2.4 failed for us
can you run v0.2.4 again?
and kill the pod and let a new one come up?
i feel like one of the migrations didn’t run for some reason.
the migrations should run as part of the init containers.
also, can you confirm that the admin version for all the init containers also matches v0.2.4?
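A quick way to check that last point (a sketch; the `flyte` namespace and `flyteadmin` deployment name are guesses for a typical install, adjust to your overlay):

```shell
# List the images used by flyteadmin's init containers (which run the DB
# migrations) and by the main container; the version tags should all match.
kubectl -n flyte get deployment flyteadmin -o \
  jsonpath='{.spec.template.spec.initContainers[*].image}{"\n"}{.spec.template.spec.containers[*].image}{"\n"}'
```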
Yes, it is 0.2.1 in init! (due to how it's placed in our overlays)
Let me check again!
cool, let me know
Yes, it was my bad
Everything works :+1:
Thank you very much!
Can we look in a bit
Hello :hand:
Could you please help me figure out a possible misconfiguration for the new `Datacatalog`? When starting `v0.2.1`, this new log record appeared:
```{"json":{"src":"stowstore.go:76"},"level":"warning","msg":"stow configuration section missing, defaulting to legacy s3/minio connection config","ts":"2020-04-09T18:48:07Z"}```
but there is no such log when running datacatalog `v0.1.1`
Thank you in advance!
# Storage config:
```storage:
  type: s3
  container: {{ AWS_S3_BUCKET }}
  connection:
    access-key: {{ AWS_ACCESS_KEY }}
    auth-type: accesskey
    secret-key: {{ AWS_SECRET_KEY }}
    region: us-east-1```
# Start logs with Datacatalog 0.1.1
```Using config file: [/etc/datacatalog/config/datacatalog_config.yaml]
time="2020-04-09T18:50:16Z" level=info msg="Using config file: [/etc/datacatalog/config/datacatalog_config.yaml]"
time="2020-04-09T18:50:16Z" level=info msg="Config section [logger] updated. Firing updated event." src="viper.go:317"
{"json":{"src":"viper.go:315"},"level":"info","msg":"Config section [database] updated. No update handler registered.","ts":"2020-04-09T18:50:16Z"}
{"json":{"src":"viper.go:315"},"level":"info","msg":"Config section [datacatalog] updated. No update handler registered.","ts":"2020-04-09T18:50:16Z"}
{"json":{"src":"viper.go:315"},"level":"info","msg":"Config section [application] updated. No update handler registered.","ts":"2020-04-09T18:50:16Z"}
{"json":{"src":"viper.go:315"},"level":"info","msg":"Config section [storage] updated. No update handler registered.","ts":"2020-04-09T18:50:16Z"}
{"json":{"src":"serve.go:67"},"level":"info","msg":"Serving DataCatalog http on port :0","ts":"2020-04-09T18:50:16Z"}
{"json":{"app_name":"datacatalog","src":"service.go:91"},"level":"info","msg":"Created data storage.","ts":"2020-04-09T18:50:17Z"}
{"json":{"app_name":"datacatalog","src":"service.go:110"},"level":"info","msg":"Created DB connection.","ts":"2020-04-09T18:50:17Z"}
{"json":{"src":"server.go:95"},"level":"info","msg":"Starting profiling server on port [10254]","ts":"2020-04-09T18:50:17Z"}
{"json":{"src":"serve.go:48"},"level":"info","msg":"Serving DataCatalog Insecure on port :8089","ts":"2020-04-09T18:50:17Z"}```
# Start Logs with Datacatalog 0.2.1
```Warn: No metricsProvider set for the workqueue
Using config file: [/etc/datacatalog/config-app/datacatalog_config.yaml]
time="2020-04-09T18:48:07Z" level=info msg="Using config file: [/etc/datacatalog/config-app/datacatalog_config.yaml]"
time="2020-04-09T18:48:07Z" level=info msg="Config section [database] updated. No update handler registered." src="viper.go:315"
time="2020-04-09T18:48:07Z" level=info msg="Config section [datacatalog] updated. No update handler registered." src="viper.go:315"
time="2020-04-09T18:48:07Z" level=info msg="Config section [application] updated. No update handler registered." src="viper.go:315"
time="2020-04-09T18:48:07Z" level=info msg="Config section [storage] updated. No update handler registered." src="viper.go:315"
time="2020-04-09T18:48:07Z" level=info msg="Config section [logger] updated. Firing updated event." src="viper.go:317"
{"json":{"src":"serve.go:71"},"level":"info","msg":"Serving DataCatalog http on port :0","ts":"2020-04-09T18:48:07Z"}
{"json":{"src":"stowstore.go:76"},"level":"warning","msg":"stow configuration section missing, defaulting to legacy s3/minio connection config","ts":"2020-04-09T18:48:07Z"}
{"json":{"app_name":"datacatalog","src":"service.go:79"},"level":"info","msg":"Created data storage.","ts":"2020-04-09T18:48:07Z"}
{"json":{"app_name":"datacatalog","src":"service.go:98"},"level":"info","msg":"Created DB connection.","ts":"2020-04-09T18:48:07Z"}
{"json":{"src":"server.go:96"},"level":"info","msg":"Starting profiling server on port [10254]","ts":"2020-04-09T18:48:07Z"}
{"json":{"src":"serve.go:49"},"level":"info","msg":"Serving DataCatalog Insecure on port :8089","ts":"2020-04-09T18:48:07Z"}```
remind me again what the deployment is? is this on eks?
hi Yee!
yes this is eks
this is the configuration section that we have for data catalog
```storage:
  type: s3
  connection:
    auth-type: iam
    region: us-east-1
  cache:
    max_size_mbs: 1024
    target_gc_percent: 70
  container: "your-s3-bucket-name"```
do you have that somewhere?
usually when something like this happens, it’s either a misconfiguration issue, or the config files aren’t even being loaded.
we have this one:
```storage:
  type: s3
  container: {{ AWS_S3_BUCKET }}
  connection:
    access-key: {{ AWS_ACCESS_KEY }}
    auth-type: accesskey
    secret-key: {{ AWS_SECRET_KEY }}
    region: us-east-1```
near the top of the logs for catalog, when the process starts after the pod has been brought up, it should list all the config files it's loaded.
can you look for something like that?
this is from my local sandbox deployment but something along these lines
```time="2020-03-30T16:14:51Z" level=info msg="Using config file: [/etc/datacatalog/config/datacatalog_config.yaml]"
time="2020-03-30T16:14:51Z" level=info msg="Config section [logger] updated. Firing updated event." src="viper.go:317"
{"json":{"src":"viper.go:315"},"level":"info","msg":"Config section [database] updated. No update handler registered.","ts":"2020-03-30T16:14:51Z"}
{"json":{"src":"viper.go:315"},"level":"info","msg":"Config section [datacatalog] updated. No update handler registered.","ts":"2020-03-30T16:14:51Z"}
{"json":{"src":"viper.go:315"},"level":"info","msg":"Config section [application] updated. No update handler registered.","ts":"2020-03-30T16:14:51Z"}
{"json":{"src":"viper.go:315"},"level":"info","msg":"Config section [storage] updated. No update handler registered.","ts":"2020-03-30T16:14:51Z"}
{"json":{"src":"serve.go:67"},"level":"info","msg":"Serving DataCatalog http on port :0","ts":"2020-03-30T16:14:51Z"}```
sure,
It says `using config file: [ … ]` (line 2 of the pasted logs)
And we got the `warning` (line 11)
These are the logs from starting datacatalog `0.2.1`
but yes, using version `0.1.2` I got the same logs as you’ve shown me
Andrew Chan any idea what this might be?
Ruslan Stanevich stupid question, but you’re obviously not running minio as part of your eks deployment…
so does data catalog work? I’m wondering if the log line itself is erroneous.
sure, we don’t use minio as part of the deployment!
about datacatalog I’m not sure …
let me check
hmm, honestly it works fine
```grpcurl \
  -authority="flyte.datacatalog.grpc" \
  istio-ingress.domain.com:82 \
  datacatalog.DataCatalog/ListDatasets```
```{
  "datasets": [
    {
      "id": {
        ....```
would you mind filing an issue in that case on us?
in the flyte repo.
and we’ll look more into it when we have a bit more time.
I can take a look as well on the GH issue. To confirm: you haven’t changed your config map between v0.1.1 and v0.2?
And also Admin is up and running fine too? Datacatalog accesses s3 the same way Admin does
yes, no changes to the configmap
and sure, all components work fine. The usual scheduled workflows and notifications are running.
And you’re observing that aside from that one `warn` DataCatalog is up and running and works fine?
but I just got some feedback that a team couldn’t use datacatalog when we updated…
so maybe we will discuss their compatibility with the new datacatalog (if it makes sense)
due to the time-zone gap, communication is slower :joy:
Yes, for us I see no issues …
uh, can you elaborate on “team couldn’t use datacatalog when we updated”?
why?
that sounds bad
> but just got some feedback that team couldn’t use datacatalog when we updated…
Do you mean trying to use it outside of Flyte? i.e., directly communicating with it and not via flyteplugins?
I am waiting for details about this
we exposed Datacatalog via an AWS VPC endpoint for another team’s purposes
and this warn log made me doubt whether datacatalog was configured another way
Thank you for your help :pray:
Ruslan Stanevich can we get on a hangouts call sometime? I would like to connect and discuss how you are using Flyte etc and see how we could improve our working relationship
I think this is a great idea!
And I would like to inform other teams that use Flyte.
We will try to collect questions/interested people and arrange a call in your morning (our evening)
So, I think we can arrange a meeting next week.
I’ll let you know more precisely after the weekend
thank you Ketan Umare :pray:
Ruslan Stanevich absolutely, looking forward to it
Ketan Umare, do you have a chance to call in the morning, ~9..10 am in your timezone?
Would Thursday suit you?
ya morning works, let me check on Thursday
I can do next wednesday?
or monday
i can also do 10:00 am tomorrow
Hi!
Wednesday at 10am (our 8pm) sounds good
tomorrow
so can we create a hangouts meet at 10am and send the invitation to you?
add me too?
great!
please advise if we can invite someone else :pray:
Ketan Umare Yee Hello! we are in hangouts room
logging in now
Let me know if you need me in the meeting, if there are datacatalog-specific topics
| logging in now
|
currently it is not possible to do this - could you open up a github issue in the flyte repo please?
could you also describe the use-case/why you want to do this? context always helps.
| Hi everyone !
Could you please help? I've set up a schedule for the launch plan in my project. Question: how to prevent a new launch plan execution if the previous one is still running?
I tried to find such an option in the source code but couldn't find it. Concurrent execution of the job is unacceptable for my use case.
Thank you in advance!
|
Hey Igor Valko :wave:
To clarify your question. Are you asking if your plugin will have a client so that you can create and watch k8s resources? If so, the answer is yes, your plugin should have access to that.
| Hi. I’m working on flytepropeller plugin. Just a quick question here: am I supposed not to have an access to kubeClient or there is a way to use it?
|
You might be referring core plugins. For k8s plugins I see no means to reach out to kubeClient <https://github.com/lyft/flytepropeller/blob/master/pkg/controller/nodes/task/k8s/plugin_manager.go>
or am I missing smth?
| Hey Igor Valko :wave:
To clarify your question. Are you asking if your plugin will have a client so that you can create and watch k8s resources? If so, the answer is yes, your plugin should have access to that.
|
not missing anything…
so the plugin framework is a bit split right now… flyteplugins contains all the interfaces, and flytepropeller implements them.
since most use-cases deal with K8s resources, and because there’s a lot of commonality between plugins that just deal with K8s objects, those were all grouped together, into the plugin_manager you referenced.
however, in the interest of iteration speed, it was done in the flytepropeller repo. in the future we expect to move it out (along with some general restructuring).
so… if you’re working on a new plugin for k8s, then i think it belongs there.
if you’re not, and you’re writing a general plugin that goes into the flyteplugins repo, the client is available yes, in the SetupContext() only of the plugin.
so it’s incumbent upon the plugin writer to save the pointer
does this make sense?
| You might be referring core plugins. For k8s plugins I see no means to reach out to kubeClient <https://github.com/lyft/flytepropeller/blob/master/pkg/controller/nodes/task/k8s/plugin_manager.go>
or am I missing smth?
|
Correct :slightly_smiling_face:
| Thank you for your time and answers!
It was very helpful and inspiring meeting!
Can you please remind us - Flyte OSS meeting is held on Tuesday at 9AM, correct?
|
Thank you! Johnny Burns
Do you hold meetings in Hangouts, Zoom, or something else?
| Correct :slightly_smiling_face:
|
Ruslan Stanevich its on <http://github.com/lyft/flyte|github.com/lyft/flyte> page
i should have sent earlier
| Thank you! Johnny Burns
Do you hold meetings in Hangouts, Zoom, or something else?
|
Are you saying you want to perform CI in AWS?
| We should probably use <https://aws.amazon.com/awscredits/|https://aws.amazon.com/awscredits/> to perform CI for Flyte releases
|
Ya
It’s free for some
| Are you saying you want to perform CI in AWS?
|
Interesting. Can you elaborate a bit more? What's your vision for running free CI on AWS?
| Ya
It’s free for some
|
Every merge to master, at least on the Flyte branch, gets a set of golden tests
| Interesting. Can you elaborate a bit more? What's your vision for running free CI on AWS?
|
How do we qualify for free AWS though (the link you showed is just a trial period, I think)?
And would you manually manage the testing platform?
Every merge to master, at least on the Flyte branch, gets a set of golden tests
|
No they have free for open source
| How do we qualify for free AWS though (the link you showed is just a trial period, I think)?
And would you manually manage the testing platform?
|
wow super cool! if you have any questions in regards to working with flytekit, feel free to ping me.
| Hey all, a bit of a vague request. Many of you know I am building a dagster -> flytekit compiler. I have it seemingly working (can execute the python tasks locally), can successfully register the tasks.
|
Jordan Bramble What happens when you describe the pods using `kubectl describe` ?
Almost certainly memory/cpu I think. Here's what I am noticing: my k8s cluster has spun up on the order of 30 syncresources pods that seem to be stuck in the Pending state. My guess is that is what is going wrong.
|
I ended up deleting the ones that were pending. I re-registered the tasks, and now when I launch workflow, it hangs with status "UNKNOWN". I no longer see a container corresponding to the workflow being created. This is different than before.
screenshot inbound
| Jordan Bramble What happens when you describe the pods using `kubectl describe` ?
|
This is likely a different problem from your previous problem. Is it possible `propeller` is not running?
How did you delete the pending ones?
can you do `kubectl get pods -n flyte` ?
| I ended up deleting the ones that were pending. I re-registered the tasks, and now when I launch workflow, it hangs with status "UNKNOWN". I no longer see a container corresponding to the workflow being created. This is different than before.
screenshot inbound
|
I deleted the containers that were hanging doing:
`kubectl get pods -n flyte | grep Pending | awk '{print $1}' | xargs kubectl -n flyte delete pod`
here are current pods in the flyte namespace
```Jordans-MacBook-Pro-2:flytekit jordanbramble$ kubectl get pods -n flyte
NAME READY STATUS RESTARTS AGE
datacatalog-6f9db4f88f-2vbg8 1/1 Running 0 21h
flyteadmin-694cc79fb4-dmr7x 2/2 Running 0 21h
flyteconsole-749fcd46d5-bn7rk 1/1 Running 0 21h
flytepropeller-6f897bfd68-4krx8 1/1 Running 0 21h
minio-f58cffb47-qqccw 1/1 Running 0 21h
postgres-759fc6996-bkh95 1/1 Running 0 21h
redis-0 1/1 Running 0 21h
syncresources-1587527520-7xcgg 0/1 Completed 0 19h
syncresources-1587527580-gprkr 0/1 Completed 0 19h
syncresources-1587527640-wtm2m 0/1 Completed 0 19h```
| This is likely a different problem from your previous problem. Is it possible `propeller` is not running?
How did you delete the pending ones?
can you do `kubectl get pods -n flyte` ?
|
Your pending pods were running in the `flyte` namespace?
| I deleted the containers that were hanging doing:
`kubectl get pods -n flyte | grep Pending | awk '{print $1}' | xargs kubectl -n flyte delete pod`
here are current pods in the flyte namespace
```Jordans-MacBook-Pro-2:flytekit jordanbramble$ kubectl get pods -n flyte
NAME READY STATUS RESTARTS AGE
datacatalog-6f9db4f88f-2vbg8 1/1 Running 0 21h
flyteadmin-694cc79fb4-dmr7x 2/2 Running 0 21h
flyteconsole-749fcd46d5-bn7rk 1/1 Running 0 21h
flytepropeller-6f897bfd68-4krx8 1/1 Running 0 21h
minio-f58cffb47-qqccw 1/1 Running 0 21h
postgres-759fc6996-bkh95 1/1 Running 0 21h
redis-0 1/1 Running 0 21h
syncresources-1587527520-7xcgg 0/1 Completed 0 19h
syncresources-1587527580-gprkr 0/1 Completed 0 19h
syncresources-1587527640-wtm2m 0/1 Completed 0 19h```
|
yes, they were all syncresources-*
I previously had pods running in a namespace created for the task that I registered in Flyte, but I deleted those as well after aborting them. Some were in an Error state.
| Your pending pods were running in the `flyte` namespace?
|
Hmmm... Your workflow still hasn't launched? Maybe check the FlytePropeller logs?
| yes, they were all syncresources-*
I previously had pods running in a namespace created for the task that I registered in Flyte, but I deleted those as well after aborting them. Some were in an Error state.
|
forgive me for a dumb question, do I do that by running kubectl logs for that pod?
| Hmmm... Your workflow still hasn't launched? Maybe check the FlytePropeller logs?
|
Ah. yeah, you can do `kubectl logs -n flyte {propeller pod name}`
Also, what happens if you do `kubectl get pods -n {your-project-namespace}-{your-environment}`
maybe `dagstertest-staging` is the namespace
| forgive me for a dumb question, do I do that by running kubectl logs for that pod?
|
yes that was the namespace, when I originally started this thread. but launching in the flyte UI is no longer creating a pod under that namespace anymore
when I try to access propeller logs:
`Error from server: Get <https://172.17.0.2:10250/containerLogs/flyte/flytepropeller-6f897bfd68-4krx8/flytepropeller>: dial tcp 172.17.0.2:10250: connect: connection refused`
I am surprised by this, I thought all of these pods were running locally on minikube.
| Ah. yeah, you can do `kubectl logs -n flyte {propeller pod name}`
Also, what happens if you do `kubectl get pods -n {your-project-namespace}-{your-environment}`
maybe `dagstertest-staging` is the namespace
|
Maybe your minikube VM doesn't expose that port.
Maybe try to `minikube ssh` and then check logs
You can also check the logs for `flyteadmin` service
| yes that was the namespace, when I originally started this thread. but launching in the flyte UI is no longer creating a pod under that namespace anymore
when I try to access propeller logs:
`Error from server: Get <https://172.17.0.2:10250/containerLogs/flyte/flytepropeller-6f897bfd68-4krx8/flytepropeller>: dial tcp 172.17.0.2:10250: connect: connection refused`
I am surprised by this, I thought all of these pods were running locally on minikube.
|
inside the minikube VM, any idea of the significance of these '/pause' commands?
```
docker@minikube:~/dagster_flyte_test$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33d62a034040 gcr.io/heptio-images/contour "contour serve --inc…" 22 hours ago Up 22 hours k8s_contour-unknown_contour-d7cff74b5-r8mrv_heptio-contour_84c1a2c5-400f-4de0-b261-a92cdd33f64d_0
a101f465f201 redocly/redoc "sh -c 'ln -s /usr/s…" 22 hours ago Up 22 hours k8s_redoc_flyteadmin-694cc79fb4-dmr7x_flyte_43875e46-837f-487e-a62e-9df797d5f113_0
68a1332e8e89 gcr.io/spark-operator/spark-operator "/usr/bin/spark-oper…" 22 hours ago Up 22 hours k8s_sparkoperator-unknown_sparkoperator-96ffc7d89-6zdtq_sparkoperator_9526aae1-57a8-42b5-8217-f18ba3e4683c_0
efbfe8b000db envoyproxy/envoy-alpine "envoy -c /config/co…" 22 hours ago Up 22 hours k8s_envoy-envoyingressv1_contour-d7cff74b5-r8mrv_heptio-contour_84c1a2c5-400f-4de0-b261-a92cdd33f64d_0
c661e16339cc 52f60f817d16 "datacatalog --logto…" 22 hours ago Up 22 hours k8s_datacatalog_datacatalog-6f9db4f88f-2vbg8_flyte_7ca5e39e-9941-4f65-9466-17505d9a817c_0
68b80f51e5f0 66c598488568 "flyteadmin --logtos…" 22 hours ago Up 22 hours k8s_flyteadmin_flyteadmin-694cc79fb4-dmr7x_flyte_43875e46-837f-487e-a62e-9df797d5f113_0
993a7f700108 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_sparkoperator-96ffc7d89-6zdtq_sparkoperator_9526aae1-57a8-42b5-8217-f18ba3e4683c_0
a07d6ed8e489 bitnami/redis "/app-entrypoint.sh …" 22 hours ago Up 22 hours k8s_redis-resource-manager_redis-0_flyte_34fc92ab-b139-4a9d-b03d-517357c8d034_0
faaf08d47321 postgres "docker-entrypoint.s…" 22 hours ago Up 22 hours k8s_postgres_postgres-759fc6996-bkh95_flyte_b6230298-5a24-4f67-88a3-bf194e6fffb1_0
7564ed54b69c lyft/flyteconsole "/nodejs/bin/node in…" 22 hours ago Up 22 hours k8s_flyteconsole_flyteconsole-749fcd46d5-bn7rk_flyte_6b67253b-170c-4844-ac07-768328e84b2e_0
ee1ea2da1f91 minio/minio "/usr/bin/docker-ent…" 22 hours ago Up 22 hours k8s_minio_minio-f58cffb47-qqccw_flyte_d6cd809f-4fa2-40f6-9102-73283a5b1890_0
f7f27e19889d k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_contour-d7cff74b5-r8mrv_heptio-contour_84c1a2c5-400f-4de0-b261-a92cdd33f64d_0
954f319247c4 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_flyteadmin-694cc79fb4-dmr7x_flyte_43875e46-837f-487e-a62e-9df797d5f113_0
4e8bd45cb1d5 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_redis-0_flyte_34fc92ab-b139-4a9d-b03d-517357c8d034_0
f388e6da9b66 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_flytepropeller-6f897bfd68-4krx8_flyte_c95423eb-4003-4206-958d-401bd8131fe5_0
66c2d1a92c2e k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_postgres-759fc6996-bkh95_flyte_b6230298-5a24-4f67-88a3-bf194e6fffb1_0
d471a6426605 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_minio-f58cffb47-qqccw_flyte_d6cd809f-4fa2-40f6-9102-73283a5b1890_0
880edc7cb3d8 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_datacatalog-6f9db4f88f-2vbg8_flyte_7ca5e39e-9941-4f65-9466-17505d9a817c_0
70948f2729fb k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_flyteconsole-749fcd46d5-bn7rk_flyte_6b67253b-170c-4844-ac07-768328e84b2e_0
ba2cc20ff246 4689081edb10 "/storage-provisioner" 22 hours ago Up 22 hours k8s_storage-provisioner_storage-provisioner_kube-system_c520de17-88ec-4048-afec-4b8ddb1c0824_1
6545d5844fb8 43940c34f24f "/usr/local/bin/kube…" 22 hours ago Up 22 hours k8s_kube-proxy_kube-proxy-5gdrd_kube-system_1c6697ba-ea64-4a09-b425-2bf52cccb08e_0
812e4a17d215 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_kube-proxy-5gdrd_kube-system_1c6697ba-ea64-4a09-b425-2bf52cccb08e_0
0f05d4700c5c k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_storage-provisioner_kube-system_c520de17-88ec-4048-afec-4b8ddb1c0824_0
00b42b722632 67da37a9a360 "/coredns -conf /etc…" 22 hours ago Up 22 hours k8s_coredns_coredns-66bff467f8-d4lmm_kube-system_1d96de74-9f98-49d2-b796-e131768dc5c1_0
ab3390711d79 67da37a9a360 "/coredns -conf /etc…" 22 hours ago Up 22 hours k8s_coredns_coredns-66bff467f8-9fps6_kube-system_00ccab71-d412-4137-87a1-404100c73eb4_0
8713a2a69a5b aa67fec7d7ef "/bin/kindnetd" 22 hours ago Up 22 hours k8s_kindnet-cni_kindnet-2ncvj_kube-system_dcbf8ff7-bcf9-497a-8ffc-75fc900a58b4_0
15c6f7df4812 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_kindnet-2ncvj_kube-system_dcbf8ff7-bcf9-497a-8ffc-75fc900a58b4_0
b414ff1d4a11 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_coredns-66bff467f8-d4lmm_kube-system_1d96de74-9f98-49d2-b796-e131768dc5c1_0
83c21644b747 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_coredns-66bff467f8-9fps6_kube-system_00ccab71-d412-4137-87a1-404100c73eb4_0
bd5e08da5f5c 303ce5db0e90 "etcd --advertise-cl…" 22 hours ago Up 22 hours k8s_etcd_etcd-minikube_kube-system_ca02679f24a416493e1c288b16539a55_0
14d447e9a97d 74060cea7f70 "kube-apiserver --ad…" 22 hours ago Up 22 hours k8s_kube-apiserver_kube-apiserver-minikube_kube-system_45e2432c538c36239dfecde67cb91065_0
23ba80eb3f24 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_etcd-minikube_kube-system_ca02679f24a416493e1c288b16539a55_0
72a44a3fafbb k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_kube-scheduler-minikube_kube-system_5795d0c442cb997ff93c49feeb9f6386_0
d62e857b95d5 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_kube-controller-manager-minikube_kube-system_c92479a2ea69d7c331c16a5105dd1b8c_0
3b4cc84d6c30 k8s.gcr.io/pause:3.2 "/pause" 22 hours ago Up 22 hours k8s_POD_kube-apiserver-minikube_kube-system_45e2432c538c36239dfecde67cb91065_0```
looks like that happened for propeller and a few other things.
| Maybe your minikube VM doesn't expose that port.
Maybe try to `minikube ssh` and then check logs
You can also check the logs for `flyteadmin` service
|
Yes, it uses Redis to throttle requests to external services to maintain quotas and pools
It should be optional
In Kubernetes this is achieved by resource quotas and other means
| What does Flyte use redis for? Is it for allocating and releasing tokens for external resources?
|
Just to add color to this: In an ideal scenario, external services maintain server-side enforced quotas/limits within which the service can be expected to behave normally. In the real world though, this is not always the case. As Ketan said, this is an optional component in the sense that plugin authors can choose to use it if the target service they communicate with doesn't follow that norm. But as a system administrator, if you choose to enable one of these plugins (e.g. the Hive plugin), you are required to set up Redis (or choose Noop, which basically means you get no client-side protection)
Yes, it uses Redis to throttle requests to external services to maintain quotas and pools
It should be optional
In Kubernetes this is achieved by resource quotas and other means
|
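To make the "token pool" idea concrete, here is a minimal sketch of a client-side allocation pool. This is illustrative only, not Flyte's actual implementation: Flyte's resource manager keeps this state in Redis so the pool is shared across propeller instances, whereas this sketch uses an in-memory set.

```python
import threading

class TokenPool:
    """Client-side allocation pool: at most `size` tokens may be held at once.

    Illustrative sketch only; a Redis-backed version would let several
    clients share one pool."""

    def __init__(self, size):
        self.size = size
        self.held = set()
        self.lock = threading.Lock()

    def allocate(self, token):
        """Return True if the caller may proceed, False if the pool is exhausted."""
        with self.lock:
            if token in self.held:
                return True  # re-allocating an already-held token is idempotent
            if len(self.held) >= self.size:
                return False
            self.held.add(token)
            return True

    def release(self, token):
        with self.lock:
            self.held.discard(token)

pool = TokenPool(size=2)
assert pool.allocate("exec-a")
assert pool.allocate("exec-b")
assert not pool.allocate("exec-c")  # quota reached: caller should back off and retry
pool.release("exec-a")
assert pool.allocate("exec-c")      # capacity freed, request admitted
```

A plugin would call `allocate` before hitting the external service and `release` when the work finishes, so the service never sees more than `size` concurrent requests from this client.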
Do you mind taking a look at this <https://flyte-org.slack.com/archives/CP2HDHKE1/p1584576549032500?thread_ts=1584576549.032500|thread> for suggestions on how to approach this?
I would recommend searching the <#CP2HDHKE1|onboarding> channel; there are a lot of gems in there :slightly_smiling_face:
| Another question, after installing the Flyte sandbox I'm running the flytesnacks workflow and receiving a 500:
```$ docker run --network host -e FLYTE_PLATFORM_URL='127.0.0.1:30081' lyft/flytesnacks:v0.1.0 pyflyte -p flytesnacks -d development -c sandbox.config register workflows
Using configuration file at /app/sandbox.config
Flyte Admin URL 127.0.0.1:30081
Running task, workflow, and launch plan registration for flytesnacks, development, ['workflows'] with version 46045e6383611da1cb763d64d846508806fce1a4
Registering Task: workflows.edges.edge_detection_canny
Traceback (most recent call last):
File "/app/venv/bin/pyflyte", line 11, in <module>
sys.exit(main())
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/app/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 86, in workflows
register_all(project, domain, pkgs, test, version)
File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 24, in register_all
o.register(project, domain, name, version)
File "/app/venv/lib/python3.6/site-packages/flytekit/common/exceptions/scopes.py", line 158, in system_entry_point
return wrapped(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/common/tasks/task.py", line 141, in register
_engine_loader.get_engine().get_task(self).register(id_to_register)
File "/app/venv/lib/python3.6/site-packages/flytekit/engines/flyte/engine.py", line 234, in register
self.sdk_task
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/friendly.py", line 50, in create_task
spec=task_spec.to_flyte_idl()
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 12, in handler
return fn(*args, **kwargs)
File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 77, in create_task
return self._stub.CreateTask(task_create_request)
File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 604, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 506, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.CANCELLED
details = "Received http2 header with status: 500"
debug_error_string = "{"created":"@1587993306.222037400","description":"Received http2 :status header with non-200 OK status","file":"src/core/ext/filters/http/client/http_client_filter.cc","file_line":122,"grpc_message":"Received http2 header with status: 500","grpc_status":1,"value":"500"}"
>```
|
Hi Joseph, first of all welcome!
Hey, it is ‘advisable’ to avoid side effects because of idempotency: for example, to provide retries and deterministic re-execution, a task should be replayable
That being said if you are calling microservices then there are ways of making it idempotent
But making it replayable is very hard
Also if you are calling microservices it should be fine to ignore the warning, but also disable caching (default is disabled)
| _New user here._
I was hoping to invoke microservices with flyte. However, in the discussion on task <https://github.com/lyft/flyte/blob/25d79e37bd02f200976312cbe592a66c563d0041/rsts/user/concepts/tasks.rst|requirements> there is this note on pure functions (*bold* text is my own to call out the question):
> Is it a *pure* function? i.e. does it have side effects that are not known to the system (e.g. calls a web-service). It's strongly advisable to *avoid side-effects* in tasks. When side-effects are required, ensure that those operations are *idempotent*.
What are the best practices when calling RESTful (or gRPC) APIs so that one doesn't invalidate the *idempotent requirement?*
|
To illustrate ketan's point, if you had an order processing workflow, and the last step calls out to the payments service, you'd need to make sure that if the task re-runs, it doesn't charge the credit card twice.
| Hi Joseph, first of all welcome!
Hey, it is ‘advisable’ to avoid side effects because of idempotency: for example, to provide retries and deterministic re-execution, a task should be replayable
That being said if you are calling microservices then there are ways of making it idempotent
But making it replayable is very hard
Also if you are calling microservices it should be fine to ignore the warning, but also disable caching (default is disabled)
|
Understood. So, if I use compensating transactions I would be cool.
| To illustrate ketan's point, if you had an order processing workflow, and the last step calls out to the payments service, you'd need to make sure that if the task re-runs, it doesn't charge the credit card twice.
|
Yup
And again this is a correctness requirement
Not an operational requirement
If you don’t mind I would love to understand your requirement
| Understood. So, if I use compensating transactions I would be cool.
|
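Compensating transactions can be sketched as a saga runner: each step pairs an action with an undo, and on failure the completed steps are undone in reverse order. This is an illustrative sketch only (not a Flyte feature; the step functions are hypothetical):

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order; if any action fails,
    run the compensations for the completed steps in reverse order,
    then re-raise. Illustrative sketch only."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        raise

log = []

def reserve():
    log.append("reserve")        # e.g. put a hold on inventory

def unreserve():
    log.append("unreserve")      # compensation: release the hold

def charge_card():
    raise RuntimeError("payment failed")  # simulated downstream failure

try:
    run_saga([(reserve, unreserve), (charge_card, lambda: None)])
except RuntimeError:
    pass

# the completed step was compensated before the error propagated
assert log == ["reserve", "unreserve"]
```

In a workflow engine, each action and compensation would itself need to be idempotent, since the runner may be retried part-way through.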
Great question
so here is what we do at Lyft, i am not saying this is the right way
1. When you write code, you write one task at a time and hopefully unit test it somewhat
2. When you are somewhat confident, you can build a container and register it
3. then you run an execution and debug
2/3 are done automatically using Pull Requests in github
we automatically push the container and register the flow with Flyteadmin
| _Another new user question_ -- What are the recommended practices for the edit, debug, test cycle in flyte?
|
That process makes sense.
I'd like more details on the debugging side. I'd assume that most of the time people are just logging what seems to be important. When something goes wrong, they look at the output and add extra prints to chase down the issue.
| Great question
so here is what we do at Lyft, i am not saying this is the right way
1. When you write code, you write one task at a time and hopefully unit test it somewhat
2. When you are somewhat confident, you can build a container and register it
3. then you run an execution and debug
2/3 are done automatically using Pull Requests in github
we automatically push the container and register the flow with Flyteadmin
|