workspace | channel | sentences | ts | user | sentence_id | timestamp | __index_level_0__ |
---|---|---|---|---|---|---|---|
pythondev | help | <@Catherina> you would be better off writing a quick howto document for your scripts - spin up the env, run. If they depend on a python package that is not in the standard python install you can even catch the `ImportError` and print the venv startup help text | 2019-06-05T15:21:32.243100 | Clemmie | pythondev_help_Clemmie_2019-06-05T15:21:32.243100 | 1,559,748,092.2431 | 27,021 |
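A rough sketch of the `ImportError` fallback Clemmie describes; the dependency name and venv path are placeholders, not anything from the thread:
```python
# Guard a third-party import so users outside the venv get setup instructions
# instead of a bare traceback. `requests` and the path below are made-up examples.
try:
    import requests
except ImportError:
    raise SystemExit(
        "This script needs its virtualenv. Try:\n"
        "    source /opt/myscripts/venv/bin/activate\n"
        "then run the script again."
    )
```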
pythondev | help | Yeah but I want people to just be able to ssh into box -> have all scripts in user’s $PATH | 2019-06-05T15:23:24.244400 | Catherina | pythondev_help_Catherina_2019-06-05T15:23:24.244400 | 1,559,748,204.2444 | 27,022 |
pythondev | help | clearly explicitly setting the venv python’s path in the python script is… well, not optimal | 2019-06-05T15:23:49.244900 | Catherina | pythondev_help_Catherina_2019-06-05T15:23:49.244900 | 1,559,748,229.2449 | 27,023 |
pythondev | help | but not sure what are better alternatives | 2019-06-05T15:23:57.245200 | Catherina | pythondev_help_Catherina_2019-06-05T15:23:57.245200 | 1,559,748,237.2452 | 27,024 |
pythondev | help | <@Catherina> I agree with <@Clemmie>. if they are SSH'ing into the box and calling the script, then they can read the `README`. | 2019-06-05T15:25:43.245300 | Ashley | pythondev_help_Ashley_2019-06-05T15:25:43.245300 | 1,559,748,343.2453 | 27,025 |
pythondev | help | or, just make it a service for them to query directly | 2019-06-05T15:26:22.245700 | Ashley | pythondev_help_Ashley_2019-06-05T15:26:22.245700 | 1,559,748,382.2457 | 27,026 |
pythondev | help | we’re having a little trouble helping because you are really forcing a square peg into a round hole | 2019-06-05T15:27:09.246700 | Clemmie | pythondev_help_Clemmie_2019-06-05T15:27:09.246700 | 1,559,748,429.2467 | 27,027 |
pythondev | help | ok… but with the readme route, how does that fit into the dev/qa/prod setup? At that point there’d be no need for a “prod” server | 2019-06-05T15:27:17.247000 | Catherina | pythondev_help_Catherina_2019-06-05T15:27:17.247000 | 1,559,748,437.247 | 27,028 |
pythondev | help | each user would just build his own | 2019-06-05T15:27:30.247500 | Catherina | pythondev_help_Catherina_2019-06-05T15:27:30.247500 | 1,559,748,450.2475 | 27,029 |
pythondev | help | via the readme | 2019-06-05T15:27:34.247900 | Catherina | pythondev_help_Catherina_2019-06-05T15:27:34.247900 | 1,559,748,454.2479 | 27,030 |
pythondev | help | what I wanted was for something centralized | 2019-06-05T15:27:56.248800 | Catherina | pythondev_help_Catherina_2019-06-05T15:27:56.248800 | 1,559,748,476.2488 | 27,031 |
pythondev | help | Anyone familiar with the pywin32 wrapper? I am having a bit of a challenge with FindWindow, definitely sure I have the right title, found it using EnumWindows | 2019-06-05T15:28:17.249700 | Arcelia | pythondev_help_Arcelia_2019-06-05T15:28:17.249700 | 1,559,748,497.2497 | 27,032 |
pythondev | help | depends - if these access other, more locked down machines for networking then I would have an ip whitelist on the other machines, only accessible from your control machine | 2019-06-05T15:28:21.249900 | Clemmie | pythondev_help_Clemmie_2019-06-05T15:28:21.249900 | 1,559,748,501.2499 | 27,033 |
pythondev | help | I would just have a jumpbox with a firewall or ingress rules | 2019-06-05T15:51:40.252500 | Jesusa | pythondev_help_Jesusa_2019-06-05T15:51:40.252500 | 1,559,749,900.2525 | 27,034 |
pythondev | help | <@Catherina> If you're dead set on this particular implementation path, I would recommend that you have your Jenkins server package up the virtualenv along with the script so that it's all one artifact. Then you deploy that artifact to your servers, along with a wrapper script that activates the virtualenv and runs the actual script. Drop a symlink to that wrapper script in `/usr/bin` and make it executable by the users who should be allowed to run it. | 2019-06-05T16:09:33.261200 | Carmen | pythondev_help_Carmen_2019-06-05T16:09:33.261200 | 1,559,750,973.2612 | 27,035 |
pythondev | help | That keeps your dependencies contained within the virtualenv, and your users are insulated from anything related to the virtualenv because they're just calling the wrapper script. | 2019-06-05T16:10:30.262100 | Carmen | pythondev_help_Carmen_2019-06-05T16:10:30.262100 | 1,559,751,030.2621 | 27,036 |
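For concreteness, a minimal sketch of the wrapper idea Carmen outlines, written here as a tiny Python launcher; all paths are invented placeholders, not part of the original suggestion:
```python
#!/usr/bin/env python3
# Hypothetical wrapper dropped into /usr/bin: it re-executes the real script
# with the python interpreter from the venv that was deployed alongside it.
import os
import sys

VENV_PYTHON = "/opt/myapp/venv/bin/python"      # venv shipped inside the artifact
REAL_SCRIPT = "/opt/myapp/scripts/do_thing.py"  # the actual script being wrapped

os.execv(VENV_PYTHON, [VENV_PYTHON, REAL_SCRIPT, *sys.argv[1:]])
```
Carmen's version activates the venv from a shell wrapper instead; either way the caller never has to know the venv exists.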
pythondev | help | that’s… a lot of work | 2019-06-05T16:40:22.263100 | Jesusa | pythondev_help_Jesusa_2019-06-05T16:40:22.263100 | 1,559,752,822.2631 | 27,037 |
pythondev | help | for a network guy | 2019-06-05T16:40:25.263300 | Jesusa | pythondev_help_Jesusa_2019-06-05T16:40:25.263300 | 1,559,752,825.2633 | 27,038 |
pythondev | help | if people are going to do `python /path/to/my/script` you could just alias python to activate the virtualenv ahead of time | 2019-06-05T16:49:24.264800 | Jesusa | pythondev_help_Jesusa_2019-06-05T16:49:24.264800 | 1,559,753,364.2648 | 27,039 |
pythondev | help | <@Catherina> If you make your scripts part of a python package and install them into a virtualenv, then they’ll show up inside that virtualenv’s bin directory. You can then symlink those files from anywhere (/usr/bin etc) and they’ll run inside the virtualenv. <https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html> | 2019-06-05T16:50:03.265700 | Letty | pythondev_help_Letty_2019-06-05T16:50:03.265700 | 1,559,753,403.2657 | 27,040 |
pythondev | help | (which is roughly the “wrapper script” idea, but the wrapper part itself is handled by virtualenv) | 2019-06-05T16:51:34.266200 | Letty | pythondev_help_Letty_2019-06-05T16:51:34.266200 | 1,559,753,494.2662 | 27,041 |
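A hedged example of the packaging route Letty links to, using setuptools `entry_points`; the project and module names are invented:
```python
# setup.py (illustrative only): installing this package into the venv puts a
# `do-thing` executable in the venv's bin/, which can then be symlinked from /usr/bin.
from setuptools import setup, find_packages

setup(
    name="ops-scripts",
    version="0.1",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            "do-thing = ops_scripts.cli:main",  # hypothetical module:function
        ],
    },
)
```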
pythondev | help | :thinking_face: | 2019-06-05T16:52:17.266400 | Catherina | pythondev_help_Catherina_2019-06-05T16:52:17.266400 | 1,559,753,537.2664 | 27,042 |
pythondev | help | This is maybe surprising behavior, but it is documented:
> However, all scripts installed in a virtual environment should be runnable without activating it, and run with the virtual environment’s Python automatically.
<https://docs.python.org/3/library/venv.html> | 2019-06-05T16:54:47.267400 | Letty | pythondev_help_Letty_2019-06-05T16:54:47.267400 | 1,559,753,687.2674 | 27,043 |
pythondev | help | So <@Letty> how is this more robust than adding the venv’s python path in my `#!`? | 2019-06-05T16:58:19.268300 | Catherina | pythondev_help_Catherina_2019-06-05T16:58:19.268300 | 1,559,753,899.2683 | 27,044 |
pythondev | help | For every script | 2019-06-05T16:58:37.268500 | Catherina | pythondev_help_Catherina_2019-06-05T16:58:37.268500 | 1,559,753,917.2685 | 27,045 |
pythondev | help | Is it actually better? | 2019-06-05T16:58:44.268700 | Catherina | pythondev_help_Catherina_2019-06-05T16:58:44.268700 | 1,559,753,924.2687 | 27,046 |
pythondev | help | I have a command line python module. If I send a ctrl+c event then I need to reset terminal. If I let it finish I don't need to reset terminal.
I can input commands, but they just don't appear on the command line. I think I've traced the issue to be caused by: `proc = Popen(commands, stdout=PIPE)` | 2019-06-05T17:07:32.270800 | Layla | pythondev_help_Layla_2019-06-05T17:07:32.270800 | 1,559,754,452.2708 | 27,047 |
pythondev | help | I think what's happening is I'm piping the stdout, and catching a signal at this point doesn't allow me to stop piping out. | 2019-06-05T17:18:32.271400 | Layla | pythondev_help_Layla_2019-06-05T17:18:32.271400 | 1,559,755,112.2714 | 27,048 |
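A guess at one way to reproduce and handle this, based only on the description above; `commands` is a stand-in for whatever is actually being run:
```python
# Read the child's stdout with communicate() and, on Ctrl+C, forward the signal
# and reap the child so the pipe is torn down instead of left half-read.
import signal
from subprocess import Popen, PIPE

commands = ["gdb", "--version"]  # placeholder command

proc = Popen(commands, stdout=PIPE)
try:
    out, _ = proc.communicate()
except KeyboardInterrupt:
    proc.send_signal(signal.SIGINT)  # pass the interrupt on to the child
    proc.wait()                      # make sure it exits before we give up the pipe
    raise
```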
pythondev | help | It’s effectively doing the same thing, but with the advantage that the `venv` module itself is building the wrapper. There’s a decent chance your way of adding the venv’s python to your `#!` line has a subtle bug with environments, while using the stdlib’s version is more likely to get it correct (and maintain it moving forward).
Not a huge plus, but it’s always good to use something directly supported instead of rolling your own. | 2019-06-05T17:41:15.271500 | Letty | pythondev_help_Letty_2019-06-05T17:41:15.271500 | 1,559,756,475.2715 | 27,049 |
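For reference, the per-script approach Catherina describes presumably looks something like this; the venv path is invented:
```python
#!/opt/venvs/ops/bin/python
# Hand-rolled variant: each script hard-codes one venv's interpreter in its shebang.
import sys

print(f"running under {sys.executable}")
```
The packaging route generates an equivalent launcher for each script automatically, which is the maintenance win Letty is pointing at.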
pythondev | help | Is the signal in the python code? | 2019-06-05T19:52:37.273100 | Bethany | pythondev_help_Bethany_2019-06-05T19:52:37.273100 | 1,559,764,357.2731 | 27,050 |
pythondev | help | Can you capture the stdout then print in Python, and use your logic there? | 2019-06-05T19:53:28.274200 | Bethany | pythondev_help_Bethany_2019-06-05T19:53:28.274200 | 1,559,764,408.2742 | 27,051 |
pythondev | help | I think my problem is I'm capturing the stdout while using GDB, and when the signal is caught by the process I'm always at `proc.wait()` so the stdout is still being sent to a pipe | 2019-06-05T20:07:45.275800 | Layla | pythondev_help_Layla_2019-06-05T20:07:45.275800 | 1,559,765,265.2758 | 27,052 |
pythondev | help | Oh so the subprocess is the one that receives the signal? | 2019-06-05T20:24:13.276300 | Bethany | pythondev_help_Bethany_2019-06-05T20:24:13.276300 | 1,559,766,253.2763 | 27,053 |
pythondev | help | hey all - hoping to get some help with a django/celery/mysql issue:
I have a django api miner that runs through celery to queue and run the jobs, and I use django-celery-results to check the status of jobs; this whole setup is running in docker linux containers.
When a job returns a larger data set, I keep getting a `mysql has gone away` error: <https://bpaste.net/show/cc6cb279f629>
I tried adjusting the batch size per the django docs to create a limited number of records at a time, but that seems to have no effect.
I tried calling `django.db.close_old_connections()` each time the job stores data, but that also did not have any effect.
my celery settings are: <https://bpaste.net/show/724d1cc516e8>
any idea what may be causing this issue? | 2019-06-05T20:28:45.276500 | Shirley | pythondev_help_Shirley_2019-06-05T20:28:45.276500 | 1,559,766,525.2765 | 27,054 |
pythondev | help | Usually this is because you’re operating on the same db connection that you established at the start of the task | 2019-06-05T20:32:49.277700 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:32:49.277700 | 1,559,766,769.2777 | 27,055 |
pythondev | help | the `get` and `store` methods are part of the same class; however, the `store` method, which touches the models, is not called until after the `get` method has returned results. | 2019-06-05T20:36:22.279400 | Shirley | pythondev_help_Shirley_2019-06-05T20:36:22.279400 | 1,559,766,982.2794 | 27,056 |
pythondev | help | (just verified) the class is re-initialized with each job call; so in theory the store method should be closing its connection? | 2019-06-05T20:38:02.281100 | Shirley | pythondev_help_Shirley_2019-06-05T20:38:02.281100 | 1,559,767,082.2811 | 27,057 |
pythondev | help | Not if you’re using the same queryset | 2019-06-05T20:38:45.281600 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:38:45.281600 | 1,559,767,125.2816 | 27,058 |
pythondev | help | so django would be keeping it open in anticipation of additional calls? | 2019-06-05T20:40:30.282000 | Shirley | pythondev_help_Shirley_2019-06-05T20:40:30.282000 | 1,559,767,230.282 | 27,059 |
pythondev | help | What’s your code? | 2019-06-05T20:41:18.282700 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:41:18.282700 | 1,559,767,278.2827 | 27,060 |
pythondev | help | <https://bpaste.net/show/a37ab4693ab8> | 2019-06-05T20:41:52.283200 | Shirley | pythondev_help_Shirley_2019-06-05T20:41:52.283200 | 1,559,767,312.2832 | 27,061 |
pythondev | help | that is the full class; then I just call on it with different `view_ids`, re-initializing the class on each call | 2019-06-05T20:42:25.283900 | Shirley | pythondev_help_Shirley_2019-06-05T20:42:25.283900 | 1,559,767,345.2839 | 27,062 |
pythondev | help | the celery task that calls on the class: <https://bpaste.net/show/f5e2f708d1a7> | 2019-06-05T20:44:27.284200 | Shirley | pythondev_help_Shirley_2019-06-05T20:44:27.284200 | 1,559,767,467.2842 | 27,063 |
pythondev | help | Ok... | 2019-06-05T20:47:16.284400 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:47:16.284400 | 1,559,767,636.2844 | 27,064 |
pythondev | help | `results = ga.get(request=self.request)` | 2019-06-05T20:47:30.284700 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:47:30.284700 | 1,559,767,650.2847 | 27,065 |
pythondev | help | I assume this returns a Django queryset? | 2019-06-05T20:47:56.285300 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:47:56.285300 | 1,559,767,676.2853 | 27,066 |
pythondev | help | Or something from `values`? | 2019-06-05T20:48:16.285900 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:48:16.285900 | 1,559,767,696.2859 | 27,067 |
pythondev | help | that is just a call to a method that executes the API call using the Google Analytics python lib <https://developers.google.com/analytics/devguides/reporting/core/v4/rest/> | 2019-06-05T20:48:49.286500 | Shirley | pythondev_help_Shirley_2019-06-05T20:48:49.286500 | 1,559,767,729.2865 | 27,068 |
pythondev | help | it creates an authentication using secret key from a json file, makes the http call and returns the results | 2019-06-05T20:49:58.287600 | Shirley | pythondev_help_Shirley_2019-06-05T20:49:58.287600 | 1,559,767,798.2876 | 27,069 |
pythondev | help | it doesn't touch any django models at all | 2019-06-05T20:50:05.287900 | Shirley | pythondev_help_Shirley_2019-06-05T20:50:05.287900 | 1,559,767,805.2879 | 27,070 |
pythondev | help | Gotcha | 2019-06-05T20:50:13.288100 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:50:13.288100 | 1,559,767,813.2881 | 27,071 |
pythondev | help | <https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-CONN_MAX_AGE> | 2019-06-05T20:50:16.288300 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:50:16.288300 | 1,559,767,816.2883 | 27,072 |
pythondev | help | Do you have a setting for that? | 2019-06-05T20:50:35.288900 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:50:35.288900 | 1,559,767,835.2889 | 27,073 |
pythondev | help | Non-zero, I mean | 2019-06-05T20:50:49.289300 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:50:49.289300 | 1,559,767,849.2893 | 27,074 |
pythondev | help | I did not; just set it to `None` now and will give it a try | 2019-06-05T20:51:19.289700 | Shirley | pythondev_help_Shirley_2019-06-05T20:51:19.289700 | 1,559,767,879.2897 | 27,075 |
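Roughly what that change looks like in Django settings, assuming the standard `DATABASES` layout (names other than `CONN_MAX_AGE` are placeholders):
```python
# settings.py: None means Django keeps connections open indefinitely and reuses
# them; 0 (the default) closes the connection at the end of each request/task.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "miner",
        "HOST": "db",
        "CONN_MAX_AGE": None,
    }
}
```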
pythondev | help | Ok. What’s the bulk create batch size? | 2019-06-05T20:51:56.290600 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:51:56.290600 | 1,559,767,916.2906 | 27,076 |
pythondev | help | 100 | 2019-06-05T20:52:08.290800 | Shirley | pythondev_help_Shirley_2019-06-05T20:52:08.290800 | 1,559,767,928.2908 | 27,077 |
pythondev | help | I tried setting it to 10 and got the same results as before | 2019-06-05T20:52:25.291200 | Shirley | pythondev_help_Shirley_2019-06-05T20:52:25.291200 | 1,559,767,945.2912 | 27,078 |
pythondev | help | <https://github.com/celery/django-celery-results/issues/58> | 2019-06-05T20:53:32.291400 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:53:32.291400 | 1,559,768,012.2914 | 27,079 |
pythondev | help | This might be related to your issue | 2019-06-05T20:53:46.292000 | Hiroko | pythondev_help_Hiroko_2019-06-05T20:53:46.292000 | 1,559,768,026.292 | 27,080 |
pythondev | help | `"CONN_MAX_AGE": None,` in the database settings still throws the error :disappointed: | 2019-06-05T20:58:24.292500 | Shirley | pythondev_help_Shirley_2019-06-05T20:58:24.292500 | 1,559,768,304.2925 | 27,081 |
pythondev | help | reading up on that bug, am I missing something or was it closed without any fixes? | 2019-06-05T20:58:39.292900 | Shirley | pythondev_help_Shirley_2019-06-05T20:58:39.292900 | 1,559,768,319.2929 | 27,082 |
pythondev | help | Not sure, but does seem related? | 2019-06-05T21:04:50.294400 | Hiroko | pythondev_help_Hiroko_2019-06-05T21:04:50.294400 | 1,559,768,690.2944 | 27,083 |
pythondev | help | Oh, what’s your max_allowed_packet configuration in MySQL? | 2019-06-05T21:05:10.295100 | Hiroko | pythondev_help_Hiroko_2019-06-05T21:05:10.295100 | 1,559,768,710.2951 | 27,084 |
pythondev | help | I did not have any explicit setting for mysql | 2019-06-05T21:05:26.295800 | Shirley | pythondev_help_Shirley_2019-06-05T21:05:26.295800 | 1,559,768,726.2958 | 27,085 |
pythondev | help | If you’re pushing over the size, MySQL will hang up | 2019-06-05T21:05:37.296200 | Hiroko | pythondev_help_Hiroko_2019-06-05T21:05:37.296200 | 1,559,768,737.2962 | 27,086 |
pythondev | help | wouldn't the batch size negate that? | 2019-06-05T21:05:55.296400 | Shirley | pythondev_help_Shirley_2019-06-05T21:05:55.296400 | 1,559,768,755.2964 | 27,087 |
pythondev | help | the responses are fairly small | 2019-06-05T21:07:31.296800 | Shirley | pythondev_help_Shirley_2019-06-05T21:07:31.296800 | 1,559,768,851.2968 | 27,088 |
pythondev | help | which translated into django model form, would be even smaller | 2019-06-05T21:08:01.297400 | Shirley | pythondev_help_Shirley_2019-06-05T21:08:01.297400 | 1,559,768,881.2974 | 27,089 |
pythondev | help | Dang... yeah. You’re right | 2019-06-05T21:08:27.298200 | Hiroko | pythondev_help_Hiroko_2019-06-05T21:08:27.298200 | 1,559,768,907.2982 | 27,090 |
pythondev | help | ```rcds_created += models.GeoNetwork.objects.bulk_create(
batch, DB_BATCH_SIZE
)``` | 2019-06-05T21:11:43.299000 | Hiroko | pythondev_help_Hiroko_2019-06-05T21:11:43.299000 | 1,559,769,103.299 | 27,091 |
pythondev | help | I wonder what would happen if the rcds_created variable was removed, leaving just the bulk create? | 2019-06-05T21:12:47.300900 | Hiroko | pythondev_help_Hiroko_2019-06-05T21:12:47.300900 | 1,559,769,167.3009 | 27,092 |
pythondev | help | will try it now | 2019-06-05T21:13:27.301100 | Shirley | pythondev_help_Shirley_2019-06-05T21:13:27.301100 | 1,559,769,207.3011 | 27,093 |
pythondev | help | no go :disappointed: | 2019-06-05T21:27:10.301300 | Shirley | pythondev_help_Shirley_2019-06-05T21:27:10.301300 | 1,559,770,030.3013 | 27,094 |
pythondev | help | it seems like it fails around the 3rd time it calls the class | 2019-06-05T21:27:28.301700 | Shirley | pythondev_help_Shirley_2019-06-05T21:27:28.301700 | 1,559,770,048.3017 | 27,095 |
pythondev | help | (I have a few `view_ids` it iterates over to perform the jobs) | 2019-06-05T21:27:52.302300 | Shirley | pythondev_help_Shirley_2019-06-05T21:27:52.302300 | 1,559,770,072.3023 | 27,096 |
pythondev | help | <@Hiroko> so I ended up splitting up my initial jobs (which were pulling YTD) into monthly jobs and they all ran fine, since each job ran relatively quickly using only 1 month of data.
At this point I def think this is a bug in django-celery-results, as somewhere it is keeping connections open for too long, which is causing the failure.
I will try to open a bug on their repo to see what they can find.
Appreciate all your help in troubleshooting it! | 2019-06-05T23:07:34.304900 | Shirley | pythondev_help_Shirley_2019-06-05T23:07:34.304900 | 1,559,776,054.3049 | 27,097 |
pythondev | help | <@Hiroko> figured out what the issue was.
I changed the mysql max_allowed_packet to 256MB and the issue went away, which is when I discovered that my silent exception handler still had some leftover debugging code that was dumping the full data set into the log, which celery-results was then trying to pass to mysql in one go. | 2019-06-05T23:52:37.306600 | Shirley | pythondev_help_Shirley_2019-06-05T23:52:37.306600 | 1,559,778,757.3066 | 27,098 |
pythondev | help | What is the reason behind the error below?
"The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the request's input is invalid. Check the structure of the <code>commentThread</code> resource in the request body to ensure that it is valid."
Has anyone faced this issue? My credentials are valid, but I'm still getting this error. For 20 videos I get data; after that the program keeps running and then, after a long time, shows this error. | 2019-06-06T03:34:16.309000 | Roxie | pythondev_help_Roxie_2019-06-06T03:34:16.309000 | 1,559,792,056.309 | 27,099 |
pythondev | help | I found this reference
<https://stackoverflow.com/questions/32220648/cant-create-replies-to-some-existing-youtube-comments> | 2019-06-06T03:34:44.309500 | Roxie | pythondev_help_Roxie_2019-06-06T03:34:44.309500 | 1,559,792,084.3095 | 27,100 |
pythondev | help | Because of hiding replies | 2019-06-06T03:34:59.309900 | Roxie | pythondev_help_Roxie_2019-06-06T03:34:59.309900 | 1,559,792,099.3099 | 27,101 |
pythondev | help | What is the solution to overcome this issue? Any suggestions, please. | 2019-06-06T03:35:30.310500 | Roxie | pythondev_help_Roxie_2019-06-06T03:35:30.310500 | 1,559,792,130.3105 | 27,102 |
pythondev | help | Anyone around? | 2019-06-06T05:11:48.311300 | Javier | pythondev_help_Javier_2019-06-06T05:11:48.311300 | 1,559,797,908.3113 | 27,103 |
pythondev | help | nope | 2019-06-06T05:13:54.311600 | Jettie | pythondev_help_Jettie_2019-06-06T05:13:54.311600 | 1,559,798,034.3116 | 27,104 |
pythondev | help | sick | 2019-06-06T05:41:38.312400 | Javier | pythondev_help_Javier_2019-06-06T05:41:38.312400 | 1,559,799,698.3124 | 27,105 |
pythondev | help | ```
class Inventory:
def __init__(self):
self.slots = []
def add_item(self, item):
self.slots.append(item)
class SortedInventory(Inventory):
def add_item(self, item):
super().add_item(item)
self.slots.append(item)
self.slots.sort()
``` | 2019-06-06T05:42:21.313000 | Javier | pythondev_help_Javier_2019-06-06T05:42:21.313000 | 1,559,799,741.313 | 27,106 |
pythondev | help | why isn't the sort working? | 2019-06-06T05:42:28.313300 | Javier | pythondev_help_Javier_2019-06-06T05:42:28.313300 | 1,559,799,748.3133 | 27,107 |
pythondev | help | what's `item` - can it be sorted with sort? maybe you need to give sort more info e.g. using `self.slots.sort(key=some_function_to_sort_by)` | 2019-06-06T05:49:09.313700 | Guillermina | pythondev_help_Guillermina_2019-06-06T05:49:09.313700 | 1,559,800,149.3137 | 27,108 |
pythondev | help | Item is just something that gets appended to self.slots. I'm not sorting item here right? | 2019-06-06T05:49:44.313900 | Javier | pythondev_help_Javier_2019-06-06T05:49:44.313900 | 1,559,800,184.3139 | 27,109 |
pythondev | help | Is it a number? | 2019-06-06T05:49:55.314100 | Rubie | pythondev_help_Rubie_2019-06-06T05:49:55.314100 | 1,559,800,195.3141 | 27,110 |
pythondev | help | Nah, it shouldn't matter. | 2019-06-06T05:50:10.314300 | Javier | pythondev_help_Javier_2019-06-06T05:50:10.314300 | 1,559,800,210.3143 | 27,111 |
pythondev | help | try sorted()? | 2019-06-06T05:50:20.314500 | Rubie | pythondev_help_Rubie_2019-06-06T05:50:20.314500 | 1,559,800,220.3145 | 27,112 |
pythondev | help | Can't, this is basically homework. | 2019-06-06T05:50:29.314700 | Javier | pythondev_help_Javier_2019-06-06T05:50:29.314700 | 1,559,800,229.3147 | 27,113 |
pythondev | help | What's the output of self.slots | 2019-06-06T05:51:18.314900 | Rubie | pythondev_help_Rubie_2019-06-06T05:51:18.314900 | 1,559,800,278.3149 | 27,114 |
pythondev | help | It wouldn't be so frustrating if there were literally any output at all. The error message I have is `bummer: hmmm, the itmes don't seem to be sorted`. What kind of error is that!? | 2019-06-06T05:51:43.315100 | Javier | pythondev_help_Javier_2019-06-06T05:51:43.315100 | 1,559,800,303.3151 | 27,115 |
pythondev | help | idk, I can't see it | 2019-06-06T05:51:48.315300 | Javier | pythondev_help_Javier_2019-06-06T05:51:48.315300 | 1,559,800,308.3153 | 27,116 |
pythondev | help | print(self.slots)? | 2019-06-06T05:52:18.315500 | Rubie | pythondev_help_Rubie_2019-06-06T05:52:18.315500 | 1,559,800,338.3155 | 27,117 |
pythondev | help | I don't see any output at all | 2019-06-06T05:52:55.315700 | Javier | pythondev_help_Javier_2019-06-06T05:52:55.315700 | 1,559,800,375.3157 | 27,118 |
pythondev | help | assuming self.slots is a list then it should be sorted as is right? | 2019-06-06T05:53:52.316000 | Javier | pythondev_help_Javier_2019-06-06T05:53:52.316000 | 1,559,800,432.316 | 27,119 |
pythondev | help | append will add item to the last index | 2019-06-06T05:54:19.316200 | Rubie | pythondev_help_Rubie_2019-06-06T05:54:19.316200 | 1,559,800,459.3162 | 27,120 |
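As a hedged aside on the `SortedInventory` question above: the subclass both calls `super().add_item(item)` and appends again, so each item ends up in `slots` twice. A cleaner version, assuming the items are mutually comparable, would be:
```python
class Inventory:
    def __init__(self):
        self.slots = []

    def add_item(self, item):
        self.slots.append(item)


class SortedInventory(Inventory):
    def add_item(self, item):
        # Let the parent do the single append, then keep the list ordered.
        super().add_item(item)
        self.slots.sort()
```
Whether the duplicate append is what the grader's "items don't seem to be sorted" message is actually complaining about is only a guess.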