# %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:44:50.747581Z","iopub.execute_input":"2023-01-19T13:44:50.748868Z","iopub.status.idle":"2023-01-19T13:44:50.788780Z","shell.execute_reply.started":"2023-01-19T13:44:50.748755Z","shell.execute_reply":"2023-01-19T13:44:50.787152Z"},"jupyter":{"source_hidden":true}} #NB: Kaggle requires phone verification to use the internet or a GPU. If you haven't done that yet, the cell below will fail # This code is only here to check that your internet is enabled. It doesn't do anything else. # Here's a help thread on getting your phone number verified: https://www.kaggle.com/product-feedback/135367 import socket,warnings try: socket.setdefaulttimeout(1) socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(('1.1.1.1', 53)) except socket.error as ex: raise Exception("STOP: No internet. Click '>|' in top right and set 'Internet' switch to on") # %% [code] {"_kg_hide-input":true,"_kg_hide-output":true,"execution":{"iopub.status.busy":"2023-01-19T13:44:55.472652Z","iopub.execute_input":"2023-01-19T13:44:55.473277Z","iopub.status.idle":"2023-01-19T13:45:07.027672Z","shell.execute_reply.started":"2023-01-19T13:44:55.473245Z","shell.execute_reply":"2023-01-19T13:45:07.026513Z"},"jupyter":{"source_hidden":true}} # It's a good idea to ensure you're running the latest version of any libraries you need. # `!pip install -Uqq ` upgrades to the latest version of # NB: You can safely ignore any warnings or errors pip spits out about running as root or incompatibilities import os iskaggle = os.environ.get('KAGGLE_KERNEL_RUN_TYPE', '') # %% [markdown] # In 2015 the idea of creating a computer system that could recognise birds was considered so outrageously challenging that it was the basis of [this XKCD joke](https://xkcd.com/1425/): # %% [markdown] # ![image.png](attachment:a0483178-c30e-4fdd-b2c2-349e130ab260.png) # %% [markdown] # But today, we can do exactly that, in just a few minutes, using entirely free resources! # # The basic steps we'll take are: # # 1. Use DuckDuckGo to search for images of "bird photos" # 1. Use DuckDuckGo to search for images of "forest photos" # 1. Fine-tune a pretrained neural network to recognise these two groups # 1. Try running this model on a picture of a bird and see if it works. # %% [markdown] # ## Step 1: Download images of birds and non-birds # %% [code] {"_kg_hide-input":true,"execution":{"iopub.status.busy":"2023-01-19T13:45:56.061456Z","iopub.execute_input":"2023-01-19T13:45:56.061849Z","iopub.status.idle":"2023-01-19T13:45:56.069190Z","shell.execute_reply.started":"2023-01-19T13:45:56.061817Z","shell.execute_reply":"2023-01-19T13:45:56.067878Z"}} from duckduckgo_search import ddg_images from fastcore.all import * def search_images(term, max_images=30): print(f"Searching for '{term}'") return L(ddg_images(term, max_results=max_images)).itemgot('image') # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:45:51.347402Z","iopub.execute_input":"2023-01-19T13:45:51.347810Z","iopub.status.idle":"2023-01-19T13:45:51.408967Z","shell.execute_reply.started":"2023-01-19T13:45:51.347776Z","shell.execute_reply":"2023-01-19T13:45:51.407774Z"}} ?L # %% [markdown] # Let's start by searching for a bird photo and seeing what kind of result we get. We'll start by getting URLs from a search: # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:46:01.346523Z","iopub.execute_input":"2023-01-19T13:46:01.347704Z","iopub.status.idle":"2023-01-19T13:46:01.848775Z","shell.execute_reply.started":"2023-01-19T13:46:01.347644Z","shell.execute_reply":"2023-01-19T13:46:01.847667Z"}} #NB: `search_images` depends on duckduckgo.com, which doesn't always return correct responses. # If you get a JSON error, just try running it again (it may take a couple of tries). urls = search_images('snowboard', max_images=1) urls[0] # %% [markdown] # ...and then download a URL and take a look at it: # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:46:15.912826Z","iopub.execute_input":"2023-01-19T13:46:15.913436Z","iopub.status.idle":"2023-01-19T13:46:21.067256Z","shell.execute_reply.started":"2023-01-19T13:46:15.913404Z","shell.execute_reply":"2023-01-19T13:46:21.066186Z"}} from fastdownload import download_url dest = 'snowboard.jpg' download_url(urls[0], dest, show_progress=False) from fastai.vision.all import * im = Image.open(dest) im.to_thumb(256,256) # %% [markdown] # Now let's do the same with "forest photos": # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:46:30.534509Z","iopub.execute_input":"2023-01-19T13:46:30.535405Z","iopub.status.idle":"2023-01-19T13:46:32.663418Z","shell.execute_reply.started":"2023-01-19T13:46:30.535368Z","shell.execute_reply":"2023-01-19T13:46:32.662448Z"}} download_url(search_images('snowboard', max_images=1)[0], 'snowboard.jpg', show_progress=False) Image.open('snowboard.jpg').to_thumb(256,256) # %% [markdown] # Our searches seem to be giving reasonable results, so let's grab a few examples of each of "bird" and "forest" photos, and save each group of photos to a different folder (I'm also trying to grab a range of lighting conditions here): # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:46:44.584802Z","iopub.execute_input":"2023-01-19T13:46:44.585202Z","iopub.status.idle":"2023-01-19T13:48:11.129905Z","shell.execute_reply.started":"2023-01-19T13:46:44.585170Z","shell.execute_reply":"2023-01-19T13:48:11.128350Z"}} searches = 'skis','snowboard' path = Path('snowboard_or_not') from time import sleep for o in searches: dest = (path/o) dest.mkdir(exist_ok=True, parents=True) download_images(dest, urls=search_images(f'{o} photo')) sleep(10) # Pause between searches to avoid over-loading server download_images(dest, urls=search_images(f'{o} backcountry photo')) sleep(10) download_images(dest, urls=search_images(f'{o} downhill photo')) sleep(10) resize_images(path/o, max_size=400, dest=path/o) # %% [markdown] # ## Step 2: Train our model # %% [markdown] # Some photos might not download correctly which could cause our model training to fail, so we'll remove them: # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:48:13.927238Z","iopub.execute_input":"2023-01-19T13:48:13.927600Z","iopub.status.idle":"2023-01-19T13:48:14.375727Z","shell.execute_reply.started":"2023-01-19T13:48:13.927569Z","shell.execute_reply":"2023-01-19T13:48:14.374184Z"}} failed = verify_images(get_image_files(path)) failed.map(Path.unlink) len(failed) # %% [markdown] # To train a model, we'll need `DataLoaders`, which is an object that contains a *training set* (the images used to create a model) and a *validation set* (the images used to check the accuracy of a model -- not used during training). In `fastai` we can create that easily using a `DataBlock`, and view sample images from it: # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:48:18.530196Z","iopub.execute_input":"2023-01-19T13:48:18.530595Z","iopub.status.idle":"2023-01-19T13:48:19.133024Z","shell.execute_reply.started":"2023-01-19T13:48:18.530562Z","shell.execute_reply":"2023-01-19T13:48:19.132389Z"}} dls = DataBlock( blocks=(ImageBlock, CategoryBlock), get_items=get_image_files, splitter=RandomSplitter(valid_pct=0.2, seed=42), get_y=parent_label, item_tfms=[Resize(192, method='squish')] ).dataloaders(path, bs=32) dls.show_batch(max_n=6) # %% [markdown] # Here what each of the `DataBlock` parameters means: # # blocks=(ImageBlock, CategoryBlock), # # The inputs to our model are images, and the outputs are categories (in this case, "bird" or "forest"). # # get_items=get_image_files, # # To find all the inputs to our model, run the `get_image_files` function (which returns a list of all image files in a path). # # splitter=RandomSplitter(valid_pct=0.2, seed=42), # # Split the data into training and validation sets randomly, using 20% of the data for the validation set. # # get_y=parent_label, # # The labels (`y` values) is the name of the `parent` of each file (i.e. the name of the folder they're in, which will be *bird* or *forest*). # # item_tfms=[Resize(192, method='squish')] # # Before training, resize each image to 192x192 pixels by "squishing" it (as opposed to cropping it). # %% [markdown] # Now we're ready to train our model. The fastest widely used computer vision model is `resnet18`. You can train this in a few minutes, even on a CPU! (On a GPU, it generally takes under 10 seconds...) # # `fastai` comes with a helpful `fine_tune()` method which automatically uses best practices for fine tuning a pre-trained model, so we'll use that. # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:48:35.151957Z","iopub.execute_input":"2023-01-19T13:48:35.153822Z","iopub.status.idle":"2023-01-19T13:49:18.382049Z","shell.execute_reply.started":"2023-01-19T13:48:35.153778Z","shell.execute_reply":"2023-01-19T13:49:18.380613Z"}} learn = vision_learner(dls, resnet18, metrics=error_rate) learn.fine_tune(3) # %% [markdown] # Generally when I run this I see 100% accuracy on the validation set (although it might vary a bit from run to run). # # "Fine-tuning" a model means that we're starting with a model someone else has trained using some other dataset (called the *pretrained model*), and adjusting the weights a little bit so that the model learns to recognise your particular dataset. In this case, the pretrained model was trained to recognise photos in *imagenet*, and widely-used computer vision dataset with images covering 1000 categories) For details on fine-tuning and why it's important, check out the [free fast.ai course](https://course.fast.ai/). # %% [markdown] # ## Step 3: Use our model (and build your own!) # %% [markdown] # Let's see what our model thinks about that bird we downloaded at the start: # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T13:50:02.912145Z","iopub.execute_input":"2023-01-19T13:50:02.912475Z","iopub.status.idle":"2023-01-19T13:50:03.010309Z","shell.execute_reply.started":"2023-01-19T13:50:02.912450Z","shell.execute_reply":"2023-01-19T13:50:03.009055Z"}} is_snowboard,_,probs = learn.predict(PILImage.create('snowboard.jpg')) print(dls.vocab) print(dls.vocab.o2i) print(f"Probability that it is a {is_snowboard}: {probs[dls.vocab.o2i.get(is_snowboard)]:.4f}") # %% [code] {"execution":{"iopub.status.busy":"2023-01-19T12:43:17.999435Z","iopub.execute_input":"2023-01-19T12:43:17.999835Z","iopub.status.idle":"2023-01-19T12:43:18.006416Z","shell.execute_reply.started":"2023-01-19T12:43:17.999788Z","shell.execute_reply":"2023-01-19T12:43:18.005077Z"}} # %% [markdown] # Good job, resnet18. :) # # So, as you see, in the space of a few years, creating computer vision classification models has gone from "so hard it's a joke" to "trivially easy and free"! # # It's not just in computer vision. Thanks to deep learning, computers can now do many things which seemed impossible just a few years ago, including [creating amazing artworks](https://openai.com/dall-e-2/), and [explaining jokes](https://www.datanami.com/2022/04/22/googles-massive-new-language-model-can-explain-jokes/). It's moving so fast that even experts in the field have trouble predicting how it's going to impact society in the coming years. # # One thing is clear -- it's important that we all do our best to understand this technology, because otherwise we'll get left behind! # %% [markdown] # Now it's your turn. Click "Copy & Edit" and try creating your own image classifier using your own image searches! # # If you enjoyed this, please consider clicking the "upvote" button in the top-right -- it's very encouraging to us notebook authors to know when people appreciate our work.