author | claps | reading_time | link | title | text
---|---|---|---|---|---
Justin Lee | 8.3K | 11 | https://medium.com/swlh/chatbots-were-the-next-big-thing-what-happened-5fc49dd6fa61?source=---------0---------------- | Chatbots were the next big thing: what happened? – The Startup – Medium | Oh, how the headlines blared:
Chatbots were The Next Big Thing.
Our hopes were sky high. Bright-eyed and bushy-tailed, the industry was ripe for a new era of innovation: it was time to start socializing with machines.
And why wouldn’t they be? All the road signs pointed towards insane success.
At the Mobile World Congress 2017, chatbots were the main headliners. The conference organizers cited an ‘overwhelming acceptance at the event of the inevitable shift of focus for brands and corporates to chatbots’.
In fact, the only significant question around chatbots was who would monopolize the field, not whether chatbots would take off in the first place:
One year on, we have an answer to that question.
No.
Because there isn’t even an ecosystem for a platform to dominate.
Chatbots weren’t the first technological development to be talked up in grandiose terms and then slump spectacularly.
The age-old hype cycle unfolded in familiar fashion...
Expectations built, built, and then..... It all kind of fizzled out.
The predicted paradigm shift didn’t materialize.
And apps are, tellingly, still alive and well.
We look back at our breathless optimism and turn to each other, slightly baffled:
“is that it? THAT was the chatbot revolution we were promised?”
Digit’s Ethan Bloch sums up the general consensus:
According to Dave Feldman, Vice President of Product Design at Heap, chatbots didn’t just take on one difficult problem and fail: they took on several and failed all of them.
Bots can interface with users in different ways. The big divide is text vs. speech. In the beginning (of computer interfaces) was the (written) word.
Users had to type commands manually into a machine to get anything done.
Then, graphical user interfaces (GUIs) came along and saved the day. We became entranced by windows, mouse clicks, icons. And hey, we eventually got color, too!
Meanwhile, a bunch of research scientists were busily developing natural language (NL) interfaces to databases, instead of having to learn an arcane database query language.
Another bunch of scientists were developing speech-processing software so that you could just speak to your computer, rather than having to type. This turned out to be a whole lot more difficult than anyone originally realised:
The next item on the agenda was holding a two-way dialog with a machine. Here’s an example dialog (dating back to the 1990s) with a VCR setup system:
Pretty cool, right? The system takes turns in a collaborative way, and does a smart job of figuring out what the user wants.
It was carefully crafted to deal with conversations involving VCRs, and could only operate within strict limitations.
Modern day bots, whether they use typed or spoken input, have to face all these challenges, but also work in an efficient and scalable way on a variety of platforms.
Basically, we’re still trying to achieve the same innovations we were 30 years ago.
Here’s where I think we’re going wrong:
An oversized assumption has been that apps are ‘over’, and would be replaced by bots.
By pitting two such disparate concepts against one another (instead of seeing them as separate entities designed to serve different purposes) we discouraged bot development.
You might remember a similar war cry when apps first came onto the scene ten years ago: but do you remember when apps replaced the internet?
It’s said that a new product or service needs to be two of the following: better, cheaper, or faster. Are chatbots cheaper or faster than apps? No — not yet, at least.
Whether they’re ‘better’ is subjective, but I think it’s fair to say that today’s best bot isn’t comparable to today’s best app.
Plus, nobody thinks that using Lyft is too complicated, or that it’s too hard to order food or buy a dress on an app. What is too complicated is trying to complete these tasks with a bot — and having the bot fail.
A great bot can be about as useful as an average app. When it comes to rich, sophisticated, multi-layered apps, there’s no competition.
That’s because machines let us access vast and complex information systems, and the early graphical interfaces were a revolutionary leap forward in helping us navigate them.
Modern-day apps benefit from decades of research and experimentation. Why would we throw this away?
But, if we swap the word ‘replace’ with ‘extend’, things get much more interesting.
Today’s most successful bot experiences take a hybrid approach, incorporating chat into a broader strategy that encompasses more traditional elements.
The next wave will be multimodal apps, where you can say what you want (like with Siri) and get back information as a map, text, or even a spoken response.
Another problematic aspect of the sweeping nature of hype is that it tends to bypass essential questions like these.
For plenty of companies, bots just aren’t the right solution. The past two years are littered with cases of bots being blindly applied to problems where they aren’t needed.
Building a bot for the sake of it, letting it loose and hoping for the best will never end well:
The vast majority of bots are built using decision-tree logic, where the bot’s canned response relies on spotting specific keywords in the user input.
The advantage of this approach is that it’s pretty easy to list all the cases that they are designed to cover. And that’s precisely their disadvantage, too.
That’s because these bots are purely a reflection of the capability, fastidiousness and patience of the person who created them; and how many user needs and inputs they were able to anticipate.
Problems arise when life refuses to fit into those boxes.
According to recent reports, 70% of the 100,000+ bots on Facebook Messenger are failing to fulfil simple user requests. This is partly a result of developers failing to narrow their bot down to one strong area of focus.
When we were building GrowthBot, we decided to make it specific to sales and marketers: not an ‘all-rounder’, despite the temptation to get overexcited about potential capabilities.
Remember: a bot that does ONE thing well is infinitely more helpful than a bot that does multiple things poorly.
A competent developer can build a basic bot in minutes — but one that can hold a conversation? That’s another story. Despite the constant hype around AI, we’re still a long way from achieving anything remotely human-like.
In an ideal world, the technology known as NLP (natural language processing) should allow a chatbot to understand the messages it receives. But NLP is only just emerging from research labs and is very much in its infancy.
Some platforms provide a bit of NLP, but even the best is at toddler-level capacity (for example, think about Siri understanding your words, but not their meaning).
As Matt Asay outlines, this results in another issue: failure to capture the attention and creativity of developers.
And conversations are complex. They’re not linear. Topics spin around each other, take random turns, restart or abruptly finish.
Today’s rule-based dialogue systems are too brittle to deal with this kind of unpredictability, and statistical approaches using machine learning are just as limited. The level of AI required for human-like conversation just isn’t available yet.
And in the meantime, there are few high-quality examples of trailblazing bots to lead the way. As Dave Feldman remarked:
Once upon a time, the only way to interact with computers was by typing arcane commands to the terminal. Visual interfaces using windows, icons or a mouse were a revolution in how we manipulate information.
There’s a reason computing moved from text-based to graphical user interfaces (GUIs). On the input side, it’s easier and faster to click than it is to type.
Tapping or selecting is obviously preferable to typing out a whole sentence, even with predictive (often error-prone) text. On the output side, the old adage that a picture is worth a thousand words is usually true.
We love optical displays of information because we are highly visual creatures. It’s no accident that kids love touch screens. The pioneers who dreamt up the graphical interface were inspired by cognitive psychology, the study of how the brain deals with communication.
Conversational UIs are meant to replicate the way humans prefer to communicate, but they end up requiring extra cognitive effort. Essentially, we’re swapping something simple for a more-complex alternative.
Sure, there are some concepts that we can only express using language (“show me all the ways of getting to a museum that give me 2000 steps but don’t take longer than 35 minutes”), but most tasks can be carried out more efficiently and intuitively with GUIs than with a conversational UI.
Aiming for a human dimension in business interactions makes sense.
If there’s one thing that’s broken about sales and marketing, it’s the lack of humanity: brands hide behind ticket numbers, feedback forms, do-not-reply-emails, automated responses and gated ‘contact us’ forms.
Facebook’s goal is that their bots should pass the so-called Turing Test, meaning you can’t tell whether you are talking to a bot or a human. But a bot isn’t the same as a human. It never will be.
A conversation encompasses so much more than just text.
Humans can read between the lines, leverage contextual information and understand double layers like sarcasm. Bots quickly forget what they’re talking about, meaning it’s a bit like conversing with someone who has little or no short-term memory.
As the HubSpot team pinpointed:
People aren’t easily fooled, and pretending a bot is a human is guaranteed to diminish returns (not to mention the fact that you’re lying to your users).
And even those rare bots that are powered by state-of-the-art NLP, and excel at processing and producing content, will fall short in comparison.
And here’s the other thing. Conversational UIs are built to replicate the way humans prefer to communicate — with other humans.
But is that how humans prefer to interact with machines?
Not necessarily.
At the end of the day, no amount of witty quips or human-like mannerisms will save a bot from conversational failure.
In a way, those early-adopters weren’t entirely wrong.
People are yelling at Google Home to play their favorite song, ordering pizza from the Domino’s bot and getting makeup tips from Sephora. But in terms of consumer response and developer involvement, chatbots haven’t lived up to the hype generated circa 2015/16.
Not even close.
Computers are good at being computers. Searching for data, crunching numbers, analyzing opinions and condensing that information.
Computers aren’t good at understanding human emotion. The state of NLP means they still don’t ‘get’ what we’re asking them, never mind how we feel.
That’s why it’s still impossible to imagine effective customer support, sales or marketing without the essential human touch: empathy and emotional intelligence.
For now, bots can continue to help us with automated, repetitive, low-level tasks and queries; as cogs in a larger, more complex system. And we did them, and ourselves, a disservice by expecting so much, so soon.
But that’s not the whole story.
Yes, our industry massively overestimated the initial impact chatbots would have. Emphasis on initial.
As Bill Gates once said:
The hype is over. And that’s a good thing. Now, we can start examining the middle-grounded grey area, instead of the hyper-inflated, frantic black and white zone.
I believe we’re at the very beginning of explosive growth. This sense of anti-climax is completely normal for transformational technology.
Messaging will continue to gain traction. Chatbots aren’t going away. NLP and AI are becoming more sophisticated every day.
Developers, apps and platforms will continue to experiment with, and heavily invest in, conversational marketing.
And I can’t wait to see what happens next.
|
Conor Dewey | 1.4K | 7 | https://towardsdatascience.com/python-for-data-science-8-concepts-you-may-have-forgotten-i-did-825966908393?source=---------1---------------- | Python for Data Science: 8 Concepts You May Have Forgotten | If you’ve ever found yourself looking up the same question, concept, or syntax over and over again when programming, you’re not alone.
I find myself doing this constantly.
While it’s not unnatural to look things up on StackOverflow or other resources, it does slow you down a good bit and raise questions as to your complete understanding of the language.
We live in a world where there is a seemingly infinite amount of accessible, free resources looming just one search away at all times. However, this can be both a blessing and a curse. When not managed effectively, an over-reliance on these resources can build poor habits that will set you back long-term.
Personally, I find myself pulling code from similar discussion threads several times, rather than taking the time to learn and solidify the concept so that I can reproduce the code myself the next time.
This approach is lazy and while it may be the path of least resistance in the short-term, it will ultimately hurt your growth, productivity, and ability to recall syntax (cough, interviews) down the line.
Recently, I’ve been working through an online data science course titled Python for Data Science and Machine Learning on Udemy (Oh God, I sound like that guy on Youtube). Over the early lectures in the series, I was reminded of some concepts and syntax that I consistently overlook when performing data analysis in Python.
In the interest of solidifying my understanding of these concepts once and for all and saving you guys a couple of StackOverflow searches, here’s the stuff that I’m always forgetting when working with Python, NumPy, and Pandas.
I’ve included a short description and example for each, however for your benefit, I will also include links to videos and other resources that explore each concept more in-depth as well.
Writing out a for loop every time you need to define some sort of list is tedious; luckily, Python has a built-in way to address this problem in just one line of code. The syntax can be a little hard to wrap your head around, but once you get familiar with this technique you’ll use it fairly often.
See the example above and below for how you would normally go about list comprehension with a for loop vs. creating your list in one simple line with no loops necessary.
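A minimal sketch of the two approaches (the values here are illustrative, not from the original post):

```python
# Building a list of squares with an explicit for loop
squares = []
for x in range(10):
    squares.append(x ** 2)

# The same list built with a list comprehension, in one line
squares = [x ** 2 for x in range(10)]
```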
Ever get tired of creating function after function for limited use cases? Lambda functions to the rescue! Lambda functions are used for creating small, one-time and anonymous function objects in Python. Basically, they let you create a function, without creating a function.
The basic syntax of lambda functions is:
Note that lambda functions can do everything that regular functions can do, as long as there’s just one expression. Check out the simple example below and the upcoming video to get a better feel for the power of lambda functions:
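For instance, a hedged example (the function and values are my own):

```python
# A regular named function and its lambda equivalent
def double(x):
    return x * 2

double_lambda = lambda x: x * 2

print(double(4), double_lambda(4))  # 8 8
```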
Once you have a grasp on lambda functions, learning to pair them with the map and filter functions can be a powerful tool.
Specifically, map takes in a list and transforms it into a new list by performing some sort of operation on each element. In this example, it goes through each element and maps the result of itself times 2 to a new list. Note that the list function simply converts the output to list type.
The filter function takes in a list and a rule, much like map, however it returns a subset of the original list by comparing each element against the boolean filtering rule.
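Something like the following sketch (the example list is my own):

```python
nums = [1, 2, 3, 4, 5]

# map: transform every element, then convert the map object back to a list
doubled = list(map(lambda x: x * 2, nums))        # [2, 4, 6, 8, 10]

# filter: keep only the elements that pass the boolean rule
evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
```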
For creating quick and easy NumPy arrays, look no further than the arange and linspace functions. Each one has its specific purpose, but the appeal here (instead of using range) is that they output NumPy arrays, which are typically easier to work with for data science.
Arange returns evenly spaced values within a given interval. Along with a starting and stopping point, you can also define a step size or data type if necessary. Note that the stopping point is a ‘cut-off’ value, so it will not be included in the array output.
Linspace is very similar, but with a slight twist. Linspace returns evenly spaced numbers over a specified interval. So given a starting and stopping point, as well as a number of values, linspace will evenly space them out for you in a NumPy array. This is especially helpful for data visualizations and declaring axes when plotting.
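A quick sketch of both (the start, stop, and step values are illustrative):

```python
import numpy as np

# arange: start, stop (cut-off, not included), step
np.arange(0, 10, 2)    # array([0, 2, 4, 6, 8])

# linspace: start, stop (included by default), number of values
np.linspace(0, 10, 5)  # array([ 0. ,  2.5,  5. ,  7.5, 10. ])
```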
You may have run into this when dropping a column in Pandas or summing values in a NumPy matrix. If not, then you surely will at some point. Let’s use the example of dropping a column for now:
I don’t know how many times I wrote this line of code before I actually knew why I was setting axis to the value I did. As you can probably deduce from above, set axis to 1 if you want to deal with columns and set it to 0 if you want rows. But why is this? My favorite reasoning, or at least how I remember this:
Calling the shape attribute from a Pandas dataframe gives us back a tuple with the first value representing the number of rows and the second value representing the number of columns. If you think about how this is indexed in Python, rows are at 0 and columns are at 1, much like how we declare our axis value. Crazy, right?
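Here’s a small sketch of that idea (the toy dataframe is mine):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})

df.shape              # (2, 3) -> (rows, columns), i.e. axis 0 and axis 1
df.drop('c', axis=1)  # axis=1: drop a column
df.drop(0, axis=0)    # axis=0: drop a row
```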
If you’re familiar with SQL, then these concepts will probably come a lot easier for you. Anyhow, these functions are essentially just ways to combine dataframes in specific ways. It can be difficult to keep track of which is best to use at which time, so let’s review it.
Concat allows the user to append one or more dataframes to each other either below or next to it (depending on how you define the axis).
Merge combines multiple dataframes on specific, common columns that serve as the primary key.
Join, much like merge, combines two dataframes. However, it joins them based on their indices, rather than some specified column.
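A compact sketch of all three (toy dataframes, not from the article):

```python
import pandas as pd

left = pd.DataFrame({'key': [1, 2], 'x': ['a', 'b']})
right = pd.DataFrame({'key': [1, 2], 'y': ['c', 'd']})

pd.concat([left, right], axis=0)                    # stack one below the other
pd.merge(left, right, on='key')                     # combine on a common column
left.set_index('key').join(right.set_index('key'))  # combine on the index
```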
Check out the excellent Pandas documentation for specific syntax and more concrete examples, as well as some special cases that you may run into.
Think of apply as a map function, but made for Pandas DataFrames or more specifically, for Series. If you’re not as familiar, Series are pretty similar to NumPy arrays for the most part.
Apply sends a function to every element along a column or row depending on what you specify. You might imagine how useful this can be, especially for formatting and manipulating values across a whole DataFrame column, without having to loop at all.
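For example, something like this hedged sketch (the column names are mine):

```python
import pandas as pd

df = pd.DataFrame({'price': [10.0, 20.0, 30.0]})

# Apply a function to every element of a column, no explicit loop required
df['price_with_tax'] = df['price'].apply(lambda p: p * 1.08)
```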
Last but certainly not least is pivot tables. If you’re familiar with Microsoft Excel, then you’ve probably heard of pivot tables in some respect. The Pandas built-in pivot_table function creates a spreadsheet-style pivot table as a DataFrame. Note that the levels in the pivot table are stored in MultiIndex objects on the index and columns of the resulting DataFrame.
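A minimal sketch of a pivot table (the sales data is invented for illustration):

```python
import pandas as pd

sales = pd.DataFrame({
    'region':  ['East', 'East', 'West', 'West'],
    'product': ['A', 'B', 'A', 'B'],
    'revenue': [100, 150, 200, 250],
})

# Spreadsheet-style pivot: one row per region, one column per product
pd.pivot_table(sales, values='revenue', index='region',
               columns='product', aggfunc='sum')
```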
That’s it for now. I hope a couple of these overviews have effectively jogged your memory regarding important yet somewhat tricky methods, functions, and concepts you frequently encounter when using Python for data science. Personally, I know that even the act of writing these out and trying to explain them in simple terms has helped me out a ton.
If you’re interested in receiving my weekly rundown of interesting articles and resources focused on data science, machine learning, and artificial intelligence, then subscribe to Self Driven Data Science using the form below!
If you enjoyed this post, feel free to hit the clap button and if you’re interested in posts to come, make sure to follow me on Medium at the link below — I’ll be writing and shipping every day this month as part of a 30-Day Challenge.
This article was originally published on conordewey.com
|
William Koehrsen | 2.8K | 11 | https://towardsdatascience.com/automated-feature-engineering-in-python-99baf11cc219?source=---------2---------------- | Automated Feature Engineering in Python – Towards Data Science | Machine learning is increasingly moving from hand-designed models to automatically optimized pipelines using tools such as H20, TPOT, and auto-sklearn. These libraries, along with methods such as random search, aim to simplify the model selection and tuning parts of machine learning by finding the best model for a dataset with little to no manual intervention. However, feature engineering, an arguably more valuable aspect of the machine learning pipeline, remains almost entirely a human labor.
Feature engineering, also known as feature creation, is the process of constructing new features from existing data to train a machine learning model. This step can be more important than the actual model used because a machine learning algorithm only learns from the data we give it, and creating features that are relevant to a task is absolutely crucial (see the excellent paper “A Few Useful Things to Know about Machine Learning”).
Typically, feature engineering is a drawn-out manual process, relying on domain knowledge, intuition, and data manipulation. This process can be extremely tedious and the final features will be limited both by human subjectivity and time. Automated feature engineering aims to help the data scientist by automatically creating many candidate features out of a dataset from which the best can be selected and used for training.
In this article, we will walk through an example of using automated feature engineering with the featuretools Python library. We will use an example dataset to show the basics (stay tuned for future posts using real-world data). The complete code for this article is available on GitHub.
Feature engineering means building additional features out of existing data which is often spread across multiple related tables. Feature engineering requires extracting the relevant information from the data and getting it into a single table which can then be used to train a machine learning model.
The process of constructing features is very time-consuming because each new feature usually requires several steps to build, especially when using information from more than one table. We can group the operations of feature creation into two categories: transformations and aggregations. Let’s look at a few examples to see these concepts in action.
A transformation acts on a single table (thinking in terms of Python, a table is just a Pandas DataFrame) by creating new features out of one or more of the existing columns. As an example, if we have the table of clients below
we can create features by finding the month of the joined column or taking the natural log of the income column. These are both transformations because they use information from only one table.
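A hedged sketch of those two transformations (this assumes a clients DataFrame with a datetime joined column and a numeric income column, as described above):

```python
import numpy as np
import pandas as pd

# clients is assumed to already exist with 'joined' (datetime) and 'income' columns
clients['join_month'] = clients['joined'].dt.month
clients['log_income'] = np.log(clients['income'])
```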
On the other hand, aggregations are performed across tables, and use a one-to-many relationship to group observations and then calculate statistics. For example, if we have another table with information on the loans of clients, where each client may have multiple loans, we can calculate statistics such as the average, maximum, and minimum of loans for each client.
This process involves grouping the loans table by the client, calculating the aggregations, and then merging the resulting data into the client data. Here’s how we would do that in Python using the language of Pandas.
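Roughly along these lines (the loan_amount column name and the particular statistics are assumptions for illustration):

```python
import pandas as pd  # loans and clients are assumed to be existing DataFrames

# Group the loans table by client, compute statistics, then merge them back in
stats = loans.groupby('client_id')['loan_amount'].agg(['mean', 'max', 'min'])
stats.columns = ['mean_loan_amount', 'max_loan_amount', 'min_loan_amount']

clients = clients.merge(stats, left_on='client_id',
                        right_index=True, how='left')
```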
These operations are not difficult by themselves, but if we have hundreds of variables spread across dozens of tables, this process is not feasible to do by hand. Ideally, we want a solution that can automatically perform transformations and aggregations across multiple tables and combine the resulting data into a single table. Although Pandas is a great resource, there’s only so much data manipulation we want to do by hand! (For more on manual feature engineering check out the excellent Python Data Science Handbook).
Fortunately, featuretools is exactly the solution we are looking for. This open-source Python library will automatically create many features from a set of related tables. Featuretools is based on a method known as “Deep Feature Synthesis”, which sounds a lot more imposing than it actually is (the name comes from stacking multiple features not because it uses deep learning!).
Deep feature synthesis stacks multiple transformation and aggregation operations (which are called feature primitives in the vocab of featuretools) to create features from data spread across many tables. Like most ideas in machine learning, it’s a complex method built on a foundation of simple concepts. By learning one building block at a time, we can form a good understanding of this powerful method.
First, let’s take a look at our example data. We already saw some of the dataset above, and the complete collection of tables is as follows:
If we have a machine learning task, such as predicting whether a client will repay a future loan, we will want to combine all the information about clients into a single table. The tables are related (through the client_id and the loan_id variables) and we could use a series of transformations and aggregations to do this process by hand. However, we will shortly see that we can instead use featuretools to automate the process.
The first two concepts of featuretools are entities and entitysets. An entity is simply a table (or a DataFrame if you think in Pandas). An EntitySet is a collection of tables and the relationships between them. Think of an entityset as just another Python data structure, with its own methods and attributes.
We can create an empty entityset in featuretools using the following:
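A minimal sketch using the featuretools 0.x API this article is based on:

```python
import featuretools as ft

# An empty entityset that will hold our tables and their relationships
es = ft.EntitySet(id='clients')
```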
Now we have to add entities. Each entity must have an index, which is a column with all unique elements. That is, each value in the index must appear in the table only once. The index in the clients dataframe is client_id because each client has only one row in this dataframe. We add an entity with an existing index to an entityset using the following syntax:
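For example (a sketch assuming the clients DataFrame from earlier):

```python
# clients already has a unique index column, client_id
es = es.entity_from_dataframe(entity_id='clients',
                              dataframe=clients,
                              index='client_id')
```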
The loans dataframe also has a unique index, loan_id, and the syntax to add this to the entityset is the same as for clients. However, for the payments dataframe, there is no unique index. When we add this entity to the entityset, we need to pass in the parameter make_index = True and specify the name of the index. Also, although featuretools will automatically infer the data type of each column in an entity, we can override this by passing in a dictionary of column types to the parameter variable_types.
For this dataframe, even though missed is an integer, this is not a numeric variable since it can only take on 2 discrete values, so we tell featuretools to treat it as a categorical variable. After adding the dataframes to the entityset, we inspect any of them:
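Put together, a hedged sketch of adding the remaining two tables and inspecting one of them (the generated payment_id index name is my choice):

```python
# loans has its own unique index; payments needs one created for it
es = es.entity_from_dataframe(entity_id='loans',
                              dataframe=loans,
                              index='loan_id')

es = es.entity_from_dataframe(entity_id='payments',
                              dataframe=payments,
                              make_index=True,
                              index='payment_id',
                              variable_types={'missed': ft.variable_types.Categorical})

es['payments']  # inspect the entity and its inferred column types
```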
The column types have been correctly inferred with the modification we specified. Next, we need to specify how the tables in the entityset are related.
The best way to think of a relationship between two tables is the analogy of parent to child. This is a one-to-many relationship: each parent can have multiple children. In the realm of tables, a parent table has one row for every parent, but the child table may have multiple rows corresponding to multiple children of the same parent.
For example, in our dataset, the clients dataframe is a parent of the loans dataframe. Each client has only one row in clients but may have multiple rows in loans. Likewise, loans is the parent of payments because each loan will have multiple payments. The parents are linked to their children by a shared variable. When we perform aggregations, we group the child table by the parent variable and calculate statistics across the children of each parent.
To formalize a relationship in featuretools, we only need to specify the variable that links two tables together. The clients and the loans table are linked via the client_id variable and loans and payments are linked with the loan_id. The syntax for creating a relationship and adding it to the entityset are shown below:
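Something like the following sketch (again using the 0.x-style API):

```python
# Link each parent variable to the matching child variable
r_client_loans = ft.Relationship(es['clients']['client_id'],
                                 es['loans']['client_id'])
r_loan_payments = ft.Relationship(es['loans']['loan_id'],
                                  es['payments']['loan_id'])

es = es.add_relationship(r_client_loans)
es = es.add_relationship(r_loan_payments)
```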
The entityset now contains the three entities (tables) and the relationships that link these entities together. After adding entities and formalizing relationships, our entityset is complete and we are ready to make features.
Before we can quite get to deep feature synthesis, we need to understand feature primitives. We already know what these are, but we have just been calling them by different names! These are simply the basic operations that we use to form new features:
New features are created in featuretools using these primitives either by themselves or stacking multiple primitives. Below is a list of some of the feature primitives in featuretools (we can also define custom primitives):
These primitives can be used by themselves or combined to create features. To make features with specified primitives we use the ft.dfs function (standing for deep feature synthesis). We pass in the entityset, the target_entity, which is the table where we want to add the features, the selected trans_primitives (transformations), and agg_primitives (aggregations):
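A hedged sketch of that call (the particular primitives chosen here, month, mean, max, and last, are my picks to match the examples discussed below):

```python
features, feature_defs = ft.dfs(entityset=es,
                                target_entity='clients',
                                trans_primitives=['month'],
                                agg_primitives=['mean', 'max', 'last'])
```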
The result is a dataframe of new features for each client (because we made clients the target_entity). For example, we have the month each client joined which is a transformation feature primitive:
We also have a number of aggregation primitives such as the average payment amounts for each client:
Even though we specified only a few feature primitives, featuretools created many new features by combining and stacking these primitives.
The complete dataframe has 793 columns of new features!
We now have all the pieces in place to understand deep feature synthesis (dfs). In fact, we already performed dfs in the previous function call! A deep feature is simply a feature made of stacking multiple primitives, and dfs is the name of the process that makes these features. The depth of a deep feature is the number of primitives required to make the feature.
For example, the MEAN(payments.payment_amount) column is a deep feature with a depth of 1 because it was created using a single aggregation. A feature with a depth of two is LAST(loans.MEAN(payments.payment_amount)). This is made by stacking two aggregations: LAST (most recent) on top of MEAN. This represents the average payment size of the most recent loan for each client.
We can stack features to any depth we want, but in practice, I have never gone beyond a depth of 2. After this point, the features are difficult to interpret, but I encourage anyone interested to try “going deeper”.
We do not have to manually specify the feature primitives, but instead can let featuretools automatically choose features for us. To do this, we use the same ft.dfs function call but do not pass in any feature primitives:
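Roughly like this (max_depth=2 is an assumption that matches the depths discussed above):

```python
# With no primitives specified, featuretools falls back to its default set
features, feature_defs = ft.dfs(entityset=es,
                                target_entity='clients',
                                max_depth=2)
```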
Featuretools has built many new features for us to use. While this process does automatically create new features, it will not replace the data scientist because we still have to figure out what to do with all these features. For example, if our goal is to predict whether or not a client will repay a loan, we could look for the features most correlated with a specified outcome. Moreover, if we have domain knowledge, we can use that to choose specific feature primitives or seed deep feature synthesis with candidate features.
Automated feature engineering has solved one problem, but created another: too many features. Although it’s difficult to say before fitting a model which of these features will be important, it’s likely not all of them will be relevant to a task we want to train our model on. Moreover, having too many features can lead to poor model performance because the less useful features drown out those that are more important.
The problem of too many features is known as the curse of dimensionality. As the number of features increases (the dimension of the data grows) it becomes more and more difficult for a model to learn the mapping between features and targets. In fact, the amount of data needed for the model to perform well scales exponentially with the number of features.
The curse of dimensionality is combated with feature reduction (also known as feature selection): the process of removing irrelevant features. This can take on many forms: Principal Component Analysis (PCA), SelectKBest, using feature importances from a model, or auto-encoding using deep neural networks. However, feature reduction is a different topic for another article. For now, we know that we can use featuretools to create numerous features from many tables with minimal effort!
Like many topics in machine learning, automated feature engineering with featuretools is a complicated concept built on simple ideas. Using concepts of entitysets, entities, and relationships, featuretools can perform deep feature synthesis to create new features. Deep feature synthesis in turn stacks feature primitives — aggregations, which act across a one-to-many relationship between tables, and transformations, functions applied to one or more columns in a single table — to build new features from multiple tables.
In future articles, I’ll show how to use this technique on a real world problem, the Home Credit Default Risk competition currently being hosted on Kaggle. Stay tuned for that post, and in the meantime, read this introduction to get started in the competition! I hope that you can now use automated feature engineering as an aid in a data science pipeline. Our models are only as good as the data we give them, and automated feature engineering can help to make the feature creation process more efficient.
For more information on featuretools, including advanced usage, check out the online documentation. To see how featuretools is used in practice, read about the work of Feature Labs, the company behind the open-source library.
As always, I welcome feedback and constructive criticism and can be reached on Twitter @koehrsen_will.
|
Gant Laborde | 1.3K | 7 | https://medium.freecodecamp.org/machine-learning-how-to-go-from-zero-to-hero-40e26f8aa6da?source=---------3---------------- | Machine Learning: how to go from Zero to Hero – freeCodeCamp | If your understanding of A.I. and Machine Learning is a big question mark, then this is the blog post for you. Here, I gradually increase your AwesomenessicityTM by gluing inspirational videos together with friendly text.
Sit down and relax. These videos take time, and if they don’t inspire you to continue to the next section, fair enough.
However, if you find yourself at the bottom of this article, you’ve earned your well-rounded knowledge and passion for this new world. Where you go from there is up to you.
A.I. was always cool, from moving a paddle in Pong to lighting you up with combos in Street Fighter.
A.I. has always revolved around a programmer’s functional guess at how something should behave. Fun, but programmers aren’t always gifted at programming A.I., as we often see. Just Google “epic game fails” to see glitches in A.I., physics, and sometimes even experienced human players.
Regardless, A.I. has a new talent. You can teach a computer to play video games, understand language, and even how to identify people or things. This tip-of-the-iceberg new skill comes from an old concept that only recently got the processing power to exist outside of theory.
I’m talking about Machine Learning.
You don’t need to come up with advanced algorithms anymore. You just have to teach a computer to come up with its own advanced algorithm.
So how does something like that even work? An algorithm isn’t really written as much as it is sort of... bred. I’m not using breeding as an analogy. Watch this short video, which gives excellent commentary and animations to the high-level concept of creating the A.I.
Wow! Right? That’s a crazy process!
Now how is it that we can’t even understand the algorithm when it’s done? One great visual was when the A.I. was written to beat Mario games. As a human, we all understand how to play a side-scroller, but identifying the predictive strategy of the resulting A.I. is insane.
Impressed? There’s something amazing about this idea, right? The only problem is we don’t know Machine Learning, and we don’t know how to hook it up to video games.
Fortunately for you, Elon Musk already provided a non-profit company to do the latter. Yes, in a dozen lines of code you can hook up any A.I. you want to countless games/tasks!
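To make that concrete, here’s a hedged sketch of wiring up a random agent with OpenAI Gym, the non-profit’s toolkit referenced above (the environment name is my choice, and newer Gym versions changed the reset/step signatures slightly):

```python
import gym

# A random agent playing CartPole through the classic Gym loop
env = gym.make('CartPole-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # swap in your A.I.'s decision here
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
env.close()
```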
I have two good answers on why you should care. Firstly, Machine Learning (ML) is making computers do things that we’ve never made computers do before. If you want to do something new, not just new to you, but to the world, you can do it with ML.
Secondly, if you don’t influence the world, the world will influence you.
Right now significant companies are investing in ML, and we’re already seeing it change the world. Thought-leaders are warning that we can’t let this new age of algorithms exist outside of the public eye. Imagine if a few corporate monoliths controlled the Internet. If we don’t take up arms, the science won’t be ours. I think Christian Heilmann said it best in his talk on ML.
The concept is useful and cool. We understand it at a high level, but what the heck is actually happening? How does this work?
If you want to jump straight in, I suggest you skip this section and move on to the next “How Do I Get Started” section. If you’re motivated to be a DOer in ML, you won’t need these videos.
If you’re still trying to grasp how this could even be a thing, the following video is perfect for walking you through the logic, using the classic ML problem of handwriting.
Pretty cool huh? That video shows that each layer gets simpler rather than more complicated. Like the function is chewing data into smaller pieces that end in an abstract concept. You can get your hands dirty in interacting with this process on this site (by Adam Harley).
It’s cool watching data go through a trained model, but you can even watch your neural network get trained.
One of the classic real-world examples of Machine Learning in action is the iris data set from 1936. In a presentation I attended by JavaFXpert’s overview on Machine Learning, I learned how you can use his tool to visualize the adjustment and back propagation of weights to neurons on a neural network. You get to watch it train the neural model!
Even if you’re not a Java buff, the presentation Jim gives on all things Machine Learning is a pretty cool 1.5+ hour introduction into ML concepts, which includes more info on many of the examples above.
These concepts are exciting! Are you ready to be the Einstein of this new era? Breakthroughs are happening every day, so get started now.
There are tons of resources available. I’ll be recommending two approaches.
In this approach, you’ll understand Machine Learning down to the algorithms and the math. I know this way sounds tough, but how cool would it be to really get into the details and code this stuff from scratch!
If you want to be a force in ML, and hold your own in deep conversations, then this is the route for you.
I recommend that you try out Brilliant.org’s app (always great for any science lover) and take the Artificial Neural Network course. This course has no time limits and helps you learn ML while killing time in line on your phone.
This one costs money after Level 1.
Combine the above with simultaneous enrollment in Andrew Ng’s Stanford course on “Machine Learning in 11 weeks”. This is the course that Jim Weaver recommended in his video above. I’ve also had this course independently suggested to me by Jen Looper.
Everyone provides a caveat that this course is tough. For some of you that’s a show stopper, but for others, that’s why you’re going to put yourself through it and collect a certificate saying you did.
This course is 100% free. You only have to pay for a certificate if you want one.
With those two courses, you’ll have a LOT of work to do. Everyone should be impressed if you make it through because that’s not simple.
But more so, if you do make it through, you’ll have a deep understanding of the implementation of Machine Learning that will catapult you into successfully applying it in new and world-changing ways.
If you’re not interested in writing the algorithms, but you want to use them to create the next breathtaking website/app, you should jump into TensorFlow and the crash course.
TensorFlow is the de facto open-source software library for machine learning. It can be used in countless ways and even with JavaScript. Here’s a crash course.
Plenty more information on available courses and rankings can be found here.
If taking a course is not your style, you’re still in luck. You don’t have to learn the nitty-gritty of ML in order to use it today. You can efficiently utilize ML as a service in many ways with tech giants who have trained models ready.
I would still caution you that there’s no guarantee that your data is safe or even yours, but the offerings of services for ML are quite attractive!
Using an ML service might be the best solution for you if you’re excited and able to upload your data to Amazon/Microsoft/Google. I like to think of these services as a gateway drug to advanced ML. Either way, it’s good to get started now.
I have to say thank you to all the aforementioned people and videos. They were my inspiration to get started, and though I’m still a newb in the ML world, I’m happy to light the path for others as we embrace this awe-inspiring age we find ourselves in.
It’s imperative to reach out and connect with people if you take up learning this craft. Without friendly faces, answers, and sounding boards, anything can be hard. Just being able to ask and get a response is a game changer. Add me, and add the people mentioned above. Friendly people with friendly advice helps!
See?
I hope this article has inspired you and those around you to learn ML!
|
Emmanuel Ameisen | 935 | 11 | https://blog.insightdatascience.com/reinforcement-learning-from-scratch-819b65f074d8?source=---------4---------------- | Reinforcement Learning from scratch – Insight Data | Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program.
Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch.
Recently, I gave a talk at the O’Reilly AI conference in Beijing about some of the interesting lessons we’ve learned in the world of NLP. While there, I was lucky enough to attend a tutorial on Deep Reinforcement Learning (Deep RL) from scratch by Unity Technologies. I thought that the session, led by Arthur Juliani, was extremely informative and wanted to share some big takeaways below.
In our conversations with companies, we’ve seen a rise of interesting Deep RL applications, tools and results. In parallel, the inner workings and applications of Deep RL, such as AlphaGo pictured above, can often seem esoteric and hard to understand. In this post, I will give an overview of core aspects of the field that can be understood by anyone.
Many of the visuals are from the slides of the talk, and some are new. The explanations and opinions are mine. If anything is unclear, reach out to me here!
Deep RL is a field that has seen vast amounts of research interest, including learning to play Atari games, beating pro players at Dota 2, and defeating Go champions. Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?). In dialog systems for example, classical Deep Learning aims to learn the right response for a given query. On the other hand, Deep Reinforcement Learning focuses on the right sequences of sentences that will lead to a positive outcome, for example a happy customer.
This makes Deep RL particularly attractive for tasks that require planning and adaptation, such as manufacturing or self-driving. However, industry applications have trailed behind the rapidly advancing results coming out of the research community. A major reason is that Deep RL often requires an agent to experiment millions of times before learning anything useful. The best way to do this rapidly is by using a simulation environment. This tutorial will be using Unity to create environments to train agents in.
For this workshop led by Arthur Juliani and Leon Chen, their goal was to get every participant to successfully train multiple Deep RL algorithms in 4 hours. A tall order! Below is a comprehensive overview of many of the main algorithms that power Deep RL today. For a more complete set of tutorials, Arthur Juliani wrote an 8-part series starting here.
Deep RL can be used to best the top human players at Go, but to understand how that’s done, you first need to understand a few simple concepts, starting with much easier problems.
1/It all starts with slot machines
Let’s imagine you are faced with 4 chests that you can pick from at each turn. Each of them has a different average payout, and your goal is to maximize the total payout you receive after a fixed number of turns. This is a classic problem called Multi-armed bandits and is where we will start. The crux of the problem is to balance exploration, which helps us learn about which states are good, and exploitation, where we now use what we know to pick the best slot machine.
Here, we will utilize a value function that maps our actions to an estimated reward, called the Q function. First, we’ll initialize all Q values at equal values. Then, we’ll update the Q value of each action (picking each chest) based on how good the payout was after choosing this action. This allows us to learn a good value function. We will approximate our Q function using a neural network (starting with a very shallow one) that learns a probability distribution (by using a softmax) over the 4 potential chests.
While the value function tells us how good we estimate each action to be, the policy is the function that determines which actions we end up taking. Intuitively, we might want to use a policy that picks the action with the highest Q value. This performs poorly in practice, as our Q estimates will be very wrong at the start before we gather enough experience through trial and error. This is why we need to add a mechanism to our policy to encourage exploration. One way to do that is to use epsilon greedy, which consists of taking a random action with probability epsilon. We start with epsilon being close to 1, always choosing random actions, and lower epsilon as we go along and learn more about which chests are good. Eventually, we learn which chests are best.
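Here’s a tabular, epsilon-greedy sketch of that loop (the talk uses a shallow neural network with a softmax; this table-based version, with made-up payout probabilities, just keeps the update logic visible):

```python
import numpy as np

np.random.seed(0)
true_payouts = [0.2, 0.5, 0.1, 0.8]   # hidden average payout of each chest
Q = np.zeros(4)                        # our running value estimate per chest
counts = np.zeros(4)
epsilon = 1.0

for step in range(1000):
    if np.random.rand() < epsilon:
        action = np.random.randint(4)      # explore: pick a random chest
    else:
        action = int(np.argmax(Q))         # exploit: pick the best chest so far
    reward = float(np.random.rand() < true_payouts[action])
    counts[action] += 1
    Q[action] += (reward - Q[action]) / counts[action]  # incremental average
    epsilon = max(0.05, epsilon * 0.995)   # explore less as we learn more
```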
In practice, we might want to take a more subtle approach than either taking the action we think is the best, or a random action. A popular method is Boltzmann Exploration, which adjusts probabilities based on our current estimate of how good each chest is, adding in a randomness factor.
2/Adding different states
The previous example was a world in which we were always in the same state, waiting to pick from the same 4 chests in front of us. Most real-world problems consist of many different states. That is what we will add to our environment next. Now, the background behind chests alternates between 3 colors at each turn, changing the average values of the chests. This means we need to learn a Q function that depends not only on the action (the chest we pick), but the state (what the color of the background is). This version of the problem is called Contextual Multi-armed Bandits.
Surprisingly, we can use the same approach as before. The only thing we need to add is an extra dense layer to our neural network, that will take in as input a vector representing the current state of the world.
3/Learning about the consequences of our actions
There is another key factor that makes our current problem simpler than most. In most environments, such as in the maze depicted above, the actions that we take have an impact on the state of the world. If we move up on this grid, we might receive a reward or we might receive nothing, but the next turn we will be in a different state. This is where we finally introduce a need for planning.
First, we will define our Q function as the immediate reward in our current state, plus the discounted reward we are expecting by taking all of our future actions. This solution works if our Q estimate of states is accurate, so how can we learn a good estimate?
We will use a method called Temporal Difference (TD) learning to learn a good Q function. The idea is to only look at a limited number of steps in the future. TD(1) for example, only uses the next 2 states to evaluate the reward.
Surprisingly, we can use TD(0), which looks at the current state, and our estimate of the reward the next turn, and get great results. The structure of the network is the same, but we need to go through one forward step before receiving the error. We then use this error to back propagate gradients, like in traditional Deep Learning, and update our value estimates.
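A minimal, table-based sketch of that bootstrapped update (the talk approximates Q with a neural network and backpropagation; the maze dimensions and the transition below are assumptions used only to show the update rule):

```python
import numpy as np

n_states, n_actions = 49, 4            # e.g. a 7x7 grid with 4 moves
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def td0_update(s, a, r, s_next):
    # One TD(0)-style update: bootstrap from our estimate of the next state
    td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * td_error

td0_update(s=0, a=1, r=0.0, s_next=7)  # an illustrative transition
```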
3+/Introducing Monte Carlo
Another method to estimate the eventual success of our actions is Monte Carlo Estimates. This consists of playing out the entire episode with our current policy until we reach an end (success by reaching a green block or failure by reaching a red block in the image above) and use that result to update our value estimates for each traversed state. This allows us to propagate values efficiently in one batch at the end of an episode, instead of every time we make a move. The cost is that we are introducing noise to our estimates, since we attribute very distant rewards to them.
4/The world is rarely discrete
The previous methods were using neural networks to approximate our value estimates by mapping from a discrete number of states and actions to a value. In the maze for example, there were 49 states (squares) and 4 actions (move in each adjacent direction). In this environment, we are trying to learn how to balance a ball on a 2 dimensional paddle, by deciding at each time step whether we want to tilt the paddle left or right. Here, the state space becomes continuous (the angle of the paddle, and the position of the ball). The good news is, we can still use Neural Networks to approximate this function!
A note about off-policy vs on-policy learning: The methods we used previously are off-policy methods, meaning we can generate data with any strategy (using epsilon greedy, for example) and learn from it. On-policy methods can only learn from actions that were taken following our policy (remember, a policy is the method we use to determine which actions to take). This constrains our learning process, as we have to have an exploration strategy that is built into the policy itself, but allows us to tie results directly to our reasoning, and enables us to learn more efficiently.
The approach we will use here is called Policy Gradients, and is an on-policy method. Previously, we were first learning a value function Q for each action in each state and then building a policy on top. In Vanilla Policy Gradient, we still use Monte Carlo Estimates, but we learn our policy directly through a loss function that increases the probability of choosing rewarding actions. Since we are learning on policy, we cannot use methods such as epsilon greedy (which includes random choices), to get our agent to explore the environment. The way that we encourage exploration is by using a method called entropy regularization, which pushes our probability estimates to be wider, and thus will encourage us to make riskier choices to explore the space.
4+/Leveraging deep learning for representations
In practice, many state of the art RL methods require learning both a policy and value estimates. The way we do this with deep learning is by having both be two separate outputs of the same backbone neural network, which will make it easier for our neural network to learn good representations.
One method to do this is Advantage Actor Critic (A2C). We learn our policy directly with policy gradients (defined above), and learn a value function using something called Advantage. Instead of updating our value function based on rewards, we update it based on our advantage, which measures how much better or worse an action was than our previous value function estimated it to be. This helps make learning more stable compared to simple Q Learning and Vanilla Policy Gradients.
5/Learning directly from the screen
There is an additional advantage to using Deep Learning for these methods, which is that Deep Neural Networks excel at perceptive tasks. When a human plays a game, the information received is not a list of states, but an image (usually of a screen, or a board, or the surrounding environment).
Image-based Learning combines a Convolutional Neural Network (CNN) with RL. In this environment, we pass in a raw image instead of features, and add a 2 layer CNN to our architecture without changing anything else! We can even inspect activations to see what the network picks up on to determine value, and policy. In the example below, we can see that the network uses the current score and distant obstacles to estimate the value of the current state, while focusing on nearby obstacles for determining actions. Neat!
As a side note, while toying around with the provided implementation, I’ve found that visual learning is very sensitive to hyperparameters. Changing the discount rate slightly for example, completely prevented the neural network from learning even on a toy application. This is a widely known problem, but it is interesting to see it first hand.
6/Nuanced actions
So far, we’ve played with environments with continuous and discrete state spaces. However, every environment we studied had a discrete action space: we could move in one of four directions, or tilt the paddle to the left or right. Ideally, for applications such as self-driving cars, we would like to learn continuous actions, such as turning the steering wheel between 0 and 360 degrees. In this environment called 3D ball world, we can choose to tilt the paddle to any value on each of its axes. This gives us more control as to how we perform actions, but makes the action space much larger.
We can approach this by approximating our potential choices with Gaussian distributions. We learn a probability distribution over potential actions by learning the mean and variance of a Gaussian distribution, and our policy samples from that distribution. Simple, in theory :).
7/Next steps for the brave
There are a few concepts that separate the algorithms described above from state of the art approaches. It’s interesting to see that conceptually, the best robotics and game-playing algorithms are not that far away from the ones we just explored:
That’s it for this overview, I hope this has been informative and fun! If you are looking to dive deeper into the theory of RL, give Arthur’s posts a read, or diving deeper by following David Silver’s UCL course. If you are looking to learn more about the projects we do at Insight, or how we work with companies, please check us out below, or reach out to me here.
Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program.
Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch.
AI Lead at Insight AI @EmmanuelAmeisen
Insight Fellows Program - Your bridge to a career in data
|
Irhum Shafkat | 2K | 15 | https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1?source=---------5---------------- | Intuitively Understanding Convolutions for Deep Learning | The advent of powerful and versatile deep learning frameworks in recent years has made implementing convolution layers in a deep learning model an extremely simple task, often achievable in a single line of code.
However, understanding convolutions, especially for the first time, can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet convolutions as a concept are fascinatingly powerful and highly extensible, and in this post we'll break down the mechanics of the convolution operation step by step, relate it to the standard fully connected network, and explore just how convolution layers build up a strong visual hierarchy, making them powerful feature extractors for images.
The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel.
The kernel repeats this process for every location it slides over, converting a 2D matrix of features into yet another 2D matrix of features. The output features are essentially the weighted sums (with the weights being the values of the kernel itself) of the input features located roughly in the same location as the output pixel on the input layer.
Whether or not an input feature falls within this "roughly same location" is determined directly by whether it's in the area of the kernel that produced the output or not. This means the size of the kernel directly determines how many (or few) input features get combined in the production of a new output feature.
This is all in pretty stark contrast to a fully connected layer. In the above example, we have 5×5=25 input features, and 3×3=9 output features. If this were a standard fully connected layer, you’d have a weight matrix of 25×9 = 225 parameters, with every output feature being the weighted sum of every single input feature. Convolutions allow us to do this transformation with only 9 parameters, with each output feature, instead of “looking at” every input feature, only getting to “look” at input features coming from roughly the same location. Do take note of this, as it’ll be critical to our later discussion.
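To make the mechanics concrete, here's a deliberately naive NumPy version of the sliding-window operation described above (a sketch for intuition, not how frameworks actually implement it):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution (cross-correlation): slide the kernel,
    multiply elementwise, and sum into one output pixel per location."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

image = np.random.rand(5, 5)        # 25 input features
kernel = np.random.rand(3, 3)       # just 9 shared weights
print(conv2d(image, kernel).shape)  # (3, 3): 9 output features
```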
Before we move on, it’s definitely worth looking into two techniques that are commonplace in convolution layers: Padding and Strides.
Padding does something pretty clever to solve this: pad the edges with extra, “fake” pixels (usually of value 0, hence the oft-used term “zero padding”). This way, the kernel when sliding can allow the original edge pixels to be at its center, while extending into the fake pixels beyond the edge, producing an output the same size as the input.
The idea of the stride is to skip some of the slide locations of the kernel. A stride of 1 means picking slides a pixel apart, i.e. every single slide, which acts as a standard convolution. A stride of 2 means picking slides 2 pixels apart, skipping every other slide in the process and downsizing by roughly a factor of 2; a stride of 3 means skipping every 2 slides, downsizing by roughly a factor of 3, and so on.
More modern networks, such as the ResNet architectures entirely forgo pooling layers in their internal layers, in favor of strided convolutions when needing to reduce their output sizes.
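If you have a framework handy, the effect of padding and strides is easy to verify; a quick PyTorch sketch (the sizes are chosen arbitrarily):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 5, 5)  # one single-channel 5x5 input

same = nn.Conv2d(1, 1, kernel_size=3, padding=1)                # zero padding keeps the size
valid = nn.Conv2d(1, 1, kernel_size=3)                          # no padding shrinks it
strided = nn.Conv2d(1, 1, kernel_size=3, padding=1, stride=2)   # downsizes ~2x, no pooling needed

print(same(x).shape)     # torch.Size([1, 1, 5, 5])
print(valid(x).shape)    # torch.Size([1, 1, 3, 3])
print(strided(x).shape)  # torch.Size([1, 1, 3, 3])
```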
Of course, the diagrams above only deal with the case where the image has a single input channel. In practice, most input images have 3 channels, and that number only increases the deeper you go into a network. It's pretty easy to think of channels, in general, as being a "view" of the image as a whole, emphasising some aspects and de-emphasising others.
So this is where a key distinction between terms comes in handy: whereas in the 1-channel case the terms filter and kernel are interchangeable, in the general case they're actually pretty different. Each filter is actually a collection of kernels, with one kernel for every single input channel to the layer, and each kernel being unique.
Each filter in a convolution layer produces one and only one output channel, and they do it like so:
Each of the kernels of the filter "slides" over its respective input channel, producing a processed version of it. Some kernels may have stronger weights than others, to give more emphasis to certain input channels (e.g. a filter may have a red-channel kernel with stronger weights than the rest, and hence respond more to differences in the red channel's features).
Each of the per-channel processed versions is then summed together to form one channel. The kernels of a filter each produce one processed version of their channel, and the filter as a whole produces one overall output channel.
Finally, then there’s the bias term. The way the bias term works here is that each output filter has one bias term. The bias gets added to the output channel so far to produce the final output channel.
And with the single filter case down, the case for any number of filters is identical: Each filter processes the input with its own, different set of kernels and a scalar bias with the process described above, producing a single output channel. They are then concatenated together to produce the overall output, with the number of output channels being the number of filters. A nonlinearity is then usually applied before passing this as input to another convolution layer, which then repeats this process.
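The filter/kernel distinction shows up directly in the shape of a convolution layer's weights. A small PyTorch check (the channel counts are arbitrary):

```python
import torch
import torch.nn as nn

# Illustrative layer: 3 input channels (e.g. RGB), 8 filters, 3x3 kernels
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

print(conv.weight.shape)  # torch.Size([8, 3, 3, 3]): one 3-kernel "filter" per output channel
print(conv.bias.shape)    # torch.Size([8]): one bias term per filter

x = torch.randn(1, 3, 32, 32)  # a batch of one 32x32 RGB image
print(conv(x).shape)           # torch.Size([1, 8, 32, 32]): 8 output channels
```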
Even with the mechanics of the convolution layer down, it can still be hard to relate it back to a standard feed-forward network, and it still doesn’t explain why convolutions scale to, and work so much better for image data.
Suppose we have a 4×4 input, and we want to transform it into a 2×2 grid. If we were using a feedforward network, we’d reshape the 4×4 input into a vector of length 16, and pass it through a densely connected layer with 16 inputs and 4 outputs. One could visualize the weight matrix W for a layer:
And although the convolution kernel operation may seem a bit strange at first, it is still a linear transformation with an equivalent transformation matrix. If we were to use a kernel K of size 3 on the reshaped 4×4 input to get a 2×2 output, the equivalent transformation matrix would be:
(Note: while the above matrix is an equivalent transformation matrix, the actual operation is usually implemented as a very different matrix multiplication[2])
The convolution then, as a whole, is still a linear transformation, but at the same time it’s also a dramatically different kind of transformation. For a matrix with 64 elements, there’s just 9 parameters which themselves are reused several times. Each output node only gets to see a select number of inputs (the ones inside the kernel). There is no interaction with any of the other inputs, as the weights to them are set to 0.
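If you'd like to convince yourself of this, here's a small NumPy sketch that builds the equivalent transformation matrix for an arbitrary 3×3 kernel on a 4×4 input and checks it against the sliding-window computation (the kernel values are just placeholders):

```python
import numpy as np

def conv_as_matrix(K, in_size=4):
    """Build the dense matrix M such that (M @ x.ravel()) equals the 'valid'
    cross-correlation of kernel K with the in_size x in_size input x."""
    k = K.shape[0]
    out_size = in_size - k + 1
    M = np.zeros((out_size * out_size, in_size * in_size))
    for i in range(out_size):
        for j in range(out_size):
            for di in range(k):
                for dj in range(k):
                    M[i * out_size + j, (i + di) * in_size + (j + dj)] = K[di, dj]
    return M

K = np.arange(1, 10).reshape(3, 3).astype(float)  # a hypothetical 3x3 kernel
x = np.random.rand(4, 4)
M = conv_as_matrix(K)
print(M.shape)  # (4, 16): 64 entries, but only 9 distinct weights

# Check against a direct sliding-window computation
direct = np.array([[(K * x[i:i+3, j:j+3]).sum() for j in range(2)] for i in range(2)])
assert np.allclose(M @ x.ravel(), direct.ravel())
```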
It’s useful to see the convolution operation as a hard prior on the weight matrix. In this context, by prior, I mean predefined network parameters. For example, when you use a pretrained model for image classification, you use the pretrained network parameters as your prior, as a feature extractor to your final densely connected layer.
In that sense, there’s a direct intuition between why both are so efficient (compared to their alternatives). Transfer learning is efficient by orders of magnitude compared to random initialization, because you only really need to optimize the parameters of the final fully connected layer, which means you can have fantastic performance with only a few dozen images per class.
Here, you don’t need to optimize all 64 parameters, because we set most of them to zero (and they’ll stay that way), and the rest we convert to shared parameters, resulting in only 9 actual parameters to optimize. This efficiency matters, because when you move from the 784 inputs of MNIST to real world 224×224×3 images, thats over 150,000 inputs. A dense layer attempting to halve the input to 75,000 inputs would still require over 10 billion parameters. For comparison, the entirety of ResNet-50 has some 25 million parameters.
So fixing some parameters to 0, and tying parameters increases efficiency, but unlike the transfer learning case, where we know the prior is good because it works on a large general set of images, how do we know this is any good?
The answer lies in the feature combinations the prior leads the parameters to learn.
Early on in this article, we discussed that:
So with backpropagation coming in all the way from the classification nodes of the network, the kernels have the interesting task of learning weights to produce features only from a set of local inputs. Additionally, because the kernel itself is applied across the entire image, the features the kernel learns must be general enough to come from any part of the image.
If this were any other kind of data, eg. categorical data of app installs, this would’ve been a disaster, for just because your number of app installs and app type columns are next to each other doesn’t mean they have any “local, shared features” common with app install dates and time used. Sure, the four may have an underlying higher level feature (eg. which apps people want most) that can be found, but that gives us no reason to believe the parameters for the first two are exactly the same as the parameters for the latter two. The four could’ve been in any (consistent) order and still be valid!
Pixels, however, always appear in a consistent order, and nearby pixels influence a pixel: e.g. if all nearby pixels are red, it's pretty likely the pixel is also red. If there are deviations, that's an interesting anomaly that could be converted into a feature, and all this can be detected from comparing a pixel with its neighbors, with other pixels in its locality.
And this idea is really what a lot of earlier computer vision feature extraction methods were based around. For instance, for edge detection, one can use a Sobel edge detection filter, a kernel with fixed parameters, operating just like the standard one-channel convolution:
For a non-edge containing grid (e.g. the background sky), most of the pixels are the same value, so the overall output of the kernel at that point is 0. For a grid with a vertical edge, there is a difference between the pixels to the left and right of the edge, and the kernel computes that difference to be non-zero, activating and revealing the edges. The kernel works on only a 3×3 grid at a time, detecting anomalies on a local scale, yet when applied across the entire image, it is enough to detect a certain feature on a global scale, anywhere in the image!
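Here's a tiny NumPy/SciPy sketch of that behaviour on a toy image with a single vertical edge (the image is made up purely for illustration):

```python
import numpy as np
from scipy.signal import correlate2d

# Sobel kernel for vertical edges: a standard fixed-parameter kernel
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# A toy image: dark on the left, bright on the right, so one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0

edges = correlate2d(img, sobel_x, mode='valid')
print(edges)  # near zero in flat regions, non-zero where the kernel straddles the edge
```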
So the key difference we make with deep learning is to ask this question: can useful kernels be learnt? For early layers operating on raw pixels, we could reasonably expect feature detectors of fairly low-level features, like edges, lines, etc.
There’s an entire branch of deep learning research focused on making neural network models interpretable. One of the most powerful tools to come out of that is Feature Visualization using optimization[3]. The idea at core is simple: optimize a image (usually initialized with random noise) to activate a filter as strongly as possible. This does make intuitive sense: if the optimized image is completely filled with edges, that’s strong evidence that’s what the filter itself is looking for and is activated by. Using this, we can peek into the learnt filters, and the results are stunning:
One important thing to notice here is that convolved images are still images. The output of a small grid of pixels from the top left of an image will still be on the top left. So you can run another convolution layer on top of another (such as the two on the left) to extract deeper features, which we visualize.
Yet, however deep our feature detectors get, without any further changes they’ll still be operating on very small patches of the image. No matter how deep your detectors are, you can’t detect faces from a 3×3 grid. And this is where the idea of the receptive field comes in.
An essential design choice of any CNN architecture is that the input sizes grow smaller and smaller from the start to the end of the network, while the number of channels grows larger. This, as mentioned earlier, is often done through strides or pooling layers. Locality determines what inputs from the previous layer the outputs get to see. The receptive field determines what area of the original input to the entire network the output gets to see.
The idea of a strided convolution is that we only process slides a fixed distance apart, and skip the ones in the middle. From a different point of view, we only keep outputs a fixed distance apart, and remove the rest[1].
We then apply a nonlinearity to the output, and per usual, then stack another new convolution layer on top. And this is where things get interesting. Even if we were to apply a kernel of the same size (3×3), having the same local area, to the output of the strided convolution, the kernel would have a larger effective receptive field:
This is because the output of the strided layer still represents the same image. It is not so much cropping as it is resizing; the only thing is that each single pixel in the output is a "representative" of a larger area (whose other pixels were discarded) from the same rough location in the original input. So when the next layer's kernel operates on the output, it's operating on pixels collected from a larger area.
(Note: if you’re familiar with dilated convolutions, note that the above is not a dilated convolution. Both are methods of increasing the receptive field, but dilated convolutions are a single layer, while this takes place on a regular convolution following a strided convolution, with a nonlinearity inbetween)
This expansion of the receptive field allows the convolution layers to combine the low level features (lines, edges), into higher level features (curves, textures), as we see in the mixed3a layer.
Followed by a pooling/strided layer, the network continues to create detectors for even higher level features (parts, patterns), as we see for mixed4a.
The repeated reduction in image size across the network results in, by the 5th block of convolutions, input sizes of just 7×7, compared to inputs of 224×224. At this point, each single pixel represents a grid of 32×32 pixels, which is huge.
Compared to earlier layers, where an activation meant detecting an edge, here, an activation on the tiny 7×7 grid is one for a very high level feature, such as for birds.
The network as a whole progresses from a small number of filters (64 in the case of GoogLeNet), detecting low level features, to a very large number of filters (1024 in the final convolution), each looking for an extremely specific high level feature. Followed by a final pooling layer, which collapses each 7×7 grid into a single pixel, each channel is a feature detector with a receptive field equivalent to the entire image.
Compared to what a standard feedforward network would have done, the output here is really nothing short of awe-inspiring. A standard feedforward network would have produced abstract feature vectors, from combinations of every single pixel in the image, requiring intractable amounts of data to train.
The CNN, with the priors imposed on it, starts by learning very low level feature detectors, and as across the layers as its receptive field is expanded, learns to combine those low-level features into progressively higher level features; not an abstract combination of every single pixel, but rather, a strong visual hierarchy of concepts.
By detecting low level features, and using them to detect higher level features as it progresses up its visual hierarchy, it is eventually able to detect entire visual concepts such as faces, birds, trees, etc., and that's what makes CNNs so powerful, yet so efficient, with image data.
With the visual hierarchy CNNs build, it is pretty reasonable to assume that their vision systems are similar to humans'. And they're really great with real world images, but they also fail in ways that strongly suggest their vision systems aren't entirely human-like. The most significant problem: Adversarial Examples[4], examples which have been specifically modified to fool the model.
Adversarial examples would be a non-issue if the only tampered ones that caused the models to fail were ones that even humans would notice. The problem is, the models are susceptible to attacks by samples which have only been tampered with ever so slightly, and would clearly not fool any human. This opens the door for models to silently fail, which can be pretty dangerous for a wide range of applications from self-driving cars to healthcare.
Robustness against adversarial attacks is currently a highly active area of research, the subject of many papers and even competitions, and solutions will certainly improve CNN architectures to become safer and more reliable.
CNNs were the models that allowed computer vision to scale from simple applications to powering sophisticated products and services, ranging from face detection in your photo gallery to making better medical diagnoses. They might be the key method in computer vision going forward, or some other new breakthrough might just be around the corner. Regardless, one thing is for sure: they’re nothing short of amazing, at the heart of many present-day innovative applications, and are most certainly worth deeply understanding.
Hope you enjoyed this article! If you’d like to stay connected, you’ll find me on Twitter here. If you have a question, comments are welcome! — I find them to be useful to my own learning process as well.
Curious programmer, tinkers around in Python and deep learning.
Sharing concepts, ideas, and codes.
|
Sam Drozdov | 2.3K | 6 | https://uxdesign.cc/an-intro-to-machine-learning-for-designers-5c74ba100257?source=---------6---------------- | An intro to Machine Learning for designers – UX Collective | There is an ongoing debate about whether or not designers should write code. Wherever you fall on this issue, most people would agree that designers should know about code. This helps designers understand constraints and empathize with developers. It also allows designers to think outside of the pixel perfect box when problem solving. For the same reasons, designers should know about machine learning.
Put simply, machine learning is a “field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Even though Arthur Samuel coined the term over fifty years ago, only recently have we seen the most exciting applications of machine learning — digital assistants, autonomous driving, and spam-free email all exist thanks to machine learning.
Over the past decade new algorithms, better hardware, and more data have made machine learning an order of magnitude more effective. Only in the past few years have companies like Google, Amazon, and Apple made some of their powerful machine learning tools available to developers. Now is the best time to learn about machine learning and apply it to the products you are building.
Since machine learning is now more accessible than ever before, designers today have the opportunity to think about how machine learning can be applied to improve their products. Designers should be able to talk with software developers about what is possible, how to prepare, and what outcomes to expect. Below are a few example applications that should serve as inspiration for these conversations.
Machine learning can help create user-centric products by personalizing experiences to the individuals who use them. This allows us to improve things like recommendations, search results, notifications, and ads.
Machine learning is effective at finding abnormal content. Credit card companies use this to detect fraud, email providers use this to detect spam, and social media companies use this to detect things like hate speech.
Machine learning has enabled computers to begin to understand the things we say (natural-language processing) and the things we see (computer vision). This allows Siri to understand “Siri, set a reminder...”, Google Photos to create albums of your dog, and Facebook to describe a photo to those visually impaired.
Machine learning is also helpful in understanding how users are grouped. This insight can then be used to look at analytics on a group-by-group basis. From here, different features can be evaluated across groups or be rolled out to only a particular group of users.
Machine learning allows us to make predictions about how a user might behave next. Knowing this, we can help prepare for a user’s next action. For example, if we can predict what content a user is planning on viewing, we can preload that content so it’s immediately ready when they want it.
Depending on the application and what data is available, there are different types of machine learning algorithms to choose from. I’ll briefly cover each of the following.
Supervised learning allows us to make predictions using correctly labeled data. Labeled data is a group of examples that has informative tags or outputs. For example, photos with associated hashtags or a house's features (e.g. number of bedrooms, location) and its price.
By using supervised learning we can fit a line to the labelled data that either splits the data into categories or represents the trend of the data. Using this line we are able to make predictions on new data. For example, we can look at new photos and predict hashtags or look at a new house’s features and predict its price.
If the output we are trying to predict is a list of tags or values we call it classification. If the output we are trying to predict is a number we call it regression.
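For the curious, here's what that looks like in a few lines of scikit-learn; the house data below is entirely made up and only meant to illustrate the classification/regression distinction:

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical labeled data: [bedrooms, size in square meters] for a handful of houses
features = [[2, 70], [3, 95], [4, 120], [5, 160]]
prices = [200_000, 280_000, 350_000, 450_000]   # numeric output -> regression
has_garden = [0, 0, 1, 1]                       # category output -> classification

regressor = LinearRegression().fit(features, prices)
classifier = LogisticRegression().fit(features, has_garden)

print(regressor.predict([[3, 100]]))   # predicted price for a new house
print(classifier.predict([[3, 100]]))  # predicted category for the same house
```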
Unsupervised learning is helpful when we have unlabeled data or we are not exactly sure what outputs (like an image’s hashtags or a house’s price) are meaningful. Instead we can identify patterns among unlabeled data. For example, we can identify related items on an e-commerce website or recommend items to someone based on others who made similar purchases.
If the pattern is a group we call it a cluster. If the pattern is a rule (e.g. if this, then that) we call it an association.
Reinforcement learning doesn’t use an existing data set. Instead we create an agent to collect its own data through trial-and-error in an environment where it is reinforced with a reward. For example, an agent can learn to play Mario by receiving a positive reward for collecting coins and a negative reward for walking into a Goomba.
Reinforcement learning is inspired by the way that humans learn and has turned out to be an effective way to teach computers. Specifically, reinforcement learning has been effective at training computers to play games like Go and Dota.
Understanding the problem you are trying to solve and the available data will constrain the types of machine learning you can use (e.g. identifying objects in an image with supervised learning requires a labeled data set of images). However, constraints are the fruit of creativity. In some cases, you can set out to collect data that is not already available or consider other approaches.
Even though machine learning is a science, it comes with a margin of error. It is important to consider how a user’s experience might be impacted by this margin of error. For example, when an autonomous car fails to recognize its surroundings people can get hurt.
Even though machine learning has never been as accessible as it is today, it still requires additional resources (developers and time) to be integrated into a product. This makes it important to think about whether the resulting impact justifies the amount of resources needed to implement.
We have barely covered the tip of the iceberg, but hopefully at this point you feel more comfortable thinking about how machine learning can be applied to your product. If you are interested in learning more about machine learning, here are some helpful resources:
Thanks for reading. Chat with me on Twitter @samueldrozdov
Digital Product Designer samueldrozdov.com
Curated stories on user experience, usability, and product design. By @fabriciot and @caioab.
|
Conor Dewey | 252 | 10 | https://towardsdatascience.com/the-big-list-of-ds-ml-interview-resources-2db4f651bd63?source=---------7---------------- | The Big List of DS/ML Interview Resources – Towards Data Science | Data science interviews certainly aren’t easy. I know this first hand. I’ve participated in over 50 individual interviews and phone screens while applying for competitive internships over the last calendar year. Through this exciting and somewhat (at times, very) painful process, I’ve accumulated a plethora of useful resources that helped me prepare for and eventually pass data science interviews.
Long story short, I’ve decided to sort through all my bookmarks and notes in order to deliver a comprehensive list of data science resources.
With this list by your side, you should have more than enough effective tools at your disposal next time you’re prepping for a big interview.
It’s worth noting that many of these resources are naturally going to geared towards entry-level and intern data science positions, as that’s where my expertise lies. Keep that in mind and enjoy!
Here’s some of the more general resources covering data science as a whole. Specifically, I highly recommend checking out the first two links regarding 120 Data Science Interview Questions. While the ebook itself is a couple bucks out of pocket, the answers themselves are free on Quora. These were some of my favorite full-coverage questions to practice with right before an interview.
Even Data Scientists cannot escape the dreaded algorithmic coding interview. In my experience, this isn’t the case 100% of the time, but chances are you’ll be asked to work through something similar to an easy or medium question on LeetCode or HackerRank.
As far as language goes, most companies will let you use whatever language you want. Personally, I did almost all of my algorithmic coding in Java even though the positions were targeted at Python and R programmers. If I had to recommend one thing, it’s to break out your wallet and invest in Cracking the Coding Interview. It absolutely lives up to the hype. I plan to continue using it for years to come.
Once the interviewer knows that you can think-through problems and code effectively, chances are that you’ll move onto some more data science specific applications. Depending on the interviewer and the position, you will likely be able to choose between Python and R as your tool of choice. Since I’m partial to Python, my resources below will primarily focus on effectively using Pandas and NumPy for data analysis.
A data science interview typically isn’t complete without checking your knowledge of SQL. This can be done over the phone or through a live coding question, more likely the latter. I’ve found that the difficulty level of these questions can vary a good bit, ranging from being painfully easy to requiring complex joins and obscure functions.
Our good friend statistics is still crucial for Data Scientists, and it's reflected as such in interviews. I had many interviews begin by seeing if I could explain a common statistics or probability concept in simple and concise terms. As positions get more experienced, I suspect this happens less and less as traditional statistical questions begin to take the more practical form of A/B testing scenarios, covered later in the post.
You’ll notice that I’ve compiled a few more resources here than in other sections. This isn’t a mistake. Machine learning is a complex field that is a virtual guarantee in data science interviews today.
The way that you’ll be tested on this is no guarantee however. It may come up as a conceptual question regarding cross validation or bias-variance tradeoff, or it may take the form of a take home assignment with a dataset attached. I’ve seen both several times, so you’ve got to be prepared for anything.
Specifically, check out the Machine Learning Flashcards below, they're only a couple bucks and were by far my favorite way to quiz myself on any conceptual ML stuff.
This won’t be covered in every single data science interview, but it’s certainly not uncommon. Most interviews will have atleast one section solely dedicated to product thinking which often lends itself to A/B testing of some sort. Make sure your familiar with the concepts and statistical background necessary in order to be prepared when it comes up. If you have time to spare, I took the free online course by Udacity and overall, I was pretty impressed.
Lastly, I wanted to call out all of the posts related to data science jobs and interviewing that I read over and over again to understand, not only how to prepare, but what to expect as well. If you only check out one section here, this is the one to focus on. This is the layer that sits on top of all the technical skills and application. Don’t overlook it.
I hope you find these resources useful during your next interview or job search. I know I did, truthfully I’m just glad that I saved these links somewhere. Lastly, this post is part of an ongoing initiative to ‘open-source’ my experience applying and interviewing at data science positions, so if you enjoyed this content then be sure to follow me for more stuff like this.
If you’re interested in receiving my weekly rundown of interesting articles and resources focused on data science, machine learning, and artificial intelligence, then subscribe to Self Driven Data Science using the form below!
If you enjoyed this post, feel free to hit the clap button and if you’re interested in posts to come, make sure to follow me on Medium at the link below — I’ll be writing and shipping every day this month as part of a 30-Day Challenge.
This article was originally published on conordewey.com
Data Scientist & Writer | www.conordewey.com
Sharing concepts, ideas, and codes.
|
Abhishek Parbhakar | 937 | 6 | https://towardsdatascience.com/must-know-information-theory-concepts-in-deep-learning-ai-e54a5da9769d?source=---------8---------------- | Must know Information Theory concepts in Deep Learning (AI) | Information theory is an important field that has made significant contributions to deep learning and AI, and yet is unknown to many. Information theory can be seen as a sophisticated amalgamation of basic building blocks of deep learning: calculus, probability and statistics. Some examples of concepts in AI that come from Information theory or related fields:
In the early 20th century, scientists and engineers were struggling with the question: “How to quantify the information? Is there an analytical way or a mathematical measure that can tell us about the information content?”. For example, consider the two sentences below:
It is not difficult to tell that the second sentence gives us more information since it also tells us that Bruno is “big” and “brown” in addition to being a “dog”. How can we quantify the difference between the two sentences? Can we have a mathematical measure that tells us how much more information the second sentence has compared to the first?
Scientists were struggling with these questions. Semantics, domain and form of data only added to the complexity of the problem. Then, mathematician and engineer Claude Shannon came up with the idea of “Entropy” that changed our world forever and marked the beginning of “Digital Information Age”.
Shannon proposed that the “semantic aspects of data are irrelevant”, and nature and meaning of data doesn’t matter when it comes to information content. Instead he quantified information in terms of probability distribution and “uncertainty”. Shannon also introduced the term “bit”, that he humbly credited to his colleague John Tukey. This revolutionary idea not only laid the foundation of Information Theory but also opened new avenues for progress in fields like artificial intelligence.
Below we discuss four popular, widely used and must known Information theoretic concepts in deep learning and data sciences:
Also called Information Entropy or Shannon Entropy.
Entropy gives a measure of uncertainty in an experiment. Let’s consider two experiments:
If we compare the two experiments, in exp 2 it is easier to predict the outcome as compared to exp 1. So, we can say that exp 1 is inherently more uncertain/unpredictable than exp 2. This uncertainty in the experiment is measured using entropy.
Therefore, if there is more inherent uncertainty in the experiment, then it has higher entropy; the less predictable the experiment, the higher its entropy. The probability distribution of the experiment is used to calculate the entropy.
A deterministic experiment, which is completely predictable, say tossing a coin with P(H)=1, has entropy zero. An experiment which is completely random, say rolling fair dice, is least predictable, has maximum uncertainty, and has the highest entropy among such experiments.
Another way to look at entropy is as the average information gained when we observe the outcomes of a random experiment. The information gained from an outcome of an experiment is defined as a function of the probability of occurrence of that outcome: the rarer the outcome, the more information is gained from observing it.
For example, in a deterministic experiment, we always know the outcome, so no new information is gained from observing the outcome, and hence the entropy is zero.
For a discrete random variable X, with possible outcomes (states) x_1,...,x_n, the entropy, in units of bits, is defined as:
H(X) = -Σ_{i=1}^{n} p(x_i) log₂ p(x_i)
where p(x_i) is the probability of i^th outcome of X.
Cross entropy is used to compare two probability distributions. It tells us how similar two distributions are.
Cross entropy between two probability distributions p and q defined over the same set of outcomes is given by:
H(p, q) = -Σ_x p(x) log₂ q(x)
Mutual information is a measure of mutual dependency between two probability distributions or random variables. It tells us how much information about one variable is carried by the another variable.
Mutual information captures the dependency between random variables and is more general than the vanilla correlation coefficient, which captures only linear relationships.
Mutual information of two discrete random variables X and Y is defined as:
I(X; Y) = Σ_x Σ_y p(x, y) log₂ [ p(x, y) / (p(x) p(y)) ]
where p(x,y) is the joint probability distribution of X and Y, and p(x) and p(y) are the marginal probability distribution of X and Y respectively.
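As a quick illustration, mutual information can be computed directly from a (hypothetical) joint distribution with a few lines of NumPy:

```python
import numpy as np

# Hypothetical joint distribution of two binary random variables X and Y
p_xy = np.array([[0.3, 0.1],
                 [0.1, 0.5]])
p_x = p_xy.sum(axis=1, keepdims=True)   # marginal distribution of X
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal distribution of Y

mutual_info = np.sum(p_xy * np.log2(p_xy / (p_x * p_y)))
print(mutual_info)  # > 0: knowing X tells us something about Y (it is 0 iff they are independent)
```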
Also called Relative Entropy.
KL divergence is another measure to find similarities between two probability distributions. It measures how much one distribution diverges from the other.
Suppose we have some data and the true distribution underlying it is ‘P’. But we don’t know this ‘P’, so we choose a new distribution ‘Q’ to approximate this data. Since ‘Q’ is just an approximation, it won’t be able to approximate the data as well as ‘P’ and some information loss will occur. This information loss is given by KL divergence.
KL divergence between ‘P’ and ‘Q’ tells us how much information we lose when we try to approximate data given by ‘P’ with ‘Q’.
KL divergence of a probability distribution Q from another probability distribution P is defined as:
D_KL(P || Q) = Σ_x P(x) log₂ [ P(x) / Q(x) ]
KL divergence is commonly used in the unsupervised machine learning technique of Variational Autoencoders.
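Here's a small NumPy sketch, using two made-up discrete distributions, that computes entropy, cross entropy and KL divergence and checks how they relate (cross entropy = entropy + KL divergence):

```python
import numpy as np

p = np.array([0.1, 0.4, 0.3, 0.2])      # "true" distribution P over 4 outcomes
q = np.array([0.25, 0.25, 0.25, 0.25])  # approximating distribution Q

entropy_p = -np.sum(p * np.log2(p))         # Shannon entropy of P, in bits
cross_entropy = -np.sum(p * np.log2(q))     # H(P, Q)
kl_divergence = np.sum(p * np.log2(p / q))  # D_KL(P || Q): information lost by using Q for P

print(entropy_p, cross_entropy, kl_divergence)
assert np.isclose(cross_entropy, entropy_p + kl_divergence)
```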
Information Theory was originally formulated by mathematician and electrical engineer Claude Shannon in his seminal paper “A Mathematical Theory of Communication” in 1948.
Note: Terms experiments, random variable & AI, machine learning, deep learning, data science have been used loosely above but have technically different meanings.
In case you liked the article, do follow me Abhishek Parbhakar for more articles related to AI, philosophy and economics.
Finding equilibria among AI, philosophy, and economics.
Sharing concepts, ideas, and codes.
|
Aman Dalmia | 2.3K | 17 | https://blog.usejournal.com/what-i-learned-from-interviewing-at-multiple-ai-companies-and-start-ups-a9620415e4cc?source=---------9---------------- | What I learned from interviewing at multiple AI companies and start-ups | Over the past 8 months, I’ve been interviewing at various companies like Google’s DeepMind, Wadhwani Institute of AI, Microsoft, Ola, Fractal Analytics, and a few others primarily for the roles — Data Scientist, Software Engineer & Research Engineer. In the process, not only did I get an opportunity to interact with many great minds, but also had a peek at myself along with a sense of what people really look for when interviewing someone. I believe that if I’d had this knowledge before, I could have avoided many mistakes and have prepared in a much better manner, which is what the motivation behind this post is, to be able to help someone bag their dream place of work.
This post arose from a discussion with one of my juniors on the lack of really fulfilling job opportunities offered through campus placements for people working in AI. Also, when I was preparing, I noticed people using a lot of resources but as per my experience over the past months, I realised that one can do away with a few minimal ones for most roles in AI, all of which I’m going to mention at the end of the post. I begin with How to get noticed a.k.a. the interview. Then I provide a List of companies and start-ups to apply, which is followed by How to ace that interview. Based on whatever experience I’ve had, I add a section on What we should strive to work for. I conclude with Minimal Resources you need for preparation.
NOTE: For people who are sitting for campus placements, there are two things I’d like to add. Firstly, most of what I’m going to say (except for the last one maybe) is not going to be relevant to you for placements. But, and this is my second point, as I mentioned before, opportunities on campus are mostly in software engineering roles having no intersection with AI. So, this post is specifically meant for people who want to work on solving interesting problems using AI. Also, I want to add that I haven’t cleared all of these interviews but I guess that’s the essence of failure — it’s the greatest teacher! The things that I mention here may not all be useful but these are things that I did and there’s no way for me to know what might have ended up making my case stronger.
To be honest, this step is the most important one. What makes off-campus placements so tough and exhausting is getting the recruiter to actually go through your profile among the plethora of applications that they get. Having a contact inside the organisation place a referral for you would make it quite easy, but, in general, this part can be sub-divided into three keys steps:
a) Do the regulatory preparation and do that well: So, with regulatory preparation, I mean —a LinkedIn profile, a Github profile, a portfolio website and a well-polished CV. Firstly, your CV should be really neat and concise. Follow this guide by Udacity for cleaning up your CV — Resume Revamp. It has everything that I intend to say and I’ve been using it as a reference guide myself. As for the CV template, some of the in-built formats on Overleaf are quite nice. I personally use deedy-resume. Here’s a preview:
As it can be seen, a lot of content can be fit into one page. However, if you really do need more than that, then the format linked above would not work directly. Instead, you can find a modified multi-page format of the same here. The next most important thing to mention is your Github profile. A lot of people underestimate the potential of this, just because unlike LinkedIn, it doesn’t have a “Who Viewed Your Profile” option. People DO go through your Github because that’s the only way they have to validate what you have mentioned in your CV, given that there’s a lot of noise today with people associating all kinds of buzzwords with their profile. Especially for data science, open-source has a big role to play too with majority of the tools, implementations of various algorithms, lists of learning resources, all being open-sourced. I discuss the benefits of getting involved in Open-Source and how one can start from scratch in an earlier post here. The bare minimum for now should be:
• Create a Github account if you don’t already have one.• Create a repository for each of the projects that you have done.• Add documentation with clear instructions on how to run the code• Add documentation for each file mentioning the role of each function, the meaning of each parameter, proper formatting (e.g. PEP8 for Python) along with a script to automate the previous step (Optional).
Moving on, the third step is what most people lack, which is having a portfolio website demonstrating their experience and personal projects. Making a portfolio indicates that you are really serious about getting into the field and adds a lot of points to the authenticity factor. Also, you generally have space constraints on your CV and tend to miss out on a lot of details. You can use your portfolio to really delve deep into the details if you want to and it’s highly recommended to include some sort of visualisation or demonstration of the project/idea. It’s really easy to create one too as there are a lot of free platforms with drag-and-drop features making the process really painless. I personally use Weebly which is a widely used tool. It’s better to have a reference to begin with. There are a lot of awesome ones out there but I referred to Deshraj Yadav’s personal website to begin with making mine:
Finally, a lot of recruiters and start-ups have nowadays started using LinkedIn as their go-to platform for hiring. A lot of good jobs get posted there. Apart from recruiters, the people working at influential positions are quite active there as well. So, if you can grab their attention, you have a good chance of getting in too. Apart from that, maintaining a clean profile is necessary for people to have the will to connect with you. An important part of LinkedIn is their search tool and for you to show up, you must have the relevant keywords interspersed over your profile. It took me a lot of iterations and re-evaluations to finally have a decent one. Also, you should definitely ask people with or under whom you’ve worked with to endorse you for your skills and add a recommendation talking about their experience of working with you. All of this increases your chance of actually getting noticed. I’ll again point towards Udacity’s guide for LinkedIn and Github profiles.
All this might seem like a lot, but remember that you don’t need to do it in a single day or even a week or a month. It’s a process, it never ends. Setting up everything at first would definitely take some effort but once it’s there and you keep updating it regularly as events around you keep happening, you’ll not only find it to be quite easy, but also you’ll be able to talk about yourself anywhere anytime without having to explicitly prepare for it because you become so aware about yourself.
b) Stay authentic: I’ve seen a lot of people do this mistake of presenting themselves as per different job profiles. According to me, it’s always better to first decide what actually interests you, what would you be happy doing and then search for relevant opportunities; not the other way round. The fact that the demand for AI talent surpasses the supply for the same gives you this opportunity. Spending time on your regulatory preparation mentioned above would give you an all-around perspective on yourself and help make this decision easier. Also, you won’t need to prepare answers to various kinds of questions that you get asked during an interview. Most of them would come out naturally as you’d be talking about something you really care about.
c) Networking: Once you’re done with a), figured out b), Networking is what will actually help you get there. If you don’t talk to people, you miss out on hearing about many opportunities that you might have a good shot at. It’s important to keep connecting with new people each day, if not physically, then on LinkedIn, so that upon compounding it after many days, you have a large and strong network. Networking is NOT messaging people to place a referral for you. When I was starting off, I did this mistake way too often until I stumbled upon this excellent article by Mark Meloon, where he talks about the importance of building a real connection with people by offering our help first. Another important step in networking is to get your content out. For example, if you’re good at something, blog about it and share that blog on Facebook and LinkedIn. Not only does this help others, it helps you as well. Once you have a good enough network, your visibility increases multi-fold. You never know how one person from your network liking or commenting on your posts, may help you reach out to a much broader audience including people who might be looking for someone of your expertise.
I’m presenting this list in alphabetical order to avoid the misinterpretation of any specific preference. However, I do place a “*” on the ones that I’d personally recommend. This recommendation is based on either of the following: mission statement, people, personal interaction or scope of learning. More than 1 “*” is purely based on the 2nd and 3rd factors.
Your interview begins the moment you have entered the room and a lot of things can happen between that moment and the time when you’re asked to introduce yourself — your body language and the fact that you’re smiling while greeting them plays a big role, especially when you’re interviewing for a start-up as culture-fit is something that they extremely care about. You need to understand that as much as the interviewer is a stranger to you, you’re a stranger to him/her too. So, they’re probably just as nervous as you are.
It’s important to view the interview as more of a conversation between yourself and the interviewer. Both of you are looking for a mutual fit — you are looking for an awesome place to work at and the interviewer is looking for an awesome person (like you) to work with. So, make sure that you’re feeling good about yourself and that you take the charge of making the initial moments of your conversation pleasant for them. And the easiest way I know how to make that happen is to smile.
There are mostly two types of interviews — one, where the interviewer has come with a prepared set of questions and is going to ask you just that irrespective of your profile, and the second, where the interview is based on your CV. I’ll start with the second one.
This kind of interview generally begins with a “Can you tell me a bit about yourself?”. At this point, 2 things are a big NO — talking about your GPA in college and talking about your projects in detail. An ideal statement should be about a minute or two long, should give a good idea on what have you been doing till now, and it’s not restricted to academics. You can talk about your hobbies like reading books, playing sports, meditation, etc — basically, anything that contributes to defining you. The interviewer will then take something that you talk about here as a cue for his next question, and then the technical part of the interview begins. The motive of this kind of interview is to really check whether whatever you have written on your CV is true or not:
There would be a lot of questions on what could be done differently or if “X” was used instead of “Y”, what would have happened. At this point, it’s important to know the kind of trade-offs that is usually made during implementation, for e.g. if the interviewer says that using a more complex model would have given better results, then you might say that you actually had less data to work with and that would have lead to overfitting. In one of the interviews, I was given a case-study to work on and it involved designing algorithms for a real-world use case. I’ve noticed that once I’ve been given the green flag to talk about a project, the interviewers really like it when I talk about it in the following flow:
Problem > 1 or 2 previous approaches > Our approach > Result > Intuition
The other kind of interview is really just to test your basic knowledge. Don’t expect those questions to be too hard. But they would definitely scratch every bit of the basics that you should be having, mainly based around Linear Algebra, Probability, Statistics, Optimisation, Machine Learning and/or Deep Learning. The resources mentioned in the Minimal Resources you need for preparation section should suffice, but make sure that you don’t miss out one bit among them. The catch here is the amount of time you take to answer those questions. Since these cover the basics, they expect that you should be answering them almost instantly. So, do your preparation accordingly.
Throughout the process, it’s important to be confident and honest about what you know and what you don’t know. If there’s a question that you’re certain you have no idea about, say it upfront rather than making “Aah”, “Um” sounds. If some concept is really important but you are struggling with answering it, the interviewer would generally (depending on how you did in the initial parts) be happy to give you a hint or guide you towards the right solution. It’s a big plus if you manage to pick their hints and arrive at the correct solution. Try to not get nervous and the best way to avoid that is by, again, smiling.
Now we come to the conclusion of the interview where the interviewer would ask you if you have any questions for them. It’s really easy to think that your interview is done and just say that you have nothing to ask. I know many people who got rejected just because of failing at this last question. As I mentioned before, it’s not only you who is being interviewed. You are also looking for a mutual fit with the company itself. So, it’s quite obvious that if you really want to join a place, you must have many questions regarding the work culture there or what kind of role are they seeing you in. It can be as simple as being curious about the person interviewing you. There’s always something to learn from everything around you and you should make sure that you leave the interviewer with the impression that you’re truly interested in being a part of their team. A final question that I’ve started asking all my interviewers, is for a feedback on what they might want me to improve on. This has helped me tremendously and I still remember every feedback that I’ve gotten which I’ve incorporated into my daily life.
That’s it. Based on my experience, if you’re just honest about yourself, are competent, truly care about the company you’re interviewing for and have the right mindset, you should have ticked all the right boxes and should be getting a congratulatory mail soon 😄
We live in an era full of opportunities and that applies to anything that you love. You just need to strive to become the best at it and you will find a way to monetise it. As Gary Vaynerchuk (just follow him already) says:
This is a great time to be working in AI and if you’re truly passionate about it, you have so much that you can do with AI. You can empower so many people that have always been under-represented. We keep nagging about the problems surrounding us, but there’s been never such a time where common people like us can actually do something about those problems, rather than just complaining. Jeffrey Hammerbacher (Founder, Cloudera) had famously said:
We can do so much with AI than we can ever imagine. There are many extremely challenging problems out there which require incredibly smart people like you to put your head down on and solve. You can make many lives better. Time to let go of what is “cool”, or what would “look good”. THINK and CHOOSE wisely.
Any Data Science interview comprises of questions mostly of a subset of the following four categories: Computer Science, Math, Statistics and Machine Learning.
If you’re not familiar with the math behind Deep Learning, then you should consider going over my last post for resources to understand them. However, if you are comfortable, I’ve found that the chapters 2, 3 and 4 of the Deep Learning Book are enough to prepare/revise for theoretical questions during such interviews. I’ve been preparing summaries for a few chapters which you can refer to where I’ve tried to even explain a few concepts that I found challenging to understand at first, in case you are not willing to go through the entire chapters. And if you’ve already done a course on probability, you should be comfortable answering a few numerical as well. For stats, covering these topics should be enough.
Now, the range of questions here can vary depending on the type of position you are applying for. If it’s a more traditional Machine Learning based interview where they want to check your basic knowledge in ML, you can complete any one of the following courses:- Machine Learning by Andrew Ng — CS 229- Machine Learning course by Caltech Professor Yaser Abu-Mostafa
Important topics are: Supervised Learning (Classification, Regression, SVM, Decision Tree, Random Forests, Logistic Regression, Multi-layer Perceptron, Parameter Estimation, Bayes’ Decision Rule), Unsupervised Learning (K-means Clustering, Gaussian Mixture Models), Dimensionality Reduction (PCA).
Now, if you’re applying for a more advanced position, there’s a high chance that you might be questioned on Deep Learning. In that case, you should be very comfortable with Convolutional Neural Networks (CNNs) and/or (depending upon what you’ve worked on) Recurrent Neural Networks (RNNs) and their variants. And by being comfortable, you must know what is the fundamental idea behind Deep Learning, how CNNs/RNNs actually worked, what kind of architectures have been proposed and what has been the motivation behind those architectural changes. Now, there’s no shortcut for this. Either you understand them or you put enough time to understand them. For CNNs, the recommended resource is Stanford’s CS 231N and CS 224N for RNNs. I found this Neural Network class by Hugo Larochelle to be really enlightening too. Refer this for a quick refresher too. Udacity coming to the aid here too. By now, you should have figured out that Udacity is a really important place for an ML practitioner. There are not a lot of places working on Reinforcement Learning (RL) in India and I too am not experienced in RL as of now. So, that’s one thing to add to this post sometime in the future.
Getting placed off-campus is a long journey of self-realisation. I realise that this has been another long post and I’m again extremely grateful to you for valuing my thoughts. I hope that this post finds a way of being useful to you and that it helped you in some way to prepare for your next Data Science interview better. If it did, I request you to really think about what I talk about in What we should strive to work for.
I’m very thankful to my friends from IIT Guwahati for their helpful feedback, especially Ameya Godbole, Kothapalli Vignesh and Prabal Jain. A majority of what I mention here, like “viewing an interview as a conversation” and “seeking feedback from our interviewers”, arose from multiple discussions with Prabal who has been advising me constantly on how I can improve my interviewing skills.
This story is published in Noteworthy, where thousands come every day to learn about the people & ideas shaping the products we love.
Follow our publication to see more product & design stories featured by the Journal team.
AI Fanatic • Math Lover • Dreamer
The official Journal blog
|
Sophia Arakelyan | 7 | 4 | https://buzzrobot.com/from-ballerina-to-ai-researcher-part-i-46fce67f809b?source=---------1---------------- | From Ballerina to AI Researcher: Part I – buZZrobot | Last year, I published the article “From Ballerina to AI writer” where I described how I embraced the technical part of AI without a technical background. But having love and passion for AI, I educated myself and was able to build a neural net classifier and do projects in Deep RL.
Recently, I’ve become a participant in the OpenAI Scholarship Program (OpenAI is a non-profit that gathers top AI researchers to ensure the safety of AI to benefit humanity). Every week for the next three months I’ll publish blog posts sharing my story of transformation from a person dedicated to 15 years of professional dancing and then writing about tech and AI to actually conducting AI research.
Finding your true calling — the key component of happiness
My primary goal with the series of blog posts “From Ballerina to AI researcher” is to show that it’s never too late to embrace a new field, start over again, and find your true calling. Finding work you love is one of the most important components of happiness — something that you do every day and invest your time in to grow; something that makes you feel fulfilled and gives you energy; something that is a refuge for your soul.
Great things never come easy. We have to be able to fight to make great things happen. But you can’t fight for something you don’t believe in, especially if you don’t feel like it’s really important for you and humanity. Finding that thing is a real challenge. I feel lucky that I found my true passion — AI. To me, the technology itself and the AI community — researchers, scientists, people who dedicate their lives to building the most powerful technology of all time with the mission to benefit humanity and make it safe for us — is a great source of energy.
The structure of the blog post series
Today, I’m giving an overall intro of what I’m going to cover in my “From Ballerina to AI Researcher” series.
I’ll dedicate the sequence of blog posts during the OpenAI Scholars program to several aspects of AI technology. I’ll cover those areas that concern me a lot, like AI and automation, bias in ML, dual use of AI, etc.
Also, the structure of my posts will include some insights on what I’m working on right now (the final technical project will be available by the end of August and will be open-sourced).
I feel very lucky to have Alec Radford, an experienced researcher, as my mentor who guides me in the NLP and NLU research area.
First week of my scholarship
I’ve dedicated my first week within the program to learning about the Transformer architecture, which performs much better on sequential data than RNNs and LSTMs.
The novelty of the architecture is its multi-head self-attention mechanism. According to the original paper, experiments with the transformer on two machine translation tasks showed the model to be superior in quality while being more parallelizable and requiring significantly less time to train.
More concretely, when RNNs (or CNNs) take a sequence as input, they go through it word by word, which is a huge obstacle to parallelizing the process (and makes models take more time to train). Moreover, if sequences are too long, the model tends to forget the content of distant positions in the sequence or to mix it up with the content of following positions — this is the fundamental problem in dealing with sequential data. The Transformer architecture reduces this problem thanks to its multi-head self-attention mechanism.
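To make the idea of self-attention more concrete, here is a minimal NumPy sketch of scaled dot-product attention, the building block that the multi-head mechanism runs several times in parallel. The toy shapes and variable names are my own for illustration, not taken from the paper’s code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d_k) arrays of queries, keys and values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how much each position attends to every other
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of the values for each position

# Every position looks at the whole sequence at once — no word-by-word recurrence.
x = np.random.randn(5, 16)   # a toy "sentence" of 5 positions with 16-dimensional embeddings
print(scaled_dot_product_attention(x, x, x).shape)  # (5, 16)
```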
I dug into RNN and LSTM models to catch up on the background. To that end, I found Andrew Ng’s course on Deep Learning, along with the papers, extremely useful. To develop insight into the Transformer, I went through the following resources: the video by Łukasz Kaiser from Google Brain, one of the model’s creators; a blog post that elaborates the model very well; and the tensor2tensor code as well as the PyTorch implementation from this paper, which I ran to “feel” the difference between the TensorFlow and PyTorch frameworks.
Overall, the goal within the program is to develop deep comprehension of the NLU research area: challenges, current state of the art; and to formulate and test hypotheses that tackle the most important problems of the field.
I’ll share more on what I’m working on in my future articles. Meanwhile, if you have questions/feedback, please leave a comment.
If you want to learn more about me, here are my Facebook and Twitter accounts.
I’d appreciate your feedback on my posts — for example, which topics are most interesting to you and deserve further coverage.
Former ballerina turned AI writer. Fan of sci-fi, astrophysics. Consciousness is the key. Founder of buZZrobot.com
The publication aims to cover practical aspects of AI technology, use cases along with interviews with notable people in the AI field.
|
Dr. GP Pulipaka | 2 | 6 | https://medium.com/@gp_pulipaka/3-ways-to-apply-latent-semantic-analysis-on-large-corpus-text-on-macos-terminal-jupyterlab-colab-7b4dc3e1622?source=---------9---------------- | 3 Ways to Apply Latent Semantic Analysis on Large-Corpus Text on macOS Terminal, JupyterLab, and... | Latent semantic analysis works on large-scale datasets to generate representations that reveal insights through natural language processing. There are different approaches to performing latent semantic analysis at multiple levels, such as the document level, phrase level, and sentence level. Semantic analysis can primarily be summarized as lexical semantics plus the study of combining individual words into paragraphs or sentences. Lexical semantics classifies and decomposes lexical items, and lexical semantic structures are applied in different contexts to identify the differences and similarities between words. A generic term in a paragraph or sentence is a hypernym, and hyponymy describes the relationship between a hypernym and instances of its hyponyms. Homonyms share similar spelling, form, and syntax but have different, unrelated meanings. The word “book” is an example of a homonym: it can refer to something someone reads or to the act of making a reservation — same spelling, form, and syntax, but different definitions. Polysemy is another phenomenon in which a single word is associated with multiple related senses and distinct meanings; the word polysemy comes from Greek and means “many signs.” Python provides the NLTK library to perform tokenization, chopping larger chunks of text into phrases or meaningful strings; processing words through tokenization produces tokens. Word lemmatization converts words from their current inflected form into the base form.
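As a small illustration of the tokenization and lemmatization steps just described, here is a minimal NLTK sketch. The example sentence is made up; punkt and wordnet are the downloads mentioned later in this article, and newer NLTK versions may ask for additional resources.

```python
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer

nltk.download('punkt')     # tokenizer models
nltk.download('wordnet')   # lexical database used by the lemmatizer

sentence = "She booked two rooms and was reading books about booking systems."
tokens = word_tokenize(sentence)                                    # chop the text into word tokens
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(token.lower()) for token in tokens]  # inflected form -> base form

print(tokens)   # ['She', 'booked', 'two', 'rooms', ...]
print(lemmas)   # ['she', 'booked', 'two', 'room', ...]
```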
Latent semantic analysis
Applying latent semantic analysis to large datasets of text and documents captures contextual meaning through mathematical and statistical computation over a large corpus. Latent semantic analysis has often matched or exceeded human scores on subject-matter tests administered to humans. Its accuracy is high because it reads through machine-readable documents and texts at web scale. Latent semantic analysis is a technique that applies singular value decomposition and principal component analysis (PCA). The document collection can be represented as a Z x Y matrix A whose rows represent the documents in the collection; for a typical large text corpus, A can have many hundreds of thousands of rows and columns. Applying singular value decomposition is one of a set of operations dubbed matrix decomposition. Natural language processing in Python with the NLTK library applies a low-rank approximation to this term-document matrix. The low-rank approximation then aids in indexing and retrieving documents — a technique known as latent semantic indexing — by clustering the words in each document.
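A minimal scikit-learn sketch of that pipeline — build a term-document matrix, then take a low-rank approximation with truncated SVD — might look like the following. The toy documents and the choice of two components are my own for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "Latent semantic analysis discovers hidden topics in text",
    "Singular value decomposition factorizes the term-document matrix",
    "The cat sat on the mat near another cat",
]

vectorizer = TfidfVectorizer(stop_words="english")
A = vectorizer.fit_transform(docs)       # documents x terms matrix, like the Z x Y matrix A above

svd = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = svd.fit_transform(A)       # low-rank representation used for latent semantic indexing
print(doc_vectors)                       # one 2-dimensional vector per document
```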
Brief overview of linear algebra
The term-document matrix A of size Z x Y contains real-valued, non-negative entries. The rank of a matrix is the number of linearly independent columns (or rows) in it, so rank(A) ≤ min{Z, Y}. A square c x c matrix whose off-diagonal entries are all zero is a diagonal matrix; if, in addition, all c diagonal entries are one, it is the identity matrix of dimension c, written Ic. For a square Z x Z matrix A, a vector k that is not all zeros is an eigenvector with eigenvalue λ if A k = λ k. Matrix decomposition factors such a square matrix into a product of matrices built from its eigenvectors. This makes it possible to reduce the dimensionality of the word representations from many dimensions down to two, so they can be viewed on a plot. Dimensionality reduction techniques such as principal component analysis and singular value decomposition are therefore critically relevant to natural language processing. The Zipfian nature of word frequencies in a document makes it difficult to determine the similarity of words in a static setting. Hence, eigendecomposition appears as a by-product of singular value decomposition, since the term-document input is highly asymmetric. Latent semantic analysis is a particular technique in this semantic space for parsing through documents and identifying words with polysemy using the NLTK library. Resources such as punkt and wordnet have to be downloaded from NLTK.
Deep Learning at scale with Google Colab notebooks
Training machine learning or deep learning models on CPUs can take hours and be pretty expensive in terms of time and compute resources. Google built the Colab Notebooks environment for research and development purposes. It runs entirely in the cloud without requiring any additional hardware or software setup on each machine. It’s essentially the equivalent of a Jupyter notebook, and it helps data scientists share notebooks by storing them on Google Drive, just like any other Google Sheet or document, in a collaborative environment. There are no additional costs associated with enabling a GPU for acceleration at runtime. There are some challenges in uploading data into Colab, unlike a Jupyter notebook, which can access data directly from the machine’s local directory. In Colab, there are multiple options: files can be uploaded from the local file system, or a drive can be mounted to load the data through the Drive FUSE wrapper.
Once this step is complete, it shows the following log without errors:
The next step would be generating the authentication tokens to authenticate the Google credentials for the drive and Colab
If it shows successful retrieval of access token, then Colab is all set.
At this stage, the drive is not mounted yet, it will show false when accessing the contents of the text file.
Once the drive is mounted, Colab has access to the datasets from Google drive.
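As a simpler alternative to the FUSE-wrapper and auth-token flow described above, recent Colab runtimes ship a built-in helper that mounts Google Drive directly. The sketch below assumes you are running inside a Colab cell, and the file name is hypothetical.

```python
# Run inside a Colab notebook cell
from google.colab import drive

drive.mount('/content/drive')   # prompts for authorization, then mounts Google Drive

# After mounting, files on the drive can be read like any local file
with open('/content/drive/My Drive/corpus.txt') as f:   # hypothetical file name
    text = f.read()
print(len(text))
```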
Once the files are accessible, Python code can be executed just as it would be in a Jupyter environment, and the Colab notebook displays the results much as a Jupyter notebook does.
PyCharm IDE
The program can be compiled and run in the PyCharm IDE environment, or it can be executed from the OSX Terminal.
Results from OSX Terminal
Jupyter Notebook on standalone machine
Jupyter Notebook gives a similar output running the latent semantic analysis on the local machine:
Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | Big data | IoT | Startups | SAP | MachineLearning | DeepLearning | DataScience
|
Scott Santens | 7.3K | 14 | https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49?source=tag_archive---------0---------------- | Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines | (An alternate version of this article was originally published in the Boston Globe)
On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.
Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, it was quiet in that you may have heard it, but its full meaning may not have been comprehended. However, it’s vital we understand this new language, and what it’s increasingly telling us, for the ramifications are set to alter everything we take for granted about the way our globalized economy functions, and the ways in which we as humans exist within it.
The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer’s use of it to seemingly out of nowhere defeat three-time European Go champion Fan Hui, not once but five times in a row without defeat. Many who read this news considered it impressive, but in no way comparable to a match against Lee Se-dol, whom many consider to be one of the world’s best living Go players, if not the best. Imagining such a grand duel of man versus machine, China’s top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to possibly lose one at the most.
What actually ended up happening when they faced off? Lee went on to lose all but one of their match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to machine, just as Jeopardy did before it to Watson, and chess before that to Deep Blue.
So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game.
AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect to see a lot more milestones being crossed long before we would otherwise expect. These exponential advances, most notably in forms of artificial intelligence limited to specific tasks, we are entirely unprepared for as long as we continue to insist upon employment as our primary source of income.
This may all sound like exaggeration, so let’s take a few decade steps back, and look at what computer technology has been actively doing to human employment so far:
Let the above chart sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It’s already here. Computer technology is already eating jobs and has been since 1990.
All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties, is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, the stuff that is routine stagnated back in 1990. This happened because routine labor is easiest for technology to shoulder. Rules can be written for work that doesn’t change, and that work can be better handled by machines.
Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them.
If we can now imagine our economy as a plane with four engines, where it can still fly on only two of them as long as they both keep roaring, we can avoid concerning ourselves with crashing. But what happens when our two remaining engines also fail? That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn.
I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both of them so my undergraduate focus ended up being in the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me.
As a quick primer in the way our brains function, they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are only connected to one other, and some are connected to many. Electrical signals then pass through these connections, at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result amazingly is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work.
One of these applications is the creation of deep neural networks - kind of like pared-down virtual brains. They provide an avenue to machine learning that’s made incredible leaps that were previously thought to be much further down the road, if even possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but the vastly growing expanse of our collective data, aka big data.
Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015 every minute we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why?
Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program detecting chairs that aren’t, and not detecting chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had that whole chair thing all figured out, so we pointed at a table and said “chair”, which is when our parents told us that was “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.
The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do without giving them explicit instructions. Instead of describing “chairness” to a computer, we instead just plug it into the Internet and feed it millions of pictures of chairs. It can then have a general idea of “chairness.” Next we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference though is that unlike us, it can then sort through millions of images within a matter of seconds.
This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master level status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI.
However, despite all these milestones, when asked to estimate when a computer would defeat a prominent Go player, the answer even just months prior to the announcement by Google of AlphaGo’s victory, was by experts essentially, “Maybe in another ten years.” A decade was considered a fair guess because Go is a game so complex I’ll just let Ken Jennings of Jeopardy fame, another former champion human defeated by AI, describe it:
Such confounding complexity makes impossible any brute-force approach to scan every possible move to determine the next best move. But deep neural networks get around that barrier in the same way our own minds do, by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn’t even close to ten years. The correct answer ended up being, “Any time now.”
Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it.
We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to successfully perform such tasks with no need or less need for humans, and at lower costs than humans.
Amelia is just one AI out there currently being beta-tested in companies right now. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through the paces, she successfully handled one of every ten calls in the first week, and by the end of the second month, she could resolve six of ten calls. Because of this, it’s been estimated that she can put 250 million people out of a job, worldwide.
Viv is an AI coming soon from the creators of Siri who’ll be our own personal assistant. She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she’ll know we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — that industry the entire Internet is built upon — stands to be hugely disrupted.
A world with Amelia and Viv — and the countless other AI counterparts coming online soon — in combination with robots like Boston Dynamics’ next-generation Atlas, is a world where machines can do all four types of jobs, and that means serious societal reconsiderations. If a machine can do a job instead of a human, should any human be forced at the threat of destitution to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast.
Fortunately, people are beginning to ask these questions, and there’s an answer that’s building up momentum. The idea is to put machines to work for us, but empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship, and the sizes of bureaucracies necessary to boost incomes. It’s for these reasons, it has cross-partisan support, and is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada.
The future is a place of accelerating changes. It seems unwise to continue looking at the future as if it were the past, where just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating the creation by 2020 of 2 million new jobs alongside the elimination of 7 million. That’s a net loss of 5 million jobs, not a net gain. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile self-driving vehicles, again thanks to machine learning, have the capability of drastically impacting all economies — especially the US economy, as I wrote last year about automating truck driving — by eliminating millions of jobs within a short span of time.
And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War.
All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at the end of 2015 at Singularity University, prominent data scientist Jeremy Howard asked “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, ”If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”
AI pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies... My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”
Moshe Vardi expressed the same sentiment after speaking at the 2016 annual meeting of the American Association for the Advancement of Science about the emergence of intelligent machines, “we need to rethink the very basic structure of our economic system... we may have to consider instituting a basic income guarantee.”
Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng, during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”
When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers?
No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t?
What’s the big lesson to learn, in a century when machines can learn?
I offer it’s that jobs are for machines, and life is for people.
This article was written on a crowdfunded monthly basic income. If you found value in this article, you can support it along with all my advocacy for basic income with a monthly patron pledge of $1+.
Special thanks to Arjun Banker, Steven Grimm, Larry Cohen, Topher Hunt, Aaron Marcus-Kubitza, Andrew Stern, Keith Davis, Albert Wenger, Richard Just, Chris Smothers, Mark Witham, David Ihnen, Danielle Texeira, Katie Doemland, Paul Wicks, Jan Smole, Joe Esposito, Jack Wagner, Joe Ballou, Stuart Matthews, Natalie Foster, Chris McCoy, Michael Honey, Gary Aranovich, Kai Wong, John David Hodge, Louise Whitmore, Dan O’Sullivan, Harish Venkatesan, Michiel Dral, Gerald Huff, Susanne Berg, Cameron Ottens, Kian Alavi, Gray Scott, Kirk Israel, Robert Solovay, Jeff Schulman, Andrew Henderson, Robert F. Greene, Martin Jordo, Victor Lau, Shane Gordon, Paolo Narciso, Johan Grahn, Tony DeStefano, Erhan Altay, Bryan Herdliska, Stephane Boisvert, Dave Shelton, Rise & Shine PAC, Luke Sampson, Lee Irving, Kris Roadruck, Amy Shaffer, Thomas Welsh, Olli Niinimäki, Casey Young, Elizabeth Balcar, Masud Shah, Allen Bauer, all my other funders for their support, and my amazing partner, Katie Smith.
Scott Santens writes about basic income on his blog. You can also follow him here on Medium, on Twitter, on Facebook, or on Reddit where he is a moderator for the /r/BasicIncome community of over 30,000 subscribers.
If you feel others would appreciate this article, please click the green heart.
New Orleans writer focused on the potential for human civilization to gets its act together in the 21st century. Moderator of /r/BasicIncome on Reddit.
Articles discussing the concept of the universal basic income
|
Adam Geitgey | 35K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471?source=tag_archive---------1---------------- | Machine Learning is Fun! – Adam Geitgey – Medium | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 日本語, Português, Português (alternate), Türkçe, Français, 한국어 , العَرَبِيَّة, Español (México), Español (España), Polski, Italiano, 普通话, Русский, 한국어 , Tiếng Việt or فارسی.
Bigger update: The content of this article is now available as a full-length video course that walks you through every step of the code. You can take the course for free (and access everything else on Lynda.com free for 30 days) if you sign up with this link.
Have you heard people talking about machine learning but only have a fuzzy idea of what that means? Are you tired of nodding your way through conversations with co-workers? Let’s change that!
This guide is for anyone who is curious about machine learning but has no idea where to start. I imagine there are a lot of people who tried reading the wikipedia article, got frustrated and gave up wishing someone would just give them a high-level explanation. That’s what this is.
The goal is to be accessible to anyone — which means there are a lot of generalizations. But who cares? If this gets anyone more interested in ML, then mission accomplished.
Machine learning is the idea that there are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.
For example, one kind of algorithm is a classification algorithm. It can put data into different groups. The same classification algorithm used to recognize handwritten numbers could also be used to classify emails into spam and not-spam without changing a line of code. It’s the same algorithm but it’s fed different training data so it comes up with different classification logic.
“Machine learning” is an umbrella term covering lots of these kinds of generic algorithms.
You can think of machine learning algorithms as falling into one of two main categories — supervised learning and unsupervised learning. The difference is simple, but really important.
Let’s say you are a real estate agent. Your business is growing, so you hire a bunch of new trainee agents to help you out. But there’s a problem — you can glance at a house and have a pretty good idea of what a house is worth, but your trainees don’t have your experience so they don’t know how to price their houses.
To help your trainees (and maybe free yourself up for a vacation), you decide to write a little app that can estimate the value of a house in your area based on its size, neighborhood, etc., and what similar houses have sold for.
So you write down every time someone sells a house in your city for 3 months. For each house, you write down a bunch of details — number of bedrooms, size in square feet, neighborhood, etc. But most importantly, you write down the final sale price:
Using that training data, we want to create a program that can estimate how much any other house in your area is worth:
This is called supervised learning. You knew how much each house sold for, so in other words, you knew the answer to the problem and could work backwards from there to figure out the logic.
To build your app, you feed your training data about each house into your machine learning algorithm. The algorithm is trying to figure out what kind of math needs to be done to make the numbers work out.
This is kind of like having the answer key to a math test with all the arithmetic symbols erased:
From this, can you figure out what kind of math problems were on the test? You know you are supposed to “do something” with the numbers on the left to get each answer on the right.
In supervised learning, you are letting the computer work out that relationship for you. And once you know what math was required to solve this specific set of problems, you could answer any other problem of the same type!
Let’s go back to our original example with the real estate agent. What if you didn’t know the sale price for each house? Even if all you know is the size, location, etc of each house, it turns out you can still do some really cool stuff. This is called unsupervised learning.
This is kind of like someone giving you a list of numbers on a sheet of paper and saying “I don’t really know what these numbers mean but maybe you can figure out if there is a pattern or grouping or something — good luck!”
So what could you do with this data? For starters, you could have an algorithm that automatically identified different market segments in your data. Maybe you’d find out that home buyers in the neighborhood near the local college really like small houses with lots of bedrooms, but home buyers in the suburbs prefer 3-bedroom houses with lots of square footage. Knowing about these different kinds of customers could help direct your marketing efforts.
Another cool thing you could do is automatically identify any outlier houses that were way different than everything else. Maybe those outlier houses are giant mansions and you can focus your best sales people on those areas because they have bigger commissions.
Supervised learning is what we’ll focus on for the rest of this post, but that’s not because unsupervised learning is any less useful or interesting. In fact, unsupervised learning is becoming increasingly important as the algorithms get better because it can be used without having to label the data with the correct answer.
Side note: There are lots of other types of machine learning algorithms. But this is a pretty good place to start.
As a human, your brain can approach most any situation and learn how to deal with that situation without any explicit instructions. If you sell houses for a long time, you will instinctively have a “feel” for the right price for a house, the best way to market that house, the kind of client who would be interested, etc. The goal of Strong AI research is to be able to replicate this ability with computers.
But current machine learning algorithms aren’t that good yet — they only work when focused on a very specific, limited problem. Maybe a better definition for “learning” in this case is “figuring out an equation to solve a specific problem based on some example data”.
Unfortunately “Machine Figuring out an equation to solve a specific problem based on some example data” isn’t really a great name. So we ended up with “Machine Learning” instead.
Of course if you are reading this 50 years in the future and we’ve figured out the algorithm for Strong AI, then this whole post will all seem a little quaint. Maybe stop reading and go tell your robot servant to go make you a sandwich, future human.
So, how would you write the program to estimate the value of a house like in our example above? Think about it for a second before you read further.
If you didn’t know anything about machine learning, you’d probably try to write out some basic rules for estimating the price of a house like this:
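The code block that originally followed didn’t survive extraction; a hand-written rule set along those lines might look something like this sketch, where the dollar amounts and neighborhood names are made up for illustration:

```python
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
    # start with an average price per square foot for the area
    price_per_sqft = 200
    if neighborhood == "hipsterton":
        price_per_sqft = 400        # some neighborhoods cost more
    elif neighborhood == "skid row":
        price_per_sqft = 100        # and some cost less

    price = price_per_sqft * sqft

    # adjust for the number of bedrooms
    if num_of_bedrooms == 0:
        price -= 20000              # studio apartments are cheap
    else:
        price += num_of_bedrooms * 1000

    return price
```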
If you fiddle with this for hours and hours, you might end up with something that sort of works. But your program will never be perfect and it will be hard to maintain as prices change.
Wouldn’t it be better if the computer could just figure out how to implement this function for you? Who cares what exactly the function does as long as it returns the correct number:
One way to think about this problem is that the price is a delicious stew and the ingredients are the number of bedrooms, the square footage and the neighborhood. If you could just figure out how much each ingredient impacts the final price, maybe there’s an exact ratio of ingredients to stir in to make the final price.
That would reduce your original function (with all those crazy if’s and else’s) down to something really simple like this:
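The simple version (whose code block is missing here) is just a weighted sum of the inputs — a sketch along these lines, using the weights quoted in the next paragraph:

```python
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
    price = 0
    # a little pinch of this
    price += num_of_bedrooms * 0.841231951398213
    # and a big pinch of that
    price += sqft * 1231.1231231
    # maybe a handful of this
    price += neighborhood * 2.3242341421
    # and a little extra salt for good measure
    price += 201.23432095
    return price
```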
Notice the magic numbers in bold — .841231951398213, 1231.1231231, 2.3242341421, and 201.23432095. These are our weights. If we could just figure out the perfect weights to use that work for every house, our function could predict house prices!
A dumb way to figure out the best weights would be something like this:
Start with each weight set to 1.0:
Run every house you know about through your function and see how far off the function is at guessing the correct price for each house:
For example, if the first house really sold for $250,000, but your function guessed it sold for $178,000, you are off by $72,000 for that single house.
Now add up the squared amount you are off for each house you have in your data set. Let’s say that you had 500 home sales in your data set and the square of how much your function was off for each house was a grand total of $86,123,373. That’s how “wrong” your function currently is.
Now, take that sum total and divide it by 500 to get an average of how far off you are for each house. Call this average error amount the cost of your function.
If you could get this cost to be zero by playing with the weights, your function would be perfect. It would mean that in every case, your function perfectly guessed the price of the house based on the input data. So that’s our goal — get this cost to be as low as possible by trying different weights.
Repeat Step 2 over and over with every single possible combination of weights. Whichever combination of weights makes the cost closest to zero is what you use. When you find the weights that work, you’ve solved the problem!
That’s pretty simple, right? Well think about what you just did. You took some data, you fed it through three generic, really simple steps, and you ended up with a function that can guess the price of any house in your area. Watch out, Zillow!
But here’s a few more facts that will blow your mind:
Pretty crazy, right?
Ok, of course you can’t just try every combination of all possible weights to find the combo that works the best. That would literally take forever since you’d never run out of numbers to try.
To avoid that, mathematicians have figured out lots of clever ways to quickly find good values for those weights without having to try very many. Here’s one way:
First, write a simple equation that represents Step #2 above:
Now let’s re-write exactly the same equation, but using a bunch of machine learning math jargon (that you can ignore for now):
This equation represents how wrong our price estimating function is for the weights we currently have set.
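The two equation images didn’t survive extraction; the “average of the squared errors” cost described in Steps 2–4 is conventionally written like this, using the usual machine-learning notation of θ for the weights, h_θ for the estimating function, m for the number of houses, and y for the actual sale prices (the extra factor of ½ is a common convention that simplifies the derivative):

$$ J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta\left(x^{(i)}\right) - y^{(i)} \right)^2 $$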
If we graph this cost equation for all possible values of our weights for number_of_bedrooms and sqft, we’d get a graph that might look something like this:
In this graph, the lowest point in blue is where our cost is the lowest — thus our function is the least wrong. The highest points are where we are most wrong. So if we can find the weights that get us to the lowest point on this graph, we’ll have our answer!
So we just need to adjust our weights so we are “walking down hill” on this graph towards the lowest point. If we keep making small adjustments to our weights that are always moving towards the lowest point, we’ll eventually get there without having to try too many different weights.
If you remember anything from Calculus, you might remember that if you take the derivative of a function, it tells you the slope of the function’s tangent at any point. In other words, it tells us which way is downhill for any given point on our graph. We can use that knowledge to walk downhill.
So if we calculate a partial derivative of our cost function with respect to each of our weights, then we can subtract that value from each weight. That will walk us one step closer to the bottom of the hill. Keep doing that and eventually we’ll reach the bottom of the hill and have the best possible values for our weights. (If that didn’t make sense, don’t worry and keep reading).
That’s a high-level summary of one way to find the best weights for your function, called batch gradient descent. Don’t be afraid to dig deeper if you are interested in learning the details.
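If you do want to see the mechanics, here is a minimal NumPy sketch of batch gradient descent for this kind of linear model. The toy data, learning rate, and step count are arbitrary choices of mine.

```python
import numpy as np

def batch_gradient_descent(X, y, learning_rate=0.1, steps=5000):
    # X: (num_houses x num_features) matrix, y: actual sale prices
    m, n = X.shape
    weights = np.zeros(n)
    for _ in range(steps):
        errors = X @ weights - y                  # how far off each guess is
        gradient = (X.T @ errors) / m             # slope of the cost with respect to each weight
        weights -= learning_rate * gradient       # take one small step downhill
    return weights

# toy data: a column of 1s (so one weight acts as a base price) and square footage in thousands
X = np.array([[1.0, 1.5], [1.0, 2.0], [1.0, 3.0]])
y = np.array([150.0, 200.0, 300.0])               # sale prices in thousands of dollars
print(batch_gradient_descent(X, y))               # approaches [0, 100]: price ≈ 100 * sqft
```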
When you use a machine learning library to solve a real problem, all of this will be done for you. But it’s still useful to have a good idea of what is happening.
The three-step algorithm I described is called multivariate linear regression. You are estimating the equation for a line that fits through all of your house data points. Then you are using that equation to guess the sales price of houses you’ve never seen before based on where that house would appear on your line. It’s a really powerful idea and you can solve “real” problems with it.
But while the approach I showed you might work in simple cases, it won’t work in all cases. One reason is because house prices aren’t always simple enough to follow a continuous line.
But luckily there are lots of ways to handle that. There are plenty of other machine learning algorithms that can handle non-linear data (like neural networks or SVMs with kernels). There are also ways to use linear regression more cleverly that allow for more complicated lines to be fit. In all cases, the same basic idea of needing to find the best weights still applies.
Also, I ignored the idea of overfitting. It’s easy to come up with a set of weights that always works perfectly for predicting the prices of the houses in your original data set but never actually works for any new houses that weren’t in your original data set. But there are ways to deal with this (like regularization and using a cross-validation data set). Learning how to deal with this issue is a key part of learning how to apply machine learning successfully.
In other words, while the basic concept is pretty simple, it takes some skill and experience to apply machine learning and get useful results. But it’s a skill that any developer can learn!
Once you start seeing how easily machine learning techniques can be applied to problems that seem really hard (like handwriting recognition), you start to get the feeling that you could use machine learning to solve any problem and get an answer as long as you have enough data. Just feed in the data and watch the computer magically figure out the equation that fits the data!
But it’s important to remember that machine learning only works if the problem is actually solvable with the data that you have.
For example, if you build a model that predicts home prices based on the type of potted plants in each house, it’s never going to work. There just isn’t any kind of relationship between the potted plants in each house and the home’s sale price. So no matter how hard it tries, the computer can never deduce a relationship between the two.
So remember, if a human expert couldn’t use the data to solve the problem manually, a computer probably won’t be able to either. Instead, focus on problems where a human could solve the problem, but where it would be great if a computer could solve it much more quickly.
In my mind, the biggest problem with machine learning right now is that it mostly lives in the world of academia and commercial research groups. There isn’t a lot of easy to understand material out there for people who would like to get a broad understanding without actually becoming experts. But it’s getting a little better every day.
If you want to try out what you’ve learned in this article, I made a course that walks you through every step of this article, including writing all the code. Give it a try!
If you want to go deeper, Andrew Ng’s free Machine Learning class on Coursera is pretty amazing as a next step. I highly recommend it. It should be accessible to anyone who has a Comp. Sci. degree and who remembers a very minimal amount of math.
Also, you can play around with tons of machine learning algorithms by downloading and installing SciKit-Learn. It’s a python framework that has “black box” versions of all the standard algorithms.
If you liked this article, please consider signing up for my Machine Learning is Fun! Newsletter:
Also, please check out the full-length course version of this article. It covers everything in this article in more detail, including writing the actual code in Python. You can get a free 30-day trial to watch the course if you sign up with this link.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 2!
Interested in computers and machine learning. Likes to write about it.
|
Adam Geitgey | 14.2K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721?source=tag_archive---------2---------------- | Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano.
Are you tired of reading endless news stories about deep learning and not really knowing what that means? Let’s change that!
This time, we are going to learn how to write programs that recognize objects in images using deep learning. In other words, we’re going to explain the black magic that allows Google Photos to search your photos based on what is in the picture:
Just like Part 1 and Part 2, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is to be accessible to anyone — which means there are a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished!
(If you haven’t already read part 1 and part 2, read them now!)
You might have seen this famous xkcd comic before.
The goof is based on the idea that any 3-year-old child can recognize a photo of a bird, but figuring out how to make a computer recognize objects has puzzled the very best computer scientists for over 50 years.
In the last few years, we’ve finally found a good approach to object recognition using deep convolutional neural networks. That sounds like a bunch of made-up words from a William Gibson Sci-Fi novel, but the ideas are totally understandable if you break them down one by one.
So let’s do it — let’s write a program that can recognize birds!
Before we learn how to recognize pictures of birds, let’s learn how to recognize something much simpler — the handwritten number “8”.
In Part 2, we learned about how neural networks can solve complex problems by chaining together lots of simple neurons. We created a small neural network to estimate the price of a house based on how many bedrooms it had, how big it was, and which neighborhood it was in:
We also know that the idea of machine learning is that the same generic algorithms can be reused with different data to solve different problems. So let’s modify this same neural network to recognize handwritten text. But to make the job really simple, we’ll only try to recognize one letter — the numeral “8”.
Machine learning only works when you have data — preferably a lot of data. So we need lots and lots of handwritten “8”s to get started. Luckily, researchers created the MNIST data set of handwritten numbers for this very purpose. MNIST provides 60,000 images of handwritten digits, each as an 18x18 image. Here are some “8”s from the data set:
The neural network we made in Part 2 only took in three numbers as input (“3” bedrooms, “2000” sq. feet, etc.). But now we want to process images with our neural network. How in the world do we feed images into a neural network instead of just numbers?
The answer is incredibly simple. A neural network takes numbers as input. To a computer, an image is really just a grid of numbers that represent how dark each pixel is:
To feed an image into our neural network, we simply treat the 18x18 pixel image as an array of 324 numbers:
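In code, that flattening step is a one-liner. Here is a tiny NumPy sketch with a made-up image in place of a real MNIST digit:

```python
import numpy as np

image = np.random.randint(0, 256, size=(18, 18))   # a made-up 18x18 grayscale image
inputs = image.flatten() / 255.0                   # one number per pixel, scaled to [0, 1]
print(inputs.shape)                                # (324,) — 324 input values for the network
```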
To handle 324 inputs, we’ll just enlarge our neural network to have 324 input nodes:
Notice that our neural network also has two outputs now (instead of just one). The first output will predict the likelihood that the image is an “8” and the second output will predict the likelihood it isn’t an “8”. By having a separate output for each type of object we want to recognize, we can use a neural network to classify objects into groups.
Our neural network is a lot bigger than last time (324 inputs instead of 3!). But any modern computer can handle a neural network with a few hundred nodes without blinking. This would even work fine on your cell phone.
All that’s left is to train the neural network with images of “8”s and not-“8"s so it learns to tell them apart. When we feed in an “8”, we’ll tell it the probability the image is an “8” is 100% and the probability it’s not an “8” is 0%. Vice versa for the counter-example images.
Here’s some of our training data:
We can train this kind of neural network in a few minutes on a modern laptop. When it’s done, we’ll have a neural network that can recognize pictures of “8”s with a pretty high accuracy. Welcome to the world of (late 1980’s-era) image recognition!
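If you’d like to try this yourself, here is a minimal scikit-learn sketch of the same idea. Note the publicly available MNIST copy is 28x28 pixels rather than the 18x18 crops described above, and the hidden-layer size and subsample are arbitrary choices of mine.

```python
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load MNIST and turn it into a binary "8" vs "not 8" problem
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
X, y = X[:10000] / 255.0, y[:10000]          # subsample and scale pixels to [0, 1]
is_eight = (y == '8').astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, is_eight, test_size=0.2, random_state=0)

# One hidden layer of plain fully-connected neurons, fed the raw pixel values
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=30, random_state=0)
clf.fit(X_train, y_train)
print("accuracy on held-out images:", clf.score(X_test, y_test))
```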
It’s really neat that simply feeding pixels into a neural network actually worked to build image recognition! Machine learning is magic! ...right?
Well, of course it’s not that simple.
First, the good news is that our “8” recognizer really does work well on simple images where the letter is right in the middle of the image:
But now the really bad news:
Our “8” recognizer totally fails to work when the letter isn’t perfectly centered in the image. Just the slightest position change ruins everything:
This is because our network only learned the pattern of a perfectly-centered “8”. It has absolutely no idea what an off-center “8” is. It knows exactly one pattern and one pattern only.
That’s not very useful in the real world. Real world problems are never that clean and simple. So we need to figure out how to make our neural network work in cases where the “8” isn’t perfectly centered.
We already created a really good program for finding an “8” centered in an image. What if we just scan all around the image for possible “8”s in smaller sections, one section at a time, until we find one?
This approach is called a sliding window. It’s the brute force solution. It works well in some limited cases, but it’s really inefficient. You have to check the same image over and over looking for objects of different sizes. We can do better than this!
When we trained our network, we only showed it “8”s that were perfectly centered. What if we train it with more data, including “8”s in all different positions and sizes all around the image?
We don’t even need to collect new training data. We can just write a script to generate new images with the “8”s in all kinds of different positions in the image:
Using this technique, we can easily create an endless supply of training data.
More data makes the problem harder for our neural network to solve, but we can compensate for that by making our network bigger and thus able to learn more complicated patterns.
To make the network bigger, we just stack up layer upon layer of nodes:
We call this a “deep neural network” because it has more layers than a traditional neural network.
This idea has been around since the late 1960s. But until recently, training this large of a neural network was just too slow to be useful. But once we figured out how to use 3d graphics cards (which were designed to do matrix multiplication really fast) instead of normal computer processors, working with large neural networks suddenly became practical. In fact, the exact same NVIDIA GeForce GTX 1080 video card that you use to play Overwatch can be used to train neural networks incredibly quickly.
But even though we can make our neural network really big and train it quickly with a 3d graphics card, that still isn’t going to get us all the way to a solution. We need to be smarter about how we process images into our neural network.
Think about it. It doesn’t make sense to train a network to recognize an “8” at the top of a picture separately from training it to recognize an “8” at the bottom of a picture as if those were two totally different objects.
There should be some way to make the neural network smart enough to know that an “8” anywhere in the picture is the same thing without all that extra training. Luckily... there is!
As a human, you intuitively know that pictures have a hierarchy or conceptual structure. Consider this picture:
As a human, you instantly recognize the hierarchy in this picture:
Most importantly, we recognize the idea of a child no matter what surface the child is on. We don’t have to re-learn the idea of child for every possible surface it could appear on.
But right now, our neural network can’t do this. It thinks that an “8” in a different part of the image is an entirely different thing. It doesn’t understand that moving an object around in the picture doesn’t make it something different. This means it has to re-learn the identity of each object in every possible position. That sucks.
We need to give our neural network understanding of translation invariance — an “8” is an “8” no matter where in the picture it shows up.
We’ll do this using a process called Convolution. The idea of convolution is inspired partly by computer science and partly by biology (i.e. mad scientists literally poking cat brains with weird probes to figure out how cats process images).
Instead of feeding entire images into our neural network as one grid of numbers, we’re going to do something a lot smarter that takes advantage of the idea that an object is the same no matter where it appears in a picture.
Here’s how it’s going to work, step by step —
Similar to our sliding window search above, let’s pass a sliding window over the entire original image and save each result as a separate, tiny picture tile:
By doing this, we turned our original image into 77 equally-sized tiny image tiles.
Earlier, we fed a single image into a neural network to see if it was an “8”. We’ll do the exact same thing here, but we’ll do it for each individual image tile:
However, there’s one big twist: We’ll keep the same neural network weights for every single tile in the same original image. In other words, we are treating every image tile equally. If something interesting appears in any given tile, we’ll mark that tile as interesting.
We don’t want to lose track of the arrangement of the original tiles. So we save the result from processing each tile into a grid in the same arrangement as the original image. It looks like this:
In other words, we’ve started with a large image and we ended with a slightly smaller array that records which sections of our original image were the most interesting.
The result of Step 3 was an array that maps out which parts of the original image are the most interesting. But that array is still pretty big:
To reduce the size of the array, we downsample it using an algorithm called max pooling. It sounds fancy, but it isn’t at all!
We’ll just look at each 2x2 square of the array and keep the biggest number:
The idea here is that if we found something interesting in any of the four input tiles that makes up each 2x2 grid square, we’ll just keep the most interesting bit. This reduces the size of our array while keeping the most important bits.
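If you want to see just how simple it is, here's a bare-bones NumPy version of 2x2 max pooling (the array values below are made up purely for illustration):

import numpy as np

def max_pool_2x2(activation_map):
    # Keep only the largest value from each 2x2 block of the array.
    h, w = activation_map.shape
    trimmed = activation_map[:h - h % 2, :w - w % 2]   # drop any odd leftover row/column
    blocks = trimmed.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

print(max_pool_2x2(np.array([[1, 0, 2, 3],
                             [4, 6, 6, 8],
                             [3, 1, 1, 0],
                             [1, 2, 2, 4]])))
# prints [[6 8]
#         [3 4]]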
So far, we’ve reduced a giant image down into a fairly small array.
Guess what? That array is just a bunch of numbers, so we can use that small array as input into another neural network. This final neural network will decide if the image is or isn’t a match. To differentiate it from the convolution step, we call it a “fully connected” network.
So from start to finish, our whole five-step pipeline looks like this:
Our image processing pipeline is a series of steps: convolution, max-pooling, and finally a fully-connected network.
When solving problems in the real world, these steps can be combined and stacked as many times as you want! You can have two, three or even ten convolution layers. You can throw in max pooling wherever you want to reduce the size of your data.
The basic idea is to start with a large image and continually boil it down, step-by-step, until you finally have a single result. The more convolution steps you have, the more complicated features your network will be able to learn to recognize.
For example, the first convolution step might learn to recognize sharp edges, the second convolution step might recognize beaks using its knowledge of sharp edges, the third step might recognize entire birds using its knowledge of beaks, etc.
Here’s what a more realistic deep convolutional network (like you would find in a research paper) looks like:
In this case, they start with a 224 x 224 pixel image, apply convolution and max pooling twice, apply convolution 3 more times, apply max pooling and then have two fully-connected layers. The end result is that the image is classified into one of 1000 categories!
So how do you know which steps you need to combine to make your image classifier work?
Honestly, you have to answer this by doing a lot of experimentation and testing. You might have to train 100 networks before you find the optimal structure and parameters for the problem you are solving. Machine learning involves a lot of trial and error!
Now finally we know enough to write a program that can decide if a picture is a bird or not.
As always, we need some data to get started. The free CIFAR10 data set contains 6,000 pictures of birds and 52,000 pictures of things that are not birds. But to get even more data we’ll also add in the Caltech-UCSD Birds-200–2011 data set that has another 12,000 bird pics.
Here are a few of the birds from our combined data set:
And here are some of the 52,000 non-bird images:
This data set will work fine for our purposes, but 72,000 low-res images is still pretty small for real-world applications. If you want Google-level performance, you need millions of large images. In machine learning, having more data is almost always more important than having better algorithms. Now you know why Google is so happy to offer you unlimited photo storage. They want your sweet, sweet data!
To build our classifier, we’ll use TFLearn. TFLearn is a wrapper around Google’s TensorFlow deep learning library that exposes a simplified API. It makes building convolutional neural networks as easy as writing a few lines of code to define the layers of our network.
Here’s the code to define and train the network:
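(The original code was embedded as a gist, so here's a sketch along the same lines. It assumes the combined bird/not-bird images have already been loaded and resized into arrays X, Y for training and X_test, Y_test for validation; the exact layer sizes and training settings are illustrative.)

import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

# Define the network: convolution and max pooling layers to boil the
# image down, then fully-connected layers to make the final call.
network = input_data(shape=[None, 32, 32, 3])
network = conv_2d(network, 32, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 64, 3, activation='relu')
network = conv_2d(network, 64, 3, activation='relu')
network = max_pool_2d(network, 2)
network = fully_connected(network, 512, activation='relu')
network = dropout(network, 0.5)   # randomly drop half the activations during training to fight over-fitting
network = fully_connected(network, 2, activation='softmax')   # two possible answers: bird / not bird
network = regression(network, optimizer='adam',
                     loss='categorical_crossentropy', learning_rate=0.001)

# Train for 100 passes over the data, checking accuracy on the
# held-out validation images as we go.
model = tflearn.DNN(network)
model.fit(X, Y, n_epoch=100, shuffle=True,
          validation_set=(X_test, Y_test),
          show_metric=True, batch_size=96,
          run_id='bird-classifier')
model.save("bird-classifier.tfl")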
If you are training with a good video card with enough RAM (like an NVIDIA GeForce GTX 980 Ti or better), this will be done in less than an hour. If you are training with a normal CPU, it might take a lot longer.
As it trains, the accuracy will increase. After the first pass, I got 75.4% accuracy. After just 10 passes, it was already up to 91.7%. After 50 or so passes, it capped out around 95.5% accuracy and additional training didn’t help, so I stopped it there.
Congrats! Our program can now recognize birds in images!
Now that we have a trained neural network, we can use it! Here’s a simple script that takes in a single image file and predicts if it is a bird or not.
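(Again, the original script lives in a gist. A minimal version might look like the sketch below, assuming the same network layer stack is rebuilt exactly as in the training sketch above and that class index 1 means "bird", matching how the labels were encoded during training.)

import sys
import numpy as np
import tflearn
from PIL import Image

# Rebuild the exact same `network` layers as in the training script,
# then load the weights we saved after training.
model = tflearn.DNN(network)
model.load("bird-classifier.tfl")

# Load the image given on the command line and shrink it to the
# 32x32 size the network expects.
img = Image.open(sys.argv[1]).convert('RGB').resize((32, 32))
data = np.asarray(img, dtype=np.float32) / 255.0

# predict() gives back a probability for each class; index 1 is
# assumed to be "bird" here.
prediction = model.predict([data])[0]
print("That's a bird!" if np.argmax(prediction) == 1 else "Not a bird.")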
But to really see how effective our network is, we need to test it with lots of images. The data set I created held back 15,000 images for validation. When I ran those 15,000 images through the network, it predicted the correct answer 95% of the time.
That seems pretty good, right? Well... it depends!
Our network claims to be 95% accurate. But the devil is in the details. That could mean all sorts of different things.
For example, what if 5% of our training images were birds and the other 95% were not birds? A program that guessed “not a bird” every single time would be 95% accurate! But it would also be 100% useless.
We need to look more closely at the numbers than just the overall accuracy. To judge how good a classification system really is, we need to look closely at how it failed, not just the percentage of the time that it failed.
Instead of thinking about our predictions as “right” and “wrong”, let’s break them down into four separate categories —
Using our validation set of 15,000 images, here’s how many times our predictions fell into each category:
Why do we break our results down like this? Because not all mistakes are created equal.
Imagine we were writing a program to detect cancer from an MRI image. We’d much rather have false positives than false negatives. A false negative would be the worst possible case: that’s when the program tells someone they definitely don’t have cancer when they actually do.
Instead of just looking at overall accuracy, we calculate Precision and Recall metrics, which give us a clearer picture of how well we did:
This tells us that 97% of the time we guessed “Bird”, we were right! But it also tells us that we only found 90% of the actual birds in the data set. In other words, we might not find every bird but we are pretty sure about it when we do find one!
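In case the formulas help: precision is TP / (TP + FP) and recall is TP / (TP + FN). The counts below are made up purely to land in that 97% / 90% ballpark; they are not the actual numbers from my validation run:

def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)   # when we said "bird", how often were we right?
    recall = true_pos / (true_pos + false_neg)      # of all the real birds, how many did we catch?
    return precision, recall

print(precision_recall(true_pos=440, false_pos=14, false_neg=49))
# roughly (0.97, 0.90)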
Now that you know the basics of deep convolutional networks, you can try out some of the examples that come with TFLearn to get your hands dirty with different neural network architectures. It even comes with built-in data sets, so you don’t have to find your own images.
You also know enough now to start branching out and learning about other areas of machine learning. Why not learn how to use algorithms to train computers how to play Atari games next?
If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 4, Part 5 and Part 6!
Adam Geitgey | 15.2K | 13 | https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78?source=tag_archive---------3---------------- | Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano.
Have you noticed that Facebook has developed an uncanny ability to recognize your friends in your photographs? In the old days, Facebook used to make you tag your friends in photos by clicking on them and typing in their name. Now as soon as you upload a photo, Facebook tags everyone for you like magic:
This technology is called face recognition. Facebook’s algorithms are able to recognize your friends’ faces after they have been tagged only a few times. It’s pretty amazing technology — Facebook can recognize faces with 98% accuracy which is pretty much as good as humans can do!
Let’s learn how modern face recognition works! But just recognizing your friends would be too easy. We can push this tech to the limit to solve a more challenging problem — telling Will Ferrell (famous actor) apart from Chad Smith (famous rock musician)!
So far in Part 1, 2 and 3, we’ve used machine learning to solve isolated problems that have only one step — estimating the price of a house, generating new data based on existing data and telling if an image contains a certain object. All of those problems can be solved by choosing one machine learning algorithm, feeding in data, and getting the result.
But face recognition is really a series of several related problems:
As a human, your brain is wired to do all of this automatically and instantly. In fact, humans are too good at recognizing faces and end up seeing faces in everyday objects:
Computers are not capable of this kind of high-level generalization (at least not yet...), so we have to teach them how to do each step in this process separately.
We need to build a pipeline where we solve each step of face recognition separately and pass the result of the current step to the next step. In other words, we will chain together several machine learning algorithms:
Let’s tackle this problem one step at a time. For each step, we’ll learn about a different machine learning algorithm. I’m not going to explain every single algorithm completely to keep this from turning into a book, but you’ll learn the main ideas behind each one and you’ll learn how you can build your own facial recognition system in Python using OpenFace and dlib.
The first step in our pipeline is face detection. Obviously we need to locate the faces in a photograph before we can try to tell them apart!
If you’ve used any camera in the last 10 years, you’ve probably seen face detection in action:
Face detection is a great feature for cameras. When the camera can automatically pick out faces, it can make sure that all the faces are in focus before it takes the picture. But we’ll use it for a different purpose — finding the areas of the image we want to pass on to the next step in our pipeline.
Face detection went mainstream in the early 2000s when Paul Viola and Michael Jones invented a way to detect faces that was fast enough to run on cheap cameras. However, much more reliable solutions exist now. We’re going to use a method invented in 2005 called Histogram of Oriented Gradients — or just HOG for short.
To find faces in an image, we’ll start by making our image black and white because we don’t need color data to find faces:
Then we’ll look at every single pixel in our image one at a time. For every single pixel, we want to look at the pixels directly surrounding it:
Our goal is to figure out how dark the current pixel is compared to the pixels directly surrounding it. Then we want to draw an arrow showing in which direction the image is getting darker:
If you repeat that process for every single pixel in the image, you end up with every pixel being replaced by an arrow. These arrows are called gradients and they show the flow from light to dark across the entire image:
This might seem like a random thing to do, but there’s a really good reason for replacing the pixels with gradients. If we analyze pixels directly, really dark images and really light images of the same person will have totally different pixel values. But by only considering the direction that brightness changes, both really dark images and really bright images will end up with the same exact representation. That makes the problem a lot easier to solve!
But saving the gradient for every single pixel gives us way too much detail. We end up missing the forest for the trees. It would be better if we could just see the basic flow of lightness/darkness at a higher level so we could see the basic pattern of the image.
To do this, we’ll break up the image into small squares of 16x16 pixels each. In each square, we’ll count up how many gradients point in each major direction (how many point up, point up-right, point right, etc...). Then we’ll replace that square in the image with the arrow directions that were the strongest.
The end result is we turn the original image into a very simple representation that captures the basic structure of a face in a simple way:
To find faces in this HOG image, all we have to do is find the part of our image that looks the most similar to a known HOG pattern that was extracted from a bunch of other training faces:
Using this technique, we can now easily find faces in any image:
If you want to try this step out yourself using Python and dlib, here’s code showing how to generate and view HOG representations of images.
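If you'd rather not dig through that gist, scikit-image can produce a similar HOG visualization in a few lines. The parameters below are illustrative, not necessarily the exact ones used for face detection:

import matplotlib.pyplot as plt
from skimage import color, data, feature

image = color.rgb2gray(data.astronaut())   # any grayscale image works here

hog_features, hog_image = feature.hog(
    image,
    orientations=8,            # how many gradient directions to count per square
    pixels_per_cell=(16, 16),  # the 16x16 squares described above
    cells_per_block=(1, 1),
    visualize=True,            # also return a picture of the arrow pattern
)                              # (older scikit-image versions spell this "visualise")

plt.imshow(hog_image, cmap='gray')
plt.show()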
Whew, we isolated the faces in our image. But now we have to deal with the problem that faces turned different directions look totally different to a computer:
To account for this, we will try to warp each picture so that the eyes and lips are always in the same place in the image. This will make it a lot easier for us to compare faces in the next steps.
To do this, we are going to use an algorithm called face landmark estimation. There are lots of ways to do this, but we are going to use the approach invented in 2014 by Vahid Kazemi and Josephine Sullivan.
The basic idea is we will come up with 68 specific points (called landmarks) that exist on every face — the top of the chin, the outside edge of each eye, the inner edge of each eyebrow, etc. Then we will train a machine learning algorithm to be able to find these 68 specific points on any face:
Here’s the result of locating the 68 face landmarks on our test image:
Now that we know where the eyes and mouth are, we’ll simply rotate, scale and shear the image so that the eyes and mouth are centered as best as possible. We won’t do any fancy 3d warps because that would introduce distortions into the image. We are only going to use basic image transformations like rotation and scale that preserve parallel lines (called affine transformations):
Now no matter how the face is turned, we are able to center the eyes and mouth in roughly the same position in the image. This will make our next step a lot more accurate.
If you want to try this step out yourself using Python and dlib, here’s the code for finding face landmarks and here’s the code for transforming the image using those landmarks.
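As a rough idea of what the landmark step looks like with a reasonably recent version of dlib (it assumes you've separately downloaded dlib's pre-trained shape_predictor_68_face_landmarks.dat model file, and the image filename is a placeholder):

import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = dlib.load_rgb_image("test_image.jpg")

for face_rect in detector(image, 1):             # find each face in the image
    landmarks = predictor(image, face_rect)      # locate the 68 landmark points
    points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
    print(points[:3])   # the first few points trace along the jaw line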
Now we get to the meat of the problem — actually telling faces apart. This is where things get really interesting!
The simplest approach to face recognition is to directly compare the unknown face we found in Step 2 with all the pictures we have of people that have already been tagged. When we find a previously tagged face that looks very similar to our unknown face, it must be the same person. Seems like a pretty good idea, right?
There’s actually a huge problem with that approach. A site like Facebook with billions of users and a trillion photos can’t possibly loop through every previously-tagged face to compare it to every newly uploaded picture. That would take way too long. They need to be able to recognize faces in milliseconds, not hours.
What we need is a way to extract a few basic measurements from each face. Then we could measure our unknown face the same way and find the known face with the closest measurements. For example, we might measure the size of each ear, the spacing between the eyes, the length of the nose, etc. If you’ve ever watched a bad crime show like CSI, you know what I am talking about:
Ok, so which measurements should we collect from each face to build our known face database? Ear size? Nose length? Eye color? Something else?
It turns out that the measurements that seem obvious to us humans (like eye color) don’t really make sense to a computer looking at individual pixels in an image. Researchers have discovered that the most accurate approach is to let the computer figure out the measurements to collect itself. Deep learning does a better job than humans at figuring out which parts of a face are important to measure.
The solution is to train a Deep Convolutional Neural Network (just like we did in Part 3). But instead of training the network to recognize objects in pictures like we did last time, we are going to train it to generate 128 measurements for each face.
The training process works by looking at 3 face images at a time:
Then the algorithm looks at the measurements it is currently generating for each of those three images. It then tweaks the neural network slightly so that it makes sure the measurements it generates for #1 and #2 are slightly closer while making sure the measurements for #2 and #3 are slightly further apart:
After repeating this step millions of times for millions of images of thousands of different people, the neural network learns to reliably generate 128 measurements for each person. Any ten different pictures of the same person should give roughly the same measurements.
Machine learning people call the 128 measurements of each face an embedding. The idea of reducing complicated raw data like a picture into a list of computer-generated numbers comes up a lot in machine learning (especially in language translation). The exact approach for faces we are using was invented in 2015 by researchers at Google but many similar approaches exist.
This process of training a convolutional neural network to output face embeddings requires a lot of data and computer power. Even with an expensive NVIDIA Tesla video card, it takes about 24 hours of continuous training to get good accuracy.
But once the network has been trained, it can generate measurements for any face, even ones it has never seen before! So this step only needs to be done once. Lucky for us, the fine folks at OpenFace already did this and they published several trained networks which we can directly use. Thanks Brandon Amos and team!
So all we need to do ourselves is run our face images through their pre-trained network to get the 128 measurements for each face. Here are the measurements for our test image:
So what parts of the face are these 128 numbers measuring exactly? It turns out that we have no idea. It doesn’t really matter to us. All we care about is that the network generates nearly the same numbers when looking at two different pictures of the same person.
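Concretely, "nearly the same numbers" just means the two lists of 128 measurements are a short distance apart. A toy check might look like this (the 0.6 cutoff is a common rule of thumb for these embeddings, not a universal constant):

import numpy as np

def faces_match(embedding_a, embedding_b, cutoff=0.6):
    # A smaller Euclidean distance between the two 128-number lists
    # means the faces are more likely to be the same person.
    distance = np.linalg.norm(np.array(embedding_a) - np.array(embedding_b))
    return distance < cutoff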
If you want to try this step yourself, OpenFace provides a lua script that will generate embeddings for all images in a folder and write them to a csv file. You run it like this.
This last step is actually the easiest step in the whole process. All we have to do is find the person in our database of known people who has the closest measurements to our test image.
You can do that by using any basic machine learning classification algorithm. No fancy deep learning tricks are needed. We’ll use a simple linear SVM classifier, but lots of classification algorithms could work.
All we need to do is train a classifier that can take in the measurements from a new test image and tell which known person is the closest match. Running this classifier takes milliseconds. The result of the classifier is the name of the person!
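For example, with scikit-learn the whole classifier is only a few lines. Here embeddings would be the array of 128-measurement rows and names the matching person labels; both variable names (and new_face_embedding) are just placeholders:

from sklearn.svm import SVC

classifier = SVC(kernel='linear', probability=True)
classifier.fit(embeddings, names)   # learn which measurements belong to which person

# For a new face: generate its 128 measurements the same way as before,
# then ask the classifier whose known measurements are the closest match.
print(classifier.predict([new_face_embedding])[0])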
So let’s try out our system. First, I trained a classifier with the embeddings of about 20 pictures each of Will Ferrell, Chad Smith and Jimmy Fallon:
Then I ran the classifier on every frame of the famous YouTube video of Will Ferrell and Chad Smith pretending to be each other on the Jimmy Fallon show:
It works! And look how well it works for faces in different poses — even sideways faces!
Let’s review the steps we followed:
Now that you know how this all works, here are start-to-finish instructions for running this entire face recognition pipeline on your own computer:
UPDATE 4/9/2017: You can still follow the steps below to use OpenFace. However, I’ve released a new Python-based face recognition library called face_recognition that is much easier to install and use. So I’d recommend trying out face_recognition first instead of continuing below!
I even put together a pre-configured virtual machine with face_recognition, OpenCV, TensorFlow and lots of other deep learning tools pre-installed. You can download and run it on your computer very easily. Give the virtual machine a shot if you don’t want to install all these libraries yourself!
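To give a feel for how much simpler the face_recognition library makes this, a basic comparison looks roughly like the following (the image filenames are placeholders):

import face_recognition

known_image = face_recognition.load_image_file("will_ferrell.jpg")
unknown_image = face_recognition.load_image_file("unknown_person.jpg")

# face_encodings() returns the 128 measurements for each face it finds
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# compare_faces() returns [True] if the unknown face likely matches the known one
print(face_recognition.compare_faces([known_encoding], unknown_encoding))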
Original OpenFace instructions:
If you liked this article, please consider signing up for my Machine Learning is Fun! newsletter:
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 5!