author (string) | claps (string) | reading_time (int64) | link (string) | title (string) | text (string) |
---|---|---|---|---|---|
Justin Lee | 8.3K | 11 | https://medium.com/swlh/chatbots-were-the-next-big-thing-what-happened-5fc49dd6fa61?source=---------0---------------- | Chatbots were the next big thing: what happened? – The Startup – Medium | Oh, how the headlines blared:
Chatbots were The Next Big Thing.
Our hopes were sky high. Bright-eyed and bushy-tailed, the industry was ripe for a new era of innovation: it was time to start socializing with machines.
And why wouldn’t they be? All the road signs pointed towards insane success.
At the Mobile World Congress 2017, chatbots were the main headliners. The conference organizers cited an ‘overwhelming acceptance at the event of the inevitable shift of focus for brands and corporates to chatbots’.
In fact, the only significant question around chatbots was who would monopolize the field, not whether chatbots would take off in the first place:
One year on, we have an answer to that question.
No.
Because there isn’t even an ecosystem for a platform to dominate.
Chatbots weren’t the first technological development to be talked up in grandiose terms and then slump spectacularly.
The age-old hype cycle unfolded in familiar fashion...
Expectations built, built, and then..... It all kind of fizzled out.
The predicted paradigm shift didn’t materialize.
And apps are, tellingly, still alive and well.
We look back at our breathless optimism and turn to each other, slightly baffled:
“is that it? THAT was the chatbot revolution we were promised?”
Digit’s Ethan Bloch sums up the general consensus:
According to Dave Feldman, Vice President of Product Design at Heap, chatbots didn’t just take on one difficult problem and fail: they took on several and failed all of them.
Bots can interface with users in different ways. The big divide is text vs. speech. In the beginning (of computer interfaces) was the (written) word.
Users had to type commands manually into a machine to get anything done.
Then, graphical user interfaces (GUIs) came along and saved the day. We became entranced by windows, mouse clicks, icons. And hey, we eventually got color, too!
Meanwhile, a bunch of research scientists were busily developing natural language (NL) interfaces to databases, instead of having to learn an arcane database query language.
Another bunch of scientists were developing speech-processing software so that you could just speak to your computer, rather than having to type. This turned out to be a whole lot more difficult than anyone originally realised:
The next item on the agenda was holding a two-way dialog with a machine. Here’s an example dialog (dating back to the 1990s) with a VCR setup system:
Pretty cool, right? The system takes turns in a collaborative way, and does a smart job of figuring out what the user wants.
It was carefully crafted to deal with conversations involving VCRs, and could only operate within strict limitations.
Modern day bots, whether they use typed or spoken input, have to face all these challenges, but also work in an efficient and scalable way on a variety of platforms.
Basically, we’re still trying to achieve the same innovations we were 30 years ago.
Here’s where I think we’re going wrong:
An oversized assumption has been that apps are ‘over’, and would be replaced by bots.
By pitting two such disparate concepts against one another (instead of seeing them as separate entities designed to serve different purposes) we discouraged bot development.
You might remember a similar war cry when apps first came onto the scene ten years ago: but do you remember when apps replaced the internet?
It’s said that a new product or service needs to be two of the following: better, cheaper, or faster. Are chatbots cheaper or faster than apps? No — not yet, at least.
Whether they’re ‘better’ is subjective, but I think it’s fair to say that today’s best bot isn’t comparable to today’s best app.
Plus, nobody thinks that using Lyft is too complicated, or that it’s too hard to order food or buy a dress on an app. What is too complicated is trying to complete these tasks with a bot — and having the bot fail.
A great bot can be about as useful as an average app. When it comes to rich, sophisticated, multi-layered apps, there’s no competition.
That’s because machines let us access vast and complex information systems, and the early graphical information systems were a revolutionary leap forward in helping us navigate those systems.
Modern-day apps benefit from decades of research and experimentation. Why would we throw this away?
But, if we swap the word ‘replace’ with ‘extend’, things get much more interesting.
Today’s most successful bot experiences take a hybrid approach, incorporating chat into a broader strategy that encompasses more traditional elements.
The next wave will be multimodal apps, where you can say what you want (like with Siri) and get back information as a map, text, or even a spoken response.
Another problematic aspect of the sweeping nature of hype is that it tends to bypass essential questions like these.
For plenty of companies, bots just aren’t the right solution. The past two years are littered with cases of bots being blindly applied to problems where they aren’t needed.
Building a bot for the sake of it, letting it loose and hoping for the best will never end well:
The vast majority of bots are built using decision-tree logic, where the bot’s canned response relies on spotting specific keywords in the user input.
The advantage of this approach is that it’s pretty easy to list all the cases that they are designed to cover. And that’s precisely their disadvantage, too.
That’s because these bots are purely a reflection of the capability, fastidiousness and patience of the person who created them; and how many user needs and inputs they were able to anticipate.
Problems arise when life refuses to fit into those boxes.
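To make that concrete, here is a toy sketch of the kind of keyword-spotting logic being described; the keywords and canned responses are invented purely for illustration:

```python
def canned_response(user_input):
    """A minimal decision-tree-style bot: it only responds to anticipated keywords."""
    text = user_input.lower()
    if "refund" in text:
        return "I can help with refunds. What's your order number?"
    if "hours" in text or "open" in text:
        return "We're open 9am to 5pm, Monday to Friday."
    if "human" in text or "agent" in text:
        return "Connecting you to a member of our team..."
    # Anything the designer didn't anticipate falls through to a non-answer
    return "Sorry, I didn't quite get that. Could you rephrase?"
```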
According to recent reports, 70% of the 100,000+ bots on Facebook Messenger are failing to fulfil simple user requests. This is partly a result of developers failing to narrow their bot down to one strong area of focus.
When we were building GrowthBot, we decided to make it specific to sales and marketers: not an ‘all-rounder’, despite the temptation to get overexcited about potential capabilities.
Remember: a bot that does ONE thing well is infinitely more helpful than a bot that does multiple things poorly.
A competent developer can build a basic bot in minutes — but one that can hold a conversation? That’s another story. Despite the constant hype around AI, we’re still a long way from achieving anything remotely human-like.
In an ideal world, the technology known as NLP (natural language processing) should allow a chatbot to understand the messages it receives. But NLP is only just emerging from research labs and is very much in its infancy.
Some platforms provide a bit of NLP, but even the best is at toddler-level capacity (for example, think about Siri understanding your words, but not their meaning.)
As Matt Asay outlines, this results in another issue: failure to capture the attention and creativity of developers.
And conversations are complex. They’re not linear. Topics spin around each other, take random turns, restart or abruptly finish.
Today’s rule-based dialogue systems are too brittle to deal with this kind of unpredictability, and statistical approaches using machine learning are just as limited. The level of AI required for human-like conversation just isn’t available yet.
And in the meantime, there are few high-quality examples of trailblazing bots to lead the way. As Dave Feldman remarked:
Once upon a time, the only way to interact with computers was by typing arcane commands to the terminal. Visual interfaces using windows, icons or a mouse were a revolution in how we manipulate information.
There’s a reason computing moved from text-based to graphical user interfaces (GUIs). On the input side, it’s easier and faster to click than it is to type.
Tapping or selecting is obviously preferable to typing out a whole sentence, even with predictive (often error-prone) text. On the output side, the old adage that a picture is worth a thousand words is usually true.
We love optical displays of information because we are highly visual creatures. It’s no accident that kids love touch screens. The pioneers who dreamt up graphical interfaces were inspired by cognitive psychology, the study of how the brain deals with communication.
Conversational UIs are meant to replicate the way humans prefer to communicate, but they end up requiring extra cognitive effort. Essentially, we’re swapping something simple for a more-complex alternative.
Sure, there are some concepts that we can only express using language (“show me all the ways of getting to a museum that give me 2000 steps but don’t take longer than 35 minutes”), but most tasks can be carried out more efficiently and intuitively with GUIs than with a conversational UI.
Aiming for a human dimension in business interactions makes sense.
If there’s one thing that’s broken about sales and marketing, it’s the lack of humanity: brands hide behind ticket numbers, feedback forms, do-not-reply-emails, automated responses and gated ‘contact us’ forms.
Facebook’s goal is that their bots should pass the so-called Turing Test, meaning you can’t tell whether you are talking to a bot or a human. But a bot isn’t the same as a human. It never will be.
A conversation encompasses so much more than just text.
Humans can read between the lines, leverage contextual information and understand double layers like sarcasm. Bots quickly forget what they’re talking about, meaning it’s a bit like conversing with someone who has little or no short-term memory.
As the HubSpot team pinpointed:
People aren’t easily fooled, and pretending a bot is a human is guaranteed to diminish returns (not to mention the fact that you’re lying to your users).
And even those rare bots that are powered by state-of-the-art NLP, and excel at processing and producing content, will fall short in comparison.
And here’s the other thing. Conversational UIs are built to replicate the way humans prefer to communicate — with other humans.
But is that how humans prefer to interact with machines?
Not necessarily.
At the end of the day, no amount of witty quips or human-like mannerisms will save a bot from conversational failure.
In a way, those early-adopters weren’t entirely wrong.
People are yelling at Google Home to play their favorite song, ordering pizza from the Domino’s bot and getting makeup tips from Sephora. But in terms of consumer response and developer involvement, chatbots haven’t lived up to the hype generated circa 2015/16.
Not even close.
Computers are good at being computers. Searching for data, crunching numbers, analyzing opinions and condensing that information.
Computers aren’t good at understanding human emotion. The state of NLP means they still don’t ‘get’ what we’re asking them, never mind how we feel.
That’s why it’s still impossible to imagine effective customer support, sales or marketing without the essential human touch: empathy and emotional intelligence.
For now, bots can continue to help us with automated, repetitive, low-level tasks and queries; as cogs in a larger, more complex system. And we did them, and ourselves, a disservice by expecting so much, so soon.
But that’s not the whole story.
Yes, our industry massively overestimated the initial impact chatbots would have. Emphasis on initial.
As Bill Gates once said:
The hype is over. And that’s a good thing. Now, we can start examining the middle-grounded grey area, instead of the hyper-inflated, frantic black and white zone.
I believe we’re at the very beginning of explosive growth. This sense of anti-climax is completely normal for transformational technology.
Messaging will continue to gain traction. Chatbots aren’t going away. NLP and AI are becoming more sophisticated every day.
Developers, apps and platforms will continue to experiment with, and heavily invest in, conversational marketing.
And I can’t wait to see what happens next.
Head of Growth for GrowthBot, Messaging & Conversational Strategy @HubSpot
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
|
Conor Dewey | 1.4K | 7 | https://towardsdatascience.com/python-for-data-science-8-concepts-you-may-have-forgotten-i-did-825966908393?source=---------1---------------- | Python for Data Science: 8 Concepts You May Have Forgotten | If you’ve ever found yourself looking up the same question, concept, or syntax over and over again when programming, you’re not alone.
I find myself doing this constantly.
While it’s not unnatural to look things up on StackOverflow or other resources, it does slow you down a good bit and raise questions as to your complete understanding of the language.
We live in a world where there is a seemingly infinite amount of accessible, free resources looming just one search away at all times. However, this can be both a blessing and a curse. When not managed effectively, an over-reliance on these resources can build poor habits that will set you back long-term.
Personally, I find myself pulling code from similar discussion threads several times, rather than taking the time to learn and solidify the concept so that I can reproduce the code myself the next time.
This approach is lazy and while it may be the path of least resistance in the short-term, it will ultimately hurt your growth, productivity, and ability to recall syntax (cough, interviews) down the line.
Recently, I’ve been working through an online data science course titled Python for Data Science and Machine Learning on Udemy (Oh God, I sound like that guy on Youtube). Over the early lectures in the series, I was reminded of some concepts and syntax that I consistently overlook when performing data analysis in Python.
In the interest of solidifying my understanding of these concepts once and for all and saving you guys a couple of StackOverflow searches, here’s the stuff that I’m always forgetting when working with Python, NumPy, and Pandas.
I’ve included a short description and example for each, however for your benefit, I will also include links to videos and other resources that explore each concept more in-depth as well.
Writing out a for loop every time you need to define some sort of list is tedious; luckily, Python has a built-in way to address this problem in just one line of code. The syntax can be a little hard to wrap your head around, but once you get familiar with this technique you’ll use it fairly often.
See the example below for how you would normally build a list with a for loop vs. creating your list with a comprehension in one simple line, no loops necessary.
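A minimal sketch of the comparison (the squares example is arbitrary):

```python
# With a for loop
squares = []
for x in range(10):
    squares.append(x ** 2)

# With a list comprehension, in one line
squares = [x ** 2 for x in range(10)]
```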
Ever get tired of creating function after function for limited use cases? Lambda functions to the rescue! Lambda functions are used for creating small, one-time and anonymous function objects in Python. Basically, they let you create a function, without creating a function.
The basic syntax of lambda functions is: lambda arguments: expression
Note that lambda functions can do everything that regular functions can do, as long as there’s just one expression. Check out the simple example below and the upcoming video to get a better feel for the power of lambda functions:
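For instance, a throwaway doubling function (the names here are just illustrative):

```python
double = lambda x: x * 2
print(double(5))  # 10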
Once you have a grasp on lambda functions, learning to pair them with the map and filter functions can be a powerful tool.
Specifically, map takes in a list and transforms it into a new list by performing some sort of operation on each element. In the example below, it goes through each element and maps the result of itself times 2 to a new list. Note that the list function simply converts the output to list type.
The filter function takes in a list and a rule, much like map, however it returns a subset of the original list by comparing each element against the boolean filtering rule.
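A quick sketch of both, using an arbitrary list of numbers:

```python
seq = [1, 2, 3, 4, 5]

# map: apply an operation to every element (list() converts the map object to a list)
doubled = list(map(lambda x: x * 2, seq))        # [2, 4, 6, 8, 10]

# filter: keep only the elements that pass the boolean rule
evens = list(filter(lambda x: x % 2 == 0, seq))  # [2, 4]
```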
For creating quick and easy Numpy arrays, look no further than the arange and linspace functions. Each one has their specific purpose, but the appeal here (instead of using range), is that they output NumPy arrays, which are typically easier to work with for data science.
Arange returns evenly spaced values within a given interval. Along with a starting and stopping point, you can also define a step size or data type if necessary. Note that the stopping point is a ‘cut-off’ value, so it will not be included in the array output.
Linspace is very similar, but with a slight twist. Linspace returns evenly spaced numbers over a specified interval. So given a starting and stopping point, as well as a number of values, linspace will evenly space them out for you in a NumPy array. This is especially helpful for data visualizations and declaring axes when plotting.
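A couple of illustrative calls (the values are arbitrary):

```python
import numpy as np

np.arange(0, 10, 2)    # array([0, 2, 4, 6, 8]) -- the stop value 10 is cut off
np.linspace(0, 10, 5)  # array([ 0. ,  2.5,  5. ,  7.5, 10. ]) -- 5 evenly spaced values, endpoints included
```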
You may have run into this when dropping a column in Pandas or summing values in a NumPy matrix. If not, then you surely will at some point. Let’s use the example of dropping a column for now:
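Something along these lines, with a made-up dataframe:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

df.drop('b', axis=1)  # axis=1 -> drop a column
df.drop(0, axis=0)    # axis=0 -> drop a row
df.shape              # (3, 2) -- (rows, columns)
```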
I don’t know how many times I wrote this line of code before I actually knew why I was setting axis to the value I was. As you can probably deduce from above, set axis to 1 if you want to deal with columns and set it to 0 if you want rows. But why is this? My favorite reasoning, or at least how I remember this:
Calling the shape attribute from a Pandas dataframe gives us back a tuple with the first value representing the number of rows and the second value representing the number of columns. If you think about how this is indexed in Python, rows are at 0 and columns are at 1, much like how we declare our axis value. Crazy, right?
If you’re familiar with SQL, then these concepts will probably come a lot easier for you. Anyhow, these functions are essentially just ways to combine dataframes in specific ways. It can be difficult to keep track of which is best to use at which time, so let’s review it.
Concat allows the user to append one or more dataframes to each other either below or next to it (depending on how you define the axis).
Merge combines multiple dataframes on specific, common columns that serve as the primary key.
Join, much like merge, combines two dataframes. However, it joins them based on their indices, rather than some specified column.
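A rough sketch with a few toy dataframes:

```python
import pandas as pd

a = pd.DataFrame({'key': [1, 2], 'x': ['a', 'b']})
b = pd.DataFrame({'key': [3, 4], 'x': ['c', 'd']})
c = pd.DataFrame({'key': [1, 2], 'y': ['e', 'f']})

pd.concat([a, b], axis=0)                    # append b below a
pd.merge(a, c, on='key')                     # combine on a common column ('key')
a.set_index('key').join(c.set_index('key'))  # combine on the index instead
```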
Check out the excellent Pandas documentation for specific syntax and more concrete examples, as well as some special cases that you may run into.
Think of apply as a map function, but made for Pandas DataFrames or more specifically, for Series. If you’re not as familiar, Series are pretty similar to NumPy arrays for the most part.
Apply sends a function to every element along a column or row depending on what you specify. You might imagine how useful this can be, especially for formatting and manipulating values across a whole DataFrame column, without having to loop at all.
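For example, applying a formatting function across a whole column (the column name and tax rate are made up):

```python
import pandas as pd

prices = pd.DataFrame({'price': [10.0, 20.0, 30.0]})
prices['price_with_tax'] = prices['price'].apply(lambda p: round(p * 1.08, 2))
```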
Last but certainly not least are pivot tables. If you’re familiar with Microsoft Excel, then you’ve probably heard of pivot tables in some respect. The Pandas built-in pivot_table function creates a spreadsheet-style pivot table as a DataFrame. Note that the levels in the pivot table are stored in MultiIndex objects on the index and columns of the resulting DataFrame.
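A small sketch with invented sales data:

```python
import pandas as pd

sales = pd.DataFrame({
    'region': ['East', 'East', 'West', 'West'],
    'product': ['A', 'B', 'A', 'B'],
    'revenue': [100, 150, 200, 250],
})

pd.pivot_table(sales, values='revenue', index='region',
               columns='product', aggfunc='sum')
```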
That’s it for now. I hope a couple of these overviews have effectively jogged your memory regarding important yet somewhat tricky methods, functions, and concepts you frequently encounter when using Python for data science. Personally, I know that even the act of writing these out and trying to explain them in simple terms has helped me out a ton.
If you’re interested in receiving my weekly rundown of interesting articles and resources focused on data science, machine learning, and artificial intelligence, then subscribe to Self Driven Data Science using the form below!
If you enjoyed this post, feel free to hit the clap button and if you’re interested in posts to come, make sure to follow me on Medium at the link below — I’ll be writing and shipping every day this month as part of a 30-Day Challenge.
This article was originally published on conordewey.com
Data Scientist & Writer | www.conordewey.com
Sharing concepts, ideas, and codes.
|
William Koehrsen | 2.8K | 11 | https://towardsdatascience.com/automated-feature-engineering-in-python-99baf11cc219?source=---------2---------------- | Automated Feature Engineering in Python – Towards Data Science | Machine learning is increasingly moving from hand-designed models to automatically optimized pipelines using tools such as H20, TPOT, and auto-sklearn. These libraries, along with methods such as random search, aim to simplify the model selection and tuning parts of machine learning by finding the best model for a dataset with little to no manual intervention. However, feature engineering, an arguably more valuable aspect of the machine learning pipeline, remains almost entirely a human labor.
Feature engineering, also known as feature creation, is the process of constructing new features from existing data to train a machine learning model. This step can be more important than the actual model used because a machine learning algorithm only learns from the data we give it, and creating features that are relevant to a task is absolutely crucial (see the excellent paper “A Few Useful Things to Know about Machine Learning”).
Typically, feature engineering is a drawn-out manual process, relying on domain knowledge, intuition, and data manipulation. This process can be extremely tedious and the final features will be limited both by human subjectivity and time. Automated feature engineering aims to help the data scientist by automatically creating many candidate features out of a dataset from which the best can be selected and used for training.
In this article, we will walk through an example of using automated feature engineering with the featuretools Python library. We will use an example dataset to show the basics (stay tuned for future posts using real-world data). The complete code for this article is available on GitHub.
Feature engineering means building additional features out of existing data which is often spread across multiple related tables. Feature engineering requires extracting the relevant information from the data and getting it into a single table which can then be used to train a machine learning model.
The process of constructing features is very time-consuming because each new feature usually requires several steps to build, especially when using information from more than one table. We can group the operations of feature creation into two categories: transformations and aggregations. Let’s look at a few examples to see these concepts in action.
A transformation acts on a single table (thinking in terms of Python, a table is just a Pandas DataFrame ) by creating new features out of one or more of the existing columns. As an example, if we have the table of clients below
we can create features by finding the month of the joined column or taking the natural log of the income column. These are both transformations because they use information from only one table.
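As a rough sketch (the clients table here is a made-up stand-in using the column names from the article's example):

```python
import numpy as np
import pandas as pd

clients = pd.DataFrame({
    'client_id': [1, 2],
    'joined': pd.to_datetime(['2015-03-01', '2016-07-15']),
    'income': [45000, 72000],
})

# Transformations: new columns built from existing columns of a single table
clients['join_month'] = clients['joined'].dt.month
clients['log_income'] = np.log(clients['income'])
```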
On the other hand, aggregations are performed across tables, and use a one-to-many relationship to group observations and then calculate statistics. For example, if we have another table with information on the loans of clients, where each client may have multiple loans, we can calculate statistics such as the average, maximum, and minimum of loans for each client.
This process involves grouping the loans table by the client, calculating the aggregations, and then merging the resulting data into the client data. Here’s how we would do that in Python using the language of Pandas.
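Roughly, and continuing the toy tables from the sketch above:

```python
loans = pd.DataFrame({
    'client_id': [1, 1, 2],
    'loan_amount': [1000, 3000, 500],
})

# Group the child table (loans) by the parent key, compute statistics,
# then merge the result back onto the parent table (clients)
stats = loans.groupby('client_id')['loan_amount'].agg(['mean', 'max', 'min'])
stats.columns = ['mean_loan_amount', 'max_loan_amount', 'min_loan_amount']
clients = clients.merge(stats, left_on='client_id', right_index=True, how='left')
```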
These operations are not difficult by themselves, but if we have hundreds of variables spread across dozens of tables, this process is not feasible to do by hand. Ideally, we want a solution that can automatically perform transformations and aggregations across multiple tables and combine the resulting data into a single table. Although Pandas is a great resource, there’s only so much data manipulation we want to do by hand! (For more on manual feature engineering check out the excellent Python Data Science Handbook).
Fortunately, featuretools is exactly the solution we are looking for. This open-source Python library will automatically create many features from a set of related tables. Featuretools is based on a method known as “Deep Feature Synthesis”, which sounds a lot more imposing than it actually is (the name comes from stacking multiple features not because it uses deep learning!).
Deep feature synthesis stacks multiple transformation and aggregation operations (which are called feature primitives in the vocab of featuretools) to create features from data spread across many tables. Like most ideas in machine learning, it’s a complex method built on a foundation of simple concepts. By learning one building block at a time, we can form a good understanding of this powerful method.
First, let’s take a look at our example data. We already saw some of the dataset above, and the complete collection of tables is as follows:
If we have a machine learning task, such as predicting whether a client will repay a future loan, we will want to combine all the information about clients into a single table. The tables are related (through the client_id and the loan_id variables) and we could use a series of transformations and aggregations to do this process by hand. However, we will shortly see that we can instead use featuretools to automate the process.
The first two concepts of featuretools are entities and entitysets. An entity is simply a table (or a DataFrame if you think in Pandas). An EntitySet is a collection of tables and the relationships between them. Think of an entityset as just another Python data structure, with its own methods and attributes.
We can create an empty entityset in featuretools using the following:
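Something like this, assuming the pre-1.0 featuretools API that posts from this era rely on (newer releases renamed several of these calls):

```python
import featuretools as ft

es = ft.EntitySet(id='clients')
```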
Now we have to add entities. Each entity must have an index, which is a column with all unique elements. That is, each value in the index must appear in the table only once. The index in the clients dataframe is the client_id because each client has only one row in this dataframe. We add an entity with an existing index to an entityset using the following syntax:
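Along these lines (again a sketch of the pre-1.0 API, using the clients dataframe from the earlier sketch or the article's own dataset):

```python
es = es.entity_from_dataframe(entity_id='clients',
                              dataframe=clients,
                              index='client_id')
```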
The loans dataframe also has a unique index, loan_id, and the syntax to add this to the entityset is the same as for clients. However, for the payments dataframe, there is no unique index. When we add this entity to the entityset, we need to pass in the parameter make_index = True and specify the name of the index. Also, although featuretools will automatically infer the data type of each column in an entity, we can override this by passing in a dictionary of column types to the parameter variable_types.
For this dataframe, even though missed is an integer, this is not a numeric variable since it can only take on 2 discrete values, so we tell featuretools to treat it as a categorical variable. After adding the dataframes to the entityset, we can inspect any of them:
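A sketch of that call, with the article's payments dataframe assumed to be loaded:

```python
es = es.entity_from_dataframe(entity_id='payments',
                              dataframe=payments,
                              make_index=True,
                              index='payment_id',
                              variable_types={'missed': ft.variable_types.Categorical})

es['payments']  # inspect the entity and its inferred variable types
```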
The column types have been correctly inferred with the modification we specified. Next, we need to specify how the tables in the entityset are related.
The best way to think of a relationship between two tables is the analogy of parent to child. This is a one-to-many relationship: each parent can have multiple children. In the realm of tables, a parent table has one row for every parent, but the child table may have multiple rows corresponding to multiple children of the same parent.
For example, in our dataset, the clients dataframe is a parent of the loans dataframe. Each client has only one row in clients but may have multiple rows in loans. Likewise, loans is the parent of payments because each loan will have multiple payments. The parents are linked to their children by a shared variable. When we perform aggregations, we group the child table by the parent variable and calculate statistics across the children of each parent.
To formalize a relationship in featuretools, we only need to specify the variable that links two tables together. The clients and the loans table are linked via the client_id variable and loans and payments are linked with the loan_id. The syntax for creating a relationship and adding it to the entityset are shown below:
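Roughly (the relationship variable names are my own, and this assumes the clients, loans, and payments entities have all been added as above):

```python
r_client_loans = ft.Relationship(es['clients']['client_id'],
                                 es['loans']['client_id'])
r_loan_payments = ft.Relationship(es['loans']['loan_id'],
                                  es['payments']['loan_id'])

es = es.add_relationship(r_client_loans)
es = es.add_relationship(r_loan_payments)
```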
The entityset now contains the three entities (tables) and the relationships that link these entities together. After adding entities and formalizing relationships, our entityset is complete and we are ready to make features.
Before we can quite get to deep feature synthesis, we need to understand feature primitives. We already know what these are, but we have just been calling them by different names! These are simply the basic operations that we use to form new features:
New features are created in featuretools using these primitives either by themselves or stacking multiple primitives. Below is a list of some of the feature primitives in featuretools (we can also define custom primitives):
These primitives can be used by themselves or combined to create features. To make features with specified primitives we use the ft.dfs function (standing for deep feature synthesis). We pass in the entityset, the target_entity, which is the table where we want to add the features, the selected trans_primitives (transformations), and agg_primitives (aggregations):
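A sketch of the call; the specific primitives listed here are illustrative choices, not necessarily the ones used in the original post:

```python
features, feature_names = ft.dfs(entityset=es,
                                 target_entity='clients',
                                 agg_primitives=['mean', 'max', 'last'],
                                 trans_primitives=['month'])
```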
The result is a dataframe of new features for each client (because we made clients the target_entity). For example, we have the month each client joined which is a transformation feature primitive:
We also have a number of aggregation primitives such as the average payment amounts for each client:
Even though we specified only a few feature primitives, featuretools created many new features by combining and stacking these primitives.
The complete dataframe has 793 columns of new features!
We now have all the pieces in place to understand deep feature synthesis (dfs). In fact, we already performed dfs in the previous function call! A deep feature is simply a feature made by stacking multiple primitives, and dfs is the name of the process that makes these features. The depth of a deep feature is the number of primitives required to make the feature.
For example, the MEAN(payments.payment_amount) column is a deep feature with a depth of 1 because it was created using a single aggregation. A feature with a depth of two is LAST(loans.MEAN(payments.payment_amount)). This is made by stacking two aggregations: LAST (most recent) on top of MEAN. This represents the average payment size of the most recent loan for each client.
We can stack features to any depth we want, but in practice, I have never gone beyond a depth of 2. After this point, the features are difficult to interpret, but I encourage anyone interested to try “going deeper”.
We do not have to manually specify the feature primitives, but instead can let featuretools automatically choose features for us. To do this, we use the same ft.dfs function call but do not pass in any feature primitives:
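Roughly:

```python
features, feature_names = ft.dfs(entityset=es,
                                 target_entity='clients',
                                 max_depth=2)
```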
Featuretools has built many new features for us to use. While this process does automatically create new features, it will not replace the data scientist because we still have to figure out what to do with all these features. For example, if our goal is to predict whether or not a client will repay a loan, we could look for the features most correlated with a specified outcome. Moreover, if we have domain knowledge, we can use that to choose specific feature primitives or seed deep feature synthesis with candidate features.
Automated feature engineering has solved one problem, but created another: too many features. Although it’s difficult to say before fitting a model which of these features will be important, it’s likely not all of them will be relevant to a task we want to train our model on. Moreover, having too many features can lead to poor model performance because the less useful features drown out those that are more important.
The problem of too many features is known as the curse of dimensionality. As the number of features increases (the dimension of the data grows) it becomes more and more difficult for a model to learn the mapping between features and targets. In fact, the amount of data needed for the model to perform well scales exponentially with the number of features.
The curse of dimensionality is combated with feature reduction (also known as feature selection): the process of removing irrelevant features. This can take on many forms: Principal Component Analysis (PCA), SelectKBest, using feature importances from a model, or auto-encoding using deep neural networks. However, feature reduction is a different topic for another article. For now, we know that we can use featuretools to create numerous features from many tables with minimal effort!
Like many topics in machine learning, automated feature engineering with featuretools is a complicated concept built on simple ideas. Using concepts of entitysets, entities, and relationships, featuretools can perform deep feature synthesis to create new features. Deep feature synthesis in turn stacks feature primitives — aggregations, which act across a one-to-many relationship between tables, and transformations, functions applied to one or more columns in a single table — to build new features from multiple tables.
In future articles, I’ll show how to use this technique on a real world problem, the Home Credit Default Risk competition currently being hosted on Kaggle. Stay tuned for that post, and in the meantime, read this introduction to get started in the competition! I hope that you can now use automated feature engineering as an aid in a data science pipeline. Our models are only as good as the data we give them, and automated feature engineering can help to make the feature creation process more efficient.
For more information on featuretools, including advanced usage, check out the online documentation. To see how featuretools is used in practice, read about the work of Feature Labs, the company behind the open-source library.
As always, I welcome feedback and constructive criticism and can be reached on Twitter @koehrsen_will.
Data Scientist and Master Student, Data Science Communicator and Advocate
Sharing concepts, ideas, and codes.
|
Gant Laborde | 1.3K | 7 | https://medium.freecodecamp.org/machine-learning-how-to-go-from-zero-to-hero-40e26f8aa6da?source=---------3---------------- | Machine Learning: how to go from Zero to Hero – freeCodeCamp | If your understanding of A.I. and Machine Learning is a big question mark, then this is the blog post for you. Here, I gradually increase your Awesomenessicity™ by gluing inspirational videos together with friendly text.
Sit down and relax. These videos take time, and if they don’t inspire you to continue to the next section, fair enough.
However, if you find yourself at the bottom of this article, you’ve earned your well-rounded knowledge and passion for this new world. Where you go from there is up to you.
A.I. was always cool, from moving a paddle in Pong to lighting you up with combos in Street Fighter.
A.I. has always revolved around a programmer’s functional guess at how something should behave. Fun, but programmers aren’t always gifted in programming A.I. as we often see. Just Google “epic game fails” to see glitches in A.I., physics, and sometimes even experienced human players.
Regardless, A.I. has a new talent. You can teach a computer to play video games, understand language, and even how to identify people or things. This tip-of-the-iceberg new skill comes from an old concept that only recently got the processing power to exist outside of theory.
I’m talking about Machine Learning.
You don’t need to come up with advanced algorithms anymore. You just have to teach a computer to come up with its own advanced algorithm.
So how does something like that even work? An algorithm isn’t really written as much as it is sort of... bred. I’m not using breeding as an analogy. Watch this short video, which gives excellent commentary and animations to the high-level concept of creating the A.I.
Wow! Right? That’s a crazy process!
Now how is it that we can’t even understand the algorithm when it’s done? One great visual was when the A.I. was written to beat Mario games. As a human, we all understand how to play a side-scroller, but identifying the predictive strategy of the resulting A.I. is insane.
Impressed? There’s something amazing about this idea, right? The only problem is we don’t know Machine Learning, and we don’t know how to hook it up to video games.
Fortunately for you, Elon Musk already provided a non-profit company to do the latter. Yes, in a dozen lines of code you can hook up any A.I. you want to countless games/tasks!
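The non-profit in question is presumably OpenAI, and its Gym toolkit is the usual way to do this. A minimal sketch using the classic Gym API (a random agent on CartPole; note that newer Gym versions changed the reset/step signatures):

```python
import gym

env = gym.make('CartPole-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # a random "agent" -- swap in your own policy here
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
env.close()
```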
I have two good answers on why you should care. Firstly, Machine Learning (ML) is making computers do things that we’ve never made computers do before. If you want to do something new, not just new to you, but to the world, you can do it with ML.
Secondly, if you don’t influence the world, the world will influence you.
Right now significant companies are investing in ML, and we’re already seeing it change the world. Thought-leaders are warning that we can’t let this new age of algorithms exist outside of the public eye. Imagine if a few corporate monoliths controlled the Internet. If we don’t take up arms, the science won’t be ours. I think Christian Heilmann said it best in his talk on ML.
The concept is useful and cool. We understand it at a high level, but what the heck is actually happening? How does this work?
If you want to jump straight in, I suggest you skip this section and move on to the next “How Do I Get Started” section. If you’re motivated to be a DOer in ML, you won’t need these videos.
If you’re still trying to grasp how this could even be a thing, the following video is perfect for walking you through the logic, using the classic ML problem of handwriting.
Pretty cool huh? That video shows that each layer gets simpler rather than more complicated. It’s like the function is chewing data into smaller pieces that end in an abstract concept. You can get your hands dirty in interacting with this process on this site (by Adam Harley).
It’s cool watching data go through a trained model, but you can even watch your neural network get trained.
One of the classic real-world examples of Machine Learning in action is the iris data set from 1936. In a presentation I attended by JavaFXpert’s overview on Machine Learning, I learned how you can use his tool to visualize the adjustment and back propagation of weights to neurons on a neural network. You get to watch it train the neural model!
Even if you’re not a Java buff, the presentation Jim gives on all things Machine Learning is a pretty cool 1.5+ hour introduction into ML concepts, which includes more info on many of the examples above.
These concepts are exciting! Are you ready to be the Einstein of this new era? Breakthroughs are happening every day, so get started now.
There are tons of resources available. I’ll be recommending two approaches.
In this approach, you’ll understand Machine Learning down to the algorithms and the math. I know this way sounds tough, but how cool would it be to really get into the details and code this stuff from scratch!
If you want to be a force in ML, and hold your own in deep conversations, then this is the route for you.
I recommend that you try out Brilliant.org’s app (always great for any science lover) and take the Artificial Neural Network course. This course has no time limits and helps you learn ML while killing time in line on your phone.
This one costs money after Level 1.
Combine the above with simultaneous enrollment in Andrew Ng’s Stanford course on “Machine Learning in 11 weeks”. This is the course that Jim Weaver recommended in his video above. I’ve also had this course independently suggested to me by Jen Looper.
Everyone provides a caveat that this course is tough. For some of you that’s a show stopper, but for others, that’s why you’re going to put yourself through it and collect a certificate saying you did.
This course is 100% free. You only have to pay for a certificate if you want one.
With those two courses, you’ll have a LOT of work to do. Everyone should be impressed if you make it through because that’s not simple.
But more so, if you do make it through, you’ll have a deep understanding of the implementation of Machine Learning that will catapult you into successfully applying it in new and world-changing ways.
If you’re not interested in writing the algorithms, but you want to use them to create the next breathtaking website/app, you should jump into TensorFlow and the crash course.
TensorFlow is the de facto open-source software library for machine learning. It can be used in countless ways and even with JavaScript. Here’s a crash course.
Plenty more information on available courses and rankings can be found here.
If taking a course is not your style, you’re still in luck. You don’t have to learn the nitty-gritty of ML in order to use it today. You can efficiently utilize ML as a service in many ways with tech giants who have trained models ready.
I would still caution you that there’s no guarantee that your data is safe or even yours, but the offerings of services for ML are quite attractive!
Using an ML service might be the best solution for you if you’re excited and able to upload your data to Amazon/Microsoft/Google. I like to think of these services as a gateway drug to advanced ML. Either way, it’s good to get started now.
I have to say thank you to all the aforementioned people and videos. They were my inspiration to get started, and though I’m still a newb in the ML world, I’m happy to light the path for others as we embrace this awe-inspiring age we find ourselves in.
It’s imperative to reach out and connect with people if you take up learning this craft. Without friendly faces, answers, and sounding boards, anything can be hard. Just being able to ask and get a response is a game changer. Add me, and add the people mentioned above. Friendly people with friendly advice helps!
See?
I hope this article has inspired you and those around you to learn ML!
Software Consultant, Adjunct Professor, Published Author, Award Winning Speaker, Mentor, Organizer and Immature Nerd :D — Lately full of React Native Tech
Our community publishes stories worth reading on development, design, and data science.
|
Emmanuel Ameisen | 935 | 11 | https://blog.insightdatascience.com/reinforcement-learning-from-scratch-819b65f074d8?source=---------4---------------- | Reinforcement Learning from scratch – Insight Data | Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program.
Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch.
Recently, I gave a talk at the O’Reilly AI conference in Beijing about some of the interesting lessons we’ve learned in the world of NLP. While there, I was lucky enough to attend a tutorial on Deep Reinforcement Learning (Deep RL) from scratch by Unity Technologies. I thought that the session, led by Arthur Juliani, was extremely informative and wanted to share some big takeaways below.
In our conversations with companies, we’ve seen a rise of interesting Deep RL applications, tools and results. In parallel, the inner workings and applications of Deep RL, such as AlphaGo pictured above, can often seem esoteric and hard to understand. In this post, I will give an overview of core aspects of the field that can be understood by anyone.
Many of the visuals are from the slides of the talk, and some are new. The explanations and opinions are mine. If anything is unclear, reach out to me here!
Deep RL is a field that has seen vast amounts of research interest, including learning to play Atari games, beating pro players at Dota 2, and defeating Go champions. Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?). In dialog systems for example, classical Deep Learning aims to learn the right response for a given query. On the other hand, Deep Reinforcement Learning focuses on the right sequences of sentences that will lead to a positive outcome, for example a happy customer.
This makes Deep RL particularly attractive for tasks that require planning and adaptation, such as manufacturing or self-driving. However, industry applications have trailed behind the rapidly advancing results coming out of the research community. A major reason is that Deep RL often requires an agent to experiment millions of times before learning anything useful. The best way to do this rapidly is by using a simulation environment. This tutorial will be using Unity to create environments to train agents in.
For this workshop led by Arthur Juliani and Leon Chen, their goal was to get every participant to successfully train multiple Deep RL algorithms in 4 hours. A tall order! Below is a comprehensive overview of many of the main algorithms that power Deep RL today. For a more complete set of tutorials, Arthur Juliani wrote an 8-part series starting here.
Deep RL can be used to best the top human players at Go, but to understand how that’s done, you first need to understand a few simple concepts, starting with much easier problems.
1/It all starts with slot machines
Let’s imagine you are faced with 4 chests that you can pick from at each turn. Each of them has a different average payout, and your goal is to maximize the total payout you receive after a fixed number of turns. This is a classic problem called Multi-armed bandits and is where we will start. The crux of the problem is to balance exploration, which helps us learn about which states are good, and exploitation, where we now use what we know to pick the best slot machine.
Here, we will utilize a value function that maps our actions to an estimated reward, called the Q function. First, we’ll initialize all Q values at equal values. Then, we’ll update the Q value of each action (picking each chest) based on how good the payout was after choosing this action. This allows us to learn a good value function. We will approximate our Q function using a neural network (starting with a very shallow one) that learns a probability distribution (by using a softmax) over the 4 potential chests.
While the value function tells us how good we estimate each action to be, the policy is the function that determines which actions we end up taking. Intuitively, we might want to use a policy that picks the action with the highest Q value. This performs poorly in practice, as our Q estimates will be very wrong at the start before we gather enough experience through trial and error. This is why we need to add a mechanism to our policy to encourage exploration. One way to do that is to use epsilon greedy, which consists of taking a random action with probability epsilon. We start with epsilon being close to 1, always choosing random actions, and lower epsilon as we go along and learn more about which chests are good. Eventually, we learn which chests are best.
In practice, we might want to take a more subtle approach than either taking the action we think is the best, or a random action. A popular method is Boltzmann Exploration, which adjusts probabilities based on our current estimate of how good each chest is, adding in a randomness factor.
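A minimal NumPy sketch of the epsilon-greedy version of this bandit (the payout numbers are invented):

```python
import numpy as np

n_arms, n_steps, epsilon, lr = 4, 1000, 0.1, 0.1
true_payouts = [1.0, 0.5, 2.0, 0.8]   # hidden average payout of each chest
q = np.zeros(n_arms)                  # start with equal Q estimates

for step in range(n_steps):
    if np.random.rand() < epsilon:    # explore: pick a random chest
        action = np.random.randint(n_arms)
    else:                             # exploit: pick the current best estimate
        action = int(np.argmax(q))
    reward = np.random.normal(true_payouts[action], 1.0)
    q[action] += lr * (reward - q[action])   # nudge the estimate toward the observed payout

# Boltzmann exploration would instead sample an arm from softmax(q / temperature)
```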
2/Adding different states
The previous example was a world in which we were always in the same state, waiting to pick from the same 4 chests in front of us. Most real-world problems consist of many different states. That is what we will add to our environment next. Now, the background behind the chests alternates between 3 colors at each turn, changing the average values of the chests. This means we need to learn a Q function that depends not only on the action (the chest we pick), but the state (what the color of the background is). This version of the problem is called Contextual Multi-armed Bandits.
Surprisingly, we can use the same approach as before. The only thing we need to add is an extra dense layer to our neural network, that will take in as input a vector representing the current state of the world.
3/Learning about the consequences of our actions
There is another key factor that makes our current problem simpler than most. In most environments, such as in the maze depicted above, the actions that we take have an impact on the state of the world. If we move up on this grid, we might receive a reward or we might receive nothing, but the next turn we will be in a different state. This is where we finally introduce a need for planning.
First, we will define our Q function as the immediate reward in our current state, plus the discounted reward we are expecting by taking all of our future actions. This solution works if our Q estimate of states is accurate, so how can we learn a good estimate?
We will use a method called Temporal Difference (TD) learning to learn a good Q function. The idea is to only look at a limited number of steps in the future. TD(1) for example, only uses the next 2 states to evaluate the reward.
Surprisingly, we can use TD(0), which looks at the current state, and our estimate of the reward the next turn, and get great results. The structure of the network is the same, but we need to go through one forward step before receiving the error. We then use this error to back propagate gradients, like in traditional Deep Learning, and update our value estimates.
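In code, a TD(0)-style update for a tabular Q function looks roughly like this (the state and action counts match the small maze discussed later):

```python
import numpy as np

n_states, n_actions = 49, 4            # e.g. a small grid maze with 4 moves
gamma, learning_rate = 0.99, 0.1
q_table = np.zeros((n_states, n_actions))

def td0_update(state, action, reward, next_state):
    # Target = immediate reward + discounted estimate of the best next action
    td_target = reward + gamma * np.max(q_table[next_state])
    td_error = td_target - q_table[state, action]
    q_table[state, action] += learning_rate * td_error
```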
3+/Introducing Monte Carlo
Another method to estimate the eventual success of our actions is Monte Carlo Estimates. This consists of playing out the entire episode with our current policy until we reach an end (success by reaching a green block or failure by reaching a red block in the image above) and use that result to update our value estimates for each traversed state. This allows us to propagate values efficiently in one batch at the end of an episode, instead of every time we make a move. The cost is that we are introducing noise to our estimates, since we attribute very distant rewards to them.
4/The world is rarely discrete
The previous methods were using neural networks to approximate our value estimates by mapping from a discrete number of states and actions to a value. In the maze for example, there were 49 states (squares) and 4 actions (move in each adjacent direction). In this environment, we are trying to learn how to balance a ball on a 2 dimensional paddle, by deciding at each time step whether we want to tilt the paddle left or right. Here, the state space becomes continuous (the angle of the paddle, and the position of the ball). The good news is, we can still use Neural Networks to approximate this function!
A note about off-policy vs on-policy learning: The methods we used previously are off-policy methods, meaning we can generate data with any strategy (using epsilon greedy, for example) and learn from it. On-policy methods can only learn from actions that were taken following our policy (remember, a policy is the method we use to determine which actions to take). This constrains our learning process, as we have to have an exploration strategy that is built in to the policy itself, but allows us to tie results directly to our reasoning, and enables us to learn more efficiently.
The approach we will use here is called Policy Gradients, and is an on-policy method. Previously, we were first learning a value function Q for each action in each state and then building a policy on top. In Vanilla Policy Gradient, we still use Monte Carlo Estimates, but we learn our policy directly through a loss function that increases the probability of choosing rewarding actions. Since we are learning on policy, we cannot use methods such as epsilon greedy (which includes random choices), to get our agent to explore the environment. The way that we encourage exploration is by using a method called entropy regularization, which pushes our probability estimates to be wider, and thus will encourage us to make riskier choices to explore the space.
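To make the loss concrete, here is a sketch of the vanilla policy-gradient objective with entropy regularization, written in plain NumPy just to show the arithmetic (a real implementation would compute it inside an autodiff framework and take gradients of it):

```python
import numpy as np

def pg_loss(action_probs, taken_actions, returns, entropy_beta=0.01):
    # action_probs: (batch, n_actions) policy outputs; taken_actions: (batch,) indices;
    # returns: (batch,) Monte Carlo returns observed after those actions
    chosen = action_probs[np.arange(len(taken_actions)), taken_actions]
    # Increase the (log-)probability of actions that led to high returns
    policy_term = -np.mean(np.log(chosen + 1e-8) * returns)
    # Entropy regularization keeps the distribution wide, encouraging exploration
    entropy = -np.sum(action_probs * np.log(action_probs + 1e-8), axis=1)
    return policy_term - entropy_beta * np.mean(entropy)
```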
4+/Leveraging deep learning for representations
In practice, many state of the art RL methods require learning both a policy and value estimates. The way we do this with deep learning is by having both be two separate outputs of the same backbone neural network, which will make it easier for our neural network to learn good representations.
One method to do this is Advantage Actor Critic (A2C). We learn our policy directly with policy gradients (defined above), and learn a value function using something called Advantage. Instead of updating our value function based on rewards, we update it based on our advantage, which measures how much better or worse an action was than our previous value function estimated it to be. This helps make learning more stable compared to simple Q Learning and Vanilla Policy Gradients.
5/Learning directly from the screen
There is an additional advantage to using Deep Learning for these methods, which is that Deep Neural Networks excel at perceptive tasks. When a human plays a game, the information received is not a list of states, but an image (usually of a screen, or a board, or the surrounding environment).
Image-based Learning combines a Convolutional Neural Network (CNN) with RL. In this environment, we pass in a raw image instead of features, and add a 2 layer CNN to our architecture without changing anything else! We can even inspect activations to see what the network picks up on to determine value, and policy. In the example below, we can see that the network uses the current score and distant obstacles to estimate the value of the current state, while focusing on nearby obstacles for determining actions. Neat!
As a side note, while toying around with the provided implementation, I’ve found that visual learning is very sensitive to hyperparameters. Changing the discount rate slightly for example, completely prevented the neural network from learning even on a toy application. This is a widely known problem, but it is interesting to see it first hand.
6/Nuanced actions
So far, we’ve played with environments with continuous and discrete state spaces. However, every environment we studied had a discrete action space: we could move in one of four directions, or tilt the paddle to the left or right. Ideally, for applications such as self-driving cars, we would like to learn continuous actions, such as turning the steering wheel between 0 and 360 degrees. In this environment called 3D ball world, we can choose to tilt the paddle to any value on each of its axes. This gives us more control as to how we perform actions, but makes the action space much larger.
We can approach this by approximating our potential choices with Gaussian distributions. We learn a probability distribution over potential actions by learning the mean and variance of a Gaussian distribution, and our policy samples from that distribution. Simple, in theory :).
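A sketch of the sampling step, assuming the policy network outputs a mean and a log standard deviation for each action dimension:

```python
import numpy as np

def sample_action(mean, log_std):
    std = np.exp(log_std)
    action = np.random.normal(mean, std)          # e.g. a paddle tilt on one axis
    # log-probability of the sampled action, used by the policy-gradient loss
    log_prob = -0.5 * (((action - mean) / std) ** 2 + 2 * log_std + np.log(2 * np.pi))
    return action, log_prob
```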
7/Next steps for the brave
There are a few concepts that separate the algorithms described above from state of the art approaches. It’s interesting to see that conceptually, the best robotics and game-playing algorithms are not that far away from the ones we just explored:
That’s it for this overview, I hope this has been informative and fun! If you are looking to dive deeper into the theory of RL, give Arthur’s posts a read, or diving deeper by following David Silver’s UCL course. If you are looking to learn more about the projects we do at Insight, or how we work with companies, please check us out below, or reach out to me here.
Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program.
Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch.
AI Lead at Insight AI @EmmanuelAmeisen
Insight Fellows Program - Your bridge to a career in data
|
Irhum Shafkat | 2K | 15 | https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1?source=---------5---------------- | Intuitively Understanding Convolutions for Deep Learning | The advent of powerful and versatile deep learning frameworks in recent years has made implementing convolution layers in a deep learning model an extremely simple task, often achievable in a single line of code.
However, understanding convolutions, especially for the first time, can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet convolutions as a concept are fascinatingly powerful and highly extensible, and in this post, we’ll break down the mechanics of the convolution operation step-by-step, relate it to the standard fully connected network, and explore just how convolutions build up a strong visual hierarchy, making them powerful feature extractors for images.
The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel.
The kernel repeats this process for every location it slides over, converting a 2D matrix of features into yet another 2D matrix of features. The output features are essentially the weighted sums (with the weights being the values of the kernel itself) of the input features located roughly at the same location as the output pixel, but on the input layer.
Whether or not an input feature falls within this “roughly same location” is determined directly by whether it’s in the area of the kernel that produced the output or not. This means the size of the kernel directly determines how many (or few) input features get combined in the production of a new output feature.
This is all in pretty stark contrast to a fully connected layer. In the above example, we have 5×5=25 input features, and 3×3=9 output features. If this were a standard fully connected layer, you’d have a weight matrix of 25×9 = 225 parameters, with every output feature being the weighted sum of every single input feature. Convolutions allow us to do this transformation with only 9 parameters, with each output feature, instead of “looking at” every input feature, only getting to “look” at input features coming from roughly the same location. Do take note of this, as it’ll be critical to our later discussion.
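To make the sliding-and-summing concrete, here is a minimal NumPy sketch of the operation, with a 5×5 input and a 3×3 kernel producing the 3×3 output discussed above (no padding, stride 1; the kernel values are arbitrary):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution (technically cross-correlation): slide, multiply, sum."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # 5x5 = 25 input features
kernel = np.ones((3, 3)) / 9.0                     # 3x3 = 9 shared weights
print(conv2d(image, kernel).shape)                 # (3, 3) -> 9 output features
```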
Before we move on, it’s definitely worth looking into two techniques that are commonplace in convolution layers: Padding and Strides.
Padding does something pretty clever to solve this: pad the edges with extra, “fake” pixels (usually of value 0, hence the oft-used term “zero padding”). This way, the kernel when sliding can allow the original edge pixels to be at its center, while extending into the fake pixels beyond the edge, producing an output the same size as the input.
The idea of the stride is to skip some of the slide locations of the kernel. A stride of 1 means picking slides a pixel apart, i.e. every single slide, acting as a standard convolution. A stride of 2 means picking slides 2 pixels apart, skipping every other slide in the process and downsizing by roughly a factor of 2; a stride of 3 means skipping every 2 slides, downsizing roughly by a factor of 3, and so on.
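The effect of padding and stride on the output size can be captured in the standard size formula; a small sketch:

```python
def conv_output_size(n_in, kernel, padding=0, stride=1):
    # out = floor((in + 2*pad - kernel) / stride) + 1
    return (n_in + 2 * padding - kernel) // stride + 1

print(conv_output_size(5, 3))                       # 3: the 5x5 -> 3x3 case above
print(conv_output_size(5, 3, padding=1))            # 5: "same" padding keeps the size
print(conv_output_size(5, 3, padding=1, stride=2))  # 3: stride 2 roughly halves it
```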
More modern networks, such as the ResNet architectures, entirely forgo pooling layers in their internal layers, in favor of strided convolutions when they need to reduce their output sizes.
Of course, the diagrams above only deal with the case where the image has a single input channel. In practice, most input images have 3 channels, and that number only increases the deeper you go into a network. It’s pretty easy to think of channels, in general, as being a “view” of the image as a whole, emphasising some aspects and de-emphasising others.
So this is where a key distinction between terms comes in handy: whereas in the 1 channel case, where the term filter and kernel are interchangeable, in the general case, they’re actually pretty different. Each filter actually happens to be a collection of kernels, with there being one kernel for every single input channel to the layer, and each kernel being unique.
Each filter in a convolution layer produces one and only one output channel, and they do it like so:
Each of the kernels of the filter “slides” over their respective input channels, producing a processed version of each. Some kernels may have stronger weights than others, to give more emphasis to certain input channels than others (eg. a filter may have a red kernel channel with stronger weights than others, and hence, respond more to differences in the red channel features than the others).
Each of the per-channel processed versions is then summed together to form one channel. The kernels of a filter each produce one processed version of their channel, and the filter as a whole produces one overall output channel.
Finally, there’s the bias term. Each filter has one bias term, which gets added to the output channel produced so far to yield the final output channel.
And with the single filter case down, the case for any number of filters is identical: Each filter processes the input with its own, different set of kernels and a scalar bias with the process described above, producing a single output channel. They are then concatenated together to produce the overall output, with the number of output channels being the number of filters. A nonlinearity is then usually applied before passing this as input to another convolution layer, which then repeats this process.
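One quick way to keep the bookkeeping of filters, kernels and biases straight is to check the shapes, assuming PyTorch (the channel counts here are arbitrary):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)   # a batch of one 3-channel 32x32 image

print(conv.weight.shape)  # [16, 3, 3, 3]: 16 filters, each a collection of 3 kernels of 3x3
print(conv.bias.shape)    # [16]: one bias per filter, i.e. per output channel
print(conv(x).shape)      # [1, 16, 32, 32]: one output channel per filter
```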
Even with the mechanics of the convolution layer down, it can still be hard to relate it back to a standard feed-forward network, and it still doesn’t explain why convolutions scale to, and work so much better for image data.
Suppose we have a 4×4 input, and we want to transform it into a 2×2 grid. If we were using a feedforward network, we’d reshape the 4×4 input into a vector of length 16, and pass it through a densely connected layer with 16 inputs and 4 outputs. One could visualize the weight matrix W for a layer:
And although the convolution kernel operation may seem a bit strange at first, it is still a linear transformation with an equivalent transformation matrix. If we were to use a kernel K of size 3 on the reshaped 4×4 input to get a 2×2 output, the equivalent transformation matrix would be:
(Note: while the above matrix is an equivalent transformation matrix, the actual operation is usually implemented as a very different matrix multiplication[2])
The convolution then, as a whole, is still a linear transformation, but at the same time it’s also a dramatically different kind of transformation. For a matrix with 64 elements, there are just 9 parameters, which themselves are reused several times. Each output node only gets to see a select number of inputs (the ones inside the kernel). There is no interaction with any of the other inputs, as the weights to them are set to 0.
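One way to convince yourself of this is to build that equivalent transformation matrix explicitly for the 4×4-input, 3×3-kernel case; a sketch (real implementations never materialise this matrix):

```python
import numpy as np

def conv_as_matrix(kernel, in_size=4):
    """Build the (out*out, in*in) matrix equivalent to a valid 2D convolution."""
    k = kernel.shape[0]
    out_size = in_size - k + 1
    W = np.zeros((out_size * out_size, in_size * in_size))
    for i in range(out_size):
        for j in range(out_size):
            patch = np.zeros((in_size, in_size))
            patch[i:i + k, j:j + k] = kernel     # place the 9 shared weights
            W[i * out_size + j] = patch.ravel()
    return W

K = np.arange(1, 10).reshape(3, 3)
W = conv_as_matrix(K)          # shape (4, 16): 64 entries in total
print(np.count_nonzero(W))     # 36 non-zero entries, all drawn from just 9 parameters
```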
It’s useful to see the convolution operation as a hard prior on the weight matrix. In this context, by prior, I mean predefined network parameters. For example, when you use a pretrained model for image classification, you use the pretrained network parameters as your prior, as a feature extractor to your final densely connected layer.
In that sense, there’s a direct intuition for why both are so efficient (compared to their alternatives). Transfer learning is efficient by orders of magnitude compared to random initialization, because you only really need to optimize the parameters of the final fully connected layer, which means you can have fantastic performance with only a few dozen images per class.
Here, you don’t need to optimize all 64 parameters, because we set most of them to zero (and they’ll stay that way), and the rest we convert to shared parameters, resulting in only 9 actual parameters to optimize. This efficiency matters, because when you move from the 784 inputs of MNIST to real-world 224×224×3 images, that’s over 150,000 inputs. A dense layer attempting to halve the input to 75,000 outputs would still require over 10 billion parameters. For comparison, the entirety of ResNet-50 has some 25 million parameters.
So fixing some parameters to 0, and tying parameters increases efficiency, but unlike the transfer learning case, where we know the prior is good because it works on a large general set of images, how do we know this is any good?
The answer lies in the feature combinations the prior leads the parameters to learn.
Early on in this article, we discussed that:
So with backpropagation coming in all the way from the classification nodes of the network, the kernels have the interesting task of learning weights to produce features only from a set of local inputs. Additionally, because the kernel itself is applied across the entire image, the features the kernel learns must be general enough to come from any part of the image.
If this were any other kind of data, e.g. categorical data of app installs, this would’ve been a disaster: just because your number-of-app-installs and app-type columns are next to each other doesn’t mean they have any “local, shared features” in common with app install dates and time used. Sure, the four may have an underlying higher-level feature (e.g. which apps people want most) that can be found, but that gives us no reason to believe the parameters for the first two are exactly the same as the parameters for the latter two. The four could’ve been in any (consistent) order and still be valid!
Pixels, however, always appear in a consistent order, and nearby pixels influence a pixel: e.g. if all nearby pixels are red, it’s pretty likely the pixel is also red. If there are deviations, that’s an interesting anomaly that could be converted into a feature, and all this can be detected from comparing a pixel with its neighbors, with other pixels in its locality.
And this idea is really what a lot of earlier computer vision feature extraction methods were based around. For instance, for edge detection, one can use a Sobel edge detection filter, a kernel with fixed parameters, operating just like the standard one-channel convolution:
For a grid that contains no edge (e.g. the background sky), most of the pixels have the same value, so the overall output of the kernel at that point is 0. For a grid with a vertical edge, there is a difference between the pixels to the left and right of the edge, and the kernel computes that difference to be non-zero, activating and revealing the edge. The kernel works on only a 3×3 grid at a time, detecting anomalies on a local scale, yet when applied across the entire image, it is enough to detect a certain feature on a global scale, anywhere in the image!
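A small sketch of that, reusing the naive conv2d function from the earlier sketch with a vertical-edge Sobel kernel and a toy image that is dark on the left and bright on the right:

```python
import numpy as np

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark on the left, bright on the right -> a single vertical edge
image = np.zeros((5, 5))
image[:, 3:] = 1.0

edges = conv2d(image, sobel_x)  # conv2d as defined in the sketch above
print(edges)                    # zero where the patch is flat, non-zero where it spans the edge
```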
So the key difference we make with deep learning is to ask this question: can useful kernels be learnt? For early layers operating on raw pixels, we could reasonably expect feature detectors of fairly low-level features, like edges, lines, etc.
There’s an entire branch of deep learning research focused on making neural network models interpretable. One of the most powerful tools to come out of that is Feature Visualization using optimization[3]. The core idea is simple: optimize an image (usually initialized with random noise) to activate a filter as strongly as possible. This does make intuitive sense: if the optimized image is completely filled with edges, that’s strong evidence that’s what the filter itself is looking for and is activated by. Using this, we can peek into the learnt filters, and the results are stunning:
One important thing to notice here is that convolved images are still images. The output of a small grid of pixels from the top left of an image will still be at the top left. So you can run one convolution layer on top of another (such as the two on the left) to extract deeper features, which we visualize.
Yet, however deep our feature detectors get, without any further changes they’ll still be operating on very small patches of the image. No matter how deep your detectors are, you can’t detect faces from a 3×3 grid. And this is where the idea of the receptive field comes in.
An essential design choice of any CNN architecture is that the input sizes grow smaller and smaller from the start to the end of the network, while the number of channels grows deeper. This, as mentioned earlier, is often done through strides or pooling layers. Locality determines what inputs from the previous layer the outputs get to see. The receptive field determines what area of the original input to the entire network the output gets to see.
The idea of a strided convolution is that we only process slides a fixed distance apart, and skip the ones in the middle. From a different point of view, we only keep outputs a fixed distance apart, and remove the rest[1].
We then apply a nonlinearity to the output and, per usual, stack another new convolution layer on top. And this is where things get interesting. Even if we were to apply a kernel of the same size (3×3), having the same local area, to the output of the strided convolution, the kernel would have a larger effective receptive field:
This is because the output of the strided layer still represents the same image. It is not so much cropping as it is resizing; the only thing is that each single pixel in the output is a “representative” of a larger area (whose other pixels were discarded) from the same rough location in the original input. So when the next layer’s kernel operates on the output, it’s operating on pixels collected from a larger area.
(Note: if you’re familiar with dilated convolutions, note that the above is not a dilated convolution. Both are methods of increasing the receptive field, but dilated convolutions are a single layer, while this takes place on a regular convolution following a strided convolution, with a nonlinearity inbetween)
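The growth of the receptive field across layers follows a simple recurrence; a sketch, with illustrative layer configurations:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride). Returns the receptive field on the original input."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump   # each new layer sees `kernel` outputs spaced `jump` apart
        jump *= stride
    return rf

print(receptive_field([(3, 1)]))          # 3: a single 3x3 convolution
print(receptive_field([(3, 2), (3, 1)]))  # 7: a 3x3 kernel after a strided 3x3 sees a 7x7 area
```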
This expansion of the receptive field allows the convolution layers to combine the low level features (lines, edges), into higher level features (curves, textures), as we see in the mixed3a layer.
Followed by a pooling/strided layer, the network continues to create detectors for even higher level features (parts, patterns), as we see for mixed4a.
The repeated reduction in image size across the network results in, by the 5th block of convolutions, input sizes of just 7×7, compared to inputs of 224×224. At this point, each single pixel represents a grid of 32×32 pixels, which is huge.
Compared to earlier layers, where an activation meant detecting an edge, here, an activation on the tiny 7×7 grid is one for a very high level feature, such as for birds.
The network as a whole progresses from a small number of filters (64 in the case of GoogLeNet), detecting low-level features, to a very large number of filters (1024 in the final convolution), each looking for an extremely specific high-level feature. This is followed by a final pooling layer, which collapses each 7×7 grid into a single pixel, so that each channel becomes a feature detector with a receptive field equivalent to the entire image.
Compared to what a standard feedforward network would have done, the output here is really nothing short of awe-inspiring. A standard feedforward network would have produced abstract feature vectors, from combinations of every single pixel in the image, requiring intractable amounts of data to train.
The CNN, with the priors imposed on it, starts by learning very low-level feature detectors, and as its receptive field expands across the layers, it learns to combine those low-level features into progressively higher-level features; not an abstract combination of every single pixel, but rather a strong visual hierarchy of concepts.
By detecting low-level features, and using them to detect higher-level features as it progresses up its visual hierarchy, it is eventually able to detect entire visual concepts such as faces, birds, and trees, and that’s what makes CNNs so powerful, yet so efficient, with image data.
With the visual hierarchy CNNs build, it is pretty reasonable to assume that their vision systems are similar to those of humans. And they’re really great with real-world images, but they also fail in ways that strongly suggest their vision systems aren’t entirely human-like. The most major problem: Adversarial Examples[4], examples which have been specifically modified to fool the model.
Adversarial examples would be a non-issue if the only tampered ones that caused the models to fail were ones that even humans would notice. The problem is, the models are susceptible to attacks by samples which have only been tampered with ever so slightly, and would clearly not fool any human. This opens the door for models to silently fail, which can be pretty dangerous for a wide range of applications from self-driving cars to healthcare.
Robustness against adversarial attacks is currently a highly active area of research, the subject of many papers and even competitions, and solutions will certainly improve CNN architectures to become safer and more reliable.
CNNs were the models that allowed computer vision to scale from simple applications to powering sophisticated products and services, ranging from face detection in your photo gallery to making better medical diagnoses. They might be the key method in computer vision going forward, or some other new breakthrough might just be around the corner. Regardless, one thing is for sure: they’re nothing short of amazing, at the heart of many present-day innovative applications, and are most certainly worth deeply understanding.
Hope you enjoyed this article! If you’d like to stay connected, you’ll find me on Twitter here. If you have a question, comments are welcome! — I find them to be useful to my own learning process as well.
Curious programmer, tinkers around in Python and deep learning.
Sharing concepts, ideas, and codes.
|
Sam Drozdov | 2.3K | 6 | https://uxdesign.cc/an-intro-to-machine-learning-for-designers-5c74ba100257?source=---------6---------------- | An intro to Machine Learning for designers – UX Collective | There is an ongoing debate about whether or not designers should write code. Wherever you fall on this issue, most people would agree that designers should know about code. This helps designers understand constraints and empathize with developers. It also allows designers to think outside of the pixel perfect box when problem solving. For the same reasons, designers should know about machine learning.
Put simply, machine learning is a “field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Even though Arthur Samuel coined the term over fifty years ago, only recently have we seen the most exciting applications of machine learning — digital assistants, autonomous driving, and spam-free email all exist thanks to machine learning.
Over the past decade, new algorithms, better hardware, and more data have made machine learning an order of magnitude more effective. Only in the past few years have companies like Google, Amazon, and Apple made some of their powerful machine learning tools available to developers. Now is the best time to learn about machine learning and apply it to the products you are building.
Since machine learning is now more accessible than ever before, designers today have the opportunity to think about how machine learning can be applied to improve their products. Designers should be able to talk with software developers about what is possible, how to prepare, and what outcomes to expect. Below are a few example applications that should serve as inspiration for these conversations.
Machine learning can help create user-centric products by personalizing experiences to the individuals who use them. This allows us to improve things like recommendations, search results, notifications, and ads.
Machine learning is effective at finding abnormal content. Credit card companies use this to detect fraud, email providers use this to detect spam, and social media companies use this to detect things like hate speech.
Machine learning has enabled computers to begin to understand the things we say (natural-language processing) and the things we see (computer vision). This allows Siri to understand “Siri, set a reminder...”, Google Photos to create albums of your dog, and Facebook to describe a photo to those visually impaired.
Machine learning is also helpful in understanding how users are grouped. This insight can then be used to look at analytics on a group-by-group basis. From here, different features can be evaluated across groups or be rolled out to only a particular group of users.
Machine learning allows us to make predictions about how a user might behave next. Knowing this, we can help prepare for a user’s next action. For example, if we can predict what content a user is planning on viewing, we can preload that content so it’s immediately ready when they want it.
Depending on the application and what data is available, there are different types of machine learning algorithms to choose from. I’ll briefly cover each of the following.
Supervised learning allows us to make predictions using correctly labeled data. Labeled data is a group of examples that has informative tags or outputs. For example, photos with associated hashtags, or a house’s features (e.g. number of bedrooms, location) and its price.
By using supervised learning we can fit a line to the labeled data that either splits the data into categories or represents the trend of the data. Using this line we are able to make predictions on new data. For example, we can look at new photos and predict hashtags, or look at a new house’s features and predict its price.
If the output we are trying to predict is a list of tags or values we call it classification. If the output we are trying to predict is a number we call it regression.
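For the curious, here is a tiny sketch of “fitting a line to labeled data” in code, assuming scikit-learn and using made-up house data:

```python
from sklearn.linear_model import LinearRegression

# Labeled data: each house is (number of bedrooms, size in square meters) -> price
features = [[1, 40], [2, 60], [3, 80], [4, 120]]
prices = [100_000, 150_000, 200_000, 300_000]

model = LinearRegression().fit(features, prices)  # regression: the output is a number
print(model.predict([[3, 90]]))                   # predicted price for a new, unseen house
```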
Unsupervised learning is helpful when we have unlabeled data or we are not exactly sure what outputs (like an image’s hashtags or a house’s price) are meaningful. Instead we can identify patterns among unlabeled data. For example, we can identify related items on an e-commerce website or recommend items to someone based on others who made similar purchases.
If the pattern is a group we call it a cluster. If the pattern is a rule (e.g. if this, then that) we call it an association.
Reinforcement learning doesn’t use an existing data set. Instead we create an agent to collect its own data through trial-and-error in an environment where it is reinforced with a reward. For example, an agent can learn to play Mario by receiving a positive reward for collecting coins and a negative reward for walking into a Goomba.
Reinforcement learning is inspired by the way that humans learn and has turned out to be an effective way to teach computers. Specifically, reinforcement has been effective at training computers to play games like Go and Dota.
Understanding the problem you are trying to solve and the available data will constrain the types of machine learning you can use (e.g. identifying objects in an image with supervised learning requires a labeled data set of images). However, constraints often breed creativity. In some cases, you can set out to collect data that is not already available or consider other approaches.
Even though machine learning is a science, it comes with a margin of error. It is important to consider how a user’s experience might be impacted by this margin of error. For example, when an autonomous car fails to recognize its surroundings people can get hurt.
Even though machine learning has never been as accessible as it is today, it still requires additional resources (developers and time) to be integrated into a product. This makes it important to think about whether the resulting impact justifies the amount of resources needed to implement.
We have barely covered the tip of the iceberg, but hopefully at this point you feel more comfortable thinking about how machine learning can be applied to your product. If you are interested in learning more about machine learning, here are some helpful resources:
Thanks for reading. Chat with me on Twitter @samueldrozdov
Digital Product Designer samueldrozdov.com
Curated stories on user experience, usability, and product design. By @fabriciot and @caioab.
|
Conor Dewey | 252 | 10 | https://towardsdatascience.com/the-big-list-of-ds-ml-interview-resources-2db4f651bd63?source=---------7---------------- | The Big List of DS/ML Interview Resources – Towards Data Science | Data science interviews certainly aren’t easy. I know this first hand. I’ve participated in over 50 individual interviews and phone screens while applying for competitive internships over the last calendar year. Through this exciting and somewhat (at times, very) painful process, I’ve accumulated a plethora of useful resources that helped me prepare for and eventually pass data science interviews.
Long story short, I’ve decided to sort through all my bookmarks and notes in order to deliver a comprehensive list of data science resources.
With this list by your side, you should have more than enough effective tools at your disposal next time you’re prepping for a big interview.
It’s worth noting that many of these resources are naturally going to be geared towards entry-level and intern data science positions, as that’s where my expertise lies. Keep that in mind and enjoy!
Here are some of the more general resources covering data science as a whole. Specifically, I highly recommend checking out the first two links regarding 120 Data Science Interview Questions. While the ebook itself is a couple bucks out of pocket, the answers themselves are free on Quora. These were some of my favorite full-coverage questions to practice with right before an interview.
Even Data Scientists cannot escape the dreaded algorithmic coding interview. In my experience, this isn’t the case 100% of the time, but chances are you’ll be asked to work through something similar to an easy or medium question on LeetCode or HackerRank.
As far as language goes, most companies will let you use whatever language you want. Personally, I did almost all of my algorithmic coding in Java even though the positions were targeted at Python and R programmers. If I had to recommend one thing, it’s to break out your wallet and invest in Cracking the Coding Interview. It absolutely lives up to the hype. I plan to continue using it for years to come.
Once the interviewer knows that you can think-through problems and code effectively, chances are that you’ll move onto some more data science specific applications. Depending on the interviewer and the position, you will likely be able to choose between Python and R as your tool of choice. Since I’m partial to Python, my resources below will primarily focus on effectively using Pandas and NumPy for data analysis.
A data science interview typically isn’t complete without checking your knowledge of SQL. This can be done over the phone or through a live coding question, more likely the latter. I’ve found that the difficulty level of these questions can vary a good bit, ranging from being painfully easy to requiring complex joins and obscure functions.
Our good friend statistics is still crucial for Data Scientists, and it’s reflected as such in interviews. Many of my interviews began by seeing if I could explain a common statistics or probability concept in simple and concise terms. As positions get more experienced, I suspect this happens less and less as traditional statistical questions begin to take the more practical form of A/B testing scenarios, covered later in the post.
You’ll notice that I’ve compiled a few more resources here than in other sections. This isn’t a mistake. Machine learning is a complex field that is a virtual guarantee in data science interviews today.
How you’ll be tested on this is not guaranteed, however. It may come up as a conceptual question regarding cross validation or the bias-variance tradeoff, or it may take the form of a take-home assignment with a dataset attached. I’ve seen both several times, so you’ve got to be prepared for anything.
Specifically, check out the Machine Learning Flashcards below, they’re only a couple bucks and were my by far my favorite way to quiz myself on any conceptual ML stuff.
This won’t be covered in every single data science interview, but it’s certainly not uncommon. Most interviews will have at least one section solely dedicated to product thinking, which often lends itself to A/B testing of some sort. Make sure you’re familiar with the concepts and statistical background necessary in order to be prepared when it comes up. If you have time to spare, I took the free online course by Udacity and overall, I was pretty impressed.
Lastly, I wanted to call out all of the posts related to data science jobs and interviewing that I read over and over again to understand, not only how to prepare, but what to expect as well. If you only check out one section here, this is the one to focus on. This is the layer that sits on top of all the technical skills and application. Don’t overlook it.
I hope you find these resources useful during your next interview or job search. I know I did, truthfully I’m just glad that I saved these links somewhere. Lastly, this post is part of an ongoing initiative to ‘open-source’ my experience applying and interviewing at data science positions, so if you enjoyed this content then be sure to follow me for more stuff like this.
If you’re interested in receiving my weekly rundown of interesting articles and resources focused on data science, machine learning, and artificial intelligence, then subscribe to Self Driven Data Science using the form below!
If you enjoyed this post, feel free to hit the clap button and if you’re interested in posts to come, make sure to follow me on Medium at the link below — I’ll be writing and shipping every day this month as part of a 30-Day Challenge.
This article was originally published on conordewey.com
Data Scientist & Writer | www.conordewey.com
Sharing concepts, ideas, and codes.
|
Abhishek Parbhakar | 937 | 6 | https://towardsdatascience.com/must-know-information-theory-concepts-in-deep-learning-ai-e54a5da9769d?source=---------8---------------- | Must know Information Theory concepts in Deep Learning (AI) | Information theory is an important field that has made significant contribution to deep learning and AI, and yet is unknown to many. Information theory can be seen as a sophisticated amalgamation of basic building blocks of deep learning: calculus, probability and statistics. Some examples of concepts in AI that come from Information theory or related fields:
In the early 20th century, scientists and engineers were struggling with the question: “How do we quantify information? Is there an analytical way or a mathematical measure that can tell us about the information content?”. For example, consider the two sentences below:
It is not difficult to tell that the second sentence gives us more information, since it also tells us that Bruno is “big” and “brown” in addition to being a “dog”. How can we quantify the difference between the two sentences? Can we have a mathematical measure that tells us how much more information the second sentence has compared to the first?
Scientists were struggling with these questions. Semantics, domain and form of data only added to the complexity of the problem. Then, mathematician and engineer Claude Shannon came up with the idea of “Entropy” that changed our world forever and marked the beginning of “Digital Information Age”.
Shannon proposed that the “semantic aspects of data are irrelevant”, and nature and meaning of data doesn’t matter when it comes to information content. Instead he quantified information in terms of probability distribution and “uncertainty”. Shannon also introduced the term “bit”, that he humbly credited to his colleague John Tukey. This revolutionary idea not only laid the foundation of Information Theory but also opened new avenues for progress in fields like artificial intelligence.
Below we discuss four popular, widely used and must known Information theoretic concepts in deep learning and data sciences:
Also called Information Entropy or Shannon Entropy.
Entropy gives a measure of uncertainty in an experiment. Let’s consider two experiments:
If we compare the two experiments, in exp 2 it is easier to predict the outcome as compared to exp 1. So, we can say that exp 1 is inherently more uncertain/unpredictable than exp 2. This uncertainty in the experiment is measured using entropy.
Therefore, if there is more inherent uncertainty in the experiment then it has higher entropy. Or, the less predictable the experiment is, the higher the entropy. The probability distribution of the experiment is used to calculate the entropy.
A deterministic experiment, which is completely predictable, say tossing a coin with P(H)=1, has entropy zero. An experiment which is completely random, say rolling fair dice, is least predictable, has maximum uncertainty, and has the highest entropy among such experiments.
Another way to look at entropy is as the average information gained when we observe the outcomes of a random experiment. The information gained from an outcome of an experiment is defined as a function of the probability of occurrence of that outcome. The rarer the outcome, the more information is gained from observing it.
For example, in a deterministic experiment, we always know the outcome, so no new information is gained from observing the outcome, and hence the entropy is zero.
For a discrete random variable X, with possible outcomes (states) x_1,...,x_n, the entropy, in units of bits, is defined as:
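H(X) = − Σ_{i=1 to n} p(x_i) log₂ p(x_i)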
where p(x_i) is the probability of i^th outcome of X.
Cross entropy is used to compare two probability distributions. It tells us how similar two distributions are.
Cross entropy between two probability distributions p and q defined over same set of outcomes is given by:
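H(p, q) = − Σ_x p(x) log₂ q(x)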
Mutual information is a measure of the mutual dependency between two probability distributions or random variables. It tells us how much information about one variable is carried by another variable.
Mutual information captures dependency between random variables and is more generalized than vanilla correlation coefficient, which captures only the linear relationship.
Mutual information of two discrete random variables X and Y is defined as:
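I(X; Y) = Σ_y Σ_x p(x, y) log( p(x, y) / (p(x) p(y)) )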
where p(x,y) is the joint probability distribution of X and Y, and p(x) and p(y) are the marginal probability distribution of X and Y respectively.
Also called Relative Entropy.
KL divergence is another measure to find similarities between two probability distributions. It measures how much one distribution diverges from the other.
Suppose we have some data and the true distribution underlying it is ‘P’. But we don’t know this ‘P’, so we choose a new distribution ‘Q’ to approximate this data. Since ‘Q’ is just an approximation, it won’t be able to approximate the data as well as ‘P’, and some information loss will occur. This information loss is given by KL divergence.
KL divergence between ‘P’ and ‘Q’ tells us how much information we lose when we try to approximate data given by ‘P’ with ‘Q’.
KL divergence of a probability distribution Q from another probability distribution P is defined as:
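D_KL(P || Q) = Σ_x p(x) log( p(x) / q(x) )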
KL divergence is commonly used in unsupervised machine learning technique Variational Autoencoders.
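As a tiny sketch of these definitions in code (NumPy, with made-up distributions over four outcomes):

```python
import numpy as np

p = np.array([0.4, 0.3, 0.2, 0.1])      # the "true" distribution P
q = np.array([0.25, 0.25, 0.25, 0.25])  # our approximation Q

entropy = -np.sum(p * np.log2(p))           # H(P), in bits
cross_entropy = -np.sum(p * np.log2(q))     # H(P, Q)
kl_divergence = np.sum(p * np.log2(p / q))  # D_KL(P || Q) = H(P, Q) - H(P)

print(entropy, cross_entropy, kl_divergence)
```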
Information Theory was originally formulated by mathematician and electrical engineer Claude Shannon in his seminal paper “A Mathematical Theory of Communication” in 1948.
Note: Terms experiments, random variable & AI, machine learning, deep learning, data science have been used loosely above but have technically different meanings.
In case you liked the article, do follow me Abhishek Parbhakar for more articles related to AI, philosophy and economics.
Finding equilibria among AI, philosophy, and economics.
Sharing concepts, ideas, and codes.
|
Aman Dalmia | 2.3K | 17 | https://blog.usejournal.com/what-i-learned-from-interviewing-at-multiple-ai-companies-and-start-ups-a9620415e4cc?source=---------9---------------- | What I learned from interviewing at multiple AI companies and start-ups | Over the past 8 months, I’ve been interviewing at various companies like Google’s DeepMind, Wadhwani Institute of AI, Microsoft, Ola, Fractal Analytics, and a few others primarily for the roles — Data Scientist, Software Engineer & Research Engineer. In the process, not only did I get an opportunity to interact with many great minds, but also had a peek at myself along with a sense of what people really look for when interviewing someone. I believe that if I’d had this knowledge before, I could have avoided many mistakes and have prepared in a much better manner, which is what the motivation behind this post is, to be able to help someone bag their dream place of work.
This post arose from a discussion with one of my juniors on the lack of really fulfilling job opportunities offered through campus placements for people working in AI. Also, when I was preparing, I noticed people using a lot of resources but as per my experience over the past months, I realised that one can do away with a few minimal ones for most roles in AI, all of which I’m going to mention at the end of the post. I begin with How to get noticed a.k.a. the interview. Then I provide a List of companies and start-ups to apply, which is followed by How to ace that interview. Based on whatever experience I’ve had, I add a section on What we should strive to work for. I conclude with Minimal Resources you need for preparation.
NOTE: For people who are sitting for campus placements, there are two things I’d like to add. Firstly, most of what I’m going to say (except for the last one maybe) is not going to be relevant to you for placements. But, and this is my second point, as I mentioned before, opportunities on campus are mostly in software engineering roles having no intersection with AI. So, this post is specifically meant for people who want to work on solving interesting problems using AI. Also, I want to add that I haven’t cleared all of these interviews but I guess that’s the essence of failure — it’s the greatest teacher! The things that I mention here may not all be useful but these are things that I did and there’s no way for me to know what might have ended up making my case stronger.
To be honest, this step is the most important one. What makes off-campus placements so tough and exhausting is getting the recruiter to actually go through your profile among the plethora of applications that they get. Having a contact inside the organisation place a referral for you would make it quite easy, but, in general, this part can be sub-divided into three keys steps:
a) Do the regulatory preparation and do that well: So, with regulatory preparation, I mean —a LinkedIn profile, a Github profile, a portfolio website and a well-polished CV. Firstly, your CV should be really neat and concise. Follow this guide by Udacity for cleaning up your CV — Resume Revamp. It has everything that I intend to say and I’ve been using it as a reference guide myself. As for the CV template, some of the in-built formats on Overleaf are quite nice. I personally use deedy-resume. Here’s a preview:
As it can be seen, a lot of content can be fit into one page. However, if you really do need more than that, then the format linked above would not work directly. Instead, you can find a modified multi-page format of the same here. The next most important thing to mention is your Github profile. A lot of people underestimate the potential of this, just because unlike LinkedIn, it doesn’t have a “Who Viewed Your Profile” option. People DO go through your Github because that’s the only way they have to validate what you have mentioned in your CV, given that there’s a lot of noise today with people associating all kinds of buzzwords with their profile. Especially for data science, open-source has a big role to play too with majority of the tools, implementations of various algorithms, lists of learning resources, all being open-sourced. I discuss the benefits of getting involved in Open-Source and how one can start from scratch in an earlier post here. The bare minimum for now should be:
• Create a Github account if you don’t already have one.
• Create a repository for each of the projects that you have done.
• Add documentation with clear instructions on how to run the code.
• Add documentation for each file mentioning the role of each function, the meaning of each parameter, proper formatting (e.g. PEP8 for Python) along with a script to automate the previous step (Optional).
Moving on, the third step is what most people lack, which is having a portfolio website demonstrating their experience and personal projects. Making a portfolio indicates that you are really serious about getting into the field and adds a lot of points to the authenticity factor. Also, you generally have space constraints on your CV and tend to miss out on a lot of details. You can use your portfolio to really delve deep into the details if you want to and it’s highly recommended to include some sort of visualisation or demonstration of the project/idea. It’s really easy to create one too as there are a lot of free platforms with drag-and-drop features making the process really painless. I personally use Weebly which is a widely used tool. It’s better to have a reference to begin with. There are a lot of awesome ones out there but I referred to Deshraj Yadav’s personal website to begin with making mine:
Finally, a lot of recruiters and start-ups have nowadays started using LinkedIn as their go-to platform for hiring. A lot of good jobs get posted there. Apart from recruiters, the people working at influential positions are quite active there as well. So, if you can grab their attention, you have a good chance of getting in too. Apart from that, maintaining a clean profile is necessary for people to have the will to connect with you. An important part of LinkedIn is their search tool and for you to show up, you must have the relevant keywords interspersed over your profile. It took me a lot of iterations and re-evaluations to finally have a decent one. Also, you should definitely ask people with or under whom you’ve worked with to endorse you for your skills and add a recommendation talking about their experience of working with you. All of this increases your chance of actually getting noticed. I’ll again point towards Udacity’s guide for LinkedIn and Github profiles.
All this might seem like a lot, but remember that you don’t need to do it in a single day or even a week or a month. It’s a process, it never ends. Setting up everything at first would definitely take some effort but once it’s there and you keep updating it regularly as events around you keep happening, you’ll not only find it to be quite easy, but also you’ll be able to talk about yourself anywhere anytime without having to explicitly prepare for it because you become so aware about yourself.
b) Stay authentic: I’ve seen a lot of people do this mistake of presenting themselves as per different job profiles. According to me, it’s always better to first decide what actually interests you, what would you be happy doing and then search for relevant opportunities; not the other way round. The fact that the demand for AI talent surpasses the supply for the same gives you this opportunity. Spending time on your regulatory preparation mentioned above would give you an all-around perspective on yourself and help make this decision easier. Also, you won’t need to prepare answers to various kinds of questions that you get asked during an interview. Most of them would come out naturally as you’d be talking about something you really care about.
c) Networking: Once you’re done with a), figured out b), Networking is what will actually help you get there. If you don’t talk to people, you miss out on hearing about many opportunities that you might have a good shot at. It’s important to keep connecting with new people each day, if not physically, then on LinkedIn, so that upon compounding it after many days, you have a large and strong network. Networking is NOT messaging people to place a referral for you. When I was starting off, I did this mistake way too often until I stumbled upon this excellent article by Mark Meloon, where he talks about the importance of building a real connection with people by offering our help first. Another important step in networking is to get your content out. For example, if you’re good at something, blog about it and share that blog on Facebook and LinkedIn. Not only does this help others, it helps you as well. Once you have a good enough network, your visibility increases multi-fold. You never know how one person from your network liking or commenting on your posts, may help you reach out to a much broader audience including people who might be looking for someone of your expertise.
I’m presenting this list in alphabetical order to avoid the misinterpretation of any specific preference. However, I do place a “*” on the ones that I’d personally recommend. This recommendation is based on either of the following: mission statement, people, personal interaction or scope of learning. More than 1 “*” is purely based on the 2nd and 3rd factors.
Your interview begins the moment you have entered the room and a lot of things can happen between that moment and the time when you’re asked to introduce yourself — your body language and the fact that you’re smiling while greeting them plays a big role, especially when you’re interviewing for a start-up as culture-fit is something that they extremely care about. You need to understand that as much as the interviewer is a stranger to you, you’re a stranger to him/her too. So, they’re probably just as nervous as you are.
It’s important to view the interview as more of a conversation between yourself and the interviewer. Both of you are looking for a mutual fit — you are looking for an awesome place to work at and the interviewer is looking for an awesome person (like you) to work with. So, make sure that you’re feeling good about yourself and that you take the charge of making the initial moments of your conversation pleasant for them. And the easiest way I know how to make that happen is to smile.
There are mostly two types of interviews — one, where the interviewer has come with come prepared set of questions and is going to just ask you just that irrespective of your profile and the second, where the interview is based on your CV. I’ll start with the second one.
This kind of interview generally begins with a “Can you tell me a bit about yourself?”. At this point, 2 things are a big NO — talking about your GPA in college and talking about your projects in detail. An ideal statement should be about a minute or two long, should give a good idea on what have you been doing till now, and it’s not restricted to academics. You can talk about your hobbies like reading books, playing sports, meditation, etc — basically, anything that contributes to defining you. The interviewer will then take something that you talk about here as a cue for his next question, and then the technical part of the interview begins. The motive of this kind of interview is to really check whether whatever you have written on your CV is true or not:
There would be a lot of questions on what could be done differently, or if “X” was used instead of “Y”, what would have happened. At this point, it’s important to know the kinds of trade-offs that are usually made during implementation; e.g. if the interviewer says that using a more complex model would have given better results, then you might say that you actually had less data to work with and that would have led to overfitting. In one of the interviews, I was given a case-study to work on and it involved designing algorithms for a real-world use case. I’ve noticed that once I’ve been given the green flag to talk about a project, the interviewers really like it when I talk about it in the following flow:
Problem > 1 or 2 previous approaches > Our approach > Result > Intuition
The other kind of interview is really just to test your basic knowledge. Don’t expect those questions to be too hard. But they would definitely scratch every bit of the basics that you should be having, mainly based around Linear Algebra, Probability, Statistics, Optimisation, Machine Learning and/or Deep Learning. The resources mentioned in the Minimal Resources you need for preparation section should suffice, but make sure that you don’t miss out one bit among them. The catch here is the amount of time you take to answer those questions. Since these cover the basics, they expect that you should be answering them almost instantly. So, do your preparation accordingly.
Throughout the process, it’s important to be confident and honest about what you know and what you don’t know. If there’s a question that you’re certain you have no idea about, say it upfront rather than making “Aah”, “Um” sounds. If some concept is really important but you are struggling with answering it, the interviewer would generally (depending on how you did in the initial parts) be happy to give you a hint or guide you towards the right solution. It’s a big plus if you manage to pick their hints and arrive at the correct solution. Try to not get nervous and the best way to avoid that is by, again, smiling.
Now we come to the conclusion of the interview where the interviewer would ask you if you have any questions for them. It’s really easy to think that your interview is done and just say that you have nothing to ask. I know many people who got rejected just because of failing at this last question. As I mentioned before, it’s not only you who is being interviewed. You are also looking for a mutual fit with the company itself. So, it’s quite obvious that if you really want to join a place, you must have many questions regarding the work culture there or what kind of role are they seeing you in. It can be as simple as being curious about the person interviewing you. There’s always something to learn from everything around you and you should make sure that you leave the interviewer with the impression that you’re truly interested in being a part of their team. A final question that I’ve started asking all my interviewers, is for a feedback on what they might want me to improve on. This has helped me tremendously and I still remember every feedback that I’ve gotten which I’ve incorporated into my daily life.
That’s it. Based on my experience, if you’re just honest about yourself, are competent, truly care about the company you’re interviewing for and have the right mindset, you should have ticked all the right boxes and should be getting a congratulatory mail soon 😄
We live in an era full of opportunities and that applies to anything that you love. You just need to strive to become the best at it and you will find a way to monetise it. As Gary Vaynerchuk (just follow him already) says:
This is a great time to be working in AI and if you’re truly passionate about it, you have so much that you can do with AI. You can empower so many people that have always been under-represented. We keep nagging about the problems surrounding us, but there’s been never such a time where common people like us can actually do something about those problems, rather than just complaining. Jeffrey Hammerbacher (Founder, Cloudera) had famously said:
We can do so much with AI than we can ever imagine. There are many extremely challenging problems out there which require incredibly smart people like you to put your head down on and solve. You can make many lives better. Time to let go of what is “cool”, or what would “look good”. THINK and CHOOSE wisely.
Any Data Science interview mostly comprises questions from a subset of the following four categories: Computer Science, Math, Statistics and Machine Learning.
If you’re not familiar with the math behind Deep Learning, then you should consider going over my last post for resources to understand them. However, if you are comfortable, I’ve found that the chapters 2, 3 and 4 of the Deep Learning Book are enough to prepare/revise for theoretical questions during such interviews. I’ve been preparing summaries for a few chapters which you can refer to where I’ve tried to even explain a few concepts that I found challenging to understand at first, in case you are not willing to go through the entire chapters. And if you’ve already done a course on probability, you should be comfortable answering a few numerical as well. For stats, covering these topics should be enough.
Now, the range of questions here can vary depending on the type of position you are applying for. If it’s a more traditional Machine Learning based interview where they want to check your basic knowledge in ML, you can complete any one of the following courses:- Machine Learning by Andrew Ng — CS 229- Machine Learning course by Caltech Professor Yaser Abu-Mostafa
Important topics are: Supervised Learning (Classification, Regression, SVM, Decision Tree, Random Forests, Logistic Regression, Multi-layer Perceptron, Parameter Estimation, Bayes’ Decision Rule), Unsupervised Learning (K-means Clustering, Gaussian Mixture Models), Dimensionality Reduction (PCA).
Now, if you’re applying for a more advanced position, there’s a high chance that you might be questioned on Deep Learning. In that case, you should be very comfortable with Convolutional Neural Networks (CNNs) and/or (depending upon what you’ve worked on) Recurrent Neural Networks (RNNs) and their variants. And by being comfortable, you must know what is the fundamental idea behind Deep Learning, how CNNs/RNNs actually worked, what kind of architectures have been proposed and what has been the motivation behind those architectural changes. Now, there’s no shortcut for this. Either you understand them or you put enough time to understand them. For CNNs, the recommended resource is Stanford’s CS 231N and CS 224N for RNNs. I found this Neural Network class by Hugo Larochelle to be really enlightening too. Refer this for a quick refresher too. Udacity coming to the aid here too. By now, you should have figured out that Udacity is a really important place for an ML practitioner. There are not a lot of places working on Reinforcement Learning (RL) in India and I too am not experienced in RL as of now. So, that’s one thing to add to this post sometime in the future.
Getting placed off-campus is a long journey of self-realisation. I realise that this has been another long post and I’m again extremely grateful to you for valuing my thoughts. I hope that this post finds a way of being useful to you and that it helped you in some way to prepare for your next Data Science interview better. If it did, I request you to really think about what I talk about in What we should strive to work for.
I’m very thankful to my friends from IIT Guwahati for their helpful feedback, especially Ameya Godbole, Kothapalli Vignesh and Prabal Jain. A majority of what I mention here, like “viewing an interview as a conversation” and “seeking feedback from our interviewers”, arose from multiple discussions with Prabal who has been advising me constantly on how I can improve my interviewing skills.
This story is published in Noteworthy, where thousands come every day to learn about the people & ideas shaping the products we love.
Follow our publication to see more product & design stories featured by the Journal team.
AI Fanatic • Math Lover • Dreamer
The official Journal blog
|
Sophia Arakelyan | 7 | 4 | https://buzzrobot.com/from-ballerina-to-ai-researcher-part-i-46fce67f809b?source=---------1---------------- | From Ballerina to AI Researcher: Part I – buZZrobot | Last year, I published the article “From Ballerina to AI writer” where I described how I embraced the technical part of AI without a technical background. But having love and passion for AI, I educated myself and was able to build a neural net classifier and do projects in Deep RL.
Recently, I’ve become a participant in the OpenAI Scholarship Program (OpenAI is a non-profit that gathers top AI researchers to ensure the safety of AI to benefit humanity). Every week for the next three months I’ll publish blog posts sharing my story of transformation from a person dedicated to 15 years of professional dancing and then writing about tech and AI to actually conducting AI research.
Finding your true calling — the key component of happiness
My primary goal with the series of blog posts "From Ballerina to AI researcher" is to show that it's never too late to embrace a new field, start over again, and find your true calling. Finding work you love is one of the most important components of happiness — something that you do every day and invest your time in to grow; that makes you feel fulfilled, gives you energy; something that is a refuge for your soul.
Great things never come easy. We have to be able to fight to make great things happen. But you can’t fight for something you don’t believe in, especially if you don’t feel like it’s really important for you and humanity. Finding that thing is a real challenge. I feel lucky that I found my true passion — AI. To me, the technology itself and the AI community — researchers, scientists, people who dedicate their lives to building the most powerful technology of all time with the mission to benefit humanity and make it safe for us — is a great source of energy.
The structure of the blog post series
Today, I’m giving an overall intro of what I’m going to cover in my “From Ballerina to AI Researcher” series.
I’ll dedicate the sequence of blog posts during the OpenAI Scholars program to several aspects of AI technology. I’ll cover those areas that concern me a lot, like AI and automation, bias in ML, dual use of AI, etc.
Also, the structure of my posts will include some insights on what I’m working on right now (the final technical project will be available by the end of August and will be open-sourced).
I feel very lucky to have Alec Radford, an experienced researcher, as my mentor who guides me in the NLP and NLU research area.
First week of my scholarship
I've dedicated my first week within the program to learning about the Transformer architecture, which performs much better on sequential data than RNNs and LSTMs.
The novelty of the architecture is its multi-head self-attention mechanism. According to the original paper, experiments with the transformer on two machine translation tasks showed the model to be superior in quality while being more parallelizable and requiring significantly less time to train.
More concretely, when RNNs or CNNs take a sequence as input, they go through it word by word, which is a huge obstacle to parallelizing the process (it takes more time to train models). Moreover, if sequences are too long, the model tends to forget the content of distant positions in the sequence or mix it up with the content of following positions — this is the fundamental problem in dealing with sequential data. The Transformer architecture mitigates this problem thanks to the multi-head self-attention mechanism.
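To make the self-attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in PyTorch. It shows only the core operation, not the full multi-head module from the paper, and the tensor shapes are assumptions for the example:

```python
import torch
import torch.nn.functional as F

def self_attention(x):
    # x: (batch, seq_len, d_model). Every position attends to every other
    # position at once, which is what makes the computation easy to parallelize.
    d_model = x.size(-1)
    scores = torch.matmul(x, x.transpose(-2, -1)) / d_model ** 0.5  # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)  # how strongly each position attends to the others
    return torch.matmul(weights, x)      # weighted sum of the value vectors

# Toy "sentence" of 5 positions with 16-dimensional embeddings.
tokens = torch.randn(1, 5, 16)
print(self_attention(tokens).shape)  # torch.Size([1, 5, 16])
```

In the real Transformer, the input is first projected into separate query, key, and value matrices, and several such attention heads run in parallel before their outputs are combined.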
I dug into RNN and LSTM models to catch up on the background information. To that end, I found Andrew Ng's course on Deep Learning, along with the papers, extremely useful. To develop insights regarding the Transformer, I went through the following resources: the video by Łukasz Kaiser from Google Brain, one of the model's creators; a blog post with very well elaborated content about the model; and I ran the tensor2tensor code as well as the PyTorch code from this paper to "feel" the difference between the TF and PyTorch frameworks.
Overall, the goal within the program is to develop deep comprehension of the NLU research area: challenges, current state of the art; and to formulate and test hypotheses that tackle the most important problems of the field.
I’ll share more on what I’m working on in my future articles. Meanwhile, if you have questions/feedback, please leave a comment.
If you want to learn more about me, here are my Facebook and Twitter accounts.
I’d appreciate your feedback on my posts, such as what topics are most interesting to you that I should consider further coverage on.
Former ballerina turned AI writer. Fan of sci-fi, astrophysics. Consciousness is the key. Founder of buZZrobot.com
The publication aims to cover practical aspects of AI technology, use cases along with interviews with notable people in the AI field.
|
Dr. GP Pulipaka | 2 | 6 | https://medium.com/@gp_pulipaka/3-ways-to-apply-latent-semantic-analysis-on-large-corpus-text-on-macos-terminal-jupyterlab-colab-7b4dc3e1622?source=---------9---------------- | 3 Ways to Apply Latent Semantic Analysis on Large-Corpus Text on macOS Terminal, JupyterLab, and... | Latent semantic analysis works on large-scale datasets to generate representations to discover the insights through natural language processing. There are different approaches to perform the latent semantic analysis at multiple levels such as document level, phrase level, and sentence level. Primarily semantic analysis can be summarized into lexical semantics and the study of combining individual words into paragraphs or sentences. The lexical semantics classifies and decomposes the lexical items. Applying lexical semantic structures has different contexts to identify the differences and similarities between the words. A generic term in a paragraph or a sentence is hypernym and hyponymy provides the meaning of the relationship between instances of the hyponyms. Homonyms contain similar syntax or similar spelling with similar structuring with different meanings. Homonyms are not related to each other. Book is an example for homonym. It can mean for someone to read something or an act of making a reservation with similar spelling, form, and syntax. However, the definition is different. Polysemy is another phenomenon of the words where a single word could be associated with multiple related senses and distinct meanings. The word polysemy is a Greek word which means many signs. Python provides NLTK library to perform tokenization of the words by chopping the words in larger chunks into phrases or meaningful strings. Processing words through tokenization produce tokens. Word lemmatization converts words from the current inflected form into the base form.
Latent semantic analysis
Applying latent semantic analysis to large datasets of text and documents represents their contextual meaning through mathematical and statistical computation over a large corpus. Many times, latent semantic analysis has outscored humans on subject matter tests. The accuracy of latent semantic analysis is high because it reads through machine-readable documents and texts at web scale. Latent semantic analysis is a technique that applies singular value decomposition and principal component analysis (PCA). A document collection can be represented as a Z x Y matrix A, whose rows represent the documents in the collection. For a typical large-corpus text collection, the matrix A can have hundreds of thousands of rows and columns. Singular value decomposition belongs to a set of operations dubbed matrix decompositions. Natural language processing in Python with the NLTK library applies a low-rank approximation to the term-document matrix. The low-rank approximation then aids in indexing and retrieving documents, known as latent semantic indexing, by clustering the words in the documents.
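One common way to run that pipeline in Python is scikit-learn's TruncatedSVD over a TF-IDF term-document matrix. This is a sketch with a tiny made-up corpus, not the code used for the results below:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "Latent semantic analysis analyzes the term document matrix",
    "Singular value decomposition factorizes the term document matrix",
    "The championship game was decided in overtime",
]

vectorizer = TfidfVectorizer()
A = vectorizer.fit_transform(docs)  # rows are documents, columns are terms
lsa = TruncatedSVD(n_components=2)  # low-rank approximation of A
doc_vectors = lsa.fit_transform(A)  # each document as a point in latent space
print(doc_vectors)                  # similar documents end up with similar vectors
```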
Brief overview of linear algebra
The Z x Y matrix A contains real-valued, non-negative entries for the term-document matrix. The rank of the matrix is the number of linearly independent columns or rows in the matrix, so rank(A) ≤ min{Z, Y}. A square c x c matrix is a diagonal matrix when its off-diagonal entries are zero; if, in addition, all c diagonal entries are one, it is the identity matrix of dimension c, written Ic. For a square Z x Z matrix A, a vector k that is not all zeroes is an eigenvector with eigenvalue λ if Ak = λk. Matrix decomposition factors the square matrix into a product of matrices derived from its eigenvectors. This allows the dimensionality of the words to be reduced from many dimensions to two dimensions for viewing on a plot. The dimensionality reduction techniques of principal component analysis and singular value decomposition hold critical relevance in natural language processing. The Zipfian nature of word frequencies in a document makes it difficult to determine the similarity of words in a static stage. Hence, eigendecomposition is a by-product of singular value decomposition, as the input of the document is highly asymmetrical. Latent semantic analysis is a particular technique in semantic space for parsing through a document and identifying words with polysemy using the NLTK library. The resources punkt and wordnet have to be downloaded from NLTK.
Deep Learning at scale with Google Colab notebooks
Training machine learning or deep learning models on CPUs can take hours and can be pretty expensive in terms of the time and energy of the computer resources. Google built the Colab Notebooks environment for research and development purposes. It runs entirely in the cloud without requiring any additional hardware or software setup for each machine. It is essentially the equivalent of a Jupyter notebook, and it lets data scientists share colab notebooks by storing them on Google Drive, just like any other Google Sheets or documents, in a collaborative environment. There are no additional costs associated with enabling GPU acceleration at runtime. There are some challenges to uploading data into Colab, unlike a Jupyter notebook, which can access the data directly from the local directory of the machine. In Colab, there are multiple options: files can be uploaded from the local file system, or a drive can be mounted to load the data through the drive FUSE wrapper.
Once this step is complete, it shows the following log without errors:
The next step would be generating the authentication tokens to authenticate the Google credentials for the drive and Colab
If it shows successful retrieval of access token, then Colab is all set.
At this stage, the drive is not mounted yet, so it will show false when accessing the contents of the text file.
Once the drive is mounted, Colab has access to the datasets from Google drive.
Once the files are accessible, the Python can be executed similar to executing in Jupyter environment. Colab notebook also displays the results similar to what we see on Jupyter notebook.
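For reference, the mounting step can also be done with Colab's built-in helper instead of the FUSE wrapper described above. This is a minimal sketch, and the file path is a hypothetical example:

```python
# Run inside a Colab notebook cell.
from google.colab import drive

drive.mount('/content/drive')  # prompts for Google authorization the first time

# Once mounted, Drive files appear under /content/drive/My Drive/
with open('/content/drive/My Drive/corpus.txt') as f:  # hypothetical file
    text = f.read()
print(len(text))
```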
PyCharm IDE
The program can be compiled and run in the PyCharm IDE, or executed from the macOS Terminal.
Results from the macOS Terminal
Jupyter Notebook on standalone machine
Jupyter Notebook gives a similar output running the latent semantic analysis on the local machine:
References
Gorrell, G. (2006). Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing. Retrieved from https://www.aclweb.org/anthology/E06-1013
Hardeniya, N. (2016). Natural Language Processing: Python and NLTK . Birmingham, England: Packt Publishing.
Landauer, T. K., Foltz, P. W., Laham, D., & University of Colorado at Boulder (1998). An Introduction to Latent Semantic Analysis. Retrieved from http://lsa.colorado.edu/papers/dp1.LSAintro.pdf
Stackoverflow (2018). Mounting Google Drive on Google Colab. Retrieved from https://stackoverflow.com/questions/50168315/mounting-google-drive-on-google-colab
Stanford University (2009). Matrix decompositions and latent semantic indexing. Retrieved from https://nlp.stanford.edu/IR-book/html/htmledition/matrix-decompositions-and-latent-semantic-indexing-1.html
Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | Big data | IoT | Startups | SAP | MachineLearning | DeepLearning | DataScience
|
Scott Santens | 7.3K | 14 | https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49?source=tag_archive---------0---------------- | Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines | (An alternate version of this article was originally published in the Boston Globe)
On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.
Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, it was quiet in that you may have heard it, but its full meaning may not have been comprehended. However, it’s vital we understand this new language, and what it’s increasingly telling us, for the ramifications are set to alter everything we take for granted about the way our globalized economy functions, and the ways in which we as humans exist within it.
The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer's use of it to defeat, seemingly out of nowhere, three-time European Go champion Fan Hui, not once but five times in a row. Many who read this news considered it impressive, but in no way comparable to a match against Lee Se-dol, whom many consider to be one of the world's best living Go players, if not the best. Imagining such a grand duel of man versus machine, China's top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to lose one at the most.
What actually ended up happening when they faced off? Lee went on to lose all but one of their match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to machine, just as Jeopardy did before it to Watson, and chess before that to Deep Blue.
So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game.
AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect to see a lot more milestones being crossed long before we would otherwise expect. These exponential advances, most notably in forms of artificial intelligence limited to specific tasks, we are entirely unprepared for as long as we continue to insist upon employment as our primary source of income.
This may all sound like exaggeration, so let’s take a few decade steps back, and look at what computer technology has been actively doing to human employment so far:
Let the above chart sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It’s already here. Computer technology is already eating jobs and has been since 1990.
All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties, is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, the stuff that is routine stagnated back in 1990. This happened because routine labor is easiest for technology to shoulder. Rules can be written for work that doesn’t change, and that work can be better handled by machines.
Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them.
If we can now imagine our economy as a plane with four engines, where it can still fly on only two of them as long as they both keep roaring, we can avoid concerning ourselves with crashing. But what happens when our two remaining engines also fail? That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn.
I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both of them so my undergraduate focus ended up being in the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me.
As a quick primer in the way our brains function, they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are only connected to one other, and some are connected to many. Electrical signals then pass through these connections, at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result amazingly is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work.
One of these applications is the creation of deep neural networks - kind of like pared-down virtual brains. They provide an avenue to machine learning that’s made incredible leaps that were previously thought to be much further down the road, if even possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but the vastly growing expanse of our collective data, aka big data.
Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015 every minute we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why?
Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program detecting chairs that aren’t, and not detecting chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had that whole chair thing all figured out, so we pointed at a table and said “chair”, which is when our parents told us that was “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.
The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do without giving them explicit instructions. Instead of describing “chairness” to a computer, we instead just plug it into the Internet and feed it millions of pictures of chairs. It can then have a general idea of “chairness.” Next we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference though is that unlike us, it can then sort through millions of images within a matter of seconds.
This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master level status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI.
However, despite all these milestones, when asked to estimate when a computer would defeat a prominent Go player, the answer even just months prior to the announcement by Google of AlphaGo’s victory, was by experts essentially, “Maybe in another ten years.” A decade was considered a fair guess because Go is a game so complex I’ll just let Ken Jennings of Jeopardy fame, another former champion human defeated by AI, describe it:
Such confounding complexity makes impossible any brute-force approach to scan every possible move to determine the next best move. But deep neural networks get around that barrier in the same way our own minds do, by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn’t even close to ten years. The correct answer ended up being, “Any time now.”
Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it.
We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to successfully perform such tasks with no need or less need for humans, and at lower costs than humans.
Amelia is just one AI out there currently being beta-tested in companies right now. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through the paces, she successfully handled one of every ten calls in the first week, and by the end of the second month, she could resolve six of ten calls. Because of this, it’s been estimated that she can put 250 million people out of a job, worldwide.
Viv is an AI coming soon from the creators of Siri who’ll be our own personal assistant. She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she’ll know we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — that industry the entire Internet is built upon — stands to be hugely disrupted.
A world with Amelia and Viv — and the countless other AI counterparts coming online soon — in combination with robots like Boston Dynamics’ next generation Atlas portends, is a world where machines can do all four types of jobs and that means serious societal reconsiderations. If a machine can do a job instead of a human, should any human be forced at the threat of destitution to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast.
Fortunately, people are beginning to ask these questions, and there’s an answer that’s building up momentum. The idea is to put machines to work for us, but empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship, and the sizes of bureaucracies necessary to boost incomes. It’s for these reasons, it has cross-partisan support, and is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada.
The future is a place of accelerating changes. It seems unwise to continue looking at the future as if it were the past, where just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating the creation by 2020 of 2 million new jobs alongside the elimination of 7 million. That's a net loss of 5 million jobs, not a net gain. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile self-driving vehicles, again thanks to machine learning, have the capability of drastically impacting all economies — especially the US economy as I wrote last year about automating truck driving — by eliminating millions of jobs within a short span of time.
And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War.
All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at the end of 2015 at Singularity University, prominent data scientist Jeremy Howard asked “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, ”If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”
AI pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies... My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”
Moshe Vardi expressed the same sentiment after speaking at the 2016 annual meeting of the American Association for the Advancement of Science about the emergence of intelligent machines, “we need to rethink the very basic structure of our economic system... we may have to consider instituting a basic income guarantee.”
Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng, during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”
When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers?
No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t?
What’s the big lesson to learn, in a century when machines can learn?
I offer it’s that jobs are for machines, and life is for people.
This article was written on a crowdfunded monthly basic income. If you found value in this article, you can support it along with all my advocacy for basic income with a monthly patron pledge of $1+.
Special thanks to Arjun Banker, Steven Grimm, Larry Cohen, Topher Hunt, Aaron Marcus-Kubitza, Andrew Stern, Keith Davis, Albert Wenger, Richard Just, Chris Smothers, Mark Witham, David Ihnen, Danielle Texeira, Katie Doemland, Paul Wicks, Jan Smole, Joe Esposito, Jack Wagner, Joe Ballou, Stuart Matthews, Natalie Foster, Chris McCoy, Michael Honey, Gary Aranovich, Kai Wong, John David Hodge, Louise Whitmore, Dan O’Sullivan, Harish Venkatesan, Michiel Dral, Gerald Huff, Susanne Berg, Cameron Ottens, Kian Alavi, Gray Scott, Kirk Israel, Robert Solovay, Jeff Schulman, Andrew Henderson, Robert F. Greene, Martin Jordo, Victor Lau, Shane Gordon, Paolo Narciso, Johan Grahn, Tony DeStefano, Erhan Altay, Bryan Herdliska, Stephane Boisvert, Dave Shelton, Rise & Shine PAC, Luke Sampson, Lee Irving, Kris Roadruck, Amy Shaffer, Thomas Welsh, Olli Niinimäki, Casey Young, Elizabeth Balcar, Masud Shah, Allen Bauer, all my other funders for their support, and my amazing partner, Katie Smith.
Scott Santens writes about basic income on his blog. You can also follow him here on Medium, on Twitter, on Facebook, or on Reddit where he is a moderator for the /r/BasicIncome community of over 30,000 subscribers.
If you feel others would appreciate this article, please click the green heart.
New Orleans writer focused on the potential for human civilization to gets its act together in the 21st century. Moderator of /r/BasicIncome on Reddit.
Articles discussing the concept of the universal basic income
|
Adam Geitgey | 35K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471?source=tag_archive---------1---------------- | Machine Learning is Fun! – Adam Geitgey – Medium | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 日本語, Português, Português (alternate), Türkçe, Français, 한국어 , العَرَبِيَّة, Español (México), Español (España), Polski, Italiano, 普通话, Русский, 한국어 , Tiếng Việt or فارسی.
Bigger update: The content of this article is now available as a full-length video course that walks you through every step of the code. You can take the course for free (and access everything else on Lynda.com free for 30 days) if you sign up with this link.
Have you heard people talking about machine learning but only have a fuzzy idea of what that means? Are you tired of nodding your way through conversations with co-workers? Let’s change that!
This guide is for anyone who is curious about machine learning but has no idea where to start. I imagine there are a lot of people who tried reading the wikipedia article, got frustrated and gave up wishing someone would just give them a high-level explanation. That’s what this is.
The goal is to be accessible to anyone — which means that there are a lot of generalizations. But who cares? If this gets anyone more interested in ML, then mission accomplished.
Machine learning is the idea that there are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.
For example, one kind of algorithm is a classification algorithm. It can put data into different groups. The same classification algorithm used to recognize handwritten numbers could also be used to classify emails into spam and not-spam without changing a line of code. It’s the same algorithm but it’s fed different training data so it comes up with different classification logic.
“Machine learning” is an umbrella term covering lots of these kinds of generic algorithms.
You can think of machine learning algorithms as falling into one of two main categories — supervised learning and unsupervised learning. The difference is simple, but really important.
Let’s say you are a real estate agent. Your business is growing, so you hire a bunch of new trainee agents to help you out. But there’s a problem — you can glance at a house and have a pretty good idea of what a house is worth, but your trainees don’t have your experience so they don’t know how to price their houses.
To help your trainees (and maybe free yourself up for a vacation), you decide to write a little app that can estimate the value of a house in your area based on its size, neighborhood, etc., and what similar houses have sold for.
So you write down every time someone sells a house in your city for 3 months. For each house, you write down a bunch of details — number of bedrooms, size in square feet, neighborhood, etc. But most importantly, you write down the final sale price:
Using that training data, we want to create a program that can estimate how much any other house in your area is worth:
This is called supervised learning. You knew how much each house sold for, so in other words, you knew the answer to the problem and could work backwards from there to figure out the logic.
To build your app, you feed your training data about each house into your machine learning algorithm. The algorithm is trying to figure out what kind of math needs to be done to make the numbers work out.
This is kind of like having the answer key to a math test with all the arithmetic symbols erased:
From this, can you figure out what kind of math problems were on the test? You know you are supposed to “do something” with the numbers on the left to get each answer on the right.
In supervised learning, you are letting the computer work out that relationship for you. And once you know what math was required to solve this specific set of problems, you could answer to any other problem of the same type!
Let’s go back to our original example with the real estate agent. What if you didn’t know the sale price for each house? Even if all you know is the size, location, etc of each house, it turns out you can still do some really cool stuff. This is called unsupervised learning.
This is kind of like someone giving you a list of numbers on a sheet of paper and saying “I don’t really know what these numbers mean but maybe you can figure out if there is a pattern or grouping or something — good luck!”
So what could you do with this data? For starters, you could have an algorithm that automatically identified different market segments in your data. Maybe you’d find out that home buyers in the neighborhood near the local college really like small houses with lots of bedrooms, but home buyers in the suburbs prefer 3-bedroom houses with lots of square footage. Knowing about these different kinds of customers could help direct your marketing efforts.
Another cool thing you could do is automatically identify any outlier houses that were way different than everything else. Maybe those outlier houses are giant mansions and you can focus your best sales people on those areas because they have bigger commissions.
Supervised learning is what we’ll focus on for the rest of this post, but that’s not because unsupervised learning is any less useful or interesting. In fact, unsupervised learning is becoming increasingly important as the algorithms get better because it can be used without having to label the data with the correct answer.
Side note: There are lots of other types of machine learning algorithms. But this is a pretty good place to start.
As a human, your brain can approach most any situation and learn how to deal with that situation without any explicit instructions. If you sell houses for a long time, you will instinctively have a “feel” for the right price for a house, the best way to market that house, the kind of client who would be interested, etc. The goal of Strong AI research is to be able to replicate this ability with computers.
But current machine learning algorithms aren’t that good yet — they only work when focused on a very specific, limited problem. Maybe a better definition for “learning” in this case is “figuring out an equation to solve a specific problem based on some example data”.
Unfortunately “Machine Figuring out an equation to solve a specific problem based on some example data” isn’t really a great name. So we ended up with “Machine Learning” instead.
Of course if you are reading this 50 years in the future and we’ve figured out the algorithm for Strong AI, then this whole post will all seem a little quaint. Maybe stop reading and go tell your robot servant to go make you a sandwich, future human.
So, how would you write the program to estimate the value of a house like in our example above? Think about it for a second before you read further.
If you didn’t know anything about machine learning, you’d probably try to write out some basic rules for estimating the price of a house like this:
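Something along these lines (the thresholds and dollar amounts in this sketch are made up purely for illustration):

```python
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
    # In my area, houses go for roughly $200 per square foot... I think?
    price_per_sqft = 200
    if neighborhood == "hipsterton":
        # but some neighborhoods cost a bit more
        price_per_sqft = 400
    elif neighborhood == "skid row":
        # and some cost a lot less
        price_per_sqft = 100
    # start with a base estimate based on how big the place is
    price = price_per_sqft * sqft
    # then adjust based on the number of bedrooms
    if num_of_bedrooms == 0:
        # studio apartments are cheap
        price = price - 20000
    else:
        # places with more bedrooms usually sell for more
        price = price + (num_of_bedrooms * 1000)
    return price
```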
If you fiddle with this for hours and hours, you might end up with something that sort of works. But your program will never be perfect and it will be hard to maintain as prices change.
Wouldn’t it be better if the computer could just figure out how to implement this function for you? Who cares what exactly the function does as long as it returns the correct number:
One way to think about this problem is that the price is a delicious stew and the ingredients are the number of bedrooms, the square footage and the neighborhood. If you could just figure out how much each ingredient impacts the final price, maybe there’s an exact ratio of ingredients to stir in to make the final price.
That would reduce your original function (with all those crazy if’s and else’s) down to something really simple like this:
Notice the magic numbers in bold — .841231951398213, 1231.1231231, 2.3242341421, and 201.23432095. These are our weights. If we could just figure out the perfect weights to use that work for every house, our function could predict house prices!
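In Python, that simplified weighted function might look something like this sketch, plugging in the weights quoted above (how the neighborhood gets encoded as a number is an assumption here):

```python
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
    price = 0
    # a little pinch of this
    price += num_of_bedrooms * 0.841231951398213
    # and a big pinch of that
    price += sqft * 1231.1231231
    # maybe a handful of this (neighborhood encoded as a number)
    price += neighborhood * 2.3242341421
    # and finally, just a little extra salt for good measure
    price += 201.23432095
    return price
```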
A dumb way to figure out the best weights would be something like this:
Start with each weight set to 1.0:
Run every house you know about through your function and see how far off the function is at guessing the correct price for each house:
For example, if the first house really sold for $250,000, but your function guessed it sold for $178,000, you are off by $72,000 for that single house.
Now add up the squared amount you are off for each house you have in your data set. Let’s say that you had 500 home sales in your data set and the square of how much your function was off for each house was a grand total of $86,123,373. That’s how “wrong” your function currently is.
Now, take that sum total and divide it by 500 to get an average of how far off you are for each house. Call this average error amount the cost of your function.
If you could get this cost to be zero by playing with the weights, your function would be perfect. It would mean that in every case, your function perfectly guessed the price of the house based on the input data. So that’s our goal — get this cost to be as low as possible by trying different weights.
Repeat Step 2 over and over with every single possible combination of weights. Whichever combination of weights makes the cost closest to zero is what you use. When you find the weights that work, you’ve solved the problem!
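Here is a minimal sketch of what Step 2 (computing the cost for one particular set of weights) might look like in Python; the tiny data set is made up for illustration:

```python
def estimate(weights, house):
    # weights = [w_bedrooms, w_sqft, w_neighborhood, bias]
    return (weights[0] * house["bedrooms"]
            + weights[1] * house["sqft"]
            + weights[2] * house["neighborhood"]
            + weights[3])

def cost(weights, houses):
    # Average of the squared difference between our guess and the real sale price.
    total = sum((estimate(weights, h) - h["sale_price"]) ** 2 for h in houses)
    return total / len(houses)

houses = [
    {"bedrooms": 3, "sqft": 2000, "neighborhood": 1, "sale_price": 250000},
    {"bedrooms": 2, "sqft": 800, "neighborhood": 0, "sale_price": 300000},
]
print(cost([1.0, 1.0, 1.0, 1.0], houses))  # very wrong with all weights set to 1.0
```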
That’s pretty simple, right? Well think about what you just did. You took some data, you fed it through three generic, really simple steps, and you ended up with a function that can guess the price of any house in your area. Watch out, Zillow!
But here’s a few more facts that will blow your mind:
Pretty crazy, right?
Ok, of course you can’t just try every combination of all possible weights to find the combo that works the best. That would literally take forever since you’d never run out of numbers to try.
To avoid that, mathematicians have figured out lots of clever ways to quickly find good values for those weights without having to try very many. Here’s one way:
First, write a simple equation that represents Step #2 above:
Now let’s re-write exactly the same equation, but using a bunch of machine learning math jargon (that you can ignore for now):
This equation represents how wrong our price estimating function is for the weights we currently have set.
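In standard notation, that cost (the mean squared error) is usually written roughly like this (a reconstruction, since conventions vary slightly and some texts include an extra factor of 1/2):

```latex
\text{cost} = J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
```

Here h_theta(x^(i)) is our function's price guess for house i, y^(i) is that house's actual sale price, and m is the number of houses in the data set.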
If we graph this cost equation for all possible values of our weights for number_of_bedrooms and sqft, we’d get a graph that might look something like this:
In this graph, the lowest point in blue is where our cost is the lowest — thus our function is the least wrong. The highest points are where we are most wrong. So if we can find the weights that get us to the lowest point on this graph, we’ll have our answer!
So we just need to adjust our weights so we are “walking down hill” on this graph towards the lowest point. If we keep making small adjustments to our weights that are always moving towards the lowest point, we’ll eventually get there without having to try too many different weights.
If you remember anything from Calculus, you might remember that if you take the derivative of a function, it tells you the slope of the function’s tangent at any point. In other words, it tells us which way is downhill for any given point on our graph. We can use that knowledge to walk downhill.
So if we calculate a partial derivative of our cost function with respect to each of our weights, then we can subtract that value from each weight. That will walk us one step closer to the bottom of the hill. Keep doing that and eventually we’ll reach the bottom of the hill and have the best possible values for our weights. (If that didn’t make sense, don’t worry and keep reading).
That’s a high-level summary of one way to find the best weights for your function, called batch gradient descent. Don’t be afraid to dig deeper if you are interested in learning the details.
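Here is a minimal sketch of batch gradient descent for the house-price example (the learning rate and data are illustrative, and real libraries handle all of this for you, as noted below):

```python
# Features per house: [bedrooms, sqft]; target: sale price.
X = [[3, 2000.0], [2, 800.0], [4, 2500.0]]
y = [250000.0, 300000.0, 320000.0]
weights = [1.0, 1.0]   # one weight per feature
bias = 1.0
learning_rate = 1e-7   # tiny, because the sqft values are large

for step in range(1000):
    # Partial derivative of the average squared error with respect to each weight.
    grad_w = [0.0, 0.0]
    grad_b = 0.0
    for features, price in zip(X, y):
        guess = sum(w * f for w, f in zip(weights, features)) + bias
        error = guess - price
        for j, f in enumerate(features):
            grad_w[j] += 2 * error * f / len(X)
        grad_b += 2 * error / len(X)
    # Walk one small step downhill on the cost surface.
    weights = [w - learning_rate * g for w, g in zip(weights, grad_w)]
    bias -= learning_rate * grad_b

print(weights, bias)  # the weights drift toward values that fit the data better
```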
When you use a machine learning library to solve a real problem, all of this will be done for you. But it’s still useful to have a good idea of what is happening.
The three-step algorithm I described is called multivariate linear regression. You are estimating the equation for a line that fits through all of your house data points. Then you are using that equation to guess the sales price of houses you’ve never seen before based on where that house would appear on your line. It’s a really powerful idea and you can solve “real” problems with it.
But while the approach I showed you might work in simple cases, it won’t work in all cases. One reason is because house prices aren’t always simple enough to follow a continuous line.
But luckily there are lots of ways to handle that. There are plenty of other machine learning algorithms that can handle non-linear data (like neural networks or SVMs with kernels). There are also ways to use linear regression more cleverly that allow for more complicated lines to be fit. In all cases, the same basic idea of needing to find the best weights still applies.
Also, I ignored the idea of overfitting. It’s easy to come up with a set of weights that always works perfectly for predicting the prices of the houses in your original data set but never actually works for any new houses that weren’t in your original data set. But there are ways to deal with this (like regularization and using a cross-validation data set). Learning how to deal with this issue is a key part of learning how to apply machine learning successfully.
In other words, while the basic concept is pretty simple, it takes some skill and experience to apply machine learning and get useful results. But it’s a skill that any developer can learn!
Once you start seeing how easily machine learning techniques can be applied to problems that seem really hard (like handwriting recognition), you start to get the feeling that you could use machine learning to solve any problem and get an answer as long as you have enough data. Just feed in the data and watch the computer magically figure out the equation that fits the data!
But it’s important to remember that machine learning only works if the problem is actually solvable with the data that you have.
For example, if you build a model that predicts home prices based on the type of potted plants in each house, it’s never going to work. There just isn’t any kind of relationship between the potted plants in each house and the home’s sale price. So no matter how hard it tries, the computer can never deduce a relationship between the two.
So remember, if a human expert couldn’t use the data to solve the problem manually, a computer probably won’t be able to either. Instead, focus on problems where a human could solve the problem, but where it would be great if a computer could solve it much more quickly.
In my mind, the biggest problem with machine learning right now is that it mostly lives in the world of academia and commercial research groups. There isn’t a lot of easy to understand material out there for people who would like to get a broad understanding without actually becoming experts. But it’s getting a little better every day.
If you want to try out what you’ve learned in this article, I made a course that walks you through every step of this article, including writing all the code. Give it a try!
If you want to go deeper, Andrew Ng’s free Machine Learning class on Coursera is pretty amazing as a next step. I highly recommend it. It should be accessible to anyone who has a Comp. Sci. degree and who remembers a very minimal amount of math.
Also, you can play around with tons of machine learning algorithms by downloading and installing SciKit-Learn. It’s a python framework that has “black box” versions of all the standard algorithms.
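For example, the whole house-price exercise above collapses to a few lines with scikit-learn's LinearRegression (the toy data here is made up for illustration):

```python
from sklearn.linear_model import LinearRegression

# Features: [bedrooms, sqft, neighborhood code]; target: sale price.
X = [[3, 2000, 1], [2, 800, 0], [4, 2500, 1], [3, 1600, 2]]
y = [250000, 300000, 320000, 199000]

model = LinearRegression()
model.fit(X, y)                       # finds the best weights for us
print(model.coef_, model.intercept_)  # the learned weights and the bias term
print(model.predict([[3, 1500, 1]]))  # estimate a house we've never seen
```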
If you liked this article, please consider signing up for my Machine Learning is Fun! Newsletter:
Also, please check out the full-length course version of this article. It covers everything in this article in more detail, including writing the actual code in Python. You can get a free 30-day trial to watch the course if you sign up with this link.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 2!
Interested in computers and machine learning. Likes to write about it.
|
Adam Geitgey | 14.2K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721?source=tag_archive---------2---------------- | Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano.
Are you tired of reading endless news stories about deep learning and not really knowing what that means? Let’s change that!
This time, we are going to learn how to write programs that recognize objects in images using deep learning. In other words, we’re going to explain the black magic that allows Google Photos to search your photos based on what is in the picture:
Just like Part 1 and Part 2, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is to be accessible to anyone — which means that there are a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished!
(If you haven’t already read part 1 and part 2, read them now!)
You might have seen this famous xkcd comic before.
The goof is based on the idea that any 3-year-old child can recognize a photo of a bird, but figuring out how to make a computer recognize objects has puzzled the very best computer scientists for over 50 years.
In the last few years, we’ve finally found a good approach to object recognition using deep convolutional neural networks. That sounds like a bunch of made-up words from a William Gibson Sci-Fi novel, but the ideas are totally understandable if you break them down one by one.
So let’s do it — let’s write a program that can recognize birds!
Before we learn how to recognize pictures of birds, let’s learn how to recognize something much simpler — the handwritten number “8”.
In Part 2, we learned about how neural networks can solve complex problems by chaining together lots of simple neurons. We created a small neural network to estimate the price of a house based on how many bedrooms it had, how big it was, and which neighborhood it was in:
We also know that the idea of machine learning is that the same generic algorithms can be reused with different data to solve different problems. So let’s modify this same neural network to recognize handwritten text. But to make the job really simple, we’ll only try to recognize one letter — the numeral “8”.
Machine learning only works when you have data — preferably a lot of data. So we need lots and lots of handwritten “8”s to get started. Luckily, researchers created the MNIST data set of handwritten numbers for this very purpose. MNIST provides 60,000 images of handwritten digits, each as a 28x28 image. Here are some “8”s from the data set:
The neural network we made in Part 2 only took in three numbers as the input (“3” bedrooms, “2000” sq. feet, etc.). But now we want to process images with our neural network. How in the world do we feed images into a neural network instead of just numbers?
The answer is incredibly simple. A neural network takes numbers as input. To a computer, an image is really just a grid of numbers that represent how dark each pixel is:
To feed an image into our neural network, we simply treat the 28x28 pixel image as an array of 784 numbers:
To handle 784 inputs, we’ll just enlarge our neural network to have 784 input nodes:
Notice that our neural network also has two outputs now (instead of just one). The first output will predict the likelihood that the image is an “8” and the second output will predict the likelihood it isn’t an “8”. By having a separate output for each type of object we want to recognize, we can use a neural network to classify objects into groups.
Our neural network is a lot bigger than last time (324 inputs instead of 3!). But any modern computer can handle a neural network with a few hundred nodes without blinking. This would even work fine on your cell phone.
All that’s left is to train the neural network with images of “8”s and not-“8"s so it learns to tell them apart. When we feed in an “8”, we’ll tell it the probability the image is an “8” is 100% and the probability it’s not an “8” is 0%. Vice versa for the counter-example images.
Here’s some of our training data:
We can train this kind of neural network in a few minutes on a modern laptop. When it’s done, we’ll have a neural network that can recognize pictures of “8”s with a pretty high accuracy. Welcome to the world of (late 1980’s-era) image recognition!
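If you want to try this yourself, here is a minimal Keras sketch of such a network. It is not the article's original code, and it uses a single sigmoid output rather than the two outputs described above (which is equivalent for a two-class problem):

```python
from tensorflow import keras

# Load MNIST and turn it into an "is this an 8?" problem.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0  # flatten 28x28 -> 784
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
y_train = (y_train == 8).astype("float32")  # 1.0 if the digit is an 8, else 0.0
y_test = (y_test == 8).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability the image is an "8"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128,
          validation_data=(x_test, y_test))
```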
It’s really neat that simply feeding pixels into a neural network actually worked to build image recognition! Machine learning is magic! ...right?
Well, of course it’s not that simple.
First, the good news is that our “8” recognizer really does work well on simple images where the letter is right in the middle of the image:
But now the really bad news:
Our “8” recognizer totally fails to work when the letter isn’t perfectly centered in the image. Just the slightest position change ruins everything:
This is because our network only learned the pattern of a perfectly-centered “8”. It has absolutely no idea what an off-center “8” is. It knows exactly one pattern and one pattern only.
That’s not very useful in the real world. Real world problems are never that clean and simple. So we need to figure out how to make our neural network work in cases where the “8” isn’t perfectly centered.
We already created a really good program for finding an “8” centered in an image. What if we just scan all around the image for possible “8”s in smaller sections, one section at a time, until we find one?
This approach is called a sliding window. It’s the brute force solution. It works well in some limited cases, but it’s really inefficient. You have to check the same image over and over looking for objects of different sizes. We can do better than this!
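A minimal sketch of that brute-force scan (the window size and stride are arbitrary choices for illustration):

```python
import numpy as np

def sliding_windows(image, size=28, stride=4):
    # Yield every size x size patch of the image, one position at a time.
    height, width = image.shape
    for top in range(0, height - size + 1, stride):
        for left in range(0, width - size + 1, stride):
            yield image[top:top + size, left:left + size]

big_image = np.zeros((100, 100))
patches = list(sliding_windows(big_image))
print(len(patches))  # every one of these patches gets run through the "8" recognizer
```

On top of that, you would have to repeat the whole scan at several different window sizes, which is exactly why this approach gets expensive.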
When we trained our network, we only showed it “8”s that were perfectly centered. What if we train it with more data, including “8”s in all different positions and sizes all around the image?
We don’t even need to collect new training data. We can just write a script to generate new images with the “8”s in all kinds of different positions in the image:
Using this technique, we can easily create an endless supply of training data.
More data makes the problem harder for our neural network to solve, but we can compensate for that by making our network bigger and thus able to learn more complicated patterns.
To make the network bigger, we just stack up layer upon layer of nodes:
We call this a “deep neural network” because it has more layers than a traditional neural network.
This idea has been around since the late 1960s. But until recently, training this large of a neural network was just too slow to be useful. But once we figured out how to use 3d graphics cards (which were designed to do matrix multiplication really fast) instead of normal computer processors, working with large neural networks suddenly became practical. In fact, the exact same NVIDIA GeForce GTX 1080 video card that you use to play Overwatch can be used to train neural networks incredibly quickly.
But even though we can make our neural network really big and train it quickly with a 3d graphics card, that still isn’t going to get us all the way to a solution. We need to be smarter about how we process images into our neural network.
Think about it. It doesn’t make sense to train a network to recognize an “8” at the top of a picture separately from training it to recognize an “8” at the bottom of a picture as if those were two totally different objects.
There should be some way to make the neural network smart enough to know that an “8” anywhere in the picture is the same thing without all that extra training. Luckily... there is!
As a human, you intuitively know that pictures have a hierarchy or conceptual structure. Consider this picture:
As a human, you instantly recognize the hierarchy in this picture:
Most importantly, we recognize the idea of a child no matter what surface the child is on. We don’t have to re-learn the idea of child for every possible surface it could appear on.
But right now, our neural network can’t do this. It thinks that an “8” in a different part of the image is an entirely different thing. It doesn’t understand that moving an object around in the picture doesn’t make it something different. This means it has to re-learn the identity of each object in every possible position. That sucks.
We need to give our neural network understanding of translation invariance — an “8” is an “8” no matter where in the picture it shows up.
We’ll do this using a process called Convolution. The idea of convolution is inspired partly by computer science and partly by biology (i.e. mad scientists literally poking cat brains with weird probes to figure out how cats process images).
Instead of feeding entire images into our neural network as one grid of numbers, we’re going to do something a lot smarter that takes advantage of the idea that an object is the same no matter where it appears in a picture.
Here’s how it’s going to work, step by step —
Similar to our sliding window search above, let’s pass a sliding window over the entire original image and save each result as a separate, tiny picture tile:
By doing this, we turned our original image into 77 equally-sized tiny image tiles.
Earlier, we fed a single image into a neural network to see if it was an “8”. We’ll do the exact same thing here, but we’ll do it for each individual image tile:
However, there’s one big twist: We’ll keep the same neural network weights for every single tile in the same original image. In other words, we are treating every image tile equally. If something interesting appears in any given tile, we’ll mark that tile as interesting.
We don’t want to lose track of the arrangement of the original tiles. So we save the result from processing each tile into a grid in the same arrangement as the original image. It looks like this:
In other words, we’ve started with a large image and we ended with a slightly smaller array that records which sections of our original image were the most interesting.
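Here’s a bare-bones sketch of Steps 1–3, where `tiny_network()` stands in for the single-tile “8” recognizer and the tile size and stride are arbitrary choices for illustration:

```python
import numpy as np

def convolve(img, tiny_network, tile=16, stride=16):
    """Apply the *same* network to every tile and keep the outputs in a grid."""
    grid = []
    for y in range(0, img.shape[0] - tile + 1, stride):
        row = []
        for x in range(0, img.shape[1] - tile + 1, stride):
            row.append(tiny_network(img[y:y + tile, x:x + tile]))
        grid.append(row)
    return np.array(grid)   # same spatial arrangement as the original image, just smaller
```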
The result of Step 3 was an array that maps out which parts of the original image are the most interesting. But that array is still pretty big:
To reduce the size of the array, we downsample it using an algorithm called max pooling. It sounds fancy, but it isn’t at all!
We’ll just look at each 2x2 square of the array and keep the biggest number:
The idea here is that if we found something interesting in any of the four input tiles that make up each 2x2 grid square, we’ll just keep the most interesting bit. This reduces the size of our array while keeping the most important bits.
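Max pooling is only a few lines of NumPy. This is a sketch of the downsampling step, not library code:

```python
import numpy as np

def max_pool_2x2(grid):
    """Keep only the biggest number from each 2x2 square."""
    h, w = grid.shape
    trimmed = grid[:h - h % 2, :w - w % 2]   # drop any odd edge rows/columns
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

scores = np.array([[0.1, 0.8, 0.0, 0.2],
                   [0.3, 0.5, 0.9, 0.1],
                   [0.0, 0.0, 0.4, 0.6],
                   [0.2, 0.1, 0.3, 0.7]])
print(max_pool_2x2(scores))   # [[0.8 0.9]
                              #  [0.2 0.7]]
```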
So far, we’ve reduced a giant image down into a fairly small array.
Guess what? That array is just a bunch of numbers, so we can use that small array as input into another neural network. This final neural network will decide if the image is or isn’t a match. To differentiate it from the convolution step, we call it a “fully connected” network.
So from start to finish, our whole five-step pipeline looks like this:
Our image processing pipeline is a series of steps: convolution, max-pooling, and finally a fully-connected network.
When solving problems in the real world, these steps can be combined and stacked as many times as you want! You can have two, three or even ten convolution layers. You can throw in max pooling wherever you want to reduce the size of your data.
The basic idea is to start with a large image and continually boil it down, step-by-step, until you finally have a single result. The more convolution steps you have, the more complicated features your network will be able to learn to recognize.
For example, the first convolution step might learn to recognize sharp edges, the second convolution step might recognize beaks using its knowledge of sharp edges, the third step might recognize entire birds using its knowledge of beaks, etc.
Here’s what a more realistic deep convolutional network (like you would find in a research paper) looks like:
In this case, they start with a 224 x 224 pixel image, apply convolution and max pooling twice, apply convolution 3 more times, apply max pooling and then have two fully-connected layers. The end result is that the image is classified into one of 1000 categories!
So how do you know which steps you need to combine to make your image classifier work?
Honestly, you have to answer this by doing a lot of experimentation and testing. You might have to train 100 networks before you find the optimal structure and parameters for the problem you are solving. Machine learning involves a lot of trial and error!
Now finally we know enough to write a program that can decide if a picture is a bird or not.
As always, we need some data to get started. The free CIFAR10 data set contains 6,000 pictures of birds and 52,000 pictures of things that are not birds. But to get even more data we’ll also add in the Caltech-UCSD Birds-200–2011 data set that has another 12,000 bird pics.
Here’s a few of the birds from our combined data set:
And here’s some of the 52,000 non-bird images:
This data set will work fine for our purposes, but 72,000 low-res images is still pretty small for real-world applications. If you want Google-level performance, you need millions of large images. In machine learning, having more data is almost always more important than having better algorithms. Now you know why Google is so happy to offer you unlimited photo storage. They want your sweet, sweet data!
To build our classifier, we’ll use TFLearn. TFlearn is a wrapper around Google’s TensorFlow deep learning library that exposes a simplified API. It makes building convolutional neural networks as easy as writing a few lines of code to define the layers of our network.
Here’s the code to define and train the network:
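A sketch of such a network in TFLearn looks like this. The exact layer sizes and hyperparameters here are illustrative choices, and `X, Y` / `X_test, Y_test` are assumed to already hold the 32x32 training and validation images with one-hot bird/not-bird labels as NumPy arrays:

```python
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

# 32x32 color images go in...
network = input_data(shape=[None, 32, 32, 3])

# ...then convolution and max pooling layers, stacked a few times...
network = conv_2d(network, 32, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 64, 3, activation='relu')
network = conv_2d(network, 64, 3, activation='relu')
network = max_pool_2d(network, 2)

# ...and finally the fully-connected layers that make the bird / not-bird call.
network = fully_connected(network, 512, activation='relu')
network = dropout(network, 0.5)
network = fully_connected(network, 2, activation='softmax')
network = regression(network, optimizer='adam',
                     loss='categorical_crossentropy', learning_rate=0.001)

model = tflearn.DNN(network)
model.fit(X, Y, n_epoch=50, shuffle=True, validation_set=(X_test, Y_test),
          show_metric=True, batch_size=96)
model.save("bird-classifier.tfl")
```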
If you are training with a good video card with enough RAM (like an Nvidia GeForce GTX 980 Ti or better), this will be done in less than an hour. If you are training with a normal CPU, it might take a lot longer.
As it trains, the accuracy will increase. After the first pass, I got 75.4% accuracy. After just 10 passes, it was already up to 91.7%. After 50 or so passes, it capped out around 95.5% accuracy and additional training didn’t help, so I stopped it there.
Congrats! Our program can now recognize birds in images!
Now that we have a trained neural network, we can use it! Here’s a simple script that takes in a single image file and predicts if it is a bird or not.
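A minimal version of that script might look like the sketch below. It assumes the trained TFLearn `model` from the previous sketch has been rebuilt and restored with `model.load("bird-classifier.tfl")`, and that the second output is the “bird” score — both of those are assumptions on my part, and the file name is a placeholder:

```python
import numpy as np
from PIL import Image

def looks_like_a_bird(image_path, model):
    img = Image.open(image_path).convert("RGB").resize((32, 32))
    data = np.asarray(img, dtype=np.float32) / 255.0   # scale pixel values to 0-1
    not_bird_score, bird_score = model.predict([data])[0]
    return bird_score > not_bird_score

print("It's a bird!" if looks_like_a_bird("some-photo.jpg", model) else "Not a bird.")
```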
But to really see how effective our network is, we need to test it with lots of images. The data set I created held back 15,000 images for validation. When I ran those 15,000 images through the network, it predicted the correct answer 95% of the time.
That seems pretty good, right? Well... it depends!
Our network claims to be 95% accurate. But the devil is in the details. That could mean all sorts of different things.
For example, what if 5% of our training images were birds and the other 95% were not birds? A program that guessed “not a bird” every single time would be 95% accurate! But it would also be 100% useless.
We need to look more closely at the numbers than just the overall accuracy. To judge how good a classification system really is, we need to look closely at how it failed, not just the percentage of the time that it failed.
Instead of thinking about our predictions as “right” and “wrong”, let’s break them down into four separate categories —
Using our validation set of 15,000 images, here’s how many times our predictions fell into each category:
Why do we break our results down like this? Because not all mistakes are created equal.
Imagine we were writing a program to detect cancer from an MRI image. In that case, we’d rather have false positives than false negatives. False negatives would be the worst possible case — that’s when the program told someone they definitely didn’t have cancer but they actually did.
Instead of just looking at overall accuracy, we calculate Precision and Recall metrics. Precision and Recall metrics give us a clearer picture of how well we did:
This tells us that 97% of the time we guessed “Bird”, we were right! But it also tells us that we only found 90% of the actual birds in the data set. In other words, we might not find every bird but we are pretty sure about it when we do find one!
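If you want to compute these yourself, the formulas are short — the counts come straight from the four categories above:

```python
def precision_recall(true_positives, false_positives, false_negatives):
    # Precision: when we said "bird", how often were we actually right?
    precision = true_positives / (true_positives + false_positives)
    # Recall: of all the real birds in the data, how many did we find?
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall
```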
Now that you know the basics of deep convolutional networks, you can try out some of the examples that come with tflearn to get your hands dirty with different neural network architectures. It even comes with built-in data sets so you don’t have to find your own images.
You also know enough now to start branching out and learning about other areas of machine learning. Why not learn how to use algorithms to train computers how to play Atari games next?
If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 4, Part 5 and Part 6!
|
Adam Geitgey | 15.2K | 13 | https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78?source=tag_archive---------3---------------- | Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano.
Have you noticed that Facebook has developed an uncanny ability to recognize your friends in your photographs? In the old days, Facebook used to make you tag your friends in photos by clicking on them and typing in their name. Now as soon as you upload a photo, Facebook tags everyone for you like magic:
This technology is called face recognition. Facebook’s algorithms are able to recognize your friends’ faces after they have been tagged only a few times. It’s pretty amazing technology — Facebook can recognize faces with 98% accuracy which is pretty much as good as humans can do!
Let’s learn how modern face recognition works! But just recognizing your friends would be too easy. We can push this tech to the limit to solve a more challenging problem — telling Will Ferrell (famous actor) apart from Chad Smith (famous rock musician)!
So far in Part 1, 2 and 3, we’ve used machine learning to solve isolated problems that have only one step — estimating the price of a house, generating new data based on existing data and telling if an image contains a certain object. All of those problems can be solved by choosing one machine learning algorithm, feeding in data, and getting the result.
But face recognition is really a series of several related problems:
As a human, your brain is wired to do all of this automatically and instantly. In fact, humans are too good at recognizing faces and end up seeing faces in everyday objects:
Computers are not capable of this kind of high-level generalization (at least not yet...), so we have to teach them how to do each step in this process separately.
We need to build a pipeline where we solve each step of face recognition separately and pass the result of the current step to the next step. In other words, we will chain together several machine learning algorithms:
Let’s tackle this problem one step at a time. For each step, we’ll learn about a different machine learning algorithm. I’m not going to explain every single algorithm completely to keep this from turning into a book, but you’ll learn the main ideas behind each one and you’ll learn how you can build your own facial recognition system in Python using OpenFace and dlib.
The first step in our pipeline is face detection. Obviously we need to locate the faces in a photograph before we can try to tell them apart!
If you’ve used any camera in the last 10 years, you’ve probably seen face detection in action:
Face detection is a great feature for cameras. When the camera can automatically pick out faces, it can make sure that all the faces are in focus before it takes the picture. But we’ll use it for a different purpose — finding the areas of the image we want to pass on to the next step in our pipeline.
Face detection went mainstream in the early 2000's when Paul Viola and Michael Jones invented a way to detect faces that was fast enough to run on cheap cameras. However, much more reliable solutions exist now. We’re going to use a method invented in 2005 called Histogram of Oriented Gradients — or just HOG for short.
To find faces in an image, we’ll start by making our image black and white because we don’t need color data to find faces:
Then we’ll look at every single pixel in our image one at a time. For every single pixel, we want to look at the pixels directly surrounding it:
Our goal is to figure out how dark the current pixel is compared to the pixels directly surrounding it. Then we want to draw an arrow showing in which direction the image is getting darker:
If you repeat that process for every single pixel in the image, you end up with every pixel being replaced by an arrow. These arrows are called gradients and they show the flow from light to dark across the entire image:
This might seem like a random thing to do, but there’s a really good reason for replacing the pixels with gradients. If we analyze pixels directly, really dark images and really light images of the same person will have totally different pixel values. But by only considering the direction that brightness changes, both really dark images and really bright images will end up with the same exact representation. That makes the problem a lot easier to solve!
But saving the gradient for every single pixel gives us way too much detail. We end up missing the forest for the trees. It would be better if we could just see the basic flow of lightness/darkness at a higher level so we could see the basic pattern of the image.
To do this, we’ll break up the image into small squares of 16x16 pixels each. In each square, we’ll count up how many gradients point in each major direction (how many point up, point up-right, point right, etc...). Then we’ll replace that square in the image with the arrow directions that were the strongest.
The end result is we turn the original image into a very simple representation that captures the basic structure of a face in a simple way:
To find faces in this HOG image, all we have to do is find the part of our image that looks the most similar to a known HOG pattern that was extracted from a bunch of other training faces:
Using this technique, we can now easily find faces in any image:
If you want to try this step out yourself using Python and dlib, here’s code showing how to generate and view HOG representations of images.
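If that linked code isn’t handy, here’s a rough sketch of the same ideas using dlib’s HOG-based face detector and scikit-image’s HOG function. The image path is a placeholder, and parameter names can vary a little between scikit-image versions:

```python
import dlib
from skimage import io, color
from skimage.feature import hog

image = io.imread("your-photo.jpg")

# dlib's built-in frontal face detector is itself based on HOG features
detector = dlib.get_frontal_face_detector()
for i, rect in enumerate(detector(image, 1)):   # the "1" upsamples the image once
    print(f"Face {i}: left={rect.left()}, top={rect.top()}, "
          f"right={rect.right()}, bottom={rect.bottom()}")

# And here's a HOG representation of the whole image that you can visualize
features, hog_image = hog(color.rgb2gray(image), orientations=8,
                          pixels_per_cell=(16, 16), cells_per_block=(1, 1),
                          visualize=True)
```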
Whew, we isolated the faces in our image. But now we have to deal with the problem that faces turned different directions look totally different to a computer:
To account for this, we will try to warp each picture so that the eyes and lips are always in the same place in the image. This will make it a lot easier for us to compare faces in the next steps.
To do this, we are going to use an algorithm called face landmark estimation. There are lots of ways to do this, but we are going to use the approach invented in 2014 by Vahid Kazemi and Josephine Sullivan.
The basic idea is we will come up with 68 specific points (called landmarks) that exist on every face — the top of the chin, the outside edge of each eye, the inner edge of each eyebrow, etc. Then we will train a machine learning algorithm to be able to find these 68 specific points on any face:
Here’s the result of locating the 68 face landmarks on our test image:
Now that we know where the eyes and mouth are, we’ll simply rotate, scale and shear the image so that the eyes and mouth are centered as best as possible. We won’t do any fancy 3d warps because that would introduce distortions into the image. We are only going to use basic image transformations like rotation and scale that preserve parallel lines (called affine transformations):
Now no matter how the face is turned, we are able to center the eyes and mouth in roughly the same position in the image. This will make our next step a lot more accurate.
If you want to try this step out yourself using Python and dlib, here’s the code for finding face landmarks and here’s the code for transforming the image using those landmarks.
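As a quick sketch of the landmark step with dlib (the image path is a placeholder, and you’d need to download dlib’s pre-trained shape_predictor_68_face_landmarks.dat model file first):

```python
import dlib
from skimage import io

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = io.imread("your-photo.jpg")
for rect in detector(image, 1):
    landmarks = predictor(image, rect)
    points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
    print("Tip of the chin:", points[8])   # point 8 is the chin in the standard 68-point layout
```

From those points, the alignment itself is just an affine warp — for example with OpenCV’s cv2.getAffineTransform and cv2.warpAffine.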
Now we get to the meat of the problem — actually telling faces apart. This is where things get really interesting!
The simplest approach to face recognition is to directly compare the unknown face we found in Step 2 with all the pictures we have of people that have already been tagged. When we find a previously tagged face that looks very similar to our unknown face, it must be the same person. Seems like a pretty good idea, right?
There’s actually a huge problem with that approach. A site like Facebook with billions of users and a trillion photos can’t possibly loop through every previously-tagged face to compare it to every newly uploaded picture. That would take way too long. They need to be able to recognize faces in milliseconds, not hours.
What we need is a way to extract a few basic measurements from each face. Then we could measure our unknown face the same way and find the known face with the closest measurements. For example, we might measure the size of each ear, the spacing between the eyes, the length of the nose, etc. If you’ve ever watched a bad crime show like CSI, you know what I am talking about:
Ok, so which measurements should we collect from each face to build our known face database? Ear size? Nose length? Eye color? Something else?
It turns out that the measurements that seem obvious to us humans (like eye color) don’t really make sense to a computer looking at individual pixels in an image. Researchers have discovered that the most accurate approach is to let the computer figure out the measurements to collect itself. Deep learning does a better job than humans at figuring out which parts of a face are important to measure.
The solution is to train a Deep Convolutional Neural Network (just like we did in Part 3). But instead of training the network to recognize objects in pictures like we did last time, we are going to train it to generate 128 measurements for each face.
The training process works by looking at 3 face images at a time:
Then the algorithm looks at the measurements it is currently generating for each of those three images. It then tweaks the neural network slightly so that it makes sure the measurements it generates for #1 and #2 are slightly closer while making sure the measurements for #2 and #3 are slightly further apart:
After repeating this step millions of times for millions of images of thousands of different people, the neural network learns to reliably generate 128 measurements for each person. Any ten different pictures of the same person should give roughly the same measurements.
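That “slightly closer / slightly further apart” rule is usually written as a triplet loss. Here’s the idea in plain NumPy — the margin value is an arbitrary choice on my part, and during training the network’s weights are nudged to push this number toward zero:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """anchor and positive are embeddings of the same person; negative is someone else."""
    same_person_dist = np.sum((anchor - positive) ** 2)
    different_person_dist = np.sum((anchor - negative) ** 2)
    # The loss is zero (nothing to fix) only when the matching pair is already
    # closer than the non-matching pair by at least the margin.
    return max(same_person_dist - different_person_dist + margin, 0.0)
```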
Machine learning people call the 128 measurements of each face an embedding. The idea of reducing complicated raw data like a picture into a list of computer-generated numbers comes up a lot in machine learning (especially in language translation). The exact approach for faces we are using was invented in 2015 by researchers at Google but many similar approaches exist.
This process of training a convolutional neural network to output face embeddings requires a lot of data and computer power. Even with an expensive NVIDIA Tesla video card, it takes about 24 hours of continuous training to get good accuracy.
But once the network has been trained, it can generate measurements for any face, even ones it has never seen before! So this step only needs to be done once. Lucky for us, the fine folks at OpenFace already did this and they published several trained networks which we can directly use. Thanks Brandon Amos and team!
So all we need to do ourselves is run our face images through their pre-trained network to get the 128 measurements for each face. Here’s the measurements for our test image:
So what parts of the face are these 128 numbers measuring exactly? It turns out that we have no idea. It doesn’t really matter to us. All we care about is that the network generates nearly the same numbers when looking at two different pictures of the same person.
If you want to try this step yourself, OpenFace provides a lua script that will generate embeddings for all images in a folder and write them to a csv file. You run it like this.
This last step is actually the easiest step in the whole process. All we have to do is find the person in our database of known people who has the closest measurements to our test image.
You can do that by using any basic machine learning classification algorithm. No fancy deep learning tricks are needed. We’ll use a simple linear SVM classifier, but lots of classification algorithms could work.
All we need to do is train a classifier that can take in the measurements from a new test image and tell us which known person is the closest match. Running this classifier takes milliseconds. The result of the classifier is the name of the person!
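With scikit-learn, that final classifier is only a few lines. Here `known_embeddings` is a list of 128-number measurements and `known_names` says who each one belongs to — both are placeholder names for data you would have generated in Step 3:

```python
from sklearn.svm import LinearSVC

classifier = LinearSVC()
classifier.fit(known_embeddings, known_names)   # train on faces we've already measured

# Later, for a face we've never seen before:
best_match = classifier.predict([new_face_embedding])[0]
print("This looks like", best_match)
```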
So let’s try out our system. First, I trained a classifier with the embeddings of about 20 pictures each of Will Ferrell, Chad Smith and Jimmy Fallon:
Then I ran the classifier on every frame of the famous YouTube video of Will Ferrell and Chad Smith pretending to be each other on the Jimmy Fallon show:
It works! And look how well it works for faces in different poses — even sideways faces!
Let’s review the steps we followed:
Now that you know how this all works, here are instructions, from start to finish, on how to run this entire face recognition pipeline on your own computer:
UPDATE 4/9/2017: You can still follow the steps below to use OpenFace. However, I’ve released a new Python-based face recognition library called face_recognition that is much easier to install and use. So I’d recommend trying out face_recognition first instead of continuing below!
I even put together a pre-configured virtual machine with face_recognition, OpenCV, TensorFlow and lots of other deep learning tools pre-installed. You can download and run it on your computer very easily. Give the virtual machine a shot if you don’t want to install all these libraries yourself!
Original OpenFace instructions:
If you liked this article, please consider signing up for my Machine Learning is Fun! newsletter:
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 5!
|
Xiaohan Zeng | 48K | 13 | https://medium.com/@XiaohanZeng/i-interviewed-at-five-top-companies-in-silicon-valley-in-five-days-and-luckily-got-five-job-offers-25178cf74e0f?source=tag_archive---------4---------------- | I interviewed at five top companies in Silicon Valley in five days, and luckily got five job offers | In the five days from July 24th to 28th 2017, I interviewed at LinkedIn, Salesforce Einstein, Google, Airbnb, and Facebook, and got all five job offers.
It was a great experience, and I feel fortunate that my efforts paid off, so I decided to write something about it. I will discuss how I prepared, review the interview process, and share my impressions about the five companies.
I had been at Groupon for almost three years. It’s my first job, and I have been working with an amazing team and on awesome projects. We’ve been building cool stuff, making impact within the company, publishing papers and all that. But I felt my learning rate was being annealed (read: slowing down) yet my mind was craving more. Also as a software engineer in Chicago, there are so many great companies that all attract me in the Bay Area.
Life is short, and professional life shorter still. After talking with my wife and gaining her full support, I decided to take actions and make my first ever career change.
Although I’m interested in machine learning positions, the positions at the five companies are slightly different in the title and the interviewing process. Three are machine learning engineer (LinkedIn, Google, Facebook), one is data engineer (Salesforce), and one is software engineer in general (Airbnb). Therefore I needed to prepare for three different areas: coding, machine learning, and system design.
Since I also have a full time job, it took me 2–3 months in total to prepare. Here is how I prepared for the three areas.
While I agree that coding interviews might not be the best way to assess all your skills as a developer, there is arguably no better way to tell if you are a good engineer in a short period of time. IMO it is the necessary evil to get you that job.
I mainly used Leetcode and Geeksforgeeks for practicing, but Hackerrank and Lintcode are also good places. I spent several weeks going over common data structures and algorithms, then focused on areas I wasn’t too familiar with, and finally did some frequently seen problems. Due to my time constraints I usually did two problems per day.
Here are some thoughts:
This area is more closely related to your actual work experience. Many questions can be asked during system design interviews, including but not limited to system architecture, object-oriented design, database schema design, distributed system design, scalability, etc.
There are many resources online that can help you with the preparation. For the most part I read articles on system design interviews, architectures of large-scale systems, and case studies.
Here are some resources that I found really helpful:
Although system design interviews can cover a lot of topics, there are some general guidelines for how to approach the problem:
With all that said, the best way to practice for system design interviews is to actually sit down and design a system, i.e. your day-to-day work. Instead of doing the minimal work, go deeper into the tools, frameworks, and libraries you use. For example, if you use HBase, rather than simply using the client to run some DDL and do some fetches, try to understand its overall architecture, such as the read/write flow, how HBase ensures strong consistency, what minor/major compactions do, and where LRU cache and Bloom Filter are used in the system. You can even compare HBase with Cassandra and see the similarities and differences in their design. Then when you are asked to design a distributed key-value store, you won’t feel ambushed.
Many blogs are also a great source of knowledge, such as Hacker Noon and engineering blogs of some companies, as well as the official documentation of open source projects.
The most important thing is to keep your curiosity and modesty. Be a sponge that absorbs everything it is submerged into.
Machine learning interviews can be divided into two aspects, theory and product design.
Unless you have experience in machine learning research or did really well in your ML course, it helps to read some textbooks. Classical ones such as The Elements of Statistical Learning and Pattern Recognition and Machine Learning are great choices, and if you are interested in specific areas you can read more on those.
Make sure you understand basic concepts such as the bias-variance trade-off, overfitting, gradient descent, L1/L2 regularization, Bayes’ Theorem, bagging/boosting, collaborative filtering, dimension reduction, etc. Familiarize yourself with common formulas such as Bayes’ Theorem and the derivation of popular models such as logistic regression and SVM. Try to implement simple models such as decision trees and K-means clustering. If you put some models on your resume, make sure you understand them thoroughly and can comment on their pros and cons.
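For instance, K-means is small enough to write from memory in an interview. Here’s a bare-bones NumPy version that skips edge cases such as empty clusters:

```python
import numpy as np

def kmeans(points, k, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # assign each point to its nearest center...
        distances = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = distances.argmin(axis=1)
        # ...then move each center to the mean of its assigned points
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```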
For ML product design, understand the general process of building a ML product. Here’s what I tried to do:
Here I want to emphasize again the importance of remaining curious and learning continuously. Try not to merely use the API for Spark MLlib or XGBoost and call it done, but try to understand why stochastic gradient descent is appropriate for distributed training, or understand how XGBoost differs from traditional GBDT, e.g. what is special about its loss function, why it needs to compute the second order derivative, etc.
I started by replying to HR’s messages on LinkedIn, and asking for referrals. After a failed attempt at a rock star startup (which I will touch upon later), I prepared hard for several months, and with help from my recruiters, I scheduled a full week of onsites in the Bay Area. I flew in on Sunday, had five full days of interviews with around 30 interviewers at some best tech companies in the world, and very luckily, got job offers from all five of them.
All phone screenings are standard. The only difference is in the duration: For some companies like LinkedIn it’s one hour, while for Facebook and Airbnb it’s 45 minutes.
Proficiency is the key here, since you are under the time gun and usually you only get one chance. You would have to very quickly recognize the type of problem and give a high-level solution. Be sure to talk to the interviewer about your thinking and intentions. It might slow you down a little at the beginning, but communication is more important than anything and it only helps with the interview. Do not recite the solution as the interviewer would almost certainly see through it.
For machine learning positions some companies would ask ML questions. If you are interviewing for those make sure you brush up your ML skills as well.
To make better use of my time, I scheduled three phone screenings in the same afternoon, one hour apart from each. The upside is that you might benefit from the hot hand and the downside is that the later ones might be affected if the first one does not go well, so I don’t recommend it for everyone.
One good thing about interviewing with multiple companies at the same time is that it gives you certain advantages. I was able to skip the second round phone screening with Airbnb and Salesforce because I got the onsite at LinkedIn and Facebook after only one phone screening.
More surprisingly, Google even let me skip their phone screening entirely and schedule my onsite to fill the vacancy after learning I had four onsites coming in the next week. I knew it was going to make it extremely tiring, but hey, nobody can refuse a Google onsite invitation!
LinkedIn
This is my first onsite and I interviewed at the Sunnyvale location. The office is very neat and people look very professional, as always.
The sessions are one hour each. Coding questions are standard, but the ML questions can get a bit tough. That said, I got an email from my HR containing the preparation material which was very helpful, and in the end I did not see anything that was too surprising. I heard the rumor that LinkedIn has the best meals in the Silicon Valley, and from what I saw if it’s not true, it’s not too far from the truth.
Acquisition by Microsoft seems to have lifted the financial burden from LinkedIn, and freed them up to do really cool things. New features such as videos and professional advertisements are exciting. As a company focusing on professional development, LinkedIn prioritizes the growth of its own employees. A lot of teams such as ads relevance and feed ranking are expanding, so act quickly if you want to join.
Salesforce Einstein
Rock star project by rock star team. The team is pretty new and feels very much like a startup. The product is built on the Scala stack, so type safety is a real thing there! Great talks on the Optimus Prime library by Matthew Tovbin at Scala Days Chicago 2017 and Leah McGuire at Spark Summit West 2017.
I interviewed at their Palo Alto office. The team has a cohesive culture and work life balance is great there. Everybody is passionate about what they are doing and really enjoys it. With four sessions it is shorter compared to the other onsite interviews, but I wish I could have stayed longer. After the interview Matthew even took me for a walk to the HP garage :)
Google
Absolutely the industry leader, and nothing to say about it that people don’t already know. But it’s huge. Like, really, really HUGE. It took me 20 minutes to ride a bicycle to meet my friends there. Also lines for food can be too long. Forever a great place for developers.
I interviewed at one of the many buildings on the Mountain View campus, and I don’t know which one it is because it’s HUGE.
My interviewers all look very smart, and once they start talking they are even smarter. It would be very enjoyable to work with these people.
One thing that I felt special about Google’s interviews is that the analysis of algorithm complexity is really important. Make sure you really understand what Big O notation means!
Airbnb
Fast expanding unicorn with a unique culture and arguably the most beautiful office in the Silicon Valley. New products such as Experiences and restaurant reservation, high end niche market, and expansion into China all contribute to a positive prospect. Perfect choice if you are risk tolerant and want a fast growing, pre-IPO experience.
Airbnb’s coding interview is a bit unique because you’ll be coding in an IDE instead of whiteboarding, so your code needs to compile and give the right answer. Some problems can get really hard.
And they’ve got the one-of-a-kind cross functional interviews. This is how Airbnb takes culture seriously, and being technically excellent doesn’t guarantee a job offer. For me the two cross functionals were really enjoyable. I had casual conversations with the interviewers and we all felt happy at the end of the session.
Overall I think Airbnb’s onsite is the hardest due to the difficulty of the problems, longer duration, and unique cross-functional interviews. If you are interested, be sure to understand their culture and core values.
Facebook
Another giant that is still growing fast, and smaller and faster-paced compared to Google. With its product lines dominating the social network market and big investments in AI and VR, I can only see more growth potential for Facebook in the future. With stars like Yann LeCun and Yangqing Jia, it’s the perfect place if you are interested in machine learning.
I interviewed at Building 20, the one with the rooftop garden and ocean view and also where Zuckerberg’s office is located.
I’m not sure if the interviewers got instructions, but I didn’t get clear signs whether my solutions were correct, although I believed they were.
By noon the prior four days had started to take their toll, and I was having a headache. I persisted through the afternoon sessions but felt I didn’t do well at all. I was a bit surprised to learn that I was getting an offer from them as well.
Generally I felt people there believe the company’s vision and are proud of what they are building. Being a company with half a trillion market cap and growing, Facebook is a perfect place to grow your career at.
This is a big topic that I won’t cover in this post, but I found this article to be very helpful.
Some things that I do think are important:
All successes start with failures, including interviews. Before I started interviewing for these companies, I failed my interview at Databricks in May.
Back in April, Xiangrui contacted me via LinkedIn asking me if I was interested in a position on the Spark MLlib team. I was extremely thrilled because 1) I use Spark and love Scala, 2) Databricks engineers are top-notch, and 3) Spark is revolutionizing the whole big data world. It is an opportunity I couldn’t miss, so I started interviewing after a few days.
The bar is very high and the process is quite long, including one pre-screening questionnaire, one phone screening, one coding assignment, and one full onsite.
I managed to get the onsite invitation, and visited their office in downtown San Francisco, where Treasure Island can be seen.
My interviewers were incredibly intelligent yet equally modest. During the interviews I often felt pushed to my limits. It was fine until one disastrous session, where I totally messed up due to insufficient skills and preparation, and it ended up a fiasco. Xiangrui was very kind and walked me to where I wanted to go after the interview was over, and I really enjoyed talking to him.
I got the rejection several days later. It was expected but I felt frustrated for a few days nonetheless. Although I missed the opportunity to work there, I wholeheartedly wish they will continue to make greater impact and achievements.
From the first interview in May to finally accepting the job offer in late September, my first career change was long and not easy.
It was difficult for me to prepare because I needed to keep doing well at my current job. For several weeks I was on a regular schedule of preparing for the interview till 1am, getting up at 8:30am the next day and fully devoting myself to another day at work.
Interviewing at five companies in five days was also highly stressful and risky, and I don’t recommend doing it unless you have a very tight schedule. But it does give you a good advantage during negotiation should you secure multiple offers.
I’d like to thank all my recruiters who patiently walked me through the process, the people who spent their precious time talking to me, and all the companies that gave me the opportunities to interview and extended me offers.
Lastly but most importantly, I want to thank my family for their love and support — my parents for watching me taking the first and every step, my dear wife for everything she has done for me, and my daughter for her warming smile.
Thanks for reading through this long post.
You can find me on LinkedIn or Twitter.
Xiaohan Zeng
10/22/17
PS: Since the publication of this post, it has (unexpectedly) received some attention. I would like to thank everybody for the congratulations and shares, and apologize for not being able to respond to each of them.
This post has been translated into some other languages:
It has been reposted in Tech In Asia.
Breaking Into Startups invited me to a live video streaming, together with Sophia Ciocca.
CoverShr did a short QnA with me.
|
Gil Fewster | 3.3K | 5 | https://medium.freecodecamp.org/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805?source=tag_archive---------5---------------- | The mind-blowing AI announcement from Google that you probably missed. | Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. In the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text.
I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own.
In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. Which is a shame, because it may just be the most astonishing article about machine learning that I read last year.
Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System.
This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock.
Luckily, I’m here to bring you up to speed. Here’s the deal.
Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance.
Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.
This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.
All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.
The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.
Google Translate invented its own language to help it translate more effectively.
What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.
Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes)
To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post:
Here you can see an advantage of Google’s new neural machine over the old phrase-based approach. The GNMT is able to learn how to translate between two languages without being explicitly taught. This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated.
And this leads the Google engineers onto that truly astonishing discovery of creation:
So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics.
I just thought maybe you’d want to know.
Ok, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them) please drop me a line and explain it all to me. Slowly.
Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective.
Update 2: Nafrondel’s excellent, detailed reply is also a must read for an expert explanation of how neural networks function.
|
Adam Geitgey | 10.4K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-part-2-a26a10b68df3?source=tag_archive---------6---------------- | Machine Learning is Fun! Part 2 – Adam Geitgey – Medium | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in Italiano, Español, Français, Türkçe, Русский, 한국어 Português, فارسی, Tiếng Việt or 普通话.
In Part 1, we said that Machine Learning is using generic algorithms to tell you something interesting about your data without writing any code specific to the problem you are solving. (If you haven’t already read part 1, read it now!).
This time, we are going to see one of these generic algorithms do something really cool — create video game levels that look like they were made by humans. We’ll build a neural network, feed it existing Super Mario levels and watch new ones pop out!
Just like Part 1, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is to be accessible to anyone — which means that there are a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished.
Back in Part 1, we created a simple algorithm that estimated the value of a house based on its attributes. Given data about a house like this:
We ended up with this simple estimation function:
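As a sketch, the function looked something like this — the weights below are made-up placeholders standing in for the values that were learned from the training data:

```python
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
    price = 0
    price += num_of_bedrooms * 0.123   # these weights are placeholders —
    price += sqft * 0.41               # the real ones were learned from
    price += neighborhood * 0.57       # the training data in Part 1
    return price
```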
In other words, we estimated the value of the house by multiplying each of its attributes by a weight. Then we just added those numbers up to get the house’s value.
Instead of using code, let’s represent that same function as a simple diagram:
However this algorithm only works for simple problems where the result has a linear relationship with the input. What if the truth behind house prices isn’t so simple? For example, maybe the neighborhood matters a lot for big houses and small houses but doesn’t matter at all for medium-sized houses. How could we capture that kind of complicated detail in our model?
To be more clever, we could run this algorithm multiple times with different sets of weights that each capture different edge cases:
Now we have four different price estimates. Let’s combine those four price estimates into one final estimate. We’ll run them through the same algorithm again (but using another set of weights)!
Our new Super Answer combines the estimates from our four different attempts to solve the problem. Because of this, it can model more cases than we could capture in one simple model.
Let’s combine our four attempts to guess into one big diagram:
This is a neural network! Each node knows how to take in a set of inputs, apply weights to them, and calculate an output value. By chaining together lots of these nodes, we can model complex functions.
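Stripped of everything else, that diagram is just a couple of nested weighted sums. Here’s a toy sketch in plain Python (leaving out details like the activation function):

```python
def weighted_sum(inputs, weights):
    return sum(value * weight for value, weight in zip(inputs, weights))

def tiny_network(inputs, hidden_weights, output_weights):
    # Four "nodes" look at the same inputs, each with its own set of weights...
    estimates = [weighted_sum(inputs, w) for w in hidden_weights]
    # ...and a final node combines their four estimates into the Super Answer.
    return weighted_sum(estimates, output_weights)
```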
There’s a lot that I’m skipping over to keep this brief (including feature scaling and the activation function), but the most important part is that these basic ideas click:
It’s just like LEGO! We can’t model much with one single LEGO block, but we can model anything if we have enough basic LEGO blocks to stick together:
The neural network we’ve seen always returns the same answer when you give it the same inputs. It has no memory. In programming terms, it’s a stateless algorithm.
In many cases (like estimating the price of house), that’s exactly what you want. But the one thing this kind of model can’t do is respond to patterns in data over time.
Imagine I handed you a keyboard and asked you to write a story. But before you start, my job is to guess the very first letter that you will type. What letter should I guess?
I can use my knowledge of English to increase my odds of guessing the right letter. For example, you will probably type a letter that is common at the beginning of words. If I looked at stories you wrote in the past, I could narrow it down further based on the words you usually use at the beginning of your stories. Once I had all that data, I could use it to build a neural network to model how likely it is that you would start with any given letter.
Our model might look like this:
But let’s make the problem harder. Let’s say I need to guess the next letter you are going to type at any point in your story. This is a much more interesting problem.
Let’s use the first few words of Ernest Hemingway’s The Sun Also Rises as an example:
What letter is going to come next?
You probably guessed ’n’ — the word is probably going to be boxing. We know this based on the letters we’ve already seen in the sentence and our knowledge of common words in English. Also, the word ‘middleweight’ gives us an extra clue that we are talking about boxing.
In other words, it’s easy to guess the next letter if we take into account the sequence of letters that came right before it and combine that with our knowledge of the rules of English.
To solve this problem with a neural network, we need to add state to our model. Each time we ask our neural network for an answer, we also save a set of our intermediate calculations and re-use them the next time as part of our input. That way, our model will adjust its predictions based on the input that it has seen recently.
Keeping track of state in our model makes it possible to not just predict the most likely first letter in the story, but to predict the most likely next letter given all previous letters.
This is the basic idea of a Recurrent Neural Network. We are updating the network each time we use it. This allows it to update its predictions based on what it saw most recently. It can even model patterns over time as long as we give it enough of a memory.
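In code, “adding state” boils down to threading one extra value through every call. Here’s a heavily simplified sketch — the weight matrices are random placeholders (in a real network they are learned), and each character is one-hot encoded:

```python
import numpy as np

text = "Robert Cohn was once middleweight boxi"   # the characters seen so far
alphabet = sorted(set(text))
one_hot = {ch: row for ch, row in zip(alphabet, np.eye(len(alphabet)))}

hidden_size = 32
rng = np.random.default_rng(0)
W_input = rng.normal(scale=0.1, size=(hidden_size, len(alphabet)))   # placeholder weights —
W_state = rng.normal(scale=0.1, size=(hidden_size, hidden_size))     # the real ones are learned

def rnn_step(x, previous_state):
    # The new state depends on the current input *and* whatever was left over
    # from the previous step — that's the network's memory.
    return np.tanh(W_input @ x + W_state @ previous_state)

state = np.zeros(hidden_size)        # start with an empty memory
for character in text:               # feed the text in one character at a time
    state = rnn_step(one_hot[character], state)
# a trained network would read its guess for the next letter out of `state` here
```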
Predicting the next letter in a story might seem pretty useless. What’s the point?
One cool use might be auto-predict for a mobile phone keyboard:
But what if we took this idea to the extreme? What if we asked the model to predict the next most likely character over and over — forever? We’d be asking it to write a complete story for us!
We saw how we could guess the next letter in Hemingway’s sentence. Let’s try generating a whole story in the style of Hemingway.
To do this, we are going to use the Recurrent Neural Network implementation that Andrej Karpathy wrote. Andrej is a Deep-Learning researcher at Stanford and he wrote an excellent introduction to generating text with RNNs, You can view all the code for the model on github.
We’ll create our model from the complete text of The Sun Also Rises — 362,239 characters using 84 unique letters (including punctuation, uppercase/lowercase, etc). This data set is actually really small compared to typical real-world applications. To generate a really good model of Hemingway’s style, it would be much better to have several times as much sample text. But this is good enough to play around with as an example.
As we just start to train the RNN, it’s not very good at predicting letters. Here’s what it generates after 100 loops of training:
You can see that it has figured out that sometimes words have spaces between them, but that’s about it.
After about 1000 iterations, things are looking more promising:
The model has started to identify the patterns in basic sentence structure. It’s adding periods at the ends of sentences and even quoting dialog. A few words are recognizable, but there’s also still a lot of nonsense.
But after several thousand more training iterations, it looks pretty good:
At this point, the algorithm has captured the basic pattern of Hemingway’s short, direct dialog. A few sentences even sort of make sense.
Compare that with some real text from the book:
Even by only looking for patterns one character at a time, our algorithm has reproduced plausible-looking prose with proper formatting. That is kind of amazing!
We don’t have to generate text completely from scratch, either. We can seed the algorithm by supplying the first few letters and just let it find the next few letters.
For fun, let’s make a fake book cover for our imaginary book by generating a new author name and a new title using the seed text of “Er”, “He”, and “The S”:
Not bad!
But the really mind-blowing part is that this algorithm can figure out patterns in any sequence of data. It can easily generate real-looking recipes or fake Obama speeches. But why limit ourselves to human language? We can apply this same idea to any kind of sequential data that has a pattern.
In 2015, Nintendo released Super Mario Maker™ for the Wii U gaming system.
This game lets you draw out your own Super Mario Brothers levels on the gamepad and then upload them to the internet so your friends can play through them. You can include all the classic power-ups and enemies from the original Mario games in your levels. It’s like a virtual LEGO set for people who grew up playing Super Mario Brothers.
Can we use the same model that generated fake Hemingway text to generate fake Super Mario Brothers levels?
First, we need a data set for training our model. Let’s take all the outdoor levels from the original Super Mario Brothers game released in 1985:
This game has 32 levels and about 70% of them have the same outdoor style. So we’ll stick to those.
To get the designs for each level, I took an original copy of the game and wrote a program to pull the level designs out of the game’s memory. Super Mario Bros. is a 30-year-old game and there are lots of resources online that help you figure out how the levels were stored in the game’s memory. Extracting level data from an old video game is a fun programming exercise that you should try sometime.
Here’s the first level from the game (which you probably remember if you ever played it):
If we look closely, we can see the level is made of a simple grid of objects:
We could just as easily represent this grid as a sequence of characters with one character representing each object:
We’ve replaced each object in the level with a letter:
...and so on, using a different letter for each different kind of object in the level.
I ended up with text files that looked like this:
Looking at the text file, you can see that Mario levels don’t really have much of a pattern if you read them line-by-line:
The patterns in a level really emerge when you think of the level as a series of columns:
So in order for the algorithm to find the patterns in our data, we need to feed the data in column-by-column. Figuring out the most effective representation of your input data (called feature selection) is one of the keys of using machine learning algorithms well.
To train the model, I needed to rotate my text files by 90 degrees. This made sure the characters were fed into the model in an order where a pattern would more easily show up:
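The rotation itself is a one-liner in Python — transpose the grid so that each line of output is one column of the original level. The tiny example level below is my own, just reusing the same ‘-’, ‘=’ and ‘P’ characters:

```python
def rotate_level(rows):
    """Turn rows into columns so the model reads the level column-by-column."""
    return ["".join(column) for column in zip(*rows)]

level = ["----------------",
         "--------PP------",
         "--------PP------",
         "================"]

for line in rotate_level(level):
    print(line)
```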
Just like we saw when creating the model of Hemingway’s prose, a model improves as we train it.
After a little training, our model is generating junk:
It sort of has an idea that ‘-’s and ‘=’s should show up a lot, but that’s about it. It hasn’t figured out the pattern yet.
After several thousand iterations, it’s starting to look like something:
The model has almost figured out that each line should be the same length. It has even started to figure out some of the logic of Mario: The pipes in mario are always two blocks wide and at least two blocks high, so the “P”s in the data should appear in 2x2 clusters. That’s pretty cool!
With a lot more training, the model gets to the point where it generates perfectly valid data:
Let’s sample an entire level’s worth of data from our model and rotate it back horizontal:
This data looks great! There are several awesome things to notice:
Finally, let’s take this level and recreate it in Super Mario Maker:
Play it yourself!
If you have Super Mario Maker, you can play this level by bookmarking it online or by looking it up using level code 4AC9–0000–0157-F3C3.
The recurrent neural network algorithm we used to train our model is the same kind of algorithm used by real-world companies to solve hard problems like speech recognition and language translation. What makes our model a ‘toy’ instead of cutting-edge is that our model is generated from very little data. There just aren’t enough levels in the original Super Mario Brothers game to provide enough data for a really good model.
If we could get access to the hundreds of thousands of user-created Super Mario Maker levels that Nintendo has, we could make an amazing model. But we can’t — because Nintendo won’t let us have them. Big companies don’t give away their data for free.
As machine learning becomes more important in more industries, the difference between a good program and a bad program will be how much data you have to train your models. That’s why companies like Google and Facebook need your data so badly!
For example, Google recently open sourced TensorFlow, its software toolkit for building large-scale machine learning applications. It was a pretty big deal that Google gave away such important, capable technology for free. This is the same stuff that powers Google Translate.
But without Google’s massive trove of data in every language, you can’t create a competitor to Google Translate. Data is what gives Google its edge. Think about that the next time you open up your Google Maps Location History or Facebook Location History and notice that it stores every place you’ve ever been.
In machine learning, there’s never a single way to solve a problem. You have limitless options when deciding how to pre-process your data and which algorithms to use. Often combining multiple approaches will give you better results than any single approach.
Readers have sent me links to other interesting approaches to generating Super Mario levels:
If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 3!
Interested in computers and machine learning. Likes to write about it.
|
David Venturi | 10.6K | 20 | https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0?source=tag_archive---------7---------------- | Every single Machine Learning course on the internet, ranked by your reviews | A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost.
I’m almost finished now. I’ve taken many data science-related courses and audited portions of many more. I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. So I started creating a review-driven guide that recommends the best courses for each subject within data science.
For the first guide in the series, I recommended a few coding classes for the beginner data scientist. Then it was statistics and probability classes. Then introductions to data science. Also, data visualization.
For this guide, I spent a dozen hours trying to identify every online machine learning course offered as of May 2017, extracting key bits of information from their syllabi and reviews, and compiling their ratings. My end goal was to identify the three best courses available and present them to you, below.
For this task, I turned to none other than the open source Class Central community, and its database of thousands of course ratings and reviews.
Since 2011, Class Central founder Dhawal Shah has kept a closer eye on online courses than arguably anyone else in the world. Dhawal personally helped me assemble this list of resources.
Each course must fit three criteria:
We believe we covered every notable course that fits the above criteria. Since there are seemingly hundreds of courses on Udemy, we chose to consider the most-reviewed and highest-rated ones only.
There’s always a chance that we missed something, though. So please let us know in the comments section if we left a good course out.
We compiled average ratings and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings.
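The exact weighting scheme isn’t spelled out, but a natural reading is an average weighted by review counts across sites, something like this small Python sketch (the numbers are made up):

    def weighted_average(ratings, counts):
        # ratings: average star rating reported by each review site
        # counts: number of reviews behind each of those averages
        return sum(r * n for r, n in zip(ratings, counts)) / sum(counts)

    print(round(weighted_average([4.7, 4.4], [350, 50]), 2))   # 4.66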
We made subjective syllabus judgment calls based on three factors:
A popular definition originates from Arthur Samuel in 1959: machine learning is a subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” In practice, this means developing computer programs that can make predictions based on data. Just as humans can learn from experience, so can computers, where data = experience.
A machine learning workflow is the process required for carrying out a machine learning project. Though individual projects can differ, most workflows share several common tasks: problem evaluation, data exploration, data preprocessing, model training/testing/deployment, etc. Below you’ll find a helpful visualization of these core steps:
The ideal course introduces the entire process and provides interactive examples, assignments, and/or quizzes where students can perform each task themselves.
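As a concrete, heavily simplified illustration of those core steps, a toy scikit-learn script might look like the following; the dataset and model choices are arbitrary stand-ins:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Data exploration: load a small dataset and look at its shape
    X, y = load_iris(return_X_y=True)
    print(X.shape, y.shape)

    # Preprocessing: split into train/test sets and scale the features
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # Model training
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Model testing / evaluation
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

Real projects layer problem evaluation, feature engineering and deployment on top of this skeleton.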
First off, let’s define deep learning. Here is a succinct description:
As would be expected, portions of some of the machine learning courses contain deep learning content. I chose not to include deep learning-only courses, however. If you are interested in deep learning specifically, we’ve got you covered with the following article:
My top three recommendations from that list would be:
Several courses listed below ask students to have prior programming, calculus, linear algebra, and statistics experience. These prerequisites are understandable given that machine learning is an advanced discipline.
Missing a few subjects? Good news! Some of this experience can be acquired through our recommendations in the first two articles (programming, statistics) of this Data Science Career Guide. Several top-ranked courses below also provide gentle calculus and linear algebra refreshers and highlight the aspects most relevant to machine learning for those less familiar.
Stanford University’s Machine Learning on Coursera is the clear current winner in terms of ratings, reviews, and syllabus fit. Taught by the famous Andrew Ng, Google Brain founder and former chief scientist at Baidu, this was the class that sparked the founding of Coursera. It has a 4.7-star weighted average rating over 422 reviews.
Released in 2011, it covers all aspects of the machine learning workflow. Though it has a smaller scope than the original Stanford class upon which it is based, it still manages to cover a large number of techniques and algorithms. The estimated timeline is eleven weeks, with two weeks dedicated to neural networks and deep learning. Free and paid options are available.
Ng is a dynamic yet gentle instructor whose experience is palpable. He inspires confidence, especially when sharing practical implementation tips and warnings about common pitfalls. A linear algebra refresher is provided and Ng highlights the aspects of calculus most relevant to machine learning.
Evaluation is automatic and is done via multiple choice quizzes that follow each lesson and programming assignments. The assignments (there are eight of them) can be completed in MATLAB or Octave, which is an open-source version of MATLAB. Ng explains his language choice:
Though Python and R are likely more compelling choices in 2017 with the increased popularity of those languages, reviewers note that that shouldn’t stop you from taking the course.
A few prominent reviewers noted the following:
Columbia University’s Machine Learning is a relatively new offering that is part of their Artificial Intelligence MicroMasters on edX. Though it is newer and doesn’t have a large number of reviews, the ones that it does have are exceptionally strong. Professor John Paisley is noted as brilliant, clear, and clever. It has a 4.8-star weighted average rating over 10 reviews.
The course also covers all aspects of the machine learning workflow and more algorithms than the above Stanford offering. Columbia’s is a more advanced introduction, with reviewers noting that students should be comfortable with the recommended prerequisites (calculus, linear algebra, statistics, probability, and coding).
Quizzes (11), programming assignments (4), and a final exam are the modes of evaluation. Students can use either Python, Octave, or MATLAB to complete the assignments. The course’s total estimated timeline is eight to ten hours per week over twelve weeks. It is free with a verified certificate available for purchase.
Below are a few of the aforementioned sparkling reviews:
Machine Learning A-Z™ on Udemy, taught by Kirill Eremenko and Hadelin de Ponteves, is an impressively detailed offering that provides instruction in both Python and R, which is rare and can’t be said for any of the other top courses. It has a 4.5-star weighted average rating over 8,119 reviews, which makes it the most reviewed course of the ones considered.
It covers the entire machine learning workflow and an almost ridiculous (in a good way) number of algorithms through 40.5 hours of on-demand video. The course takes a more applied approach and is lighter math-wise than the above two courses. Each section starts with an “intuition” video from Eremenko that summarizes the underlying theory of the concept being taught. de Ponteves then walks through implementation with separate videos for both Python and R.
As a “bonus,” the course includes Python and R code templates for students to download and use on their own projects. There are quizzes and homework challenges, though these aren’t the strong points of the course.
Eremenko and the SuperDataScience team are revered for their ability to “make the complex simple.” Also, the prerequisites listed are “just some high school mathematics,” so this course might be a better option for those daunted by the Stanford and Columbia offerings.
A few prominent reviewers noted the following:
Our #1 pick had a weighted average rating of 4.7 out of 5 stars over 422 reviews. Let’s look at the other alternatives, sorted by descending rating. A reminder that deep learning-only courses are not included in this guide — you can find those here.
The Analytics Edge (Massachusetts Institute of Technology/edX): More focused on analytics in general, though it does cover several machine learning topics. Uses R. Strong narrative that leverages familiar real-world examples. Challenging. Ten to fifteen hours per week over twelve weeks. Free with a verified certificate available for purchase. It has a 4.9-star weighted average rating over 214 reviews.
Python for Data Science and Machine Learning Bootcamp (Jose Portilla/Udemy): Has large chunks of machine learning content, but covers the whole data science process. More of a very detailed intro to Python. Amazing course, though not ideal for the scope of this guide. 21.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 3316 reviews.
Data Science and Machine Learning Bootcamp with R (Jose Portilla/Udemy): The comments for Portilla’s above course apply here as well, except for R. 17.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 1317 reviews.
Machine Learning Series (Lazy Programmer Inc./Udemy): Taught by a data scientist/big data engineer/full stack software engineer with an impressive resume, Lazy Programmer currently has a series of 16 machine learning-focused courses on Udemy. In total, the courses have 5000+ ratings and almost all of them have 4.6 stars. A useful course ordering is provided in each individual course’s description. Uses Python. Cost varies depending on Udemy discounts, which are frequent.
Machine Learning (Georgia Tech/Udacity): A compilation of what was three separate courses: Supervised, Unsupervised and Reinforcement Learning. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Bite-sized videos, as is Udacity’s style. Friendly professors. Estimated timeline of four months. Free. It has a 4.56-star weighted average rating over 9 reviews.
Implementing Predictive Analytics with Spark in Azure HDInsight (Microsoft/edX): Introduces the core concepts of machine learning and a variety of algorithms. Leverages several big data-friendly tools, including Apache Spark, Scala, and Hadoop. Uses both Python and R. Four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.5-star weighted average rating over 6 reviews.
Data Science and Machine Learning with Python — Hands On! (Frank Kane/Udemy): Uses Python. Kane has nine years of experience at Amazon and IMDb. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 4139 reviews.
Scala and Spark for Big Data and Machine Learning (Jose Portilla/Udemy): “Big data” focus, specifically on implementation in Scala and Spark. Ten hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 607 reviews.
Machine Learning Engineer Nanodegree (Udacity): Udacity’s flagship Machine Learning program, which features a best-in-class project review system and career support. The program is a compilation of several individual Udacity courses, which are free. Co-created by Kaggle. Estimated timeline of six months. Currently costs $199 USD per month with a 50% tuition refund available for those who graduate within 12 months. It has a 4.5-star weighted average rating over 2 reviews.
Learning From Data (Introductory Machine Learning) (California Institute of Technology/edX): Enrollment is currently closed on edX, but is also available via CalTech’s independent platform (see below). It has a 4.49-star weighted average rating over 42 reviews.
Learning From Data (Introductory Machine Learning) (Yaser Abu-Mostafa/California Institute of Technology): “A real Caltech course, not a watered-down version.” Reviews note it is excellent for understanding machine learning theory. The professor, Yaser Abu-Mostafa, is popular among students and also wrote the textbook upon which this course is based. Videos are taped lectures (with lectures slides picture-in-picture) uploaded to YouTube. Homework assignments are .pdf files. The course experience for online students isn’t as polished as the top three recommendations. It has a 4.43-star weighted average rating over 7 reviews.
Mining Massive Datasets (Stanford University): Machine learning with a focus on “big data.” Introduces modern distributed file systems and MapReduce. Ten hours per week over seven weeks. Free. It has a 4.4-star weighted average rating over 30 reviews.
AWS Machine Learning: A Complete Guide With Python (Chandra Lingam/Udemy): A unique focus on cloud-based machine learning and specifically Amazon Web Services. Uses Python. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 62 reviews.
Introduction to Machine Learning & Face Detection in Python (Holczer Balazs/Udemy): Uses Python. Eight hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 162 reviews.
StatLearning: Statistical Learning (Stanford University): Based on the excellent textbook, “An Introduction to Statistical Learning, with Applications in R” and taught by the professors who wrote it. Reviewers note that the MOOC isn’t as good as the book, citing “thin” exercises and mediocre videos. Five hours per week over nine weeks. Free. It has a 4.35-star weighted average rating over 84 reviews.
Machine Learning Specialization (University of Washington/Coursera): Great courses, but last two classes (including the capstone project) were canceled. Reviewers note that this series is more digestible (read: easier for those without strong technical backgrounds) than other top machine learning courses (e.g. Stanford’s or Caltech’s). Be aware that the series is incomplete with recommender systems, deep learning, and a summary missing. Free and paid options available. It has a 4.31-star weighted average rating over 80 reviews.
From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase (Loony Corn/Udemy): “A down-to-earth, shy but confident take on machine learning techniques.” Taught by four-person team with decades of industry experience together. Uses Python. Cost varies depending on Udemy discounts, which are frequent. It has a 4.2-star weighted average rating over 494 reviews.
Principles of Machine Learning (Microsoft/edX): Uses R, Python, and Microsoft Azure Machine Learning. Part of the Microsoft Professional Program Certificate in Data Science. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.09-star weighted average rating over 11 reviews.
Big Data: Statistical Inference and Machine Learning (Queensland University of Technology/FutureLearn): A nice, brief exploratory machine learning course with a focus on big data. Covers a few tools like R, H2O Flow, and WEKA. Only three weeks in duration at a recommended two hours per week, but one reviewer noted that six hours per week would be more appropriate. Free and paid options available. It has a 4-star weighted average rating over 4 reviews.
Genomic Data Science and Clustering (Bioinformatics V) (University of California, San Diego/Coursera): For those interested in the intersection of computer science and biology and how it represents an important frontier in modern science. Focuses on clustering and dimensionality reduction. Part of UCSD’s Bioinformatics Specialization. Free and paid options available. It has a 4-star weighted average rating over 3 reviews.
Intro to Machine Learning (Udacity): Prioritizes topic breadth and practical tools (in Python) over depth and theory. The instructors, Sebastian Thrun and Katie Malone, make this class so fun. Consists of bite-sized videos and quizzes followed by a mini-project for each lesson. Currently part of Udacity’s Data Analyst Nanodegree. Estimated timeline of ten weeks. Free. It has a 3.95-star weighted average rating over 19 reviews.
Machine Learning for Data Analysis (Wesleyan University/Coursera): A brief intro machine learning and a few select algorithms. Covers decision trees, random forests, lasso regression, and k-means clustering. Part of Wesleyan’s Data Analysis and Interpretation Specialization. Estimated timeline of four weeks. Free and paid options available. It has a 3.6-star weighted average rating over 5 reviews.
Programming with Python for Data Science (Microsoft/edX): Produced by Microsoft in partnership with Coding Dojo. Uses Python. Eight hours per week over six weeks. Free and paid options available. It has a 3.46-star weighted average rating over 37 reviews.
Machine Learning for Trading (Georgia Tech/Udacity): Focuses on applying probabilistic machine learning approaches to trading decisions. Uses Python. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Estimated timeline of four months. Free. It has a 3.29-star weighted average rating over 14 reviews.
Practical Machine Learning (Johns Hopkins University/Coursera): A brief, practical introduction to a number of machine learning algorithms. Several one/two-star reviews expressing a variety of concerns. Part of JHU’s Data Science Specialization. Four to nine hours per week over four weeks. Free and paid options available. It has a 3.11-star weighted average rating over 37 reviews.
Machine Learning for Data Science and Analytics (Columbia University/edX): Introduces a wide range of machine learning topics. Some passionate negative reviews with concerns including content choices, a lack of programming assignments, and uninspiring presentation. Seven to ten hours per week over five weeks. Free with a verified certificate available for purchase. It has a 2.74-star weighted average rating over 36 reviews.
Recommender Systems Specialization (University of Minnesota/Coursera): Strong focus on one specific type of machine learning — recommender systems. A four course specialization plus a capstone project, which is a case study. Taught using LensKit (an open-source toolkit for recommender systems). Free and paid options available. It has a 2-star weighted average rating over 2 reviews.
Machine Learning With Big Data (University of California, San Diego/Coursera): Terrible reviews that highlight poor instruction and evaluation. Some noted it took them mere hours to complete the whole course. Part of UCSD’s Big Data Specialization. Free and paid options available. It has a 1.86-star weighted average rating over 14 reviews.
Practical Predictive Analytics: Models and Methods (University of Washington/Coursera): A brief intro to core machine learning concepts. One reviewer noted that there was a lack of quizzes and that the assignments were not challenging. Part of UW’s Data Science at Scale Specialization. Six to eight hours per week over four weeks. Free and paid options available. It has a 1.75-star weighted average rating over 4 reviews.
The following courses had one or no reviews as of May 2017.
Machine Learning for Musicians and Artists (Goldsmiths, University of London/Kadenze): Unique. Students learn algorithms, software tools, and machine learning best practices to make sense of human gesture, musical audio, and other real-time data. Seven sessions in length. Audit (free) and premium ($10 USD per month) options available. It has one 5-star review.
Applied Machine Learning in Python (University of Michigan/Coursera): Taught using Python and the scikit-learn toolkit. Part of the Applied Data Science with Python Specialization. Scheduled to start May 29th. Free and paid options available.
Applied Machine Learning (Microsoft/edX): Taught using various tools, including Python, R, and Microsoft Azure Machine Learning (note: Microsoft produces the course). Includes hands-on labs to reinforce the lecture content. Three to four hours per week over six weeks. Free with a verified certificate available for purchase.
Machine Learning with Python (Big Data University): Taught using Python. Targeted towards beginners. Estimated completion time of four hours. Big Data University is affiliated with IBM. Free.
Machine Learning with Apache SystemML (Big Data University): Taught using Apache SystemML, which is a declarative style language designed for large-scale machine learning. Estimated completion time of eight hours. Big Data University is affiliated with IBM. Free.
Machine Learning for Data Science (University of California, San Diego/edX): Doesn’t launch until January 2018. Programming examples and assignments are in Python, using Jupyter notebooks. Eight hours per week over ten weeks. Free with a verified certificate available for purchase.
Introduction to Analytics Modeling (Georgia Tech/edX): The course advertises R as its primary programming tool. Five to ten hours per week over ten weeks. Free with a verified certificate available for purchase.
Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology/FutureLearn): Brief overview of a few algorithms. Uses Hewlett Packard Enterprise’s Vertica Analytics platform as an applied tool. Start date to be announced. Two hours per week over four weeks. Free with a Certificate of Achievement available for purchase.
Introducción al Machine Learning (Universitas Telefónica/Miríada X): Taught in Spanish. An introduction to machine learning that covers supervised and unsupervised learning. A total of twenty estimated hours over four weeks.
Machine Learning Path Step (Dataquest): Taught in Python using Dataquest’s interactive in-browser platform. Multiple guided projects and a “plus” project where you build your own machine learning system using your own data. Subscription required.
The following six courses are offered by DataCamp. DataCamp’s hybrid teaching style leverages video and text-based instruction with lots of examples through an in-browser code editor. A subscription is required for full access to each course.
Introduction to Machine Learning (DataCamp): Covers classification, regression, and clustering algorithms. Uses R. Fifteen videos and 81 exercises with an estimated timeline of six hours.
Supervised Learning with scikit-learn (DataCamp): Uses Python and scikit-learn. Covers classification and regression algorithms. Seventeen videos and 54 exercises with an estimated timeline of four hours.
Unsupervised Learning in R (DataCamp): Provides a basic introduction to clustering and dimensionality reduction in R. Sixteen videos and 49 exercises with an estimated timeline of four hours.
Machine Learning Toolbox (DataCamp): Teaches the “big ideas” in machine learning. Uses R. 24 videos and 88 exercises with an estimated timeline of four hours.
Machine Learning with the Experts: School Budgets (DataCamp): A case study from a machine learning competition on DrivenData. Involves building a model to automatically classify items in a school’s budget. DataCamp’s “Supervised Learning with scikit-learn” is a prerequisite. Fifteen videos and 51 exercises with an estimated timeline of four hours.
Unsupervised Learning in Python (DataCamp): Covers a variety of unsupervised learning algorithms using Python, scikit-learn, and scipy. The course ends with students building a recommender system to recommend popular musical artists. Thirteen videos and 52 exercises with an estimated timeline of four hours.
Machine Learning (Tom Mitchell/Carnegie Mellon University): Carnegie Mellon’s graduate introductory machine learning course. A prerequisite to their second graduate level course, “Statistical Machine Learning.” Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. A 2011 version of the course also exists. CMU is one of the best graduate schools for studying machine learning and has a whole department dedicated to ML. Free.
Statistical Machine Learning (Larry Wasserman/Carnegie Mellon University): Likely the most advanced course in this guide. A follow-up to Carnegie Mellon’s Machine Learning course. Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. Free.
Undergraduate Machine Learning (Nando de Freitas/University of British Columbia): An undergraduate machine learning course. Lectures are filmed and put on YouTube with the slides posted on the course website. The course assignments are posted as well (no solutions, though). de Freitas is now a full-time professor at the University of Oxford and receives praise for his teaching abilities in various forums. Graduate version available (see below).
Machine Learning (Nando de Freitas/University of British Columbia): A graduate machine learning course. The comments in de Freitas’ undergraduate course (above) apply here as well.
This is the fifth of a six-piece series that covers the best online courses for launching yourself into the data science field. We covered programming in the first article, statistics and probability in the second article, intros to data science in the third article, and data visualization in the fourth.
The final piece will be a summary of those articles, plus the best online courses for other key topics such as data wrangling, databases, and even software engineering.
If you’re looking for a complete list of Data Science online courses, you can find them on Class Central’s Data Science and Big Data subject page.
If you enjoyed reading this, check out some of Class Central’s other pieces:
If you have suggestions for courses I missed, let me know in the responses!
If you found this helpful, click the 💚 so more people will see it here on Medium.
This is a condensed version of my original article published on Class Central, where I’ve included detailed course syllabi.
Curriculum Lead, Projects @ DataCamp. I created my own data science master’s program.
|
Michael Jordan | 34K | 16 | https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7?source=tag_archive---------8---------------- | Artificial Intelligence — The Revolution Hasn’t Happened Yet | Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us.
There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”
We didn’t do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times. The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.
I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills.
Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.
While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.
Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.
And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design.
The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.
Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems.
This confluence of ideas and technology trends has been rebranded as “AI” over the past few years. This rebranding is worthy of some scrutiny.
Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory and control theory already existed, and were often inspired by human intelligence (and animal intelligence), these fields were arguably focused on “low-level” signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. “AI” was meant to focus on something different — the “high-level” or “cognitive” capability of humans to “reason” and to “think.” Sixty years later, however, high-level reasoning and thought remain elusive. The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions.
Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.
Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often been hidden behind the scenes, and have been the handiwork of researchers focused on specific engineering challenges. Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon.
One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play.
The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of.
Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits.
For example, returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc — these problems should properly be viewed as challenges, not show-stoppers.
We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering.
Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction.
As for the necessity argument, it is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory?
A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda.
It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms.
Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.
IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean).
It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.)
But we need to move beyond the particular historical perspectives of McCarthy and Wiener.
We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II.
This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard.
While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities.
On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline.
Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be.
In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.
I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead.
Michael I. Jordan
Michael I. Jordan is a Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley.
|
Eran Kampf | 57 | 3 | https://developerzen.com/data-mining-handling-missing-values-the-database-bd2241882e72?source=tag_archive---------0---------------- | Data Mining — Handling Missing Values the Database – DeveloperZen | I’ve recently answered Predicting missing data values in a database on StackOverflow and thought it deserved a mention on DeveloperZen.
One of the important stages of data mining is preprocessing, where we prepare the data for mining. Real-world data tends to be incomplete, noisy, and inconsistent and an important task when preprocessing the data is to fill in missing values, smooth out noise and correct inconsistencies.
If we specifically look at dealing with missing data, there are several techniques that can be used. Choosing the right technique is a choice that depends on the problem domain — the data’s domain (sales data? CRM data? ...) and our goal for the data mining process.
So how can you handle missing values in your database?
1. Ignore the data row. This is usually done when the class label is missing (assuming your data mining goal is classification), or when many attributes are missing from the row (not just one). However, you’ll obviously get poor performance if the percentage of such rows is high.
For example, let’s say we have a database of students enrolment data (age, SAT score, state of residence, etc.) and a column classifying their success in college to “Low”, “Medium” and “High”. Let’s say our goal is to build a model predicting a student’s success in college. Data rows who are missing the success column are not useful in predicting success so they could very well be ignored and removed before running the algorithm.
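In pandas, that might look like the following sketch (the file and column names are just for illustration):

    import pandas as pd

    df = pd.read_csv("students.csv")        # hypothetical enrolment data
    df = df.dropna(subset=["success"])      # drop rows whose class label is missing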
2. Use a global constant to fill in the missing value. Decide on a new global constant value, like “unknown”, “N/A” or minus infinity, that will be used to fill all the missing values. This technique is used because sometimes it just doesn’t make sense to try and predict the missing value.
For example, let’s look at the students enrollment database again. Assuming the state of residence attribute data is missing for some students. Filling it up with some state doesn’t really makes sense as opposed to using something like “N/A”.
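A quick pandas sketch of this approach, again with illustrative names:

    import pandas as pd

    df = pd.read_csv("students.csv")          # hypothetical enrolment data
    df["state"] = df["state"].fillna("N/A")   # flag the missing value instead of guessing it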
3. Use the attribute mean (or median). Replace missing values of an attribute with the mean (or the median if it’s discrete) value for that attribute in the database.
For example, in a database of US family incomes, if the average income of a US family is X you can use that value to replace missing income values.
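For instance, with an assumed income column:

    import pandas as pd

    df = pd.read_csv("families.csv")                           # hypothetical income data
    df["income"] = df["income"].fillna(df["income"].mean())    # or .median() for skewed or discrete data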
4. Use the attribute mean (or median) for all samples belonging to the same class. Instead of using the mean (or median) of a certain attribute calculated over all the rows in a database, we can limit the calculation to the relevant class, making the value more relevant to the row we’re looking at.
Let’s say you have a cars pricing database that, among other things, classifies cars to “Luxury” and “Low budget” and you’re dealing with missing values in the cost field. Replacing missing cost of a luxury car with the average cost of all luxury cars is probably more accurate than the value you’d get if you factor in the low budget cars.
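With pandas, a groupby makes this a one-liner; the “segment” column standing in for the Luxury/Low budget class is an assumption:

    import pandas as pd

    df = pd.read_csv("cars.csv")                           # hypothetical pricing data
    df["cost"] = df["cost"].fillna(
        df.groupby("segment")["cost"].transform("mean")    # mean cost within each car class only
    )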
5. Use a data-mining algorithm to predict the most probable value. The missing value can be determined using regression, inference-based tools using Bayesian formalism, decision trees, or clustering algorithms (k-means, k-median, etc.).
For example, we could use clustering algorithms to create clusters of rows which will then be used for calculating an attribute mean or median as specified in technique #3. Another example could be using a decision tree to try and predict the probable value in the missing attribute, according to other attributes in the data.
I’d suggest looking into regression and decision trees first (ID3 tree generation) as they’re relatively easy and there are plenty of examples on the net...
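The article predates today’s off-the-shelf imputation tools, but the idea is easy to sketch with scikit-learn: train a decision tree on the rows where the value is known and predict it for the rest. Column names are illustrative, and the predictor columns are assumed to be complete themselves:

    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor

    df = pd.read_csv("cars.csv")                  # hypothetical data with some missing 'cost' values
    features = ["horsepower", "age", "mileage"]   # hypothetical predictor columns

    known = df[df["cost"].notna()]
    missing = df[df["cost"].isna()]

    tree = DecisionTreeRegressor().fit(known[features], known["cost"])
    df.loc[df["cost"].isna(), "cost"] = tree.predict(missing[features])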
Additional Notes
Originally published at www.developerzen.com on August 14, 2009.
Maker of things. Big data geek. Food Lover.
|
Oliver Lindberg | 1 | 7 | https://medium.com/the-lindberg-interviews/interview-with-googles-alfred-spector-on-voice-search-hybrid-intelligence-and-more-2f6216aa480c?source=tag_archive---------0---------------- | Interview with Google’s Alfred Spector on voice search, hybrid intelligence and more | Google’s a pretty good search engine, right? Well, you ain’t seen nothing yet. VP of research Alfred Spector talks to Oliver Lindberg about the technologies emerging from Google Labs — from voice search to hybrid intelligence and beyond
This article originally appeared in issue 198 of .net magazine in 2010 and was republished at www.techradar.com.
Google has always been tight-lipped about products that haven’t launched yet. It’s no secret, however, that thanks to the company’s bottom-up culture, its engineers are working on tons of new projects at the same time. Following the mantra of ‘release early, release often’, the speed at which the search engine giant is churning out tools is staggering. At the heart of it all is Alfred Spector, Google’s Vice President of Research and Special Initiatives.
One of the areas Google is making significant advances in is voice search. Spector is astounded by how rapidly it’s come along. The Google Mobile App features ‘search by voice’ capabilities that are available for the iPhone, BlackBerry, Windows Mobile and Android. All versions understand English (including US, UK, Australian and Indian-English accents) but the latest addition, for Nokia S60 phones, even introduces Mandarin speech recognition, which — because of its many different accents and tonal characteristics — posed a huge engineering challenge. It’s the most spoken language in the world, but as it isn’t exactly keyboard-friendly, voice search could become immensely popular in China.
“Voice is one of these grand technology challenges in computer science,” Spector explains. “Can a computer understand the human voice? It’s been worked on for many decades and what we’ve realised over the last couple of years is that search, particularly on handheld devices, is amenable to voice as an import mechanism. “It’s very valuable to be able to use voice. All of us know that no matter how good the keyboard, it’s tricky to type exactly the right thing into a searchbar, while holding your backpack and everything else.”
To get a computer to take account of your voice is no mean feat, of course. “One idea is to take all of the voices that the system hears over time into one huge pan-human voice model. So, on the one hand we have a voice that’s higher and with an English accent, and on the other hand my voice, which is deeper and with an American accent. They both go into one model, or it just becomes personalised to the individual; voice scientists are a little unclear as to which is the best approach.”
The research department is also making progress in machine translation. Google Translate already features 51 languages, including Swahili and Yiddish. The latest version introduces instant, real-time translation, phonetic input and text-to-speech support (in English). “We’re able to go from any language to any of the others, and there are 51 times 50, so 2,550 possibilities,” Spector explains. “We’re focusing on increasing the number of languages because we’d like to handle even those languages where there’s not an enormous volume of usage. It will make the web far more valuable to more people if they can access the English-or Chinese language web, for example.
“But we also continue to focus on quality because almost always the translations are valuable but imperfect. Sometimes it comes from training our translation system over more raw data, so we have, say, EU documents in English and French and can compare them and learn rules for translation. The other approach is to bring more knowledge into translation. For example, we’re using more syntactic knowledge today and doing automated parsing with language. It’s been a grand challenge of the field since the late 1950s. Now it’s finally achieved mass usage.”
The team, led by scientist Franz Josef Och, has been collecting data for more than 100 languages, and the Google Translator Toolkit, which makes use of the ‘wisdom of the crowds’, now even supports 345 languages, many of which are minority languages. The editor enables users to translate text, correct the automatic translation and publish it.
Spector thinks that this approach is the future. As computers become even faster, handling more and more data — a lot of it in the cloud — machines learn from users and thus become smarter. He calls this concept ‘hybrid intelligence’. “It’s very difficult to solve these technological problems without human input,” he says. “It’s hard to create a robot that’s as clever, smart and knowledgeable of the world as we humans are. But it’s not as tough to build a computational system like Google, which extends what we do greatly and gradually learns something about the world from us, but that requires our interpretation to make it really successful. “We need to get computers and people communicating in both directions, so the computer learns from the human and makes the human more effective.”
Examples of ‘hybrid intelligence’ are Google Suggest, which instantly offers popular searches as you type a search query, and the ‘did you mean?’ feature in Google search, which corrects you when you misspell a query in the search bar. The more you use it, the better the system gets.
Training computers to become seemingly more intelligent poses major hurdles for Google’s engineers. “Computers don’t train as efficiently as people do,” Spector explains. “Let’s take the chess example. If a Kasparov was the educator, we could count on almost anything he says as being accurate. But if you tried to learn from a million chess players, you learn from my children as well, who play chess but they’re 10 and eight. They’ll be right sometimes and not right other times. There’s noise in that, and some of the noise is spam. One also has to have careful regard for privacy issues.”
By collecting enormous amounts of data, Google hopes to create a powerful database that eventually will understand the relationship between words (for example, ‘a dog is an animal’ and ‘a dog has four legs’). The challenge is to try to establish these relationships automatically, using tons of information, instead of having experts teach the system. This database would then improve search results and language translations because it would have a better understanding of the meaning of the words.
There’s also a lot of research around ‘conceptual search’. “Let’s take a video of a couple in front of the Empire State Building. We watch the video and it’s clear they’re on their honeymoon. But what is the video about? Is it about love or honeymoons, or is it about renting office space? It’s a fundamentally challenging problem.”
One example of conceptual search is Google Image Swirl, which was added to Labs in November. Enter a keyword and you get a list of 12 images; clicking on each one brings up a cluster of related pictures. Click on any of them to expand the ‘wonder wheel’ further. Google notes that they’re not just the most relevant images; the algorithm determines the most relevant group of images with similar appearance and meaning.
To improve the world’s data, Google continues to focus on the importance of the open internet. Another Labs project, Google Fusion Tables facilitates data management in the cloud. It enables users to create tables, filter and aggregate data, merge it with other data sources and visualise it with Google Maps or the Google Visualisation API. The data sets can then be published, shared or kept private and commented on by people around the world. “It’s an example of open collaboration,” Spector says. “If it’s public, we can crawl it to make it searchable and easily visible to people. We hired one of the best database researchers in the world, Alon Halevy, to lead it.”
Google is aiming to make more information available more easily across multiple devices, whether it’s images, videos, speech or maps, no matter which language we’re using. Spector calls the impact “totally transparent processing — it revolutionises the role of computation in day-today life. The computer can break down all these barriers to communication and knowledge. No matter what device we’re using, we have access to things. We can do translations, there are books or government documents, and some day we hope to have medical records. Whatever you want, no matter where you are, you can find it.”
Spector retired in early 2015 and now serves as the CTO of Two Sigma Investments
This article originally appeared in issue 198 of .net magazine in 2010 and was republished at www.techradar.com. Photography by Andy Short
|
Xu Wenhao | 1 | 4 | https://xuwenhao.com/%E5%BB%BA%E8%AE%AE%E7%9A%84%E7%A8%8B%E5%BA%8F%E5%91%98%E5%AD%A6%E4%B9%A0lda%E7%AE%97%E6%B3%95%E7%9A%84%E6%AD%A5%E9%AA%A4-54168e081bc1?source=tag_archive---------0---------------- | Suggested Steps for Programmers Learning the LDA Algorithm – 蒸汽与魔法 | Recently, for work reasons, I spent some time learning the LDA algorithm. To be honest, for someone like me who studied CS rather than math, and who had basically never read anything on machine learning apart from Programming Collective Intelligence, at first I really couldn't find a way in. Nearly four months later, I finally have a basic understanding of how the algorithm is implemented, so I'm writing this down as a record and as a reference to help newcomers get up to speed quickly.
At first I simply downloaded Blei's original paper, but after reading only the beginning I was defeated by the Dirichlet distribution and a few formulas, and since I was concentrating on writing the actual code for my project, I set it aside. Then I realized I had completely forgotten the probability and statistics I learned as an undergraduate, so I went looking for my university probability textbook, found I had lent it to someone and lost track of it, and bought a new copy online. I spent a few days going over the basics again: Bayes' rule, the law of total probability, the normal distribution, the binomial distribution and so on. Later, when I had free evenings, I browsed the AI board on the Shuimu BBS and learned about PRML, the bible of machine learning; figuring I would be studying this for the long haul anyway, I got the electronic version and also ordered a printed, bound copy on Taobao. Over the Spring Festival holiday I read a little every night, skimming the first two chapters, reviewing the basic math once more, and getting a feel for the Bayesian school's way of modeling with conjugate priors. I then tried going back to Blei's paper, still couldn't quite follow it, and put it down again. At some point Tony asked me to prepare a talk for students at Fudan about how we use LDA in our project; to avoid showing my ignorance I went back to the papers and happened upon the popular-science-style Science article Topic Models Vs. Unstructured Data. After reading it and then rereading the Graphical Models chapter in PRML, I felt much clearer about the problem LDA tries to solve and how it goes about it. From a search engine I then found another article and, following its recommendations, read part of Gibbs Sampling for the Uninitiated. After that, I forget how, I came across Probabilistic Topic Models by Mark Steyvers and Tom Griffiths and finished it on a weekend round-trip flight to Beijing, at which point the model training process basically made sense. Finally I read a most-minimal implementation of LDA Gibbs sampling, then went back and read the PLDA source code, and at last had a relatively clear understanding of LDA.
All told, three months had passed, and quite a lot of that time was wasted. For example, I tried to read Blei's paper several times right at the start, without any of the background knowledge; I never finished it and got very little out of it. To sum up, I think that for us outsider programmers, getting a handle on LDA roughly requires the following knowledge:
Basically, after one pass around this loop, the basic concepts and the algorithm implementation should both be sorted out. Of course, the mathematical proofs are not so easily dealt with, but for an engineer, getting this far is enough to start doing real work. These steps are not suited to those of you doing PhDs and publishing papers, but looking at things this way first also makes it easier to develop an interest in the underlying math; otherwise, facing symbols and formulas all day long without big blocks of spare time, I feel it's easy to retreat and give up.
I've found that, as an engineer, I get a much better feel for things by reading code and real application examples. It seems I shouldn't spend most of my time on PRML alone; I need to read it side by side with code.
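In that spirit, here is a minimal sketch of the collapsed Gibbs sampling update at the heart of the simple LDA implementations mentioned above (a toy corpus and made-up hyperparameters, numpy only; it makes no claim to match PLDA or any particular implementation):

import numpy as np

docs = [[0, 1, 2, 1], [2, 3, 3, 0], [4, 5, 4, 5]]  # toy corpus: word ids per document
K, V, alpha, beta = 2, 6, 0.1, 0.01                # topics, vocabulary size, Dirichlet priors

rng = np.random.default_rng(0)
z = [[rng.integers(K) for _ in doc] for doc in docs]  # random initial topic assignments

ndk = np.zeros((len(docs), K))   # document-topic counts
nkw = np.zeros((K, V))           # topic-word counts
nk = np.zeros(K)                 # topic totals
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(200):             # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1    # remove the current assignment
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())              # resample this token's topic
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

After enough sweeps, the count matrices give the usual estimates of the topic-word and document-topic distributions.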
|
Netflix Technology Blog | 439 | 9 | https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429?source=tag_archive---------0---------------- | Netflix Recommendations: Beyond the 5 stars (Part 1) | by Xavier Amatriain and Justin Basilico (Personalization Science and Engineering)
In this two-part blog post, we will open the doors of one of the most valued Netflix assets: our recommendation system. In Part 1, we will relate the Netflix Prize to the broader recommendation challenge, outline the external components of our personalized service, and highlight how our task has evolved with the business. In Part 2, we will describe some of the data and models that we use and discuss our approach to algorithmic innovation that combines offline machine learning experimentation with online AB testing. Enjoy... and remember that we are always looking for more star talent to add to our great team, so please take a look at our jobs page.
In 2006 we announced the Netflix Prize, a machine learning and data mining competition for movie rating prediction. We offered $1 million to whoever improved the accuracy of our existing system called Cinematch by 10%. We conducted this competition to find new ways to improve the recommendations we provide to our members, which is a key part of our business. However, we had to come up with a proxy question that was easier to evaluate and quantify: the root mean squared error (RMSE) of the predicted rating. The race was on to beat our RMSE of 0.9525 with the finish line of reducing it to 0.8572 or less.
A year into the competition, the Korbell team won the first Progress Prize with an 8.43% improvement. They reported more than 2000 hours of work in order to come up with the final combination of 107 algorithms that gave them this prize. And, they gave us the source code. We looked at the two underlying algorithms with the best performance in the ensemble: Matrix Factorization (which the community generally called SVD, Singular Value Decomposition) and Restricted Boltzmann Machines (RBM). SVD by itself provided a 0.8914 RMSE, while RBM alone provided a competitive but slightly worse 0.8990 RMSE. A linear blend of these two reduced the error to 0.88. To put these algorithms to use, we had to work to overcome some limitations, for instance that they were built to handle 100 million ratings, instead of the more than 5 billion that we have, and that they were not built to adapt as members added more ratings. But once we overcame those challenges, we put the two algorithms into production, where they are still used as part of our recommendation engine.
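As a rough illustration of what a linear blend means in practice (this is not Netflix code; the prediction arrays below are made up), the blend weights can be fit on a set of held-out ratings with ordinary least squares:

import numpy as np

# Hypothetical predictions from two models on a held-out set, plus the true ratings
svd_pred = np.array([3.8, 2.9, 4.4, 1.7, 3.2])
rbm_pred = np.array([3.6, 3.1, 4.1, 2.0, 3.5])
truth = np.array([4.0, 3.0, 4.5, 2.0, 3.0])

# Solve for blend weights (plus an intercept) that minimize squared error
X = np.column_stack([svd_pred, rbm_pred, np.ones_like(svd_pred)])
w, *_ = np.linalg.lstsq(X, truth, rcond=None)

blend = X @ w
rmse = np.sqrt(np.mean((blend - truth) ** 2))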
If you followed the Prize competition, you might be wondering what happened with the final Grand Prize ensemble that won the $1M two years later. This is a truly impressive compilation and culmination of years of work, blending hundreds of predictive models to finally cross the finish line. We evaluated some of the new methods offline but the additional accuracy gains that we measured did not seem to justify the engineering effort needed to bring them into a production environment. Also, our focus on improving Netflix personalization had shifted to the next level by then. In the remainder of this post we will explain how and why it has shifted.
One of the reasons our focus in the recommendation algorithms has changed is because Netflix as a whole has changed dramatically in the last few years. Netflix launched an instant streaming service in 2007, one year after the Netflix Prize began. Streaming has not only changed the way our members interact with the service, but also the type of data available to use in our algorithms. For DVDs our goal is to help people fill their queue with titles to receive in the mail over the coming days and weeks; selection is distant in time from viewing, people select carefully because exchanging a DVD for another takes more than a day, and we get no feedback during viewing. For streaming members are looking for something great to watch right now; they can sample a few videos before settling on one, they can consume several in one session, and we can observe viewing statistics such as whether a video was watched fully or only partially.
Another big change was the move from a single website into hundreds of devices. The integration with the Roku player and the Xbox were announced in 2008, two years into the Netflix competition. Just a year later, Netflix streaming made it into the iPhone. Now it is available on a multitude of devices that go from a myriad of Android devices to the latest AppleTV.
Two years ago, we went international with the launch in Canada. In 2011, we added 43 Latin-American countries and territories to the list. And just recently, we launched in UK and Ireland. Today, Netflix has more than 23 million subscribers in 47 countries. Those subscribers streamed 2 billion hours from hundreds of different devices in the last quarter of 2011. Every day they add 2 million movies and TV shows to the queue and generate 4 million ratings.
We have adapted our personalization algorithms to this new scenario in such a way that now 75% of what people watch is from some sort of recommendation. We reached this point by continuously optimizing the member experience and have measured significant gains in member satisfaction whenever we improved the personalization for our members. Let us now walk you through some of the techniques and approaches that we use to produce these recommendations.
We have discovered through the years that there is tremendous value to our subscribers in incorporating recommendations to personalize as much of Netflix as possible. Personalization starts on our homepage, which consists of groups of videos arranged in horizontal rows. Each row has a title that conveys the intended meaningful connection between the videos in that group. Most of our personalization is based on the way we select rows, how we determine what items to include in them, and in what order to place those items.
Take as a first example the Top 10 row: this is our best guess at the ten titles you are most likely to enjoy. Of course, when we say “you”, we really mean everyone in your household. It is important to keep in mind that Netflix’ personalization is intended to handle a household that is likely to have different people with different tastes. That is why when you see your Top10, you are likely to discover items for dad, mom, the kids, or the whole family. Even for a single person household we want to appeal to your range of interests and moods. To achieve this, in many parts of our system we are not only optimizing for accuracy, but also for diversity.
Another important element in Netflix’ personalization is awareness. We want members to be aware of how we are adapting to their tastes. This not only promotes trust in the system, but encourages members to give feedback that will result in better recommendations. A different way of promoting trust with the personalization component is to provide explanations as to why we decide to recommend a given movie or show. We are not recommending it because it suits our business needs, but because it matches the information we have from you: your explicit taste preferences and ratings, your viewing history, or even your friends’ recommendations.
On the topic of friends, we recently released our Facebook connect feature in 46 out of the 47 countries we operate — all but the US because of concerns with the VPPA law. Knowing about your friends not only gives us another signal to use in our personalization algorithms, but it also allows for different rows that rely mostly on your social circle to generate recommendations.
Some of the most recognizable personalization in our service is the collection of “genre” rows. These range from familiar high-level categories like “Comedies” and “Dramas” to highly tailored slices such as “Imaginative Time Travel Movies from the 1980s”. Each row represents 3 layers of personalization: the choice of genre itself, the subset of titles selected within that genre, and the ranking of those titles. Members connect with these rows so well that we measure an increase in member retention by placing the most tailored rows higher on the page instead of lower. As with other personalization elements, freshness and diversity is taken into account when deciding what genres to show from the thousands possible.
We present an explanation for the choice of rows using a member’s implicit genre preferences — recent plays, ratings, and other interactions — , or explicit feedback provided through our taste preferences survey. We will also invite members to focus a row with additional explicit preference feedback when this is lacking.
Similarity is also an important source of personalization in our service. We think of similarity in a very broad sense; it can be between movies or between members, and can be in multiple dimensions such as metadata, ratings, or viewing data. Furthermore, these similarities can be blended and used as features in other models. Similarity is used in multiple contexts, for example in response to a member’s action such as searching or adding a title to the queue. It is also used to generate rows of “adhoc genres” based on similarity to titles that a member has interacted with recently. If you are interested in a more in-depth description of the architecture of the similarity system, you can read about it in this past post on the blog.
In most of the previous contexts — be it in the Top10 row, the genres, or the similars — ranking, the choice of what order to place the items in a row, is critical in providing an effective personalized experience. The goal of our ranking system is to find the best possible ordering of a set of items for a member, within a specific context, in real-time. We decompose ranking into scoring, sorting, and filtering sets of movies for presentation to a member. Our business objective is to maximize member satisfaction and month-to-month subscription retention, which correlates well with maximizing consumption of video content. We therefore optimize our algorithms to give the highest scores to titles that a member is most likely to play and enjoy.
Now it is clear that the Netflix Prize objective, accurate prediction of a movie’s rating, is just one of the many components of an effective recommendation system that optimizes our members enjoyment. We also need to take into account factors such as context, title popularity, interest, evidence, novelty, diversity, and freshness. Supporting all the different contexts in which we want to make recommendations requires a range of algorithms that are tuned to the needs of those contexts. In the next part of this post, we will talk in more detail about the ranking problem. We will also dive into the data and models that make all the above possible and discuss our approach to innovating in this space.
On to part 2:
Originally published at techblog.netflix.com on April 6, 2012.
|
Netflix Technology Blog | 365 | 10 | https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-2-d9b96aa399f5?source=tag_archive---------1---------------- | Netflix Recommendations: Beyond the 5 stars (Part 2) | by Xavier Amatriain and Justin Basilico (Personalization Science and Engineering)
In part one of this blog post, we detailed the different components of Netflix personalization. We also explained how Netflix personalization, and the service as a whole, have changed from the time we announced the Netflix Prize.
The $1M Prize delivered a great return on investment for us, not only in algorithmic innovation, but also in brand awareness and attracting stars (no pun intended) to join our team. Predicting movie ratings accurately is just one aspect of our world-class recommender system. In this second part of the blog post, we will give more insight into our broader personalization technology. We will discuss some of our current models, data, and the approaches we follow to lead innovation and research in this space.
The goal of recommender systems is to present a number of attractive items for a person to choose from. This is usually accomplished by selecting some items and sorting them in the order of expected enjoyment (or utility). Since the most common way of presenting recommended items is in some form of list, such as the various rows on Netflix, we need an appropriate ranking model that can use a wide variety of information to come up with an optimal ranking of the items for each of our members.
If you are looking for a ranking function that optimizes consumption, an obvious baseline is item popularity. The reason is clear: on average, a member is most likely to watch what most others are watching. However, popularity is the opposite of personalization: it will produce the same ordering of items for every member. Thus, the goal becomes to find a personalized ranking function that is better than item popularity, so we can better satisfy members with varying tastes.
Recall that our goal is to recommend the titles that each member is most likely to play and enjoy. One obvious way to approach this is to use the member’s predicted rating of each item as an adjunct to item popularity. Using predicted ratings on their own as a ranking function can lead to items that are too niche or unfamiliar being recommended, and can exclude items that the member would want to watch even though they may not rate them highly. To compensate for this, rather than using either popularity or predicted rating on their own, we would like to produce rankings that balance both of these aspects. At this point, we are ready to build a ranking prediction model using these two features.
There are many ways one could construct a ranking function ranging from simple scoring methods, to pairwise preferences, to optimization over the entire ranking. For the purposes of illustration, let us start with a very simple scoring approach by choosing our ranking function to be a linear combination of popularity and predicted rating. This gives an equation of the form f_rank(u,v) = w1*p(v) + w2*r(u,v) + b, where u = user, v = video item, p = popularity, and r = predicted rating. This equation defines a two-dimensional space like the one depicted below.
Once we have such a function, we can pass a set of videos through our function and sort them in descending order according to the score. You might be wondering how we can set the weights w1 and w2 in our model (the bias b is constant and thus ends up not affecting the final ordering). In other words, in our simple two-dimensional model, how do we determine whether popularity is more or less important than predicted rating? There are at least two possible approaches to this. You could sample the space of possible weights and let the members decide what makes sense after many A/B tests. This procedure might be time consuming and not very cost effective. Another possible answer involves formulating this as a machine learning problem: select positive and negative examples from your historical data and let a machine learning algorithm learn the weights that optimize your goal. This family of machine learning problems is known as “Learning to rank” and is central to application scenarios such as search engines or ad targeting. Note though that a crucial difference in the case of ranked recommendations is the importance of personalization: we do not expect a global notion of relevance, but rather look for ways of optimizing a personalized model.
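To make this concrete, here is a toy sketch (nothing like the production system) that treats historical plays as positive examples, fits a logistic regression on the two features, and uses the learned weights to score and sort a row of candidates; all numbers are invented:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [popularity, predicted_rating] per (member, video) pair,
# labeled 1 if the member played the title and 0 otherwise
X = np.array([[0.9, 3.1], [0.2, 4.6], [0.8, 2.0], [0.1, 2.5], [0.7, 4.2], [0.3, 1.9]])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)
w1, w2 = model.coef_[0]        # learned weights for popularity and predicted rating
b = model.intercept_[0]

# Score and sort one member's candidate videos with f_rank(u,v) = w1*p(v) + w2*r(u,v) + b
candidates = np.array([[0.6, 4.0], [0.9, 2.2], [0.4, 4.8]])
scores = candidates @ np.array([w1, w2]) + b
ranking = np.argsort(-scores)  # candidate indices, best first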
As you might guess, apart from popularity and rating prediction, we have tried many other features at Netflix. Some have shown no positive effect while others have improved our ranking accuracy tremendously. The graph below shows the ranking improvement we have obtained by adding different features and optimizing the machine learning algorithm.
Many supervised classification methods can be used for ranking. Typical choices include Logistic Regression, Support Vector Machines, Neural Networks, or Decision Tree-based methods such as Gradient Boosted Decision Trees (GBDT). On the other hand, a great number of algorithms specifically designed for learning to rank have appeared in recent years such as RankSVM or RankBoost. There is no easy answer to choose which model will perform best in a given ranking problem. The simpler your feature space is, the simpler your model can be. But it is easy to get trapped in a situation where a new feature does not show value because the model cannot learn it. Or, the other way around, to conclude that a more powerful model is not useful simply because you don’t have the feature space that exploits its benefits.
The previous discussion on the ranking algorithms highlights the importance of both data and models in creating an optimal personalized experience for our members. At Netflix, we are fortunate to have many relevant data sources and smart people who can select optimal algorithms to turn data into product features. Here are some of the data sources we can use to optimize our recommendations:
So, what about the models? One thing we have found at Netflix is that with the great availability of data, both in quantity and types, a thoughtful approach is required to model selection, training, and testing. We use all sorts of machine learning approaches: From unsupervised methods such as clustering algorithms to a number of supervised classifiers that have shown optimal results in various contexts. This is an incomplete list of methods you should probably know about if you are working in machine learning for personalization:
Consumer Data Science
The abundance of source data, measurements and associated experiments allow us to operate a data-driven organization. Netflix has embedded this approach into its culture since the company was founded, and we have come to call it Consumer (Data) Science. Broadly speaking, the main goal of our Consumer Science approach is to innovate for members effectively. The only real failure is the failure to innovate; or as Thomas Watson Sr, founder of IBM, put it: “If you want to increase your success rate, double your failure rate.” We strive for an innovation culture that allows us to evaluate ideas rapidly, inexpensively, and objectively. And, once we test something we want to understand why it failed or succeeded. This lets us focus on the central goal of improving our service for our members.
So, how does this work in practice? It is a slight variation over the traditional scientific process called A/B testing (or bucket testing):
When we execute A/B tests, we track many different metrics. But we ultimately trust member engagement (e.g. hours of play) and retention. Tests usually have thousands of members and anywhere from 2 to 20 cells exploring variations of a base idea. We typically have scores of A/B tests running in parallel. A/B tests let us try radical ideas or test many approaches at the same time, but the key advantage is that they allow our decisions to be data-driven. You can read more about our approach to A/B Testing in this previous tech blog post or in some of the Quora answers by our Chief Product Officer Neil Hunt.
An interesting follow-up question that we have faced is how to integrate our machine learning approaches into this data-driven A/B test culture at Netflix. We have done this with an offline-online testing process that tries to combine the best of both worlds. The offline testing cycle is a step where we test and optimize our algorithms prior to performing online A/B testing. To measure model performance offline we track multiple metrics used in the machine learning community: from ranking measures such as normalized discounted cumulative gain, mean reciprocal rank, or fraction of concordant pairs, to classification metrics such as accuracy, precision, recall, or F-score. We also use the famous RMSE from the Netflix Prize or other more exotic metrics to track different aspects like diversity. We keep track of how well those metrics correlate to measurable online gains in our A/B tests. However, since the mapping is not perfect, offline performance is used only as an indication to make informed decisions on follow up tests.
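For concreteness, here is a small sketch of one of those offline ranking metrics, normalized discounted cumulative gain, in one common form (the relevance scores below are made up):

import numpy as np

def dcg(relevances):
    relevances = np.asarray(relevances, dtype=float)
    discounts = np.log2(np.arange(2, relevances.size + 2))  # log2(2), log2(3), ...
    return np.sum(relevances / discounts)

def ndcg(relevances):
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevance of the titles in the order our ranker proposed them
print(ndcg([3, 2, 3, 0, 1]))  # 1.0 would mean a perfect ordering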
Once offline testing has validated a hypothesis, we are ready to design and launch the A/B test that will prove the new feature valid from a member perspective. If it does, we will be ready to roll out in our continuous pursuit of the better product for our members. The diagram below illustrates the details of this process.
An extreme example of this innovation cycle is what we called the Top10 Marathon. This was a focused, 10-week effort to quickly test dozens of algorithmic ideas related to improving our Top10 row. Think of it as a 2-month hackathon with metrics. Different teams and individuals were invited to contribute ideas and code in this effort. We rolled out 6 different ideas as A/B tests each week and kept track of the offline and online metrics. The winning results are already part of our production system.
The Netflix Prize abstracted the recommendation problem to a proxy question of predicting ratings. But member ratings are only one of the many data sources we have and rating predictions are only part of our solution. Over time we have reformulated the recommendation problem to the question of optimizing the probability a member chooses to watch a title and enjoys it enough to come back to the service. More data availability enables better results. But in order to get those results, we need to have optimized approaches, appropriate metrics and rapid experimentation.
To excel at innovating personalization, it is insufficient to be methodical in our research; the space to explore is virtually infinite. At Netflix, we love choosing and watching movies and TV shows. We focus our research by translating this passion into strong intuitions about fruitful directions to pursue; under-utilized data sources, better feature representations, more appropriate models and metrics, and missed opportunities to personalize. We use data mining and other experimental approaches to incrementally inform our intuition, and so prioritize investment of effort. As with any scientific pursuit, there’s always a contribution from Lady Luck, but as the adage goes, luck favors the prepared mind. Finally, above all, we look to our members as the final judges of the quality of our recommendation approach, because this is all ultimately about increasing our members’ enjoyment in their own Netflix experience. We are always looking for more people to join our team of “prepared minds”. Make sure you take a look at our jobs page.
Originally published at techblog.netflix.com on June 20, 2012.
|
Wolf Garbe | 6 | 6 | https://medium.com/@wolfgarbe/1000x-faster-spelling-correction-algorithm-2012-8701fcd87a5f?source=tag_archive---------2---------------- | 1000x Faster Spelling Correction algorithm (2012) – Wolf Garbe – Medium | Update1: An improved SymSpell implementation is now 1,000,000x faster. Update2: SymSpellCompound with compound-aware spelling correction. Update3: Benchmark of SymSpell, BK-tree and Norvig's spell-correct.
Recently I answered a question on Quora about spelling correction for search engines. When I described our SymSpell algorithm I was pointed to Peter Norvig’s page where he outlined his approach.
Both algorithms are based on Edit distance (Damerau-Levenshtein distance). Both try to find the dictionary entries with smallest edit distance from the query term.
If the edit distance is 0, the term is spelled correctly; if the edit distance is <=2, the dictionary term is used as a spelling suggestion. But SymSpell uses a different way to search the dictionary, resulting in a significant performance gain and language independence. There are three ways to search for minimum edit distance in a dictionary:
1. Naive approach: The obvious way of doing this is to compute the edit distance from the query term to each dictionary term, before selecting the string(s) of minimum edit distance as a spelling suggestion. This exhaustive search is inordinately expensive.
Source: Christopher D. Manning, Prabhakar Raghavan & Hinrich Schütze: Introduction to Information Retrieval.
The performance can be significantly improved by terminating the edit distance calculation as soon as a threshold of 2 or 3 has been reached.
2. Peter Norvig: Generate all possible terms with an edit distance (deletes + transposes + replaces + inserts) from the query term and search for them in the dictionary. For a word of length n, an alphabet size a, and an edit distance d=1, there will be n deletions, n-1 transpositions, a*n alterations, and a*(n+1) insertions, for a total of 2n+2an+a-1 terms at search time.
Source: Peter Norvig: How to Write a Spelling Corrector.
This is much better than the naive approach, but still expensive at search time (114,324 terms for n=9, a=36, d=2) and language dependent (because the alphabet is used to generate the terms, which is different in many languages and huge in Chinese: a=70,000 Unicode Han characters)
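A compact sketch of that candidate generation for edit distance 1, essentially the edits1 function from Norvig's essay, trimmed down:

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in alphabet]
    inserts = [L + c + R for L, R in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

Applying the function a second time to each of its results yields the edit distance 2 candidate set referred to above, and the candidate count grows with the alphabet size a, roughly as the 2n+2an+a-1 formula suggests.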
3. Symmetric Delete Spelling Correction (SymSpell): Generate terms with an edit distance (deletes only) from each dictionary term and add them, together with the original term, to the dictionary. This has to be done only once, during a pre-calculation step. Then generate terms with an edit distance (deletes only) from the input term and search for them in the dictionary. For a word of length n, an alphabet size of a, and an edit distance of 1, there will be just n deletions, for a total of n terms at search time.
This is three orders of magnitude less expensive (36 terms for n=9 and d=2) and language independent (the alphabet is not required to generate deletes). The cost of this approach is the pre-calculation time and storage space of x deletes for every original dictionary entry, which is acceptable in most cases.
The number x of deletes for a single dictionary entry depends on the maximum edit distance: x=n for edit distance=1, x=n*(n-1)/2 for edit distance=2, x=n!/d!/(n-d)! for edit distance=d (combinatorics: k out of n combinations without repetitions, with k=n-d, i.e. choosing which characters to keep). E.g. for a maximum edit distance of 2, an average word length of 5, and 100,000 dictionary entries, we need to additionally store 1,500,000 deletes (5 distance-1 deletes plus 10 distance-2 deletes per word, times 100,000 entries).
Remark 1: During the precalculation, different words in the dictionary might lead to the same delete term: delete(sun,1)==delete(sin,1)==sn. While we generate only one new dictionary entry (sn), internally we need to store both original terms as spelling correction suggestions (sun, sin).
Remark 2: There are four different comparison pair types: dictionary entry==input entry, delete(dictionary entry,p1)==input entry, dictionary entry==delete(input entry,p2), and delete(dictionary entry,p1)==delete(input entry,p2).
The last comparison type is required for replaces and transposes only. But we need to check whether the suggested dictionary term is really a replace or an adjacent transpose of the input term to prevent false positives of higher edit distance (bank==bnak and bank==bink, but bank!=kanb and bank!=xban and bank!=baxn).
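A minimal Python sketch of the symmetric delete idea (not the released C# implementation, and using plain Levenshtein rather than Damerau-Levenshtein for brevity): precalculate deletes for every dictionary term, map each delete back to the set of original terms as in Remark 1, and verify candidates with a real edit distance as in Remark 2:

from collections import defaultdict
from itertools import combinations

def deletes(term, max_d):
    # All strings reachable from 'term' by removing up to max_d characters
    out = set()
    for d in range(1, min(max_d, len(term)) + 1):
        for idx in combinations(range(len(term)), d):
            out.add("".join(c for i, c in enumerate(term) if i not in idx))
    return out

def edit_distance(a, b):
    # Plain Levenshtein, good enough for filtering candidates in this sketch
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def build_index(dictionary, max_d=2):
    index = defaultdict(set)
    for term in dictionary:
        index[term].add(term)       # the original term itself
        for d in deletes(term, max_d):
            index[d].add(term)      # delete -> originals, e.g. "sn" -> {"sun", "sin"}
    return index

def lookup(query, index, max_d=2):
    candidates = set()
    for key in {query} | deletes(query, max_d):
        candidates |= index.get(key, set())
    # Verify with a true edit distance to weed out false positives (Remark 2)
    return sorted(t for t in candidates if edit_distance(query, t) <= max_d)

index = build_index(["sun", "sin", "bank", "banana"])
print(lookup("bnak", index))        # -> ['bank']

Because the index keys are deletes of dictionary terms, the lookup never touches the alphabet, which is what makes the approach language independent.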
Remark 3: Instead of a dedicated spelling dictionary we are using the search engine index itself. This has several benefits:
Remark 4: We have implemented query suggestions/completion in a similar fashion. This is a good way to prevent spelling errors in the first place. Every newly indexed word, whose frequency is over a certain threshold, is stored as a suggestion to all of its prefixes (they are created in the index if they do not yet exist). As we anyway provide an instant search feature the lookup for suggestions comes also at almost no extra cost. Multiple terms are sorted by the number of results stored in the index.
Reasoning: The SymSpell algorithm exploits the fact that the edit distance between two terms is symmetrical:
We are using variant 3, because the delete-only-transformation is language independent and three orders of magnitude less expensive.
Where does the speed come from?
Computational complexity: The SymSpell algorithm is constant time (O(1) time), i.e. independent of the dictionary size (but depending on the average term length and maximum edit distance), because our index is based on a hash table, which has an average search time complexity of O(1).
Comparison to other approaches: BK-trees have a search time of O(log dictionary_size), whereas the SymSpell algorithm is constant time (O(1)), i.e. independent of the dictionary size. Tries have search performance comparable to our approach. But a trie is a prefix tree, which requires a common prefix. This makes it suitable for autocomplete or search suggestions, but not applicable to spell checking. If your typing error is, for example, in the first letter, then you have no common prefix, hence the trie will not work for spelling correction.
Application: Possible application fields of the SymSpell algorithm are those of fast approximate dictionary string matching: spell checkers for word processors and search engines, correction systems for optical character recognition, natural language translation based on translation memory, record linkage, de-duplication, matching DNA sequences, fuzzy string searching and fraud detection.
Source code: The C# implementation of the Symmetric Delete Spelling Correction algorithm is released on GitHub as open source under the MIT License: https://github.com/wolfgarbe/symspell
Ports: There are ports available in C++, Crystal, Go, Java, JavaScript, Python, Ruby, Rust, Scala, and Swift.
Originally published at blog.faroo.com on June 7, 2012.
|
Paul Christiano | 43 | 31 | https://ai-alignment.com/a-formalization-of-indirect-normativity-7e44db640160?source=tag_archive---------3---------------- | Formalizing indirect normativity – AI Alignment | This post outlines a formalization of what Nick Bostrom calls “indirect normativity.” I don’t think it’s an adequate solution to the AI control problem; but to my knowledge it was the first precise specification of a goal that meets the “not terrible” bar, i.e. which does not lead to terrible consequences if pursued without any caveats or restrictions. The proposal outlined here was sketched in early 2012 while I was visiting FHI, and was my first serious foray into AI control.
When faced with the challenge of writing down precise moral principles, adhering to the standards demanded in mathematics, moral philosophers encounter two serious difficulties:
In light of these difficulties, a moral philosopher might simply declare: “It is not my place to aspire to mathematical standards of precision. Ethics as a project inherently requires shared language, understanding, and experience; it becomes impossible or meaningless without them.”
This may be a defensible philosophical position, but unfortunately the issue is not entirely philosophical. In the interest of building institutions or machines which reliably pursue what we value, we may one day be forced to describe precisely “what we value” in a way that does not depend on charitable or “common sense” interpretation (in the same way that we today must describe “what we want done” precisely to computers, often with considerable effort). If some aspects of our values cannot be described formally, then it may be more difficult to use institutions or machines to reliably satisfy them. This is not to say that describing our values formally is necessary to satisfying them, merely that it might make it easier.
Since we are focusing on finding any precise and satisfactory moral theory, rather than resolving disputes in moral philosophy, we will adopt a consequentialist approach without justification and focus on axiology. Moreover, we will begin from the standpoint of expected utility maximization, and leave aside questions about how or over what space the maximization is performed.
We aim to mathematically define a utility function U such that we would be willing to build a hypothetical machine which exceptionlessly maximized U, possibly at the catastrophic expense of any other values. We will assume that the machine has an ability to reason which at least rivals that of humans, and is willing to tolerate arbitrarily complex definitions of U (within its ability to reason about them).
We adopt an indirect approach. Rather than specifying what exactly we want, we specify a process for determining what we want. This process is extremely complex, so that any computationally limited agent will always be uncertain about the process’ output. However, by reasoning about the process it is possible to make judgments about which action has the highest expected utility in light of this uncertainty.
For example, I might adopt the principle: “a state of affairs is valuable to the extent that I would judge it valuable after a century of reflection.” In general I will be uncertain about what I would say after a century, but I can act on the basis of my best guesses: after a century I will probably prefer worlds with more happiness, and so today I should prefer worlds with more happiness. After a century I have only a small probability of valuing trees’ feelings, and so today I should go out of my way to avoid hurting them if it is either instrumentally useful or extremely easy. As I spend more time thinking, my beliefs about what I would say after a century may change, and I will start to pursue different states of affairs even though the formal definition of my values is static. Similarly, I might desire to think about the value of trees’ feelings, if I expect that my opinions are unstable: if I spend a month thinking about trees, my current views will then be a much better predictor of my views after a hundred years, and if I know better whether or not trees’ feelings are valuable, I can make better decisions.
This example is quite informal, but it communicates the main idea of the approach. We stress that the value of our contribution, if any, is in the possibility of a precise formulation. (Our proposal itself will be relatively informal; instead it is a description of how you would arrive at a precise formulation.) The use of indirection seems to be necessary to achieve the desired level of precision.
Our proposal contains only two explicit steps:
Each of these steps requires substantial elaboration, but we must also specify what we expect the human to do with these tools.
This proposal is best understood in the context of other fantastic-seeming proposals, such as “my utility is whatever I would write down if I reflected for a thousand years without interruption or biological decay.” The counterfactual events which take place within the definition are far beyond the realm our intuition recognizes as “realistic,” and have no place except in thought experiments. But to the extent that we can reason about these counterfactuals and change our behavior on the basis of that reasoning (if so motivated), we can already see how such fantastic situations could affect our more prosaic reality.
The remainder of this document consists of brief elaboration of some of these steps, and a few arguments about why this is a desirable process.
The first step of our proposal is a high-fidelity mathematical model of human cognition. We will set aside philosophical troubles, and assume that the human brain is a purely physical system which may be characterized mathematically. Even granting this, it is not clear how we can realistically obtain such a characterization.
The most obvious approach to characterizing a brain is to combine measurements of its behavior or architecture with an understanding of biology, chemistry, and physics. This project represents a massive engineering effort which is currently just beginning. Most pessimistically, our proposal could be postponed until this project’s completion. This could still be long before the mathematical characterization of the brain becomes useful for running experiments or automating human activities: because we are interested only in a definition, we do not care about having the computational resources necessary to simulate the brain.
An impractical mathematical definition, however, may be much easier to obtain. We can define a model of a brain in terms of exhaustive searches which could never be practically carried out. For example, given some observations of a neuron, we can formally define a brute force search for a model of that neuron. Similarly, given models of individual neurons we may be able to specify a brute force search over all ways of connecting those neurons which account for our observations of the brain (say, some data acquired through functional neuroimaging).
It may be possible to carry out this definition without exploiting any structural knowledge about the brain, beyond what is necessary to measure it effectively. By collecting imaging data for a human exposed to a wide variety of stimuli, we can recover a large corpus of data which must be explained by any model of a human brain. Moreover, by using our explicit knowledge of human cognition we can algorithmically generate an extensive range of tests which identify a successful simulation, by probing responses to questions or performance on games or puzzles.
In fact, this project may be possible using existing resources. The complexity of the human brain is not as unapproachable as it may at first appear: though it may contain 10^14 synapses, each described by many parameters, it can be specified much more compactly. A newborn's brain can be specified by about 10^9 bits of genetic information, together with a recipe for a physical simulation of development. The human brain appears to form new long-term memories at a rate of 1–2 bits per second, suggesting that it may be possible to specify an adult brain using 10^9 additional bits of experiential information. This suggests that it may require only about 10^10 bits of information to specify a human brain, which is at the limits of what can be reasonably collected by existing technology for functional neuroimaging.
This discussion has glossed over at least one question: what do we mean by ‘brain emulation’? Human cognition does not reside in a physical system with sharp boundaries, and it is not clear how you would define or use a simulation of the “input-output” behavior of such an object.
We will focus on some system which does have precisely defined input-output behavior, and which captures the important aspects of human cognition. Consider a system containing a human, a keyboard, a monitor, and some auxiliary instruments, well-insulated from the environment except for some wires carrying inputs to the monitor and outputs from the keyboard and auxiliary instruments (and wires carrying power). The inputs to this system are simply screens to be displayed on the monitor (say delivered as a sequence to be displayed one after another at 30 frames per second), while the outputs are the information conveyed from the keyboard and the other measuring apparatuses (also delivered as a sequence of data dumps, each recording activity from the last 30th of a second).
This “human in a box” system can be easily formally defined if a precise description of a human brain and coarse descriptions of the human body and the environment are available. Alternatively, the input-output behavior of the human in a box can be directly observed, and a computational model constructed for the entire system. Let H be a mathematical definition of the resulting (randomized) function from input sequences (In(1), In(2), ..., In(K)) to the next output Out(K). H is, by design, a good approximation to what the human “would output” if presented with any particular input sequence.
Using H, we can mathematically define what “would happen” if the human interacted with a wide variety of systems. For example, if we deliver Out(K) as the input to an abstract computer running some arbitrary software, and then define In(K+1) as what the screen would next display, we can mathematically define the distribution over transcripts which would have arisen if the human had interacted with the abstract computer. This computer could be running an interactive shell, a video game, or a messaging client.
Note that H reflects the behavior of a particular human, in a particular mental state. This state is determined by the process used to design H, or the data used to learn it. In general, we can control H by choosing an appropriate human and providing appropriate instructions / training. More emulations could be produced by similar measures if necessary. Using only a single human may seem problematic, but we will not rely on this lone individual to make all relevant ethical judgments. Instead, we will try to select a human with the motivational stability to carry out the subsequent steps faithfully, which will define U using the judgment of a community consisting of many humans.
This discussion has been brief and has necessarily glossed over several important difficulties. One difficulty is the danger of using computationally unbounded brute force search, given the possibility of short programs which exhibit goal-oriented behavior. Another difficulty is that, unless the emulation project is extremely conservative, the models it produces are not likely to be fully-functional humans. Their thoughts may be blurred in various ways, they may be missing many memories or skills, and they may lack important functionalities such as long-term memory formation or emotional expression. The scope of these issues depends on the availability of data from which to learn the relevant aspects of human cognition. Realistic proposals along these lines will need to accommodate these shortcomings, relying on distorted emulations as a tool to construct increasingly accurate models.
For any idealized “software”, with a distinguished instruction return, we can use H to mathematically define the distribution over return values which would result, if the human were to interact with that software. We will informally define a particular program T which provides a rich environment, in which the remainder of our proposal can be implemented. From a technical perspective this will be the last step of our proposal. The remaining steps will be reflected only in the intentions and behavior of the human being simulated in H.
Fix a convenient and adequately expressive language (say a dialect of Python designed to run on an abstract machine). T implements a standard interface for an interactive shell in this language: the user can look through all of the past instructions that have been executed and their return values (rendered as strings) or execute a new instruction. We also provide symbols representing H and T themselves (as functions from sequences of K inputs to a value for the Kth output). We also provide some useful information (such as a snapshot of the Internet, and some information about the process used to create H and T), which we encode as a bit string and store in a single environment variable data. We assume that our language of choice has a return instruction, and we have T return whenever the user executes this instruction. Some care needs to be taken to define the behavior if T enters an infinite loop–we want to minimize the probability that the human accidentally hangs the terminal, with catastrophic consequences, but we cannot provide a complete safety-net without running into unresolvable issues with self-reference.
We define U to be the value returned by H interacting with T. If H represented an unfortunate mental state, then this interaction could be short and unproductive: the simulated human could just decide to type ‘return 0’ and be done with it. However, by choosing an appropriate human to simulate and inculcating an appropriate mental state, we can direct the process further.
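As a purely illustrative toy (nothing remotely at the scale of the proposal, and with H as a trivial stand-in stub rather than a brain emulation), the shape of this definition, namely H interacting with a shell T whose return value defines U, can be written as a short interaction loop:

def H(inputs):
    # Stand-in for the emulation-derived function: maps the sequence of screens
    # seen so far to the next output. A real H would be vastly more complex.
    if len(inputs) < 3:
        return "print(len(history))"
    return "return 0.5"

def T_interact(H, max_steps=100):
    history = []                     # past instructions and their rendered return values
    inputs = ["<empty screen>"]
    for _ in range(max_steps):
        instruction = H(inputs)
        if instruction.startswith("return "):
            return float(instruction[len("return "):])    # this value defines U
        try:
            result = repr(eval(instruction, {"history": history}))
        except Exception as e:
            result = "error: " + str(e)
        history.append((instruction, result))
        inputs = inputs + [instruction + " -> " + result]  # the next screen shown to H
    return 0.0                       # crude guard against a hung terminal

U = T_interact(H)                    # 0.5 here; in the proposal, the output of a long deliberation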
We intend for H to use the resources in T to initiate a larger deliberative process. For example, the first step of this process may be to instantiate many copies of H, interacting with variants of messaging clients which are in contact with each other. The return value from the original process could then be defined as the value returned by a designated ‘leader’ from this community, or as a majority vote amongst the copies of H, or so on. Another step might be to create appropriate realistic virtual environments for simulated brains, rather than confining them to boxes. For motivational stability, it may be helpful to design various coordination mechanisms, involving frameworks for interaction, “cached” mental states which are frequently re-instantiated, or sanity checks whereby one copy of H monitors the behavior of another.
The resulting communities of simulated brains then engage in a protracted planning process, ensuring that subsequent steps can be carried out safely or developing alternative approaches. The main priority of this community is to reduce the probability of errors as far as possible (exactly what constitutes an ‘error’ will be discussed at more length later). At the end of this process, we obtain a formal definition of a new protocol H+, which submits its inputs for consideration to a large community and then produces its outputs using some deliberation mechanism (democratic vote, one leader using the rest of the community as advisors, etc.)
The next step requires our community of simulated brains to construct a detailed simulation of Earth which they can observe and manipulate. Once they have such a simulation, they have access to all of the data which would have been available on Earth. In particular, they can now explore many possible futures and construct simulations for each living human.
In order to locate Earth, we will again leverage an exhaustive search. First, H+ decides on informal desiderata for an “Earth simulation.” These are likely to be as follows:
Once H+ has decided on the desiderata, it uses a brute force search to find a simulation satisfying them: for each possible program it instantiates a new copy of H+ tasked with evaluating whether that program is an acceptable simulation. We then define E to be a uniform distribution over programs which pass this evaluation.
We might have doubts about whether this process produces the “real” Earth–perhaps even once we have verified that it is identical according to a laundry list of measures, it may still be different in other important ways. There are two reasons why we might care about such differences. First, if the simulated Earth has a substantially different set of people than the real Earth, then a different set of people will be involved in the subsequent decision making. If we care particularly about the opinions of the people who actually exist (which the reader might well, being amongst such people!) then this may be unsatisfactory. Second, if events transpire significantly differently on the simulated Earth than the real Earth, value judgments designed to guide behavior appropriately in the simulated Earth may lead to less appropriate behaviors in the real Earth. (This will not be a problem if our ultimate definition of U consists of universalizable ethical principles, but we will see that U might take other forms.)
These concerns are addressed by a few broad arguments. First, checking a detailed but arbitrary ‘laundry list’ actually provides a very strong guarantee. For example, if this laundry list includes verifying a snapshot of the Internet, then every event or person documented on the Internet must exist unchanged, and every keystroke of every person composing a document on the Internet must not be disturbed. If the world is well interconnected, then it may be very difficult to modify parts of the world without having substantial effects elsewhere, and so if a long enough arbitrary list of properties is fixed, we expect nearly all of the world to be the same as well. Second, if the essential character of the world is fixed but the details are varied, we should expect the sort of moral judgments reached by consensus to be relatively constant. Finally, if the system whose behavior depends on these moral judgments is identical between the real and simulated worlds, then outputting a U which causes that system to behave a certain way in the simulated world will also cause that system to behave that way in the real world.
Once H+ has defined a simulation of the world which permits inspection and intervention, by careful trial and error H+ can inspect a variety of possible futures. In particular, they can find interventions which cause the simulated human society to conduct a real brain emulation project and produce high-fidelity brain scans for all living humans.
Once these scans have been obtained, H+ can use them to define U as the output of a new community, H++, which draws on the expertise of all living humans operating under ideal conditions. There are two important degrees of flexibility: how to arrange the community for efficient communication and deliberation, and how to delegate the authority to define U. In terms of organization, the distinction between different approaches is probably not very important. For example, it would probably be perfectly satisfactory to start from a community of humans interacting with each other over something like the existing Internet (but on abstract, secure infrastructure). More important are the safety measures which would be in place, and the mechanism for resolving differences of value between different simulated humans.
The basic approach to resolving disputes is to allow each human to independently create a utility function U, each bounded in the interval [0, 1], and then to return their average. This average can either be unweighted, or can be weighted by a measure of each individual’s influence in the real world, in accordance with a game-theoretic notion like the Shapley value applied to abstract games or simulations of the original world. More sophisticated mechanisms are also possible, and may be desirable. Of course these questions can and should be addressed in part by H+ during its deliberation in the previous step. After all, H+ has access to an unlimited length of time to deliberate and has infinitely powerful computational aids. The role of our reasoning at this stage is simply to suggest that we can reasonably expect H+ to discover effective solutions.
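As a heavily simplified illustration of this aggregation rule, the sketch below averages bounded utility functions, optionally weighting each contributor by an influence score (such as a Shapley value computed elsewhere). The names and interfaces are assumptions for illustration; the actual mechanism would be settled by H+ during its deliberation.

```python
def aggregate_utilities(utility_fns, weights=None):
    """Combine per-person utility functions, each mapping a world state
    into [0, 1], into a single bounded utility function U."""
    n = len(utility_fns)
    if weights is None:
        weights = [1.0] * n                        # unweighted average
    total = float(sum(weights))
    weights = [w / total for w in weights]         # normalize influence weights

    def U(world_state):
        return sum(w * min(1.0, max(0.0, u(world_state)))   # clamp each U_i to [0, 1]
                   for w, u in zip(weights, utility_fns))
    return U
```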
As when discussing discovering a brain simulation by brute force, we have skipped over some critical issues in this section. In general, brute force searches (particularly over programs which we would like to run) are quite dangerous, because such searches will discover many programs with destructive goal-oriented behaviors. To deal with these issues, in both cases, we must rely on patience and powerful safety measures.
Once we have a formal description of a community of interacting humans, given as much time as necessary to deliberate and equipped with infinitely powerful computational aids, it becomes increasingly difficult to make coherent predictions about their behavior. Critically, though, we can also become increasingly confident that the outcome of their behavior will reflect their intentions. We sketch some possibilities, to illustrate the degree of flexibility available.
Perhaps the most natural possibility is for this community to solve some outstanding philosophical problems and to produce a utility function which directly captures their preferences. However, even if they quickly discovered a formulation which appeared to be attractive, they would still be wise to spend a great length of time and to leverage some of these other techniques to ensure that their proposed solution was really satisfactory.
Another natural possibility is to eschew a comprehensive theory of ethics, and define value in terms of the community’s judgment. We can define a utility function in terms of the hypothetical judgments of astronomical numbers of simulated humans, collaboratively evaluating the goodness of a state of affairs by examining its history at the atomic level, understanding the relevant higher-order structure, and applying human intuitions.
It seems quite likely that the community will gradually engage in self-modifications, enlarging their cognitive capacity along various dimensions as they come to understand the relevant aspects of cognition and judge such modifications to preserve their essential character. Either independently or as an outgrowth of this process, they may (gradually or abruptly) pass control to machine intelligences which they are suitably confident express their values. This process could be used to acquire the power necessary to define a utility function in one of the above frameworks, or understanding value-preserving self-modification or machine intelligence may itself prove an important ingredient in formalizing what it is we value. Any of these operations would be performed only after considerable analysis, when the original simulated humans were extremely confident in the desirability of the results.
Whatever path they take and whatever coordination mechanisms they use, eventually they will output a utility function U’. We then define U = 0 if U’ < 0, U = 1 if U’ > 1, and U = U’ otherwise.
At this point we have offered a proposal for formally defining a function U. We have made some general observations about what this definition entails. But now we may wonder to what extent U reflects our values, or more relevantly, to what extent our values are served by the creation of U-maximizers. Concerns may be divided into a few natural categories, and we respond to each of them in turn below.
If the process works as intended, we will reach a stage in which a large community of humans reflects on their values, undergoes a process of discovery and potentially self-modification, and then outputs its result. We may be concerned that this dynamic does not adequately capture what we value.
For example, we may believe that some other extrapolation dynamic captures our values, or that it is morally desirable to act on the basis of our current beliefs without further reflection, or that the presence of realistic disruptions, such as the threat of catastrophe, has an important role in shaping our moral deliberation.
The important observation, in the defense of our proposal, is that whatever objections we could think of today, we could think of within the simulation. If, upon reflection, we decide that too much reflection is undesirable, we can simply change our plans appropriately. If we decide that realistic interference is important for moral deliberation, we can construct a simulation in which such interference occurs, or determine our moral principles by observing moral judgments in our own world’s possible futures.
There is some chance that this proposal is inadequate for some reason which won’t be apparent upon reflection, but then by definition this is a fact which we cannot possibly hope to learn by deliberating now. It therefore seems quite difficult to maintain objections to the proposal along these lines.
One aspect of the proposal does get “locked in,” however, after being considered by only one human rather than by a large civilization: the distribution of authority amongst different humans, and the nature of mechanisms for resolving differing value judgments.
Here we have two possible defenses. One is that the mechanism for resolving such disagreements can be reflected on at length by the individual simulated in H. This individual can spend generations of subjective time, and greatly expand her own cognitive capacities, while attempting to determine the appropriate way to resolve such disagreements. However, this defense is not completely satisfactory: we may be able to rely on this individual to produce a very technically sound and generally efficient proposal, but the proposal itself is quite value laden and relying on one individual to make such a judgment is in some sense begging the question.
A second, more compelling, defense, is that the structure of our world has already provided a mechanism for resolving value disagreements. By assigning decision-making weight in a way that depends on current influence (for example, as determined by the simulated ability of various coalitions to achieve various goals), we can generate a class of proposals which are at a minimum no worse than the status quo. Of course, these considerations will also be shaped by the conditions surrounding the creation or maintenance of systems which will be guided by U–for example, if a nation were to create a U-maximizer, they might first adopt an internal policy for assigning influence on U. By performing this decision making in an idealized environment, we can also reduce the likelihood of destructive conflict and increase the opportunities for mutually beneficial bargaining. We may have moral objections to codifying this sort of “might makes right” policy, favoring a more democratic proposal or something else entirely, but as a matter of empirical fact a more ‘cosmopolitan’ proposal will be adopted only if it is supported by those with the appropriate forms of influence, a situation which is unchanged by precisely codifying existing power structure.
Finally, the values of the simulations in this process may diverge from the values of the original human models, for one reason or another. For example, the simulated humans may predictably disagree with the original models about ethical questions by virtue of (probably) having no physical instantiation. That is, the output of this process is defined in terms of what a particular human would do, in a situation which that human knows will never come to pass. If I ask “What would I do, if I were to wake up in a featureless room and were told that the future of humanity depended on my actions?” the answer might begin with “become distressed that I am clearly inhabiting a hypothetical situation, and adjust my ethical views to take into account the fact that people in hypothetical situations apparently have relevant first-person experience.” Setting aside the question of whether such adjustments are justified, they at least raise the possibility that our values may diverge from those of the simulations in this process.
These changes might be minimized, by understanding their nature in advance and treating them on a case-by-case basis (if we can become convinced that our understanding is exhaustive). For example, we could try and use humans who robustly employ updateless decision theories which never undergo such predictable changes, or we could attempt to engineer a situation in which all of the humans being emulated do have physical instantiations, and naive self-interest for those emulations aligns roughly with the desired behavior (for example, by allowing the early emulations to “write themselves into” our world).
We can imagine many ways in which this process can fail to work as intended–the original brain emulations may fail to accurately model human behavior, the original subject may deviate from the intended plans, or the simulated humans may make an error when interacting with their virtual environment which causes the process to get hijacked by some unintended dynamic.
We can argue that the proposal is likely to succeed, and can bolster the argument in various ways (by reducing the number of assumptions necessary for success, building in fault-tolerance, justifying each assumption more rigorously, and so on). However, we are unlikely to eliminate the possibility of error. Therefore we need to argue that if the process fails with some small probability, the resulting values will only be slightly disturbed.
This is the reason for requiring U to lie in the interval [0, 1]–we will see that this restriction bounds the damage which may be done by an unlikely failure.
If the process fails with some small probability ε, then we can represent the resulting utility function as U = (1 − ε) U1 + ε U2, where U1 is the intended utility function and U2 is a utility function produced by some arbitrary error process. Now consider two possible states of affairs A and B such that U1(A) > U1(B) + ε/(1 − ε) ≈ U1(B) + ε. Then since 0 ≤ U2 ≤ 1, we have:
U(A) = (1 − ε) U1(A) + ε U2(A) ≥ (1 − ε) U1(A) > (1 − ε) U1(B) + ε ≥ (1 − ε) U1(B) + ε U2(B) = U(B)
Thus if A is substantially better than B according to U1, then A is better than B according to U. This shows that a small probability of error, whether coming from the stochasticity of our process or an agent’s uncertainty about the process’ output, has only a small effect on the resulting values.
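A quick numeric check of this bound, using made-up values: even when the error term U2 is chosen adversarially, a gap of more than roughly ε under U1 survives in U.

```python
eps = 0.01
U1 = {"A": 0.80, "B": 0.78}    # U1(A) - U1(B) = 0.02 > eps / (1 - eps)
U2 = {"A": 0.00, "B": 1.00}    # worst case: the error term favors B as much as possible

U = {x: (1 - eps) * U1[x] + eps * U2[x] for x in ("A", "B")}
assert U["A"] > U["B"]         # A is still preferred: roughly 0.792 vs 0.782
```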
Moreover, the process contains humans who have access to a simulation of our world. This implies, in particular, that they have access to a simulation of whatever U-maximizing agents exist in the world, and they have knowledge of those agents’ beliefs about U. This allows them to choose U with perfect knowledge of the effects of error in these agents’ judgments.
In some cases this will allow them to completely negate the effect of error terms. For example, if the randomness in our process causes a perfectly cooperative community of simulated humans to “control” U with probability 2⁄3, and causes an arbitrary adversary to control it with probability 1⁄3, then the simulated humans can spend half of their mass outputting a utility function which exactly counters the effect of the adversary.
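One way to make the cancellation arithmetic explicit (purely as an illustration of the argument above): the adversary's contribution collapses into an additive constant, so maximizing the blended function is equivalent to maximizing the intended one.

```python
def blended_U(U_intended, U_adv, world):
    # community mass: 1/3 on the intended utility, 1/3 on the countering term (1 - U_adv)
    # adversary mass: 1/3 on U_adv
    return (1/3) * U_intended(world) + (1/3) * (1 - U_adv(world)) + (1/3) * U_adv(world)
    # = (1/3) * U_intended(world) + 1/3, a positive affine transform of U_intended,
    # so maximizing the blend is equivalent to maximizing U_intended.
```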
In general, the situation is not quite so simple: the fraction of mass controlled by any particular coalition will vary as the system’s uncertainty about U varies, and so it will be impossible to counteract the effect of an error term in a way which is time-independent. Instead, we will argue later that an appropriate choice of a bounded and noisy U can be used to achieve a very wide variety of effective behaviors of U-maximizers, overcoming the limitations both of bounded utility maximization and of noisy specification of utility functions.
Many possible problems with this scheme were described or implicitly addressed above. But that discussion was not exhaustive, and there are some classes of errors that fall through the cracks.
One interesting class of failures concerns changes in the values of the hypothetical human H. This human is in a very strange situation, and it seems quite possible that the physical universe we know contains extremely few instances of that situation (especially as the process unfolds and becomes more exotic). So H’s first-person experience of this situation may lead to significant changes in H’s views.
For example, our intuition that our own universe is valuable seems to be derived substantially from our judgment that our own first-person experiences are valuable. If hypothetically we found ourselves in a very alien universe, it seems quite plausible that we would judge the experiences within that universe to be morally valuable as well (depending perhaps on our initial philosophical inclinations).
Another example concerns our self-interest: much of individual humans’ values seem to depend on their own anticipations about what will happen to them, especially when faced with the prospect of very negative outcomes. If hypothetically we woke up in a completely non-physical situation, it is not exactly clear what we would anticipate, and this may distort our behavior. Would we anticipate the planned thought experiment occurring as planned? Would we focus our attention on those locations in the universe where a simulation of the thought experiment might be occurring? This possibility is particularly troubling in light of the incentives our scheme creates — anyone who can manipulate H’s behavior can have a significant effect on the future of our world, and so many may be motivated to create simulations of H.
A realistic U-maximizer will not be able to carry out the process described in the definition of U–in fact, this process probably requires immensely more computing resources than are available in the universe. (It may even involve the reaction of a simulated human to watching a simulation of the universe!) To what extent can we make robust guarantees about the behavior of such an agent?
We have already touched on this difficulty when discussing the maxim “A state of affairs is valuable to the extent I would judge it valuable after a century of reflection.” We cannot generally predict our own judgments in a hundred years’ time, but we can have well-founded beliefs about those judgments and act on the basis of those beliefs. We can also have beliefs about the value of further deliberation, and can strike a balance between such deliberation and acting on our current best guess.
A U-maximizer faces a similar set of problems: it cannot understand the exact form of U, but it can still have well-founded beliefs about U, and about what sorts of actions are good according to U. For example, if we suppose that the U-maximizer can carry out any reasoning that we can carry out, then the U-maximizer knows to avoid anything which we suspect would be bad according to U (for example, torturing humans). Even if the U-maximizer cannot carry out this reasoning, as long as it can recognize that humans have powerful predictive models for other humans, it can simply appropriate those models (either by carrying out reasoning inspired by human models, or by simply asking).
Moreover, the community of humans being simulated in our process has access to a simulation of whatever U-maximizer is operating under this uncertainty, and has a detailed understanding of that uncertainty. This allows the community to shape their actions in a way with predictable (to the U-maximizer) consequences.
It is easily conceivable that our values cannot be captured by a bounded utility function. Easiest to imagine is the possibility that some states of the world are much better than others, in a way that requires unbounded utility functions. But it is also conceivable that the framework of utility maximization is fundamentally not an appropriate one for guiding such an agent’s action, or that the notion of utility maximization hides subtleties which we do not yet appreciate.
We will argue that it is possible to transform bounded utility maximization into an arbitrary alternative system of decision-making, by designing a utility function which rewards worlds in which the U-maximizer replaced itself with an alternative decision-maker.
It is straightforward to design a utility function which is maximized in worlds where any particular U-maximizer converted itself into a non-U-maximizer–even if no simple characterization can be found for the desired act, we can simply instantiate many communities of humans to look over a world history and decide whether or not they judge the U-maximizer to have acted appropriately.
The more complicated question is whether a realistic U-maximizer can be made to convert itself into a non-U-maximizer, given that it is logically uncertain about the nature of U. It is at least conceivable that it couldn’t: if the desirability of some other behavior is only revealed by philosophical considerations which are too complex to ever be discovered by physically limited agents, then we should not expect any physically limited U-maximizer to respond to those considerations. Of course, in this case we could also not expect normal human deliberation to correctly capture our values. The relevant question is whether a U-maximizer could switch to a different normative framework, if an ordinary investment of effort by human society revealed that a different normative framework was more appropriate.
If a U-maximizer does not spend any time investigating this possibility, then it may not be expected to act on it. But to the extent that we assign a significant probability to the simulated humans deciding that a different normative framework is more appropriate, and to the extent that the U-maximizer is able to either emulate or accept our reasoning, it will also assign a significant probability to this possibility (unless it is able to rule it out by more sophisticated reasoning). If we (and the U-maximizer) expect the simulations to output a U which rewards a switch to a different normative framework, and this possibility is considered seriously, then U-maximization entails exploring this possibility. If these explorations suggest that the simulated humans probably do recommend some particular alternative framework, and will output a U which assigns high value to worlds in which this framework is adopted and low value to worlds in which it isn’t, then a U-maximizer will change frameworks.
Such a “change of frameworks” may involve sweeping action in the world. For example, the U-maximizer may have created many other agents which are pursuing activities instrumentally useful to maximizing U. These agents may then need to be destroyed or altered; anticipating this possibility, the U-maximizer is likely to take actions to ensure that its current “best guess” about U does not get locked in.
This argument suggests that a U-maximizer could adopt an arbitrary alternative framework, if it were feasible to conclude that humans would endorse that framework upon reflection.
Our proposal appears to be something of a cop out, in that it declines to directly take a stance on any ethical issues. Indeed, not only do we fail to specify a utility function ourselves, but we expect the simulations to which we have delegated the problem to in turn delegate it at least a few more times. Clearly at some point this process must bottom out with actual value judgments, and we may be concerned that this sort of “passing the buck” is just obscuring deeper problems which will arise when the process does bottom out.
As observed above, whatever such concerns we might have can also be discovered by the simulations we create. If there is some fundamental difficulty which always arises when trying to assign values, then we certainly have not exacerbated this problem by delegation. Nevertheless, there are still coherent objections one might raise. These objections can be met with a single response. In the current world, we face a broad range of difficult and often urgent problems. By passing the buck the first time, we delegate the resolution of ethical challenges to a civilization which does not have to deal with some of these difficulties–in particular, it faces no urgent existential threats. This allows us to divert as much energy as possible to dealing with practical problems today, while still capturing most of the benefits of nearly arbitrarily extensive ethical deliberation.
This process is defined in terms of the behavior of unthinkably many hypothetical brain emulations. It is conceivable that the moral status of these emulations may be significant.
We must make a distinction between two possible sources of moral value: it could be the case that a U-maximizer carries out simulations on physical hardware in order to better understand U, and these simulations have moral value, or it could be the case that the hypothetical emulations themselves have moral value.
In the first case, we can remark that the moral value of such simulations is itself incorporated into the definition of U. Therefore a U-maximizer will be sensitive to the possible suffering of simulations it runs while trying to learn about U–as long as it believes that we might be concerned about the simulations’ welfare, upon reflection, it can rely as much as possible on approaches which do not involve running simulations, which deprive simulations of the first-person experience of discomfort, or which estimate outcomes by running simulations in more pleasant circumstances. If the U-maximizer is able to foresee that we will consider certain sacrifices in simulation welfare worthwhile, then it will make those sacrifices. In general, in the same way that we can argue that estimates of U reflect our values over states of affairs, we can argue that estimates of U reflect our values over processes for learning about U.
In the second case, a U-maximizer in our world may have little ability to influence the welfare of hypothetical simulations invoked in the definition of U. However, the possible disvalue of these simulations’ experiences is probably seriously diminished.
In general the moral value of such hypothetical simulations’ experiences is somewhat dubious. If we simply write down the definition of U, these simulations seem to have no more reality than story-book characters whose activities we describe.
The best argument for their moral relevance comes from the great causal significance of their decisions: if the actions of a powerful U-maximizer depend on its beliefs about what a particular simulation would do in a particular situation, including for example that simulation’s awareness of discomfort or fear, or confusion at the absurdity of the hypothetical situation in which they find themselves, then it may be the case that those emotional responses are granted moral significance. However, although we may define astronomical numbers of hypothetical simulations, the detailed emotional responses of very few of these simulations will play an important role in the definition of U.
Moreover, for the most part the existences of the hypothetical simulations we define are extremely well-controlled by those simulations themselves, and may be expected to be counted as unusually happy by the lights of the simulations themselves. The early simulations (who have less such control) are created from an individual who has provided consent and is selected to find such situations particularly non-distressing.
Finally, we observe that U can exert control over the experiences of even hypothetical simulations. If the early simulations would experience morally relevant suffering because of their causal significance, but the later simulations they generate robustly disvalue this suffering, the later simulations can simulate each other and ensure that they all take the same actions, eliminating the causal significance of the earlier simulations.
Originally published at ordinaryideas.wordpress.com on April 21, 2012.
|
Robbie Tilton | 3 | 15 | https://medium.com/@robbietilton/emotional-computing-with-ai-3513884055fa?source=tag_archive---------4---------------- | Emotional Computing – Robbie Tilton – Medium | Investigating the human to computer relationship through reverse engineering the Turing test
Humans are getting closer to creating a computer with the ability to feel and think. Although the processes of the human brain are largely unknown, computer scientists have been working to simulate the human capacity to feel and understand emotions. This paper explores what it means to live in an age where computers can have emotional depth and what this means for the future of human to computer interactions. In an experiment between a human and a human disguised as a computer, the Turing test is reverse engineered in order to understand the role computers will play as they become more adept at the processes of the human mind. Implications of this study are discussed and directions for future research suggested.
The computer is a gateway technology that has opened up new ways of creation, communication, and expression. Computers in first world countries are a standard household item (approximately 70% of Americans owned one as of 2009 (US Census Bureau)) and are utilized as a tool to achieve a diverse range of goals. As this product continues to become more globalized, transistors are becoming smaller, processors are becoming faster, hard drives are holding information in new networked patterns, and humans are adapting to the methods of interaction expected of machines. At the same time, with more powerful computers and quicker means of communication, many researchers are exploring how a computer can serve as a tool to simulate the brain's cognition. If a computer is able to achieve the same intellectual and emotional properties as the human brain, we could potentially understand how we ourselves think and feel.
Coined at MIT, the term Affective Computing relates to the computation of emotion and affective phenomena, a field that breaks down complex processes of the brain and relates them to machine-like activities. Marvin Minsky, Rosalind Picard, Clifford Nass, and Scott Brave, along with many others, have contributed to this field and to what it would mean to have a computer that could fully understand its users. Their research makes it very clear that humans have the capacity to associate human emotions and personality traits with a machine (Nass and Brave, 2005), but can a human ever truly treat a machine as a person? In this paper we will uncover what it means for humans to interact with machines of greater intelligence and attempt to predict the future of human to computer interactions.
The human to computer relationship is continuously evolving and is dependent on the software interface users interact with. With regard to current wide-scale interfaces (OSX, Windows, Linux, iOS, and Android), the tools and abilities that a computer provides remain the central focus of computational advancement for commercial purposes. This relationship to software is driven by utilitarian needs, and humans do not expect emotional comprehension or intellectually equivalent thoughts from their household devices.
As face tracking, eye tracking, speech recognition, and kinetic recognition are advancing in their experimental laboratories, it is anticipated that these technologies will eventually make their way to the mainstream market to provide a new relationship to what a computer can understand about its users and how a user can interact with a computer.
This paper is not about whether a computer will have the ability to feel and love its user, but asks the question: to what capacity will humans be able to reciprocate feelings to a machine?
How does Intelligence Quotient (IQ) differ from Emotional Quotient (EQ)? IQ is a representational measure of intelligence that captures cognitive abilities like learning, understanding, and dealing with new situations. EQ is a measure of emotional intelligence and the ability to use both emotions and cognitive skills (Cherry).
Advances in computer IQ have been astonishing and have proved that machines are capable of answering difficult questions accurately, holding a conversation with human-like understanding, and allowing for emotional connections between a human and a machine. The Turing test in particular has shown the machine's ability to think and even fool a person into believing that it is human (the Turing test is explained in detail in section 4). Machines like Deep Blue, Watson, Eliza, Svetlana, CleverBot, and many more have all expanded perceptions of what a computer is and can be.
If an increased computational IQ can allow a human to computer relationship to feel more like a human to human interaction, what would the advancement of computational EQ bring us? Peter Robinson, a professor at the University of Cambridge, states that if a computer understands its users' feelings, then it can respond with an interaction that is more intuitive for its users (Robinson). In essence, EQ advocates feel that it can facilitate a more natural interaction process where collaboration can occur with a computer.
In Alan Turing’s Computing Machinery and Intelligence (Turing, 1950), a variant on the classic British parlor “imitation game” is proposed. The original game revolves around three players: a man (A), a woman (B), and an interrogator (C). The interrogator stays in a room apart from A and B and can only communicate with the participants through text-based communication (a typewriter or instant messenger style interface). When the game begins, one contestant (A or B) is asked to pretend to be the opposite gender and to try to convince the interrogator (C) of this. At the same time the opposing participant is given full knowledge that the other contestant is trying to fool the interrogator. With his computational background, Turing took this imitation game one step further by replacing one of the participants (A or B) with a machine, thus making the interrogator try to determine whether he or she was speaking to a human or a machine. In 1950, Turing proposed that by 2000 the average interrogator would not have more than a 70 percent chance of making the right identification after five minutes of questioning. The Turing test was first passed in 1966 with Eliza by Joseph Weizenbaum, a chat robot programmed to act like a Rogerian psychotherapist (Weizenbaum, 1966). In 1972, Kenneth Colby created a similar bot called PARRY that incorporated more personality than Eliza and was programmed to act like a paranoid schizophrenic (Bowden, 2006). Since these initial victories for the test, the 21st century has continued to provide machines with more human-like qualities and traits that have made people fall in love with them, convinced people that they are human, and demonstrated human-like reasoning.
Brian Christian, the author of The Most Human Human, argues that the problem with designing artificial intelligence with greater ability is that even though these machines are capable of learning and speaking, that they have no “self”. They are mere accumulations of identities and thoughts that are foreign to the machine and have no central identity of their own. He also argues that people are beginning to idealize the machine and admire machines capabilities more than their fellow humans — in essence — he argues humans are evolving to become more like machines with less of a notion of self (Christian 2011).
Turing states, “we like to believe that Man is in some subtle way superior to the rest of creation” and “it is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power.” If this is true, will humans idealize the future of the machine for its intelligence, or will the machine remain an inferior being, an object of our creation? Reversing the Turing test allows us to understand how humans will treat machines when machines provide an equivalent emotional and intellectual capacity. This also hits directly on Geoffrey Jefferson’s quote: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain, that is, not only write it but know that it had written it.”
Participants were given a chat-room simulation between two participants: (A) a human interrogator and (B) a human disguised as a computer. In this simulation A and B were placed in different rooms to avoid influence and communicated through a text-based interface. (A) was informed that (B) was an advanced computer chat-bot with the capacity to feel, understand, learn, and speak like a human. (B) was instructed to be themselves. Text-based communication was chosen to follow Turing’s argument that a computer’s voice should not help an interrogator determine whether it is a human or a computer. Pairings of participants took part in the interaction one at a time to avoid influence from other participants. Each experiment was five minutes in length to replicate Turing’s time constraints.
Twenty-eight graduate students were recruited from the NYU Interactive Telecommunications Program to participate in the study, 50% male and 50% female. The experiment was evenly distributed across men and women. After being recruited in person, participants were directed to a website that gave instructions and ran the experiment. Upon entering the website, (A) participants were told that we were in the process of evaluating an advanced cloud-based computing system that had the capacity to feel emotion, understand, learn, and converse like a human. (B) participants were instructed that they would be communicating with another person through text and to be themselves. They were also told that participant (A) thought they were a computer, but that they shouldn’t act like a computer or pretend to be one in any way. This allowed (A) to explicitly understand that they were talking to a computer while (B) knew (A)’s perspective and explicitly was not going to play the role of a computer. Participants were then directed to communicate with the bot or human freely without restrictions. After five minutes of conversation the participants were asked to stop and then filled out a questionnaire.
Participants were asked to rate the IQ and EQ of the person they were conversing with. (A) participants perceived the following of (B):
IQ: Not Good 0% / Barely Acceptable 0% / Okay 21.4% / Great 50% / Excellent 28.6% (average rating: 81.4%)
EQ: Not Good 0% / Barely Acceptable 7.1% / Okay 50% / Great 14.3% / Excellent 28.6% (average rating: 72.8%)
Ability to hold a conversation: Not Good 0% / Barely Acceptable 0% / Okay 28.6% / Great 35.7% / Excellent 35.7% (average rating: 81.4%)
(B) participants perceived the following of (A):
IQ: Not Good 0% / Barely Acceptable 21.4% / Okay 35.7% / Great 28.6% / Excellent 14.3% (average rating: 67%)
EQ: Not Good 7.1% / Barely Acceptable 14.3% / Okay 28.6% / Great 35.7% / Excellent 14.3% (average rating: 67%)
Ability to hold a conversation: Not Good 7.1% / Barely Acceptable 28.6% / Okay 35.7% / Great 0% / Excellent 28.6% (average rating: 62.8%)
Overall, (A) participants gave the perceived chatbot higher ratings than (B) participants gave (A). In particular, the highest rating was for the chatbot’s IQ, indicating that people viewed the chatbot as intellectually competent. The data also suggests that people display lower IQ, EQ, and conversational ability when they believe they are communicating with a computer.
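The reported average ratings are consistent with mapping the five response options to 20%, 40%, 60%, 80%, and 100% and taking a response-weighted mean. That mapping is our assumption rather than something stated in the write-up; a minimal sketch of the calculation:

```python
# Assumed mapping from response option to a percentage score (not stated in the paper).
SCALE = {"Not Good": 20, "Barely Acceptable": 40, "Okay": 60, "Great": 80, "Excellent": 100}

def average_rating(distribution):
    """distribution: response option -> fraction of participants."""
    return sum(SCALE[option] * fraction for option, fraction in distribution.items())

iq_of_b = {"Okay": 0.214, "Great": 0.50, "Excellent": 0.286}
print(round(average_rating(iq_of_b), 1))   # 81.4, matching the reported IQ average for (B)
```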
(A) participants were allowed to decide their username within the chat system to best reflect how they wanted to portray themselves to the machine. (B) participants were designated the gender-neutral name “Bot” in an attempt to gauge gender perceptions of the machine. The male to female ratio was divided evenly among all participants: 50% male and 50% female.
(A) participants 50% of the time thought (B) was a male, 7.1% a female, and 42.9% gender neutral. On the other hand, (B) participants 28.6% of the time thought (A) was a male, 57.1% a female, and 14.3% gender neutral.
The usernames (A) chose are as follows: Hihi, Inessah Somade3 Willzing Jihyun, G, Ann, Divagrrl93, Thisdoug, Jono, Minion10, P, 123, itslynnburke
From these results, it is clear that people associate the male gender and gender neutrality with machines. It also demonstrates that people modify their identities when speaking with machines.
(B) participants were asked if they would like to pursue a friendship with the person they chatted with. 50% of participants responded affirmatively that they would indeed like to pursue a friendship while 50% said maybe or no. One response stated, “I would like to continue the conversation, but I don’t think I would be enticed to pursue a friendship.” Another responded, “Maybe? I like people who are intellectually curious, but I worry that the person might be a bit of a smart-ass.” Overall the participant disguised as a machine may or may not pursue a friendship after five minutes of text-based conversation.
(B) participants were also asked if they felt (A) cared about their feelings. 21.4% stated that (A) indeed did care about their feelings, 21.4% stated that they weren’t sure if (A) cared about their feelings, and 57.2% stated that (A) did not care about their feelings. These results indicate a user’s lack of attention to (B)’s emotional state.
(A) participants were asked what they felt could be improved about the (B) participants. The following improvements were noted:
“Should be funny”
“Give it a better sense of humor”
“It can be better if he knows about my friends or preference”
“The response was inconsistent and too slow”
“It should share more about itself. Your algorithm is prime prude, just like that LETDOWN Siri. Well, I guess I liked it better, but it should be more engaged and human consistency, not after the first cold prompt.”
“It pushed me on too many questions”
“I felt that it gave up on answering and the response time was a bit slow. Outsource the chatbot to fluent English speakers elsewhere and pretend they are bots — if the responses are this slow to this many inquiries, then it should be about the same experience.”
“I was very impressed with its parsing ability so far. Not as much with its reasoning. I think some parameters for the conversation would help, like ‘Ask a question’”
“Maybe make the response faster”
“I was confused at first, because I asked a question, waited a bit, then asked another question, waited and then got a response from the bot...”
These responses indicate that even if a computer is in fact a human, its user may not necessarily be fully satisfied with its performance. They imply that each user would like the machine to accommodate his or her needs in order to cause less personality and cognitive friction. With several participant comments mentioning response time, they also indicate that people expect machines to have consistent response times. Humans clearly vary in speed when listening, thinking, and responding, but machines are expected to act in a rhythmic fashion. There is also an expectation that a machine will answer all questions asked and will not ask its users more questions than perceived necessary.
(A) participants were asked if they felt (B)’s Artificial Intelligence could improve their relationship to computers if integrated in their daily products. 57.1% of participants responded affirmatively that they felt this could improve their relationship:
“Well- I think I prefer talking to a person better. But yes for ipod, smart phones, etc. would be very handy for everyday use products”
“Yes. Especially iphone is always with me. So it can track my daily behaviors. That makes the algorithm smarter”
“Possibly, I should have queries it for information that would have been more relevant to me”
“Absolutely!”
“Yes”
The 42.9% which responded negatively had doubts that it would be necessary or desirable:
“Not sure, it might creep me out if it were.”
“I like Siri as much as the next gal, but honestly we’re approaching the uncanny valley now.”
“Its not clear to me why this type of relationship needs to improve, i think human relationships still need a lot of work.”
“Nope, I still prefer flesh sacks.”
“No”
The findings of the paper are relevant to the future of Affective Computation: whether a super computer with a human-like IQ and EQ can improve the human-to-computer interaction. The uncertainty of computational equivalency that Turing brought forth is indeed an interesting starting point to understand what we want out of the future of computers.
The responses from the experiment affirm gender perceptions of machines and show how we present ourselves to machines. It seems that we limit our intelligence, limit our emotions, and obscure our identities when communicating with a machine. This leads us to question whether we would want to give our true selves to a computer if it doesn’t have a self of its own. It could also indicate that people censor themselves for machines because they lack the similarity that bonds humans to humans, or that there is a stigma associated with placing information in a digital device. The inverse relationship is also shown in the data: people perceive a bot’s IQ, EQ, and conversational ability to be high. Even though the chat-bot was in fact a human, this data implies that humans perceive bots as unrestricted and competent at certain procedures.
The results also imply that humans aren’t really sure what they want out of Artificial Intelligence in the future, and that we are not certain an affective computer would even enjoy a user’s company and/or conversation. They also suggest that we currently think of computers as very personal devices that should be passive (not active) yet reactive when interacted with. We expect consistent reliability from machines, and we expect to take more information from a machine than it takes from us.
A major limitation of this experiment is the sample size and sample diversity. The sample of twenty-eight students is too small to gather a stable result set. The experiment was also only conducted with NYU Interactive Telecommunications Program students, who all have extensive experience with computers and technology. To get a more accurate assessment of emotions, a more diverse sample needs to be taken.
Five minutes is a short amount of time to create an emotional connection or friendship. To stay true to the Turing test’s limitations this was enforced, but further relational understanding could be gained if more time were granted.
Besides the visual interface of the chat window, it would be important to show the emotions of participant (B) through a virtual avatar. Not having this visual feedback could have limited emotional resonance with (A) participants.
Time is also a limitation. People aren’t used to speaking to inquisitive machines yet, and even with a familiar interface (a chat-room), many participants hadn’t held conversations with machines previously. Perhaps if chat-bots become more active conversational participants in commercial applications, users will feel less need to censor themselves in the conversation.
In addition to the refinements noted in the limitations described above, there are several other experiments for possible future studies. For example, investigating a long-term human-to-bot relationship. This would provide a better understanding of the emotions a human can share with a machine and how a machine can reciprocate these emotions. It would also better allow computer scientists to understand what really creates a significant relationship when physical limitations are present.
Future studies should attempt to push these results further by understanding how a larger sample reacts to a computer algorithm with higher intellectual and emotional understanding. They should also attempt to understand the boundaries of emotional computing and what is ideal for the user and what is ideal for the machine without compromising either party’s capacities.
This paper demonstrates the diverse range of emotions that people can feel for affective computation and indicates that we are not in a time where computational equivalency is fully desired or accepted. Positive reactions indicate that there is optimism for more adept artificial intelligence and that there is interest in the field for commercial use. It also provides insight that humans limit themselves when communicating with machines and that inversely machines don’t limit themselves when communicating with humans.
Books & Articles
Bowden M., 2006, Minds as Machine: A History of Cognitive Science, Oxford University Press
Christian B., 2011, The Most Human Human
Minsky M., 2006, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind, Simon & Schuster Paperbacks
Nass C., Brave S., 2005. Wired For Speech: How Voice Activates and Advances the Human-Computer Relationship, MIT Press
Nass C., Brave S., 2005, Hutchinson K., Computers that care: Investigating the effects of orientation of emotion exhibited by an embodied computer agent, Human-Computer Studies, 161- 178, Elsevier
Picard, R., 1997. Affective Computing, MIT Press
Searle J., 1980, Minds, Brains, and Programs, Cambridge University Press, 417–457
Turing A., 1950, Computing Machinery and Intelligence, Mind, 59, 433–460
Wilson R., Keil F., 2001, The MIT Encyclopedia of the Cognitive Sciences, MIT Press
Weizenbaum J., 1966, ELIZA — A Computer Program For the Study of Natural Language Communication Between Man and Machine, Communications of the ACM, 36–45
Websites
Cherry K., What is Emotional Intelligence?, http://psychology.about.com/od/personalitydevelopment/a/emotionalintell.htm
Epstein R., 2006, Clever Bots, Radio Lab, http://www.radiolab.org/2011/may/31/clever-bots/
IBM, 1997, Deep Blue, IBM, http://www.research.ibm.com/deepblue/
IBM, 2011, Watson, IBM, http://www-03.ibm.com/innovation/us/watson/index.html
Leavitt D., 2011, I Took the Turing Test, New York Times, http://www.nytimes.com/2011/03/20/books/review/book-review-the-most-human-human-by-brian- christian.html
Personal Robotics Group, 2008, Nexi, MIT, http://robotic.media.mit.edu/
Robinson P., The Emotional Computer, Cambridge Ideas, http://www.cam.ac.uk/research/news/the-emotional-computer/
US Census Bureau, 2009, Households with a Computer and Internet Use: 1984 to 2009, http://www.census.gov/hhes/computer/
MIT, 1960s, Eliza, http://www.manifestation.com/neurotoys/eliza.php3
|
Netflix Technology Blog | 330 | 11 | https://medium.com/netflix-techblog/system-architectures-for-personalization-and-recommendation-e081aa94b5d8?source=tag_archive---------0---------------- | System Architectures for Personalization and Recommendation | by Xavier Amatriain and Justin Basilico
In our previous posts about Netflix personalization, we highlighted the importance of using both data and algorithms to create the best possible experience for Netflix members. We also talked about the importance of enriching the interaction and engaging the user with the recommendation system. Today we’re exploring another important piece of the puzzle: how to create a software architecture that can deliver this experience and support rapid innovation. Coming up with a software architecture that handles large volumes of existing data, is responsive to user interactions, and makes it easy to experiment with new recommendation approaches is not a trivial task. In this post we will describe how we address some of these challenges at Netflix.
To start with, we present an overall system diagram for recommendation systems in the following figure. The main components of the architecture contain one or more machine learning algorithms.
The simplest thing we can do with data is to store it for later offline processing, which leads to part of the architecture for managing Offline jobs. However, computation can be done offline, nearline, or online. Online computation can respond better to recent events and user interaction, but has to respond to requests in real-time. This can limit the computational complexity of the algorithms employed as well as the amount of data that can be processed. Offline computation has fewer limitations on the amount of data and the computational complexity of the algorithms since it runs in a batch manner with relaxed timing requirements. However, it can easily grow stale between updates because the most recent data is not incorporated. One of the key issues in a personalization architecture is how to combine and manage online and offline computation in a seamless manner. Nearline computation is an intermediate compromise between these two modes in which we can perform online-like computations, but do not require them to be served in real-time. Model training is another form of computation that uses existing data to generate a model that will later be used during the actual computation of results.
Another part of the architecture describes how the different kinds of events and data need to be handled by the Event and Data Distribution system. A related issue is how to combine the different Signals and Models that are needed across the offline, nearline, and online regimes. Finally, we also need to figure out how to combine intermediate Recommendation Results in a way that makes sense for the user.
The rest of this post will detail these components of this architecture as well as their interactions. In order to do so, we will break the general diagram into different sub-systems and we will go into the details of each of them. As you read on, it is worth keeping in mind that our whole infrastructure runs across the public Amazon Web Services cloud.
As mentioned above, our algorithmic results can be computed either online in real-time, offline in batch, or nearline in between. Each approach has its advantages and disadvantages, which need to be taken into account for each use case.
Online computation can respond quickly to events and use the most recent data. An example is to assemble a gallery of action movies sorted for the member using the current context. Online components are subject to an availability and response time Service Level Agreement (SLA) that specifies the maximum latency of the process in responding to requests from client applications while our member is waiting for recommendations to appear. This can make it harder to fit complex and computationally costly algorithms in this approach. Also, a purely online computation may fail to meet its SLA in some circumstances, so it is always important to think of a fast fallback mechanism such as reverting to a precomputed result. Computing online also means that the various data sources involved need to be available online, which can require additional infrastructure.
On the other end of the spectrum, offline computation allows for more choices in algorithmic approach such as complex algorithms and less limitations on the amount of data that is used. A trivial example might be to periodically aggregate statistics from millions of movie play events to compile baseline popularity metrics for recommendations. Offline systems also have simpler engineering requirements. For example, relaxed response time SLAs imposed by clients can be easily met. New algorithms can be deployed in production without the need to put too much effort into performance tuning. This flexibility supports agile innovation. At Netflix we take advantage of this to support rapid experimentation: if a new experimental algorithm is slower to execute, we can choose to simply deploy more Amazon EC2 instances to achieve the throughput required to run the experiment, instead of spending valuable engineering time optimizing performance for an algorithm that may prove to be of little business value. However, because offline processing does not have strong latency requirements, it will not react quickly to changes in context or new data. Ultimately, this can lead to staleness that may degrade the member experience. Offline computation also requires having infrastructure for storing, computing, and accessing large sets of precomputed results.
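A toy version of the popularity example above, written as a short Python sketch rather than the Hive or Pig job it would actually be in production; the event format and names are assumptions.

```python
from collections import Counter

def popularity_baseline(play_events):
    """play_events: iterable of (member_id, video_id, timestamp) tuples,
    e.g. a day's worth of play logs read from a batch store."""
    counts = Counter(video_id for _, video_id, _ in play_events)
    total = sum(counts.values()) or 1
    return {video_id: n / total for video_id, n in counts.most_common()}
```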
Nearline computation can be seen as a compromise between the two previous modes. In this case, computation is performed exactly like in the online case. However, we remove the requirement to serve results as soon as they are computed and can instead store them, allowing it to be asynchronous. The nearline computation is done in response to user events so that the system can be more responsive between requests. This opens the door for potentially more complex processing to be done per event. An example is to update recommendations to reflect that a movie has been watched immediately after a member begins to watch it. Results can be stored in an intermediate caching or storage back-end. Nearline computation is also a natural setting for applying incremental learning algorithms.
In any case, the choice of online/nearline/offline processing is not an either/or question. All approaches can and should be combined. There are many ways to combine them. We already mentioned the idea of using offline computation as a fallback. Another option is to precompute part of a result with an offline process and leave the less costly or more context-sensitive parts of the algorithms for online computation.
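A minimal sketch of the online-with-offline-fallback pattern just described: attempt the online ranker under a latency budget and fall back to precomputed results if the SLA would be missed. The SLA value, thread-pool approach, and interfaces are illustrative assumptions, not Netflix's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

SLA_SECONDS = 0.25                              # assumed latency budget
executor = ThreadPoolExecutor(max_workers=8)

def recommend(member_id, context, online_ranker, offline_store):
    future = executor.submit(online_ranker, member_id, context)
    try:
        return future.result(timeout=SLA_SECONDS)   # fresh, context-aware results
    except TimeoutError:
        future.cancel()
        return offline_store.get(member_id)         # precomputed batch results as fallback
```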
Even the modeling part can be done in a hybrid offline/online manner. This is not a natural fit for traditional supervised classification applications where the classifier has to be trained in batch from labeled data and will only be applied online to classify new inputs. However, approaches such as Matrix Factorization are a more natural fit for hybrid online/offline modeling: some factors can be precomputed offline while others can be updated in real-time to create a fresher result. Other unsupervised approaches such as clustering also allow for offline computation of the cluster centers and online assignment of clusters. These examples point to the possibility of separating our model training into a large-scale and potentially complex global model training phase on the one hand, and a lighter user-specific model training or updating phase that can be performed online on the other.
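As a rough illustration of that split, here is a toy sketch in which item factors are precomputed "offline" and held fixed, while a single member's factor vector is re-fit "online" from their most recent ratings; the dimensions, data, and regularization value are made up, and this is not Netflix's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k = 100, 8
item_factors = rng.normal(size=(n_items, k))     # trained offline, held fixed

def update_user_factor(rated_items, ratings, reg=0.1):
    """Re-fit one member's factor vector online with a small ridge regression."""
    Q = item_factors[rated_items]                # (n_rated, k)
    A = Q.T @ Q + reg * np.eye(k)
    b = Q.T @ np.asarray(ratings, dtype=float)
    return np.linalg.solve(A, b)

user_factor = update_user_factor([3, 17, 42], [5.0, 3.0, 4.0])
scores = item_factors @ user_factor              # fresh scores for every item
print(np.argsort(-scores)[:5])                   # top-5 item indices
```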
Much of the computation we need to do when running personalization machine learning algorithms can be done offline. This means that the jobs can be scheduled to be executed periodically and their execution does not need to be synchronous with the request or presentation of the results. There are two main kinds of tasks that fall in this category: model training and batch computation of intermediate or final results. In the model training jobs, we collect relevant existing data and apply a machine learning algorithm to produce a set of model parameters (which we will henceforth refer to as the model). This model will usually be encoded and stored in a file for later consumption. Although most of the models are trained offline in batch mode, we also have some online learning techniques where incremental training is indeed performed online. Batch computation of results is the offline computation process defined above in which we use existing models and corresponding input data to compute results that will be used at a later time either for subsequent online processing or direct presentation to the user.
Both of these tasks need refined data to process, which usually is generated by running a database query. Since these queries run over large amounts of data, it can be beneficial to run them in a distributed fashion, which makes them very good candidates for running on Hadoop via either Hive or Pig jobs. Once the queries have completed, we need a mechanism for publishing the resulting data. We have several requirements for that mechanism: First, it should notify subscribers when the result of a query is ready. Second, it should support different repositories (not only HDFS, but also S3 or Cassandra, for instance). Finally, it should transparently handle errors and allow for monitoring and alerting. At Netflix we use an internal tool named Hermes that provides all of these capabilities and integrates them into a coherent publish-subscribe framework. It allows data to be delivered to subscribers in near real-time. In some sense, it covers some of the same use cases as Apache Kafka, but it is not a message/event queue system.
Regardless of whether we are doing an online or offline computation, we need to think about how an algorithm will handle three kinds of inputs: models, data, and signals. Models are usually small files of parameters that have been previously trained offline. Data is previously processed information that has been stored in some sort of database, such as movie metadata or popularity. We use the term “signals” to refer to fresh information we input to algorithms. This data is obtained from live services and can be made of user-related information, such as what the member has watched recently, or context data such as session, device, date, or time.
Our goal is to turn member interaction data into insights that can be used to improve the member’s experience. For that reason, we would like the various Netflix user interface applications (Smart TVs, tablets, game consoles, etc.) to not only deliver a delightful user experience but also collect as many user events as possible. These actions can be related to clicks, browsing, viewing, or even the content of the viewport at any time. Events can then be aggregated to provide base data for our algorithms. Here we try to make a distinction between data and events, although the boundary is certainly blurry. We think of events as small units of time-sensitive information that need to be processed with the least amount of latency possible. These events are routed to trigger a subsequent action or process, such as updating a nearline result set. On the other hand, we think of data as more dense information units that might need to be processed and stored for later use. Here the latency is not as important as the information quality and quantity. Of course, there are user events that can be treated as both events and data and therefore sent to both flows.
At Netflix, our near-real-time event flow is managed through an internal framework called Manhattan. Manhattan is a distributed computation system that is central to our algorithmic architecture for recommendation. It is somewhat similar to Twitter’s Storm, but it addresses different concerns and responds to a different set of internal requirements. The data flow is managed mostly through logging through Chukwa to Hadoop for the initial steps of the process. Later we use Hermes as our publish-subscribe mechanism.
The goal of our machine learning approach is to come up with personalized recommendations. These recommendation results can be serviced directly from lists that we have previously computed or they can be generated on the fly by online algorithms. Of course, we can think of using a combination of both where the bulk of the recommendations are computed offline and we add some freshness by post-processing the lists with online algorithms that use real-time signals.
At Netflix, we store offline and intermediate results in various repositories to be later consumed at request time: the primary data stores we use are Cassandra, EVCache, and MySQL. Each solution has advantages and disadvantages over the others. MySQL allows for storage of structured relational data that might be required for some future process through general-purpose querying. However, the generality comes at the cost of scalability issues in distributed environments. Cassandra and EVCache both offer the advantages of key-value stores. Cassandra is a well-known and standard solution when in need of a distributed and scalable no-SQL store. Cassandra works well in some situations, however in cases where we need intensive and constant write operations we find EVCache to be a better fit. The key issue, however, is not so much where to store them as to how to handle the requirements in a way that conflicting goals such as query complexity, read/write latency, and transactional consistency meet at an optimal point for each use case.
In previous posts, we have highlighted the importance of data, models, and user interfaces for creating a world-class recommendation system. When building such a system it is critical to also think of the software architecture in which it will be deployed. We want the ability to use sophisticated machine learning algorithms that can grow to arbitrary complexity and can deal with huge amounts of data. We also want an architecture that allows for flexible and agile innovation where new approaches can be developed and plugged-in easily. Plus, we want our recommendation results to be fresh and respond quickly to new data and user actions. Finding the sweet spot between these desires is not trivial: it requires a thoughtful analysis of requirements, careful selection of technologies, and a strategic decomposition of recommendation algorithms to achieve the best outcomes for our members. We are always looking for great engineers to join our team. If you think you can help us, be sure to look at our jobs page.
Originally published at techblog.netflix.com on March 27, 2013.
James Faghmous | 187 | 6 | https://medium.com/@nomadic_mind/new-to-machine-learning-avoid-these-three-mistakes-73258b3848a4?source=tag_archive---------1---------------- | New to Machine Learning? Avoid these three mistakes | Machine learning (ML) is one of the hottest fields in data science. As soon as ML entered the mainstream through Amazon, Netflix, and Facebook people have been giddy about what they can learn from their data. However, modern machine learning (i.e. not the theoretical statistical learning that emerged in the 70s) is very much an evolving field and despite its many successes we are still learning what exactly can ML do for data practitioners. I gave a talk on this topic earlier this fall at Northwestern University and I wanted to share these cautionary tales with a wider audience.
Machine learning is a field of computer science where algorithms improve their performance at a certain task as more data are observed. To do so, algorithms select a hypothesis that best explains the data at hand with the hope that the hypothesis would generalize to future (unseen) data. Take the left panel in the figure in the header: the crosses denote the observed data projected in a two-dimensional space — in this case house prices and their corresponding size in square meters. The blue line is the algorithm's best hypothesis to explain the observed data. It states "there is a linear relationship between the price and size of a house. As the house's size increases, so does its price in linear increments." Now using this hypothesis, I can predict the price of an unseen datapoint based on its size. As the dimensions of the data increase, the hypotheses that explain the data become more complex. However, given that we are using a finite sample of observations to learn our hypothesis, finding an adequate hypothesis that generalizes to unseen data is nontrivial. There are three major pitfalls one can fall into that will prevent you from having a generalizable model and hence the conclusions of your hypothesis will be in doubt.
Occam's razor is a principle attributed to William of Occam, a 14th century philosopher. Occam's razor advocates for choosing the simplest hypothesis that explains your data, yet no simpler. While this notion is simple and elegant, it is often misunderstood to mean that we must select the simplest hypothesis possible regardless of performance.
In their 2008 paper in Nature, Johan Nyberg and colleagues used a 4-level artificial neural network to predict seasonal hurricane counts using two or three environmental variables. The authors reported stellar accuracy in predicting seasonal North Atlantic hurricane counts, however their model violates Occam's razor and most certainly doesn't generalize to unseen data. The razor was violated when the hypothesis or model selected to describe the relationship between environmental data and seasonal hurricane counts was generated using a four-layer neural network. A four-layer neural network can model virtually any function no matter how complex and could fit a small dataset very well but fail to generalize to unseen data. The rightmost panel in the top figure shows such an incident. The hypothesis selected by the algorithm (the blue curve) to explain the data is so complex that it fits through every single data point. That is: for any given house size in the training data, I can give you with pinpoint accuracy the price it would sell for. It doesn't take much to observe that even a human couldn't be that accurate. We could give you a very close estimate of the price, but to predict the selling price of a house, within a few dollars, every single time is impossible.
The pitfall of selecting too complex a hypothesis is known as overfitting. Think of overfitting as memorizing as opposed to learning. If you are a child and you are memorizing how to add numbers you may memorize the sums of any pair of integers between 0 and 10. However, when asked to calculate 11 + 12 you will be unable to because you have never seen 11 or 12, and therefore couldn’t memorize their sum. That’s what happens to an overfitted model, it gets too lazy to learn the general principle that explains the data and instead memorizes the data.
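As a small illustration of this memorizing-versus-learning distinction, with entirely made-up house data (not from the original article), a high-degree polynomial can drive the training error toward zero while doing far worse than a straight line on held-out data:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    size = rng.uniform(50, 250, n)                    # square meters
    price = 3000 * size + rng.normal(0, 40000, n)     # noisy linear trend
    return size, price

x_train, y_train = sample(10)      # a tiny training set invites memorization
x_test, y_test = sample(100)       # unseen data from the same distribution

for degree in (1, 9):
    z_train = (x_train - 150) / 100                   # scale for numerical stability
    z_test = (x_test - 150) / 100
    coeffs = np.polyfit(z_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, z_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, z_test) - y_test) ** 2)
    print(degree, round(train_err), round(test_err))
# The degree-9 fit typically has near-zero training error but a much larger
# test error than the simple straight-line fit.
```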
Data leakage occurs when the data you are using to learn a hypothesis happens to have the information you are trying to predict. The most basic form of data leakage would be to use the same data that we want to predict as input to our model (e.g. use the price of a house to predict the price of the same house). However, most often data leakage occurs subtly and inadvertently. For example, one may wish to learn from anomalies as opposed to raw data, that is, deviations from a long-term mean. However, many fail to remove the test data before computing the anomalies, and hence the anomalies carry some information about the data you want to predict, since the test data influenced the mean and standard deviation before being removed.
There are several ways to avoid data leakage as outlined by Claudia Perlich in her great paper on the subject. However, there is no silver bullet — sometimes you may inherit a corrupt dataset without even realizing it. One way to spot data leakage is if you are doing very poorly on unseen independent data. For example, say you got a dataset from someone that spanned 2000-2010, but you started collecting your own data from 2011 onward. If your model's performance is poor on the newly collected data it may be a sign of data leakage. You must resist the urge to retrain the model with both the potentially corrupt and new data. Instead, either try to identify the causes of poor performance on the new data or, better yet, independently reconstruct the entire dataset. As a rule of thumb, your best defense is to always be mindful of the possibility of data leakage in any dataset.
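A minimal sketch of the anomaly example above, with synthetic data: the long-term mean and standard deviation must come from the training period only, otherwise the held-out months leak into the features.

```python
import numpy as np

rng = np.random.default_rng(2)
series = rng.normal(10.0, 2.0, 120)          # e.g. ten years of monthly values
train, test = series[:96], series[96:]       # last two years held out

# Leaky: the mean and standard deviation "see" the held-out months.
leaky_anomalies = (series - series.mean()) / series.std()

# Correct: estimate the long-term statistics on the training period only,
# then apply the exact same transformation to the held-out months.
mu, sigma = train.mean(), train.std()
train_anomalies = (train - mu) / sigma
test_anomalies = (test - mu) / sigma
```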
Sampling bias is the case when you shortchange your model by training it on a biased or non-random dataset, which results in a poorly generalizable hypothesis. In the case of housing prices, sampling bias occurs if, for some reason, all the house prices/sizes you collected were of huge mansions. However, when it was time to test your model and the first price you needed to predict was that of a 2-bedroom apartment you couldn’t predict it. Sampling bias happens very frequently mainly because, as humans, we are notorious for being biased (nonrandom) samplers. One of the most common examples of this bias happens in startups and investing. If you attend any business school course, they will use all these “case studies” of how to build a successful company. Such case studies actually depict the anomalies and not the norm as most companies fail — For every Apple that became a success there were 1000 other startups that died trying. So to build an automated data-driven investment strategy you would need samples from both successful and unsuccessful companies.
The figure above (Figure 13) is a concrete example of sampling bias. Say you want to predict whether a tornado is going to originate at a certain location based on two environmental conditions: wind shear and convective available potential energy (CAPE). We don't have to worry about what these variables actually mean, but Figure 13 shows the wind shear and CAPE associated with 242 tornado cases. We can fit a model to these data but it will certainly not generalize because we failed to include shear and CAPE values when tornados did not occur. In order for our model to separate between positive (tornados) and negative (no tornados) events we must train it using both populations.
There you have it. Being mindful of these limitations does not guarantee that your ML algorithm will solve all your problems, but it certainly reduces the risk of being disappointed when your model doesn’t generalize to unseen data. Now go on young Jedi: train your model, you must!
Datafiniti | 3 | 5 | https://blog.datafiniti.co/classifying-websites-with-neural-networks-39123a464055?source=tag_archive---------2---------------- | Classifying Websites with Neural Networks – Knowledge from Data: The Datafiniti Blog | At Datafiniti, we have a strong need for converting unstructured web content into structured data. For example, we’d like to find a page like:
and do the following:
Both of these are hard things for a computer to do in an automated manner. While it's easy for you or me to realize that the above web page is selling some jeans, a computer would have a hard time distinguishing the above page from either of the following web pages:
Or
Both of these pages share many similarities to the actual product page, but also have many key differences. The real challenge, though, is that if we look at the entire set of possible web pages, those similarities and differences become somewhat blurred, which means hard and fast rules for classifications will fail often. In fact, we can’t even rely on just looking at the underlying HTML, since there are huge variations in how product pages are laid out in HTML.
While we could try and develop a complicated set of rules to account for all the conditions that perfectly identify a product page, doing so would be extremely time consuming, and frankly, incredibly boring work. Instead, we can try using a classical technique out of the artificial intelligence handbook: neural networks.
Here’s a quick primer on neural networks. Let’s say we want to know whether any particular mushroom is poisonous or not. We’re not entirely sure what determines this, but we do have a record of mushrooms with their diameters and heights, along with which of these mushrooms were poisonous to eat, for sure. In order to see if we could use diameter and heights to determine poisonous-ness, we could set up the following equation:
A * (diameter) + B * (height) = 0 or 1 for not-poisonous / poisonous
We would then try various combinations of A and B for all possible diameters and heights until we found a combination that correctly determined poisonous-ness for as many mushrooms as possible.
Neural networks provide a structure for using the output of one set of input data to adjust A and B to the most likely best values for the next set of input data. By constantly adjusting A and B this way, we can quickly get to the best possible values for them.
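As a toy illustration of this idea (not taken from the original post, and with made-up mushroom data), a single-neuron model can learn weights A and B, plus a bias, by repeatedly nudging them to reduce its error:

```python
import numpy as np

rng = np.random.default_rng(3)
diameter = rng.uniform(1, 10, 200)
height = rng.uniform(1, 15, 200)
# Made-up ground truth: wider, shorter mushrooms tend to be poisonous here.
poisonous = (0.8 * diameter - 0.5 * height + rng.normal(0, 1, 200) > 0).astype(float)

X = np.column_stack([diameter, height, np.ones_like(diameter)])
w = np.zeros(3)                                   # [A, B, bias]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                             # simple gradient descent
    predictions = sigmoid(X @ w)
    w -= 0.01 * X.T @ (predictions - poisonous) / len(poisonous)

accuracy = np.mean((sigmoid(X @ w) > 0.5) == poisonous)
print("A, B, bias:", w, "training accuracy:", accuracy)
```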
In order to introduce more complex relationships in our data, we can introduce “hidden” layers in this model, which would end up looking something like:
For a more detailed explanation of neural networks, you can check out the following links:
In our product page classifier algorithm, we set up a neural network with 1 input layer with 27 nodes, 1 hidden layer with 25 nodes, and 1 output layer with 3 output nodes. Our input layer modeled several features, including:
Our output layer had the following:
Our algorithm for the neural network took the following steps:
The ultimate output is two matrices of trained weights (T1 and T2) that we can use in a matrix equation to predict the page type for any given web page. This works like so:
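The post's original matrix equation isn't reproduced in this version, but a standard feed-forward pass consistent with the 27-25-3 layout described above would look roughly like the sketch below; T1 and T2 are assumed to be the trained weight matrices (with bias columns), and the random values here merely stand in for real trained parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
T1 = rng.normal(size=(25, 28))       # hidden-layer weights: 27 inputs + bias
T2 = rng.normal(size=(3, 26))        # output-layer weights: 25 hidden units + bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_page_type(features):
    """features: a length-27 vector describing one web page."""
    a1 = np.append(1.0, features)            # add the bias unit
    a2 = np.append(1.0, sigmoid(T1 @ a1))    # hidden-layer activations
    a3 = sigmoid(T2 @ a2)                    # one score per output class
    return int(np.argmax(a3))                # index of the most likely page type

print(predict_page_type(rng.random(27)))
```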
So how did we do? In order to determine how successful we were in our predictions, we need to determine how to measure success. In general, we want to measure how many true positive (TP) results we get as compared to false positives (FP) and false negatives (FN). Conventional measurements for these are precision, TP / (TP + FP), and recall, TP / (TP + FN).
Our implementation had the following results:
These scores are just over our training set, of course. The actual scores on real-life data may be a bit lower, but not by much. This is pretty good! We should have an algorithm on our hands that can accurately classify product pages about 90% of the time.
Of course, identifying product pages isn’t enough. We also want to pull out the actual structured data! In particular, we’re interested in product name, price, and any unique identifiers (e.g., UPC, EAN, & ISBN). This information would help us fill out our product search.
We don't actually use neural networks for doing this. Neural networks are better-suited toward classification problems, and extracting data from a web page is a different type of problem. Instead, we use a variety of heuristics specific to each attribute we're trying to extract. For example, for product name, we look at the <h1> and <h2> tags, and use a few metrics to determine the best choice. We've been able to achieve around 80% accuracy here. We may go into the actual metrics and methodology for developing them in a separate post!
We feel pretty good about our ability to classify and extract product data. The extraction part could be better, but it's steadily being improved. In the meantime, we're also working on classifying other types of pages, such as business data, company team pages, event data, and more. As we roll out these classifiers and data extractors, we're including each one in our crawl of the entire Internet. This means that we can scan the entire Internet and pull out any available data that exists out there. Exciting stuff!
Arjan Haring 🔮🔨 | 3 | 8 | https://medium.com/i-love-experiments/reinventing-social-sciences-in-the-era-of-big-data-d255f3e391f3?source=tag_archive---------3---------------- | Reinventing Social Sciences in the Era of Big Data – I love experiments – Medium | Sune Lehmann is an Associate Professor at DTU Informatics, Technical University of Denmark. In the past, he has worked as a Postdoctoral Fellow at the Institute for Quantitative Social Science at Harvard University and the College of Computer and Information Science at Northeastern University; before that, he was at Laszlo Barabási's Center for Complex Network Research at Northeastern University and the Center for Cancer Systems Biology at the Dana Farber Cancer Institute.
I wouldn’t call him stupid. He is okay. Well he is actually pretty great. Forget that, he is freaking fantastic! We should get him over for one of our events! And so we did. Sune spoke at the 2nd #projectwaalhalla.
This time, let’s begin at the beginning, before we dive in deeper. Your main research project has to do with measuring real social networks with high resolution. I know for a fact you don’t mean 3D printed social networks.
But what are you aiming for, and how are you going to get there?
My (humble) research goal is to reinvent social sciences in the age of big data. My background is in mathematical analysis of large networks. But over the past 10 years, I’ve slowly grown more and more interested in understanding social systems.
As a scientist I was blown away by the promise of all of the digital traces of human behavior collected as a consequence of cheap hard drives and databases everywhere. But in spite of the promise of big data, the results so far have been less exciting than I had hoped. For all the hype, deep new scientific insights from big data are few and far between.
A central hypothesis in my work is that in order to advance our quantitative understanding of social interaction, we cannot get by with noisy, incomplete big data: We need good data. Let me explain why and use my own field as an example. Let's say you have a massive cell phone data set from a telco that provides service to 30% of the population of a large country of 66 million people. That's something like 20 million people and easily terabytes of monthly data, so a massive dataset.
But when you start thinking about the network, you run into problems. The standard approach is to simply look at the network between the individuals in your sample. Assuming that people are randomly sampled, and links are randomly distributed, you realize that 30% of the population corresponds to only 9% of the links. Is 9% of cell phone calls enough to understand how the network works? With only one in ten links remaining in the dataset, the social structure is almost completely erased.
And it gets worse. Telecommunication is only one (small & biased) aspect of human communication. Human interactions may also unfold face-to-face, via text message, email, Facebook, Skype, etc. And these streams are collected in silos, where we cannot generally identify individuals/entities across datasets. So if we think about all these ways we can communicate, access to only one in ten of my cell phone contacts is very likely insufficient for making valid inferences.
And the worst part is that we can't know. Without access to the full data set, we can't even tell what we can and can't tell from a sample. So when I started out as an assistant professor, I decided to change the course of my career and move from sitting comfortably in front of my computer as a computational/theoretical scientist to becoming an experimenter, to try and attack this problem head on. Now, a few years later, we have put together a dataset of human social interactions that is unparalleled in terms of quality and size. We are recording social interactions within more than 1000 students at my university, using top-of-the-line cell phones as sensors. We can capture detailed interaction patterns, such as face-to-face (via bluetooth), social network data (e.g. Facebook and Twitter) via apps, telecommunication data from call logs, and geolocation via GPS & Wifi.
We like to call this type of data ‘Deep Data’: A densely connected group of participants (all the links), observations across many communication channels, high frequency observations (minute-by-minute scale), but with long observation windows (years of collection), and with behavioral data supplemented by classic questionnaires, as well as the possibility of running intervention experiments.
But my expertise (and ultimate interest) is not in building a Deep Data collection platform (although that has been a lot of fun). I want to get back to the questions that motivated the enthusiasm for computational social science in the first place. Reinventing social sciences is what it’s all about.
What can we learn from just one channel? Now that we know about all the communication channels, we can begin to understand what kind of things one may learn from a single channel. Let’s get quantitative about the usefulness of e.g. large cell phone data sets or Facebook, when that’s the only data available.
My heart is still with the network science. In some ways, this whole project is designed to build a system that will really take us places in terms of modeling human social networks. Lots of network science is still about unweighted, undirected static networks; we are already using this dataset to create better models for dynamic, multiplex networks.
Understanding spreading processes (influence, behavior, disease, etc) is a central goal if we look a bit forward in time. We have a system where N is big enough to perform intervention experiments with randomized controls, etc. We're still far from implementing this goal, but we're working on finding the right questions — and working closely with social scientists to get our protocols for these questions just right.
What a coincidence...
We are all about modeling behavior and learning across channels. And with ContagionAPI prominently on our product roadmap we want to start dabbling with spreading processes as well in the near future.
What would you say were major challenges the last years in modeling behavior, and what do see as biggest challenges & opportunities for the future?
There are many challenges. Although we've made amazing progress in network science, for example, it's still a fact that our fundamental understanding of dynamic/multi-channel networks is in its infancy; there aren't a lot of easily interpretable models that really explain the underlying networks.
So that’s an area with lots of challenges and corresponding opportunities. And when we want to figure out questions about things taking place on networks, we run into all kinds of problems about how to do statistics right. Brilliant statisticians have shown that homophily and contagion are generically confounded in observational social network studies. On that front, guys like Sinan Aral are doing really exciting work using interventions to get at some of the issues, but there is still lots to do in that area.
Finally, privacy is a big issue. We’re working closely with collaborators at the MIT MediaLab to develop new, responsible solutions — and we’ve already gotten far on that topic. But in terms of data sharing that respects the privacy of study participants, there is still a long way to go. But since studies of digital traces of human behavior will not be going away anytime soon, we have to make progress in this area.
And oh yeah, why does this all matter? And should we be concerned by these things?
I think there are many reasons to be concerned and excited. The more we learn about how systems work, the more we are able to influence them, to control them. That is also true for systems of humans. If we think about spreading of disease, it’d be great to know how to slow down or stop the spread of SARS or similar contagious viruses.
Or, as a society we may be able to increase spread of things we support, such as tolerance, good exercise habits, etc ... and similarly, we can use an understanding influence in social systems to inhibit negative behavior such as intolerance, smoking, etc.
And all this ties into another good reason to be concerned. Companies like Google, Facebook, Apple (or governmental agencies like NSA) are committing serious resources to research in this area. It’s not a coincidence that both Google and Facebook are developing their own cell-phones.
But none of these walled-off players are sharing their results. They’re simply applying them to the public. In my opinion that’s one of the key problems of the current state of affairs, the imbalance of information. We hand over our personal data to powerful corporations, but have nearly zero insight into a) what they know about us and b) what they’re doing with all the stuff they know about us. By doing research that is open, collaborative, explicit about privacy, and public, I hope we can act as a counter-point and work to diminish the information-gap.
Okay, great. But should companies be interested in the stuff you are doing? And if so, why?
I think so! One of the exciting things about this area is that basic research is very close to applied research. Insight into the mechanisms that drive human nature is indeed valuable for companies (I presume that’s why Science Rockstars exists, for example) [note from the editor: not stupid at all].
We already know that human behavior can be influenced significantly with "nudging", that certain kinds of collective behaviors influence our opinions (and purchasing behaviors). The more we uncover about the details of these mechanisms, the more precise and effective we can be about influencing others (let's discuss the ethics of this another time).
But it’s not just marketing. If used for good, this is the science of what makes people happy. So inside organizations, work like this could be used to re-think organizational structures, incentives, etc; to make employees happier & more fulfilled. Or if we think about organizations as organisms, having access to realtime information about employees can be thought of as a “nervous system” for the company, allowing for faster reaction times when crises arise, identification of pain points, etc.
Finally, for the medical field, we know that genes only explain part of what makes us sick. Being able to quantify and analyze behavior means knowing more about the environment, the nurture part of nurture vs nature. In that sense, detailed data on how we behave could also help us understand how to be healthier.
Originally published at www.sciencerockstars.com on November 2, 2013.
Eventbrite | 10 | 8 | https://medium.com/@eventbrite/multi-index-locality-sensitive-hashing-for-fun-and-profit-ee04292a6e37?source=tag_archive---------4---------------- | Multi-Index Locality Sensitive Hashing for Fun and Profit | One way that we deal with this volume of data, is to cluster up all the similar messages together to find patterns in behavior of senders. For example, if someone is contacting thousands of different organizers with similar messages, that behavior is suspect and will be examined.
The big question is, how can we compare every single message we see with every other message efficiently and accurately? In this article, we’ll be exploring a technique known as Multi-Index Locality Sensitive Hashing.
To perform the comparison efficiently, we pre-process the data with a series of steps:
Let's first define what similar messages are. Here we have an example of two similar messages A and B:
To our human eyes of course they’re similar, but we want determine this similarity quantitatively. The solution is to break up the message into tokens, and then treat each message as a bag of tokens. The simplest, naive way to do tokenization is to split up a message on spaces/punctuation and convert each character to lowercase. So our result from our tokenization of the above messages would be:
I’ll leave as an exercise to the reader to come up with more interesting ways to do tokenization for handling contractions, plurals, foreign languages, etc.
To calculate the similarity between these two bags of tokens, we’ll use an estimation known as the Jaccard Similarity Coefficient. This is defined as “the ratio of sizes of the intersection and union of A and B”. Therefore, in our example:
We'll then set a threshold, above which, we will consider two messages to be similar. So then, when given a set of M messages, we simply compute the similarity of a message to every other message. This works in theory, but in practice there are cases where this metric is unreliable (e.g. if one message is significantly longer than the other); not to mention horribly inefficient (O(N^2 M^2), where N is the number of tokens per message). We need to do things smarter!
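The example messages themselves aren't reproduced in this version of the post, so the strings below are stand-ins, but this is the naive tokenize-and-compare approach described above:

```python
import re

def tokenize(message):
    # Naive tokenization: split on anything that isn't a letter or digit,
    # lowercase everything, and treat the message as a bag (set) of tokens.
    return set(re.findall(r"[a-z0-9]+", message.lower()))

def jaccard_similarity(a, b):
    tokens_a, tokens_b = tokenize(a), tokenize(b)
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

msg_a = "Hey, are tickets for your event still available?"
msg_b = "hey! Are tickets for your event still available??"
print(jaccard_similarity(msg_a, msg_b))   # 1.0 -- the token sets are identical
```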
One problem with doing a simple Jaccard similarity is that the scale of the value changes with the size (number of tokens) of the message. To address this, we can transform our tokens with a method known as minHash. Here's a pseudo-code snippet:
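The original snippet isn't included in this version of the post; one common way to implement the idea it describes (the per-seed hashing scheme here is an assumption) is:

```python
import hashlib

N = 128  # signature length (number of hash functions)

def token_hash(token, seed):
    # One cheap way to get N different hash functions: mix a seed into the input.
    data = ("%d:%s" % (seed, token)).encode("utf-8")
    return int.from_bytes(hashlib.md5(data).digest()[:8], "big")

def min_hash(tokens):
    # For each of the N hash functions, keep only the smallest hash value.
    return [min(token_hash(t, seed) for t in tokens) for seed in range(N)]

def minhash_similarity(sig_a, sig_b):
    # The fraction of matching positions approximates the Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / N
```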
The interesting property of the minHash transformation is that it leaves us with a constant N number of hashes, and that “chosen” hashes will be in the same positions in the vector. After the minHash transformation, the Jaccard similarity can be approximated by an element-wise comparison of two hash vectors (implemented as pseudo-code above).
So, we can stop here, but we're having so much fun... and we can do so much better. Notice that when we do a comparison, we have to do O(N) integer comparisons, and if we have M messages then comparing every message to each other is O(N M^2) integer comparisons. This is still not acceptable.
To reduce the time complexity of comparing minHashes to each other, we can do better with a technique known as bit sampling. The main idea is that we don’t need to know the exact value of each hash, but only that the hashes are equal at their respective positions in each hash vector. With this insight, let’s only look at the least significant bit (LSB) of each hash value.
More pseudo-code:
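Again, the original snippet is missing from this version; a minimal sketch of the bit sampling step could be:

```python
def bit_sample(signature):
    # Keep only the least significant bit of each minHash value and pack the
    # N bits into one integer, so later comparisons are cheap bitwise ops.
    bits = 0
    for position, value in enumerate(signature):
        bits |= (value & 1) << position
    return bits
```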
When comparing two messages, if the hashes are equal in the same position in the minHash vector, then the bits in the equivalent position after bit sampling should also be equal. So, we can emulate the Jaccard similarity of two minHashes by counting the equal bits in the two bit vectors (i.e. the number of bits minus the Hamming distance) and dividing by the number of bits. Of course, two different hashes will have the same LSB 50% of the time; to increase our efficacy, we would pick a large N initially. Here is some naive and inefficient pseudo-code:
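The referenced pseudo-code isn't reproduced here; a simple (and, as the text says, naive) version of bitSimilarity might be:

```python
def bit_similarity(bits_a, bits_b, n=128):
    # Hamming distance = number of differing bits in the XOR of the vectors;
    # the similarity is the fraction of the n bits that agree.
    hamming = bin(bits_a ^ bits_b).count("1")
    return (n - hamming) / n
```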
In practice, more efficient implementations of the bitSimilarity function can calculate in near O(1) time for reasonable sizes of N (Bit Twiddling Hacks). This means that when comparing M messages to each other, we've reduced the time complexity to O(M^2). But wait, there's more!
Remember how I said we have a lot of data? O(M^2) is still unreasonable when M is a very large number of messages. So we need to try to reduce the number of comparisons to make using a "divide and conquer" strategy.
Let's start with an example where we set N=32, and we want to have a bitSimilarity of .9: In the worst case, to do this, we need 28 of the 32 bits to be equal, or 4 bits unequal. We will refer to the number of unequal bits as the radius of the bit vectors; i.e. if two bit vectors are within a certain radius of bits, then they are similar. The unequal bits can be found by taking the bit-wise XOR of the two bit vectors. For example:
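The original example isn't shown in this version of the post; here is a small stand-in with two 32-bit vectors that differ in exactly 4 positions:

```python
a = 0b11010010101001011100001110101101
b = 0b11010110101001001100101110101100
XOR_mask = a ^ b                     # 1s mark the positions where a and b differ
print(bin(XOR_mask).count("1"))      # 4 unequal bits -> radius 4
```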
If we split up XOR_mask into 4 chunks of 8 bits, then at least one chunk will have exactly zero or exactly one of the bit differences (pigeonhole principle). More generally, if we split XOR_mask of size N into K chunks, with an expected radius R, then at least one chunk is guaranteed to have floor(R / K) or fewer bits unequal. For the purpose of explanation, we will assume that we have chosen all the parameters such that floor(R / K) = 1.
Now you're wondering how this piece of logic helps us? We can now design a data structure LshTable to index the bit vectors to reduce the number of bitSimilarity comparisons drastically (but increasing memory consumption by O(M)) [Fast Search in Hamming Space with Multi-Index Hashing].
We will define LshTable with some pseudo-code:
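The pseudo-code itself is missing from this version of the post; a sketch consistent with the description that follows (parameter values are illustrative) could be:

```python
from collections import defaultdict

class LshTable:
    def __init__(self, n_bits=32, k_chunks=4):
        self.n_bits, self.k = n_bits, k_chunks
        self.chunk_bits = n_bits // k_chunks
        self.tables = [defaultdict(set) for _ in range(k_chunks)]

    def _chunks(self, bits):
        # Split a packed bit vector into K fixed-width chunks.
        mask = (1 << self.chunk_bits) - 1
        return [(bits >> (i * self.chunk_bits)) & mask for i in range(self.k)]

    def add(self, bits):
        # Index the full bit vector under each of its K chunks.
        for table, chunk in zip(self.tables, self._chunks(bits)):
            table[chunk].add(bits)

    def lookup(self, bits):
        # Probe each table with the exact chunk and every one-bit-off variant,
        # returning candidate bit vectors to verify with bitSimilarity.
        candidates = set()
        for table, chunk in zip(self.tables, self._chunks(bits)):
            candidates |= table.get(chunk, set())
            for i in range(self.chunk_bits):
                candidates |= table.get(chunk ^ (1 << i), set())
        return candidates
```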
Basically, in LshTable initialization, we create K hash tables, one for each of the K chunks. During add() of a bit vector, we split the bit vector into K chunks. For each of these chunks, we add the original bit vector into the associated hash table under the index chunk.
Upon the lookup() of a bit vector, we once again split it into chunks and for each chunk look up the associated hash table for a chunk that’s close (zero or one bits off). The returned list is a set of candidate bit vectors to check bitSimilarity. Because of the property explained in the previous section, at least one hash table will contain a set of candidates that contains a similar bit vector.
To compare all M messages to each other, we first insert each message's bit vector into an LshTable (an O(K) operation, K is constant). Then to find similar messages, we simply do a lookup from the LshTable (another O(K) operation), and then check bitSimilarity for each of the candidates returned. The number of candidates to check is usually on the order of M / 2^(N/K), if any. Therefore, the time complexity to compare all M messages to each other is O(M * M / 2^(N/K)). In practice, N and K are empirically chosen such that 2^(N/K) >> M, so the final time complexity is O(M) — remember we started with O(N M^2)!
Phew, what a ride. So, we've detailed how to find similar messages in a very large set of messages efficiently. By using Multi-Index Locality Sensitive Hashing, we can reduce the time complexity from quadratic (with a very high constant) to near linear (with a more manageable constant).
I should also mention that many of the ancillary pseudo-code excerpts used here describe the most naive implementation of each method, and are for instructive purposes only.
Akash Shende | 1 | 3 | https://medium.com/@akash0x53/color-based-object-segmentation-baf8044ec6a3?source=tag_archive---------7---------------- | Color Based Object Segmentation – Akash Shende – Medium | In this picture, Pranav Mistry is using color markers on his fingers to track gestures, and his wearable computer performs actions based on those gestures. That sounds easy! But no, it's not. The computer needs to understand those color markers first; for that, it needs to separate the markers from any surroundings.
Segmentation can be helpful to achieve this. Various methods are available for segmentation; however, this article talks about robust color-based object segmentation.
Create a binary mask that separates the blue T-shirt from the rest of the image.
To find the blue t-shirt in a given image, I used OpenCV's inRange method, which takes a color (or greyscale) image and lower & upper range values as its parameters and returns a binary image, where a pixel is set to 0 when the input pixel doesn't fall in the specified range and to the maximum value (255) otherwise. With the help of this function, and after determining the range values, I ended up with this mask.
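The post doesn't list the exact range values used, so the bounds in this sketch are placeholders; the call itself mirrors the approach described above:

```python
import cv2
import numpy as np

image = cv2.imread("person_in_blue_tshirt.jpg")    # assumed input file name

# Placeholder B, G, R bounds for "blue"; the real values would be tuned to
# the image at hand.
lower_blue = np.array([120, 0, 0])
upper_blue = np.array([255, 100, 100])

mask = cv2.inRange(image, lower_blue, upper_blue)  # 255 inside range, 0 outside
cv2.imwrite("tshirt_mask.png", mask)
```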
But you can see there are problems! It's not able to create a mask for the complete t-shirt, and it also masks the eyes, which aren't blue. This is happening because light from one side of the body whitens the right side and at the same time creates a shadow in the left region. Thus, it creates different shades of blue and results in partial segmentation.
Normalization of the color planes reduces variation in light by averaging pixel values, thus removing the highlighted and shadowed regions and flattening the image. The following image is free from highlights & shadows, and it is divided into one large green background, the blue t-shirt, and skin. Now the inRange method is able to mask only the t-shirt.
The following function converts a pixel at location (x, y) into its corresponding normalized RGB pixel.
Let R, G, B be the pixel's color values; then the normalized pixel g(x, y) is calculated by dividing each individual color component by the sum of all color components and multiplying by 255, e.g. 255 * R / (R + G + B) for the red component. The division results in a floating point number in the range 0.0 to 1.0, and since this is an 8-bit image the result is scaled up by 255.
This function accepts an 8-bit RGB image matrix of size 800x600 and returns a normalized RGB image.
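The function itself isn't reproduced in this version of the post; a vectorized equivalent of the formula above might look like this (the epsilon guard against all-black pixels is an addition, not part of the original):

```python
import numpy as np

def normalize_rgb(image):
    # Divide each channel by the per-pixel sum of all three channels and
    # scale by 255. The tiny epsilon avoids division by zero on black pixels.
    img = image.astype(np.float64)
    channel_sum = img.sum(axis=2, keepdims=True) + 1e-9
    return np.uint8(255.0 * img / channel_sum)
```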
Originally published at akash0x53.github.io on April 29, 2013.
Hrishikesh Huilgolkar | 1 | 4 | https://medium.com/@hrishikeshio/traveling-santa-problem-an-incompetent-algorists-attempt-49ad9d26b26?source=tag_archive---------8---------------- | Traveling santa Problem — An incompetent algorist's attempt | Kaggle announced the Traveling Santa problem in the Christmas season. I joined in excitedly... but soon realized this is not an easy problem. Solving this problem would require expertise in data structures and some good familiarity with TSP problems and their many heuristic algorithms. I had neither... I had to find a way to deal with this problem. I compensated for my lack of algorithmic expertise with common sense, logic and intuition. I finished 65th out of 356 total competitors.
I did some research on packaged TSP solvers and top TSP algorithms. I found Concorde but I could not get it to work on my Ubuntu machine. So I settled on LKH, which uses the Lin-Kernighan heuristic for solving TSP and related problems. I wrote scripts for file conversions and for running LKH.
LKH easily solved my TSP problem in around 30 hours. But it was just one path. I still had to figure out how to make it find the second path. A simple idea to get 2 disjoint paths is to generate the first path, then make the weight of its edges infinite and run LKH on the problem again. But this required the problem to be in distance matrix format. Then I found a major problem.
Problem: RAM too low. Creating a distance matrix for 150,000 points was unimaginable. It would require memory for one number * 150,000 * 150,000, assuming memory for one number = 8 bytes.
Memory required = 8 * 150,000^2 bytes, which is 167 GB!
(Correct me if I am wrong)
Solution: A simple solution was to divide the map into manageable chunks. I used scipy's distance matrix creation function scipy.spatial.distance.pdist(). It creates a distance matrix from coordinates. The matrix created by pdist is in compressed form (a flattened matrix of upper-diagonal elements). scipy.spatial.distance.squareform() can create a square distance matrix from the compressed matrix, but that would waste a lot of RAM. So I created a custom function which divided the compressed matrix into rows so LKH could read it.
Input (coordinates):
1 1
2 3
4 1

Output of pdist (compressed upper column):
1 2 4

Output of squareform() (uncompressed square matrix):
0 1 2
1 0 4
2 4 0

Output of my function, which processed the compressed matrix (upper diagonal elements):
[[1, 2], [4]]
Lots of ram saved!
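The custom function isn't shown in this version of the post, but one way it could work is sketched below. pdist returns the upper triangle as a single flat vector in row order, so row i is just the next slice of length n-1-i. (The numbers in the example above are illustrative; pdist itself would return the actual Euclidean distances.)

```python
import numpy as np
from scipy.spatial.distance import pdist

def condensed_to_rows(condensed, n):
    # Row i holds the distances from point i to points i+1 .. n-1.
    rows, start = [], 0
    for i in range(n - 1):
        length = n - 1 - i
        rows.append(condensed[start:start + length])
        start += length
    return rows

coords = np.array([[1, 1], [2, 3], [4, 1]])
rows = condensed_to_rows(pdist(coords), len(coords))
print([list(np.round(r, 2)) for r in rows])
```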
I tried using Manhattan distance instead of euclidean distance. But after dividing the problem in grids, time taken by distance calculation was manageable so I stuck with euclidean distance.
Through trial and error, I found that on my laptop with 4 GB ram, a 6 by 6 grid in the above format was manageable for both creating distance matrix and for LKH.
I ran LKH on resulting distance matrices and joined the individual solutions.
I joined the resulting solutions in different combination for both paths so as to avoid common paths.
I got 7,415,334 with this method.
I tried a time limit on the LKH algorithm. From 40,000 seconds I reduced it to 300, 20, 5, and 1 seconds, but it made the results slightly worse.
Mingle
The solution above was good, but it could have been better. The problem was that the first path was so good that the second path struggled to find a good route. The difference between the two paths was big.
Path1 ~= 6.2M
Path2 ~= 7.4M
For a long time I thought this would require either solving both paths simultaneously or using a genetic algorithm or a similar algorithm to combine both paths. Both were pretty difficult to implement. Then I got a simple idea. My map was divided into 36 squares. If I combine 18 squares of the first path and 18 squares of the second path, I will have a path whose distance will be approximately the average of the two paths. I tried this trick, used different combinations of the two paths' squares, and got a best score of 6,807,498.
For the new path1, select blue squares from old path1 and grey squares from old path2.
Use the remaining squares for the new path2.
Remove cross lines
My squares were joined in a zigzag manner. I removed the zig-zag lines for a further improvement.
I scored 6,744,291, which was my best score. Another idea was to make the end point of one square and the beginning point of the next square as near as possible, but I couldn't implement the idea before the deadline. My score was around 200,000 points away from first place, which was 6,526,972. Not bad!
Public repo: https://bitbucket.org/hrishikeshio/traveling-santa (More documentation for source code coming soon)
Originally published at www.blogicious.com on January 19, 2013.
Adam Geitgey | 35K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471?source=tag_archive---------0---------------- | Machine Learning is Fun! – Adam Geitgey – Medium | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 日本語, Português, Português (alternate), Türkçe, Français, 한국어 , العَرَبِيَّة, Español (México), Español (España), Polski, Italiano, 普通话, Русский, 한국어 , Tiếng Việt or فارسی.
Bigger update: The content of this article is now available as a full-length video course that walks you through every step of the code. You can take the course for free (and access everything else on Lynda.com free for 30 days) if you sign up with this link.
Have you heard people talking about machine learning but only have a fuzzy idea of what that means? Are you tired of nodding your way through conversations with co-workers? Let’s change that!
This guide is for anyone who is curious about machine learning but has no idea where to start. I imagine there are a lot of people who tried reading the wikipedia article, got frustrated and gave up wishing someone would just give them a high-level explanation. That’s what this is.
The goal is to be accessible to anyone — which means that there are a lot of generalizations. But who cares? If this gets anyone more interested in ML, then mission accomplished.
Machine learning is the idea that there are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.
For example, one kind of algorithm is a classification algorithm. It can put data into different groups. The same classification algorithm used to recognize handwritten numbers could also be used to classify emails into spam and not-spam without changing a line of code. It’s the same algorithm but it’s fed different training data so it comes up with different classification logic.
“Machine learning” is an umbrella term covering lots of these kinds of generic algorithms.
You can think of machine learning algorithms as falling into one of two main categories — supervised learning and unsupervised learning. The difference is simple, but really important.
Let’s say you are a real estate agent. Your business is growing, so you hire a bunch of new trainee agents to help you out. But there’s a problem — you can glance at a house and have a pretty good idea of what a house is worth, but your trainees don’t have your experience so they don’t know how to price their houses.
To help your trainees (and maybe free yourself up for a vacation), you decide to write a little app that can estimate the value of a house in your area based on its size, neighborhood, etc, and what similar houses have sold for.
So you write down every time someone sells a house in your city for 3 months. For each house, you write down a bunch of details — number of bedrooms, size in square feet, neighborhood, etc. But most importantly, you write down the final sale price:
Using that training data, we want to create a program that can estimate how much any other house in your area is worth:
This is called supervised learning. You knew how much each house sold for, so in other words, you knew the answer to the problem and could work backwards from there to figure out the logic.
To build your app, you feed your training data about each house into your machine learning algorithm. The algorithm is trying to figure out what kind of math needs to be done to make the numbers work out.
This is kind of like having the answer key to a math test with all the arithmetic symbols erased:
From this, can you figure out what kind of math problems were on the test? You know you are supposed to “do something” with the numbers on the left to get each answer on the right.
In supervised learning, you are letting the computer work out that relationship for you. And once you know what math was required to solve this specific set of problems, you could answer any other problem of the same type!
Let’s go back to our original example with the real estate agent. What if you didn’t know the sale price for each house? Even if all you know is the size, location, etc of each house, it turns out you can still do some really cool stuff. This is called unsupervised learning.
This is kind of like someone giving you a list of numbers on a sheet of paper and saying “I don’t really know what these numbers mean but maybe you can figure out if there is a pattern or grouping or something — good luck!”
So what could you do with this data? For starters, you could have an algorithm that automatically identified different market segments in your data. Maybe you'd find out that home buyers in the neighborhood near the local college really like small houses with lots of bedrooms, but home buyers in the suburbs prefer 3-bedroom houses with lots of square footage. Knowing about these different kinds of customers could help direct your marketing efforts.
Another cool thing you could do is automatically identify any outlier houses that were way different than everything else. Maybe those outlier houses are giant mansions and you can focus your best sales people on those areas because they have bigger commissions.
Supervised learning is what we’ll focus on for the rest of this post, but that’s not because unsupervised learning is any less useful or interesting. In fact, unsupervised learning is becoming increasingly important as the algorithms get better because it can be used without having to label the data with the correct answer.
Side note: There are lots of other types of machine learning algorithms. But this is a pretty good place to start.
As a human, your brain can approach most any situation and learn how to deal with that situation without any explicit instructions. If you sell houses for a long time, you will instinctively have a “feel” for the right price for a house, the best way to market that house, the kind of client who would be interested, etc. The goal of Strong AI research is to be able to replicate this ability with computers.
But current machine learning algorithms aren't that good yet — they only work when focused on a very specific, limited problem. Maybe a better definition for "learning" in this case is "figuring out an equation to solve a specific problem based on some example data".
Unfortunately “Machine Figuring out an equation to solve a specific problem based on some example data” isn’t really a great name. So we ended up with “Machine Learning” instead.
Of course if you are reading this 50 years in the future and we’ve figured out the algorithm for Strong AI, then this whole post will all seem a little quaint. Maybe stop reading and go tell your robot servant to go make you a sandwich, future human.
So, how would you write the program to estimate the value of a house like in our example above? Think about it for a second before you read further.
If you didn’t know anything about machine learning, you’d probably try to write out some basic rules for estimating the price of a house like this:
If you fiddle with this for hours and hours, you might end up with something that sort of works. But your program will never be perfect and it will be hard to maintain as prices change.
Wouldn't it be better if the computer could just figure out how to implement this function for you? Who cares what exactly the function does as long as it returns the correct number:
One way to think about this problem is that the price is a delicious stew and the ingredients are the number of bedrooms, the square footage and the neighborhood. If you could just figure out how much each ingredient impacts the final price, maybe there’s an exact ratio of ingredients to stir in to make the final price.
That would reduce your original function (with all those crazy if’s and else’s) down to something really simple like this:
Notice the magic numbers in bold — .841231951398213, 1231.1231231, 2.3242341421, and 201.23432095. These are our weights. If we could just figure out the perfect weights to use that work for every house, our function could predict house prices!
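The code block with those weights isn't reproduced in this version of the article; plugging them into a function would look roughly like this (the function and argument names are illustrative):

```python
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood):
    price = 0.0
    # a little pinch of this
    price += num_of_bedrooms * 0.841231951398213
    # and a big pinch of that
    price += sqft * 1231.1231231
    # maybe a handful of this
    price += neighborhood * 2.3242341421
    # and finally, just a little extra salt for good measure
    price += 201.23432095
    return price

print(estimate_house_sales_price(num_of_bedrooms=3, sqft=2000, neighborhood=10))
```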
A dumb way to figure out the best weights would be something like this:
Start with each weight set to 1.0:
Run every house you know about through your function and see how far off the function is at guessing the correct price for each house:
For example, if the first house really sold for $250,000, but your function guessed it sold for $178,000, you are off by $72,000 for that single house.
Now add up the squared amount you are off for each house you have in your data set. Let’s say that you had 500 home sales in your data set and the square of how much your function was off for each house was a grand total of $86,123,373. That’s how “wrong” your function currently is.
Now, take that sum total and divide it by 500 to get an average of how far off you are for each house. Call this average error amount the cost of your function.
If you could get this cost to be zero by playing with the weights, your function would be perfect. It would mean that in every case, your function perfectly guessed the price of the house based on the input data. So that’s our goal — get this cost to be as low as possible by trying different weights.
Step 3: Repeat Step 2 over and over with every single possible combination of weights. Whichever combination of weights makes the cost closest to zero is what you use. When you find the weights that work, you’ve solved the problem!
That’s pretty simple, right? Well think about what you just did. You took some data, you fed it through three generic, really simple steps, and you ended up with a function that can guess the price of any house in your area. Watch out, Zillow!
But here are a few more facts that will blow your mind:
Pretty crazy, right?
Ok, of course you can’t just try every combination of all possible weights to find the combo that works the best. That would literally take forever since you’d never run out of numbers to try.
To avoid that, mathematicians have figured out lots of clever ways to quickly find good values for those weights without having to try very many. Here’s one way:
First, write a simple equation that represents Step #2 above:
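The original image of the equation is missing from this copy; in plain terms it is just the average squared error described in Step 2:

\text{cost} = \frac{1}{\text{number of houses}} \sum_{\text{every house}} \left( \text{our price guess} - \text{actual sale price} \right)^2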
Now let’s re-write exactly the same equation, but using a bunch of machine learning math jargon (that you can ignore for now):
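That image isn’t preserved either; the standard machine-learning way to write the same thing uses \theta for the weights, h_\theta(x^{(i)}) for our estimating function applied to house i, y^{(i)} for that house’s actual sale price, and m for the number of houses:

J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2

The extra factor of one half is a common convention that makes the derivative cleaner; it doesn’t change which weights end up being best.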
This equation represents how wrong our price estimating function is for the weights we currently have set.
If we graph this cost equation for all possible values of our weights for number_of_bedrooms and sqft, we’d get a graph that might look something like this:
In this graph, the lowest point in blue is where our cost is the lowest — thus our function is the least wrong. The highest points are where we are most wrong. So if we can find the weights that get us to the lowest point on this graph, we’ll have our answer!
So we just need to adjust our weights so we are “walking down hill” on this graph towards the lowest point. If we keep making small adjustments to our weights that are always moving towards the lowest point, we’ll eventually get there without having to try too many different weights.
If you remember anything from Calculus, you might remember that if you take the derivative of a function, it tells you the slope of the function’s tangent at any point. In other words, it tells us which way is downhill for any given point on our graph. We can use that knowledge to walk downhill.
So if we calculate a partial derivative of our cost function with respect to each of our weights, then we can subtract that value from each weight. That will walk us one step closer to the bottom of the hill. Keep doing that and eventually we’ll reach the bottom of the hill and have the best possible values for our weights. (If that didn’t make sense, don’t worry and keep reading).
That’s a high-level summary of one way to find the best weights for your function, called batch gradient descent. Don’t be afraid to dig deeper if you are interested in learning the details.
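Here is a minimal sketch of that update loop, assuming the house features have already been packed into a numeric matrix (the learning rate and iteration count are made-up values, and real data would usually need feature scaling first):

```python
import numpy as np

def batch_gradient_descent(X, y, learning_rate=0.0001, iterations=1000):
    """X: matrix of house features (one row per house), y: actual sale prices."""
    m, n = X.shape
    X = np.hstack([X, np.ones((m, 1))])   # extra column of 1s for the constant weight
    weights = np.ones(n + 1)              # start every weight at 1.0

    for _ in range(iterations):
        guesses = X.dot(weights)             # current price guesses for every house
        errors = guesses - y                 # how far off we are per house
        gradient = X.T.dot(errors) / m       # partial derivative of the cost per weight
        weights -= learning_rate * gradient  # take one small step downhill
    return weights
```

Each pass computes the partial derivative of the cost with respect to every weight at once and takes one small step downhill, which is exactly the walk described above.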
When you use a machine learning library to solve a real problem, all of this will be done for you. But it’s still useful to have a good idea of what is happening.
The three-step algorithm I described is called multivariate linear regression. You are estimating the equation for a line that fits through all of your house data points. Then you are using that equation to guess the sales price of houses you’ve never seen before based on where that house would appear on your line. It’s a really powerful idea and you can solve “real” problems with it.
But while the approach I showed you might work in simple cases, it won’t work in all cases. One reason is that house prices aren’t always simple enough to follow a continuous line.
But luckily there are lots of ways to handle that. There are plenty of other machine learning algorithms that can handle non-linear data (like neural networks or SVMs with kernels). There are also ways to use linear regression more cleverly that allow for more complicated lines to be fit. In all cases, the same basic idea of needing to find the best weights still applies.
Also, I ignored the idea of overfitting. It’s easy to come up with a set of weights that always works perfectly for predicting the prices of the houses in your original data set but never actually works for any new houses that weren’t in your original data set. But there are ways to deal with this (like regularization and using a cross-validation data set). Learning how to deal with this issue is a key part of learning how to apply machine learning successfully.
In other words, while the basic concept is pretty simple, it takes some skill and experience to apply machine learning and get useful results. But it’s a skill that any developer can learn!
Once you start seeing how easily machine learning techniques can be applied to problems that seem really hard (like handwriting recognition), you start to get the feeling that you could use machine learning to solve any problem and get an answer as long as you have enough data. Just feed in the data and watch the computer magically figure out the equation that fits the data!
But it’s important to remember that machine learning only works if the problem is actually solvable with the data that you have.
For example, if you build a model that predicts home prices based on the type of potted plants in each house, it’s never going to work. There just isn’t any kind of relationship between the potted plants in each house and the home’s sale price. So no matter how hard it tries, the computer can never deduce a relationship between the two.
So remember, if a human expert couldn’t use the data to solve the problem manually, a computer probably won’t be able to either. Instead, focus on problems where a human could solve the problem, but where it would be great if a computer could solve it much more quickly.
In my mind, the biggest problem with machine learning right now is that it mostly lives in the world of academia and commercial research groups. There isn’t a lot of easy to understand material out there for people who would like to get a broad understanding without actually becoming experts. But it’s getting a little better every day.
If you want to try out what you’ve learned in this article, I made a course that walks you through every step of this article, including writing all the code. Give it a try!
If you want to go deeper, Andrew Ng’s free Machine Learning class on Coursera is pretty amazing as a next step. I highly recommend it. It should be accessible to anyone who has a Comp. Sci. degree and who remembers a very minimal amount of math.
Also, you can play around with tons of machine learning algorithms by downloading and installing scikit-learn. It’s a Python framework that has “black box” versions of all the standard algorithms.
If you liked this article, please consider signing up for my Machine Learning is Fun! Newsletter:
Also, please check out the full-length course version of this article. It covers everything in this article in more detail, including writing the actual code in Python. You can get a free 30-day trial to watch the course if you sign up with this link.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 2!
Interested in computers and machine learning. Likes to write about it.
|
Shivon Zilis | 1.2K | 10 | https://medium.com/@shivon/the-current-state-of-machine-intelligence-f76c20db2fe1?source=tag_archive---------1---------------- | The Current State of Machine Intelligence – Shivon Zilis – Medium | (The 2016 Machine Intelligence landscape and post can be found here)
I spent the last three months learning about every artificial intelligence, machine learning, or data related startup I could find — my current list has 2,529 of them to be exact. Yes, I should find better things to do with my evenings and weekends but until then...
Why do this?
A few years ago, investors and startups were chasing “big data” (I helped put together a landscape on that industry). Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or somesuch — collectively I call these “machine intelligence” (I’ll get into the definitions in a second). Our fund, Bloomberg Beta, which is focused on the future of work, has been investing in these approaches. I created this landscape to start to put startups into context. I’m a thesis-oriented investor and it’s much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy.
What is “machine intelligence,” anyway?
I mean “machine intelligence” as a unifying term for what others call machine learning and artificial intelligence. (Some others have used the term before, without quite describing it or understanding how laden this field has been with debates over descriptions.) I would have preferred to avoid a different label, but when I tried either “artificial intelligence” or “machine learning” both proved too narrow: when I called it “artificial intelligence” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” approaches, like the various flavors of deep learning. People have immediately grasped “machine intelligence” so here we are. ☺
Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus). Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.
What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).
Which companies are on the landscape?
I considered thousands of companies, so while the chart is crowded it’s still a small subset of the overall ecosystem. “Admissions rates” to the chart were fairly in line with those of Yale or Harvard, and perhaps equally arbitrary. ☺
I tried to pick companies that used machine intelligence methods as a defining part of their technology. Many of these companies clearly belong in multiple areas but for the sake of simplicity I tried to keep companies in their primary area and categorized them by the language they use to describe themselves (instead of quibbling over whether a company used “NLP” accurately in its self-description).
If you want to get a sense for innovations at the heart of machine intelligence, focus on the core technologies layer. Some of these companies have APIs that power other applications, some sell their platforms directly into enterprise, some are at the stage of cryptic demos, and some are so stealthy that all we have is a few sentences to describe them.
The most exciting part for me was seeing how much is happening in the application space. These companies separated nicely into those that reinvent the enterprise, industries, and ourselves.
If I were looking to build a company right now, I’d use this landscape to help figure out what core and supporting technologies I could package into a novel industry application. Everyone likes solving the sexy problems but there are an incredible number of ‘unsexy’ industry use cases that have massive market opportunities and powerful enabling technologies that are begging to be used for creative applications (e.g., Watson Developer Cloud, AlchemyAPI).
Reflections on the landscape:
We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence, documenting the enabling factors of this resurgence. (Kevin Kelly, for example chalks it up to cheap parallel computing, large datasets, and better algorithms.) I focused on understanding the ecosystem on a company-by-company level and drawing implications from that.
Yes, it’s true, machine intelligence is transforming the enterprise, industries and humans alike.
On a high level it’s easy to understand why machine intelligence is important, but it wasn’t until I laid out what many of these companies are actually doing that I started to grok how much it is already transforming everything around us. As Kevin Kelly more provocatively put it, “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI”. In many cases you don’t even need the X — machine intelligence will certainly transform existing industries, but will also likely create entirely new ones.
Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying 80's classic video games.
Many companies will be acquired.
I was surprised to find that over 10% of the eligible (non-public) companies on the slide have been acquired. It was in stark contrast to the big data landscape we created, which had very few acquisitions at the time.
No jaw will drop when I reveal that Google is the number one acquirer, though there were more than 15 different acquirers just for the companies on this chart. My guess is that by the end of 2015 almost another 10% will be acquired. For thoughts on which specific ones will get snapped up in the next year you’ll have to twist my arm...
Big companies have a disproportionate advantage, especially those that build consumer products.
The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.
Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty pleasure sitcom). Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state of the art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this).
Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements so are focusing their attention more on knowledge representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.
The talent’s in the New (AI)vy League.
In the last 20 years, most of the best minds in machine intelligence (especially the ‘hardcore AI’ types) worked in academia. They developed new machine intelligence methods, but there were few real world applications that could drive business value.
Now that real world applications of more complex machine intelligence methods like deep belief nets and hierarchical neural networks are starting to solve real world problems, we’re seeing academic talent move to corporate settings. Facebook recruited NYU professors Yann LeCun and Rob Fergus to their AI Lab, Google hired University of Toronto’s Geoffrey Hinton, Baidu wooed Andrew Ng. It’s important to note that they all still give back significantly to the academic community (one of LeCun’s lab mandates is to work on core research to give back to the community, Hinton spends half of his time teaching, Ng has made machine intelligence more accessible through Coursera) but it is clear that a lot of the intellectual horsepower is moving away from academia.
For aspiring minds in the space, these corporate labs not only offer lucrative salaries and access to the “godfathers” of the industry, but, the most important ingredient: data. These labs offer talent access to datasets they could never get otherwise (the ImageNet dataset is fantastic, but can’t compare to what Facebook, Google, and Baidu have in house). As a result, we’ll likely see corporations become the home of many of the most important innovations in machine intelligence and recruit many of the graduate students and postdocs that would have otherwise stayed in academia.
There will be a peace dividend.
Big companies have an inherent advantage and it’s likely that the ones who will win the machine intelligence race will be even more powerful than they are today. However, the good news for the rest of the world is that the core technology they develop will rapidly spill into other areas, both via departing talent and published research.
Similar to the big data revolution, which was sparked by the release of Google’s BigTable and BigQuery papers, we will see corporations release equally groundbreaking new technologies into the community. Those innovations will be adapted to new industries and use cases that the Googles of the world don’t have the DNA or desire to tackle.
Opportunities for entrepreneurs:
“My company does deep learning for X”
Few words will make you more popular in 2015. That is, if you can credibly say them.
Deep learning is a particularly popular method in the machine intelligence field that has been getting a lot of attention. Google, Facebook, and Baidu have achieved excellent results with the method for vision and language based tasks and startups like Enlitic have shown promising results as well.
Yes, it will be an overused buzzword with excitement ahead of results and business models, but unlike the hundreds of companies that say they do “big data”, it’s much easier to cut to the chase in terms of verifying credibility here if you’re paying attention.
The most exciting part about the deep learning method is that when applied with the appropriate levels of care and feeding, it can replace some of the intuition that comes from domain expertise with automatically-learned features. The hope is that, in many cases, it will allow us to fundamentally rethink what a best-in-class solution is.
As an investor who is curious about the quirkier applications of data and machine intelligence, I can’t wait to see what creative problems deep learning practitioners try to solve. I completely agree with Jeff Hawkins when he says a lot of the killer applications of these types of technologies will sneak up on us. I fully intend to keep an open mind.
“Acquihire as a business model”
People say that data scientists are unicorns in short supply. The talent crunch in machine intelligence will make it look like we had a glut of data scientists. In the data field, many people had industry experience over the past decade. Most hardcore machine intelligence work has only been in academia. We won’t be able to grow this talent overnight.
This shortage of talent is a boon for founders who actually understand machine intelligence. A lot of companies in the space will get seed funding because there are early signs that the acquihire price for a machine intelligence expert is north of 5x that of a normal technical acquihire (take, for example Deep Mind, where price per technical head was somewhere between $5–10M, if we choose to consider it in the acquihire category). I’ve had multiple friends ask me, only semi-jokingly, “Shivon, should I just round up all of my smartest friends in the AI world and call it a company?” To be honest, I’m not sure what to tell them. (At Bloomberg Beta, we’d rather back companies building for the long term, but that doesn’t mean this won’t be a lucrative strategy for many enterprising founders.)
A good demo is disproportionately valuable in machine intelligence
I remember watching Watson play Jeopardy. When it struggled at the beginning I felt really sad for it. When it started trouncing its competitors I remember cheering it on as if it were the Toronto Maple Leafs in the Stanley Cup finals (disclaimers: (1) I was an IBMer at the time so was biased towards my team (2) the Maple Leafs have not made the finals during my lifetime — yet — so that was purely a hypothetical).
Why do these awe-inspiring demos matter? The last wave of technology companies to IPO didn’t have demos that most of us would watch, so why should machine intelligence companies? The last wave of companies were very computer-like: database companies, enterprise applications, and the like. Sure, I’d like to see a 10x more performant database, but most people wouldn’t care. Machine intelligence wins and loses on demos because 1) the technology is very human, enough to inspire shock and awe, 2) business models tend to take a while to form, so they need more funding for longer period of time to get them there, 3) they are fantastic acquisition bait.
Watson beat the world’s best humans at trivia, even if it thought Toronto was a US city. DeepMind blew people away by beating video games. Vicarious took on CAPTCHA. There are a few companies still in stealth that promise to impress beyond that, and I can’t wait to see if they get there.
Demo or not, I’d love to talk to anyone using machine intelligence to change the world. There’s no industry too unsexy, no problem too geeky. I’d love to be there to help so don’t be shy.
I hope this landscape chart sparks a conversation. The goal is to make this a living document and I want to know if there are companies or categories missing. I welcome feedback and would like to put together a dynamic visualization where I can add more companies and dimensions to the data (methods used, data types, end users, investment to date, location, etc.) so that folks can interact with it to better explore the space.
Questions and comments: Please email me. Thank you to Andrew Paprocki, Aria Haghighi, Beau Cronin, Ben Lorica, Doug Fulop, David Andrzejewski, Eric Berlow, Eric Jonas, Gary Kazantsev, Gideon Mann, Greg Smithies, Heidi Skinner, Jack Clark, Jon Lehr, Kurt Keutzer, Lauren Barless, Pete Skomoroch, Pete Warden, Roger Magoulas, Sean Gourley, Stephen Purpura, Wes McKinney, Zach Bogue, the Quid team, and the Bloomberg Beta team for your ever-helpful perspectives!
Disclaimer: Bloomberg Beta is an investor in Adatao, Alation, Aviso, BrightFunnel, Context Relevant, Mavrx, Newsle, Orbital Insights, Pop Up Archive, and two others on the chart that are still undisclosed. We’re also investors in a few other machine intelligence companies that aren’t focusing on areas that were a fit for this landscape, so we left them off.
For the full resolution version of the landscape please click here.
Partner at Bloomberg Beta. All about machine intelligence for good. Equal parts nerd and athlete. Straight up Canadian stereotype and proud of it.
|
AirbnbEng | 369 | 11 | https://medium.com/airbnb-engineering/architecting-a-machine-learning-system-for-risk-941abbba5a60?source=tag_archive---------2---------------- | Architecting a Machine Learning System for Risk – Airbnb Engineering & Data Science – Medium | By Naseem Hakim & Aaron Keys
At Airbnb, we want to build the world’s most trusted community. Guests trust Airbnb to connect them with world-class hosts for unique and memorable travel experiences. Airbnb hosts trust that guests will treat their home with the same care and respect that they would their own. The Airbnb review system helps users find community members who earn this trust through positive interactions with others, and the ecosystem as a whole prospers.
The overwhelming majority of web users act in good faith, but unfortunately, there exists a small number of bad actors who attempt to profit by defrauding websites and their communities. The trust and safety team at Airbnb works across many disciplines to help protect our users from these bad actors, ideally before they have the opportunity to impart negativity on the community.
There are many different kinds of risk that online businesses may have to protect against, with varying exposure depending on the particular business. For example, email providers devote significant resources to protecting users from spam, whereas payments companies deal more with credit card chargebacks.
We can mitigate the potential for bad actors to carry out different types of attacks in different ways.
Many risks can be mitigated through user-facing changes to the product that require additional verification from the user. For example, requiring email confirmation, or implementing 2FA to combat account takeovers, as many banks have done.
Scripted attacks are often associated with a noticeable increase in some measurable metric over a short period of time. For example, a sudden 1000% increase in reservations in a particular city could be a result of excellent marketing, or fraud.
Fraudulent actors often exhibit repetitive patterns. As we recognize these patterns, we can apply heuristics to predict when they are about to occur again, and help stop them. For complex, evolving fraud vectors, heuristics eventually become too complicated and therefore unwieldy. In such cases, we turn to machine learning, which will be the focus of this blog post.
For a more detailed look at other aspects of online risk management, check out Ohad Samet’s great ebook.
Different risk vectors can require different architectures. For example, some risk vectors are not time critical, but require computationally intensive techniques to detect. An offline architecture is best suited for this kind of detection. For the purposes of this post, we are focusing on risks requiring realtime or near-realtime action. From a broad perspective, a machine-learning pipeline for these kinds of risk must balance two important goals:
These may seem like competing goals, since optimizing for realtime calculations during a web transaction creates a focus on speed and reliability, whereas optimizing for model building and iteration creates more of a focus on flexibility. At Airbnb, engineering and data teams have worked closely together to develop a framework that accommodates both goals: a fast, robust scoring framework with an agile model-building pipeline.
In keeping with our service-oriented architecture, we built a separate fraud prediction service to handle deriving all the features for a particular model. When a critical event occurs in our system, e.g., a reservation is created, we query the fraud prediction service for this event. This service can then calculate all the features for the “reservation creation” model, and send these features to our Openscoring service, which is described in more detail below. The Openscoring service returns a score and a decision based on a threshold we’ve set, and the fraud prediction service can then use this information to take action (i.e., put the reservation on hold).
The fraud prediction service has to be fast, to ensure that we are taking action on suspicious events in near realtime. Like many of our backend services for which performance is critical, it is built in java, and we parallelize the database queries necessary for feature generation. However, we also want the freedom to occasionally do some heavy computation in deriving features, so we run it asynchronously so that we are never blocking for reservations, etc. This asynchronous model works for many situations where a few seconds of delay in fraud detection has no negative effect. It’s worth noting, however, that there are cases where you may want to react in realtime to block transactions, in which case a synchronous query and precomputed features may be necessary. This service is built in a very modular way, and exposes an internal restful API, making adding new events and models easy.
Openscoring is a Java service that provides a JSON REST interface to the Java Predictive Model Markup Language (PMML) evaluator JPMML. Both JPMML and Openscoring are open source projects released under the Apache 2.0 license and authored by Villu Ruusmann (edit — the most recent version is licensed under the AGPL 3.0). The JPMML backend of Openscoring consumes PMML, an xml markup language that encodes several common types of machine learning models, including tree models, logit models, SVMs and neural networks. We have streamlined Openscoring for a production environment by adding several features, including kafka logging and statsd monitoring. Andy Kramolisch has modified Openscoring to permit using several models simultaneously.
As described below, there are several considerations that we weighed carefully before moving forward with Openscoring:
After considering all of these factors, we decided that Openscoring best satisfied our two-pronged goal of having a fast and robust, yet flexible machine learning framework.
A schematic of our model-building pipeline using PMML is illustrated above. The first step involves deriving features from the data stored on the site. Since the combination of features that gives the optimal signal is constantly changing, we store the features in a json format, which allows us to generalize the process of loading and transforming features, based on their names and types. We then transform the raw features through bucketing or binning values, and replacing missing values with reasonable estimates to improve signal. We also remove features that are shown to be statistically unimportant from our dataset. While we omit most of the details regarding how we perform these transformations for brevity here, it is important to recognize that these steps take a significant amount of time and care. We then use our transformed features to train and cross-validate the model using our favorite PMML-compatible machine learning library, and upload the PMML model to Openscoring. The final model is tested and then used for decision-making if it becomes the best performer.
The model-training step can be performed in any language with a library that outputs PMML. One commonly used and well-supported library is the R PMML package. As illustrated below, generating a PMML with R requires very little code.
This R script has the advantage of simplicity, and a script similar to this is a great way to start building PMMLs and to get a first model into production. In the long run, however, a setup like this has some disadvantages. First, our script requires that we perform feature transformation as a pre-processing step, and therefore we have to add these transformation instructions to the PMML by editing it afterwards. The R PMML package supports many PMML transformations and data manipulations, but it is far from universal. We deploy the model as a separate step — post model-training — and so we have to manually test it for validity, which can be a time-consuming process. Yet another disadvantage of R is that the implementation of the PMML exporter is somewhat slow for a random forest model with many features and many trees. However, we’ve found that simply re-writing the export function in C++ decreases run time by a factor of 10,000, from a few days to a few seconds.
We can get around the drawbacks of R while maintaining its advantages by building a pipeline based on Python and scikit-learn. Scikit-learn is a Python package that supports many standard machine learning models, and includes helpful utilities for validating models and performing feature transformations. We find that Python is a more natural language than R for ad-hoc data manipulation and feature extraction. We automate the process of feature extraction based on a set of rules encoded in the names and types of variables in the features json; thus, new features can be incorporated into the model pipeline with no changes to the existing code.
Deployment and testing can also be performed automatically in Python by using its standard network libraries to interface with Openscoring. Standard model performance tests (precision-recall, ROC curves, etc.) are carried out using sklearn’s built-in capabilities. Sklearn does not support PMML export out of the box, so we have written an in-house exporter for particular sklearn classifiers. When the PMML file is uploaded to Openscoring, it is automatically tested for correspondence with the scikit-learn model it represents. Because feature transformation, model building, model validation, deployment and testing are all carried out in a single script, a data scientist or engineer is able to quickly iterate on a model based on new features or more recent data, and then rapidly deploy the new model into production.
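The in-house feature transformation and PMML export code isn’t public, so the sketch below only shows the scikit-learn half of such a pipeline with placeholder data: train a model, cross-validate it, check it on held-out data, and then hand it off to an exporter. None of this is Airbnb’s actual code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder data: in a real pipeline these come from the transformed features json
X = np.random.rand(1000, 20)                      # one row of features per event
y = (np.random.rand(1000) > 0.95).astype(int)     # 1 = fraudulent, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated AUC:", cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc").mean())

model.fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# export_to_pmml(model) would go here; the post describes an in-house exporter for this step
```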
Although this blog post has focused mostly on our architecture and model building pipeline, the truth is that much of our time has been spent elsewhere. Our process was very successful for some models, but for others we encountered poor precision-recall. Initially we considered whether we were experiencing a bias or a variance problem, and tried using more data and more features. However, after finding no improvement, we started digging deeper into the data, and found that the problem was that our ground truth was not accurate.
Consider chargebacks as an example. A chargeback can be “Not As Described (NAD)” or “Fraud” (this is a simplification), and grouping both types of chargebacks together for a single model would be a bad idea because legitimate users can file NAD chargebacks. This is an easy problem to resolve, and not one we actually had (agents categorize chargebacks as part of our workflow); however, there are other types of attacks where distinguishing legitimate activity from illegitimate is more subtle, and necessitated the creation of new data stores and logging pipelines.
Most people who’ve worked in machine learning will find this obvious, but it’s worth re-stressing:
Towards this end, sometimes you don’t know what data you’re going to need until you’ve seen a new attack, especially if you haven’t worked in the risk space before, or have worked in the risk space but only in a different sector. So the best advice we can offer in this case is to log everything. Throw it all in HDFS, whether you need it now or not. In the future, you can always use this data to backfill new data stores if you find it useful. This can be invaluable in responding to a new attack vector.
Although our current ML pipeline uses scikit-learn and Openscoring, our system is constantly evolving. Our current setup is a function of the stage of the company and the amount of resources, both in terms of personnel and data, that are currently available. Smaller companies may only have a few ML models in production and a small number of analysts, and can take time to manually curate data and train the model in many non-standardized steps. Larger companies might have many, many models and require a high degree of automation, and get a sizable boost from online training. A unique challenge of working at a hyper-growth company is that landscape fundamentally changes year-over-year, and pipelines need to adjust to account for this.
As our data and logging pipelines improve, investing in improved learning algorithms will become more worthwhile, and we will likely shift to testing new algorithms, incorporating online learning, and expanding on our model building framework to support larger data sets. Additionally, some of the most important opportunities to improve our models are based on insights into our unique data, feature selection, and other aspects our risk systems that we are not able to share publicly. We would like to acknowledge the other engineers and analysts who have contributed to these critical aspects of this project. We work in a dynamic, highly-collaborative environment, and this project is an example of how engineers and data scientists at Airbnb work together to arrive at a solution that meets a diverse set of needs. If you’re interested in learning more, contact us about our data science and engineering teams!
Originally published at nerds.airbnb.com on June 16, 2014.
Creative engineers and data scientists building a world where you can belong anywhere. http://airbnb.io
|
Yingjie Miao | 43 | 6 | https://medium.com/kifi-engineering/from-word2vec-to-doc2vec-an-approach-driven-by-chinese-restaurant-process-93d3602eaa31?source=tag_archive---------3---------------- | From word2vec to doc2vec: an approach driven by Chinese restaurant process | Google’s word2vec project has created lots of interests in the text mining community. It’s a neural network language model that is “both supervised and unsupervised”. Unsupervised in the sense that you only have to provide a big corpus, say English wiki. Supervised in the sense that the model cleverly generates supervised learning tasks from the corpus. How? Two approaches, known as Continuous Bag of Words (CBOW) and Skip-Gram (See Figure 1 in this paper). CBOW forces the neural net to predict current word by surrounding words, and Skip-Gram forces the neural net to predict surrounding words of the current word. Training is essentially a classic back-propagation method with a few optimization and approximation tricks (e.g. hierarchical softmax).
Word vectors generated by the neural net have nice semantic and syntactic behaviors. Semantically, “iOS” is close to “Android”. Syntactically, “boys” minus “boy” is close to “girls” minus “girl”. One can checkout more examples here.
Although this provides high quality word vectors, there is still no clear way to combine them into a high quality document vector. In this article, we discuss one possible heuristic, inspired by a stochastic process called Chinese Restaurant Process (CRP). Basic idea is to use CRP to drive a clustering process and summing word vectors in the right cluster.
Imagine we have an document about chicken recipe. It contains words like “chicken”, “pepper”, “salt”, “cheese”. It also contains words like “use”, “buy”, “definitely”, “my”, “the”. The word2vec model gives us a vector for each word. One could naively sum up every word vector as the doc vector. This clearly introduces lots of noise. A better heuristic is to use a weighted sum, based on other information like idf or Part of Speech (POS) tag.
The question is: could we be more selective when adding terms? If this is a chicken recipe document, I shouldn’t even consider words like “definitely”, “use”, “my” in the summation. One can argue that idf based weights can significantly reduce noise of boring words like “the” and “is”. However, for words like “definitely”, “overwhelming”, the idfs are not necessarily small as you would hope.
It’s natural to think that if we can first group words into clusters, words like “chicken”, “pepper” may stay in one cluster, along with other clusters of “junk” words. If we can identify the “relevant” clusters, and only summing up word vectors from relevant clusters, we should have a good doc vector.
This boils down to clustering the words in the document. One can of course use off-the-shelf algorithms like K-means, but most of these algorithms require a distance metric. Word2vec behaves nicely under cosine similarity, but this doesn’t necessarily mean it behaves as well under Euclidean distance (even after projection to the unit sphere, it’s perhaps best to use geodesic distance).
It would be nice if we can directly work with cosine similarity. We have done a quick experiment on clustering words driven by CRP-like stochastic process. It worked surprisingly well — so far.
Now let’s explain CRP. Imagine you go to a (Chinese) restaurant. There are already n tables with different numbers of people, plus one empty table. CRP has a hyperparameter r > 0, which can be regarded as the “imagined” number of people at the empty table. You go to one of the (n+1) tables with probability proportional to the existing number of people at that table (for the empty table, that number is r). If you go to one of the n existing tables, you are done. If you decide to sit down at the empty table, the Chinese restaurant automatically creates a new empty table. In that case, the next customer who comes in will choose from (n+2) tables (including the new empty table).
Inspired by CRP, we tried the following variations of CRP to include the similarity factor. Common setup is the following: we are given M vectors to be clustered. We maintain two things: cluster sum (not centroid!), and vectors in clusters. We iterate through vectors. For current vector V, suppose we have n clusters already. Now we find the cluster C whose cluster sum is most similar to current vector. Call this score sim(V, C).
Variant 1: v creates a new cluster with probability 1/(1 + n). Otherwise v goes to cluster C.
Variant 2: If sim(V, C) > 1/(1 + n), goes to cluster C. Otherwise with probability 1/(1+n) it creates a new cluster and with probability n/(1+n) it goes to C.
In any of the two variants, if v goes to a cluster, we update cluster sum and cluster membership.
There is one distinct difference to traditional CRP: if we don’t go to empty table, we deterministically go to the “most similar” table.
In practice, we find these variants create similar results. One difference is that variant 1 tends to produce more but smaller clusters, while variant 2 tends to produce fewer but larger clusters. The examples below are from variant 2.
For example, for a chicken recipe document, the clusters look like this:
Apparently, the first cluster is most relevant. Now let’s take the cluster sum vector (which is the sum of all vectors from this cluster), and test if it really preserves semantics. Below is a snippet of a Python console. We trained word vectors using the C implementation on a fraction of the English Wiki, and read the model file using the Python library gensim.models.word2vec. c[0] below denotes cluster 0.
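The console snippet itself isn’t preserved in this copy, but the experiment can be approximated along these lines (the model filename is a placeholder, the exact loading call depends on your gensim version, and the cluster words are taken from the recipe example above):

```python
import numpy as np
from gensim.models import KeyedVectors  # older gensim exposes this via gensim.models.word2vec.Word2Vec

vectors = KeyedVectors.load_word2vec_format("enwiki_vectors.bin", binary=True)

# c[0]: the "relevant" cluster from the chicken recipe document
cluster_0 = ["chicken", "pepper", "salt", "cheese"]
cluster_sum = np.sum([vectors[w] for w in cluster_0], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# words related to cooking should score much higher than unrelated ones
for word in ["recipe", "soup", "definitely", "radar"]:
    print(word, cosine(cluster_sum, vectors[word]))
```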
Looks like the semantics are preserved well. It’s convincing that we can use this as the doc vector.
The recipe document seems easy. Now let’s try something more challenging, like a news article. News articles tend to tell stories, and thus has less concentrated “topic words”. We tried the clustering on this article, titled “Signals on Radar Puzzle Officials in Hunt for Malaysian Jet”. We got 4 clusters:
Again, looks decent. Note that this is a simple 1-pass clustering process and we don’t have to specify the number of clusters! This could be very helpful for latency-sensitive services.
There is still a missing step: how to find out the relevant cluster(s)? We haven’t yet done extensive experiments on this part. A few heuristics to consider:
There are other problems to think about: 1) how do we merge clusters? Based on similarity among cluster sum vectors? Or averaging similarity between cluster members? 2) what is the minimal set of words that can reconstruct cluster sum vector (in the sense of cosine similarity)? This could be used as a semantic keyword extraction method.
Conclusion: Google’s word2vec provides powerful word vectors. We are interested in using these vectors to generate high quality document vectors in an efficient way. We tried a strategy based on a variant of Chinese Restaurant Process and obtained interesting results. There are some open problems to explore, and we would like to hear what you think.
Appendix: Python-style pseudo-code for similarity-driven CRP
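The pseudo-code itself is not preserved in this copy; below is a reconstruction of variant 2 from the description above (a minimal sketch, not the original snippet):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def crp_cluster(word_vectors):
    """Variant 2: one pass over the word vectors, no preset number of clusters."""
    cluster_sums = []      # running sum vector of each cluster
    cluster_members = []   # indices of the words assigned to each cluster

    for i, v in enumerate(word_vectors):
        v = np.asarray(v, dtype=float)
        if not cluster_sums:
            cluster_sums.append(v.copy())
            cluster_members.append([i])
            continue

        n = len(cluster_sums)
        # find the existing cluster C whose sum is most similar to the current vector
        sims = [cosine(v, s) for s in cluster_sums]
        best = int(np.argmax(sims))

        # join C if similar enough, otherwise open a new cluster with probability 1/(1+n)
        if sims[best] > 1.0 / (1 + n) or np.random.rand() > 1.0 / (1 + n):
            cluster_sums[best] += v
            cluster_members[best].append(i)
        else:
            cluster_sums.append(v.copy())
            cluster_members.append([i])

    return cluster_members, cluster_sums
```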
We wrote this post while working on Kifi — Connecting people with knowledge. Learn more.
Originally published at eng.kifi.com on March 17, 2014.
The Kifi Engineering Blog
|
Pinterest Engineering | 113 | 6 | https://medium.com/@Pinterest_Engineering/building-a-smarter-home-feed-ad1918fdfbe3?source=tag_archive---------4---------------- | Building a smarter home feed – Pinterest Engineering – Medium | Chris Pinchak | Pinterest engineer, Discovery
The home feed should be a reflection of what each user cares about. Content is sourced from inputs such as people and boards the user follows, interests, and recommendations. To ensure we maintain fast, reliable and personalized home feeds, we built the smart feed with the following design values in mind:
1. Different sources of Pins should be mixed together at different rates.
2. Some Pins should be selectively dropped or deferred until a later time. Some sources may produce Pins of poor quality for a user, so instead of showing everything available immediately, we can be selective about what to show and what to hold back for a future session.
3. Pins should be arranged in the order of best-first rather than newest-first. For some sources, newer Pins are intuitively better, while for others, newness is less important.
We shifted away from our previously time-ordered home feed system and onto a more flexible one. The core feature of the smart feed architecture is its separation of available, but unseen, content and content that’s already been presented to the user. We leverage knowledge of what the user hasn’t yet seen to our advantage when deciding how the feed evolves over time.
Smart feed is a composition of three independent services, each of which has a specific role in the construction of a home feed.
The smart feed worker is the first to process Pins and has two primary responsibilities — to accept incoming Pins and assign some score proportional to their quality or value to the receiving user, and to remember these scored Pins in some storage for later consumption.
Essentially, the worker manages Pins as they become newly available, such as those from the repins of the people the user follows. Pins have varying value to the receiving user, so the worker is tasked with deciding the magnitude of their subjective quality.
Incoming Pins are currently obtained from three separate sources: repins made by followed users, related Pins, and Pins from followed interests. Each is scored by the worker and then inserted into a pool for that particular type of pin. Each pool is a priority queue sorted on score and belongs to a single user. Newly added Pins mix with those added before, allowing the highest quality Pins to be accessible over time at the front of the queue.
Pools can be implemented in a variety of ways so long as the priority queue requirement is met. We choose to do this by exploiting the key-based sorting of HBase. Each key is a combination of user, score and Pin such that, for any user, we may scan a list of available Pins according to their score. Newly added triples will be inserted at their appropriate location to maintain the score order. This combination of user, score, and Pin into a key value can be used to create a priority queue in other storage systems aside from HBase, a property we may use in the future depending on evolving storage requirements.
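The exact key encoding isn’t described in the post; one plausible (purely hypothetical) way to get a best-first scan out of lexicographically sorted keys is to embed an inverted, zero-padded score in the row key:

```python
MAX_SCORE = 10**9  # hypothetical upper bound on the integer-scaled score

def pool_row_key(user_id: str, score: float, pin_id: str) -> str:
    # Invert the score so that lexicographic order of keys == best-first order of Pins
    inverted = MAX_SCORE - int(score * 1000)
    return f"{user_id}:{inverted:010d}:{pin_id}"

# Scanning keys with the prefix "user42:" now yields that user's Pins best-first
print(pool_row_key("user42", 0.973, "pin123"))
```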
Distinct from the smart feed worker, the smart feed content generator is concerned primarily with defining what “new” means in the context of a home feed. When a user accesses the home feed, we ask the content generator for new Pins since their last visit. The generator decides the quantity, composition, and arrangement of new Pins to return in response to this request.
The content generator assembles available Pins into chunks for consumption by the user as part of their home feed. The generator is free to choose any arrangement based on a variety of input signals, and may elect to use some or all of the Pins available in the pools. Pins that are selected for inclusion in a chunk are thereafter removed from from the pools so they cannot be returned as part of subsequent chunks.
The content generator is generally free to perform any rearrangements it likes, but is bound to the priority queue nature of the pools. When the generator asks for n pins from a pool, it’ll get the n highest scoring (i.e., best) Pins available. Therefore, the generator doesn’t need to concern itself with finding the best available content, but instead with how the best available content should be presented.
In addition to providing high availability of the home feed, the smart feed service is responsible for combining new Pins returned by the content generator with those that previously appeared in the home feed. We can separate these into the chunk returned by the content generator and the materialized feed managed by the smart feed service.
The materialized feed represents a frozen view of the feed as it was the last time the user viewed it. To the materialized Pins we add the Pins from the content generator in the chunk. The service makes no decisions about order, instead it adds the Pins in exactly the order given by the chunk. Because it has a fairly low rate of reading and writing, the materialized feed is likely to suffer from fewer availability events. In addition, feeds can be trimmed to restrict them to a maximum size. The need for less storage means we can easily increase the availability and reliability of the materialized feed through replication and the use of faster storage hardware.
The smart feed service relies on the content generator to provide new Pins. If the generator experiences a degradation in performance, the service can gracefully handle the loss of its availability. In the event the content generator encounters an exception while generating a chunk, or if it simply takes too long to produce one, the smart feed service will return the content contained in the materialized feed. In this instance, the feed will appear to the end user as unchanged from last time. Future feed views will produce chunks as large as, or larger than, the last so that eventually the user will see new Pins.
By moving to smart feed, we achieved the goals of a highly flexible architecture and better control over the composition of home feeds. The home feed is now powered by three separate services, each with a well-defined role in its production and distribution. The individual services can be altered or replaced with components that serve the same general purpose. The use of pools to buffer Pins according to their quality allows us a greater amount of control over the composition of home feeds.
Continuing with this project, we intend to better model users’ preferences with respect to Pins in their home feeds. Our accuracy of recommendation quality varies considerably over our user base, and we would benefit from using preference information gathered from recent interactions with the home feed. Knowledge of personal preference will also help us order home feeds so the Pins of most value can be discovered with the least amount of effort.
If you’re interested in tackling challenges and making improvements like this, join our team!
Chris Pinchak is a software engineer at Pinterest.
Acknowledgements: This technology was built in collaboration with Dan Feng, Dmitry Chechik, Raghavendra Prabhu, Jeremy Carroll, Xun Liu, Varun Sharma, Joe Lau, Yuchen Liu, Tian-Ying Chang, and Yun Park. This team, as well as people from across the company, helped make this project a reality with their technical insights and invaluable feedback.
Inventive engineers building the first visual discovery engine, 100 billion ideas and counting. https://careers.pinterest.com/careers/engineering
|
Nikhil Dandekar | 116 | 3 | https://towardsdatascience.com/what-makes-a-good-data-scientist-engineer-a8b4d7948a86?source=tag_archive---------5---------------- | What makes a good data scientist/engineer? – Towards Data Science | The term data scientist has been used lately to describe a wide variety of skills & roles. In this post I will focus on a particular flavor of data scientist. I will talk about the qualities needed to be a good data scientist-engineer who ships relevance products to users. Some examples of relevance products are:
These folks need to be strong at data science and engineering to be successful. Some places call these folks Machine Learning engineers, since most of the work they do involves Machine Learning. More generally, I feel relevance engineer is a good term to describe them.
Relevance engineers have a common set of skills that they draw upon to get their jobs done. The list below doesn’t include some of the known, obvious skills. You obviously need to be smart. You obviously need to have (or be able to learn quickly) the required “book” knowledge.
But beyond that, there are a bunch of not-so-obvious skills that you can’t learn from a book. Here are some of those, in no particular order:
This list is by no means exhaustive, but does capture some of the qualities of the smartest folks I have worked with. Happy to hear what you think.
Thanks to Peter Bailey and Andrew Hogue for feedback on the initial revisions.
*In this post, feature means a software feature, not a machine learning feature.
Engineering Manager doing Machine Learning @ Google. Previously worked on ML and search at Quora, Foursquare and Bing.
Sharing concepts, ideas, and codes.
|
Jeff Smith | 20 | 7 | https://medium.com/data-engineering/modeling-madly-8b2c72eb52be?source=tag_archive---------6---------------- | Modeling Madly – Data Engineering – Medium | I recently wrapped up my second hackathon at Intent Media. You can see my summary of one of our previous hackathons here. These past two hackathons I’ve taken on some slightly different challenges than people usually go after in a hackathon: developing new machine learning models. While I‘ve been working on data science and machine learning systems for a while, I’ve found that trying to do so under extreme constraints can be a distinctly different experience. A very good data hacker can easily find themselves with a great idea at a hackathon but with little to nothing to demo at the end. Accepting that my personal experience is just my own, let me offer three tips for building new models at a hackathon.
When you’re doing a more traditional web app hack at a hackathon, you can almost run out of time and still come up with something pretty good as long as you get that last bug fixed before the demo. This is a great characteristic to build into the plan of a hack but one that simply does not apply to a machine learning hack.
Think about what happens when you do find that last bug in a machine learning project. You still need to potentially do all of the below:
That’s no “just hit refresh” workflow. Even with a well-oiled workflow, some of those tasks can take all of the time your average one-day hackathon is scheduled for. Take #3, for example. Training a production grade model using, say, Hadoop, can take a lot of time, even if you have the cash to spin up a fair-sized cluster of EC2 instances.
What that means for your hack can vary, but you’re just asking for trouble if you don’t start with that fact taken into account in the scope and goals of your project. A solid project design is absolutely crucial, if you’re going to hope to take all of the little steps involved in getting your model ready to demo.
Which leads me to my next point...
One of the best things about working in data science is all of the really smart people. But, of course, the corollary is that one of the worst things about working in data science is all of the really smart people. Sharp engineers and data scientists can take the nugget of an idea and envision a useful, powerful suite of products that would take years to build, which is not so useful when you have a day or two. Mature dataists know just how much ambition is too much and plan accordingly. I happen to be lucky enough to work with some very smart and very mature data scientists and engineers, so this has not been a problem for either of my last few hacks. But, I’m just lucky that way. You might not be so lucky.
Unrealistic ambitions are a constant danger in a machine learning hack, running along the edge of all activities like a precipice beckoning you to dive off and see where you land. If you take one thing away from this post, let it be this: don’t dive off the cliff. Just don’t do it. You won’t like where you land. You’ll wind up with more questions than answers and you’ll have nothing to show come demo time. Moreover, your fellow devs who worked on apps and not models will simply not understand what you spent your time on.
What does a precipice look like? It could be a novel distance metric. It could be a fundamental improvement to a widely used technique like SVRs. Or it could just be something really benign sounding like a longer training set. I would say that even choosing to pose the problem as a regression one instead of a classification one could qualify.
The danger originates in the intrinsic tension between the rigorous and exploratory mode of academic data science/machine learning education and the pedal-to-the-metal pace mandated by a hackathon. They are very different modes of working, and you’re just going to have to suspend some of your good habits for a day or so if you want to have something to demo.
This last point can be the trickiest to put into practice, but I think it can totally be the difference between a project that feels like a hack and one that feels like just getting warmed up on a weeklong story. If you’ve figured out how to scope your project appropriately and designed something that can really be built in a day or two, you can still actually fail to do so. I think the difference can easily come down to technology choices.
For example, I currently make my living writing Cascalog, Clojure, and Java on top of Hadoop to process files stored in S3. I know these tools well enough to pay my rent, but I would absolutely hesitate to use any of them in a tight-paced context. I have spent weeks trying to understand a single Cascalog bug. Seriously.
If you know the language, Python offers an unbeatable value proposition for this use case. scikit-learn has nearly everything you could imagine needing. pandas, NumPy, and SciPy are all sitting there to be brought in when appropriate. And don’t forget how awesome it can be to prototype in a purpose-built exploratory development environment like IPython.
But this is machine learning, and sometimes our data is just big. Maybe even web scale. Some people hate these phrases, but they serve a purpose. We don’t all use Hadoop out of love for horrendously complex Java applications.
Big data is not just statistics on a Mac Pro, although it can often look like that. Scale can be a real necessity even in a hackathon.
When it is, there are no easy answers. If you’re lucky, maybe you can actually work with multiple hour model learning times. If you’re really lucky, you might be using Spark and not Hadoop, in which case it might not take hours to learn your model.
My point is that, insofar as you have a choice, choose the leaner, meaner tool, the one that will let you do more with less input required from you. Don’t use that C++ library that promises awesome runtime but with Python bindings that you’ve never tried. You’ll never figure out its quirks in time. Write as little data cleanup code as you can manage. Commands like dropna can save you precious minutes to hours. And if you can get your data from a database or an API instead of files, then, for the love of Cthulhu, do it. Hell, even if you have to load your data from files to a database first, it might be worth your time. SQL is one of the highest productivity rapid prototyping tools I know.
And though I love to bash on the clunkiness of Hadoop, there are even ways of taking some serious pain out of using it under pressure. Depending on what you’re doing, Elastic MapReduce or PredictionIO can get you to the point of being productive much faster.
I love hackathons and their variations. They remind me of the fun old days in grad school, furiously hacking away to come up with something interesting to say about definitionally uncertain stuff.
The furious pace and the pragmatic compromises are part of the fun. Compared to things like pitch events, hackathons have far fewer problems (even if they have their issues as well). At their best they’re about the love of unconstrained creation. I’ve tried to do machine learning hacks because it’s just so damn cool to go from zero to having a program that makes decisions. It amazes me every time it works, and doubly so when I can manage to get something working on a deadline.
Taking on a challenge like building a new model in a hackathon is also a great learning experience, especially if you get to work as part of a strong team. Machine learning in the real world is an even larger topic than its academic cousin, and there are always interesting things to learn. Hackathons can be great places to rapidly iterate through approaches and learn from your teammates how to build things better and faster. That’s pretty likely to come in handy sometime.
The main part of the post is over, but I wanted to make sure to leave a note for anyone who was interested in what we hack at Intent Media (or what we build for our customers). We’re hiring all sorts of smart people to build systems for machine learning and more. Please reach out if you want to hear more about how and why we do what we do.
Author of Reactive Machine Learning Systems @ManningBooks. Building AIs for fun and profit. Friend of animals.
Laying the foundation of tomorrow’s big data
|
Chris Jagers | 45 | 5 | https://medium.com/@chrisjagers/the-wolfram-language-b853337f8427?source=tag_archive---------7---------------- | Explaining the Wolfram Language – Chris Jagers – Medium | Many people are already familiar with Apple’s voice search called Siri, or the search engine behind it called Wolfram Alpha. This search engine can use natural language to search vast sets of data and even compute math. However, this is just a tiny fraction of what the language can do, and I don’t even think it’s a good introduction to what’s possible. To understand the raw power of the underlying technology, you really have to understand what it is and a little about how it works.
The Wolfram website has wonderful documentation and explanations, but for the uninitiated it can seem bewildering. They have repackaged the language in so many different ways that it can be hard for the beginner to understand exactly what it is. That’s why I want to venture my own introduction.
Let’s start with its origins. Mathematica was designed as a desktop tool for computational research and exploration, built around symbolic expressions. It continued evolving, and the breakthrough was realizing that those symbols could be anything: images, sounds, algorithms, geometry, data-sets ... anything. So, it became more than just a language.
Stephen Wolfram calls this a knowledge-based language because it has smart-objects built in that can be computed.
The language doesn’t simply find results, it computes results into actual models, analysis and other symbolic objects. The real power is that the results remain symbolic objects that can be further manipulated symbolically (i.e. embed in another symbolic object, operate on it).
In short, anything can be computed. Pretty abstract, I know. Don’t worry, we’ll get to examples soon.
The actual syntax is a combination of Objects and Operators which are grouped and ordered by square brackets [ ]. The stuff at the center of the formula gets read first and then it expands out like a Russian doll.
Out of many potential examples, I have carefully selected one from their site to illustrate its simplicity and power.
Let’s say we want our system to determine the difference between poetry and prose. This would be difficult to program directly because there are so many variables and the differences are so subtle. With Wolfram Language, that hard stuff becomes easy. You can train it to recognize the difference very quickly. Here’s how it works; let’s use Shakespeare as an example:
First, scan all of Hamlet and call that type of stuff prose. Then scan all of Shakespeare’s Sonnets and call that stuff poetry. Easy.
Next, train the system with machine learning: Classify and Predict are the two big functions. We want to Classify which is poetry and which is prose. Wolfram looks at our situation and instantly determines that the Markov Method is the best for differentiating among all the subtle differences between prose and poetry.
That’s it. Any system using this bit of training will automatically be able to detect the difference between poetry and prose with a high degree of accuracy. The key to this accuracy is the size of the data set. You really need millions of data points to train it reliably. But with Wolfram, many of those data sets are already built in. Easy.
This is just one tiny example to illustrate what the language looks like and how it goes beyond symbols to work with computable objects. We could continue translating poems into interactive maps, and interactions into music, and so on.
How does Wolfram compare with other products like Apache Hadoop? Well, it’s a totally different thing. In those products everything is manual. The various axes (and all the variables) are manually defined. Instead, Wolfram intelligently applies formulas and makes choices to optimize results based on specific conditions. It makes the hard stuff automatic. Plus, it’s capable of much more than machine learning; that’s just one example of hundreds: sound, 3d-geometry, language, images, etc. — and a mixture of them all.
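To make that contrast concrete, here is a rough sketch of the same prose-vs-poetry experiment done by hand in Python with scikit-learn rather than the Wolfram Language (the file names and the choice of a naive Bayes model are assumptions of this sketch, not anything Wolfram prescribes). Notice how many decisions you have to make yourself:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# hypothetical text files, one passage per line
prose = open('hamlet.txt').read().splitlines()
poetry = open('sonnets.txt').read().splitlines()

texts = prose + poetry
labels = ['prose'] * len(prose) + ['poetry'] * len(poetry)

# you choose the features (word counts) and the model (naive Bayes) yourself
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(texts, labels)

print(classifier.predict(["Shall I compare thee to a summer's day?"]))
```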
Mathematica is still the most powerful and polished way to access the Wolfram Language. Their new Programming Cloud (and other cloud offerings) signal serious intent to move to the web, but it is still early days. The language is very mature for desktop exploration, and some companies have even made Mathematica applications for small scale internal use, which can be quite useful.
Even though the Wolfram website has signaled intent to make it more broadly deployable within commercial services, I don’t think this is the proper way to use the language. Within my own company, we find Wolfram extremely handy for research, but not deployment within a web-based product. In short, it isn’t performant:
Commercial products require more than a powerful language, they are made within an ecosystem of services and vendors that all have to work together. Without machine learning built into the native cloud where data is stored, it can’t be deployed in a SaaS product in a way that lives up to expectations.
While Stephen Wolfram would love for his language to be used within commercial products, I think he resents having to play nice with lower level languages. His alternative of making API requests across the web isn’t a good way to embed intelligence within products. And I don’t think we will ever see entire SaaS products built entirely with a functional language. Programming is the art of automation.
The Wolfram Community is full of very smart people using the language for research and exploration. They represent the cutting edge of computation. Personally, I’m looking forward to when we can see intelligence woven into commercial and consumer products that solve real problems for people on a daily basis.
CEO of Learning Machine. www.learningmachine.com
|
John Wittenauer | 2 | 9 | https://medium.com/@jdwittenauer/machine-learning-exercises-in-python-part-1-60db0df846a4?source=tag_archive---------8---------------- | Machine Learning Exercises In Python, Part 1 – John Wittenauer – Medium | This content originally appeared on Curious Insight
This post is part of a series covering the exercises from Andrew Ng’s machine learning class on Coursera. The original code, exercise text, and data files for this post are available here.
Part 1 — Simple Linear Regression
Part 2 — Multivariate Linear Regression
Part 3 — Logistic Regression
Part 4 — Multivariate Logistic Regression
Part 5 — Neural Networks
Part 6 — Support Vector Machines
Part 7 — K-Means Clustering & PCA
Part 8 — Anomaly Detection & Recommendation
One of the pivotal moments in my professional development this year came when I discovered Coursera. I’d heard of the “MOOC” phenomenon but had not had the time to dive in and take a class. Earlier this year I finally pulled the trigger and signed up for Andrew Ng’s Machine Learning class. I completed the whole thing from start to finish, including all of the programming exercises. The experience opened my eyes to the power of this type of education platform, and I’ve been hooked ever since.
This blog post will be the first in a series covering the programming exercises from Andrew’s class. One aspect of the course that I didn’t particularly care for was the use of Octave for assignments. Although Octave/Matlab is a fine platform, most real-world “data science” is done in either R or Python (certainly there are other languages and tools being used, but these two are unquestionably at the top of the list). Since I’m trying to develop my Python skills, I decided to start working through the exercises from scratch in Python. The full source code is available at my IPython repo on Github. You’ll also find the data used in these exercises and the original exercise PDFs in sub-folders off the root directory if you’re interested.
While I can explain some of the concepts involved in this exercise along the way, it’s impossible for me to convey all the information you might need to fully comprehend it. If you’re really interested in machine learning but haven’t been exposed to it yet, I encourage you to check out the class (it’s completely free and there’s no commitment whatsoever). With that, let’s get started!
In the first part of exercise 1, we’re tasked with implementing simple linear regression to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities. You’d like to figure out what the expected profit of a new food truck might be given only the population of the city that it would be placed in.
Let’s start by examining the data which is in a file called “ex1data1.txt” in the “data” directory of my repository above. First we need to import a few libraries.
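A minimal sketch of the setup (numpy, pandas, and matplotlib are all we need for this part):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```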
Now let’s get things rolling. We can use pandas to load the data into a data frame and display the first few rows using the “head” function.
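Loading it might look like this (the file path and the column names “Population” and “Profit” are choices for this sketch, not anything mandated by the data file):

```python
path = 'data/ex1data1.txt'
data = pd.read_csv(path, header=None, names=['Population', 'Profit'])
data.head()
```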
(Note: Medium can’t render tables — the full example is here)
Another useful function that pandas provides out-of-the-box is the “describe” function, which calculates some basic statistics on a data set. This is helpful to get a “feel” for the data during the exploratory analysis stage of a project.
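Using the data frame from the sketch above, the call is a one-liner:

```python
data.describe()
```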
(Note: Medium can’t render tables — the full example is here)
Examining stats about your data can be helpful, but sometimes you need to find ways to visualize it too. Fortunately this data set only has one dependent variable, so we can toss it in a scatter plot to get a better idea of what it looks like. We can use the “plot” function provided by pandas for this, which is really just a wrapper for matplotlib.
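With the data frame above, a sketch of that scatter plot looks like:

```python
data.plot(kind='scatter', x='Population', y='Profit', figsize=(12, 8))
plt.show()
```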
It really helps to actually look at what’s going on, doesn’t it? We can clearly see that there’s a cluster of values around cities with smaller populations, and a somewhat linear trend of increasing profit as the size of the city increases. Now let’s get to the fun part — implementing a linear regression algorithm in python from scratch!
If you’re not familiar with linear regression, it’s an approach to modeling the relationship between a dependent variable and one or more independent variables (if there’s one independent variable then it’s called simple linear regression, and if there’s more than one independent variable then it’s called multiple linear regression). There are lots of different types and variants of linear regression that are outside the scope of this discussion so I won’t go into that here, but to put it simply — we’re trying to create a *linear model* of the data X, using some number of parameters theta, that describes the variance of the data such that given a new data point that’s not in X, we could accurately predict what the outcome y would be without actually knowing what y is.
In this implementation we’re going to use an optimization technique called gradient descent to find the parameters theta. If you’re familiar with linear algebra, you may be aware that there’s another way to find the optimal parameters for a linear model called the “normal equation” which basically solves the problem at once using a series of matrix calculations. However, the issue with this approach is that it doesn’t scale very well for large data sets. In contrast, we can use variants of gradient descent and other optimization methods to scale to data sets of unlimited size, so for machine learning problems this approach is more practical.
Okay, that’s enough theory. Let’s write some code. The first thing we need is a cost function. The cost function evaluates the quality of our model by calculating the error between our model’s prediction for a data point, using the model parameters, and the actual data point. For example, if the population for a given city is 4 and we predicted that it was 7, our error is (7–4)^2 = 3^2 = 9 (assuming an L2 or “least squares” loss function). We do this for each data point in X and sum the result to get the cost. Here’s the function:
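A vectorized sketch, assuming X and y are numpy matrices and theta is a row vector of parameters (the 1/(2m) scaling is the usual least-squares convention):

```python
def computeCost(X, y, theta):
    # squared error between the predictions (X * theta.T) and the targets y
    inner = np.power((X * theta.T) - y, 2)
    # average over the m training examples (the extra 1/2 simplifies the gradient)
    return np.sum(inner) / (2 * len(X))
```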
Notice that there are no loops. We’re taking advantage of numpy’s linear algebra capabilities to compute the result as a series of matrix operations. This is far more computationally efficient than an unoptimized “for” loop.
In order to make this cost function work seamlessly with the pandas data frame we created above, we need to do some manipulating. First, we need to insert a column of 1s at the beginning of the data frame in order to make the matrix operations work correctly (I won’t go into detail on why this is needed, but it’s in the exercise text if you’re interested — basically it accounts for the intercept term in the linear equation). Second, we need to separate our data into independent variables X and our dependent variable y.
Finally, we’re going to convert our data frames to numpy matrices and instantiate a parameter matrix.
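Both manipulation steps, sketched out (the “Ones” column name is arbitrary):

```python
# add a column of 1s for the intercept term
data.insert(0, 'Ones', 1)

# X is everything except the last column; y is the last column
cols = data.shape[1]
X = data.iloc[:, 0:cols - 1]
y = data.iloc[:, cols - 1:cols]

# convert to numpy matrices and start theta off at zero
X = np.matrix(X.values)
y = np.matrix(y.values)
theta = np.matrix(np.array([0, 0]))
```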
One useful trick to remember when debugging matrix operations is to look at the shape of the matrices you’re dealing with. It’s also helpful to remember when walking through the steps in your head that matrix multiplications look like (i x j) * (j x k) = (i x k), where i, j, and k are the shapes of the relative dimensions of the matrix.
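The check itself is a single expression (the L suffixes in the output below are just Python 2 long integers):

```python
X.shape, theta.shape, y.shape
```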
((97L, 2L), (1L, 2L), (97L, 1L))
Okay, so now we can try out our cost function. Remember the parameters were initialized to 0 so the solution isn’t optimal yet, but we can see if it works.
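Using the sketches above, the call that produces the number below is simply:

```python
computeCost(X, y, theta)
```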
32.072733877455676
So far so good. Now we need to define a function to perform gradient descent on the parameters *theta* using the update rules defined in the exercise text. Here’s the function for gradient descent:
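A sketch that matches the description below, including the (deliberately unoptimized) inner loop over the parameters:

```python
def gradientDescent(X, y, theta, alpha, iters):
    temp = np.matrix(np.zeros(theta.shape))
    parameters = int(theta.ravel().shape[1])
    cost = np.zeros(iters)

    for i in range(iters):
        # prediction error under the current parameters
        error = (X * theta.T) - y

        # update each parameter in the direction that reduces the cost
        for j in range(parameters):
            term = np.multiply(error, X[:, j])
            temp[0, j] = theta[0, j] - ((alpha / len(X)) * np.sum(term))

        theta = temp.copy()
        cost[i] = computeCost(X, y, theta)

    return theta, cost
```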
The idea with gradient descent is that for each iteration, we compute the gradient of the error term in order to figure out the appropriate direction to move our parameter vector. In other words, we’re calculating the changes to make to our parameters in order to reduce the error, thus bringing our solution closer to the optimal solution (i.e best fit).
This is a fairly complex topic and I could easily devote a whole blog post just to discussing gradient descent. If you’re interested in learning more, I would recommend starting with this article and branching out from there.
Once again we’re relying on numpy and linear algebra for our solution. You may notice that my implementation is not 100% optimal. In particular, there’s a way to get rid of that inner loop and update all of the parameters at once. I’ll leave it up to the reader to figure it out for now (I’ll cover it in a later post).
Now that we’ve got a way to evaluate solutions, and a way to find a good solution, it’s time to apply this to our data set.
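Sketching the training run itself (alpha and iters are the two knobs discussed in the next paragraph; 0.01 and 1000 are reasonable starting values for this data set):

```python
alpha = 0.01
iters = 1000

g, cost = gradientDescent(X, y, theta, alpha, iters)
g
```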
matrix([[-3.24140214, 1.1272942 ]])
Note that we’ve initialized a few new variables here. If you look closely at the gradient descent function, it has parameters called alpha and iters. Alpha is the learning rate — it’s a factor in the update rule for the parameters that helps determine how quickly the algorithm will converge to the optimal solution. Iters is just the number of iterations. There is no hard and fast rule for how to initialize these parameters and typically some trial-and-error is involved.
We now have a parameter vector describing what we believe is the optimal linear model for our data set. One quick way to evaluate just how good our regression model is might be to look at the total error of our new solution on the data set:
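That’s just the cost function evaluated at the fitted parameters from the sketch above:

```python
computeCost(X, y, g)
```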
4.5159555030789118
That’s certainly a lot better than 32, but it’s not a very intuitive way to look at it. Fortunately we have some other techniques at our disposal.
We’re now going to use matplotlib to visualize our solution. Remember the scatter plot from before? Let’s overlay a line representing our model on top of a scatter plot of the data to see how well it fits. We can use numpy’s “linspace” function to create an evenly-spaced series of points within the range of our data, and then “evaluate” those points using our model to see what the expected profit would be. We can then turn it into a line graph and plot it.
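A sketch of that plot, reusing the fitted parameters g from above:

```python
x = np.linspace(data.Population.min(), data.Population.max(), 100)
f = g[0, 0] + (g[0, 1] * x)

fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(x, f, 'r', label='Prediction')
ax.scatter(data.Population, data.Profit, label='Training Data')
ax.legend(loc=2)
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit vs. Population Size')
plt.show()
```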
Not bad! Our solution looks like an optimal linear model of the data set. Since the gradient descent function also outputs a vector with the cost at each training iteration, we can plot that as well.
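Plotting the cost vector returned by the gradient descent sketch:

```python
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(np.arange(iters), cost, 'r')
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('Error vs. Training Epoch')
plt.show()
```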
Notice that the cost always decreases — this is an example of what’s called a convex optimization problem. If you were to plot the entire solution space for the problem (i.e. plot the cost as a function of the model parameters for every possible value of the parameters) you would see that it looks like a “bowl” shape with a “basin” representing the optimal solution.
That’s all for now! In part 2 we’ll finish off the first exercise by extending this example to more than 1 variable. I’ll also show how the above solution can be reached by using a popular machine learning library called scikit-learn.
To comment on this article, check out the original post at Curious Insight
Follow me on twitter to get new post updates
Data scientist, engineer, author, investor, entrepreneur
|
Pinterest Engineering | 25 | 7 | https://medium.com/@Pinterest_Engineering/building-the-interests-platform-73a3a3755c21?source=tag_archive---------9---------------- | Building the interests platform – Pinterest Engineering – Medium | Ningning Hu | Pinterest engineer, Discovery
The core value of Pinterest is to help people find the things they care about, by connecting them to Pins and people that relate to their interests. We’re building a service that’s powered by people, and supercharged with technology.
The interest graph — the connections that make up the Pinterest index — creates bridges between Pins, boards, and Pinners. It’s our job to build a system that helps people to collect the things they love, and connect them to communities of engaged people who share similar interests and can help them discover more. From categories like travel, fitness, and humor, to more niche areas like vintage motorcycles, craft beer, or Japanese architecture, we’re building a visual discovery tool for all interests.
The interests platform is built to support this vision. Specifically, it’s responsible for producing high quality data on interests, interest relationships, and their association with Pins, boards, and Pinners.
Figure 1: Feedback loop between machine intelligence and human curation
In contrast with conventional methods of generating such data, which rely primarily on machine learning and data mining techniques, our system relies heavily on human curation. The ultimate goal is to build a system that’s both machine and human powered, creating a feedback mechanism by which human curated data helps drive improvements in our machine algorithms, and vice versa.
Figure 2: System components
Raw input to the system includes existing data about Pins, boards, Pinners, and search queries, as well as explicit human curation signals about interests. With this data, we’re able to construct a continuously evolving interest dictionary, which provides the foundation to support other key components, such as interest feeds, interest recommendations, and related interests.
From a technology standpoint, interests are text strings that represent entities for which a group of Pinners might have a shared passion.
We generated an initial collection of interests by extracting frequently occurring n-grams from Pin and board descriptions, as well as board titles, and filtering these n-grams using custom built grammars. While this approach provided a high coverage set of interests, we found many terms to be malformed phrases. For instance, we would extract phrases such as “lamborghini yellow” instead of “yellow lamborghini”. This proved problematic because we wanted interest terms to represent how Pinners would describe them, and so, we employed a variety of methods to eliminate malformed interests terms.
We first compared terms with repeated search queries performed by a group of Pinners over a few months. Intuitively, this criterion matches well with the notion that an interest should be an entity for which a group of Pinners are passionate.
Later we filtered the candidate set through public domain ontologies like Wikipedia titles. These ontologies were primarily used to validate proper nouns as opposed to common phrases, as all available ontologies represented only a subset of possible interests. This is especially true for Pinterest, where Pinners themselves curate special interests like “mid century modern style.”
Finally, we also maintain an internal blacklist to filter abusive words and x-rated terms as well as Pinterest specific stop words, like “love”. This filtering is especially important to interest terms which might be recommended to millions of users.
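As a toy illustration of that candidate-generation-and-filtering shape (this is not Pinterest’s code; every name, threshold, and data value below is made up):

```python
from collections import Counter

# toy stand-ins for the real inputs
pin_descriptions = ["yellow lamborghini on the street",
                    "mid century modern style living room"]
search_queries = {"yellow lamborghini", "mid century modern style"}
blacklist = {"love"}
min_frequency = 1

def ngrams(tokens, n):
    return [' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

counts = Counter()
for description in pin_descriptions:
    tokens = description.lower().split()
    for n in (1, 2, 3, 4):
        counts.update(ngrams(tokens, n))

candidates = {
    term for term, freq in counts.items()
    if freq >= min_frequency      # keep frequently occurring n-grams
    and term not in blacklist     # drop abusive terms and Pinterest stop words
    and term in search_queries    # validate against repeated search queries (or ontologies)
}
print(candidates)
```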
We arrived at a fair quality collection of interests following the above algorithmic approaches. In order to understand the quality of our efforts, we gave a 50,000 term subset of our collection to a third party vendor which used crowdsourcing to rate our data. To be rigorous, we composed a set of four criteria by which users would evaluate candidate Interests terms:
- Is it English?
- Is it a valid phrase in grammar?
- Is it a standalone concept?
- Is it a proper name?
The crowdsourced ratings were both interesting if not somewhat expected. There was a low rate of agreement amongst raters, with especially high discrepancy in determining whether an interest’s term represented a “standalone concept.” Despite the ambiguity, we were able to confirm that 80% of the collection generated using the above algorithms satisfied our interests criteria.
This type of effort, however, is not easy to scale. The real solution is to allow Pinners to provide both implicit and explicit signals to help us determine the validity of an interest. Implicit signals include behaviors like clicking and viewing, while explicit signals include asking Pinners to specifically provide information (which can be actions like a thumbs up/thumbs down, starring, or skipping recommendations).
To capture all the signals used for defining the collections of terms, we built a dictionary that stores all the data associated with each interest, including invalid interests and the reason why it’s invalid. This service plays a key role in human curation, by aggregating signals from different people. On top of this dictionary service, we can build different levels of reviewing system.
With the Interests dictionary, we can associate Pins, boards, and Pinners with representative interests. One of the initial ways we experimented with this was launching a preview of a page where Pinners can explore their interests.
Figure 3: Exploring interests
In order to match interests to Pinners, we need to aggregate all the information related with a person’s interests. At its core, our system recommends interests based upon Pins with which a Pinner interacts. Every Pin on Pinterest has been collected and given context by someone who thinks it’s important, and in doing so, is helping other people discover great content. Each individual Pin is an incredibly rich source of data. As discussed in a previous blog post on discovery data model, one Pin often has multiple copies — different people may Pin it from different sources, and the same Pin can be repinned multiple times. During this process, each Pin accumulates numerous unique textual descriptions which allows us to connect Pins with interests terms with high precision.
However, this conceptually simple process requires non-trivial engineering effort to scale to the number of Pins and Pinners that the service has today. The data processing pipeline (managed by Pinball) comprises over 35 Hadoop jobs, and runs periodically to update the user-interest mapping to capture users’ latest interest information.
The initial feedback on the explore interests page has been positive, proving the capabilities of our system. We’ll continue testing different ways of exposing a person’s interests and related content, based on implicit signals, as well as explicit signals (such as the ability to create custom categories of interests).
Related interests are an important way of enabling the ability to browse interests and discover new ones. To compute related interests, we simply combine the co-occurrence relationship for interests computed at Pin and board levels.
Figure 4: Computing related interests
The quality of the related interests is surprisingly high given the simplicity of the algorithm. We attribute this effect to the cleanness of Pinterest data. Text data on Pins tend to be very concise, and contain less noise than other types of data, like web pages. Also, related interests calculation already makes use of boards, which are heavily curated by people (vs. machines) in regards to organizing related content. We find that utilizing the co-occurrence of interest terms at the level of both Pins and boards provides the best tradeoff between achieving high precision as well as recall when computing the related interests.
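As a toy illustration of the board-level half of that computation (again, not Pinterest’s implementation; the boards and interests below are made up):

```python
from collections import Counter, defaultdict
from itertools import combinations

# toy input: each board maps to the set of interests found on it
board_interests = {
    "board1": {"craft beer", "home brewing", "ipa"},
    "board2": {"craft beer", "ipa"},
    "board3": {"craft beer", "vintage motorcycles"},
}

cooccurrence = defaultdict(Counter)
for interests in board_interests.values():
    for a, b in combinations(sorted(interests), 2):
        cooccurrence[a][b] += 1
        cooccurrence[b][a] += 1

# the interests most related to "craft beer" are the ones it co-occurs with most often
print(cooccurrence["craft beer"].most_common(2))
```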
One of the initial ways we began showing people related content was through related Pins. When you Pin an object, you’ll see a recommendation for a related board with that same Pin so you can explore similar objects. Additionally, if you scroll beneath a Pin, you’ll see Pins from other people who’ve also Pinned that original object. At this point, 90% of all Pins have related Pins, and we’ve seen 20% growth in engagement with related Pins in the last six months.
Interests feeds provide Pinners with a continuous feed of Pins that are highly related. Our feeds are populated using a variety of sources, including search and through our annotation pipeline. A key property of the feed is flow. Only feeds with decent flow can attract Pinners to come back repeatedly, thereby maintaining high engagement. In order to optimize for our feeds, we’ve utilized a number of real-time indexing and retrieval systems, including real-time search, real-time annotating, and also human curation for some of the interests.
To ensure quality, we need to guarantee quality from all sources. For that purpose, we measure the engagement of Pins from each source and address quality issues accordingly.
Figure 5: How interest feeds are generated
Accurately capturing Pinner interests and interest relationships, and making this data understandable and actionable for tens of millions of people (collecting tens of billions of Pins), is not only an engineering challenge, but also a product design one. We’re just at the beginning, as we continue to improve the data and design ways to empower people to provide feedback that allows us to build a hybrid system combining machine and human curation to power discovery. Results of these effort will be reflected in future product releases.
If you’re interested in building new ways of helping people discover the things they care about, join our team!
Acknowledgements: The core team members for the interests backend platform are Ningning Hu, Leon Lin, Ryan Shih and Yuan Wei. Many other folks from other parts of the company, especially the discovery team and the infrastructure teams, have provided very useful feedback and help along the way to make the ongoing project successful.
Ningning Hu is an engineer at Pinterest.
Inventive engineers building the first visual discovery engine, 100 billion ideas and counting. https://careers.pinterest.com/careers/engineering
|
Christopher Nguyen | 991 | 8 | https://medium.com/deep-learning-101/algorithms-of-the-mind-10eb13f61fc4?source=tag_archive---------0---------------- | Algorithms of the Mind – Deep Learning 101 – Medium | What Machine Learning Teaches Us About Ourselves
Originally published at blog.arimo.com. Follow me on Twitter to keep informed of interesting developments on these topics.
“Science often follows technology, because inventions give us new ways to think about the world and new phenomena in need of explanation.”
Or so Aram Harrow, an MIT physics professor, counter-intuitively argues in “Why now is the right time to study quantum computing”.
He suggests that the scientific idea of entropy could not really be conceived until steam engine technology necessitated understanding of thermodynamics. Quantum computing similarly arose from attempts to simulate quantum mechanics on ordinary computers.
So what does all this have to do with machine learning?
Much like steam engines, machine learning is a technology intended to solve specific classes of problems. Yet results from the field are indicating intriguing—possibly profound—scientific clues about how our own brains might operate, perceive, and learn. The technology of machine learning is giving us new ways to think about the science of human thought ... and imagination.
Five years ago, deep learning pioneer Geoff Hinton (who currently splits his time between the University of Toronto and Google) published the following demo.
Hinton had trained a five-layer neural network to recognize handwritten digits when given their bitmapped images. It was a form of computer vision, one that made handwriting machine-readable.
But unlike previous works on the same topic, where the main objective is simply to recognize digits, Hinton’s network could also run in reverse. That is, given the concept of a digit, it can regenerate images corresponding to that very concept.
We are seeing, quite literally, a machine imagining an image of the concept of “8”.
The magic is encoded in the layers between inputs and outputs. These layers act as a kind of associative memory, mapping back-and-forth from image and concept, from concept to image, all in one neural network.
But beyond the simplistic, brain-inspired machine vision technology here, the broader scientific question is whether this is how human imagination — visualization — works. If so, there’s a huge a-ha moment here.
After all, isn’t this something our brains do quite naturally? When we see the digit 4, we think of the concept “4”. Conversely, when someone says “8”, we can conjure up in our minds’ eye an image of the digit 8.
Is it all a kind of “running backwards” by the brain from concept to images (or sound, smell, feel, etc.) through the information encoded in the layers? Aren’t we watching this network create new pictures — and perhaps in a more advanced version, even new internal connections — as it does so?
If visual recognition and imagination are indeed just back-and-forth mapping between images and concepts, what’s happening between those layers? Do deep neural networks have some insight or analogies to offer us here?
Let’s first go back 234 years, to Immanuel Kant’s Critique of Pure Reason, in which he argues that “Intuition is nothing but the representation of phenomena”.
Kant railed against the idea that human knowledge could be explained purely as empirical and rational thought. It is necessary, he argued, to consider intuitions. In his definitions, “intuitions” are representations left in a person’s mind by sensory perceptions, whereas “concepts” are descriptions of empirical objects or sensory data. Together, these make up human knowledge.
Fast forwarding two centuries later, Berkeley CS professor Alyosha Efros, who specializes in Visual Understanding, pointed out that “there are many more things in our visual world than we have words to describe them with”. Using word labels to train models, Efros argues, exposes our techniques to a language bottleneck. There are many more un-namable intuitions than we have words for.
In training deep networks, such as the seminal “cat-recognition” work led by Quoc Le at Google/Stanford, we’re discovering that the activations in successive layers appear to go from lower to higher conceptual levels. An image recognition network encodes bitmaps at the lowest layer, then apparent corners and edges at the next layer, common shapes at the next, and so on. These intermediate layers don’t necessarily have any activations corresponding to explicit high-level concepts, like “cat” or “dog”, yet they do encode a distributed representation of the sensory inputs. Only the final, output layer has such a mapping to human-defined labels, because they are constrained to match those labels.
Therefore, the above encodings and labels seem to correspond to exactly what Kant referred to as “intuitions” and “concepts”.
In yet another example of machine learning technology revealing insights about human thought, the network diagram above makes you wonder whether this is how the architecture of Intuition — albeit vastly simplified — is being expressed.
If — as Efros has pointed out — there are a lot more conceptual patterns than words can describe, then do words constrain our thoughts? This question is at the heart of the Sapir-Whorf or Linguistic Relativity Hypothesis, and the debate about whether language completely determines the boundaries of our cognition, or whether we are unconstrained to conceptualize anything — regardless of the languages we speak.
In its strongest form, the hypothesis posits that the structure and lexicon of languages constrain how one perceives and conceptualizes the world.
One of the most striking effects of this is demonstrated in the color test shown here. When asked to pick out the one square with a shade of green that’s distinct from all the others, the Himba people of northern Namibia — who have distinct words for the two shades of green — can find it almost instantly.
The rest of us, however, have a much harder time doing so.
The theory is that once we have words to distinguish one shade from another, our brains will train themselves to discriminate between the shades, so the difference becomes more and more “obvious” over time. Because we see with our brains, not with our eyes, language drives perception.
With machine learning, we also observe something similar. In supervised learning, we train our models to best match images (or text, audio, etc.) against provided labels or categories. By definition, these models are trained to discriminate much more effectively between categories that have provided labels, than between other possible categories for which we have not provided labels. When viewed from the perspective of supervised machine learning, this outcome is not at all surprising. So perhaps we shouldn’t be too surprised by the results of the color experiment above, either. Language does indeed influence our perception of the world, in the same way that labels in supervised machine learning influence the model’s ability to discriminate among categories.
And yet, we also know that labels are not strictly required to discriminate between cues. In Google’s “cat-recognizing brain”, the network eventually discovers the concept of “cat”, “dog”, etc. all by itself — even without training the algorithm against explicit labels. After this unsupervised training, whenever the network is fed an image belonging to a certain category like “Cats”, the same corresponding set of “Cat” neurons always gets fired up. Simply by looking at the vast set of training images, this network has discovered the essential patterns of each category, as well as the differences of one category vs. another.
In the same way, an infant who is repeatedly shown a paper cup would soon recognize the visual pattern of such a thing, even before it ever learns the words “paper cup” to attach that pattern to a name. In this sense, the strong form of the Sapir-Whorf hypothesis cannot be entirely correct — we can, and do, discover concepts even without the words to describe them.
Supervised and unsupervised machine learning turn out to represent the two sides of the controversy’s coin. And if we recognized them as such, perhaps Sapir-Whorf would not be such a controversy, and more of a reflection of supervised and unsupervised human learning.
I find these correspondences deeply fascinating — and we’ve only scratched the surface. Philosophers, psychologists, linguists, and neuroscientists have studied these topics for a long time. The connection to machine learning and computer science is more recent, especially with the advances in big data and deep learning. When fed with huge amounts of text, images, or audio data, the latest deep learning architectures are demonstrating near or even better-than-human performance in language translation, image classification, and speech recognition.
Every new discovery in machine learning demystifies a bit more of what may be going on in our brains. We’re increasingly able to borrow from the vocabulary of machine learning to talk about our minds.
Thanks to Sonal Chokshi and Vu Pham for extensive review & edits. Also, chrisjagers, chickamade.
@arimoinc CEO & Co-Founder. Leader, Entrepreneur, Hacker, Xoogler, Executive, Professor. #DataViz #ParallelComputing #DeepLearning & former #GoogleApps.
Fundamentals and Latest Developments in #DeepLearning
|
Per Harald Borgen | 2.1K | 6 | https://medium.com/learning-new-stuff/machine-learning-in-a-week-a0da25d59850?source=tag_archive---------1---------------- | Machine Learning in a Week – Learning New Stuff – Medium | Getting into machine learning (ml) can seem like an unachievable task from the outside.
However, after dedicating one week to learning the basics of the subject, I found it to be much more accessible than I anticipated.
This article is intended to give others who’re interested in getting into ml a roadmap of how to get started, drawing on the experiences I had in my intro week.
Before my machine learning week, I had been reading about the subject for a while, and had gone through half of Andrew Ng’s course on Coursera and a few other theoretical courses. So I had a tiny bit of conceptual understanding of ml, though I was completely unable to transfer any of my knowledge into code. This is what I wanted to change.
I wanted to be able to solve problems with ml by the end of the week, even though this meant skipping a lot of fundamentals and going for a top-down approach instead of bottom-up.
After asking for advice on Hacker News, I came to the conclusion that Python’s Scikit Learn-module was the best starting point. This module gives you a wealth of algorithms to choose from, reducing the actual machine learning to a few lines of code.
I started off the week by looking for video tutorials which involved Scikit Learn. I finally landed on Sentdex’s tutorial on how to use ml for investing in stocks, which gave me the necessary knowledge to move on to the next step.
The good thing about the Sentdex tutorial is that the instructor takes you through all the steps of gathering the data. As you go along, you realize that fetching and cleaning up the data can be much more time consuming than doing the actually machine learning. So the ability to write scripts to scrape data from files or crawl the web are essential skills for aspiring machine learning geeks.
I have re-watched several of the videos later on, to help me when I’ve been stuck with problems, so I’d recommend you to do the same.
However, if you already know how to scrape data from websites, this tutorial might not be the perfect fit, as a lot of the videos evolve around data fetching. In that case, the Udacity’s Intro to Machine Learning might be a better place to start.
Tuesday I wanted to see if I could use what I had learned to solve an actual problem. As another developer in my coding cooperative was working on Bank of England’s data visualization competition, I teamed up with him to check out the datasets the bank has released. The most interesting data was their household surveys. This is an annual survey the bank performs on a few thousand households, regarding money-related subjects.
The problem we decided to solve was the following:
I played around with the dataset, spent a few hours cleaning up the data, and used the Scikit Learn map to find a suitable algorithm for the problem.
We ended up with a success ratio of around 63%, which isn’t impressive at all. But the machine did at least manage to guess a little better than flipping a coin, which would have given a success rate of 50%.
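The mechanics of an experiment like this take only a few lines of scikit-learn. Below is a generic sketch with synthetic stand-in data, not our actual notebook:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# stand-in for the cleaned-up survey data: numeric features plus a yes/no label
rng = np.random.RandomState(0)
features = rng.rand(2000, 10)
labels = (features[:, 0] + 0.3 * rng.rand(2000) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

model = LinearSVC()
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```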
Seeing results is like fuel to your motivation, so I’d recommend you doing this for yourself, once you have a basic grasp of how to use Scikit Learn.
After playing around with various Scikit Learn modules, I decided to try and write a linear regression algorithm from the ground up.
I wanted to do this because I felt (and still feel) that I really don’t understand what’s happening under the hood.
Luckily, the Coursera course goes into detail on how a few of the algorithms work, which came to great use at this point. More specifically, it describes the underlying concepts of using linear regression with gradient descent.
This has definitely been the most effective learning technique, as it forces you to understand the steps that are going on ‘under the hood’. I strongly recommend you do this at some point.
I plan to rewrite my own implementations of more complex algorithms as I go along, but I prefer doing this after I’ve played around with the respective algorithms in Scikit Learn.
On Thursday, I started doing Kaggle’s introductory tutorials. Kaggle is a platform for machine learning competitions, where you can submit solutions to problems released by companies or organizations.
I recommend you trying out Kaggle after having a little bit of a theoretical and practical understanding of machine learning. You’ll need this in order to start using Kaggle. Otherwise, it will be more frustrating than rewarding.
The Bag of Words tutorial guides you through every steps you need to take in order to enter a submission to a competition, plus gives you a brief and exciting introduction into Natural Language Processing (NLP). I ended the tutorial with much higher interest in NLP than I had when entering it.
Friday, I continued working on the Kaggle tutorials, and also started Udacity’s Intro to Machine Learning. I’m currently half ways through, and find it quite enjoyable.
It’s a lot easier than the Coursera course, as it doesn’t go into the algorithms in depth. But it’s also more practical, as it teaches you Scikit Learn, which is a whole lot easier to apply to the real world than writing algorithms from the ground up in Octave, as you do in the Coursera course.
Doing this for a week hasn’t just been great fun, it has also made me more aware of how useful machine learning can be for society. The more I learn about it, the more areas I see where it can be used to solve problems.
Choose a top down approach if you’re not ready for the heavy stuff, and get into problem solving as quickly as possible.
Good luck!
Thanks for reading! My name is Per, I’m a co-founder of Scrimba — a better way to teach and learn code.
If you’ve read this far, I’d recommend you to check out this demo!
Co-founder of Scrimba, the next-generation platform for teaching and learning code. https://scrimba.com.
A publication about improving your technical skills.
|
Ahmed El Deeb | 593 | 7 | https://medium.com/rants-on-machine-learning/what-to-do-with-small-data-d253254d1a89?source=tag_archive---------2---------------- | What to do with “small” data? – Rants on Machine Learning – Medium | By Ahmed El Deeb
Many technology companies now have teams of smart data-scientists, versed in big-data infrastructure tools and machine learning algorithms, but every now and then, a data set with very few data points turns up and none of these algorithms seem to be working properly anymore. What the hell is happening? What can you do about it?
Most data science, relevance, and machine learning activities in technology companies have been focused around “Big Data” and scenarios with huge data sets. Sets where the rows represent documents, users, files, queries, songs, images, etc. Things that are in the thousands, hundreds of thousands, millions or even billions. The infrastructure, tools, and algorithms to deal with these kinds of data sets have been evolving very quickly and improving continuously during the last decade or so. And most data scientists and machine learning practitioners have gained experience in such situations, have grown accustomed to the appropriate algorithms, and gained good intuitions about the usual trade-offs (bias-variance, flexibility-stability, hand-crafted features vs. feature learning, etc.). But small data sets still arise in the wild every now and then, and often, they are trickier to handle, require a different set of algorithms and a different set of skills. Small data sets arise in several situations:
Problems of small-data are numerous, but mainly revolve around high variance:
1- Hire a statistician
I’m not kidding! Statisticians are the original data scientists. The field of statistics was developed when data was much harder to come by, and as such was very aware of small-sample problems. Statistical tests, parametric models, bootstrapping, and other useful mathematical tools are the domain of classical statistics, not modern machine learning. Lacking a good general-purpose statistician, get a marine-biologist, a zoologist, a psychologist, or anyone who was trained in a domain that deals with small sample experiments. The closer to your domain the better. If you don’t want to hire a statistician full time on your team, make it a temporary consultation. But hiring a classically trained statistician could be a very good investment.
2- Stick to simple models
More precisely: stick to a limited set of hypotheses. One way to look at predictive modeling is as a search problem. From an initial set of possible models, which is the most appropriate model to fit our data? In a way, each data point we use for fitting down-votes all models that make it unlikely, or up-vote models that agree with it. When you have heaps of data, you can afford to explore huge sets of models/hypotheses effectively and end up with one that is suitable. When you don’t have so many data points to begin with, you need to start from a fairly small set of possible hypotheses (e.g. the set of all linear models with 3 non-zero weights, the set of decision trees with depth <= 4, the set of histograms with 10 equally-spaced bins). This means that you rule out complex hypotheses like those that deal with non-linearity or feature interactions. This also means that you can’t afford to fit models with too many degrees of freedom (too many weights or parameters). Whenever appropriate, use strong assumptions (e.g. no negative weights, no interaction between features, specific distributions, etc.) to restrict the space of possible hypotheses.
3- Pool data when possible
Are you building a personalized spam filter? Try building it on top of a universal model trained for all users. Are you modeling GDP for a specific country? Try fitting your models on GDP for all countries for which you can get data, maybe using importance sampling to emphasize the country you’re interested in. Are you trying to predict the eruptions of a specific volcano? ... you get the idea.
4- Limit Experimentation
Don’t over-use your validation set. If you try too many different techniques, and use a hold-out set to compare between them, be aware of the statistical power of the results you are getting, and be aware that the performance you are getting on this set is not a good estimator for out of sample performance.
5- Do clean up your data
With small data sets, noise and outliers are especially troublesome. Cleaning up your data could be crucial here to get sensible models. Alternatively you can restrict your modeling to techniques especially designed to be robust to outliers. (e.g. Quantile Regression)
6- Do perform feature selection
I am not a big fan of explicit feature selection. I typically go for regularization and model averaging (next two points) to avoid over-fitting. But if the data is truly limiting, sometimes explicit feature selection is essential. Wherever possible, use domain expertise to do feature selection or elimination, as brute force approaches (e.g. all subsets or greedy forward selection) are as likely to cause over-fitting as including all features.
7- Do use Regularization
Regularization is an almost-magical solution that constrains model fitting and reduces the effective degrees of freedom without reducing the actual number of parameters in the model. L1 regularization produces models with fewer non-zero parameters, effectively performing implicit feature selection, which could be desirable for explainability of performance in production, while L2 regularization produces models with more conservative (closer to zero) parameters and is effectively similar to having strong zero-centered priors for the parameters (in the Bayesian world). L2 is usually better for prediction accuracy than L1.
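In scikit-learn terms, L1 and L2 correspond to the Lasso and Ridge estimators. A small sketch of the difference (synthetic data, arbitrary regularization strengths):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# tiny synthetic data set: 30 samples, 20 features, only 3 of which matter
rng = np.random.RandomState(0)
X = rng.randn(30, 20)
y = X[:, 0] + 2 * X[:, 1] - X[:, 2] + 0.1 * rng.randn(30)

l1 = Lasso(alpha=0.1).fit(X, y)
l2 = Ridge(alpha=1.0).fit(X, y)

print((l1.coef_ != 0).sum())  # L1: only a handful of non-zero weights
print((l2.coef_ != 0).sum())  # L2: all weights non-zero, but pulled toward zero
```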
8- Do use Model Averaging
Model averaging has similar effects to regularization in that it reduces variance and enhances generalization, but it is a generic technique that can be used with any type of models or even with heterogeneous sets of models. The downside here is that you end up with huge collections of models, which could be slow to evaluate or awkward to deploy to a production system. Two very reasonable forms of model averaging are Bagging and Bayesian model averaging.
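Bagging, for instance, is nearly a one-liner in scikit-learn. A sketch with synthetic data (the base model and number of estimators are arbitrary choices):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.randn(40, 5)
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(40)

# average 100 shallow trees, each fit on a bootstrap resample of the (small) training set
model = BaggingRegressor(DecisionTreeRegressor(max_depth=3), n_estimators=100)
model.fit(X, y)
print(model.predict(X[:3]))
```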
9- Try Bayesian Modeling and Model Averaging
Again, not a favorite technique of mine, but Bayesian inference may be well suited for dealing with smaller data sets, especially if you can use domain expertise to construct sensible priors.
10- Prefer Confidence Intervals to Point Estimates
It is usually a good idea to get an estimate of confidence in your prediction in addition to producing the prediction itself. For regression analysis this usually takes the form of predicting a range of values that is calibrated to cover the true value 95% of the time or in the case of classification it could be just a matter of producing class probabilities. This becomes more crucial with small data sets as it becomes more likely that certain regions in your feature space are less represented than others. Model averaging as referred to in the previous two points allows us to do that pretty easily in a generic way for regression, classification and density estimation. It is also useful to do that when evaluating your models. Producing confidence intervals on the metrics you are using to compare model performance is likely to save you from jumping to many wrong conclusions.
This could be a somewhat long list of things to do or try, but they all revolve around three main themes: constrained modeling, smoothing and quantification of uncertainty.
Most figures used in this post were taken from the book “Pattern Recognition and Machine Learning” by Christopher Bishop.
Relevance engineer. Machine Learning practitioner and hobbyist. Former entrepreneur.
Rants about machine learning and its future
|
Matt Fogel | 938 | 4 | https://medium.com/swlh/the-7-best-data-science-and-machine-learning-podcasts-e8f0d5a4a419?source=tag_archive---------3---------------- | The 7 Best Data Science and Machine Learning Podcasts | Data science and machine learning have long been interests of mine, but now that I’m working on Fuzzy.ai and trying to make AI and machine learning accessible to all developers, I need to keep on top of all the news in both fields.
My preferred way to do this is through listening to podcasts. I’ve listened to a bunch of machine learning and data science podcasts in the last few months, so I thought I’d share my favorites:
A great starting point on some of the basics of data science and machine learning. Every other week, they release a 10–15 minute episode in which hosts Kyle and Linda Polich give a short primer on topics like k-means clustering, natural language processing and decision tree learning, often using analogies related to their pet parrot, Yoshi. This is the only place where you’ll learn about k-means clustering via placement of parrot droppings.
Website | iTunes
Hosted by Katie Malone and Ben Jaffe of online education startup Udacity, this weekly podcast covers diverse topics in data science and machine learning, teaching specific concepts like Hidden Markov Models and how they apply to real-world problems and datasets. They make complex topics extremely accessible.
Website | iTunes
Each week, hosts Chris Albon and Jonathon Morgan, both experienced technologists and data scientists, talk about the latest news in data science over drinks. Listening to Partially Derivative is a great way to keep up on the latest data news.
Website | iTunes
This podcast features Ben Lorica, O’Reilly Media’s Chief Data Scientist speaking with other experts about timely big data and data science topics. It can often get quite technical, but the topics of discussion are always really interesting.
Website | iTunes
Data Stories is a little more focused on data visualization than data science, but there is often some interesting overlap between the topics. Every other week, Enrico Bertini and Moritz Stefaner cover diverse topics in data with their guests. Recent episodes about smart cities and Nicholas Felton’s annual reports are particularly interesting.
Website | iTunes
Billing itself as “A Gentle Introduction to Artificial Intelligence and Machine Learning”, this podcast can still get quite technical and complex, covering topics like: “How to Reason About Uncertain Events using Fuzzy Set Theory and Fuzzy Measure Theory” and “How to Represent Knowledge using Logical Rules”.
Website | iTunes
The newest podcast on this list, with 8 episodes released as of this writing. Every other week, hosts Katherine Gorman and Ryan Adams speak with a guest about their work and about news stories related to machine learning.
Website | iTunes
Feel I’ve unfairly left a podcast off this list? Leave me a note to let me know.
Published in Startups, Wanderlust, and Life Hacking
-
Cofounder of @fuzzyai. Helping developers make their software smarter, faster.
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
|
Illia Polosukhin | 255 | 3 | https://medium.com/@ilblackdragon/tensorflow-tutorial-part-1-c559c63c0cb1?source=tag_archive---------4---------------- | TensorFlow Tutorial— Part 1 – Illia Polosukhin – Medium | UPD (April 20, 2016): Scikit Flow has been merged into TensorFlow since version 0.8 and now called TensorFlow Learn or tf.learn.
Google released a machine learning framework called TensorFlow and it’s taking the world by storm. 10k+ stars on Github, a lot of publicity and general excitement among AI researchers.
But how do you use it for the kind of everyday problem a Data Scientist may have? (And if you are an AI researcher, we will build up to more interesting problems over time.)
A reasonable question: why should you, as a Data Scientist who already has a number of tools in your toolbox (R, Scikit Learn, etc.), care about yet another framework?
The answer has two parts:
Let’s start with a simple example — take the Titanic dataset from Kaggle.
First, make sure you have installed TensorFlow and Scikit Learn, along with a few helpful libraries including Scikit Flow, which simplifies a lot of the work with TensorFlow:
You can get dataset and the code from http://github.com/ilblackdragon/tf_examples
Quick look at the data (use iPython or iPython notebook for ease of interactive exploration):
Let’s test how we can predict Survived class, based on float variables in Scikit Learn:
We separate the dataset into features and target, fill in N/A values in the data with zeros and build a logistic regression. Predicting on the training data gives us some measure of accuracy (of course it doesn’t properly evaluate the model quality and a test dataset should be used, but for simplicity we will look at the training set only for now).
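The original code block is not reproduced here, but a rough reconstruction of that scikit-learn baseline might look like the following; the column choice assumes the standard Kaggle Titanic train.csv and may differ from the author's exact script.

# Rough reconstruction of the scikit-learn baseline described above.
# Column names assume the standard Kaggle Titanic train.csv.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

data = pd.read_csv("data/train.csv")
X = data[["Age", "SibSp", "Fare"]].fillna(0)   # float features only, N/A filled with zeros
y = data["Survived"]

classifier = LogisticRegression()
classifier.fit(X, y)
print("train accuracy:", accuracy_score(y, classifier.predict(X)))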
Now using tf.learn (previously Scikit Flow):
Congratulations, you just built your first TensorFlow model!
TF.Learn is a library that wraps a lot of new APIs by TensorFlow with nice and familiar Scikit Learn API.
TensorFlow is all about building and executing a computation graph. This is a very powerful concept, but it is also cumbersome to start with.
Looking under the hood of TF.Learn, we just used three parts:
Even as you get more familiar with TensorFlow, pieces of Scikit Flow will remain useful (like graph_actions and layers and a host of other ops and tools). See future posts for examples of handling categorical variables, text and images.
Part 2 — Deep Neural Networks, Custom TensorFlow models with Scikit Flow and Digit recognition with Convolutional Networks.
Co-Founder @ NEAR.AI — teaching machines to code. I’m tweeting as @ilblackdragon.
|
Ahmed El Deeb | 283 | 3 | https://medium.com/rants-on-machine-learning/the-unreasonable-effectiveness-of-random-forests-f33c3ce28883?source=tag_archive---------5---------------- | The Unreasonable Effectiveness of Random Forests – Rants on Machine Learning – Medium | It’s very common for machine learning practitioners to have favorite algorithms. It’s a bit irrational, since no algorithm strictly dominates in all applications, the performance of ML algorithms varies wildly depending on the application and the dimensionality of the dataset. And even for a given problem and a given dataset, any single model will likely be beaten by an ensemble of diverse models trained by diverse algorithms anyway. But people have favorites nevertheless. Some like SVMs for the elegance of their formulation or the quality of the available implementations, some like decision rules for their simplicity and interpretability, and some are crazy about neural networks for their flexibility.
My favorite out-of-the-box algorithm is (as you might have guessed) the Random Forest, and it’s the second modeling technique I typically try on any given data set (after a linear model).
This beautiful visualization from scikit-learn illustrates the modelling capacity of a decision forest:
Here’s a paper by Leo Breiman, the inventor of the algorithm, describing random forests.
Here’s another amazing paper by Rich Caruana et al. evaluating several supervised learning algorithms on many different datasets.
Relevance engineer. Machine Learning practitioner and hobbyist. Former entrepreneur.
Rants about machine learning and its future
|
Christophe Bourguignat | 157 | 4 | https://medium.com/@chris_bour/6-tricks-i-learned-from-the-otto-kaggle-challenge-a9299378cd61?source=tag_archive---------6---------------- | 6 Tricks I Learned From The OTTO Kaggle Challenge – Christophe Bourguignat – Medium | Here are a few things I learned from the OTTO Group Kaggle competition. I had the chance to team up with great Kaggle Master Xavier Conort, and the french community as a whole has been very active.
Teaming with Xavier has been the opportunity to practice some ensembling technics.
We heavily used stacking. To the initial set of 93 features we added new features consisting of the predictions of N different classifiers (Random Forest, GBM, Neural Networks, ...), then retrained P classifiers over the 93 + N features, and finally made a weighted average of the P outputs.
We tested two tricks:
This is one of the great new functionalities of the latest scikit-learn release (0.16). It allows you to rescale classifier predictions by taking observations predicted within a segment (e.g. 0.3–0.4) and comparing them to the actual fraction of those observations that turn out to be positive (e.g. 0.23, which means a rescaling is needed).
Here is a mini notebook explaining how to use calibration, and demonstrating how well it worked on the OTTO challenge data.
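As a rough illustration of the API (not the author's notebook), a calibrated classifier can be built like this; the synthetic data and parameters below are assumptions purely for demonstration.

# Sketch of probability calibration with scikit-learn's CalibratedClassifierCV
# (introduced in 0.16, as mentioned above). Data and parameters are illustrative.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=5000, n_features=93, n_informative=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

raw = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
calibrated = CalibratedClassifierCV(RandomForestClassifier(n_estimators=200, random_state=0),
                                    method="isotonic", cv=3).fit(X_train, y_train)

print("raw log loss:       ", log_loss(y_test, raw.predict_proba(X_test)))
print("calibrated log loss:", log_loss(y_test, calibrated.predict_proba(X_test)))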
At the beginning of the competition, it appeared quickly that — once again — Gradient Boosting Trees was one of the best performing algorithm, provided that you find the right hyper parameters.
On the scikit-learn implementation, most important hyper parameters are learning_rate (the shrinkage parameter), n_estimators (the number of boosting stages), and max_depth (limits the number of nodes in the tree, the best value depends on the interaction of the input variables). min_samples_split, and min_samples_leaf can also be a way to control depth of the trees for optimal performance.
I also discovered that two other parameters were crucial for this competition. I must admit I had never paid attention to them before this challenge: namely subsample (the fraction of samples to be used for fitting the individual base learners) and max_features (the number of features to consider when looking for the best split).
The problem was to find a way to quickly find the best hyperparameters combination. I first discovered GridSearchCV, that makes an exhaustive search over specified parameter ranges. As always with scikit-learn, it has a convenient programming interface, handling for example smoothly cross-validation and parallel distributing of search.
However, the number of parameters to tune, and their ranges, was too large to discover the best ones in the acceptable time frame I had in mind (typically while sleeping, i.e. 7 to 10 hours). I had to fall back on another option: I then used RandomizedSearchCV, which appeared in version 0.14. With this method, the search is done randomly over a subspace of parameters. It generally gives very good results, as described in this paper, and I was able to find a suitable parameter set within a few hours.
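A sketch of such a randomized search is shown below; the parameter ranges, iteration count and synthetic data are illustrative assumptions, and note that in current scikit-learn the class lives in sklearn.model_selection rather than the older module used at the time of this post.

# Sketch of a randomized hyperparameter search over the GBM parameters discussed
# above. Ranges and iteration count are illustrative, not the author's settings.
from scipy.stats import randint, uniform
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=93, n_informative=30, random_state=0)

param_distributions = {
    "learning_rate": uniform(0.01, 0.2),
    "n_estimators": randint(100, 500),
    "max_depth": randint(3, 10),
    "subsample": uniform(0.5, 0.5),      # samples values in [0.5, 1.0]
    "max_features": uniform(0.2, 0.8),   # fraction of features per split
}
search = RandomizedSearchCV(GradientBoostingClassifier(), param_distributions,
                            n_iter=20, cv=3, scoring="neg_log_loss", n_jobs=-1, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)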
Note that some competitors, like french kaggler Amine, used Hyperopt for hyperparameters optimization.
XGBoost is a Gradient Boosting implementation heavily used by kagglers, and I now understand why. I never used it before, but it was a hot topic discussed in the forum. I decided to have a look at it, even if its main interface is in R (there is a Python API, but I haven’t used it yet). XGBoost is much faster than scikit-learn and gave better predictions. It will for sure remain part of my toolbox.
Someone posted on the forum :
He was right. It has been for me the opportunity to play with neural networks for the first time.
Several implementations have been used by the competitors: H2O, Keras, cxxnet, ... I personally used Lasagne. The main challenge was to fine-tune the number of layers, the number of neurons, dropout and the learning rate. Here is a notebook on what I learned.
One of the secret of the competition was to run several times the same algorithm, with random selection of observations and features, and take the average of the output.
To do that easily, I discovered the scikit-learn BaggingClassifier meta-estimator. It hides the tedious complexity of looping over model fits, random subset selection, and averaging — and exposes easy fit() / predict_proba() entry points.
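A minimal sketch of that pattern (with made-up parameters and data, not the author's settings) looks like this:

# Sketch of the BaggingClassifier meta-estimator described above: averaging many
# fits over random subsets of rows and columns. Parameters are illustrative.
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=93, n_informative=30, random_state=0)

bagged = BaggingClassifier(
    LogisticRegression(max_iter=1000),
    n_estimators=50,
    max_samples=0.8,      # random subset of observations for each fit
    max_features=0.8,     # random subset of features for each fit
    random_state=0,
)
bagged.fit(X, y)
probabilities = bagged.predict_proba(X[:5])   # averaged over the 50 base models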
Data enthusiast #BigData #DataScience #MachineLearning #FrenchData #Kaggle
|
I'Boss Potiwarakorn | 128 | 3 | https://medium.com/o-v-e-r-f-i-t-t-e-d/machine-learning-%E0%B9%80%E0%B8%A3%E0%B8%B5%E0%B8%A2%E0%B8%99%E0%B8%AD%E0%B8%B0%E0%B9%84%E0%B8%A3-%E0%B8%A3%E0%B8%B9%E0%B9%89%E0%B9%84%E0%B8%9B%E0%B8%97%E0%B8%B3%E0%B9%84%E0%B8%A1-4a1dcae85b96?source=tag_archive---------7---------------- | Machine Learning เรียนอะไร, รู้ไปทําไม – O v e r f i t t e d – Medium | Machine Learning จริงๆแล้วมันคืออะไรกันแน่? ครั้งแรกที่ผมได้ยินคําคํานี้ ผมก็พูดกับตัวเองในใจ “เครื่องจักรที่เรียนรู้ได้ด้วยตัวเองงั้นเหรอ?” ใครมาถามว่ารู้จัก Machine Learning หรือเปล่า? มันคืออะไรอะ? ผมก็ได้แต่บอกเขาไปว่า “เคยได้ยินแต่ชื่อว่ะ”
ผมหวังเป็นอย่างยิ่งว่า คุณที่หลงเข้ามาอ่านบทความนี้ ที่อาจจะเป็นเหมือนผมที่เคยได้ยินมาแต่ชื่อ เมื่ออ่านจบ หากมีใครมาถามว่า “รู้จัก Machine Learning หรือเปล่า? มันคืออะไรอะ?” จะมีความมั่นใจมากพอที่จะตอบเขาไปว่า
“รู้ดิ นั่งๆ เดี๋ยวเล่าให้ฟัง”
ก่อนที่จะอธิบายว่า Machine Learning คืออะไร ขอโม้สักเล็กน้อย ให้เห็นความสําคัญของมันเสียก่อน แล้วจะพบว่า Machine Learning นี่แทบจะเป็นส่วนหนึ่งของชีวิตประจําวันไปแล้ว
Machine Learning เป็นเรื่องที่ใกล้ตัวเรามากๆ ยิ่งสําหรับคนที่ใช้ Internet เป็นประจํานั้น แทบทุกวันจะได้ใช้ประโยชน์จาก Machine Learning ไม่ว่าจะรู้ตัวหรือไม่ก็ตาม ยกตัวอย่างเช่น เมื่อเราต้องการค้นหาอะไรบางอย่างด้วย Google Search แต่ไม่ค่อยแน่ใจ คับคล้ายคับคลาว่ามันน่าจะสะกดแบบนี้ “machn leaning” (ตัวอย่างโง่นิดนึง ๕๕๕๕๕) ปรากฏว่า...
โอ้ พระสงฆ์! มันเดาใจเราได้ เป็นหมอดูหรืออย่างไร!? แน่นอนว่าโค้ดก็คงจะไม่ใช่แบบด้านล่างนี้แน่ๆ
แล้วทําไม Google ถึงได้รู้ใจเรากันนะ?
ยังมีตัวอย่างอีกมากมายที่นํา Machine Learning ไปใช้ เช่น Spam Filtering, Face Recognition, Handwriting Recognition
แม้แต่การทํา marketing ในปัจจุบันก็เริ่มใช้ประโยชน์ Machine Learning เช่นกัน อาทิ การแบ่งกลุ่มลูกค้า(Customer Segmentation) การทํานายการสูญเสียลูกค้า(Customer Churn Prediction) เป็นต้น
และที่จะไม่พูดถึงไม่ได้เลยคือ Facebook’s Friend Suggestions ที่ผมเองก็ไม่รู้ว่าทําไมมันถึงแนะนําสาวสวยให้ผมอย่างสม่ําเสมอ แต่สิ่งที่ผมมั่นใจคือ Facebook ไม่ได้ใช้คนมานั่งเลือกให้แน่ๆ แล้วมันทําได้ยังไงกัน?
แน่นอนว่า Machine Learning คือคําตอบ
The human brain has many astonishing abilities: awareness, emotion, memory, motor control, and the five senses that let us perceive the world. But some problems are so complex that they are not well suited to being solved by the human brain alone.
When we have to write a program that handles a large amount of data in many different patterns, it is hard to understand the data and hand-code a response to it. When new data arrives with yet another pattern, it is as if the requirements keep changing; we would have to re-analyze the data and keep patching our program, which is very painful.
Arthur Samuel, an American pioneer of computer gaming, artificial intelligence and Machine Learning, defined Machine Learning as “the study of making computers able to learn without being explicitly programmed.”
In other words, a Machine Learning program does not specify that, for any characteristics A and B, if the data looks like A do this and if it looks like B do that. Instead, the program learns the relationships in the data and builds its own way of responding to it.
Since the program can change how it responds to data on its own, we no longer need to keep analyzing the data and patching the program every time new data comes in.
I used to wonder how Machine Learning, Artificial Intelligence and Data Mining differ; they feel like very similar subjects. Then I realized that both AI (Artificial Intelligence) and Data Mining actually make use of Machine Learning; what differs is the goal.
AI focuses on building an intelligent agent, an entity that can think, which does not necessarily require Machine Learning. Even a system that only uses search can be called AI if it responds intelligently.
“Not required” does not mean “not used”, though. On the contrary, Machine Learning is used a great deal in AI, where it is used to create new knowledge and to respond to situations beyond what was originally programmed.
Data Mining, meanwhile, is the analysis step of knowledge discovery: it turns raw data into understandable information that can be put to use later.
Data Mining uses methods from AI, Machine Learning, statistics and database systems to obtain insight.
In summary, the three fields are closely related and borrow each other’s methods, so it is no surprise that they feel similar. Their goals differ, however, so their approaches are not exactly the same.
Machine Learning algorithms are, at the most basic level, divided into two types: supervised learning and unsupervised learning.
Supervised learning is learning with guidance.
Suppose we have lost our memory and can no longer tell apples, mangoes and oranges apart. A kindly doctor brings apples, mangoes and oranges for us to look at. All the fruit the doctor shows us is called the training data, the data used for teaching.
The doctor starts by showing us many kinds of apples and saying, “This is an apple.” That is a label: a tag that tells us what each piece of data is.
Looking at the apples, we notice that apples are red or green, and that an apple cut in half has a butterfly-like shape. These properties are called features, the attributes of the data.
The doctor then shows us mangoes and oranges, and we find that mangoes are green or yellow and fairly elongated, while oranges are orange and roughly spherical.
Once we have seen enough data, we start to be able to tell apples, mangoes and oranges apart. Later, if we come across an orange with green peel, we may still guess that it is an orange because of its shape.
This is an example of classification, a kind of supervised learning used with discrete data.
Regression is the analysis used for continuous data.
The image above shows linear regression, the line of best fit, which is one kind of regression.
We have some training data that does not lie exactly on a straight line, but we can still see a pattern and a trend. If we draw a line that minimizes the total distance from the training data, which is the error, we get a model that can roughly predict future values of y.
The statistician George E. P. Box once said:
As the image shows, the blue line is the model, a simplified representation used to predict y for larger values of x. Notice, though, that almost no point lies exactly on the line; we can only use it for rough predictions.
If supervised learning is learning with guidance, unsupervised learning is going in without any: no one is there to advise us, but we have to take our chances and dive in.
Where supervised learning has classification, unsupervised learning has clustering. Many people, myself included, wondered on first meeting them how they differ: classification sorts items into categories, clustering groups them together, and the two sound rather alike.
So let us go back to playing the amnesiac patient with the kindly doctor one more time.
This time the doctor brings three more kinds of fruit that look nothing like apples, mangoes or oranges, but refuses to tell us a single thing about them; in other words, this fruit has no labels. Still, the doctor asks us to separate it into three groups. By observing the features of the fruit, we can work out which pieces belong in the same group.
Once we have formed the three groups, the doctor simply walks away without a word. In the end we only know which group each piece of fruit belongs to, but we cannot say what any of them actually are. That is clustering, unlike classification, where we were told from the start what each kind of fruit is called.
Data often has many kinds of features, in other words many dimensions. When that happens it becomes hard to visualize the data, so we may want to reduce its dimensionality while trying to preserve its original meaning.
Dimensionality reduction (or dimension reduction) reduces the number of dimensions in the data. Besides making it easier to visualize, fewer dimensions mean fewer features, which improves performance and reduces space complexity as well.
Beyond the basic types of Machine Learning algorithm described above, there are also Semi-supervised Learning and Reinforcement Learning, which P’Ta (@konpat) has written about.
Finally, I believe Machine Learning is one of the most important fields for computing today and in the future, and personally I think it is a really cool subject. If you are interested in Machine Learning, come and talk with me :D
Software Choreographer at ThoughtWorks; Functional Programming, DevOps and Machine Learning Enthusiast
a half-score-day-a-story blog by League of Machine Learning from Chulalongkorn University
|
samim | 301 | 4 | https://medium.com/@samim/generating-stories-about-images-d163ba41e4ed?source=tag_archive---------8---------------- | Generating Stories about Images – samim – Medium | Stories are a fundamental human tool that we use to communicate thought. Creating a stories about a image is a difficult task that many struggle with. New machine-learning experiments are enabling us to generate stories based on the content of images. This experiment explores how to generate little romantic stories about images (incl. guest star Taylor Swift).
neural-storyteller is a recently published experiment by Ryan Kiros (University of Toronto). It combines recurrent neural networks (RNN), skip-thoughts vectors and other techniques to generate a little story about an image. Neural-storyteller’s outputs are creative and often comedic. It is open-source.
This experiment started by running 5000 randomly selected web-images through neural-storyteller and experimenting with hyper-parameters. neural-storyteller comes with 2 pre-trained models: One trained on 14 million passages of romance novels, the other trained on Taylor Swift Lyrics. Inputs and outputs were manually filtered and recombined into two videos.
Using Romantic Novel Model. Voices generated with a Text-to-Speech.
Using Taylor Swift Model. Combined with a well known Swift instrumental.
neural-storyteller gives us a fascinating glimpse into the future of storytelling. Even though these technologies are not fully mature yet, the art of storytelling is bound to change. In the near future, authors will be training custom models, combining styles across genres and generating text with images & sounds. Exploring this exciting new medium is rewarding!
Designer & Code Magician. Working at the intersection of HCI, Machine Learning & Creativity. Building tools for Enlightenment. Narrative Engineering.
|
AirbnbEng | 517 | 9 | https://medium.com/airbnb-engineering/how-airbnb-uses-machine-learning-to-detect-host-preferences-18ce07150fa3?source=tag_archive---------9---------------- | How Airbnb uses Machine Learning to Detect Host Preferences | By Bar Ifrach
At Airbnb we seek to match people who are looking for accommodation — guests — with those looking to rent out their place — hosts. Guests reach out to hosts whose listings they wish to stay in, however a match succeeds only if the host also wants to accommodate the guest.
I first heard about Airbnb in 2012 from a friend. He offered his nice apartment on the site when he traveled to see his family during our vacations from grad school. His main goal was to fit as many booked nights as possible into the 1–2 weeks when he was away. My friend would accept or reject requests depending on whether or not the request would help him to maximize his occupancy.
About two years later, I joined Airbnb as a Data Scientist. I remembered my friend’s behavior and was curious to discover what affects hosts’ decisions to accept accommodation requests and how Airbnb could increase acceptances and matches on the platform.
What started as a small research project resulted in the development of a machine learning model that learns our hosts’ preferences for accommodation requests based on their past behavior. For each search query that a guest enters on Airbnb’s search engine, our model computes the likelihood that relevant hosts will want to accommodate the guest’s request. Then, we surface likely matches more prominently in the search results. In our A/B testing the model showed about a 3.75% increase in booking conversion, resulting in many more matches on Airbnb. In this blog post I outline the process that brought us to this model.
I kicked off my research into hosts’ acceptances by checking if other hosts maximized their occupancy like my friend. Every accommodation request falls in a sequence or in a window of available days in the calendar, such as on April 5–10 in the calendar shown below. The gray days surrounding the window are either blocked by the host or already booked. If accepted and booked, a request may leave the host with a sub-window before the check-in date (check-in gap — April 5–7) and/or a sub-window after the check-out (check-out gap — April 10).
A host looking to have a high occupancy will try to avoid such gaps. Indeed, when I plotted hosts’ tendency to accept over the sum of the check-in gap and the check-out gap (3+1= 4 in the example above), as in the next plot, I found the effect that I expected to see: hosts were more likely to accept requests that fit well in their calendar and minimize gap days.
But do all hosts try to maximize occupancy and prefer stays with short gaps? Perhaps some hosts are not interested in maximizing their occupancy and would rather host occasionally. And maybe hosts in big markets, like my friend, are different from hosts in smaller markets.
Indeed, when I looked at listings from big and small markets separately, I found that they behaved quite differently. Hosts in big markets care a lot about their occupancy — a request with no gaps is almost 6% likelier to be accepted than one with 7 gap nights. For small markets I found the opposite effect; hosts prefer to have a small number of nights between requests. So, hosts in different markets have different preferences, but it seems likely that even within a market hosts may prefer different stays.
A similar story revealed itself when I looked at hosts’ tendency to accept based on other characteristics of the accommodation request. For example, on average Airbnb hosts prefer accommodation requests that are at least a week in advance over last minute requests. But perhaps some hosts prefer short notice?
The plot below looks at the dispersion of hosts’ preferences for last minute stays (less than 7 days) versus far in advance stays (more than 7 days). Indeed, the dispersion in preferences reveals that some hosts like last minute stays better than far in advance stays — those in the bottom right — even though on average hosts prefer longer notice. I found similar dispersion in hosts’ tendency to accept other trip characteristics like the number of guests, whether it is a weekend trip etc.
All these findings pointed to the same conclusion: if we could promote in our search results hosts who would be more likely to accept an accommodation request resulting from that search query, we would expect to see happier guests and hosts and more matches that turned into fun vacations (or productive business trips).
In other words, we could personalize our search results, but not in the way you might expect. Typically personalized search results promote results that would fit the unique preferences of the searcher — the guest. At a two-sided marketplace like Airbnb, we also wanted to personalize search by the preference of the hosts whose listings would appear in the search results.
Encouraged by my findings, I joined forces with another data scientist and a software engineer to create a personalized search signal. We set out to associate hosts’ prior acceptance and decline decisions by the following characteristics of the trip: check-in date, check-out date and number of guests. By adding host preferences to our existing ranking model capturing guest preferences, we hoped to enable more and better matches.
At first glance, this seems like a perfect case for collaborative filtering — we have users (hosts) and items (trips) and we want to understand the preference for those items by combining historical ratings (accept/decline) with statistical learning from similar hosts. However, the application does not fully fit in the collaborative filtering framework for two reasons.
With these points in mind, we decided to massage the problem into something resembling collaborative filtering. We used the multiplicity of responses for the same trip to reduce the noise coming from the latent factors in the guest-host interaction. To do so, we considered hosts’ average response to a certain trip characteristic in isolation. Instead of looking at the combination of trip length, size of guest party, size of calendar gap and so on, we looked at each of these trip characteristics by itself.
With this coarser structure of preferences we were able to resolve some of the noise in our data as well as the potentially conflicting labels for the same trip. We used the mean acceptance rate for each trip characteristic as a proxy for preference. Still our data-set was relatively sparse. On average, for each trip characteristic we could not determine the preference for about 26% of hosts, because they never received an accommodation request that met those trip characteristics. As a method of imputation, we smoothed the preference using a weight function that, for each trip characteristic, averages the median preference of hosts in the region with the host’s preference. The weight on the median preference is 1 when the host has no data points and goes to 0 monotonically the more data points the host has.
Using these newly defined preferences we created predictions for host acceptances using a L-2 regularized logistic regression. Essentially, we combine the preferences for different trip characteristics into a single prediction for the probability of acceptance. The weight the preference of each trip characteristic has on the acceptance decision is the coefficient that comes out of the logistic regression. To improve the prediction, we include a few more geographic and host specific features in the logistic regression.
This flow chart summarizes the modeling technique.
We ran this model on segments of hosts on our cluster using a user-defined function (UDF) on Hive. The UDF is written in Python; its inputs are accommodation requests, hosts’ responses to them and a few other host features. Depending on the flag passed to it, the UDF either builds the preferences for the different trip characteristics or trains the logistic regression model using scikit-learn.
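To illustrate the final step only, and emphatically not as Airbnb's actual code, here is a minimal sketch of an L2-regularized logistic regression that combines hypothetical per-characteristic preference features into a single acceptance probability:

# Not Airbnb's code: just an illustration of the final step described above, an
# L2-regularized logistic regression that combines per-characteristic host
# preferences into one acceptance probability. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
n_requests = 10000
# Hypothetical smoothed preference features (mean acceptance rate per trip characteristic).
features = np.column_stack([
    rng.uniform(0, 1, n_requests),   # preference for this trip length
    rng.uniform(0, 1, n_requests),   # preference for this guest-party size
    rng.uniform(0, 1, n_requests),   # preference for this calendar gap
])
accepted = rng.binomial(1, features.mean(axis=1))   # fake accept/decline labels for the sketch

# penalty="l2" with a tuned C plays the role of the L-2 regularization in the post.
model = LogisticRegression(penalty="l2", C=1.0).fit(features, accepted)
predicted_acceptance = model.predict_proba(features[:5])[:, 1]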
Our main off-line evaluation metric for the model was mean squared error (MSE), which is more appropriate in a setting when we care about the predicted probability more than about classification. In our off-line evaluation of the model we were able to get a 10% decrease in MSE over our previous model that captured host acceptance probability. This was a promising result. But, we still had to test the performance of the model live on our site.
To test the online performance of the model, we launched an experiment that used the predicted probability of host acceptance as a significant weight in our ranking algorithm that also includes many other features that capture guests’ preferences. Every time a guest in the treatment group entered a search query, our model predicted the probability of acceptance for all relevant hosts and influenced the order in which listings were presented to the guest, ranking likelier matches higher.
We evaluated the experiment by looking at multiple metrics, but the most important one was the likelihood that a guest requesting accommodation would get a booking (booking conversion). We found a 3.75% lift in our booking conversion and a significant increase in the number of successful matches between guests and hosts.
After concluding the initial experiment, we made a few more optimizations that improved conversion by approximately another 1% and then launched the experiment to 100% of users. This was an exciting outcome for our first full-fledged personalization search signal and a sizable contributor to our success.
First, this project taught us that in a two sided marketplace personalization can be effective on the buyer as well as the seller side.
Second, the project taught us that sometimes you have to roll up your sleeves and build a machine learning model tailored for your own application. In this case, the application did not quite fit in the collaborative filtering and a multilevel model with host fixed-effect was too computationally demanding and not suited for a sparse data-set. While building our own model took more time, it was a fun learning experience.
Finally, this project would not have succeeded without the fantastic work of Spencer de Mars and Lukasz Dziurzynski.
Originally published at nerds.airbnb.com on April 14, 2015.
Creative engineers and data scientists building a world where you can belong anywhere. http://airbnb.io
|
Adam Geitgey | 14.2K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721?source=tag_archive---------1---------------- | Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano.
Are you tired of reading endless news stories about deep learning and not really knowing what that means? Let’s change that!
This time, we are going to learn how to write programs that recognize objects in images using deep learning. In other words, we’re going to explain the black magic that allows Google Photos to search your photos based on what is in the picture:
Just like Part 1 and Part 2, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is be accessible to anyone — which means that there’s a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished!
(If you haven’t already read part 1 and part 2, read them now!)
You might have seen this famous xkcd comic before.
The goof is based on the idea that any 3-year-old child can recognize a photo of a bird, but figuring out how to make a computer recognize objects has puzzled the very best computer scientists for over 50 years.
In the last few years, we’ve finally found a good approach to object recognition using deep convolutional neural networks. That sounds like a bunch of made up words from a William Gibson Sci-Fi novel, but the ideas are totally understandable if you break them down one by one.
So let’s do it — let’s write a program that can recognize birds!
Before we learn how to recognize pictures of birds, let’s learn how to recognize something much simpler — the handwritten number “8”.
In Part 2, we learned about how neural networks can solve complex problems by chaining together lots of simple neurons. We created a small neural network to estimate the price of a house based on how many bedrooms it had, how big it was, and which neighborhood it was in:
We also know that the idea of machine learning is that the same generic algorithms can be reused with different data to solve different problems. So let’s modify this same neural network to recognize handwritten text. But to make the job really simple, we’ll only try to recognize one letter — the numeral “8”.
Machine learning only works when you have data — preferably a lot of data. So we need lots and lots of handwritten “8”s to get started. Luckily, researchers created the MNIST data set of handwritten numbers for this very purpose. MNIST provides 60,000 images of handwritten digits, each as an 18x18 image. Here are some “8”s from the data set:
The neural network we made in Part 2 only took in three numbers as input (“3” bedrooms, “2000” sq. feet, etc.). But now we want to process images with our neural network. How in the world do we feed images into a neural network instead of just numbers?
The answer is incredibly simple. A neural network takes numbers as input. To a computer, an image is really just a grid of numbers that represent how dark each pixel is:
To feed an image into our neural network, we simply treat the 18x18 pixel image as an array of 324 numbers:
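A quick sketch of that flattening step with NumPy (the scaling to the 0..1 range is an assumption for convenience):

# Sketch: how an 18x18 grayscale image becomes a flat list of 324 numbers.
import numpy as np

image = np.random.randint(0, 256, size=(18, 18))   # stand-in for one MNIST-style digit
inputs = image.reshape(-1) / 255.0                 # 324 values scaled to 0..1
print(inputs.shape)                                # (324,)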
To handle 324 inputs, we’ll just enlarge our neural network to have 324 input nodes:
Notice that our neural network also has two outputs now (instead of just one). The first output will predict the likelihood that the image is an “8” and the second output will predict the likelihood it isn’t an “8”. By having a separate output for each type of object we want to recognize, we can use a neural network to classify objects into groups.
Our neural network is a lot bigger than last time (324 inputs instead of 3!). But any modern computer can handle a neural network with a few hundred nodes without blinking. This would even work fine on your cell phone.
All that’s left is to train the neural network with images of “8”s and not-“8"s so it learns to tell them apart. When we feed in an “8”, we’ll tell it the probability the image is an “8” is 100% and the probability it’s not an “8” is 0%. Vice versa for the counter-example images.
Here’s some of our training data:
We can train this kind of neural network in a few minutes on a modern laptop. When it’s done, we’ll have a neural network that can recognize pictures of “8”s with a pretty high accuracy. Welcome to the world of (late 1980’s-era) image recognition!
It’s really neat that simply feeding pixels into a neural network actually worked to build image recognition! Machine learning is magic! ...right?
Well, of course it’s not that simple.
First, the good news is that our “8” recognizer really does work well on simple images where the letter is right in the middle of the image:
But now the really bad news:
Our “8” recognizer totally fails to work when the letter isn’t perfectly centered in the image. Just the slightest position change ruins everything:
This is because our network only learned the pattern of a perfectly-centered “8”. It has absolutely no idea what an off-center “8” is. It knows exactly one pattern and one pattern only.
That’s not very useful in the real world. Real world problems are never that clean and simple. So we need to figure out how to make our neural network work in cases where the “8” isn’t perfectly centered.
We already created a really good program for finding an “8” centered in an image. What if we just scan all around the image for possible “8”s in smaller sections, one section at a time, until we find one?
This approach is called a sliding window. It’s the brute force solution. It works well in some limited cases, but it’s really inefficient. You have to check the same image over and over looking for objects of different sizes. We can do better than this!
When we trained our network, we only showed it “8”s that were perfectly centered. What if we train it with more data, including “8”s in all different positions and sizes all around the image?
We don’t even need to collect new training data. We can just write a script to generate new images with the “8”s in all kinds of different positions in the image:
Using this technique, we can easily create an endless supply of training data.
More data makes the problem harder for our neural network to solve, but we can compensate for that by making our network bigger and thus able to learn more complicated patterns.
To make the network bigger, we just stack up layer upon layer of nodes:
We call this a “deep neural network” because it has more layers than a traditional neural network.
This idea has been around since the late 1960s. But until recently, training this large of a neural network was just too slow to be useful. But once we figured out how to use 3d graphics cards (which were designed to do matrix multiplication really fast) instead of normal computer processors, working with large neural networks suddenly became practical. In fact, the exact same NVIDIA GeForce GTX 1080 video card that you use to play Overwatch can be used to train neural networks incredibly quickly.
But even though we can make our neural network really big and train it quickly with a 3d graphics card, that still isn’t going to get us all the way to a solution. We need to be smarter about how we process images into our neural network.
Think about it. It doesn’t make sense to train a network to recognize an “8” at the top of a picture separately from training it to recognize an “8” at the bottom of a picture as if those were two totally different objects.
There should be some way to make the neural network smart enough to know that an “8” anywhere in the picture is the same thing without all that extra training. Luckily... there is!
As a human, you intuitively know that pictures have a hierarchy or conceptual structure. Consider this picture:
As a human, you instantly recognize the hierarchy in this picture:
Most importantly, we recognize the idea of a child no matter what surface the child is on. We don’t have to re-learn the idea of child for every possible surface it could appear on.
But right now, our neural network can’t do this. It thinks that an “8” in a different part of the image is an entirely different thing. It doesn’t understand that moving an object around in the picture doesn’t make it something different. This means it has to re-learn the identity of each object in every possible position. That sucks.
We need to give our neural network understanding of translation invariance — an “8” is an “8” no matter where in the picture it shows up.
We’ll do this using a process called Convolution. The idea of convolution is inspired partly by computer science and partly by biology (i.e. mad scientists literally poking cat brains with weird probes to figure out how cats process images).
Instead of feeding entire images into our neural network as one grid of numbers, we’re going to do something a lot smarter that takes advantage of the idea that an object is the same no matter where it appears in a picture.
Here’s how it’s going to work, step by step —
Similar to our sliding window search above, let’s pass a sliding window over the entire original image and save each result as a separate, tiny picture tile:
By doing this, we turned our original image into 77 equally-sized tiny image tiles.
Earlier, we fed a single image into a neural network to see if it was an “8”. We’ll do the exact same thing here, but we’ll do it for each individual image tile:
However, there’s one big twist: We’ll keep the same neural network weights for every single tile in the same original image. In other words, we are treating every image tile equally. If something interesting appears in any given tile, we’ll mark that tile as interesting.
We don’t want to lose track of the arrangement of the original tiles. So we save the result from processing each tile into a grid in the same arrangement as the original image. It looks like this:
In other words, we’ve started with a large image and we ended with a slightly smaller array that records which sections of our original image were the most interesting.
The result of Step 3 was an array that maps out which parts of the original image are the most interesting. But that array is still pretty big:
To reduce the size of the array, we downsample it using an algorithm called max pooling. It sounds fancy, but it isn’t at all!
We’ll just look at each 2x2 square of the array and keep the biggest number:
The idea here is that if we found something interesting in any of the four input tiles that makes up each 2x2 grid square, we’ll just keep the most interesting bit. This reduces the size of our array while keeping the most important bits.
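Here is a tiny NumPy sketch of that 2x2 max pooling step, assuming an 8x8 grid of "interestingness" values:

# Sketch of 2x2 max pooling with plain NumPy: keep the largest value in each
# 2x2 square, halving the width and height of the array.
import numpy as np

activations = np.random.rand(8, 8)                       # "interestingness" grid from Step 3
pooled = activations.reshape(4, 2, 4, 2).max(axis=(1, 3))
print(activations.shape, "->", pooled.shape)             # (8, 8) -> (4, 4)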
So far, we’ve reduced a giant image down into a fairly small array.
Guess what? That array is just a bunch of numbers, so we can use that small array as input into another neural network. This final neural network will decide if the image is or isn’t a match. To differentiate it from the convolution step, we call it a “fully connected” network.
So from start to finish, our whole five-step pipeline looks like this:
Our image processing pipeline is a series of steps: convolution, max-pooling, and finally a fully-connected network.
When solving problems in the real world, these steps can be combined and stacked as many times as you want! You can have two, three or even ten convolution layers. You can throw in max pooling wherever you want to reduce the size of your data.
The basic idea is to start with a large image and continually boil it down, step-by-step, until you finally have a single result. The more convolution steps you have, the more complicated features your network will be able to learn to recognize.
For example, the first convolution step might learn to recognize sharp edges, the second convolution step might recognize beaks using its knowledge of sharp edges, the third step might recognize entire birds using its knowledge of beaks, etc.
Here’s what a more realistic deep convolutional network (like you would find in a research paper) looks like:
In this case, they start with a 224 x 224 pixel image, apply convolution and max pooling twice, apply convolution 3 more times, apply max pooling and then have two fully-connected layers. The end result is that the image is classified into one of 1000 categories!
So how do you know which steps you need to combine to make your image classifier work?
Honestly, you have to answer this by doing a lot of experimentation and testing. You might have to train 100 networks before you find the optimal structure and parameters for the problem you are solving. Machine learning involves a lot of trial and error!
Now finally we know enough to write a program that can decide if a picture is a bird or not.
As always, we need some data to get started. The free CIFAR10 data set contains 6,000 pictures of birds and 52,000 pictures of things that are not birds. But to get even more data we’ll also add in the Caltech-UCSD Birds-200–2011 data set that has another 12,000 bird pics.
Here’s a few of the birds from our combined data set:
And here’s some of the 52,000 non-bird images:
This data set will work fine for our purposes, but 72,000 low-res images is still pretty small for real-world applications. If you want Google-level performance, you need millions of large images. In machine learning, having more data is almost always more important than having better algorithms. Now you know why Google is so happy to offer you unlimited photo storage. They want your sweet, sweet data!
To build our classifier, we’ll use TFLearn. TFlearn is a wrapper around Google’s TensorFlow deep learning library that exposes a simplified API. It makes building convolutional neural networks as easy as writing a few lines of code to define the layers of our network.
Here’s the code to define and train the network:
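The code itself is hosted with the original post; the snippet below is only an approximate TFLearn sketch of the kind of network described (convolutions, max pooling, then fully connected layers), with layer sizes that are assumptions rather than the author's exact model.

# Approximate TFLearn sketch of a small convolutional bird classifier.
# Layer sizes are illustrative assumptions, not the author's exact network.
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

network = input_data(shape=[None, 32, 32, 3])                # 32x32 color images
network = conv_2d(network, 32, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 64, 3, activation='relu')
network = conv_2d(network, 64, 3, activation='relu')
network = max_pool_2d(network, 2)
network = fully_connected(network, 512, activation='relu')
network = dropout(network, 0.5)
network = fully_connected(network, 2, activation='softmax')  # bird / not-bird
network = regression(network, optimizer='adam', loss='categorical_crossentropy')

model = tflearn.DNN(network)
# model.fit(X, Y, n_epoch=50, validation_set=(X_test, Y_test), shuffle=True)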
If you are training with a good video card with enough RAM (like an Nvidia GeForce GTX 980 Ti or better), this will be done in less than an hour. If you are training with a normal cpu, it might take a lot longer.
As it trains, the accuracy will increase. After the first pass, I got 75.4% accuracy. After just 10 passes, it was already up to 91.7%. After 50 or so passes, it capped out around 95.5% accuracy and additional training didn’t help, so I stopped it there.
Congrats! Our program can now recognize birds in images!
Now that we have a trained neural network, we can use it! Here’s a simple script that takes in a single image file and predicts if it is a bird or not.
But to really see how effective our network is, we need to test it with lots of images. The data set I created held back 15,000 images for validation. When I ran those 15,000 images through the network, it predicted the correct answer 95% of the time.
That seems pretty good, right? Well... it depends!
Our network claims to be 95% accurate. But the devil is in the details. That could mean all sorts of different things.
For example, what if 5% of our training images were birds and the other 95% were not birds? A program that guessed “not a bird” every single time would be 95% accurate! But it would also be 100% useless.
We need to look more closely at the numbers than just the overall accuracy. To judge how good a classification system really is, we need to look closely at how it failed, not just the percentage of the time that it failed.
Instead of thinking about our predictions as “right” and “wrong”, let’s break them down into four separate categories —
Using our validation set of 15,000 images, here’s how many times our predictions fell into each category:
Why do we break our results down like this? Because not all mistakes are created equal.
Imagine if we were writing a program to detect cancer from an MRI image. If we were detecting cancer, we’d rather have false positives than false negatives. False negatives would be the worse possible case — that’s when the program told someone they definitely didn’t have cancer but they actually did.
Instead of just looking at overall accuracy, we calculate Precision and Recall metrics. Precision and Recall metrics give us a clearer picture of how well we did:
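As a sketch of the arithmetic (with made-up placeholder counts, not the numbers from the actual validation run):

# Sketch: precision and recall from the four counts above. The numbers below are
# made-up placeholders to show the formulas, not the counts from the real run.
true_positives = 900    # guessed "bird" and it really was a bird
false_positives = 30    # guessed "bird" but it wasn't
false_negatives = 100   # it was a bird but we guessed "not a bird"

precision = true_positives / (true_positives + false_positives)   # when we say bird, how often are we right?
recall = true_positives / (true_positives + false_negatives)      # what fraction of real birds did we find?
print(f"precision: {precision:.2%}, recall: {recall:.2%}")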
This tells us that 97% of the time we guessed “Bird”, we were right! But it also tells us that we only found 90% of the actual birds in the data set. In other words, we might not find every bird but we are pretty sure about it when we do find one!
Now that you know the basics of deep convolutional networks, you can try out some of the examples that come with tflearn to get your hands dirty with different neural network architectures. It even comes with built-in data sets so you don’t even have to find your own images.
You also know enough now to start branching and learning about other areas of machine learning. Why not learn how to use algorithms to train computers how to play Atari games next?
If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 4, Part 5 and Part 6!
Interested in computers and machine learning. Likes to write about it.
|
Adam Geitgey | 15.2K | 13 | https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78?source=tag_archive---------2---------------- | Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in 普通话, Русский, 한국어, Português, Tiếng Việt or Italiano.
Have you noticed that Facebook has developed an uncanny ability to recognize your friends in your photographs? In the old days, Facebook used to make you tag your friends in photos by clicking on them and typing in their name. Now as soon as you upload a photo, Facebook tags everyone for you like magic:
This technology is called face recognition. Facebook’s algorithms are able to recognize your friends’ faces after they have been tagged only a few times. It’s pretty amazing technology — Facebook can recognize faces with 98% accuracy which is pretty much as good as humans can do!
Let’s learn how modern face recognition works! But just recognizing your friends would be too easy. We can push this tech to the limit to solve a more challenging problem — telling Will Ferrell (famous actor) apart from Chad Smith (famous rock musician)!
So far in Part 1, 2 and 3, we’ve used machine learning to solve isolated problems that have only one step — estimating the price of a house, generating new data based on existing data and telling if an image contains a certain object. All of those problems can be solved by choosing one machine learning algorithm, feeding in data, and getting the result.
But face recognition is really a series of several related problems:
As a human, your brain is wired to do all of this automatically and instantly. In fact, humans are too good at recognizing faces and end up seeing faces in everyday objects:
Computers are not capable of this kind of high-level generalization (at least not yet...), so we have to teach them how to do each step in this process separately.
We need to build a pipeline where we solve each step of face recognition separately and pass the result of the current step to the next step. In other words, we will chain together several machine learning algorithms:
Let’s tackle this problem one step at a time. For each step, we’ll learn about a different machine learning algorithm. I’m not going to explain every single algorithm completely to keep this from turning into a book, but you’ll learn the main ideas behind each one and you’ll learn how you can build your own facial recognition system in Python using OpenFace and dlib.
The first step in our pipeline is face detection. Obviously we need to locate the faces in a photograph before we can try to tell them apart!
If you’ve used any camera in the last 10 years, you’ve probably seen face detection in action:
Face detection is a great feature for cameras. When the camera can automatically pick out faces, it can make sure that all the faces are in focus before it takes the picture. But we’ll use it for a different purpose — finding the areas of the image we want to pass on to the next step in our pipeline.
Face detection went mainstream in the early 2000's when Paul Viola and Michael Jones invented a way to detect faces that was fast enough to run on cheap cameras. However, much more reliable solutions exist now. We’re going to use a method invented in 2005 called Histogram of Oriented Gradients — or just HOG for short.
To find faces in an image, we’ll start by making our image black and white because we don’t need color data to find faces:
Then we’ll look at every single pixel in our image one at a time. For every single pixel, we want to look at the pixels directly surrounding it:
Our goal is to figure out how dark the current pixel is compared to the pixels directly surrounding it. Then we want to draw an arrow showing in which direction the image is getting darker:
If you repeat that process for every single pixel in the image, you end up with every pixel being replaced by an arrow. These arrows are called gradients and they show the flow from light to dark across the entire image:
This might seem like a random thing to do, but there’s a really good reason for replacing the pixels with gradients. If we analyze pixels directly, really dark images and really light images of the same person will have totally different pixel values. But by only considering the direction that brightness changes, both really dark images and really bright images will end up with the same exact representation. That makes the problem a lot easier to solve!
But saving the gradient for every single pixel gives us way too much detail. We end up missing the forest for the trees. It would be better if we could just see the basic flow of lightness/darkness at a higher level so we could see the basic pattern of the image.
To do this, we’ll break up the image into small squares of 16x16 pixels each. In each square, we’ll count up how many gradients point in each major direction (how many point up, point up-right, point right, etc...). Then we’ll replace that square in the image with the arrow directions that were the strongest.
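If you want to see what those per-cell gradient histograms look like in code, scikit-image can compute a HOG representation; this is just an illustration of the 16x16-cell idea, not the dlib-based code the post links to.

# Sketch: computing a HOG representation with scikit-image. Parameters mirror
# the description above (16x16 cells); they are illustrative, not the post's.
from skimage import data, color, feature

image = color.rgb2gray(data.astronaut())
hog_vector, hog_image = feature.hog(
    image,
    orientations=8,               # how many gradient directions to count per cell
    pixels_per_cell=(16, 16),     # the 16x16 squares described above
    cells_per_block=(1, 1),
    visualize=True,
)
print(hog_vector.shape)           # one histogram of gradient directions per cell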
The end result is we turn the original image into a very simple representation that captures the basic structure of a face in a simple way:
To find faces in this HOG image, all we have to do is find the part of our image that looks the most similar to a known HOG pattern that was extracted from a bunch of other training faces:
Using this technique, we can now easily find faces in any image:
If you want to try this step out yourself using Python and dlib, here’s code showing how to generate and view HOG representations of images.
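For reference, dlib also ships a pre-trained HOG-based face detector that does this whole step in a few lines; the image path below is a placeholder.

# Sketch: finding faces with dlib's built-in HOG-based detector.
import dlib

detector = dlib.get_frontal_face_detector()      # pre-trained HOG + linear SVM face detector
image = dlib.load_rgb_image("people.jpg")        # placeholder path

faces = detector(image, 1)                       # 1 = upsample once to find smaller faces
for i, face in enumerate(faces):
    print(f"Face {i}: left={face.left()} top={face.top()} right={face.right()} bottom={face.bottom()}")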
Whew, we isolated the faces in our image. But now we have to deal with the problem that faces turned different directions look totally different to a computer:
To account for this, we will try to warp each picture so that the eyes and lips are always in the same place in the image. This will make it a lot easier for us to compare faces in the next steps.
To do this, we are going to use an algorithm called face landmark estimation. There are lots of ways to do this, but we are going to use the approach invented in 2014 by Vahid Kazemi and Josephine Sullivan.
The basic idea is we will come up with 68 specific points (called landmarks) that exist on every face — the top of the chin, the outside edge of each eye, the inner edge of each eyebrow, etc. Then we will train a machine learning algorithm to be able to find these 68 specific points on any face:
Here’s the result of locating the 68 face landmarks on our test image:
Now that we know where the eyes and mouth are, we’ll simply rotate, scale and shear the image so that the eyes and mouth are centered as well as possible. We won’t do any fancy 3D warps because that would introduce distortions into the image. We are only going to use basic image transformations like rotation and scale that preserve parallel lines (called affine transformations):
Now no matter how the face is turned, we are able to center the eyes and mouth in roughly the same position in the image. This will make our next step a lot more accurate.
If you want to try this step out yourself using Python and dlib, here’s the code for finding face landmarks and here’s the code for transforming the image using those landmarks.
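Roughly, the landmark-plus-alignment step looks like the sketch below. This isn’t the exact code linked above: it assumes you’ve downloaded dlib’s 68-point model file, and the template coordinates for the eyes and nose are made-up example values.

```python
# A sketch of landmark detection + affine alignment. The model file must be
# downloaded separately; the template coordinates below are made-up examples.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("test_image.jpg")
face_rect = detector(image, 1)[0]                 # assume at least one face was found
landmarks = predictor(image, face_rect)           # 68 (x, y) points

# Pick three stable landmarks: the outer eye corners and the bottom of the nose
src = np.float32([(landmarks.part(i).x, landmarks.part(i).y) for i in (36, 45, 33)])

# Where we'd like those three points to land in a 160x160 aligned crop
dst = np.float32([(40, 55), (120, 55), (80, 100)])

# An affine transform (rotate/scale/shear only) maps one triangle onto the other
matrix = cv2.getAffineTransform(src, dst)
aligned = cv2.warpAffine(image, matrix, (160, 160))
cv2.imwrite("aligned_face.jpg", aligned)
```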
Now we are to the meat of the problem — actually telling faces apart. This is where things get really interesting!
The simplest approach to face recognition is to directly compare the unknown face we found in Step 2 with all the pictures we have of people that have already been tagged. When we find a previously tagged face that looks very similar to our unknown face, it must be the same person. Seems like a pretty good idea, right?
There’s actually a huge problem with that approach. A site like Facebook with billions of users and a trillion photos can’t possibly loop through every previously tagged face to compare it to every newly uploaded picture. That would take way too long. They need to be able to recognize faces in milliseconds, not hours.
What we need is a way to extract a few basic measurements from each face. Then we could measure our unknown face the same way and find the known face with the closest measurements. For example, we might measure the size of each ear, the spacing between the eyes, the length of the nose, etc. If you’ve ever watched a bad crime show like CSI, you know what I am talking about:
Ok, so which measurements should we collect from each face to build our known face database? Ear size? Nose length? Eye color? Something else?
It turns out that the measurements that seem obvious to us humans (like eye color) don’t really make sense to a computer looking at individual pixels in an image. Researchers have discovered that the most accurate approach is to let the computer figure out the measurements to collect itself. Deep learning does a better job than humans at figuring out which parts of a face are important to measure.
The solution is to train a Deep Convolutional Neural Network (just like we did in Part 3). But instead of training the network to recognize objects in pictures like we did last time, we are going to train it to generate 128 measurements for each face.
The training process works by looking at 3 face images at a time:
Then the algorithm looks at the measurements it is currently generating for each of those three images. It then tweaks the neural network slightly so that it makes sure the measurements it generates for #1 and #2 are slightly closer while making sure the measurements for #2 and #3 are slightly further apart:
After repeating this step millions of times for millions of images of thousands of different people, the neural network learns to reliably generate 128 measurements for each person. Any ten different pictures of the same person should give roughly the same measurements.
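Here’s a toy illustration of that triplet idea with made-up embeddings. Real systems compute this as the network’s loss function and backpropagate it, but the arithmetic is just this:

```python
# Toy triplet loss with made-up 128-number embeddings (not a real training loop).
import numpy as np

def triplet_loss(anchor, same_person, different_person, margin=0.2):
    pos_dist = np.sum((anchor - same_person) ** 2)       # should be small
    neg_dist = np.sum((anchor - different_person) ** 2)  # should be bigger, by at least `margin`
    return max(pos_dist - neg_dist + margin, 0.0)        # zero loss once the gap is big enough

rng = np.random.default_rng(0)
anchor, same, different = rng.normal(size=(3, 128))
print(triplet_loss(anchor, same, different))
```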
Machine learning people call the 128 measurements of each face an embedding. The idea of reducing complicated raw data like a picture into a list of computer-generated numbers comes up a lot in machine learning (especially in language translation). The exact approach for faces we are using was invented in 2015 by researchers at Google but many similar approaches exist.
This process of training a convolutional neural network to output face embeddings requires a lot of data and computer power. Even with an expensive NVIDIA Tesla video card, it takes about 24 hours of continuous training to get good accuracy.
But once the network has been trained, it can generate measurements for any face, even ones it has never seen before! So this step only needs to be done once. Lucky for us, the fine folks at OpenFace already did this and they published several trained networks which we can directly use. Thanks Brandon Amos and team!
So all we need to do ourselves is run our face images through their pre-trained network to get the 128 measurements for each face. Here are the measurements for our test image:
So what parts of the face are these 128 numbers measuring exactly? It turns out that we have no idea, and it doesn’t really matter to us. All we care about is that the network generates nearly the same numbers when looking at two different pictures of the same person.
If you want to try this step yourself, OpenFace provides a lua script that will generate embeddings for all images in a folder and write them to a csv file. You run it like this.
This last step is actually the easiest step in the whole process. All we have to do is find the person in our database of known people who has the closest measurements to our test image.
You can do that by using any basic machine learning classification algorithm. No fancy deep learning tricks are needed. We’ll use a simple linear SVM classifier, but lots of classification algorithms could work.
All we need to do is train a classifier that can take in the measurements from a new test image and tell us which known person is the closest match. Running this classifier takes milliseconds. The result of the classifier is the name of the person!
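Here’s roughly what that looks like with scikit-learn. This is just a sketch: the .npy filenames are placeholders for wherever you saved the embeddings from the previous step.

```python
# A sketch of the classification step. The .npy filenames are placeholders.
import numpy as np
from sklearn.svm import SVC

embeddings = np.load("known_face_embeddings.npy")   # shape: (num_pictures, 128)
names = np.load("known_face_names.npy")             # one name per picture

classifier = SVC(kernel="linear")
classifier.fit(embeddings, names)

unknown = np.load("unknown_face_embedding.npy").reshape(1, -1)
print("This is probably:", classifier.predict(unknown)[0])
```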
So let’s try out our system. First, I trained a classifier with the embeddings of about 20 pictures each of Will Ferrell, Chad Smith and Jimmy Fallon:
Then I ran the classifier on every frame of the famous YouTube video of Will Ferrell and Chad Smith pretending to be each other on the Jimmy Fallon show:
It works! And look how well it works for faces in different poses — even sideways faces!
Let’s review the steps we followed:
Now that you know how this all works, here are start-to-finish instructions for running this entire face recognition pipeline on your own computer:
UPDATE 4/9/2017: You can still follow the steps below to use OpenFace. However, I’ve released a new Python-based face recognition library called face_recognition that is much easier to install and use. So I’d recommend trying out face_recognition first instead of continuing below!
I even put together a pre-configured virtual machine with face_recognition, OpenCV, TensorFlow and lots of other deep learning tools pre-installed. You can download and run it on your computer very easily. Give the virtual machine a shot if you don’t want to install all these libraries yourself!
Original OpenFace instructions:
If you liked this article, please consider signing up for my Machine Learning is Fun! newsletter:
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 5!
|
Adam Geitgey | 10.4K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-part-2-a26a10b68df3?source=tag_archive---------3---------------- | Machine Learning is Fun! Part 2 – Adam Geitgey – Medium | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in Italiano, Español, Français, Türkçe, Русский, 한국어 Português, فارسی, Tiếng Việt or 普通话.
In Part 1, we said that Machine Learning is using generic algorithms to tell you something interesting about your data without writing any code specific to the problem you are solving. (If you haven’t already read part 1, read it now!).
This time, we are going to see one of these generic algorithms do something really cool — create video game levels that look like they were made by humans. We’ll build a neural network, feed it existing Super Mario levels and watch new ones pop out!
Just like Part 1, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is to be accessible to anyone — which means there are a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished.
Back in Part 1, we created a simple algorithm that estimated the value of a house based on its attributes. Given data about a house like this:
We ended up with this simple estimation function:
In other words, we estimated the value of the house by multiplying each of its attributes by a weight. Then we just added those numbers up to get the house’s value.
Instead of using code, let’s represent that same function as a simple diagram:
However this algorithm only works for simple problems where the result has a linear relationship with the input. What if the truth behind house prices isn’t so simple? For example, maybe the neighborhood matters a lot for big houses and small houses but doesn’t matter at all for medium-sized houses. How could we capture that kind of complicated detail in our model?
To be more clever, we could run this algorithm multiple times with different sets of weights that each capture different edge cases:
Now we have four different price estimates. Let’s combine those four price estimates into one final estimate. We’ll run them through the same algorithm again (but using another set of weights)!
Our new Super Answer combines the estimates from our four different attempts to solve the problem. Because of this, it can model more cases than we could capture in one simple model.
Let’s combine our four attempts to guess into one big diagram:
This is a neural network! Each node knows how to take in a set of inputs, apply weights to them, and calculate an output value. By chaining together lots of these nodes, we can model complex functions.
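If it helps to see that diagram as code, here’s a bare-bones sketch in NumPy with made-up weights: four nodes that each weight the inputs differently, plus one more node that combines their answers.

```python
# Bare-bones sketch of the diagram above. All weights are random placeholders.
import numpy as np

def node(inputs, weights, bias):
    # Each node: multiply every input by a weight, add them up (plus a bias)
    return np.dot(inputs, weights) + bias

house = np.array([3.0, 2000.0, 10.0, 1.0])   # bedrooms, sq. ft., age, neighborhood code

hidden_weights = np.random.randn(4, 4)       # four nodes, each with its own weights
hidden_biases = np.random.randn(4)
estimates = np.array([node(house, hidden_weights[i], hidden_biases[i]) for i in range(4)])

final_weights = np.random.randn(4)           # one more node combines the four estimates
super_answer = node(estimates, final_weights, 0.0)
print(super_answer)
```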
There’s a lot that I’m skipping over to keep this brief (including feature scaling and the activation function), but the most important part is that these basic ideas click:
It’s just like LEGO! We can’t model much with one single LEGO block, but we can model anything if we have enough basic LEGO blocks to stick together:
The neural network we’ve seen always returns the same answer when you give it the same inputs. It has no memory. In programming terms, it’s a stateless algorithm.
In many cases (like estimating the price of house), that’s exactly what you want. But the one thing this kind of model can’t do is respond to patterns in data over time.
Imagine I handed you a keyboard and asked you to write a story. But before you start, my job is to guess the very first letter that you will type. What letter should I guess?
I can use my knowledge of English to increase my odds of guessing the right letter. For example, you will probably type a letter that is common at the beginning of words. If I looked at stories you wrote in the past, I could narrow it down further based on the words you usually use at the beginning of your stories. Once I had all that data, I could use it to build a neural network to model how likely it is that you would start with any given letter.
Our model might look like this:
But let’s make the problem harder. Let’s say I need to guess the next letter you are going to type at any point in your story. This is a much more interesting problem.
Let’s use the first few words of Ernest Hemingway’s The Sun Also Rises as an example:
What letter is going to come next?
You probably guessed ’n’ — the word is probably going to be boxing. We know this based on the letters we’ve already seen in the sentence and our knowledge of common words in English. Also, the word ‘middleweight’ gives us an extra clue that we are talking about boxing.
In other words, it’s easy to guess the next letter if we take into account the sequence of letters that came right before it and combine that with our knowledge of the rules of English.
To solve this problem with a neural network, we need to add state to our model. Each time we ask our neural network for an answer, we also save a set of our intermediate calculations and re-use them the next time as part of our input. That way, our model will adjust its predictions based on the input that it has seen recently.
Keeping track of state in our model makes it possible to not just predict the most likely first letter in the story, but to predict the most likely next letter given all previous letters.
This is the basic idea of a Recurrent Neural Network. We are updating the network each time we use it. This allows it to update its predictions based on what it saw most recently. It can even model patterns over time as long as we give it enough of a memory.
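In code, the memory amounts to feeding the previous state back in as an extra input. Here’s a bare-bones sketch with random placeholder weights (a real RNN learns these weights during training):

```python
# Bare-bones RNN step: the new state depends on the input AND the previous state.
import numpy as np

state_size, input_size = 16, 8
W_input = np.random.randn(state_size, input_size) * 0.1   # placeholder weights
W_state = np.random.randn(state_size, state_size) * 0.1

def rnn_step(current_input, previous_state):
    return np.tanh(W_input @ current_input + W_state @ previous_state)

state = np.zeros(state_size)                               # the "memory" starts empty
for letter_vector in np.random.randn(5, input_size):       # stand-in for 5 letters of input
    state = rnn_step(letter_vector, state)

print(state[:4])   # the state now reflects everything the network has "read" so far
```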
Predicting the next letter in a story might seem pretty useless. What’s the point?
One cool use might be auto-predict for a mobile phone keyboard:
But what if we took this idea to the extreme? What if we asked the model to predict the next most likely character over and over — forever? We’d be asking it to write a complete story for us!
We saw how we could guess the next letter in Hemingway’s sentence. Let’s try generating a whole story in the style of Hemingway.
To do this, we are going to use the Recurrent Neural Network implementation that Andrej Karpathy wrote. Andrej is a Deep Learning researcher at Stanford and he wrote an excellent introduction to generating text with RNNs. You can view all the code for the model on GitHub.
We’ll create our model from the complete text of The Sun Also Rises — 362,239 characters using 84 unique letters (including punctuation, uppercase/lowercase, etc). This data set is actually really small compared to typical real-world applications. To generate a really good model of Hemingway’s style, it would be much better to have several times as much sample text. But this is good enough to play around with as an example.
As we just start to train the RNN, it’s not very good at predicting letters. Here’s what it generates after 100 loops of training:
You can see that it has figured out that sometimes words have spaces between them, but that’s about it.
After about 1000 iterations, things are looking more promising:
The model has started to identify the patterns in basic sentence structure. It’s adding periods at the ends of sentences and even quoting dialog. A few words are recognizable, but there’s also still a lot of nonsense.
But after several thousand more training iterations, it looks pretty good:
At this point, the algorithm has captured the basic pattern of Hemingway’s short, direct dialog. A few sentences even sort of make sense.
Compare that with some real text from the book:
Even by only looking for patterns one character at a time, our algorithm has reproduced plausible-looking prose with proper formatting. That is kind of amazing!
We don’t have to generate text completely from scratch, either. We can seed the algorithm by supplying the first few letters and just let it find the next few letters.
For fun, let’s make a fake book cover for our imaginary book by generating a new author name and a new title using the seed text of “Er”, “He”, and “The S”:
Not bad!
But the really mind-blowing part is that this algorithm can figure out patterns in any sequence of data. It can easily generate real-looking recipes or fake Obama speeches. But why limit ourselves to human language? We can apply this same idea to any kind of sequential data that has a pattern.
In 2015, Nintendo released Super Mario Maker™ for the Wii U gaming system.
This game lets you draw out your own Super Mario Brothers levels on the gamepad and then upload them to the internet so your friends can play through them. You can include all the classic power-ups and enemies from the original Mario games in your levels. It’s like a virtual LEGO set for people who grew up playing Super Mario Brothers.
Can we use the same model that generated fake Hemingway text to generate fake Super Mario Brothers levels?
First, we need a data set for training our model. Let’s take all the outdoor levels from the original Super Mario Brothers game released in 1985:
This game has 32 levels and about 70% of them have the same outdoor style. So we’ll stick to those.
To get the designs for each level, I took an original copy of the game and wrote a program to pull the level designs out of the game’s memory. Super Mario Bros. is a 30-year-old game and there are lots of resources online that help you figure out how the levels were stored in the game’s memory. Extracting level data from an old video game is a fun programming exercise that you should try sometime.
Here’s the first level from the game (which you probably remember if you ever played it):
If we look closely, we can see the level is made of a simple grid of objects:
We could just as easily represent this grid as a sequence of characters with one character representing each object:
We’ve replaced each object in the level with a letter:
...and so on, using a different letter for each different kind of object in the level.
I ended up with text files that looked like this:
Looking at the text file, you can see that Mario levels don’t really have much of a pattern if you read them line-by-line:
The patterns in a level really emerge when you think of the level as a series of columns:
So in order for the algorithm to find the patterns in our data, we need to feed the data in column-by-column. Figuring out the most effective representation of your input data (called feature selection) is one of the keys of using machine learning algorithms well.
To train the model, I needed to rotate my text files by 90 degrees. This made sure the characters were fed into the model in an order where a pattern would more easily show up:
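The rotation itself is just a few lines of Python. Here’s a tiny sketch, assuming one character per tile and one line per row of the level (the filenames are placeholders):

```python
# Regroup the level text column-by-column so the RNN reads it in that order.
with open("level_1_1.txt") as f:                      # placeholder filename
    rows = [line.rstrip("\n") for line in f]

columns = ["".join(column) for column in zip(*rows)]  # zip(*rows) transposes the grid

with open("level_1_1_rotated.txt", "w") as f:
    f.write("\n".join(columns))
```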
Just like we saw when creating the model of Hemingway’s prose, a model improves as we train it.
After a little training, our model is generating junk:
It sort of has an idea that ‘-’s and ‘=’s should show up a lot, but that’s about it. It hasn’t figured out the pattern yet.
After several thousand iterations, it’s starting to look like something:
The model has almost figured out that each line should be the same length. It has even started to figure out some of the logic of Mario: the pipes in Mario are always two blocks wide and at least two blocks high, so the “P”s in the data should appear in 2x2 clusters. That’s pretty cool!
With a lot more training, the model gets to the point where it generates perfectly valid data:
Let’s sample an entire level’s worth of data from our model and rotate it back horizontal:
This data looks great! There are several awesome things to notice:
Finally, let’s take this level and recreate it in Super Mario Maker:
Play it yourself!
If you have Super Mario Maker, you can play this level by bookmarking it online or by looking it up using level code 4AC9–0000–0157-F3C3.
The recurrent neural network algorithm we used to train our model is the same kind of algorithm used by real-world companies to solve hard problems like speech detection and language translation. What makes our model a ‘toy’ instead of cutting-edge is that our model is generated from very little data. There just aren’t enough levels in the original Super Mario Brothers game to provide enough data for a really good model.
If we could get access to the hundreds of thousands of user-created Super Mario Maker levels that Nintendo has, we could make an amazing model. But we can’t — because Nintendo won’t let us have them. Big companies don’t give away their data for free.
As machine learning becomes more important in more industries, the difference between a good program and a bad program will be how much data you have to train your models. That’s why companies like Google and Facebook need your data so badly!
For example, Google recently open sourced TensorFlow, its software toolkit for building large-scale machine learning applications. It was a pretty big deal that Google gave away such important, capable technology for free. This is the same stuff that powers Google Translate.
But without Google’s massive trove of data in every language, you can’t create a competitor to Google Translate. Data is what gives Google its edge. Think about that the next time you open up your Google Maps Location History or Facebook Location History and notice that it stores every place you’ve ever been.
In machine learning, there’s never a single way to solve a problem. You have limitless options when deciding how to pre-process your data and which algorithms to use. Often combining multiple approaches will give you better results than any single approach.
Readers have sent me links to other interesting approaches to generating Super Mario levels:
If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 3!
|
Arthur Juliani | 9K | 6 | https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0?source=tag_archive---------4---------------- | Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks | For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different than the policy-based algorithms that will be looked at in the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead).
Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. While both approaches ultimately allow us to take intelligent actions given a situation, the means of getting to that action differ significantly. You may have heard about Deep Q-Networks which can play Atari Games. These are really just larger and more complex implementations of the Q-Learning algorithm we are going to discuss here.
For this tutorial we are going to be attempting to solve the FrozenLake environment from the OpenAI gym. For those unfamiliar, the OpenAI gym provides an easy way for people to experiment with their learning agents in an array of provided toy games. The FrozenLake environment consists of a 4x4 grid of blocks, each one either being the start block, the goal block, a safe frozen block, or a dangerous hole. The objective is to have an agent learn to navigate from the start to the goal without moving onto a hole. At any given time the agent can choose to move either up, down, left, or right. The catch is that there is a wind which occasionally blows the agent onto a space they didn’t choose. As such, perfect performance every time is impossible, but learning to avoid the holes and reach the goal are certainly still doable. The reward at every step is 0, except for entering the goal, which provides a reward of 1. Thus, we will need an algorithm that learns long-term expected rewards. This is exactly what Q-Learning is designed to provide.
In its simplest implementation, Q-Learning is a table of values for every state (row) and action (column) possible in the environment. Within each cell of the table, we learn a value for how good it is to take a given action within a given state. In the case of the FrozenLake environment, we have 16 possible states (one for each block), and 4 possible actions (the four directions of movement), giving us a 16x4 table of Q-values. We start by initializing the table to be uniform (all zeros), and then as we observe the rewards we obtain for various actions, we update the table accordingly.
We make updates to our Q-table using something called the Bellman equation, which states that the expected long-term reward for a given action is equal to the immediate reward from the current action combined with the expected reward from the best future action taken at the following state. In this way, we reuse our own Q-table when estimating how to update our table for future actions! In equation form, the rule looks like this:
This says that the Q-value for a given state (s) and action (a) should represent the current reward (r) plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in. The discount variable allows us to decide how important the possible future rewards are compared to the present reward. By updating in this way, the table slowly begins to obtain accurate measures of the expected future reward for a given action in a given state. Below is a Python walkthrough of the Q-Table algorithm implemented in the FrozenLake environment:
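The embedded code listing didn’t survive in this copy of the article, but the whole algorithm fits in a few lines. Here’s a minimal sketch using the classic gym API, where the update inside the loop is just the Bellman rule described above. The hyperparameters here are illustrative, not the tuned ones credited below.

```python
# Minimal Q-Table sketch for FrozenLake (illustrative hyperparameters, classic gym API).
import gym
import numpy as np

env = gym.make("FrozenLake-v0")
Q = np.zeros([env.observation_space.n, env.action_space.n])   # the 16x4 table, all zeros

learning_rate, discount = 0.8, 0.95
for episode in range(2000):
    state = env.reset()
    done = False
    while not done:
        # Pick the best known action, plus noise that shrinks over time (exploration)
        noise = np.random.randn(1, env.action_space.n) * (1.0 / (episode + 1))
        action = np.argmax(Q[state, :] + noise)
        new_state, reward, done, _ = env.step(action)
        # Bellman update: current reward plus the discounted best future value
        Q[state, action] += learning_rate * (reward + discount * np.max(Q[new_state, :]) - Q[state, action])
        state = new_state
```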
(Thanks to Praneet D for finding the optimal hyperparameters for this approach)
Now, you may be thinking: tables are great, but they don’t really scale, do they? While it is easy to have a 16x4 table for a simple grid world, the number of possible states in any modern game or real-world environment is nearly infinitely larger. For most interesting problems, tables simply don’t work. We instead need some way to take a description of our state, and produce Q-values for actions without a table: that is where neural networks come in. By acting as a function approximator, we can take any number of possible states that can be represented as a vector and learn to map them to Q-values.
In the case of the FrozenLake example, we will be using a one-layer network which takes the state encoded in a one-hot vector (1x16), and produces a vector of 4 Q-values, one for each action. Such a simple network acts kind of like a glorified table, with the network weights serving as the old cells. The key difference is that we can easily expand the Tensorflow network with added layers, activation functions, and different input types, whereas all that is impossible with a regular table. The method of updating is a little different as well. Instead of directly updating our table, with a network we will be using backpropagation and a loss function. Our loss function will be sum-of-squares loss, where the difference between the current predicted Q-values, and the “target” value is computed and the gradients passed through the network. In this case, our Q-target for the chosen action is the equivalent to the Q-value computed in equation 1 above.
Below is the Tensorflow walkthrough of implementing our simple Q-Network:
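As with the table version, the embedded listing isn’t reproduced here, but the graph itself is tiny. Here’s a sketch of just the network definition in the TensorFlow 1.x style of the era; the training loop would feed in one-hot states and Bellman targets much like the table version above.

```python
# Sketch of the one-layer Q-Network graph (TensorFlow 1.x style).
import tensorflow as tf

tf.reset_default_graph()

# Input: the state as a 1x16 one-hot vector; output: 4 Q-values, one per action
inputs = tf.placeholder(shape=[1, 16], dtype=tf.float32)
weights = tf.Variable(tf.random_uniform([16, 4], 0, 0.01))
q_out = tf.matmul(inputs, weights)
predict = tf.argmax(q_out, 1)

# Loss: sum-of-squares difference between predicted Q-values and the Bellman targets
next_q = tf.placeholder(shape=[1, 4], dtype=tf.float32)
loss = tf.reduce_sum(tf.square(next_q - q_out))
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)
```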
While the network learns to solve the FrozenLake problem, it turns out it doesn’t do so quite as efficiently as the Q-Table. While neural networks allow for greater flexibility, they do so at the cost of stability when it comes to Q-Learning. There are a number of possible extensions to our simple Q-Network which allow for greater performance and more robust learning. Two tricks in particular are referred to as Experience Replay and Freezing Target Networks. Those improvements and other tweaks were the key to getting Atari-playing Deep Q-Networks, and we will be exploring those additions in the future. For more info on the theory behind Q-Learning, see this great post by Tambet Matiisen. I hope this tutorial has been helpful for those curious about how to implement simple Q-Learning algorithms!
If this post has been valuable to you, please consider donating to help support future tutorials, articles, and implementations. Any contribution is greatly appreciated!
If you’d like to follow my work on Deep Learning, AI, and Cognitive Science, follow me on Medium @Arthur Juliani, or on Twitter @awjliani.
More from my Simple Reinforcement Learning with Tensorflow series:
|
Adam Geitgey | 6.8K | 11 | https://medium.com/@ageitgey/machine-learning-is-fun-part-6-how-to-do-speech-recognition-with-deep-learning-28293c162f7a?source=tag_archive---------5---------------- | Machine Learning is Fun Part 6: How to do Speech Recognition with Deep Learning | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in 普通话 , 한국어, Tiếng Việt or Русский.
Speech recognition is invading our lives. It’s built into our phones, our game consoles and our smart watches. It’s even automating our homes. For just $50, you can get an Amazon Echo Dot — a magic box that allows you to order pizza, get a weather report or even buy trash bags — just by speaking out loud:
The Echo Dot has been so popular this holiday season that Amazon can’t seem to keep them in stock!
But speech recognition has been around for decades, so why is it just now hitting the mainstream? The reason is that deep learning finally made speech recognition accurate enough to be useful outside of carefully controlled environments.
Andrew Ng has long predicted that as speech recognition goes from 95% accurate to 99% accurate, it will become a primary way that we interact with computers. The idea is that this 4% accuracy gap is the difference between annoyingly unreliable and incredibly useful. Thanks to Deep Learning, we’re finally cresting that peak.
Let’s learn how to do speech recognition with deep learning!
If you know how neural machine translation works, you might guess that we could simply feed sound recordings into a neural network and train it to produce text:
That’s the holy grail of speech recognition with deep learning, but we aren’t quite there yet (at least at the time that I wrote this — I bet that we will be in a couple of years).
The big problem is that speech varies in speed. One person might say “hello!” very quickly and another person might say “heeeelllllllllllllooooo!” very slowly, producing a much longer sound file with much more data. But both sound files should be recognized as exactly the same text — “hello!” Automatically aligning audio files of various lengths to a fixed-length piece of text turns out to be pretty hard.
To work around this, we have to use some special tricks and extra processing in addition to a deep neural network. Let’s see how it works!
The first step in speech recognition is obvious — we need to feed sound waves into a computer.
In Part 3, we learned how to take an image and treat it as an array of numbers so that we can feed directly into a neural network for image recognition:
But sound is transmitted as waves. How do we turn sound waves into numbers? Let’s use this sound clip of me saying “Hello”:
Sound waves are one-dimensional. At every moment in time, they have a single value based on the height of the wave. Let’s zoom in on one tiny part of the sound wave and take a look:
To turn this sound wave into numbers, we just record the height of the wave at equally spaced points:
This is called sampling. We are taking a reading thousands of times a second and recording a number representing the height of the sound wave at that point in time. That’s basically all an uncompressed .wav audio file is.
“CD Quality” audio is sampled at 44.1 kHz (44,100 readings per second). But for speech recognition, a sampling rate of 16 kHz (16,000 samples per second) is enough to cover the frequency range of human speech.
Let’s sample our “Hello” sound wave 16,000 times per second. Here are the first 100 samples:
You might be thinking that sampling is only creating a rough approximation of the original sound wave because it’s only taking occasional readings. There are gaps in between our readings, so we must be losing data, right?
But thanks to the Nyquist theorem, we know that we can use math to perfectly reconstruct the original sound wave from the spaced-out samples — as long as we sample at least twice as fast as the highest frequency we want to record.
I mention this only because nearly everyone gets this wrong and assumes that using higher sampling rates always leads to better audio quality. It doesn’t.
</end rant>
We now have an array of numbers with each number representing the sound wave’s amplitude at 1/16,000th of a second intervals.
We could feed these numbers right into a neural network. But trying to recognize speech patterns by processing these samples directly is difficult. Instead, we can make the problem easier by doing some pre-processing on the audio data.
Let’s start by grouping our sampled audio into 20-millisecond-long chunks. Here’s our first 20 milliseconds of audio (i.e., our first 320 samples):
Plotting those numbers as a simple line graph gives us a rough approximation of the original sound wave for that 20 millisecond period of time:
This recording is only 1/50th of a second long. But even this short recording is a complex mish-mash of different frequencies of sound. There’s some low sounds, some mid-range sounds, and even some high-pitched sounds sprinkled in. But taken all together, these different frequencies mix together to make up the complex sound of human speech.
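If you want to follow along in code, here’s a quick sketch of the sampling and chunking steps with SciPy. The filename is a placeholder for any 16 kHz mono recording.

```python
# Read a .wav file and slice it into 20 ms chunks. "hello.wav" is a placeholder.
from scipy.io import wavfile

sample_rate, samples = wavfile.read("hello.wav")   # e.g. 16000, plus the array of amplitudes
print("Samples per second:", sample_rate)

chunk_size = int(sample_rate * 0.02)               # 20 milliseconds = 320 samples at 16 kHz
chunks = [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]
print("First few samples of the first chunk:", chunks[0][:10])
```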
To make this data easier for a neural network to process, we are going to break apart this complex sound wave into its component parts. We’ll break out the low-pitched parts, the next-lowest-pitched parts, and so on. Then by adding up how much energy is in each of those frequency bands (from low to high), we create a fingerprint of sorts for this audio snippet.
Imagine you had a recording of someone playing a C Major chord on a piano. That sound is the combination of three musical notes— C, E and G — all mixed together into one complex sound. We want to break apart that complex sound into the individual notes to discover that they were C, E and G. This is the exact same idea.
We do this using a mathematic operation called a Fourier transform. It breaks apart the complex sound wave into the simple sound waves that make it up. Once we have those individual sound waves, we add up how much energy is contained in each one.
The end result is a score of how important each frequency range is, from low pitch (i.e. bass notes) to high pitch. Each number below represents how much energy was in each 50 Hz band of our 20 millisecond audio clip:
But this is a lot easier to see when you draw this as a chart:
If we repeat this process on every 20 millisecond chunk of audio, we end up with a spectrogram (each column from left-to-right is one 20ms chunk):
A spectrogram is cool because you can actually see musical notes and other pitch patterns in audio data. A neural network can find patterns in this kind of data more easily than raw sound waves. So this is the data representation we’ll actually feed into our neural network.
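Here’s a rough sketch of that step using NumPy’s FFT. Real pipelines usually add windowing, overlapping chunks and a log/mel scale on top, but this captures the idea:

```python
# Turn 20 ms chunks of audio into a simple spectrogram with an FFT per chunk.
import numpy as np
from scipy.io import wavfile

sample_rate, samples = wavfile.read("hello.wav")   # placeholder filename
chunk_size = int(sample_rate * 0.02)               # 20 ms of samples

spectrogram = []
for start in range(0, len(samples) - chunk_size, chunk_size):
    chunk = samples[start:start + chunk_size]
    frequencies = np.fft.rfft(chunk)               # break the chunk into component frequencies
    spectrogram.append(np.abs(frequencies))        # keep the energy in each frequency band

spectrogram = np.array(spectrogram)                # one row per 20 ms chunk
print(spectrogram.shape)
```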
Now that we have our audio in a format that’s easy to process, we will feed it into a deep neural network. The input to the neural network will be 20 millisecond audio chunks. For each little audio slice, it will try to figure out the letter that corresponds to the sound currently being spoken.
We’ll use a recurrent neural network — that is, a neural network that has a memory that influences future predictions. That’s because each letter it predicts should affect the likelihood of the next letter it will predict too. For example, if we have said “HEL” so far, it’s very likely we will say “LO” next to finish out the word “Hello”. It’s much less likely that we will say something unpronounceable next like “XYZ”. So having that memory of previous predictions helps the neural network make more accurate predictions going forward.
After we run our entire audio clip through the neural network (one chunk at a time), we’ll end up with a mapping of each audio chunk to the letters most likely spoken during that chunk. Here’s what that mapping looks like for me saying “Hello”:
Our neural net is predicting that one likely thing I said was “HHHEE_LL_LLLOOO”. But it also thinks that it was possible that I said “HHHUU_LL_LLLOOO” or even “AAAUU_LL_LLLOOO”.
We have some steps we follow to clean up this output. First, we’ll replace any repeated characters with a single character:
Then we’ll remove any blanks:
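Both clean-up rules are easy to express in code. Here’s a tiny sketch, using “_” as the blank character:

```python
# Collapse repeated characters, then strip the "_" blanks.
from itertools import groupby

def clean_up(raw_prediction):
    collapsed = "".join(char for char, _ in groupby(raw_prediction))  # squash repeats
    return collapsed.replace("_", "")                                 # drop blanks

print(clean_up("HHHEE_LL_LLLOOO"))   # -> "HELLO"
```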
That leaves us with three possible transcriptions — “Hello”, “Hullo” and “Aullo”. If you say them out loud, all of these sound similar to “Hello”. Because it’s predicting one character at a time, the neural network will come up with these very sounded-out transcriptions. For example if you say “He would not go”, it might give one possible transcription as “He wud net go”.
The trick is to combine these pronunciation-based predictions with likelihood scores based on large database of written text (books, news articles, etc). You throw out transcriptions that seem the least likely to be real and keep the transcription that seems the most realistic.
Of our possible transcriptions “Hello”, “Hullo” and “Aullo”, obviously “Hello” will appear more frequently in a database of text (not to mention in our original audio-based training data) and thus is probably correct. So we’ll pick “Hello” as our final transcription instead of the others. Done!
You might be thinking “But what if someone says ‘Hullo’? It’s a valid word. Maybe ‘Hello’ is the wrong transcription!”
Of course it is possible that someone actually said “Hullo” instead of “Hello”. But a speech recognition system like this (trained on American English) will basically never produce “Hullo” as the transcription. It’s just such an unlikely thing for a user to say compared to “Hello” that it will always think you are saying “Hello” no matter how much you emphasize the ‘U’ sound.
Try it out! If your phone is set to American English, try to get your phone’s digital assistant to recognize the word “Hullo.” You can’t! It refuses! It will always understand it as “Hello.”
Not recognizing “Hullo” is a reasonable behavior, but sometimes you’ll find annoying cases where your phone just refuses to understand something valid you are saying. That’s why these speech recognition models are always being retrained with more data to fix these edge cases.
One of the coolest things about machine learning is how simple it sometimes seems. You get a bunch of data, feed it into a machine learning algorithm, and then magically you have a world-class AI system running on your gaming laptop’s video card... Right?
That’s sort of true in some cases, but not for speech. Recognizing speech is a hard problem. You have to overcome almost limitless challenges: bad quality microphones, background noise, reverb and echo, accent variations, and on and on. All of these issues need to be present in your training data to make sure the neural network can deal with them.
Here’s another example: Did you know that when you speak in a loud room you unconsciously raise the pitch of your voice to be able to talk over the noise? Humans have no problem understanding you either way, but neural networks need to be trained to handle this special case. So you need training data with people yelling over noise!
To build a voice recognition system that performs on the level of Siri, Google Now!, or Alexa, you will need a lot of training data — far more data than you can likely get without hiring hundreds of people to record it for you. And since users have low tolerance for poor quality voice recognition systems, you can’t skimp on this. No one wants a voice recognition system that works 80% of the time.
For a company like Google or Amazon, hundreds of thousands of hours of spoken audio recorded in real-life situations is gold. That’s the single biggest thing that separates their world-class speech recognition system from your hobby system. The whole point of putting Google Now! and Siri on every cell phone for free or selling $50 Alexa units that have no subscription fee is to get you to use them as much as possible. Every single thing you say into one of these systems is recorded forever and used as training data for future versions of speech recognition algorithms. That’s the whole game!
Don’t believe me? If you have an Android phone with Google Now!, click here to listen to actual recordings of yourself saying every dumb thing you’ve ever said into it:
So if you are looking for a start-up idea, I wouldn’t recommend trying to build your own speech recognition system to compete with Google. Instead, figure out a way to get people to give you recordings of themselves talking for hours. The data can be your product instead.
If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun! Part 7!
|
Adam Geitgey | 5.8K | 16 | https://medium.com/@ageitgey/machine-learning-is-fun-part-5-language-translation-with-deep-learning-and-the-magic-of-sequences-2ace0acca0aa?source=tag_archive---------6---------------- | Machine Learning is Fun Part 5: Language Translation with Deep Learning and the Magic of Sequences | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in 普通话, Русский, 한국어, Tiếng Việt or Italiano.
We all know and love Google Translate, the website that can instantly translate between 100 different human languages as if by magic. It is even available on our phones and smartwatches:
The technology behind Google Translate is called Machine Translation. It has changed the world by allowing people to communicate when it wouldn’t otherwise be possible.
But we all know that high school students have been using Google Translate to... umm... assist with their Spanish homework for 15 years. Isn’t this old news?
It turns out that over the past two years, deep learning has totally rewritten our approach to machine translation. Deep learning researchers who know almost nothing about language translation are throwing together relatively simple machine learning solutions that are beating the best expert-built language translation systems in the world.
The technology behind this breakthrough is called sequence-to-sequence learning. It’s a very powerful technique that can be used to solve many kinds of problems. After we see how it is used for translation, we’ll also learn how the exact same algorithm can be used to write AI chat bots and describe pictures.
Let’s go!
So how do we program a computer to translate human language?
The simplest approach is to replace every word in a sentence with the translated word in the target language. Here’s a simple example of translating from Spanish to English word-by-word:
This is easy to implement because all you need is a dictionary to look up each word’s translation. But the results are bad because it ignores grammar and context.
So the next thing you might do is start adding language-specific rules to improve the results. For example, you might translate common two-word phrases as a single group. And you might swap the order of nouns and adjectives since they usually appear in reverse order in Spanish from how they appear in English:
That worked! If we just keep adding more rules until we can handle every part of grammar, our program should be able to translate any sentence, right?
This is how the earliest machine translation systems worked. Linguists came up with complicated rules and programmed them in one-by-one. Some of the smartest linguists in the world labored for years during the Cold War to create translation systems as a way to interpret Russian communications more easily.
Unfortunately this only worked for simple, plainly-structured documents like weather reports. It didn’t work reliably for real-world documents.
The problem is that human language doesn’t follow a fixed set of rules. Human languages are full of special cases, regional variations, and just flat out rule-breaking. The way we speak English is more influenced by who invaded who hundreds of years ago than it is by someone sitting down and defining grammar rules.
After the failure of rule-based systems, new translation approaches were developed using models based on probability and statistics instead of grammar rules.
Building a statistics-based translation system requires lots of training data where the exact same text is translated into at least two languages. This double-translated text is called parallel corpora. In the same way that the Rosetta Stone was used by scientists in the 1800s to figure out Egyptian hieroglyphs from Greek, computers can use parallel corpora to guess how to convert text from one language to another.
Luckily, there’s lots of double-translated text already sitting around in strange places. For example, the European Parliament translates their proceedings into 21 languages. So researchers often use that data to help build translation systems.
The fundamental difference with statistical translation systems is that they don’t try to generate one exact translation. Instead, they generate thousands of possible translations and then they rank those translations by how likely each is to be correct. They estimate how “correct” something is by how similar it is to the training data. Here’s how it works:
First, we break up our sentence into simple chunks that can each be easily translated:
Next, we will translate each of these chunks by finding all the ways humans have translated those same chunks of words in our training data.
It’s important to note that we are not just looking up these chunks in a simple translation dictionary. Instead, we are seeing how actual people translated these same chunks of words in real-world sentences. This helps us capture all of the different ways they can be used in different contexts:
Some of these possible translations are used more frequently than others. Based on how frequently each translation appears in our training data, we can give it a score.
For example, it’s much more common for someone to say “Quiero” to mean “I want” than to mean “I try.” So we can use how frequently “Quiero” was translated to “I want” in our training data to give that translation more weight than a less frequent translation.
Next, we will use every possible combination of these chunks to generate a bunch of possible sentences.
Just from the chunk translations we listed in Step 2, we can already generate nearly 2,500 different variations of our sentence by combining the chunks in different ways. Here are some examples:
But in a real-world system, there will be even more possible chunk combinations because we’ll also try different orderings of words and different ways of chunking the sentence:
Now we need to scan through all of these generated sentences to find the one that sounds the “most human.”
To do this, we compare each generated sentence to millions of real sentences from books and news stories written in English. The more English text we can get our hands on, the better.
Take this possible translation:
It’s likely that no one has ever written a sentence like this in English, so it would not be very similar to any sentences in our data set. We’ll give this possible translation a low probability score.
But look at this possible translation:
This sentence will be similar to something in our training set, so it will get a high probability score.
After trying all possible sentences, we’ll pick the sentence that has the most likely chunk translations while also being the most similar overall to real English sentences.
Our final translation would be “I want to go to the prettiest beach.” Not bad!
Statistical machine translation systems perform much better than rule-based systems if you give them enough training data. Franz Josef Och improved on these ideas and used them to build Google Translate in the early 2000s. Machine Translation was finally available to the world.
In the early days, it was surprising to everyone that the “dumb” approach to translating based on probability worked better than rule-based systems designed by linguists. This led to a (somewhat mean) saying among researchers in the 80s:
Statistical machine translation systems work well, but they are complicated to build and maintain. Every new pair of languages you want to translate requires experts to tweak and tune a new multi-step translation pipeline.
Because it is so much work to build these different pipelines, trade-offs have to be made. If you are asking Google to translate Georgian to Telugu, it has to internally translate it into English as an intermediate step because there aren’t enough Georgian-to-Telugu translations happening to justify investing heavily in that language pair. And it might do that translation using a less advanced translation pipeline than if you had asked it for the more common choice of French-to-English.
Wouldn’t it be cool if we could have the computer do all that annoying development work for us?
The holy grail of machine translation is a black box system that learns how to translate by itself— just by looking at training data. With Statistical Machine Translation, humans are still needed to build and tweak the multi-step statistical models.
In 2014, KyungHyun Cho’s team made a breakthrough. They found a way to apply deep learning to build this black box system. Their deep learning model takes in parallel corpora and uses them to learn how to translate between those two languages without any human intervention.
Two big ideas make this possible — recurrent neural networks and encodings. By combining these two ideas in a clever way, we can build a self-learning translation system.
We’ve already talked about recurrent neural networks in Part 2, but let’s quickly review.
A regular (non-recurrent) neural network is a generic machine learning algorithm that takes in a list of numbers and calculates a result (based on previous training). Neural networks can be used as a black box to solve lots of problems. For example, we can use a neural network to calculate the approximate value of a house based on attributes of that house:
But like most machine learning algorithms, neural networks are stateless. You pass in a list of numbers and the neural network calculates a result. If you pass in those same numbers again, it will always calculate the same result. It has no memory of past calculations. In other words, 2 + 2 always equals 4.
A recurrent neural network (or RNN for short) is a slightly tweaked version of a neural network where the previous state of the neural network is one of the inputs to the next calculation. This means that previous calculations change the results of future calculations!
Why in the world would we want to do this? Shouldn’t 2 + 2 always equal 4 no matter what we last calculated?
This trick allows neural networks to learn patterns in a sequence of data. For example, you can use it to predict the next most likely word in a sentence based on the first few words:
RNNs are useful any time you want to learn patterns in data. Because human language is just one big, complicated pattern, RNNs are increasingly used in many areas of natural language processing.
If you want to learn more about RNNs, you can read Part 2 where we used one to generate a fake Ernest Hemingway book and then used another one to generate fake Super Mario Brothers levels.
The other idea we need to review is Encodings. We talked about encodings in Part 4 as part of face recognition. To explain encodings, let’s take a slight detour into how we can tell two different people apart with a computer.
When you are trying to tell two faces apart with a computer, you collect different measurements from each face and use those measurements to compare faces. For example, we might measure the size of each ear or the spacing between the eyes and compare those measurements from two pictures to see if they are the same person.
You’re probably already familiar with this idea from watching any primetime detective show like CSI:
The idea of turning a face into a list of measurements is an example of an encoding. We are taking raw data (a picture of a face) and turning it into a list of measurements that represent it (the encoding).
But like we saw in Part 4, we don’t have to come up with a specific list of facial features to measure ourselves. Instead, we can use a neural network to generate measurements from a face. The computer can do a better job than us in figuring out which measurements are best able to differentiate two similar people:
This is our encoding. It lets us represent something very complicated (a picture of a face) with something simple (128 numbers). Now comparing two different faces is much easier because we only have to compare these 128 numbers for each face instead of comparing full images.
Guess what? We can do the same thing with sentences! We can come up with an encoding that represents every possible different sentence as a series of unique numbers:
To generate this encoding, we’ll feed the sentence into the RNN, one word at a time. The final result after the last word is processed will be the values that represent the entire sentence:
Great, so now we have a way to represent an entire sentence as a set of unique numbers! We don’t know what each number in the encoding means, but it doesn’t really matter. As long as each sentence is uniquely identified by its own set of numbers, we don’t need to know exactly how those numbers were generated.
Ok, so we know how to use an RNN to encode a sentence into a set of unique numbers. How does that help us? Here’s where things get really cool!
What if we took two RNNs and hooked them up end-to-end? The first RNN could generate the encoding that represents a sentence. Then the second RNN could take that encoding and just do the same logic in reverse to decode the original sentence again:
Of course being able to encode and then decode the original sentence again isn’t very useful. But what if (and here’s the big idea!) we could train the second RNN to decode the sentence into Spanish instead of English? We could use our parallel corpora training data to train it to do that:
And just like that, we have a generic way of converting a sequence of English words into an equivalent sequence of Spanish words!
This is a powerful idea:
Note that we glossed over some things that are required to make this work with real-world data. For example, there’s additional work you have to do to deal with different lengths of input and output sentences (see bucketing and padding). There’s also issues with translating rare words correctly.
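To make the two-RNN shape concrete, here’s a heavily simplified sketch in Keras. The vocabulary sizes and dimensions are arbitrary placeholders, and a real system would add attention, the bucketing/padding mentioned above, and a proper decoding loop.

```python
# Heavily simplified encoder-decoder sketch. Sizes are placeholders.
from tensorflow import keras
from tensorflow.keras import layers

english_vocab, spanish_vocab, hidden_size = 10000, 10000, 256

# RNN #1: read the English sentence and boil it down to an encoding
encoder_inputs = keras.Input(shape=(None,))
x = layers.Embedding(english_vocab, hidden_size)(encoder_inputs)
_, state_h, state_c = layers.LSTM(hidden_size, return_state=True)(x)
encoding = [state_h, state_c]

# RNN #2: start from that encoding and unroll a Spanish sentence
decoder_inputs = keras.Input(shape=(None,))
y = layers.Embedding(spanish_vocab, hidden_size)(decoder_inputs)
y = layers.LSTM(hidden_size, return_sequences=True)(y, initial_state=encoding)
outputs = layers.Dense(spanish_vocab, activation="softmax")(y)

model = keras.Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```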
If you want to build your own language translation system, there’s a working demo included with TensorFlow that will translate between English and French. However, this is not for the faint of heart or for those with limited budgets. This technology is still new and very resource intensive. Even if you have a fast computer with a high-end video card, it might take about a month of continuous processing time to train your own language translation system.
Also, sequence-to-sequence language translation techniques are improving so rapidly that it’s hard to keep up. Many recent improvements (like adding an attention mechanism or tracking context) are significantly improving results but these developments are so new that there aren’t even Wikipedia pages for them yet. If you want to do anything serious with sequence-to-sequence learning, you’ll need to keep up with new developments as they occur.
So what else can we do with sequence-to-sequence models?
About a year ago, researchers at Google showed that you can use sequence-to-sequence models to build AI bots. The idea is so simple that it’s amazing it works at all.
First, they captured chat logs between Google employees and Google’s Tech Support team. Then they trained a sequence-to-sequence model where the employee’s question was the input sentence and the Tech Support team’s response was the “translation” of that sentence.
When a user interacted with the bot, they would “translate” each of the user’s messages with this system to get the bot’s response.
The end result was a semi-intelligent bot that could (sometimes) answer real tech support questions. Here’s part of a sample conversation between a user and the bot from their paper:
They also tried building a chat bot based on millions of movie subtitles. The idea was to use conversations between movie characters as a way to train a bot to talk like a human. The input sentence is a line of dialog said by one character and the “translation” is what the next character said in response:
This produced really interesting results. Not only did the bot converse like a human, but it displayed a small bit of intelligence:
This is only the beginning of the possibilities. We aren’t limited to converting one sentence into another sentence. It’s also possible to make an image-to-sequence model that can turn an image into text!
A different team at Google did this by replacing the first RNN with a Convolutional Neural Network (like we learned about in Part 3). This allows the input to be a picture instead of a sentence. The rest works basically the same way:
And just like that, we can turn pictures into words (as long as we have lots and lots of training data)!
Andrej Karpathy expanded on these ideas to build a system capable of describing images in great detail by processing multiple regions of an image separately:
This makes it possible to build image search engines that are capable of finding images that match oddly specific search queries:
There are even researchers working on the reverse problem, generating an entire picture based on just a text description!
Just from these examples, you can start to imagine the possibilities. So far, there have been sequence-to-sequence applications in everything from speech recognition to computer vision. I bet there will be a lot more over the next year.
If you want to learn more in depth about sequence-to-sequence models and translation, here’s some recommended resources:
If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this.
You can also follow me on Twitter at @ageitgey, email me directly or find me on LinkedIn. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun! Part 6!
Interested in computers and machine learning. Likes to write about it.
|
Tal Perry | 2.6K | 17 | https://medium.com/@TalPerry/deep-learning-the-stock-market-df853d139e02?source=tag_archive---------7---------------- | Deep Learning the Stock Market – Tal Perry – Medium | Update 25.1.17 — Took me a while but here is an ipython notebook with a rough implementation
In the past few months I’ve been fascinated with “Deep Learning”, especially its applications to language and text. I’ve spent the bulk of my career in financial technologies, mostly in algorithmic trading and alternative data services. You can see where this is going.
I wrote this to get my ideas straight in my head. While I’ve become a “Deep Learning” enthusiast, I don’t have too many opportunities to brain dump an idea in most of its messy glory. I think that a decent indication of a clear thought is the ability to articulate it to people not from the field. I hope that I’ve succeeded in doing that and that my articulation is also a pleasurable read.
Why NLP is relevant to Stock prediction
In many NLP problems we end up taking a sequence and encoding it into a single fixed size representation, then decoding that representation into another sequence. For example, we might tag entities in the text, translate from English to French or convert audio frequencies to text. There is a torrent of work coming out in these areas and a lot of the results are achieving state of the art performance.
In my mind the biggest difference between NLP and financial analysis is that language has some guarantee of structure, it’s just that the rules of the structure are vague. Markets, on the other hand, don’t come with a promise of a learnable structure; that such a structure exists is the assumption this project would prove or disprove (or rather, it might prove or disprove whether I can find that structure).
Assuming the structure is there, the idea of summarizing the current state of the market in the same way we encode the semantics of a paragraph seems plausible to me. If that doesn’t make sense yet, keep reading. It will.
You shall know a word by the company it keeps (Firth, J. R. 1957:11)
There is tons of literature on word embeddings. Richard Socher’s lecture is a great place to start. In short, we can make a geometry of all the words in our language, and that geometry captures the meaning of words and relationships between them. You may have seen the example of “King-man +woman=Queen” or something of the sort.
Embeddings are cool because they let us represent information in a condensed way. The old way of representing words was holding a vector (a big list of numbers) that was as long as the number of words we know, and setting a 1 in a particular place if that was the current word we are looking at. That is not an efficient approach, nor does it capture any meaning. With embeddings, we can represent all of the words in a fixed number of dimensions (300 seems to be plenty, 50 works great) and then leverage their higher dimensional geometry to understand them.
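A quick numpy sketch of the difference between the old one-hot representation and a dense embedding. The vocabulary size, the word index, and the random embedding matrix are illustrative stand-ins; in practice the embedding matrix would come from training something like word2vec on a large corpus.

```python
import numpy as np

vocab_size = 50000

# The old way: one huge, mostly-empty vector per word, with a single 1 in it.
one_hot_cat = np.zeros(vocab_size)
one_hot_cat[4242] = 1.0          # "cat" lives at some arbitrary index, here 4242

# The embedding way: every word gets a short, dense vector. Here the matrix is
# random just to show the shapes; in practice it would be learned from text.
embedding_matrix = np.random.randn(vocab_size, 300)
cat_vector = embedding_matrix[4242]          # 300 numbers instead of 50000

# The geometry is what makes embeddings useful: nearby vectors mean related words.
def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```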
The picture below shows an example. An embedding was trained on more or less the entire internet. After a few days of intensive calculations, each word was embedded in some high dimensional space. This “space” has a geometry, concepts like distance, and so we can ask which words are close together. The authors/inventors of that method made an example. Here are the words that are closest to Frog.
But we can embed more than just words. We can do, say, stock market embeddings.
Market2Vec
The first word embedding algorithm I heard about was word2vec. I want to get the same effect for the market, though I’ll be using a different algorithm. My input data is a csv, the first column is the date, and there are 4*1000 columns corresponding to the High, Low, Open and Closing prices of 1000 stocks. That is, my input vector is 4000-dimensional, which is too big. So the first thing I’m going to do is stuff it into a lower dimensional space, say 300 because I liked the movie.
Taking something in 4000 dimensions and stuffing it into a 300-dimensional space may sound hard but it’s actually easy. We just need to multiply matrices. A matrix is a big excel spreadsheet that has numbers in every cell and no formatting problems. Imagine an excel table with 4000 columns and 300 rows, and when we basically bang it against the vector a new vector comes out that is only of size 300. I wish that’s how they would have explained it in college.
The fanciness starts here as we’re going to set the numbers in our matrix at random, and part of the “deep learning” is to update those numbers so that our excel spreadsheet changes. Eventually this matrix spreadsheet (I’ll stick with matrix from now on) will have numbers in it that bang our original 4000 dimensional vector into a concise 300 dimensional summary of itself.
We’re going to get a little fancier here and apply what they call an activation function. We’re going to take a function, and apply it to each number in the vector individually so that they all end up between 0 and 1 (or 0 and infinity, it depends). Why? It makes our vector more special, and makes our learning process able to understand more complicated things. How?
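A minimal numpy sketch of that projection plus activation, using the sizes from this article. The random weights and the choice of a sigmoid are illustrative assumptions; the whole point is that training will later tune those numbers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

market_day = np.random.randn(4000)        # High/Low/Open/Close for 1000 stocks
W = np.random.randn(300, 4000) * 0.01     # the big "excel spreadsheet", initialized at random

market_vector = sigmoid(W @ market_day)   # 300 numbers, each squashed between 0 and 1
print(market_vector.shape)                # (300,)
```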
So what? What I’m expecting to find is that this new embedding of the market prices (the vector) into a smaller space captures all the essential information for the task at hand, without wasting time on the other stuff. So I’d expect it to capture correlations between stocks, perhaps notice when a certain sector is declining or when the market is very hot. I don’t know what traits it will find, but I assume they’ll be useful.
Now What
Let’s put aside our market vectors for a moment and talk about language models. Andrej Karpathy wrote the epic post “The Unreasonable Effectiveness of Recurrent Neural Networks”. If I were to summarize it in the most liberal fashion, the post boils down to
And then as a punchline, he generated a bunch of text that looks like Shakespeare. And then he did it again with the Linux source code. And then again with a textbook on Algebraic geometry.
So I’ll get back to the mechanics of that magic box in a second, but let me remind you that we want to predict the future market based on the past just like he predicted the next word based on the previous one. Where Karpathy used characters, we’re going to use our market vectors and feed them into the magic black box. We haven’t decided what we want it to predict yet, but that is okay, we won’t be feeding its output back into it either.
Going deeper
I want to point out that this is where we start to get into the deep part of deep learning. So far we just have a single layer of learning, that excel spreadsheet that condenses the market. Now we’re going to add a few more layers and stack them, to make a “deep” something. That’s the deep in deep learning.
So Karpathy shows us some sample output from the Linux source code, this is stuff his black box wrote.
Notice that it knows how to open and close parentheses, and respects indentation conventions; the contents of the function are properly indented and the multi-line printk statement has an inner indentation. That means that this magic box understands long range dependencies. When it’s indenting within the print statement it knows it’s in a print statement and also remembers that it’s in a function (or at least another indented scope). That’s nuts. It’s easy to gloss over that, but an algorithm that has the ability to capture and remember long term dependencies is super useful because... We want to find long term dependencies in the market.
Inside the magical black box
What’s inside this magical black box? It is a type of Recurrent Neural Network (RNN) called an LSTM. An RNN is a deep learning algorithm that operates on sequences (like sequences of characters). At every step, it takes a representation of the next character (like the embeddings we talked about before) and operates on the representation with a matrix, like we saw before. The thing is, the RNN has some form of internal memory, so it remembers what it saw previously. It uses that memory to decide how exactly it should operate on the next input. Using that memory, the RNN can “remember” that it is inside of an indented scope and that is how we get properly nested output text.
A fancy version of an RNN is called a Long Short Term Memory (LSTM). LSTM has cleverly designed memory that allows it to
So an LSTM can see a “{“ and say to itself “Oh yeah, that’s important I should remember that” and when it does, it essentially remembers an indication that it is in a nested scope. Once it sees the corresponding “}” it can decide to forget the original opening brace and thus forget that it is in a nested scope.
We can have the LSTM learn more abstract concepts by stacking a few of them on top of each other, which would make us “Deep” again. Now each output of the previous LSTM becomes the input of the next LSTM, and each one goes on to learn higher abstractions of the data coming in. In the example above (and this is just illustrative speculation), the first layer of LSTMs might learn that characters separated by a space are “words”. The next layer might learn word types like (static void action_new_function). The next layer might learn the concept of a function and its arguments and so on. It’s hard to tell exactly what each layer is doing, though Karpathy’s blog has a really nice example of how he did visualize exactly that.
Connecting Market2Vec and LSTMs
The studious reader will notice that Karpathy used characters as his inputs, not embeddings (Technically a one-hot encoding of characters). But, Lars Eidnes actually used word embeddings when he wrote Auto-Generating Clickbait With Recurrent Neural Network
The figure above shows the network he used. Ignore the SoftMax part (we’ll get to it later). For the moment, check out how he puts a sequence of word vectors in at the bottom. (Remember, a “word vector” is a representation of a word in the form of a bunch of numbers, like we saw in the beginning of this post.) Lars inputs a sequence of word vectors, and each one of them:
We’re going to do the same thing with one difference, instead of word vectors we’ll input “MarketVectors”, those market vectors we described before. To recap, the MarketVectors should contain a summary of what’s happening in the market at a given point in time. By putting a sequence of them through LSTMs I hope to capture the long term dynamics that have been happening in the market. By stacking together a few layers of LSTMs I hope to capture higher level abstractions of the market’s behavior.
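As a sketch of where this is heading, here is a minimal PyTorch model that pushes a sequence of MarketVectors through a stack of two LSTMs and (jumping slightly ahead to the output discussed below) squashes the final state into a single probability. The layer sizes, the number of layers, and feeding the MarketVectors in directly are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# A stack of two LSTMs reading a sequence of 300-dimensional MarketVectors,
# ending in a single probability for the event we care about.
class MarketModel(nn.Module):
    def __init__(self, market_dim=300, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(market_dim, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, market_vectors):                  # (batch, time, 300)
        out, _ = self.lstm(market_vectors)
        return torch.sigmoid(self.head(out[:, -1]))     # probability for the last time step

model = MarketModel()
fake_history = torch.randn(1, 500, 300)   # 500 time steps of made-up MarketVectors
print(model(fake_history))                # something like tensor([[0.5012]])
```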
What Comes out
Thus far we haven’t talked at all about how the algorithm actually learns anything, we just talked about all the clever transformations we’ll do on the data. We’ll defer that conversation to a few paragraphs down, but please keep this part in mind as it is the set-up for the punch line that makes everything else worthwhile.
In Karpathy’s example, the output of the LSTMs is a vector that represents the next character in some abstract representation. In Eidnes’ example, the output of the LSTMs is a vector that represents what the next word will be in some abstract space. The next step in both cases is to change that abstract representation into a probability vector, that is, a list that says how likely each character or word, respectively, is to appear next. That’s the job of the SoftMax function. Once we have a list of likelihoods we select the character or word that is the most likely to appear next.
In our case of “predicting the market”, we need to ask ourselves what exactly we want the model to predict about the market. Some of the options that I thought about were:
1 and 2 are regression problems, where we have to predict an actual number instead of the likelihood of a specific event (like the letter n appearing or the market going up). Those are fine but not what I want to do.
3 and 4 are fairly similar, they both ask to predict an event (In technical jargon — a class label). An event could be the letter n appearing next or it could be Moved up 5% while not going down more than 3% in the last 10 minutes. The trade-off between 3 and 4 is that 3 is much more common and thus easier to learn about while 4 is more valuable as not only is it an indicator of profit but also has some constraint on risk.
5 is the one we’ll continue with for this article because it’s similar to 3 and 4 but has mechanics that are easier to follow. The VIX is sometimes called the Fear Index and it represents how volatile the stocks in the S&P500 are. It is derived by observing the implied volatility for specific options on each of the stocks in the index.
Sidenote — Why predict the VIX
What makes the VIX an interesting target is that
Back to our LSTM outputs and the SoftMax
How do we use the formulations we saw before to predict changes in the VIX a few minutes in the future? For each point in our dataset, we’ll look what happened to the VIX 5 minutes later. If it went up by more than 1% without going down more than 0.5% during that time we’ll output a 1, otherwise a 0. Then we’ll get a sequence that looks like:
We want to take the vector that our LSTMs output and squish it so that it gives us the probability of the next item in our sequence being a 1. The squishing happens in the SoftMax part of the diagram above. (Technically, since we only have 1 class now, we use a sigmoid ).
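Here is a small numpy sketch of how those 0 and 1 targets might be produced from a minute-by-minute VIX series. The thresholds and my interpretation of the rule above are assumptions, and the VIX series itself is made up.

```python
import numpy as np

def make_targets(vix, horizon=5, up=0.01, down=0.005):
    # 1 if the VIX rises more than 1% within `horizon` minutes without first
    # dipping more than 0.5% below the starting level, else 0.
    targets = []
    for t in range(len(vix) - horizon):
        window = vix[t + 1 : t + 1 + horizon]
        went_up = window.max() >= vix[t] * (1 + up)
        went_down = window.min() <= vix[t] * (1 - down)
        targets.append(1 if went_up and not went_down else 0)
    return np.array(targets)

fake_vix = 15.0 + np.cumsum(np.random.randn(1000) * 0.05)  # made-up minute-by-minute VIX
print(make_targets(fake_vix)[:20])                          # e.g. [0 0 1 0 0 ...]
```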
So before we get into how this thing learns, let’s recap what we’ve done so far
How does this thing learn?
Now the fun part. Everything we did until now was called the forward pass, we’d do all of those steps while we train the algorithm and also when we use it in production. Here we’ll talk about the backward pass, the part we do only while in training that makes our algorithm learn.
So during training, not only did we prepare years worth of historical data, we also prepared a sequence of prediction targets, that list of 0 and 1 that showed if the VIX moved the way we want it to or not after each observation in our data.
To learn, we’ll feed the market data to our network and compare its output to what we calculated. Comparing in our case will be simple subtraction, that is we’ll say that our model’s error is √((actual - predicted)²).
Or in English, the square root of the square of the difference between what actually happened and what we predicted.
Here’s the beauty. That’s a differentiable function, that is, we can tell by how much the error would have changed if our prediction would have changed a little. Our prediction is the outcome of a differentiable function, the SoftMax. The inputs to the SoftMax, the LSTMs, are all mathematical functions that are differentiable. Now all of these functions are full of parameters, those big excel spreadsheets I talked about ages ago. So at this stage what we do is take the derivative of the error with respect to every one of the millions of parameters in all of those excel spreadsheets we have in our model. When we do that we can see how the error will change when we change each parameter, so we’ll change each parameter in a way that will reduce the error.
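To see the mechanics at the smallest possible scale, here is a one-parameter toy of that update rule in plain Python. The single weight, the made-up observation, and the learning rate are illustrative; a real model does exactly this, just for millions of parameters at once.

```python
# One parameter, one observation: the smallest possible version of the backward pass.
w = 0.0                       # a single entry from one of those "excel spreadsheets"
x, actual = 2.0, 1.0          # one input and what actually happened
learning_rate = 0.1

for step in range(50):
    predicted = w * x                              # forward pass (a stand-in for the whole model)
    error = (actual - predicted) ** 2
    d_error_d_w = -2 * (actual - predicted) * x    # how the error changes if w changes a little
    w -= learning_rate * d_error_d_w               # nudge w in the direction that reduces the error

print(w)   # approaches 0.5, the value that drives the error to zero
```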
This procedure propagates all the way to the beginning of the model. It tweaks the way we embed the inputs into MarketVectors so that our MarketVectors represent the most significant information for our task.
It tweaks when and what each LSTM chooses to remember so that their outputs are the most relevant to our task.
It tweaks the abstractions our LSTMs learn so that they learn the most important abstractions for our task.
Which in my opinion is amazing because we have all of this complexity and abstraction that we never had to specify anywhere. It’s all inferred MathaMagically from the specification of what we consider to be an error.
What’s next
Now that I’ve laid this out in writing and it still makes sense to me I want
So, if you’ve come this far please point out my errors and share your inputs.
Other thoughts
Here are some mostly more advanced thoughts about this project, what other things I might try and why it makes sense to me that this may actually work.
Liquidity and efficient use of capital
Generally, the more liquid a particular market is, the more efficient it is. I think this is due to a chicken and egg cycle: as a market becomes more liquid, it is able to absorb more capital moving in and out without that capital hurting itself. As a market becomes more liquid and more capital can be used in it, you’ll find more sophisticated players moving in. This is because it is expensive to be sophisticated, so you need to make returns on a large chunk of capital in order to justify your operational costs.
A quick corollary is that in less liquid markets the competition isn’t quite as sophisticated, and so the opportunities a system like this can bring may not have been traded away. The point being, were I to try and trade this, I would try and trade it on less liquid segments of the market, that is maybe the TASE 100 instead of the S&P 500.
This stuff is new
The knowledge of these algorithms, the frameworks to execute them and the computing power to train them are all new at least in the sense that they are available to the average Joe such as myself. I’d assume that top players have figured this stuff out years ago and have had the capacity to execute for as long but, as I mention in the above paragraph, they are likely executing in liquid markets that can support their size. The next tier of market participants, I assume, have a slower velocity of technological assimilation and in that sense, there is or soon will be a race to execute on this in as yet untapped markets.
Multiple Time Frames
While I mentioned a single stream of inputs in the above, I imagine that a more efficient way to train would be to train market vectors (at least) on multiple time frames and feed them in at the inference stage. That is, my lowest time frame would be sampled every 30 seconds and I’d expect the network to learn dependencies that stretch hours at most.
I don’t know if they are relevant or not but I think there are patterns on multiple time frames and if the cost of computation can be brought low enough then it is worthwhile to incorporate them into the model. I’m still wrestling with how best to represent these on the computational graph and perhaps it is not mandatory to start with.
MarketVectors
When using word vectors in NLP we usually start with a pretrained model and continue adjusting the embeddings during training of our model. In my case, there are no pretrained market vectors available, nor is there a clear algorithm for training them.
My original consideration was to use an auto-encoder like in this paper but end to end training is cooler.
A more serious consideration is the success of sequence to sequence models in translation and speech recognition, where a sequence is eventually encoded as a single vector and then decoded into a different representation (Like from speech to text or from English to French). In that view, the entire architecture I described is essentially the encoder and I haven’t really laid out a decoder.
But, I want to achieve something specific with the first layer, the one that takes as input the 4000 dimensional vector and outputs a 300 dimensional one. I want it to find correlations or relations between various stocks and compose features about them.
The alternative is to run each input through an LSTM, perhaps concatenate all of the output vectors and consider that the output of the encoder stage. I think this will be inefficient as the interactions and correlations between instruments and their features will be lost, and there will be 10x more computation required. On the other hand, such an architecture could naively be parallelized across multiple GPUs and hosts, which is an advantage.
CNNs
Recently there has been a spate of papers on character level machine translation. This paper caught my eye as they manage to capture long range dependencies with a convolutional layer rather than an RNN. I haven’t given it more than a brief read, but I think that a modification where I’d treat each stock as a channel and convolve over channels first (like in RGB images) would be another way to capture the market dynamics, in the same way that they essentially encode semantic meaning from characters.
Founder of https://LightTag.io, platform to annotate text for NLP. Google developer expert in ML. I do deep learning on text for a living and for fun.
|
Andrej Karpathy | 9.2K | 7 | https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b?source=tag_archive---------8---------------- | Yes you should understand backprop – Andrej Karpathy – Medium | When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards:
This is seemingly a perfectly sensible appeal - if you’re never going to write backward passes once the class is over, why practice writing them? Are we just torturing the students for our own amusement? Some easy answers could make arguments along the lines of “it’s worth knowing what’s under the hood as an intellectual curiosity”, or perhaps “you might want to improve on the core algorithm later”, but there is a much stronger and practical argument, which I wanted to devote a whole post to:
> The problem with Backpropagation is that it is a leaky abstraction.
In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data. So lets look at a few explicit examples where this is not the case in quite unintuitive ways.
We’re starting off easy here. At one point it was fashionable to use sigmoid (or tanh) non-linearities in the fully connected layers. The tricky part people might not realize until they think about the backward pass is that if you are sloppy with the weight initialization or data preprocessing these non-linearities can “saturate” and entirely stop learning — your training loss will be flat and refuse to go down. For example, a fully connected layer with sigmoid non-linearity computes (using raw numpy):
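The snippet in question is along these lines; this is a reconstruction consistent with the z*(1-z) discussion that follows, with arbitrary layer sizes and the upstream gradient taken to be 1 for brevity:

```python
import numpy as np

W = np.random.randn(100, 300) * 0.01   # weights (sizes are arbitrary here)
x = np.random.randn(300)               # input to the layer

z = 1.0 / (1.0 + np.exp(-np.dot(W, x)))   # forward pass: sigmoid of the matrix multiply
dx = np.dot(W.T, z * (1 - z))             # backward pass: gradient flowing back to x
dW = np.outer(z * (1 - z), x)             # backward pass: gradient for W
```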
If your weight matrix W is initialized too large, the output of the matrix multiply could have a very large range (e.g. numbers between -400 and 400), which will make all outputs in the vector z almost binary: either 1 or 0. But if that is the case, z*(1-z), which is local gradient of the sigmoid non-linearity, will in both cases become zero (“vanish”), making the gradient for both x and W be zero. The rest of the backward pass will come out all zero from this point on due to multiplication in the chain rule.
Another non-obvious fun fact about sigmoid is that its local gradient (z*(1-z)) achieves a maximum at 0.25, when z = 0.5. That means that every time the gradient signal flows through a sigmoid gate, its magnitude always diminishes by one quarter (or more). If you’re using basic SGD, this would make the lower layers of a network train much slower than the higher ones.
TLDR: if you’re using sigmoids or tanh non-linearities in your network and you understand backpropagation you should always be nervous about making sure that the initialization doesn’t cause them to be fully saturated. See a longer explanation in this CS231n lecture video.
Another fun non-linearity is the ReLU, which thresholds neurons at zero from below. The forward and backward pass for a fully connected layer that uses ReLU would at the core include:
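A minimal reconstruction of that forward and backward pass, again with arbitrary sizes and the upstream gradient taken to be 1:

```python
import numpy as np

W = np.random.randn(100, 300) * 0.01
x = np.random.randn(300)

z = np.maximum(0, np.dot(W, x))   # forward pass: ReLU of the matrix multiply
dW = np.outer(z > 0, x)           # backward pass: rows for neurons with z == 0 are all zero
```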
If you stare at this for a while you’ll see that if a neuron gets clamped to zero in the forward pass (i.e. z=0, it doesn’t “fire”), then its weights will get zero gradient. This can lead to what is called the “dead ReLU” problem, where if a ReLU neuron is unfortunately initialized such that it never fires, or if a neuron’s weights ever get knocked off with a large update during training into this regime, then this neuron will remain permanently dead. It’s like permanent, irrecoverable brain damage. Sometimes you can forward the entire training set through a trained network and find that a large fraction (e.g. 40%) of your neurons were zero the entire time.
TLDR: If you understand backpropagation and your network has ReLUs, you’re always nervous about dead ReLUs. These are neurons that never turn on for any example in your entire training set, and will remain permanently dead. Neurons can also die during training, usually as a symptom of aggressive learning rates. See a longer explanation in CS231n lecture video.
Vanilla RNNs feature another good example of unintuitive effects of backpropagation. I’ll copy paste a slide from CS231n that has a simplified RNN that does not take any input x, and only computes the recurrence on the hidden state (equivalently, the input x could always be zero):
This RNN is unrolled for T time steps. When you stare at what the backward pass is doing, you’ll see that the gradient signal going backwards in time through all the hidden states is always being multiplied by the same matrix (the recurrence matrix Whh), interspersed with non-linearity backprop.
What happens when you take one number a and start multiplying it by some other number b (i.e. a*b*b*b*b*b*b...)? This sequence either goes to zero if |b| < 1, or explodes to infinity when |b|>1. The same thing happens in the backward pass of an RNN, except b is a matrix and not just a number, so we have to reason about its largest eigenvalue instead.
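You can watch this happen in a few lines of numpy. The 5x5 size and the two scale factors are arbitrary, chosen so that the largest eigenvalue of the recurrence matrix lands clearly below or above 1:

```python
import numpy as np

def repeated_multiply(scale, steps=50):
    rng = np.random.default_rng(0)
    Whh = rng.standard_normal((5, 5)) * scale   # stand-in for the recurrence matrix
    h = rng.standard_normal(5)
    for _ in range(steps):                      # the same multiply, 50 "time steps" in a row
        h = Whh @ h
    return np.linalg.norm(h)

print(repeated_multiply(0.2))   # shrinks toward zero (largest eigenvalue well below 1)
print(repeated_multiply(1.0))   # explodes to a huge number (largest eigenvalue well above 1)
```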
TLDR: If you understand backpropagation and you’re using RNNs you are nervous about having to do gradient clipping, or you prefer to use an LSTM. See a longer explanation in this CS231n lecture video.
Lets look at one more — the one that actually inspired this post. Yesterday I was browsing for a Deep Q Learning implementation in TensorFlow (to see how others deal with computing the numpy equivalent of Q[:, a], where a is an integer vector — turns out this trivial operation is not supported in TF). Anyway, I searched “dqn tensorflow”, clicked the first link, and found the core code. Here is an excerpt:
If you’re familiar with DQN, you can see that there is the target_q_t, which is just [reward + \gamma \max_a Q(s’,a)], and then there is q_acted, which is Q(s,a) of the action that was taken. The authors here subtract the two into variable delta, which they then want to minimize on line 295 with the L2 loss with tf.reduce_mean(tf.square()). So far so good.
The problem is on line 291. The authors are trying to be robust to outliers, so if the delta is too large, they clip it with tf.clip_by_value. This is well-intentioned and looks sensible from the perspective of the forward pass, but it introduces a major bug if you think about the backward pass.
The clip_by_value function has a local gradient of zero outside of the range min_delta to max_delta, so whenever the delta is above min/max_delta, the gradient becomes exactly zero during backprop. The authors are clipping the raw Q delta, when they are likely trying to clip the gradient for added robustness. In that case the correct thing to do is to use the Huber loss in place of tf.square:
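In current TensorFlow the fix looks roughly like this; the original repository used the older tf.select, tf.where is its modern equivalent, and the 1.0 threshold is the conventional Huber cut-off:

```python
import tensorflow as tf

def huber_loss(delta, threshold=1.0):
    # Quadratic near zero, linear in the tails, so large deltas get a constant
    # gradient instead of having the raw delta clipped in the forward pass.
    abs_delta = tf.abs(delta)
    return tf.where(abs_delta < threshold,
                    0.5 * tf.square(delta),
                    threshold * (abs_delta - 0.5 * threshold))
```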
It’s a bit gross in TensorFlow because all we want to do is clip the gradient if it is above a threshold, but since we can’t meddle with the gradients directly we have to do it in this round-about way of defining the Huber loss. In Torch this would be much more simple.
I submitted an issue on the DQN repo and this was promptly fixed.
Backpropagation is a leaky abstraction; it is a credit assignment scheme with non-trivial consequences. If you try to ignore how it works under the hood because “TensorFlow automagically makes my networks learn”, you will not be ready to wrestle with the dangers it presents, and you will be much less effective at building and debugging neural networks.
The good news is that backpropagation is not that difficult to understand, if presented properly. I have relatively strong feelings on this topic because it seems to me that 95% of backpropagation materials out there present it all wrong, filling pages with mechanical math. Instead, I would recommend the CS231n lecture on backprop which emphasizes intuition (yay for shameless self-advertising). And if you can spare the time, as a bonus, work through the CS231n assignments, which get you to write backprop manually and help you solidify your understanding.
That’s it for now! I hope you’ll be much more suspicious of backpropagation going forward and think carefully through what the backward pass is doing. Also, I’m aware that this post has (unintentionally!) turned into several CS231n ads. Apologies for that :)
Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets.
|
Per Harald Borgen | 4.8K | 7 | https://medium.com/learning-new-stuff/machine-learning-in-a-year-cdb0b0ebd29c?source=tag_archive---------9---------------- | Machine Learning in a Year – Learning New Stuff – Medium | This is a follow up to an article I wrote last year, Machine Learning in a Week, on how I kickstarted my way into machine learning (ml) by devoting five days to the subject.
After this highly effective introduction, I continued learning on my spare time and almost exactly one year later I did my first ml project at work, which involved using various ml and natural language processing (nlp) techniques to qualify sales leads at Xeneta.
This felt like a blessing: getting paid to do something I normally did for fun!
It also ripped me out of the delusion that only people with masters degrees or Ph.D’s work with ml professionally.
In this post I want to share my journey, as it might inspire others to do the same.
My interest in ml stems back to 2014 when I started reading articles about it on Hacker News. I simply found the idea of teaching machines stuff by looking at data appealing. At the time I wasn’t even a professional developer, but a hobby coder who’d done a couple of small projects.
So I began watching the first few chapters of Udacity’s Supervised Learning course, while also reading all articles I came across on the subject.
This gave me a little bit of conceptual understanding, though no practical skills. I also didn’t finish it, as I rarely do with MOOC’s.
In January 2015 I joined the Founders and Coders (FAC) bootcamp in London in order to become a developer. A few weeks in, I wanted to learn how to actually code machine learning algorithms, so I started a study group with a few of my peers. Every Tuesday evening, we’d watch lectures from Coursera’s Machine Learning course.
It’s a fantastic course, and I learned a hell of a lot. But it’s tough for a beginner. I had to watch the lectures over and over again before grasping the concepts. The Octave coding tasks are challenging as well, especially if you don’t know Octave. As a result of the difficulty, people fell off the study group one by one as the weeks passed. Eventually, I fell off it myself as well.
In hindsight, I should have started with a course that either used ml libraries for the coding tasks — as opposed to building the algorithms from scratch — or at least used a programming language I knew.
If I could go back in time, I’d choose Udacity’s Intro to Machine Learning, as it’s easier and uses Python and Scikit Learn. This way, we would have gotten our hands dirty as soon as possible, gained confidence, and had more fun.
One of the last things I did at FAC was the ml week stunt. My goal was to be able to apply machine learning to actual problems at the end of the week, which I managed to do.
Throughout the week I did the following:
It’s by far the steepest ml learning curve I’ve ever experienced. Go ahead and read the article if you want a more detailed overview.
After I finished FAC in London and moved back to Norway, I tried to repeat the success from the ml week, but for neural networks instead.
This failed.
There were simply too many distractions to spend 10 hours of coding and learning every day. I had underestimated how important it was to be surrounded by peers at FAC.
However, I got started with neural nets at least, and slowly started to grasp the concept. By July I managed to code my first net. It’s probably the crappiest implementation ever created, and I actually find it embarrassing to show off. But it did the trick; I proved to myself that I understood concepts like backpropagation and gradient descent.
In the second half of the year, my progression slowed down, as I started a new job. The most important takeaway from this period was the leap from non-vectorized to vectorized implementations of neural networks, which involved repeating linear algebra from university.
By the end of the year I wrote an article as a summary of how I learned this:
During the christmas vacation of 2015, I got a motivational boost again and decided try out Kaggle. So I spent quite some time experimenting with various algorithms for their Homesite Quote Conversion, Otto Group Product Classification and Bike Sharing Demand contests.
The main takeaway from this was the experience of iteratively improving the results by experimenting with the algorithms and the data.
I learned to trust my logic when doing machine learning.
If tweaking a parameter or engineering a new feature seems like a good idea logically, it’s quite likely that it actually will help.
Back at work in January 2016 I wanted to continue in the flow I’d gotten into during Christmas. So I asked my manager if I could spend some time learning stuff during my work hours as well, which he happily approved.
Having gotten a basic understanding of neural networks at this point, I wanted to move on to deep learning.
My first attempt was Udacity’s Deep Learning course, which ended up as a big disappointment. The contents of the video lectures are good, but they are too short and shallow to me.
And the IPython Notebook assignments ended up being too frustrating, as I spent most of my time debugging code errors, which is the most effective way to kill motivation. So after doing that for a couple of sessions at work, I simply gave up.
To their defense, I’m a total noob when it comes to IPython Notebooks, so it might not be as bad for you as it was for me. So it might be that I simply wasn’t ready for the course.
Luckily, I then discovered Stanford’s CS224D and decided to give it a shot. It is a fantastic course. And though it’s difficult, I never end up debugging when doing the problem sets.
Secondly, they actually give you the solution code as well, which I often look at when I’m stuck, so that I can work my way backwards to understand the steps needed to reach a solution.
Though I haven’t finished it yet, it has significantly boosted my knowledge in nlp and neural networks so far.
However it’s been tough. Really tough. At one point, I realized I needed help from someone better than me, so I came in touch with a Ph.D student who was willing to help me out for 40 USD per hour, both with the problem sets as well as the overall understanding. This has been critical for me in order to move on, as he has uncovered a lot of black holes in my knowledge.
In addition to this, Xeneta also hired a data scientist recently. He’s got a masters degree in math, so I often ask him for help when I’m stuck with various linear algebra and calculus tasks, or ml in general. So be sure to check out which resources you have internally in your company as well.
After doing all this, I finally felt ready to do a ml project at work. It basically involved training an algorithm to qualify sales leads by reading company descriptions, and has actually proven to be a big time saver for the sales guys using the tool.
Check out the article I wrote about it below or head over to GitHub to dive straight into the code.
Getting to this point has surely been a long journey. But also a fast one; when I started my machine learning in a week project, I certainly didn’t have any hopes of actually using it professionally within a year.
But it’s 100 percent possible. And if I can do it, so can anybody else.
Thanks for reading! My name is Per, I’m a co-founder of Scrimba — a better way to teach and learn code.
If you’ve read this far, I’d recommend you to check out this demo!
Co-founder of Scrimba, the next-generation platform for teaching and learning code. https://scrimba.com.
A publication about improving your technical skills.
|
Xiaohan Zeng | 48K | 13 | https://medium.com/@XiaohanZeng/i-interviewed-at-five-top-companies-in-silicon-valley-in-five-days-and-luckily-got-five-job-offers-25178cf74e0f?source=tag_archive---------0---------------- | I interviewed at five top companies in Silicon Valley in five days, and luckily got five job offers | In the five days from July 24th to 28th 2017, I interviewed at LinkedIn, Salesforce Einstein, Google, Airbnb, and Facebook, and got all five job offers.
It was a great experience, and I feel fortunate that my efforts paid off, so I decided to write something about it. I will discuss how I prepared, review the interview process, and share my impressions about the five companies.
I had been at Groupon for almost three years. It’s my first job, and I have been working with an amazing team and on awesome projects. We’ve been building cool stuff, making impact within the company, publishing papers and all that. But I felt my learning rate was being annealed (read: slowing down) yet my mind was craving more. Also, as a software engineer in Chicago, there are so many great companies in the Bay Area that attract me.
Life is short, and professional life shorter still. After talking with my wife and gaining her full support, I decided to take actions and make my first ever career change.
Although I’m interested in machine learning positions, the positions at the five companies are slightly different in the title and the interviewing process. Three are machine learning engineer (LinkedIn, Google, Facebook), one is data engineer (Salesforce), and one is software engineer in general (Airbnb). Therefore I needed to prepare for three different areas: coding, machine learning, and system design.
Since I also have a full time job, it took me 2–3 months in total to prepare. Here is how I prepared for the three areas.
While I agree that coding interviews might not be the best way to assess all your skills as a developer, there is arguably no better way to tell if you are a good engineer in a short period of time. IMO it is the necessary evil to get you that job.
I mainly used Leetcode and Geeksforgeeks for practicing, but Hackerrank and Lintcode are also good places. I spent several weeks going over common data structures and algorithms, then focused on areas I wasn’t too familiar with, and finally did some frequently seen problems. Due to my time constraints I usually did two problems per day.
Here are some thoughts:
This area is more closely related to the actual working experience. Many questions can be asked during system design interviews, including but not limited to system architecture, object oriented design, database schema design, distributed system design, scalability, etc.
There are many resources online that can help you with the preparation. For the most part I read articles on system design interviews, architectures of large-scale systems, and case studies.
Here are some resources that I found really helpful:
Although system design interviews can cover a lot of topics, there are some general guidelines for how to approach the problem:
With all that said, the best way to practice for system design interviews is to actually sit down and design a system, i.e. your day-to-day work. Instead of doing the minimal work, go deeper into the tools, frameworks, and libraries you use. For example, if you use HBase, rather than simply using the client to run some DDL and do some fetches, try to understand its overall architecture, such as the read/write flow, how HBase ensures strong consistency, what minor/major compactions do, and where LRU cache and Bloom Filter are used in the system. You can even compare HBase with Cassandra and see the similarities and differences in their design. Then when you are asked to design a distributed key-value store, you won’t feel ambushed.
Many blogs are also a great source of knowledge, such as Hacker Noon and engineering blogs of some companies, as well as the official documentation of open source projects.
The most important thing is to keep your curiosity and modesty. Be a sponge that absorbs everything it is submerged into.
Machine learning interviews can be divided into two aspects, theory and product design.
Unless you have experience in machine learning research or did really well in your ML course, it helps to read some textbooks. Classical ones such as the Elements of Statistical Learning and Pattern Recognition and Machine Learning are great choices, and if you are interested in specific areas you can read more on those.
Make sure you understand basic concepts such as bias-variance trade-off, overfitting, gradient descent, L1/L2 regularization, Bayes Theorem, bagging/boosting, collaborative filtering, dimension reduction, etc. Familiarize yourself with common formulas such as Bayes Theorem and the derivation of popular models such as logistic regression and SVM. Try to implement simple models such as decision trees and K-means clustering. If you put some models on your resume, make sure you understand them thoroughly and can comment on their pros and cons.
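If you follow the advice to implement simple models yourself, K-means in plain numpy is a nicely sized exercise. Here is a minimal sketch; the random initialization, fixed iteration count, and toy 2-D data are simplifications:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # start from k random points
    for _ in range(iters):
        # assign every point to its nearest center
        distances = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

X = np.vstack([np.random.randn(50, 2) + 4, np.random.randn(50, 2) - 4])  # two toy blobs
labels, centers = kmeans(X, k=2)
```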
For ML product design, understand the general process of building a ML product. Here’s what I tried to do:
Here I want to emphasize again the importance of remaining curious and learning continuously. Try not to merely use the API for Spark MLlib or XGBoost and call it done; try to understand why stochastic gradient descent is appropriate for distributed training, or understand how XGBoost differs from traditional GBDT, e.g. what is special about its loss function, why it needs to compute the second order derivative, etc.
I started by replying to HR’s messages on LinkedIn, and asking for referrals. After a failed attempt at a rock star startup (which I will touch upon later), I prepared hard for several months, and with help from my recruiters, I scheduled a full week of onsites in the Bay Area. I flew in on Sunday, had five full days of interviews with around 30 interviewers at some best tech companies in the world, and very luckily, got job offers from all five of them.
All phone screenings are standard. The only difference is in the duration: For some companies like LinkedIn it’s one hour, while for Facebook and Airbnb it’s 45 minutes.
Proficiency is the key here, since you are under the time gun and usually you only get one chance. You would have to very quickly recognize the type of problem and give a high-level solution. Be sure to talk to the interviewer about your thinking and intentions. It might slow you down a little at the beginning, but communication is more important than anything and it only helps with the interview. Do not recite the solution as the interviewer would almost certainly see through it.
For machine learning positions some companies would ask ML questions. If you are interviewing for those make sure you brush up your ML skills as well.
To make better use of my time, I scheduled three phone screenings in the same afternoon, one hour apart from each. The upside is that you might benefit from the hot hand and the downside is that the later ones might be affected if the first one does not go well, so I don’t recommend it for everyone.
One good thing about interviewing with multiple companies at the same time is that it gives you certain advantages. I was able to skip the second round phone screening with Airbnb and Salesforce because I got the onsite at LinkedIn and Facebook after only one phone screening.
More surprisingly, Google even let me skip their phone screening entirely and schedule my onsite to fill the vacancy after learning I had four onsites coming in the next week. I knew it was going to make it extremely tiring, but hey, nobody can refuse a Google onsite invitation!
LinkedIn
This is my first onsite and I interviewed at the Sunnyvale location. The office is very neat and people look very professional, as always.
The sessions are one hour each. Coding questions are standard, but the ML questions can get a bit tough. That said, I got an email from my HR containing the preparation material which was very helpful, and in the end I did not see anything that was too surprising. I heard the rumor that LinkedIn has the best meals in the Silicon Valley, and from what I saw if it’s not true, it’s not too far from the truth.
Acquisition by Microsoft seems to have lifted the financial burden from LinkedIn, and freed them up to do really cool things. New features such as videos and professional advertisements are exciting. As a company focusing on professional development, LinkedIn prioritizes the growth of its own employees. A lot of teams such as ads relevance and feed ranking are expanding, so act quickly if you want to join.
Salesforce Einstein
Rock star project by rock star team. The team is pretty new and feels very much like a startup. The product is built on the Scala stack, so type safety is a real thing there! Great talks on the Optimus Prime library by Matthew Tovbin at Scala Days Chicago 2017 and Leah McGuire at Spark Summit West 2017.
I interviewed at their Palo Alto office. The team has a cohesive culture and work life balance is great there. Everybody is passionate about what they are doing and really enjoys it. With four sessions it is shorter compared to the other onsite interviews, but I wish I could have stayed longer. After the interview Matthew even took me for a walk to the HP garage :)
Google
Absolutely the industry leader, and nothing to say about it that people don’t already know. But it’s huge. Like, really, really HUGE. It took me 20 minutes to ride a bicycle to meet my friends there. Also lines for food can be too long. Forever a great place for developers.
I interviewed at one of the many buildings on the Mountain View campus, and I don’t know which one it is because it’s HUGE.
My interviewers all look very smart, and once they start talking they are even smarter. It would be very enjoyable to work with these people.
One thing that I felt special about Google’s interviews is that the analysis of algorithm complexity is really important. Make sure you really understand what Big O notation means!
Airbnb
Fast expanding unicorn with a unique culture and arguably the most beautiful office in the Silicon Valley. New products such as Experiences and restaurant reservation, high end niche market, and expansion into China all contribute to a positive prospect. Perfect choice if you are risk tolerant and want a fast growing, pre-IPO experience.
Airbnb’s coding interview is a bit unique because you’ll be coding in an IDE instead of whiteboarding, so your code needs to compile and give the right answer. Some problems can get really hard.
And they’ve got the one-of-a-kind cross functional interviews. This is how Airbnb takes culture seriously, and being technically excellent doesn’t guarantee a job offer. For me the two cross functionals were really enjoyable. I had casual conversations with the interviewers and we all felt happy at the end of the session.
Overall I think Airbnb’s onsite is the hardest due to the difficulty of the problems, longer duration, and unique cross-functional interviews. If you are interested, be sure to understand their culture and core values.
Facebook
Another giant that is still growing fast, and smaller and faster-paced compared to Google. With its product lines dominating the social network market and big investments in AI and VR, I can only see more growth potential for Facebook in the future. With stars like Yann LeCun and Yangqing Jia, it’s the perfect place if you are interested in machine learning.
I interviewed at Building 20, the one with the rooftop garden and ocean view and also where Zuckerberg’s office is located.
I’m not sure if the interviewers got instructions, but I didn’t get clear signs whether my solutions were correct, although I believed they were.
By noon, the prior four days had started to take their toll, and I was having a headache. I persisted through the afternoon sessions but felt I didn’t do well at all. I was a bit surprised to learn that I was getting an offer from them as well.
Generally I felt people there believe the company’s vision and are proud of what they are building. Being a company with half a trillion market cap and growing, Facebook is a perfect place to grow your career at.
This is a big topic that I won’t cover in this post, but I found this article to be very helpful.
Some things that I do think are important:
All successes start with failures, including interviews. Before I started interviewing for these companies, I failed my interview at Databricks in May.
Back in April, Xiangrui contacted me via LinkedIn asking me if I was interested in a position on the Spark MLlib team. I was extremely thrilled because 1) I use Spark and love Scala, 2) Databricks engineers are top-notch, and 3) Spark is revolutionizing the whole big data world. It is an opportunity I couldn’t miss, so I started interviewing after a few days.
The bar is very high and the process is quite long, including one pre-screening questionnaire, one phone screening, one coding assignment, and one full onsite.
I managed to get the onsite invitation, and visited their office in downtown San Francisco, where Treasure Island can be seen.
My interviewers were incredibly intelligent yet equally modest. During the interviews I often felt pushed to my limits. It was fine until one disastrous session, where I totally messed up due to insufficient skills and preparation, and it ended up a fiasco. Xiangrui was very kind and walked me to where I wanted to go after the interview was over, and I really enjoyed talking to him.
I got the rejection several days later. It was expected but I felt frustrated for a few days nonetheless. Although I missed the opportunity to work there, I wholeheartedly wish they will continue to make greater impact and achievements.
From the first interview in May to finally accepting the job offer in late September, my first career change was long and not easy.
It was difficult for me to prepare because I needed to keep doing well at my current job. For several weeks I was on a regular schedule of preparing for the interview till 1am, getting up at 8:30am the next day and fully devoting myself to another day at work.
Interviewing at five companies in five days was also highly stressful and risky, and I don’t recommend doing it unless you have a very tight schedule. But it does give you a good advantage during negotiation should you secure multiple offers.
I’d like to thank all my recruiters who patiently walked me through the process, the people who spend their precious time talking to me, and all the companies that gave me the opportunities to interview and extended me offers.
Lastly but most importantly, I want to thank my family for their love and support — my parents for watching me taking the first and every step, my dear wife for everything she has done for me, and my daughter for her warming smile.
Thanks for reading through this long post.
You can find me on LinkedIn or Twitter.
Xiaohan Zeng
10/22/17
PS: Since the publication of this post, it has (unexpectedly) received some attention. I would like to thank everybody for the congratulations and shares, and apologize for not being able to respond to each of them.
This post has been translated into some other languages:
It has been reposted in Tech In Asia.
Breaking Into Startups invited me to a live video streaming, together with Sophia Ciocca.
CoverShr did a short QnA with me.
Critical Mind & Romantic Heart
|
Gil Fewster | 3.3K | 5 | https://medium.freecodecamp.org/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805?source=tag_archive---------1---------------- | The mind-blowing AI announcement from Google that you probably missed. | Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. In the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text.
I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own.
In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. Which is a shame, because it may just be the most astonishing article about machine learning that I read last year.
Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System.
This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock.
Luckily, I’m here to bring you up to speed. Here’s the deal.
Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance.
Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.
This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.
All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.
The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.
Google Translate invented its own language to help it translate more effectively.
What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.
Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes)
To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post:
Here you can see an advantage of Google’s new neural machine translation system over the old phrase-based approach. The GNMT is able to learn how to translate between two languages without being explicitly taught. This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated.
And this leads the Google engineers onto that truly astonishing discovery of creation:
So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics.
I just thought maybe you’d want to know.
Ok, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them), please drop me a line and explain it all to me. Slowly.
Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective.
Update 2: Nafrondel’s excellent, detailed reply is also a must read for an expert explanation of how neural networks function.
|
David Venturi | 10.6K | 20 | https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0?source=tag_archive---------2---------------- | Every single Machine Learning course on the internet, ranked by your reviews | A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost.
I’m almost finished now. I’ve taken many data science-related courses and audited portions of many more. I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. So I started creating a review-driven guide that recommends the best courses for each subject within data science.
For the first guide in the series, I recommended a few coding classes for the beginner data scientist. Then it was statistics and probability classes. Then introductions to data science. Also, data visualization.
For this guide, I spent a dozen hours trying to identify every online machine learning course offered as of May 2017, extracting key bits of information from their syllabi and reviews, and compiling their ratings. My end goal was to identify the three best courses available and present them to you, below.
For this task, I turned to none other than the open source Class Central community, and its database of thousands of course ratings and reviews.
Since 2011, Class Central founder Dhawal Shah has kept a closer eye on online courses than arguably anyone else in the world. Dhawal personally helped me assemble this list of resources.
Each course must fit three criteria:
We believe we covered every notable course that fits the above criteria. Since there are seemingly hundreds of courses on Udemy, we chose to consider the most-reviewed and highest-rated ones only.
There’s always a chance that we missed something, though. So please let us know in the comments section if we left a good course out.
We compiled average ratings and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings.
We made subjective syllabus judgment calls based on three factors:
A popular definition originates from Arthur Samuel in 1959: machine learning is a subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” In practice, this means developing computer programs that can make predictions based on data. Just as humans can learn from experience, so can computers, where data = experience.
A machine learning workflow is the process required for carrying out a machine learning project. Though individual projects can differ, most workflows share several common tasks: problem evaluation, data exploration, data preprocessing, model training/testing/deployment, etc. Below you’ll find a helpful visualization of these core steps:
The ideal course introduces the entire process and provides interactive examples, assignments, and/or quizzes where students can perform each task themselves.
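To make that workflow concrete, here is a minimal end-to-end sketch in Python using scikit-learn. The dataset and model are stand-ins chosen for brevity, not something any particular course prescribes, and a real project would add deeper exploration and a deployment step.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Data loading / exploration (a built-in toy dataset stands in for real data)
X, y = load_breast_cancer(return_X_y=True)

# Preprocessing: hold out a test set and scale the features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_train)

# Model training
model = LogisticRegression(max_iter=1000)
model.fit(scaler.transform(X_train), y_train)

# Model testing / evaluation
predictions = model.predict(scaler.transform(X_test))
print("Test accuracy:", accuracy_score(y_test, predictions))
```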
First off, let’s define deep learning. A succinct description: deep learning is a subfield of machine learning that uses neural networks with many layers to learn representations of data directly from examples.
As would be expected, portions of some of the machine learning courses contain deep learning content. I chose not to include deep learning-only courses, however. If you are interested in deep learning specifically, we’ve got you covered with the following article:
My top three recommendations from that list would be:
Several courses listed below ask students to have prior programming, calculus, linear algebra, and statistics experience. These prerequisites are understandable given that machine learning is an advanced discipline.
Missing a few subjects? Good news! Some of this experience can be acquired through our recommendations in the first two articles (programming, statistics) of this Data Science Career Guide. Several top-ranked courses below also provide gentle calculus and linear algebra refreshers and highlight the aspects most relevant to machine learning for those less familiar.
Stanford University’s Machine Learning on Coursera is the clear current winner in terms of ratings, reviews, and syllabus fit. Taught by the famous Andrew Ng, Google Brain founder and former chief scientist at Baidu, this was the class that sparked the founding of Coursera. It has a 4.7-star weighted average rating over 422 reviews.
Released in 2011, it covers all aspects of the machine learning workflow. Though it has a smaller scope than the original Stanford class upon which it is based, it still manages to cover a large number of techniques and algorithms. The estimated timeline is eleven weeks, with two weeks dedicated to neural networks and deep learning. Free and paid options are available.
Ng is a dynamic yet gentle instructor with palpable experience. He inspires confidence, especially when sharing practical implementation tips and warnings about common pitfalls. A linear algebra refresher is provided and Ng highlights the aspects of calculus most relevant to machine learning.
Evaluation is automatic and is done via multiple choice quizzes that follow each lesson and programming assignments. The assignments (there are eight of them) can be completed in MATLAB or Octave, which is an open-source version of MATLAB. Ng explains his language choice:
Though Python and R are likely more compelling choices in 2017 with the increased popularity of those languages, reviewers note that that shouldn’t stop you from taking the course.
A few prominent reviewers noted the following:
Columbia University’s Machine Learning is a relatively new offering that is part of their Artificial Intelligence MicroMasters on edX. Though it is newer and doesn’t have a large number of reviews, the ones that it does have are exceptionally strong. Professor John Paisley is noted as brilliant, clear, and clever. It has a 4.8-star weighted average rating over 10 reviews.
The course also covers all aspects of the machine learning workflow and more algorithms than the above Stanford offering. Columbia’s is a more advanced introduction, with reviewers noting that students should be comfortable with the recommended prerequisites (calculus, linear algebra, statistics, probability, and coding).
Quizzes (11), programming assignments (4), and a final exam are the modes of evaluation. Students can use either Python, Octave, or MATLAB to complete the assignments. The course’s total estimated timeline is eight to ten hours per week over twelve weeks. It is free with a verified certificate available for purchase.
Below are a few of the aforementioned sparkling reviews:
Machine Learning A-Z™ on Udemy, taught by Kirill Eremenko and Hadelin de Ponteves, is an impressively detailed offering that provides instruction in both Python and R, which is rare and can’t be said for any of the other top courses. It has a 4.5-star weighted average rating over 8,119 reviews, which makes it the most reviewed course of the ones considered.
It covers the entire machine learning workflow and an almost ridiculous (in a good way) number of algorithms through 40.5 hours of on-demand video. The course takes a more applied approach and is lighter math-wise than the above two courses. Each section starts with an “intuition” video from Eremenko that summarizes the underlying theory of the concept being taught. de Ponteves then walks through implementation with separate videos for both Python and R.
As a “bonus,” the course includes Python and R code templates for students to download and use on their own projects. There are quizzes and homework challenges, though these aren’t the strong points of the course.
Eremenko and the SuperDataScience team are revered for their ability to “make the complex simple.” Also, the prerequisites listed are “just some high school mathematics,” so this course might be a better option for those daunted by the Stanford and Columbia offerings.
A few prominent reviewers noted the following:
Our #1 pick had a weighted average rating of 4.7 out of 5 stars over 422 reviews. Let’s look at the other alternatives, sorted by descending rating. A reminder that deep learning-only courses are not included in this guide — you can find those here.
The Analytics Edge (Massachusetts Institute of Technology/edX): More focused on analytics in general, though it does cover several machine learning topics. Uses R. Strong narrative that leverages familiar real-world examples. Challenging. Ten to fifteen hours per week over twelve weeks. Free with a verified certificate available for purchase. It has a 4.9-star weighted average rating over 214 reviews.
Python for Data Science and Machine Learning Bootcamp (Jose Portilla/Udemy): Has large chunks of machine learning content, but covers the whole data science process. More of a very detailed intro to Python. Amazing course, though not ideal for the scope of this guide. 21.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 3316 reviews.
Data Science and Machine Learning Bootcamp with R (Jose Portilla/Udemy): The comments for Portilla’s above course apply here as well, except for R. 17.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 1317 reviews.
Machine Learning Series (Lazy Programmer Inc./Udemy): Taught by a data scientist/big data engineer/full stack software engineer with an impressive resume, Lazy Programmer currently has a series of 16 machine learning-focused courses on Udemy. In total, the courses have 5000+ ratings and almost all of them have 4.6 stars. A useful course ordering is provided in each individual course’s description. Uses Python. Cost varies depending on Udemy discounts, which are frequent.
Machine Learning (Georgia Tech/Udacity): A compilation of what was three separate courses: Supervised, Unsupervised and Reinforcement Learning. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Bite-sized videos, as is Udacity’s style. Friendly professors. Estimated timeline of four months. Free. It has a 4.56-star weighted average rating over 9 reviews.
Implementing Predictive Analytics with Spark in Azure HDInsight (Microsoft/edX): Introduces the core concepts of machine learning and a variety of algorithms. Leverages several big data-friendly tools, including Apache Spark, Scala, and Hadoop. Uses both Python and R. Four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.5-star weighted average rating over 6 reviews.
Data Science and Machine Learning with Python — Hands On! (Frank Kane/Udemy): Uses Python. Kane has nine years of experience at Amazon and IMDb. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 4139 reviews.
Scala and Spark for Big Data and Machine Learning (Jose Portilla/Udemy): “Big data” focus, specifically on implementation in Scala and Spark. Ten hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 607 reviews.
Machine Learning Engineer Nanodegree (Udacity): Udacity’s flagship Machine Learning program, which features a best-in-class project review system and career support. The program is a compilation of several individual Udacity courses, which are free. Co-created by Kaggle. Estimated timeline of six months. Currently costs $199 USD per month with a 50% tuition refund available for those who graduate within 12 months. It has a 4.5-star weighted average rating over 2 reviews.
Learning From Data (Introductory Machine Learning) (California Institute of Technology/edX): Enrollment is currently closed on edX, but is also available via CalTech’s independent platform (see below). It has a 4.49-star weighted average rating over 42 reviews.
Learning From Data (Introductory Machine Learning) (Yaser Abu-Mostafa/California Institute of Technology): “A real Caltech course, not a watered-down version.” Reviews note it is excellent for understanding machine learning theory. The professor, Yaser Abu-Mostafa, is popular among students and also wrote the textbook upon which this course is based. Videos are taped lectures (with lectures slides picture-in-picture) uploaded to YouTube. Homework assignments are .pdf files. The course experience for online students isn’t as polished as the top three recommendations. It has a 4.43-star weighted average rating over 7 reviews.
Mining Massive Datasets (Stanford University): Machine learning with a focus on “big data.” Introduces modern distributed file systems and MapReduce. Ten hours per week over seven weeks. Free. It has a 4.4-star weighted average rating over 30 reviews.
AWS Machine Learning: A Complete Guide With Python (Chandra Lingam/Udemy): A unique focus on cloud-based machine learning and specifically Amazon Web Services. Uses Python. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 62 reviews.
Introduction to Machine Learning & Face Detection in Python (Holczer Balazs/Udemy): Uses Python. Eight hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 162 reviews.
StatLearning: Statistical Learning (Stanford University): Based on the excellent textbook, “An Introduction to Statistical Learning, with Applications in R” and taught by the professors who wrote it. Reviewers note that the MOOC isn’t as good as the book, citing “thin” exercises and mediocre videos. Five hours per week over nine weeks. Free. It has a 4.35-star weighted average rating over 84 reviews.
Machine Learning Specialization (University of Washington/Coursera): Great courses, but last two classes (including the capstone project) were canceled. Reviewers note that this series is more digestible (read: easier for those without strong technical backgrounds) than other top machine learning courses (e.g. Stanford’s or Caltech’s). Be aware that the series is incomplete with recommender systems, deep learning, and a summary missing. Free and paid options available. It has a 4.31-star weighted average rating over 80 reviews.
From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase (Loony Corn/Udemy): “A down-to-earth, shy but confident take on machine learning techniques.” Taught by four-person team with decades of industry experience together. Uses Python. Cost varies depending on Udemy discounts, which are frequent. It has a 4.2-star weighted average rating over 494 reviews.
Principles of Machine Learning (Microsoft/edX): Uses R, Python, and Microsoft Azure Machine Learning. Part of the Microsoft Professional Program Certificate in Data Science. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.09-star weighted average rating over 11 reviews.
Big Data: Statistical Inference and Machine Learning (Queensland University of Technology/FutureLearn): A nice, brief exploratory machine learning course with a focus on big data. Covers a few tools like R, H2O Flow, and WEKA. Only three weeks in duration at a recommended two hours per week, but one reviewer noted that six hours per week would be more appropriate. Free and paid options available. It has a 4-star weighted average rating over 4 reviews.
Genomic Data Science and Clustering (Bioinformatics V) (University of California, San Diego/Coursera): For those interested in the intersection of computer science and biology and how it represents an important frontier in modern science. Focuses on clustering and dimensionality reduction. Part of UCSD’s Bioinformatics Specialization. Free and paid options available. It has a 4-star weighted average rating over 3 reviews.
Intro to Machine Learning (Udacity): Prioritizes topic breadth and practical tools (in Python) over depth and theory. The instructors, Sebastian Thrun and Katie Malone, make this class so fun. Consists of bite-sized videos and quizzes followed by a mini-project for each lesson. Currently part of Udacity’s Data Analyst Nanodegree. Estimated timeline of ten weeks. Free. It has a 3.95-star weighted average rating over 19 reviews.
Machine Learning for Data Analysis (Wesleyan University/Coursera): A brief intro machine learning and a few select algorithms. Covers decision trees, random forests, lasso regression, and k-means clustering. Part of Wesleyan’s Data Analysis and Interpretation Specialization. Estimated timeline of four weeks. Free and paid options available. It has a 3.6-star weighted average rating over 5 reviews.
Programming with Python for Data Science (Microsoft/edX): Produced by Microsoft in partnership with Coding Dojo. Uses Python. Eight hours per week over six weeks. Free and paid options available. It has a 3.46-star weighted average rating over 37 reviews.
Machine Learning for Trading (Georgia Tech/Udacity): Focuses on applying probabilistic machine learning approaches to trading decisions. Uses Python. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Estimated timeline of four months. Free. It has a 3.29-star weighted average rating over 14 reviews.
Practical Machine Learning (Johns Hopkins University/Coursera): A brief, practical introduction to a number of machine learning algorithms. Several one/two-star reviews expressing a variety of concerns. Part of JHU’s Data Science Specialization. Four to nine hours per week over four weeks. Free and paid options available. It has a 3.11-star weighted average rating over 37 reviews.
Machine Learning for Data Science and Analytics (Columbia University/edX): Introduces a wide range of machine learning topics. Some passionate negative reviews with concerns including content choices, a lack of programming assignments, and uninspiring presentation. Seven to ten hours per week over five weeks. Free with a verified certificate available for purchase. It has a 2.74-star weighted average rating over 36 reviews.
Recommender Systems Specialization (University of Minnesota/Coursera): Strong focus on one specific type of machine learning, recommender systems. A four course specialization plus a capstone project, which is a case study. Taught using LensKit (an open-source toolkit for recommender systems). Free and paid options available. It has a 2-star weighted average rating over 2 reviews.
Machine Learning With Big Data (University of California, San Diego/Coursera): Terrible reviews that highlight poor instruction and evaluation. Some noted it took them mere hours to complete the whole course. Part of UCSD’s Big Data Specialization. Free and paid options available. It has a 1.86-star weighted average rating over 14 reviews.
Practical Predictive Analytics: Models and Methods (University of Washington/Coursera): A brief intro to core machine learning concepts. One reviewer noted that there was a lack of quizzes and that the assignments were not challenging. Part of UW’s Data Science at Scale Specialization. Six to eight hours per week over four weeks. Free and paid options available. It has a 1.75-star weighted average rating over 4 reviews.
The following courses had one or no reviews as of May 2017.
Machine Learning for Musicians and Artists (Goldsmiths, University of London/Kadenze): Unique. Students learn algorithms, software tools, and machine learning best practices to make sense of human gesture, musical audio, and other real-time data. Seven sessions in length. Audit (free) and premium ($10 USD per month) options available. It has one 5-star review.
Applied Machine Learning in Python (University of Michigan/Coursera): Taught using Python and the scikit-learn toolkit. Part of the Applied Data Science with Python Specialization. Scheduled to start May 29th. Free and paid options available.
Applied Machine Learning (Microsoft/edX): Taught using various tools, including Python, R, and Microsoft Azure Machine Learning (note: Microsoft produces the course). Includes hands-on labs to reinforce the lecture content. Three to four hours per week over six weeks. Free with a verified certificate available for purchase.
Machine Learning with Python (Big Data University): Taught using Python. Targeted towards beginners. Estimated completion time of four hours. Big Data University is affiliated with IBM. Free.
Machine Learning with Apache SystemML (Big Data University): Taught using Apache SystemML, which is a declarative style language designed for large-scale machine learning. Estimated completion time of eight hours. Big Data University is affiliated with IBM. Free.
Machine Learning for Data Science (University of California, San Diego/edX): Doesn’t launch until January 2018. Programming examples and assignments are in Python, using Jupyter notebooks. Eight hours per week over ten weeks. Free with a verified certificate available for purchase.
Introduction to Analytics Modeling (Georgia Tech/edX): The course advertises R as its primary programming tool. Five to ten hours per week over ten weeks. Free with a verified certificate available for purchase.
Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology/FutureLearn): Brief overview of a few algorithms. Uses Hewlett Packard Enterprise’s Vertica Analytics platform as an applied tool. Start date to be announced. Two hours per week over four weeks. Free with a Certificate of Achievement available for purchase.
Introducción al Machine Learning (Universitas Telefónica/Miríada X): Taught in Spanish. An introduction to machine learning that covers supervised and unsupervised learning. A total of twenty estimated hours over four weeks.
Machine Learning Path Step (Dataquest): Taught in Python using Dataquest’s interactive in-browser platform. Multiple guided projects and a “plus” project where you build your own machine learning system using your own data. Subscription required.
The following six courses are offered by DataCamp. DataCamp’s hybrid teaching style leverages video and text-based instruction with lots of examples through an in-browser code editor. A subscription is required for full access to each course.
Introduction to Machine Learning (DataCamp): Covers classification, regression, and clustering algorithms. Uses R. Fifteen videos and 81 exercises with an estimated timeline of six hours.
Supervised Learning with scikit-learn (DataCamp): Uses Python and scikit-learn. Covers classification and regression algorithms. Seventeen videos and 54 exercises with an estimated timeline of four hours.
Unsupervised Learning in R (DataCamp): Provides a basic introduction to clustering and dimensionality reduction in R. Sixteen videos and 49 exercises with an estimated timeline of four hours.
Machine Learning Toolbox (DataCamp): Teaches the “big ideas” in machine learning. Uses R. 24 videos and 88 exercises with an estimated timeline of four hours.
Machine Learning with the Experts: School Budgets (DataCamp): A case study from a machine learning competition on DrivenData. Involves building a model to automatically classify items in a school’s budget. DataCamp’s “Supervised Learning with scikit-learn” is a prerequisite. Fifteen videos and 51 exercises with an estimated timeline of four hours.
Unsupervised Learning in Python (DataCamp): Covers a variety of unsupervised learning algorithms using Python, scikit-learn, and scipy. The course ends with students building a recommender system to recommend popular musical artists. Thirteen videos and 52 exercises with an estimated timeline of four hours.
Machine Learning (Tom Mitchell/Carnegie Mellon University): Carnegie Mellon’s graduate introductory machine learning course. A prerequisite to their second graduate level course, “Statistical Machine Learning.” Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. A 2011 version of the course also exists. CMU is one of the best graduate schools for studying machine learning and has a whole department dedicated to ML. Free.
Statistical Machine Learning (Larry Wasserman/Carnegie Mellon University): Likely the most advanced course in this guide. A follow-up to Carnegie Mellon’s Machine Learning course. Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. Free.
Undergraduate Machine Learning (Nando de Freitas/University of British Columbia): An undergraduate machine learning course. Lectures are filmed and put on YouTube with the slides posted on the course website. The course assignments are posted as well (no solutions, though). de Freitas is now a full-time professor at the University of Oxford and receives praise for his teaching abilities in various forums. Graduate version available (see below).
Machine Learning (Nando de Freitas/University of British Columbia): A graduate machine learning course. The comments in de Freitas’ undergraduate course (above) apply here as well.
This is the fifth of a six-piece series that covers the best online courses for launching yourself into the data science field. We covered programming in the first article, statistics and probability in the second article, intros to data science in the third article, and data visualization in the fourth.
The final piece will be a summary of those articles, plus the best online courses for other key topics such as data wrangling, databases, and even software engineering.
If you’re looking for a complete list of Data Science online courses, you can find them on Class Central’s Data Science and Big Data subject page.
If you enjoyed reading this, check out some of Class Central’s other pieces:
If you have suggestions for courses I missed, let me know in the responses!
If you found this helpful, click the 💚 so more people will see it here on Medium.
This is a condensed version of my original article published on Class Central, where I’ve included detailed course syllabi.
|
Vishal Maini | 32K | 10 | https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12?source=tag_archive---------3---------------- | A Beginner’s Guide to AI/ML 🤖👶 – Machine Learning for Humans – Medium | Part 1: Why Machine Learning Matters. The big picture of artificial intelligence and machine learning — past, present, and future.
Part 2.1: Supervised Learning. Learning with an answer key. Introducing linear regression, loss functions, overfitting, and gradient descent.
Part 2.2: Supervised Learning II. Two methods of classification: logistic regression and SVMs.
Part 2.3: Supervised Learning III. Non-parametric learners: k-nearest neighbors, decision trees, random forests. Introducing cross-validation, hyperparameter tuning, and ensemble models.
Part 3: Unsupervised Learning. Clustering: k-means, hierarchical. Dimensionality reduction: principal components analysis (PCA), singular value decomposition (SVD).
Part 4: Neural Networks & Deep Learning. Why, where, and how deep learning works. Drawing inspiration from the brain. Convolutional neural networks (CNNs), recurrent neural networks (RNNs). Real-world applications.
Part 5: Reinforcement Learning. Exploration and exploitation. Markov decision processes. Q-learning, policy learning, and deep reinforcement learning. The value learning problem.
Appendix: The Best Machine Learning Resources. A curated list of resources for creating your machine learning curriculum.
This guide is intended to be accessible to anyone. Basic concepts in probability, statistics, programming, linear algebra, and calculus will be discussed, but it isn’t necessary to have prior knowledge of them to gain value from this series.
Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.
The rate of acceleration is already astounding. After a couple of AI winters and periods of false hope over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game in recent years.
In 2015, Google trained a conversational agent (AI) that could not only convincingly interact with humans as a tech support helpdesk, but also discuss morality, express opinions, and answer general facts-based questions.
The same year, DeepMind developed an agent that surpassed human-level performance at 49 Atari games, receiving only the pixels and game score as inputs. Soon after, in 2016, DeepMind obsoleted their own achievement by releasing a new state-of-the-art gameplay method called A3C.
Meanwhile, AlphaGo defeated one of the best human players at Go — an extraordinary achievement in a game dominated by humans for two decades after machines first conquered chess. Many masters could not fathom how it would be possible for a machine to grasp the full nuance and complexity of this ancient Chinese war strategy game, with its 10¹⁷⁰ possible board positions (there are only 10⁸⁰ atoms in the universe).
In March 2017, OpenAI created agents that invented their own language to cooperate and more effectively achieve their goal. Soon after, Facebook reportedly succeeded in training agents to negotiate and even lie.
Just a few days ago (as of this writing), on August 11, 2017, OpenAI reached yet another incredible milestone by defeating the world’s top professionals in 1v1 matches of the online multiplayer game Dota 2.
Much of our day-to-day technology is powered by artificial intelligence. Point your camera at the menu during your next trip to Taiwan and the restaurant’s selections will magically appear in English via the Google Translate app.
Today AI is used to design evidence-based treatment plans for cancer patients, instantly analyze results from medical tests to escalate to the appropriate specialist immediately, and conduct scientific research for drug discovery.
In everyday life, it’s increasingly commonplace to discover machines in roles traditionally occupied by humans. Really, don’t be surprised if a little housekeeping delivery bot shows up instead of a human next time you call the hotel desk to send up some toothpaste.
In this series, we’ll explore the core machine learning concepts behind these technologies. By the end, you should be able to describe how they work at a conceptual level and be equipped with the tools to start building similar applications yourself.
Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing.
Machine learning is a subfield of artificial intelligence. Its goal is to enable computers to learn on their own. A machine’s learning algorithm enables it to identify patterns in observed data, build models that explain the world, and predict things without having explicit pre-programmed rules and models.
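To make the definition concrete, here is a toy illustration (purely illustrative, not part of the series itself): instead of hand-coding a rule, we let the program estimate it from noisy examples and then use it to predict.

```python
import numpy as np

# Generate noisy data from a hidden rule, y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.5, 100)

# The "learning" step: fit a line to the observed data
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned rule: y ≈ {slope:.2f}x + {intercept:.2f}")

# Use the learned model to predict an unseen case
print("prediction for x = 7:", slope * 7 + intercept)
```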
The technologies discussed above are examples of artificial narrow intelligence (ANI), which can effectively perform a narrowly defined task.
Meanwhile, we’re continuing to make foundational advances towards human-level artificial general intelligence (AGI), also known as strong AI. The definition of an AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or... reprogramming itself.
And this last one is a big deal. Once we create an AI that can improve itself, it will unlock a cycle of recursive self-improvement that could lead to an intelligence explosion over some unknown time period, ranging from many decades to a single day.
You may have heard this point referred to as the singularity. The term is borrowed from the gravitational singularity that occurs at the center of a black hole, an infinitely dense one-dimensional point where the laws of physics as we understand them start to break down.
A recent report by the Future of Humanity Institute surveyed a panel of AI researchers on timelines for AGI, and found that “researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years” (Grace et al, 2017). We’ve personally spoken with a number of sane and reasonable AI practitioners who predict much longer timelines (the upper limit being “never”), and others whose timelines are alarmingly short — as little as a few years.
The advent of greater-than-human-level artificial superintelligence (ASI) could be one of the best or worst things to happen to our species. It carries with it the immense challenge of specifying what AIs will want in a way that is friendly to humans.
While it’s impossible to say what the future holds, one thing is certain: 2017 is a good time to start understanding how machines think. To go beyond the abstractions of a philosopher in an armchair and intelligently shape our roadmaps and policies with respect to AI, we must engage with the details of how machines see the world — what they “want”, their potential biases and failure modes, their temperamental quirks — just as we study psychology and neuroscience to understand how humans learn, decide, act, and feel.
Machine learning is at the core of our journey towards artificial general intelligence, and in the meantime, it will change every industry and have a massive impact on our day-to-day lives. That’s why we believe it’s worth understanding machine learning, at least at a conceptual level — and we designed this series to be the best place to start.
You don’t necessarily need to read the series cover-to-cover to get value out of it. Here are three suggestions on how to approach it, depending on your interests and how much time you have:
Vishal most recently led growth at Upstart, a lending platform that utilizes machine learning to price credit, automate the borrowing process, and acquire users. He spends his time thinking about startups, applied cognitive science, moral philosophy, and the ethics of artificial intelligence.
Samer is a Master’s student in Computer Science and Engineering at UCSD and co-founder of Conigo Labs. Prior to grad school, he founded TableScribe, a business intelligence tool for SMBs, and spent two years advising Fortune 100 companies at McKinsey. Samer previously studied Computer Science and Ethics, Politics, and Economics at Yale.
Most of this series was written during a 10-day trip to the United Kingdom in a frantic blur of trains, planes, cafes, pubs and wherever else we could find a dry place to sit. Our aim was to solidify our own understanding of artificial intelligence, machine learning, and how the methods therein fit together — and hopefully create something worth sharing in the process.
And now, without further ado, let’s dive into machine learning with Part 2.1: Supervised Learning!
More from Machine Learning for Humans 🤖👶
A special thanks to Jonathan Eng, Edoardo Conti, Grant Schneider, Sunny Kumar, Stephanie He, Tarun Wadhwa, and Sachin Maini (series editor) for their significant contributions and feedback.
|
Tim Anglade | 7K | 23 | https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3?source=tag_archive---------4---------------- | How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native | The HBO show Silicon Valley released a real AI app that identifies hotdogs — and not hotdogs — like the one shown on season 4’s 4th episode (the app is now available on Android as well as iOS!)
To achieve this, we designed a bespoke neural architecture that runs directly on your phone, and trained it with Tensorflow, Keras & Nvidia GPUs.
While the use-case is farcical, the app is an approachable example of both deep learning, and edge computing. All AI work is powered 100% by the user’s device, and images are processed without ever leaving their phone. This provides users with a snappier experience (no round trip to the cloud), offline availability, and better privacy. This also allows us to run the app at a cost of $0, even under the load of a million users, providing significant savings compared to traditional cloud-based AI approaches.
The app was developed in-house by the show, by a single developer, running on a single laptop & attached GPU, using hand-curated data. In that respect, it may provide a sense of what can be achieved today, with a limited amount of time & resources, by non-technical companies, individual developers, and hobbyists alike. In that spirit, this article attempts to give a detailed overview of steps involved to help others build their own apps.
If you haven’t seen the show or tried the app (you should!), the app lets you snap a picture and then tells you whether it thinks that image is of a hotdog or not. It’s a straightforward use-case, that pays homage to recent AI research and applications, in particular ImageNet.
While we’ve probably dedicated more engineering resources to recognizing hotdogs than anyone else, the app still fails in horrible and/or subtle ways.
Conversely, it’s also sometimes able to recognize hotdogs in complex situations... According to Engadget, “It’s incredible. I’ve had more success identifying food with the app in 20 minutes than I have had tagging and identifying songs with Shazam in the past two years.”
Have you ever found yourself reading Hacker News, thinking “they raised a 10M series A for that? I could build it in one weekend!” This app probably feels a lot like that, and the initial prototype was indeed built in a single weekend using Google Cloud Platform’s Vision API, and React Native. But the final app we ended up releasing on the app store required months of additional (part-time) work, to deliver meaningful improvements that would be difficult for an outsider to appreciate. We spent weeks optimizing overall accuracy, training time, inference time, iterating on our setup & tooling so we could have faster development iterations, and spent a whole weekend optimizing the user experience around iOS & Android permissions (don’t even get me started on that one).
All too often technical blog posts or academic papers skip over this part, preferring to present the final chosen solution. In the interest of helping others learn from our mistakes & choices, we will present an abridged view of the approaches that didn’t work for us, before we describe the final architecture we ended up shipping in the next section.
We chose React Native to build the prototype as it would give us an easy sandbox to experiment with, and would help us quickly support many devices. The experience ended up being a good one and we kept React Native for the remainder of the project: it didn’t always make things easy, and the design for the app was purposefully limited, but in the end React Native got the job done.
The other main component we used for the prototype, Google Cloud’s Vision API, was quickly abandoned. There were 3 main factors:
For these reasons, we started experimenting with what’s trendily called “edge computing”, which for our purposes meant that after training our neural network on our laptop, we would export it and embed it directly into our mobile app, so that the neural network execution phase (or inference) would run directly inside the user’s phone.
Through a chance encounter with Pete Warden of the TensorFlow team, we had become aware of its ability to run TensorFlow directly embedded on an iOS device, and started exploring that path. After React Native, TensorFlow became the second fixed part of our stack.
It only took a day of work to integrate TensorFlow’s Objective-C++ camera example in our React Native shell. It took slightly longer to use their transfer learning script, which helps you retrain the Inception architecture to deal with a more specific image problem. Inception is the name of a family of neural architectures built by Google to deal with image recognition problems. Inception is available “pre-trained” which means the training phase has been completed and the weights are set. Most often for image recognition networks, they have been trained on ImageNet, a dataset containing over 20,000 different types of objects (hotdogs are one of them). However, much like Google Cloud’s Vision API, ImageNet training rewards breadth as much as depth here, and out-of-the-box accuracy on a single one of the 20,000+ categories can be lacking. As such, retraining (also called “transfer learning”) aims to take a full-trained neural net, and retrain it to perform better on the specific problem you’d like to handle. This usually involves some degree of “forgetting”, either by excising entire layers from the stack, or by slowly erasing the network’s ability to distinguish a type of object (e.g. chairs) in favor of better accuracy at recognizing the one you care about (i.e. hotdogs).
While the network (Inception in this case) may have been trained on the 14M images contained in ImageNet, we were able to retrain it on just a few thousand hotdog images to get drastically enhanced hotdog recognition.
The big advantage of transfer learning is that you get better results much faster, and with less data, than if you train from scratch. A full training might take months on multiple GPUs and require millions of images, while retraining can conceivably be done in hours on a laptop with a couple thousand images.
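For readers who want to see roughly what retraining looks like in code, here is a minimal Keras-style sketch of the idea: freeze a pre-trained Inception base and train a small binary head on top of it. This illustrates the general technique only; at this stage of the project we were actually using TensorFlow’s retrain script, and Keras enters the story a bit later.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Load Inception pre-trained on ImageNet, without its 1000-class head
base = InceptionV3(weights='imagenet', include_top=False, pooling='avg')
base.trainable = False  # freeze the pre-trained layers

# Add a small binary head: hotdog vs. not hotdog
x = layers.Dense(128, activation='relu')(base.output)
output = layers.Dense(1, activation='sigmoid')(x)
model = models.Model(base.input, output)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, ...) would then train only the new head
```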
One of the biggest challenges we encountered was understanding exactly what should count as a hotdog and what should not. Defining what a “hotdog” is ends up being surprisingly difficult (do cut up sausages count, and if so, which kinds?) and subject to cultural interpretation.
Similarly, the “open world” nature of our problem meant we had to deal with an almost infinite number of inputs. While certain computer-vision problems have relatively limited inputs (say, x-rays of bolts with or without a mechanical default), we had to prepare the app to be fed selfies, nature shots and any number of foods.
Suffice it to say, this approach was promising and did lead to some improved results; however, it had to be abandoned for a couple of reasons.
First, the nature of our problem meant a strong imbalance in training data: there are many more examples of things that are not hotdogs than things that are hotdogs. In practice this means that if you train your algorithm on 3 hotdog images and 97 non-hotdog images, and it recognizes 0% of the former but 100% of the latter, it will still score 97% accuracy by default! This was not straightforward to solve out of the box using TensorFlow’s retrain tool, and basically necessitated setting up a deep learning model from scratch, importing weights, and training in a more controlled manner.
At this point we decided to bite the bullet and get something started with Keras, a deep learning library that provides nicer, easier-to-use abstractions on top of TensorFlow, including pretty awesome training tools, and a class_weight option which is ideal for dealing with the sort of dataset imbalance we were facing.
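As a sketch of that option (the model and data variables below are placeholders, and the exact 49:1 weighting we settled on is described further down), telling Keras to penalize mistakes on the rare class more heavily looks like this:

```python
# Illustrative weights: make each hotdog example count much more than a
# non-hotdog example, so the loss can't be gamed by always predicting
# "not hotdog".
class_weight = {0: 1.0,   # not hotdog
                1: 49.0}  # hotdog

model.fit(train_images, train_labels,
          epochs=20,
          batch_size=128,
          class_weight=class_weight,
          validation_data=(val_images, val_labels))
```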
We used that opportunity to try other popular neural architectures like VGG, but one problem remained. None of them could comfortably fit on an iPhone. They consumed too much memory, which led to app crashes, and would sometimes take up to 10 seconds to compute, which was not ideal from a UX standpoint. Many things were attempted to mitigate that, but in the end these architectures were just too big to run efficiently on mobile.
To give you a sense of the timeline, this was roughly the mid-way point of the project. By that time, the UI was 90%+ done and very little of it was going to change. But in hindsight, the neural net was at best 20% done. We had a good sense of the challenges & a good dataset, but 0 lines of the final neural architecture had been written, none of our neural code could reliably run on mobile, and even our accuracy was going to improve drastically in the weeks to come.
The problem directly ahead of us was simple: if Inception and VGG were too big, was there a simpler, pre-trained neural network we could retrain? At the suggestion of the always excellent Jeremy P. Howard (where has that guy been all our life?), we explored Xception, Enet and SqueezeNet. We quickly settled on SqueezeNet due to its explicit positioning as a solution for embedded deep learning, and the availability of a pre-trained Keras model on GitHub (yay open-source).
So how big of a difference does this make? An architecture like VGG uses about 138 million parameters (essentially the number of numbers necessary to model the neurons and values between them). Inception is already a massive improvement, requiring only 23 million parameters. SqueezeNet, in comparison, only requires 1.25 million.
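If you want to check those orders of magnitude yourself, Keras can report parameter counts directly. Exact numbers vary slightly by implementation and classification head, and the SqueezeNet we used came from a GitHub Keras port rather than keras.applications.

```python
from tensorflow.keras.applications import VGG16, InceptionV3

for net in (VGG16, InceptionV3):
    model = net(weights=None)  # weights=None skips the large downloads
    print(f"{net.__name__}: {model.count_params():,} parameters")

# Prints roughly: VGG16 ~138M, InceptionV3 ~24M.
# The SqueezeNet Keras port we used reports ~1.25M.
```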
This has two advantages:
There are tradeoffs of course:
During this phase, we started experimenting with tuning the neural network architecture. In particular, we started using Batch Normalization and trying different activation functions.
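As a rough sketch (not our exact architecture), a SqueezeNet-style “fire module” with Batch Normalization and an ELU activation looks something like this in Keras:

```python
from tensorflow.keras import layers

def fire_module_bn_elu(x, squeeze=16, expand=64):
    """SqueezeNet-style fire module with BatchNorm + ELU (illustrative)."""
    # 1x1 "squeeze" convolution
    s = layers.Conv2D(squeeze, (1, 1), padding='same')(x)
    s = layers.BatchNormalization()(s)
    s = layers.ELU()(s)
    # Parallel 1x1 and 3x3 "expand" convolutions, concatenated channel-wise
    e1 = layers.Conv2D(expand, (1, 1), padding='same')(s)
    e3 = layers.Conv2D(expand, (3, 3), padding='same')(s)
    out = layers.concatenate([e1, e3])
    out = layers.BatchNormalization()(out)
    return layers.ELU()(out)

# Example usage on a 224x224 RGB input
inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, (3, 3), strides=2, padding='same')(inputs)
x = fire_module_bn_elu(x)
```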
After adding Batch Normalization and ELU to SqueezeNet, we were able to train neural networks that achieved 90%+ accuracy when training from scratch; however, they were relatively brittle, meaning the same network would overfit in some cases or underfit in others when confronted with real-life testing. Even adding more examples to the dataset and playing with data augmentation failed to deliver a network that met expectations.
So while this phase was promising, and for the first time gave us a functioning app that could work entirely on an iPhone, in less than a second, we eventually moved to our 4th & final architecture.
Our final architecture was spurred in large part by the publication on April 17 of Google’s MobileNets paper, promising a new neural architecture with Inception-like accuracy on simple problems like ours, with only 4M or so parameters. This meant it sat in an interesting sweet spot between a SqueezeNet that had maybe been overly simplistic for our purposes, and the possibly overwrought elephant-trying-to-squeeze-in-a-tutu of using Inception or VGG on Mobile. The paper introduced some capacity to tune the size & complexity of network specifically to trade memory/CPU consumption against accuracy, which was very much top of mind for us at the time.
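As an illustration of that tuning knob, the width multiplier (alpha) in the stock Keras MobileNet implementation thins every layer of the network; the values below are examples rather than the settings we shipped.

```python
from tensorflow.keras.applications import MobileNet

# alpha is the width multiplier from the MobileNets paper: smaller values
# trade accuracy for a smaller, faster network.
for alpha in (1.0, 0.5, 0.25):
    model = MobileNet(alpha=alpha, weights=None)
    print(f"alpha={alpha}: {model.count_params():,} parameters")
```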
With less than a month to go before the app had to launch we endeavored to reproduce the paper’s results. This was entirely anticlimactic as within a day of the paper being published a Keras implementation was already offered publicly on GitHub by Refik Can Malli, a student at Istanbul Technical University, whose work we had already benefitted from when we took inspiration from his excellent Keras SqueezeNet implementation. The depth & openness of the deep learning community, and the presence of talented minds like R.C. is what makes deep learning viable for applications today — but they also make working in this field more thrilling than any tech trend we’ve been involved with.
Our final architecture ended up making significant departures from the MobileNets architecture or from convention, in particular:
So how does this stack work exactly? Deep Learning often gets a bad rap for being a “black box”, and while it’s true many components of it can be mysterious, the networks we use often leak information about how some of their magic work. We can look at the layers of this stack and how they activate on specific input images, giving us a sense of each layer’s ability to recognize sausage, buns, or other particularly salient hotdog features.
Data quality was of the utmost importance. A neural network can only be as good as the data that trained it, and improving training set quality was probably one of the top 3 things we spent time on during this project. The key things we did to improve this were:
The final composition of our dataset was 150k images, of which only 3k were hotdogs: there are only so many hotdogs you can look at, but there are many not-hotdogs to look at. The 49:1 imbalance was dealt with by setting a Keras class weight of 49:1 in favor of hotdogs. Of the remaining 147k images, most were of food, with just 3k photos of non-food items, to help the network generalize a bit more and not get tricked into seeing a hotdog if presented with an image of a human in a red outfit.
Our data augmentation rules were as follows:
These numbers were derived intuitively, based on experiments and our understanding of the real-life usage of our app, as opposed to careful experimentation.
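To give a sense of what such an augmentation pipeline looks like in Keras (the specific values below are typical placeholders, not our production settings), ImageDataGenerator covers most of these transformations:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,       # small random rotations
    width_shift_range=0.1,   # horizontal jitter
    height_shift_range=0.1,  # vertical jitter
    zoom_range=0.2,          # zoom in/out
    horizontal_flip=True,    # a mirrored hotdog is still a hotdog
    rescale=1. / 255)

# train_generator = augmenter.flow_from_directory(
#     'data/train', target_size=(224, 224), batch_size=128, class_mode='binary')
```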
The final key to our data pipeline was using Patrick Rodriguez’s multiprocess image data generator for Keras. While Keras does have a built-in multi-threaded and multiprocess implementation, we found Patrick’s library to be consistently faster in our experiments, for reasons we did not have time to investigate. This library cut our training time to a third of what it used to be.
The network was trained using a 2015 MacBook Pro and attached external GPU (eGPU), specifically an Nvidia GTX 980 Ti (we’d probably buy a 1080 Ti if we were starting today). We were able to train the network on batches of 128 images at a time. The network was trained for a total of 240 epochs, meaning we ran all 150k images through the network 240 times. This took about 80 hours.
We trained the network in 3 phases:
While learning rates were identified by running the linear experiment recommended by the CLR paper, they seem to intuitively make sense, in that the max for each phase is within a factor of 2 of the previous minimum, which is aligned with the industry standard recommendation of halving your learning rate if your accuracy plateaus during training.
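For reference, that simpler plateau-halving recommendation maps to a one-line Keras callback; the cyclical learning rate schedule we actually used is not shown here.

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever validation accuracy stops improving.
reduce_lr = ReduceLROnPlateau(monitor='val_accuracy', factor=0.5,
                              patience=3, min_lr=1e-6, verbose=1)

# model.fit(..., callbacks=[reduce_lr])
```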
In the interest of time we performed some training runs on a Paperspace P5000 instance running Ubuntu. In those cases, we were able to double the batch size, and found that optimal learning rates for each phase were roughly double as well.
Even having designed a relatively compact neural architecture, and having trained it to handle situations it may find in a mobile context, we had a lot of work left to make it run properly. Trying to run a top-of-the-line neural net architecture out of the box can quickly burn hundreds of megabytes of RAM, which few mobile devices can spare today. Beyond network optimizations, it turns out the way you handle images or even load TensorFlow itself can have a huge impact on how quickly your network runs, how little RAM it uses, and how crash-free the experience will be for your users.
This was maybe the most mysterious part of this project. Relatively little information can be found about it, possibly due to the dearth of production deep learning applications running on mobile devices as of today. However, we must commend the Tensorflow team, and particularly Pete Warden, Andrew Harp and Chad Whipkey for the existing documentation and their kindness in answering our inquiries.
Instead of using TensorFlow on iOS, we looked at using Apple’s built-in deep learning libraries instead (BNNS, MPSCNN and later on, CoreML). We would have designed the network in Keras, trained it with TensorFlow, exported all the weight values, re-implemented the network with BNNS or MPSCNN (or imported it via CoreML), and loaded the parameters into that new implementation. However, the biggest obstacle was that these new Apple libraries are only available on iOS 10+, and we wanted to support older versions of iOS. As iOS 10+ adoption and these frameworks continue to improve, there may not be a case for using TensorFlow on device in the near future.
If you think injecting JavaScript into your app on the fly is cool, try injecting neural nets into your app! The last production trick we used was to leverage CodePush and Apple’s relatively permissive terms of service, to live-inject new versions of our neural networks after submission to the app store. While this was mostly done to help us quickly deliver accuracy improvements to our users after release, you could conceivably use this approach to drastically expand or alter the feature set of your app without going through an app store review again.
There are a lot of things that didn’t work or we didn’t have time to do, and these are the ideas we’d investigate in the future:
Finally, we’d be remiss not to mention the obvious and important influence of User Experience, Developer Experience and built-in biases in developing an AI app. Each probably deserves its own post (or its own book) but here are the very concrete impacts of these 3 things in our experience.
UX (User Experience) is arguably more critical at every stage of the development of an AI app than for a traditional application. There are no Deep Learning algorithms that will give you perfect results right now, but there are many situations where the right mix of Deep Learning + UX will lead to results that are indistinguishable from perfect. Proper UX expectations are irreplaceable when it comes to setting developers on the right path to design their neural networks, setting the proper expectations for users when they use the app, and gracefully handling the inevitable AI failures. Building AI apps without a UX-first mindset is like training a neural net without Stochastic Gradient Descent: you will end up stuck in the local minima of the Uncanny Valley on your way to building the perfect AI use-case.
DX (Developer Experience) is extremely important as well, because deep learning training time is the new horsing around while waiting for your program to compile. We suggest you heavily favor DX first (hence Keras), as it’s always possible to optimize runtime for later runs (manual GPU parallelization, multi-process data augmentation, TensorFlow pipeline, even re-implementing for caffe2 / pyTorch).
Even projects with relatively obtuse APIs & documentation like TensorFlow greatly improve DX by providing a highly-tested, highly-used, well-maintained environment for training & running neural networks.
For the same reason, it’s hard to beat both the cost and the flexibility of having your own local GPU for development. Being able to look at and edit images locally, and to edit code with your preferred tool without delays, greatly improves the quality & speed of building AI projects.
Most AI apps will hit more critical cultural biases than ours, but as an example, even our straightforward use-case caught us flat-footed with built-in biases in our initial dataset that made the app unable to recognize French-style hotdogs, Asian hotdogs, and other oddities we did not have immediate personal experience with. It’s critical to remember that AI does not make “better” decisions than humans — it is infected by the same human biases we fall prey to, via the training sets humans provide.
Thanks to: Mike Judge, Alec Berg, Clay Tarver, Todd Silverstein, Jonathan Dotan, Lisa Schomas, Amy Solomon, Dorothy Street & Rich Toyon, and all the writers of the show — the app would simply not exist without them. Meaghan, Dana, David, Jay, and everyone at HBO. Scale Venture Partners & GitLab. Rachel Thomas and Jeremy Howard & Fast AI for all that they have taught me, and for kindly reviewing a draft of this post. Check out their free online Deep Learning course, it’s awesome! JP Simard for his help on iOS. And finally, the TensorFlow team & r/MachineLearning for their help & inspiration.
... And thanks to everyone who used & shared the app! It made staring at pictures of hotdogs for months on end totally worth it 😅
A.I., Startups & HBO’s Silicon Valley. Get in touch: timanglade@gmail.com
|
Sophia Ciocca | 53K | 9 | https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe?source=tag_archive---------5---------------- | How Does Spotify Know You So Well? – Member Feature Stories – Medium | Member Feature Story
A software engineer explains the science behind personalized music recommendations
Photo by studioEAST/Getty Images
This Monday — just like every Monday before it — over 100 million Spotify users found a fresh new playlist waiting for them called Discover Weekly. It’s a custom mixtape of 30 songs they’ve never listened to before but will probably love, and it’s pretty much magic.
I’m a huge fan of Spotify, and particularly Discover Weekly. Why? It makes me feel seen. It knows my musical tastes better than any person in my entire life ever has, and I’m consistently delighted by how satisfyingly just right it is every week, with tracks I probably would never have found myself or known I would like.
For those of you who live under a soundproof rock, let me introduce you to my virtual best friend:
As it turns out, I’m not alone in my obsession with Discover Weekly. The user base goes crazy for it, which has driven Spotify to rethink its focus, and invest more resources into algorithm-based playlists.
Ever since Discover Weekly debuted in 2015, I’ve been dying to know how it works. (What’s more, I’m a Spotify fangirl, so I sometimes like to pretend that I work there and research their products.) After three weeks of mad Googling, I feel like I’ve finally gotten a glimpse behind the curtain.
So how does Spotify do such an amazing job of choosing those 30 songs for each person each week? Let’s zoom out for a second to look at how other music services have tackled music recommendations, and how Spotify’s doing it better.
Back in the 2000s, Songza kicked off the online music curation scene using manual curation to create playlists for users. This meant that a team of “music experts” or other human curators would put together playlists that they just thought sounded good, and then users would listen to those playlists. (Later, Beats Music would employ this same strategy.) Manual curation worked alright, but it was based on that specific curator’s choices, and therefore couldn’t take into account each listener’s individual music taste.
Like Songza, Pandora was also one of the original players in digital music curation. It employed a slightly more advanced approach, instead manually tagging attributes of songs. This meant a group of people listened to music, chose a bunch of descriptive words for each track, and tagged the tracks accordingly. Then, Pandora’s code could simply filter for certain tags to make playlists of similar-sounding music.
Around that same time, a music intelligence agency from the MIT Media Lab called The Echo Nest was born, which took a radical, cutting-edge approach to personalized music. The Echo Nest used algorithms to analyze the audio and textual content of music, allowing it to perform music identification, personalized recommendation, playlist creation, and analysis.
Finally, taking another approach is Last.fm, which still exists today and uses a process called collaborative filtering to identify music its users might like, but more on that in a moment.
So if that’s how other music curation services have handled recommendations, how does Spotify’s magic engine run? How does it seem to nail individual users’ tastes so much more accurately than any of the other services?
Spotify doesn’t actually use a single revolutionary recommendation model. Instead, they mix together some of the best strategies used by other services to create their own uniquely powerful discovery engine.
To create Discover Weekly, there are three main types of recommendation models that Spotify employs:
Let’s dive into how each of these recommendation models work!
First, some background: When people hear the words “collaborative filtering,” they generally think of Netflix, as it was one of the first companies to use this method to power a recommendation model, taking users’ star-based movie ratings to inform its understanding of which movies to recommend to other similar users.
After Netflix was successful, the use of collaborative filtering spread quickly, and is now often the starting point for anyone trying to make a recommendation model.
Unlike Netflix, Spotify doesn’t have a star-based system with which users rate their music. Instead, Spotify’s data is implicit feedback — specifically, the stream counts of the tracks and additional streaming data, such as whether a user saved the track to their own playlist, or visited the artist’s page after listening to a song.
But what is collaborative filtering, truly, and how does it work? Here’s a high-level rundown, explained in a quick conversation:
What’s going on here? Each of these individuals has track preferences: the one on the left likes tracks P, Q, R, and S, while the one on the right likes tracks Q, R, S, and T.
Collaborative filtering then uses that data to say:
“Hmmm... You both like three of the same tracks — Q, R, and S — so you are probably similar users. Therefore, you’re each likely to enjoy other tracks that the other person has listened to, that you haven’t heard yet.”
Therefore, it suggests that the one on the right check out track P — the only track not mentioned, but that his “similar” counterpart enjoyed — and the one on the left check out track T, for the same reasoning. Simple, right?
But how does Spotify actually use that concept in practice to calculate millions of users’ suggested tracks based on millions of other users’ preferences?
With matrix math, done with Python libraries!
In actuality, this matrix you see here is gigantic. Each row represents one of Spotify’s 140 million users — if you use Spotify, you yourself are a row in this matrix — and each column represents one of the 30 million songs in Spotify’s database.
Then, the Python library runs this long, complicated matrix factorization formula:
When it finishes, we end up with two types of vectors, represented here by X and Y. X is a user vector, representing one single user’s taste, and Y is a song vector, representing one single song’s profile.
Now we have 140 million user vectors and 30 million song vectors. The actual content of these vectors is just a bunch of numbers that are essentially meaningless on their own, but are hugely useful when compared.
To find out which users’ musical tastes are most similar to mine, collaborative filtering compares my vector with all of the other users’ vectors, ultimately spitting out which users are the closest matches. The same goes for the Y vector, songs: you can compare a single song’s vector with all the others, and find out which songs are most similar to the one in question.
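As a toy illustration of that comparison step (this tiny play-count matrix, the use of a plain SVD, and the rank-2 factorization are all simplifying assumptions of mine, not Spotify’s actual pipeline):

import numpy as np

# Rows are users, columns are songs; entries are implicit feedback (e.g. play counts).
plays = np.array([[3., 0., 5., 1.],
                  [2., 1., 4., 0.],
                  [0., 4., 0., 5.]])

# Factorize into user vectors and song vectors with a rank-2 approximation.
U, s, Vt = np.linalg.svd(plays, full_matrices=False)
user_vectors = U[:, :2] * s[:2]   # one taste vector per user
song_vectors = Vt[:2].T           # one profile vector per song

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Which other user is most similar to user 0? Which song is most similar to song 0?
print([cosine(user_vectors[0], u) for u in user_vectors[1:]])
print([cosine(song_vectors[0], v) for v in song_vectors[1:]])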
Collaborative filtering does a pretty good job, but Spotify knew they could do even better by adding another engine. Enter NLP.
The second type of recommendation models that Spotify employs are Natural Language Processing (NLP) models. The source data for these models, as the name suggests, are regular ol’ words: track metadata, news articles, blogs, and other text around the internet.
Natural Language Processing, the ability of a computer to understand human language as it is spoken or written, is a vast field unto itself, often harnessed through sentiment analysis APIs.
The exact mechanisms behind NLP are beyond the scope of this article, but here’s what happens on a very high level: Spotify crawls the web constantly looking for blog posts and other written text about music to figure out what people are saying about specific artists and songs — which adjectives and what particular language is frequently used in reference to those artists and songs, and which other artists and songs are also being discussed alongside them.
While I don’t know the specifics of how Spotify chooses to then process this scraped data, I can offer some insight based on how the Echo Nest used to work with them. They would bucket Spotify’s data up into what they call “cultural vectors” or “top terms.” Each artist and song had thousands of top terms that changed on the daily. Each term had an associated weight, which correlated to its relative importance — roughly, the probability that someone will describe the music or artist with that term.
Then, much like in collaborative filtering, the NLP model uses these terms and weights to create a vector representation of the song that can be used to determine if two pieces of music are similar. Cool, right?
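A rough sketch of that idea with off-the-shelf tools, where TF-IDF weights stand in for the “top terms” and their weights (my assumption for illustration, not a description of Spotify’s or the Echo Nest’s actual code):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# One blurb of scraped text per artist (made up for illustration).
docs = [
    "dreamy synth pop with lush, ethereal female vocals",
    "hazy dream pop, lush synths and ethereal vocals",
    "aggressive thrash metal with fast, heavy guitar riffs",
]

vectors = TfidfVectorizer().fit_transform(docs)  # one weighted-term vector per artist
print(cosine_similarity(vectors))                # artists 0 and 1 come out most similar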
First, a question. You might be thinking:
First of all, adding a third model further improves the accuracy of the music recommendation service. But this model also serves a secondary purpose: unlike the first two types, raw audio models take new songs into account.
Take, for example, a song your singer-songwriter friend has put up on Spotify. Maybe it only has 50 listens, so there are few other listeners to collaboratively filter it against. It also isn’t mentioned anywhere on the internet yet, so NLP models won’t pick it up. Luckily, raw audio models don’t discriminate between new tracks and popular tracks, so with their help, your friend’s song could end up in a Discover Weekly playlist alongside popular songs!
But how can we analyze raw audio data, which seems so abstract?
With convolutional neural networks!
Convolutional neural networks are the same technology used in facial recognition software. In Spotify’s case, they’ve been modified for use on audio data instead of pixels. Here’s an example of a neural network architecture:
This particular neural network has four convolutional layers, seen as the thick bars on the left, and three dense layers, seen as the more narrow bars on the right. The inputs are time-frequency representations of audio frames, which are then concatenated, or linked together, to form the spectrogram.
The audio frames go through these convolutional layers, and after passing through the last one, you can see a “global temporal pooling” layer, which pools across the entire time axis, effectively computing statistics of the learned features across the time of the song.
After processing, the neural network spits out an understanding of the song, including characteristics like estimated time signature, key, mode, tempo, and loudness. Below is a plot of data for a 30-second snippet of “Around the World” by Daft Punk.
Ultimately, this reading of the song’s key characteristics allows Spotify to understand fundamental similarities between songs and therefore which users might enjoy them, based on their own listening history.
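To make the architecture concrete, here is a toy Keras model in the same spirit; the input shape, layer widths, and output size are placeholders rather than Spotify’s actual network:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, GlobalAveragePooling1D, Dense

model = Sequential([
    # Input: 599 time frames x 128 frequency bins of a spectrogram.
    Conv1D(256, 4, activation='relu', input_shape=(599, 128)),
    MaxPooling1D(4),
    Conv1D(256, 4, activation='relu'),
    MaxPooling1D(2),
    Conv1D(512, 4, activation='relu'),
    Conv1D(512, 4, activation='relu'),
    GlobalAveragePooling1D(),          # "global temporal pooling" across the time axis
    Dense(1024, activation='relu'),
    Dense(1024, activation='relu'),
    Dense(40, activation='sigmoid'),   # e.g. predicted song attributes or latent factors
])
model.compile(optimizer='adam', loss='mse')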
That covers the basics of the three major types of recommendation models feeding Spotify’s Recommendations Pipeline, and ultimately powering the Discover Weekly playlist!
Of course, these recommendation models are all connected to Spotify’s larger ecosystem, which includes giant amounts of data storage and uses lots of Hadoop clusters to scale recommendations and make these engines work on enormous matrices, endless online music articles, and huge numbers of audio files.
I hope this was informative and piqued your curiosity like it did mine. For now, I’ll be working my way through my own Discover Weekly, finding my new favorite music while appreciating all the machine learning that’s going on behind the scenes. 🎶
Thanks also to ladycollective for reading this article and suggesting edits.
Software engineer, writer, and generally creative human. Interested in art, feminism, mindfulness, and authenticity. http://sophiaciocca.com
|
Dhruv Parthasarathy | 4.3K | 12 | https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4?source=tag_archive---------6---------------- | A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN | At Athelas, we use Convolutional Neural Networks(CNNs) for a lot more than just classification! In this post, we’ll see how CNNs can be used, with great results, in image instance segmentation.
Ever since Alex Krizhevsky, Geoff Hinton, and Ilya Sutskever won ImageNet in 2012, Convolutional Neural Networks(CNNs) have become the gold standard for image classification. In fact, since then, CNNs have improved to the point where they now outperform humans on the ImageNet challenge!
While these results are impressive, image classification is far simpler than the complexity and diversity of true human visual understanding.
In classification, there’s generally an image with a single object as the focus and the task is to say what that image is (see above). But when we look at the world around us, we carry out far more complex tasks.
We see complicated sights with multiple overlapping objects and different backgrounds, and we not only classify these different objects but also identify their boundaries, differences, and relations to one another!
Can CNNs help us with such complex tasks? Namely, given a more complicated image, can we use CNNs to identify the different objects in the image, and their boundaries? As has been shown by Ross Girshick and his peers over the last few years, the answer is conclusively yes.
Through this post, we’ll cover the intuition behind some of the main techniques used in object detection and segmentation and see how they’ve evolved from one implementation to the next. In particular, we’ll cover R-CNN (Regional CNN), the original application of CNNs to this problem, along with its descendants Fast R-CNN, and Faster R-CNN. Finally, we’ll cover Mask R-CNN, a paper released recently by Facebook Research that extends such object detection techniques to provide pixel level segmentation. Here are the papers referenced in this post:
Inspired by the research of Hinton’s lab at the University of Toronto, a small team at UC Berkeley, led by Professor Jitendra Malik, asked themselves what today seems like an inevitable question:
Object detection is the task of finding the different objects in an image and classifying them (as seen in the image above). The team, comprised of Ross Girshick (a name we’ll see again), Jeff Donahue, and Trevor Darrell, found that Krizhevsky’s results could be generalized to this problem by testing on the PASCAL VOC Challenge, a popular object detection challenge akin to ImageNet. They write,
Let’s now take a moment to understand how their architecture, Regions With CNNs (R-CNN) works.
Understanding R-CNN
The goal of R-CNN is to take in an image, and correctly identify where the main objects (via a bounding box) in the image.
But how do we find out where these bounding boxes are? R-CNN does what we might intuitively do as well - propose a bunch of boxes in the image and see if any of them actually correspond to an object.
R-CNN creates these bounding boxes, or region proposals, using a process called Selective Search which you can read about here. At a high level, Selective Search (shown in the image above) looks at the image through windows of different sizes, and for each size tries to group together adjacent pixels by texture, color, or intensity to identify objects.
Once the proposals are created, R-CNN warps the region to a standard square size and passes it through to a modified version of AlexNet (the winning submission to ImageNet 2012 that inspired R-CNN), as shown above.
On the final layer of the CNN, R-CNN adds a Support Vector Machine (SVM) that simply classifies whether this is an object, and if so what object. This is step 4 in the image above.
Improving the Bounding Boxes
Now, having found the object in the box, can we tighten the box to fit the true dimensions of the object? We can, and this is the final step of R-CNN. R-CNN runs a simple linear regression on the region proposal to generate tighter bounding box coordinates to get our final result. Here are the inputs and outputs of this regression model:
So, to summarize, R-CNN is just the following steps:
R-CNN works really well, but is quite slow for a few simple reasons:
In 2015, Ross Girshick, the first author of R-CNN, solved both these problems, leading to the second algorithm in our short history - Fast R-CNN. Let’s now go over its main insights.
Fast R-CNN Insight 1: RoI (Region of Interest) Pooling
For the forward pass of the CNN, Girshick realized that for each image, a lot of proposed regions for the image invariably overlapped causing us to run the same CNN computation again and again (~2000 times!). His insight was simple — Why not run the CNN just once per image and then find a way to share that computation across the ~2000 proposals?
This is exactly what Fast R-CNN does using a technique known as RoIPool (Region of Interest Pooling). At its core, RoIPool shares the forward pass of a CNN for an image across its subregions. In the image above, notice how the CNN features for each region are obtained by selecting a corresponding region from the CNN’s feature map. Then, the features in each region are pooled (usually using max pooling). So all it takes us is one pass of the original image as opposed to ~2000!
Fast R-CNN Insight 2: Combine All Models into One Network
The second insight of Fast R-CNN is to jointly train the CNN, classifier, and bounding box regressor in a single model. Where earlier we had different models to extract image features (CNN), classify (SVM), and tighten bounding boxes (regressor), Fast R-CNN instead used a single network to compute all three.
You can see how this was done in the image above. Fast R-CNN replaced the SVM classifier with a softmax layer on top of the CNN to output a classification. It also added a linear regression layer parallel to the softmax layer to output bounding box coordinates. In this way, all the outputs needed came from one single network! Here are the inputs and outputs to this overall model:
Even with all these advancements, there was still one remaining bottleneck in the Fast R-CNN process — the region proposer. As we saw, the very first step to detecting the locations of objects is generating a bunch of potential bounding boxes or regions of interest to test. In Fast R-CNN, these proposals were created using Selective Search, a fairly slow process that was found to be the bottleneck of the overall process.
In mid-2015, a team at Microsoft Research composed of Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun found a way to make the region proposal step almost cost free through an architecture they (creatively) named Faster R-CNN.
The insight of Faster R-CNN was that region proposals depended on features of the image that were already calculated with the forward pass of the CNN (first step of classification). So why not reuse those same CNN results for region proposals instead of running a separate selective search algorithm?
Indeed, this is just what the Faster R-CNN team achieved. In the image above, you can see how a single CNN is used to both carry out region proposals and classification. This way, only one CNN needs to be trained and we get region proposals almost for free! The authors write:
Here are the inputs and outputs of their model:
How the Regions are Generated
Let’s take a moment to see how Faster R-CNN generates these region proposals from CNN features. Faster R-CNN adds a Fully Convolutional Network on top of the features of the CNN creating what’s known as the Region Proposal Network.
The Region Proposal Network works by passing a sliding window over the CNN feature map and at each window, outputting k potential bounding boxes and scores for how good each of those boxes is expected to be. What do these k boxes represent?
Intuitively, we know that objects in an image should fit certain common aspect ratios and sizes. For instance, we know that we want some rectangular boxes that resemble the shapes of humans. Likewise, we know we won’t see many boxes that are very very thin. In such a way, we create k such common aspect ratios we call anchor boxes. For each such anchor box, we output one bounding box and score per position in the image.
With these anchor boxes in mind, let’s take a look at the inputs and outputs to this Region Proposal Network:
We then pass each such bounding box that is likely to be an object into Fast R-CNN to generate a classification and tightened bounding boxes.
So far, we’ve seen how we’ve been able to use CNN features in many interesting ways to effectively locate different objects in an image with bounding boxes.
Can we extend such techniques to go one step further and locate exact pixels of each object instead of just bounding boxes? This problem, known as image segmentation, is what Kaiming He and a team of researchers, including Girshick, explored at Facebook AI using an architecture known as Mask R-CNN.
Much like Fast R-CNN and Faster R-CNN, Mask R-CNN’s underlying intuition is straightforward. Given that Faster R-CNN works so well for object detection, could we extend it to also carry out pixel level segmentation?
Mask R-CNN does this by adding a branch to Faster R-CNN that outputs a binary mask that says whether or not a given pixel is part of an object. The branch (in white in the above image), as before, is just a Fully Convolutional Network on top of a CNN based feature map. Here are its inputs and outputs:
But the Mask R-CNN authors had to make one small adjustment to make this pipeline work as expected.
RoiAlign - Realigning RoIPool to be More Accurate
When run without modifications on the original Faster R-CNN architecture, the Mask R-CNN authors realized that the regions of the feature map selected by RoIPool were slightly misaligned from the regions of the original image. Since image segmentation requires pixel level specificity, unlike bounding boxes, this naturally led to inaccuracies.
The authors were able to solve this problem by cleverly adjusting RoIPool to be more precisely aligned using a method known as RoIAlign.
Imagine we have an image of size 128x128 and a feature map of size 25x25. Let’s imagine we want the features for the region corresponding to the top-left 15x15 pixels in the original image (see above). How might we select these pixels from the feature map?
We know each pixel in the original image corresponds to ~ 25/128 pixels in the feature map. To select 15 pixels from the original image, we just select 15 * 25/128 ~= 2.93 pixels.
In RoIPool, we would round this down and select 2 pixels causing a slight misalignment. However, in RoIAlign, we avoid such rounding. Instead, we use bilinear interpolation to get a precise idea of what would be at pixel 2.93. This, at a high level, is what allows us to avoid the misalignments caused by RoIPool.
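A tiny numerical sketch of that difference (illustrative only, not the actual implementation, which interpolates bilinearly in two dimensions):

import numpy as np

feature_row = np.array([0.10, 0.40, 0.80, 0.20])  # one row of a feature map

def roipool_sample(row, x):
    # RoIPool: round the coordinate down, losing the fractional part.
    return row[int(np.floor(x))]

def roialign_sample(row, x):
    # RoIAlign: interpolate between the two neighboring cells.
    lo = int(np.floor(x))
    frac = x - lo
    return (1 - frac) * row[lo] + frac * row[lo + 1]

print(roipool_sample(feature_row, 2.93))   # 0.8   (treats 2.93 as 2)
print(roialign_sample(feature_row, 2.93))  # 0.242 (a weighted mix of cells 2 and 3)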
Once these masks are generated, Mask R-CNN combines them with the classifications and bounding boxes from Faster R-CNN to generate such wonderfully precise segmentations:
If you’re interested in trying out these algorithms yourselves, here are relevant repositories:
Faster R-CNN
Mask R-CNN
In just 3 years, we’ve seen how the research community has progressed from Krizhevsky et al.’s original result to R-CNN, and finally all the way to such powerful results as Mask R-CNN. Seen in isolation, results like Mask R-CNN seem like incredible leaps of genius that would be unapproachable. Yet, through this post, I hope you’ve seen how such advancements are really the sum of intuitive, incremental improvements through years of hard work and collaboration. Each of the ideas proposed by R-CNN, Fast R-CNN, Faster R-CNN, and finally Mask R-CNN were not necessarily quantum leaps, yet their sum products have led to really remarkable results that bring us closer to a human level understanding of sight.
What particularly excites me, is that the time between R-CNN and Mask R-CNN was just three years! With continued funding, focus, and support, how much further can Computer Vision improve over the next three years?
If you see any errors or issues in this post, please contact me at dhruv@getathelas.com and I’ll immediately correct them!
If you’re interested in applying such techniques, come join us at Athelas where we apply Computer Vision to blood diagnostics daily:
Other posts we’ve written:
Thanks to Bharath Ramsundar, Pranav Ramkrishnan, Tanay Tandon, and Oliver Cameron for help with this post!
@dhruvp. VP Eng @Athelas. MIT Math and CS Undergrad ’13. MIT CS Masters ’14. Previously: Director of AI Programs @ Udacity.
Blood Diagnostics through Deep Learning http://athelas.com
|
Andrej Karpathy | 35K | 8 | https://medium.com/@karpathy/software-2-0-a64152b37c35?source=tag_archive---------7---------------- | Software 2.0 – Andrej Karpathy – Medium | I sometimes see people refer to neural networks as just “another tool in your machine learning toolbox”. They have some pros and cons, they work here or there, and sometimes you can use them to win Kaggle competitions. Unfortunately, this interpretation completely misses the forest for the trees. Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we write software. They are Software 2.0.
The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. It consists of explicit instructions to the computer written by a programmer. By writing each line of code, the programmer identifies a specific point in program space with some desirable behavior.
In contrast, Software 2.0 can be written in much more abstract, human unfriendly language, such as the weights of a neural network. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried).
Instead, our approach is to specify some goal on the behavior of a desirable program (e.g., “satisfy a dataset of input output pairs of examples”, or “win a game of Go”), write a rough skeleton of the code (e.g. a neural net architecture) that identifies a subset of program space to search, and use the computational resources at our disposal to search this space for a program that works. In the specific case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent.
It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data (or more generally, identify a desirable behavior) than to explicitly write the program. In these cases, the programmers will often split into two. The 2.0 programmers manually curate, maintain, massage, clean and label datasets; each labeled example literally programs the final system because the dataset gets compiled into Software 2.0 code via the optimization. Meanwhile, the 1.0 programmers maintain the surrounding tools, analytics, visualizations, labeling interfaces, infrastructure, and the training code.
Let’s briefly examine some concrete examples of this ongoing transition. In each of these areas we’ve seen improvements over the last few years when we give up on trying to address a complex problem by writing explicit code and instead transition the code into the 2.0 stack.
Visual Recognition used to consist of engineered features with a bit of machine learning sprinkled on top at the end (e.g., an SVM). Since then, we discovered much more powerful visual features by obtaining large datasets (e.g. ImageNet) and searching in the space of Convolutional Neural Network architectures. More recently, we don’t even trust ourselves to hand-code the architectures and we’ve begun searching over those as well.
Speech recognition used to involve a lot of preprocessing, gaussian mixture models and hidden markov models, but today consists almost entirely of neural net stuff. A very related, often cited humorous quote attributed to Fred Jelinek from 1985 reads “Every time I fire a linguist, the performance of our speech recognition system goes up”.
Speech synthesis has historically been approached with various stitching mechanisms, but today the state of the art models are large ConvNets (e.g. WaveNet) that produce raw audio signal outputs.
Machine Translation has usually been approached with phrase-based statistical techniques, but neural networks are quickly becoming dominant. My favorite architectures are trained in the multilingual setting, where a single model translates from any source language to any target language, and in weakly supervised (or entirely unsupervised) settings.
Games. Explicitly hand-coded Go playing programs have been developed for a long while, but AlphaGo Zero (a ConvNet that looks at the raw state of the board and plays a move) has now become by far the strongest player of the game. I expect we’re going to see very similar results in other areas, e.g. DOTA 2, or StarCraft.
Databases. More traditional systems outside of Artificial Intelligence are also seeing early hints of a transition. For instance, “The Case for Learned Index Structures” replaces core components of a data management system with a neural network, outperforming cache-optimized B-Trees by up to 70% in speed while saving an order-of-magnitude in memory.
You’ll notice that many of my links above involve work done at Google. This is because Google is currently at the forefront of re-writing large chunks of itself into Software 2.0 code. “One model to rule them all” provides an early sketch of what this might look like, where the statistical strength of the individual domains is amalgamated into one consistent understanding of the world.
Why should we prefer to port complex programs into Software 2.0? Clearly, one easy answer is that they work better in practice. However, there are a lot of other convenient reasons to prefer this stack. Let’s take a look at some of the benefits of Software 2.0 (think: a ConvNet) compared to Software 1.0 (think: a production-level C++ code base). Software 2.0 is:
Computationally homogeneous. A typical neural network is, to the first order, made up of a sandwich of only two operations: matrix multiplication and thresholding at zero (ReLU). Compare that with the instruction set of classical software, which is significantly more heterogeneous and complex. Because you only have to provide a Software 1.0 implementation for a small number of the core computational primitives (e.g. matrix multiply), it is much easier to make various correctness/performance guarantees.
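A minimal sketch of that sandwich (illustrative only):

import numpy as np

def forward(x, weights):
    # A forward pass is just repeated matrix multiplication and thresholding at zero.
    for W in weights:
        x = np.maximum(x @ W, 0)
    return x

layers = [np.random.randn(64, 32), np.random.randn(32, 10)]
print(forward(np.random.randn(1, 64), layers).shape)  # (1, 10)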
Simple to bake into silicon. As a corollary, since the instruction set of a neural network is relatively small, it is significantly easier to implement these networks much closer to silicon, e.g. with custom ASICs, neuromorphic chips, and so on. The world will change when low-powered intelligence becomes pervasive around us. E.g., small, inexpensive chips could come with a pretrained ConvNet, a speech recognizer, and a WaveNet speech synthesis network all integrated in a small protobrain that you can attach to stuff.
Constant running time. Every iteration of a typical neural net forward pass takes exactly the same amount of FLOPS. There is zero variability based on the different execution paths your code could take through some sprawling C++ code base. Of course, you could have dynamic compute graphs but the execution flow is normally still significantly constrained. This way we are also almost guaranteed to never find ourselves in unintended infinite loops.
Constant memory use. Related to the above, there is no dynamically allocated memory anywhere so there is also little possibility of swapping to disk, or memory leaks that you have to hunt down in your code.
It is highly portable. A sequence of matrix multiplies is significantly easier to run on arbitrary computational configurations compared to classical binaries or scripts.
It is very agile. If you had a C++ code and someone wanted you to make it twice as fast (at cost of performance if needed), it would be highly non-trivial to tune the system for the new spec. However, in Software 2.0 we can take our network, remove half of the channels, retrain, and there — it runs exactly at twice the speed and works a bit worse. It’s magic. Conversely, if you happen to get more data/compute, you can immediately make your program work better just by adding more channels and retraining.
Modules can meld into an optimal whole. Our software is often decomposed into modules that communicate through public functions, APIs, or endpoints. However, if two Software 2.0 modules that were originally trained separately interact, we can easily backpropagate through the whole. Think about how amazing it could be if your web browser could automatically re-design the low-level system instructions 10 stacks down to achieve a higher efficiency in loading web pages. With 2.0, this is the default behavior.
It is better than you. Finally, and most importantly, a neural network is a better piece of code than anything you or I can come up with in a large fraction of valuable verticals, which currently at the very least involve anything to do with images/video and sound/speech.
The 2.0 stack also has some of its own disadvantages. At the end of the optimization we’re left with large networks that work well, but it’s very hard to tell how. Across many application areas, we’ll be left with a choice of using a 90% accurate model we understand, or a 99% accurate model we don’t.
The 2.0 stack can fail in unintuitive and embarrassing ways, or worse, it can “silently fail”, e.g., by silently adopting biases in its training data, which are very difficult to properly analyze and examine when their sizes are easily in the millions in most cases.
Finally, we’re still discovering some of the peculiar properties of this stack. For instance, the existence of adversarial examples and attacks highlights the unintuitive nature of this stack.
Software 1.0 is code we write. Software 2.0 is code we do not write, but seems to work well. It is likely that any setting where the program is not obvious but one can repeatedly evaluate the performance of it (e.g. — did you classify some images correctly? do you win games of Go?) will be subject to this transition, because the optimization can find much better code than what we can write.
If you think of neural networks as a new software stack and not just a pretty good classifier, it quickly becomes apparent there is a lot of work to do.
For example, from a systems perspective, in the 1.0 stack LLVM IR forms a middle layer between a number of front ends (languages) and back ends (architectures) and provides an opportunity for optimization. With neural networks we’re already seeing an explosion of front ends for specifying program subsets to search over (PyTorch, TF, Chainer, mxnet, etc) and back ends to run the training (“compilation”) and inference (CPU, GPU, TPU?, IPU?, ...), but what is a fitting IR, and how we can optimize it (Halide-like)?
As another example, we’ve built up a vast amount of tooling that assists humans in writing 1.0 code, like powerful IDEs with features like syntax highlighting, debuggers, profilers, go to def, git integration, etc. Who is going to develop the first powerful Software 2.0 IDEs, which help with all of the workflows in accumulating, visualizing, cleaning, labeling, and sourcing datasets? There is a lot of room for a layer of intelligence assisting the 2.0 programmers, e.g. perhaps the IDE bubbles up images that the network suspects are mislabeled, or assists in labeling, or finds examples where the network is currently uncertain.
Finally, in the long term, the future of Software 2.0 is bright because it is increasingly clear to many that when we develop AGI, it will certainly be written in Software 2.0.
Director of AI at Tesla. Previously Research Scientist at OpenAI and PhD student at Stanford. I like to train deep neural nets on large datasets.
|
Sebastian Heinz | 4.4K | 13 | https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877?source=tag_archive---------8---------------- | A simple deep learning model for stock price prediction using TensorFlow | For a recent hackathon that we did at STATWORX, some of our team members scraped minutely S&P 500 data from the Google Finance API. The data consisted of index as well as stock prices of the S&P’s 500 constituents. Having this data at hand, the idea of developing a deep learning model for predicting the S&P 500 index based on the 500 constituents prices one minute ago came immediately on my mind.
Playing around with the data and building the deep learning model with TensorFlow was fun and so I decided to write my first Medium.com story: a little TensorFlow tutorial on predicting S&P 500 stock prices. What you will read is not an in-depth tutorial, but more a high-level introduction to the important building blocks and concepts of TensorFlow models. The Python code I’ve created is not optimized for efficiency but understandability. The dataset I’ve used can be downloaded from here (40MB).
Our team exported the scraped stock data from our scraping server as a csv file. The dataset contains n = 41266 minutes of data ranging from April to August 2017 on 500 stocks as well as the total S&P 500 index price. Index and stocks are arranged in wide format.
The data was already cleaned and prepared, meaning missing stock and index prices were LOCF’ed (last observation carried forward), so that the file did not contain any missing values.
A quick look at the S&P time series using pyplot.plot(data['SP500']):
Note: This is actually the lead of the S&P 500 index, meaning, its value is shifted 1 minute into the future. This operation is necessary since we want to predict the next minute of the index and not the current minute.
The dataset was split into training and test data. The training data contained 80% of the total dataset. The data was not shuffled but sequentially sliced. The training data ranges from April to approx. the end of July 2017, and the test data ends at the end of August 2017.
There are a lot of different approaches to time series cross validation, such as rolling forecasts with and without refitting or more elaborate concepts such as time series bootstrap resampling. The latter involves repeated samples from the remainder of the seasonal decomposition of the time series in order to simulate samples that follow the same seasonal pattern as the original time series but are not exact copies of its values.
Most neural network architectures benefit from scaling the inputs (sometimes also the output). Why? Because most common activation functions of the network’s neurons such as tanh or sigmoid are defined on the [-1, 1] or [0, 1] interval respectively. Nowadays, rectified linear unit (ReLU) activations are commonly used activations which are unbounded on the axis of possible activation values. However, we will scale both the inputs and targets anyway. Scaling can be easily accomplished in Python using sklearn’s MinMaxScaler.
Remark: Caution must be taken regarding what part of the data is scaled and when. A common mistake is to scale the whole dataset before the training and test split are applied. Why is this a mistake? Because scaling invokes the calculation of statistics, e.g. the min/max of a variable. When performing time series forecasting in real life, you do not have information from future observations at the time of forecasting. Therefore, calculation of scaling statistics has to be conducted on training data and must then be applied to the test data. Otherwise, you use future information at the time of forecasting, which commonly biases forecasting metrics in a positive direction.
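A minimal sketch of that order of operations (the array names and the 80/20 split below are stand-ins for the real data):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.random.rand(1000, 5)            # stand-in for the raw price data
data_train, data_test = data[:800], data[800:]

scaler = MinMaxScaler()
scaler.fit(data_train)                    # min/max statistics from the training set only
data_train = scaler.transform(data_train)
data_test = scaler.transform(data_test)   # no information from the future leaks in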
TensorFlow is a great piece of software and currently the leading deep learning and neural network computation framework. It is based on a C++ low level backend but is usually controlled via Python (there is also a neat TensorFlow library for R, maintained by RStudio). TensorFlow operates on a graph representation of the underlying computational task. This approach allows the user to specify mathematical operations as elements in a graph of data, variables and operators. Since neural networks are actually graphs of data and mathematical operations, TensorFlow is just perfect for neural networks and deep learning. Check out this simple example (stolen from our deep learning introduction from our blog):
In the figure above, two numbers are supposed to be added. Those numbers are stored in two variables, a and b. The two values are flowing through the graph and arrive at the square node, where they are being added. The result of the addition is stored into another variable, c. Actually, a, b and c can be considered as placeholders. Any numbers that are fed into a and b get added and are stored into c. This is exactly how TensorFlow works. The user defines an abstract representation of the model (neural network) through placeholders and variables. Afterwards, the placeholders get "filled" with real data and the actual computations take place. The following code implements the toy example from above in TensorFlow:
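A minimal TF 1.x sketch of that graph; the concrete values fed into a and b are an assumption (any pair summing to 9 matches the description below):

import tensorflow as tf

a = tf.placeholder(dtype=tf.int8)
b = tf.placeholder(dtype=tf.int8)
c = tf.add(a, b)   # the node that adds the two inputs

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 5, b: 4}))  # 9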
After having imported the TensorFlow library, two placeholders are defined using tf.placeholder(). They correspond to the two blue circles on the left of the image above. Afterwards, the mathematical addition is defined via tf.add(). The result of the computation is c = 9. With placeholders set up, the graph can be executed with any integer value for a and b. Of course, the former problem is just a toy example. The required graphs and computations in a neural network are much more complex.
As mentioned before, it all starts with placeholders. We need two placeholders in order to fit our model: X contains the network's inputs (the stock prices of all S&P 500 constituents at time T = t) and Y the network's outputs (the index value of the S&P 500 at time T = t + 1).
The shape of the placeholders correspond to [None, n_stocks] with [None] meaning that the inputs are a 2-dimensional matrix and the outputs are a 1-dimensional vector. It is crucial to understand which input and output dimensions the neural net needs in order to design it properly.
The None argument indicates that at this point we do not yet know the number of observations that flow through the neural net graph in each batch, so we keep it flexible. We will later define the variable batch_size that controls the number of observations per training batch.
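In code, these two placeholders look roughly like this (a sketch consistent with the description above):

import tensorflow as tf

n_stocks = 500
X = tf.placeholder(dtype=tf.float32, shape=[None, n_stocks])  # inputs: constituent prices
Y = tf.placeholder(dtype=tf.float32, shape=[None])            # target: S&P 500 one minute ahead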
Besides placeholders, variables are another cornerstone of the TensorFlow universe. While placeholders are used to store input and target data in the graph, variables are used as flexible containers within the graph that are allowed to change during graph execution. Weights and biases are represented as variables in order to adapt during training. Variables need to be initialized, prior to model training. We will get into that a litte later in more detail.
The model consists of four hidden layers. The first layer contains 1024 neurons, slightly more than double the size of the inputs. Subsequent hidden layers are always half the size of the previous layer, which means 512, 256 and finally 128 neurons. A reduction of the number of neurons for each subsequent layer compresses the information the network identifies in the previous layers. Of course, other network architectures and neuron configurations are possible but are out of scope for this introduction level article.
It is important to understand the required variable dimensions between input, hidden and output layers. As a rule of thumb in multilayer perceptrons (MLPs, the type of networks used here), the second dimension of the previous layer is the first dimension in the current layer for weight matrices. This might sound complicated but is essentially just each layer passing its output as input to the next layer. The biases dimension equals the second dimension of the current layer’s weight matrix, which corresponds the number of neurons in this layer.
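A sketch of the corresponding variables, using the variance-scaling initializer discussed further below (the exact initialization details are an assumption):

# Continues the placeholder sketch above. Note how each weight matrix's first
# dimension equals the previous layer's width (its second dimension).
n_neurons_1, n_neurons_2, n_neurons_3, n_neurons_4 = 1024, 512, 256, 128
initializer = tf.variance_scaling_initializer()

W_hidden_1 = tf.Variable(initializer([n_stocks, n_neurons_1]))
bias_hidden_1 = tf.Variable(tf.zeros([n_neurons_1]))
W_hidden_2 = tf.Variable(initializer([n_neurons_1, n_neurons_2]))
bias_hidden_2 = tf.Variable(tf.zeros([n_neurons_2]))
W_hidden_3 = tf.Variable(initializer([n_neurons_2, n_neurons_3]))
bias_hidden_3 = tf.Variable(tf.zeros([n_neurons_3]))
W_hidden_4 = tf.Variable(initializer([n_neurons_3, n_neurons_4]))
bias_hidden_4 = tf.Variable(tf.zeros([n_neurons_4]))
W_out = tf.Variable(initializer([n_neurons_4, 1]))
bias_out = tf.Variable(tf.zeros([1]))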
After definition of the required weight and bias variables, the network topology, the architecture of the network, needs to be specified. Hereby, placeholders (data) and variables (weighs and biases) need to be combined into a system of sequential matrix multiplications.
Furthermore, the hidden layers of the network are transformed by activation functions. Activation functions are important elements of the network architecture since they introduce non-linearity to the system. There are dozens of possible activation functions out there, one of the most common is the rectified linear unit (ReLU) which will also be used in this model.
The image below illustrates the network architecture. The model consists of three major building blocks. The input layer, the hidden layers and the output layer. This architecture is called a feedforward network. Feedforward indicates that the batch of data solely flows from left to right. Other network architectures, such as recurrent neural networks, also allow data flowing “backwards” in the network.
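Continuing the sketch, the topology is a chain of matrix multiplications passed through ReLU activations, ending in a linear output:

hidden_1 = tf.nn.relu(tf.add(tf.matmul(X, W_hidden_1), bias_hidden_1))
hidden_2 = tf.nn.relu(tf.add(tf.matmul(hidden_1, W_hidden_2), bias_hidden_2))
hidden_3 = tf.nn.relu(tf.add(tf.matmul(hidden_2, W_hidden_3), bias_hidden_3))
hidden_4 = tf.nn.relu(tf.add(tf.matmul(hidden_3, W_hidden_4), bias_hidden_4))
out = tf.squeeze(tf.add(tf.matmul(hidden_4, W_out), bias_out))  # linear output, shape [None]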
The cost function of the network is used to generate a measure of deviation between the network’s predictions and the actual observed training targets. For regression problems, the mean squared error (MSE) function is commonly used. MSE computes the average squared deviation between predictions and targets. Basically, any differentiable function can be implemented in order to compute a deviation measure between predictions and targets.
However, the MSE exhibits certain properties that are advantageous for the general optimization problem to be solved.
The optimizer takes care of the necessary computations that are used to adapt the network’s weight and bias variables during training. Those computations invoke the calculation of so-called gradients, which indicate the direction in which the weights and biases have to be changed during training in order to minimize the network’s cost function. The development of stable and speedy optimizers is a major field in neural network and deep learning research.
Here the Adam Optimizer is used, which is one of the current default optimizers in deep learning development. Adam stands for “Adaptive Moment Estimation” and can be considered as a combination between two other popular optimizers AdaGrad and RMSProp.
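In TF 1.x terms, the cost and the optimizer boil down to two lines (continuing the sketch above):

mse = tf.reduce_mean(tf.squared_difference(out, Y))  # mean squared error vs. the targets
opt = tf.train.AdamOptimizer().minimize(mse)         # Adam computes and applies the gradients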
Initializers are used to initialize the network’s variables before training. Since neural networks are trained using numerical optimization techniques, the starting point of the optimization problem is one the key factors to find good solutions to the underlying problem. There are different initializers available in TensorFlow, each with different initialization approaches. Here, I use the tf.variance_scaling_initializer(), which is one of the default initialization strategies.
Note, that with TensorFlow it is possible to define multiple initialization functions for different variables within the graph. However, in most cases, a unified initialization is sufficient.
After having defined the placeholders, variables, initializers, cost functions and optimizers of the network, the model needs to be trained. Usually, this is done by minibatch training. During minibatch training random data samples of n = batch_size are drawn from the training data and fed into the network. The training dataset gets divided into n / batch_size batches that are sequentially fed into the network. At this point the placeholders X and Y come into play. They store the input and target data and present them to the network as inputs and targets.
A sampled data batch of X flows through the network until it reaches the output layer. There, TensorFlow compares the model’s predictions against the actual observed targets Y in the current batch. Afterwards, TensorFlow conducts an optimization step and updates the network’s parameters, corresponding to the selected learning scheme. After having updated the weights and biases, the next batch is sampled and the process repeats itself. The procedure continues until all batches have been presented to the network. One full sweep over all batches is called an epoch.
The training of the network stops once the maximum number of epochs is reached or another stopping criterion defined by the user applies.
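A sketch of that minibatch loop (the stand-in arrays, batch size, and epoch count below are placeholders for the real scaled training data and settings):

import numpy as np

# Stand-in training data; in the real run these are the scaled price arrays.
X_train = np.random.rand(8000, n_stocks).astype(np.float32)
y_train = np.random.rand(8000).astype(np.float32)

net = tf.Session()
net.run(tf.global_variables_initializer())

batch_size, epochs = 256, 10
for e in range(epochs):
    shuffle = np.random.permutation(len(y_train))      # reshuffle every epoch
    X_train, y_train = X_train[shuffle], y_train[shuffle]
    for start in range(0, len(y_train) - batch_size + 1, batch_size):
        batch_x = X_train[start:start + batch_size]
        batch_y = y_train[start:start + batch_size]
        net.run(opt, feed_dict={X: batch_x, Y: batch_y})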
During the training, we evaluate the network’s predictions on the test set — the data which is not learned, but set aside — for every 5th batch and visualize it. Additionally, the images are exported to disk and later combined into a video animation of the training process (see below). The model quickly learns the shape and location of the time series in the test data and is able to produce an accurate prediction after some epochs. Nice!
One can see that the network rapidly adapts to the basic shape of the time series and continues to learn finer patterns of the data. This also corresponds to the Adam learning scheme that lowers the learning rate during model training in order not to overshoot the optimization minimum. After 10 epochs, we have a pretty close fit to the test data! The final test MSE equals 0.00078 (it is very low, because the target is scaled). The mean absolute percentage error of the forecast on the test set is equal to 5.31%, which is pretty good. Note that this is just a fit to the test data, no actual out of sample metrics in a real world scenario.
Please note that there are tons of ways of further improving this result: design of layers and neurons, choosing different initialization and activation schemes, introduction of dropout layers of neurons, early stopping and so on. Furthermore, different types of deep learning models, such as recurrent neural networks might achieve better performance on this task. However, this is not the scope of this introductory post.
The release of TensorFlow was a landmark event in deep learning research. Its flexibility and performance allow researchers to develop all kinds of sophisticated neural network architectures as well as other ML algorithms. However, flexibility comes at the cost of longer time-to-model cycles compared to higher level APIs such as Keras or MxNet. Nonetheless, I am sure that TensorFlow will become the de-facto standard in neural network and deep learning development in research and practical applications. Many of our customers are already using TensorFlow or starting to develop projects that employ TensorFlow models. Also our data science consultants at STATWORX are heavily using TensorFlow for deep learning and neural net research and development. Let’s see what Google has planned for the future of TensorFlow. One thing that is missing, at least in my opinion, is a neat graphical user interface for designing and developing neural net architectures with a TensorFlow backend. Maybe, this is something Google is already working on ;)
If you have any comments or questions on my first Medium story, feel free to comment below! I will try to answer them. Also, feel free to use my code or share this story with your peers on social platforms of your choice.
Update: I’ve added both the Python script as well as a (zipped) dataset to a Github repository. Feel free to clone and fork.
Lastly, follow me on: Twitter | LinkedIn
CEO @ STATWORX. Doing data science, stats and ML for over a decade. Food, wine and cocktail enthusiast. Check our website: https://www.statworx.com
Highlights from Machine Learning Research, Projects and Learning Materials. From and For ML Scientists, Engineers an Enthusiasts.
|
Netflix Technology Blog | 25K | 13 | https://medium.com/netflix-techblog/artwork-personalization-c589f074ad76?source=tag_archive---------9---------------- | Artwork Personalization at Netflix – Netflix TechBlog – Medium | By Ashok Chandrashekar, Fernando Amat, Justin Basilico and Tony Jebara
For many years, the main goal of the Netflix personalized recommendation system has been to get the right titles in front of each of our members at the right time. With a catalog spanning thousands of titles and a diverse member base spanning over a hundred million accounts, recommending the titles that are just right for each member is crucial. But the job of recommendation does not end there. Why should you care about any particular title we recommend? What can we say about a new and unfamiliar title that will pique your interest? How do we convince you that a title is worth watching? Answering these questions is critical in helping our members discover great content, especially for unfamiliar titles. One avenue to address this challenge is to consider the artwork or imagery we use to portray the titles. If the artwork representing a title captures something compelling to you, then it acts as a gateway into that title and gives you some visual “evidence” for why the title might be good for you. The artwork may highlight an actor that you recognize, capture an exciting moment like a car chase, or contain a dramatic scene that conveys the essence of a movie or TV show. If we present that perfect image on your homepage (and as they say: an image is worth a thousand words), then maybe, just maybe, you will give it a try. This is yet another way Netflix differs from traditional media offerings: we don’t have one product but over 100 million different products, one for each of our members, with personalized recommendations and personalized visuals.
In previous work, we discussed an effort to find the single perfect artwork for each title across all our members. Through multi-armed bandit algorithms, we hunted for the best artwork for a title, say Stranger Things, that would earn the most plays from the largest fraction of our members. However, given the enormous diversity in taste and preferences, wouldn’t it be better if we could find the best artwork for each of our members to highlight the aspects of a title that are specifically relevant to them?
As inspiration, let us explore scenarios where personalization of artwork would be meaningful. Consider the following examples where different members have different viewing histories. On the left are three titles a member watched in the past. To the right of the arrow is the artwork that a member would get for a particular movie that we recommend for them.
Let us consider trying to personalize the image we use to depict the movie Good Will Hunting. Here we might personalize this decision based on how much a member prefers different genres and themes. Someone who has watched many romantic movies may be interested in Good Will Hunting if we show the artwork containing Matt Damon and Minnie Driver, whereas, a member who has watched many comedies might be drawn to the movie if we use the artwork containing Robin Williams, a well-known comedian.
In another scenario, let’s imagine how the different preferences for cast members might influence the personalization of the artwork for the movie Pulp Fiction. A member who watches many movies featuring Uma Thurman would likely respond positively to the artwork for Pulp Fiction that contains Uma. Meanwhile, a fan of John Travolta may be more interested in watching Pulp Fiction if the artwork features John.
Of course, not all the scenarios for personalizing artwork are this clear and obvious. So we don’t enumerate such hand-derived rules but instead rely on the data to tell us what signals to use. Overall, by personalizing artwork we help each title put its best foot forward for every member and thus improve our member experience.
At Netflix, we embrace personalization and algorithmically adapt many aspects of our member experience, including the rows we select for the homepage, the titles we select for those rows, the galleries we display, the messages we send, and so forth. Each new aspect that we personalize has unique challenges; personalizing the artwork we display is no exception and presents its own. One challenge of image personalization is that we can only select a single piece of artwork to represent each title in each place we present it. In contrast, typical recommendation settings let us present multiple selections to a member, where we can subsequently learn about their preferences from the item a member selects. This means that image selection is a chicken-and-egg problem operating in a closed loop: if a member plays a title, it can only come from the image that we decided to present to that member. What we seek to understand is when presenting a specific piece of artwork influenced a member to play (or not play) a title, and when a member would have played it (or not) regardless of which image we presented. Therefore artwork personalization sits on top of the traditional recommendation problem, and the algorithms need to work in conjunction with each other. Of course, to properly learn how to personalize artwork we need to collect a lot of data to find signals that indicate when one piece of artwork is significantly better for a member.
Another challenge is to understand the impact of changing artwork that we show a member for a title between sessions. Does changing artwork reduce recognizability of the title and make it difficult to visually locate the title again, for example if the member was interested before but had not yet watched it? Or, does changing the artwork itself lead the member to reconsider it due to an improved selection? Clearly, if we find better artwork to present to a member we should probably use it; but continuous changes can also confuse people. Changing images also introduces an attribution problem as it becomes unclear which image led a member to be interested in a title.
Next, there is the challenge of understanding how artwork performs in relation to other artwork we select in the same page or session. Maybe a bold close-up of the main character works for a title on a page because it stands out compared to the other artwork. But if every title had a similar image then the page as a whole may not seem as compelling. Looking at each piece of artwork in isolation may not be enough and we need to think about how to select a diverse set of images across titles on a page and across a session. Beyond the artwork for other titles, the effectiveness of the artwork for a title may depend on what other types of evidence and assets (e.g. synopses, trailers, etc.) we also display for that title. Thus, we may need a diverse selection where each can highlight complementary aspects of a title that may be compelling to a member.
To achieve effective personalization, we also need a good pool of artwork for each title. This means that we need several assets where each is engaging, informative and representative of a title to avoid “clickbait”. The set of images for a title also needs to be diverse enough to cover a wide potential audience interested in different aspects of the content. After all, how engaging and informative a piece of artwork is truly depends on the individual seeing it. Therefore, we need to have artwork that highlights not only different themes in a title but also different aesthetics. Our teams of artists and designers strive to create images that are diverse across many dimensions. They also take into consideration the personalization algorithms which will select the images during their creative process for generating artwork.
Finally, there are engineering challenges to personalizing artwork at scale. One challenge is that our member experience is very visual and thus contains a lot of imagery. So using personalized selection for each asset means handling a peak of over 20 million requests per second with low latency. Such a system must be robust: failing to properly render the artwork in our UI significantly degrades the experience. Our personalization algorithm also needs to respond quickly when a title launches, which means rapidly learning to personalize in a cold-start situation. Then, after launch, the algorithm must continuously adapt as the effectiveness of artwork may change over time as both the title evolves through its life cycle and member tastes evolve.
Much of the Netflix recommendation engine is powered by machine learning algorithms. Traditionally, we collect a batch of data on how our members use the service. Then we run a new machine learning algorithm on this batch of data. Next we test this new algorithm against the current production system through an A/B test. An A/B test helps us see if the new algorithm is better than our current production system by trying it out on a random subset of members. Members in group A get the current production experience while members in group B get the new algorithm. If members in group B have higher engagement with Netflix, then we roll out the new algorithm to the entire member population. Unfortunately, this batch approach incurs regret: many members over a long period of time did not benefit from the better experience. This is illustrated in the figure below.
To reduce this regret, we move away from batch machine learning and consider online machine learning. For artwork personalization, the specific online learning framework we use is contextual bandits. Rather than waiting to collect a full batch of data, waiting to learn a model, and then waiting for an A/B test to conclude, contextual bandits rapidly figure out the optimal personalized artwork selection for a title for each member and context. Briefly, contextual bandits are a class of online learning algorithms that trade off the cost of gathering training data required for learning an unbiased model on an ongoing basis with the benefits of applying the learned model to each member context. In our previous unpersonalized image selection work, we used non-contextual bandits where we found the winning image regardless of the context. For personalization, the member is the context as we expect different members to respond differently to the images.
A key property of contextual bandits is that they are designed to minimize regret. At a high level, the training data for a contextual bandit is obtained through the injection of controlled randomization in the learned model’s predictions. The randomization schemes can vary in complexity from simple epsilon-greedy formulations with uniform randomness to closed loop schemes that adaptively vary the degree of randomization as a function of model uncertainty. We broadly refer to this process as data exploration. The number of candidate artworks that are available for a title along with the size of the overall population for which the system will be deployed informs the choice of the data exploration strategy. With such exploration, we need to log information about the randomization for each artwork selection. This logging allows us to correct for skewed selection propensities and thereby perform offline model evaluation in an unbiased fashion, as described later.
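As a rough illustration of what such a data-exploration scheme can look like (not Netflix's actual implementation), a simple epsilon-greedy policy selects an image and logs the propensity with which it was chosen, so that the skew introduced by the policy can be corrected for later:

```python
import random

def select_image(candidate_images, predicted_scores, epsilon=0.1):
    """Epsilon-greedy image selection with propensity logging (illustrative sketch).

    candidate_images: list of image ids available for a title
    predicted_scores: model scores for this member for each candidate image
    Returns the chosen image and the probability with which the policy chose it.
    """
    n = len(candidate_images)
    best = max(range(n), key=lambda i: predicted_scores[i])
    if random.random() < epsilon:
        chosen = random.randrange(n)   # explore: uniform over the candidates
    else:
        chosen = best                  # exploit: the model's top prediction
    # Probability the policy assigns to the chosen image; logging it enables
    # unbiased offline evaluation later (see the replay discussion below).
    propensity = epsilon / n + ((1 - epsilon) if chosen == best else 0.0)
    return candidate_images[chosen], propensity

# Each impression is then logged as (member, title, chosen_image, propensity, outcome).
```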
Exploration in contextual bandits typically has a cost (or regret) due to the fact that our artwork selection in a member session may not use the predicted best image for that session. What impact does this randomization have on the member experience (and consequently on our metrics)? With over a hundred million members, the regret incurred by exploration is typically very small and is amortized across our large member base with each member implicitly helping provide feedback on artwork for a small portion of the catalog. This makes the cost of exploration per member negligible, which is an important consideration when choosing contextual bandits to drive a key aspect of our member experience. Randomization and exploration with contextual bandits would be less suitable if the cost of exploration were high.
Under our online exploration scheme, we obtain a training dataset that records, for each (member, title, image) tuple, whether that selection resulted in a play of the title or not. Furthermore, we can control the exploration such that artwork selections do not change too often. This gives a cleaner attribution of the member’s engagement to specific artwork. We also carefully determine the label for each observation by looking at the quality of engagement to avoid learning a model that recommends “clickbait” images: ones that entice a member to start playing but ultimately result in low-quality engagement.
In this online learning setting, we train our contextual bandit model to select the best artwork for each member based on their context. We typically have up to a few dozen candidate artwork images per title. To learn the selection model, we can consider a simplification of the problem by ranking images for a member independently across titles. Even with this simplification we can still learn member image preferences across titles because, for every image candidate, we have some members who were presented with it and engaged with the title and some members who were presented with it and did not engage. These preferences can be modeled to predict for each (member, title, image) tuple, the probability that the member will enjoy a quality engagement. These can be supervised learning models or contextual bandit counterparts with Thompson Sampling, LinUCB, or Bayesian methods that intelligently balance making the best prediction with data exploration.
In contextual bandits, the context is usually represented as a feature vector provided as input to the model. There are many signals we can use as features for this problem. In particular, we can consider many attributes of the member: the titles they’ve played, the genre of the titles, interactions of the member with the specific title, their country, their language preferences, the device that the member is using, the time of day and the day of week. Since our algorithm selects images in conjunction with our personalized recommendation engine, we can also use signals regarding what our various recommendation algorithms think of the title, irrespective of what image is used to represent it.
An important consideration is that some images are naturally better than others in the candidate pool. We observe the overall take rate for each image in our data exploration, which is simply the number of quality plays divided by the number of impressions. Our previous work on unpersonalized artwork selection used overall differences in take rates to determine the single best image to select for a whole population. In our new contextual personalized model, the overall take rates are still important and personalization still recovers selections that agree on average with the unpersonalized model’s ranking.
The optimal assignment of image artwork to a member is a selection problem to find the best candidate image from a title’s pool of available images. Once the model is trained as above, we use it to rank the images for each context. The model predicts the probability of play for a given image in a given member context. We sort a candidate set of images by these probabilities and pick the one with the highest probability. That is the image we present to that particular member.
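In code, this serving-time step amounts to scoring every candidate image for the member's context and taking the argmax — for example (a sketch with assumed names and an assumed model interface, not the production system):

```python
def pick_artwork(model, member_context, title, candidate_images):
    """Rank a title's candidate images for one member and return the top one (sketch).

    model.predict_play_probability is an assumed interface: it returns the
    predicted probability of a quality play for a (member, title, image) tuple.
    """
    scores = {image: model.predict_play_probability(member_context, title, image)
              for image in candidate_images}
    return max(scores, key=scores.get)
```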
To evaluate our contextual bandit algorithms prior to deploying them online on real members, we can use an offline technique known as replay [1]. This method allows us to answer counterfactual questions based on the logged exploration data (Figure 1). In other words, we can compare offline what would have happened in historical sessions under different scenarios if we had used different algorithms in an unbiased way.
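A minimal version of the replay idea, following Li et al. [1], keeps only the logged impressions on which the new policy happens to agree with the historical, randomized selection and averages the observed outcome — here the take fraction — over those matches. The record fields and function names below are assumptions, not the production pipeline:

```python
def replay_take_fraction(logged_impressions, new_policy):
    """Offline replay evaluation of an image-selection policy (illustrative sketch).

    logged_impressions: records with fields context, candidates, shown_image, play (0/1),
                        collected under uniform exploration; with non-uniform exploration
                        the logged propensities would be used to reweight the matches.
    new_policy: function(context, candidates) -> chosen image
    """
    matched, plays = 0, 0
    for imp in logged_impressions:
        if new_policy(imp.context, imp.candidates) == imp.shown_image:
            matched += 1
            plays += imp.play
    # Take fraction the new policy would have achieved on the matched subset
    return plays / matched if matched else float("nan")
```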
Replay allows us to see how members would have engaged with our titles if we had hypothetically presented images that were selected through a new algorithm rather than the algorithm used in production. For images, we are interested in several metrics, particularly the take fraction, as described above. Figure 2 shows how the contextual bandit approach helps increase the average take fraction across the catalog compared to random selection or non-contextual bandits.
After experimenting with many different models offline and finding ones that had a substantial increase in replay, we ultimately ran an A/B test to compare the most promising personalized contextual bandits against unpersonalized bandits. As we suspected, the personalization worked and generated a significant lift in our core metrics. We also saw a reasonable correlation between what we measured offline in replay and what we saw online with the models. The online results also produced some interesting insights. For example, the improvement of personalization was larger in cases where the member had no prior interaction with the title. This makes sense because we would expect that the artwork would be more important to someone when a title is less familiar.
With this approach, we’ve taken our first steps in personalizing the selection of artwork for our recommendations and across our service. This has resulted in a meaningful improvement in how our members discover new content... so we’ve rolled it out to everyone! This project is the first instance of personalizing not just what we recommend but also how we recommend to our members. But there are many opportunities to expand and improve this initial approach. These opportunities include developing algorithms to handle cold-start by personalizing new images and new titles as quickly as possible, for example by using techniques from computer vision. Another opportunity is extending this personalization approach across other types of artwork we use and other evidence that describe our titles such as synopses, metadata, and trailers. There is also an even broader problem: helping artists and designers figure out what new imagery we should add to the set to make a title even more compelling and personalizable.
If these types of challenges interest you, please let us know! We are always looking for great people to join our team, and, for these types of projects, we are especially excited by candidates with machine learning and/or computer vision expertise.
[1] L. Li, W. Chu, J. Langford, and X. Wang, “Unbiased Offline Evaluation of Contextual-bandit-based News Article Recommendation Algorithms,” in Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, New York, NY, USA, 2011, pp. 297–306.
|
Michael Jordan | 34K | 16 | https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7?source=tag_archive---------0---------------- | Artificial Intelligence — The Revolution Hasn’t Happened Yet | Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us.
There is a different narrative that one can tell about the current era. Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”
We didn’t do the amniocentesis, and a healthy girl was born a few months later. But the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. And this happened day after day until it somehow got fixed. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other places and times. The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.
I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills.
Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.
While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.
Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.
And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design.
The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.
Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems.
This confluence of ideas and technology trends has been rebranded as “AI” over the past few years. This rebranding is worthy of some scrutiny.
Historically, the phrase “AI” was coined in the late 1950’s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. We will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially intelligent entity should seem to be one of us, if not physically at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory and control theory already existed, and were often inspired by human intelligence (and animal intelligence), these fields were arguably focused on “low-level” signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. “AI” was meant to focus on something different — the “high-level” or “cognitive” capability of humans to “reason” and to “think.” Sixty years later, however, high-level reasoning and thought remain elusive. The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions.
Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.
Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the case of the Apollo spaceships, these ideas have often been hidden behind the scenes, and have been the handiwork of researchers focused on specific engineering challenges. Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon.
One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play.
The past two decades have seen major progress — in industry and academia — in a complementary aspiration to human-imitative AI that is often referred to as “Intelligence Augmentation” (IA). Here computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists. While services of this kind could conceivably involve high-level reasoning and thought, currently they don’t — they mostly perform various kinds of string-matching and numerical operations that capture patterns that humans can make use of.
Hoping that the reader will tolerate one last acronym, let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits.
For example, returning to my personal anecdote, we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc — these problems should properly be viewed as challenges, not show-stoppers.
We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering.
Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system (an II system) will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction.
As for the necessity argument, it is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist? Even more polemically: if our goal was to build chemical factories, should we have first created an artificial chemist who would have then worked out how to build a chemical factory?
A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda.
It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms.
Of course, classical human-imitative AI problems remain of great interest as well. However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.
IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems. And we will want computers to trigger new levels of human creativity, not replace human creativity (whatever that might mean).
It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.)
But we need to move beyond the particular historical perspectives of McCarthy and Wiener.
We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II.
This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives. Moreover, in this understanding and shaping there is a need for a diverse set of voices from all walks of life, not merely a dialog among the technologically attuned. Focusing narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard.
While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences and the humanities.
On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope — society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the data-focused and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be viewed as constituting an engineering discipline.
Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be.
In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.
I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead.
Michael I. Jordan
Michael I. Jordan is a Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley.
|
Blaise Aguera y Arcas | 8.7K | 15 | https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477?source=tag_archive---------1---------------- | Do algorithms reveal sexual orientation or just expose our stereotypes? | by Blaise Agüera y Arcas, Alexander Todorov and Margaret Mitchell
A study claiming that artificial intelligence can infer sexual orientation from facial images caused a media uproar in the Fall of 2017. The Economist featured this work on the cover of their September 9th magazine; on the other hand, two major LGBTQ organizations, The Human Rights Campaign and GLAAD, immediately labeled it “junk science”. Michal Kosinski, who co-authored the study with fellow researcher Yilun Wang, initially expressed surprise, calling the critiques “knee-jerk” reactions. However, he then proceeded to make even bolder claims: that such AI algorithms will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone.
Kosinski’s controversial claims are nothing new. Last year, two computer scientists from China posted a non-peer-reviewed paper online in which they argued that their AI algorithm correctly categorizes “criminals” with nearly 90% accuracy from a government ID photo alone. Technology startups had also begun to crop up, claiming that they can profile people’s character from their facial images. These developments had prompted the three of us to collaborate earlier in the year on a Medium essay, Physiognomy’s New Clothes, to confront claims that AI face recognition reveals deep character traits. We described how the junk science of physiognomy has roots going back into antiquity, with practitioners in every era resurrecting beliefs based on prejudice using the new methodology of the age. In the 19th century this included anthropology and psychology; in the 20th, genetics and statistical analysis; and in the 21st, artificial intelligence.
In late 2016, the paper motivating our physiognomy essay seemed well outside the mainstream in tech and academia, but as in other areas of discourse, what recently felt like a fringe position must now be addressed head on. Kosinski is a faculty member of Stanford’s Graduate School of Business, and this new study has been accepted for publication in the respected Journal of Personality and Social Psychology. Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid. We will focus on the science.
The authors trained and tested their “sexual orientation detector” using 35,326 images from public profiles on a US dating website. Composite images of the lesbian, gay, and straight men and women in the sample reveal a great deal about the information available to the algorithm:
Clearly there are differences between these four composite faces. Wang and Kosinski assert that the key differences are in physiognomy, meaning that a sexual orientation tends to go along with a characteristic facial structure. However, we can immediately see that some of these differences are more superficial. For example, the “average” straight woman appears to wear eyeshadow, while the “average” lesbian does not. Glasses are clearly visible on the gay man, and to a lesser extent on the lesbian, while they seem absent in the heterosexual composites. Might it be the case that the algorithm’s ability to detect orientation has little to do with facial structure, but is due rather to patterns in grooming, presentation and lifestyle?
We conducted a survey of 8,000 Americans using Amazon’s Mechanical Turk crowdsourcing platform to see if we could independently confirm these patterns, asking 77 yes/no questions such as “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The results show that lesbians indeed use eyeshadow much less than straight women do, gay men and women do both wear glasses more, and young opposite-sex-attracted men are considerably more likely to have prominent facial hair than their gay or same-sex-attracted peers.
Breaking down the answers by the age of the respondent can provide a richer and clearer view of the data than any single statistic. In the following figures, we show the proportion of women who answer “yes” to “Do you ever use makeup?” (top) and “Do you wear eyeshadow?” (bottom), averaged over 6-year age intervals:
The blue curves represent strictly opposite-sex attracted women (a nearly identical set to those who answered “yes” to “Are you heterosexual or straight?”); the cyan curve represents women who answer “yes” to either or both of “Are you sexually attracted to women?” and “Are you romantically attracted to women?”; and the red curve represents women who answer “yes” to “Are you homosexual, gay or lesbian?”. [1] The shaded regions around each curve show 68% confidence intervals. [2] The patterns revealed here are intuitive; it won’t be breaking news to most that straight women tend to wear more makeup and eyeshadow than same-sex attracted and (even more so) lesbian-identifying women. On the other hand these curves also show us how often these stereotypes are violated.
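For readers who want to reproduce this kind of figure, each curve amounts to a "yes" proportion per age bin with a one-standard-error band. A minimal sketch follows; it uses a plain binomial standard error for the 68% interval, which may differ from the exact method behind the published plots, and all names are placeholders:

```python
import numpy as np

def binned_proportion(ages, answers, bin_width=6, age_min=18, age_max=66):
    """Proportion answering "yes" in fixed-width age bins, with ~68% intervals (sketch).

    ages: array of respondent ages; answers: array of 0/1 survey responses.
    Returns bin centers, proportions, and one-standard-error half-widths.
    """
    edges = np.arange(age_min, age_max + bin_width, bin_width)
    centers, props, errs = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (ages >= lo) & (ages < hi)
        n = mask.sum()
        if n == 0:
            continue
        p = answers[mask].mean()
        centers.append((lo + hi) / 2)
        props.append(p)
        errs.append(np.sqrt(p * (1 - p) / n))  # ~68% band = +/- 1 standard error
    return np.array(centers), np.array(props), np.array(errs)
```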
That same-sex attracted men of most ages wear glasses significantly more than exclusively opposite-sex attracted men do might be a bit less obvious, but this trend is equally clear: [3]
A proponent of physiognomy might be tempted to guess that this is somehow related to differences in visual acuity between these populations of men. However, asking the question “Do you like how you look in glasses?” reveals that this is likely more of a stylistic choice:
Same-sex attracted women also report wearing glasses more, as well as liking how they look in glasses more, across a range of ages:
One can also see how opposite-sex attracted women under the age of 40 wear contact lenses significantly more than same-sex attracted women, despite reporting that they have a vision defect at roughly the same rate, further illustrating how the difference is driven by an aesthetic preference: [4]
Similar analysis shows that young same-sex attracted men are much less likely to have hairy faces than opposite-sex attracted men (“serious facial hair” in our plots is defined as answering “yes” to having a goatee, beard, or moustache, but “no” to stubble). Overall, opposite-sex attracted men in our sample are 35% more likely to have serious facial hair than same-sex attracted men, and for men under the age of 31 (who are overrepresented on dating websites), this rises to 75%.
Wang and Kosinski speculate in their paper that the faintness of the beard and moustache in their gay male composite might be connected with prenatal underexposure to androgens (male hormones), resulting in a feminizing effect, hence sparser facial hair. The fact that we see a cohort of same-sex attracted men in their 40s who have just as much facial hair as opposite-sex attracted men suggests a different story, in which fashion trends and cultural norms play the dominant role in choices about facial hair among men, not differing exposure to hormones early in development.
The authors of the paper additionally note that the heterosexual male composite appears to have darker skin than the other three composites. Our survey confirms that opposite-sex attracted men consistently self-report having a tan face (“Yes” to “Is your face tan?”) slightly more often than same-sex attracted men:
Once again Wang and Kosinski reach for a hormonal explanation, writing: “While the brightness of the facial image might be driven by many factors, previous research found that testosterone stimulates melanocyte structure and function leading to a darker skin”. However, a simpler answer is suggested by the responses to the question “Do you work outdoors?”:
Overall, opposite-sex attracted men are 29% more likely to work outdoors, and among men under 31, this rises to 39%. Previous research has found that increased exposure to sunlight leads to darker skin! [5]
None of these results prove that there is no physiological basis for sexual orientation; in fact ample evidence shows us that orientation runs much deeper than a choice or a “lifestyle”. In a critique aimed in part at fraudulent “conversion therapy” programs, United States Surgeon General David Satcher wrote in a 2001 report, “Sexual orientation is usually determined by adolescence, if not earlier [...], and there is no valid scientific evidence that sexual orientation can be changed”. It follows that if we dig deeply enough into human physiology and neuroscience we will eventually find reliable correlates and maybe even the origins of sexual orientation. In our survey we also find some evidence of outwardly visible correlates of orientation that are not cultural: perhaps most strikingly, very tall women are overrepresented among lesbian-identifying respondents. [6] However, while this is interesting, it’s very far from a good predictor of women’s sexual orientation. Makeup and eyeshadow do much better.
The way Wang and Kosinski measure the efficacy of their “AI gaydar” is equivalent to choosing a straight and a gay or lesbian face image, both from data “held out” during the training process, and asking how often the algorithm correctly guesses which is which. 50% performance would be no better than random chance. For women, guessing that the taller of the two is the lesbian achieves only 51% accuracy — barely above random chance. This is because, despite the statistically meaningful overrepresentation of tall women among the lesbian population, the great majority of lesbians are not unusually tall.
By contrast, the performance measures in the paper, 81% for gay men and 71% for lesbian women, seem impressive. [7] Consider, however, that we can achieve comparable results with trivial models based only on a handful of yes/no survey questions about presentation. For example, for pairs of women, one of whom is lesbian, the following not-exactly-superhuman algorithm is on average 63% accurate: if neither or both women wear eyeshadow, flip a coin; otherwise guess that the one who wears eyeshadow is straight, and the other lesbian. Adding six more yes/no questions about presentation (“Do you ever use makeup?”, “Do you have long hair?”, “Do you have short hair?”, “Do you ever use colored lipstick?”, “Do you like how you look in glasses?”, and “Do you work outdoors?”) as additional signals raises the performance to 70%. [8] Given how many more details about presentation are available in a face image, 71% performance no longer seems so impressive.
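To make this evaluation protocol concrete, here is a sketch of the pairwise accuracy measure and the one-question eyeshadow rule described above; the survey records and field names are assumed placeholders:

```python
import random

def guess_straight_by_eyeshadow(woman_a, woman_b):
    """Guess which of two women is straight using only the eyeshadow answer (sketch).

    Each argument is a dict such as {"eyeshadow": True, "lesbian": False}.
    Ties (neither or both wear eyeshadow) are decided by a coin flip.
    """
    if woman_a["eyeshadow"] == woman_b["eyeshadow"]:
        return random.choice([woman_a, woman_b])
    return woman_a if woman_a["eyeshadow"] else woman_b

def pairwise_accuracy(pairs, guess_straight):
    """Fraction of (straight, lesbian) pairs in which the rule picks the straight woman."""
    correct = sum(1 for straight, lesbian in pairs
                  if guess_straight(straight, lesbian) is straight)
    return correct / len(pairs)

# The paper's 71% / 81% figures use the same pairwise setup, with a
# face-image classifier in place of the survey answers.
```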
Several studies, including a recent one in the Journal of Sex Research, have shown that human judges’ “gaydar” is no more reliable than a coin flip when the judgement is based on pictures taken under well-controlled conditions (head pose, lighting, glasses, makeup, etc.). It’s better than chance if these variables are not controlled for, because a person’s presentation — especially if that person is out — involves social signaling. We signal our orientation and many other kinds of status, presumably in order to attract the kind of attention we want and to fit in with people like us. [9]
Wang and Kosinski argue against this interpretation on the grounds that their algorithm works on Facebook selfies of openly gay men as well as dating website selfies. The issue, however, is not whether the images come from a dating website or Facebook, but whether they are self-posted or taken under standardized conditions. Most people present themselves in ways that have been calibrated over many years of media consumption, observing others, looking in the mirror, and gauging social reactions. In one of the earliest “gaydar” studies using social media, participants could categorize gay men with about 58% accuracy; but when the researchers used Facebook images of gay and heterosexual men posted by their friends (still far from a perfect control), the accuracy dropped to 52%.
If subtle biases in image quality, expression, and grooming can be picked up on by humans, these biases can also be detected by an AI algorithm. While Wang and Kosinski acknowledge grooming and style, they believe that the chief differences between their composite images relate to face shape, arguing that gay men’s faces are more “feminine” (narrower jaws, longer noses, larger foreheads) while lesbian faces are more “masculine” (larger jaws, shorter noses, smaller foreheads). As with less facial hair on gay men and darker skin on straight men, they suggest that the mechanism is gender-atypical hormonal exposure during development. This echoes a widely discredited 19th century model of homosexuality, “sexual inversion”.
More likely, heterosexual men tend to take selfies from slightly below, which will have the apparent effect of enlarging the chin, shortening the nose, shrinking the forehead, and attenuating the smile (see our selfies below). This view emphasizes dominance — or, perhaps more benignly, an expectation that the viewer will be shorter. On the other hand, as a wedding photographer notes in her blog, “when you shoot from above, your eyes look bigger, which is generally attractive — especially for women.” This may be a heteronormative assessment.
When a face is photographed from below, the nostrils are prominent, while higher shooting angles de-emphasize and eventually conceal them altogether. Looking again at the composite images, we can see that the heterosexual male face has more pronounced dark spots corresponding to the nostrils than the gay male, while the opposite is true for the female faces. This is consistent with a pattern of heterosexual men on average shooting from below, heterosexual women from above as the wedding photographer suggests, and gay men and lesbian women from directly in front. A similar pattern is evident in the eyebrows: shooting from above makes them look more V-shaped, but their apparent shape becomes flatter, and eventually caret-shaped (^) as the camera is lowered. Shooting from below also makes the outer corners of the eyes appear lower. In short, the changes in the average positions of facial landmarks are consistent with what we would expect to see from differing selfie angles.
The ambiguity between shooting angle and the real physical sizes of facial features is hard to fully disentangle from a two-dimensional image, both for a human viewer and for an algorithm. Although the authors are using face recognition technology designed to try to cancel out all effects of head pose, lighting, grooming, and other variables not intrinsic to the face, we can confirm that this doesn’t work perfectly; that’s why multiple distinct images of a person help when grouping photos by subject in Google Photos, and why a person may initially appear in more than one group.
Tom White, a researcher at Victoria University in New Zealand, has experimented with the same facial recognition engine Kosinski and Wang use (VGG Face), and has found that its output varies systematically based on variables like smiling and head pose. When he trains a classifier based on VGG Face’s output to distinguish a happy expression from a neutral one, it gets the answer right 92% of the time — which is significant, given that the heterosexual female composite has a much more pronounced smile. Changes in head pose might be even more reliably detectable; for 576 test images, a classifier is able to pick out the ones facing to the right with 100% accuracy.
In summary, we have shown how the obvious differences between lesbian or gay and straight faces in selfies relate to grooming, presentation, and lifestyle — that is, differences in culture, not in facial structure. Based on the evidence above, these differences include makeup and eyeshadow, facial hair, glasses, selfie angle, and the amount of time spent in the sun.
We’ve demonstrated that just a handful of yes/no questions about these variables can do nearly as good a job at guessing orientation as supposedly sophisticated facial recognition AI. Further, the current generation of facial recognition remains sensitive to head pose and facial expression. Therefore — at least at this point — it’s hard to credit the notion that this AI is in some way superhuman at “outing” us based on subtle but unalterable details of our facial structure.
This doesn’t negate the privacy concerns the authors and various commentators have raised, but it emphasizes that such concerns relate less to AI per se than to mass surveillance, which is troubling regardless of the technologies used (even when, as in the days of the Stasi in East Germany, these were nothing but paper files and audiotapes). Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place.
We are hopeful about the confluence of new, powerful AI technologies with social science, but not because we believe in reviving the 19th century research program of inferring people’s inner character from their outer appearance. Rather, we believe AI is an essential tool for understanding patterns in human culture and behavior. It can expose stereotypes inherent in everyday language. It can reveal uncomfortable truths, as in Google’s work with the Geena Davis Institute, where our face gender classifier established that men are seen and heard nearly twice as often as women in Hollywood movies (yet female-led films outperform others at the box office!). Making social progress and holding ourselves to account is more difficult without such hard evidence, even when it only confirms our suspicions.
Two of us (Margaret Mitchell and Blaise Agüera y Arcas) are research scientists specializing in machine learning and AI at Google; Agüera y Arcas leads a team that includes deep learning applied to face recognition, and powers face grouping in Google Photos. Alex Todorov is a professor in the Psychology Department at Princeton, where he directs the social perception lab. He is the author of Face Value: The Irresistible Influence of First Impressions.
[1] This wording is based on several large national surveys, which we were able to use to sanity-check our numbers. About 6% of respondents identified as “homosexual, gay or lesbian” and 85% as “heterosexual”. About 4% (of all genders) were exclusively same-sex attracted. Of the men, 10% were either sexually or romantically same-sex attracted, and of the women, 20%. Just under 1% of respondents were trans, and about 2% identified with both or neither of the pronouns “she” and “he”. These numbers are broadly consistent with other surveys, especially when considered as a function of age. The Mechanical Turk population skews somewhat younger than the overall population of the US, and consistent with other studies, our data show that younger people are far more likely to identify non-heteronormatively.
[2] These are wider for same-sex attracted and lesbian women because they are minority populations, resulting in a larger sampling error. The same holds for older people in our sample.
[3] For the remainder of the plots we stick to opposite-sex attracted and same-sex attracted, as the counts are higher and the error bars therefore smaller; these categories are also somewhat less culturally freighted, since they rely on questions about attraction rather than identity. As with eyeshadow and makeup, the effects are similar and often even larger when comparing heterosexual-identifying with lesbian- or gay-identifying people.
[4] Although we didn’t test this explicitly, slightly different rates of laser correction surgery seem a likely cause of the small but growing disparity between opposite-sex attracted and same-sex attracted women who answer “yes” to the vision defect questions as they age.
[5] This finding may prompt the further question, “Why do more opposite-sex attracted men work outdoors?” This is not addressed by any of our survey questions, but hopefully the other evidence presented here will discourage an essentialist assumption such as “straight men are just more outdoorsy” without the evidence of a controlled study that can support the leap from correlation to cause. Such explanations are a form of logical fallacy sometimes called a just-so story: “an unverifiable narrative explanation for a cultural practice”.
[6] Of the 253 lesbian-identified women in the sample, 5, or 2%, were over six feet, and 25, or 10%, were over 5’9”. Out of 3,333 heterosexual women (women who answered “yes” to “Are you heterosexual or straight?”), only 16, or 0.5%, were over six feet, and 152, or 5%, were over 5’9”.
[7] They note that these figures rise to 91% for men and 83% for women if 5 images are considered.
[8] These results are based on the simplest possible machine learning technique, a linear classifier. The classifier is trained on a randomly chosen 70% of the data, with the remaining 30% of the data held out for testing. Over 500 repetitions of this procedure, the error is 69.53% ± 2.98%. With the same number of repetitions and holdout, basing the decision on height alone gives an error of 51.08% ± 3.27%, and basing it on eyeshadow alone yields 62.96% ± 2.39%.
[9] A longstanding body of work, e.g. Goffman’s The Presentation of Self in Everyday Life (1959) and Jones and Pittman’s Toward a General Theory of Strategic Self-Presentation (1982), delves more deeply into why we present ourselves the way we do, both for instrumental reasons (status, power, attraction) and because our presentation informs and is informed by how we conceive of our social selves.
Blaise Aguera y Arcas leads Google’s AI group in Seattle. He founded Seadragon, and was one of the creators of Photosynth at Microsoft.
|
James Le | 18.4K | 11 | https://towardsdatascience.com/a-tour-of-the-top-10-algorithms-for-machine-learning-newbies-dde4edffae11?source=tag_archive---------2---------------- | A Tour of The Top 10 Algorithms for Machine Learning Newbies | In machine learning, there’s something called the “No Free Lunch” theorem. In a nutshell, it states that no one algorithm works best for every problem, and it’s especially relevant for supervised learning (i.e. predictive modeling).
For example, you can’t say that neural networks are always better than decision trees or vice-versa. There are many factors at play, such as the size and structure of your dataset.
As a result, you should try many different algorithms for your problem, while using a hold-out “test set” of data to evaluate performance and select the winner.
Of course, the algorithms you try must be appropriate for your problem, which is where picking the right machine learning task comes in. As an analogy, if you need to clean your house, you might use a vacuum, a broom, or a mop, but you wouldn’t bust out a shovel and start digging.
However, there is a common principle that underlies all supervised machine learning algorithms for predictive modeling.
That principle is learning a target function (f) that best maps input variables (X) to an output variable (Y): Y = f(X). This is a general learning task where we would like to make predictions in the future (Y) given new examples of input variables (X). We don't know what the function (f) looks like or what form it takes; if we did, we would use it directly and would not need to learn it from data using machine learning algorithms.
The most common type of machine learning is to learn the mapping Y = f(X) to make predictions of Y for new X. This is called predictive modeling or predictive analytics and our goal is to make the most accurate predictions possible.
For machine learning newbies who are eager to understand the basics of machine learning, here is a quick tour of the top 10 machine learning algorithms used by data scientists.
Linear regression is perhaps one of the most well-known and well-understood algorithms in statistics and machine learning.
Predictive modeling is primarily concerned with minimizing the error of a model or making the most accurate predictions possible, at the expense of explainability. We will borrow, reuse and steal algorithms from many different fields, including statistics, and use them towards these ends.
The representation of linear regression is an equation that describes a line that best fits the relationship between the input variables (x) and the output variables (y), by finding specific weightings for the input variables called coefficients (B).
For example: y = B0 + B1 * x
We will predict y given the input x and the goal of the linear regression learning algorithm is to find the values for the coefficients B0 and B1.
Different techniques can be used to learn the linear regression model from data, such as a linear algebra solution for ordinary least squares and gradient descent optimization.
Linear regression has been around for more than 200 years and has been extensively studied. Some good rules of thumb when using this technique are to remove variables that are very similar (correlated) and to remove noise from your data, if possible. It is a fast and simple technique and a good first algorithm to try.
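To make this concrete, here's a minimal scikit-learn sketch of fitting a simple linear regression. The toy data and numbers below are invented purely for illustration:

```python
# A minimal linear regression sketch (assumes NumPy and scikit-learn are installed).
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented toy data: y is roughly 2*x + 1 plus a little noise
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([3.1, 4.9, 7.2, 9.1, 10.8])

model = LinearRegression()
model.fit(X, y)                       # learns the coefficients B0 (intercept) and B1 (slope)
print(model.intercept_, model.coef_)  # roughly 1 and 2
print(model.predict([[6.0]]))         # prediction for a new input x = 6
```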
Logistic regression is another technique borrowed by machine learning from the field of statistics. It is the go-to method for binary classification problems (problems with two class values).
Logistic regression is like linear regression in that the goal is to find the values for the coefficients that weight each input variable. Unlike linear regression, the prediction for the output is transformed using a non-linear function called the logistic function.
The logistic function looks like a big S and will transform any value into the range 0 to 1. This is useful because we can apply a rule to the output of the logistic function to snap values to 0 and 1 (e.g. IF less than 0.5 then output 1) and predict a class value.
Because of the way that the model is learned, the predictions made by logistic regression can also be used as the probability of a given data instance belonging to class 0 or class 1. This can be useful for problems where you need to give more rationale for a prediction.
Like linear regression, logistic regression works better when you remove attributes that are unrelated to the output variable as well as attributes that are very similar (correlated) to each other. It's a fast model to learn and effective on binary classification problems.
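As an illustrative sketch (with toy data invented for the example), fitting a logistic regression and reading off class probabilities looks like this in scikit-learn:

```python
# A minimal binary classification sketch with logistic regression; the data is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])      # two class values

clf = LogisticRegression()
clf.fit(X, y)
print(clf.predict([[2.9]]))           # predicted class (0 or 1)
print(clf.predict_proba([[2.9]]))     # class probabilities from the logistic function
```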
Logistic Regression is a classification algorithm traditionally limited to only two-class classification problems. If you have more than two classes then the Linear Discriminant Analysis algorithm is the preferred linear classification technique.
The representation of LDA is pretty straightforward. It consists of statistical properties of your data, calculated for each class. For a single input variable, this includes the mean value for each class and the variance calculated across all classes.
Predictions are made by calculating a discriminant value for each class and making a prediction for the class with the largest value. The technique assumes that the data has a Gaussian distribution (bell curve), so it is a good idea to remove outliers from your data beforehand. It's a simple and powerful method for classification predictive modeling problems.
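Here's a minimal sketch using scikit-learn's implementation, with the built-in Iris dataset as stand-in data:

```python
# A minimal Linear Discriminant Analysis sketch (assumes roughly Gaussian inputs).
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)              # estimates the per-class means and shared variance
print(lda.predict(X[:5]))  # the class with the largest discriminant value wins
```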
Decision Trees are an important type of algorithm for predictive modeling machine learning.
The representation of the decision tree model is a binary tree. This is your binary tree from algorithms and data structures, nothing too fancy. Each node represents a single input variable (x) and a split point on that variable (assuming the variable is numeric).
The leaf nodes of the tree contain an output variable (y) which is used to make a prediction. Predictions are made by walking the splits of the tree until arriving at a leaf node and outputting the class value at that leaf node.
Trees are fast to learn and very fast for making predictions. They are also often accurate for a broad range of problems and do not require any special preparation for your data.
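A minimal scikit-learn sketch, again on the Iris dataset as stand-in data (the depth limit is an illustrative choice, not a recommendation):

```python
# A minimal decision tree sketch.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3)  # each internal node splits on one input variable
tree.fit(X, y)
print(tree.predict(X[:5]))                  # walk the splits to a leaf and output its class
```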
Naive Bayes is a simple but surprisingly powerful algorithm for predictive modeling.
The model is composed of two types of probabilities that can be calculated directly from your training data: 1) the probability of each class; and 2) the conditional probability for each class given each x value. Once calculated, the probability model can be used to make predictions for new data using Bayes Theorem. When your data is real-valued it is common to assume a Gaussian distribution (bell curve) so that you can easily estimate these probabilities.
Naive Bayes is called naive because it assumes that each input variable is independent. This is a strong assumption and unrealistic for real data, nevertheless, the technique is very effective on a large range of complex problems.
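Here's a minimal sketch of the Gaussian variant (which matches the bell-curve assumption mentioned above), on stand-in data:

```python
# A minimal Gaussian Naive Bayes sketch.
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
nb = GaussianNB()
nb.fit(X, y)                    # estimates class priors and per-class Gaussian likelihoods
print(nb.predict_proba(X[:3]))  # Bayes Theorem turns these into class probabilities
```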
The KNN algorithm is very simple and very effective. The model representation for KNN is the entire training dataset. Simple right?
Predictions are made for a new data point by searching through the entire training set for the K most similar instances (the neighbors) and summarizing the output variable for those K instances. For regression problems, this might be the mean output variable; for classification problems, this might be the mode (or most common) class value.
The trick is in how to determine the similarity between the data instances. The simplest technique if your attributes are all of the same scale (all in inches for example) is to use the Euclidean distance, a number you can calculate directly based on the differences between each input variable.
KNN can require a lot of memory or space to store all of the data, but it only performs a calculation (or learns) when a prediction is needed, just in time. You can also update and curate your training instances over time to keep predictions accurate.
The idea of distance or closeness can break down in very high dimensions (lots of input variables) which can negatively affect the performance of the algorithm on your problem. This is called the curse of dimensionality. It suggests you only use those input variables that are most relevant to predicting the output variable.
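A minimal sketch is below; scaling the inputs first keeps the Euclidean distance fair across attributes with different units:

```python
# A minimal K-Nearest Neighbors sketch; n_neighbors=5 is an illustrative choice.
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X, y)              # "training" mostly just stores the dataset
print(knn.predict(X[:5]))  # each prediction searches for the 5 closest instances
```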
A downside of K-Nearest Neighbors is that you need to hang on to your entire training dataset. The Learning Vector Quantization algorithm (or LVQ for short) is an artificial neural network algorithm that allows you to choose how many training instances to hang onto and learns exactly what those instances should look like.
The representation for LVQ is a collection of codebook vectors. These are selected randomly in the beginning and adapted to best summarize the training dataset over a number of iterations of the learning algorithm. Once learned, the codebook vectors can be used to make predictions just like K-Nearest Neighbors. The most similar neighbor (best matching codebook vector) is found by calculating the distance between each codebook vector and the new data instance. The class value (or real value in the case of regression) for the best matching unit is then returned as the prediction. Best results are achieved if you rescale your data to have the same range, such as between 0 and 1.
If you discover that KNN gives good results on your dataset try using LVQ to reduce the memory requirements of storing the entire training dataset.
Support Vector Machines are perhaps one of the most popular and talked about machine learning algorithms.
A hyperplane is a line that splits the input variable space. In SVM, a hyperplane is selected to best separate the points in the input variable space by their class, either class 0 or class 1. In two dimensions, you can visualize this as a line, and let's assume that all of our input points can be completely separated by this line. The SVM learning algorithm finds the coefficients that result in the best separation of the classes by the hyperplane.
The distance between the hyperplane and the closest data points is referred to as the margin. The best or optimal hyperplane that can separate the two classes is the line that has the largest margin. Only these points are relevant in defining the hyperplane and in the construction of the classifier. These points are called the support vectors. They support or define the hyperplane. In practice, an optimization algorithm is used to find the values for the coefficients that maximizes the margin.
SVM might be one of the most powerful out-of-the-box classifiers and worth trying on your dataset.
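A minimal sketch with a linear kernel (C is the usual margin/misclassification trade-off knob):

```python
# A minimal support vector machine sketch on stand-in data.
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
svm = SVC(kernel="linear", C=1.0)
svm.fit(X, y)
print(svm.support_vectors_.shape)  # only the support vectors define the hyperplane
```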
Random Forest is one of the most popular and most powerful machine learning algorithms. It is a type of ensemble machine learning algorithm called Bootstrap Aggregation or bagging.
The bootstrap is a powerful statistical method for estimating a quantity from a data sample, such as a mean. You take lots of samples of your data, calculate the mean of each, then average all of your mean values to give a better estimate of the true mean value.
In bagging, the same approach is used, but instead for estimating entire statistical models, most commonly decision trees. Multiple samples of your training data are taken then models are constructed for each data sample. When you need to make a prediction for new data, each model makes a prediction and the predictions are averaged to give a better estimate of the true output value.
Random forest is a tweak on this approach where decision trees are created so that rather than selecting optimal split points, suboptimal splits are made by introducing randomness.
The models created for each sample of the data are therefore more different than they otherwise would be, but still accurate in their unique and different ways. Combining their predictions results in a better estimate of the true underlying output value.
If you get good results with an algorithm with high variance (like decision trees), you can often get better results by bagging that algorithm.
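Here's a minimal sketch of a bagged ensemble of decision trees via a random forest (100 trees is an illustrative default):

```python
# A minimal random forest sketch on stand-in data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100)  # each tree sees a bootstrap sample of the data
forest.fit(X, y)
print(forest.predict(X[:5]))                       # predictions are combined across all trees
```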
Boosting is an ensemble technique that attempts to create a strong classifier from a number of weak classifiers. This is done by building a model from the training data, then creating a second model that attempts to correct the errors from the first model. Models are added until the training set is predicted perfectly or a maximum number of models are added.
AdaBoost was the first really successful boosting algorithm developed for binary classification. It is the best starting point for understanding boosting. Modern boosting methods build on AdaBoost, most notably stochastic gradient boosting machines.
AdaBoost is used with short decision trees. After the first tree is created, the performance of the tree on each training instance is used to weight how much attention the next tree should pay to each training instance. Training data that is hard to predict is given more weight, whereas easy-to-predict instances are given less weight. Models are created sequentially one after the other, each updating the weights on the training instances that affect the learning performed by the next tree in the sequence. After all the trees are built, predictions are made for new data, and the performance of each tree is weighted by how accurate it was on the training data.
Because the algorithm puts so much attention on correcting its mistakes, it is important that you have clean data with outliers removed.
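A minimal sketch; scikit-learn's default weak learner for AdaBoost is a one-level decision tree (a "stump"):

```python
# A minimal AdaBoost sketch on stand-in data.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier

X, y = load_iris(return_X_y=True)
boost = AdaBoostClassifier(n_estimators=50)  # each new stump focuses on previously misclassified instances
boost.fit(X, y)
print(boost.score(X, y))                     # training accuracy of the boosted ensemble
```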
A typical question asked by a beginner, when facing a wide variety of machine learning algorithms, is “which algorithm should I use?” The answer to the question varies depending on many factors, including: (1) The size, quality, and nature of data; (2) The available computational time; (3) The urgency of the task; and (4) What you want to do with the data.
Even an experienced data scientist cannot tell which algorithm will perform the best before trying different algorithms. Although there are many other Machine Learning algorithms, these are the most popular ones. If you’re a newbie to Machine Learning, these would be a good starting point to learn.
— —
If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. You can find my own code on GitHub, and more of my writing and projects at https://jameskle.com/. You can also follow me on Twitter, email me directly or find me on LinkedIn.
Blue Ocean Thinker (https://jameskle.com/)
Sharing concepts, ideas, and codes.
|
Emmanuel Ameisen | 12.8K | 13 | https://blog.insightdatascience.com/how-to-solve-90-of-nlp-problems-a-step-by-step-guide-fda605278e4e?source=tag_archive---------3---------------- | How to solve 90% of NLP problems: a step-by-step guide | For more content like this, follow Insight and Emmanuel on Twitter.
Whether you are an established company or working to launch a new service, you can always leverage text data to validate, improve, and expand the functionalities of your product. The science of extracting meaning and learning from text data is an active topic of research called Natural Language Processing (NLP).
NLP produces new and exciting results on a daily basis, and is a very large field. However, having worked with hundreds of companies, the Insight team has seen a few key practical applications come up much more frequently than any other:
While many NLP papers and tutorials exist online, we have found it hard to find guidelines and tips on how to approach these problems efficiently from the ground up.
After leading hundreds of projects a year and gaining advice from top teams all over the United States, we wrote this post to explain how to build Machine Learning solutions to solve problems like the ones mentioned above. We’ll begin with the simplest method that could work, and then move on to more nuanced solutions, such as feature engineering, word vectors, and deep learning.
After reading this article, you’ll know how to:
We wrote this post as a step-by-step guide; it can also serve as a high level overview of highly effective standard approaches.
This post is accompanied by an interactive notebook demonstrating and applying all these techniques. Feel free to run the code and follow along!
Every Machine Learning problem starts with data, such as a list of emails, posts, or tweets. Common sources of textual information include:
“Disasters on Social Media” dataset
For this post, we will use a dataset generously provided by CrowdFlower, called “Disasters on Social Media”, where:
Our task will be to detect which tweets are about a disastrous event as opposed to an irrelevant topic such as a movie. Why? A potential application would be to exclusively notify law enforcement officials about urgent emergencies while ignoring reviews of the most recent Adam Sandler film. A particular challenge with this task is that both classes contain the same search terms used to find the tweets, so we will have to use subtler differences to distinguish between them.
In the rest of this post, we will refer to tweets that are about disasters as “disaster”, and tweets about anything else as “irrelevant”.
We have labeled data and so we know which tweets belong to which categories. As Richard Socher outlines below, it is usually faster, simpler, and cheaper to find and label enough data to train a model on, rather than trying to optimize a complex unsupervised method.
One of the key skills of a data scientist is knowing whether the next step should be working on the model or the data. A good rule of thumb is to look at the data first and then clean it up. A clean dataset will allow a model to learn meaningful features and not overfit on irrelevant noise.
Here is a checklist to use to clean your data (see the code for more details):
After following these steps and checking for additional errors, we can start using the clean, labelled data to train models!
Machine Learning models take numerical values as input. Models working on images, for example, take in a matrix representing the intensity of each pixel in each color channel.
Our dataset is a list of sentences, so in order for our algorithm to extract patterns from the data, we first need to find a way to represent it in a way that our algorithm can understand, i.e. as a list of numbers.
A natural way to represent text for computers is to encode each character individually as a number (ASCII for example). If we were to feed this simple representation into a classifier, it would have to learn the structure of words from scratch based only on our data, which is impossible for most datasets. We need to use a higher level approach.
For example, we can build a vocabulary of all the unique words in our dataset, and associate a unique index to each word in the vocabulary. Each sentence is then represented as a list that is as long as the number of distinct words in our vocabulary. At each index in this list, we mark how many times the given word appears in our sentence. This is called a Bag of Words model, since it is a representation that completely ignores the order of words in our sentence. This is illustrated below.
We have around 20,000 words in our vocabulary in the “Disasters of Social Media” example, which means that every sentence will be represented as a vector of length 20,000. The vector will contain mostly 0s because each sentence contains only a very small subset of our vocabulary.
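To make the Bag of Words representation concrete, here is a minimal sketch with scikit-learn's CountVectorizer; the two example tweets are invented rather than taken from the dataset:

```python
# A minimal Bag of Words sketch; the example tweets are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["wildfire spreading near the town", "that disaster movie was so good"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(tweets)       # one row per tweet, one column per vocabulary word
print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(X.toarray())                         # word counts; word order is ignored
```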
In order to see whether our embeddings are capturing information that is relevant to our problem (i.e. whether the tweets are about disasters or not), it is a good idea to visualize them and see if the classes look well separated. Since vocabularies are usually very large and visualizing data in 20,000 dimensions is impossible, techniques like PCA will help project the data down to two dimensions. This is plotted below.
The two classes do not look very well separated, which could be a feature of our embeddings or simply of our dimensionality reduction. In order to see whether the Bag of Words features are of any use, we can train a classifier based on them.
When first approaching a problem, a general best practice is to start with the simplest tool that could solve the job. Whenever it comes to classifying data, a common favorite for its versatility and explainability is Logistic Regression. It is very simple to train and the results are interpretable as you can easily extract the most important coefficients from the model.
We split our data into a training set used to fit our model and a test set to see how well it generalizes to unseen data. After training, we get an accuracy of 75.4%. Not too shabby! Guessing the most frequent class (“irrelevant”) would give us only 57%. However, even if 75% accuracy was good enough for our needs, we should never ship a model without trying to understand it.
A first step is to understand the types of errors our model makes, and which kind of errors are least desirable. In our example, false positives are classifying an irrelevant tweet as a disaster, and false negatives are classifying a disaster as an irrelevant tweet. If the priority is to react to every potential event, we would want to lower our false negatives. If we are constrained in resources however, we might prioritize a lower false positive rate to reduce false alarms. A good way to visualize this information is using a Confusion Matrix, which compares the predictions our model makes with the true label. Ideally, the matrix would be a diagonal line from top left to bottom right (our predictions match the truth perfectly).
Our classifier creates more false negatives than false positives (proportionally). In other words, our model’s most common error is inaccurately classifying disasters as irrelevant. If false positives represent a high cost for law enforcement, this could be a good bias for our classifier to have.
To validate our model and interpret its predictions, it is important to look at which words it is using to make decisions. If our data is biased, our classifier will make accurate predictions in the sample data, but the model would not generalize well in the real world. Here we plot the most important words for both the disaster and irrelevant class. Plotting word importance is simple with Bag of Words and Logistic Regression, since we can just extract and rank the coefficients that the model used for its predictions.
Our classifier correctly picks up on some patterns (hiroshima, massacre), but clearly seems to be overfitting on some meaningless terms (heyoo, x1392). Right now, our Bag of Words model is dealing with a huge vocabulary of different words and treating all words equally. However, some of these words are very frequent, and are only contributing noise to our predictions. Next, we will try a way to represent sentences that can account for the frequency of words, to see if we can pick up more signal from our data.
In order to help our model focus more on meaningful words, we can use a TF-IDF score (Term Frequency, Inverse Document Frequency) on top of our Bag of Words model. TF-IDF weighs words by how rare they are in our dataset, discounting words that are too frequent and just add to the noise. Here is the PCA projection of our new embeddings.
We can see above that there is a clearer distinction between the two colors. This should make it easier for our classifier to separate both groups. Let’s see if this leads to better performance. Training another Logistic Regression on our new embeddings, we get an accuracy of 76.2%.
A very slight improvement. Has our model started picking up on more important words? If we are getting a better result while preventing our model from “cheating” then we can truly consider this model an upgrade.
The words it picked up look much more relevant! Although our metrics on our test set only increased slightly, we have much more confidence in the terms our model is using, and thus would feel more comfortable deploying it in a system that would interact with customers.
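For reference, a minimal sketch of this TF-IDF workflow (vectorize, split, train, score) might look like the following; the tiny dataset here is invented, and the full version lives in the accompanying notebook:

```python
# A minimal TF-IDF + Logistic Regression sketch; texts and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

texts = ["forest fire forces evacuation", "massive earthquake hits the city",
         "flooding reported downtown", "that disaster movie was so good",
         "my weekend was a total disaster lol", "new action film looks explosive"]
labels = [1, 1, 1, 0, 0, 0]  # 1 = disaster, 0 = irrelevant

X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.33, random_state=0)
vectorizer = TfidfVectorizer()  # rare words get weighted up, very frequent ones down
clf = LogisticRegression()
clf.fit(vectorizer.fit_transform(X_train), y_train)
preds = clf.predict(vectorizer.transform(X_test))
print(accuracy_score(y_test, preds))
```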
Our latest model managed to pick up on high signal words. However, it is very likely that if we deploy this model, we will encounter words that we have not seen in our training set before. The previous model will not be able to accurately classify these tweets, even if it has seen very similar words during training.
To solve this problem, we need to capture the semantic meaning of words, meaning we need to understand that words like ‘good’ and ‘positive’ are closer than ‘apricot’ and ‘continent.’ The tool we will use to help us capture meaning is called Word2Vec.
Using pre-trained words
Word2Vec is a technique to find continuous embeddings for words. It learns from reading massive amounts of text and memorizing which words tend to appear in similar contexts. After being trained on enough data, it generates a 300-dimension vector for each word in a vocabulary, with words of similar meaning being closer to each other.
The authors of the paper open sourced a model that was pre-trained on a very large corpus which we can leverage to include some knowledge of semantic meaning into our model. The pre-trained vectors can be found in the repository associated with this post.
A quick way to get a sentence embedding for our classifier is to average Word2Vec scores of all words in our sentence. This is a Bag of Words approach just like before, but this time we only lose the syntax of our sentence, while keeping some semantic information.
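A rough sketch of that averaging step is shown below. It assumes you've downloaded the pre-trained GoogleNews Word2Vec vectors locally (the filename is illustrative) and loads them with gensim:

```python
# A sketch of averaging Word2Vec vectors into a sentence embedding.
# Assumes the pre-trained vectors have been downloaded; the path is illustrative.
import numpy as np
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def sentence_embedding(sentence):
    words = [w for w in sentence.lower().split() if w in w2v]
    if not words:                       # no known words: fall back to a zero vector
        return np.zeros(w2v.vector_size)
    return np.mean([w2v[w] for w in words], axis=0)

print(sentence_embedding("forest fire forces evacuation").shape)  # (300,)
```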
Here is a visualization of our new embeddings using previous techniques:
The two groups of colors look even more separated here, our new embeddings should help our classifier find the separation between both classes. After training the same model a third time (a Logistic Regression), we get an accuracy score of 77.7%, our best result yet! Time to inspect our model.
Since our embeddings are not represented as a vector with one dimension per word as in our previous models, it’s harder to see which words are the most relevant to our classification. While we still have access to the coefficients of our Logistic Regression, they relate to the 300 dimensions of our embeddings rather than the indices of words.
For such a low gain in accuracy, losing all explainability seems like a harsh trade-off. However, with more complex models we can leverage black box explainers such as LIME in order to get some insight into how our classifier works.
LIME
LIME is available on Github through an open-sourced package. A black-box explainer allows users to explain the decisions of any classifier on one particular example by perturbing the input (in our case removing words from the sentence) and seeing how the prediction changes.
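Schematically, calling LIME on a text classifier looks something like the sketch below. The tiny training set is invented purely so the example runs end to end; in practice you would pass in your trained pipeline and real tweets:

```python
# A minimal LIME sketch; the texts, labels and class names are invented placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

train_texts = ["forest fire forces evacuation", "massive earthquake hits the city",
               "that disaster movie was so good", "new action film looks explosive"]
train_labels = [1, 1, 0, 0]  # 1 = disaster, 0 = irrelevant

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["irrelevant", "disaster"])
explanation = explainer.explain_instance("earthquake reported near the city",
                                         pipeline.predict_proba, num_features=3)
print(explanation.as_list())  # words and their contribution to the prediction
```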
Let’s see a couple explanations for sentences from our dataset.
However, we do not have time to explore the thousands of examples in our dataset. What we’ll do instead is run LIME on a representative sample of test cases and see which words keep coming up as strong contributors. Using this approach we can get word importance scores like we had for previous models and validate our model’s predictions.
The model picks up on highly relevant words, implying that it makes understandable decisions. These seem like the most relevant words out of all previous models, and therefore we're more comfortable deploying it to production.
We’ve covered quick and efficient approaches to generate compact sentence embeddings. However, by omitting the order of words, we are discarding all of the syntactic information of our sentences. If these methods do not provide sufficient results, you can utilize more complex models that take in whole sentences as input and predict labels without the need to build an intermediate representation. A common way to do that is to treat a sentence as a sequence of individual word vectors using either Word2Vec or more recent approaches such as GloVe or CoVe. This is what we will do below.
Convolutional Neural Networks for Sentence Classification train very quickly and work well as an entry level deep learning architecture. While Convolutional Neural Networks (CNN) are mainly known for their performance on image data, they have been providing excellent results on text related tasks, and are usually much quicker to train than most complex NLP approaches (e.g. LSTMs and Encoder/Decoder architectures). This model preserves the order of words and learns valuable information on which sequences of words are predictive of our target classes. Contrary to previous models, it can tell the difference between “Alex eats plants” and “Plants eat Alex.”
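A condensed Keras sketch of such an architecture is below; every hyper-parameter (vocabulary size, filter width, number of filters) is illustrative rather than the exact configuration used in the accompanying notebook:

```python
# An illustrative 1D CNN text classifier in Keras; hyper-parameters are placeholders.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding, Conv1D, GlobalMaxPooling1D, Dense

vocab_size, embedding_dim, max_len = 20000, 300, 40

model = Sequential([
    Input(shape=(max_len,)),               # each tweet padded/truncated to 40 tokens
    Embedding(vocab_size, embedding_dim),  # word vectors; word order is preserved
    Conv1D(128, 3, activation="relu"),     # filters slide over 3-word windows
    GlobalMaxPooling1D(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),        # disaster vs. irrelevant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```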
Training this model does not require much more work than previous approaches (see code for details) and gives us a model that is much better than the previous ones, getting 79.5% accuracy! As with the models above, the next step should be to explore and explain the predictions using the methods we described to validate that it is indeed the best model to deploy to users. By now, you should feel comfortable tackling this on your own.
Here is a quick recap of the approach we’ve successfully used:
These approaches were applied to a particular example case using models tailored towards understanding and leveraging short text such as tweets, but the ideas are widely applicable to a variety of problems. I hope this helped you, we’d love to hear your comments and questions! Feel free to comment below or reach out to @EmmanuelAmeisen here or on Twitter.
Want to learn applied Artificial Intelligence from top professionals in Silicon Valley or New York? Learn more about the Artificial Intelligence program.
Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch.
AI Lead at Insight AI @EmmanuelAmeisen
Insight Fellows Program - Your bridge to a career in data
|
Mybridge | 10.1K | 6 | https://medium.mybridge.co/30-amazing-machine-learning-projects-for-the-past-year-v-2018-b853b8621ac7?source=tag_archive---------4---------------- | 30 Amazing Machine Learning Projects for the Past Year (v.2018) | For the past year, we’ve compared nearly 8,800 open source Machine Learning projects to pick Top 30 (0.3% chance).
This is an extremely competitive list and it carefully picks the best open source Machine Learning libraries, datasets and apps published between January and December 2017. Mybridge AI evaluates the quality by considering popularity, engagement and recency. To give you an idea about the quality, the average number of Github stars is 3,558.
Open source projects can be useful for data scientists. You can learn by reading the source code and build something on top of the existing projects. Give yourself plenty of time to play around with the Machine Learning projects you may have missed over the past year.
<Recommended Learning>
A) Neural Networks
Deep Learning A-ZTM: Hands-On Artificial Neural Networks
[68,745 recommends, 4.5/5 stars]
B) TensorFlow
Complete Guide to TensorFlow for Deep Learning with Python
[17,834 recommends, 4.6/5 stars]
(Click the numbers below. Credit given to the biggest contributor.)
FastText: Library for fast text representation and classification. [11786 stars on Github]. Courtesy of Facebook Research
........... [ Muse: Multilingual Unsupervised or Supervised word Embeddings, based on Fast Text. 695 stars on Github]
Deep-photo-styletransfer: Code and data for paper “Deep Photo Style Transfer” [9747 stars on Github]. Courtesy of Fujun Luan, Ph.D. at Cornell University
The world’s simplest facial recognition api for Python and the command line [8672 stars on Github]. Courtesy of Adam Geitgey
Magenta: Music and Art Generation with Machine Intelligence [8113 stars on Github].
Sonnet: TensorFlow-based neural network library [5731 stars on Github]. Courtesy of Malcolm Reynolds at Deepmind
deeplearn.js: A hardware-accelerated machine intelligence library for the web [5462 stars on Github]. Courtesy of Nikhil Thorat at Google Brain
Fast Style Transfer in TensorFlow [4843 stars on Github]. Courtesy of Logan Engstrom at MIT
Pysc2: StarCraft II Learning Environment [3683 stars on Github]. Courtesy of Timo Ewalds at DeepMind
AirSim: Open source simulator based on Unreal Engine for autonomous vehicles from Microsoft AI & Research [3861 stars on Github]. Courtesy of Shital Shah at Microsoft
Facets: Visualizations for machine learning datasets [3371 stars on Github]. Courtesy of Google Brain
Style2Paints: AI colorization of images [3310 stars on Github].
Tensor2Tensor: A library for generalized sequence to sequence models — Google Research [3087 stars on Github]. Courtesy of Ryan Sepassi at Google Brain
Image-to-image translation in PyTorch (e.g. horse2zebra, edges2cats, and more) [2847 stars on Github]. Courtesy of Jun-Yan Zhu, Ph.D at Berkeley
Faiss: A library for efficient similarity search and clustering of dense vectors. [2629 stars on Github]. Courtesy of Facebook Research
Fashion-mnist: A MNIST-like fashion product database [2780 stars on Github]. Courtesy of Han Xiao, Research Scientist Zalando Tech
ParlAI: A framework for training and evaluating AI models on a variety of openly available dialog datasets [2578 stars on Github]. Courtesy of Alexander Miller at Facebook Research
Fairseq: Facebook AI Research Sequence-to-Sequence Toolkit [2571 stars on Github].
Pyro: Deep universal probabilistic programming with Python and PyTorch [2387 stars on Github]. Courtesy of Uber AI Labs
iGAN: Interactive Image Generation powered by GAN [2369 stars on Github].
Deep-image-prior: Image restoration with neural networks but without learning [2188 stars on Github]. Courtesy of Dmitry Ulyanov, Ph.D at Skoltech
Face_classification: Real-time face detection and emotion/gender classification using fer2013/imdb datasets with a keras CNN model and openCV. [1967 stars on Github].
Speech-to-Text-WaveNet : End-to-end sentence level English speech recognition using DeepMind’s WaveNet and tensorflow [1961 stars on Github]. Courtesy of Namju Kim at Kakao Brain
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation [1954 stars on Github]. Courtesy of Yunjey Choi at Korea University
Ml-agents: Unity Machine Learning Agents [1658 stars on Github]. Courtesy of Arthur Juliani, Deep Learning at Unity3D
DeepVideoAnalytics: A distributed visual search and visual data analytics platform [1494 stars on Github]. Courtesy of Akshay Bhat, Ph.D at Cornell University
OpenNMT: Open-Source Neural Machine Translation in Torch [1490 stars on Github].
Pix2pixHD: Synthesizing and manipulating 2048x1024 images with conditional GANs [1283 stars on Github]. Courtesy of Ming-Yu Liu at AI Research Scientist at Nvidia
Horovod: Distributed training framework for TensorFlow. [1188 stars on Github]. Courtesy of Uber Engineering
AI-Blocks: A powerful and intuitive WYSIWYG interface that allows anyone to create Machine Learning models [899 stars on Github].
Deep neural networks for voice conversion (voice style transfer) in Tensorflow [845 stars on Github]. Courtesy of Dabi Ahn, AI Research at Kakao Brain
That’s it for Machine Learning open source projects. If you like this curation, read best daily articles based on your programming skills on our website.
We rank articles for professionals
Read more and achieve more
|
David Foster | 12.8K | 11 | https://medium.com/applied-data-science/how-to-build-your-own-alphazero-ai-using-python-and-keras-7f664945c188?source=tag_archive---------5---------------- | How to build your own AlphaZero AI using Python and Keras | In this article I’ll attempt to cover three things:
In March 2016, Deepmind’s AlphaGo beat 18 times world champion Go player Lee Sedol 4–1 in a series watched by over 200 million people. A machine had learnt a super-human strategy for playing Go, a feat previously thought impossible, or at the very least, at least a decade away from being accomplished.
This in itself, was a remarkable achievement. However, on 18th October 2017, DeepMind took a giant leap further.
The paper ‘Mastering the Game of Go without Human Knowledge’ unveiled a new variant of the algorithm, AlphaGo Zero, that had defeated AlphaGo 100–0. Incredibly, it had done so by learning solely through self-play, starting ‘tabula rasa’ (blank state) and gradually finding strategies that would beat previous incarnations of itself. No longer was a database of human expert games required to build a super-human AI.
A mere 48 days later, on 5th December 2017, DeepMind released another paper ‘Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm’ showing how AlphaGo Zero could be adapted to beat the world-champion programs StockFish and Elmo at chess and shogi. The entire learning process, from being shown the games for the first time, to becoming the best computer program in the world, had taken under 24 hours.
With this, AlphaZero was born — the general algorithm for getting good at something, quickly, without any prior knowledge of human expert strategy.
There are two amazing things about this achievement:
It cannot be overstated how important this is. This means that the underlying methodology of AlphaGo Zero can be applied to ANY game with perfect information (the game state is fully known to both players at all times) because no prior expertise is required beyond the rules of the game.
This is how it was possible for DeepMind to publish the chess and shogi papers only 48 days after the original AlphaGo Zero paper. Quite literally, all that needed to change was the input file that describes the mechanics of the game and to tweak the hyper-parameters relating to the neural network and Monte Carlo tree search.
If AlphaZero used super-complex algorithms that only a handful of people in the world understood, it would still be an incredible achievement. What makes it extraordinary is that a lot of the ideas in the paper are actually far less complex than previous versions. At its heart, lies the following beautifully simple mantra for learning:
Doesn’t that sound a lot like how you learn to play games? When you play a bad move, it’s either because you misjudged the future value of resulting positions, or you misjudged the likelihood that your opponent would play a certain move, so didn’t think to explore that possibility. These are exactly the two aspects of gameplay that AlphaZero is trained to learn.
Firstly, check out the AlphaGo Zero cheat sheet for a high level understanding of how AlphaGo Zero works. It’s worth having that to refer to as we walk through each part of the code. There’s also a great article here that explains how AlphaZero works in more detail.
Clone this Git repository, which contains the code I’ll be referencing.
To start the learning process, run the top two panels in the run.ipynb Jupyter notebook. Once it’s built up enough game positions to fill its memory the neural network will begin training. Through additional self-play and training, it will gradually get better at predicting the game value and next moves from any position, resulting in better decision making and smarter overall play.
We’ll now have a look at the code in more detail, and show some results that demonstrate the AI getting stronger over time.
N.B — This is my own understanding of how AlphaZero works based on the information available in the papers referenced above. If any of the below is incorrect, apologies and I’ll endeavour to correct it!
The game that our algorithm will learn to play is Connect4 (or Four In A Row). Not quite as complex as Go... but there are still 4,531,985,219,092 game positions in total.
The game rules are straightforward. Players take it in turns to enter a piece of their colour in the top of any available column. The first player to get four of their colour in a row, either vertically, horizontally or diagonally, wins. If the entire grid is filled without a four-in-a-row being created, the game is drawn.
Here’s a summary of the key files that make up the codebase:
This file contains the game rules for Connect4.
Each square is allocated a number from 0 to 41, as follows:
The game.py file gives the logic behind moving from one game state to another, given a chosen action. For example, given the empty board and action 38, the takeAction method returns a new game state, with the starting player’s piece at the bottom of the centre column.
You can replace the game.py file with any game file that conforms to the same API and the algorithm will, in principle, learn strategy through self-play, based on the rules you have given it.
This contains the code that starts the learning process. It loads the game rules and then iterates through the main loop of the algorithm, which consist of three stages:
There are two agents involved in this loop, the best_player and the current_player.
The best_player contains the best performing neural network and is used to generate the self play memories. The current_player then retrains its neural network on these memories and is then pitched against the best_player. If it wins, the neural network inside the best_player is switched for the neural network inside the current_player, and the loop starts again.
This contains the Agent class (a player in the game). Each player is initialised with its own neural network and Monte Carlo Search Tree.
The simulate method runs the Monte Carlo Tree Search process. Specifically, the agent moves to a leaf node of the tree, evaluates the node with its neural network and then backfills the value of the node up through the tree.
The act method repeats the simulation multiple times to understand which move from the current position is most favourable. It then returns the chosen action to the game, to enact the move.
The replay method retrains the neural network, using memories from previous games.
This file contains the Residual_CNN class, which defines how to build an instance of the neural network.
It uses a condensed version of the neural network architecture in the AlphaGo Zero paper — i.e. a convolutional layer, followed by many residual layers, then splitting into a value and policy head.
The depth and number of convolutional filters can be specified in the config file.
The Keras library is used to build the network, with a backend of Tensorflow.
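To give a flavour of what that architecture looks like in Keras, here's a heavily condensed sketch. It is illustrative only: the filter sizes, number of residual layers and head structure are simplifications, and the actual implementation lives in model.py.

```python
# A condensed, illustrative sketch of a residual tower with value and policy heads.
# Not the repository's exact code; see the Residual_CNN class in model.py for the real thing.
from tensorflow.keras import layers, Model

def residual_block(x, filters=75):
    shortcut = x
    y = layers.Conv2D(filters, 4, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.LeakyReLU()(y)
    y = layers.Conv2D(filters, 4, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.add([shortcut, y])          # skip connection
    return layers.LeakyReLU()(y)

board = layers.Input(shape=(6, 7, 2))      # 6x7 Connect4 board, one plane per player
x = layers.Conv2D(75, 4, padding="same")(board)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU()(x)
for _ in range(5):                         # "many residual layers"
    x = residual_block(x)

flat = layers.Flatten()(x)
value_head = layers.Dense(1, activation="tanh", name="value_head")(flat)       # game value in [-1, 1]
policy_head = layers.Dense(42, activation="linear", name="policy_head")(flat)  # one logit per square
model = Model(inputs=board, outputs=[value_head, policy_head])
model.summary()
```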
To view individual convolutional filters and densely connected layers in the neural network, run the following inside the run.ipynb notebook:
This contains the Node, Edge and MCTS classes, that constitute a Monte Carlo Search Tree.
The MCTS class contains the moveToLeaf and backFill methods previously mentioned, and instances of the Edge class store the statistics about each potential move.
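For intuition, the rule used to choose which edge to follow on the way to a leaf in AlphaZero-style searches balances an edge's average value against an exploration bonus driven by its prior probability and visit count. The sketch below is schematic Python, not the repository's implementation; the statistics N, Q and P follow the paper's notation:

```python
# A schematic sketch of the PUCT selection rule (illustrative, not the repo's MCTS code).
import math

def select_edge(edges, c_puct=1.0):
    """edges: dicts with visit count N, mean value Q and prior move probability P."""
    total_visits = sum(e["N"] for e in edges)
    def puct(e):
        exploration = c_puct * e["P"] * math.sqrt(total_visits) / (1 + e["N"])
        return e["Q"] + exploration        # exploitation + exploration
    return max(edges, key=puct)

# A rarely visited edge with a high prior can beat a well-explored, higher-value one.
edges = [{"N": 10, "Q": 0.6, "P": 0.3},
         {"N": 1,  "Q": 0.2, "P": 0.6}]
print(select_edge(edges))
```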
This is where you set the key parameters that influence the algorithm.
Adjusting these variables will affect the running time, neural network accuracy and overall success of the algorithm. The above parameters produce a high quality Connect4 player, but take a long time to do so. To speed the algorithm up, try the following parameters instead.
Contains the playMatches and playMatchesBetweenVersions functions that play matches between two agents.
To play against your creation, run the following code (it’s also in the run.ipynb notebook)
When you run the algorithm, all model and memory files are saved in the run folder, in the root directory.
To restart the algorithm from this checkpoint later, transfer the run folder to the run_archive folder, attaching a run number to the folder name. Then, enter the run number, model version number and memory version number into the initialise.py file, corresponding to the location of the relevant files in the run_archive folder. Running the algorithm as usual will then start from this checkpoint.
An instance of the Memory class stores the memories of previous games, that the algorithm uses to retrain the neural network of the current_player.
This file contains a custom loss function, that masks predictions from illegal moves before passing to the cross entropy loss function.
The locations of the run and run_archive folders.
Log files are saved to the log folder inside the run folder.
To turn on logging, set the values of the logger_disabled variables to False inside this file.
Viewing the log files will help you to understand how the algorithm works and see inside its ‘mind’. For example, here is a sample from the logger.mcts file.
Equally from the logger.tourney file, you can see the probabilities attached to each move, during the evaluation phase:
Training over a couple of days produces the following chart of loss against mini-batch iteration number:
The top line is the error in the policy head (the cross entropy of the MCTS move probabilities, against the output from the neural network). The bottom line is the error in the value head (the mean squared error between the actual game value and the neural network’s prediction of the value). The middle line is an average of the two.
Clearly, the neural network is getting better at predicting the value of each game state and the likely next moves. To show how this results in stronger and stronger play, I ran a league between 17 players, ranging from the 1st iteration of the neural network, up to the 49th. Each pairing played twice, with both players having a chance to play first.
Here are the final standings:
Clearly, the later versions of the neural network are superior to the earlier versions, winning most of their games. It also appears that the learning hasn’t yet saturated — with further training time, the players would continue to get stronger, learning more and more intricate strategies.
As an example, one clear strategy that the neural network has favoured over time is grabbing the centre column early. Observe the difference between the first version of the algorithm and say, the 30th version:
1st neural network version
30th neural network version
This is a good strategy as many lines require the centre column — claiming this early ensures your opponent cannot take advantage of this. This has been learnt by the neural network, without any human input.
There is a game.py file for a game called ‘Metasquares’ in the games folder. This involves placing X and O markers in a grid to try to form squares of different sizes. Larger squares score more points than smaller squares and the player with the most points when the grid is full wins.
If you switch the Connect4 game.py file for the Metasquares game.py file, the same algorithm will learn how to play Metasquares instead.
Hopefully you find this article useful — let me know in the comments below if you find any typos or have questions about anything in the codebase or article and I’ll get back to you as soon as possible.
If you would like to learn more about how our company, Applied Data Science develops innovative data science solutions for businesses, feel free to get in touch through our website or directly through LinkedIn.
... and if you like this, feel free to leave a few hearty claps :)
Applied Data Science is a London based consultancy that implements end-to-end data science solutions for businesses, delivering measurable value. If you’re looking to do more with your data, let’s talk.
Co-founder of Applied Data Science
Cutting edge data science news and projects
|
George Seif | 11.4K | 11 | https://towardsdatascience.com/the-5-clustering-algorithms-data-scientists-need-to-know-a36d136ef68?source=tag_archive---------6---------------- | The 5 Clustering Algorithms Data Scientists Need to Know | Clustering is a Machine Learning technique that involves the grouping of data points. Given a set of data points, we can use a clustering algorithm to classify each data point into a specific group. In theory, data points that are in the same group should have similar properties and/or features, while data points in different groups should have highly dissimilar properties and/or features. Clustering is a method of unsupervised learning and is a common technique for statistical data analysis used in many fields.
In Data Science, we can use clustering analysis to gain some valuable insights from our data by seeing what groups the data points fall into when we apply a clustering algorithm. Today, we’re going to look at 5 popular clustering algorithms that data scientists need to know and their pros and cons!
K-Means is probably the most well know clustering algorithm. It’s taught in a lot of introductory data science and machine learning classes. It’s easy to understand and implement in code! Check out the graphic below for an illustration.
K-Means has the advantage that it’s pretty fast, as all we’re really doing is computing the distances between points and group centers; very few computations! It thus has a linear complexity O(n).
On the other hand, K-Means has a couple of disadvantages. Firstly, you have to select how many groups/classes there are. This isn’t always trivial and ideally with a clustering algorithm we’d want it to figure those out for us because the point of it is to gain some insight from the data. K-means also starts with a random choice of cluster centers and therefore it may yield different clustering results on different runs of the algorithm. Thus, the results may not be repeatable and lack consistency. Other cluster methods are more consistent.
K-Medians is another clustering algorithm related to K-Means, except instead of recomputing the group center points using the mean we use the median vector of the group. This method is less sensitive to outliers (because of using the Median) but is much slower for larger datasets as sorting is required on each iteration when computing the Median vector.
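As a quick illustration, here is a minimal K-Means sketch using scikit-learn; the synthetic blob data and the choice of k=4 are just for demonstration.

```python
# Minimal K-Means example with scikit-learn on synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

# n_init runs several random initialisations and keeps the best one, which
# mitigates the run-to-run inconsistency mentioned above.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])        # cluster assignment of the first 10 points
print(kmeans.cluster_centers_)    # the learned group centers
```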
Mean shift clustering is a sliding-window-based algorithm that attempts to find dense areas of data points. It is a centroid-based algorithm meaning that the goal is to locate the center points of each group/class, which works by updating candidates for center points to be the mean of the points within the sliding-window. These candidate windows are then filtered in a post-processing stage to eliminate near-duplicates, forming the final set of center points and their corresponding groups. Check out the graphic below for an illustration.
An illustration of the entire process from end-to-end with all of the sliding windows is shown below. Each black dot represents the centroid of a sliding window and each gray dot is a data point.
In contrast to K-means clustering there is no need to select the number of clusters as mean-shift automatically discovers this. That’s a massive advantage. The fact that the cluster centers converge towards the points of maximum density is also quite desirable as it is quite intuitive to understand and fits well in a naturally data-driven sense. The drawback is that the selection of the window size/radius “r” can be non-trivial.
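A minimal mean shift sketch with scikit-learn is shown below; estimate_bandwidth is one common heuristic for choosing the window radius, which is exactly the non-trivial choice noted above. The data here is synthetic and for illustration only.

```python
# Minimal mean shift example with scikit-learn on synthetic data.
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=3, cluster_std=0.7, random_state=1)

bandwidth = estimate_bandwidth(X, quantile=0.2)      # heuristic window size "r"
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X)

print("clusters found:", len(ms.cluster_centers_))   # discovered automatically
```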
DBSCAN is a density-based clustering algorithm similar to mean-shift, but with a couple of notable advantages. Check out another fancy graphic below and let’s get started!
DBSCAN poses some great advantages over other clustering algorithms. Firstly, it does not require a pre-set number of clusters at all. It also identifies outliers as noise, unlike mean-shift which simply throws them into a cluster even if the data point is very different. Additionally, it is able to find arbitrarily sized and arbitrarily shaped clusters quite well.
The main drawback of DBSCAN is that it doesn’t perform as well as others when the clusters are of varying density. This is because the setting of the distance threshold ε and minPoints for identifying the neighborhood points will vary from cluster to cluster when the density varies. This drawback also occurs with very high-dimensional data since again the distance threshold ε becomes challenging to estimate.
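Here is a minimal DBSCAN sketch with scikit-learn; eps plays the role of the distance threshold ε and min_samples the role of minPoints, and the values (like the two-moons toy data) are illustrative.

```python
# Minimal DBSCAN example: non-circular clusters plus explicit noise labelling.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=400, noise=0.06, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)
labels = db.labels_                                   # -1 marks outliers/noise

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters:", n_clusters)
print("points flagged as noise:", np.sum(labels == -1))
```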
One of the major drawbacks of K-Means is its naive use of the mean value for the cluster center. We can see why this isn’t the best way of doing things by looking at the image below. On the left hand side it looks quite obvious to the human eye that there are two circular clusters with different radii centered at the same mean. K-Means can’t handle this because the mean values of the clusters are very close together. K-Means also fails in cases where the clusters are not circular, again as a result of using the mean as cluster center.
Gaussian Mixture Models (GMMs) give us more flexibility than K-Means. With GMMs we assume that the data points are Gaussian distributed; this is a less restrictive assumption than saying they are circular by using the mean. That way, we have two parameters to describe the shape of the clusters: the mean and the standard deviation! Taking an example in two dimensions, this means that the clusters can take any kind of elliptical shape (since we have standard deviation in both the x and y directions). Thus, each Gaussian distribution is assigned to a single cluster.
In order to find the parameters of the Gaussian for each cluster (e.g. the mean and standard deviation) we will use an optimization algorithm called Expectation–Maximization (EM). Take a look at the graphic below as an illustration of the Gaussians being fitted to the clusters. Then we can proceed on to the process of Expectation–Maximization clustering using GMMs.
There are really 2 key advantages to using GMMs. Firstly, GMMs are a lot more flexible in terms of cluster covariance than K-Means; due to the standard deviation parameter, the clusters can take on any ellipse shape, rather than being restricted to circles. K-Means is actually a special case of GMM in which each cluster’s covariance along all dimensions approaches 0. Secondly, since GMMs use probabilities, a data point can belong to multiple clusters. So if a data point is in the middle of two overlapping clusters, we can simply define its class by saying it belongs X-percent to class 1 and Y-percent to class 2. I.e. GMMs support mixed membership.
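A minimal GMM sketch with scikit-learn follows; predict_proba exposes the mixed membership described above, and the synthetic data and n_components=2 are illustrative choices.

```python
# Minimal Gaussian Mixture Model example, fitted with EM via scikit-learn.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=500, centers=2, cluster_std=[1.0, 2.5], random_state=3)

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(X)  # EM under the hood

hard_labels = gmm.predict(X)          # most likely cluster for each point
soft_labels = gmm.predict_proba(X)    # e.g. 80% cluster 1, 20% cluster 2
print(soft_labels[:5].round(3))
```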
Hierarchical clustering algorithms actually fall into 2 categories: top-down or bottom-up. Bottom-up algorithms treat each data point as a single cluster at the outset and then successively merge (or agglomerate) pairs of clusters until all clusters have been merged into a single cluster that contains all data points. Bottom-up hierarchical clustering is therefore called hierarchical agglomerative clustering or HAC. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. Check out the graphic below for an illustration before moving on to the algorithm steps
Hierarchical clustering does not require us to specify the number of clusters and we can even select which number of clusters looks best since we are building a tree. Additionally, the algorithm is not sensitive to the choice of distance metric; all of them tend to work equally well whereas with other clustering algorithms, the choice of distance metric is critical. A particularly good use case of hierarchical clustering methods is when the underlying data has a hierarchical structure and you want to recover the hierarchy; other clustering algorithms can’t do this. These advantages of hierarchical clustering come at the cost of lower efficiency, as it has a time complexity of O(n³), unlike the linear complexity of K-Means and GMM.
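For completeness, here is a minimal bottom-up (agglomerative) sketch using SciPy, which also builds the dendrogram mentioned above; the toy data and the "ward" linkage are illustrative choices.

```python
# Minimal hierarchical agglomerative clustering example with SciPy.
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=60, centers=3, random_state=7)

Z = linkage(X, method="ward")                      # build the full merge tree
labels = fcluster(Z, t=3, criterion="maxclust")    # cut the tree into 3 clusters
print(labels)

# scipy.cluster.hierarchy.dendrogram(Z) would draw the tree with matplotlib.
```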
There are your top 5 clustering algorithms that a data scientist should know! We’ll end off with an awesome visualization of how well these algorithms and a few others perform, courtesy of Scikit Learn!
|
Mybridge | 6.6K | 6 | https://medium.mybridge.co/30-amazing-python-projects-for-the-past-year-v-2018-9c310b04cdb3?source=tag_archive---------7---------------- | 30 Amazing Python Projects for the Past Year (v.2018) | For the past year, we’ve compared nearly 15,000 open source Python projects to pick Top 30 (0.2% chance).
This is an extremely competitive list and it carefully picks the best open source Python libraries, tools and programs published between January and December 2017. Mybridge AI evaluates the quality by considering popularity, engagement and recency. To give you an idea about the quality, the average number of Github stars is 3,707.
Open source projects can be useful for programmers. You can learn by reading the source code and build something on top of the existing projects. Give yourself plenty of time to play around with the Python projects you may have missed over the past year.
<Recommended Learning>
A) Beginner
The Python Bible: Build 11 Projects and Go from Beginner to Pro
[27,672 recommends, 4.7/5 stars]
B) Data Science
Python for Data Science and Machine Learning Bootcamp: Use NumPy, Pandas, Seaborn , Matplotlib , Plotly
[90,212 recommends, 4.6/5 stars]
(Click the numbers below. Credit given to the biggest contributor.)
Home-assistant (v0.6+): Open-source home automation platform running on Python 3 [11357 stars on Github]. Courtesy of Paulus Schoutsen
Pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration [11019 stars on Github]. Courtesy of Adam Paszke and others at PyTorch Team
Grumpy: A Python to Go source code transcompiler and runtime. [8367 stars on Github]. Courtesy of Dylan Trotter and others at Google
Sanic: Async Python 3.5+ web server that’s written to go fast [8028 stars on Github]. Courtesy of Channel Cat and Eli Uriegas
Python-fire: A library for automatically generating command line interfaces (CLIs) from absolutely any Python object. [7775 stars on Github]. Courtesy of David Bieber and others at Google Brain.
spaCy (v2.0): Industrial-strength Natural Language Processing (NLP) with Python and Cython [7633 stars on Github]. Courtesy of Matthew Honnibal
Pipenv: Python Development Workflow for Humans [7273 stars on Github]. Courtesy of Kenneth Reitz
MicroPython: A lean and efficient Python implementation for microcontrollers and constrained systems [5728 stars on Github].
Prophet: Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth [4369 stars on Github]. Courtesy of Facebook
SerpentAI: Game Agent Framework in Python. Helping you create AIs / Bots to play any game [3411 stars on Github]. Courtesy of Nicholas Brochu
Dash: Interactive, reactive web apps in pure python [3281 stars on Github]. Courtesy of Chris P
InstaPy: Instagram Bot. Like/Comment/Follow Automation Script. [3179 stars on Github]. Courtesy of TimG
Apistar: A fast and expressive API framework. For Python [3024 stars on Github]. Courtesy of Tom Christie
Faiss: A library for efficient similarity search and clustering of dense vectors [2717 stars on Github]. Courtesy of Matthijs Douze and others at Facebook Research
MechanicalSoup: A Python library for automating interaction with websites [2244 stars on Github].
Better-exceptions: Pretty and useful exceptions in Python, automatically [2121 stars on Github]. Courtesy of Qix
Flashtext: Extract Keywords from sentence or Replace keywords in sentences [2019 stars on Github]. Courtesy of Vikash Singh
Maya: Datetime for Humans in Python [1828 stars on Github]. Kenneth Reitz
Mimesis (v1.0): Python library, which helps generate mock data in different languages for various purposes. These data can be especially useful at various stages of software development and testing [1732 stars on Github]. Courtesy of Líkið Geimfari
Open-paperless: Scan, index, and archive all of your paper documents. A document management system. [1717 stars on Github]. Courtesy of Tina Zhou
Fsociety: Hacking Tools Pack. A Penetration Testing Framework. [1585 stars on Github]. Courtesy of Manis Manisso
LivePython: Visually trace Python code in real-time [1577 stars on Github]. Courtesy of Anastasis Germanidis
Hatch: A modern project, package, and virtual env manager for Python [1537 stars on Github]. Courtesy of Ofek Lev
Tangent: Source-to-Source Debuggable Derivatives in Pure Python [1433 stars on Github]. Courtesy of Alex Wiltschko and others at Google Brain
Clairvoyant: A Python program that identifies and monitors historical cues for short term stock movement [1159 stars on Github]. Courtesy of Anthony Federico
MonkeyType: A system for Python that generates static type annotations by collecting runtime types. [1143 stars on Github]. Courtesy of Carl Meyer at Instagram Engineering
Eel: A little Python library for making simple Electron-like HTML/JS GUI apps [1137 stars on Github].
Surprise v1.0: A Python scikit for building and analyzing recommender systems [1103 stars on Github].
Gain: Web crawling framework for everyone. [1009 stars on Github]. Courtesy of 高久力
PDFTabExtract: A set of tools for extracting tables from PDF files helping to do data mining on scanned documents. [722 stars on Github].
That’s it for Python Open Source of the Year. If you like this curation, read best daily articles based on your programming skills on our website.
|
Simon Greenman | 10.2K | 16 | https://towardsdatascience.com/who-is-going-to-make-money-in-ai-part-i-77a2f30b8cef?source=tag_archive---------8---------------- | Who Is Going To Make Money In AI? Part I – Towards Data Science | We are in the midst of a gold rush in AI. But who will reap the economic benefits? The mass of startups who are all gold panning? The corporates who have massive gold mining operations? The technology giants who are supplying the picks and shovels? And which nations have the richest seams of gold?
We are currently experiencing another gold rush in AI. Billions are being invested in AI startups across every imaginable industry and business function. Google, Amazon, Microsoft and IBM are in a heavyweight fight investing over $20 billion in AI in 2016. Corporates are scrambling to ensure they realise the productivity benefits of AI ahead of their competitors while looking over their shoulders at the startups. China is putting its considerable weight behind AI and the European Union is talking about a $22 billion AI investment as it fears losing ground to China and the US.
AI is everywhere. From the 3.5 billion daily searches on Google to the new Apple iPhone X that uses facial recognition to Amazon Alexa that cutely answers our questions. Media headlines tout the stories of how AI is helping doctors diagnose diseases, banks better assess customer loan risks, farmers predict crop yields, marketers target and retain customers, and manufacturers improve quality control. And there are think tanks dedicated to studying the physical, cyber and political risks of AI.
AI and machine learning will become ubiquitous and woven into the fabric of society. But as with any gold rush the question is who will find gold? Will it just be the brave, the few and the large? Or can the snappy upstarts grab their nuggets? Will those providing the picks and shovel make most of the money? And who will hit pay dirt?
As I started thinking about who was going to make money in AI I ended up with seven questions. Who will make money across the (1) chip makers, (2) platform and infrastructure providers, (3) enabling models and algorithm providers, (4) enterprise solution providers, (5) industry vertical solution providers, (6) corporate users of AI and (7) nations? While there are many ways to skin the cat of the AI landscape, hopefully below provides a useful explanatory framework — a value chain of sorts. The companies noted are representative of larger players in each category but in no way is this list intended to be comprehensive or predictive.
Even though the price of computational power has fallen exponentially, demand is rising even faster. AI and machine learning with its massive datasets and its trillions of vector and matrix calculations has a ferocious and insatiable appetite. Bring on the chips.
NVIDIA’s stock is up 1500% in the past two years, benefiting from the fact that its graphics processing unit (GPU) chips, historically used to render beautiful, fast-flowing game graphics, were perfect for machine learning. Google recently launched its second generation of Tensor Processing Units (TPUs). And Microsoft is building its own Brainwave AI machine learning chips. At the same time startups such as Graphcore, which has raised over $110M, are looking to enter the market. Incumbent chip providers such as IBM, Intel, Qualcomm and AMD are not standing still. Even Facebook is rumoured to be building a team to design its own AI chips. And the Chinese are emerging as serious chip players, with Cambricon Technology announcing the first cloud AI chip this past week.
What is clear is that the cost of designing and manufacturing chips then sustaining a position as a global chip leader is very high. It requires extremely deep pockets and a world class team of silicon and software engineers. This means that there will be very few new winners. Just like the gold rush days those that provide the cheapest and most widely used picks and shovels will make a lot of money.
The AI race is now also taking place in the cloud. Amazon realised early that startups would much rather rent computers and software than buy it. And so it launched Amazon Web Services (AWS) in 2006. Today AI is demanding so much compute power that companies are increasingly turning to the cloud to rent hardware through Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings.
The fight is on among the tech giants. Microsoft is offering their hybrid public and private Azure cloud service that allegedly has over one million computers. And in the past few weeks they announced that their Brainwave hardware solutions dramatically accelerate machine learning, with their own Bing search engine performance improving by a factor of ten. Google is rushing to play catchup with its own Google Cloud offering. And we are seeing the Chinese Alibaba starting to take global share.
Amazon — Microsoft — Google and IBM are going to continue to duke this one out. And watch out for the massively scaled cloud players from China. The big picks and shovels guys will win again.
Today Google is the world’s largest AI company, attracting the best AI minds, spending small-country-sized GDP budgets on R&D, and sitting on the best datasets gleaned from the billions of users of their services. AI is powering Google’s search, autonomous vehicles, speech recognition, intelligent reasoning, massive search and even its own work on drug discovery and disease diagnosis.
And the incredible AI machine learning software and algorithms that are powering all of Google’s AI activity — TensorFlow — is now being given away for free. Yes for free! TensorFlow is now an open source software project available to the world. And why are they doing this? As Jeff Dean, head of Google Brain, recently said there are 20 million organisations in the world that could benefit from machine learning today. If millions of companies use this best in class free AI software then they are likely to need lots of computing power. And who is better served to offer that? Well Google Cloud is of course optimised for TensorFlow and related AI services. And once you become reliant on their software and their cloud you become a very sticky customer for many years to come. No wonder it is a brutal race for global AI algorithm dominance with Amazon — Microsoft — IBM also offering their own cheap or free AI software services.
We are also seeing a fight for not only machine learning algorithms but cognitive algorithms that offer services for conversational agents and bots, speech, natural language processing (NLP) and semantics, vision, and enhanced core algorithms. One startup in this increasingly contested space is Clarifai who provides advanced image recognition systems for businesses to detect near-duplicates and visual searches. It has raised nearly $40M over the past three years. The market for vision related algorithms and services is estimated to be a cumulative $8 billion in revenue between 2016 and 2025.
The giants are not standing still. IBM, for example, is offering its Watson cognitive products and services. They have twenty or so APIs for chatbots, vision, speech, language, knowledge management and empathy that can simply be plugged into corporate software to create AI enabled applications. Cognitive APIs are everywhere. KDnuggets lists here over 50 of the top cognitive services from the giants and startups. These services are being put into the cloud as AI as a Service (AIaaS) to make them more accessible. Just recently Microsoft’s CEO Satya Nadella claimed that a million developers are using their AI APIs, services and tools for building AI-powered apps and nearly 300,000 developers are using their tools for chatbots. I wouldn’t want to be a startup competing with these Goliaths.
The winners in this space are likely to favour the heavyweights again. They can hire the best research and engineering talent, spend the most money, and have access to the largest datasets. To flourish startups are going to have to be really well funded, supported by leading researchers with a whole battery of IP patents and published papers, deep domain expertise, and have access to quality datasets. And they should have excellent navigational skills to sail ahead of the giants or sail different races. There will many startup casualties, but those that can scale will find themselves as global enterprises or quickly acquired by the heavyweights. And even if a startup has not found a path to commercialisation, then they could become acquihires (companies bought for their talent) if they are working on enabling AI algorithms with a strong research oriented team. We saw this in 2014 when DeepMind, a two year old London based company that developed unique reinforcement machine learning algorithms, was acquired by Google for $400M.
Enterprise software has been dominated by giants such as Salesforce, IBM, Oracle and SAP. They all recognise that AI is a tool that needs to be integrated into their enterprise offerings. But many startups are rushing to become the next generation of enterprise services filling in gaps where the incumbents don’t currently tread or even attempting to disrupt them.
We analysed over two hundred use cases in the enterprise space, ranging from customer management to marketing to cybersecurity to intelligence to HR to the hot area of Cognitive Robotic Process Automation (RPA). The enterprise field is much more open than previous spaces, with a veritable medley of startups providing point solutions for these use cases. Today there are over 200 AI powered companies just in the recruitment space, many of them AI startups. Cybersecurity leader DarkTrace and RPA leader UiPath have war chests in the $100 millions. The incumbents also want to make sure their ecosystems stay on the forefront and are investing in startups that enhance their offering. Salesforce has invested in Digital Genius, a customer management solution, and similarly in Unbabel, which offers enterprise translation services. Incumbents also often have more pressing problems. SAP, for example, is rushing to play catchup in offering a cloud solution, let alone catchup in AI. We are also seeing tools providers trying to simplify the tasks required to create, deploy and manage AI services in the enterprise. Machine learning training, for example, is a messy business where 80% of the time can be spent on data wrangling. And an inordinate amount of time is spent on testing and tuning of what are called hyperparameters. Petuum, a tools provider based in Pittsburgh in the US, has raised over $100M to help accelerate and optimise the deployment of machine learning models.
Many of these enterprise startup providers can have a healthy future if they quickly demonstrate that they are solving and scaling solutions to meet real world enterprise needs. But as always happens in software gold rushes there will be a handful of winners in each category. And for those AI enterprise category winners they are likely to be snapped up, along with the best in-class tool providers, by the giants if they look too threatening.
AI is driving a race for the best vertical industry solutions. There is a wealth of new AI powered startups providing solutions to corporate use cases in the healthcare, financial services, agriculture, automotive, legal and industrial sectors. And many startups are taking the ambitious path to disrupt the incumbent corporate players by offering a service directly to the same customers.
It is clear that many startups are providing valuable point solutions and can succeed if they have access to (1) large and proprietary data training sets, (2) domain knowledge that gives them deep insights into the opportunities within a sector, (3) a deep pool of talent around applied AI and (4) deep pockets of capital to fund rapid growth. Those startups that are doing well generally speak the corporate commercial language of customers, business efficiency and ROI in the form of well developed go-to-market plans.
For example, ZestFinance has raised nearly $300M to help improve credit decision making that will provide fair and transparent credit to everyone. They claim they have the world’s best data scientists. But they would, wouldn’t they? For those startups that are looking to disrupt existing corporate players they need really deep pockets. For example, Affirm, that offers loans to consumers at the point of sale, has raised over $700M. These companies quickly need to create a defensible moat to ensure they remain competitive. This can come from data network effects where more data begets better AI based services and products that gets more revenue and customers that gets more data. And so the flywheel effect continues.
And while corporates might look to new vendors in their industry for AI solutions that could enhance their top and bottom line, they are not going to sit back and let upstarts muscle in on their customers. And they are not going to sit still and let their corporate competitors gain the first advantage through AI. There is currently a massive race for corporate innovation. Large companies have their own venture groups investing in startups, running accelerators and building their own startups to ensure that they are leaders in AI driven innovation.
Large corporates are in a strong position against the startups and smaller companies due to their data assets. Data is the fuel for AI and machine learning. Who is better placed to take advantage of AI than the insurance company that has reams of historic data on underwriting claims? The financial services company that knows everything about consumer financial product buying behaviour? Or the search company that sees more user searches for information than any other?
Corporates large and small are well positioned to extract value from AI. In fact Gartner research predicts AI-derived business value is projected to reach up to $3.9 trillion by 2022. There are hundreds if not thousands of valuable use cases that AI can addresses across organisations. Corporates can improve their customer experience, save costs, lower prices, drive revenues and sell better products and services powered by AI. AI will help the big get bigger often at the expense of smaller companies. But they will need to demonstrate strong visionary leadership, an ability to execute, and a tolerance for not always getting technology enabled projects right on the first try.
Countries are also in a battle for AI supremacy. China has not been shy about its call to arms around AI. It is investing massively in growing technical talent and developing startups. Its more lax regulatory environment, especially in data privacy, helps China lead in AI sectors such as security and facial recognition. Just recently there was an example of Chinese police picking out one most-wanted face in a crowd of 50,000 at a music concert. And SenseTime Group Ltd, which analyses faces and images on a massive scale, reported it raised $600M, becoming the most valuable global AI startup. The Chinese point out that their mobile market is 3x the size of the US and there are 50x more mobile payments taking place — this is a massive data advantage. The European focus on data privacy regulation could put them at a disadvantage in certain areas of AI even if the Union is talking about a $22B investment in AI.
The UK, Germany, France and Japan have all made recent announcements about their nation state AI strategies. For example, President Macron said the French government will spend $1.85 billion over the next five years to support the AI ecosystem including the creation of large public datasets. Companies such as Google’s DeepMind and Samsung have committed to open new Paris labs and Fujitsu is expanding its Paris research centre. The British just announced a $1.4 billion push into AI including funding of 1000 AI PhDs. But while nations are investing in AI talent and the ecosystem, the question is who will really capture the value. Will France and the UK simply be subsidising PhDs who will be hired by Google? And while payroll and income taxes will be healthy on those six figure machine learning salaries, the bulk of the economic value created could be with this American company, its shareholders, and the smiling American Treasury.
AI will increase productivity and wealth in companies and countries. But how will that wealth be distributed when the headlines suggest that 30 to 40% of our jobs will be taken by the machines? Economists can point to lessons from hundreds of years of increasing technology automation. Will there be net job creation or net job loss? The public debate often cites Geoffrey Hinton, the godfather of machine learning, who suggested radiologists will lose their jobs by the dozen as machines diagnose diseases from medical images. But then we can look to the Chinese who are using AI to assist radiologists in managing the overwhelming demand to review 1.4 billion CT scans annually for lung cancer. The result is not job losses but an expanded market with more efficient and accurate diagnosis. However there is likely to be a period of upheaval when much of the value will go to those few companies and countries that control AI technology and data. And lower skilled countries whose wealth depends on jobs that are targets of AI automation will likely suffer. AI will favour the large and the technologically skilled.
In examining the landscape of AI it has become clear that we are now entering a truly golden era for AI. And there are a few key themes appearing as to where the economic value will migrate:
In short it looks like the AI gold rush will favour the companies and countries with control and scale over the best AI tools and technology, the data, the best technical workers, the most customers and the strongest access to capital. Those with scale will capture the lion’s share of the economic value from AI. In some ways ‘plus ça change, plus c’est la même chose.’ But there will also be large golden nuggets that will be found by a few choice brave startups. But, like any gold rush, many startups will not hit pay dirt. And many individuals and societies will likely feel like they have not seen the benefits of the gold rush.
This is the first part in a series of articles I intend to write on the topic of the economics of AI. I welcome your feedback.
Written by Simon Greenman
I am a lover of technology and how it can be applied in the business world. I run my own advisory firm Best Practice AI helping executives of enterprises and startups accelerate the adoption of ROI based AI applications . Please get in touch to discuss this. If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. And please post your comments or you can email me directly or find me on LinkedIn or twitter or follow me at Simon Greenman.
|
Eugenio Culurciello | 6.4K | 8 | https://towardsdatascience.com/the-fall-of-rnn-lstm-2d1594c74ce0?source=tag_archive---------9---------------- | The fall of RNN / LSTM – Towards Data Science | We fell for Recurrent neural networks (RNN), Long-short term memory (LSTM), and all their variants. Now it is time to drop them!
It is the year 2014 and LSTM and RNN make a great come-back from the dead. We all read Colah’s blog and Karpathy’s ode to RNN. But we were all young and inexperienced. For a few years this was the way to solve sequence learning, sequence translation (seq2seq), which also resulted in amazing results in speech to text comprehension and the rise of Siri, Cortana, Google voice assistant, Alexa. Also let us not forget machine translation, which resulted in the ability to translate documents into different languages or neural machine translation, but also translate images into text, text into images, and captioning video, and ... well you got the idea.
Then in the following years (2015–16) came ResNet and Attention. One could then better understand that LSTM were a clever bypass technique. Also, attention showed that MLP networks could be replaced by averaging networks influenced by a context vector. More on this later.
It only took 2 more years, but today we can definitely say:
But do not take our word for it: also see the evidence that attention-based networks are used more and more by Google, Facebook and Salesforce, to name a few. All these companies have replaced RNN and variants with attention-based models, and it is just the beginning. RNNs’ days are numbered in all applications, because they require more resources to train and run than attention-based models. See this post for more info.
Remember RNN and LSTM and derivatives use mainly sequential processing over time. See the horizontal arrow in the diagram below:
This arrow means that long-term information has to sequentially travel through all cells before getting to the present processing cell. This means it can be easily corrupted by being multiplied many times by small numbers (with magnitude less than 1). This is the cause of vanishing gradients.
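A tiny numeric sketch (not from the article) makes the point: repeatedly scaling a gradient by factors smaller than 1 in magnitude drives it towards zero.

```python
# Backpropagating through many time steps multiplies the gradient by a
# per-step factor; if its magnitude is below 1, the signal vanishes.
grad = 1.0
for _ in range(100):     # 100 time steps
    grad *= 0.9          # illustrative per-step factor with magnitude < 1
print(grad)              # ~2.7e-05: early time steps receive almost no signal
```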
To the rescue, came the LSTM module, which today can be seen as multiple switch gates, and a bit like ResNet it can bypass units and thus remember for longer time steps. LSTM thus have a way to remove some of the vanishing gradients problems.
But not all of it, as you can see from the figure above. Still we have a sequential path from older past cells to the current one. In fact the path is now even more complicated, because it has additive and forget branches attached to it. No question LSTM and GRU and derivatives are able to learn a lot of longer term information! See results here; but they can remember sequences of 100s, not 1000s or 10,000s or more.
And one issue of RNNs is that they are not hardware friendly. Let me explain: it takes a lot of resources to train these networks fast, resources we do not have. It also takes a lot of resources to run these models in the cloud, and given that the demand for speech-to-text is growing rapidly, the cloud is not scalable. We will need to process at the edge, right into the Amazon Echo! See note below for more details.
If sequential processing is to be avoided, then we can find units that “look-ahead” or better “look-back”, since most of the time we deal with real-time causal data where we know the past and want to affect future decisions. Not so in translating sentences, or analyzing recorded videos, for example, where we have all data and can reason on it more time. Such look-back/ahead units are neural attention modules, which we previously explained here.
To the rescue, and combining multiple neural attention modules, comes the “hierarchical neural attention encoder”, shown in the figure below:
A better way to look into the past is to use attention modules to summarize all past encoded vectors into a context vector Ct.
Notice there is a hierarchy of attention modules here, very similar to the hierarchy of neural networks. This is also similar to Temporal convolutional network (TCN), reported in Note 3 below.
In the hierarchical neural attention encoder multiple layers of attention can look at a small portion of recent past, say 100 vectors, while layers above can look at 100 of these attention modules, effectively integrating the information of 100 x 100 vectors. This extends the ability of the hierarchical neural attention encoder to 10,000 past vectors.
But more importantly look at the length of the path needed to propagate a representation vector to the output of the network: in hierarchical networks it is proportional to log(N) where N are the number of hierarchy layers. This is in contrast to the T steps that a RNN needs to do, where T is the maximum length of the sequence to be remembered, and T >> N.
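A minimal PyTorch-style sketch of one such attention module is given below; the single-query formulation and the shapes are simplifying assumptions for illustration, not the exact architecture in the figure.

```python
# One attention module: summarize a set of past encoded vectors into a context
# vector Ct by a softmax-weighted average.
import torch
import torch.nn.functional as F

def attention_context(past_vectors, query):
    """past_vectors: (T, d) past representations; query: (d,) current state."""
    scores = past_vectors @ query          # (T,) similarity of each past vector
    weights = F.softmax(scores, dim=0)     # attention weights, sum to 1
    return weights @ past_vectors          # (d,) weighted average = Ct

past = torch.randn(100, 64)                # e.g. the last 100 encoded vectors
query = torch.randn(64)
ct = attention_context(past, query)
print(ct.shape)                            # torch.Size([64])
```

Stacking such modules hierarchically, each layer summarizing the outputs of the layer below, is what gives the encoder its logarithmic path length.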
This architecture is similar to a neural Turing machine, but lets the neural network decide what is read out from memory via attention. This means an actual neural network will decide which vectors from the past are important for future decisions.
But what about storing to memory? The architecture above stores all previous representations in memory, unlike neural Turing machines. This can be rather inefficient: think about storing the representation of every frame in a video — most times the representation vector does not change frame-to-frame, so we really are storing too much of the same! What we can do is add another unit to prevent correlated data from being stored. For example by not storing vectors too similar to previously stored ones. But this is really a hack; the best would be to let the application guide what vectors should be saved or not. This is the focus of current research studies. Stay tuned for more information.
Tell your friends! It is very surprising to us to see so many companies still use RNN/LSTM for speech to text, many unaware that these networks are so inefficient and not scalable. Please tell them about this post.
About training RNN/LSTM: RNN and LSTM are difficult to train because they require memory-bandwidth-bound computation, which is the worst nightmare for hardware designers and ultimately limits the applicability of neural network solutions. In short, LSTM require 4 linear layers (MLP layers) per cell, run at each sequence time-step. Linear layers require large amounts of memory bandwidth to be computed; in fact they often cannot use many compute units because the system does not have enough memory bandwidth to feed the computational units. And it is easy to add more computational units, but hard to add more memory bandwidth (not enough lines on a chip, long wires from processors to memory, etc). As a result, RNN/LSTM and variants are not a good match for hardware acceleration, and we talked about this issue before here and here. A solution will be compute-in-memory devices like the ones we work on at FWDNXT.
See this repository for a simple example of these techniques.
Note 1: Hierarchical neural attention is similar to the ideas in WaveNet. But instead of a convolutional neural network we use hierarchical attention modules. Also: Hierarchical neural attention can be also bi-directional.
Note 2: RNN and LSTM are memory-bandwidth limited problems (see this for details). The processing unit(s) need as much memory bandwidth as the number of operations/s they can provide, making it impossible to fully utilize them! The external bandwidth is never going to be enough, and a way to slightly ameliorate the problem is to use internal fast caches with high bandwidth. The best way is to use techniques that do not require large amount of parameters to be moved back and forth from memory, or that can be re-used for multiple computation per byte transferred (high arithmetic intensity).
Note 3: Here is a paper comparing CNN to RNN. Temporal convolutional network (TCN) “outperform canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory”.
Note 4: Related to this topic is the fact that we know little of how our human brain learns and remembers sequences. “We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks” — these chunks remind me of small convolutional or attention-like networks on smaller sequences, that then are hierarchically strung together like in the hierarchical neural attention encoder and Temporal convolutional network (TCN). More studies make me think that working memory is similar to RNN networks that use recurrent real neuron networks, and their capacity is very low. On the other hand both the cortex and hippocampus give us the ability to remember really long sequences of steps (like: where did I park my car at the airport 5 days ago), suggesting that more parallel pathways may be involved to recall long sequences, where attention mechanisms gate important chunks and force hops in parts of the sequence that are not relevant to the final goal or task.
Note 5: The above evidence shows we do not read sequentially, in fact we interpret characters, words and sentences as a group. An attention-based or convolutional module perceives the sequence and projects a representation in our mind. We would not be misreading this if we processed this information sequentially! We would stop and notice the inconsistencies!
I have almost 20 years of experience in neural networks in both hardware and software (a rare combination). See about me here: Medium, webpage, Scholar, LinkedIn, and more...
If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference!
|
WiseWolf Fund | 14.2K | 8 | https://medium.com/@wisewolf_fund/unique-trends-to-look-out-for-with-artificial-intelligence-1db3de178463?source=---------0---------------- | GAME-CHANGING TRENDS TO LOOK OUT FOR WITH AI – WiseWolf Fund – Medium | Artificial Intelligence is a state-of-the-art technological trend that many companies are trying to integrate into their business. A recent report by McKinsey states that Baidu, the Chinese equivalent of Alphabet, invested $20 billion in AI last year. At the same time, Alphabet invested roughly $30 billion in developing AI technologies. The Chinese government has been actively pursuing AI technology in an attempt to control a future cornerstone innovation. Companies in the US are also investing time, money and energy into advancing AI technology.
The reason for such interest in artificial intelligence is that it can enhance any product or function. This is why companies and governments make considerable investments in the research and development of this technology. Its role in increasing production performance while simultaneously reducing costs cannot be underestimated.
Since some of the largest entities in the world are focused on promoting the AI technology, it would be wise to understand and follow the trend. AI is already shaping the economy, and in the near future, its effect may be even more significant. Ignoring the new technology and its influence on the global economic situation is a recipe for failure.
Despite the huge public interest and attention towards AI, its evolution is still somewhat halted by the objective causes. As any new and fast-developing industry, AI is quickly outgrowing its environment. According to Adam Temper, an author of many creative researches on artificial intelligence, the development of AI is mostly limited by the “lack of employees with relevant expertise, very few mature standard industry tools, limited high quality training material available, few options for easy access to preconfigured machine learning environments, and the general focus in the industry on implementation rather than design”.
With any new complex technology, the learning curve is steep. Our educational institutions are several steps behind the commercial applications of this technology. It is important that AI scientists work collaboratively, sharing knowledge and best practice, to address this deficiency. AI is rapidly increasing its impact on society; we need to ensure that the power of AI doesn’t remain with the elite few.
Another factor that may be hindering the progress of AI is the cautious stance that people tend to take towards it. Artificial intelligence is still too sci-fi, too strange and, therefore, sometimes scary. When people learn to trust AI, it will make a true quantum leap in the way of general adoption and application. Adam Temper supports this point, too, describing the possible ways for AI technology to gain public trust as
At the same time, if we analyze the primary purpose of AI, we will see it for what it really is — a tool to perform the routine tasks relieving humans for something more creative or innovative. When asked about the current trends and opportunities of AI, Aaron Edell, CEO and co-founder of Machine Box, and one of the top writers on AI, described them as follows:
AI has also become a political talking point in recent years. There have been arguments that AI will help to create jobs, but that it will also cause certain workers to lose their jobs. For example, estimates suggest that self-driving vehicles could cause 25,000 truck drivers to lose their jobs each month. Also, as many as 1 million pickers and packers working in US warehouses could be out of a job. This is due to the fact that by implementing AI, factories can operate with as few as a dozen workers.
Naturally, companies gladly implement artificial intelligence, as it ensures considerable savings. At the same time, governments are concerned about the current employment situation as well as the short-term and long-term predictions. Some countries have already begun to plan measures about the new AI technology that are intended to keep the economy stable.
In fact, it would not be fair to say that artificial intelligence causes people to lose jobs. True, the whole point of automation is making machines do what people used to do before. However, it would be more correct if we said that artificial intelligence reshapes the employment situation. Together with taking over human functions, it creates other jobs, forces people to master new skills, encourages workers to increase productivity. But it is obvious that AI is going to turn the regular sequence of events upside down.
Therefore, the best approach is not to wait until AI leaves you unemployed, but rather proactively embrace it and learn to live with it. As we said already, AI can also create jobs, so a wise move would be to learn to manage AI-based tools. With the advance of AI products, learning to work with them may secure you a job and even promote your career.
Your future largely depends on your current and expected income. However, another important factor is the way you manage your finances. Of course, investing in your own or your children’s knowledge is one of the best investments you can ever make. At the same time, if you need some financial cushion to secure your family’s welfare, you should look at the available investment opportunities.
And this is where artificial intelligence may become your best friend, professional consultant and investment manager. In the recent years, in addition to the traditional banks and financial institutions, we have witnessed the appearance of a totally new and innovative investment system.
We are talking about the blockchain technology and the cryptocurrencies that it supports. Millions of people all over the world have already appreciated the transparency and flexibility of the blockchain networks. By watching the cryptocurrency trends carefully and trading wisely, individual investors have made fortunes within a very short time.
Nowadays, the cryptocurrency opportunities are open for everyone, not only for the industry experts. There are investment funds running on artificial intelligence that are available for individual investors.
With such funds, you are, on one hand, protected by the blockchain technology. It ensures proper safety of your funds and the security of your transactions. On the other hand, you do not need to be an investment expert to make wise decisions. This is where artificial intelligence is at your service. It analyzes the existing trends on the extremely volatile cryptocurrency market and shows you the best opportunities.
The main point is that we should not regard AI as a threat to our careers and a danger to our well-being. Instead, we should analyze the investment openings created by AI technology that can secure our prosperity. For example, Wolf Coin is using AI technology to create a seamless investment channel for savvy individuals. This robust channel opens great opportunities that investors can use to become the new rich kids on the block. Most noteworthy, the low entry cost of $10 makes it an offer likely to enjoy a huge buzz. The focus on this new market opening will help people build a solid financial nest egg that will keep them safe even in the face of a storm.
WiseWolf Fund, in launching the Wolf Coin, focused its efforts on creating a great opportunity for people who wish to benefit from cryptocurrency trading but are new to this trend. With artificial intelligence and advanced analytical algorithms, the fund arranges the most favorable conditions for individual investors.
Mainstream manufacturers, companies, and factories are embracing AI technology to change the mode of their operations. Therefore, it is critical to keep tabs on this reality, as it can bring many benefits that cannot be found elsewhere. AI is one of the hottest topics of discussion; however, it is now clear that AI is here to stay. So, people should accept the obvious in order to create the future that they desire. The wisest strategy is to embrace artificial intelligence and let it work to maintain our well-being.
|
Justin Lee | 8.3K | 11 | https://medium.com/swlh/chatbots-were-the-next-big-thing-what-happened-5fc49dd6fa61?source=---------1---------------- | Chatbots were the next big thing: what happened? – The Startup – Medium | Oh, how the headlines blared:
Chatbots were The Next Big Thing.
Our hopes were sky high. Bright-eyed and bushy-tailed, the industry was ripe for a new era of innovation: it was time to start socializing with machines.
And why wouldn’t they be? All the road signs pointed towards insane success.
At the Mobile World Congress 2017, chatbots were the main headliners. The conference organizers cited an ‘overwhelming acceptance at the event of the inevitable shift of focus for brands and corporates to chatbots’.
In fact, the only significant question around chatbots was who would monopolize the field, not whether chatbots would take off in the first place:
One year on, we have an answer to that question.
No.
Because there isn’t even an ecosystem for a platform to dominate.
Chatbots weren’t the first technological development to be talked up in grandiose terms and then slump spectacularly.
The age-old hype cycle unfolded in familiar fashion...
Expectations built, built, and then..... It all kind of fizzled out.
The predicted paradigm shift didn’t materialize.
And apps are, tellingly, still alive and well.
We look back at our breathless optimism and turn to each other, slightly baffled:
“is that it? THAT was the chatbot revolution we were promised?”
Digit’s Ethan Bloch sums up the general consensus:
According to Dave Feldman, Vice President of Product Design at Heap, chatbots didn’t just take on one difficult problem and fail: they took on several and failed all of them.
Bots can interface with users in different ways. The big divide is text vs. speech. In the beginning (of computer interfaces) was the (written) word.
Users had to type commands manually into a machine to get anything done.
Then, graphical user interfaces (GUIs) came along and saved the day. We became entranced by windows, mouse clicks, icons. And hey, we eventually got color, too!
Meanwhile, a bunch of research scientists were busily developing natural language (NL) interfaces to databases, instead of having to learn an arcane database query language.
Another bunch of scientists were developing speech-processing software so that you could just speak to your computer, rather than having to type. This turned out to be a whole lot more difficult than anyone originally realised:
The next item on the agenda was holding a two-way dialog with a machine. Here’s an example dialog (dating back to the 1990s) with VCR setup system:
Pretty cool, right? The system takes turns in collaborative way, and does a smart job of figuring out what the user wants.
It was carefully crafted to deal with conversations involving VCRs, and could only operate within strict limitations.
Modern day bots, whether they use typed or spoken input, have to face all these challenges, but also work in an efficient and scalable way on a variety of platforms.
Basically, we’re still trying to achieve the same innovations we were 30 years ago.
Here’s where I think we’re going wrong:
An oversized assumption has been that apps are ‘over’, and would be replaced by bots.
By pitting two such disparate concepts against one another (instead of seeing them as separate entities designed to serve different purposes) we discouraged bot development.
You might remember a similar war cry when apps first came onto the scene ten years ago: but do you remember when apps replaced the internet?
It’s said that a new product or service needs to be two of the following: better, cheaper, or faster. Are chatbots cheaper or faster than apps? No — not yet, at least.
Whether they’re ‘better’ is subjective, but I think it’s fair to say that today’s best bot isn’t comparable to today’s best app.
Plus, nobody thinks that using Lyft is too complicated, or that it’s too hard to order food or buy a dress on an app. What is too complicated is trying to complete these tasks with a bot — and having the bot fail.
A great bot can be about as useful as an average app. When it comes to rich, sophisticated, multi-layered apps, there’s no competition.
That’s because machines let us access vast and complex information systems, and the early graphical information systems were a revolutionary leap forward in helping us locate those systems.
Modern-day apps benefit from decades of research and experimentation. Why would we throw this away?
But, if we swap the word ‘replace’ with ‘extend’, things get much more interesting.
Today’s most successful bot experiences take a hybrid approach, incorporating chat into a broader strategy that encompasses more traditional elements.
The next wave will be multimodal apps, where you can say what you want (like with Siri) and get back information as a map, text, or even a spoken response.
Another problematic aspect of the sweeping nature of hype is that it tends to bypass essential questions like these.
For plenty of companies, bots just aren’t the right solution. The past two years are littered with cases of bots being blindly applied to problems where they aren’t needed.
Building a bot for the sake of it, letting it loose and hoping for the best will never end well:
The vast majority of bots are built using decision-tree logic, where the bot’s canned response relies on spotting specific keywords in the user input.
The advantage of this approach is that it’s pretty easy to list all the cases that they are designed to cover. And that’s precisely their disadvantage, too.
That’s because these bots are purely a reflection of the capability, fastidiousness and patience of the person who created them; and how many user needs and inputs they were able to anticipate.
Problems arise when life refuses to fit into those boxes.
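To see why, here is a toy sketch of a keyword-driven bot of the kind described above; the keywords and canned replies are invented for illustration.

```python
# A toy decision-tree / keyword bot: it only "understands" inputs its author anticipated.
RULES = {
    "price": "Our plans start at $50/month.",
    "refund": "You can request a refund within 30 days.",
    "hours": "Support is available 9am-5pm EST.",
}

def reply(message):
    text = message.lower()
    for keyword, canned_response in RULES.items():
        if keyword in text:
            return canned_response
    return "Sorry, I didn't understand that."   # everything unanticipated falls through

print(reply("What are your hours?"))        # matches the "hours" rule
print(reply("How much does it cost?"))      # no keyword present -> the bot fails
```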
According to recent reports, 70% of the 100,000+ bots on Facebook Messenger are failing to fulfil simple user requests. This is partly a result of developers failing to narrow their bot down to one strong area of focus.
When we were building GrowthBot, we decided to make it specific to sales and marketers: not an ‘all-rounder’, despite the temptation to get overexcited about potential capabilities.
Remember: a bot that does ONE thing well is infinitely more helpful than a bot that does multiple things poorly.
A competent developer can build a basic bot in minutes — but one that can hold a conversation? That’s another story. Despite the constant hype around AI, we’re still a long way from achieving anything remotely human-like.
In an ideal world, the technology known as NLP (natural language processing) should allow a chatbot to understand the messages it receives. But NLP is only just emerging from research labs and is very much in its infancy.
Some platforms provide a bit of NLP, but even the best is at toddler-level capacity (for example, think about Siri understanding your words, but not their meaning.)
As Matt Asay outlines, this results in another issue: failure to capture the attention and creativity of developers.
And conversations are complex. They’re not linear. Topics spin around each other, take random turns, restart or abruptly finish.
Today’s rule-based dialogue systems are too brittle to deal with this kind of unpredictability, and statistical approaches using machine learning are just as limited. The level of AI required for human-like conversation just isn’t available yet.
And in the meantime, there are few high-quality examples of trailblazing bots to lead the way. As Dave Feldman remarked:
Once upon a time, the only way to interact with computers was by typing arcane commands into a terminal. Visual interfaces using windows, icons and a mouse were a revolution in how we manipulate information.
There’s a reason computing moved from text-based to graphical user interfaces (GUIs). On the input side, it’s easier and faster to click than it is to type.
Tapping or selecting is obviously preferable to typing out a whole sentence, even with predictive (often error-prone) text. On the output side, the old adage that a picture is worth a thousand words is usually true.
We love visual displays of information because we are highly visual creatures. It’s no accident that kids love touch screens. The pioneers who dreamt up graphical interfaces were inspired by cognitive psychology, the study of how the brain deals with communication.
Conversational UIs are meant to replicate the way humans prefer to communicate, but they end up requiring extra cognitive effort. Essentially, we’re swapping something simple for a more-complex alternative.
Sure, there are some concepts that we can only express using language (“show me all the ways of getting to a museum that give me 2000 steps but don’t take longer than 35 minutes”), but most tasks can be carried out more efficiently and intuitively with GUIs than with a conversational UI.
Aiming for a human dimension in business interactions makes sense.
If there’s one thing that’s broken about sales and marketing, it’s the lack of humanity: brands hide behind ticket numbers, feedback forms, do-not-reply-emails, automated responses and gated ‘contact us’ forms.
Facebook’s goal is that their bots should pass the so-called Turing Test, meaning you can’t tell whether you are talking to a bot or a human. But a bot isn’t the same as a human. It never will be.
A conversation encompasses so much more than just text.
Humans can read between the lines, leverage contextual information and understand double layers like sarcasm. Bots quickly forget what they’re talking about, meaning it’s a bit like conversing with someone who has little or no short-term memory.
As the HubSpot team pinpointed:
People aren’t easily fooled, and pretending a bot is a human is guaranteed to diminish returns (not to mention the fact that you’re lying to your users).
And even those rare bots that are powered by state-of-the-art NLP, and excel at processing and producing content, will fall short in comparison.
And here’s the other thing. Conversational UIs are built to replicate the way humans prefer to communicate — with other humans.
But is that how humans prefer to interact with machines?
Not necessarily.
At the end of the day, no amount of witty quips or human-like mannerisms will save a bot from conversational failure.
In a way, those early-adopters weren’t entirely wrong.
People are yelling at Google Home to play their favorite song, ordering pizza from the Domino’s bot and getting makeup tips from Sephora. But in terms of consumer response and developer involvement, chatbots haven’t lived up to the hype generated circa 2015/16.
Not even close.
Computers are good at being computers. Searching for data, crunching numbers, analyzing opinions and condensing that information.
Computers aren’t good at understanding human emotion. The state of NLP means they still don’t ‘get’ what we’re asking them, never mind how we feel.
That’s why it’s still impossible to imagine effective customer support, sales or marketing without the essential human touch: empathy and emotional intelligence.
For now, bots can continue to help us with automated, repetitive, low-level tasks and queries; as cogs in a larger, more complex system. And we did them, and ourselves, a disservice by expecting so much, so soon.
But that’s not the whole story.
Yes, our industry massively overestimated the initial impact chatbots would have. Emphasis on initial.
As Bill Gates once said:
The hype is over. And that’s a good thing. Now, we can start examining the middle-grounded grey area, instead of the hyper-inflated, frantic black and white zone.
I believe we’re at the very beginning of explosive growth. This sense of anti-climax is completely normal for transformational technology.
Messaging will continue to gain traction. Chatbots aren’t going away. NLP and AI are becoming more sophisticated every day.
Developers, apps and platforms will continue to experiment with, and heavily invest in, conversational marketing.
And I can’t wait to see what happens next.
Head of Growth for GrowthBot, Messaging & Conversational Strategy @HubSpot
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
|
Michael Solana | 680 | 5 | https://medium.com/s/story/artificial-intelligence-is-humanitys-rorschach-test-6fb1ef9c0ce4?source=---------2---------------- | Artificial Intelligence Is Humanity's Rorschach Test | Member Feature Story
Slime Sunday / Founders Fund
I don’t fear artificial intelligence, I fear people who fear artificial intelligence.
It’s the 1960s. A psychologist stares at his patient — a balding, middle-aged foreman with a cigarette in his hand, and a curl of smoke around him like a halo on an acid trip. The psychologist holds up an inkblot, an ambiguous, black splatter on a white flashcard, and asks his patient what he sees. The thinking is that his patient, unwilling or otherwise unable to express his feelings, his thoughts, his motivations, might inadvertently reveal some piece of his inner self while describing the ambiguous. The foreman doesn’t see a nondescript squiggle, or stain. He sees a man and woman making love, perhaps violently. He sees a mother holding her child. He sees a grisly murder. While the descriptions of these inkblots reveal very little about the world, they reveal a great deal about the man describing them, because when faced with an inscrutable abstract he projects himself onto the ambiguous.
Let’s look at this in the context of artificial intelligence. I’m not talking about self-driving cars, or algorithms serving ads for wallpaper and nice leather boots on Gmail. I’m not talking about the stuff we call artificial intelligence to raise money from bewildered venture capitalists on Sand Hill Road. I’m talking about general artificial intelligence, which is a computer that wants stuff, and chiefly to live. I’m talking about building a conscious machine just smart enough to make itself smarter. From here, the thought experiment runs like this: the conscious machine does make itself smarter, and once it’s smarter, it learns how to make itself smarter, which it does for good measure. The smarter the machine becomes, the faster this pattern repeats itself, and the intelligence of the machine begins to increase exponentially. In this way, a conscious artificial intelligence born on a Tuesday morning might be twice as smart as the smartest man who ever lived by Wednesday afternoon, and omnipotent by Friday. This is how we invent the thing that invents God. In nerd lore, it’s known as the Singularity. The question — the only question that could possibly matter to a human no longer at the top of the intellectual food chain — is what does an exponential intelligence want? Conventional wisdom: it extremely wants to murder you.
The dystopian version of superintelligence is illustrated with frequency by leaders in the technology industry, and is famously depicted by Hollywood in films like Terminator, or more recently Ex Machina, and even the Avengers. The “angry god A.I.” is a story you know, because it is the story you are constantly told: we build the thinking machine, it surpasses our abilities in every way, and it destroys us for one of any number of reasons. Maybe it perceives us as a threat. Maybe we’re just in its way, and it hardly perceives us at all — humanity, a disposable insect race. There are of course many arguments in opposition to the now ubiquitous concept of our apocalypse by artificial intelligence. I myself have called into question the logic of such dystopian arguments in Anatomy of Next. But our subject here is less pertaining to the nature of the conscious machine than it is to the way we talk about this subject, and what it means. First, consider that most of the artificial intelligence depicted in culture looks human, a representation with no basis in technological reality. Then, the true scope of the Singularity is almost impossible to predict, which begs a question: where are these opinions about the broadly unknowable coming from?
There’s an obvious difficulty in trying to understand the hypothetical motivations of a hypothetically god-like intelligence. To your beloved labradoodle, you are a being of immense magic with near unfathomable motivations. You summon light and sound from inanimate matter, soar through the streets on angry metal, cast fire from your hands! The labradoodle’s conception of man is distorted because there is a vast difference between the intelligence of a dog, and the intelligence of a human. Let us name this difference ‘x.’ Now, as we try and understand the difference between the most intelligent human who has ever lived and a hypothetical god-like intelligence born of the Singularity, let us set our difference in intelligence at a conservative ‘1000x.’
How does one even begin to conceive of a being this smart?
Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly-amplified mind. Here, each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power?
Psychological projection seems to work in several contexts outside of general artificial intelligence. In the technology industry the concept of “meritocracy” is now hotly debated. How much of your life is determined by luck, and how much by merit? There’s no answer here we know for sure, but has there ever been a better Rorschach test for separating high-achievers from people who were given what they have? Questions pertaining to human nature are almost pure self-reflection. Are we basically good, with some exceptions, or are humans basically beasts, with an animal nature just barely contained by a set of slowly-eroding stories we tell ourselves — law, faith, society? The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not — cannot, currently — know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror.
Not everyone who fears general artificial intelligence would cause harm to others. There are many people who haven’t thought deeply about these questions at all. They look to their neighbors for cues on what to think, and there is no shortage of people willing to tell them. The media has ads to sell, after all, and historically they have found great success in doing this with horror stories. But as we try to understand the people who have thought about these questions with some depth — with the depth required of a thoughtful screenplay, for example, or a book, or a company — it’s worth considering the inkblot.
technology, liberty, teenagers with superpowers. vp @foundersfund. creator + producer #anatomyofnext.
|
Emmanuel Ameisen | 935 | 11 | https://blog.insightdatascience.com/reinforcement-learning-from-scratch-819b65f074d8?source=---------3---------------- | Reinforcement Learning from scratch – Insight Data | Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program.
Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch.
Recently, I gave a talk at the O’Reilly AI conference in Beijing about some of the interesting lessons we’ve learned in the world of NLP. While there, I was lucky enough to attend a tutorial on Deep Reinforcement Learning (Deep RL) from scratch by Unity Technologies. I thought that the session, led by Arthur Juliani, was extremely informative and wanted to share some big takeaways below.
In our conversations with companies, we’ve seen a rise of interesting Deep RL applications, tools and results. In parallel, the inner workings and applications of Deep RL, such as AlphaGo pictured above, can often seem esoteric and hard to understand. In this post, I will give an overview of core aspects of the field that can be understood by anyone.
Many of the visuals are from the slides of the talk, and some are new. The explanations and opinions are mine. If anything is unclear, reach out to me here!
Deep RL is a field that has seen vast amounts of research interest, including learning to play Atari games, beating pro players at Dota 2, and defeating Go champions. Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?). In dialog systems for example, classical Deep Learning aims to learn the right response for a given query. On the other hand, Deep Reinforcement Learning focuses on the right sequences of sentences that will lead to a positive outcome, for example a happy customer.
This makes Deep RL particularly attractive for tasks that require planning and adaptation, such as manufacturing or self-driving. However, industry applications have trailed behind the rapidly advancing results coming out of the research community. A major reason is that Deep RL often requires an agent to experiment millions of times before learning anything useful. The best way to do this rapidly is by using a simulation environment. This tutorial will be using Unity to create environments to train agents in.
For this workshop led by Arthur Juliani and Leon Chen, the goal was to get every participant to successfully train multiple Deep RL algorithms in 4 hours. A tall order! Below is a comprehensive overview of many of the main algorithms that power Deep RL today. For a more complete set of tutorials, Arthur Juliani wrote an 8-part series starting here.
Deep RL can be used to best the top human players at Go, but to understand how that’s done, you first need to understand a few simple concepts, starting with much easier problems.
1/It all starts with slot machines
Let’s imagine you are faced with 4 chests that you can pick from at each turn. Each of them have a different average payout, and your goal is to maximize the total payout you receive after a fixed number of turns. This is a classic problem called Multi-armed bandits and is where we will start. The crux of the problem is to balance exploration, which helps us learn about which states are good, and exploitation, where we now use what we know to pick the best slot machine.
Here, we will utilize a value function that maps our actions to an estimated reward, called the Q function. First, we’ll initialize all Q values at equal values. Then, we’ll update the Q value of each action (picking each chest) based on how good the payout was after choosing this action. This allows us to learn a good value function. We will approximate our Q function using a neural network (starting with a very shallow one) that learns a probability distribution (by using a softmax) over the 4 potential chests.
While the value function tells us how good we estimate each action to be, the policy is the function that determines which actions we end up taking. Intuitively, we might want to use a policy that picks the action with the highest Q value. This performs poorly in practice, as our Q estimates will be very wrong at the start before we gather enough experience through trial and error. This is why we need to add a mechanism to our policy to encourage exploration. One way to do that is to use epsilon greedy, which consists of taking a random action with probability epsilon. We start with epsilon being close to 1, always choosing random actions, and lower epsilon as we go along and learn more about which chests are good. Eventually, we learn which chests are best.
In practice, we might want to take a more subtle approach than either taking the action we think is the best, or a random action. A popular method is Boltzmann Exploration, which adjust probabilities based on our current estimate of how good each chest is, adding in a randomness factor.
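As a rough illustration, here is what the tabular version of that loop might look like; the chest payouts are made-up numbers, and a Boltzmann policy would simply replace the random/argmax choice with sampling from softmax(Q / temperature):

```python
import numpy as np

np.random.seed(0)
true_payouts = [1.0, 0.5, 2.0, 0.2]          # hidden average reward of each chest (invented)
Q = np.zeros(4)                               # value estimates, initialized equal
counts = np.zeros(4)
epsilon = 1.0                                 # start fully random, decay toward exploitation

for step in range(2000):
    if np.random.rand() < epsilon:
        action = np.random.randint(4)         # explore
    else:
        action = int(np.argmax(Q))            # exploit current estimates
    reward = np.random.normal(true_payouts[action], 1.0)
    counts[action] += 1
    Q[action] += (reward - Q[action]) / counts[action]   # incremental average update
    epsilon = max(0.05, epsilon * 0.995)      # anneal exploration over time

print(Q.round(2))   # estimates should roughly approach the true payouts, with chest index 2 on top
```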
2/Adding different states
The previous example was a world in which we were always in the same state, waiting to pick from the same 4 chests in front of us. Most real-word problems consist of many different states. That is what we will add to our environment next. Now, the background behind chests alternates between 3 colors at each turn, changing the average values of the chests. This means we need to learn a Q function that depends not only on the action (the chest we pick), but the state (what the color of the background is). This version of the problem is called Contextual Multi-armed Bandits.
Surprisingly, we can use the same approach as before. The only thing we need to add is an extra dense layer to our neural network, that will take in as input a vector representing the current state of the world.
3/Learning about the consequences of our actions
There is another key factor that makes our current problem simpler than most. In most environments, such as in the maze depicted above, the actions that we take have an impact on the state of the world. If we move up on this grid, we might receive a reward or we might receive nothing, but the next turn we will be in a different state. This is where we finally introduce a need for planning.
First, we will define our Q function as the immediate reward in our current state, plus the discounted reward we are expecting by taking all of our future actions. This solution works if our Q estimate of states is accurate, so how can we learn a good estimate?
We will use a method called Temporal Difference (TD) learning to learn a good Q function. The idea is to only look at a limited number of steps in the future. TD(1) for example, only uses the next 2 states to evaluate the reward.
Surprisingly, we can use TD(0), which looks at the current state, and our estimate of the reward the next turn, and get great results. The structure of the network is the same, but we need to go through one forward step before receiving the error. We then use this error to back propagate gradients, like in traditional Deep Learning, and update our value estimates.
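A tabular sketch of that TD(0) update might look like the following; in the deep version, the same TD error is what gets backpropagated through the network instead of being written into a table (the maze size and hyperparameters are illustrative):

```python
import numpy as np

n_states = 49                      # e.g. a 7x7 maze flattened into one index per square
V = np.zeros(n_states)             # value estimate per state
gamma, alpha = 0.99, 0.1           # discount factor and learning rate

def td0_update(state, reward, next_state, done):
    """One TD(0) step: nudge the estimate toward reward + discounted bootstrap."""
    target = reward + (0.0 if done else gamma * V[next_state])
    td_error = target - V[state]
    V[state] += alpha * td_error
    return td_error                # in Deep RL, this error drives the gradient update instead
```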
3+/Introducing Monte Carlo
Another method to estimate the eventual success of our actions is Monte Carlo Estimates. This consists of playing out the entire episode with our current policy until we reach an end (success by reaching a green block or failure by reaching a red block in the image above) and use that result to update our value estimates for each traversed state. This allows us to propagate values efficiently in one batch at the end of an episode, instead of every time we make a move. The cost is that we are introducing noise to our estimates, since we attribute very distant rewards to them.
4/The world is rarely discrete
The previous methods were using neural networks to approximate our value estimates by mapping from a discrete number of states and actions to a value. In the maze for example, there were 49 states (squares) and 4 actions (move in each adjacent direction). In this environment, we are trying to learn how to balance a ball on a 2 dimensional paddle, by deciding at each time step whether we want to tilt the paddle left or right. Here, the state space becomes continuous (the angle of the paddle, and the position of the ball). The good news is, we can still use Neural Networks to approximate this function!
A note about off-policy vs on-policy learning: the methods we used previously are off-policy methods, meaning we can generate data with any strategy (using epsilon greedy, for example) and learn from it. On-policy methods can only learn from actions that were taken following our policy (remember, a policy is the method we use to determine which actions to take). This constrains our learning process, as we have to have an exploration strategy that is built into the policy itself, but allows us to tie results directly to our reasoning, and enables us to learn more efficiently.
The approach we will use here is called Policy Gradients, and is an on-policy method. Previously, we were first learning a value function Q for each action in each state and then building a policy on top. In Vanilla Policy Gradient, we still use Monte Carlo Estimates, but we learn our policy directly through a loss function that increases the probability of choosing rewarding actions. Since we are learning on policy, we cannot use methods such as epsilon greedy (which includes random choices), to get our agent to explore the environment. The way that we encourage exploration is by using a method called entropy regularization, which pushes our probability estimates to be wider, and thus will encourage us to make riskier choices to explore the space.
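Here is one way the Vanilla Policy Gradient loss with entropy regularization could be written, assuming a PyTorch-style setup; the tensor shapes and the entropy coefficient are illustrative choices, not values from the workshop:

```python
import torch

def policy_gradient_loss(logits, actions, returns, entropy_coef=0.01):
    """Vanilla policy-gradient loss with entropy regularization (a sketch).

    logits:  [T, n_actions] raw policy outputs over T steps of an episode
    actions: [T] actions that were actually taken
    returns: [T] Monte Carlo returns observed from each step onward
    """
    dist = torch.distributions.Categorical(logits=logits)
    log_probs = dist.log_prob(actions)
    # Increase the probability of actions that led to high returns...
    pg_loss = -(log_probs * returns).mean()
    # ...while keeping the distribution wide enough to keep exploring.
    entropy_bonus = dist.entropy().mean()
    return pg_loss - entropy_coef * entropy_bonus
```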
4+/Leveraging deep learning for representations
In practice, many state of the art RL methods require learning both a policy and value estimates. The way we do this with deep learning is by having both be two separate outputs of the same backbone neural network, which will make it easier for our neural network to learn good representations.
One method to do this is Advantage Actor Critic (A2C). We learn our policy directly with policy gradients (defined above), and learn a value function using something called Advantage. Instead of updating our value function based on rewards, we update it based on our advantage, which measures how much better or worse an action was than our previous value function estimated it to be. This helps make learning more stable compared to simple Q Learning and Vanilla Policy Gradients.
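A compact sketch of the A2C objective (again assuming PyTorch tensors and illustrative coefficients) shows how the advantage ties the policy head and value head of the shared network together:

```python
def a2c_losses(log_probs, values, returns, entropy, entropy_coef=0.01, value_coef=0.5):
    """A2C sketch: one network outputs both a policy and a value; the advantage couples them."""
    advantages = returns - values.detach()          # how much better the action was than expected
    policy_loss = -(log_probs * advantages).mean()  # policy gradient weighted by advantage
    value_loss = (returns - values).pow(2).mean()   # regress the value head toward observed returns
    return policy_loss + value_coef * value_loss - entropy_coef * entropy.mean()
```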
5/Learning directly from the screen
There is an additional advantage to using Deep Learning for these methods, which is that Deep Neural Networks excel at perceptive tasks. When a human plays a game, the information received is not a list of states, but an image (usually of a screen, or a board, or the surrounding environment).
Image-based Learning combines a Convolutional Neural Network (CNN) with RL. In this environment, we pass in a raw image instead of features, and add a 2 layer CNN to our architecture without changing anything else! We can even inspect activations to see what the network picks up on to determine value, and policy. In the example below, we can see that the network uses the current score and distant obstacles to estimate the value of the current state, while focusing on nearby obstacles for determining actions. Neat!
As a side note, while toying around with the provided implementation, I’ve found that visual learning is very sensitive to hyperparameters. Changing the discount rate slightly for example, completely prevented the neural network from learning even on a toy application. This is a widely known problem, but it is interesting to see it first hand.
6/Nuanced actions
So far, we’ve played with environments with continuous and discrete state spaces. However, every environment we studied had a discrete action space: we could move in one of four directions, or tilt the paddle to the left or right. Ideally, for applications such as self-driving cars, we would like to learn continuous actions, such as turning the steering wheel between 0 and 360 degrees. In this environment called 3D ball world, we can choose to tilt the paddle to any value on each of its axes. This gives us more control as to how we perform actions, but makes the action space much larger.
We can approach this by approximating our potential choices with Gaussian distributions. We learn a probability distribution over potential actions by learning the mean and variance of a Gaussian distribution, and our policy samples from that distribution. Simple, in theory :).
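In code, sampling a continuous action from a learned Gaussian might look like this (PyTorch-style, with the mean and log standard deviation assumed to come from the policy network):

```python
import torch

def sample_continuous_action(mean, log_std):
    """Sample a continuous action (e.g. paddle tilt on each axis) from a learned Gaussian."""
    dist = torch.distributions.Normal(mean, log_std.exp())
    action = dist.sample()                    # e.g. a 2-D tilt vector
    log_prob = dist.log_prob(action).sum(-1)  # used in the policy-gradient loss as before
    return action, log_prob
```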
7/Next steps for the brave
There are a few concepts that separate the algorithms described above from state of the art approaches. It’s interesting to see that conceptually, the best robotics and game-playing algorithms are not that far away from the ones we just explored:
That’s it for this overview, I hope this has been informative and fun! If you are looking to dive deeper into the theory of RL, give Arthur’s posts a read, or diving deeper by following David Silver’s UCL course. If you are looking to learn more about the projects we do at Insight, or how we work with companies, please check us out below, or reach out to me here.
Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley, New York, or Toronto? Learn more about the Insight Artificial Intelligence Fellows Program.
Are you a company working in AI and would like to get involved in the Insight AI Fellows Program? Feel free to get in touch.
AI Lead at Insight AI @EmmanuelAmeisen
Insight Fellows Program - Your bridge to a career in data
|
Irhum Shafkat | 2K | 15 | https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1?source=---------4---------------- | Intuitively Understanding Convolutions for Deep Learning | The advent of powerful and versatile deep learning frameworks in recent years has made it possible to implement convolution layers into a deep learning model an extremely simple task, often achievable in a single line of code.
However, understanding convolutions, especially for the first time can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet, convolutions as a concept are fascinatingly powerful and highly extensible, and in this post, we’ll break down the mechanics of the convolution operation, step-by-step, relate it to the standard fully connected network, and explore just how they build up a strong visual hierarchy, making them powerful feature extractors for images.
The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel.
The kernel repeats this process for every location it slides over, converting a 2D matrix of features into yet another 2D matrix of features. The output features are essentially the weighted sums (with the weights being the values of the kernel itself) of the input features located roughly in the same location as the output pixel on the input layer.
Whether or not an input feature falls within this “roughly same location”, gets determined directly by whether it’s in the area of the kernel that produced the output or not. This means the size of the kernel directly determines how many (or few) input features get combined in the production of a new output feature.
This is all in pretty stark contrast to a fully connected layer. In the above example, we have 5×5=25 input features, and 3×3=9 output features. If this were a standard fully connected layer, you’d have a weight matrix of 25×9 = 225 parameters, with every output feature being the weighted sum of every single input feature. Convolutions allow us to do this transformation with only 9 parameters, with each output feature, instead of “looking at” every input feature, only getting to “look” at input features coming from roughly the same location. Do take note of this, as it’ll be critical to our later discussion.
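To make the mechanics concrete, here is a naive single-channel convolution in NumPy; real frameworks implement this far more efficiently, but the arithmetic is the same (the example image and averaging kernel are just placeholders):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive single-channel 2D convolution (really cross-correlation, as in deep learning)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # elementwise multiply the kernel with the patch it sits on, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # the 5x5 input from the example
kernel = np.ones((3, 3)) / 9.0                     # a simple averaging kernel
print(conv2d(image, kernel).shape)                 # (3, 3) -> the 3x3 output feature map
```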
Before we move on, it’s definitely worth looking into two techniques that are commonplace in convolution layers: Padding and Strides.
Padding does something pretty clever to solve this: pad the edges with extra, “fake” pixels (usually of value 0, hence the oft-used term “zero padding”). This way, the kernel when sliding can allow the original edge pixels to be at its center, while extending into the fake pixels beyond the edge, producing an output the same size as the input.
The idea of the stride is to skip some of the slide locations of the kernel. A stride of 1 means picking slides a pixel apart, so every single slide, which is just a standard convolution. A stride of 2 means picking slides 2 pixels apart, skipping every other slide in the process and downsizing by roughly a factor of 2; a stride of 3 means skipping every 2 slides, downsizing roughly by a factor of 3, and so on.
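The standard output-size formula captures both tricks at once; a quick sketch (assuming square inputs and kernels) shows how padding and stride interact:

```python
def conv_output_size(n, k, padding=0, stride=1):
    """Spatial output size of a convolution along one dimension."""
    return (n + 2 * padding - k) // stride + 1

print(conv_output_size(5, 3))                       # 3: the plain 5x5 -> 3x3 case above
print(conv_output_size(5, 3, padding=1))            # 5: "same" padding preserves the size
print(conv_output_size(5, 3, padding=1, stride=2))  # 3: stride 2 roughly halves the output
```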
More modern networks, such as the ResNet architectures entirely forgo pooling layers in their internal layers, in favor of strided convolutions when needing to reduce their output sizes.
Of course, the diagrams above only deals with the case where the image has a single input channel. In practicality, most input images have 3 channels, and that number only increases the deeper you go into a network. It’s pretty easy to think of channels, in general, as being a “view” of the image as a whole, emphasising some aspects, de-emphasising others.
So this is where a key distinction between terms comes in handy: whereas in the 1 channel case, where the term filter and kernel are interchangeable, in the general case, they’re actually pretty different. Each filter actually happens to be a collection of kernels, with there being one kernel for every single input channel to the layer, and each kernel being unique.
Each filter in a convolution layer produces one and only one output channel, and they do it like so:
Each of the kernels of the filter “slides” over their respective input channels, producing a processed version of each. Some kernels may have stronger weights than others, to give more emphasis to certain input channels than others (eg. a filter may have a red kernel channel with stronger weights than others, and hence, respond more to differences in the red channel features than the others).
Each of the per-channel processed versions are then summed together to form one channel. The kernels of a filter each produce one version of each channel, and the filter as a whole produces one overall output channel.
Finally, then there’s the bias term. The way the bias term works here is that each output filter has one bias term. The bias gets added to the output channel so far to produce the final output channel.
And with the single filter case down, the case for any number of filters is identical: Each filter processes the input with its own, different set of kernels and a scalar bias with the process described above, producing a single output channel. They are then concatenated together to produce the overall output, with the number of output channels being the number of filters. A nonlinearity is then usually applied before passing this as input to another convolution layer, which then repeats this process.
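A naive NumPy sketch of a full convolution layer makes the filter/kernel distinction explicit; the shapes and random example values are purely illustrative:

```python
import numpy as np

def conv_layer(image, filters, biases):
    """Multi-channel convolution: image [H, W, C_in], filters [N, kh, kw, C_in], biases [N]."""
    n_filters, kh, kw, c_in = filters.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow, n_filters))
    for f in range(n_filters):                     # each filter produces exactly one output channel
        for i in range(oh):
            for j in range(ow):
                patch = image[i:i + kh, j:j + kw, :]           # all input channels at once
                out[i, j, f] = np.sum(patch * filters[f]) + biases[f]
    return out

x = np.random.rand(5, 5, 3)                 # e.g. a tiny RGB image
w = np.random.rand(8, 3, 3, 3)              # 8 filters, each holding one 3x3 kernel per input channel
print(conv_layer(x, w, np.zeros(8)).shape)  # (3, 3, 8): one output channel per filter
```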
Even with the mechanics of the convolution layer down, it can still be hard to relate it back to a standard feed-forward network, and it still doesn’t explain why convolutions scale to, and work so much better for image data.
Suppose we have a 4×4 input, and we want to transform it into a 2×2 grid. If we were using a feedforward network, we’d reshape the 4×4 input into a vector of length 16, and pass it through a densely connected layer with 16 inputs and 4 outputs. One could visualize the weight matrix W for a layer:
And although the convolution kernel operation may seem a bit strange at first, it is still a linear transformation with an equivalent transformation matrix. If we were to use a kernel K of size 3 on the reshaped 4×4 input to get a 2×2 output, the equivalent transformation matrix would be:
(Note: while the above matrix is an equivalent transformation matrix, the actual operation is usually implemented as a very different matrix multiplication[2])
The convolution then, as a whole, is still a linear transformation, but at the same time it’s also a dramatically different kind of transformation. For a matrix with 64 elements, there’s just 9 parameters which themselves are reused several times. Each output node only gets to see a select number of inputs (the ones inside the kernel). There is no interaction with any of the other inputs, as the weights to them are set to 0.
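One way to convince yourself of this equivalence is to build that transformation matrix explicitly; the sketch below constructs the 4×16 matrix for the 4×4-input, 3×3-kernel case and checks it against the direct sliding computation:

```python
import numpy as np

def conv_as_matrix(kernel, in_size=4, out_size=2):
    """Build the (out_size^2 x in_size^2) matrix equivalent to sliding a kernel over the input."""
    k = kernel.shape[0]
    W = np.zeros((out_size * out_size, in_size * in_size))
    for oi in range(out_size):
        for oj in range(out_size):
            for ki in range(k):
                for kj in range(k):
                    # the same 9 kernel weights are reused in every row; everything else stays 0
                    W[oi * out_size + oj, (oi + ki) * in_size + (oj + kj)] = kernel[ki, kj]
    return W

K = np.arange(1, 10, dtype=float).reshape(3, 3)
W = conv_as_matrix(K)                              # 4x16, mostly zeros
x = np.random.rand(4, 4)
direct = np.array([[np.sum(x[i:i + 3, j:j + 3] * K) for j in range(2)] for i in range(2)])
print(np.allclose(W @ x.reshape(16), direct.reshape(4)))   # True: same linear transformation
```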
It’s useful to see the convolution operation as a hard prior on the weight matrix. In this context, by prior, I mean predefined network parameters. For example, when you use a pretrained model for image classification, you use the pretrained network parameters as your prior, as a feature extractor to your final densely connected layer.
In that sense, there’s a direct intuition between why both are so efficient (compared to their alternatives). Transfer learning is efficient by orders of magnitude compared to random initialization, because you only really need to optimize the parameters of the final fully connected layer, which means you can have fantastic performance with only a few dozen images per class.
Here, you don’t need to optimize all 64 parameters, because we set most of them to zero (and they’ll stay that way), and the rest we convert to shared parameters, resulting in only 9 actual parameters to optimize. This efficiency matters, because when you move from the 784 inputs of MNIST to real world 224×224×3 images, that’s over 150,000 inputs. A dense layer attempting to halve the input to 75,000 inputs would still require over 10 billion parameters. For comparison, the entirety of ResNet-50 has some 25 million parameters.
So fixing some parameters to 0, and tying parameters increases efficiency, but unlike the transfer learning case, where we know the prior is good because it works on a large general set of images, how do we know this is any good?
The answer lies in the feature combinations the prior leads the parameters to learn.
Early on in this article, we discussed that:
So with backpropagation coming in all the way from the classification nodes of the network, the kernels have the interesting task of learning weights to produce features only from a set of local inputs. Additionally, because the kernel itself is applied across the entire image, the features the kernel learns must be general enough to come from any part of the image.
If this were any other kind of data, eg. categorical data of app installs, this would’ve been a disaster, for just because your number of app installs and app type columns are next to each other doesn’t mean they have any “local, shared features” common with app install dates and time used. Sure, the four may have an underlying higher level feature (eg. which apps people want most) that can be found, but that gives us no reason to believe the parameters for the first two are exactly the same as the parameters for the latter two. The four could’ve been in any (consistent) order and still be valid!
Pixels however, always appear in a consistent order, and nearby pixels influence a pixel e.g. if all nearby pixels are red, it’s pretty likely the pixel is also red. If there are deviations, that’s an interesting anomaly that could be converted into a feature, and all this can be detected from comparing a pixel with its neighbors, with other pixels in its locality.
And this idea is really what a lot of earlier computer vision feature extraction methods were based around. For instance, for edge detection, one can use a Sobel edge detection filter, a kernel with fixed parameters, operating just like the standard one-channel convolution:
For a non-edge containing grid (eg. the background sky), most of the pixels are the same value, so the overall output of the kernel at that point is 0. For a grid with a vertical edge, there is a difference between the pixels to the left and right of the edge, and the kernel computes that difference to be non-zero, activating and revealing the edges. The kernel only works on a 3×3 grid at a time, detecting anomalies on a local scale, yet when applied across the entire image, is enough to detect a certain feature on a global scale, anywhere in the image!
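A tiny NumPy example of the Sobel kernel in action (the patch values are made up) shows exactly this behaviour:

```python
import numpy as np

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # classic vertical-edge detector

flat_sky = np.full((3, 3), 0.7)                 # uniform patch: no edge to find
edge = np.array([[0.1, 0.1, 0.9],
                 [0.1, 0.1, 0.9],
                 [0.1, 0.1, 0.9]])              # dark-to-bright vertical transition

print(np.sum(flat_sky * sobel_x))               # 0.0: nothing to report
print(np.sum(edge * sobel_x))                   # strongly non-zero: the kernel activates on the edge
```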
So the key difference we make with deep learning is ask this question: Can useful kernels be learnt? For early layers operating on raw pixels, we could reasonably expect feature detectors of fairly low level features, like edges, lines, etc.
There’s an entire branch of deep learning research focused on making neural network models interpretable. One of the most powerful tools to come out of that is Feature Visualization using optimization[3]. The idea at its core is simple: optimize an image (usually initialized with random noise) to activate a filter as strongly as possible. This does make intuitive sense: if the optimized image is completely filled with edges, that’s strong evidence that’s what the filter itself is looking for and is activated by. Using this, we can peek into the learnt filters, and the results are stunning:
One important thing to notice here is that convolved images are still images. The output of a small grid of pixels from the top left of an image will still be on the top left. So you can run another convolution layer on top of another (such as the two on the left) to extract deeper features, which we visualize.
Yet, however deep our feature detectors get, without any further changes they’ll still be operating on very small patches of the image. No matter how deep your detectors are, you can’t detect faces from a 3×3 grid. And this is where the idea of the receptive field comes in.
An essential design choice of any CNN architecture is that the input sizes grow smaller and smaller from the start to the end of the network, while the number of channels grows deeper. This, as mentioned earlier, is often done through strides or pooling layers. Locality determines what inputs from the previous layer the outputs get to see. The receptive field determines what area of the original input to the entire network the output gets to see.
The idea of a strided convolution is that we only process slides a fixed distance apart, and skip the ones in the middle. From a different point of view, we only keep outputs a fixed distance apart, and remove the rest[1].
We then apply a nonlinearity to the output, and per usual, then stack another new convolution layer on top. And this is where things get interesting. Even if we were to apply a kernel of the same size (3×3), having the same local area, to the output of the strided convolution, the kernel would have a larger effective receptive field:
This is because the output of the strided layer still represents the same image. It is not so much cropping as it is resizing; the only difference is that each single pixel in the output is a “representative” of a larger area (whose other pixels were discarded) from the same rough location in the original input. So when the next layer’s kernel operates on the output, it’s operating on pixels collected from a larger area.
(Note: if you’re familiar with dilated convolutions, note that the above is not a dilated convolution. Both are methods of increasing the receptive field, but dilated convolutions are a single layer, while this takes place on a regular convolution following a strided convolution, with a nonlinearity inbetween)
This expansion of the receptive field allows the convolution layers to combine the low level features (lines, edges), into higher level features (curves, textures), as we see in the mixed3a layer.
Followed by a pooling/strided layer, the network continues to create detectors for even higher level features (parts, patterns), as we see for mixed4a.
The repeated reduction in image size across the network results in, by the 5th block on convolutions, input sizes of just 7×7, compared to inputs of 224×224. At this point, each single pixel represents a grid of 32×32 pixels, which is huge.
Compared to earlier layers, where an activation meant detecting an edge, here, an activation on the tiny 7×7 grid is one for a very high level feature, such as for birds.
The network as a whole progresses from a small number of filters (64 in case of GoogLeNet), detecting low level features, to a very large number of filters (1024 in the final convolution), each looking for an extremely specific high level feature. Followed by a final pooling layer, which collapses each 7×7 grid into a single pixel, each channel is a feature detector with a receptive field equivalent to the entire image.
Compared to what a standard feedforward network would have done, the output here is really nothing short of awe-inspiring. A standard feedforward network would have produced abstract feature vectors, from combinations of every single pixel in the image, requiring intractable amounts of data to train.
The CNN, with the priors imposed on it, starts by learning very low level feature detectors, and as across the layers as its receptive field is expanded, learns to combine those low-level features into progressively higher level features; not an abstract combination of every single pixel, but rather, a strong visual hierarchy of concepts.
By detecting low level features, and using them to detect higher level features as it progresses up its visual hierarchy, it is eventually able to detect entire visual concepts such as faces, birds, trees, etc, and that’s what makes them such powerful, yet efficient with image data.
With the visual hierarchy CNNs build, it is pretty reasonable to assume that their vision systems are similar to humans. And they’re really great with real world images, but they also fail in ways that strongly suggest their vision systems aren’t entirely human-like. The most major problem: Adversarial Examples[4], examples which have been specifically modified to fool the model.
Adversarial examples would be a non-issue if the only tampered ones that caused the models to fail were ones that even humans would notice. The problem is, the models are susceptible to attacks by samples which have only been tampered with ever so slightly, and would clearly not fool any human. This opens the door for models to silently fail, which can be pretty dangerous for a wide range of applications from self-driving cars to healthcare.
Robustness against adversarial attacks is currently a highly active area of research, the subject of many papers and even competitions, and solutions will certainly improve CNN architectures to become safer and more reliable.
CNNs were the models that allowed computer vision to scale from simple applications to powering sophisticated products and services, ranging from face detection in your photo gallery to making better medical diagnoses. They might be the key method in computer vision going forward, or some other new breakthrough might just be around the corner. Regardless, one thing is for sure: they’re nothing short of amazing, at the heart of many present-day innovative applications, and are most certainly worth deeply understanding.
Hope you enjoyed this article! If you’d like to stay connected, you’ll find me on Twitter here. If you have a question, comments are welcome! — I find them to be useful to my own learning process as well.
Curious programmer, tinkers around in Python and deep learning.
Sharing concepts, ideas, and codes.
|
Sam Drozdov | 2.3K | 6 | https://uxdesign.cc/an-intro-to-machine-learning-for-designers-5c74ba100257?source=---------5---------------- | An intro to Machine Learning for designers – UX Collective | There is an ongoing debate about whether or not designers should write code. Wherever you fall on this issue, most people would agree that designers should know about code. This helps designers understand constraints and empathize with developers. It also allows designers to think outside of the pixel perfect box when problem solving. For the same reasons, designers should know about machine learning.
Put simply, machine learning is a “field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Even though Arthur Samuel coined the term over fifty years ago, only recently have we seen the most exciting applications of machine learning — digital assistants, autonomous driving, and spam-free email all exist thanks to machine learning.
Over the past decade new algorithms, better hardware, and more data have made machine learning an order of magnitude more effective. Only in the past few years have companies like Google, Amazon, and Apple made some of their powerful machine learning tools available to developers. Now is the best time to learn about machine learning and apply it to the products you are building.
Since machine learning is now more accessible than ever before, designers today have the opportunity to think about how machine learning can be applied to improve their products. Designers should be able to talk with software developers about what is possible, how to prepare, and what outcomes to expect. Below are a few example applications that should serve as inspiration for these conversations.
Machine learning can help create user-centric products by personalizing experiences to the individuals who use them. This allows us to improve things like recommendations, search results, notifications, and ads.
Machine learning is effective at finding abnormal content. Credit card companies use this to detect fraud, email providers use this to detect spam, and social media companies use this to detect things like hate speech.
Machine learning has enabled computers to begin to understand the things we say (natural-language processing) and the things we see (computer vision). This allows Siri to understand “Siri, set a reminder...”, Google Photos to create albums of your dog, and Facebook to describe a photo to those visually impaired.
Machine learning is also helpful in understanding how users are grouped. This insight can then be used to look at analytics on a group-by-group basis. From here, different features can be evaluated across groups or be rolled out to only a particular group of users.
Machine learning allows us to make predictions about how a user might behave next. Knowing this, we can help prepare for a user’s next action. For example, if we can predict what content a user is planning on viewing, we can preload that content so it’s immediately ready when they want it.
Depending on the application and what data is available, there are different types of machine learning algorithms to choose from. I’ll briefly cover each of the following.
Supervised learning allows us to make predictions using correctly labeled data. Labeled data is a group of examples that has informative tags or outputs. For example, photos with associated hashtags or a house’s features (e.g. number of bedrooms, location) and its price.
By using supervised learning we can fit a line to the labelled data that either splits the data into categories or represents the trend of the data. Using this line we are able to make predictions on new data. For example, we can look at new photos and predict hashtags or look at a new house’s features and predict its price.
If the output we are trying to predict is a list of tags or values we call it classification. If the output we are trying to predict is a number we call it regression.
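For readers who want to see what this looks like in practice, here is a minimal regression sketch using scikit-learn; the house features and prices are invented for illustration:

```python
from sklearn.linear_model import LinearRegression

# Toy labeled data: [bedrooms, square meters] -> price (numbers are made up)
features = [[2, 60], [3, 80], [4, 120], [5, 200]]
prices = [250_000, 320_000, 450_000, 700_000]

model = LinearRegression()
model.fit(features, prices)       # "fit a line" to the labeled examples

new_house = [[3, 100]]
print(model.predict(new_house))   # predicted price for an unseen house
```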
Unsupervised learning is helpful when we have unlabeled data or we are not exactly sure what outputs (like an image’s hashtags or a house’s price) are meaningful. Instead we can identify patterns among unlabeled data. For example, we can identify related items on an e-commerce website or recommend items to someone based on others who made similar purchases.
If the pattern is a group we call it a cluster. If the pattern is a rule (e.g. if this, then that) we call it an association.
Reinforcement learning doesn’t use an existing data set. Instead we create an agent to collect its own data through trial-and-error in an environment where it is reinforced with a reward. For example, an agent can learn to play Mario by receiving a positive reward for collecting coins and a negative reward for walking into a Goomba.
Reinforcement learning is inspired by the way that humans learn and has turned out to be an effective way to teach computers. Specifically, reinforcement has been effective at training computers to play games like Go and Dota.
Understanding the problem you are trying to solve and the available data will constrain the types of machine learning you can use (e.g. identifying objects in an image with supervised learning requires a labeled data set of images). However, constraints are the fruit of creativity. In some cases, you can set out to collect data that is not already available or consider other approaches.
Even though machine learning is a science, it comes with a margin of error. It is important to consider how a user’s experience might be impacted by this margin of error. For example, when an autonomous car fails to recognize its surroundings people can get hurt.
Even though machine learning has never been as accessible as it is today, it still requires additional resources (developers and time) to be integrated into a product. This makes it important to think about whether the resulting impact justifies the amount of resources needed to implement.
We have barely covered the tip of the iceberg, but hopefully at this point you feel more comfortable thinking about how machine learning can be applied to your product. If you are interested in learning more about machine learning, here are some helpful resources:
Thanks for reading. Chat with me on Twitter @samueldrozdov
Digital Product Designer samueldrozdov.com
Curated stories on user experience, usability, and product design. By @fabriciot and @caioab.
|
Conor Dewey | 252 | 10 | https://towardsdatascience.com/the-big-list-of-ds-ml-interview-resources-2db4f651bd63?source=---------6---------------- | The Big List of DS/ML Interview Resources – Towards Data Science | Data science interviews certainly aren’t easy. I know this first hand. I’ve participated in over 50 individual interviews and phone screens while applying for competitive internships over the last calendar year. Through this exciting and somewhat (at times, very) painful process, I’ve accumulated a plethora of useful resources that helped me prepare for and eventually pass data science interviews.
Long story short, I’ve decided to sort through all my bookmarks and notes in order to deliver a comprehensive list of data science resources.
With this list by your side, you should have more than enough effective tools at your disposal next time you’re prepping for a big interview.
It’s worth noting that many of these resources are naturally going to be geared towards entry-level and intern data science positions, as that’s where my expertise lies. Keep that in mind and enjoy!
Here’s some of the more general resources covering data science as a whole. Specifically, I highly recommend checking out the first two links regarding 120 Data Science Interview Questions. While the ebook itself is a couple bucks out of pocket, the answers themselves are free on Quora. These were some of my favorite full-coverage questions to practice with right before an interview.
Even Data Scientists cannot escape the dreaded algorithmic coding interview. In my experience, this isn’t the case 100% of the time, but chances are you’ll be asked to work through something similar to an easy or medium question on LeetCode or HackerRank.
As far as language goes, most companies will let you use whatever language you want. Personally, I did almost all of my algorithmic coding in Java even though the positions were targeted at Python and R programmers. If I had to recommend one thing, it’s to break out your wallet and invest in Cracking the Coding Interview. It absolutely lives up to the hype. I plan to continue using it for years to come.
Once the interviewer knows that you can think-through problems and code effectively, chances are that you’ll move onto some more data science specific applications. Depending on the interviewer and the position, you will likely be able to choose between Python and R as your tool of choice. Since I’m partial to Python, my resources below will primarily focus on effectively using Pandas and NumPy for data analysis.
A data science interview typically isn’t complete without checking your knowledge of SQL. This can be done over the phone or through a live coding question, more likely the latter. I’ve found that the difficulty level of these questions can vary a good bit, ranging from being painfully easy to requiring complex joins and obscure functions.
Our good friend, statistics is still crucial for Data Scientists and it’s reflected as such in interviews. I had many interviews begin by seeing if I can explain a common statistics or probability concept in simple and concise terms. As positions get more experienced, I suspect this happens less and less as traditional statistical questions begin to take the more practical form of A/B testing scenarios, covered later in the post.
You’ll notice that I’ve compiled a few more resources here than in other sections. This isn’t a mistake. Machine learning is a complex field that is a virtual guarantee in data science interviews today.
The way that you’ll be tested on this is no guarantee however. It may come up as a conceptual question regarding cross validation or bias-variance tradeoff, or it may take the form of a take home assignment with a dataset attached. I’ve seen both several times, so you’ve got to be prepared for anything.
Specifically, check out the Machine Learning Flashcards below; they’re only a couple bucks and were by far my favorite way to quiz myself on any conceptual ML stuff.
This won’t be covered in every single data science interview, but it’s certainly not uncommon. Most interviews will have at least one section solely dedicated to product thinking, which often lends itself to A/B testing of some sort. Make sure you’re familiar with the concepts and statistical background necessary in order to be prepared when it comes up. If you have time to spare, I took the free online course by Udacity and overall, I was pretty impressed.
Lastly, I wanted to call out all of the posts related to data science jobs and interviewing that I read over and over again to understand, not only how to prepare, but what to expect as well. If you only check out one section here, this is the one to focus on. This is the layer that sits on top of all the technical skills and application. Don’t overlook it.
I hope you find these resources useful during your next interview or job search. I know I did, truthfully I’m just glad that I saved these links somewhere. Lastly, this post is part of an ongoing initiative to ‘open-source’ my experience applying and interviewing at data science positions, so if you enjoyed this content then be sure to follow me for more stuff like this.
If you’re interested in receiving my weekly rundown of interesting articles and resources focused on data science, machine learning, and artificial intelligence, then subscribe to Self Driven Data Science using the form below!
If you enjoyed this post, feel free to hit the clap button and if you’re interested in posts to come, make sure to follow me on Medium at the link below — I’ll be writing and shipping every day this month as part of a 30-Day Challenge.
This article was originally published on conordewey.com
Abhishek Parbhakar | 937 | 6 | https://towardsdatascience.com/must-know-information-theory-concepts-in-deep-learning-ai-e54a5da9769d?source=---------7---------------- | Must know Information Theory concepts in Deep Learning (AI) | Information theory is an important field that has made significant contribution to deep learning and AI, and yet is unknown to many. Information theory can be seen as a sophisticated amalgamation of basic building blocks of deep learning: calculus, probability and statistics. Some examples of concepts in AI that come from Information theory or related fields:
In the early 20th century, scientists and engineers were struggling with the question: “How do we quantify information? Is there an analytical way or a mathematical measure that can tell us about the information content?” For example, consider the two sentences below:
It is not difficult to tell that the second sentence gives us more information, since it also tells us that Bruno is “big” and “brown” in addition to being a “dog”. How can we quantify the difference between the two sentences? Can we have a mathematical measure that tells us how much more information the second sentence has compared to the first?
Scientists were struggling with these questions. Semantics, domain and form of data only added to the complexity of the problem. Then, mathematician and engineer Claude Shannon came up with the idea of “Entropy” that changed our world forever and marked the beginning of “Digital Information Age”.
Shannon proposed that the “semantic aspects of data are irrelevant”: the nature and meaning of the data don’t matter when it comes to information content. Instead, he quantified information in terms of probability distributions and “uncertainty”. Shannon also introduced the term “bit”, which he humbly credited to his colleague John Tukey. This revolutionary idea not only laid the foundation of information theory but also opened new avenues for progress in fields like artificial intelligence.
Below we discuss four popular, widely used, must-know information-theoretic concepts in deep learning and data science:
Also called Information Entropy or Shannon Entropy.
Entropy gives a measure of uncertainty in an experiment. Let’s consider two experiments:
If we compare the two experiments, in exp 2 it is easier to predict the outcome as compared to exp 1. So, we can say that exp 1 is inherently more uncertain/unpredictable than exp 2. This uncertainty in the experiment is measured using entropy.
Therefore, if there is more inherent uncertainty in the experiment, then it has higher entropy. Put differently: the less predictable the experiment, the higher the entropy. The probability distribution of the experiment is used to calculate the entropy.
A deterministic experiment, which is completely predictable, say tossing a coin with P(H)=1, has entropy zero. An experiment which is completely random, say rolling a fair die, is the least predictable, has maximum uncertainty, and has the highest entropy among experiments with the same set of outcomes.
Another way to look at entropy is as the average information gained when we observe the outcome of a random experiment. The information gained from an outcome is defined as a function of the probability of that outcome occurring: the rarer the outcome, the more information is gained from observing it.
For example, in a deterministic experiment we always know the outcome, so no new information is gained from observing it, and hence the entropy is zero.
For a discrete random variable X with possible outcomes (states) x_1, ..., x_n, the entropy, in units of bits, is defined as:
where p(x_i) is the probability of the i-th outcome of X.
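Written out in standard notation (the usual Shannon entropy formula), that definition reads:

H(X) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)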
Cross entropy is used to compare two probability distributions. It tells us how similar two distributions are.
The cross entropy between two probability distributions p and q defined over the same set of outcomes is given by:
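In standard notation (using bits, to match the entropy above):

H(p, q) = -\sum_{x} p(x) \log_2 q(x)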
Mutual information is a measure of the mutual dependency between two probability distributions or random variables. It tells us how much information about one variable is carried by the other variable.
Mutual information captures dependency between random variables and is more general than the vanilla correlation coefficient, which captures only linear relationships.
Mutual information of two discrete random variables X and Y is defined as:
where p(x,y) is the joint probability distribution of X and Y, and p(x) and p(y) are the marginal probability distributions of X and Y, respectively.
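Written out in standard notation, this is:

I(X; Y) = \sum_{x} \sum_{y} p(x, y) \log_2 \frac{p(x, y)}{p(x)\, p(y)}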
Also called Relative Entropy.
KL divergence is another measure to find similarities between two probability distributions. It measures how much one distribution diverges from the other.
Suppose we have some data, and the true distribution underlying it is ‘P’. But we don’t know this ‘P’, so we choose a new distribution ‘Q’ to approximate the data. Since ‘Q’ is just an approximation, it won’t be able to describe the data as well as ‘P’, and some information loss will occur. This information loss is given by the KL divergence.
KL divergence between ‘P’ and ‘Q’ tells us how much information we lose when we try to approximate data given by ‘P’ with ‘Q’.
KL divergence of a probability distribution Q from another probability distribution P is defined as:
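In standard notation:

D_{KL}(P \| Q) = \sum_{x} p(x) \log_2 \frac{p(x)}{q(x)}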
KL divergence is commonly used in unsupervised machine learning technique Variational Autoencoders.
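As a quick sanity check of how these quantities fit together, here is a small NumPy sketch on two made-up discrete distributions; it confirms that cross entropy equals entropy plus KL divergence, which is exactly the "information loss" described above.

```python
import numpy as np

# Two made-up discrete distributions over the same three outcomes.
p = np.array([0.7, 0.2, 0.1])   # the "true" distribution P
q = np.array([0.5, 0.3, 0.2])   # the approximating distribution Q

entropy_p = -np.sum(p * np.log2(p))         # H(P), in bits
cross_entropy = -np.sum(p * np.log2(q))     # H(P, Q)
kl_divergence = np.sum(p * np.log2(p / q))  # KL(P || Q)

# The "information loss": cross entropy = entropy + KL divergence.
assert np.isclose(cross_entropy, entropy_p + kl_divergence)
print(entropy_p, cross_entropy, kl_divergence)
```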
Information Theory was originally formulated by mathematician and electrical engineer Claude Shannon in his seminal paper “A Mathematical Theory of Communication” in 1948.
Note: the terms experiment, random variable, AI, machine learning, deep learning, and data science have been used loosely above, but they have technically distinct meanings.
In case you liked the article, do follow me Abhishek Parbhakar for more articles related to AI, philosophy and economics.
Aman Dalmia | 2.3K | 17 | https://blog.usejournal.com/what-i-learned-from-interviewing-at-multiple-ai-companies-and-start-ups-a9620415e4cc?source=---------8---------------- | What I learned from interviewing at multiple AI companies and start-ups | Over the past 8 months, I’ve been interviewing at various companies like Google’s DeepMind, Wadhwani Institute of AI, Microsoft, Ola, Fractal Analytics, and a few others primarily for the roles — Data Scientist, Software Engineer & Research Engineer. In the process, not only did I get an opportunity to interact with many great minds, but also had a peek at myself along with a sense of what people really look for when interviewing someone. I believe that if I’d had this knowledge before, I could have avoided many mistakes and have prepared in a much better manner, which is what the motivation behind this post is, to be able to help someone bag their dream place of work.
This post arose from a discussion with one of my juniors on the lack of really fulfilling job opportunities offered through campus placements for people working in AI. Also, when I was preparing, I noticed people using a lot of resources but as per my experience over the past months, I realised that one can do away with a few minimal ones for most roles in AI, all of which I’m going to mention at the end of the post. I begin with How to get noticed a.k.a. the interview. Then I provide a List of companies and start-ups to apply, which is followed by How to ace that interview. Based on whatever experience I’ve had, I add a section on What we should strive to work for. I conclude with Minimal Resources you need for preparation.
NOTE: For people who are sitting for campus placements, there are two things I’d like to add. Firstly, most of what I’m going to say (except for the last one maybe) is not going to be relevant to you for placements. But, and this is my second point, as I mentioned before, opportunities on campus are mostly in software engineering roles having no intersection with AI. So, this post is specifically meant for people who want to work on solving interesting problems using AI. Also, I want to add that I haven’t cleared all of these interviews but I guess that’s the essence of failure — it’s the greatest teacher! The things that I mention here may not all be useful but these are things that I did and there’s no way for me to know what might have ended up making my case stronger.
To be honest, this step is the most important one. What makes off-campus placements so tough and exhausting is getting the recruiter to actually go through your profile among the plethora of applications that they get. Having a contact inside the organisation place a referral for you would make it quite easy, but, in general, this part can be sub-divided into three key steps:
a) Do the regulatory preparation and do that well: So, with regulatory preparation, I mean —a LinkedIn profile, a Github profile, a portfolio website and a well-polished CV. Firstly, your CV should be really neat and concise. Follow this guide by Udacity for cleaning up your CV — Resume Revamp. It has everything that I intend to say and I’ve been using it as a reference guide myself. As for the CV template, some of the in-built formats on Overleaf are quite nice. I personally use deedy-resume. Here’s a preview:
As it can be seen, a lot of content can be fit into one page. However, if you really do need more than that, then the format linked above would not work directly. Instead, you can find a modified multi-page format of the same here. The next most important thing to mention is your Github profile. A lot of people underestimate the potential of this, just because unlike LinkedIn, it doesn’t have a “Who Viewed Your Profile” option. People DO go through your Github because that’s the only way they have to validate what you have mentioned in your CV, given that there’s a lot of noise today with people associating all kinds of buzzwords with their profile. Especially for data science, open-source has a big role to play too with majority of the tools, implementations of various algorithms, lists of learning resources, all being open-sourced. I discuss the benefits of getting involved in Open-Source and how one can start from scratch in an earlier post here. The bare minimum for now should be:
• Create a Github account if you don’t already have one.
• Create a repository for each of the projects that you have done.
• Add documentation with clear instructions on how to run the code.
• Add documentation for each file mentioning the role of each function, the meaning of each parameter, and proper formatting (e.g. PEP8 for Python), along with a script to automate the previous step (optional).
Moving on, the third step is what most people lack, which is having a portfolio website demonstrating their experience and personal projects. Making a portfolio indicates that you are really serious about getting into the field and adds a lot of points to the authenticity factor. Also, you generally have space constraints on your CV and tend to miss out on a lot of details. You can use your portfolio to really delve deep into the details if you want to and it’s highly recommended to include some sort of visualisation or demonstration of the project/idea. It’s really easy to create one too as there are a lot of free platforms with drag-and-drop features making the process really painless. I personally use Weebly which is a widely used tool. It’s better to have a reference to begin with. There are a lot of awesome ones out there but I referred to Deshraj Yadav’s personal website to begin with making mine:
Finally, a lot of recruiters and start-ups have nowadays started using LinkedIn as their go-to platform for hiring. A lot of good jobs get posted there. Apart from recruiters, the people working at influential positions are quite active there as well. So, if you can grab their attention, you have a good chance of getting in too. Apart from that, maintaining a clean profile is necessary for people to have the will to connect with you. An important part of LinkedIn is their search tool and for you to show up, you must have the relevant keywords interspersed over your profile. It took me a lot of iterations and re-evaluations to finally have a decent one. Also, you should definitely ask people with or under whom you’ve worked with to endorse you for your skills and add a recommendation talking about their experience of working with you. All of this increases your chance of actually getting noticed. I’ll again point towards Udacity’s guide for LinkedIn and Github profiles.
All this might seem like a lot, but remember that you don’t need to do it in a single day or even a week or a month. It’s a process, it never ends. Setting up everything at first would definitely take some effort but once it’s there and you keep updating it regularly as events around you keep happening, you’ll not only find it to be quite easy, but also you’ll be able to talk about yourself anywhere anytime without having to explicitly prepare for it because you become so aware about yourself.
b) Stay authentic: I’ve seen a lot of people make the mistake of presenting themselves differently for different job profiles. In my view, it’s always better to first decide what actually interests you and what you would be happy doing, and then search for relevant opportunities; not the other way round. The fact that the demand for AI talent surpasses the supply gives you this opportunity. Spending time on the regulatory preparation mentioned above would give you an all-around perspective on yourself and help make this decision easier. Also, you won’t need to prepare answers to the various kinds of questions you get asked during an interview. Most of them will come out naturally because you’ll be talking about something you really care about.
c) Networking: Once you’re done with a), figured out b), Networking is what will actually help you get there. If you don’t talk to people, you miss out on hearing about many opportunities that you might have a good shot at. It’s important to keep connecting with new people each day, if not physically, then on LinkedIn, so that upon compounding it after many days, you have a large and strong network. Networking is NOT messaging people to place a referral for you. When I was starting off, I did this mistake way too often until I stumbled upon this excellent article by Mark Meloon, where he talks about the importance of building a real connection with people by offering our help first. Another important step in networking is to get your content out. For example, if you’re good at something, blog about it and share that blog on Facebook and LinkedIn. Not only does this help others, it helps you as well. Once you have a good enough network, your visibility increases multi-fold. You never know how one person from your network liking or commenting on your posts, may help you reach out to a much broader audience including people who might be looking for someone of your expertise.
I’m presenting this list in alphabetical order to avoid the misinterpretation of any specific preference. However, I do place a “*” on the ones that I’d personally recommend. This recommendation is based on either of the following: mission statement, people, personal interaction or scope of learning. More than 1 “*” is purely based on the 2nd and 3rd factors.
Your interview begins the moment you have entered the room and a lot of things can happen between that moment and the time when you’re asked to introduce yourself — your body language and the fact that you’re smiling while greeting them plays a big role, especially when you’re interviewing for a start-up as culture-fit is something that they extremely care about. You need to understand that as much as the interviewer is a stranger to you, you’re a stranger to him/her too. So, they’re probably just as nervous as you are.
It’s important to view the interview as more of a conversation between yourself and the interviewer. Both of you are looking for a mutual fit — you are looking for an awesome place to work at and the interviewer is looking for an awesome person (like you) to work with. So, make sure that you’re feeling good about yourself and that you take the charge of making the initial moments of your conversation pleasant for them. And the easiest way I know how to make that happen is to smile.
There are mostly two types of interviews — one where the interviewer has come with a prepared set of questions and is going to ask you just those, irrespective of your profile, and a second where the interview is based on your CV. I’ll start with the second one.
This kind of interview generally begins with a “Can you tell me a bit about yourself?”. At this point, 2 things are a big NO — talking about your GPA in college and talking about your projects in detail. An ideal statement should be about a minute or two long, should give a good idea on what have you been doing till now, and it’s not restricted to academics. You can talk about your hobbies like reading books, playing sports, meditation, etc — basically, anything that contributes to defining you. The interviewer will then take something that you talk about here as a cue for his next question, and then the technical part of the interview begins. The motive of this kind of interview is to really check whether whatever you have written on your CV is true or not:
There would be a lot of questions on what could be done differently, or what would have happened if “X” had been used instead of “Y”. At this point, it’s important to know the kinds of trade-offs that are usually made during implementation; e.g. if the interviewer says that using a more complex model would have given better results, you might point out that you actually had less data to work with and that this would have led to overfitting. In one of the interviews, I was given a case study to work on that involved designing algorithms for a real-world use case. I’ve noticed that once I’ve been given the green flag to talk about a project, the interviewers really like it when I talk about it in the following flow:
Problem > 1 or 2 previous approaches > Our approach > Result > Intuition
The other kind of interview is really just to test your basic knowledge. Don’t expect those questions to be too hard. But they would definitely scratch every bit of the basics that you should be having, mainly based around Linear Algebra, Probability, Statistics, Optimisation, Machine Learning and/or Deep Learning. The resources mentioned in the Minimal Resources you need for preparation section should suffice, but make sure that you don’t miss out one bit among them. The catch here is the amount of time you take to answer those questions. Since these cover the basics, they expect that you should be answering them almost instantly. So, do your preparation accordingly.
Throughout the process, it’s important to be confident and honest about what you know and what you don’t know. If there’s a question that you’re certain you have no idea about, say it upfront rather than making “Aah”, “Um” sounds. If some concept is really important but you are struggling with answering it, the interviewer would generally (depending on how you did in the initial parts) be happy to give you a hint or guide you towards the right solution. It’s a big plus if you manage to pick their hints and arrive at the correct solution. Try to not get nervous and the best way to avoid that is by, again, smiling.
Now we come to the conclusion of the interview where the interviewer would ask you if you have any questions for them. It’s really easy to think that your interview is done and just say that you have nothing to ask. I know many people who got rejected just because of failing at this last question. As I mentioned before, it’s not only you who is being interviewed. You are also looking for a mutual fit with the company itself. So, it’s quite obvious that if you really want to join a place, you must have many questions regarding the work culture there or what kind of role are they seeing you in. It can be as simple as being curious about the person interviewing you. There’s always something to learn from everything around you and you should make sure that you leave the interviewer with the impression that you’re truly interested in being a part of their team. A final question that I’ve started asking all my interviewers, is for a feedback on what they might want me to improve on. This has helped me tremendously and I still remember every feedback that I’ve gotten which I’ve incorporated into my daily life.
That’s it. Based on my experience, if you’re just honest about yourself, are competent, truly care about the company you’re interviewing for and have the right mindset, you should have ticked all the right boxes and should be getting a congratulatory mail soon 😄
We live in an era full of opportunities and that applies to anything that you love. You just need to strive to become the best at it and you will find a way to monetise it. As Gary Vaynerchuk (just follow him already) says:
This is a great time to be working in AI and if you’re truly passionate about it, you have so much that you can do with AI. You can empower so many people that have always been under-represented. We keep nagging about the problems surrounding us, but there’s been never such a time where common people like us can actually do something about those problems, rather than just complaining. Jeffrey Hammerbacher (Founder, Cloudera) had famously said:
We can do so much with AI than we can ever imagine. There are many extremely challenging problems out there which require incredibly smart people like you to put your head down on and solve. You can make many lives better. Time to let go of what is “cool”, or what would “look good”. THINK and CHOOSE wisely.
Any Data Science interview comprises questions drawn mostly from a subset of the following four categories: Computer Science, Math, Statistics and Machine Learning.
If you’re not familiar with the math behind Deep Learning, then you should consider going over my last post for resources to understand them. However, if you are comfortable, I’ve found that the chapters 2, 3 and 4 of the Deep Learning Book are enough to prepare/revise for theoretical questions during such interviews. I’ve been preparing summaries for a few chapters which you can refer to where I’ve tried to even explain a few concepts that I found challenging to understand at first, in case you are not willing to go through the entire chapters. And if you’ve already done a course on probability, you should be comfortable answering a few numerical as well. For stats, covering these topics should be enough.
Now, the range of questions here can vary depending on the type of position you are applying for. If it’s a more traditional Machine Learning based interview where they want to check your basic knowledge in ML, you can complete any one of the following courses:
- Machine Learning by Andrew Ng — CS 229
- Machine Learning course by Caltech Professor Yaser Abu-Mostafa
Important topics are: Supervised Learning (Classification, Regression, SVM, Decision Tree, Random Forests, Logistic Regression, Multi-layer Perceptron, Parameter Estimation, Bayes’ Decision Rule), Unsupervised Learning (K-means Clustering, Gaussian Mixture Models), Dimensionality Reduction (PCA).
Now, if you’re applying for a more advanced position, there’s a high chance that you might be questioned on Deep Learning. In that case, you should be very comfortable with Convolutional Neural Networks (CNNs) and/or (depending upon what you’ve worked on) Recurrent Neural Networks (RNNs) and their variants. And by being comfortable, you must know what is the fundamental idea behind Deep Learning, how CNNs/RNNs actually worked, what kind of architectures have been proposed and what has been the motivation behind those architectural changes. Now, there’s no shortcut for this. Either you understand them or you put enough time to understand them. For CNNs, the recommended resource is Stanford’s CS 231N and CS 224N for RNNs. I found this Neural Network class by Hugo Larochelle to be really enlightening too. Refer this for a quick refresher too. Udacity coming to the aid here too. By now, you should have figured out that Udacity is a really important place for an ML practitioner. There are not a lot of places working on Reinforcement Learning (RL) in India and I too am not experienced in RL as of now. So, that’s one thing to add to this post sometime in the future.
Getting placed off-campus is a long journey of self-realisation. I realise that this has been another long post and I’m again extremely grateful to you for valuing my thoughts. I hope that this post finds a way of being useful to you and that it helped you in some way to prepare for your next Data Science interview better. If it did, I request you to really think about what I talk about in What we should strive to work for.
I’m very thankful to my friends from IIT Guwahati for their helpful feedback, especially Ameya Godbole, Kothapalli Vignesh and Prabal Jain. A majority of what I mention here, like “viewing an interview as a conversation” and “seeking feedback from our interviewers”, arose from multiple discussions with Prabal who has been advising me constantly on how I can improve my interviewing skills.
Lance Ulanoff | 15.1K | 5 | https://medium.com/@LanceUlanoff/did-google-duplex-just-pass-the-turing-test-ffcfe6868b02?source=---------9---------------- | Did Google Duplex just pass the Turing Test? – Lance Ulanoff – Medium | I think it was the first “Um.” That was the moment when I realized I was hearing something extraordinary: A computer carrying out a completely natural and very human-sounding conversation with a real person. And it wasn’t just a random talk. This conversation had a purpose, a destination: to make an appointment at a hair salon.
The entity making the call and appointment was Google Assistant running Duplex, Google’s still experimental AI voice system and the venue was Google I/O, Google’s yearly developer conference, which this year focused heavily on the latest developments in AI, Machine- and Deep-Learning.
Google CEO Sundar Pichai explained that what we were hearing was a real phone call made to a hair salon that didn’t know it was part of an experiment or that they were talking to a computer. He launched Duplex by asking Google Assistant to book a haircut appointment for Tuesday morning. The AI did the rest.
Duplex made the call and, when someone at the salon picked up, the voice AI started the conversation with:
“Hi, I’m calling to book a woman’s hair cut appointment for a client, um, I’m looking for something on May third?”
When the attendant asked Duplex to give her one second, Duplex responded with:
“Mmm-hmm.”
The conversation continued as the salon representative presented various dates and times and the AI asked about other options. Eventually, the AI and the salon worker agreed on an appointment date and time.
What I heard was so convincing I had trouble discerning who was the salon worker and who (what) was the Duplex AI. It was stunning and somewhat disconcerting. I liken it to the feeling you’d get if a store mannequin suddenly smiled at you.
It was easily the most remarkable human-computer conversation I’d ever heard and the closest thing I’ve seen to a voice AI passing the Turing Test, which is the AI threshold suggested by computer scientist Alan Turing in the 1950s. Turing posited that by 2000 computers would be able to fool humans into thinking they were conversing with other humans at least 30% of the time.
He was right. In 2014, a chatbot named Eugene Goostman successfully impersonated a wise-ass 14-year old programmer during lengthy text-based chats with unsuspecting humans.
Turing, however hadn’t necessarily considered voice-based systems and, for obvious reasons, talking computers are somewhat less adept at fooling humans. Spend a few minutes conversing with your voice assistant of choice and you’ll soon discover their limitations.
Their speech can be stilted, pronunciations off and response times can be slow (especially if they’re trying to access a cloud-based server) and forget about conversations. Most can handle two consecutive queries at most and they virtually all require a trigger phrase like “Alexa” or “Hey Siri.” (Google is working on removing unnecessary “Okay Googles” in short back and forth convos with the digital assistant).
Google Assistant running Duplex didn’t exhibit any of those shortcomings. It sounded like a young female assistant carefully scheduling her boss’s haircut. In addition to the natural cadence, Google added speech disfluencies (the verbal tics: “ums,” “uhs,” and “mm-hmms”) and the latency or pauses that naturally occur when people are speaking. The result is a perfectly human voice produced entirely by a computer.
The second call demonstration, where a male-voiced Duplex tried to make restaurant reservations, was even more remarkable. The human call participant didn’t entirely understand Duplex’s verbal requests and then told Duplex that, for the number of people it wanted to bring to the restaurant, they didn’t need a reservation. Duplex handled all this without missing a beat.
“The amazing thing is that the assistant can actually understand the nuances of conversation,” said Pichai during the keynote. That ability comes by way of neural network technology and intensive machine learning.
For as accomplished as Duplex is in making hair appointments and restaurant reservations, it might stumble in deeper or more abstract conversations. In a blog post on Duplex development, Google engineers explained that they constrained Duplex’s training to “closed domains”, or well-defined topics (like dinner reservations and hair appointments). This gave them the ability to perform intense exploration of the topics and focus training. Duplex was guided during training within the domain by “experienced operators” who could keep track of mistakes and worked with engineers to improve responses.
In short, this means that while Duplex has your hair and dining-out options covered, it could stumble in movie reservations and negotiations with your cable provider.
Even so, Duplex fooled two humans. I heard no hesitation or confusion. In the hair salon call, there was no indication that the salon worker thought something was amiss. She wanted to help this young woman make an appointment. What will she think when she learns she was duped by Duplex?
Obviously, Duplex’s conversations were also short, each lasting less than a minute, putting them well-short of the Turing Test benchmark. I would’ve enjoyed hearing the conversations devolve as they extended a few minutes or more.
I’m sure Duplex will soon tackle more domains and longer conversations, and it will someday pass the Turing Test.
It’s only a matter of time before Duplex is handling other mundane or difficult calls for us, like calling our parents with our own voices (see Wavenet technology). Eventually, we’ll have our Duplex voices call each other, handling pleasantries and making plans, which Google Assistant can then drop in our Google Calendar.
But that’s the future.
For now, Duplex’s performance stands as a powerful proof of concept for our long-imagined future of conversational AI’s capable of helping, entertaining and engaging with us. It’s the first major step on the path to the AI depicted in the movie Her where Joaquin Phoenix starred as a man who falls in love with his chatty voice assistant played by the disembodied voice of Scarlett Johansson.
So, no, Duplex didn’t pass the Turing test, but I do wonder what Alan Turing would think of it.
Sophia Arakelyan | 7 | 4 | https://buzzrobot.com/from-ballerina-to-ai-researcher-part-i-46fce67f809b?source=---------2---------------- | From Ballerina to AI Researcher: Part I – buZZrobot | Last year, I published the article “From Ballerina to AI writer” where I described how I embraced the technical part of AI without a technical background. But having love and passion for AI, I educated myself and was able to build a neural net classifier and do projects in Deep RL.
Recently, I’ve become a participant in the OpenAI Scholarship Program (OpenAI is a non-profit that gathers top AI researchers to ensure the safety of AI to benefit humanity). Every week for the next three months I’ll publish blog posts sharing my story of transformation from a person dedicated to 15 years of professional dancing and then writing about tech and AI to actually conducting AI research.
Finding your true calling — the key component of happiness
My primary goal with the series of blog posts “From Ballerina to AI researcher” is to show that it’s never too late to embrace a new field, start over again, and find your true calling. Finding work you love is one of the most important components of happiness — something that you do every day and invest your time in to grow; that makes you feel fulfilled, gives you energy; something that is a refuge for your soul.
Great things never come easy. We have to be able to fight to make great things happen. But you can’t fight for something you don’t believe in, especially if you don’t feel like it’s really important for you and humanity. Finding that thing is a real challenge. I feel lucky that I found my true passion — AI. To me, the technology itself and the AI community — researchers, scientists, people who dedicate their lives to building the most powerful technology of all time with the mission to benefit humanity and make it safe for us — is a great source of energy.
The structure of the blog post series
Today, I’m giving an overall intro of what I’m going to cover in my “From Ballerina to AI Researcher” series.
I’ll dedicate the sequence of blog posts during the OpenAI Scholars program to several aspects of AI technology. I’ll cover those areas that concern me a lot, like AI and automation, bias in ML, dual use of AI, etc.
Also, the structure of my posts will include some insights on what I’m working on right now (the final technical project will be available by the end of August and will be open-sourced).
I feel very lucky to have Alec Radford, an experienced researcher, as my mentor who guides me in the NLP and NLU research area.
First week of my scholarship
I’ve dedicated my first week within the program to learning about the Transformer architecture, which performs much better on sequential data than RNNs and LSTMs.
The novelty of the architecture is its multi-head self-attention mechanism. According to the original paper, experiments with the transformer on two machine translation tasks showed the model to be superior in quality while being more parallelizable and requiring significantly less time to train.
More concretely, when RNNs or CNNs take a sequence as an input, it goes through sentences word by word, which is a huge obstacle toward parallelization of the process (takes more time to train models). Moreover, if sequences are too long, the model tends to forget the content of distant positions in sequence or mixes it with the following positions’ content — this is the fundamental problem in dealing with sequential data. The transformer architecture reduced this problem thanks to the multi-head self-attention mechanism.
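To make the self-attention idea concrete, here is a minimal NumPy sketch of scaled dot-product attention, the building block inside the multi-head mechanism. It is a single head with no masking, learned projections, or multi-head splitting; a toy illustration rather than the full Transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d_k) arrays of queries, keys and values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # every position compared with every other at once
    weights = softmax(scores, axis=-1)  # how much each position attends to the others
    return weights @ V                  # weighted mix of the values

# Toy usage: a "sequence" of 5 positions, each a 4-dimensional vector.
x = np.random.randn(5, 4)
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
```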
I dug into RNN and LSTM models to catch up on the background. To that end, I found Andrew Ng’s course on Deep Learning, along with the papers, extremely useful. To develop insights regarding the transformer, I went through the following resources: the video by Łukasz Kaiser from Google Brain, one of the model’s creators; a blog post with very well elaborated content on the model; and I ran the tensor2tensor code as well as the code using the PyTorch framework from this paper to “feel” the difference between the TF and PyTorch frameworks.
Overall, the goal within the program is to develop deep comprehension of the NLU research area: challenges, current state of the art; and to formulate and test hypotheses that tackle the most important problems of the field.
I’ll share more on what I’m working on in my future articles. Meanwhile, if you have questions/feedback, please leave a comment.
If you want to learn more about me, here are my Facebook and Twitter accounts.
I’d appreciate your feedback on my posts, such as what topics are most interesting to you that I should consider further coverage on.
Matt Schlicht | 5K | 11 | https://chatbotsmagazine.com/the-complete-beginner-s-guide-to-chatbots-8280b7b906ca?source=tag_archive---------3---------------- | The Complete Beginner’s Guide To Chatbots – Chatbots Magazine | What are chatbots? Why are they such a big opportunity? How do they work? How can I build one? How can I meet other people interested in chatbots?
These are the questions we’re going to answer for you right now.
Ready? Let’s do this.
(Do you work in ecommerce? Stop reading and click here, we made something for you.)
(p.s. here is where I believe the future of bots is headed, you will probably disagree with me at first.)
(p.p.s. My newest guide about conversational commerce is up, I think you’ll find it super interesting.)
A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.).
If you haven’t wrapped your head around it yet, don’t worry. Here’s an example to help you visualize a chatbot.
If you wanted to buy shoes from Nordstrom online, you would go to their website, look around until you find the shoes you wanted, and then you would purchase them.
If Nordstrom makes a bot, which I’m sure they will, you would simply be able to message Nordstrom on Facebook. It would ask you what you’re looking for and you would simply... tell it.
Instead of browsing a website, you will have a conversation with the Nordstrom bot, mirroring the type of experience you would get when you go into the retail store.
Watch this video from Facebook’s recent F8 conference (where they make their major announcements). At the 7:30 mark, David Marcus, the Vice President of Messaging Products at Facebook, explains what it looks like to buy shoes in a Facebook Messenger bot.
Buying shoes isn’t the only thing chatbots can be used for. Here are a couple of other examples:
See? With bots, the possibilities are endless. You can build anything imaginable, and I encourage you to do just that.
But why make a bot? Sure, it looks cool, it’s using some super advanced technology, but why should someone spend their time and energy on it?
It’s a huge opportunity. HUGE. Scroll down and I’ll explain.
You are probably wondering “Why does anyone care about chatbots? They look like simple text based services... what’s the big deal?”
Great question. I’ll tell you why people care about chatbots.
It’s because for the first time ever people are using messenger apps more than they are using social networks.
Let that sink in for a second.
People are using messenger apps more than they are using social networks.
So, logically, if you want to build a business online, you want to build where the people are. That place is now inside messenger apps.
This is why chatbots are such a big deal. It’s potentially a huge business opportunity for anyone willing to jump headfirst and build something people want.
But, how do these bots work? How do they know how to talk to people and answer questions? Isn’t that artificial intelligence and isn’t that insanely hard to do?
Yes, you are correct, it is artificial intelligence, but it’s something that you can totally do yourself.
Let me explain.
There are two types of chatbots, one functions based on a set of rules, and the other more advanced version uses machine learning.
What does this mean?
Chatbot that functions based on rules:
Chatbot that functions using machine learning:
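As a rough illustration of the first kind, here is a tiny sketch of a rule-based bot: it can only respond to the exact keywords it was programmed with, whereas a machine-learning bot is trained on conversations instead of hand-written rules. The keywords and replies below are invented for illustration.

```python
# A minimal rule-based bot: it only recognizes the patterns it was given.
RULES = {
    "hours": "We're open 9am-5pm, Monday to Friday.",
    "shoes": "Sure! What size are you looking for?",
}

def rule_based_bot(message):
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I didn't understand that."

print(rule_based_bot("Do you sell shoes?"))
```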
Bots are created with a purpose. A store will likely want to create a bot that helps you purchase something, where someone like Comcast might create a bot that can answer customer support questions.
You start to interact with a chatbot by sending it a message. Click here to try sending a message to the CNN chatbot on Facebook.
So, if these bots use artificial intelligence to make them work well... isn’t that really hard to do? Don’t I need to be an expert at artificial intelligence to be able to build something that has artificial intelligence?
Short answer? No, you don’t have to be an expert at artificial intelligence to create an awesome chatbot that has artificial intelligence. Just make sure to not over promise on your application’s abilities. If you can’t make the product good with artificial intelligence right now, it might be best to not put it in yet.
However, over the past decade quite a bit of advancements have been made in the area of artificial intelligence, so much in fact that anyone who knows how to code can incorporate some level of artificial intelligence into their products.
How do you build artificial intelligence into your bot? Don’t worry, I’ve got you covered, I’ll tell you how to do it in the next section of this post.
Building a chatbot can sound daunting, but it’s totally doable. You’ll be creating an artificial intelligence powered chatting machine in no time (or, of course, you can always build a basic chat bot that doesn’t have a fancy AI brain and strictly follows rules).
You will need to figure out what problem you are going to solve with your bot, choose which platform your bot will live on (Facebook, Slack, etc), set up a server to run your bot from, and choose which service you will use to build your bot.
Here are a ton of resources to get you started.
Platform documentation:
Other Resources:
Don’t want to build your own?
Now that you’ve got your chatbot and artificial intelligence resources, maybe it’s time you met other people who are also interested in chatbots.
Chatbots have been around for decades, but because of the recent advancements in artificial intelligence and machine learning, there is a big opportunity for people to create bots that are better, faster, and stronger.
If you’re reading this, you probably fall into one of these categories:
Wouldn’t it be awesome if you had a place to meet, learn, and share information with other people interested in chatbots? Yeah, we thought so too.
That’s why I created a forum called “Chatbot News”, and it has quickly become the largest community related to Chatbots.
The members of the Chatbots group are investors who manage well over $2 billion in capital, employees at Facebook, Instagram, Fitbit, Nike, and Ycombinator companies, and hackers from around the world.
We would love it if you joined. Click here to request an invite to the private chatbots community.
I have also created the Silicon Valley Chatbots Meetup, register here to be notified when we schedule our first event.
Gil Fewster | 3.3K | 5 | https://medium.freecodecamp.org/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805?source=tag_archive---------4---------------- | The mind-blowing AI announcement from Google that you probably missed. | Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. In the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text.
I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own.
In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. Which is a shame, because it may just be the most astonishing article about machine learning that I read last year.
Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System.
This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock.
Luckily, I’m here to bring you up to speed. Here’s the deal.
Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance.
Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.
This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.
All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.
The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.
Google Translate invented its own language to help it translate more effectively.
What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.
Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes)
To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post:
Here you can see an advantage of Google’s new neural machine over the old phrase-based approach. The GNMT is able to learn how to translate between two languages without being explicitly taught. This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated.
And this leads the Google engineers onto that truly astonishing discovery of creation:
So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics.
I just thought maybe you’d want to know.
Ok, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them) please drop me a line and explain it all to me. Slowly.
Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective.
Update 2: Nafrondel’s excellent, detailed reply is also a must read for an expert explanation of how neural networks function.
Adam Geitgey | 10.4K | 15 | https://medium.com/@ageitgey/machine-learning-is-fun-part-2-a26a10b68df3?source=tag_archive---------5---------------- | Machine Learning is Fun! Part 2 – Adam Geitgey – Medium | Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8!
You can also read this article in Italiano, Español, Français, Türkçe, Русский, 한국어 Português, فارسی, Tiếng Việt or 普通话.
In Part 1, we said that Machine Learning is using generic algorithms to tell you something interesting about your data without writing any code specific to the problem you are solving. (If you haven’t already read part 1, read it now!).
This time, we are going to see one of these generic algorithms do something really cool — create video game levels that look like they were made by humans. We’ll build a neural network, feed it existing Super Mario levels and watch new ones pop out!
Just like Part 1, this guide is for anyone who is curious about machine learning but has no idea where to start. The goal is to be accessible to anyone — which means that there are a lot of generalizations and we skip lots of details. But who cares? If this gets anyone more interested in ML, then mission accomplished.
Back in Part 1, we created a simple algorithm that estimated the value of a house based on its attributes. Given data about a house like this:
We ended up with this simple estimation function:
In other words, we estimated the value of the house by multiplying each of its attributes by a weight. Then we just added those numbers up to get the house’s value.
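A minimal sketch of what such a function might look like in Python (the weights and base price here are invented for illustration; they are not the actual numbers from Part 1):

```python
def estimate_house_price(num_bedrooms, sqft, neighborhood_score):
    price = 0.0
    price += num_bedrooms * 8500.0         # weight for bedrooms (made up)
    price += sqft * 95.0                   # weight for square footage (made up)
    price += neighborhood_score * 12000.0  # weight for neighborhood (made up)
    price += 20000.0                       # base price (made up)
    return price

print(estimate_house_price(num_bedrooms=3, sqft=2000, neighborhood_score=7))
```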
Instead of using code, let’s represent that same function as a simple diagram:
However this algorithm only works for simple problems where the result has a linear relationship with the input. What if the truth behind house prices isn’t so simple? For example, maybe the neighborhood matters a lot for big houses and small houses but doesn’t matter at all for medium-sized houses. How could we capture that kind of complicated detail in our model?
To be more clever, we could run this algorithm multiple times with different sets of weights that each capture different edge cases:
Now we have four different price estimates. Let’s combine those four price estimates into one final estimate. We’ll run them through the same algorithm again (but using another set of weights)!
Our new Super Answer combines the estimates from our four different attempts to solve the problem. Because of this, it can model more cases than we could capture in one simple model.
Let’s combine our four attempts to guess into one big diagram:
This is a neural network! Each node knows how to take in a set of inputs, apply weights to them, and calculate an output value. By chaining together lots of these nodes, we can model complex functions.
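Here is a tiny NumPy sketch of that chaining: four weighted-sum nodes feeding one combining node. The input features and weights are made up, and details like activation functions are left out.

```python
import numpy as np

house = np.array([3.0, 2000.0, 7.0])   # bedrooms, sqft, neighborhood score (made up)

hidden_weights = np.random.rand(3, 4)  # one column of weights per "attempt"
output_weights = np.random.rand(4)     # weights for combining the four estimates

four_estimates = house @ hidden_weights           # each node: inputs * weights, summed
final_estimate = four_estimates @ output_weights  # the combined "Super Answer"
print(final_estimate)
```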
There’s a lot that I’m skipping over to keep this brief (including feature scaling and the activation function), but the most important part is that these basic ideas click:
It’s just like LEGO! We can’t model much with one single LEGO block, but we can model anything if we have enough basic LEGO blocks to stick together:
The neural network we’ve seen always returns the same answer when you give it the same inputs. It has no memory. In programming terms, it’s a stateless algorithm.
In many cases (like estimating the price of house), that’s exactly what you want. But the one thing this kind of model can’t do is respond to patterns in data over time.
Imagine I handed you a keyboard and asked you to write a story. But before you start, my job is to guess the very first letter that you will type. What letter should I guess?
I can use my knowledge of English to increase my odds of guessing the right letter. For example, you will probably type a letter that is common at the beginning of words. If I looked at stories you wrote in the past, I could narrow it down further based on the words you usually use at the beginning of your stories. Once I had all that data, I could use it to build a neural network to model how likely it is that you would start with any given letter.
Our model might look like this:
But let’s make the problem harder. Let’s say I need to guess the next letter you are going to type at any point in your story. This is a much more interesting problem.
Let’s use the first few words of Ernest Hemingway’s The Sun Also Rises as an example:
What letter is going to come next?
You probably guessed ’n’ — the word is probably going to be boxing. We know this based on the letters we’ve already seen in the sentence and our knowledge of common words in English. Also, the word ‘middleweight’ gives us an extra clue that we are talking about boxing.
In other words, it’s easy to guess the next letter if we take into account the sequence of letters that came right before it and combine that with our knowledge of the rules of English.
To solve this problem with a neural network, we need to add state to our model. Each time we ask our neural network for an answer, we also save a set of our intermediate calculations and re-use them the next time as part of our input. That way, our model will adjust its predictions based on the input that it has seen recently.
Keeping track of state in our model makes it possible to not just predict the most likely first letter in the story, but to predict the most likely next letter given all previous letters.
This is the basic idea of a Recurrent Neural Network. We are updating the network each time we use it. This allows it to update its predictions based on what it saw most recently. It can even model patterns over time as long as we give it enough of a memory.
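A minimal sketch of that idea: a single recurrent step that mixes the new input with the saved state and hands the updated state back for re-use. The sizes and random weights are placeholders; a real character-level model learns these weights during training.

```python
import numpy as np

vocab_size, hidden_size = 84, 128   # 84 unique characters, as below; hidden size is made up
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input -> hidden weights
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden (the "memory")
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output scores

def step(x_onehot, h_prev):
    # Mix the new input with the saved state from the previous step...
    h = np.tanh(Wxh @ x_onehot + Whh @ h_prev)
    # ...score every possible next character...
    scores = Why @ h
    # ...and hand the updated state back so it can be re-used next time.
    return scores, h

h = np.zeros(hidden_size)
x = np.zeros(vocab_size); x[10] = 1.0   # one-hot encoding of some character
scores, h = step(x, h)
```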
Predicting the next letter in a story might seem pretty useless. What’s the point?
One cool use might be auto-predict for a mobile phone keyboard:
But what if we took this idea to the extreme? What if we asked the model to predict the next most likely character over and over — forever? We’d be asking it to write a complete story for us!
We saw how we could guess the next letter in Hemingway’s sentence. Let’s try generating a whole story in the style of Hemingway.
To do this, we are going to use the Recurrent Neural Network implementation that Andrej Karpathy wrote. Andrej is a Deep-Learning researcher at Stanford and he wrote an excellent introduction to generating text with RNNs. You can view all the code for the model on github.
We’ll create our model from the complete text of The Sun Also Rises — 362,239 characters using 84 unique letters (including punctuation, uppercase/lowercase, etc). This data set is actually really small compared to typical real-world applications. To generate a really good model of Hemingway’s style, it would be much better to have several times as much sample text. But this is good enough to play around with as an example.
When we first start training the RNN, it’s not very good at predicting letters. Here’s what it generates after 100 loops of training:
You can see that it has figured out that sometimes words have spaces between them, but that’s about it.
After about 1000 iterations, things are looking more promising:
The model has started to identify the patterns in basic sentence structure. It’s adding periods at the ends of sentences and even quoting dialog. A few words are recognizable, but there’s also still a lot of nonsense.
But after several thousand more training iterations, it looks pretty good:
At this point, the algorithm has captured the basic pattern of Hemingway’s short, direct dialog. A few sentences even sort of make sense.
Compare that with some real text from the book:
Even by only looking for patterns one character at a time, our algorithm has reproduced plausible-looking prose with proper formatting. That is kind of amazing!
We don’t have to generate text completely from scratch, either. We can seed the algorithm by supplying the first few letters and just let it find the next few letters.
For fun, let’s make a fake book cover for our imaginary book by generating a new author name and a new title using the seed text of “Er”, “He”, and “The S”:
Not bad!
But the really mind-blowing part is that this algorithm can figure out patterns in any sequence of data. It can easily generate real-looking recipes or fake Obama speeches. But why limit ourselves to human language? We can apply this same idea to any kind of sequential data that has a pattern.
In 2015, Nintendo released Super Mario Maker™ for the Wii U gaming system.
This game lets you draw out your own Super Mario Brothers levels on the gamepad and then upload them to the internet so your friends can play through them. You can include all the classic power-ups and enemies from the original Mario games in your levels. It’s like a virtual LEGO set for people who grew up playing Super Mario Brothers.
Can we use the same model that generated fake Hemingway text to generate fake Super Mario Brothers levels?
First, we need a data set for training our model. Let’s take all the outdoor levels from the original Super Mario Brothers game released in 1985:
This game has 32 levels and about 70% of them have the same outdoor style. So we’ll stick to those.
To get the designs for each level, I took an original copy of the game and wrote a program to pull the level designs out of the game’s memory. Super Mario Bros. is a 30-year-old game and there are lots of resources online that help you figure out how the levels were stored in the game’s memory. Extracting level data from an old video game is a fun programming exercise that you should try sometime.
Here’s the first level from the game (which you probably remember if you ever played it):
If we look closely, we can see the level is made of a simple grid of objects:
We could just as easily represent this grid as a sequence of characters with one character representing each object:
We’ve replaced each object in the level with a letter:
...and so on, using a different letter for each different kind of object in the level.
I ended up with text files that looked like this:
Looking at the text file, you can see that Mario levels don’t really have much of a pattern if you read them line-by-line:
The patterns in a level really emerge when you think of the level as a series of columns:
So in order for the algorithm to find the patterns in our data, we need to feed the data in column-by-column. Figuring out the most effective representation of your input data (called feature selection) is one of the keys to using machine learning algorithms well.
To train the model, I needed to rotate my text files by 90 degrees. This made sure the characters were fed into the model in an order where a pattern would more easily show up:
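Here is a minimal sketch of that rotation step (not the actual script used for the article; the filenames are hypothetical, and the exact reading direction is an assumption).

```python
def rotate_level(rows):
    """Turn rows into columns so the model reads the level one vertical slice at a time."""
    width = max(len(row) for row in rows)
    padded = [row.ljust(width) for row in rows]         # make every row the same length
    return ["".join(row[i] for row in padded) for i in range(width)]

with open("level_1_1.txt") as f:                        # hypothetical input file
    level_rows = f.read().splitlines()

with open("level_1_1_rotated.txt", "w") as f:           # hypothetical output file
    f.write("\n".join(rotate_level(level_rows)))
```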
Just like we saw when creating the model of Hemingway’s prose, a model improves as we train it.
After a little training, our model is generating junk:
It sort of has an idea that ‘-’s and ‘=’s should show up a lot, but that’s about it. It hasn’t figured out the pattern yet.
After several thousand iterations, it’s starting to look like something:
The model has almost figured out that each line should be the same length. It has even started to figure out some of the logic of Mario: the pipes in Mario are always two blocks wide and at least two blocks high, so the “P”s in the data should appear in 2x2 clusters. That’s pretty cool!
With a lot more training, the model gets to the point where it generates perfectly valid data:
Let’s sample an entire level’s worth of data from our model and rotate it back horizontal:
This data looks great! There are several awesome things to notice:
Finally, let’s take this level and recreate it in Super Mario Maker:
Play it yourself!
If you have Super Mario Maker, you can play this level by bookmarking it online or by looking it up using level code 4AC9-0000-0157-F3C3.
The recurrent neural network algorithm we used to train our model is the same kind of algorithm used by real-world companies to solve hard problems like speech recognition and language translation. What makes our model a ‘toy’ instead of cutting-edge is that it is trained on very little data. There just aren’t enough levels in the original Super Mario Brothers game to provide enough data for a really good model.
If we could get access to the hundreds of thousands of user-created Super Mario Maker levels that Nintendo has, we could make an amazing model. But we can’t — because Nintendo won’t let us have them. Big companies don’t give away their data for free.
As machine learning becomes more important in more industries, the difference between a good program and a bad program will be how much data you have to train your models. That’s why companies like Google and Facebook need your data so badly!
For example, Google recently open sourced TensorFlow, its software toolkit for building large-scale machine learning applications. It was a pretty big deal that Google gave away such important, capable technology for free. This is the same stuff that powers Google Translate.
But without Google’s massive trove of data in every language, you can’t create a competitor to Google Translate. Data is what gives Google its edge. Think about that the next time you open up your Google Maps Location History or Facebook Location History and notice that it stores every place you’ve ever been.
In machine learning, there’s never a single way to solve a problem. You have limitless options when deciding how to pre-process your data and which algorithms to use. Often combining multiple approaches will give you better results than any single approach.
Readers have sent me links to other interesting approaches to generating Super Mario levels:
If you liked this article, please consider signing up for my Machine Learning is Fun! email list. I’ll only email you when I have something new and awesome to share. It’s the best way to find out when I write more articles like this.
You can also follow me on Twitter at @ageitgey, email me directly or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.
Now continue on to Machine Learning is Fun Part 3!
|
David Venturi | 10.6K | 20 | https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0?source=tag_archive---------6---------------- | Every single Machine Learning course on the internet, ranked by your reviews | A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost.
I’m almost finished now. I’ve taken many data science-related courses and audited portions of many more. I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. So I started creating a review-driven guide that recommends the best courses for each subject within data science.
For the first guide in the series, I recommended a few coding classes for the beginner data scientist. Then it was statistics and probability classes. Then introductions to data science. Also, data visualization.
For this guide, I spent a dozen hours trying to identify every online machine learning course offered as of May 2017, extracting key bits of information from their syllabi and reviews, and compiling their ratings. My end goal was to identify the three best courses available and present them to you, below.
For this task, I turned to none other than the open source Class Central community, and its database of thousands of course ratings and reviews.
Since 2011, Class Central founder Dhawal Shah has kept a closer eye on online courses than arguably anyone else in the world. Dhawal personally helped me assemble this list of resources.
Each course must fit three criteria:
We believe we covered every notable course that fits the above criteria. Since there are seemingly hundreds of courses on Udemy, we chose to consider the most-reviewed and highest-rated ones only.
There’s always a chance that we missed something, though. So please let us know in the comments section if we left a good course out.
We compiled average ratings and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings.
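As a concrete sketch of that calculation (the data layout and numbers here are made up, not the guide’s actual figures), each site’s average rating is weighted by how many reviews it is based on:

```python
def weighted_average(site_ratings):
    """site_ratings: (average_rating, number_of_reviews) pairs, one per review site."""
    total_reviews = sum(n for _, n in site_ratings)
    return sum(rating * n for rating, n in site_ratings) / total_reviews

# e.g. 4.7 stars over 300 reviews on one site, 4.5 stars over 120 reviews on another
print(round(weighted_average([(4.7, 300), (4.5, 120)]), 2))
```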
We made subjective syllabus judgment calls based on three factors:
A popular definition originates from Arthur Samuel in 1959: machine learning is a subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” In practice, this means developing computer programs that can make predictions based on data. Just as humans can learn from experience, so can computers, where data = experience.
A machine learning workflow is the process required for carrying out a machine learning project. Though individual projects can differ, most workflows share several common tasks: problem evaluation, data exploration, data preprocessing, model training/testing/deployment, etc. Below you’ll find a helpful visualization of these core steps:
The ideal course introduces the entire process and provides interactive examples, assignments, and/or quizzes where students can perform each task themselves.
First off, let’s define deep learning. Here is a succinct description:
As would be expected, portions of some of the machine learning courses contain deep learning content. I chose not to include deep learning-only courses, however. If you are interested in deep learning specifically, we’ve got you covered with the following article:
My top three recommendations from that list would be:
Several courses listed below ask students to have prior programming, calculus, linear algebra, and statistics experience. These prerequisites are understandable given that machine learning is an advanced discipline.
Missing a few subjects? Good news! Some of this experience can be acquired through our recommendations in the first two articles (programming, statistics) of this Data Science Career Guide. Several top-ranked courses below also provide gentle calculus and linear algebra refreshers and highlight the aspects most relevant to machine learning for those less familiar.
Stanford University’s Machine Learning on Coursera is the clear current winner in terms of ratings, reviews, and syllabus fit. Taught by the famous Andrew Ng, Google Brain founder and former chief scientist at Baidu, this was the class that sparked the founding of Coursera. It has a 4.7-star weighted average rating over 422 reviews.
Released in 2011, it covers all aspects of the machine learning workflow. Though it has a smaller scope than the original Stanford class upon which it is based, it still manages to cover a large number of techniques and algorithms. The estimated timeline is eleven weeks, with two weeks dedicated to neural networks and deep learning. Free and paid options are available.
Ng is a dynamic yet gentle instructor with palpable experience. He inspires confidence, especially when sharing practical implementation tips and warnings about common pitfalls. A linear algebra refresher is provided, and Ng highlights the aspects of calculus most relevant to machine learning.
Evaluation is automatic and is done via multiple choice quizzes that follow each lesson and programming assignments. The assignments (there are eight of them) can be completed in MATLAB or Octave, which is an open-source version of MATLAB. Ng explains his language choice:
Though Python and R are likely more compelling choices in 2017 with the increased popularity of those languages, reviewers note that that shouldn’t stop you from taking the course.
A few prominent reviewers noted the following:
Columbia University’s Machine Learning is a relatively new offering that is part of their Artificial Intelligence MicroMasters on edX. Though it is newer and doesn’t have a large number of reviews, the ones that it does have are exceptionally strong. Professor John Paisley is noted as brilliant, clear, and clever. It has a 4.8-star weighted average rating over 10 reviews.
The course also covers all aspects of the machine learning workflow and more algorithms than the above Stanford offering. Columbia’s is a more advanced introduction, with reviewers noting that students should be comfortable with the recommended prerequisites (calculus, linear algebra, statistics, probability, and coding).
Quizzes (11), programming assignments (4), and a final exam are the modes of evaluation. Students can use either Python, Octave, or MATLAB to complete the assignments. The course’s total estimated timeline is eight to ten hours per week over twelve weeks. It is free with a verified certificate available for purchase.
Below are a few of the aforementioned sparkling reviews:
Machine Learning A-Z™ on Udemy is an impressively detailed offering that provides instruction in both Python and R, which is rare and can’t be said for any of the other top courses. It has a 4.5-star weighted average rating over 8,119 reviews, which makes it the most reviewed course of the ones considered.
It covers the entire machine learning workflow and an almost ridiculous (in a good way) number of algorithms through 40.5 hours of on-demand video. The course takes a more applied approach and is lighter math-wise than the above two courses. Each section starts with an “intuition” video from co-instructor Kirill Eremenko that summarizes the underlying theory of the concept being taught. Hadelin de Ponteves then walks through implementation with separate videos for both Python and R.
As a “bonus,” the course includes Python and R code templates for students to download and use on their own projects. There are quizzes and homework challenges, though these aren’t the strong points of the course.
Eremenko and the SuperDataScience team are revered for their ability to “make the complex simple.” Also, the prerequisites listed are “just some high school mathematics,” so this course might be a better option for those daunted by the Stanford and Columbia offerings.
A few prominent reviewers noted the following:
Our #1 pick had a weighted average rating of 4.7 out of 5 stars over 422 reviews. Let’s look at the other alternatives, sorted by descending rating. A reminder that deep learning-only courses are not included in this guide — you can find those here.
The Analytics Edge (Massachusetts Institute of Technology/edX): More focused on analytics in general, though it does cover several machine learning topics. Uses R. Strong narrative that leverages familiar real-world examples. Challenging. Ten to fifteen hours per week over twelve weeks. Free with a verified certificate available for purchase. It has a 4.9-star weighted average rating over 214 reviews.
Python for Data Science and Machine Learning Bootcamp (Jose Portilla/Udemy): Has large chunks of machine learning content, but covers the whole data science process. More of a very detailed intro to Python. Amazing course, though not ideal for the scope of this guide. 21.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 3316 reviews.
Data Science and Machine Learning Bootcamp with R (Jose Portilla/Udemy): The comments for Portilla’s above course apply here as well, except for R. 17.5 hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.6-star weighted average rating over 1317 reviews.
Machine Learning Series (Lazy Programmer Inc./Udemy): Taught by a data scientist/big data engineer/full stack software engineer with an impressive resume, Lazy Programmer currently has a series of 16 machine learning-focused courses on Udemy. In total, the courses have 5000+ ratings and almost all of them have 4.6 stars. A useful course ordering is provided in each individual course’s description. Uses Python. Cost varies depending on Udemy discounts, which are frequent.
Machine Learning (Georgia Tech/Udacity): A compilation of what was three separate courses: Supervised, Unsupervised and Reinforcement Learning. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Bite-sized videos, as is Udacity’s style. Friendly professors. Estimated timeline of four months. Free. It has a 4.56-star weighted average rating over 9 reviews.
Implementing Predictive Analytics with Spark in Azure HDInsight (Microsoft/edX): Introduces the core concepts of machine learning and a variety of algorithms. Leverages several big data-friendly tools, including Apache Spark, Scala, and Hadoop. Uses both Python and R. Four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.5-star weighted average rating over 6 reviews.
Data Science and Machine Learning with Python — Hands On! (Frank Kane/Udemy): Uses Python. Kane has nine years of experience at Amazon and IMDb. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 4139 reviews.
Scala and Spark for Big Data and Machine Learning (Jose Portilla/Udemy): “Big data” focus, specifically on implementation in Scala and Spark. Ten hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.5-star weighted average rating over 607 reviews.
Machine Learning Engineer Nanodegree (Udacity): Udacity’s flagship Machine Learning program, which features a best-in-class project review system and career support. The program is a compilation of several individual Udacity courses, which are free. Co-created by Kaggle. Estimated timeline of six months. Currently costs $199 USD per month with a 50% tuition refund available for those who graduate within 12 months. It has a 4.5-star weighted average rating over 2 reviews.
Learning From Data (Introductory Machine Learning) (California Institute of Technology/edX): Enrollment is currently closed on edX, but is also available via CalTech’s independent platform (see below). It has a 4.49-star weighted average rating over 42 reviews.
Learning From Data (Introductory Machine Learning) (Yaser Abu-Mostafa/California Institute of Technology): “A real Caltech course, not a watered-down version.” Reviews note it is excellent for understanding machine learning theory. The professor, Yaser Abu-Mostafa, is popular among students and also wrote the textbook upon which this course is based. Videos are taped lectures (with lecture slides picture-in-picture) uploaded to YouTube. Homework assignments are .pdf files. The course experience for online students isn’t as polished as the top three recommendations. It has a 4.43-star weighted average rating over 7 reviews.
Mining Massive Datasets (Stanford University): Machine learning with a focus on “big data.” Introduces modern distributed file systems and MapReduce. Ten hours per week over seven weeks. Free. It has a 4.4-star weighted average rating over 30 reviews.
AWS Machine Learning: A Complete Guide With Python (Chandra Lingam/Udemy): A unique focus on cloud-based machine learning and specifically Amazon Web Services. Uses Python. Nine hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 62 reviews.
Introduction to Machine Learning & Face Detection in Python (Holczer Balazs/Udemy): Uses Python. Eight hours of on-demand video. Cost varies depending on Udemy discounts, which are frequent. It has a 4.4-star weighted average rating over 162 reviews.
StatLearning: Statistical Learning (Stanford University): Based on the excellent textbook, “An Introduction to Statistical Learning, with Applications in R” and taught by the professors who wrote it. Reviewers note that the MOOC isn’t as good as the book, citing “thin” exercises and mediocre videos. Five hours per week over nine weeks. Free. It has a 4.35-star weighted average rating over 84 reviews.
Machine Learning Specialization (University of Washington/Coursera): Great courses, but the last two classes (including the capstone project) were canceled. Reviewers note that this series is more digestible (read: easier for those without strong technical backgrounds) than other top machine learning courses (e.g. Stanford’s or Caltech’s). Be aware that the series is incomplete, with recommender systems, deep learning, and a summary missing. Free and paid options available. It has a 4.31-star weighted average rating over 80 reviews.
From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase (Loony Corn/Udemy): “A down-to-earth, shy but confident take on machine learning techniques.” Taught by four-person team with decades of industry experience together. Uses Python. Cost varies depending on Udemy discounts, which are frequent. It has a 4.2-star weighted average rating over 494 reviews.
Principles of Machine Learning (Microsoft/edX): Uses R, Python, and Microsoft Azure Machine Learning. Part of the Microsoft Professional Program Certificate in Data Science. Three to four hours per week over six weeks. Free with a verified certificate available for purchase. It has a 4.09-star weighted average rating over 11 reviews.
Big Data: Statistical Inference and Machine Learning (Queensland University of Technology/FutureLearn): A nice, brief exploratory machine learning course with a focus on big data. Covers a few tools like R, H2O Flow, and WEKA. Only three weeks in duration at a recommended two hours per week, but one reviewer noted that six hours per week would be more appropriate. Free and paid options available. It has a 4-star weighted average rating over 4 reviews.
Genomic Data Science and Clustering (Bioinformatics V) (University of California, San Diego/Coursera): For those interested in the intersection of computer science and biology and how it represents an important frontier in modern science. Focuses on clustering and dimensionality reduction. Part of UCSD’s Bioinformatics Specialization. Free and paid options available. It has a 4-star weighted average rating over 3 reviews.
Intro to Machine Learning (Udacity): Prioritizes topic breadth and practical tools (in Python) over depth and theory. The instructors, Sebastian Thrun and Katie Malone, make this class so fun. Consists of bite-sized videos and quizzes followed by a mini-project for each lesson. Currently part of Udacity’s Data Analyst Nanodegree. Estimated timeline of ten weeks. Free. It has a 3.95-star weighted average rating over 19 reviews.
Machine Learning for Data Analysis (Wesleyan University/Coursera): A brief intro to machine learning and a few select algorithms. Covers decision trees, random forests, lasso regression, and k-means clustering. Part of Wesleyan’s Data Analysis and Interpretation Specialization. Estimated timeline of four weeks. Free and paid options available. It has a 3.6-star weighted average rating over 5 reviews.
Programming with Python for Data Science (Microsoft/edX): Produced by Microsoft in partnership with Coding Dojo. Uses Python. Eight hours per week over six weeks. Free and paid options available. It has a 3.46-star weighted average rating over 37 reviews.
Machine Learning for Trading (Georgia Tech/Udacity): Focuses on applying probabilistic machine learning approaches to trading decisions. Uses Python. Part of Udacity’s Machine Learning Engineer Nanodegree and Georgia Tech’s Online Master’s Degree (OMS). Estimated timeline of four months. Free. It has a 3.29-star weighted average rating over 14 reviews.
Practical Machine Learning (Johns Hopkins University/Coursera): A brief, practical introduction to a number of machine learning algorithms. Several one/two-star reviews expressing a variety of concerns. Part of JHU’s Data Science Specialization. Four to nine hours per week over four weeks. Free and paid options available. It has a 3.11-star weighted average rating over 37 reviews.
Machine Learning for Data Science and Analytics (Columbia University/edX): Introduces a wide range of machine learning topics. Some passionate negative reviews with concerns including content choices, a lack of programming assignments, and uninspiring presentation. Seven to ten hours per week over five weeks. Free with a verified certificate available for purchase. It has a 2.74-star weighted average rating over 36 reviews.
Recommender Systems Specialization (University of Minnesota/Coursera): Strong focus on one specific type of machine learning — recommender systems. A four-course specialization plus a capstone project, which is a case study. Taught using LensKit (an open-source toolkit for recommender systems). Free and paid options available. It has a 2-star weighted average rating over 2 reviews.
Machine Learning With Big Data (University of California, San Diego/Coursera): Terrible reviews that highlight poor instruction and evaluation. Some noted it took them mere hours to complete the whole course. Part of UCSD’s Big Data Specialization. Free and paid options available. It has a 1.86-star weighted average rating over 14 reviews.
Practical Predictive Analytics: Models and Methods (University of Washington/Coursera): A brief intro to core machine learning concepts. One reviewer noted that there was a lack of quizzes and that the assignments were not challenging. Part of UW’s Data Science at Scale Specialization. Six to eight hours per week over four weeks. Free and paid options available. It has a 1.75-star weighted average rating over 4 reviews.
The following courses had one or no reviews as of May 2017.
Machine Learning for Musicians and Artists (Goldsmiths, University of London/Kadenze): Unique. Students learn algorithms, software tools, and machine learning best practices to make sense of human gesture, musical audio, and other real-time data. Seven sessions in length. Audit (free) and premium ($10 USD per month) options available. It has one 5-star review.
Applied Machine Learning in Python (University of Michigan/Coursera): Taught using Python and the scikit-learn toolkit. Part of the Applied Data Science with Python Specialization. Scheduled to start May 29th. Free and paid options available.
Applied Machine Learning (Microsoft/edX): Taught using various tools, including Python, R, and Microsoft Azure Machine Learning (note: Microsoft produces the course). Includes hands-on labs to reinforce the lecture content. Three to four hours per week over six weeks. Free with a verified certificate available for purchase.
Machine Learning with Python (Big Data University): Taught using Python. Targeted towards beginners. Estimated completion time of four hours. Big Data University is affiliated with IBM. Free.
Machine Learning with Apache SystemML (Big Data University): Taught using Apache SystemML, which is a declarative style language designed for large-scale machine learning. Estimated completion time of eight hours. Big Data University is affiliated with IBM. Free.
Machine Learning for Data Science (University of California, San Diego/edX): Doesn’t launch until January 2018. Programming examples and assignments are in Python, using Jupyter notebooks. Eight hours per week over ten weeks. Free with a verified certificate available for purchase.
Introduction to Analytics Modeling (Georgia Tech/edX): The course advertises R as its primary programming tool. Five to ten hours per week over ten weeks. Free with a verified certificate available for purchase.
Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology/FutureLearn): Brief overview of a few algorithms. Uses Hewlett Packard Enterprise’s Vertica Analytics platform as an applied tool. Start date to be announced. Two hours per week over four weeks. Free with a Certificate of Achievement available for purchase.
Introducción al Machine Learning (Universitas Telefónica/Miríada X): Taught in Spanish. An introduction to machine learning that covers supervised and unsupervised learning. A total of twenty estimated hours over four weeks.
Machine Learning Path Step (Dataquest): Taught in Python using Dataquest’s interactive in-browser platform. Multiple guided projects and a “plus” project where you build your own machine learning system using your own data. Subscription required.
The following six courses are offered by DataCamp. DataCamp’s hybrid teaching style leverages video and text-based instruction with lots of examples through an in-browser code editor. A subscription is required for full access to each course.
Introduction to Machine Learning (DataCamp): Covers classification, regression, and clustering algorithms. Uses R. Fifteen videos and 81 exercises with an estimated timeline of six hours.
Supervised Learning with scikit-learn (DataCamp): Uses Python and scikit-learn. Covers classification and regression algorithms. Seventeen videos and 54 exercises with an estimated timeline of four hours.
Unsupervised Learning in R (DataCamp): Provides a basic introduction to clustering and dimensionality reduction in R. Sixteen videos and 49 exercises with an estimated timeline of four hours.
Machine Learning Toolbox (DataCamp): Teaches the “big ideas” in machine learning. Uses R. 24 videos and 88 exercises with an estimated timeline of four hours.
Machine Learning with the Experts: School Budgets (DataCamp): A case study from a machine learning competition on DrivenData. Involves building a model to automatically classify items in a school’s budget. DataCamp’s “Supervised Learning with scikit-learn” is a prerequisite. Fifteen videos and 51 exercises with an estimated timeline of four hours.
Unsupervised Learning in Python (DataCamp): Covers a variety of unsupervised learning algorithms using Python, scikit-learn, and scipy. The course ends with students building a recommender system to recommend popular musical artists. Thirteen videos and 52 exercises with an estimated timeline of four hours.
Machine Learning (Tom Mitchell/Carnegie Mellon University): Carnegie Mellon’s graduate introductory machine learning course. A prerequisite to their second graduate level course, “Statistical Machine Learning.” Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. A 2011 version of the course also exists. CMU is one of the best graduate schools for studying machine learning and has a whole department dedicated to ML. Free.
Statistical Machine Learning (Larry Wasserman/Carnegie Mellon University): Likely the most advanced course in this guide. A follow-up to Carnegie Mellon’s Machine Learning course. Taped university lectures with practice problems, homework assignments, and a midterm (all with solutions) posted online. Free.
Undergraduate Machine Learning (Nando de Freitas/University of British Columbia): An undergraduate machine learning course. Lectures are filmed and put on YouTube with the slides posted on the course website. The course assignments are posted as well (no solutions, though). de Freitas is now a full-time professor at the University of Oxford and receives praise for his teaching abilities in various forums. Graduate version available (see below).
Machine Learning (Nando de Freitas/University of British Columbia): A graduate machine learning course. The comments in de Freitas’ undergraduate course (above) apply here as well.
This is the fifth of a six-piece series that covers the best online courses for launching yourself into the data science field. We covered programming in the first article, statistics and probability in the second article, intros to data science in the third article, and data visualization in the fourth.
The final piece will be a summary of those articles, plus the best online courses for other key topics such as data wrangling, databases, and even software engineering.
If you’re looking for a complete list of Data Science online courses, you can find them on Class Central’s Data Science and Big Data subject page.
If you enjoyed reading this, check out some of Class Central’s other pieces:
If you have suggestions for courses I missed, let me know in the responses!
If you found this helpful, click the 💚 so more people will see it here on Medium.
This is a condensed version of my original article published on Class Central, where I’ve included detailed course syllabi.
|