Demystifying DeepMind's Deep Reinforcement Learning

By Not_GOD (Jianshu author)
Original link: http://www.jianshu.com/p/d347bb2ca53c
All rights reserved by the author. Please contact the author for permission before reproducing, and credit the "Jianshu author".

Two years ago, a small company in London called DeepMind uploaded their pioneering paper “Playing Atari with Deep Reinforcement Learning” to Arxiv. In this paper they demonstrated how a computer learned to play Atari 2600 video games by observing just the screen pixels and receiving a reward when the game score increased. The result was remarkable, because the games and the goals in every game were very different and designed to be challenging for humans. The same model architecture, without any change, was used to learn seven different games, and in three of them the algorithm performed even better than a human!

It has been hailed since then as the first step towards general artificial intelligence – an AI that can survive in a variety of environments, instead of being confined to strict realms such as playing chess. No wonder DeepMind was immediately bought by Google and has been on the forefront of deep learning research ever since. In February 2015 their paper “Human-level control through deep reinforcement learning” was featured on the cover of Nature, one of the most prestigious journals in science. In this paper they applied the same model to 49 different games and achieved superhuman performance in half of them.

Still, while deep models for supervised and unsupervised learning have seen widespread adoption in the community, deep reinforcement learning has remained a bit of a mystery. In this blog post I will be trying to demystify this technique and understand the rationale behind it. The intended audience is someone who already has background in machine learning and possibly in neural networks, but hasn’t had time to delve into reinforcement learning yet.

Let's look at what deep reinforcement learning really is by working through the following questions:

  1. What are the main challenges in reinforcement learning? Here we will cover the credit assignment problem and the exploration-exploitation dilemma.
  2. How can we formalize reinforcement learning in mathematical terms? We will define the Markov Decision Process and use it to reason about reinforcement learning.
  3. How do we form long-term strategies? We define the discounted future reward, which lays the foundation for the algorithms in the sections below.
  4. How can we estimate or approximate the future reward? A simple table-based Q-learning algorithm is defined and analyzed.
  5. What if the state space is too big? Here the Q-table is replaced with a (deep) neural network.
  6. What is needed to make the model actually work? The experience replay technique is used to stabilize the learning of the neural network.
  7. Is that enough? Finally, we look at some simple solutions to the exploration-exploitation problem.

Reinforcement Learning

Consider the game Breakout. In this game you control a paddle at the bottom of the screen and have to bounce the ball back to clear all the bricks in the upper half of the screen. Each time you hit a brick, it disappears and your score increases – you get a reward.

Figure 1: Atari Breakout game. Image credit: DeepMind

Suppose you want to teach a neural network to play this game. Input to your network would be screen images, and output would be three actions: left, right or fire (to launch the ball). It would make sense to treat it as a classification problem – for each game screen you have to decide whether you should move left, right or press fire. Sounds straightforward? Sure, but then you need training examples, and lots of them. Of course you could go and record game sessions using expert players, but that’s not really how we learn. We don’t need somebody to tell us a million times which move to choose at each screen. We just need occasional feedback that we did the right thing and can then figure out everything else ourselves.
This is the task reinforcement learning tries to solve. Reinforcement learning lies somewhere in between supervised and unsupervised learning. Whereas in supervised learning one has a target label for each training example and in unsupervised learning one has no labels at all, in reinforcement learning one has sparse and time-delayed labels – the rewards. Based on those rewards, the agent has to learn to behave in the environment.

While the idea is quite intuitive, in practice there are numerous challenges. For example, when you hit a brick and score a reward in Breakout, it often has nothing to do with the actions (paddle movements) you performed just before getting the reward. All the hard work was already done when you positioned the paddle correctly and bounced the ball back. This is called the credit assignment problem – which of the preceding actions were responsible for getting the reward, and to what extent?

Once you have figured out a strategy to collect a certain number of rewards, should you stick with it or experiment with something that could result in even bigger rewards? In the Breakout game above, a simple strategy is to move to the left edge and wait there. When launched, the ball tends to fly left more often than right, and you will easily score about 10 points before you die. But will you be satisfied with that? This is the exploration-exploitation dilemma – should you exploit the known working strategy or explore other, possibly better strategies?

Reinforcement learning is an important model of how humans (and animals in general) learn. Praise from our parents, grades in school, salary at work – these are all examples of rewards. The credit assignment problem and the exploration-exploitation dilemma come up every day both in business and in relationships. That is why it is important to study reinforcement learning, and games form a wonderful sandbox for trying out new approaches.

Markov Decision Process

The question now is, how do we formalize a reinforcement learning problem so that we can reason about it? The most common method is to represent it as a Markov decision process.

Suppose you are an agent, situated in an environment (e.g. the Breakout game). The environment is in a certain state (e.g. the location of the paddle, the location and direction of the ball, the existence of every brick, and so on). The agent can perform certain actions in the environment (e.g. move the paddle to the left or to the right). These actions sometimes result in a reward (e.g. an increase in score). Actions transform the environment and lead it to a new state, where the agent can perform another action, and so on. The rules for how you choose those actions are called the policy. The environment in general is stochastic, which means the next state may be somewhat random (e.g. when you lose a ball and a new one is launched, its direction is chosen at random).

Figure 2: Left: the reinforcement learning problem. Right: Markov decision process.

The set of states and actions, together with the rules for transitioning from one state to another, make up a Markov decision process. One episode of this process (e.g. one game) forms a finite sequence of states, actions and rewards:

s_0, a_0, r_1, s_1, a_1, r_2, s_2, …, s_{n-1}, a_{n-1}, r_n, s_n

Here s_i is the state, a_i is the action and r_{i+1} is the reward after performing the action. The episode ends with the terminal state s_n (e.g. the "game over" screen). A MDP relies on the Markov assumption – the probability of the next state s_{i+1} depends only on the current state s_i and action a_i, not on preceding states or actions.

Discounted Future Reward

To perform well in the long-term, we need to take into account not only the immediate rewards, but also the future rewards we are going to get. How should we go about that?

Given one run of the Markov decision process, we can easily calculate the total reward for one episode:

R = r_1 + r_2 + r_3 + … + r_n

Given that, the total future reward from time point t onward can be expressed as:

R_t = r_t + r_{t+1} + r_{t+2} + … + r_n

But because our environment is stochastic, we can never be sure that we will get the same rewards the next time we perform the same actions. The further into the future we go, the more the outcomes may diverge. For that reason it is common to use the discounted future reward instead:

R_t = r_t + γ r_{t+1} + γ^2 r_{t+2} + … + γ^{n-t} r_n

Here γ is the discount factor between 0 and 1 – the further into the future the reward is, the less we take it into consideration. It is easy to see that the discounted future reward at time step t can be expressed in terms of the same quantity at time step t+1:

R_t = r_t + γ (r_{t+1} + γ (r_{t+2} + …)) = r_t + γ R_{t+1}

If we set the discount factor γ=0, then our strategy will be short-sighted and rely only on the immediate rewards. If we want to balance between immediate and future rewards, we should set the discount factor to something like γ=0.9. If our environment is deterministic and the same actions always result in the same rewards, then we can set the discount factor γ=1.
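
As a quick illustration, here are a few lines of Python that compute the discounted return for every time step of an episode using the recursion above (the reward list is made up):

def discounted_returns(rewards, gamma=0.9):
    # R_t = r_t + gamma * R_{t+1}, computed backwards from the end of the episode.
    returns = [0.0] * len(rewards)
    future = 0.0
    for t in reversed(range(len(rewards))):
        future = rewards[t] + gamma * future
        returns[t] = future
    return returns

print(discounted_returns([0, 0, 1, 0, 5]))  # [4.0905, 4.545, 5.05, 4.5, 5.0]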

A good strategy for an agent would be to always choose an action that maximizes the (discounted) future reward.

Q-learning

In Q-learning we define a function Q(s, a) representing the maximum discounted future reward when we perform action a in state s, and continue optimally from that point on:

Q(s_t, a_t) = max_π R_{t+1}

The way to think about Q(s, a) is that it is "the best possible score at the end of the game after performing action a in state s". It is called the Q-function, because it represents the "quality" of a certain action in a given state.

This may sound like quite a puzzling definition. How can we estimate the score at the end of the game, if we know just the current state and action, and not the actions and rewards coming after that? We really can’t. But as a theoretical construct we can assume the existence of such a function. Just close your eyes and repeat to yourself five times: “Q(s, a) exists, Q(s, a) exists, …”. Feel it?

If you’re still not convinced, then consider what the implications of having such a function would be. Suppose you are in state s and pondering whether you should take action a or b. You want to select the action that results in the highest score at the end of the game. Once you have the magical Q-function, the answer becomes really simple – pick the action with the highest Q-value!

π(s) = argmax_a Q(s, a)

Here π represents the policy, the rule by which we choose an action in each state.

OK, how do we get that Q-function then? Let’s focus on just one transition <s, a, r, s’>. Just like with discounted future rewards in the previous section, we can express the Q-value of state s and action a in terms of the Q-value of the next state s’:

Q(s, a) = r + γ max_{a’} Q(s’, a’)

This is called the Bellman equation. If you think about it, it is quite logical – maximum future reward for this state and action is the immediate reward plus maximum future reward for the next state.

The main idea in Q-learning is that we can iteratively approximate the Q-function using the Bellman equation. In the simplest case the Q-function is implemented as a table, with states as rows and actions as columns. The gist of the Q-learning algorithm is as simple as the following[1]:
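
initialize Q[num_states, num_actions] arbitrarily
observe initial state s
repeat
    select and carry out an action a
    observe reward r and new state s'
    Q[s, a] = Q[s, a] + α (r + γ max_{a'} Q[s', a'] - Q[s, a])
    s = s'
until terminated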

α in the algorithm is a learning rate that controls how much of the difference between the previous Q-value and the newly proposed Q-value is taken into account. In particular, when α=1, the two Q[s,a] terms cancel and the update is exactly the Bellman equation.

The max_{a’} Q[s’,a’] that we use to update Q[s,a] is only an approximation, and in the early stages of learning it may be completely wrong. However, the approximation gets more and more accurate with every iteration, and it has been shown that if we perform this update enough times, the Q-function converges and represents the true Q-value.

Deep Q Network

The state of the environment can be defined by the location of the paddle, the location and direction of the ball and the presence or absence of each individual brick. This intuitive representation, however, is game specific. Could we come up with something more universal, suitable for all games? The obvious choice is screen pixels – they implicitly contain all the relevant information about the game situation, except for the speed and direction of the ball. Two consecutive screens, however, would cover these as well.

If we process the game screens as in the DeepMind paper – take the four last screen images, resize them to 84×84 and convert them to grayscale with 256 gray levels – we end up with 256^{84×84×4} possible game states. That means 10^{67970} rows in our imaginary Q-table – many more than the number of atoms in the known universe! One could argue that many pixel combinations (and therefore states) never occur, so we could use a sparse table containing only the visited states. Even so, most states would still be visited very rarely and it would take the lifetime of the universe for the Q-table to converge. Ideally, we would like to have a good guess for the Q-values of states we have never seen before.

This is where deep learning steps in. Neural networks are exceptionally good at extracting features from highly structured data. We could represent our Q-function with a neural network that takes the state (four screen images) and action as input and outputs the corresponding Q-value. Alternatively, we could take only the game screens as input and output a Q-value for each possible action. The latter approach is more convenient when we want to perform a Q-value update or pick the action with the highest Q-value, because we only need one forward pass through the network to immediately have the Q-values of all actions.

Figure 3: Left: a naive formulation of the DQN. Right: the more optimized DQN architecture, the version used in DeepMind’s paper.

The deep neural network architecture DeepMind used, as described in their paper, is the following:
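
Layer   Input      Filter size  Stride  Num filters  Activation  Output
conv1   84×84×4    8×8          4       32           ReLU        20×20×32
conv2   20×20×32   4×4          2       64           ReLU        9×9×64
conv3   9×9×64     3×3          1       64           ReLU        7×7×64
fc4     7×7×64     –            –       512          ReLU        512
fc5     512        –            –       18           Linear      18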

This is a classical convolutional neural network with three convolutional layers followed by two fully connected layers. People familiar with image recognition may notice that there are no pooling layers. But if you really think about it, pooling layers buy you translation invariance – the network becomes insensitive to the location of an object in the image. That makes perfect sense for a classification task like ImageNet, but in a game the location of the ball is a crucial factor in determining the potential reward, and we wouldn’t want to discard that information!

The input to the network is four 84×84 grayscale game screens. The output of the network is a Q-value for each possible action (18 in Atari). Since Q-values can be any real number, this is a regression task that can be optimized with a simple squared error loss.

Given a transition <s, a, r, s’>, the Q-table update rule from the previous section is replaced by the following (a NumPy sketch follows the list):

  1. Do a feedforward pass for the current state s to get predicted Q-values for all actions.
  2. Do a feedforward pass for the next state s’ and calculate the maximum over the network outputs, max_{a’} Q(s’, a’).
  3. Set the Q-value target for the action taken to r + γ max_{a’} Q(s’, a’), using the max computed in step 2. For all other actions, set the Q-value target to the same value the network returned in step 1, making the error 0 for those outputs.
  4. Update the weights using backpropagation.
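
As an illustration, here is a minimal NumPy sketch of these four steps on a minibatch. The network object q_network with its predict/fit methods, and the arrays states, actions, rewards, next_states, terminals (1 for transitions that end an episode) plus the discount gamma, are all assumed to be defined elsewhere:

import numpy as np

q_values = q_network.predict(states)            # step 1: Q(s, ·) for all actions
next_q = q_network.predict(next_states)         # step 2: Q(s', ·) for all actions

targets = q_values.copy()                       # keeps the error at 0 for actions not taken
rows = np.arange(len(actions))
# step 3: target for the action taken; terminal transitions get the bare reward
targets[rows, actions] = rewards + gamma * (1.0 - terminals) * next_q.max(axis=1)

q_network.fit(states, targets)                  # step 4: backpropagate the squared error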

Experience Replay

By now we have a way to estimate the future reward in each state using Q-learning and to approximate the Q-function with a convolutional neural network. But it turns out that approximating Q-values with non-linear functions is not very stable. There is a whole bag of tricks you need to actually make it converge, and even then it takes a long time – almost a week on a single GPU.

The most important trick is experience replay. During gameplay, all experiences <s, a, r, s’> are stored in a replay memory. When training the network, random minibatches from the replay memory are used instead of the most recent transition. This breaks the similarity between subsequent training samples, which might otherwise drive the network into a local minimum. Experience replay also makes the training task more similar to usual supervised learning, which simplifies debugging and testing of the algorithm. One could actually collect such experiences from human gameplay and then train the network on them.

Exploration-Exploitation

Q-learning attempts to solve the credit assignment problem – it propagates rewards back in time until it reaches the crucial decision point that was the actual cause of the obtained reward. But we haven’t touched the exploration-exploitation dilemma yet…

First, observe that when a Q-table or Q-network is initialized randomly, its predictions are initially random as well. If we pick the action with the highest Q-value, that action will be random and the agent performs crude "exploration". As the Q-function converges, it returns more consistent Q-values and the amount of exploration decreases. So one could say that Q-learning incorporates exploration as part of the algorithm. But this exploration is "greedy": it settles on the first effective strategy it finds.

A simple and effective fix for the above problem is ε-greedy exploration – with probability ε choose a random action, otherwise go with the "greedy" action with the highest Q-value. In their system DeepMind actually decreases ε over time from 1 to 0.1 – in the beginning the system makes completely random moves to explore the state space maximally, and then it settles down to a fixed exploration rate.
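
In Python this action selection takes just a few lines; a sketch, assuming q_values holds the network outputs for the current state:

import random

def select_action(q_values, epsilon):
    # With probability epsilon explore: pick a uniformly random action.
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    # Otherwise exploit: pick the action with the highest Q-value.
    return max(range(len(q_values)), key=lambda a: q_values[a])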

The Deep Q-learning Algorithm

This gives us the final deep Q-learning algorithm with experience replay:
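
initialize replay memory D
initialize action-value function Q with random weights
observe initial state s
repeat
    select an action a
        with probability ε select a random action
        otherwise select a = argmax_{a'} Q(s, a')
    carry out action a
    observe reward r and new state s'
    store experience <s, a, r, s'> in replay memory D

    sample random transitions <ss, aa, rr, ss'> from replay memory D
    calculate target for each minibatch transition
        if ss' is a terminal state then tt = rr
        otherwise tt = rr + γ max_{a'} Q(ss', a')
    train the Q network using (tt - Q(ss, aa))^2 as loss

    s = s'
until terminated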

DeepMind actually uses quite a few more tricks to make the system work well – such as a target network, error clipping, reward clipping and so on – but these are beyond the scope of this introduction.

The most amazing part of this algorithm is that it learns anything at all. Think about it – because our Q-function is initialized randomly, it initially outputs complete garbage. Then we use this garbage (the maximum Q-value of the next state) as a target for the network, only occasionally folding in a tiny reward. That sounds insane – how could it learn anything meaningful? The fact is, it does.

Final notes

Many improvements to deep Q-learning have been proposed since its first introduction – Double Q-learning, Prioritized Experience Replay, Dueling Network Architecture and extension to continuous action spaces, to name a few. For the latest advancements check out the NIPS 2015 deep reinforcement learning workshop and ICLR 2016 (search for “reinforcement” in the title). But beware that deep Q-learning has been patented by Google.

It is often said that artificial intelligence is something we haven’t figured out yet. Once we know how it works, it doesn’t seem intelligent any more. But deep Q-networks still continue to amaze me. Watching them figure out a new game is like observing an animal in the wild – a rewarding experience by itself.

Credits

Thanks to Ardi Tampuu, Tanel Pärnamaa, Jaan Aru, Ilya Kuzovkin, Arjun Bansal and Urs Köster for comments and suggestions on the drafts of this post.

Links

David Silver’s lecture about deep reinforcement learning
Slightly awkward but accessible illustration of Q-learning
UC Berkeley’s course on deep reinforcement learning
David Silver’s reinforcement learning course
Nando de Freitas’ course on machine learning (two lectures about reinforcement learning in the end)
Andrej Karpathy’s course on convolutional neural networks

[1] Algorithm adapted from http://artint.info/html/ArtInt_265.html

 


Recurrent Neural Networks

Recurrent Neural Networks are currently very popular neural network models that have shown excellent results on many natural language processing tasks. Most articles introducing RNNs, however, cover how to use them and how well they work in practice; very few explain how the network is actually trained.

A recurrent neural network is a network unrolled through time: its depth is the length of the time sequence. It is designed for time series problems and can extract information from temporal sequences. In a feedforward network, the signal in each layer can only propagate to the next layer, and samples are processed independently across time steps. In a recurrent network, a neuron's output at the current time step can directly affect the input at the next time step, which is what lets the network handle time series problems.

This article will present the usage and training of recurrent neural networks from a mathematical point of view. It assumes the reader already knows basic material from calculus: derivatives, partial derivatives, the chain rule, and gradient descent. The traditional backpropagation (Back Propagation) algorithm will be used to train the RNN model.


Linear Regression

Regression methods make predictions for continuous data, and the simplest of them is of course linear regression. As the name suggests, linear regression fits the known dataset with a linear equation in order to predict future values. Its strength is that the result is very easy to interpret and the formula is simple; its weakness is that it fits nonlinear data poorly. For example, fitting the quadratic function f(x) = x^{2} with a linear function y = kx + b never works out well. To address this class of problems, people have proposed locally weighted linear regression, ridge regression, LASSO and forward stagewise linear regression. This article will introduce each of these regression algorithms.

(1) Linear Regression

Suppose each row of the matrix X is a sample and each column a feature, and the column vector Y holds the values corresponding to the rows of X. We want to find a column vector \Theta such that Y=X\Theta. Of course, such a \Theta almost never exists on real datasets. However, we can look for a \Theta that makes the Euclidean norm of the column vector Y-X\Theta small enough. In other words, we need a vector \Theta such that

\sum_{i=1}^{m}(y_{i}-x_{i}\Theta)^{2} = (Y-X\Theta)^{T}(Y-X\Theta)

is small enough, where m is the number of rows of X and x_{i} is the i-th row vector of X. A direct computation gives:

(Y-X\Theta)^{T}(Y-X\Theta)=Y^{T}Y-2Y^{T}X\Theta + \Theta^{T}X^{T}X\Theta

\Theta 求导之后得到:-2X^{T}Y + 2X^{T}X\Theta=0,求解 \Theta 之后得到 \Theta = (X^{T}X)^{-1}X^{T}Y。因此,对于矩阵 X 和列向量 Y 而言,最佳的线性回归系数是

\Theta = (X^{T}X)^{-1}X^{T}Y.
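
A minimal NumPy sketch of this closed-form solution (np.linalg.solve is used instead of forming the inverse explicitly, which is numerically safer):

import numpy as np

def linear_regression(X, Y):
    # Solve the normal equations (X^T X) Theta = X^T Y.
    return np.linalg.solve(X.T @ X, X.T @ Y)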

An example: the blue points are the dataset; running linear regression produces a straight line.


(2) Locally Weighted Linear Regression

One problem with linear regression is underfitting, since a linear equation can hardly describe large real-world datasets precisely. Locally weighted linear regression was therefore proposed: in this algorithm, every point is assigned a weight, that is,

\sum_{i=1}^{m}w_{i}(y_{i}-x_{i}\Theta)^{2} = (Y-X\Theta)^{T}W(Y-X\Theta),

where W is the diagonal matrix with \{w_{1},...,w_{m}\} on its diagonal, m is the number of rows of X and x_{i} is the i-th row vector of X. A computation gives:

(Y-X\Theta)^{T}W(Y-X\Theta)=Y^{T}WY-2Y^{T}WX\Theta+\Theta^{T}X^{T}WX\Theta,

\Theta 求导之后得到:

-2(Y^{T}WX)^{T} + 2 X^{T}WX\Theta = -2X^{T}WY+2X^{T}WX\Theta.

Setting the derivative to zero gives \Theta = (X^{T}WX)^{-1}X^{T}WY. So with locally weighted linear regression, the best coefficients are

\Theta = (X^{T}WX)^{-1}X^{T}WY.

Locally weighted linear regression requires choosing the weight matrix W, i.e. defining its diagonal entries; a Gaussian kernel is the usual choice:

w_{i} = \exp\{-\frac{(x_{i}-x)^{2}}{2k^{2}}\}.

where k is a parameter. From the definition of the Gaussian kernel, if x and x_{i} are close together, then w_{i} is large; if they are far apart, then w_{i} tends to zero. In other words, the algorithm performs linear regression locally, but not necessarily globally. In locally weighted linear regression, k is the only parameter.

With a well-chosen k you get a curve that looks quite reasonable; with a poorly chosen k you may end up overfitting.
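
A minimal NumPy sketch of a locally weighted prediction at a single query point x, with the Gaussian-kernel weights defined above:

import numpy as np

def lwlr_predict(x, X, Y, k):
    # Gaussian kernel: w_i = exp(-||x_i - x||^2 / (2 k^2)).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * k ** 2))
    # Solve (X^T W X) Theta = X^T W Y without forming the diagonal W explicitly.
    theta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * Y))
    return x @ theta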


(3) Ridge Regression and LASSO

In the special case where the number of features n is larger than the number of samples m, i.e. the matrix X has more columns than rows, X is not of full rank, so computing (X^{T}X)^{-1} runs into trouble. To solve this problem, ridge regression was introduced. The idea is to add a multiple of the identity matrix before inverting, so that the inverse exists: the matrix X^{T}X is augmented with \lambda I, where I is the n\times n identity matrix, making X^{T}X+\lambda I invertible. In this case, the formula for the regression coefficients becomes

\Theta = (X^{T}X+\lambda I)^{-1}X^{T}Y.

Ridge regression was originally introduced just to handle the case of more features than samples; it is now also used to introduce bias into the estimate and thereby obtain a better one.

From another point of view, when the samples have many features but the number of samples is relatively small, \sum_{i=1}^{m}(y_{i}-x_{i}\Theta)^{2} easily overfits. To mitigate overfitting, we can add a regularization term. With L^{2} regularization, the objective function becomes

\sum_{i=1}^{m}(y_{i}-x_{i}\Theta)^{2}+\lambda||\Theta||_{2}^{2}=(Y-X\Theta)^{T}(Y-X\Theta)+\lambda \Theta^{T}\Theta,

where \lambda>0. A derivation gives:

(Y-X\Theta)^{T}(Y-X\Theta)+\lambda \Theta^{T}\Theta = Y^{T}Y - 2\Theta^{T}X^{T}Y+\Theta^{T}X^{T}X\Theta+\lambda\Theta^{T}I\Theta.

\Theta 求导之后得到:

-2X^{T}Y+2(X^{T}X+\lambda I)\Theta,

Setting the derivative to zero gives \Theta = (X^{T}X + \lambda I)^{-1}X^{T}Y. So from another point of view, ridge regression adds an L^{2}-norm regularizer on top of linear regression, which helps lower the risk of overfitting.

Note that before running ridge regression, the features should be standardized so that every dimension carries the same importance. Concretely, compute (feature - feature mean) / feature standard deviation, so that every feature dimension has zero mean and unit variance.
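
A NumPy sketch of ridge regression with the standardization step included:

import numpy as np

def ridge_regression(X, Y, lam):
    # Standardize each feature to zero mean and unit variance.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    # Solve (X^T X + lambda I) Theta = X^T Y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)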

In addition, if we replace the L^{2}-norm regularizer in ridge regression with an L^{1} norm, the objective function becomes

\sum_{i=1}^{m}(y_{i}-x_{i}\Theta)^{2}+\lambda ||\Theta||_{1}

where the parameter \lambda>0. Both the L^{1} and L^{2} norms help reduce the risk of overfitting; the method using the L^{1} norm is called LASSO (Least Absolute Shrinkage and Selection Operator). The L^{1} norm yields sparse solutions more easily than the L^{2} norm, i.e. the resulting parameter vector \Theta has fewer non-zero components. A sparse \Theta means that of the initial n features, only those corresponding to the non-zero components of \Theta appear in the final model. Solving the L^{1}-regularized problem therefore produces a model that uses only a subset of the original features. Put differently, learning with L^{1} regularization is an embedded feature selection method, in which feature selection and training are fused into a single, simultaneous process.

(4) Forward Stagewise Linear Regression

Forward stagewise linear regression is a greedy algorithm that aims to reduce the error as much as possible at every step. Initially, all weights are set to 1, and the decision made at each step is to increase or decrease some weight by a small value \epsilon.

The pseudocode of the algorithm is as follows (a runnable sketch follows below):

Standardize the data to zero mean and unit variance
For each round of iteration:
  Set the current lowest error to +infinity
  For each feature:
    For increasing and for decreasing:
      Change one coefficient to get a new weight vector W
      Compute the error under the new W
      If the error is lower than the current lowest error: set Wbest to the current W
    Set W to the new Wbest
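
A runnable NumPy sketch of this pseudocode (the step size eps and the number of iterations are illustrative):

import numpy as np

def stagewise_regression(X, Y, eps=0.01, n_iter=1000):
    # Standardize features to zero mean and unit variance.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    n = X.shape[1]
    w = np.ones(n)                       # all weights start at 1
    for _ in range(n_iter):
        lowest_error = np.inf
        w_best = w
        for j in range(n):
            for sign in (1, -1):         # try increasing and decreasing w_j
                w_test = w.copy()
                w_test[j] += sign * eps
                error = np.sum((Y - X @ w_test) ** 2)
                if error < lowest_error:
                    lowest_error = error
                    w_best = w_test
        w = w_best                       # keep the single best change
    return w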

(5) Summary

Like classification, regression is a process of predicting target values; the difference is that classification predicts discrete variables while regression predicts continuous ones. In most cases, however, real data are complex, and a linear model is then not particularly suitable; other approaches, such as nonlinear models, are needed.


How to Get into the Top 10% in Your First Kaggle Competition

Introduction

Kaggle is the best place for learning from other data scientists. Many companies provide data and prize money to set up data science competitions on Kaggle. Recently I had my first shot on Kaggle and ranked 98th (~ 5%) among 2125 teams. Since this is my Kaggle debut, I feel quite satisfied. Because many Kaggle beginners set 10% as their first goal, here I want to share my experience in achieving that goal.

This post is also available in Chinese.

Kaggle Profile

Most Kagglers use Python and R. I prefer Python, but R users should have no difficulty in understanding the ideas behind tools and languages.

First let’s go through some facts about Kaggle competitions in case you are not very familiar with them.

  • Different competitions have different tasks: classification, regression, recommendation, ordering… Training set and testing set will be open for download after the competition launches.
  • A competition typically lasts for 2 ~ 3 months. Each team can submit a limited number of times per day. Usually it’s 5 times a day.
  • There will be a deadline one week before the end of the competition, after which you cannot merge teams or enter the competition. Therefore be sure to have at least one valid submission before that.
  • You will get your score immediately after the submission. Different competitions use different scoring metrics, which are explained by the question mark on the leaderboard.
  • The score you get is calculated on a subset of the testing set, which is commonly referred to as the Public LB score. The final result will use the remaining data in the testing set, which is referred to as the Private LB score.
  • The score you get by local cross validation is commonly referred to as a CV score. Generally speaking, CV scores are more reliable than LB scores.
  • Beginners can learn a lot from Forum and Scripts. Do not hesitate to ask, Kagglers are very kind and helpful.

I assume that readers are familiar with basic concepts and models of machine learning. Enjoy reading!

General Approach

In this section, I will walk you through the whole process of a Kaggle competition.

Data Exploration

What we do at this stage is called EDA (Exploratory Data Analysis), which means analytically exploring data in order to provide some insights for subsequent processing and modeling.

Usually we would load the data using Pandas and make some visualizations to understand the data.

Visualization

For plotting, Matplotlib and Seaborn should suffice.

Some common practices:

  • Inspect the distribution of target variable. Depending on what scoring metric is used, an imbalanced distribution of target variable might harm the model’s performance.
  • For numerical variables, use box plot to inspect their distributions.
  • For coordinates-like data, use scatter plot to inspect the distribution and check for outliers.
  • For classification tasks, plot the data with points colored according to their labels. This can help with feature engineering.
  • Make pairwise distribution plots and examine correlations between pairs of variables.

Be sure to read this very inspiring tutorial of exploratory visualization before you go on.

Statistical Tests

We can perform some statistical tests to confirm our hypotheses. Sometimes we can get enough intuition from visualization, but quantitative results are always good to have. Note that we will always encounter non-i.i.d. data in real world. So we have to be careful about the choice of tests and how we interpret the findings.

In many competitions public LB scores are not very consistent with local CV scores due to noise or non-i.i.d. distribution. You can use test results to roughly set a threshold for determining whether an increase of score is a genuine one or due to randomness.

Data Preprocessing

In most cases, we need to preprocess the dataset before constructing features. Some common steps are:

  • Sometimes several files are provided and we need to join them.
  • Deal with missing data.
  • Deal with outliers.
  • Encode categorical variables if necessary.
  • Deal with noise. For example you may have some floats derived from unknown integers. The loss of precision during floating-point operations can bring much noise into the data: two seemingly different values might be the same before conversion. Sometimes noise harms model and we would want to avoid that.

How we choose to perform preprocessing largely depends on what we learn about the data in the previous stage. In practice, I recommend using iPython Notebook for data manipulating and mastering usages of frequently used Pandas operations. The advantage is that you get to see the results immediately and are able to modify or rerun operations. Also this makes it very convenient to share your approaches with others. After all reproducible results are very important in data science.

Let’s have some examples.

Outlier

Outlier Example

The plot shows some scaled coordinates data. We can see that there are some outliers in the top-right corner. Exclude them and the distribution looks good.

Dummy Variables

For categorical variables, a common practice is One-hot Encoding. For a categorical variable with n possible values, we create a group of n dummy variables. Suppose a record in the data takes one value for this variable, then the corresponding dummy variable is set to 1 while other dummies in the same group are all set to 0.

Dummies Example

Like this, we transform DayOfWeek into 7 dummy variables.

Note that when the categorical variable can take many values (hundreds or more), this might not work well. It’s difficult to find a general solution to that, but I’ll discuss one scenario in the next section.
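
In pandas, for example, this transformation is a one-liner (the column values here are invented for illustration):

import pandas as pd

df = pd.DataFrame({'DayOfWeek': ['Mon', 'Tue', 'Sun', 'Mon']})
dummies = pd.get_dummies(df['DayOfWeek'], prefix='DayOfWeek')
df = pd.concat([df.drop('DayOfWeek', axis=1), dummies], axis=1)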

Feature Engineering

Some describe the essence of Kaggle competitions as feature engineering supplemented by model tuning and ensemble learning. Yes, that makes a lot of sense. Feature engineering gets you very far. Yet it is how well you know the domain of the given data that decides how far you can go. For example, in a competition where the data mainly consists of text, common NLP features are a must. The approach of constructing useful features is something we all have to continuously learn in order to do better.

Basically, when you feel that a variable is intuitively useful for the task, you can include it as a feature. But how do you know it actually works? The simplest way is to check by plotting it against the target variable like this:

Checking Feature Validity

Feature Selection

Generally speaking, we should try to craft as many features as we can and have faith in the model’s ability to pick up the most significant features. Yet there’s still something to gain from feature selection beforehand:

  • Fewer features mean faster training
  • Some features are linearly related to others. This might put a strain on the model.
  • By picking up the most important features, we can use interactions between them as new features. Sometimes this gives surprising improvement.

The simplest way to inspect feature importance is by fitting a random forest model. There exist more robust feature selection algorithms (e.g. this) which are theoretically superior but not practicable due to the absence of efficient implementations. You can combat noisy data (to an extent) simply by increasing the number of trees used in the random forest.

This is important for competitions in which data is anonymized because you won’t waste time trying to figure out the meaning of a variable that’s of no significance.

Feature Encoding

Sometimes raw features have to be converted to some other format for them to work properly.

For example, suppose we have a categorical variable which can take more than 10K different values. Then naively creating dummy variables is not a feasible option. An acceptable solution is to create dummy variables for only a subset of the values (e.g. values that constitute 95% of the feature importance) and assign everything else to an ‘others’ class.
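
A pandas sketch of this trick; the 95% threshold and the column name are illustrative:

import pandas as pd

counts = df['category'].value_counts()
top = counts[(counts.cumsum() / counts.sum()) <= 0.95].index
reduced = df['category'].where(df['category'].isin(top), other='others')
dummies = pd.get_dummies(reduced, prefix='category')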

Model Selection

With the features set, we can start training models. Kaggle competitions usually favor tree-based models:

  • Gradient Boosted Trees
  • Random Forest
  • Extra Randomized Trees

These models are slightly worse in terms of performance, but are suitable as base models in ensemble learning (will be discussed later):

  • SVM
  • Linear Regression
  • Logistic Regression
  • Neural Networks

Of course, neural networks are very important in image-related competitions.

All these models can be accessed using Sklearn.

Here I want to emphasize the greatness of Xgboost. The outstanding performance of gradient boosted trees and Xgboost’s efficient implementation makes it very popular in Kaggle competitions. Nowadays almost every winner uses Xgboost in one way or another.

BTW, installing Xgboost on Windows could be a painstaking process. You can refer to this post by me if you run into problems.

Model Training

We can obtain a good model by tuning its parameters. A model usually has many parameters, but only a few of them are important to its performance. For example, the most important parameters for a random forest are the number of trees in the forest and the maximum number of features used in developing each tree. We need to understand how models work and what impact each parameter has on the model’s performance, be it accuracy, robustness or speed.

Normally we would find the best set of parameters by a process called grid search. Actually what it does is simply iterating through all the possible combinations and find the best one.

# rfr is the model being tuned; RMSE is a custom scorer defined elsewhere.
param_grid = {'n_estimators': [300, 500], 'max_features': [10, 12, 14]}
model = grid_search.GridSearchCV(estimator=rfr, param_grid=param_grid,
                                 n_jobs=1, cv=10, verbose=20, scoring=RMSE)
model.fit(X_train, y_train)

By the way, random forests usually reach their optimum when max_features is set to the square root of the total number of features.

Here I’d like to stress some points about tuning XGB. These parameters are generally considered to have real impacts on its performance:

  • eta: Step size used in updating weights. Lower eta means slower training.
  • num_round: Total round of iterations.
  • subsample: The ratio of training data used in each iteration. This is to combat overfitting.
  • colsample_bytree: The ratio of features used in each iteration. This is like max_features of RandomForestClassifier.
  • max_depth: The maximum depth of each tree. Unlike random forest, gradient boosting would eventually overfit if we do not limit its depth.
  • early_stopping_rounds: Controls how many iterations without an increase of score on the validation set are needed for the algorithm to stop early. This is to combat overfitting, too.

Usual tuning steps:

  1. Reserve a portion of training set as the validation set.
  2. Set eta to a relatively high value (e.g. 0.1), num_round to 300 ~ 500.
  3. Use grid search to find best combination of other parameters.
  4. Gradually lower eta to find the optimum.
  5. Use the validation set as watch_list to re-train the model with the best parameters. Observe how score changes on validation set in each iteration. Find the optimal value for early_stopping_rounds.
X_dtrain, X_deval, y_dtrain, y_deval = \
    cross_validation.train_test_split(X_train, y_train, random_state=1026, test_size=0.3)
dtrain = xgb.DMatrix(X_dtrain, y_dtrain)
deval = xgb.DMatrix(X_deval, y_deval)
watchlist = [(deval, 'eval')]
params = {
    'booster': 'gbtree',
    'objective': 'reg:linear',
    'subsample': 0.8,
    'colsample_bytree': 0.85,
    'eta': 0.05,
    'max_depth': 7,
    'seed': 2016,
    'silent': 0,
    'eval_metric': 'rmse'
}
clf = xgb.train(params, dtrain, 500, watchlist, early_stopping_rounds=50)
pred = clf.predict(xgb.DMatrix(df_test))

Finally, note that models with randomness all have a parameter like seed or random_state to control the random seed. You must record this with all other parameters when you get a good model. Otherwise you wouldn’t be able to reproduce it.

Cross Validation

Cross validation is an essential step. It tells us whether our model is at high risk of overfitting. In many competitions, public LB scores are not very reliable. Often when we improve the model and get a better local CV score, the LB score becomes worse. It is widely believed that we should trust our CV scores in such situations. Ideally we would want CV scores obtained by different approaches to improve in sync with each other and with the LB score, but this is not always possible.

Usually 5-fold CV is good enough. If we use more folds, the CV score would become more reliable, but the training takes longer to finish as well.

How to do CV properly is not a trivial problem. It requires constant experiment and case-by-case discussion. Many Kagglers share their CV approaches (like this one) after competitions where it’s not easy to do reliable CV.

Ensemble Generation

Ensemble Learning refers to ways of combining different models. It reduces both the bias and the variance of the final model (you can find a proof here), thus increasing the score and reducing the risk of overfitting. Recently it has become virtually impossible to win a prize in a Kaggle competition without using ensembles.

Common approaches of ensemble learning are:

  • Bagging: Use different random subsets of training data to train each base model. Then base models vote to generate the final predictions. This is how random forest works.
  • Boosting: Train base models iteratively, modify the weights of training samples according to the last iteration. This is how gradient boosted trees work. It performs better than bagging but is more prone to overfitting.
  • Blending: Use non-overlapping data to train different base models and take a weighted average of them to obtain the final predictions. This is easy to implement but uses less data.
  • Stacking: To be discussed next.

In theory, for the ensemble to perform well, two elements matter:

  • Base models should be as unrelated as possible. This is why we tend to include non-tree-based models in the ensemble even though they don’t perform as well: the greater the diversity, the lower the bias of the final ensemble.
  • The performance of base models shouldn’t differ too much.

Actually we have a trade-off here. In practice we may end up with highly related models of comparable performances. Yet we ensemble them anyway, because doing so usually increases performance even under this circumstance.

Stacking

Compared with blending, stacking makes better use of training data. Here’s a diagram of how it works:

Stacking

(Taken from Faron. Many thanks!)

It’s much like cross validation. Take 5-fold stacking as an example. First we split the training data into 5 folds. Next we will do 5 iterations. In each iteration, train every base model on 4 folds and predict on the hold-out fold. You have to keep the predictions on the testing data as well. This way, in each iteration every base model will make predictions on 1 fold of the training data and all of the testing data. After 5 iterations we will obtain a matrix of shape #(rows in training data) X #(base models). This matrix is then fed to the stacker in the second level. After the stacker is fitted, use the predictions on testing data by base models (each base model is trained 5 times, therefore we have to take an average to obtain a matrix of the same shape) as the input for the stacker and obtain our final predictions.

Maybe it’s better to just show the codes:

import numpy as np
from sklearn.cross_validation import KFold

class Ensemble(object):
    def __init__(self, n_folds, stacker, base_models):
        self.n_folds = n_folds
        self.stacker = stacker
        self.base_models = base_models

    def fit_predict(self, X, y, T):
        X = np.array(X)
        y = np.array(y)
        T = np.array(T)
        folds = list(KFold(len(y), n_folds=self.n_folds, shuffle=True, random_state=2016))
        # Out-of-fold predictions on the training data, one column per base model.
        S_train = np.zeros((X.shape[0], len(self.base_models)))
        # Predictions on the testing data, averaged over folds, one column per base model.
        S_test = np.zeros((T.shape[0], len(self.base_models)))
        for i, clf in enumerate(self.base_models):
            S_test_i = np.zeros((T.shape[0], len(folds)))
            for j, (train_idx, test_idx) in enumerate(folds):
                X_train = X[train_idx]
                y_train = y[train_idx]
                X_holdout = X[test_idx]
                # y_holdout = y[test_idx]
                clf.fit(X_train, y_train)
                y_pred = clf.predict(X_holdout)[:]
                S_train[test_idx, i] = y_pred
                S_test_i[:, j] = clf.predict(T)[:]
            S_test[:, i] = S_test_i.mean(1)
        # The stacker learns from the out-of-fold predictions of the base models.
        self.stacker.fit(S_train, y)
        y_pred = self.stacker.predict(S_test)[:]
        return y_pred
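
Hypothetical usage of the class above, with sklearn regressors as base models and an XGBRegressor as the stacker (the variable names are for illustration only):

from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor, GradientBoostingRegressor
from xgboost import XGBRegressor

ensemble = Ensemble(n_folds=5,
                    stacker=XGBRegressor(),
                    base_models=[RandomForestRegressor(), ExtraTreesRegressor(),
                                 GradientBoostingRegressor(), XGBRegressor()])
pred = ensemble.fit_predict(X_train, y_train, X_test)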

Prize winners usually have larger and much more complicated ensembles. For beginners, implementing a correct 5-fold stacking is good enough.

*Pipeline

We can see that the workflow for a Kaggle competition is quite complex, especially for model selection and ensemble. Ideally, we need a highly automated pipeline capable of:

  • Modularized feature transforms. We only need to write a few lines of code and the new feature is added to the training set.
  • Automated grid search. We only need to set up the models and the parameter grid; the search runs and the best parameters are recorded.
  • Automated ensemble generation. Every once in a while, take the best K models and ensemble them.

For beginners, the first one is not very important because the number of features is quite manageable; the third one is not important either because typically we only do several ensembles at the end of the competition. But the second one is good to have because manually recording the performance and parameters of each model is time-consuming and error-prone.

Chenglong Chen, the winner of Crowdflower Search Results Relevance, once released his pipeline on GitHub. It’s very complete and efficient. Yet it’s still very hard to understand and extract all his logic to build a general framework. This is something you might want to do when you have plenty of time.

Home Depot Search Relevance

In this section I will share my solution in Home Depot Search Relevance and what I learned from top teams after the competition.

The task in this competition is to predict how relevant a result is for a search term on the Home Depot website. The relevance score is an average over three human evaluators and ranges from 1 to 3, so this is a regression task. The dataset contains search terms, product titles / descriptions and some attributes like brand, size and color. The metric is RMSE.

This is much like Crowdflower Search Results Relevance. The difference is that Quadratic Weighted Kappa was used in that competition, which complicated the final cutoff of the regression scores. Also, no attributes were provided in that competition.

EDA

There were several quite good EDAs by the time I joined the competition, especially this one. I learned that:

  • Many search terms / products appeared several times.
  • Text similarities are great features.
  • Many products don’t have attributes features. Would this be a problem?
  • Product ID seems to have strong predictive power. However the overlap of product ID between the training set and the testing set is not very high. Would this contribute to overfitting?

Preprocessing

You can find how I did preprocessing and feature engineering on GitHub. I’ll only give a brief summary here:

  1. Use the typo dictionary posted in the forum to correct typos in search terms.
  2. Count attributes. Find the frequent and easily exploited ones.
  3. Join the training set with the testing set. This is important because otherwise you’ll have to do every feature transform twice.
  4. Do stemming and tokenizing for all the text fields. Some normalization (with digits and units) and synonym substitutions are performed manually.

Feature

  • *Attribute Features
    • Whether the product contains a certain attribute (brand, size, color, weight, indoor/outdoor, energy star certified …)
    • Whether a certain attribute matches with search term
  • Meta Features
    • Length of each text field
    • Whether the product contains attribute fields
    • Brand (encoded as integers)
    • Product ID
  • Matching
    • Whether search term appears in product title / description / attributes
    • Count and ratio of search term’s appearance in product title / description / attributes
    • *Whether the i-th word of search term appears in product title / description / attributes
  • Text similarities between search term and product title/description/attributes
  • Latent Semantic Indexing: By performing SVD decomposition to the matrix obtained from BOW/TF-IDF Vectorization, we get the latent descriptions of different search term / product groups. This enables our model to distinguish between groups and assign different weights to features, therefore solving the issue of dependent data and products lacking some features (to an extent).

Note that the features listed above with * are the last batch of features I added. The problem is that the model trained on data that included these features performed worse than the previous ones. At first I thought that the increase in the number of features would require re-tuning of the model parameters. However, after wasting much CPU time on grid search, I still could not beat the old model. I think it might be the issue of feature correlation mentioned above. I actually knew of a solution that might work, which is to combine models trained on different versions of the features by stacking. Unfortunately I didn’t have enough time to try it. As a matter of fact, most of the top teams regard the ensemble of models trained with different preprocessing and feature engineering pipelines as a key to success.

Model

At first I was using RandomForestRegressor to build my model. Then I tried Xgboost, and it turned out to be more than twice as fast as Sklearn. From then on, what I did every day was basically running grid search on my PC while working on features on my laptop.

The dataset in this competition is not trivial to validate: it is not i.i.d. and many records are dependent. Many times I used better features / parameters only to end up with worse LB scores. As repeatedly stated by many accomplished Kagglers, you have to trust your own CV score in such a situation. Therefore I decided to use 10-fold instead of 5-fold cross validation and to ignore the LB score in the following attempts.

Ensemble

My final model is an ensemble consisting of 4 base models:

  • RandomForestRegressor
  • ExtraTreesRegressor
  • GradientBoostingRegressor
  • XGBRegressor

The stacker (L2 model) is also an XGBRegressor.

The problem is that all my base models are highly correlated (with a lowest correlation of 0.9). I thought of including linear regression, SVM regression and XGBRegressor with linear booster into the ensemble, but these models had RMSE scores that are 0.02 higher (this accounts for a gap of hundreds of places on the leaderboard) than the 4 models I finally used. Therefore I decided not to use more models although they would have brought much more diversity.

The good news is that, despite base models being highly correlated, stacking really bumps up my score. What’s more, my CV score and LB score are in complete sync after I started stacking.

During the last two days of the competition, I did one more thing: use 20 or so different random seeds to generate the ensemble and take a weighted average of them as the final submission. This is actually a kind of bagging. It makes sense in theory because in stacking I used 80% of the data to train base models in each iteration, whereas 100% of the data is used to train the stacker. Therefore it’s less clean. Making multiple runs with different seeds makes sure that different 80% of the data are used each time, thus reducing the risk of information leak. Yet by doing this I only achieved an increase of 0.0004, which might be just due to randomness.

After the competition, I found out that my best single model scores 0.46378 on the private leaderboard, whereas my best stacking ensemble scores 0.45849. That was the difference between the 174th place and the 98th place. In other words, feature engineering and model tuning got me into 10%, whereas stacking got me into 5%.

Lessons Learned

There’s much to learn from the solutions shared by top teams:

  • There’s a pattern in the product title. For example, whether a product is accompanied by a certain accessory will be indicated by With/Without XXX at the end of the title.
  • Use external data. For example use WordNet or Reddit Comments Dataset to train synonyms and hypernyms.
  • Some features based on letters instead of words. At first I was rather confused by this. But it makes perfect sense if you consider it. For example, the team that won the 3rd place took the number of letters matched into consideration when computing text similarity. They argued that longer words are more specific and thus more likely to be assigned high relevance scores by human. They also used char-by-char sequence comparison (difflib.SequenceMatcher) to measure visual similarity, which they claimed to be important for human.
  • POS-tag words and find anchor words. Use anchor words for computing various distances.
  • Extract top-ranking trigrams from the TF-IDF of the product title / description fields and compute the ratio of words from the search terms that appear in these trigrams, and vice versa. This is like computing latent indices from another point of view.
  • Some novel distance metrics, like Word Mover's Distance.
  • Apart from SVD, some used NMF.
  • Generate pairwise polynomial interactions between top-ranking features.
  • For CV, construct splits in which product IDs do not overlap between training and testing sets, and splits in which they do. These two kinds of splits can then be weighted in the appropriate ratio to approximate the impact of the public/private LB split in local CV (a GroupKFold sketch follows below).
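As a small illustration of the char-level idea (my own sketch, not the 3rd-place team's code), difflib.SequenceMatcher from the standard library yields a visual-similarity ratio directly:

```python
# Char-by-char visual similarity with difflib; the example strings
# are invented for illustration.
from difflib import SequenceMatcher

def visual_similarity(a, b):
    # ratio() returns a float in [0, 1]; 1.0 means identical strings.
    return SequenceMatcher(None, a, b).ratio()

print(visual_similarity("angle bracket", "angle braket"))  # high despite the typo
print(visual_similarity("angle bracket", "wood screw"))    # low
```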
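And for the ID-disjoint splits, sklearn's GroupKFold keeps all rows of a product ID on the same side of each split; a minimal sketch with made-up IDs:

```python
# ID-disjoint CV splits: rows sharing a product_uid never appear in
# both the training fold and the testing fold.
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(24).reshape(12, 2)                  # placeholder features
y = np.arange(12, dtype=float)                    # placeholder targets
product_uid = np.repeat([101, 102, 103, 104], 3)  # 4 products, 3 rows each

for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups=product_uid):
    assert set(product_uid[train_idx]).isdisjoint(product_uid[test_idx])
```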

Summary

Takeaways

  1. It was a good call to start doing ensembles early in the competition. As it turned out, I was still playing with features during the very last days.
  2. It's a high priority to build a pipeline capable of automated model training and of recording the best parameters.
  3. Features matter the most! I didn’t spend enough time on features in this competition.
  4. If possible, spend some time manually inspecting the raw data for patterns.

Issues Raised

Several issues I encountered in this competition are of high research value.

  1. How to do reliable CV with dependent data.
  2. How to quantify the trade-off between diversity and accuracy in ensemble learning.
  3. How to deal with feature interactions that harm the model's performance, and how to determine whether new features are effective in such situations.

Beginner Tips

  1. Choose a competition you're interested in. It's better if you already have some insight into the problem domain.
  2. Following my approach or somebody else's, start exploring, understanding and modeling the data.
  3. Learn from the forum and the public scripts. See how others interpret the data and construct features.
  4. Find winner interviews and solution blog posts from previous competitions. They're very helpful.
  5. Start doing ensembles after you have reached a pretty good score (e.g., top ~10%) or you feel there isn't much room left for new features (which, sadly, always turns out to be false).
  6. If you think you may have a chance to win the prize, try teaming up!
  7. Don’t give up until the end of the competition. At least try something new every day.
  8. Learn from what the top teams share after the competition. Reflect on your own approach. If possible, spend some time verifying what you learn.
  9. Get some rest!

Reference

  1. Beating Kaggle the Easy Way – Dong Ying
  2. Search Results Relevance Winner’s Interview: 1st place, Chenglong Chen
  3. (Chinese) Solution for Prudential Life Insurance Assessment – Nutastray

Machine Learning Is Kicking Up a Whirlwind in Security, but There Are Bugs

Security is now one of the fields that machine learning is pushing into with full force.

| Applying machine learning to security: the bosses can't wait

If you attended the 2016 RSA Conference in person, you would have noticed that hardly any company could talk about its security products without mentioning machine learning. Why is that?

To outsiders, machine learning can look like magic that solves every security problem: feed a pile of unlabeled data into a system that can learn, and it will spot patterns that even human experts miss, learn new behaviors, and adapt to emerging threats. Better yet, you don't even have to write the rules yourself, because the system works them out automatically.

If that were true, machine learning really would be the headliner of the year. The irony is that while everyone is making a great show of breaking into this field, the people who truly understand what machine learning is, or what it can realistically do, are few and far between. Unsurprisingly, in such a climate machine learning is widely misused, especially in security.

| The right way to solve security problems with machine learning

Applying machine learning to security usually involves one particular technique: anomaly detection, which identifies the items that do not match an expected pattern or dataset. Vendors should note that this technique only works under certain conditions, yet many clearly don't realize where they go wrong: they will tell you that once machine learning has analyzed your company's network traffic, it can dig out the hackers hiding inside the network. In reality, machine learning simply cannot do that, and a claim like this should immediately make you suspicious of the vendor.

So when does it work? Machine learning is effective only when a low-dimensional problem is paired with high-quality labeled data. Unfortunately, enterprises rarely have both in practice. To detect new kinds of attacks you need clear, well-labeled examples of attacks, which means that without a thorough understanding of normal network behavior, machine learning cannot uncover hackers. And hackers are cunning; they will do their best to make their disguise seamless.

| Where do machine learning and anomaly detection add the most value?

Where machine learning and anomaly detection are genuinely useful is in classifying human behavior.

It turns out that humans are highly predictable, so it is possible to build very accurate models of an individual user's behavior and let those models flag anomalies.

Some real progress has already been made here, for example implicit authentication (Implicit Authentication). Implicit authentication uses biometric signals such as keystroke pressure, typing rhythm, and typing patterns to verify a user's identity. Its advantages are clear for both user experience and security; at the very least it frees users from memorizing and typing passwords. Because the signals involved are mostly low-dimensional, machine learning only has to handle a few parameters, which also makes it practical to collect high-quality labeled data from users. Just as machine learning can match images correctly in computer vision despite variation or noise, it can likewise verify identity by recognizing an individual's distinctive behavior.
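As a toy illustration only (my own sketch with invented features, not any product's actual method), a handful of low-dimensional keystroke features can feed an ordinary classifier:

```python
# Toy implicit-authentication sketch: a few keystroke-timing features per
# typing session, and a classifier separating the legitimate user from
# others. All features and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: mean key-hold time (s), mean inter-key gap (s), typing speed (chars/s)
user_sessions = rng.normal([0.11, 0.18, 5.0], 0.01, size=(200, 3))
other_sessions = rng.normal([0.14, 0.25, 4.2], 0.03, size=(200, 3))

X = np.vstack([user_sessions, other_sessions])
y = np.array([1] * 200 + [0] * 200)  # 1 = legitimate user

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.112, 0.19, 4.9]])[0, 1])  # probability it's the user
```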

But how does it actually do that?

Everything about the way you walk, stand, and move is determined by many factors together: physiology, age, gender, muscle memory, and so on. For a given individual these movements change very little. So, without you noticing, the phone in your pocket captures this information precisely through its built-in sensors and records it. Roughly 4 seconds of motion data is enough to identify a person by the way they move. Identity can also be verified by comparing a user's historical location records with the current one: people live by habit, and by observing when and from where they set out, a system can judge whether the person being measured really is the user.

Our phones and computers already carry a large number of sensors, and as wearables and the Internet of Things spread, that number will explode. Vast amounts of behavioral and environmental data are collected and handed to machine learning, which builds a model of the individual and learns how the various factors relate to one another.

| What homework do you need before machine learning can defend you?

To mount a defense, your system must know in advance which threat models exist.

First and most important: collect data. The data must be accurate enough to train the system to withstand threats. If the authentication system does come under attack, there is no need to panic: behavioral change is fairly easy to detect, and the system will quickly recognize the anomaly. For example, if a device is stolen, the motion, location, and usage recorded after the theft will differ sharply from earlier records. The system allows for such anomalies: the user then confirms their identity through another channel and tunes the system so that false positives are minimized. Once four such factors are linked across different devices, the false-positive rate of implicit authentication falls below 0.001%.

No machine learning in this world is magical enough to solve every security problem. Designers who want to build a useful security product with machine learning need a deep understanding of the underlying system, and they must accept that many problems are simply not a good fit for machine learning. The good news is that the technology companies at the crest of the wave are knocking these problems down one by one.

Machine learning is brewing an unstoppable market frenzy in the security world.

The Future of Cybersecurity Depends on Machine Learning

Information security has always been a game of cat and mouse. The good guys build a new wall; the bad guys find a way through or around it. Lately, though, the bad guys seem to be getting through that wall more and more easily. Stopping them will take a major leap in capability, and that likely means using machine learning far more widely.

This may surprise observers outside the industry, but machine learning has so far had little broad impact on IT security. Security experts point out that although credit-card fraud detection systems and network equipment makers already use advanced analytics, the routine automated security operations at nearly every large company, such as detecting malware on PCs or spotting malicious activity on the network, still rely largely on humans writing and configuring the code in time.

Although the application of machine learning to network security has been studied extensively in academia, we are only now beginning to see its impact on security tools. Startups such as Invincea, Cylance, Exabeam and Argyle Data are building machine-learning-driven security tools that are faster and more accurate than those from the dominant security software vendors.

Destroying malware with data

Invincea is a Virginia-based company specializing in malware detection and network security. Josh Saxe, its chief research engineer, believes it is time to retire the signature- and file-hash-based analysis techniques of the 1990s.

Saxe says: "I understand that some antivirus companies have ventured into machine learning, but what they still live on is signature detection. They detect malware by file hashes or pattern matching, analysis techniques that human researchers devised to detect a given sample."

Part of Invincea's advanced malware detection system is based on DARPA's Cyber Genome program.

He says: "They were very successful at detecting the malware that was common in the past, but they are poor at detecting new malware, and that is one reason cybercrime is thriving today. Even with an antivirus system installed, attackers still break into your computer, because signature-based detection simply doesn't work."

At Invincea, Saxe leads a team using machine learning to build a better malware detection system. The project, part of DARPA's Cyber Genome program, uses machine learning to take detected malware apart: reverse-engineering how the malware operates, running social-network analysis on its code, and rapidly dissecting new malware samples as they appear in the wild.

"We have shown that our machine-learning-based approach is more effective than traditional antivirus systems. A machine learning system can automate what human analysts do, and even do it better. Combine a machine learning system with a large amount of training data and you can beat traditional signature-based detection."

Invincea uses deep learning to speed up the training of its algorithms. Saxe currently has roughly 1.5 million benign and malicious samples for training, all processed on GPUs using Python tools. He hopes that as the sample set grows to 30 million, the system's performance advantage will grow linearly.

"The more training data we have, and the more malware samples we train on, the more pronounced the system's advantage in detecting malware becomes," he says.

Saxe says Invincea's current plan is to ship more deep-learning-based capabilities in its 2016 endpoint security products, specifically by adding them to Cynomix, an endpoint security product that already uses machine learning.

Detecting malicious users

Machine learning also helps with other parts of IT security: detecting malicious insiders and identifying compromised accounts.

Just as the major antivirus products rely on signatures to identify malware, the tools that monitor user activity lean on signatures too. And just as signature-based detection is starting to fail against malware, it performs poorly against user activity as well.

"In the past, enterprise security teams relied heavily on signature-based methods, such as IP-address blacklists," says Derek Lin, chief data scientist at Exabeam, a vendor of user behavior analytics tools.

He says: "That approach looks for things that have already happened. The problem with signature-based methods is that they can only see a signature after the event has occurred. Today, security teams are focused on detecting malicious events that leave no signature."

Exabeam builds a map of user activity by tracking users' remote connections, devices, IP addresses and credentials.

Savvy criminals now know that slightly changing their path defeats signature detection. If a compromised detection system keeps an IP blacklist, cybercriminals can beat it by constantly hopping back and forth across the large pool of network domains under their control.

Rather than clinging to yesterday's defensive playbook, Exabeam takes a proactive approach built on Gartner's concept of UBA (User Behavior Analytics). The idea behind UBA is that you cannot know in advance whether a machine or a user is good or bad, so you assume they are malicious and your network is vulnerable, and you monitor and model everyone's behavior around the clock to surface the malicious actors.

This is where machine learning algorithms come in. Lin and his team ingest a wide variety of sources, such as server logs and VPN logs, and apply a range of supervised and unsupervised machine learning algorithms to detect anomalous patterns in user behavior.

Lin says: "All of this is about painting a picture of user behavior; the question is how. For every user or entity on the network, we try to build a profile of what normal looks like, which involves statistical analysis. Then we look for deviations from that normal at the conceptual level... We use behavior-based methods to find the anomalies in the system and surface them for security analysts to review."
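As an illustrative sketch only (not Exabeam's pipeline; the features are invented), an unsupervised detector such as sklearn's IsolationForest can learn a user's baseline and flag sessions that deviate from it:

```python
# Unsupervised anomaly detection on made-up user-behavior features:
# fit a baseline on normal sessions, then score new sessions against it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: login hour, data transferred (MB), distinct hosts contacted
normal_sessions = np.column_stack([
    rng.normal(9, 1, 500),    # logins cluster around 9 am
    rng.normal(50, 10, 500),  # typical data volume
    rng.poisson(3, 500),      # a handful of hosts
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

print(detector.predict([[3, 800, 40]]))  # -1: a 3 am bulk transfer is anomalous
print(detector.predict([[9, 52, 4]]))    # 1: an ordinary session is normal
```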

The future of machine learning in security

"Think back over the major waves of cybersecurity we have been through. Cybercriminals keep finding effective ways to break security systems, and we have to strike back. Will machine learning become a mainstay of that counterattack? Absolutely," says Patrick Townsend, founder and CEO of security software vendor Townsend Security.

He says: "We are now starting to get systems that can effectively process large volumes of unstructured data and detect patterns. I expect the next wave of security products to be built on cognitive computing. Look at Watson: if it can win at Jeopardy, why couldn't it be applied broadly to analyzing and understanding security events? I think we are at the very beginning of using cognitive computing to help with security."

Invincea's Saxe hopes to ride that wave. He says: "I'm not surprised that companies in this field haven't yet caught the wave and built products on the new deep learning algorithms. Training machine learning at this scale has only recently become feasible; it could not have been done effectively 10 years ago."
