From the Harmonic Series to the Riemann Zeta Function (I)

The Riemann Zeta Function

The Riemann zeta function, \zeta(s), is a function of a complex variable s. On the complex plane, when the real part of s satisfies \sigma=\Re s >1, \zeta(s) is given by the following series:

\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^{s}}.

Definition and Basic Properties of the Harmonic Series

Since we have mentioned series, let us first recall what a series is.

Definition of a series: in mathematics, given a finite or infinite sequence (x_{0},x_{1},x_{2},...), the formal sum S = x_{0}+x_{1}+x_{2}+... is called a series, and each summand is called a term of the series.

Definition of convergence: let S_{n}=x_{0}+...+x_{n}. If there exists a finite S such that \lim_{n\rightarrow \infty}S_{n}=S, the series is said to converge; otherwise it is called divergent.

Now let us study the basic properties of the harmonic series. Its expression could hardly be simpler: it is the value of the Riemann zeta function at s=1, i.e.

\zeta(1) = \sum_{n=1}^{+\infty}\frac{1}{n}.

When discussing convergence or divergence of a series, one must mention the equivalent criterion for convergence (the Cauchy criterion): the series S_{n} converges if and only if for every \epsilon>0 there exists N such that |S_{m}-S_{n}|<\epsilon for all m, n>N.

Since the criterion is an equivalence, we can use the Cauchy criterion to decide whether the harmonic series converges.

Method 1. Let

S_{n}=\sum_{k=1}^{n}\frac{1}{k};

a direct computation then gives

|S_{2n}-S_{n}|=\frac{1}{n+1}+...+\frac{1}{2n}\geq\frac{1}{2n}+...+\frac{1}{2n}=\frac{1}{2},

so the Cauchy criterion fails (take \epsilon = 1/2); that is, the harmonic series diverges.

Besides the proof based on the Cauchy criterion, are there other ways to show the divergence of the harmonic series? The answer is yes. Here is an argument that uses nothing beyond elementary mathematics.

Method 2.

\sum_{n=1}^{+\infty}\frac{1}{n}

=1+\frac{1}{2}+(\frac{1}{3}+\frac{1}{4})+(\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8})+...

>1+\frac{1}{2}+(\frac{1}{4}+\frac{1}{4})+(\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8})+...

=1+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+...=+\infty.

Since calculus has already come up, we should not confine ourselves to elementary techniques; an argument that relies only on elementary methods is severely limited in how far it can be generalized.

Method 3. The divergence of the harmonic series can also be established with definite integrals.

[Figure: HarmonicSeries]

1+\frac{1}{2}+...+\frac{1}{n}

>\int_{1}^{2}\frac{1}{x}dx + \int_{2}^{3}\frac{1}{x}dx+...+\int_{n}^{n+1}\frac{1}{x}dx

=\int_{1}^{n+1}\frac{1}{x}dx=\ln(n+1)

Therefore, \sum_{n=1}^{\infty}\frac{1}{n}=+\infty.

The integral comparison above suggests that the harmonic sums grow at a logarithmic rate. Can this estimate be made precise? Consider, for example, the following question:

Question: \lim_{n\rightarrow +\infty}\frac{\sum_{k=1}^{n}\frac{1}{k}}{\ln(n)}=?

By L'Hospital's rule: \lim_{x\rightarrow 0}x/\ln(1+x)=1.

By the Stolz theorem:

\lim_{n\rightarrow +\infty}\frac{\sum_{k=1}^{n}\frac{1}{k}}{\ln(n)}

= \lim_{n\rightarrow +\infty}\frac{\frac{1}{n}}{\ln(n/(n-1))}

= \lim_{x\rightarrow 0}\frac{x}{\ln(1+x)}=1,

where the last step substitutes x = 1/(n-1) and uses \frac{1}{n}\sim\frac{1}{n-1}.

Moreover, we can likewise show that the limit

\lim_{n\rightarrow+\infty}(1+\frac{1}{2}+...+\frac{1}{n}-\ln(n))

exists and is finite; its value is the Euler-Mascheroni constant \gamma \approx 0.5772.
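A quick numerical sketch of both claims, using only the Python standard library:

import math

# Partial sums H_n of the harmonic series grow like ln(n), and the
# difference H_n - ln(n) tends to the Euler-Mascheroni constant.
N = 10**6
H = sum(1.0 / n for n in range(1, N + 1))
print(H / math.log(N))   # close to 1
print(H - math.log(N))   # close to gamma = 0.57721...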

Generalizations of the Harmonic Series

Now, when we consider \zeta(2), i.e. the series

\zeta(2) = \sum_{n=1}^{\infty}\frac{1}{n^{2}},

can similar techniques decide its convergence? Indeed, various elementary tricks work here as well. For example:

Method 1.

\sum_{n=1}^{+\infty}\frac{1}{n^{2}}<1+\sum_{n=2}^{+\infty}\frac{1}{n(n-1)}=1+\sum_{n=2}^{+\infty}(\frac{1}{n-1}-\frac{1}{n})=2.

Method 2. Mathematical induction; that is, we prove the bound:

\sum_{k=1}^{n}1/k^{2}\leq 2-\frac{1}{n}.

For n=1 the bound holds. Suppose it holds for n, i.e. \sum_{k=1}^{n}1/k^{2}\leq 2-\frac{1}{n}. Then

\sum_{k=1}^{n+1}\frac{1}{k^{2}}

\leq 2-\frac{1}{n}+\frac{1}{(n+1)^{2}}

= 2- \frac{1}{n+1}-\frac{1}{n(n+1)^{2}}

\leq 2-\frac{1}{n+1}.

Hence the bound holds for all n; the partial sums are increasing and bounded above by 2, so \sum_{n=1}^{+\infty}1/n^{2} converges.

Next, the definite-integral method used for the harmonic series also applies in this setting.

Method 3.

1+\frac{1}{2^{2}}+...+\frac{1}{n^{2}}

<1+\int_{1}^{2}\frac{1}{x^{2}}dx+...+\int_{n-1}^{n}\frac{1}{x^{2}}dx

=1+\int_{1}^{n}\frac{1}{x^{2}}dx=1+1-\frac{1}{n}<2.

That handles the case of exponent two. For the general case,

\zeta(s)=\sum_{n=1}^{+\infty}\frac{1}{n^{s}}, \quad \sigma = \Re(s)>1,

the same definite-integral technique shows that \zeta(s) converges for every \sigma = \Re(s)>1 (absolutely, since |n^{-s}| = n^{-\sigma}), while \zeta(1) diverges.

Values of the Riemann Zeta Function at Particular Points

Furthermore, since \zeta(s) converges when \sigma = \Re(s)>1, can we compute the value of the function at particular points? The answer is yes: for example, Fourier series can be used to compute the values \zeta(2), \zeta(4), \zeta(6),\ldots. First, let us recall some properties of Fourier series.

Suppose f(x) is a 2\pi-periodic function, i.e. f(x)=f(x+2\pi) for all x \in \mathbb{R}. The Fourier series of f(x) is then defined as

a_{0}+\sum_{n=1}^{\infty} (a_{n} \cos(nx) +b_{n} \sin(nx)),

where a_{0}= \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) dx,

a_{n}= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx) dx, \quad n\geq 1,

b_{n}= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx) dx, \quad n\geq 1.

Theorem 1. If f(x) satisfies a Lipschitz condition on the interval (-\pi, \pi), then

f(x) =a_{0}+\sum_{n=1}^{\infty} (a_{n} \cos(nx) +b_{n} \sin(nx)).

Theorem 2 (Parseval's identity).

\frac{1}{\pi} \int_{-\pi}^{\pi} |f(x)|^{2} dx= 2a_{0}^{2}+ \sum_{n=1}^{\infty} (a_{n}^{2}+b_{n}^{2}).

We now prove the following identities:

\sum_{n=1}^{\infty} \frac{1}{(2n-1)^{2}}=\frac{\pi^{2}}{8}

\sum_{n=1}^{\infty} \frac{1}{n^{2}}=\frac{\pi^{2}}{6}

\sum_{n=1}^{\infty} \frac{1}{(2n-1)^{4}}=\frac{\pi^{4}}{96}

\sum_{n=1}^{\infty} \frac{1}{n^{4}}=\frac{\pi^{4}}{90}

Proof:

Take the function f(x)=|x| on the interval (-\pi, \pi), extended 2\pi-periodically.

Using the formulas for a_{n} and b_{n}, the Fourier series of f(x)=|x| is

\frac{\pi}{2} + \sum_{n=1}^{\infty} \frac{2((-1)^{n}-1)}{\pi}  \cdot \frac{\cos(nx)}{n^{2}}.

By Theorem 1, setting x=0 gives

0= \frac{\pi}{2} + \sum_{n=1}^{\infty} \frac{2((-1)^{n}-1)}{n^{2} \pi}  = \frac{\pi}{2} + \sum_{m=1}^{\infty} \frac{-4}{(2m-1)^{2}\pi}  = \frac{\pi}{2} - \frac{4}{\pi} \sum_{m=1}^{\infty} \frac{1}{(2m-1)^{2}}

Therefore \sum_{n=1}^{\infty} \frac{1}{(2n-1)^{2}}=\frac{\pi^{2}}{8}.

Let S=\sum_{n=1}^{\infty} \frac{1}{n^{2}}. Splitting into odd and even n (the even terms sum to \frac{1}{4}S) gives

S=\sum_{n\ odd} \frac{1}{n^{2}} + \sum_{n\ even} \frac{1}{n^{2}}  = \frac{\pi^{2}}{8} + \frac{1}{4} S.

Therefore S=\frac{\pi^{2}}{6}.

From Parseval's identity, we know that

\frac{2\pi^{2}}{3}= \frac{1}{\pi} \int_{-\pi}^{\pi} x^{2}dx  = 2\cdot (\frac{\pi}{2})^{2} + \sum_{n=1}^{\infty} \frac{4((-1)^{n}-1)^{2}}{\pi^{2}\cdot n^{4}}  = \frac{\pi^{2}}{2} + \sum_{m=1}^{\infty} \frac{16}{\pi^{2} (2m-1)^{4}}

Therefore \sum_{n=1}^{\infty} \frac{1}{(2n-1)^{4}} = \frac{\pi^{4}}{96}.

Let S=\sum_{n=1}^{\infty} \frac{1}{n^{4}}. In the same way,

S=\sum_{n\ odd} \frac{1}{n^{4}} + \sum_{n\ even} \frac{1}{n^{4}}  = \frac{\pi^{4}}{96} + \frac{1}{16} S,

and therefore S=\frac{\pi^{4}}{90}.
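As a sanity check, here is a small numerical sketch (Python standard library only) comparing truncated sums of the four series with the closed-form values just proved:

import math

N = 10**5
print(sum(1.0 / n**2 for n in range(1, N + 1)), math.pi**2 / 6)           # ~1.6449
print(sum(1.0 / n**4 for n in range(1, N + 1)), math.pi**4 / 90)          # ~1.0823
print(sum(1.0 / (2*n - 1)**2 for n in range(1, N + 1)), math.pi**2 / 8)   # ~1.2337
print(sum(1.0 / (2*n - 1)**4 for n in range(1, N + 1)), math.pi**4 / 96)  # ~1.0147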

Summary

This article started from the divergence of the harmonic series and presented several ways of testing its convergence, then considered the convergence of related series, and finally computed some values of the Riemann zeta function via Fourier series.


How to Get Started in Machine Learning from Zero to One?

Introduction: As someone from a mathematics background who switched to machine learning mid-career, I naturally stepped into countless pits and took many detours that could have been avoided. I therefore wanted to put together a guide to getting started in machine learning, as a small contribution to those who come after me.

In March 2016, when AlphaGo defeated Lee Sedol, artificial intelligence entered the public eye on a massive scale. Not only internet engineers but also ordinary onlookers began to follow AI's impact on daily life. With face recognition growing ever stronger, personalized news-recommendation apps everywhere, and open-source tools such as TensorFlow becoming widely known, more and more people began moving into AI: backend developers from computer science, engineers from electronics and communications, and people from traditional science backgrounds such as mathematics and physics.


As a career changer, I should introduce my background. My undergraduate major was mathematics and applied mathematics, which a layperson can think of as pure mathematics. My PhD research was in dynamical systems and fractal geometry, still pure mathematics, with little connection to computing. Readers curious about my research can consult the Zhihu article "复动力系统(1)— Fatou集与Julia集". As for machine learning, I had essentially no exposure to it during my studies and had not even heard of it. That said, my degree required some programming, so I could write a bit of C++ and left some footprints on UVA OJ.


2015: Attempting the Transition

"The road is hard, the road is hard; so many forks in the road: where am I now?"

After graduating in 2015, by a stroke of luck I joined Tencent to work on machine learning. The pressure at the start was considerable, and looking back, I took some detours I should not have taken. The line above, from Li Bai's 《行路难》, captures my mood at the time. In October 2015 I took on my first project of modest size, the XX recommendation project, only the second recommendation project the team had received. The recommender system of those days ran on a big-data cluster with no documentation and no front-end pages at all, and the whole system and pipeline were complex and tedious. Working with it, however, I gradually picked up basic Linux commands and how to use SQL. My SQL did not come from that system alone: through ADS on-call shifts, pulling data for business teams, I consolidated the basics further. For SQL I read two very good introductory books in 2015, 《SQL基础教程》 and 《HIVE编程指南》; for Linux, reading 《Linux 命令行与 Shell 脚本编程大全》 gave me a rough working knowledge. After working for a while, I wrote an article summarizing common SQL algorithms, "HIVE基础介绍".

While working on the recommendation project, besides processing data with SQL, doing machine learning also meant learning the common algorithms. The first machine learning algorithm I encountered was logistic regression, and with it, unavoidably, the concept of cross-validation, a basic notion in machine learning. We constructed new features from item category attributes and basic user features, for example inner products of features; later I added outer products and Cartesian products. Beyond feature crosses there are many other ways to construct features, such as standardization, normalization, discretization, and binarization. Beyond constructing features, judging their importance is a crucial problem: the most common approach is to inspect the weights of the trained model, and mathematical tools such as the Pearson correlation coefficient and KL divergence give a rough judgment of whether a feature is useful. During this period I wrote the articles "交叉验证", "特征工程简介", and "KL散度". For feature engineering, beyond the necessary reading, practice matters most; only practice makes one's experience richer.

For the recommender we originally trained logistic regression offline, computed the model weights, loaded them into the online system, and then scored in real time. Beyond offline algorithms, in December 2015 I learned about FTRL, an algorithm capable of online learning. After researching it I gave a talk within the team in early 2016 and shared my notes on zr9558.com; I recently moved that article to my WeChat public account as "Follow the Regularized Leader".

From the XX recommendation project I learned that data is the bedrock of the whole machine learning project. If data quality is poor, you must preprocess the data, or even push the developers to fix data-reporting problems. Generally, to do a recommendation project well, the most important thing beyond feature engineering and algorithms is data verification. The lesson then was to reconcile data from three sources: the results computed offline by the algorithm, the results computed online, and the results actually displayed in the product must agree exactly; whenever they disagree, stop and investigate instead of pushing the project forward. I fell into countless data pits in that period, and the lesson is to check the data over and over again.


2016: From Zero to One

Standing on the shoulders of giants, one sees farther. -- learning recommender systems

By February 2016, besides tuning the personalized home page of the XX recommendation project, we started another small project: opening a new home-page tab that recommends different items to different users. A simple approach is ItemCF or the heat-conduction propagation algorithm: once a user has listened to some program, recommend similar programs. This scenario has long had successful industrial cases and is nothing new, much like being recommended related books after viewing one on an e-commerce site. I once wrote about a simple recommendation algorithm in "物质扩散算法", which readers may find useful; the details of ItemCF and the heat-conduction algorithm will be filled in over subsequent blog posts.

"Read a book a thousand times, and its meaning reveals itself." Using the recommender system, I only roughly understood how the whole thing was put together; to grasp the machine learning algorithms as a whole, doing projects alone is far from enough. While I was working on recommendations, Zhou Zhihua's textbook 《机器学习》 came out in early 2016, and I spent some time reading it. Personally I found it not too difficult, but it is best read together with another book to appreciate its subtleties: 《机器学习实战》 (Machine Learning in Action), which contains not only descriptions of the principles of the algorithms but also detailed source code, enough to take any beginner from novice to initiate.

"Long is the road and far the journey; I will seek, high and low."

Speaking of "from zero to one": that year I experienced building a new business from scratch. In 2016, to bring machine learning into business security, the department set up the XX project team. The department had never attempted this at scale; there was no prior success, not even a suitable system to use, and security work is fundamentally different from recommendation work. For a recommender, the accuracy of recommendations determines whether CTR meets its target; for a security system to take action against organized abuse in production, precision must exceed 99%. The old recommender relied mostly on logistic regression, stored two kinds of features (item and user), and otherwise used mainly ItemCF and heat conduction, so when the XX project began, the previous technical solutions did not apply, and a framework had to be rebuilt from the actual scenarios of business security.

We had no practical experience in security then, and the tentative plan was to pilot machine learning on the XX1 and XX2 businesses. To prepare, I first surveyed several startups claiming to do machine learning plus security. The one I studied most was XX, because they had published an article describing how machine learning applies to business security: build an adversarial pipeline of unsupervised learning, supervised learning, and manual labeling. I also summarized the anomaly-detection algorithms I studied over those two or three months in the articles "异常点监测算法(一)", "异常点检测算法(二)", "异常点检测算法(三)", and "异常点检测算法综述".

At the end of 2016, by another stroke of luck, a colleague saw the KM article "循环神经网络" I had published in November 2016 and came to discuss how to build game AI. At the time I knew almost nothing about game-AI applications, only that DeepMind had built AlphaGo and had used deep neural networks to play Atari games in 2013. In December I spent some time studying reinforcement learning and deep learning, and built a simple DQN for reinforcement-learning training. After several rounds of contact and discussion we finally produced a simple game AI in January 2017, one that could learn the game on its own through machine learning. Although I am not in a game department, the experience gave me a strong interest in game AI, and I wrote two articles, "强化学习与泛函分析" and "深度学习与强化学习".


2017: Regrouping

Alongside routine projects, in 2017 I also came into contact with quantum computing. Over the following months I kept surveying its fundamentals and some quantum machine learning proposals, and wrote two articles, "量子计算(一)" and "量子计算(二)", introducing the basic concepts and techniques of quantum computing.

"Thirty years of deeds, dust and earth; eight thousand li of road, clouds and moon."

By "regrouping" I mean building brand-new projects from zero once more in 2017. By July 2017, with the machine learning framework for business security largely complete and the XX project nearing its end, a new project landed in my hands: intelligent operations. The operations center was still in an exploratory, early stage, and the industry notion of intelligent operations (AIOps) itself only began to take shape in 2017: the progression from manual operations to automated operations and then toward AI-driven operations. Only along that path can an operations center reach the truly hands-off "coffee ops" stage.

I formally came into contact with the operations project in August 2017. From conversations with the business-operations engineers, several pain points stood out: anomaly detection for Monitor time series, root-cause analysis in 哈勃, root-source analysis in the ROOT system, fault triage, cost optimization, and so on. With AIOps staff scarce and academia paying little attention to these problems, opening up machine learning inside operations was a huge challenge. Back when the 神盾 system existed, any recommendation business could plug in easily, mature internal experience was available, and academia and industry had countless success stories; intelligent operations, by contrast, was only promoted starting in 2017, preceded only by manual operations and some DevOps. So the question of how to quickly stand up an intelligent-operations stack usable within the department became a major challenge. The main difficulties were roughly these:

1. A heavy legacy burden

2. A shortage of AIOps staff

3. No mature system framework

Under these conditions, bringing in outside technology was impossible; we could only build in-house, collaborating mainly with business operations and operations development. The first intelligent-operations project I touched was 哈勃's multi-dimensional drill-down analysis. The scenario: once an indicator such as success rate drops, the anomaly must be pinpointed precisely among many dimensions, say carrier, province, or handset model, to find what caused the drop. This is classic root-cause analysis. After a survey I found a few relevant papers and wrote up a document, "根因分析的探索". PS: beyond 哈勃's drill-down, I suspect BI-style business analysis could use the same methods to automatically answer questions such as "why did DAU drop?" or "why did revenue miss expectations?".

Beyond the multi-dimensional drill-down, the Monitor time-series anomaly detection was the thornier project. The old Monitor detector relied on developers setting three thresholds (maximum, minimum, volatility) per curve based on its shape. The results: imprecise detection, insufficient coverage, and enormous labor cost. With on the order of a million curves needing detection, configuring thresholds by hand for each curve is completely unreasonable; the consequence was weekly on-call duty, and even then problems were not always caught in time. For time-series algorithms, what usually comes to mind is ARIMA, the deep-learning RNN and LSTM models, and Facebook's recently open-sourced Prophet tool. I surveyed all of these and will write articles on using ARIMA, RNNs, and Prophet; discussion is welcome.

The prediction and anomaly-detection algorithms above mainly operate on a single series, and mostly on series that are fairly stable and historically regular. Building a separate time-series model for every curve would be no different from threshold detection: the labor cost stays enormous. And in Monitor's actual setting each of those models has its own flaws, falling short of "one person shouldering a million KPI curves". After much research we proposed a novel scheme that handles the anomaly detection of millions of curves with just a few models: an unsupervised layer plus a supervised layer, where the first, unsupervised layer filters out the bulk of the normal cases, and the second, supervised layer raises precision and recall. In the literature on time-series anomaly detection, a paper typically takes one family of series and one family of models and optimizes within that family; in our setting the curves come in more shapes than I can describe, so a single family of time-series models is absolutely not an option. Machine learning, however, offers ensemble learning: use the outputs of several models as features, and train a relatively general model on those features to detect anomalies across all Monitor time series. I summarized this class of methods in "时间序列简介(一)". In the end we achieved "a million curves, one person", successfully eliminating manual thresholding in the business.


2018: Toward the Future

"For what my heart holds good, I would not regret it though I died nine deaths."

In the course of changing careers I took wrong turns, felt the pain of chasing down data problems, experienced the joy of meeting business targets, and learned what it takes to build a system from zero to one. I write this article to record these two-plus years of growth, hoping it does a little to help those determined to move into machine learning. From these projects my experience is that doing a project well from zero to one requires a good plan at the very start, followed by step-by-step course corrections as the project advances; but without sufficient accumulated knowledge it is hard to find the right direction at all.

Some people abandon a long-term goal for short-term gain; but to go far, the best plan is to grow together with your team, ideally with everyone sharing a long-term goal, and not to let small fluctuations throw you off. A team or an individual in a hurry for results usually fails, while persistent learning is the one reliable way to do research and carry out one's work.

The poet Lu You taught his descendants: "汝果欲学诗,功夫在诗外". That is, if you truly want to learn poetry, the real work lies outside the poems: go and taste the sweets and bitters of life rather than clutch an anthology and reread it. Readers of 天龙八部 will recall Jiumozhi going up to challenge Shaolin and demonstrating the seventy-two Shaolin arts before the assembled monks, who paled in shock. Xuzhu, watching from the side, told them that although the moves Jiumozhi displayed were Shaolin arts, they were in essence driven by the Taoist 小无相功: the moves were the same, but the inner force behind them was Taoist. Why did the Shaolin monks not see the crux of his technique? Because they had trained their whole lives clutching the manuals, mastering at most thirteen arts in a lifetime. Jiumozhi's own training shows that the key to mastery does not lie inside the manuals: without finding the essential scripture and the right way of channeling power, one can clutch the manuals for decades and still remain essentially behind.

In my two-plus years in the SNG Social Network Operations Department I have worked on recommendation projects, security projects, and now operations projects, perhaps the only person in the department (I am not sure this is accurate) to have done all three, and I have used a recommender system and built two systems from zero to one. My current interest centers on AIOps, because I believe machine learning has a genuine place in the traditional field of business operations. I believe that before long AIOps will land in every operations scenario and truly reach the "coffee ops" stage.

Zhang Rong (张戎)

February 2018

The Autoregressive Model for Time Series — from the Viewpoint of Linear Algebra

The Fibonacci Sequence

Before explaining the autoregressive (AR) model for time series, we need some basics of linear algebra, and a simple example introduces them well. Consider the Fibonacci sequence of integers: its terms are defined by a recursion, each term generated from the previous two, namely

F_{n}=F_{n-1}+F_{n-2}

with initial values F_{0}=0, F_{1} = 1. Computing from the recursion, the first terms are:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...

However, finding the closed form of the Fibonacci sequence is much harder than for arithmetic or geometric sequences; it takes techniques from linear algebra to solve this problem.

The Closed Form of the Fibonacci Sequence --- Matrix Diagonalization

By the recursion for the Fibonacci numbers and the definition of matrix multiplication, the sequence can be written in the following form:

\left( \begin{array}{c} F_{n+2}  \\ F_{n+1}  \\ \end{array}\right)= \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \\ \end{array}\right) \left( \begin{array}{c} F_{n+1}  \\ F_{n}  \\ \end{array}\right) = A \left( \begin{array}{c} F_{n+1}  \\ F_{n}  \\ \end{array}\right),

\left( \begin{array}{c} F_{n}  \\ F_{n-1}  \\ \end{array}\right)= A \left( \begin{array}{c} F_{n-1}  \\ F_{n-2}  \\ \end{array}\right) = \cdots = A^{n-1} \left( \begin{array}{c} F_{1}  \\ F_{0}  \\ \end{array}\right).

This means we need the powers of the matrix A. In linear algebra, besides direct computation via the multiplication formula, there is a classical trick for computing the n-th power of a matrix: diagonalization. In detail, if the m\times m matrix A is diagonalizable, there exists an invertible matrix P such that

P^{-1}AP = diag(\lambda_{1},\cdots,\lambda_{m})

\implies AP = P diag(\lambda_{1},\cdots,\lambda_{m})

\implies A = P diag(\lambda_{1},\cdots,\lambda_{m})P^{-1}.

where D = diag(\lambda_{1},\cdots,\lambda_{m}) denotes the m\times m diagonal matrix whose diagonal entries, from top-left to bottom-right, are \lambda_{1},\cdots,\lambda_{m}. Writing the matrix P in column form, i.e. P =(\vec{\alpha}_{1},\cdots,\vec{\alpha}_{m}), the matrix equation above becomes A\vec{\alpha}_{i} = \lambda_{i}\vec{\alpha}_{i}, 1\leq i\leq m. Furthermore, the powers of A become:

A^{k} = (PDP^{-1})\cdots(PDP^{-1}) = P D^{k} P^{-1}= P diag(\lambda_{1}^{k},\cdots,\lambda_{m}^{k})P^{-1}.

Moreover, to find the eigenvalues and eigenvectors of a matrix A, one solves the polynomial equation in \lambda

det(\lambda I - A) = 0,

where I is the identity matrix.

Following this approach, if we let

A = \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \\ \end{array}\right),

then the two eigenvalues of A are \phi=(1+\sqrt{5})/2 and -\phi^{-1} = (1-\sqrt{5})/2, with corresponding eigenvectors:

\vec{\alpha}_{1} = (\phi,1)^{T}, \vec{\alpha}_{2} = (-\phi^{-1},1)^{T}.

A direct computation then gives

F_{k} = \frac{1}{\sqrt{5}}\bigg(\frac{1+\sqrt{5}}{2}\bigg)^{k} - \frac{1}{\sqrt{5}}\bigg(\frac{1-\sqrt{5}}{2}\bigg)^{k}=\frac{\phi^{k}-(-\phi)^{-k}}{\sqrt{5}}.

In other words, to obtain the closed form of the Fibonacci sequence, we first recast it as a matrix-power problem, and then the diagonalization method yields the closed form.
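A short sketch checking Binet's closed form against the recursion (pure Python; the function names are mine):

import math

def fib_recursive(n):
    # F_n from the recursion F_n = F_{n-1} + F_{n-2}.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    # F_n from the closed form (phi^n - (-phi)^{-n}) / sqrt(5).
    phi = (1 + math.sqrt(5)) / 2
    return round((phi**n - (-phi)**(-n)) / math.sqrt(5))

print([fib_recursive(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print([fib_binet(n) for n in range(10)])      # identical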

Weak Stationarity of Time Series

To explain autoregressive models, we must mention weak stationarity. A time series \{x_{t}\}_{t\geq 0} is weakly stationary if:

  1. E(x_{t}) is constant for all t\geq 0;
  2. Var(x_{t}) is constant for all t\geq 0;
  3. the covariance of x_{t} and x_{t-h} is constant for all t\geq 0 (it depends only on the lag h).

In addition, the autocorrelation function (ACF) of a time series is defined, for h = 1,2,3,\cdots, by

ACF(x_{t},x_{t-h}) = \frac{Covariance(x_{t},x_{t-h})}{\sqrt{Var(x_{t})\cdot Var(x_{t-h})}}.

Under the assumption that \{x_{t}\}_{t\geq 0} is weakly stationary, Var(x_{t}) = Var(x_{t-h}) and the ACF simplifies to

ACF(x_{t},x_{t-h}) = \frac{Covariance(x_{t},x_{t-h})}{Var(x_{t})}.

The Autoregressive Model (AR)

The AR(1) Model

The AR(1) model says that the value x_{t} of the time series \{x_{t}\}_{t\geq 0} at time t is related to the value x_{t-1} at time t-1 by the formula:

x_{t}=\delta+\phi_{1}x_{t-1}+w_{t},

where the time series \{x_{t}\}_{t\geq 0} satisfies the following conditions:

  1. w_{t}\sim N(0,\sigma_{w}^{2}), and the w_{t} are i.i.d.; here N(0,\sigma_{w}^{2}) denotes the Gaussian distribution with mean 0 and variance \sigma_{w}^{2}.
  2. w_{t} is independent of the past values x_{s}, s<t.
  3. x_{0},x_{1},\cdots is weakly stationary; this requires |\phi_{1}|<1.

Choosing the initial condition x_{0}=1 yields sample paths of AR(1) models such as those in the figure below:

[Figure: sample paths of AR(1) models]

From the definition of AR(1) above, we can derive:

  1. E(x_{t}) = \delta/(1-\phi_{1}).
  2. Var(x_{t}) = \sigma_{w}^{2}/(1-\phi_{1}^{2}).
  3. \rho_{h} = ACF(x_{t},x_{t-h}) = \phi_{1}^{h}.

Proof of 1. From the AR(1) model and the constancy of the mean (so E(x_{t-1}) = E(x_{t})), we get

E(x_{t}) = E(\delta + \phi_{1}x_{t-1}+w_{t})  = \delta + \phi_{1}E(x_{t-1}) = \delta + \phi_{1}E(x_{t}),

hence E(x_{t}) = \delta/(1-\phi_{1}).

Proof of 2. From the AR(1) model, the independence of w_{t} from x_{t-1}, and the constancy of the variance, we get

Var(x_{t}) = Var(\delta + \phi_{1}x_{t-1}+w_{t})

= \phi_{1}^{2}Var(x_{t-1}) +Var(w_{t}) = \phi_{1}^{2}Var(x_{t}) + \sigma_{w}^{2},

hence Var(x_{t}) =\sigma_{w}^{2}/(1-\phi_{1}^{2}).

Proof of 3. Let \mu = E(x_{t}), which is the same for all t\geq 0. Iterating the definition of x_{t} gives:

x_{t}-\mu = \phi_{1}(x_{t-1}-\mu)+w_{t}

= \phi_{1}^{h}(x_{t-h}-\mu) + \phi_{1}^{h-1}w_{t-h+1}+\cdots+\phi_{1}w_{t-1}+w_{t},

Since the noise terms are independent of x_{t-h}, taking expectations and dividing by Var(x_{t}) yields

\rho_{h} = ACF(x_{t},x_{t-h}) = \frac{E((x_{t}-\mu)\cdot(x_{t-h}-\mu))}{Var(x_{t})}=\phi_{1}^{h}.
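A small simulation sketch of the three facts above; the parameter choices \delta = 1, \phi_{1} = 0.6, \sigma_{w} = 1 are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
delta, phi1, sigma_w, T = 1.0, 0.6, 1.0, 200_000

x = np.empty(T)
x[0] = delta / (1 - phi1)                    # start at the stationary mean
for t in range(1, T):
    x[t] = delta + phi1 * x[t - 1] + sigma_w * rng.standard_normal()

print(x.mean(), delta / (1 - phi1))          # both ~2.5
print(x.var(), sigma_w**2 / (1 - phi1**2))   # both ~1.5625
h = 3
print(np.corrcoef(x[h:], x[:-h])[0, 1], phi1**h)   # both ~0.216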

AR(1) and One-Dimensional Dynamical Systems

In particular, if w_{t} is identically zero, then x_{t} =\delta + \phi_{1}x_{t-1} holds for all t\geq 1, which can be written as the iteration of a one-dimensional map:

f(x) = \phi_{1}x + \delta.

We now study the convergence of f^{n}(x), where f^{n}(x) = f\circ\cdots\circ f(x) denotes the n-th iterate of the function f.

Method 1.

Direct computation from the definition of f(x) gives:

f^{n}(x) = \phi_{1}^{n}x+ \frac{1-\phi_{1}^{n}}{1-\phi_{1}}\delta.

If |\phi_{1}|<1, letting n\rightarrow \infty gives f^{n}(x)\rightarrow \delta/(1-\phi_{1}), which is consistent with E(x_{t}) = \delta/(1-\phi_{1}).

On the other hand, if |\phi_{1}|>1, the same formula shows that f^{n}(x) \rightarrow \infty as n\rightarrow \infty (for any x \neq \delta/(1-\phi_{1})).

Method 2.

Rewrite the map in a contraction (Lipschitz) form; i.e. if |\phi_{1}|<1, then

f(x)-\frac{\delta}{1-\phi_{1}} = \phi_{1}(x-\frac{\delta}{1-\phi_{1}})

\implies |f(x)-\frac{\delta}{1-\phi_{1}}| <\frac{1+|\phi_{1}|}{2}\cdot|x-\frac{\delta}{1-\phi_{1}}|

\implies |f^{n}(x)-\frac{\delta}{1-\phi_{1}}|<\bigg(\frac{1+|\phi_{1}|}{2}\bigg)^{n}\cdot|x-\frac{\delta}{1-\phi_{1}}|,

using |\phi_{1}| < (1+|\phi_{1}|)/2 when |\phi_{1}|<1. Since (1+|\phi_{1}|)/2<1, we conclude \lim_{n\rightarrow\infty}f^{n}(x)=\delta/(1-\phi_{1}), i.e. f^{n}(x) converges to the fixed point \delta/(1-\phi_{1}).

Conversely, if |\phi_{1}|>1, then |\phi_{1}| > (1+|\phi_{1}|)/2 and we easily get

f(x)-\frac{\delta}{1-\phi_{1}} = \phi_{1}(x-\frac{\delta}{1-\phi_{1}})

\implies |f(x)-\frac{\delta}{1-\phi_{1}}| >\frac{1+|\phi_{1}|}{2}\cdot|x-\frac{\delta}{1-\phi_{1}}|

\implies |f^{n}(x)-\frac{\delta}{1-\phi_{1}}|>\bigg(\frac{1+|\phi_{1}|}{2}\bigg)^{n}\cdot|x-\frac{\delta}{1-\phi_{1}}|.

Therefore, when |\phi_{1}|>1, f^{n}(x)\rightarrow \infty as n\rightarrow \infty for any x \neq \delta/(1-\phi_{1}). In particular, for the first-order difference equation x_{t} =\delta + \phi_{1}x_{t-1}, if |\phi_{1}|>1 the values x_{t} grow without bound, which does not match reality; hence in time-series work one normally assumes |\phi_{1}|<1.
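A few lines of Python make the contraction visible; with \phi_{1} = 0.5 and \delta = 1 the fixed point is \delta/(1-\phi_{1}) = 2:

# Iterate f(x) = phi1 * x + delta from an arbitrary starting point.
phi1, delta, x = 0.5, 1.0, 10.0
for _ in range(30):
    x = phi1 * x + delta
print(x)   # ~2.0, the fixed point delta / (1 - phi1)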

The AR(p) Model

Following the earlier definitions, the AR(1) model extends to the AR(p) model; that is:

1. The AR(1) model has the form:

x_{t}=\delta+\phi_{1}x_{t-1}+w_{t}.

2. The AR(2) model has the form:

x_{t} = \delta + \phi_{1}x_{t-1}+\phi_{2}x_{t-2}+w_{t}.

3. The AR(p) model has the form:

x_{t} = \delta + \phi_{1}x_{t-1}+\phi_{2}x_{t-2}+\cdots+\phi_{p}x_{t-p}+w_{t}.

Stability of the AR(p) Model --- via Linear Algebra

For the AR(2) model, assume \delta = 0 and ignore the noise term, which gives the simplified model:

x_{t}= \phi_{1}x_{t-1} + \phi_{2}x_{t-2}.

In matrix form this reads:

\left( \begin{array}{c} x_{t+2}  \\ x_{t+1}  \\ \end{array}\right)= \left( \begin{array}{cc} \phi_{1} & \phi_{2} \\ 1 & 0 \\ \end{array}\right) \left( \begin{array}{c} x_{t+1}  \\ x_{t}  \\ \end{array}\right) = A \left( \begin{array}{c} x_{t+1}  \\ x_{t}  \\ \end{array}\right).

Its characteristic polynomial comes from det(\lambda I - A) = 0, which yields \lambda^{2}-\phi_{1}\lambda - \phi_{2} =0, and, when A is diagonalizable, A^{k} = P diag(\lambda_{1}^{k}, \lambda_{2}^{k})P^{-1}. When \lambda_{1} and \lambda_{2} both lie inside the unit circle, the model x_{t+2} = \phi_{1}x_{t+1}+\phi_{2}x_{t} satisfies the stability condition.

For the more general AR(p) model, i.e. the p-th order difference equation

x_{t} = \phi_{1}x_{t-1}+\phi_{2}x_{t-2}+\cdots+\phi_{p}x_{t-p}.

the same method converts it to matrix (companion) form:

\left(\begin{array}{c} x_{t+p} \\ x_{t+p-1} \\ \vdots \\ x_{t+1}\\ \end{array}\right) = \left(\begin{array}{ccccc} \phi_{1} & \phi_{2} &\cdots & \phi_{p-1} & \phi_{p} \\ 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ \end{array}\right) \left(\begin{array}{c} x_{t+p-1} \\ x_{t+p-2} \\ \vdots \\ x_{t} \\ \end{array}\right) = A \left(\begin{array}{c} x_{t+p-1} \\ x_{t+p-2} \\ \vdots \\ x_{t} \\ \end{array}\right)

Computing det(\lambda I - A) = 0 gives the characteristic polynomial:

\lambda^{p}-\phi_{1}\lambda^{p-1}-\phi_{2}\lambda^{p-2}-\cdots-\phi_{p}=0.

When every eigenvalue lies inside the unit disk, i.e. |\lambda_{i}|<1, \forall 1\leq i\leq p, the p-th order difference equation

x_{t} = \phi_{1}x_{t-1}+\phi_{2}x_{t-2}+\cdots+\phi_{p}x_{t-p}

has stable solutions.
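A NumPy sketch of this stability test: build the companion matrix from the coefficients \phi_{1},\cdots,\phi_{p} and check that all eigenvalues lie inside the unit disk (the example coefficients are arbitrary):

import numpy as np

def is_stable(phi):
    # Companion matrix of x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p}.
    p = len(phi)
    A = np.zeros((p, p))
    A[0, :] = phi                  # first row holds phi_1, ..., phi_p
    A[1:, :-1] = np.eye(p - 1)     # ones on the subdiagonal
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

print(is_stable([0.5, 0.3]))   # True: both roots inside the unit disk
print(is_stable([1.1]))        # False: AR(1) with |phi_1| > 1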

 

How to Understand Time Series? — Starting from the Riemann and Lebesgue Integrals

The Riemann integral and the Lebesgue integral are two very important concepts in mathematics. Starting from their definitions, this article introduces their respective properties and the connections between them.

Integrals

The Riemann Integral

Although it bears Riemann's name, this kind of integral was studied in detail well before Riemann. As early as the age of Archimedes, in order to compute the area enclosed between the curve x^{2} and the X axis over the interval [0,1], Archimedes already used the idea of the Riemann integral: he cut [0,1] into n segments of equal length, approximated the area under x^{2} on each segment by a rectangle, and took n large; letting n tend to infinity shows that the area is in fact 1/3.

Here is the precise definition of the Riemann integral.

[Figure: Riemann integral]

Consider a function f(x) defined on the closed interval [a,b].

Take a finite sequence of points a=x_{0}<x_{1}<\cdots<x_{n}=b, and let \lambda = \max(x_{i+1}-x_{i}), 0\leq i\leq n-1, denote the largest of the subinterval lengths. On each subinterval [x_{i},x_{i+1}] choose a point t_{i}\in[x_{i},x_{i+1}]. The Riemann sum of f(x) with respect to this tagged partition is the quantity:

\sum_{i=0}^{n-1}f(t_{i})(x_{i+1}-x_{i}).

Saying that the integral of the function f(x) over the closed interval [a,b] equals s means:

for every \epsilon>0 there exists \delta>0 such that, for every tagged partition with \lambda<\delta,

|\sum_{i=0}^{n-1}f(t_{i})(x_{i+1}-x_{i}) - s|<\epsilon.

In symbols: \int_{a}^{b}f(x)dx=s.

Pictorially, the idea of the Riemann integral looks as follows:

 

The Lebesgue Integral

The Riemann integral computes the area enclosed between a curve and the X axis; the Lebesgue integral does the same thing, but measures the area in a slightly different way. The two principles can be compared intuitively in the figure below:

[Figure: Riemann integral (top) vs. Lebesgue integral (bottom)]

The Riemann integral divides the base of the curve into intervals of equal length, measures the height of the curve over each interval, and takes the total area to be the sum of the areas of the resulting rectangles.

The Lebesgue integral turns the curve into a contour map in which adjacent contour lines differ by the same amount; each contour level contributes the length it encloses, so the total area is the sum of the areas between contour levels.

In more vivid language: there is more than one way to eat a hamburger.

  1. Riemann integral: start in one corner and eat bite by bite, each bite containing all the layers;
  2. Lebesgue integral: eat from the top down, one layer at a time, in the rhythm "bread-vegetable-meat-egg-bread".

One more picture: the Riemann integral adds up the pieces in the order of the blue numbers, while the Lebesgue integral adds them in the order of the red numbers.

[Figure: Riemann and Lebesgue]

 

From these basic ideas, the Lebesgue integral can be defined as follows:

A simple function is a finite linear combination of indicator functions, i.e. \sum_{k}a_{k}1_{S_{k}}, where the a_{k} are coefficients, the S_{k} are measurable sets, and 1_{S_{k}} denotes the indicator function of S_{k}. When the a_{k} are nonnegative, set

\int(\sum_{k}a_{k}1_{S_{k}})d\mu = \sum_{k}a_{k}\int 1_{S_{k}}d\mu = \sum_{k}a_{k}\mu(S_{k}).

If f is a nonnegative measurable function, the Lebesgue integral of f over a measurable set E is defined as:

\int_{E}f d\mu = \sup\{\int_{E}sd\mu: \mathbf{0}\leq s\leq f\},

where s ranges over nonnegative simple functions, \mathbf{0} denotes the zero function, and the inequalities are required pointwise on the domain.

For a general measurable function f, decompose it as f= f^{+}-f^{-}, where f^{+} and f^{-} are both nonnegative measurable functions. The Lebesgue integral of an arbitrary measurable function is then defined as:

\int fd\mu = \int f^{+}d\mu - \int f^{-}d\mu.

The Relation between the Riemann and Lebesgue Integrals

Having defined two integrals, one may ask whether they contradict each other. They do not, because of the following theorem:

If a bounded function f is Riemann integrable on the closed interval [a,b], then it is also Lebesgue integrable there, and the two integrals agree:

(R)\int_{a}^{b}f(x)dx = (L)\int_{[a,b]}f(x)dx.

The left-hand side denotes the Riemann integral and the right-hand side the Lebesgue integral.

In the pictorial language above: whether you eat the hamburger bite by bite from a corner or layer by layer from the top, it is the same hamburger.

The Lebesgue integral, however, has a decisive advantage over the Riemann integral. Consider the Dirichlet function:

  1. D(x) = 1 when x is rational;
  2. D(x) = 0 when x is irrational.

The Dirichlet function is defined on the real line with range \{0,1\}, and its graph cannot be drawn. It is not Riemann integrable, but it is Lebesgue integrable (with integral 0, since the rationals form a set of measure zero).

 

Time Series

A time series is simply the discussion above with continuous functions replaced by discrete ones: the domain changes from a closed interval [a,b] to \{1,2,3,\cdots\}. Many of the ideas developed for continuous functions therefore carry over to time series.

Representing a Time Series — in the Spirit of the Riemann Integral

Following the way a Riemann integral is computed, we can represent the features of a time series by cutting it into segments along the horizontal axis and representing each segment with some simple function (a linear function, say). This leads to the following methods:

  1. Piecewise Linear Approximation
  2. Piecewise Aggregate Approximation
  3. Piecewise Constant Approximation

The most essential idea behind these algorithms is dimensionality reduction: use a small amount of data to represent the original time series. In mathematical language, the data reduction of a time series means representing the original series \{x_{1},\cdots,x_{N}\} by \{x_{1}^{'},\cdots, x_{D}^{'}\} with D<N; the latter is then called a representation of the original series.

Piecewise Aggregate Approximation (PAA) — analogous to the Riemann integral

Among these algorithms, Piecewise Aggregate Approximation (PAA) is a true classic. Suppose the original time series is C = \{x_{1},\cdots, x_{N}\}; the PAA series is defined as \overline{C} = \{\overline{x}_{1},\cdots,\overline{x}_{w}\},

where

\overline{x}_{i} = \frac{w}{N} \cdot \sum_{j=\frac{N}{w}(i-1)+1}^{\frac{N}{w}i} x_{j}

for 1\leq i\leq w. In pictures:

[Figure: PAA]

For Piecewise Linear Approximation and Piecewise Constant Approximation, one only needs to modify the definition of \overline{x}_{i} slightly.
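A minimal PAA sketch that follows the formula above; for brevity it assumes w divides N (real implementations also handle the general case):

import numpy as np

def paa(x, w):
    # Compress a length-N series into w segment means.
    x = np.asarray(x, dtype=float)
    n = len(x)
    assert n % w == 0, "this sketch assumes w divides N"
    return x.reshape(w, n // w).mean(axis=1)

series = np.sin(np.linspace(0, 2 * np.pi, 64))
print(paa(series, 8))   # 8 segment means representing the 64 points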

Symbolic Approximation — analogous to computing a Lebesgue integral with simple functions

In the feature engineering of recommender systems, features are commonly normalized, binarized, or discretized. For example, a user's age feature is generally not used as a raw date of birth but is divided into ranges, say 0-6 (infancy), 7-12 (primary school), 13-17 (secondary school), 18-22 (university), and so on.

After obtaining piecewise features, those features are in some sense still continuous values. Can the continuous values be mapped to a handful of discrete ones? Researchers have used symbols to capture the key features of a time series, the so-called Symbolic Representation. Below we introduce the classic SAX representation.

If we wish to represent the time series with \alpha symbols, we can consider the standard normal distribution N(0,1) and use the points \{z_{1/\alpha},\cdots,z_{(\alpha-1)/\alpha}\} that split the area under the Gaussian curve into \alpha equal parts. Let \{l_{1},\cdots,l_{\alpha}\} denote \alpha letters.

The SAX pipeline is as follows (a code sketch follows the list):

  1. Normalization: map the original series to a new time series with mean zero and variance one.
  2. Piecewise representation (PAA): \{x_{1},\cdots, x_{N}\} \Rightarrow  \{\overline{x}_{1},\cdots,\overline{x}_{w}\}.
  3. Symbolization (SAX): if \overline{x}_{i}<z_{1/\alpha}, then \hat{X}_{i}=l_{1}; if z_{(j-1)/\alpha}\leq \overline{x}_{i}<z_{j/\alpha}, then \hat{X}_{i} = l_{j}, here for 2\leq j\leq \alpha-1; and if \overline{x}_{i}\geq z_{(\alpha-1)/\alpha}, then \hat{X}_{i} = l_{\alpha}.

In this way the original time series is represented with the \alpha letters \{l_{1},\cdots,l_{\alpha}\}.
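A compact sketch of the three SAX steps; it reuses the PAA idea, takes the Gaussian quantiles z_{j/\alpha} from scipy.stats.norm.ppf, and uses an illustrative letter alphabet:

import string
import numpy as np
from scipy.stats import norm

def sax(x, w, alpha):
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()                       # step 1: normalization
    segments = x.reshape(w, len(x) // w).mean(axis=1)  # step 2: PAA
    cuts = norm.ppf(np.arange(1, alpha) / alpha)       # z_{1/alpha}, ..., z_{(alpha-1)/alpha}
    letters = np.array(list(string.ascii_lowercase[:alpha]))
    return "".join(letters[np.searchsorted(cuts, segments, side='right')])  # step 3

series = np.sin(np.linspace(0, 2 * np.pi, 64))
print(sax(series, 8, 4))   # 'cddcbaab': the sine wave rises, peaks, falls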

[Figure: SAX]

Representing a Time Series — in the Spirit of the Lebesgue Integral

Considering the distribution of the values of a time series is analogous to the way a Lebesgue integral is computed: look at how the values are distributed, and approximate the series accordingly. To capture the distribution of values, one can turn to the concept of entropy.

Entropy

Generally speaking, entropy is a good indicator of certainty versus uncertainty. For a discrete space, the entropy of a system can be written as:

\text{entropy}(X) = -\sum_{i=1}^{\infty}P\{x=x_{i}\}\ln(P\{x=x_{i}\}).

The larger a system's entropy, the more disordered the system; the smaller the entropy, the more deterministic the system.

Among the entropy features of time series there are several classic indicators, one of which is binned entropy.

Binned Entropy

Starting from the definition of entropy, we can bucket the values of the time series: for example, divide the interval [min, max] into ten equal subintervals, so that the values of the series are spread over these ten buckets. From this equal-width bucketing we obtain a probability distribution and compute its entropy; i.e. binned entropy is defined as:

\text{binned entropy}(X) = -\sum_{k=0}^{\min(maxbin, len(X))} p_{k}\ln(p_{k})\cdot 1_{(p_{k}>0)},

where p_{k} is the proportion (probability) of values of the series X falling into the k-th bucket, maxbin is the number of buckets, and len(X) is the length of the series X.

If the binned entropy of a time series is large, its values are distributed fairly uniformly over [min, max]; if it is small, its values are concentrated in some subrange.
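A minimal binned-entropy sketch following the definition above (ten equal-width buckets over [min, max]):

import numpy as np

def binned_entropy(x, maxbin=10):
    counts, _ = np.histogram(x, bins=maxbin)   # bucket the values
    p = counts / len(x)                        # probability of each bucket
    p = p[p > 0]                               # the 1_{(p_k > 0)} factor
    return -np.sum(p * np.log(p))

uniform = np.random.default_rng(0).uniform(size=1000)
constant = np.zeros(1000)
print(binned_entropy(uniform))    # large, close to ln(10) ~ 2.3
print(binned_entropy(constant))   # 0.0: all values in a single bucket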

Summary

In this article, starting from the Riemann and Lebesgue integrals, the author introduced their basic concepts, properties, and relationship, and then used the two integrals as lenses on the piecewise features and entropy features of time series. Future posts will cover more aspects of time series.

 

A Guide to Time Series Forecasting with ARIMA in Python 3

https://www.digitalocean.com/community/tutorials/a-guide-to-time-series-forecasting-with-arima-in-python-3#step-2-—-importing-packages-and-loading-data

 

Introduction

Time series provide the opportunity to forecast future values. Based on previous values, time series can be used to forecast trends in economics, weather, and capacity planning, to name a few. The specific properties of time-series data mean that specialized statistical methods are usually required.

In this tutorial, we will aim to produce reliable forecasts of time series. We will begin by introducing and discussing the concepts of autocorrelation, stationarity, and seasonality, and proceed to apply one of the most commonly used methods for time-series forecasting, known as ARIMA.

One of the methods available in Python to model and predict future points of a time series is known as SARIMAX, which stands for Seasonal AutoRegressive Integrated Moving Averages with eXogenous regressors. Here, we will primarily focus on the ARIMA component, which is used to fit time-series data to better understand and forecast future points in the time series.

Prerequisites

This guide will cover how to do time-series analysis on either a local desktop or a remote server. Working with large datasets can be memory intensive, so in either case, the computer will need at least 2GB of memory to perform some of the calculations in this guide.

To make the most of this tutorial, some familiarity with time series and statistics can be helpful.

For this tutorial, we’ll be using Jupyter Notebook to work with the data. If you do not have it already, you should follow our tutorial to install and set up Jupyter Notebook for Python 3.

Step 1 — Installing Packages

To set up our environment for time-series forecasting, let’s first move into our local programming environment or server-based programming environment:

  • cd environments
  • . my_env/bin/activate

From here, let's create a new directory for our project. We will call it ARIMA and then move into the directory. If you call the project a different name, be sure to substitute your name for ARIMA throughout the guide.

  • mkdir ARIMA
  • cd ARIMA

This tutorial will require the warnings, itertools, pandas, numpy, matplotlib and statsmodels libraries. The warnings and itertools libraries come included with the standard Python library set so you shouldn't need to install them.

Like with other Python packages, we can install these requirements with pip.
We can now install pandas, statsmodels, and the data plotting package matplotlib. Their dependencies will also be installed:

  • pip install pandas numpy statsmodels matplotlib

At this point, we’re now set up to start working with the installed packages.

Step 2 — Importing Packages and Loading Data

To begin working with our data, we will start up Jupyter Notebook:

  • jupyter notebook

To create a new notebook file, select New > Python 3 from the top right pull-down menu:

Create a new Python 3 notebook

This will open a notebook.

As is best practice, start by importing the libraries you will need at the top of your notebook:

import warnings
import itertools
import pandas as pd
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')

We have also defined a matplotlib style of fivethirtyeight for our plots.

We’ll be working with a dataset called “Atmospheric CO2 from Continuous Air Samples at Mauna Loa Observatory, Hawaii, U.S.A.,” which collected CO2 samples from March 1958 to December 2001. We can bring in this data as follows:

data = sm.datasets.co2.load_pandas()
y = data.data

Let’s preprocess our data a little bit before moving forward. Weekly data can be tricky to work with since it’s a briefer amount of time, so let’s use monthly averages instead. We’ll make the conversion with the resample function. For simplicity, we can also use the fillna() function to ensure that we have no missing values in our time series.

# The 'MS' string groups the data in buckets by start of the month
y = y['co2'].resample('MS').mean()

# The term bfill means that we use the value before filling in missing values
y = y.fillna(y.bfill())

print(y)
Output
co2
1958-03-01  316.100000
1958-04-01  317.200000
1958-05-01  317.433333
...
2001-11-01  369.375000
2001-12-01  371.020000

Let's explore this time series as a data visualization:

y.plot(figsize=(15, 6))
plt.show()

Figure 1: CO2 Levels Time Series

Some distinguishable patterns appear when we plot the data. The time series has an obvious seasonality pattern, as well as an overall increasing trend.

To learn more about time series pre-processing, please refer to “A Guide to Time Series Visualization with Python 3,” where the steps above are described in much more detail.

Now that we’ve converted and explored our data, let’s move on to time series forecasting with ARIMA.

Step 3 — The ARIMA Time Series Model

One of the most common methods used in time series forecasting is known as the ARIMA model, which stands for AutoRegressive Integrated Moving Average. ARIMA is a model that can be fitted to time series data in order to better understand or predict future points in the series.

There are three distinct integers (p, d, q) that are used to parametrize ARIMA models. Because of that, ARIMA models are denoted with the notation ARIMA(p, d, q). Together these three parameters account for seasonality, trend, and noise in datasets:

  • p is the auto-regressive part of the model. It allows us to incorporate the effect of past values into our model. Intuitively, this would be similar to stating that it is likely to be warm tomorrow if it has been warm the past 3 days.
  • d is the integrated part of the model. This includes terms in the model that incorporate the amount of differencing (i.e. the number of past time points to subtract from the current value) to apply to the time series. Intuitively, this would be similar to stating that it is likely to be same temperature tomorrow if the difference in temperature in the last three days has been very small.
  • q is the moving average part of the model. This allows us to set the error of our model as a linear combination of the error values observed at previous time points in the past.

When dealing with seasonal effects, we make use of the seasonal ARIMA, which is denoted as ARIMA(p,d,q)(P,D,Q)s. Here, (p, d, q) are the non-seasonal parameters described above, while (P, D, Q) follow the same definition but are applied to the seasonal component of the time series. The term s is the periodicity of the time series (4 for quarterly periods, 12 for yearly periods, etc.).

The seasonal ARIMA method can appear daunting because of the multiple tuning parameters involved. In the next section, we will describe how to automate the process of identifying the optimal set of parameters for the seasonal ARIMA time series model.

Step 4 — Parameter Selection for the ARIMA Time Series Model

When looking to fit time series data with a seasonal ARIMA model, our first goal is to find the values of ARIMA(p,d,q)(P,D,Q)s that optimize a metric of interest. There are many guidelines and best practices to achieve this goal, yet the correct parametrization of ARIMA models can be a painstaking manual process that requires domain expertise and time. Other statistical programming languages such as R provide automated ways to solve this issue, but those have yet to be ported over to Python. In this section, we will resolve this issue by writing Python code to programmatically select the optimal parameter values for our ARIMA(p,d,q)(P,D,Q)s time series model.

We will use a “grid search” to iteratively explore different combinations of parameters. For each combination of parameters, we fit a new seasonal ARIMA model with the SARIMAX() function from the statsmodels module and assess its overall quality. Once we have explored the entire landscape of parameters, our optimal set of parameters will be the one that yields the best performance for our criteria of interest. Let’s begin by generating the various combination of parameters that we wish to assess:

# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)

# Generate all different combinations of p, q and q triplets
pdq = list(itertools.product(p, d, q))

# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]

print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
Output
Examples of parameter combinations for Seasonal ARIMA...
SARIMAX: (0, 0, 1) x (0, 0, 1, 12)
SARIMAX: (0, 0, 1) x (0, 1, 0, 12)
SARIMAX: (0, 1, 0) x (0, 1, 1, 12)
SARIMAX: (0, 1, 0) x (1, 0, 0, 12)

We can now use the triplets of parameters defined above to automate the process of training and evaluating ARIMA models on different combinations. In Statistics and Machine Learning, this process is known as grid search (or hyperparameter optimization) for model selection.

When evaluating and comparing statistical models fitted with different parameters, each can be ranked against one another based on how well it fits the data or its ability to accurately predict future data points. We will use the AIC (Akaike Information Criterion) value, which is conveniently returned with ARIMA models fitted using statsmodels. The AIC measures how well a model fits the data while taking into account the overall complexity of the model. A model that fits the data very well while using lots of features will be assigned a larger AIC score than a model that uses fewer features to achieve the same goodness-of-fit. Therefore, we are interested in finding the model that yields the lowest AIC value.

The code chunk below iterates through combinations of parameters and uses the SARIMAX function from statsmodels to fit the corresponding Seasonal ARIMA model. Here, the order argument specifies the (p, d, q) parameters, while the seasonal_order argument specifies the (P, D, Q, S) seasonal component of the Seasonal ARIMA model. After fitting each SARIMAX() model, the code prints out its respective AIC score.

warnings.filterwarnings("ignore") # specify to ignore warning messages

for param in pdq:
    for param_seasonal in seasonal_pdq:
        try:
            mod = sm.tsa.statespace.SARIMAX(y,
                                            order=param,
                                            seasonal_order=param_seasonal,
                                            enforce_stationarity=False,
                                            enforce_invertibility=False)

            results = mod.fit()

            print('ARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
        except:
            continue

Because some parameter combinations may lead to numerical misspecifications, we explicitly disabled warning messages in order to avoid an overload of warning messages. These misspecifications can also lead to errors and throw an exception, so we make sure to catch these exceptions and ignore the parameter combinations that cause these issues.

The code above should yield the following results; this may take some time:

Output
SARIMAX(0, 0, 0)x(0, 0, 1, 12) - AIC:6787.3436240402125
SARIMAX(0, 0, 0)x(0, 1, 1, 12) - AIC:1596.711172764114
SARIMAX(0, 0, 0)x(1, 0, 0, 12) - AIC:1058.9388921320026
SARIMAX(0, 0, 0)x(1, 0, 1, 12) - AIC:1056.2878315690562
SARIMAX(0, 0, 0)x(1, 1, 0, 12) - AIC:1361.6578978064144
SARIMAX(0, 0, 0)x(1, 1, 1, 12) - AIC:1044.7647912940095
...
...
...
SARIMAX(1, 1, 1)x(1, 0, 0, 12) - AIC:576.8647112294245
SARIMAX(1, 1, 1)x(1, 0, 1, 12) - AIC:327.9049123596742
SARIMAX(1, 1, 1)x(1, 1, 0, 12) - AIC:444.12436865161305
SARIMAX(1, 1, 1)x(1, 1, 1, 12) - AIC:277.7801413828764

The output of our code suggests that SARIMAX(1, 1, 1)x(1, 1, 1, 12) yields the lowest AIC value of 277.78. We should therefore consider this to be the optimal option out of all the models we have considered.

Step 5 — Fitting an ARIMA Time Series Model

Using grid search, we have identified the set of parameters that produces the best fitting model to our time series data. We can proceed to analyze this particular model in more depth.

We’ll start by plugging the optimal parameter values into a new SARIMAX model:

mod = sm.tsa.statespace.SARIMAX(y,
                                order=(1, 1, 1),
                                seasonal_order=(1, 1, 1, 12),
                                enforce_stationarity=False,
                                enforce_invertibility=False)

results = mod.fit()

print(results.summary().tables[1])
Output
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1          0.3182      0.092      3.443      0.001       0.137       0.499
ma.L1         -0.6255      0.077     -8.165      0.000      -0.776      -0.475
ar.S.L12       0.0010      0.001      1.732      0.083      -0.000       0.002
ma.S.L12      -0.8769      0.026    -33.811      0.000      -0.928      -0.826
sigma2         0.0972      0.004     22.634      0.000       0.089       0.106
==============================================================================

The summary attribute that results from the output of SARIMAX returns a significant amount of information, but we’ll focus our attention on the table of coefficients. The coef column shows the weight (i.e. importance) of each feature and how each one impacts the time series. The P>|z| column informs us of the significance of each feature weight. Here, each weight has a p-value lower or close to 0.05, so it is reasonable to retain all of them in our model.

When fitting seasonal ARIMA models (and any other models for that matter), it is important to run model diagnostics to ensure that none of the assumptions made by the model have been violated. The plot_diagnostics object allows us to quickly generate model diagnostics and investigate for any unusual behavior.

results.plot_diagnostics(figsize=(15, 12))
plt.show()

Figure 2: Model Diagnostics

Our primary concern is to ensure that the residuals of our model are uncorrelated and normally distributed with zero-mean. If the seasonal ARIMA model does not satisfy these properties, it is a good indication that it can be further improved.

In this case, our model diagnostics suggests that the model residuals are normally distributed based on the following:

  • In the top right plot, we see that the red KDE line follows closely with the N(0,1) line (where N(0,1) is the standard notation for a normal distribution with mean 0 and standard deviation of 1). This is a good indication that the residuals are normally distributed.
  • The qq-plot on the bottom left shows that the ordered distribution of residuals (blue dots) follows the linear trend of the samples taken from a standard normal distribution with N(0, 1). Again, this is a strong indication that the residuals are normally distributed.
  • The residuals over time (top left plot) don’t display any obvious seasonality and appear to be white noise. This is confirmed by the autocorrelation (i.e. correlogram) plot on the bottom right, which shows that the time series residuals have low correlation with lagged versions of itself.

Those observations lead us to conclude that our model produces a satisfactory fit that could help us understand our time series data and forecast future values.

Although we have a satisfactory fit, some parameters of our seasonal ARIMA model could be changed to improve our model fit. For example, our grid search only considered a restricted set of parameter combinations, so we may find better models if we widened the grid search.

Step 6 — Validating Forecasts

We have obtained a model for our time series that can now be used to produce forecasts. We start by comparing predicted values to real values of the time series, which will help us understand the accuracy of our forecasts. The get_prediction() and conf_int() attributes allow us to obtain the values and associated confidence intervals for forecasts of the time series.

pred = results.get_prediction(start=pd.to_datetime('1998-01-01'), dynamic=False)
pred_ci = pred.conf_int()

The code above requires the forecasts to start at January 1998.

The dynamic=False argument ensures that we produce one-step ahead forecasts, meaning that forecasts at each point are generated using the full history up to that point.

We can plot the real and forecasted values of the CO2 time series to assess how well we did. Notice how we zoomed in on the end of the time series by slicing the date index.

ax = y['1990':].plot(label='observed')
pred.predicted_mean.plot(ax=ax, label='One-step ahead Forecast', alpha=.7)

ax.fill_between(pred_ci.index,
                pred_ci.iloc[:, 0],
                pred_ci.iloc[:, 1], color='k', alpha=.2)

ax.set_xlabel('Date')
ax.set_ylabel('CO2 Levels')
plt.legend()

plt.show()

Figure 3: CO2 Levels Static Forecast

Overall, our forecasts align with the true values very well, showing an overall increasing trend.

It is also useful to quantify the accuracy of our forecasts. We will use the MSE (Mean Squared Error), which summarizes the average error of our forecasts. For each predicted value, we compute its distance to the true value and square the result. The results need to be squared so that positive/negative differences do not cancel each other out when we compute the overall mean.

y_forecasted = pred.predicted_mean
y_truth = y['1998-01-01':]

# Compute the mean square error
mse = ((y_forecasted - y_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
Output
The Mean Squared Error of our forecasts is 0.07

The MSE of our one-step ahead forecasts yields a value of 0.07, which is very low as it is close to 0. An MSE of 0 would mean that the estimator is predicting observations of the parameter with perfect accuracy, which would be an ideal scenario but is not typically possible.

However, a better representation of our true predictive power can be obtained using dynamic forecasts. In this case, we only use information from the time series up to a certain point, and after that, forecasts are generated using values from previous forecasted time points.

In the code chunk below, we specify to start computing the dynamic forecasts and confidence intervals from January 1998 onwards.

pred_dynamic = results.get_prediction(start=pd.to_datetime('1998-01-01'), dynamic=True, full_results=True)
pred_dynamic_ci = pred_dynamic.conf_int()

Plotting the observed and forecasted values of the time series, we see that the overall forecasts are accurate even when using dynamic forecasts. All forecasted values (red line) match pretty closely to the ground truth (blue line), and are well within the confidence intervals of our forecast.

ax = y['1990':].plot(label='observed', figsize=(20, 15))
pred_dynamic.predicted_mean.plot(label='Dynamic Forecast', ax=ax)

ax.fill_between(pred_dynamic_ci.index,
                pred_dynamic_ci.iloc[:, 0],
                pred_dynamic_ci.iloc[:, 1], color='k', alpha=.25)

ax.fill_betweenx(ax.get_ylim(), pd.to_datetime('1998-01-01'), y.index[-1],
                 alpha=.1, zorder=-1)

ax.set_xlabel('Date')
ax.set_ylabel('CO2 Levels')

plt.legend()
plt.show()

Figure 4: CO2 Levels Dynamic Forecast

Once again, we quantify the predictive performance of our forecasts by computing the MSE:

# Extract the predicted and true values of our time series
y_forecasted = pred_dynamic.predicted_mean
y_truth = y['1998-01-01':]

# Compute the mean square error
mse = ((y_forecasted - y_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
Output
The Mean Squared Error of our forecasts is 1.01

The predicted values obtained from the dynamic forecasts yield an MSE of 1.01. This is slightly higher than the one-step ahead, which is to be expected given that we are relying on less historical data from the time series.

Both the one-step ahead and dynamic forecasts confirm that this time series model is valid. However, much of the interest around time series forecasting is the ability to forecast future values way ahead in time.

Step 7 — Producing and Visualizing Forecasts

In the final step of this tutorial, we describe how to leverage our seasonal ARIMA time series model to forecast future values. The get_forecast() attribute of our time series object can compute forecasted values for a specified number of steps ahead.

# Get forecast 500 steps ahead in future
pred_uc = results.get_forecast(steps=500)

# Get confidence intervals of forecasts
pred_ci = pred_uc.conf_int()

We can use the output of this code to plot the time series and forecasts of its future values.

ax = y.plot(label='observed', figsize=(20, 15))
pred_uc.predicted_mean.plot(ax=ax, label='Forecast')
ax.fill_between(pred_ci.index,
                pred_ci.iloc[:, 0],
                pred_ci.iloc[:, 1], color='k', alpha=.25)
ax.set_xlabel('Date')
ax.set_ylabel('CO2 Levels')

plt.legend()
plt.show()

Figure 5: Time Series and Forecast of Future Values

Both the forecasts and associated confidence interval that we have generated can now be used to further understand the time series and foresee what to expect. Our forecasts show that the time series is expected to continue increasing at a steady pace.

As we forecast further out into the future, it is natural for us to become less confident in our values. This is reflected by the confidence intervals generated by our model, which grow larger as we move further out into the future.

Conclusion

In this tutorial, we described how to implement a seasonal ARIMA model in Python. We made extensive use of the pandas and statsmodels libraries and showed how to run model diagnostics, as well as how to produce forecasts of the CO2 time series.

Here are a few other things you could try:

  • Change the start date of your dynamic forecasts to see how this affects the overall quality of your forecasts.
  • Try more combinations of parameters to see if you can improve the goodness-of-fit of your model.
  • Select a different metric to select the best model. For example, we used the AIC measure to find the best model, but you could seek to optimize the out-of-sample mean square error instead.

For more practice, you could also try to load another time series dataset to produce your own forecasts.

9 Comments


    Hi! Thanks for sharing this.
    I was trying to forecast hourly values. The seasonality to capture should be similar to the 168th previous value. This means, Friday 9PM of this week should be similar to Friday 9PM of the past week.
    That is why I decided to use 168 seasonality (24*7) but it takes very long and consumes lots of memory. I've tried several times using 7 and 24 seasonality but it wasn't doing it well when forecasting (previous fitting with dynamic set to False was working perfectly). Do you have any advice for this situation? Thanks in advance.


    Thanks for the Guide.

    I tried this with my own data. And at the model result summary part, I got ma.L1 having a p-value over 0.88. So, I definitely want to get rid of this feature from the model. But how do I do that? How do I remove a feature from the model?


      Hi!
      Thanks for taking the time to read through this tutorial! Yes, a p-value of 0.88 would suggest that your ma.L1 feature is not very informative. The simplest way to start would be to try and remove the MA features from your model. You can achieve this by refitting your time-series models while explicitly setting the Q parameter to zero; this will ensure that no MA components are used when you fit your model.


    very nice tutorial. thanks! I am new to the ARIMA model, I want to ask you some questions.
    1) I found you use all the historical data to fit an ARIMA Time Series Model, and use part of all the historical data to validate the model, with code: pred = results.get_prediction(start=pd.to_datetime('1998-01-01'), dynamic=False); pred_ci = pred.conf_int()
    But why is the data used to validate the model part of the data used to fit the model before? You know in machine learning, the train data and test data are split, I don't know why it is different here.
    2) how about data stationarity, could you tell me why you set enforce_stationarity to false.

    3) how about daily data rather than monthly (averaged) data for fitting the model to predict, and how about weekly data for prediction, could you tell me how to do it

    thanks!


    I got an error on line:
    pred = results.get_prediction(start=pd.to_datetime('1998-02-01'), dynamic=False)

    File "pandas\_libs\tslib.pyx", line 1080, in pandas._libs.tslib.Timestamp.__richcmp__ (pandas\_libs\tslib.c:20281)
    TypeError: Cannot compare type 'Timestamp' with type 'int'

    How can I solve it?


    Hi.
    Thank you so much for your wonderful sharing. Is there any way to catch the minimum value of AIC automatically?
    It would be wonderful if the best set for ARIMAX were stored in an external variable and passed to the next step.
    Is it possible? how?
    Thank you


      Use this code

      warnings.filterwarnings("ignore") # specify to ignore warning messages
      AIC_list = pd.DataFrame({}, columns=['pram','param_seasonal','AIC'])
      for param in pdq:
          for param_seasonal in seasonal_pdq:
              try:
                  mod = sm.tsa.statespace.SARIMAX(y,
                                                  order=param,
                                                  seasonal_order=param_seasonal,
                                                  enforce_stationarity=False,
                                                  enforce_invertibility=False)
      
                  results = mod.fit()
      
                  print('ARIMA{}x{} - AIC:{}'.format(param, param_seasonal, results.aic))
                  temp = pd.DataFrame([[ param ,  param_seasonal , results.aic ]], columns=['pram','param_seasonal','AIC'])
                  AIC_list = AIC_list.append( temp, ignore_index=True)  # unlike a plain list append, DataFrame.append returns a new frame, so reassign it
                  del temp
      
              except:
                  continue
      
      m = np.amin(AIC_list['AIC'].values) # Find minimum value in AIC
      l = AIC_list['AIC'].tolist().index(m) # Find index number for lowest AIC
      Min_AIC_list = AIC_list.iloc[l,:]
      
      print("### Min_AIC_list ### \n{}".format(Min_AIC_list))
      
      mod = sm.tsa.statespace.SARIMAX(y,
                                      order=Min_AIC_list['pram'],
                                      seasonal_order=Min_AIC_list['pram_seasonal'],
                                      enforce_stationarity=False,
                                      enforce_invertibility=False)
      
      results = mod.fit()
      
      print(results.summary().tables[1])
      
      results.plot_diagnostics(figsize=(15, 12))
      plt.show()
      
      

      Revised code (sorry, in the previous code I was missing one thing.)

      
      warnings.filterwarnings("ignore") # specify to ignore warning messages
      AIC_list = pd.DataFrame({}, columns=['param','param_seasonal','AIC'])
      for param in pdq:
          for param_seasonal in seasonal_pdq:
              try:
                  mod = sm.tsa.statespace.SARIMAX(y,
                                                  order=param,
                                                  seasonal_order=param_seasonal,
                                                  enforce_stationarity=False,
                                                  enforce_invertibility=False)
      
                  results = mod.fit()
      
                  print('ARIMA{}x{} - AIC:{}'.format(param, param_seasonal, results.aic))
                  temp = pd.DataFrame([[ param ,  param_seasonal , results.aic ]], columns=['param','param_seasonal','AIC'])
                  AIC_list = AIC_list.append( temp, ignore_index=True)  # unlike a plain list append, DataFrame.append returns a new frame, so reassign it
                  del temp
      
              except:
                  continue
      
      
      m = np.amin(AIC_list['AIC'].values) # Find minimum value in AIC
      l = AIC_list['AIC'].tolist().index(m) # Find index number for lowest AIC
      Min_AIC_list = AIC_list.iloc[l,:]
      
      
      
      mod = sm.tsa.statespace.SARIMAX(y,
                                      order=Min_AIC_list['param'],
                                      seasonal_order=Min_AIC_list['param_seasonal'],
                                      enforce_stationarity=False,
                                      enforce_invertibility=False)
      results = mod.fit()
      
      print("### Min_AIC_list ### \n{}".format(Min_AIC_list))
      
      print(results.summary().tables[1])
      
      results.plot_diagnostics(figsize=(15, 12))
      plt.show()
      
      

Correlating Time Series with Events

The paper "Correlating Events with Time Series for Incident Diagnosis" is work by Microsoft from 2014, published at KDD.

It proposes an unsupervised, statistically discriminative algorithm that can detect the correlation between events (E) and a time series (S), as well as the monotonic direction (rise or fall) of the change in the time series (S). The events chosen in the paper are CPU (Memory, Disk) Intensive Program and Query Alert; the chosen time series are CPU (Memory) Usage and Disk Transfer Rate. The time series have the property that their value ranges all lie in [0,1].

[Table 1]

Example: the time series is CPU Usage, and the events are a Disk Intensive task and a CPU Intensive task.

[Figure 1]

Mining the correlation consists of three parts:

(1) Existence of Dependency: whether a correlation exists between the event (E) and the time series (S).

(2) Temporal Order of Dependency: whether the event (E) causes the change in the time series (S), or the time series (S) causes the event (E) to occur.

(3) Monotonic Effect of Dependency: judging whether the time series (S) jumps up or drops down.

Basic concepts:

Given an event sequence (E), the timestamps of the events are T_{E}=(t_{1},\cdots,t_{n}), where n is the number of events. The time series (S) is written S=(s_{1},\cdots,s_{m}), where m is the length of the series. The timestamps of the series can be taken as an arithmetic progression with common difference \tau, i.e. T_{S}=(t(s_{1}),\cdots,t(s_{m})) with t(s_{i}) =t(s_{i-1})+\tau.

Let e_{i} denote an event, \ell_{k}^{rear}(S,e_{i}) the length-k subsequence of S just after event e_{i}, and \ell_{k}^{front}(S,e_{i}) the length-k subsequence of S just before e_{i}. If the event E and the time series S are correlated, then

\Gamma^{front}=\{\ell_{k}^{front}(S,e_{i}), i=1,\cdots,n\} and

\Gamma^{rear}=\{\ell_{k}^{rear}(S,e_{i}),i=1,\cdots,n\} should differ in distribution.

Definition 1: the event sequence E and the time series S are correlated with S->E if and only if \Gamma^{front}=\{\ell_{k}^{front}(S,e_{i}), i=1,\cdots,n\} and randomly chosen subsequences have different distributions.

Definition 2: the event sequence E and the time series S are correlated with E->S if and only if \Gamma^{rear}=\{\ell_{k}^{rear}(S,e_{i}),i=1,\cdots,n\} and randomly chosen subsequences have different distributions, while \Gamma^{front}=\{\ell_{k}^{front}(S,e_{i}), i=1,\cdots,n\} and randomly chosen subsequences have the same distribution.

Definition 3: if the event sequence E and the time series S are correlated, then S->E or E->S.

Definition 4: if E->S (or S->E) and the time series increases relative to before E, write E\stackrel{+}{\longrightarrow} S (or S\stackrel{+}{\longrightarrow} E); if E->S (or S->E) and the time series decreases relative to before E, write E\stackrel{-}{\longrightarrow} S (or S\stackrel{-}{\longrightarrow} E).

Methodology:

Step 1: the nearest-neighbor method (similar to kNN)

When computing distances between time series, the DTW or DTW-D algorithms outperform the L1 or L2 distances.

Take \Gamma^{front} as the example: \Gamma^{front}=\{\ell_{k}^{front}(S,e_{i}), i=1,\cdots,n\}, and \Theta =\{\theta_{1},\cdots,\theta_{\tilde{n}}\} is a randomly chosen set of subsequences. Let Z=\Gamma \cup \Theta with elements labeled Z_{1},\cdots,Z_{p}, where p=n+\tilde{n}, Z_{i}=\ell_{k}^{front}(S,e_{i}) for 1\leq i\leq n, and Z_{i}=\theta_{i-n} for n+1\leq i\leq p. We write A=A_{1}\cup A_{2}, where A_{1}=\Gamma^{front} and A_{2}=\Theta=\{\theta_{1},\cdots,\theta_{\tilde{n}}\} is the randomly chosen set.

For the set A and x\in A, NN_{r}(x,A) denotes the r-th closest element to x within A-\{x\}. For two disjoint sets A_{1} and A_{2}, define the function:

I_{r}(x,A_{1},A_{2})=1 when x\in A_{i} and NN_{r}(x,A)\in A_{i} for the same i,

I_{r}(x,A_{1},A_{2})=0 otherwise.

The function I_{r}(x,A_{1},A_{2}) indicates whether x and the r-th nearest neighbor of x lie in the same subset.

Define

T_{r,p}=\frac{1}{pr}\sum_{i=1}^{p}\sum_{j=1}^{r}I_{j}(x_{i},A_{1},A_{2}),

where p=n+\tilde{n} is the total number of samples and x_{i} denotes the i-th element of the set A. Intuitively, if T_{r,p} is small, the two sample sets A_{1} and A_{2} are well mixed, which indicates nothing unusual; if T_{r,p} is large, the two sets are well separated: many elements have their nearest neighbors concentrated in their own subset, so A_{1} is distinguishable from A_{2}.

According to the cited literature, when p is sufficiently large, (pr)^{\frac{1}{2}}(T_{r,p}-\mu_{r})/\sigma_{r} follows a standard Gaussian distribution with parameters \mu_{r}=(\lambda_{1})^{2}+(\lambda_{2})^{2} and \sigma_{r}^{2}=\lambda_{1}\lambda_{2}+4\lambda_{1}^{2}\lambda_{2}^{2}, where

\lambda_{1}=n/p=n/(n+\tilde{n}), \lambda_{2}=\tilde{n}/(n+\tilde{n}).

By the usual Gaussian test, \Gamma^{front} and \Theta are significantly different when (pr)^{\frac{1}{2}}(T_{r,p}-\mu_{r})/\sigma_{r}>\alpha, where the parameter can be set as follows:

\alpha = 1.96 for P=0.025

\alpha = 2.58 for P=0.001

If \Gamma^{front} deviates significantly from \Theta, then \Gamma^{front} is flagged as anomalous. Likewise, if \Gamma^{rear} deviates significantly from \Theta, then \Gamma^{rear} is flagged as anomalous.
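As a concrete illustration, here is a minimal Python sketch of this nearest-neighbor test. It follows the formulas above but makes simplifying assumptions: subsequences are compared with plain Euclidean distance rather than the DTW/DTW-D recommended above, and the function name nn_test and its arguments are illustrative rather than from the paper.

    import numpy as np

    def nn_test(gamma, theta, r, alpha=1.96):
        """Return True if the event-aligned subsequences `gamma` differ
        significantly from the randomly sampled subsequences `theta`.
        Euclidean distance is used for simplicity; the paper prefers DTW."""
        Z = np.asarray(list(gamma) + list(theta), dtype=float)  # Z = Gamma union Theta
        n, n_tilde = len(gamma), len(theta)
        p = n + n_tilde
        labels = np.array([1] * n + [2] * n_tilde)  # 1: A1 = Gamma, 2: A2 = Theta

        # Pairwise distances between all p subsequences.
        dist = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
        np.fill_diagonal(dist, np.inf)  # NN_r(x, A) excludes x itself

        # I_j(x_i, A1, A2): is the j-th nearest neighbor in the same subset?
        neighbors = np.argsort(dist, axis=1)[:, :r]
        same_subset = labels[neighbors] == labels[:, None]

        # T_{r,p} = (1 / (p r)) * sum_i sum_j I_j(x_i, A1, A2)
        T_rp = same_subset.mean()

        # Gaussian approximation of T_{r,p} for large p.
        lam1, lam2 = n / p, n_tilde / p
        mu_r = lam1 ** 2 + lam2 ** 2
        sigma_r = np.sqrt(lam1 * lam2 + 4 * lam1 ** 2 * lam2 ** 2)
        return np.sqrt(p * r) * (T_rp - mu_r) / sigma_r > alpha

With \Theta sampled randomly from S and r set roughly to \ln(p) (see the parameter choices below), nn_test(Gamma_front, Theta, r) and nn_test(Gamma_rear, Theta, r) give the two anomaly flags used in the next step.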

 

Step 2: Mining Existence and Temporal Order

(Figure 3)

If the front subsequences \Gamma^{front} deviate significantly from the randomly sampled subsequences \Theta, the change in the time series leads to the event: S\rightarrow E.

If the rear subsequences \Gamma^{rear} deviate significantly from the randomly sampled subsequences \Theta, the event leads to the change in the time series: E\rightarrow S.

In Figure 3, the CPU Intensive Program drives the change in CPU Usage, and the change in CPU Usage in turn triggers the SQL Query Alert.

 

Step 3: Mining Effect Type

We now need to decide whether the time series jumps up or drops down around an event; for this we introduce the t_{score}.

For \Gamma^{front}=\{\ell_{k}^{front}(S,e_{i}), i=1,\cdots,n\} and \Gamma^{rear}=\{\ell_{k}^{rear}(S,e_{i}), i=1,\cdots,n\}, where n is the number of events in E, the t_{score} is defined as:

t_{score}=\frac{\mu_{\Gamma^{front}} - \mu_{\Gamma^{rear}}}{\sqrt{\frac{\sigma_{\Gamma^{front}}^{2}+\sigma_{\Gamma^{rear}}^{2}}{n}}}.

Then, if t_{score}>\alpha, we conclude E\stackrel{-}{\longrightarrow}S or S\stackrel{-}{\longrightarrow} E, since a positive score means the mean before the events exceeds the mean after them; if t_{score}<-\alpha, we conclude E\stackrel{+}{\longrightarrow}S or S\stackrel{+}{\longrightarrow} E.

where the parameter can be set as:

\alpha = 1.96 for P=0.025

\alpha = 2.58 for P=0.001
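A matching Python sketch of the t_{score} decision, under the assumption (illustrative, not spelled out in the notes above) that \mu and \sigma^{2} are the mean and variance over all values in each set of windows:

    import numpy as np

    def effect_type(gamma_front, gamma_rear, alpha=1.96):
        """Classify the monotonic effect from the front/rear subsequences.
        Both arguments are arrays of shape (n, k): n events, length-k windows."""
        front = np.asarray(gamma_front, dtype=float)
        rear = np.asarray(gamma_rear, dtype=float)
        n = front.shape[0]  # number of events in E
        t_score = (front.mean() - rear.mean()) / np.sqrt(
            (front.var(ddof=1) + rear.var(ddof=1)) / n)
        if t_score > alpha:
            return "decrease"  # E -(-)-> S or S -(-)-> E: series drops after the event
        if t_score < -alpha:
            return "increase"  # E -(+)-> S or S -(+)-> E: series rises after the event
        return "none"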

 

Algorithm overview:

(Algorithm 1)

Lines 5-6 compute the dependency: D_{r} = True means \Gamma^{rear} is anomalous, otherwise it is normal; D_{f} = True means \Gamma^{front} is anomalous, otherwise it is normal.

Lines 7-13 handle the case E\rightarrow S: \Gamma^{rear} is anomalous while \Gamma^{front} is normal, so the event causes the change in the time series. These lines also compute t_{score} to decide whether the change is a significant increase or decrease.

Lines 14-20 handle the case S\rightarrow E: \Gamma^{front} is anomalous, so the change in the time series triggers the event. These lines likewise compute t_{score} to decide whether the change is a significant increase or decrease.

Parameters:

the subsequence length k can be set to the length at which performance first peaks;

the number of nearest neighbors is r=\ln(p), where p is the total number of samples.

(Figure 5)

(Figure 6)

Other algorithms:

(1)Pearson Correlation

(2)J-Measure Correlation

 

 

How to Convert a Time Series to a Supervised Learning Problem in Python

https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/

Machine learning methods like deep learning can be used for time series forecasting.

Before machine learning can be used, time series forecasting problems must be re-framed as supervised learning problems: from a single sequence to pairs of input and output sequences.

In this tutorial, you will discover how to transform univariate and multivariate time series forecasting problems into supervised learning problems for use with machine learning algorithms.

After completing this tutorial, you will know:

  • How to develop a function to transform a time series dataset into a supervised learning dataset.
  • How to transform univariate time series data for machine learning.
  • How to transform multivariate time series data for machine learning.

Let’s get started.


Time Series vs Supervised Learning

Before we get started, let’s take a moment to better understand the form of time series and supervised learning data.

A time series is a sequence of numbers that are ordered by a time index. This can be thought of as a list or column of ordered values.

For example:
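An illustrative listing (stand-in values, one observation per time step):

    0
    1
    2
    3
    4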

A supervised learning problem is comprised of input patterns (X) and output patterns (y), such that an algorithm can learn how to predict the output patterns from the input patterns.

For example:
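An illustrative listing (stand-in values), with one input column X and one output column y per row:

    X, y
    0, 1
    1, 2
    2, 3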

For more on this topic, see the post:

Pandas shift() Function

A key function to help transform time series data into a supervised learning problem is the Pandas shift() function.

Given a DataFrame, the shift() function can be used to create copies of columns that are pushed forward (rows of NaN values added to the front) or pulled back (rows of NaN values added to the end).

This is the behavior required to create columns of lag observations as well as columns of forecast observations for a time series dataset in a supervised learning format.

Let’s look at some examples of the shift function in action.

We can define a mock time series dataset as a sequence of 10 numbers, in this case a single column in a DataFrame as follows:
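A minimal sketch of such a dataset using pandas:

    from pandas import DataFrame

    # a mock univariate time series: the integers 0..9 in a single column
    df = DataFrame()
    df['t'] = [x for x in range(10)]
    print(df)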

Running the example prints the time series data with the row indices for each observation.

We can shift all the observations down by one time step by inserting one new row at the top. Because the new row has no data, we can use NaN to represent “no data”.

The shift function can do this for us and we can insert this shifted column next to our original series.
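A sketch of that positive shift:

    from pandas import DataFrame

    df = DataFrame()
    df['t'] = [x for x in range(10)]
    # push all observations down one step; the first row becomes NaN
    df['t-1'] = df['t'].shift(1)
    print(df)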

Running the example gives us two columns in the dataset. The first with the original observations and a new shifted column.

We can see that shifting the series forward one time step gives us a primitive supervised learning problem, although with X and y in the wrong order. Ignore the column of row labels. The first row would have to be discarded because of the NaN value. The second row shows the input value of 0.0 in the second column (input or X) and the value of 1 in the first column (output or y).

We can see how, by repeating this process with shifts of 2, 3, and more, we could create long input sequences (X) that can be used to forecast an output value (y).

The shift operator can also accept a negative integer value. This has the effect of pulling the observations up by inserting new rows at the end. Below is an example:
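A sketch of the negative shift:

    from pandas import DataFrame

    df = DataFrame()
    df['t'] = [x for x in range(10)]
    # pull all observations up one step; the last row becomes NaN
    df['t+1'] = df['t'].shift(-1)
    print(df)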

Running the example shows a new column with a NaN value as the last value.

We can see that the original column can be taken as an input (X) and the new, shifted column, which holds the forecast, as an output value (y). That is, the input value of 0 can be used to forecast the output value of 1.

Technically, in time series forecasting terminology the current time (t) and future times (t+1, t+n) are forecast times and past observations (t-1, t-n) are used to make forecasts.

We can see how positive and negative shifts can be used to create a new DataFrame from a time series with sequences of input and output patterns for a supervised learning problem.

This permits not only classical X -> y prediction, but also X -> Y where both input and output can be sequences.

Further, the shift function also works on so-called multivariate time series problems. That is where instead of having one set of observations for a time series, we have multiple (e.g. temperature and pressure). All variates in the time series can be shifted forward or backward to create multivariate input and output sequences. We will explore this more later in the tutorial.

The series_to_supervised() Function

We can use the shift() function in Pandas to automatically create new framings of time series problems given the desired length of input and output sequences.

This would be a useful tool as it would allow us to explore different framings of a time series problem with machine learning algorithms to see which might result in better performing models.

In this section, we will define a new Python function named series_to_supervised() that takes a univariate or multivariate time series and frames it as a supervised learning dataset.

The function takes four arguments:

  • data: Sequence of observations as a list or 2D NumPy array. Required.
  • n_in: Number of lag observations as input (X). Values may be between [1..len(data)]. Optional. Defaults to 1.
  • n_out: Number of observations as output (y). Values may be between [0..len(data)-1]. Optional. Defaults to 1.
  • dropnan: Boolean whether or not to drop rows with NaN values. Optional. Defaults to True.

The function returns a single value:

  • return: Pandas DataFrame of series framed for supervised learning.

The new dataset is constructed as a DataFrame, with each column suitably named both by variable number and time step. This allows you to design a variety of different time step sequence type forecasting problems from a given univariate or multivariate time series.

Once the DataFrame is returned, you can decide how to split the rows of the returned DataFrame into X and y components for supervised learning any way you wish.

The function is defined with default parameters so that if you call it with just your data, it will construct a DataFrame with t-1 as X and t as y.

The function is confirmed to be compatible with Python 2 and Python 3.

The complete function is listed below, including function comments.
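The following is a reconstruction consistent with the description above: the argument names, defaults, and the var%d(t-%d) / var%d(t) / var%d(t+%d) column-naming convention match the documentation, while the internals are one straightforward way to implement it with shift() and concat().

    from pandas import DataFrame, concat

    def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
        """
        Frame a time series as a supervised learning dataset.
        Arguments:
            data: Sequence of observations as a list or 2D NumPy array.
            n_in: Number of lag observations as input (X).
            n_out: Number of observations as output (y).
            dropnan: Boolean whether or not to drop rows with NaN values.
        Returns:
            Pandas DataFrame of series framed for supervised learning.
        """
        n_vars = 1 if type(data) is list else data.shape[1]
        df = DataFrame(data)
        cols, names = list(), list()
        # input sequence (t-n, ..., t-1)
        for i in range(n_in, 0, -1):
            cols.append(df.shift(i))
            names += [('var%d(t-%d)' % (j + 1, i)) for j in range(n_vars)]
        # forecast sequence (t, t+1, ..., t+n)
        for i in range(0, n_out):
            cols.append(df.shift(-i))
            if i == 0:
                names += [('var%d(t)' % (j + 1)) for j in range(n_vars)]
            else:
                names += [('var%d(t+%d)' % (j + 1, i)) for j in range(n_vars)]
        # put it all together
        agg = concat(cols, axis=1)
        agg.columns = names
        # drop rows with NaN values
        if dropnan:
            agg.dropna(inplace=True)
        return agg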


Now that we have the whole function, we can explore how it may be used.

One-Step Univariate Forecasting

It is standard practice in time series forecasting to use lagged observations (e.g. t-1) as input variables to forecast the current time step (t).

This is called one-step forecasting.

The example below demonstrates using one lag time step (t-1) to predict the current time step (t).
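A sketch using the reconstructed function on a mock series of 10 numbers:

    # assumes series_to_supervised() as defined above
    values = [x for x in range(10)]
    data = series_to_supervised(values)
    print(data)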

Running the example prints the output of the reframed time series.

We can see that the observations are named “var1” and that the input observation is suitably named (t-1) and the output time step is named (t).

We can also see that rows with NaN values have been automatically removed from the DataFrame.

We can repeat this example with an input sequence of arbitrary length, such as 3, by specifying the length of the input sequence as an argument, e.g. series_to_supervised(values, 3).

The complete example is listed below.
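Again assuming series_to_supervised() as reconstructed above:

    # frame the series with 3 lag observations as input
    values = [x for x in range(10)]
    data = series_to_supervised(values, 3)
    print(data)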

Again, running the example prints the reframed series. We can see that the input sequence is in the correct left-to-right order with the output variable to be predicted on the far right.

Multi-Step or Sequence Forecasting

A different type of forecasting problem is using past observations to forecast a sequence of future observations.

This may be called sequence forecasting or multi-step forecasting.

We can frame a time series for sequence forecasting by specifying another argument. For example, we could frame a forecast problem with an input sequence of 2 past observations to forecast 2 future observations with series_to_supervised(values, 2, 2).

The complete example is listed below:
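Again assuming series_to_supervised() as reconstructed above:

    # 2 lag observations as input, 2 future observations as output
    values = [x for x in range(10)]
    data = series_to_supervised(values, 2, 2)
    print(data)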

Running the example shows the differentiation of input (t-n) and output (t+n) variables with the current observation (t) considered an output.

Multivariate Forecasting

Another important type of time series is called multivariate time series.

This is where we may have observations of multiple different measures and an interest in forecasting one or more of them.

For example, we may have two sets of time series observations obs1 and obs2 and we wish to forecast one or both of these.

We can call series_to_supervised() in exactly the same way.

For example:
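A sketch with two parallel mock series (the column names obs1 and obs2 mirror the text; the values are illustrative), assuming series_to_supervised() as reconstructed above:

    from pandas import DataFrame

    raw = DataFrame()
    raw['obs1'] = [x for x in range(10)]
    raw['obs2'] = [x for x in range(50, 60)]
    values = raw.values                     # 2D array: one column per variate
    data = series_to_supervised(values)
    print(data)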

Running the example prints the new framing of the data, showing an input pattern with one time step for both variables and an output pattern of one time step for both variables.

Again, depending on the specifics of the problem, the division of columns into X and Y components can be chosen arbitrarily, such as if the current observation of var1 was also provided as input and only var2 was to be predicted.

You can see how this may be easily used for sequence forecasting with multivariate time series by specifying the length of the input and output sequences as above.

For example, below is a reframing with 1 time step as input and 2 time steps as forecast sequence.
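Under the same assumptions:

    from pandas import DataFrame

    # assumes series_to_supervised() as defined above
    raw = DataFrame()
    raw['obs1'] = [x for x in range(10)]
    raw['obs2'] = [x for x in range(50, 60)]
    values = raw.values
    data = series_to_supervised(values, 1, 2)
    print(data)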

Running the example shows the large reframed DataFrame.

Experiment with your own dataset and try multiple different framings to see what works best.

Summary

In this tutorial, you discovered how to reframe time series datasets as supervised learning problems with Python.

Specifically, you learned:

  • About the Pandas shift() function and how it can be used to automatically define supervised learning datasets from time series data.
  • How to reframe a univariate time series into one-step and multi-step supervised learning problems.
  • How to reframe multivariate time series into one-step and multi-step supervised learning problems.

