Machine learning is kicking up quite a storm in security, but there's a bug in it

Security is one of the fields that machine learning is now pushing into in a big way.

| Applying machine learning to security: the bosses are itching to try

If you attended the 2016 RSA Conference in person, you will have noticed that hardly any company could talk about its security products without mentioning machine learning. Why is that?

To outsiders, machine learning may look like magic that can solve every security problem: pour a pile of unlabeled data into a system that can learn, and it will pick out patterns even human experts cannot see, learn new behaviors, and adapt to emerging threats. Better still, you do not even have to write the rules yourself, because the system takes care of all of that automatically.

If that were really the case, machine learning would deserve to be this year's headline act. The irony is that while everyone is making a great show of staking a claim in this field, the people who truly understand what machine learning is, or what it can actually be used for, are few and far between. Unsurprisingly, in such a climate machine learning is mostly being misused, especially in security.

| What is the right way to use machine learning to solve security problems?

Applying machine learning to security usually involves one technique in particular: anomaly detection, which identifies the items that do not match an expected pattern or data set. Vendors should note that the technique only works under certain conditions, though apparently they have not noticed the mistake they are making: they will tell you that after analyzing your company's network traffic, machine learning can dig out the hackers lurking inside your network. In reality, machine learning simply cannot do that, and the moment you hear such a pitch you should treat the vendor with a healthy dose of skepticism.
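
To make the term concrete, here is a minimal sketch of anomaly detection in Python using scikit-learn's IsolationForest on a few invented network-flow features. It illustrates the technique itself, not any vendor's product, and it hints at the limitation discussed below: the model only knows what deviates from the data it was shown.

    # A toy illustration of anomaly detection on network-flow features.
    # The features and data are invented; real traffic is far messier,
    # which is exactly why this alone can't "find the hackers".
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Simulated "normal" flows: [bytes sent, bytes received, duration in seconds]
    normal = rng.normal(loc=[5_000, 20_000, 2.0],
                        scale=[1_000, 5_000, 0.5],
                        size=(1000, 3))

    # A few unusual flows: huge uploads with very long durations
    odd = np.array([[900_000, 1_000, 600.0],
                    [750_000, 2_000, 450.0]])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    print(model.predict(odd))         # -1 means "anomalous"
    print(model.predict(normal[:5]))  # mostly +1, i.e. "normal"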

So when does it actually work? Machine learning is effective only when a low-dimensional problem is paired with high-quality labeled data, and unfortunately that is rarely what enterprises have in practice. To detect a new kind of attack, you need clear, well-labeled examples of that attack. Put differently, without a thorough understanding of what normal network behavior looks like, machine learning cannot uncover the hackers for you. And hackers are cunning: they will do everything they can to make their activity blend in seamlessly.
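
By contrast, the setting described above as workable, a handful of features plus labels that are known up front, looks like the sketch below (again with invented data); the point is that the labels have to exist before any learning can happen.

    # What "works": a low-dimensional problem with labeled examples.
    # The labels (0 = benign, 1 = known attack) are given up front, which is
    # precisely what enterprises usually don't have for brand-new attacks.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Two invented features per login attempt: failed attempts in the last
    # hour, and number of distinct countries the account logged in from.
    benign = rng.normal([1.0, 1.0], [1.0, 0.3], size=(500, 2))
    attack = rng.normal([12.0, 4.0], [3.0, 1.0], size=(50, 2))

    X = np.vstack([benign, attack])
    y = np.array([0] * len(benign) + [1] * len(attack))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=1, stratify=y)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))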

| Where do machine learning and anomaly detection deliver the most value?

Where machine learning and anomaly detection are genuinely useful is in classifying human behavior.

It turns out that people are remarkably predictable, which makes it possible to build very accurate models of an individual user's behavior and have those models flag anomalous situations.

There have already been modest successes here, such as implicit authentication. Implicit authentication uses biometric techniques such as keystroke pressure, rhythm, and typing patterns to verify a user's identity. Its advantages, for both user experience and security, are considerable; at the very least it frees users from the burden of memorizing passwords and the hassle of typing them. Because the signals implicit authentication needs are mostly low-dimensional, the machine learning model only has to handle a few parameters, which also makes it practical to collect high-quality labeled data from users. So even in the presence of behavioral variation or signal noise, machine learning can still match patterns correctly, much as it does in computer vision, and identifying an individual by their distinctive behavior in order to verify identity is well within its reach.

But how exactly does it do that?

All of your movements, from how you walk to how you stand, are determined by many factors together, such as physiology, age, gender, and muscle memory, and for a given person they do not change much. Without you ever noticing, the phone in your pocket captures these signals precisely through its built-in sensors and records them. About 4 seconds of motion data is enough to identify a person by the way they move. Identity can also be checked by comparing a user's current location against their location history: people are creatures of habit, and by observing when they set out and from where, the system can predict whether the person being measured really is the account owner.
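
As a rough, hypothetical illustration of gait-based implicit authentication, the sketch below summarizes simulated 4-second accelerometer windows with a few statistics and trains a classifier to separate the device owner from everyone else; a real system would of course work from actual phone sensor data and much richer features.

    # A toy sketch of gait-based implicit authentication: summarize each
    # 4-second accelerometer window with a few statistics and train a
    # classifier to tell the device owner apart from other people.
    # All data here is simulated; a real deployment would use phone sensors.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)

    def window_features(window):
        # window: accelerometer magnitudes sampled at ~50 Hz for 4 seconds
        return [window.mean(), window.std(), window.max() - window.min()]

    def simulate_windows(step_rate, noise, n=200, hz=50, seconds=4):
        t = np.arange(hz * seconds) / hz
        return np.array([
            window_features(np.sin(2 * np.pi * step_rate * t) +
                            rng.normal(0, noise, t.size))
            for _ in range(n)
        ])

    owner = simulate_windows(step_rate=1.8, noise=0.2)   # the owner's stride
    others = simulate_windows(step_rate=2.4, noise=0.6)  # everyone else

    X = np.vstack([owner, others])
    y = np.array([1] * len(owner) + [0] * len(others))

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(simulate_windows(step_rate=1.8, noise=0.2, n=3)))  # expect [1 1 1]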

Our phones and computers already carry a large number of sensors, and as wearables and the Internet of Things spread, that number will grow explosively. Large volumes of behavioral and environmental data are collected from users and fed to machine learning systems, which build a model of each individual and learn how the various factors relate to one another.

| What homework do you need to do before machine learning can protect you?

To provide protection, your system has to know in advance which threat models it is defending against.

First, and most important: collect data. The data must be accurate enough to train the system to withstand threats. If the authentication system itself is attacked, however, there is little need to panic, because changes in behavior are relatively easy to detect and the system will quickly recognize the anomaly. For example, if a device is stolen, the motion patterns, locations, and usage recorded afterwards will differ markedly from its earlier history. The system tolerates this kind of possible anomaly by asking the user to confirm their identity through another channel, and it is tuned to keep false positives to a minimum. Once we link four such factors across different devices, the false-positive rate of implicit authentication falls below 0.001%.
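
The arithmetic behind that last claim is easy to sketch: if each factor on its own wrongly accepts an impostor at some small rate and the factors fail roughly independently (an assumption, and the per-factor rates below are invented), then requiring all of them to agree multiplies the error rates together.

    # Back-of-the-envelope arithmetic for combining authentication factors.
    # If each factor alone wrongly accepts an impostor at the rates below and
    # the errors are roughly independent (an assumption), requiring all four
    # to agree multiplies the rates together.
    factor_false_accept = {
        "gait": 0.05,
        "typing rhythm": 0.08,
        "location habits": 0.03,
        "app usage pattern": 0.07,
    }

    combined = 1.0
    for name, rate in factor_false_accept.items():
        combined *= rate

    print(f"combined false-accept rate: {combined:.6%}")  # ~0.0008%, below 0.001%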

No machine learning in this world is magical enough to solve every security problem. Designers who want to build a useful security product with machine learning need a deep understanding of the underlying systems, and they have to accept that many problems are simply not a good fit for machine learning. Not to worry, though: the technology companies riding the crest of this wave will knock those problems down one by one.

Machine learning is brewing an unstoppable market frenzy in security.


The future of cybersecurity will depend on machine learning

Information security has always been a game of cat and mouse. The good guys build a wall, and the bad guys find a way over or around it. Lately, though, the bad guys seem to be getting through that wall more and more easily. Stopping them will require a major leap in our capabilities, and that probably means using machine learning much more widely.

This may surprise onlookers outside the industry, but machine learning has not yet had a broad impact on IT security. Security experts point out that although credit card fraud detection systems and network equipment makers are using advanced analytics, the routine automated security operations at every large company, such as detecting malware on PCs or identifying malicious activity on the network, still mostly rely on humans writing and configuring the relevant code in a timely manner.

Although the application of machine learning to cybersecurity has been studied extensively in academia, we are only now beginning to see its impact on security tools. Startups such as Invincea, Cylance, Exabeam, and Argyle Data are building machine-learning-driven security tools that aim to be faster and more accurate than those offered by today's dominant security software vendors.

Crushing malware with data

Invincea is a Virginia-based company that specializes in malware detection and network security. Josh Saxe, its chief research engineer, believes it is time to abandon the signature- and file-hash-based analysis techniques of the 1990s.

Saxe says: "I understand that some antivirus companies have dipped into machine learning, but what they still live on is signature detection. They detect malware based on file hashes or pattern matching, analysis techniques devised by human researchers to detect a given sample."

Part of Invincea's advanced malware detection system grew out of DARPA's Cyber Genome program.

He says: "They are very successful at detecting the malware that was common in the past, but they are not good at detecting new malware, which is one reason cybercrime is thriving today. Even with an antivirus system installed, attackers can still break into your computer, because signature-based detection simply does not work against them."

At Invincea, Saxe leads a team using machine learning to build a better malware detection system. The project, part of DARPA's Cyber Genome program, applies machine learning to take down the malware it detects: reverse-engineering how samples operate, running social-network-style analysis over their code, and using the machine learning system to rapidly dissect new malware samples as they appear in the wild.

"We have shown that the machine-learning-based approach we developed is more effective than traditional antivirus systems. A machine learning system can automate what human analysts do, and even do it better. Combine machine learning with large amounts of training data and you can beat traditional signature-based detection."

Invincea uses deep learning to speed up the training of its algorithms. Saxe currently has roughly 1.5 million benign and malicious samples for training, all of it done on GPUs using Python-based tools. He hopes that as the sample set grows to 30 million, the performance advantage of the machine learning system will grow roughly linearly with it.
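
This is not Invincea's code, but the shape of the task can be sketched in a few lines: extract numeric features from files, label them benign or malicious, and train a neural-network classifier. The sketch below uses invented features and scikit-learn's small MLPClassifier as a stand-in for the GPU-based deep learning frameworks used at real scale.

    # Not Invincea's system: a schematic of training a neural-network malware
    # classifier on static file features. Real work at this scale uses GPU
    # deep-learning frameworks and millions of labeled samples; here a tiny
    # multilayer perceptron and synthetic features stand in.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)

    n_benign, n_malicious, n_features = 2000, 2000, 64

    # Pretend features: e.g. byte-entropy histogram buckets, imported-API counts
    benign = rng.normal(0.0, 1.0, size=(n_benign, n_features))
    malicious = rng.normal(0.6, 1.2, size=(n_malicious, n_features))

    X = np.vstack([benign, malicious])
    y = np.array([0] * n_benign + [1] * n_malicious)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0, stratify=y)

    net = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
    net.fit(X_tr, y_tr)
    print("held-out accuracy:", round(net.score(X_te, y_te), 3))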

"The more training data we have, and the more malware we can train the machine learning system on, the bigger its performance advantage in detecting malware becomes," he says.

Saxe says Invincea's current plan is to ship more deep-learning-based capabilities in its 2016 endpoint security products, specifically by adding them to Cynomix, an endpoint security product that already uses machine learning.

Detecting malicious users

Machine learning can also help with other aspects of IT security: detecting malicious insiders and identifying compromised accounts.

Just as the major antivirus products rely on signatures to identify malware, the tools that monitor user activity also lean on signatures. And just as signature-based detection is starting to fail for malware, it is proving inadequate for user activity as well.

"In the past, enterprise security teams relied heavily on signature-based approaches, such as IP address blacklists," says Derek Lin, chief data scientist at Exabeam, a vendor of user behavior analytics tools.

He says: "That approach looks for things that have already happened. The problem with signature-based methods is that you only see the signature after the event has occurred. What security teams are intensely focused on now is detecting malicious events that have no signature."

Exabeam builds a picture of user activity by tracking users' remote connections, devices, IP addresses, and credentials.

Today's savvy criminals know that a slight change of path is enough to defeat signature detection. If the defenses they are up against include an IP blacklist, attackers can beat it by constantly hopping around the large pool of network domains under their control.

Rather than cling to yesterday's defenses, Exabeam takes a proactive approach based on Gartner's concept of UBA (User Behavior Analytics). The idea behind UBA is that you cannot know in advance whether a machine or a user is good or bad, so you assume they may be malicious and that your network is vulnerable, and you monitor and model everyone's behavior at all times in order to find the bad actors.

This is where machine learning algorithms come in. Lin and his team pull in a wide variety of sources, such as server logs and VPN logs, and apply a range of supervised and unsupervised machine learning algorithms to detect anomalous patterns in user behavior.

Lin says: "All of this is about profiling user behavior; the question is how. For every user or entity on the network, we try to build a sketch of what normal looks like, which involves statistical analysis. Then we look for deviations from normal at the conceptual level... We use behavior-based methods to find anomalies in the system and surface them so that security analysts can review them."
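
The core UBA idea Lin describes, build a per-user baseline and then score new activity by its deviation from that baseline, can be sketched very simply; the example below uses invented login counts and a crude z-score, where a production system would combine many more signals and models.

    # A minimal sketch of the UBA idea: build a per-user baseline from history,
    # then score new activity by how far it deviates from that baseline.
    # Event counts here are invented; real systems use many more signals.
    import numpy as np

    history = {  # logins per day over recent weeks, per user
        "alice": [3, 4, 2, 3, 5, 4, 3, 4],
        "bob":   [20, 22, 19, 25, 21, 23, 22, 20],
    }

    today = {"alice": 40, "bob": 24}  # today's login counts

    for user, counts in history.items():
        mu, sigma = np.mean(counts), np.std(counts) + 1e-9
        z = (today[user] - mu) / sigma
        flag = "ANOMALY" if abs(z) > 3 else "normal"
        print(f"{user}: today={today[user]}, baseline={mu:.1f}+/-{sigma:.1f}, z={z:.1f} -> {flag}")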

The future of machine learning in security

"Think back over the major waves of cybersecurity we have been through. Cybercriminals keep finding effective ways to break security systems, and we have to strike back. Will machine learning become a mainstay of that counterattack? Absolutely," says Patrick Townsend, founder and CEO of security software vendor Townsend Security.

He says: "We are now starting to get systems that can effectively process huge volumes of unstructured data and detect patterns, and I expect the products in the next wave of cybersecurity to be built on cognitive computing. Look at Watson: if it can win at Jeopardy, why couldn't it be used broadly to analyze and understand cybersecurity events? I think we are at the very beginning of using cognitive computing to help deal with security."

Invincea's Saxe hopes to be one of the front-runners. He says: "I'm not surprised that companies in this field haven't caught this wave and built products on the new deep learning algorithms. Training machine learning at this scale has only recently become possible; ten years ago it couldn't have been done effectively."

Machine learning and big data know it wasn’t you who just swiped your credit card

You’re sitting at home minding your own business when you get a call from your credit card’s fraud detection unit asking if you’ve just made a purchase at a department store in your city. It wasn’t you who bought expensive electronics using your credit card – in fact, it’s been in your pocket all afternoon. So how did the bank know to flag this single purchase as most likely fraudulent?

Credit card companies have a vested interest in identifying financial transactions that are illegitimate and criminal in nature. The stakes are high. According to the Federal Reserve Payments Study, Americans used credit cards to pay for 26.2 billion purchases in 2012. The estimated loss due to unauthorized transactions that year was US$6.1 billion. The federal Fair Credit Billing Act limits the maximum liability of a credit card owner to $50 for unauthorized transactions, leaving credit card companies on the hook for the balance. Obviously fraudulent payments can have a big effect on the companies’ bottom lines. The industry requires any vendors that process credit cards to go through security audits every year. But that doesn’t stop all fraud.

In the banking industry, measuring risk is critical. The overall goal is to figure out what’s fraudulent and what’s not as quickly as possible, before too much financial damage has been done. So how does it all work? And who’s winning in the arms race between the thieves and the financial institutions?

Gathering the troops

From the consumer perspective, fraud detection can seem magical. The process appears instantaneous, with no human beings in sight. This apparently seamless and instant action involves a number of sophisticated technologies in areas ranging from finance and economics to law to information sciences.

Of course, there are some relatively straightforward and simple detection mechanisms that don’t require advanced reasoning. For example, one good indicator of fraud can be an inability to provide the correct zip code affiliated with a credit card when it’s used at an unusual location. But fraudsters are adept at bypassing this kind of routine check – after all, finding out a victim’s zip code could be as simple as doing a Google search.

Traditionally, detecting fraud relied on data analysis techniques that required significant human involvement. An algorithm would flag suspicious cases to be closely reviewed ultimately by human investigators who may even have called the affected cardholders to ask if they’d actually made the charges. Nowadays the companies are dealing with a constant deluge of so many transactions that they need to rely on big data analytics for help. Emerging technologies such as machine learning and cloud computing are stepping up the detection game.

Learning what’s legit, what’s shady

Simply put, machine learning refers to self-improving algorithms, which are predefined processes conforming to specific rules, performed by a computer. A computer starts with a model and then trains it through trial and error. It can then make predictions such as the risks associated with a financial transaction.

A machine learning algorithm for fraud detection needs to be trained first by being fed the normal transaction data of lots and lots of cardholders. Transaction sequences are an example of this kind of training data. A person may typically pump gas one time a week, go grocery shopping every two weeks and so on. The algorithm learns that this is a normal transaction sequence.

After this fine-tuning process, credit card transactions are run through the algorithm, ideally in real time. It then produces a probability number indicating the possibility of a transaction being fraudulent (for instance, 97%). If the fraud detection system is configured to block any transactions whose score is above, say, 95%, this assessment could immediately trigger a card rejection at the point of sale.
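
In outline, the scoring step might look like the following sketch: a model trained on labeled historical transactions turns the features of an incoming transaction into a fraud probability, and the system rejects the swipe when that probability crosses the configured threshold. The features, data, and threshold here are invented for illustration.

    # A sketch of the scoring step described above: a trained model turns a
    # transaction's features into a fraud probability, and the system blocks
    # the card swipe if that probability exceeds a configured threshold.
    # Features and data are invented for the example.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)

    # Features per transaction: [amount / cardholder's usual amount,
    #                            km from home, unusual hour (0/1)]
    legit = np.column_stack([rng.normal(1.0, 0.3, 5000),
                             rng.exponential(5, 5000),
                             rng.binomial(1, 0.05, 5000)])
    fraud = np.column_stack([rng.normal(6.0, 2.0, 50),
                             rng.exponential(500, 50),
                             rng.binomial(1, 0.7, 50)])

    X = np.vstack([legit, fraud])
    y = np.array([0] * len(legit) + [1] * len(fraud))
    model = LogisticRegression(class_weight="balanced").fit(X, y)

    BLOCK_THRESHOLD = 0.95

    def decide(transaction_features):
        p_fraud = model.predict_proba([transaction_features])[0, 1]
        return ("REJECT" if p_fraud > BLOCK_THRESHOLD else "APPROVE", round(p_fraud, 3))

    print(decide([1.1, 3.0, 0]))     # looks like a routine purchase
    print(decide([8.0, 2000.0, 1]))  # large, far from home, at an odd hour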

The algorithm considers many factors to qualify a transaction as fraudulent: trustworthiness of the vendor, a cardholder’s purchasing behavior including time and location, IP addresses, etc. The more data points there are, the more accurate the decision becomes.

This process makes just-in-time or real-time fraud detection possible. No person can evaluate thousands of data points simultaneously and make a decision in a split second.

Here’s a typical scenario. When you go to a cashier to check out at the grocery store, you swipe your card. Transaction details such as time stamp, amount, merchant identifier and membership tenure go to the card issuer. These data are fed to the algorithm that’s learned your purchasing patterns. Does this particular transaction fit your behavioral profile, consisting of many historic purchasing scenarios and data points?

 

The algorithm knows right away if your card is being used at the restaurant you go to every Saturday morning – or at a gas station two time zones away at an odd time such as 3:00 a.m. It also checks if your transaction sequence is out of the ordinary. If the card is suddenly used for cash-advance services twice on the same day when the historic data show no such use, this behavior is going to up the fraud probability score. If the transaction’s fraud score is above a certain threshold, often after a quick human review, the algorithm will communicate with the point-of-sale system and ask it to reject the transaction. Online purchases go through the same process.
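
The sequence check described above can be sketched as a simple feature that bumps the score when a never-before-seen pattern suddenly repeats; the data and the size of the bump below are invented.

    # A toy version of the sequence check described above: the card suddenly
    # being used for cash advances twice in one day, when the history shows no
    # such use, bumps the fraud probability. Data and bump size are invented.
    from collections import Counter

    past_90_days = ["grocery", "gas", "grocery", "restaurant", "gas", "grocery"]
    today = ["cash_advance", "cash_advance"]

    def sequence_bump(history, today, category="cash_advance", bump=0.3):
        never_seen_before = Counter(history)[category] == 0
        repeated_today = Counter(today)[category] >= 2
        return bump if never_seen_before and repeated_today else 0.0

    base_score = 0.55                     # from the main scoring model
    print("fraud score:", base_score + sequence_bump(past_90_days, today))  # 0.85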

In this type of system, heavy human interventions are becoming a thing of the past. In fact, they could actually be in the way since the reaction time will be much longer if a human being is too heavily involved in the fraud-detection cycle. However, people can still play a role – either when validating a fraud or following up with a rejected transaction. When a card is being denied for multiple transactions, a person can call the cardholder before canceling the card permanently.

Computer detectives, in the cloud

The sheer number of financial transactions to process is overwhelming, truly, in the realm of big data. But machine learning thrives on mountains of data – more information actually increases the accuracy of the algorithm, helping to eliminate false positives. These can be triggered by suspicious transactions that are really legitimate (for instance, a card used at an unexpected location). Too many alerts are as bad as none at all.

It takes a lot of computing power to churn through this volume of data. For instance, PayPal processes more than 1.1 petabytes of data for 169 million customer accounts at any given moment. This abundance of data – one petabyte, for instance, is more than 200,000 DVDs’ worth – has a positive influence on the algorithms’ machine learning, but can also be a burden on an organization’s computing infrastructure.

Enter cloud computing. Off-site computing resources can play an important role here. Cloud computing is scalable and not limited by the company’s own computing power.

Fraud detection is an arms race between good guys and bad guys. At the moment, the good guys seem to be gaining ground, with emerging innovations in IT technologies such as chip and pin technologies, combined with encryption capabilities, machine learning, big data and, of course, cloud computing.

Fraudsters will surely continue trying to outwit the good guys and challenge the limits of the fraud detection system. Drastic changes in the payment paradigms themselves are another hurdle. Your phone is now capable of storing credit card information and can be used to make payments wirelessly – introducing new vulnerabilities. Luckily, the current generation of fraud detection technology is largely neutral to the payment system technologies.

When your feed has more updates than you can read: how Facebook optimizes News Feed

[Editor's note] This article is an Ask Me Anything between Qin Chao (覃超) of the FREES internet team and Xu Wanhong (徐万鸿), a former senior engineer on Facebook's News Feed ranking team who returned to China this September to become CTO of 神州专车. They discuss Facebook's growth hacking strategy, its anti-spam system, News Feed ranking, and why he chose to come back to China and join a startup. Leiphone (雷锋网) has edited the conversation without changing its meaning.

News feed ranking is one of Facebook's signature capabilities: a user may receive two or three thousand new stories a day but will only read the first 50 to 100. Machine learning is used to put the content users most want to see at the top, which improves stickiness and daily active users.

This is certainly a technically focused article, and Facebook is one of the largest internet companies in the world, but that should not stop entrepreneurs from learning from it. Using A/B testing as the method of iteration, letting data, the heart of growth hacking, drive development, the onboarding talks for new employees: these practices reflect different dimensions of the social giant's culture. At the level of spirit, it is about realizing a dream and uniting behind a shared goal; pushed down to the micro level, that goal becomes respect for data.

What does Facebook do with its Sigma system?

When I first went to work at Facebook, the VP in charge of user growth gave the onboarding talk. He said that in the future everyone in the world would use Facebook and that it would become a trillion-dollar company, which left a deep impression on me. Everyone at the company was excited and had enormous confidence in the goals that had been set. They worked with a very strong sense of mission and focus.

That is one of the things about Facebook that impressed me most.

I worked on Facebook's site-integrity team for two years. At the time Facebook had a great deal of spam messaging and junk content, much like the ads and junk links you see on Renren or Weibo. Some users' accounts had been compromised and were used to send spam, ads, and viruses from their profiles, along with unwelcome friend requests. I dealt with everything of that kind that affected the user experience.

Facebook uses a system called Sigma to fight this junk. It runs on more than 2,000 machines, and everything a Facebook user does passes through Sigma for analysis: comments, links, and friend requests are all judged by the system to be normal behavior, abusive behavior, or otherwise problematic.

Using Sigma, Facebook filters out and cleans up spam.

Take friend requests as an example. Facebook's system automatically checks: if all of someone's previous friend requests have been rejected, their next request will not be approved. If nine out of ten of the friend requests a person sends are rejected, the system will reject the next one they send.

The system has other signals to draw on as well, of course.

It is a machine learning system: based on how often your previous friend requests were rejected, it estimates how likely your next one is to be rejected.

If that rate is high, Facebook asks you to verify yourself by SMS or another method, to determine whether the requests are being sent by software or a real person and whether you genuinely mean to send them. For example, if you send a friend request to someone with whom you have no mutual friends at all, it may well be an unreasonable request.
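
This is not Facebook's actual Sigma code, but the logic described above can be sketched as a small decision function: estimate the risk from the sender's historical rejection rate, treat the absence of mutual friends as an extra warning sign, and require verification when the risk is high. The thresholds are invented.

    # Not Facebook's Sigma, just a toy version of the logic described above:
    # estimate how likely the next friend request is to be rejected from the
    # sender's history, and require extra verification when the risk is high.
    def rejection_rate(sent, rejected):
        # Smoothed historical rejection rate (avoids divide-by-zero for new users).
        return (rejected + 1) / (sent + 2)

    def handle_friend_request(sent, rejected, mutual_friends):
        risk = rejection_rate(sent, rejected)
        if mutual_friends == 0:
            risk = min(1.0, risk + 0.2)   # no shared friends makes the request more suspect
        if risk > 0.8:
            return "block"
        if risk > 0.5:
            return "require phone verification"
        return "allow"

    print(handle_friend_request(sent=10, rejected=9, mutual_friends=0))   # block
    print(handle_friend_request(sent=10, rejected=4, mutual_friends=0))   # verification
    print(handle_friend_request(sent=10, rejected=1, mutual_friends=12))  # allow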

Essentially, everything you do on Facebook goes through this system, which analyzes it, makes predictions, and decides whether to let the message go out, in the hope of reducing harassment across the ecosystem. At the time, tens of billions of actions a day had to be judged by the system.

Machine learning is the core of Sigma

Sigma contains hand-written rules as well as machine-learned models, and accepted and rejected requests become its training data: an action that goes through is a positive sample for machine learning, and one that is rejected is a negative sample, much like a 1 and a 0.

For a friend request, for instance, the y value is 1 if it is accepted and 0 if it is rejected. For comments and likes the system likewise learns the y value, and inappropriate content that users post gets removed.

Machine learning is the core of the entire Sigma system.

Another approach is to analyze users' anomalous behavior through anomaly analysis and data mining.

For example, if one person posts a very large number of comments of the same type, all containing a similar link, that is a serious red flag. A normal user does not leave the same comment on different people's profiles; this is clearly anomalous behavior, and we do not allow it.
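
A toy version of that duplicate-comment signal might look like this: group link-bearing comments by author and text, and flag an author who posts the identical comment on many different profiles. The data and cut-off are invented.

    # A toy version of the duplicate-comment signal: one account leaving the
    # same link-bearing comment on many different people's profiles is flagged.
    from collections import defaultdict
    import re

    comments = [
        ("spammer1", "alice", "Check this out http://bad.example/win"),
        ("spammer1", "bob",   "Check this out http://bad.example/win"),
        ("spammer1", "carol", "Check this out http://bad.example/win"),
        ("normal42", "alice", "Great photo!"),
        ("normal42", "bob",   "See you Saturday"),
    ]

    LINK = re.compile(r"https?://\S+")
    seen = defaultdict(set)  # (author, comment text) -> profiles it was posted on

    for author, profile, text in comments:
        if LINK.search(text):
            seen[(author, text)].add(profile)

    for (author, text), profiles in seen.items():
        if len(profiles) >= 3:
            print(f"flag {author}: identical link comment on {len(profiles)} profiles")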

News Feed is Facebook's most important product

After two years there, I chose to move to this team.

"Ranking" refers to the order of the feed. It determines what your Facebook feed looks like when you open it and where each piece of information sits. The people and Pages you follow generate two or three thousand stories, but a user only sees 50 to 100 of them, so you have to surface the best of those thousands. Some stories we don't show at all; for example, you may like a game while your friend has no interest in it.

When I joined in 2012, the News Feed ranking team had only five or six people, even though it was probably the company's largest machine learning system and its most core product. More than a billion people come online every day, each spending about 40 minutes on Facebook, half of which goes to News Feed. Most of Facebook's revenue comes from News Feed advertising: mobile ads account for about 70% of all ad revenue, and all of those mobile ads run in News Feed. Whether you measure by time spent or by revenue, News Feed is the most important product.

News Feed is Facebook's most important product; it directly determines what users see.

Ranking News Feed well is a hard problem, because users can do many things in the feed, not just the click-or-don't-click of traditional advertising. They can like, comment on, share, or hide a story, and they can play videos. I need to understand what users like, what they comment on and share, and what kinds of videos they want to watch: understand where their interests lie and, based on that information, put the best content at the top of the feed.

To compare with social media in China: WeChat Moments shows every piece of content and does not need ranking, because the volume is small enough that people can read it all. Once you have more and more friends, you no longer have time to read every post, and ranking becomes inevitable; otherwise you will easily miss photos from the people who matter to you, buried under the mass of content you are not interested in.

Facebook also used to show everything, until users gradually could no longer read all of it. If you don't rank and pick out the best content, users stop wanting to visit the feed, because they see so much they don't care about and no longer have time to dig out the parts they do. Going from no ranking to ranking is inevitable: your friends keep multiplying, the Pages keep multiplying, and ranking becomes a necessity.

Sina Weibo, for example, did not rank its feed, and parts of it were a disorganized mess; they tested ranking, but it didn't work well, so they gave up. WeChat Moments will also reach the stage where it has to rank. Facebook doesn't just rank, it also hides content users aren't interested in. If your friends play Candy Crush but you don't play any games, stories about "what games your friends are playing" mean nothing to you, and Facebook simply won't show them.

The fragmentation of social media is already a fact. Only better ranking, pushing more relevant content to users, can increase the time they spend on the platform and strengthen stickiness.

How does News Feed ranking work?

Essentially, News Feed picks forty or fifty stories out of two or three thousand. Every story is scored, and the highest-scoring ones appear at the top. For each story, photo, share, or status update we predict a set of probabilities, such as the probability that you will like it, comment on it, or share it, and each kind of user action (like, share, comment) carries a weight. The probabilities of these user actions are computed by the machine learning system. If a user likes, comments on, or shares a piece of content, it means they wanted to see it and responded to it.

For example, suppose you are my friend and have uploaded 100 photos, and I have liked 20 of them; then my like probability is 20%. We know what every user has liked and commented on in the past, and that history forms our training samples. By learning from users' past behavior we predict their future behavior on similar content from the same people, because short-term behavior does not change dramatically: whatever someone commented on in the past, they are very likely to comment on similar content in the future.
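
Putting those two paragraphs together, the ranking step can be sketched as: predict per-story engagement probabilities, combine them with per-action weights into a score, and sort. The weights and probabilities below are made up; in production they come from models trained on each user's history.

    # A sketch of the ranking step: predict per-story engagement probabilities,
    # combine them with per-action weights into a score, and sort. The weights
    # and probabilities below are invented; in production they come from
    # machine-learned models trained on each user's history.
    WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 8.0}

    stories = [
        {"id": "photo from close friend", "like": 0.20, "comment": 0.06, "share": 0.01},
        {"id": "page post about a game",  "like": 0.02, "comment": 0.00, "share": 0.00},
        {"id": "friend's wedding album",  "like": 0.35, "comment": 0.12, "share": 0.05},
    ]

    def score(story):
        return sum(WEIGHTS[action] * story[action] for action in WEIGHTS)

    for story in sorted(stories, key=score, reverse=True):
        print(f"{score(story):.2f}  {story['id']}")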

Predicting from the content itself

Many people wonder whether predictions can also be made from the content users post, for example by analyzing the text or images themselves. They can. For images we can extract features, run pattern recognition, identify the subject of the picture, and tag it accordingly, letting machines recognize the images. That work is under way now; Facebook has an AI lab that does content recognition on images.

So how does Facebook test whether the algorithm is effective, and how does it iterate on it?

Through A/B testing. We give 1% of users the new algorithm and 1% the old one. If users under the new algorithm like, comment, and share more each day, the new algorithm is better, and we release it to all users. Our core goals are: more daily active users, longer time spent, and more frequent visits to Facebook.

A/B testing is an excellent way to iterate. Establish core metrics, run an A/B test, and see whether the change improves them; release it if it does, and don't if it doesn't. It is very much like growth hacking, and the ultimate aim is still DAU: if users like your feed, they will visit more often, so in the end it comes back to time spent and daily active users.
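
As a sketch of how such an experiment might be read, the snippet below compares the share of engaged users in the old-algorithm and new-algorithm buckets with a two-proportion z-test; the numbers are invented, and real experiment analysis at Facebook is certainly more involved than this.

    # A sketch of checking an A/B test result: compare the share of users who
    # engaged (liked/commented/shared) in the old- and new-algorithm buckets
    # with a two-proportion z-test. Numbers are invented.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(success_a, n_a, success_b, n_b):
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return p_a, p_b, z, p_value

    # 1% bucket on the old ranking vs 1% bucket on the new ranking
    p_old, p_new, z, p = two_proportion_z(success_a=41_200, n_a=100_000,
                                          success_b=42_100, n_b=100_000)
    print(f"old={p_old:.3f} new={p_new:.3f} z={z:.2f} p={p:.4f}")
    # Ship the new algorithm only if the lift is positive and the p-value is small.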

A/B testing is the method Facebook uses to test whether an iteration is viable; 吆喝科技, a company backed by 峰瑞资本, wants to put the same technique in the hands of startups.

"I can no longer read everything in my Moments feed"

I can no longer read everything in my Moments feed. One way to improve this is ranking: put the best content first and learn what you care about from what you have liked before, for instance that you like everything your girlfriend posts. Another improvement is called story bumping. Sometimes I check WeChat when I get up in the morning and only get through a small part of it, and when I come back a little later there is nothing new left.

Facebook's story bumping takes the things you haven't finished reading and pushes them back up to the top of your feed.

WeChat knows which content you haven't seen. I have many friends in the US, so my Moments feed fills up; before work I only get through part of it, and when I refresh later nothing new appears, and I never get back to the things I didn't finish, the photos friends posted. Facebook's story bumping puts important, unseen, slightly older stories near the top of the feed so that you get another look and don't miss anything important.
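
A toy version of story bumping is easy to write down: when re-ranking, stories the user has not yet seen are bumped ahead of ones they have already seen, as long as they are still reasonably fresh; otherwise ordering falls back to the relevance score. Scores and ages below are invented.

    # A toy version of story bumping: when re-ranking the feed, stories the user
    # has not seen yet get bumped ahead of ones they have already seen, as long
    # as they are still reasonably fresh. Scores and ages are invented.
    def feed_order(stories, max_bump_age_hours=24):
        def key(story):
            unseen_and_fresh = (not story["seen"]) and story["age_hours"] <= max_bump_age_hours
            # Unseen fresh stories first, then by relevance score.
            return (0 if unseen_and_fresh else 1, -story["score"])
        return sorted(stories, key=key)

    stories = [
        {"id": "seen viral video",        "score": 0.9, "age_hours": 2,   "seen": True},
        {"id": "unseen photo from mom",   "score": 0.6, "age_hours": 9,   "seen": False},
        {"id": "unseen week-old article", "score": 0.7, "age_hours": 160, "seen": False},
    ]

    for s in feed_order(stories):
        print(s["id"])
    # -> the unseen photo is bumped above the already-seen video;
    #    the week-old item is too stale to bump and falls back to score order.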

In September I joined 神州专车 as CTO. From a career perspective, I want to bring the company culture and technology I learned at Facebook back to China. China has enormous potential in the computer industry. The quality of domestic products is now on a par with American ones; take WeChat, whose features even Facebook's product managers have studied. Looking a few years further out, China has a chance to catch up with the US.

Computer science as a discipline has matured, and creativity here is steadily improving. Many startups are trying out different ideas; China has many times more founders than the US, all experimenting, and successful companies will come out of that. Technically, China is closing in on the US and may even overtake it. In the long run, China's computer and internet industries have the potential to make it the strongest internet country in the world.

Fighting spam with Haskell

One of our weapons in the fight against spam, malware, and other abuse on Facebook is a system called Sigma. Its job is to proactively identify malicious actions on Facebook, such as spam, phishing attacks, posting links to malware, etc. Bad content detected by Sigma is removed automatically so that it doesn’t show up in your News Feed.

We recently completed a two-year-long major redesign of Sigma, which involved replacing the in-house FXL language previously used to program Sigma with Haskell. The Haskell-powered Sigma now runs in production, serving more than one million requests per second.

Haskell isn’t a common choice for large production systems like Sigma, and in this post, we’ll explain some of the thinking that led to that decision. We also wanted to share the experiences and lessons we learned along the way. We made several improvements to GHC (the Haskell compiler) and fed them back upstream, and we were able to achieve better performance from Haskell compared with the previous implementation.

How does Sigma work?

Sigma is a rule engine, which means it runs a set of rules, called policies. Every interaction on Facebook — from posting a status update to clicking “like” — results in Sigma evaluating a set of policies specific to that type of interaction. These policies make it possible for us to identify and block malicious interactions before they affect people on Facebook.

Policies are continuously deployed. At all times, the source code in the repository is the code running in Sigma, allowing us to move quickly to deploy policies in response to new abuses. This also means that safety in the language we write policies in is important. We don’t allow code to be checked into the repository unless it is type-correct.

Louis Brandy of Facebook’s Site Integrity team discusses scalable spam fighting and the anti-abuse structure at Facebook and Instagram in a 2014 @Scale talk.

Why Haskell?

The original language we designed for writing policies, FXL, was not ideal for expressing the growing scale and complexity of Facebook policies. It lacked certain abstraction facilities, such as user-defined data types and modules, and its implementation, based on an interpreter, was slower than we wanted. We wanted the performance and expressivity of a fully fledged programming language. Thus, we decided to migrate to an existing language rather than try to improve FXL.

The following features were at the top of our list when we were choosing a replacement:

1. Purely functional and strongly typed. This ensures that policies can’t inadvertently interact with each other, they can’t crash Sigma, and they are easy to test in isolation. Strong types help eliminate many bugs before putting policies into production.

2. Automatically batch and overlap data fetches. Policies typically fetch data from other systems at Facebook, so we want to employ concurrency wherever possible for efficiency. We want concurrency to be implicit, so that engineers writing policies can concentrate on fighting spam and not worry about concurrency. Implicit concurrency also prevents the code from being cluttered with efficiency-related details that would obscure the functionality, and make the code harder to understand and modify.

3. Push code changes to production in minutes. This enables us to deploy new or updated policies quickly.

4. Performance. FXL’s slower performance meant that we were writing anything performance-critical in C++ and putting it in Sigma itself. This had a number of drawbacks, particularly the time required to roll out changes.

5. Support for interactive development. Developers working on policies want to be able to experiment and test their code interactively, and to see the results immediately.

Haskell measures up quite well: It is a purely functional and strongly typed language, and it has a mature optimizing compiler and an interactive environment (GHCi). It also has all the abstraction facilities we would need, it has a rich set of libraries available, and it’s backed by an active developer community.

That left us with two features from our list to address: (1) automatic batching and concurrency, and (2) hot-swapping of compiled code.

Automatic batching and concurrency: The Haxl framework

All the existing concurrency abstractions in Haskell are explicit, meaning that the user needs to say which things should happen concurrently. For data-fetching, which can be considered a purely functional operation, we wanted a programming model in which the system just exploits whatever concurrency is available, without the programmer having to use explicit concurrency constructs. We developed the Haxl framework to address this issue: Haxl enables multiple data-fetching operations to be automatically batched and executed concurrently.

We discussed Haxl in an earlier blog post, and we published a paper on Haxl at the ICFP 2014 conference. Haxl is open source and available on GitHub.

In addition to the Haxl framework, we needed help from the Haskell compiler in the form of the Applicative do-notation. This allows programmers to write sequences of statements that the compiler automatically rearranges to exploit concurrency. We also designed and implemented Applicative do-notation in GHC.

Hot-swapping of compiled code

Every time someone checks new code into the repository of policies, we want to have that code running on every machine in the Sigma fleet as quickly as possible. Haskell is a compiled language, so that involves compiling the code and distributing the new compiled code to all the machines running Sigma.

We want to update the compiled rules in a running Sigma process on the fly, while it is serving requests. Changing the code of a running program is a tricky problem in general, and it has been the subject of a great deal of research in the academic community. In our case, fortunately, the problem is simpler: Requests to Sigma are short-lived, so we don’t need to switch a running request to new code. We can serve new requests on the new code and let the existing requests finish before we discard the old code. We’re careful to ensure that we don’t change any code associated with persistent state in Sigma.

Loading and unloading code currently uses GHC’s built-in runtime linker, although in principle, we could use the system dynamic linker. To unload the old version of the code, the garbage collector gets involved. The garbage collector detects when old code is no longer being used by a running request, so we know when it is safe to unload it from the running process.

How Haskell fits in

Haskell is sandwiched between two layers of C++ in Sigma. At the top, we use the C++ thrift server. In principle, Haskell can act as a thrift server, but the C++ thrift server is more mature and performant. It also supports more features. Furthermore, it can work seamlessly with the Haskell layers below because we can call into Haskell from C++. For these reasons, it made sense to use C++ for the server layer.

At the lowest layer, we have existing C++ client code for talking to other internal services. Rather than rewrite this code in Haskell, which would duplicate the functionality and create an additional maintenance burden, we wrapped each C++ client in a Haxl data source using Haskell’s Foreign Function Interface (FFI) so we could use it from Haskell.

Haskell’s FFI is designed to call C rather than C++, so calling C++ requires an intermediate C layer. In most cases, we were able to avoid the intermediate C layer by using a compile-time tool that demangles C++ function names so they can be called directly from Haskell.

Performance

Perhaps the biggest question here is “Does it run fast enough?” Requests to Sigma result from users performing actions on Facebook, such as sending a message on Messenger, and Sigma must respond before the action can take place. So we wanted to serve requests fast enough to avoid interruptions to the user experience.

The graph below shows the relative throughput performance between FXL and Haskell for the 25 most common types of requests served by Sigma (these requests account for approximately 95 percent of Sigma’s typical workload).

Haskell performs as much as three times faster than FXL for certain requests. On a typical workload mix, we measured a 20 percent to 30 percent improvement in overall throughput, meaning we can serve 20 percent to 30 percent more traffic with the same hardware. We believe additional improvements are possible through performance analysis, tuning, and optimizing the GHC runtime for our workload.

Achieving this level of performance required a lot of hard work, profiling the Haskell code, and identifying and resolving performance bottlenecks.

Here are a few specific things we did:

 

  • We implemented automatic memoization of top-level computations using a source-to-source translator. This is particularly beneficial in our use-case where multiple policies can refer to the same shared value, and we want to compute it only once. Note, this is per-request memoization rather than global memoization, which lazy evaluation already provides.
  • We made a change to the way GHC manages the heap, to reduce the frequency of garbage collections on multicore machines. GHC’s default heap settings are frugal, so we also use a larger allocation area size of at least 64 MB per core.
  • Fetching remote data usually involves marshaling the data structure across the C++/Haskell boundary. If the whole data structure isn’t required, it is better to marshal only the pieces needed. Or better still, don’t fetch the whole thing — although that’s only possible if the remote service implements an appropriate API.
  • We uncovered a nasty performance bug in aeson, the Haskell JSON parsing library. Bryan O’Sullivan, the author of aeson, wrote a nice blog post about how he fixed it. It turns out that when you do things at Facebook scale, those one-in-a-million corner cases tend to crop up all the time.

 

Resource limits

In a latency-sensitive service, you don’t want a single request using a lot of resources and slowing down other requests on the same machine. In this case, the “resources” include everything on the machine that is shared by the running requests — CPU, memory, network bandwidth, and so on.

A request that uses a lot of resources is normally a bug that we want to fix. It does happen from time to time, often as a result of a condition that occurs in production that wasn’t encountered during testing — perhaps an innocuous operation provided with some unexpectedly large input data, or pathological performance of an algorithm on certain rare inputs, for example. When this happens, we want Sigma to terminate the affected request with an error (that will subsequently result in the bug being fixed) and continue without any impact on the performance of other requests being served.

To make this possible, we implemented allocation limits in GHC, which place a bound on the amount of memory a thread can allocate before it is terminated. Terminating a computation safely is a hard problem in general, but Haskell provides a safe way to abort a computation in the form of asynchronous exceptions. Asynchronous exceptions allow us to write most of our code ignoring the potential for summary termination and still have all the nice guarantees that we need in the event that the limit is hit, including safe releasing of resources, closing network connections, and so forth.

The following graph illustrates how well allocation limits work in practice. It tracks the maximum live memory across various groups of machines in the Sigma fleet. When we enabled one request that had some resource-intensive outliers, we saw large spikes in the maximum live memory, which disappeared when we enabled allocation limits.

Enabling interactive development

Facebook engineers develop policies interactively, testing code against real data as they go. To enable this workflow in Haskell, we needed the GHCi environment to work with our full stack, including making requests to other back-end services from the command line.

To make this work, we had to make our build system link all the C++ dependencies of our code into a shared library that GHCi could load. We also customized the GHCi front end to implement some of our own commands and streamline the desired workflows. The result is an interactive environment in which developers can load their code from source in a few seconds and work on it with a fast turnaround time. They have the full set of APIs available and can test against real production data sources.

While GHCi isn’t as easy to customize as it could be, we’ve already made several improvements and contributed them upstream. We hope to make more improvements in the future.

Packages and build systems

In addition to GHC itself, we make use of a lot of open-source Haskell library code. Haskell has its own packaging and build system, Cabal, and the open-source packages are all hosted on Hackage. The problem with this setup is that the pace of change on Hackage is fast, there are often breakages, and not all combinations of packages work well together. The system of version dependencies in Cabal relies too much on package authors getting it right, which is hard to ensure, and the tool support isn’t what it could be. We found that using packages directly from Hackage together with Facebook’s internal build tools meant adding or updating an existing package sometimes led to a yak-shaving exercise involving a cascade of updates to other packages, often with an element of trial and error to find the right version combinations.

As a result of this experience, we switched to Stackage as our source of packages. Stackage provides a set of package versions that are known to work together, freeing us from the problem of having to find the set by trial and error.

Did we find bugs in GHC?

Yes, most notably:

 

  • We fixed a bug in GHC’s garbage collector that was causing our Sigma processes to crash every few hours. The bug had gone undetected in GHC for several years.
  • We fixed a bug in GHC’s handling of finalizers that occasionally caused crashes during process shutdown.

 

Following these fixes, we haven’t seen any crashes in either the Haskell runtime or the Haskell code itself across our whole fleet.

What else?

At Facebook, we’re using Haskell at scale to fight spam and other types of abuse. We’ve found it to be reliable and performant in practice. Using the Haxl framework, our engineers working on spam fighting can focus on functionality rather than on performance, while the system can exploit the available concurrency automatically.

For more information on spam fighting at Facebook, check out our Protect the Graph page, or watch videos from our recent Spam Fighting @Scale event.