One of our weapons in the fight against spam, malware, and other abuse on Facebook is a system called Sigma. Its job is to proactively identify malicious actions on Facebook, such as spam, phishing attacks, posting links to malware, etc. Bad content detected by Sigma is removed automatically so that it doesn’t show up in your News Feed.

We recently completed a two-year-long major redesign of Sigma, which involved replacing the in-house FXL language previously used to program Sigma with Haskell. The Haskell-powered Sigma now runs in production, serving more than one million requests per second.

Haskell isn’t a common choice for large production systems like Sigma, and in this post, we’ll explain some of the thinking that led to that decision. We also wanted to share the experiences and lessons we learned along the way. We made several improvements to GHC (the Haskell compiler) and fed them back upstream, and we were able to achieve better performance from Haskell compared with the previous implementation.

## How does Sigma work?

Sigma is a rule engine, which means it runs a set of rules, called policies. Every interaction on Facebook — from posting a status update to clicking “like” — results in Sigma evaluating a set of policies specific to that type of interaction. These policies make it possible for us to identify and block malicious interactions before they affect people on Facebook.

Policies are continuously deployed. At all times, the source code in the repository is the code running in Sigma, allowing us to move quickly to deploy policies in response to new abuses. This also means that safety in the language we write policies in is important. We don’t allow code to be checked into the repository unless it is type-correct.

Louis Brandy of Facebook’s Site Integrity team discusses scalable spam fighting and the anti-abuse structure at Facebook and Instagram in a 2014 @Scale talk.

The original language we designed for writing policies, FXL, was not ideal for expressing the growing scale and complexity of Facebook policies. It lacked certain abstraction facilities, such as user-defined data types and modules, and its implementation, based on an interpreter, was slower than we wanted. We wanted the performance and expressivity of a fully fledged programming language. Thus, we decided to migrate to an existing language rather than try to improve FXL.

The following features were at the top of our list when we were choosing a replacement:

1. Purely functional and strongly typed. This ensures that policies can’t inadvertently interact with each other, they can’t crash Sigma, and they are easy to test in isolation. Strong types help eliminate many bugs before putting policies into production.

2. Automatically batch and overlap data fetches. Policies typically fetch data from other systems at Facebook, so we want to employ concurrency wherever possible for efficiency. We want concurrency to be implicit, so that engineers writing policies can concentrate on fighting spam and not worry about concurrency. Implicit concurrency also prevents the code from being cluttered with efficiency-related details that would obscure the functionality, and make the code harder to understand and modify.

3. Push code changes to production in minutes. This enables us to deploy new or updated policies quickly.

4. Performance. FXL’s slower performance meant that we were writing anything performance-critical in C++ and putting it in Sigma itself. This had a number of drawbacks, particularly the time required to roll out changes.

5. Support for interactive development. Developers working on policies want to be able to experiment and test their code interactively, and to see the results immediately.

Haskell measures up quite well: It is a purely functional and strongly typed language, and it has a mature optimizing compiler and an interactive environment (GHCi). It also has all the abstraction facilities we would need, it has a rich set of libraries available, and it’s backed by an active developer community.

That left us with two features from our list to address: (1) automatic batching and concurrency, and (2) hot-swapping of compiled code.

## Automatic batching and concurrency: The Haxl framework

All the existing concurrency abstractions in Haskell are explicit, meaning that the user needs to say which things should happen concurrently. For data-fetching, which can be considered a purely functional operation, we wanted a programming model in which the system just exploits whatever concurrency is available, without the programmer having to use explicit concurrency constructs. We developed the Haxl framework to address this issue: Haxl enables multiple data-fetching operations to be automatically batched and executed concurrently.

We discussed Haxl in an earlier blog post, and we published a paper on Haxl at the ICFP 2014 conference. Haxl is open source and available on GitHub.
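The batching model can be sketched in miniature (a Python toy, purely illustrative and not Haxl's actual API): computations suspend whenever they request data, and a scheduler runs them in rounds, issuing all outstanding fetches of a round as one batch and deduplicating repeated keys.

```python
def run(tasks, fetch_batch):
    """Run generator-based tasks; each `yield key` suspends the task until
    that key's value is available. All keys requested in the same round are
    fetched with a single call to fetch_batch, and duplicates are fetched once."""
    results = {}
    cache = {}
    pending = dict(enumerate(tasks))
    sendval = {i: None for i in pending}
    while pending:
        requests = {}
        wanted = set()
        for i, task in list(pending.items()):
            try:
                key = task.send(sendval[i])
            except StopIteration as stop:
                results[i] = stop.value      # task finished
                del pending[i]
            else:
                requests[i] = key
                if key not in cache:
                    wanted.add(key)
        if wanted:
            cache.update(fetch_batch(sorted(wanted)))  # one batched round-trip
        for i, key in requests.items():
            sendval[i] = cache[key]
    return [results[i] for i in range(len(tasks))]
```

Here a "task" is any generator that yields the keys it needs and returns a result; two tasks that both ask for the same key trigger only one fetch, mirroring Haxl's round-based batching and caching.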

In addition to the Haxl framework, we needed help from the Haskell compiler in the form of Applicative do-notation, which allows programmers to write sequences of statements that the compiler automatically rearranges to exploit concurrency. We designed and implemented Applicative do-notation in GHC.

## Hot-swapping of compiled code

Every time someone checks new code into the repository of policies, we want to have that code running on every machine in the Sigma fleet as quickly as possible. Haskell is a compiled language, so that involves compiling the code and distributing the new compiled code to all the machines running Sigma.

We want to update the compiled rules in a running Sigma process on the fly, while it is serving requests. Changing the code of a running program is a tricky problem in general, and it has been the subject of a great deal of research in the academic community. In our case, fortunately, the problem is simpler: Requests to Sigma are short-lived, so we don’t need to switch a running request to new code. We can serve new requests on the new code and let the existing requests finish before we discard the old code. We’re careful to ensure that we don’t change any code associated with persistent state in Sigma.

Loading and unloading code currently uses GHC’s built-in runtime linker, although in principle, we could use the system dynamic linker. To unload the old version of the code, the garbage collector gets involved. The garbage collector detects when old code is no longer being used by a running request, so we know when it is safe to unload it from the running process.
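The request-draining scheme can be sketched as follows (a Python toy, purely illustrative; Sigma's real mechanism is GHC's runtime linker plus the garbage collector, not anything like this): each request pins the code version it started with, new requests pick up the newest version, and an old version disappears once the last request referencing it finishes.

```python
import threading

class CodeSwapper:
    """Toy sketch of hot swapping: new requests run the newest code version;
    an old version is dropped once no running request still references it
    (here via Python's ordinary reference counting)."""

    def __init__(self, handler):
        self._lock = threading.Lock()
        self._handler = handler

    def swap(self, new_handler):
        # Old code is not torn down eagerly; it is reclaimed when the last
        # in-flight request holding a reference to it completes.
        with self._lock:
            self._handler = new_handler

    def serve(self, request):
        with self._lock:
            handler = self._handler  # each request pins the version it started with
        return handler(request)
```

This works only because requests are short-lived, which is exactly the property the post relies on: no running request ever needs to be migrated to new code.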

Haskell is sandwiched between two layers of C++ in Sigma. At the top, we use the C++ Thrift server. In principle, Haskell can act as a Thrift server, but the C++ Thrift server is more mature and performant, and it supports more features. It can also work seamlessly with the Haskell layers below, because we can call into Haskell from C++. For these reasons, it made sense to use C++ for the server layer.

At the lowest layer, we have existing C++ client code for talking to other internal services. Rather than rewrite this code in Haskell, which would duplicate the functionality and create an additional maintenance burden, we wrapped each C++ client in a Haxl data source using Haskell’s Foreign Function Interface (FFI) so we could use it from Haskell.

Haskell’s FFI is designed to call C rather than C++, so calling C++ requires an intermediate C layer. In most cases, we were able to avoid the intermediate C layer by using a compile-time tool that demangles C++ function names so they can be called directly from Haskell.

## Performance

Perhaps the biggest question here is “Does it run fast enough?” Requests to Sigma result from users performing actions on Facebook, such as sending a message on Messenger, and Sigma must respond before the action can take place. So we wanted to serve requests fast enough to avoid interruptions to the user experience.

The graph below shows the relative throughput performance between FXL and Haskell for the 25 most common types of requests served by Sigma (these requests account for approximately 95 percent of Sigma’s typical workload).

Haskell performs as much as three times faster than FXL for certain requests. On a typical workload mix, we measured a 20 percent to 30 percent improvement in overall throughput, meaning we can serve 20 percent to 30 percent more traffic with the same hardware. We believe additional improvements are possible through performance analysis, tuning, and optimizing the GHC runtime for our workload.

Achieving this level of performance required a lot of hard work profiling the Haskell code and identifying and resolving performance bottlenecks.

Here are a few specific things we did:

• We implemented automatic memoization of top-level computations using a source-to-source translator. This is particularly beneficial in our use case, where multiple policies can refer to the same shared value, which we want to compute only once. Note that this is per-request memoization rather than the global memoization that lazy evaluation already provides.
• We made a change to the way GHC manages the heap, to reduce the frequency of garbage collections on multicore machines. GHC’s default heap settings are frugal, so we also use a larger allocation area size of at least 64 MB per core.
• Fetching remote data usually involves marshaling the data structure across the C++/Haskell boundary. If the whole data structure isn’t required, it is better to marshal only the pieces needed. Or better still, don’t fetch the whole thing — although that’s only possible if the remote service implements an appropriate API.
• We uncovered a nasty performance bug in aeson, the Haskell JSON parsing library. Bryan O’Sullivan, the author of aeson, wrote a nice blog post about how he fixed it. It turns out that when you do things at Facebook scale, those one-in-a-million corner cases tend to crop up all the time.
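The per-request memoization in the first bullet can be sketched in Python (purely illustrative; Sigma's real mechanism is a source-to-source translator over Haskell): each request carries its own cache, so a shared value is computed once per request rather than once globally.

```python
class Request:
    """Toy request object carrying a per-request memo cache."""
    def __init__(self):
        self.cache = {}

def memoized_per_request(fn):
    """Memoize fn within a single request: the cache lives on the request,
    so it is discarded when the request ends (unlike global memoization)."""
    def wrapper(request, *args):
        key = (fn.__name__, args)
        if key not in request.cache:
            request.cache[key] = fn(request, *args)
        return request.cache[key]
    return wrapper
```

Two policies evaluated for the same request that both need the same shared value pay for it once; a new request recomputes it, so stale values never leak across requests.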

## Resource limits

In a latency-sensitive service, you don’t want a single request using a lot of resources and slowing down other requests on the same machine. In this case, the “resources” include everything on the machine that is shared by the running requests — CPU, memory, network bandwidth, and so on.

A request that uses a lot of resources is normally a bug that we want to fix. It does happen from time to time, often as a result of a condition that occurs in production that wasn’t encountered during testing — perhaps an innocuous operation provided with some unexpectedly large input data, or pathological performance of an algorithm on certain rare inputs, for example. When this happens, we want Sigma to terminate the affected request with an error (that will subsequently result in the bug being fixed) and continue without any impact on the performance of other requests being served.

To make this possible, we implemented allocation limits in GHC, which place a bound on the amount of memory a thread can allocate before it is terminated. Terminating a computation safely is a hard problem in general, but Haskell provides a safe way to abort a computation in the form of asynchronous exceptions. Asynchronous exceptions allow us to write most of our code ignoring the potential for summary termination and still have all the guarantees that we need in the event that the limit is hit, including the safe release of resources, closing of network connections, and so forth.

The following graph illustrates how well allocation limits work in practice. It tracks the maximum live memory across various groups of machines in the Sigma fleet. When we enabled one request that had some resource-intensive outliers, we saw large spikes in the maximum live memory, which disappeared when we enabled allocation limits.

## Enabling interactive development

Facebook engineers develop policies interactively, testing code against real data as they go. To enable this workflow in Haskell, we needed the GHCi environment to work with our full stack, including making requests to other back-end services from the command line.

To make this work, we had to make our build system link all the C++ dependencies of our code into a shared library that GHCi could load. We also customized the GHCi front end to implement some of our own commands and streamline the desired workflows. The result is an interactive environment in which developers can load their code from source in a few seconds and work on it with a fast turnaround time. They have the full set of APIs available and can test against real production data sources.

While GHCi isn’t as easy to customize as it could be, we’ve already made several improvements and contributed them upstream. We hope to make more improvements in the future.

## Packages and build systems

In addition to GHC itself, we make use of a lot of open-source Haskell library code. Haskell has its own packaging and build system, Cabal, and the open-source packages are all hosted on Hackage. The problem with this setup is that the pace of change on Hackage is fast, there are often breakages, and not all combinations of packages work well together. The version-dependency system in Cabal relies heavily on package authors getting it right, which is hard to ensure, and the tool support isn't what it could be. We found that using packages directly from Hackage together with Facebook's internal build tools meant that adding or updating a package sometimes led to a yak-shaving exercise involving a cascade of updates to other packages, often with an element of trial and error to find the right version combinations.

As a result of this experience, we switched to Stackage as our source of packages. Stackage provides a set of package versions that are known to work together, freeing us from the problem of having to find the set by trial and error.

## Did we find bugs in GHC?

Yes, most notably:

• We fixed a bug in GHC’s garbage collector that was causing our Sigma processes to crash every few hours. The bug had gone undetected in GHC for several years.
• We fixed a bug in GHC’s handling of finalizers that occasionally caused crashes during process shutdown.

Following these fixes, we haven’t seen any crashes in either the Haskell runtime or the Haskell code itself across our whole fleet.

## What else?

At Facebook, we’re using Haskell at scale to fight spam and other types of abuse. We’ve found it to be reliable and performant in practice. Using the Haxl framework, our engineers working on spam fighting can focus on functionality rather than on performance, while the system can exploit the available concurrency automatically.

For more information on spam fighting at Facebook, check out our Protect the Graph page, or watch videos from our recent Spam Fighting @Scale event.

# Maximum Likelihood Estimation

## (I) Basic idea

Suppose we observe samples $x_{1},\dots,x_{n}$ drawn from a distribution with density $f_{D}$ depending on a parameter $\theta$, so that their joint density is

$P(x_{1},...,x_{n}) = f_{D}(x_{1},...,x_{n}|\theta)$.

Viewed as a function of the parameter, this joint density defines the likelihood function

$L(\theta) = f_{D}(x_{1},...,x_{n}|\theta)$,

and the maximum likelihood estimate is a value of $\theta$ that maximizes $L(\theta)$.

Remark. The maximum likelihood estimate is not necessarily unique, and it need not even exist.

## (II) Basic algorithm

(1) Write down the likelihood function.

(2) Differentiate the likelihood function, or, usually more conveniently, its logarithm.

(3) Set the first derivative to zero to obtain the likelihood equation for the parameter $\theta$.

(4) Solve the likelihood equation; the solution is the maximum likelihood estimate. If the equation cannot be solved directly, Newton's method can be used to solve it approximately.

## (III) Examples

### (i) Bernoulli Distribution

Each observation has density

$f(x_{i};p)=p^{x_{i}}(1-p)^{1-x_{i}}, \text{ for } x_{i}=0 \text{ or } 1 \text{ and } 0<p<1$.

$L(p)=\prod_{i=1}^{n}f(x_{i};p)=p^{\sum_{i=1}^{n}x_{i}}(1-p)^{n-\sum_{i=1}^{n}x_{i}}$.

$\ln(L(p))=(\sum_{i=1}^{n}x_{i})\ln(p) + (n-\sum_{i=1}^{n}x_{i})\ln(1-p)$

$\frac{\partial\ln(L(p))}{\partial p} = \frac{\sum_{i=1}^{n}x_{i}}{p}-\frac{n-\sum_{i=1}^{n}x_{i}}{1-p}$

Setting $\frac{\partial\ln(L(p))}{\partial p} = 0$, we obtain

$p=\sum_{i=1}^{n}x_{i}/n$,

so the maximum likelihood estimator is

$\hat{p}=\sum_{i=1}^{n}x_{i}/n$.
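A quick numeric sanity check of this estimator on illustrative data: the sample mean maximizes the log-likelihood, beating nearby values of $p$.

```python
import math

# Sanity check of the Bernoulli MLE derived above: p_hat is the sample mean,
# and the log-likelihood at p_hat exceeds that at nearby values of p.
def log_likelihood(p, xs):
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in xs)

xs = [1, 0, 1, 1, 0, 1, 0, 1]   # illustrative sample: 5 ones, 3 zeros
p_hat = sum(xs) / len(xs)       # 5/8 = 0.625
```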

### （ii）Gaussian Distribution

$f(x_{i};\mu,\sigma^{2}) = \frac{1}{\sqrt{2\pi}\sigma}\exp(-\frac{(x_{i}-\mu)^{2}}{2\sigma^{2}})$.

$L(\mu,\sigma) = \prod_{i=1}^{n} f(x_{i};\mu,\sigma^{2}) = (2\pi)^{-n/2}\sigma^{-n}\exp(-\frac{\sum_{i=1}^{n}(x_{i}-\mu)^{2}}{2\sigma^{2}})$

$\ln(L(\mu,\sigma)) = -\frac{n}{2}\ln(2\pi) - n\ln(\sigma) - \frac{\sum_{i=1}^{n}(x_{i}-\mu)^{2}}{2\sigma^{2}}$

$\frac{\partial \ln(L(\mu,\sigma))}{\partial \mu} = \frac{\sum_{i=1}^{n}(x_{i}-\mu)}{\sigma^{2}}$

$\frac{\partial \ln(L(\mu,\sigma))}{\partial \sigma} = -\frac{n}{\sigma} + \frac{\sum_{i=1}^{n}(x_{i}-\mu)^{2}}{\sigma^{3}}$

Setting $\frac{\partial \ln(L(\mu,\sigma))}{\partial \mu} =0$ and $\frac{\partial \ln(L(\mu,\sigma))}{\partial \sigma}=0$ and solving the resulting system of equations, we obtain:

$\hat{\mu}= \sum_{i=1}^{n}x_{i}/n$,

$\hat{\sigma}^{2} = \sum_{i=1}^{n}(x_{i}-\hat{\mu})^{2}/n$.
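A quick numeric check of these two formulas on illustrative data:

```python
# Numeric check of the Gaussian MLEs above: the sample mean and the biased
# (divide-by-n, not n-1) sample variance.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(xs)
mu_hat = sum(xs) / n                                  # sample mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n   # 1/n sample variance
```

Note the $1/n$ rather than $1/(n-1)$: the maximum likelihood estimator of the variance is biased.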

### (iii) Weibull Distribution

$f(x;\lambda,k) = \frac{k}{\lambda}(\frac{x}{\lambda})^{k-1}\exp(-(x/\lambda)^{k}) \text{ for } x\geq 0, \quad f(x;\lambda,k)=0 \text{ for } x<0.$

The cumulative distribution function of the Weibull distribution is

$F(x;k,\lambda) = 1- \exp(-(x/\lambda)^{k}) \text{ for } x\geq 0$,

$F(x;k,\lambda) = 0 \text{ for } x<0$.

The quantile function (inverse cumulative distribution function) of the Weibull distribution is

$Q(p;k,\lambda) = \lambda(-\ln(1-p))^{1/k} \text{ for } 0\leq p <1$.

For i.i.d. samples $x_{1},\dots,x_{n}$ (all positive), each observation has density

$p(x_{i};k,\lambda) = \frac{k}{\lambda}(\frac{x_{i}}{\lambda})^{k-1}\exp(-(\frac{x_{i}}{\lambda})^{k})$.

$L(k,\lambda) = \prod_{i=1}^{n}p(x_{i};k,\lambda)$

$\ln(L(k,\lambda)) = n\ln(k) - nk\ln(\lambda) + (k-1)\sum_{i=1}^{n}\ln(x_{i}) - \sum_{i=1}^{n}x_{i}^{k}/\lambda^{k}.$

$\frac{\partial \ln(L(k,\lambda))}{\partial \lambda} = - \frac{nk}{\lambda} + \frac{k\sum_{i=1}^{n}x_{i}^{k}}{\lambda^{k+1}},$

$\frac{\partial \ln(L(k,\lambda))}{\partial k} = \frac{n}{k} - n\ln(\lambda) + \sum_{i=1}^{n}\ln(x_{i}) -\sum_{i=1}^{n}(\frac{x_{i}}{\lambda})^{k}\ln(\frac{x_{i}}{\lambda}).$

Setting $\frac{\partial \ln(L(k,\lambda))}{\partial \lambda} = 0$ gives

$\lambda^{k}=\frac{\sum_{i=1}^{n}x_{i}^{k}}{n},$

and substituting this into $\frac{\partial \ln(L(k,\lambda))}{\partial k} = 0$ yields an equation in $k$ alone:

$\frac{1}{k} = \frac{\sum_{i=1}^{n}x_{i}^{k}\ln(x_{i})}{\sum_{i=1}^{n}x_{i}^{k}} -\frac{\sum_{i=1}^{n}\ln(x_{i})}{n}.$

To solve this equation numerically, define

$f(k) = \frac{\sum_{i=1}^{n}x_{i}^{k}\ln(x_{i})}{\sum_{i=1}^{n}x_{i}^{k}} - \frac{\sum_{i=1}^{n}\ln(x_{i})}{n} - \frac{1}{k}.$

Its derivative is

$f'(k)= \frac{\sum_{i=1}^{n}x_{i}^{k}(\ln(x_{i}))^{2}}{\sum_{i=1}^{n}x_{i}^{k}}-(\frac{\sum_{i=1}^{n}x_{i}^{k}\ln(x_{i})}{\sum_{i=1}^{n}x_{i}^{k}})^{2} + \frac{1}{k^{2}}.$

By the Cauchy–Schwarz inequality,

$(\sum_{i=1}^{n}x_{i}^{k}(\ln(x_{i}))^{2})\cdot(\sum_{i=1}^{n}x_{i}^{k})\geq (\sum_{i=1}^{n}x_{i}^{k}\ln(x_{i}))^{2},$

so $f'(k)>0$ and $f$ is strictly increasing.

Since

$\lim_{k\rightarrow 0^{+}}f(k) = -\infty,$

$\lim_{k\rightarrow +\infty}f(k) > 0 \text{ if } \forall x_{i}>1,$

the equation $f(k)=0$ has a unique root, which can be found by Newton's method.

Starting from the initial guess

$k_{0}= 0.0001,$

iterate

$k_{m+1} = k_{m}- \frac{f(k_{m})}{f'(k_{m})} \text{ for all } m\geq 0.$

When the number of iterations $m$ is large enough, $k_{m}$ can be taken as the maximum likelihood estimate of $k$, and $\lambda$ then follows from $\lambda^{k}=\sum_{i=1}^{n}x_{i}^{k}/n$.
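The whole procedure can be sketched in Python. This is a sketch under the assumptions above (positive samples, small positive start $k_0$); a crude safeguard is added because plain Newton can overstep.

```python
import math

# Newton iteration for the Weibull shape parameter k, following the
# derivation above; the scale lambda then comes from lambda^k = sum(x_i^k)/n.
def weibull_mle(xs, k0=0.0001, tol=1e-12, max_iter=500):
    n = len(xs)
    logs = [math.log(x) for x in xs]
    mean_log = sum(logs) / n
    k = k0
    for _ in range(max_iter):
        xk = [x ** k for x in xs]
        s0 = sum(xk)
        s1 = sum(v * l for v, l in zip(xk, logs))
        s2 = sum(v * l * l for v, l in zip(xk, logs))
        f = s1 / s0 - mean_log - 1.0 / k               # f(k) as defined above
        fp = s2 / s0 - (s1 / s0) ** 2 + 1.0 / k ** 2   # f'(k) > 0 by Cauchy-Schwarz
        k_new = k - f / fp
        if k_new <= 0:          # crude safeguard: plain Newton can overstep
            k_new = k / 2
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    lam = (sum(x ** k for x in xs) / n) ** (1.0 / k)
    return k, lam
```

On data generated from the Weibull quantile function above with known parameters, the iteration recovers values close to the true shape and scale.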

# How machine learning can help the security industry

Machine learning (ML) is such a hot area in security right now.

At the 2016 RSA Conference, you would be hard pressed to find a company that is not claiming to use ML for security. And why not? To the layperson, ML seems like the magic solution to all security problems. Take a bunch of unlabeled data, pump it through a system with some ML magic inside, and it can somehow identify patterns even human experts can’t find — all while learning and adapting to new behaviors and threats. Rather than having to code the rules, these systems can discover the rules all by themselves.

Oh, if only that were the case! ML is this year’s “big data”: Everyone is claiming to do it, but few actually do it right or even understand what it’s good for. Especially in security, I’ve seen more misapplications than appropriate ones.

Most applications of ML in security use a form of anomaly detection, which is used to spot events that do not match an expected pattern. Anomaly detection is a useful technique in certain circumstances, but too often, vendors misapply it. For example, they will claim to analyze network traffic in an enterprise and use ML to find hackers in your network. This does not work, and you should be immediately skeptical of the vendors who make this claim.

Effective machine learning requires a low-dimensional problem with high-quality labeled data. Unfortunately, real enterprise deployments have neither. Detecting novel attacks requires either clear, labeled examples of attacks, which by definition you do not have, or a complete, exhaustive understanding of “normal” network behavior, which is impossible for any real network. And any sophisticated attacker will make an attack appear as seamless and “typical” as possible, to avoid setting off alarms.

### Where does ML work?

One example where ML and anomaly detection can actually work well for security is in classifying human behavior. Humans, it turns out, are fairly predictable, and it is possible to build fairly accurate models of individual user behavior and detect when current activity doesn’t match that normal behavior.

We’ve had success in using ML for implicit authentication via analyzing a user’s biometrics, behavior, and environment. Implicit authentication is a technique that allows users to authenticate without performing any explicit actions like entering a password or swiping a fingerprint. This has clear benefits for both user experience and security. Users don’t need to be bothered with extra steps, we can use many authentication factors (rather than just one, such as a password), and authentication can happen continuously in the background.

Implicit authentication is well-suited to ML because most of the factors are low dimensional, meaning they involve a small number of parameters, and you can passively gather high-quality labeled data about user identities. Much like ML is effective in matching images for computer vision even in the presence of variance and noise, it is also effective in matching unique human behavioral aspects.

One example of this technology is authenticating users based on unique aspects of the way they move. Attributes of the way you walk, sit, and stand are influenced by a large number of factors (including physiology, age, gender, and muscle memory), but are largely consistent for an individual. It is actually possible to detect some of these attributes accurately from the motion sensors of the phone in your pocket. In fact, after four seconds of motion data from a phone in your pocket, we can detect enough of these attributes to identify you. Another example is using a user’s location history to authenticate them. Humans are creatures of habit, and by looking at where a user came from and when, we can estimate whether it is really them.

There are enough sensors in phones and computers (and more recently, wearables and IoT devices) that it is possible to passively pick up a large number of unique attributes about a user’s behavior and environment. We can then use ML to build a unique model for an individual user and find correlations between factors.
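As a toy illustration of this kind of per-user model (entirely hypothetical, not any vendor's system), one can fit a Gaussian to each behavioral feature from samples of the real user and flag activity whose average z-score across features is too large:

```python
import math

# Hypothetical sketch of a per-user behavioral model: per-feature Gaussians
# fit on samples of the real user; new activity is scored by its mean
# absolute z-score across features.
class UserModel:
    def __init__(self, samples):
        n = len(samples)
        dims = len(samples[0])
        self.mean = [sum(s[d] for s in samples) / n for d in range(dims)]
        self.std = [max(1e-9, math.sqrt(sum((s[d] - self.mean[d]) ** 2
                                            for s in samples) / n))
                    for d in range(dims)]

    def score(self, x):
        # mean absolute z-score across features
        return sum(abs((x[d] - self.mean[d]) / self.std[d])
                   for d in range(len(x))) / len(x)

    def is_anomalous(self, x, threshold=3.0):
        return self.score(x) > threshold
```

The threshold is the knob discussed below: raising it trades false positives for false negatives, which is acceptable here because a false negative just means asking the user to authenticate another way.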

### Threat models and anomaly detection

In any security system, it is important to understand the threat models you are trying to protect against. When using ML for security, you need to explicitly gather data, model the threats your system is protecting against, and use the model to train your system. Fortunately, for attacks against authentication, it is often possible to detect behavioral changes. For example, when a device is stolen, there are often clear changes in terms of its movement, location, and usage. And because false negatives are acceptable in that they just require the user to re-authenticate with a different method, we can tune the system to minimize false positives. In fact, once we combine four factors across multiple devices, we can get below a 0.001 percent false positive rate on implicit authentication.

There is no magic machine learning genie that can solve all your security problems. Building an effective security product that uses ML requires a deep understanding of the underlying system, and many security problems are just not appropriate for ML. For those that are, it’s a very powerful technique. And don’t worry, the companies on the hype train will soon move on to newer fads, like mobile self-driving AR blockchain drone marketplaces.

# Hausdorff dimension of the graphs of the classical Weierstrass functions

In this paper, we obtain the explicit value of the Hausdorff dimension of the graphs of the classical Weierstrass functions, by proving absolute continuity of the SRB measures of the associated solenoidal attractors.

1. Introduction

In Real Analysis, the classical Weierstrass function is

$\displaystyle W_{\lambda,b}(x) = \sum\limits_{n=0}^{\infty} \lambda^n \cos(2\pi b^n x)$

with ${1/b < \lambda < 1}$.

Note that the Weierstrass functions have the form

$\displaystyle f^{\phi}_{\lambda,b}(x) = \sum\limits_{n=0}^{\infty} \lambda^n \phi(b^n x)$

where ${\phi}$ is a ${\mathbb{Z}}$-periodic ${C^2}$-function.

Weierstrass (1872) and Hardy (1916) were interested in ${W_{\lambda,b}}$ because they are concrete examples of continuous but nowhere differentiable functions.

Remark 1 The graph of ${f^{\phi}_{\lambda,b}}$ tends to be a “fractal object” because ${f^{\phi}_{\lambda,b}}$ is self-similar in the sense that

$\displaystyle f^{\phi}_{\lambda, b}(x) = \phi(x) + \lambda f^{\phi}_{\lambda,b}(bx)$

We will come back to this point later.

Remark 2 ${f^{\phi}_{\lambda,b}}$ is a ${C^{\alpha}}$-function for all ${0\leq \alpha < \frac{-\log\lambda}{\log b}}$. In fact, for all ${x,y\in[0,1]}$, we have

$\displaystyle \frac{f^{\phi}_{\lambda, b}(x) - f^{\phi}_{\lambda,b}(y)}{|x-y|^{\alpha}} = \sum\limits_{n=0}^{\infty} \lambda^n b^{n\alpha} \left(\frac{\phi(b^n x) - \phi(b^n y)}{|b^n x - b^n y|^{\alpha}}\right),$

so that

$\displaystyle \frac{f^{\phi}_{\lambda, b}(x) - f^{\phi}_{\lambda,b}(y)}{|x-y|^{\alpha}} \leq \|\phi\|_{C^{\alpha}} \sum\limits_{n=0}^{\infty}(\lambda b^{\alpha})^n:=C(\phi,\alpha,\lambda,b) < \infty$

whenever ${\lambda b^{\alpha} < 1}$, i.e., ${\alpha < -\log\lambda/\log b}$.

The study of the graphs of ${W_{\lambda,b}}$ as fractal sets started with the work of Besicovitch-Ursell in 1937.

Remark 3 The Hausdorff dimension of the graph of a ${C^{\alpha}}$-function ${f:[0,1]\rightarrow\mathbb{R}}$ is

$\displaystyle \textrm{dim}(\textrm{graph}(f))\leq 2 - \alpha$

Indeed, for each ${n\in\mathbb{N}}$, the Hölder continuity condition

$\displaystyle |f(x)-f(y)|\leq C|x-y|^{\alpha}$

leads us to the “natural cover” of ${G=\textrm{graph}(f)}$ by the family ${(R_{j,n})_{j=1}^n}$ of rectangles given by

$\displaystyle R_{j,n}:=\left[\frac{j-1}{n}, \frac{j}{n}\right] \times \left[f(j/n)-\frac{C}{n^{\alpha}}, f(j/n)+\frac{C}{n^{\alpha}}\right]$

Nevertheless, a direct calculation with the family ${(R_{j,n})_{j=1}^n}$ does not give us an appropriate bound on ${\textrm{dim}(G)}$. In fact, since ${\textrm{diam}(R_{j,n})\leq 4C/n^{\alpha}}$ for each ${j=1,\dots, n}$, we have

$\displaystyle \sum\limits_{j=1}^n\textrm{diam}(R_{j,n})^d\leq n\left(\frac{4C}{n^{\alpha}}\right)^d = (4C)^{1/\alpha} < \infty$

for ${d=1/\alpha}$. Because ${n\in\mathbb{N}}$ is arbitrary, we deduce that ${\textrm{dim}(G)\leq 1/\alpha}$. Of course, this bound is certainly suboptimal for ${\alpha<1/2}$ (because we know that ${\textrm{dim}(G)\leq 2 < 1/\alpha}$ anyway).

Fortunately, we can refine the covering ${(R_{j,n})}$ by taking into account that each rectangle ${R_{j,n}}$ tends to be more vertical than horizontal (i.e., its height ${2C/n^{\alpha}}$ is usually larger than its width ${1/n}$). More precisely, we can divide each rectangle ${R_{j,n}}$ into ${\lfloor n^{1-\alpha}\rfloor}$ squares, say

$\displaystyle R_{j,n} = \bigcup\limits_{k=1}^{\lfloor n^{1-\alpha}\rfloor}Q_{j,n,k},$

such that every square ${Q_{j,n,k}}$ has diameter ${\leq 2C/n}$. In this way, we obtain a covering ${(Q_{j,n,k})}$ of ${G}$ such that

$\displaystyle \sum\limits_{j=1}^n\sum\limits_{k=1}^{\lfloor n^{1-\alpha}\rfloor} \textrm{diam}(Q_{j,n,k})^d \leq n\cdot n^{1-\alpha}\cdot\left(\frac{2C}{n}\right)^d\leq (2C)^{2-\alpha}<\infty$

for ${d=2-\alpha}$. Since ${n\in\mathbb{N}}$ is arbitrary, we conclude the desired bound

$\displaystyle \textrm{dim}(G)\leq 2-\alpha$

A long-standing conjecture about the fractal geometry of ${W_{\lambda,b}}$ is:

Conjecture (Mandelbrot 1977): The Hausdorff dimension of the graph of ${W_{\lambda,b}}$ is

$\displaystyle 1<\textrm{dim}(\textrm{graph}(W_{\lambda,b})) = 2 + \frac{\log\lambda}{\log b} < 2$
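As a quick numeric aside (not part of the conjecture itself), the conjectured value is easy to evaluate, and in the regime ${1/b < \lambda < 1}$ it always lies strictly between 1 and 2:

```python
import math

# Evaluate the conjectured Hausdorff dimension D = 2 + log(lambda)/log(b).
# For 1/b < lambda < 1 we have -1 < log(lambda)/log(b) < 0, hence 1 < D < 2.
def conjectured_dim(lam, b):
    return 2 + math.log(lam) / math.log(b)
```

For instance, ${\lambda=2^{-1/2}}$ and ${b=2}$ give dimension exactly ${3/2}$.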

Remark 4 In view of remarks 2 and 3, the whole point of Mandelbrot’s conjecture is to establish the lower bound

$\displaystyle \textrm{dim}(\textrm{graph}(W_{\lambda,b})) \geq 2 + \frac{\log\lambda}{\log b}$

Remark 5 The analog of Mandelbrot's conjecture for the box and packing dimensions is known to be true: see, e.g., these papers here and here.

In a recent paper (see here), Shen proved the following result:

Theorem 1 (Shen) For any ${b\geq 2}$ integer and for all ${1/b < \lambda < 1}$, the Mandelbrot conjecture is true, i.e.,

$\displaystyle \textrm{dim}(\textrm{graph}(W_{\lambda,b})) = 2 + \frac{\log\lambda}{\log b}$

Remark 6 The techniques employed by Shen also allow him to show that given ${\phi:\mathbb{R}\rightarrow\mathbb{R}}$ a ${\mathbb{Z}}$-periodic, non-constant, ${C^2}$ function, and given ${b\geq 2}$ integer, there exists ${K=K(\phi,b)>1}$ such that

$\displaystyle \textrm{dim}(\textrm{graph}(f^{\phi}_{\lambda,b})) = 2 + \frac{\log\lambda}{\log b}$

for all ${1/K < \lambda < 1}$.

Remark 7 A previous important result towards Mandelbrot’s conjecture was obtained by Barański-Bárány-Romanowska (in 2014): they proved that for all ${b\geq 2}$ integer, there exists ${1/b < \lambda_b < 1}$ such that

$\displaystyle \textrm{dim}(\textrm{graph}(W_{\lambda,b})) = 2 + \frac{\log\lambda}{\log b}$

for all ${\lambda_b < \lambda < 1}$.

The remainder of this post is dedicated to giving some ideas of Shen’s proof of Theorem 1 by discussing the particular case when ${1/b<\lambda<2/b}$ and ${b\in\mathbb{N}}$ is large.

2. Ledrappier’s dynamical approach

If ${b\geq 2}$ is an integer, then the self-similar function ${f^{\phi}_{\lambda,b}}$ (cf. Remark 1) is also ${\mathbb{Z}}$-periodic, i.e., ${f^{\phi}_{\lambda,b}(x+1) = f^{\phi}_{\lambda,b}(x)}$ for all ${x\in\mathbb{R}}$. In particular, if ${b\geq 2}$ is an integer, then ${\textrm{graph}(f^{\phi}_{\lambda,b})}$ is an invariant repeller for the endomorphism ${\Phi:\mathbb{R}/\mathbb{Z}\times\mathbb{R}\rightarrow \mathbb{R}/\mathbb{Z}\times\mathbb{R}}$ given by

$\displaystyle \Phi(x,y) = \left(bx\textrm{ mod }1, \frac{y-\phi(x)}{\lambda}\right)$

This dynamical characterization of ${G = \textrm{graph}(f^{\phi}_{\lambda,b})}$ led Ledrappier to the following criterion for the validity of Mandelbrot’s conjecture when ${b\geq 2}$ is an integer.

Denote by ${\mathcal{A}}$ the alphabet ${\mathcal{A}=\{0,\dots,b-1\}}$. The unstable manifolds of ${\Phi}$ through ${G}$ have slopes of the form

$\displaystyle (1,-\gamma \cdot s(x,u))$

where ${\frac{1}{b} < \gamma = \frac{1}{\lambda b} <1}$, ${x\in\mathbb{R}}$, ${u\in\mathcal{A}^{\mathbb{N}}}$, and

$\displaystyle s(x,u):=\sum\limits_{n=0}^{\infty} \gamma^n \phi'\left(\frac{x + u_1 + u_2 b + \dots + u_n b^{n-1}}{b^n}\right)$

In this context, the push-forwards ${m_x := (u\mapsto s(x,u))_*\mathbb{P}}$ of the Bernoulli measure ${\mathbb{P}}$ on ${\mathcal{A}^{\mathbb{N}}}$ (induced by the discrete measure assigning weight ${1/b}$ to each letter of the alphabet ${\mathcal{A}}$) play the role of conditional measures along vertical fibers of the unique Sinai-Ruelle-Bowen (SRB) measure ${\theta}$ of the expanding endomorphism ${T:\mathbb{R}/\mathbb{Z}\times\mathbb{R} \rightarrow \mathbb{R}/\mathbb{Z}\times\mathbb{R}}$,

$\displaystyle T(x,y) = (bx\textrm{ mod }1, \gamma y + \psi(x)),$

where ${\gamma=1/\lambda b}$ and ${\psi(x)=\phi'(x)}$. In plain terms, this means that

$\displaystyle \theta = \int_{\mathbb{R}/\mathbb{Z}} m_x \, d\textrm{Leb}(x) \ \ \ \ \ (1)$

where ${\theta}$ is the unique ${T}$-invariant probability measure which is absolutely continuous along unstable manifolds (see Tsujii’s paper).

As was shown by Ledrappier in 1992, the fractal geometry of the conditional measures ${m_x}$ has important consequences for the fractal geometry of the graph ${G}$:

Theorem 2 (Ledrappier) Suppose that for Lebesgue almost every ${x\in\mathbb{R}}$ the conditional measures ${m_x}$ have dimension ${\textrm{dim}(m_x)=1}$, i.e.,

$\displaystyle \lim\limits_{r\rightarrow 0}\frac{\log m_x(B(z,r))}{\log r} = 1 \textrm{ for } m_x\textrm{-a.e. } z$

Then, the graph ${G=\textrm{graph}(f^{\phi}_{\lambda,b})}$ has Hausdorff dimension

$\displaystyle \textrm{dim}(G) = 2 + \frac{\log\lambda}{\log b}$
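As a quick arithmetic sanity check (not part of Ledrappier's argument), the formula ${\textrm{dim}(G) = 2 + \frac{\log\lambda}{\log b}}$ always yields a value strictly between ${1}$ and ${2}$ in the regime ${1/b<\lambda<1}$, since then ${-1<\log\lambda/\log b<0}$. The sample parameters below are illustrative:

```python
import math

def ledrappier_dimension(lam, b):
    # dim(G) = 2 + log(lambda)/log(b); since 1/b < lam < 1,
    # we have -1 < log(lam)/log(b) < 0, so the value lies in (1, 2).
    assert 1.0 / b < lam < 1.0
    return 2 + math.log(lam) / math.log(b)

# Illustrative parameters lambda = 0.7, b = 3:
d = ledrappier_dimension(0.7, 3)
assert 1 < d < 2  # the graph's dimension sits strictly between 1 and 2
```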

Remark 8 Very roughly speaking, the proof of Ledrappier's theorem goes as follows. By Remark 4, it suffices to prove that ${\textrm{dim}(G)\geq 2 + \frac{\log\lambda}{\log b}}$. By Frostman's lemma, we need to construct a Borel measure ${\nu}$ supported on ${G}$ such that

$\displaystyle \underline{\textrm{dim}}(\nu) := \textrm{ ess }\inf \underline{d}(\nu,x) \geq 2 + \frac{\log\lambda}{\log b}$

where ${\underline{d}(\nu,x):=\liminf\limits_{r\rightarrow 0}\log \nu(B(x,r))/\log r}$. Finally, the main point is that the assumptions of Ledrappier's theorem allow one to prove that the measure ${\mu^{\phi}_{\lambda, b}}$, given by the lift to ${G}$ of the Lebesgue measure on ${[0,1]}$ via the map ${x\mapsto (x,f^{\phi}_{\lambda,b}(x))}$, satisfies

$\displaystyle \underline{\textrm{dim}}(\mu^{\phi}_{\lambda,b}) \geq 2 + \frac{\log\lambda}{\log b}$

An interesting consequence of Ledrappier's theorem and equation (1) is the following criterion for Mandelbrot's conjecture:

Corollary 3 If ${\theta}$ is absolutely continuous with respect to the Lebesgue measure ${\textrm{Leb}_{\mathbb{R}^2}}$, then

$\displaystyle \textrm{dim}(G) = 2 + \frac{\log\lambda}{\log b}$

Proof: By (1), the absolute continuity of ${\theta}$ implies that ${m_x}$ is absolutely continuous with respect to ${\textrm{Leb}_{\mathbb{R}}}$ for Lebesgue almost every ${x\in\mathbb{R}}$.

Since ${m_x\ll \textrm{Leb}_{\mathbb{R}}}$ implies ${\textrm{dim}(m_x)=1}$, we conclude that ${\textrm{dim}(m_x)=1}$ for almost every ${x}$, and the desired corollary now follows from Ledrappier's theorem. $\Box$

3. Tsujii’s theorem

The relevance of Corollary 3 is explained by the fact that Tsujii found an explicit transversality condition implying the absolute continuity of ${\theta}$.

More precisely, Tsujii first introduced the following definition:

Definition 4

• Given ${\varepsilon>0}$, ${\delta>0}$ and ${x_0\in\mathbb{R}/\mathbb{Z}}$, we say that two infinite words ${u, v\in\mathcal{A}^{\mathbb{N}}}$ are ${(\varepsilon,\delta)}$-transverse at ${x_0}$ if either

$\displaystyle |s(x_0,u)-s(x_0,v)|>\varepsilon$

or

$\displaystyle |s'(x_0,u)-s'(x_0,v)|>\delta$

• Given ${q\in\mathbb{N}}$, ${\varepsilon>0}$, ${\delta>0}$ and ${x_0\in\mathbb{R}/\mathbb{Z}}$, we say that two finite words ${k,l\in\mathcal{A}^q}$ are ${(\varepsilon,\delta)}$-transverse at ${x_0}$ if ${ku}$, ${lv}$ are ${(\varepsilon,\delta)}$-transverse at ${x_0}$ for all pairs of infinite words ${u,v\in\mathcal{A}^{\mathbb{N}}}$; otherwise, we say that ${k}$ and ${l}$ are ${(\varepsilon,\delta)}$-tangent at ${x_0}$;
• ${E(q,x_0;\varepsilon,\delta):= \{(k,l)\in\mathcal{A}^q\times\mathcal{A}^q: (k,l) \textrm{ is } (\varepsilon,\delta)\textrm{-tangent at } x_0\}}$
• ${E(q,x_0):=\bigcap\limits_{\varepsilon>0}\bigcap\limits_{\delta>0} E(q,x_0;\varepsilon,\delta)}$;
• ${e(q,x_0):=\max\limits_{k\in\mathcal{A}^q}\#\{l\in\mathcal{A}^q: (k,l)\in E(q,x_0)\}}$
• ${e(q):=\max\limits_{x_0\in\mathbb{R}/\mathbb{Z}} e(q,x_0)}$.

Next, Tsujii proves the following result:

Theorem 5 (Tsujii) If there exists an integer ${q\geq 1}$ such that ${e(q)<(\gamma b)^q}$, then

$\displaystyle \theta\ll\textrm{Leb}_{\mathbb{R}^2}$

Remark 9 Intuitively, Tsujii’s theorem says the following. The transversality condition ${e(q)<(\gamma b)^q}$ implies that the majority of strong unstable manifolds ${\ell^{uu}}$ are mutually transverse, so that they almost fill a small neighborhood ${U}$ of some point ${x_0}$ (see the figure extracted from this paper of Tsujii). Since the SRB measure ${\theta}$ is absolutely continuous along strong unstable manifolds, the fact that the ${\ell^{uu}}$’s almost fill ${U}$ implies that ${\theta}$ becomes “comparable” to the restriction of the Lebesgue measure ${\textrm{Leb}_{\mathbb{R}^2}}$ to ${U}$.

Remark 10 In this setting, Barański-Barány-Romanowska obtained their main result by showing that, for adequate choices of the parameters ${\lambda}$ and ${b}$, one has ${e(1)=1}$. Indeed, once it is known that ${e(1)=1}$, since ${1<\gamma b}$, one can apply Tsujii’s theorem and Ledrappier’s theorem (or rather Corollary 3) to derive the validity of Mandelbrot’s conjecture for certain parameters ${\lambda}$ and ${b}$.

For the sake of exposition, we will give just a flavor of the proof of Theorem 1 by sketching the derivation of the following result:

Proposition 6 Let ${\phi(x) = \cos(2\pi x)}$. If ${1/2<\gamma=1/\lambda b <1}$ and ${b\in\mathbb{N}}$ is sufficiently large, then

$\displaystyle e(1)<\gamma b$

In particular, by Corollary 3 and Tsujii’s theorem, if ${1/2<\gamma=1/\lambda b <1}$ and ${b\in\mathbb{N}}$ is sufficiently large, then Mandelbrot’s conjecture is valid, i.e.,

$\displaystyle \textrm{dim}(W_{\lambda,b}) = 2+\frac{\log\lambda}{\log b}$

Remark 11 The proof of Theorem 1 in full generality (i.e., for ${b\geq 2}$ an integer and ${1/b<\lambda<1}$) requires the introduction of a modified version of Tsujii’s transversality condition: roughly speaking, Shen defines a function ${\sigma(q)\leq e(q)}$ (inspired by the Peter-Paul inequality) and proves

• (a) a variant of Proposition 6: if ${b\geq 2}$ is an integer and ${1/b<\lambda<1}$, then ${\sigma(q)<(\gamma b)^q}$ for some integer ${q}$;
• (b) a variant of Tsujii’s theorem: if ${\sigma(q)<(\gamma b)^q}$ for some integer ${q}$, then ${\theta\ll\textrm{Leb}_{\mathbb{R}^2}}$.

See Sections 2, 3, 4 and 5 of Shen’s paper for more details.

We start the (sketch of) proof of Proposition 6 by recalling that the slopes of unstable manifolds are given by

$\displaystyle s(x,u):=-2\pi\sum\limits_{n=0}^{\infty} \gamma^n \sin\left(2\pi\frac{x + u_1 + u_2 b + \dots + u_n b^{n-1}}{b^n}\right)$

for ${x\in\mathbb{R}}$, ${u\in\mathcal{A}^{\mathbb{N}}}$, so that

$\displaystyle s'(x,u)=-4\pi^2\sum\limits_{n=0}^{\infty} \left(\frac{\gamma}{b}\right)^n \cos\left(2\pi\frac{x + u_1 + u_2 b + \dots + u_n b^{n-1}}{b^n}\right)$

Remark 12 Since ${\gamma/b < \gamma}$, the series defining ${s'(x,u)}$ converges faster than the series defining ${s(x,u)}$.
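Remark 12 can be illustrated by comparing geometric tail bounds: since ${|\sin|,|\cos|\leq 1}$, the tail after ${N}$ terms of the ${s}$ series is ${O(\gamma^N/(1-\gamma))}$, while for the ${s'}$ series it is ${O((\gamma/b)^N/(1-\gamma/b))}$. A minimal numerical sketch, with illustrative parameters ${\gamma=0.8}$, ${b=10}$ not taken from the text:

```python
# Compare geometric tail bounds for the series defining s (ratio gamma)
# and s' (ratio gamma/b).  Illustrative parameters only.
GAMMA, B = 0.8, 10

def tail(ratio, n):
    # Tail bound of a geometric series with the given ratio after n terms.
    return ratio**n / (1 - ratio)

n = 20
slow = tail(GAMMA, n)       # tail bound for s
fast = tail(GAMMA / B, n)   # tail bound for s'
assert fast < slow * 1e-10  # s' converges many orders of magnitude faster
```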

By studying the first term of the expansion of ${s(x,u)}$ and ${s'(x,u)}$ (while treating the remaining terms as a “small error term”), it is possible to show that if ${(k,l)\in E(1,x_0)}$, then

$\displaystyle \left|\sin\left(2\pi\frac{x_0+k}{b}\right) - \sin\left(2\pi\frac{x_0+l}{b}\right)\right| \leq\frac{2\gamma}{1-\gamma} \ \ \ \ \ (2)$

and

$\displaystyle \left|\cos\left(2\pi\frac{x_0+k}{b}\right) - \cos\left(2\pi\frac{x_0+l}{b}\right)\right| \leq \frac{2\gamma}{b-\gamma} \ \ \ \ \ (3)$

(cf. Lemma 3.2 in Shen’s paper).

Using these estimates, we can find an upper bound for ${e(1)}$ as follows. Take ${x_0\in\mathbb{R}/\mathbb{Z}}$ with ${e(1)=e(1,x_0)}$, and let ${k\in\mathcal{A}}$ be such that ${(k,l_1),\dots,(k,l_{e(1)})\in E(1,x_0)}$ are distinct elements, listed in such a way that

$\displaystyle \sin(2\pi x_i)\leq \sin(2\pi x_{i+1})$

for all ${i=1,\dots,e(1)-1}$, where ${x_i:=(x_0+l_i)/b}$.

From (3), we see that

$\displaystyle \left|\cos\left(2\pi x_i\right) - \cos\left(2\pi x_{i+1}\right)\right| \leq \frac{4\gamma}{b-\gamma}$

for all ${i=1,\dots,e(1)-1}$.

Since

$\displaystyle (\cos(2\pi x_i)-\cos(2\pi x_{i+1}))^2 + (\sin(2\pi x_i)-\sin(2\pi x_{i+1}))^2 = 4\sin^2(\pi(x_i-x_{i+1}))\geq 4\sin^2(\pi/b),$

it follows that

$\displaystyle |\sin(2\pi x_i)-\sin(2\pi x_{i+1})|\geq \sqrt{4\sin^2\left(\frac{\pi}{b}\right) - \left(\frac{4\gamma}{b-\gamma}\right)^2} \ \ \ \ \ (4)$
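The trigonometric identity used in this step, ${(\cos a-\cos b)^2+(\sin a-\sin b)^2 = 4\sin^2\big(\frac{a-b}{2}\big)}$ (the squared chord length between two points on the unit circle), can be spot-checked numerically:

```python
import math

def chord_sq(a, b):
    # Squared distance between (cos a, sin a) and (cos b, sin b) on the
    # unit circle; it equals 4*sin((a-b)/2)**2 by the identity above.
    return (math.cos(a) - math.cos(b))**2 + (math.sin(a) - math.sin(b))**2

for a, b in [(0.3, 1.7), (2.0, -0.5), (5.1, 5.1)]:
    assert abs(chord_sq(a, b) - 4 * math.sin((a - b) / 2)**2) < 1e-12

# With a = 2*pi*x_i, b = 2*pi*x_{i+1} and x_i - x_{i+1} a nonzero multiple
# of 1/b, the right-hand side is at least 4*sin(pi/b)**2, as in the text.
```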

Now, we observe that

$\displaystyle \sqrt{4\sin^2\left(\frac{\pi}{b}\right) - \left(\frac{4\gamma}{b-\gamma}\right)^2} > \frac{4}{b} \ \ \ \ \ (5)$

for ${b}$ large enough. Indeed, this happens because

• ${\sqrt{z^2-w^2}>2(z-w)}$ if ${z+w>4(z-w)}$;
• ${z+w>4(z-w)}$ if ${z/w:=u < 5/3}$;
• ${\frac{2\sin(\frac{\pi}{b})}{\frac{4\gamma}{b-\gamma}}\rightarrow \frac{2\pi}{4\gamma} (< \frac{5}{3})}$ as ${b\rightarrow\infty}$, and ${2\sin(\frac{\pi}{b}) - \frac{4\gamma}{b-\gamma} \rightarrow (2\pi-4\gamma)\frac{1}{b} (>\frac{2}{b})}$ as ${b\rightarrow\infty}$ (here we used ${\gamma<1}$).
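Inequality (5) can also be checked directly for concrete parameters; the sketch below uses the illustrative choice ${\gamma=0.9}$ (any ${1/2<\gamma<1}$ works once ${b}$ is large enough):

```python
import math

GAMMA = 0.9  # illustrative choice of 1/2 < gamma < 1

def lhs_of_5(b):
    # Left-hand side of inequality (5).
    return math.sqrt(4 * math.sin(math.pi / b)**2
                     - (4 * GAMMA / (b - GAMMA))**2)

# (5) asserts lhs > 4/b for b large enough; it already holds at these sizes:
for b in [50, 100, 1000]:
    assert lhs_of_5(b) > 4 / b
```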

By combining (4) and (5), we deduce that

$\displaystyle |\sin(2\pi x_i)-\sin(2\pi x_{i+1})| > 4/b$

for all ${i=1,\dots, e(1)-1}$.

Since ${-1\leq\sin(2\pi x_1)\leq\sin(2\pi x_2)\leq\dots\leq\sin(2\pi x_{e(1)})\leq 1}$, the previous estimate implies that

$\displaystyle \frac{4}{b}(e(1)-1)<\sum\limits_{i=1}^{e(1)-1}(\sin(2\pi x_{i+1}) - \sin(2\pi x_i)) = \sin(2\pi x_{e(1)}) - \sin(2\pi x_1)\leq 2,$

i.e.,

$\displaystyle e(1)<1+\frac{b}{2}$

Thus, it follows from our assumptions (${\gamma>1/2}$, ${b}$ large) that

$\displaystyle e(1)<1+\frac{b}{2}<\gamma b$
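The concluding chain of inequalities is elementary: for any fixed ${\gamma>1/2}$, the gap ${\gamma b - (1+\frac{b}{2}) = (\gamma-\frac{1}{2})b - 1}$ grows linearly in ${b}$, so it is positive for ${b}$ large. A numerical check with the illustrative value ${\gamma=0.9}$:

```python
# For fixed gamma > 1/2, gamma*b - (1 + b/2) = (gamma - 1/2)*b - 1 grows
# linearly in b, so e(1) < 1 + b/2 < gamma*b once b is large enough.
GAMMA = 0.9  # illustrative

for b in [10, 100, 1000]:
    assert 1 + b / 2 < GAMMA * b
```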

This completes the (sketch of) proof of Proposition 6 (and our discussion of Shen’s talk).

# Why Are the Vast Majority of People "Low-Quality Hard Workers"?

(1) A blurry portrait of an ordinary "hard worker"

(2) Performing "diligence," or actually getting things done?

1. The IELTS exam you had planned for went unprepared, so you had to either skip it or sit it cold, and the plan to go abroad fell through;

2. The speaking and writing skills you had long meant to improve made no real progress, so that rare opportunity for public expression simply slipped away;

3. Even the long-awaited scene of "losing the weight, then confidently and boldly confessing to your crush" never materialized, for reasons everyone can surely guess.

(3) To escape real thinking, most people are willing to do anything

(4) Only "deep thinking" brings the "cognitive upgrade" that makes a "high-quality hard worker"

"Why was the ratio of pistols to rifles among the captured weapons slightly higher in that battle than in others?"

"Why was the ratio of small vehicles to large vehicles among those captured and destroyed slightly higher in that battle than in others?"

"Why was the ratio of officers to enlisted men among those captured and killed slightly higher in that battle than in others?"