Properties

Generic DPPs as mixtures of projection DPPs

Projection DPPs are the building blocks of the model in the sense that generic DPPs are mixtures of projection DPPs.

Consider \(\mathcal{X} \sim \operatorname{DPP}(K)\) and write the spectral decomposition of the corresponding kernel as

\[K(x, y) = \sum_{n=1}^{\infty} \lambda_n \phi_n(x) \overline{\phi_n(y)}.\]

Then, denote \(\mathcal{X}^B \sim \operatorname{DPP}(K^B)\) with

\[K^B(x, y) = \sum_{n=1}^{\infty} B_n \phi_n(x) \overline{\phi_n(y)}, \quad \text{where} \quad B_n \sim \mathcal{B}er(\lambda_n) \text{ are independent}.\]

In other words, \(\mathcal{X}^B\) is obtained by first sampling the independent Bernoulli variables \(B_1, B_2, \dots\) and then, conditionally on them, sampling from \(\operatorname{DPP}(K^B)\), the DPP with orthogonal projection kernel \(K^B\).

Finally, we have \(\mathcal{X} \overset{d}{=} \mathcal{X}^B\).
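In the finite case this yields a two-step sampling scheme: draw the Bernoulli variables, then sample the projection DPP they select. Below is a minimal NumPy sketch of this construction; the toy kernel and all variable names are ours, purely for illustration, and any projection-DPP sampler can be plugged in afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hermitian correlation kernel K = sum_n lambda_n phi_n phi_n^*
# with eigenvalues lambda_n in [0, 1] (finite ground set of size N).
N = 5
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # eigenvectors phi_n
eig_vals = rng.uniform(0, 1, size=N)              # eigenvalues lambda_n
K = (Q * eig_vals) @ Q.T

# Step 1: draw the independent Bernoulli variables B_n ~ Ber(lambda_n).
B = rng.uniform(size=N) < eig_vals

# Step 2: form the orthogonal projection kernel K^B onto
# span{phi_n : B_n = 1}; conditionally on B, sample X ~ DPP(K^B).
K_B = Q[:, B] @ Q[:, B].T

# Sanity check: K^B is an orthogonal projection, (K^B)^2 = K^B = (K^B)^*.
assert np.allclose(K_B @ K_B, K_B)
```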

Linear statistics

Expectation

\[\mathbb{E}\left[ \sum_{X \in \mathcal{X}} f(X) \right] = \int f(x) K(x,x) \mu(dx) = \operatorname{trace}(Kf) = \operatorname{trace}(fK).\]
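On a finite ground set, where \(\mu\) is the counting measure and \(f\) acts as the multiplication operator \(\operatorname{diag}(f)\), the two expressions can be checked against each other directly. A quick sketch (toy kernel and names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
K = (Q * rng.uniform(0, 1, size=N)) @ Q.T   # toy correlation kernel

f = rng.standard_normal(N)                  # a linear statistic

lhs = np.sum(f * np.diag(K))                # integral form: sum_x f(x) K(x, x)
rhs = np.trace(K @ np.diag(f))              # operator form: trace(K f)
print(np.isclose(lhs, rhs))                 # True
```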

Variance

\[\begin{split}\operatorname{\mathbb{V}ar}\left[ \sum_{X \in \mathcal{X}} f(X) \right] &= \mathbb{E}\left[ \sum_{X \neq Y \in \mathcal{X}} f(X) f(Y) + \sum_{X \in \mathcal{X}} f(X)^2 \right] - \mathbb{E}\left[ \sum_{X \in \mathcal{X}} f(X) \right]^2\\ &= \iint f(x)f(y) [K(x,x)K(y,y)-K(x,y)K(y,x)] \mu(dx) \mu(dy)\\ &\quad + \int f(x)^2 K(x,x) \mu(dx) - \left[\int f(x) K(x,x) \mu(dx)\right]^2 \\ &= \int f(x)^2 K(x,x) \mu(dx) - \iint f(x)f(y) K(x,y)K(y,x) \mu(dx) \mu(dy)\\ &= \operatorname{trace}(f^2K) - \operatorname{trace}(fKfK).\end{split}\]
  1. Hermitian kernel, i.e., \(K(x,y)=\overline{K(y,x)}\)

    \[\operatorname{\mathbb{V}ar}\left[ \sum_{X \in \mathcal{X}} f(X) \right] = \int f(x)^2 K(x,x) \mu(dx) - \iint f(x)f(y) |K(x,y)|^2 \mu(dx) \mu(dy).\]
  2. Orthogonal projection case, i.e., \(K^2 = K = K^*\)

    Using \(K(x,x) = \int K(x,y) K(y,x) \mu(dy) = \int |K(x,y)|^2 \mu(dy)\),

    \[\operatorname{\mathbb{V}ar}\left[ \sum_{X \in \mathcal{X}} f(X) \right] = \frac12 \iint [f(x) - f(y)]^2 |K(x,y)|^2 \mu(dy) \mu(dx).\]
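On a finite ground set, the trace formula for the variance can be verified exactly by enumerating all subsets through the associated L-ensemble, using \(L = K(I-K)^{-1}\) and \(\mathbb{P}(\mathcal{X} = S) = \det(L_S)/\det(I+L)\), valid when the eigenvalues of \(K\) lie strictly below 1. A minimal sketch, with a toy kernel of our own making:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
N = 4
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
K = (Q * rng.uniform(0.1, 0.9, size=N)) @ Q.T   # eigenvalues in (0.1, 0.9)
f = rng.standard_normal(N)

# L-ensemble representation: P(X = S) = det(L_S) / det(I + L).
L = K @ np.linalg.inv(np.eye(N) - K)
Z = np.linalg.det(np.eye(N) + L)

mean, second = 0.0, 0.0
for k in range(N + 1):
    for S in combinations(range(N), k):
        S = list(S)
        p = np.linalg.det(L[np.ix_(S, S)]) / Z  # det of a 0x0 matrix is 1
        stat = f[S].sum()
        mean += p * stat
        second += p * stat**2
var_enum = second - mean**2

# Trace formula: Var = trace(f^2 K) - trace(f K f K).
F = np.diag(f)
var_trace = np.trace(F @ F @ K) - np.trace(F @ K @ F @ K)
assert np.isclose(var_enum, var_trace)
```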

Number of points

For projection DPPs, i.e., when \(K\) is the kernel associated to an orthogonal projector, one can show that \(|\mathcal{X}|=\operatorname{rank}(K)=\operatorname{trace}(K)\) almost surely (see, e.g., [HKPVirag06] Lemma 17).

In the general case, since generic DPPs are mixtures of projection DPPs (see above), the number of points is distributed as a sum of independent Bernoulli variables:

\[|\mathcal{X}| \overset{d}{=} \sum_{n=1}^{\infty} B_n, \quad \text{where} \quad B_n \sim \operatorname{\mathcal{B}er}(\lambda_n) \text{ are independent}.\]
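In particular, \(\mathbb{E}[|\mathcal{X}|] = \sum_n \lambda_n = \operatorname{trace}(K)\) and \(\operatorname{\mathbb{V}ar}[|\mathcal{X}|] = \sum_n \lambda_n(1-\lambda_n)\). A quick simulation of the Bernoulli sum (the spectrum below is a made-up example):

```python
import numpy as np

rng = np.random.default_rng(3)
eig_vals = rng.uniform(0, 1, size=20)   # spectrum (lambda_n) of some kernel K

# |X| is a sum of independent Ber(lambda_n) variables.
n_samples = 100_000
counts = (rng.uniform(size=(n_samples, eig_vals.size)) < eig_vals).sum(axis=1)

print(counts.mean(), eig_vals.sum())                    # ~ trace(K)
print(counts.var(), (eig_vals * (1 - eig_vals)).sum())  # ~ sum lambda_n (1 - lambda_n)
```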

Note

  • For any Borel set \(B\), instantiating \(f=1_{B}\) yields closed-form expressions for the expectation and variance of the number of points falling in \(B\), as spelled out below.
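For instance, for a Hermitian kernel, the formulas above specialize to

\[\mathbb{E}\left[ |\mathcal{X} \cap B| \right] = \int_B K(x,x) \mu(dx), \qquad \operatorname{\mathbb{V}ar}\left[ |\mathcal{X} \cap B| \right] = \int_B K(x,x) \mu(dx) - \iint_{B \times B} |K(x,y)|^2 \mu(dx) \mu(dy).\]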

See also

Number of points in the finite case

Thinning

Important

The class of DPPs is closed under independent thinning.

Let \(\lambda > 1\). The configuration of points \(\mathcal{X}^{\lambda}\) obtained by subsampling the points of a configuration \(\mathcal{X} \sim \operatorname{DPP}(K)\) with i.i.d. \(\operatorname{\mathcal{B}er}\left(\frac{1}{\lambda}\right)\) marks \(B_i\), i.e., keeping each point independently with probability \(\frac{1}{\lambda}\), is still a DPP, with kernel \(\frac{1}{\lambda} K\). To see this, let's compute the correlation functions of the thinned process:

\[\begin{split}\mathbb{E}\left[ \sum_{\substack{(x_1,\dots,x_k) \\ x_i \neq x_j \in \mathcal{X}^{\lambda}} } f(x_1,\dots,x_k) \right] &= \mathbb{E}\left[ \mathbb{E}\left[ \sum_{\substack{(x_1,\dots,x_k) \\ x_i \neq x_j \in \mathcal{X} } } f(x_1,\dots,x_k) \prod_{i=1}^k 1_{\{x_i \in \mathcal{X}^{\lambda} \}} \Bigg| \mathcal{X}\right] \right]\\ &= \mathbb{E}\left[ \sum_{\substack{(x_1,\dots,x_k) \\ x_i \neq x_j \in \mathcal{X} } } f(x_1,\dots,x_k) \mathbb{E}\left[ \prod_{i=1}^k B_i \Bigg| \mathcal{X} \right] \right]\\ &= \mathbb{E}\left[ \sum_{\substack{(x_1,\dots,x_k) \\ x_i \neq x_j \in \mathcal{X} } } f(x_1,\dots,x_k) \frac{1}{\lambda^k} \right]\\ &= \int f(x_1,\dots,x_k) \det \left[ \frac{1}{\lambda} K(x_i,x_j) \right]_{1\leq i,j\leq k} \mu^{\otimes k}(dx).\end{split}\]
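In particular, thinning divides the expected number of points by \(\lambda\): \(\mathbb{E}[|\mathcal{X}^{\lambda}|] = \operatorname{trace}(K)/\lambda\). A quick simulation sketch, combining the Bernoulli representation of \(|\mathcal{X}|\) from above with binomial thinning of the counts (spectrum made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 2.0                                 # thinning parameter, lambda > 1
eig_vals = rng.uniform(0, 1, size=20)     # spectrum (lambda_n) of K

n_samples = 100_000
# |X| ~ sum of independent Ber(lambda_n), by the mixture property above.
counts = (rng.uniform(size=(n_samples, eig_vals.size)) < eig_vals).sum(axis=1)
# Keep each of the |X| points independently with probability 1/lambda.
thinned = rng.binomial(counts, 1.0 / lam)

# X^lambda ~ DPP(K / lambda), whose expected size is trace(K) / lambda.
print(thinned.mean(), eig_vals.sum() / lam)
```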