K²IE: Kernel Method-based Kernel Intensity Estimators for Inhomogeneous Poisson Processes

Authors: Hideaki Kim, Tomoharu Iwata, Akinori Fujino

ICML 2025

Reproducibility Variable | Result | Supporting Excerpt
Research Type: Experimental — "Through experiments on synthetic datasets, we show that K2IE achieves comparable predictive performance while significantly surpassing the state-of-the-art kernel method-based estimator in computational efficiency." "In Section 4, we compare K2IE with conventional nonparametric intensity estimators on synthetic datasets, and confirm the effectiveness of the proposed method."
Researcher Affiliation: Industry — "Hideaki Kim, Tomoharu Iwata, Akinori Fujino (NTT Corporation, Japan). Correspondence to: Hideaki Kim <EMAIL>."
Pseudocode: No — The paper describes mathematical formulations and derivations, but does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code: Yes — "Codes are available at: https://github.com/HidKim/K2IE"
Open Datasets: Yes — "We conducted an additional experiment using an open 2D real-world dataset, bei, from the R package spatstat (GPL-3). It consists of the locations of 3,605 trees of the species Beilschmiedia pendula in a tropical rain forest (Hubbell & Foster, 1983)."
Dataset Splits: Yes — "Following Cronie et al. (2024), we randomly labeled the data points with independent and identically distributed marks {1, 2, 3} drawn from a multinomial distribution with parameters (p1, p2, p3) = (0.3, 0.3, 0.4), and assigned the points with labels 1 and 2 to the training and test data, respectively; this was repeated 100 times for evaluation."
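The mark-based splitting procedure quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the probabilities (0.3, 0.3, 0.4) are assumed to be normalized to sum to one, and the uniform point cloud merely stands in for the bei locations.

```python
import numpy as np

def thinning_split(points, probs=(0.3, 0.3, 0.4), rng=None):
    """Assign i.i.d. marks {1, 2, 3} to points and split by mark.

    Points with mark 1 form the training set, points with mark 2
    the test set; mark-3 points are discarded (independent thinning).
    """
    rng = np.random.default_rng(rng)
    marks = rng.choice([1, 2, 3], size=len(points), p=probs)
    return points[marks == 1], points[marks == 2]

# Stand-in for the 3,605 bei tree locations (2D coordinates).
pts = np.random.default_rng(0).uniform(size=(3605, 2))

# 100 independent repetitions, as in the evaluation protocol.
splits = [thinning_split(pts, rng=seed) for seed in range(100)]
```

Each repetition redraws the marks independently, so training and test sets vary across the 100 runs while the underlying point pattern stays fixed.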
Hardware Specification: Yes — "All models were implemented using TensorFlow 2.10 and executed on a MacBook Pro equipped with a 12-core CPU (Apple M2 Max), with the GPU disabled."
Software Dependencies: Yes — "All models were implemented using TensorFlow 2.10 and executed on a MacBook Pro equipped with a 12-core CPU (Apple M2 Max), with the GPU disabled."
Experiment Setup: Yes — "KIE optimized the hyper-parameter β through 5-fold cross-validation based on the negative log-likelihood; FIE optimized the hyper-parameters (β, γ) using the same cross-validation procedure as KIE; for K2IE, the hyper-parameters (β, γ) were optimized via 5-fold cross-validation with the least-squares loss function (10). For all models, Monte Carlo cross-validation with p-thinning (Cronie et al., 2024) was adopted, with p fixed at 0.6. A 10 × 10 logarithmic grid search was conducted over γ ∈ [0.1, 100] and β ∈ [0.1, 100]·β̄, where β̄ = (β̄1, …, β̄d) with β̄i = 1/(maxj Xij − minj Xij). For FIE, the gradient-descent algorithm Adam (Kingma & Ba, 2014) was employed to solve the dual optimization problem (9)."
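The hyper-parameter search described above can be sketched as a generic grid search over (γ, β) with Monte Carlo p-thinning cross-validation. This is an illustrative skeleton only: `loss_fn` is a hypothetical stand-in for the model's validation loss (e.g. the least-squares loss (10) or the negative log-likelihood), and the per-dimension β scaling follows the inverse-data-range rule quoted above.

```python
import numpy as np
from itertools import product

def candidate_grid(X, n=10, lo=0.1, hi=100.0):
    """10 x 10 logarithmic grid over (gamma, beta).

    beta is scaled per dimension by the inverse data range,
    beta_bar_i = 1 / (max_j X_ij - min_j X_ij).
    """
    beta_bar = 1.0 / (X.max(axis=0) - X.min(axis=0))
    gammas = np.logspace(np.log10(lo), np.log10(hi), n)
    scales = np.logspace(np.log10(lo), np.log10(hi), n)
    return [(g, s * beta_bar) for g, s in product(gammas, scales)]

def mc_cv_select(X, loss_fn, p=0.6, n_folds=5, rng=None):
    """Monte Carlo CV with p-thinning: in each fold, every point is
    retained for training independently with probability p; the
    thinned-out points serve as the validation set."""
    rng = np.random.default_rng(rng)
    best, best_loss = None, np.inf
    for gamma, beta in candidate_grid(X):
        fold_losses = []
        for _ in range(n_folds):
            keep = rng.random(len(X)) < p
            fold_losses.append(loss_fn(X[keep], X[~keep], gamma, beta))
        mean_loss = float(np.mean(fold_losses))
        if mean_loss < best_loss:
            best, best_loss = (gamma, beta), mean_loss
    return best
```

A real run would plug in the estimator's actual validation loss; the thinning mask plays the role of the random train/validation split with p = 0.6.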