| "Who experiences large model decay and why?" A Hierarchical Framework for Diagnosing Heterogeneous Performance Drift |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| "Why Is There a Tumor?": Tell Me the Reason, Show Me the Evidence |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| $K^2$VAE: A Koopman-Kalman Enhanced Variational AutoEncoder for Probabilistic Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| $S^2$FGL: Spatial Spectral Federated Graph Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| $\mathcalVista\mathcalDPO$: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| $\mathrmμ$nit Scaling: Simple and Scalable FP8 LLM Training |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| $\textttI$^2$MoE$: Interpretable Multimodal Interaction-aware Mixture-of-Experts |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| $∞$-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| (How) Can Transformers Predict Pseudo-Random Numbers? |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| (How) Do Language Models Track State? |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| 3D Question Answering via only 2D Vision-Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| 3D-LMVIC: Learning-based Multi-View Image Compression with 3D Gaussian Geometric Priors |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Bayesian Model Selection Criterion for Selecting Pretraining Checkpoints |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| A Bregman Proximal Viewpoint on Neural Operators |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Causal World Model Underlying Next Token Prediction: Exploring GPT in a Controlled Environment |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Certified Unlearning Approach without Access to Source Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Chaotic Dynamics Framework Inspired by Dorsal Stream for Event Signal Processing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Checks-and-Balances Framework for Context-Aware Ethical AI Alignment |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| A Classification View on Meta Learning Bandits |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Closer Look at Backdoor Attacks on CLIP |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Closer Look at Generalized BH Algorithm for Out-of-Distribution Detection |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Closer Look at Multimodal Representation Collapse |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| A Closer Look at Transformers for Time Series Forecasting: Understanding Why They Work and Where They Struggle |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| A Cognac Shot To Forget Bad Memories: Corrective Unlearning for Graph Neural Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Computationally Efficient Algorithm for Infinite-Horizon Average-Reward Linear MDPs |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomics Representations through Morphological Features |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Attention Networks |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| A First-order Generative Bilevel Optimization Framework for Diffusion Models |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| A Forget-and-Grow Strategy for Deep Reinforcement Learning Scaling in Continuous Control |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A General Framework for Inference-time Scaling and Steering of Diffusion Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A General Graph Spectral Wavelet Convolution via Chebyshev Order Decomposition |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A General Representation-Based Approach to Multi-Source Domain Adaptation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| A Generalizable Physics-Enhanced State Space Model for Long-Term Dynamics Forecasting in Complex Environments |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Generalization Result for Convergence in Learning-to-Optimize |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| A Generalization Theory for Zero-Shot Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Generic Family of Graphical Models: Diversity, Efficiency, and Heterogeneity |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| A Geometric Approach to Personalized Recommendation with Set-Theoretic Constraints Using Box Embeddings |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Hitchhiker’s Guide to Scaling Law Estimation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Lens into Interpretable Transformer Mistakes via Semantic Dependency |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| A Machine Learning Approach to Duality in Statistical Physics |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Manifold Perspective on the Statistical Generalization of Graph Neural Networks |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| A Market for Accuracy: Classification Under Competition |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Mathematical Framework for AI-Human Integration in Work |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Memory Efficient Randomized Subspace Optimization Method for Training Large Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Meta-learner for Heterogeneous Effects in Difference-in-Differences |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Mixed-Curvature based Pre-training Paradigm for Multi-Task Vehicle Routing Solver |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Mixture-Based Framework for Guiding Diffusion Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Model of Place Field Reorganization During Reward Maximization |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Multi-Region Brain Model to Elucidate the Role of Hippocampus in Spatially Embedded Decision-Making |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| A Near Linear Query Lower Bound for Submodular Maximization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Near-Optimal Single-Loop Stochastic Algorithm for Convex Finite-Sum Coupled Compositional Optimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A New Approach to Backtracking Counterfactual Explanations: A Unified Causal Framework for Efficient Model Interpretability |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A New Concentration Inequality for Sampling Without Replacement and Its Application for Transductive Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Non-Asymptotic Convergent Analysis for Scored-Based Graph Generative Model via a System of Stochastic Differential Equations |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| A Non-isotropic Time Series Diffusion Model with Moving Average Transitions |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A Novel Characterization of the Population Area Under the Risk Coverage Curve (AURC) and Rates of Finite Sample Estimators |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Parameter-Free and Near-Optimal Zeroth-Order Algorithm for Stochastic Convex Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Parametric Contextual Online Learning Theory of Brokerage |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Peer-review Look on Multi-modal Clustering: An Information Bottleneck Realization Method |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| A Physics-Augmented Deep Learning Framework for Classifying Single Molecule Force Spectroscopy Data |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| A Physics-Informed Machine Learning Framework for Safe and Optimal Control of Autonomous Systems |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| A Reasoning-Based Approach to Cryptic Crossword Clue Solving |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| A Recipe for Causal Graph Regression: Confounding Effects Revisited |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Reduction Framework for Distributionally Robust Reinforcement Learning under Average Reward |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Rescaling-Invariant Lipschitz Bound Based on Path-Metrics for Modern ReLU Network Parameterizations |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| A Sample Efficient Conditional Independence Test in the Presence of Discretization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| A Selective Learning Method for Temporal Graph Continual Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Sharper Global Convergence Analysis for Average Reward Reinforcement Learning via an Actor-Critic Approach |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Simple Model of Inference Scaling Laws |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| A Square Peg in a Square Hole: Meta-Expert for Long-Tailed Semi-Supervised Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Sub-Problem Quantum Alternating Operator Ansatz for Correlation Clustering |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Tale of Two Structures: Do LLMs Capture the Fractal Complexity of Language? |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Theoretical Framework For Overfitting In Energy-based Modeling |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Theoretical Justification for Asymmetric Actor-Critic Algorithms |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Theoretical Study of (Hyper) Self-Attention through the Lens of Interactions: Representation, Training, Generalization |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Theory for Conditional Generative Modeling on Multiple Data Sources |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A Trichotomy for List Transductive Online Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Two-Stage Learning-to-Defer Approach for Multi-Task Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Unified Approach to Routing and Cascading for LLMs |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Unified Comparative Study with Generalized Conformity Scores for Multi-Output Conformal Regression |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Unified Framework for Entropy Search and Expected Improvement in Bayesian Optimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| A Unified Framework for Generalization Error Analysis of Learning with Arbitrary Discrete Weak Features |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Unified Theoretical Analysis of Private and Robust Offline Alignment: from RLHF to DPO |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| A Unified View on Learning Unnormalized Distributions via Noise-Contrastive Estimation |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| A Variational Framework for Improving Naturalness in Generative Spoken Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Variational Information Theoretic Approach to Out-of-Distribution Detection |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Variational Perspective on Generative Protein Fitness Optimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| A Versatile Influence Function for Data Attribution with Non-Decomposable Loss |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A-PSRO: A Unified Strategy Learning Method with Advantage Metric for Normal-form Games |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| AAAR-1.0: Assessing AI’s Potential to Assist Research |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $α$-$β$-Divergence |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ABNet: Adaptive explicit-Barrier Net for Safe and Scalable Robot Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| ADDQ: Adaptive distributional double Q-learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| ADHMR: Aligning Diffusion-based Human Mesh Recovery via Direct Preference Optimization |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| ADIOS: Antibody Development via Opponent Shaping |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| AEQA-NAT : Adaptive End-to-end Quantization Alignment Training Framework for Non-autoregressive Machine Translation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AGAV-Rater: Adapting Large Multimodal Model for AI-Generated Audio-Visual Quality Assessment |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AI for Global Climate Cooperation: Modeling Global Climate Negotiations, Agreements, and Long-Term Cooperation in RICE-N |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| AKORN: Adaptive Knots generated Online for RegressioN splines |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| AKRMap: Adaptive Kernel Regression for Trustworthy Visualization of Cross-Modal Embeddings |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ALMTokenizer: A Low-bitrate and Semantic-rich Audio Codec Tokenizer for Audio Language Modeling |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AMPO: Active Multi Preference Optimization for Self-play Preference Selection |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| ARS: Adaptive Reward Scaling for Multi-Task Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| ATA: Adaptive Task Allocation for Efficient Resource Management in Distributed Machine Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| AUTOCIRCUIT-RL: Reinforcement Learning-Driven LLM for Automated Circuit Topology Generation |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
4 |
| Ab Initio Nonparametric Variable Selection for Scalable Symbolic Regression with Large $p$ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Accelerated Diffusion Models via Speculative Sampling |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Accelerating Large Language Model Reasoning via Speculative Search |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Accelerating Linear Recurrent Neural Networks for the Edge with Unstructured Sparsity |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Accelerating PDE-Constrained Optimization by the Derivative of Neural Operators |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Accelerating Quantum Reinforcement Learning with a Quantum Natural Policy Gradient Based Approach |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Accelerating Spectral Clustering under Fairness Constraints |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Accelerating Unbiased LLM Evaluation via Synthetic Feedback |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Accurate Identification of Communication Between Multiple Interacting Neural Populations |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Accurate and Efficient World Modeling with Masked Latent Transformers |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Achieving Linear Speedup and Near-Optimal Complexity for Decentralized Optimization over Row-stochastic Networks |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Action Dubber: Timing Audible Actions via Inflectional Flow |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Action-Constrained Imitation Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Action-Dependent Optimality-Preserving Reward Shaping |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Action-Minimization Meets Generative Modeling: Efficient Transition Path Sampling with the Onsager-Machlup Functional |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Activation Space Interventions Can Be Transferred Between Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Activation by Interval-wise Dropout: A Simple Way to Prevent Neural Networks from Plasticity Loss |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Active Evaluation Acquisition for Efficient LLM Benchmarking |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Active Fine-Tuning of Multi-Task Policies |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Active Learning for Efficient Discovery of Optimal Combinatorial Perturbations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Active Learning with Selective Time-Step Acquisition for PDEs |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Active Reward Modeling: Adaptive Preference Labeling for Large Language Model Alignment |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Active Treatment Effect Estimation via Limited Samples |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Active feature acquisition via explainability-driven ranking |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Actor-Critics Can Achieve Optimal Sample Efficiency |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Ad Hoc Teamwork via Offline Goal-Based Decision Transformers |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Ad-Hoc Human-AI Coordination Challenge |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AdaDecode: Accelerating LLM Decoding with Adaptive Layer Parallelism |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AdaSplash: Adaptive Sparse Flash Attention |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| AdaWorld: Learning Adaptable World Models with Latent Actions |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Adapter Naturally Serves as Decoupler for Cross-Domain Few-Shot Semantic Segmentation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Adapting Precomputed Features for Efficient Graph Condensation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adapting While Learning: Grounding LLMs for Scientific Problems with Tool Usage Adaptation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adapting to Evolving Adversaries with Regularized Continual Robust Training |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Adaptive Data Collection for Robust Learning Across Multiple Distributions |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adaptive Elicitation of Latent Information Using Natural Language |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adaptive Estimation and Learning under Temporal Distribution Shift |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adaptive Exploration for Multi-Reward Multi-Policy Evaluation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Adaptive Flow Matching for Resolving Small-Scale Physics |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adaptive Learn-then-Test: Statistically Valid and Efficient Hyperparameter Selection |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Adaptive Localization of Knowledge Negation for Continual LLM Unlearning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Adaptive Median Smoothing: Adversarial Defense for Unlearned Text-to-Image Diffusion Models at Inference Time |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adaptive Message Passing: A General Framework to Mitigate Oversmoothing, Oversquashing, and Underreaching |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adaptive Multi-prompt Contrastive Network for Few-shot Out-of-distribution Detection |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adaptive Partitioning Schemes for Optimistic Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adaptive Sample Sharing for Multi Agent Linear Bandits |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adaptive Self-improvement LLM Agentic System for ML Library Development |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Adaptive Sensitivity Analysis for Robust Augmentation against Natural Corruptions in Image Segmentation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adaptive kernel predictors from feature-learning infinite limits of neural networks |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Addressing Imbalanced Domain-Incremental Learning through Dual-Balance Collaborative Experts |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Addressing Misspecification in Simulation-based Inference through Data-driven Calibration |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adjoint Sampling: Highly Scalable Diffusion Samplers via Adjoint Matching |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough? |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Adjustment for Confounding using Pre-Trained Representations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AdvAgent: Controllable Blackbox Red-teaming on Web Agents |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Advancing Personalized Learning with Neural Collapse for Long-Tail Challenge |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
3 |
| Adversarial Combinatorial Semi-bandits with Graph Feedback |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Adversarial Cooperative Rationalization: The Risk of Spurious Correlations in Even Clean Datasets |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adversarial Inception Backdoor Attacks against Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adversarial Inputs for Linear Algebra Backends |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Adversarial Reasoning at Jailbreaking Time |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adversarial Robust Generalization of Graph Neural Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adversarial Robustness in Two-Stage Learning-to-Defer: Algorithms and Guarantees |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adversarial Robustness via Deformable Convolution with Stochasticity |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adversaries Can Misuse Combinations of Safe Models |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AffectGPT: A New Dataset, Model, and Benchmark for Emotion Understanding with Multimodal Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AffinityFlow: Guided Flows for Antibody Affinity Maturation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Agent Reviewers: Domain-specific Multimodal Agents with Shared Memory for Paper Review |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Agent Workflow Memory |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Agent-Centric Actor-Critic for Asynchronous Multi-Agent Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Agent-as-a-Judge: Evaluate Agents with Agents |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Aggregation Buffer: Revisiting DropEdge with a New Parameter Block |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Aggregation of Dependent Expert Distributions in Multimodal Variational Autoencoders |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Alberta Wells Dataset: Pinpointing Oil and Gas Wells from Satellite Imagery |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Algorithm Development in Neural Networks: Insights from the Streaming Parity Task |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Algorithmic Recourse for Long-Term Improvement |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Algorithms and Hardness for Active Learning on Graphs |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Algorithms with Calibrated Machine Learning Predictions |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Aligned Multi Objective Optimization |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Aligning LLMs by Predicting Preferences from User Writing Samples |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Aligning Multimodal Representations through an Information Bottleneck |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Aligning Protein Conformation Ensemble Generation with Physical Feedback |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Aligning Spoken Dialogue Models from User Interactions |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| All-Purpose Mean Estimation over R: Optimal Sub-Gaussianity with Outlier Robustness and Low Moments Performance |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| All-atom Diffusion Transformers: Unified generative modelling of molecules and materials |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| All-atom inverse protein folding through discrete flow matching |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Almost Optimal Fully Dynamic $k$-Center Clustering with Recourse |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Alpha-SQL: Zero-Shot Text-to-SQL using Monte Carlo Tree Search |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| AlphaPO: Reward Shape Matters for LLM Alignment |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| AlphaQCM: Alpha Discovery in Finance with Distributional Reinforcement Learning |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| AlphaVerus: Bootstrapping Formally Verified Code Generation through Self-Improving Translation and Treefinement |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| An Adaptive Orthogonal Convolution Scheme for Efficient and Flexible CNN Architectures |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| An All-Atom Generative Model for Designing Protein Complexes |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| An Analysis for Reasoning Bias of Language Models with Small Initialization |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| An Architecture Search Framework for Inference-Time Techniques |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| An Asymptotically Optimal Approximation Algorithm for Multiobjective Submodular Maximization at Scale |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| An Augmentation-Aware Theory for Self-Supervised Contrastive Learning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| An Effective and Secure Federated Multi-View Clustering Method with Information-Theoretic Perspective |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| An Efficient Matrix Multiplication Algorithm for Accelerating Inference in Binary and Ternary Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| An Efficient Private GPT Never Autoregressively Decodes |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| An Efficient Pruner for Large Language Model with Theoretical Guarantee |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| An Efficient Search-and-Score Algorithm for Ancestral Graphs using Multivariate Information Scores for Complex Non-linear and Categorical Data |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| An Empirical Study on Configuring In-Context Learning Demonstrations for Unleashing MLLMs’ Sentimental Perception Capability |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| An End-to-End Model for Logits-Based Large Language Models Watermarking |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| An Error Analysis of Flow Matching for Deep Generative Modeling |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| An Expressive and Self-Adaptive Dynamical System for Efficient Function Learning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| An Improved Clique-Picking Algorithm for Counting Markov Equivalent DAGs via Super Cliques Transfer |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| An Instrumental Value for Data Production and its Application to Data Pricing |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| An Interpretable N-gram Perplexity Threat Model for Large Language Model Jailbreaks |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| An Online Adaptive Sampling Algorithm for Stochastic Difference-of-convex Optimization with Time-varying Distributions |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| An Online Statistical Framework for Out-of-Distribution Detection |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| An Optimistic Algorithm for online CMDPS with Anytime Adversarial Constraints |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| An analytic theory of creativity in convolutional diffusion models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| An in depth look at the Procrustes-Wasserstein distance: properties and barycenters |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| AnalogGenie-Lite: Enhancing Scalability and Precision in Circuit Topology Discovery through Lightweight Graph Modeling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Analytical Construction on Geometric Architectures: Transitioning from Static to Temporal Link Prediction |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Analytical Lyapunov Function Discovery: An RL-based Generative Approach |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Analyze Feature Flow to Enhance Interpretation and Steering in Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Angle Domain Guidance: Latent Diffusion Requires Rotation Rather Than Extrapolation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Annealing Flow Generative Models Towards Sampling High-Dimensional and Multi-Modal Distributions |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning Attack |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| AnyEdit: Edit Any Knowledge Encoded in Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Anytime-Constrained Equilibria in Polynomial Time |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Approximate Differential Privacy of the $\ell_2$ Mechanism |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Approximate Forest Completion and Learning-Augmented Algorithms for Metric Minimum Spanning Trees |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Approximately Correct Label Distribution Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Approximating Latent Manifolds in Neural Networks via Vanishing Ideals |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Approximation to Smooth Functions by Low-Rank Swish Networks |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Arbitrarily-Conditioned Multi-Functional Diffusion for Multi-Physics Emulation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Are High-Quality AI-Generated Images More Difficult for Models to Detect? |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Are Large Brainwave Foundation Models Capable Yet ? Insights from Fine-Tuning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Are Large Language Models Ready for Multi-Turn Tabular Data Analysis? |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Are Sparse Autoencoders Useful? A Case Study in Sparse Probing |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| ArrayDPS: Unsupervised Blind Speech Separation with a Diffusion Prior |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Arrow: Accelerator for Time Series Causal Discovery with Time Weaving |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Assessing Safety Risks and Quantization-aware Safety Patching for Quantized Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AssistanceZero: Scalably Solving Assistance Games |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| AsymRnR: Video Diffusion Transformers Acceleration with Asymmetric Reduction and Restoration |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Asymmetric Decision-Making in Online Knowledge Distillation: Unifying Consensus and Divergence |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| AtlasD: Automatic Local Symmetry Discovery |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Attention Mechanisms Perspective: Exploring LLM Processing of Graph-Structured Data |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Attention-Level Speculation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Attention-Only Transformers via Unrolled Subspace Denoising |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Attributes Shape the Embedding Space of Face Recognition Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AuPair: Golden Example Pairs for Code Repair |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Audio Flamingo 2: An Audio-Language Model with Long-Audio Understanding and Expert Reasoning Abilities |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Auditing $f$-differential privacy in one run |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Auditing Prompt Caching in Language Model APIs |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Auto-reconfiguration for Latency Minimization in CPU-based DNN Serving |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| AutoAL: Automated Active Learning with Differentiable Query Strategy Search |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AutoAdvExBench: Benchmarking Autonomous Exploitation of Adversarial Example Defenses |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AutoCATE: End-to-End, Automated Treatment Effect Estimation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AutoElicit: Using Large Language Models for Expert Prior Elicitation in Predictive Modelling |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| AutoEval Done Right: Using Synthetic Data for Model Evaluation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| AutoStep: Locally adaptive involutive MCMC |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Autoencoder-Based Hybrid Replay for Class-Incremental Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Autoformulation of Mathematical Optimization Models Using LLMs |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Automated Benchmark Generation for Repository-Level Coding Tasks |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
3 |
| Automated Hypothesis Validation with Agentic Sequential Falsifications |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Automated Red Teaming with GOAT: the Generative Offensive Agent Tester |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Automatic Differentiation of Optimization Algorithms with Time-Varying Updates |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Automatic Reward Shaping from Confounded Offline Data |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Automatically Identify and Rectify: Robust Deep Contrastive Multi-view Clustering in Noisy Scenarios |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Automatically Interpreting Millions of Features in Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Autonomy-of-Experts Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Average Certified Radius is a Poor Metric for Randomized Smoothing |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Average Sensitivity of Hierarchical $k$-Median Clustering |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Avoiding Catastrophe in Online Learning by Asking for Help |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Avoiding spurious sharpness minimization broadens applicability of SAM |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse Autoencoders |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| B-score: Detecting biases in large language models using response history |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| BAME: Block-Aware Mask Evolution for Efficient N:M Sparse Training |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| BARK: A Fully Bayesian Tree Kernel for Black-box Optimization |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| BARNN: A Bayesian Autoregressive and Recurrent Neural Network |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| BAnG: Bidirectional Anchored Generation for Conditional RNA Design |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| BCE vs. CE in Deep Feature Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| BDC-CLIP: Brownian Distance Covariance for Adapting CLIP to Action Recognition |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| BECAME: Bayesian Continual Learning with Adaptive Model Merging |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| BEST-Route: Adaptive LLM Routing with Test-Time Optimal Compute |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| BILBO: BILevel Bayesian Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| BOOD: Boundary-based Out-Of-Distribution Data Generation |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| BOPO: Neural Combinatorial Optimization via Best-anchored and Objective-guided Preference Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| BRIDGE: Bootstrapping Text to Control Time-Series Generation via Multi-Agent Iterative Optimization and Diffusion Modeling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| BRiTE: Bootstrapping Reinforced Thinking Process to Enhance Language Model Reasoning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| BSLoRA: Enhancing the Parameter Efficiency of LoRA with Intra-Layer and Inter-Layer Sharing |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| BSO: Binary Spiking Online Optimization Algorithm |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| BSemiFL: Semi-supervised Federated Learning via a Bayesian Approach |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| BaWA: Automatic Optimizing Pruning Metric for Large Language Models with Balanced Weight and Activation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| BackSlash: Rate Constrained Optimized Training of Large Language Models |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Backdoor Attacks in Token Selection of Attention Mechanism |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| BalancEdit: Dynamically Balancing the Generality-Locality Trade-off in Multi-modal Model Editing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Balanced Learning for Domain Adaptive Semantic Segmentation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Balancing Efficiency and Expressiveness: Subgraph GNNs with Walk-Based Centrality |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Balancing Interference and Correlation in Spatial Experimental Designs: A Causal Graph Cut Approach |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Balancing Model Efficiency and Performance: Adaptive Pruner for Long-tailed Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Balancing Preservation and Modification: A Region and Semantic Aware Metric for Instruction-Based Image Editing |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Balancing the Scales: A Theoretical and Algorithmic Framework for Learning from Imbalanced Data |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| BanditSpec: Adaptive Speculative Decoding via Bandit Algorithms |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Banyan: Improved Representation Learning with Explicit Structure |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Batch List-Decodable Linear Regression via Higher Moments |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| BaxBench: Can LLMs Generate Correct and Secure Backends? |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Bayesian Active Learning for Bivariate Causal Discovery |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Bayesian Basis Function Approximation for Scalable Gaussian Process Priors in Deep Generative Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Bayesian Inference for Correlated Human Experts and Classifiers |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Bayesian Neural Scaling Law Extrapolation with Prior-Data Fitted Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Bayesian Optimization from Human Feedback: Near-Optimal Regret Bounds |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Bayesian Weight Enhancement with Steady-State Adaptation for Test-time Adaptation in Dynamic Environments |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Be Confident: Uncovering Overfitting in MLLM Multi-Task Tuning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Be a Goldfish: Forgetting Bad Conditioning in Sparse Linear Regression via Variational Autoencoders |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Behavior-Regularized Diffusion Policy Optimization for Offline Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Behavior-agnostic Task Inference for Robust Offline In-context Reinforcement Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Behavioral Exploration: Learning to Explore via In-Context Adaptation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Bellman Unbiasedness: Toward Provably Efficient Distributional Reinforcement Learning with General Value Function Approximation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Benchmarking Abstract and Reasoning Abilities Through A Theoretical Perspective |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Benchmarking Quantum Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Benefits of Early Stopping in Gradient Descent for Overparameterized Logistic Regression |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Benign Overfitting in Token Selection of Attention Mechanism |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Best Subset Selection: Optimal Pursuit for Feature Selection and Elimination |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Best of Both Worlds: Advantages of Hybrid Graph Sequence Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Best of Both Worlds: Regret Minimization versus Minimax Play |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Better to Teach than to Give: Domain Generalized Semantic Segmentation via Agent Queries with Diffusion Model Guidance |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Beyond Atoms: Enhancing Molecular Pretrained Representations with 3D Space Modeling |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Beyond CVaR: Leveraging Static Spectral Risk Measures for Enhanced Decision-Making in Distributional Reinforcement Learning |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Beyond Communication Overhead: A Multilevel Monte Carlo Approach for Mitigating Compression Bias in Distributed Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Beyond Confidence: Exploiting Homogeneous Pattern for Semi-Supervised Semantic Segmentation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Beyond Cropped Regions: New Benchmark and Corresponding Baseline for Chinese Scene Text Retrieval in Diverse Layouts |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Beyond Entropy: Region Confidence Proxy for Wild Test-Time Adaptation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Beyond Log-Concavity and Score Regularity: Improved Convergence Bounds for Score-Based Generative Models in W2-distance |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Beyond Low-rank Decomposition: A Shortcut Approach for Efficient On-Device Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Beyond Message Passing: Neural Graph Pattern Machine |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Beyond Minimax Rates in Group Distributionally Robust Optimization via a Novel Notion of Sparsity |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Beyond One-Hot Labels: Semantic Mixing for Model Calibration |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Beyond Self-Interest: How Group Strategies Reshape Content Creation in Recommendation Platforms? |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Beyond Self-Repellent Kernels: History-Driven Target Towards Efficient Nonlinear MCMC on General Graphs |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Beyond Sensor Data: Foundation Models of Behavioral Data from Wearables Improve Health Predictions |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| Beyond Task-Specific Reasoning: A Unified Conditional Generative Framework for Abstract Visual Reasoning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Beyond The Rainbow: High Performance Deep Reinforcement Learning on a Desktop PC |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Beyond Topological Self-Explainable GNNs: A Formal Explainability Perspective |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Beyond Zero Initialization: Investigating the Impact of Non-Zero Initialization on LoRA Fine-Tuning Dynamics |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Beyond the Permutation Symmetry of Transformers: The Role of Rotation for Model Fusion |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Bi-perspective Splitting Defense: Achieving Clean-Seed-Free Backdoor Security |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| BiAssemble: Learning Collaborative Affordance for Bimanual Geometric Assembly |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
3 |
| BiMaCoSR: Binary One-Step Diffusion Model Leveraging Flexible Matrix Compression for Real Super-Resolution |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| BiMark: Unbiased Multilayer Watermarking for Large Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Bifurcate then Alienate: Incomplete Multi-view Clustering via Coupled Distribution Learning with Linear Overhead |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Binary Hypothesis Testing for Softmax Models and Leverage Score Models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| BinauralFlow: A Causal and Streamable Approach for High-Quality Binaural Speech Synthesis with Flow Matching Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Bipartite Ranking From Multiple Labels: On Loss Versus Label Aggregation |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Bivariate Causal Discovery with Proxy Variables: Integral Solving and Beyond |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Black-Box Adversarial Attacks on LLM-Based Code Completion |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Blink of an eye: a simple theory for feature localization in generative models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| BlockDialect: Block-wise Fine-grained Mixed Format Quantization for Energy-Efficient LLM Inference |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| BoA: Attention-aware Post-training Quantization without Backpropagation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Bongard in Wonderland: Visual Puzzles that Still Make AI Go Mad? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Boost-and-Skip: A Simple Guidance-Free Diffusion for Minority Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Boosting Adversarial Robustness with CLAT: Criticality Leveraged Adversarial Training |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Boosting Masked ECG-Text Auto-Encoders as Discriminative Learners |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Boosting Multi-Domain Fine-Tuning of Large Language Models through Evolving Interactions between Samples |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Boosting Protein Graph Representations through Static-Dynamic Fusion |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Boosting Virtual Agent Learning and Reasoning: A Step-Wise, Multi-Dimensional, and Generalist Reward Model with Benchmark |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
3 |
| Bootstrapping Self-Improvement of Language Model Programs for Zero-Shot Schema Matching |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| BounDr.E: Predicting Drug-likeness via Biomedical Knowledge Alignment and EM-like One-Class Boundary Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Bounded Rationality for LLMs: Satisficing Alignment at Inference-Time |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| BoxLM: Unifying Structures and Semantics of Medical Concepts for Diagnosis Prediction in Healthcare |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Branches: Efficiently Seeking Optimal Sparse Decision Trees via AO* |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Breaking Barriers: Combinatorial Algorithms for Non-Monotone Submodular Maximization with Sublinear Adaptivity and $1/e$ Approximation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Breaking Silos: Adaptive Model Fusion Unlocks Better Time Series Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Breaking the $n^1.5$ Additive Error Barrier for Private and Efficient Graph Sparsification via Private Expander Decomposition |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Breaking the Barrier of Hard Samples: A Data-Centric Approach to Synthetic Data for Medical Tasks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Breaking the Curse of Multiagency in Robust Multi-Agent Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Breaking the Quadratic Barrier: Robust Cardinality Sketches for Adaptive Queries |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Bridging Fairness and Efficiency in Conformal Inference: A Surrogate-Assisted Group-Clustered Approach |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
3 |
| Bridging Layout and RTL: Knowledge Distillation based Timing Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Bridging Protein Sequences and Microscopy Images with Unified Diffusion Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Broadband Ground Motion Synthesis by Diffusion Model with Minimal Condition |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Byzantine-Resilient Federated Alternating Gradient Descent and Minimization for Partly-Decoupled Low Rank Matrix Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like Retrieval-Augmented Generation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| C2IQL: Constraint-Conditioned Implicit Q-learning for Safe Offline Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| CABS: Conflict-Aware and Balanced Sparsification for Enhancing Model Merging |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| CACTI: Leveraging Copy Masking and Contextual Information to Improve Tabular Data Imputation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| CAD-Editor: A Locate-then-Infill Framework with Automated Training Data Synthesis for Text-Based CAD Editing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CALM: Consensus-Aware Localized Merging for Multi-Task Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CAN: Leveraging Clients As Navigators for Generative Replay in Federated Continual Learning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| CASE-Bench: Context-Aware SafEty Benchmark for Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| CAT Merging: A Training-Free Approach for Resolving Conflicts in Model Merging |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| CAT: Contrastive Adversarial Training for Evaluating the Robustness of Protective Perturbations in Latent Diffusion Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| CEGA: A Cost-Effective Approach for Graph-Based Model Extraction and Acquisition |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| CERTAIN: Context Uncertainty-aware One-Shot Adaptation for Context-based Offline Meta Reinforcement Learning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| CFP-Gen: Combinatorial Functional Protein Generation via Diffusion Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CFPT: Empowering Time Series Forecasting through Cross-Frequency Interaction and Periodic-Aware Timestamp Modeling |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| CLIMB: Data Foundations for Large Scale Multimodal Clinical Foundation Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CLOVER: Cross-Layer Orthogonal Vectors Pruning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CMoS: Rethinking Time Series Prediction Through the Lens of Chunk-wise Spatial Correlations |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| COExpander: Adaptive Solution Expansion for Combinatorial Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| COKE: Core Kernel for More Efficient Approximation of Kernel Weights in Multiple Kernel Clustering |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| COMRECGC: Global Graph Counterfactual Explainer through Common Recourse |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| COSDA: Counterfactual-based Susceptibility Risk Framework for Open-Set Domain Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| CPCF: A Cross-Prompt Contrastive Framework for Referring Multimodal Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| CRANE: Reasoning with constrained LLM generation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CSG-ODE: ControlSynth Graph ODE For Modeling Complex Evolution of Dynamic Graphs |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| CSTrack: Enhancing RGB-X Tracking via Compact Spatiotemporal Features |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CSV-Occ: Fusing Multi-frame Alignment for Occupancy Prediction with Temporal Cross State Space Model and Central Voting Mechanism |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CTBench: A Library and Benchmark for Certified Training |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CUPS: Improving Human Pose-Shape Estimators with Conformalized Deep Uncertainty |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Ca2-VDM: Efficient Autoregressive Video Diffusion Model with Causal Generation and Cache Sharing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CaDA: Cross-Problem Routing Solver with Constraint-Aware Dual-Attention |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Cache Me If You Must: Adaptive Key-Value Quantization for Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Calibrated Language Models and How to Find Them with Label Smoothing |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Calibrated Physics-Informed Uncertainty Quantification |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Calibrated Value-Aware Model Learning with Probabilistic Environment Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Calibrating Video Watch-time Predictions with Credible Prototype Alignment |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Can Biologically Plausible Temporal Credit Assignment Rules Match BPTT for Neural Similarity? E-prop as an Example |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Can DBNNs Robust to Environmental Noise for Resource-constrained Scenarios? |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Can Diffusion Models Learn Hidden Inter-Feature Rules Behind Images? |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Can Large Language Models Understand Intermediate Representations in Compilers? |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Can Transformers Learn Full Bayesian Inference in Context? |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Can Transformers Reason Logically? A Study in SAT Solving |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Can We Predict Performance of Large Models across Vision-Language Tasks? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Cannot See the Forest for the Trees: Invoking Heuristics and Biases to Elicit Irrational Choices of LLMs |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Canonical Rank Adaptation: An Efficient Fine-Tuning Strategy for Vision Transformers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Cape: Context-Aware Prompt Perturbation Mechanism with Differential Privacy |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Catch Your Emotion: Sharpening Emotion Perception in Multimodal Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Catching Two Birds with One Stone: Reward Shaping with Dual Random Networks for Balancing Exploration and Exploitation |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| CateKV: On Sequential Consistency for Long-Context LLM Inference Acceleration |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Categorical Distributional Reinforcement Learning with Kullback-Leibler Divergence: Convergence and Asymptotics |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| Categorical Schrödinger Bridge Matching |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Catoni Contextual Bandits are Robust to Heavy-tailed Rewards |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Causal Abstraction Inference under Lossy Representations |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Causal Abstraction Learning based on the Semantic Embedding Principle |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Causal Attribution Analysis for Continuous Outcomes |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Causal Discovery from Conditionally Stationary Time Series |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Causal Effect Identification in lvLiNGAM from Higher-Order Cumulants |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Causal Invariance-aware Augmentation for Brain Graph Contrastive Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Causal Logistic Bandits with Counterfactual Fairness Constraints |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| Causal-PIK: Causality-based Physical Reasoning with a Physics-Informed Kernel |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Causality Inspired Federated Learning for OOD Generalization |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Causality-Aware Contrastive Learning for Robust Multivariate Time-Series Anomaly Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Cavia: Camera-controllable Multi-view Video Diffusion with View-Integrated Attention |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| CellFlux: Simulating Cellular Morphology Changes via Flow Matching |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Censor Dependent Variational Inference |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Certifiably Robust Model Evaluation in Federated Learning under Meta-Distributional Shifts |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Certification for Differentially Private Prediction in Gradient-Based Training |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Certified Unlearning for Neural Networks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Chameleon: A Flexible Data-mixing Framework for Language Model Pretraining and Finetuning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Channel Normalization for Time Series Channel Identification |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Chaos Meets Attention: Transformers for Large-Scale Dynamical Prediction |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Chip Placement with Diffusion Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Circumventing Backdoor Space via Weight Symmetry |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Clipped SGD Algorithms for Performative Prediction: Tight Bounds for Stochastic Bias and Remedies |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Clipping Improves Adam-Norm and AdaGrad-Norm when the Noise Is Heavy-Tailed |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Clone-Robust AI Alignment |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Closed-Loop Long-Horizon Robotic Planning via Equilibrium Sequence Modeling |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Closed-form Solutions: A New Perspective on Solving Differential Equations |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Clustering Items through Bandit Feedback: Finding the Right Feature out of Many |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Clustering Properties of Self-Supervised Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Clustering via Self-Supervised Diffusion |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| CoCoA-Mix: Confusion-and-Confidence-Aware Mixture Model for Context Optimization |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| CoDy: Counterfactual Explainers for Dynamic Graphs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CoMemo: LVLMs Need Image Context with Image Memory |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| CoPINN: Cognitive Physics-Informed Neural Networks |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| CoSER: Coordinating LLM-Based Persona Simulation of Established Roles |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| CoastalBench: A Decade-Long High-Resolution Dataset to Emulate Complex Coastal Processes |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Code-Generated Graph Representations Using Multiple LLM Agents for Material Properties Prediction |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| CodeIO: Condensing Reasoning Patterns via Code Input-Output Prediction |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CodeSync: Synchronizing Large Language Models with Dynamic Code Evolution at Scale |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CogMath: Assessing LLMs’ Authentic Mathematical Ability from a Human Cognitive Perspective |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| CogReact: A Reinforced Framework to Model Human Cognitive Reaction Modulated by Dynamic Intervention |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| CollabLLM: From Passive Responders to Active Collaborators |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Collaborative Mean Estimation Among Heterogeneous Strategic Agents: Individual Rationality, Fairness, and Truthful Contribution |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Collapse or Thrive: Perils and Promises of Synthetic Data in a Self-Generating World |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Collapse-Proof Non-Contrastive Self-Supervised Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CombiMOTS: Combinatorial Multi-Objective Tree Search for Dual-Target Molecule Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Combinatorial Reinforcement Learning with Preference Feedback |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Come Together, But Not Right Now: A Progressive Strategy to Boost Low-Rank Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| CommVQ: Commutative Vector Quantization for KV Cache Compression |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Communicating Activations Between Language Model Agents |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Commute Graph Neural Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Compact Matrix Quantum Group Equivariant Neural Networks |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Comparing Comparisons: Informative and Easy Human Feedback with Distinguishability Queries |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Comparing Few to Rank Many: Active Human Preference Learning Using Randomized Frank-Wolfe Method |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Compelling ReLU Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Competing Bandits in Matching Markets via Super Stability |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Competitively Consistent Clustering |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Complete-Tree Space Favors Data-Efficient Link Prediction |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Complex Wavelet Mutual Information Loss: A Multi-Scale Loss Function for Semantic Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Componential Prompt-Knowledge Alignment for Domain Incremental Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Compositional Causal Reasoning Evaluation in Language Models |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Compositional Condition Question Answering in Tabular Understanding |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Compositional Flows for 3D Molecule and Synthesis Pathway Co-design |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Compositional Generalization via Forced Rendering of Disentangled Latents |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Compositional Risk Minimization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Compositional Scene Understanding through Inverse Generative Modeling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Compressed Image Generation with Denoising Diffusion Codebook Models |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Compressing tree ensembles through Level-wise Optimization and Pruning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Compression via Pre-trained Transformers: A Study on Byte-Level Multimodal Data |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Compute Optimal Inference and Provable Amortisation Gap in Sparse Autoencoders |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Compute or Load KV Cache? Why Not Both? |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Computing Optimal Transport Maps and Wasserstein Barycenters Using Conditional Normalizing Flows |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Computing Voting Rules with Improvement Feedback |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| ConText: Driving In-context Learning for Text Removal and Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Concentration Distribution Learning from Label Distributions |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Concept Reachability in Diffusion Models: Beyond Dataset Constraints |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Concept-Based Unsupervised Domain Adaptation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Concept-Centric Token Interpretation for Vector-Quantized Generative Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Concurrent Reinforcement Learning with Aggregated States via Randomized Least Squares Value Iteration |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Conditional Diffusion Model with Nonlinear Data Transformation for Time Series Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Conditioning Diffusions Using Malliavin Calculus |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| ConfPO: Exploiting Policy Model Confidence for Critical Token Selection in Preference Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Confidence Difference Reflects Various Supervised Signals in Confidence-Difference Classification |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Conformal Anomaly Detection in Event Sequences |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Conformal Prediction as Bayesian Quadrature |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Conformal Prediction with Cellwise Outliers: A Detect-then-Impute Approach |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Conformal Tail Risk Control for Large Language Model Alignment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Conformity Score Averaging for Classification |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Confounder-Free Continual Learning via Recursive Feature Normalization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Consensus Based Stochastic Optimal Control |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Consensus Is All You Get: The Role of Attention in Transformers |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Conservative Offline Goal-Conditioned Implicit V-Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Constant Stepsize Local GD for Logistic Regression: Acceleration by Instability |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Constrain Alignment with Sparse Autoencoders |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Constrained Belief Updates Explain Geometric Structures in Transformer Representations |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Constrained Exploitability Descent: An Offline Reinforcement Learning Method for Finding Mixed-Strategy Nash Equilibrium |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Constrained Online Convex Optimization with Polyak Feasibility Steps |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Constrained Pareto Set Identification with Bandit Feedback |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Context Matters: Query-aware Dynamic Long Sequence Modeling of Gigapixel Images |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Context is Key: A Benchmark for Forecasting with Essential Textual Information |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Context-Informed Neural ODEs Unexpectedly Identify Broken Symmetries: Insights from the Poincaré–Hopf Theorem |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Contextual Bandits for Unbounded Context Distributions |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Contextual Linear Bandits with Delay as Payoff |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Contextual Online Decision Making with Infinite-Dimensional Functional Regression |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Contextual Optimization Under Model Misspecification: A Tractable and Generalizable Approach |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Contextures: Representations from Contexts |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Continual Generalized Category Discovery: Learning and Forgetting from a Bayesian Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Continual Reinforcement Learning by Planning with Online World Models |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Continuous Bayesian Model Selection for Multivariate Causal Discovery |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Continuous Semi-Implicit Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Continuous Visual Autoregressive Generation via Score Maximization |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Continuous-Time Analysis of Heavy Ball Momentum in Min-Max Games |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Continuously Updating Digital Twins using Large Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| Contour Integration Underlies Human-Like Vision |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Contract Design Under Approximate Best Responses |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Contradiction Retrieval via Contrastive Learning with Sparsity |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Contrastive Learning with Simplicial Convolutional Networks for Short-Text Classification |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Contrastive Localized Language-Image Pre-Training |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Contrastive Private Data Synthesis via Weighted Multi-PLM Fusion |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Contrastive Visual Data Augmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Control and Realism: Best of Both Worlds in Layout-to-Image without Training |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Controllable Data Generation with Hierarchical Neural Representations |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Controlled Generation with Equivariant Variational Flow Matching |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Controlling Large Language Model with Latent Action |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Controlling Neural Collapse Enhances Out-of-Distribution Detection and Transfer Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Controlling Underestimation Bias in Constrained Reinforcement Learning for Safe Exploration |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Convergence Analysis of Policy Gradient Methods with Dynamic Stochasticity |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Convergence of Consistency Model with Multistep Sampling under General Data Assumptions |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Convergence of Mean-Field Langevin Stochastic Descent-Ascent for Distributional Minimax Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Convergence of Policy Mirror Descent Beyond Compatible Function Approximation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Convex Markov Games: A New Frontier for Multi-Agent Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Cooperation of Experts: Fusing Heterogeneous Information with Large Margin |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Copilot Arena: A Platform for Code LLM Evaluation in the Wild |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Core Context Aware Transformers for Long Context Language Modeling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Core Knowledge Deficits in Multi-Modal Language Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Correlated Errors in Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Correlation Clustering Beyond the Pivot Algorithm |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Cost-efficient Collaboration between On-device and Cloud Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| CostFilter-AD: Enhancing Anomaly Detection through Matching Cost Filtering |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Counterfactual Contrastive Learning with Normalizing Flows for Robust Treatment Effect Estimation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Counterfactual Effect Decomposition in Multi-Agent Sequential Decision Making |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
5 |
| Counterfactual Graphical Models: Constraints and Inference |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Counterfactual Voting Adjustment for Quality Assessment and Fairer Voting in Online Platforms with Helpfulness Evaluation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Counting atoms faster: policy-based nuclear magnetic resonance pulse sequencing for atomic abundance measurement |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Counting in Small Transformers: The Delicate Interplay between Attention and Feed-Forward Layers |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
4 |
| Cover learning for large-scale topology representation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Covered Forest: Fine-grained generalization analysis of graph neural networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Cowpox: Towards the Immunity of VLM-based Multi-Agent Systems |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Cradle: Empowering Foundation Agents towards General Computer Control |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Craftium: Bridging Flexibility and Efficiency for Rich 3D Single- and Multi-Agent Environments |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM’s Reasoning Capability |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Cross-City Latent Space Alignment for Consistency Region Embedding |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Cross-Modal Alignment via Variational Copula Modelling |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Cross-regularization: Adaptive Model Complexity through Validation Gradients |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Curriculum Learning for Biological Sequence Prediction: The Case of De Novo Peptide Sequencing |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Curse of High Dimensionality Issue in Transformer for Long Context Modeling |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| CursorCore: Assist Programming through Aligning Anything |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| CurvGAD: Leveraging Curvature for Enhanced Graph Anomaly Detection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Curvature Enhanced Data Augmentation for Regression |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Curvature-aware Graph Attention for PDEs on Manifolds |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
5 |
| Customizing the Inductive Biases of Softmax Attention using Structured Matrices |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Cut out and Replay: A Simple yet Versatile Strategy for Multi-Label Online Continual Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| D-Fusion: Direct Preference Optimization for Aligning Diffusion Models with Visually Consistent Samples |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DA-KD: Difficulty-Aware Knowledge Distillation for Efficient Large Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DAMA: Data- and Model-aware Alignment of Multi-modal LLMs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DANCE: Dual Unbiased Expansion with Group-acquired Alignment for Out-of-distribution Graph Fairness Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| DCBM: Data-Efficient Visual Concept Bottleneck Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DEALing with Image Reconstruction: Deep Attentive Least Squares |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DIME: Diffusion-Based Maximum Entropy Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| DIS-CO: Discovering Copyrighted Content in VLMs Training Data |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| DISCO: learning to DISCover an evolution Operator for multi-physics-agnostic prediction |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| DLP: Dynamic Layerwise Pruning in Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning Based on Constant-Overhead Linear Secret Resharing |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DMOSpeech: Direct Metric Optimization via Distilled Diffusion Model in Zero-Shot Speech Synthesis |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DOLPHIN: A Programmable Framework for Scalable Neurosymbolic Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DPCore: Dynamic Prompt Coreset for Continual Test-Time Adaptation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| DPO Meets PPO: Reinforced Token Optimization for RLHF |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DRAG: Data Reconstruction Attack using Guided Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DS-VLM: Diffusion Supervision Vision Language Model |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| DSBRouter: End-to-end Global Routing via Diffusion Schrödinger Bridge |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| DTZO: Distributed Trilevel Zeroth Order Learning with Provable Non-Asymptotic Convergence |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| DUNIA: Pixel-Sized Embeddings via Cross-Modal Alignment for Earth Observation Applications |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DVI:A Derivative-based Vision Network for INR |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Data Mixing Optimization for Supervised Fine-Tuning of Large Language Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Data-Driven Selection of Instrumental Variables for Additive Nonlinear, Constant Effects Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
4 |
| Data-Juicer Sandbox: A Feedback-Driven Suite for Multimodal Data-Model Co-development |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Data-driven Design of Randomized Control Trials with Guaranteed Treatment Effects |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| DataDecide: How to Predict Best Pretraining Data with Small Experiments |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Dataflow-Guided Neuro-Symbolic Language Models for Type Inference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| David and Goliath: Small One-step Model Beats Large Diffusion with Score Post-training |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| De-AntiFake: Rethinking the Protective Perturbations Against Voice Cloning Attacks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| De-coupled NeuroGF for Shortest Path Distance Approximations on Large Terrain Graphs |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| De-mark: Watermark Removal in Large Language Models |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| DeFoG: Discrete Flow Matching for Graph Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Decision Making under the Exponential Family: Distributionally Robust Optimisation with Bayesian Ambiguity Sets |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Decision Mixer: Integrating Long-term and Local Dependencies via Dynamic Token Selection for Decision-Making |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Decision Theoretic Foundations for Conformal Prediction: Optimal Uncertainty Quantification for Risk-Averse Agents |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Decision-aware Training of Spatiotemporal Forecasting Models to Select a Top-K Subset of Sites for Intervention |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Decoding Rewards in Competitive Games: Inverse Game Theory with Entropy Regularization |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Decomposition of Graphic Design with Unified Multimodal Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Decoupled SGDA for Games with Intermittent Strategy Communication |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Deep Bayesian Filter for Bayes-Faithful Data Assimilation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Deep Electromagnetic Structure Design Under Limited Evaluation Budgets |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Deep Fuzzy Multi-view Learning for Reliable Classification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Deep Neural Cellular Potts Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Deep Principal Support Vector Machines for Nonlinear Sufficient Dimension Reduction |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Deep Reinforcement Learning from Hierarchical Preference Design |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Deep Ridgelet Transform and Unified Universality Theorem for Deep and Shallow Joint-Group-Equivariant Machines |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Deep Streaming View Clustering |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Deep Sturm–Liouville: From Sample-Based to 1D Regularization with Learnable Orthogonal Basis Functions |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Deep Unsupervised Hashing via External Guidance |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| DeepCrossAttention: Supercharging Transformer Residual Connections |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| DeepLayout: Learning Neural Representations of Circuit Placement Layout |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Defending LVLMs Against Vision Attacks Through Partial-Perception Supervision |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Delay-DSGN: A Dynamic Spiking Graph Neural Network with Delay Mechanisms for Evolving Graph |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Deliberation in Latent Space via Differentiable Cache Augmentation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Delta Decompression for MoE-based LLMs Compression |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Demeaned Sparse: Efficient Anomaly Detection by Residual Estimate |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Demonstration Selection for In-Context Learning via Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Demystifying Catastrophic Forgetting in Two-Stage Incremental Object Detector |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Demystifying Long Chain-of-Thought Reasoning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Demystifying Singular Defects in Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Demystifying the Paradox of Importance Sampling with an Estimated History-Dependent Behavior Policy in Off-Policy Evaluation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Dendritic Localized Learning: Toward Biologically Plausible Algorithm |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Density Ratio Estimation with Conditional Probability Paths |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Density Ratio Estimation-based Bayesian Optimization with Semi-Supervised Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dequantified Diffusion-Schrödinger Bridge for Density Ratio Estimation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Design Considerations in Offline Preference-based RL |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Designing Cyclic Peptides via Harmonic SDE with Atom-Bond Modeling |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Detecting Strategic Deception with Linear Probes |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Determinant Estimation under Memory Constraints and Neural Scaling Laws |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Determining Layer-wise Sparsity for Large Language Models Through a Theoretical Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Deterministic Sparse Fourier Transform for Continuous Signals with Frequency Gap |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Devil is in the Details: Density Guidance for Detail-Aware Generation with Flow Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| DexScale: Automating Data Scaling for Sim2Real Generalizable Robot Control |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| DiLQR: Differentiable Iterative Linear Quadratic Regulator via Implicit Differentiation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DiMa: Understanding the Hardness of Online Matching Problems via Diffusion Models |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| DiTAR: Diffusion Transformer Autoregressive Modeling for Speech Generation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Diagonal Symmetrization of Neural Network Solvers for the Many-Electron Schrödinger Equation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Dialogue Without Limits: Constant-Sized KV Caches for Extended Response in LLMs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Diff-MoE: Diffusion Transformer with Time-Aware and Space-Adaptive Experts |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| DiffAdvMAP: Flexible Diffusion-Based Framework for Generating Natural Unrestricted Adversarial Examples |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DiffMS: Diffusion Generation of Molecules Conditioned on Mass Spectra |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Differentiable Quadratic Optimization For the Maximum Independent Set Problem |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Differentiable Solver Search for Fast Diffusion Sampling |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Differentiable Structure Learning with Ancestral Constraints |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Differential Coding for Training-Free ANN-to-SNN Conversion |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Differential Privacy Guarantees of Markov Chain Monte Carlo Algorithms |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Differential Privacy Under Class Imbalance: Methods and Empirical Insights |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Differentially Private Analysis for Binary Response Models: Optimality, Estimation, and Inference |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Differentially Private Boxplots |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Differentially Private Federated $k$-Means Clustering with Server-Side Data |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Differentially Private Space-Efficient Algorithms for Counting Distinct Elements in the Turnstile Model |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Diffusion Adversarial Post-Training for One-Step Video Generation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Diffusion Counterfactual Generation with Semantic Abduction |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Diffusion Instruction Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Diffusion Models are Secretly Exchangeable: Parallelizing DDPMs via Auto Speculation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Diffusion Sampling Correction via Approximately 10 Parameters |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Diffusion models for Gaussian distributions: Exact solutions and Wasserstein errors |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Diffusion on Language Model Encodings for Protein Sequence Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Diffusion-based Adversarial Purification from the Perspective of the Frequency Domain |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| DiffusionVLA: Scaling Robot Foundation Models via Unified Diffusion and Autoregression |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Dimension-Free Adaptive Subgradient Methods with Frequent Directions |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dimension-Independent Rates for Structured Neural Density Estimation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Dimensionality Reduction on Complex Vector Spaces for Euclidean Distance with Dynamic Weights |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| DipLLM: Fine-Tuning LLM for Strategic Decision-making in Diplomacy |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Direct Density Ratio Optimization: A Statistically Consistent Approach to Aligning Large Language Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Direct Motion Models for Assessing Generated Videos |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Direct Prediction Set Minimization via Bilevel Conformal Classifier Training |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Directed Graph Grammars for Sequence-based Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Directly Forecasting Belief for Reinforcement Learning with Delays |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Discovering Global False Negatives On the Fly for Self-supervised Contrastive Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Discovering Latent Causal Graphs from Spatiotemporal Data |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Discovering Physics Laws of Dynamical Systems via Invariant Function Learning |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Discovering Spoofing Attempts on Language Model Watermarks |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Discovering Symbolic Cognitive Models from Human and Animal Behavior |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Discovering a Zero (Zero-Vector Class of Machine Learning) |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Discrepancies are Virtue: Weak-to-Strong Generalization through Lens of Intrinsic Dimension |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Discrepancy Minimization in Input-Sparsity Time |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Discrete Markov Probabilistic Models: An Improved Discrete Score-Based Framework with sharp convergence bounds under minimal assumptions |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Discrete Neural Algorithmic Reasoning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Discrete and Continuous Difference of Submodular Minimization |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
4 |
| Discriminative Finetuning of Generative Large Language Models without Reward Models and Human Preference Data |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Discriminative Policy Optimization for Token-Level Reward Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Disentangled Graph Spectral Domain Adaptation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Disentangling Invariant Subgraph via Variance Contrastive Estimation under Distribution Shifts |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Disentangling and Integrating Relational and Sensory Information in Transformer Architectures |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Disparate Conditional Prediction in Multiclass Classifiers |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Diss-l-ECT: Dissecting Graph Data with Local Euler Characteristic Transforms |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Dissecting Submission Limit in Desk-Rejections: A Mathematical Analysis of Fairness in AI Conference Policies |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Distillation Scaling Laws |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Distillation of Discrete Diffusion through Dimensional Correlations |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Distilling the Knowledge in Data Pruning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Distinguishing Cause from Effect with Causal Velocity Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Distributed Conformal Prediction via Message Passing |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Distributed Differentially Private Data Analytics via Secure Sketching |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Distributed Event-Based Learning via ADMM |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Distributed Nonparametric Estimation: from Sparse to Dense Samples per Terminal |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Distributed Parallel Gradient Stacking(DPGS): Solving Whole Slide Image Stacking Challenge in Multi-Instance Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Distributed Retraction-Free and Communication-Efficient Optimization on the Stiefel Manifold |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Distribution-aware Fairness Learning in Medical Image Segmentation From A Control-Theoretic Perspective |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Distributional Diffusion Models with Scoring Rules |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Distributionally Robust Active Learning for Gaussian Process Regression |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Distributionally Robust Multi-Agent Reinforcement Learning for Dynamic Chute Mapping |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Distributionally Robust Policy Learning under Concept Drifts |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Diverging Preferences: When do Annotators Disagree and do Models Know? |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Diverse Prototypical Ensembles Improve Robustness to Subpopulation Shift |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Diversified Flow Matching with Translation Identifiability |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Diversifying Policy Behaviors with Extrinsic Behavioral Curiosity |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Diversity By Design: Leveraging Distribution Matching for Offline Model-Based Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Divide and Conquer: Exploring Language-centric Tree Reasoning for Video Question-Answering |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Divide and Conquer: Grounding LLMs as Efficient Decision-Making Agents via Offline Hierarchical Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Divide and Conquer: Learning Label Distribution with Subtasks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Diving into Self-Evolving Training for Multimodal Reasoning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Do Bayesian Neural Networks Actually Behave Like Bayesian Models? |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Do Multiple Instance Learning Models Transfer? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Do NOT Think That Much for 2+3=? On the Overthinking of Long Reasoning Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Do Not Mimic My Voice : Speaker Identity Unlearning for Zero-Shot Text-to-Speech |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Do Vision-Language Models Really Understand Visual Language? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Do We Need to Verify Step by Step? Rethinking Process Supervision from a Theoretical Perspective |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Do We Really Need Message Passing in Brain Network Modeling? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DocKS-RAG: Optimizing Document-Level Relation Extraction through LLM-Enhanced Hybrid Prompt Tuning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| DocVXQA: Context-Aware Visual Explanations for Document Question Answering |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Does Data Scaling Lead to Visual Compositional Generalization? |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Does Generation Require Memorization? Creative Diffusion Models using Ambient Diffusion |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Does One-shot Give the Best Shot? Mitigating Model Inconsistency in One-shot Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Does learning the right latent variables necessarily improve in-context learning? |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Domain-Adapted Diffusion Model for PROTAC Linker Design Through the Lens of Density Ratio in Chemical Space |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Domain2Vec: Vectorizing Datasets to Find the Optimal Data Mixture without Training |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Don’t Restart, Just Reuse: Reoptimizing MILPs with Dynamic Parameters |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Double Machine Learning for Causal Inference under Shared-State Interference |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| Double-Filter: Efficient Fine-tuning of Pre-trained Vision-Language Models via Patch&Layer Filtering |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Doubly Protected Estimation for Survival Outcomes Utilizing External Controls for Randomized Clinical Trials |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
2 |
| Doubly Robust Conformalized Survival Analysis with Right-Censored Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Doubly Robust Fusion of Many Treatments for Policy Learning |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| DragLoRA: Online Optimization of LoRA Adapters for Drag-based Image Editing in Diffusion Model |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DragSolver: A Multi-Scale Transformer for Real-World Automotive Drag Coefficient Estimation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| DreamDPO: Aligning Text-to-3D Generation with Human Preferences via Direct Preference Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DriveGPT: Scaling Autoregressive Behavior Models for Driving |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Drug-TTA: Test-Time Adaptation for Drug Virtual Screening via Multi-task Meta-Auxiliary Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Dual Feature Reduction for the Sparse-group Lasso and its Adaptive Variant |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Dueling Convex Optimization with General Preferences |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| DyCodeEval: Dynamic Benchmarking of Reasoning Capabilities in Code Large Language Models Under Data Contamination |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| DyPolySeg: Taylor Series-Inspired Dynamic Polynomial Fitting Network for Few-shot Point Cloud Semantic Segmentation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| DynaMind: Reasoning over Abstract Video Dynamics for Embodied Decision-Making |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Dynamic Mixture of Curriculum LoRA Experts for Continual Multimodal Instruction Tuning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dynamic Similarity Graph Construction with Kernel Density Estimation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Dynamic Sparse Training of Diagonally Sparse Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dynamical Modeling of Behaviorally Relevant Spatiotemporal Patterns in Neural Imaging Data |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dynamical phases of short-term memory mechanisms in RNNs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| E-LDA: Toward Interpretable LDA Topic Models with Strong Guarantees in Logarithmic Parallel Time |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| EAGLES: Towards Effective, Efficient, and Economical Federated Graph Learning via Unified Sparsification |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| EARL-BO: Reinforcement Learning for Multi-Step Lookahead, High-Dimensional Bayesian Optimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EARTH: Epidemiology-Aware Neural ODE with Continuous Disease Transmission Graph |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| EEG-Language Pretraining for Highly Label-Efficient Clinical Phenotyping |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| EFDTR: Learnable Elliptical Fourier Descriptor Transformer for Instance Segmentation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| EGPlace: An Efficient Macro Placement Method via Evolutionary Search with Greedy Repositioning Guided Mutation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ELEMENTAL: Interactive Learning from Demonstrations and Vision-Language Models for Reward Design in Robotics |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ELITE: Enhanced Language-Image Toxicity Evaluation for Safety |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| ELMO : Efficiency via Low-precision and Peak Memory Optimization in Large Output Spaces |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ELoRA: Low-Rank Adaptation for Equivariant GNNs |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| ENAHPool: The Edge-Node Attention-based Hierarchical Pooling for Graph Neural Networks |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| ENSUR: Equitable and Statistically Unbiased Recommendation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| EPIC: Efficient Position-Independent Caching for Serving Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ERICT: Enhancing Robustness by Identifying Concept Tokens in Zero-Shot Vision Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ESPFormer: Doubly-Stochastic Attention with Expected Sliced Transport Plans |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ETTA: Elucidating the Design Space of Text-to-Audio Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| EVOLvE: Evaluating and Optimizing LLMs For In-Context Exploration |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Earley-Driven Dynamic Pruning for Efficient Structured Decoding |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| EasyInv: Toward Fast and Better DDIM Inversion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| EcoMapper: Generative Modeling for Climate-Aware Satellite Imagery |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Edge-Colored Clustering in Hypergraphs: Beyond Minimizing Unsatisfied Edges |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| EditLord: Learning Code Transformation Rules for Code Editing |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Editable Concept Bottleneck Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
5 |
| Editable Noise Map Inversion: Encoding Target-image into Noise For High-Fidelity Image Manipulation |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| EduLLM: Leveraging Large Language Models and Framelet-Based Signed Hypergraph Neural Networks for Student Performance Prediction |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Effective and Efficient Masked Image Generation Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EffiCoder: Enhancing Code Generation in Large Language Models through Efficiency-Aware Fine-tuning |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Efficient ANN-SNN Conversion with Error Compensation Learning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Efficient Bisection Projection to Ensure Neural-Network Solution Feasibility for Optimization over General Set |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Core-set Selection for Deep Learning Through Squared Loss Minimization |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Efficient Curvature-Aware Hypergradient Approximation for Bilevel Optimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Diffusion Models for Symmetric Manifolds |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Distributed Optimization under Heavy-Tailed Noise |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Efficient Federated Incomplete Multi-View Clustering |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Fine-Grained Guidance for Diffusion Model Based Symbolic Music Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient First-Order Optimization on the Pareto Set for Multi-Objective Learning under Preference Guidance |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Efficient Generative Modeling with Residual Vector Quantization-Based Tokens |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Graph Continual Learning via Lightweight Graph Neural Tangent Kernels-based Dataset Distillation |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Efficient Heterogeneity-Aware Federated Active Data Selection |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Efficient Length-Generalizable Attention via Causal Retrieval for Long-Context Language Modeling |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Efficient LiDAR Reflectance Compression via Scanning Serialization |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Efficient Logit-based Knowledge Distillation of Deep Spiking Neural Networks for Full-Range Timestep Deployment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Long Context Fine-tuning with Chunk Flow |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Molecular Conformer Generation with SO(3)-Averaged Flow Matching and Reflow |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Motion Prompt Learning for Robust Visual Tracking |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Multi-modal Long Context Learning for Training-free Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Multivariate Robust Mean Estimation Under Mean-Shift Contamination |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Efficient Network Automatic Relevance Determination |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Noise Calculation in Deep Learning-based MRI Reconstructions |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Online Reinforcement Learning for Diffusion Policy |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Optimization with Orthogonality Constraint: a Randomized Riemannian Submanifold Method |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Personalized Adaptation for Physiological Signal Foundation Model |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Efficient Quantification of Multimodal Interaction at Sample Level |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Efficient Robotic Policy Learning via Latent Space Backward Planning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Skill Discovery via Regret-Aware Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Efficient Source-free Unlearning via Energy-Guided Data Synthesis and Discrimination-Aware Multitask Optimization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Time Series Processing for Transformers and State-Space Models through Token Merging |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Efficient and Privacy-Preserving Soft Prompt Transfer for LLMs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient and Scalable Density Functional Theory Hamiltonian Prediction through Adaptive Sparsity |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Efficient and Separate Authentication Image Steganography Network |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficiently Access Diffusion Fisher: Within the Outer Product Span Space |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficiently Serving Large Multimodal Models Using EPD Disaggregation |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Efficiently Vectorized MCMC on Modern Accelerators |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| EgoPrivacy: What Your First-Person Camera Says About You? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Ehrenfeucht-Haussler Rank and Chain of Thought |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Eigen Analysis of Conjugate Kernel and Neural Tangent Kernel |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Eigenspectrum Analysis of Neural Networks without Aspect Ratio Bias |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Eliciting Language Model Behaviors with Investigator Agents |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Elucidating Flow Matching ODE Dynamics via Data Geometry and Denoisers |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Elucidating the Design Space of Multimodal Protein Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Elucidating the design space of language models for image generation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Embedding Safety into RL: A New Take on Trust Region Methods |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Emergence and Effectiveness of Task Vectors in In-Context Learning: An Encoder Decoder Perspective |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Emergence in non-neural models: grokking modular arithmetic via average gradient outer product |
✅ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Emergent Response Planning in LLMs |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| EmoGrowth: Incremental Multi-label Emotion Decoding with Augmented Emotional Relation Graph |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Emoji Attack: Enhancing Jailbreak Attacks Against Judge LLM Detection |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Emotional Face-to-Speech |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Empirical Privacy Variance |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Empower Structure-Based Molecule Optimization with Gradient Guided Bayesian Flow Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Empowering World Models with Reflection for Embodied Video Prediction |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EnIGMA: Interactive Tools Substantially Assist LM Agents in Finding Security Vulnerabilities |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Enabling Optimal Decisions in Rehearsal Learning under CARE Condition |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EncryptedLLM: Privacy-Preserving Large Language Model Inference via GPU-Accelerated Fully Homomorphic Encryption |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| End-to-End Learning Framework for Solving Non-Markovian Optimal Control |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Energy-Based Flow Matching for Generating 3D Molecular Structure |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Energy-Based Preference Model Offers Better Offline Alignment than the Bradley-Terry Preference Model |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Enforcing Idempotency in Neural Networks |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Enforcing Latent Euclidean Geometry in Single-Cell VAEs for Manifold Interpolation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Enhancing Decision-Making of Large Language Models via Actor-Critic |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Diversity In Parallel Agents: A Maximum State Entropy Exploration Story |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Foundation Models with Federated Domain Knowledge Infusion |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Enhancing Graph Contrastive Learning for Protein Graphs from Perspective of Invariance |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Enhancing Graph Invariant Learning from a Negative Inference Perspective |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Enhancing Ligand Validity and Affinity in Structure-Based Drug Design with Multi-Reward Optimization |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Enhancing Logits Distillation with Plug&Play Kendall’s $τ$ Ranking Loss |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Parallelism in Decentralized Stochastic Convex Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Enhancing Performance of Explainable AI Models with Constrained Concept Refinement |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Rating-Based Reinforcement Learning to Effectively Leverage Feedback from Large Vision-Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Enhancing Spectral GNNs: From Topology and Perturbation Perspectives |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Enhancing Statistical Validity and Power in Hybrid Controlled Trials: A Randomization Inference Approach with Conformal Selective Borrowing |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Enhancing Target-unspecific Tasks through a Features Matrix |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Enhancing Treatment Effect Estimation via Active Learning: A Counterfactual Covering Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Visual Localization with Cross-Domain Image Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing the Influence of Labels on Unlabeled Nodes in Graph Convolutional Networks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Ensemble Distribution Distillation via Flow Matching |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Ensemble Learned Bloom Filters: Two Oracles are Better than One |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EpiCoder: Encompassing Diversity and Complexity in Code Generation |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Epsilon-VAE: Denoising as Visual Decoding |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| EquivaMap: Leveraging LLMs for Automatic Equivalence Checking of Optimization Formulations |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Equivalence is All: A Unified View for Self-supervised Graph Learning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Equivariant Neural Tangent Kernels |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Equivariant Polynomial Functional Networks |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EraseAnything: Enabling Concept Erasure in Rectified Flow Transformers |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Ergodic Generative Flows |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Erwin: A Tree-based Hierarchical Transformer for Large-scale Physical Systems |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| EvFocus: Learning to Reconstruct Sharp Images from Out-of-Focus Event Streams |
✅ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Evaluating Judges as Evaluators: The JETTS Benchmark of LLM-as-Judges as Test-Time Scaling Evaluators |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Evaluating LLMs Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Evaluating Neuron Explanations: A Unified Framework with Sanity Checks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Event-Customized Image Generation |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| EvoControl: Multi-Frequency Bi-Level Control for High-Frequency Continuous Control |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EvoMesh: Adaptive Physical Simulation with Hierarchical Graph Evolutions |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| EvoPress: Accurate Dynamic Model Compression via Evolutionary Search |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Evolving Minds: Logic-Informed Inference from Temporal Action Patterns |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Evolving Prompts In-Context: An Open-ended, Self-replicating Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Ex-VAD: Explainable Fine-grained Video Anomaly Detection Based on Visual-Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| ExLM: Rethinking the Impact of $\texttt[MASK]$ Tokens in Masked Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Exact Recovery of Sparse Binary Vectors from Generalized Linear Measurements |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Exact risk curves of signSGD in High-Dimensions: quantifying preconditioning and noise-compression effects |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exactly Tight Information-theoretic Generalization Bounds via Binary Jensen-Shannon Divergence |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exogenous Isomorphism for Counterfactual Identifiability |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| ExpProof : Operationalizing Explanations for Confidential Models with ZKPs |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Expected Variational Inequalities |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Expert Race: A Flexible Routing Strategy for Scaling Diffusion Transformer with Mixture of Experts |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Explainable Concept Generation through Vision-Language Preference Learning for Understanding Neural Networks’ Internal Representations |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Explaining the role of Intrinsic Dimensionality in Adversarial Training |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Explaining, Fast and Slow: Abstraction and Refinement of Provable Explanations |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Explicit Discovery of Nonlinear Symmetries from Dynamic Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Explicit Exploration for High-Welfare Equilibria in Game-Theoretic Multiagent Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Explicit Preference Optimization: No Need for an Implicit Reward Model |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exploiting Curvature in Online Convex Optimization with Delayed Feedback |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Exploiting Presentative Feature Distributions for Parameter-Efficient Continual Learning of Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Exploiting Similarity for Computation and Communication-Efficient Decentralized Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Exploring Criteria of Loss Reweighting to Enhance LLM Unlearning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Exploring Invariance in Images through One-way Wave Equations |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Exploring Large Action Sets with Hyperspherical Embeddings using von Mises-Fisher Sampling |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Exploring Representations and Interventions in Time Series Foundation Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Exploring Vision Semantic Prompt for Efficient Point Cloud Understanding |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| Exponential Family Variational Flow Matching for Tabular Data Generation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Expressive Score-Based Priors for Distribution Matching with Geometry-Preserving Regularization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ExtPose: Robust and Coherent Pose Estimation by Extending ViTs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Extracting Rare Dependence Patterns via Adaptive Sample Reweighting |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Extractive Structures Learned in Pretraining Enable Generalization on Finetuned Facts |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Extreme Value Policy Optimization for Safe Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| FAB-PPI: Frequentist, Assisted by Bayes, Prediction-Powered Inference |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FACTER: Fairness-Aware Conformal Thresholding and Prompt Engineering for Enabling Fair LLM-Based Recommender Systems |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| FDGen: A Fairness-Aware Graph Generation Model |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| FEAT-KD: Learning Concise Representations for Single and Multi-Target Regression via TabNet Knowledge Distillation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| FG-CLIP: Fine-Grained Visual and Textual Alignment |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FIC-TSC: Learning Time Series Classification with Fisher Information Constraint |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| FLAM: Frame-Wise Language-Audio Modeling |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| FOCoOp: Enhancing Out-of-Distribution Robustness in Federated Prompt Learning for Vision-Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| FOUNDER: Grounding Foundation Models in World Models for Open-Ended Embodied Decision Making |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| FSL-SAGE: Accelerating Federated Split Learning via Smashed Activation Gradient Estimation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FSTLLM: Spatio-Temporal LLM for Few Shot Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FactTest: Factuality Testing in Large Language Models with Finite-Sample and Distribution-Free Guarantees |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fair Clustering via Alignment |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FairICP: Encouraging Equalized Odds via Inverse Conditional Permutation |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| FairPFN: A Tabular Foundation Model for Causal Fairness |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fairness Overfitting in Machine Learning: An Information-Theoretic Perspective |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Fairness on Principal Stratum: A New Perspective on Counterfactual Fairness |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Falcon: Fast Visuomotor Policies via Partial Denoising |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| False Coverage Proportion Control for Conformal Prediction |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Falsification of Unconfoundedness by Testing Independence of Causal Mechanisms |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Fast Estimation of Partial Dependence Functions using Trees |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Fast Exact Unlearning for In-Context Learning Data for LLMs |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fast Incomplete Multi-view Clustering by Flexible Anchor Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fast Inference with Kronecker-Sparse Matrices |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
5 |
| Fast Large Language Model Collaborative Decoding via Speculation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fast Min-$ε$ Segmented Regression using Constant-Time Segment Merging |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fast Tensor Completion via Approximate Richardson Iteration |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Fast Video Generation with Sliding Tile Attention |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fast and Low-Cost Genomic Foundation Models via Outlier Removal |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fast and Provable Algorithms for Sparse PCA with Improved Sample Complexity |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Fast and Robust: Task Sampling with Posterior and Diversity Synergies for Adaptive Decision-Makers in Randomized Environments |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fast, Accurate Manifold Denoising by Tunneling Riemannian Optimization |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| FastCAV: Efficient Computation of Concept Activation Vectors for Explaining Deep Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Faster Approximation Algorithms for k-Center via Data Reduction |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Faster Global Minimum Cut with Predictions |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Faster Rates for Private Adversarial Bandits |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Faster Stochastic Optimization with Arbitrary Delays via Adaptive Asynchronous Mini-Batching |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Faster and Stronger: When ANN-SNN Conversion Meets Parallel Spiking Calculation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Feasible Action Search for Bandit Linear Programs via Thompson Sampling |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FeatSharp: Your Vision Model Features, Sharper |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Feature Importance Metrics in the Presence of Missing Data |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| Feature Learning beyond the Lazy-Rich Dichotomy: Insights from Representational Geometry |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Feature Shift Localization Network |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Feature learning from non-Gaussian inputs: the case of Independent Component Analysis in high dimensions |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Feature out! Let Raw Image as Your Condition for Blind Face Restoration |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Feature-Mapping Topology Optimization with Neural Heaviside Signed Distance Functions |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Features are fate: a theory of transfer learning in high-dimensional regression |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| FedBEns: One-Shot Federated Learning based on Bayesian Ensemble |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FedClean: A General Robust Label Noise Correction for Federated Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| FedECADO: A Dynamical System Model of Federated Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| FedOne: Query-Efficient Federated Learning for Black-box Discrete Prompt Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| FedPHA: Federated Prompt Learning for Heterogeneous Client Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| FedSMU: Communication-Efficient and Generalization-Enhanced Federated Learning through Symbolic Model Updates |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| FedSSI: Rehearsal-Free Continual Federated Learning with Synergistic Synaptic Intelligence |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Federated Causal Structure Learning with Non-identical Variable Sets |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Federated Disentangled Tuning with Textual Prior Decoupling and Visual Dynamic Adaptation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Federated Generalised Variational Inference: A Robust Probabilistic Federated Learning Framework |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Federated In-Context Learning: Iterative Refinement for Improved Answer Quality |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Federated Incomplete Multi-view Clustering with Globally Fused Graph Guidance |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Federated Learning for Feature Generalization with Convex Constraints |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Federated Node-Level Clustering Network with Cross-Subgraph Link Mending |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Federated Oriented Learning: A Practical One-Shot Personalized Federated Learning Framework |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Feedforward Few-shot Species Range Estimation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Few-Shot Learner Generalizes Across AI-Generated Image Detection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Feynman-Kac Correctors in Diffusion: Annealing, Guidance, and Product of Experts |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FicGCN: Unveiling the Homomorphic Encryption Efficiency from Irregular Graph Convolutional Networks |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Field Matching: an Electrostatic Paradigm to Generate and Transfer Data |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Finding Wasserstein Ball Center: Efficient Algorithm and The Applications in Fairness |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Fine-Grained Captioning of Long Videos through Scene Graph Consolidation |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Finite-Sample Convergence Bounds for Trust Region Policy Optimization in Mean Field Games |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Finite-Time Analysis of Discrete-Time Stochastic Interpolants |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Finite-Time Convergence Rates in Stochastic Stackelberg Games with Smooth Algorithmic Agents |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Finite-Time Global Optimality Convergence in Deep Neural Actor-Critic Methods for Decentralized Multi-Agent Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| FireFlow: Fast Inversion of Rectified Flow for Image Semantic Editing |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FisherSFT: Data-Efficient Supervised Fine-Tuning of Language Models Using Information Gain |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Fishers for Free? Approximating the Fisher Information Matrix by Recycling the Squared Gradient Accumulator |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Fixed-Confidence Multiple Change Point Identification under Bandit Feedback |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Fixing the Double Penalty in Data-Driven Weather Forecasting Through a Modified Spherical Harmonic Loss Function |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fixing the Loose Brake: Exponential-Tailed Stopping Time in Best Arm Identification |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| FlashTP: Fused, Sparsity-Aware Tensor Product for Machine Learning Interatomic Potentials |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Flat-LoRA: Low-Rank Adaptation over a Flat Loss Landscape |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FlatQuant: Flatness Matters for LLM Quantization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fleet of Agents: Coordinated Problem Solving with Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Flex3D: Feed-Forward 3D Generation with Flexible Reconstruction Model and Input View Curation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| FlexControl: Computation-Aware Conditional Control with Differentiable Router for Text-to-Image Generation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FlexTok: Resampling Images into 1D Token Sequences of Flexible Length |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| FlexiClip: Locality-Preserving Free-Form Character Animation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| FlexiReID: Adaptive Mixture of Expert for Multi-Modal Person Re-Identification |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Flexibility-conditioned protein structure design with flow matching |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Flexible Tails for Normalizing Flows |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Flexible and Efficient Grammar-Constrained Decoding |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Flexible, Efficient, and Stable Adversarial Attacks on Machine Unlearning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| FlipAttack: Jailbreak LLMs via Flipping |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| FloE: On-the-Fly MoE Inference on Memory-constrained GPU |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Floating-Point Neural Networks Can Represent Almost All Floating-Point Functions |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Flow Matching for Denoised Social Recommendation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Flow Matching for Few-Trial Neural Adaptation with Stable Latent Dynamics |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Flow Q-Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Flow-based Domain Randomization for Learning and Sequencing Robotic Skills |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Flow-field inference from neural data using deep recurrent networks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| FlowDrag: 3D-aware Drag-based Image Editing with Mesh-guided Deformation Vector Flow Fields |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Flowing Datasets with Wasserstein over Wasserstein Gradient Flows |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fluctuations of the largest eigenvalues of transformed spiked Wigner matrices |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Focal-SAM: Focal Sharpness-Aware Minimization for Long-Tailed Classification |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Focus On This, Not That! Steering LLMs with Adaptive Feature Specification |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Foundation Model Insights and a Multi-Model Approach for Superior Fine-Grained One-shot Subset Selection |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Foundation Molecular Grammar: Multi-Modal Foundation Models Induce Interpretable Molecular Graph Languages |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FourierMamba: Fourier Learning Integration with State Space Models for Image Deraining |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Fragments to Facts: Partial-Information Fragment Inference from LLMs |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| FrameBridge: Improving Image-to-Video Generation with Bridge Models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Fraud-Proof Revenue Division on Subscription Platforms |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Free Process Rewards without Process Labels |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| FreeMesh: Boosting Mesh Generation with Coordinates Merging |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| From Black Boxes to Transparent Minds: Evaluating and Enhancing the Theory of Mind in Multimodal Large Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| From Complex to Atomic: Enhancing Augmented Generation via Knowledge-Aware Dual Rewriting and Reasoning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| From Crowdsourced Data to High-quality Benchmarks: Arena-Hard and Benchbuilder Pipeline |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| From Feature Interaction to Feature Generation: A Generative Paradigm of CTR Prediction Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From Individual Experience to Collective Evidence: A Reporting-Based Framework for Identifying Systemic Harms |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| From Jack of All Trades to Master of One: Specializing LLM-based Autoraters to a Test Set |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From Kernels to Features: A Multi-Scale Adaptive Theory of Feature Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| From Language Models over Tokens to Language Models over Characters |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| From Logits to Hierarchies: Hierarchical Clustering made Simple |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories, and Applications |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| From Mechanistic Interpretability to Mechanistic Biology: Training, Evaluating, and Interpreting Sparse Autoencoders on Protein Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From RAG to Memory: Non-Parametric Continual Learning for Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| From Spectrum-free towards Baseline-view-free: Double-track Proximity Driven Multi-view Clustering |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| From Theory to Practice: Rethinking Green and Martin Kernels for Unleashing Graph Transformers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| From Thousands to Billions: 3D Visual Language Grounding via Render-Supervised Distillation from 2D VLMs |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| From Token to Rhythm: A Multi-Scale Approach for ECG-Language Pretraining |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| From Uncertain to Safe: Conformal Adaptation of Diffusion Models for Safe PDE Control |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| From Weight-Based to State-Based Fine-Tuning: Further Memory Reduction on LoRA with Parallel Control |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fully Dynamic Embedding into $\ell_p$ Spaces |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Fully Dynamic Euclidean Bi-Chromatic Matching in Sublinear Update Time |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fully Heteroscedastic Count Regression with Deep Double Poisson Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Function Encoders: A Principled Approach to Transfer Learning in Hilbert Spaces |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Function-Space Learning Rates |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Function-to-Style Guidance of LLMs for Code Translation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Functional Alignment Can Mislead: Examining Model Stitching |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Fundamental Limits of Visual Autoregressive Transformers: Universal Approximation Abilities |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Fundamental limits of learning in sequence multi-index models and deep attention networks: high-dimensional asymptotics and sharp thresholds |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FuseUNet: A Multi-Scale Feature Fusion Method for U-like Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fusing Reward and Dueling Feedback in Stochastic Bandits |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| G-Adaptivity: optimised graph-based mesh relocation for finite element methods |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| G-Sim: Generative Simulations with Large Language Models and Gradient-Free Calibration |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| GANQ: GPU-Adaptive Non-Uniform Quantization for Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GAPrompt: Geometry-Aware Point Cloud Prompt for 3D Vision Model |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| GCAL: Adapting Graph Models to Evolving Domain Shifts |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| GEFA: A General Feature Attribution Framework Using Proxy Gradient Estimation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| GHOST: Generalizable One-Shot Federated Graph Learning with Proxy-Based Topology Knowledge Retention |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| GIVE: Structured Reasoning of Large Language Models with Knowledge Graph Inspired Veracity Extrapolation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| GLGENN: A Novel Parameter-Light Equivariant Neural Networks Architecture Based on Clifford Geometric Algebras |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| GMAIL: Generative Modality Alignment for generated Image Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GPEN: Global Position Encoding Network for Enhanced Subgraph Representation Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| GPTAQ: Efficient Finetuning-Free Quantization for Asymmetric Calibration |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via Multi-Step Reasoning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| GRAIL: Graph Edit Distance and Node Alignment using LLM-Generated Code |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GRAM: A Generative Foundation Reward Model for Reward Generalization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| GS-Bias: Global-Spatial Bias Learner for Single-Image Test-Time Adaptation of Vision-Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GSM-$∞$: How Do your LLMs Behave over Infinitely Increasing Reasoning Complexity and Context Length? |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| GTR: A General, Multi-View, and Dynamic Framework for Trajectory Representation Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Galileo: Learning Global & Local Features of Many Remote Sensing Modalities |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Gamma Distribution PCA-Enhanced Feature Learning for Angle-Robust SAR Target Recognition |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Gandalf the Red: Adaptive Security for LLMs |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Gap-Dependent Bounds for Federated $Q$-Learning |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
3 |
| GaussMark: A Practical Approach for Structural Watermarking of Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GaussMarker: Robust Dual-Domain Watermark for Diffusion Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Gaussian Mixture Flow Matching Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GenMol: A Drug Discovery Generalist with Discrete Diffusion |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GenZSL: Generative Zero-Shot Learning Via Inductive Variational Autoencoder |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| General agents need world models |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| General framework for online-to-nonconvex conversion: Schedule-free SGD is also effective for nonconvex optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Generalists vs. Specialists: Evaluating LLMs on Highly-Constrained Biophysical Sequence Optimization Tasks |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generalizable Multi-Camera 3D Object Detection from a Single Source via Fourier Cross-View Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Generalization Analysis for Controllable Learning |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Generalization Analysis for Supervised Contrastive Representation Learning under Non-IID Settings |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Generalization Bounds via Meta-Learned Model Representations: PAC-Bayes and Sample Compression Hypernetworks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Generalization Performance of Ensemble Clustering: From Theory to Algorithm |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Generalization Principles for Inference over Text-Attributed Graphs with Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Generalization and Robustness of the Tilted Empirical Risk |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Generalization in Federated Learning: A Conditional Mutual Information Framework |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Generalization of noisy SGD in unbounded non-convex settings |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Generalized Category Discovery via Reciprocal Learning and Class-Wise Distribution Regularization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalized Interpolating Discrete Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Generalized Random Forests Using Fixed-Point Trees |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Generalized Smooth Bilevel Optimization with Nonconvex Lower-Level |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Generalized Venn and Venn-Abers Calibration with Applications in Conformal Prediction |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Generalized additive models via direct optimization of regularized decision stump forests |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Generalizing Causal Effects from Randomized Controlled Trials to Target Populations across Diverse Environments |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Generating Hypotheses of Dynamic Causal Graphs in Neuroscience: Leveraging Generative Factor Models of Observed Time Series |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generation from Noisy Examples |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Generative Audio Language Modeling with Continuous-valued Tokens and Masked Next-Token Prediction |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Generative Data Mining with Longtail-Guided Diffusion |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Generative Human Trajectory Recovery via Embedding-Space Conditional Diffusion |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generative Intervention Models for Causal Perturbation Modeling |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Generative Modeling Reinvents Supervised Learning: Label Repurposing with Predictive Consistency Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Generative Point Cloud Registration |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generative Social Choice: The Next Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Geometric Algebra Planes: Convex Implicit Neural Volumes |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Geometric Contact Flows: Contactomorphisms for Dynamics and Control |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Geometric Feature Embedding for Effective 3D Few-Shot Class Incremental Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Geometric Generative Modeling with Noise-Conditioned Graph Networks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Geometric Hyena Networks for Large-scale Equivariant Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Geometric Median (GM) Matching for Robust k-Subset Selection from Noisy Data |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Geometric Representation Condition Improves Equivariant Molecule Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Geometric Resampling in Nearly Linear Time for Follow-the-Perturbed-Leader with Best-of-Both-Worlds Guarantee in Bandit Problems |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
5 |
| Geometric and Physical Constraints Synergistically Enhance Neural PDE Surrogates |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Geometry Informed Tokenization of Molecules for Language Model Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Geometry-Informed Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Global Context-aware Representation Learning for Spatially Resolved Transcriptomics |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $μ$ Parametrization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Global Optimization with a Power-Transformed Objective and Gaussian Smoothing |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Global curvature for second-order optimization of neural networks |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Global-Local Dirichlet Processes for Clustering Grouped Data in the Presence of Group-Specific Idiosyncratic Variables |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| GoIRL: Graph-Oriented Inverse Reinforcement Learning for Multimodal Trajectory Prediction |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Goal-Oriented Skill Abstraction for Offline Multi-Task Reinforcement Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Going Deeper into Locally Differentially Private Graph Neural Networks |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| GradPS: Resolving Futile Neurons in Parameter Sharing Network for Multi-Agent Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Gradient Aligned Regression via Pairwise Losses |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Gradient Boosting Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Gradient Descent Converges Arbitrarily Fast for Logistic Regression via Large and Adaptive Stepsizes |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Gradient Flow Provably Learns Robust Classifiers for Orthonormal GMMs |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Gradient Inversion of Multimodal Models |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Gradient-based Explanations for Deep Learning Survival Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Gradual Transition from Bellman Optimality Operator to Bellman Operator in Online Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Grammar-Forced Translation of Natural Language to Temporal Logic using LLMs |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Graph Adaptive Autoregressive Moving Average Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Graph Attention is Not Always Beneficial: A Theoretical Analysis of Graph Attention Mechanisms via Contextual Stochastic Block Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Graph Diffusion for Robust Multi-Agent Coordination |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Graph Generative Pre-trained Transformer |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Graph Inverse Style Transfer for Counterfactual Explainability |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Graph Minimum Factorization Distance and Its Application to Large-Scale Graph Data Clustering |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Graph Neural Network Generalization With Gaussian Mixture Model Based Augmentation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Graph World Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Graph-Assisted Stitching for Offline Hierarchical Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Graph-Based Algorithms for Diverse Similarity Search |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Graph-Supported Dynamic Algorithm Configuration for Multi-Objective Combinatorial Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Graph4MM: Weaving Multimodal Learning with Structural Information |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GraphCL: Graph-based Clustering for Semi-Supervised Medical Image Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GraphGPT: Generative Pre-trained Graph Eulerian Transformer |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Gravity-Bench-v1: A Benchmark on Gravitational Physics Discovery for Agents |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Great Models Think Alike and this Undermines AI Oversight |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Gridded Transformer Neural Processes for Spatio-Temporal Data |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Griffin: Towards a Graph-Centric Relational Database Foundation Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GrokFormer: Graph Fourier Kolmogorov-Arnold Transformers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Grokking Beyond the Euclidean Norm of Model Parameters |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Grokking at the Edge of Linear Separability |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| Grokking in the Wild: Data Augmentation for Real-World Multi-Hop Reasoning with Transformers |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Guarantees of a Preconditioned Subgradient Algorithm for Overparameterized Asymmetric Low-rank Matrix Recovery |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| GuardAgent: Safeguard LLM Agents via Knowledge-Enabled Reasoning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Guided Search Strategies in Non-Serializable Environments with Applications to Software Engineering Agents |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Guided Structural Inference: Leveraging Priors with Soft Gating Mechanisms |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Guided Zeroth-Order Methods for Stochastic Non-convex Problems with Decision-Dependent Distributions |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Gumiho: A Hybrid Architecture to Prioritize Early Tokens in Speculative Decoding |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| H-Tuning: Toward Low-Cost and Efficient ECG-based Cardiovascular Disease Detection with Pre-Trained Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| HALoS: Hierarchical Asynchronous Local SGD over Slow Networks for Geo-Distributed Large Language Model Training |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| HEAP: Hyper Extended A-PDHG Operator for Constrained High-dim PDEs |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| HGOT: Self-supervised Heterogeneous Graph Neural Network with Optimal Transport |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| HPS: Hard Preference Sampling for Human Preference Alignment |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| HYGMA: Hypergraph Coordination Networks with Dynamic Grouping for Multi-Agent Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Habitizing Diffusion Planning for Efficient and Effective Decision Making |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Handling Imbalanced Pseudolabels for Vision-Language Models with Concept Alignment and Confusion-Aware Calibrated Margin |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| HaploVL: A Single-Transformer Baseline for Multi-Modal Understanding |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Hardware and Software Platform Inference |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| HarmoniCa: Harmonizing Training and Inference for Better Feature Caching in Diffusion Transformer Acceleration |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Harmonizing Geometry and Uncertainty: Diffusion with Hyperspheres |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Harnessing Heterogeneous Statistical Strength for Personalized Federated Learning via Hierarchical Bayesian Inference |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| HashAttention: Semantic Sparsity for Faster Inference |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Haste Makes Waste: A Simple Approach for Scaling Graph Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Heads up! Large Language Models Can Perform Tasks Without Your Instruction via Selective Attention Head Masking |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Heavy-Tailed Linear Bandits: Huber Regression with One-Pass Update |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Hessian Geometry of Latent Space in Generative Models |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| HetSSNet: Spatial-Spectral Heterogeneous Graph Learning Network for Panchromatic and Multispectral Images Fusion |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Heterogeneous Data Game: Characterizing the Model Competition Across Multiple Data Sources |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Heterogeneous Label Shift: Theory and Algorithm |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Heterogeneous Sufficient Dimension Reduction and Subspace Clustering |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Heterogeneous Treatment Effect in Time-to-Event Outcomes: Harnessing Censored Data with Recursively Imputed Trees |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Hgformer: Hyperbolic Graph Transformer for Collaborative Filtering |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models |
❌ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
3 |
| Hi-Patch: Hierarchical Patch GNN for Irregular Multivariate Time Series |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| HiRemate: Hierarchical Approach for Efficient Re-materialization of Neural Networks |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| Hidden No More: Attacking and Defending Private Third-Party LLM Inference |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Hide & Seek: Transformer Symmetries Obscure Sharpness & Riemannian Geometry Finds It |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Hierarchical Equivariant Policy via Frame Transfer |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Hierarchical Graph Tokenization for Molecule-Language Alignment |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Hierarchical Masked Autoregressive Models with Low-Resolution Token Pivots |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Hierarchical Overlapping Clustering on Graphs: Cost Function, Algorithm and Scalability |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Hierarchical Planning for Complex Tasks with Knowledge Graph-RAG and Symbolic Verification |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Hierarchical Refinement: Optimal Transport to Infinity and Beyond |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Hierarchical Reinforcement Learning with Targeted Causal Interventions |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Hierarchical Reinforcement Learning with Uncertainty-Guided Diffusional Subgoals |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| High Dynamic Range Novel View Synthesis with Single Exposure |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| High Probability Bound for Cross-Learning Contextual Bandits with Unknown Context Distributions |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| High-Dimensional Prediction for Sequential Decision Making |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| High-Dimensional Tensor Regression With Oracle Properties |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| High-Fidelity Simultaneous Speech-To-Speech Translation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Highly Compressed Tokenizer Can Generate Without Training |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| History-Guided Video Diffusion |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Holistic Physics Solver: Learning PDEs in a Unified Spectral-Physical Space |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Homophily Enhanced Graph Domain Adaptation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| How Compositional Generalization and Creativity Improve as Diffusion Models are Trained |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| How Contaminated Is Your Benchmark? Measuring Dataset Leakage in Large Language Models with Kernel Divergence |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| How Distributed Collaboration Influences the Diffusion Model Training? A Theoretical Perspective |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| How Do Images Align and Complement LiDAR? Towards a Harmonized Multi-modal 3D Panoptic Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| How Do Large Language Monkeys Get Their Power (Laws)? |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| How Do Transformers Learn Variable Binding in Symbolic Programs? |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| How Effective Can Dropout Be in Multiple Instance Learning ? |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| How Expressive are Knowledge Graph Foundation Models? |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| How Far Is Video Generation from World Model: A Physical Law Perspective |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| How Much Can Transfer? BRIDGE: Bounded Multi-Domain Graph Foundation Model with Generalization Guarantees |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| How Much Can We Forget about Data Contamination? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| How Transformers Learn Regular Language Recognition: A Theoretical Study on Training Dynamics and Implicit Bias |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| How Transformers Learn Structured Data: Insights From Hierarchical Filtering |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| How does Labeling Error Impact Contrastive Learning? A Perspective from Data Dimensionality Reduction |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| How to Evaluate and Mitigate IP Infringement in Visual Generative AI? |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| How to Synthesize Text Data without Model Collapse? |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| How to Train Your Multi-Exit Model? Analyzing the Impact of Training Strategies |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| How to set AdamW’s weight decay as you scale model and dataset size |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Human Body Restoration with One-Step Diffusion Model and A New Benchmark |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Human Cognition-Inspired Hierarchical Fuzzy Learning Machine |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Human-Aligned Image Models Improve Visual Decoding from the Brain |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Hybrid Batch Normalisation: Resolving the Dilemma of Batch Normalisation in Federated Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Hybrid Quantum-Classical Multi-Agent Pathfinding |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Hybrid Spiking Vision Transformer for Object Detection with Event Cameras |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| HybridGS: High-Efficiency Gaussian Splatting Data Compression using Dual-Channel Sparse Representation and Point Cloud Encoder |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Hyper-Transforming Latent Diffusion Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Hyper: Hyperparameter Robust Efficient Exploration in Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| HyperIMTS: Hypergraph Neural Network for Irregular Multivariate Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| HyperIV: Real-time Implied Volatility Smoothing |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| HyperNear: Unnoticeable Node Injection Attacks on Hypergraph Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| HyperTree Planning: Enhancing LLM Reasoning via Hierarchical Thinking |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Hyperband-based Bayesian Optimization for Black-box Prompt Selection |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Hyperbolic-PDE GNN: Spectral Graph Neural Networks in the Perspective of A System of Hyperbolic Partial Differential Equations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Hyperspherical Normalization for Scalable Deep Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Hypo3D: Exploring Hypothetical Reasoning in 3D |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Hypothesis Testing for Generalized Thurstone Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| IBCircuit: Towards Holistic Circuit Discovery with Information Bottleneck |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| ICLShield: Exploring and Mitigating In-Context Learning Backdoor Attacks |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| IL-SOAR : Imitation Learning with Soft Optimistic Actor cRitic |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| IMPACT: Iterative Mask-based Parallel Decoding for Text-to-Audio Generation with Diffusion Modeling |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| IMTS is Worth Time $\times$ Channel Patches: Visual Masked Autoencoders for Irregular Multivariate Time Series Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| INRFlow: Flow Matching for INRs in Ambient Space |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| IRBridge: Solving Image Restoration Bridge with Pre-trained Generative Diffusion Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| IT$^3$: Idempotent Test-Time Training |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| ITBench: Evaluating AI Agents across Diverse Real-World IT Automation Tasks |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| ITFormer: Bridging Time Series and Natural Language for Multi-Modal QA with Large-Scale Multitask Dataset |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Identifiable Object Representations under Spatial Ambiguities |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Identification of Latent Confounders via Investigating the Tensor Ranks of the Nonlinear Observations |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Identifying Causal Direction via Variational Bayesian Compression |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Identifying Metric Structures of Deep Latent Variable Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Identifying Neural Dynamics Using Interventional State Space Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Identifying and Understanding Cross-Class Features in Adversarial Training |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Identifying biological perturbation targets through causal differential networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Idiosyncrasies in Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Imagine While Reasoning in Space: Multimodal Visualization-of-Thought |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
4 |
| Imitation Learning from a Single Temporally Misaligned Video |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Implicit Language Models are RNNs: Balancing Parallelization and Expressivity |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Implicit Regularization for Tubal Tensor Factorizations via Gradient Descent |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| Implicit Riemannian Optimism with Applications to Min-Max Problems |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Implicit Subgraph Neural Network |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Implicit degree bias in the link prediction task |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Importance Corrected Neural JKO Sampling |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Importance Sampling for Nonlinear Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Impossible Videos |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Improved Algorithm for Deep Active Learning under Imbalance via Optimal Separation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Improved Approximations for Hard Graph Problems using Predictions |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Improved Coresets for Vertical Federated Learning: Regularized Linear and Logistic Regressions |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Improved Discretization Complexity Analysis of Consistency Models: Variance Exploding Forward Process and Decay Discretization Scheme |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Improved Expressivity of Hypergraph Neural Networks through High-Dimensional Generalized Weisfeiler-Leman Algorithms |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Improved Last-Iterate Convergence of Shuffling Gradient Methods for Nonsmooth Convex Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Improved Learning via k-DTW: A Novel Dissimilarity Measure for Curves |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Improved Lower Bounds for First-order Stochastic Non-convex Optimization under Markov Sampling |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Improved Off-policy Reinforcement Learning in Biological Sequence Design |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Improved Online Confidence Bounds for Multinomial Logistic Bandits |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Improved Regret Analysis in Gaussian Process Bandits: Optimality for Noiseless Reward, RKHS norm, and Non-Stationary Variance |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Improved Sample Complexity for Private Nonsmooth Nonconvex Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Improved Theoretically-Grounded Evolutionary Algorithms for Subset Selection with a Linear Cost Constraint |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Improved and Oracle-Efficient Online $\ell_1$-Multicalibration |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Improving Compositional Generation with Diffusion Models Using Lift Scores |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Improving Consistency Models with Generator-Augmented Flows |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Improving Continual Learning Performance and Efficiency with Auxiliary Classifiers |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Improving Diversity in Language Models: When Temperature Fails, Change the Loss |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving Flow Matching by Aligning Flow Divergence |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving Generalization in Federated Learning with Highly Heterogeneous Data via Momentum-Based Stochastic Controlled Weight Averaging |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Improving Generalization with Flat Hilbert Bayesian Inference |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving LLM Safety Alignment with Dual-Objective Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving LLM Video Understanding with 16 Frames Per Second |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving LLMs for Recommendation with Out-Of-Vocabulary Tokens |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving Memory Efficiency for Training KANs via Meta Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Improving Model Alignment Through Collective Intelligence of Open-Source Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Improving Multi-Class Calibration through Normalization-Aware Isotonic Techniques |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Improving Multimodal Learning Balance and Sufficiency through Data Remixing |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving Out-of-Distribution Detection via Dynamic Covariance Calibration |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
5 |
| Improving Out-of-Distribution Detection with Markov Logic Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Improving Rationality in the Reasoning Process of Language Models through Self-playing Game |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Improving Reward Model Generalization from Adversarial Process Enhanced Preferences |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Improving Soft Unification with Knowledge Graph Embedding Methods |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving Transformer World Models for Data-Efficient RL |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Improving Value Estimation Critically Enhances Vanilla Policy Gradient |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Improving Your Model Ranking on Chatbot Arena by Vote Rigging |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Improving Zero-Shot Adversarial Robustness in Vision-Language Models by Closed-form Alignment of Adversarial Path Simplices |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Improving the Continuity of Goal-Achievement Ability via Policy Self-Regularization for Goal-Conditioned Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Improving the Diffusability of Autoencoders |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving the Effective Receptive Field of Message-Passing Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Improving the Scaling Laws of Synthetic Data with Deliberate Practice |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Improving the Statistical Efficiency of Cross-Conformal Prediction |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Improving the Variance of Differentially Private Randomized Experiments through Clustering |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| In-Context Adaptation to Concept Drift for Learned Database Operations |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| In-Context Deep Learning via Transformer Models |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| In-Context Denoising with One-Layer Transformers: Connections between Attention and Associative Memory Retrieval |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| In-Context Fine-Tuning for Time-Series Foundation Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| In-Context Learning and Occam’s Razor |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| In-Context Learning as Conditioned Associative Memory Retrieval |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| In-Context Linear Regression Demystified: Training Dynamics and Mechanistic Interpretability of Multi-Head Softmax Attention |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| In-Context Reinforcement Learning From Suboptimal Historical Data |
✅ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Incentivize without Bonus: Provably Efficient Model-based Online Multi-agent RL for Markov Games |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Incorporating Arbitrary Matrix Group Equivariance into KANs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Incremental Gradient Descent with Small Epoch Counts is Surprisingly Slow on Ill-Conditioned Problems |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Independence Tests for Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Inducing, Detecting and Characterising Neural Modules: A Pipeline for Functional Interpretability in Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Inductive Gradient Adjustment for Spectral Bias in Implicit Neural Representations |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Inductive Moment Matching |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| InfAlign: Inference-aware language model alignment |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Inference-Time Alignment of Diffusion Models with Direct Noise Optimization |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Info-Coevolution: An Efficient Framework for Data Model Coevolution |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| InfoCons: Identifying Interpretable Critical Concepts in Point Clouds via Information Theory |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| InfoSEM: A Deep Generative Model with Informative Priors for Gene Regulatory Network Inference |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Information Bottleneck-guided MLPs for Robust Spatial-temporal Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Instance Correlation Graph-based Naive Bayes |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Instance-Optimal Pure Exploration for Linear Bandits on Continuous Arms |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| Instruct2See: Learning to Remove Any Obstructions Across Distributions |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Instruction-Following Pruning for Large Language Models |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Integer Programming for Generalized Causal Bootstrap Designs |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Integrating Intermediate Layer Optimization and Projected Gradient Descent for Solving Inverse Problems with Diffusion Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Integration-free Kernels for Equivariant Gaussian Process Modelling |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Interaction-Aware Gaussian Weighting for Clustered Federated Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Interchangeable Token Embeddings for Extendable Vocabulary and Alpha-Equivalence |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Internal Causal Mechanisms Robustly Predict Language Model Out-of-Distribution Behaviors | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Interpolating Neural Network-Tensor Decomposition (INN-TD): a scalable and interpretable approach for large-scale physics-based problems | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Interpreting CLIP with Hierarchical Sparse Autoencoders | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpreting the Repeated Token Phenomenon in Large Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Intersectional Fairness in Reinforcement Learning with Large State and Constraint Spaces | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Introducing 3D Representation for Dense Volume-to-Volume Translation via Score Fusion | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Invariant Deep Uplift Modeling for Incentive Assignment in Online Marketing via Probability of Necessity and Sufficiency | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Inverse Bridge Matching Distillation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Inverse Flow and Consistency Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Inverse Optimization via Learning Feasible Regions | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Inverse Problem Sampling in Latent Space Using Sequential Monte Carlo | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Inverse problems with experiment-guided AlphaFold | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Investigating Non-Transitivity in LLM-as-a-Judge | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Investigating the Overlooked Hessian Structure: From CNNs to LLMs | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Is Best-of-N the Best of Them? Coverage, Scaling, and Optimality in Inference-Time Alignment | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Is Complex Query Answering Really Complex? | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Is Noise Conditioning Necessary for Denoising Generative Models? | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Isolated Causal Effects of Natural Language | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Iterative Vectors: In-Context Gradient Steering without Backpropagation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| It’s My Data Too: Private ML for Datasets with Multi-User Training Examples | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Jacobian Sparse Autoencoders: Sparsify Computations, Not Just Activations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Janus: Dual-Server Multi-Round Secure Aggregation with Verifiability for Federated Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Joint Learning of Energy-based Models and their Partition Function | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Joint Localization and Activation Editing for Low-Resource Fine-Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Joint Metric Space Embedding by Unbalanced Optimal Transport with Gromov–Wasserstein Marginal Penalization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Joker: Joint Optimization Framework for Lightweight Kernel Machines | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Just Enough Shifts: Mitigating Over-Refusal in Aligned Language Models with Targeted Representation Fine-Tuning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| K$^2$IE: Kernel Method-based Kernel Intensity Estimators for Inhomogeneous Poisson Processes | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| KABB: Knowledge-Aware Bayesian Bandits for Dynamic Expert Coordination in Multi-Agent Systems | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| KAN-AD: Time Series Anomaly Detection with Kolmogorov–Arnold Networks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| KBQA-o1: Agentic Knowledge Base Question Answering with Monte Carlo Tree Search | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| KEA: Keeping Exploration Alive by Proactively Coordinating Exploration Strategies | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| KGMark: A Diffusion Watermark for Knowledge Graphs | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| KIND: Knowledge Integration and Diversion for Training Decomposable Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| KV Shifting Attention Enhances Language Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| KVTuner: Sensitivity-Aware Layer-Wise Mixed-Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Kandinsky Conformal Prediction: Beyond Class- and Covariate-Conditional Coverage | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Kernel Quantile Embeddings and Associated Probability Metrics | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Kernel-based Unsupervised Embedding Alignment for Enhanced Visual Representation in Vision-language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| KernelBench: Can LLMs Write Efficient GPU Kernels? | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| KinDEL: DNA-Encoded Library Dataset for Kinase Inhibitors | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Kinetic Langevin Diffusion for Crystalline Materials Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Knowledge Retention in Continual Model-Based Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Knowledge Swapping via Learning and Unlearning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Knowledge-Guided Wasserstein Distributionally Robust Optimization | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| KoNODE: Koopman-Driven Neural Ordinary Differential Equations with Evolving Parameters for Time Series Analysis | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Kona: An Efficient Privacy-Preservation Framework for KNN Classification by Communication Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| KoopSTD: Reliable Similarity Analysis between Dynamical Systems via Approximating Koopman Spectrum with Timescale Decoupling | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| L-Diffusion: Laplace Diffusion for Efficient Pathology Image Segmentation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| L3A: Label-Augmented Analytic Adaptation for Multi-Label Class Incremental Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| LADA: Scalable Label-Specific CLIP Adapter for Continual Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LAION-C: An Out-of-Distribution Benchmark for Web-Scale Vision Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| LARM: Large Auto-Regressive Model for Long-Horizon Embodied Intelligence | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| LASER: Attention with Exponential Transformation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LAST SToP for Modeling Asynchronous Time Series | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LAuReL: Learned Augmented Residual Layer | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| LBI-FL: Low-Bit Integerized Federated Learning with Temporally Dynamic Bit-Width Allocation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LDMol: A Text-to-Molecule Diffusion Model with Structurally Informative Latent Space Surpasses AR Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LEAPS: A discrete neural sampler via locally equivariant networks | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| LEMoN: Label Error Detection using Multimodal Neighbors | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LETS Forecast: Learning Embedology for Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LEVIS: Large Exact Verifiable Input Spaces for Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LGDM: Latent Guidance in Diffusion Models for Perceptual Evaluations | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| LIMEFLDL: A Local Interpretable Model-Agnostic Explanations Approach for Label Distribution Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LIVS: A Pluralistic Alignment Dataset for Inclusive Public Spaces | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLM Alignment as Retriever Optimization: An Information Retrieval Perspective | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LLM Data Selection and Utilization via Dynamic Bi-level Optimization | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| LLM Enhancers for GNNs: An Analysis from the Perspective of Causal Mechanism Identification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLM-Assisted Semantically Diverse Teammate Generation for Efficient Multi-agent Coordination | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| LLM-Augmented Chemical Synthesis and Design Decision Programs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| LLM-SRBench: A New Benchmark for Scientific Equation Discovery with Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LLMScan: Causal Scan for LLM Misbehavior Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLMs Can Reason Faster Only If We Let Them | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LLMs can see and hear without any training | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| LLaVA-ReID: Selective Multi-image Questioner for Interactive Person Re-Identification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LMAct: A Benchmark for In-Context Imitation Learning with Long Multimodal Demonstrations | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| LOB-Bench: Benchmarking Generative AI for Finance - an Application to Limit Order Book Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LOCATE 3D: Real-World Object Localization via Self-Supervised Learning in 3D | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LOGO — Long cOntext aliGnment via efficient preference Optimization | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| LRA-QViT: Integrating Low-Rank Approximation and Quantization for Robust and Efficient Vision Transformers | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LSCD: Lomb–Scargle Conditioned Diffusion for Time series Imputation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LV-XAttn: Distributed Cross-Attention for Long Visual Inputs in Multimodal Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| La RoSA: Enhancing LLM Efficiency via Layerwise Rotated Sparse Activation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| LaMAGIC2: Advanced Circuit Formulations for Language Model-Based Analog Topology Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| LaRA: Benchmarking Retrieval-Augmented Generation and Long-Context LLMs – No Silver Bullet for LC or RAG Routing | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Label Distribution Propagation-based Label Completion for Crowdsourcing | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Ladder-Residual: Parallelism-Aware Architecture for Accelerating Large Model Inference with Communication Overlapping | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| LangDAug: Langevin Data Augmentation for Multi-Source Domain Generalization in Medical Image Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LangTime: A Language-Guided Unified Model for Time Series Forecasting with Proximal Policy Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Language Models May Verbatim Complete Text They Were Not Explicitly Trained On | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Language Models as Implicit Tree Search | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Language Models over Canonical Byte-Pair Encodings | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| LapSum - One Method to Differentiate Them All: Ranking, Sorting and Top-k Selection | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Laplace Transform Based Low-Complexity Learning of Continuous Markov Semigroups | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Large Continual Instruction Assistant | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Large Displacement Motion Transfer with Unsupervised Anytime Interpolation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Large Language Model-driven Large Neighborhood Search for Large-Scale MILP Problems | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Large Language Models are Demonstration Pre-Selectors for Themselves | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Large Language Models to Diffusion Finetuning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Large Language-Geometry Model: When LLM meets Equivariance | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Larger or Smaller Reward Margins to Select Preferences for LLM Alignment? | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Latent Action Learning Requires Supervision in the Presence of Distractors | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Latent Diffusion Planning for Imitation Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Latent Imputation before Prediction: A New Computational Paradigm for De Novo Peptide Sequencing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Latent Mamba Operator for Partial Differential Equations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Latent Preference Coding: Aligning Large Language Models via Discrete Latent Codes | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Latent Score-Based Reweighting for Robust Classification on Imbalanced Tabular Data | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Latent Thought Models with Variational Bayes Inference-Time Computation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Latent Variable Causal Discovery under Selection Bias | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Latent Variable Estimation in Bayesian Black-Litterman Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Layer by Layer: Uncovering Hidden Representations in Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Layer-wise Quantization for Quantized Optimistic Dual Averaging | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Lean and Mean Adaptive Optimization via Subset-Norm and Subspace-Momentum with Convergence Guarantees | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learn Beneficial Noise as Graph Augmentation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learn Singularly Perturbed Solutions via Homotopy Dynamics | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Learn from Downstream and Be Yourself in Multimodal Large Language Models Fine-Tuning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learn to Vaccinate: Combining Structure Learning and Effective Vaccination for Epidemic and Outbreak Control | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Learnable Spatial-Temporal Positional Encoding for Link Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learngene Tells You How to Customize: Task-Aware Parameter Initialization at Flexible Scales | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Adaptive Lighting via Channel-Aware Guidance | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Learning Adversarial MDPs with Stochastic Hard Constraints | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning Along the Arrow of Time: Hyperbolic Geometry for Backward-Compatible Representation Learning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning Attribute-Aware Hash Codes for Fine-Grained Image Retrieval via Query Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Bayesian Nash Equilibrium in Auction Games via Approximate Best Response | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | 4 |
| Learning Cascade Ranking as One Network | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Learning Changes in Graphon Attachment Network Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning Classifiers That Induce Markets | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning Compact Semantic Information for Incomplete Multi-View Missing Multi-Label Classification | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learning Condensed Graph via Differentiable Atom Mapping for Reaction Yield Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Configurations for Data-Driven Multi-Objective Optimization | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning Curves of Stochastic Gradient Descent in Kernel Regression | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Learning Distances from Data with Normalizing Flows and Score Matching | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning Distribution-wise Control in Representation Space for Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Learning Dynamics in Continual Pre-Training for Large Language Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Learning Dynamics under Environmental Constraints via Measurement-Induced Bundle Structures | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Learning Efficient Robotic Garment Manipulation with Standardization | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Learning Event Completeness for Weakly Supervised Video Anomaly Detection | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning Extrapolative Sequence Transformations from Markov Chains | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Fused State Representations for Control from Multi-View Observations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning Gaussian DAG Models without Condition Number Bounds | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Imbalanced Data with Beneficial Label Noise | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Imperfect Information Extensive-form Games with Last-iterate Convergence under Bandit Feedback | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning In-context $n$-grams with Transformers: Sub-$n$-grams Are Near-Stationary Points | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning Initial Basis Selection for Linear Programming via Duality-Inspired Tripartite Graph Representation and Comprehensive Supervision | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Input Encodings for Kernel-Optimal Implicit Neural Representations | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning Invariant Causal Mechanism from Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Joint Interventional Effects from Single-Variable Interventions in Additive Models | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning Latent Graph Structures and their Uncertainty | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Likelihood-Free Reference Priors | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning Mean Field Control on Sparse Graphs | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Learning Minimum-Size BDDs: Towards Efficient Exact Algorithms | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Learning Mixtures of Experts with EM: A Mirror Descent Perspective | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning Monotonic Probabilities with a Generative Cost Model | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learning Multi-Level Features with Matryoshka Sparse Autoencoders | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning Optimal Multimodal Information Bottleneck Representations | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learning Parametric Distributions from Samples and Preferences | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learning Policy Committees for Effective Personalization in MDPs with Diverse Tasks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Progress Driven Multi-Agent Curriculum | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning Representations of Instruments for Partial Identification of Treatment Effects | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Robust Neural Processes with Risk-Averse Stochastic Optimization | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Learning Safe Control via On-the-Fly Bandit Exploration | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Learning Safe Strategies for Value Maximizing Buyers in Uniform Price Auctions | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning Safety Constraints for Large Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning Single Index Models with Diffusion Priors | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Smooth and Expressive Interatomic Potentials for Physical Property Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Learning Soft Sparse Shapes for Efficient Time-Series Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Learning State-Based Node Representations from a Class Hierarchy for Fine-Grained Open-Set Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning Strategic Language Agents in the Werewolf Game with Iterative Latent Space Policy Optimization | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Learning Survival Distributions with the Asymmetric Laplace Distribution | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Time-Aware Causal Representation for Model Generalization in Evolving Domains | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning Time-Varying Multi-Region Brain Communications via Scalable Markovian Gaussian Processes | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learning Utilities from Demonstrations in Markov Decision Processes | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learning Vision and Language Concepts for Controllable Image Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learning With Multi-Group Guarantees For Clusterable Subpopulations | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning curves theory for hierarchically compositional data with power-law distributed features | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning dynamics in linear recurrent neural networks | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Learning from Loss Landscape: Generalizable Mixed-Precision Quantization via Adaptive Sharpness-Aware Gradient Aligning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning from Sample Stability for Deep Clustering | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning from Suboptimal Data in Continuous Control via Auto-Regressive Soft Q-Network | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning from True-False Labels via Multi-modal Prompt Retrieving | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning from others’ mistakes: Finetuning machine translation models with span-level error annotations | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning multivariate Gaussians with imperfect advice | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Learning the Electronic Hamiltonian of Large Atomic Structures | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning the RoPEs: Better 2D and 3D Position Encodings with STRING | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning to (Learn at Test Time): RNNs with Expressive Hidden States | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Learning to Generate Projections for Reducing Dimensionality of Heterogeneous Linear Programming Problems | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Learning to Incentivize in Repeated Principal-Agent Problems with Adversarial Agent Arrivals | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learning to Match Unpaired Data with Minimum Entropy Coupling | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning to Plan & Reason for Evaluation with Thinking-LLM-as-a-Judge | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learning to Quantize for Training Vector-Quantized Networks | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning to Reuse Policies in State Evolvable Environments | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Learning to Route LLMs with Confidence Tokens | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Learning to Steer Learners in Games | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Learning to Stop: Deep Learning for Mean Field Optimal Stopping | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Learning to Trust Bellman Updates: Selective State-Adaptive Regularization for Offline RL | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Learning with Exact Invariances in Polynomial Time | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Learning with Expected Signatures: Theory and Applications | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning with Selectively Labeled Data from Multiple Decision-makers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Learning without Isolation: Pathway Protection for Continual Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning-Augmented Hierarchical Clustering | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Learning-Order Autoregressive Models with Application to Molecular Graph Generation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learnings from Scaling Visual Tokenizers for Reconstruction and Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Learnware Specification via Dual Alignment | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Lego Sketch: A Scalable Memory-augmented Neural Network for Sketching Data Streams | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LensLLM: Unveiling Fine-Tuning Dynamics for LLM Selection | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Less is More: Federated Graph Learning with Alleviating Topology Heterogeneity from A Causal Perspective | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Let LLM Tell What to Prune and How Much to Prune | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Leveraging Diffusion Model as Pseudo-Anomalous Graph Generator for Graph-Level Anomaly Detection | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Leveraging Model Guidance to Extract Training Data from Personalized Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Leveraging Offline Data in Linear Latent Contextual Bandits | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Leveraging Online Olympiad-Level Math Problems for LLMs Training and Contamination-Resistant Evaluation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Leveraging Per-Instance Privacy for Machine Unlearning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Leveraging Predictive Equivalence in Decision Trees | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Leveraging Randomness in Model and Data Partitioning for Privacy Amplification | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Leveraging Sparsity for Sample-Efficient Preference Learning: A Theoretical Perspective | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LieRE: Lie Rotational Positional Encodings | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Liger: Linearizing Large Language Models to Gated Recurrent Structures | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| LightGTS: A Lightweight General Time Series Forecasting Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LightningDrag: Lightning Fast and Accurate Drag-based Image Editing Emerging from Videos | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Lightspeed Geometric Dataset Distance via Sliced Optimal Transport | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Lightweight Dataset Pruning without Full Training via Example Difficulty and Prediction Uncertainty | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Lightweight Online Adaption for Time Series Foundation Model Forecasts | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Lightweight Protocols for Distributed Private Quantile Estimation | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Lightweight-Mark: Rethinking Deep Learning-Based Watermarking | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Limitations of measure-first protocols in quantum machine learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| LineFlow: A Framework to Learn Active Control of Production Lines | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Linear $Q$-Learning Does Not Diverge in $L^2$: Convergence Rates to a Bounded Set | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Linear Bandits with Partially Observable Features | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Linear Contextual Bandits With Interference | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Linear Mode Connectivity between Multiple Models modulo Permutation Symmetries | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Linear Transformers as VAR Models: Aligning Autoregressive Attention Mechanisms with Autoregressive Forecasting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Linear convergence of Sinkhorn’s algorithm for generalized static Schrödinger bridge | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Linearization Turns Neural Operators into Function-Valued Gaussian Processes | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| LipsNet++: Unifying Filter and Controller into a Policy Network | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LlavaGuard: An Open VLM-based Framework for Safeguarding Vision Datasets and Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LoRA Training Provably Converges to a Low-Rank Global Minimum Or It Fails Loudly (But it Probably Won’t Fail) | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| LoRA-Gen: Specializing Large Language Model via Online LoRA Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Local Identifying Causal Relations in the Presence of Latent Variables | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Local Manifold Approximation and Projection for Manifold-Aware Diffusion Planning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Local Pan-privacy for Federated Analytics | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Locality Preserving Markovian Transition for Instance Retrieval | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Locate-then-edit for Multi-hop Factual Recall under Knowledge Editing | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Log-Sum-Exponential Estimator for Off-Policy Evaluation and Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Logarithmic Regret for Online KL-Regularized Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Logits are All We Need to Adapt Closed Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Long-Form Speech Generation with Spoken Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Long-Short Alignment for Effective Long-Context Modeling in LLMs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LongRoPE2: Near-Lossless LLM Context Window Scaling | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Looking Beyond the Top-1: Transformers Determine Top Tokens in Order | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Loss Functions and Operators Generated by f-Divergences | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LotteryCodec: Searching the Implicit Representation in a Random Network for Low-Complexity Image Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Low-Dimension-to-High-Dimension Generalization and Its Implications for Length Generalization | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Low-Rank Adapting Models for Sparse Autoencoders | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Low-Rank Tensor Transitions (LoRT) for Transferable Tensor Regression | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Low-Rank Thinning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Low-distortion and GPU-compatible Tree Embeddings in Hyperbolic Space | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| M$^3$HF: Multi-agent Reinforcement Learning from Multi-phase Human Feedback of Mixed Quality | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| M+: Extending MemoryLLM with Scalable Long-Term Memory | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| M2PDE: Compositional Generative Multiphysics and Multi-component PDE Simulation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| M3-JEPA: Multimodal Alignment via Multi-gate MoE based on the Joint-Embedding Predictive Architecture | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MA-LoT: Model-Collaboration Lean-based Long Chain-of-Thought Reasoning enhances Formal Theorem Proving | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MAGELLAN: Metacognitive predictions of learning progress guide autotelic LLM agents in large goal spaces | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MAPLE: Many-Shot Adaptive Pseudo-Labeling for In-Context Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MARGE: Improving Math Reasoning with Guided Exploration | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MARS: Unleashing the Power of Variance Reduction for Training Large Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MASS: Mathematical Data Selection via Skill Graphs for Pretraining Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MATH-Perturb: Benchmarking LLMs’ Math Reasoning Abilities against Hard Perturbations | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| MATS: An Audio Language Model under Text-only Supervision | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MCU: An Evaluation Framework for Open-Ended Game Agents | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| MDDM: Practical Message-Driven Generative Image Steganography Based on Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MELON: Provable Defense Against Indirect Prompt Injection Attacks in AI Agents | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| MENTOR: Mixture-of-Experts Network with Task-Oriented Perturbation for Visual Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MERGE$^3$: Efficient Evolutionary Merging on Consumer-grade GPUs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MERIT: Maximum-normalized Element-wise Ratio for Language Model Large-batch Training | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MF-LAL: Drug Compound Generation Using Multi-Fidelity Latent Space Active Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MGD$^3$: Mode-Guided Dataset Distillation using Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MIB: A Mechanistic Interpretability Benchmark | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MIPT: Multilevel Informed Prompt Tuning for Robust Molecular Property Prediction | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| MIRROR: Make Your Object-Level Multi-View Generation More Consistent with Training-Free Rectification | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MITIGATING OVER-EXPLORATION IN LATENT SPACE OPTIMIZATION USING LES | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ML$^2$-GCL: Manifold Learning Inspired Lightweight Graph Contrastive Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| MM-RLHF: The Next Step Forward in Multimodal LLM Alignment | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| MMInference: Accelerating Pre-filling for Long-Context Visual Language Models via Modality-Aware Permutation Sparse Attention | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| MODA: MOdular Duplex Attention for Multimodal Perception, Cognition, and Emotion Understanding | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MODULI: Unlocking Preference Generalization via Diffusion Models for Offline Multi-Objective Reinforcement Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MOGIC: Metadata-infused Oracle Guidance for Improved Extreme Classification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MONA: Myopic Optimization with Non-myopic Approval Can Mitigate Multi-step Reward Hacking | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MP-Nav: Enhancing Data Poisoning Attacks against Multimodal Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MPO: An Efficient Post-Processing Framework for Mixing Diverse Preference Alignment | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MTL-UE: Learning to Learn Nothing for Multi-Task Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| MTSTRec: Multimodal Time-Aligned Shared Token Recommender | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| MVA: Linear Attention with High-order Query-Keys Integration and Multi-level Vocabulary Decomposition | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Machine Learning meets Algebraic Combinatorics: A Suite of Datasets Capturing Research-level Conjecturing Ability in Pure Mathematics | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Machines and Mathematical Mutations: Using GNNs to Characterize Quiver Mutation Classes | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | 4 |
| Mahalanobis++: Improving OOD Detection via Feature Normalization | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Maintaining Proportional Committees with Dynamic Candidate Sets | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Making Hard Problems Easier with Custom Data Distributions and Loss Regularization: A Case Study in Modular Arithmetic | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MaskTwins: Dual-form Complementary Masking for Domain-Adaptive Image Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Masked Autoencoders Are Effective Tokenizers for Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Masked Generative Nested Transformers with Decode Time Scaling | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Massive Values in Self-Attention Modules are the Key to Contextual Knowledge Understanding | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Mastering Board Games by External and Internal Planning with Language Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Mastering Massive Multi-Task Reinforcement Learning via Mixture-of-Expert Decision Transformer | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Mastering Multiple-Expert Routing: Realizable $H$-Consistency and Strong Guarantees for Learning to Defer | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| MathConstruct: Challenging LLM Reasoning with Constructive Proofs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Matrix Completion with Incomplete Side Information via Orthogonal Complement Projection | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Matryoshka Quantization | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Maximal Update Parametrization and Zero-Shot Hyperparameter Transfer for Fourier Neural Operators | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Maximizing Intermediate Checkpoint Value in LLM Pretraining with Bayesian Optimization | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Maximum Coverage in Turnstile Streams with Applications to Fingerprinting Measures | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Maximum Entropy Reinforcement Learning with Diffusion Policy | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Maximum Total Correlation Reinforcement Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Measuring Diversity in Synthetic Datasets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Measuring Diversity: Axioms and Challenges | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Measuring In-Context Computation Complexity via Hidden State Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Measuring Representational Shifts in Continual Learning: A Linear Transformation Perspective | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Measuring Variable Importance in Heterogeneous Treatment Effects with Confidence | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Mechanisms of Projective Composition of Diffusion Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Mechanistic PDE Networks for Discovery of Governing Equations | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Mechanistic Unlearning: Robust Knowledge Unlearning and Editing via Mechanistic Localization | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| MedRAX: Medical Reasoning Agent for Chest X-ray | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MemFreezing: A Novel Adversarial Attack on Temporal Graph Neural Networks under Limited Future Knowledge | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Memorization Sinks: Isolating Memorization during LLM Training | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Memory Layers at Scale | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Merge-Friendly Post-Training Quantization for Multi-Target Domain Adaptation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Meta Optimality for Demographic Parity Constrained Regression via Post-Processing | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Meta-Black-Box-Optimization through Offline Q-function Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Meta-Reinforcement Learning with Adaptation from Human Feedback via Preference-Order-Preserving Task Embedding | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MetaAgent: Automatically Constructing Multi-Agent Systems Based on Finite State Machines | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| MetaOptimize: A Framework for Optimizing Step Sizes and Other Meta-parameters | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Metadata Conditioning Accelerates Language Model Pre-training | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Metastable Dynamics of Chain-of-Thought Reasoning: Provable Benefits of Search, RL and Distillation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| MetricEmbedding: Accelerate Metric Nearness by Tropical Inner Product | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Mind the Gap: A Practical Attack on GGUF Quantization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Mind the Gap: a Spectral Analysis of Rank Collapse and Signal Propagation in Attention Layers | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| MindAligner: Explicit Brain Functional Alignment for Cross-Subject Visual Decoding from Limited fMRI Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MindCustomer: Multi-Context Image Generation Blended with Brain Signal | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| MindLLM: A Subject-Agnostic and Versatile Model for fMRI-to-text Decoding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Minerva: A Programmable Memory Test Benchmark for Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Minimalist Concept Erasure in Generative Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Minimax Optimal Regret Bound for Reinforcement Learning with Trajectory Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Minimum Width for Universal Approximation using Squashable Activation Functions | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| MiraGe: Editable 2D Images using Gaussian Splatting | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias? | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| MissScore: High-Order Score Estimation in the Presence of Missing Data | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mitigating Heterogeneous Token Overfitting in LLM Knowledge Editing | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Mitigating Local Cohesion and Global Sparseness in Graph Contrastive Learning with Fuzzy Boundaries | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Mitigating Object Hallucination in Large Vision-Language Models via Image-Grounded Guidance | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Mitigating Over-Squashing in Graph Neural Networks by Spectrum-Preserving Sparsification | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Mitigating Plasticity Loss in Continual Reinforcement Learning by Reducing Churn | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MixBridge: Heterogeneous Image-to-Image Backdoor Attack through Mixture of Schrödinger Bridges | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MixMin: Finding Data Mixtures via Convex Minimization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mixed-curvature decision trees and random forests | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Mixture of Experts Made Intrinsically Interpretable | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Mixture of Experts Provably Detect and Learn the Latent Cluster Structure in Gradient-Based Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Mixture of Hidden-Dimensions: Not All Hidden-States’ Dimensions are Needed in Transformer | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Mixture of Lookup Experts | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MoE-SVD: Structured Mixture-of-Experts LLMs Compression via Singular Value Decomposition | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MoEQuant: Enhancing Quantization for Mixture-of-Experts Large Language Models via Expert-Balanced Sampling and Affinity Guidance | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| MoH: Multi-Head Attention as Mixture-of-Head Attention | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| MoMa: Modulating Mamba for Adapting Image Foundation Models to Video Recognition | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| MoRAgent: Parameter Efficient Agent Tuning with Mixture-of-Roles | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Modalities Contribute Unequally: Enhancing Medical Multi-modal Learning through Adaptive Modality Token Re-balancing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Model Immunization from a Condition Number Perspective | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Model Steering: Learning with a Reference Model Improves Generalization Bounds and Scaling Laws | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Model Uncertainty Quantification by Conformal Prediction in Continual Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Model-Based Exploration in Monitored Markov Decision Processes | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Modeling All-Atom Glycan Structures via Hierarchical Message Passing and Multi-Scale Pre-training | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Modeling Multi-Task Model Merging as Adaptive Projective Gradient Descent | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Models of Heavy-Tailed Mechanistic Universality | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Modified K-means Algorithm with Local Optimality Guarantees | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Modular Duality in Deep Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Modularized Self-Reflected Video Reasoner for Multimodal LLM with Application to Video Question Answering | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Modulated Diffusion: Accelerating Generative Modeling with Modulated Quantization | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Momentum-Driven Adaptivity: Towards Tuning-Free Asynchronous Federated Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Monte Carlo Tree Diffusion for System 2 Planning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Monte Carlo Tree Search for Comprehensive Exploration in LLM-Based Automatic Heuristic Design | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Monte-Carlo Tree Search with Uncertainty Propagation via Optimal Transport | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| More Than Meets the Eye: Enhancing Multi-Object Tracking Even with Prolonged Occlusions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Morse: Dual-Sampling for Lossless Acceleration of Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MuLan: Adapting Multilingual Diffusion Models for Hundreds of Languages with Negligible Cost | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Multi-Armed Bandits with Interference: Bridging Causal Inference and Adversarial Bandits | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Multi-Marginal Stochastic Flow Matching for High-Dimensional Snapshot Data at Irregular Time Points | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Multi-Modal Object Re-identification via Sparse Mixture-of-Experts | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Multi-Objective Causal Bayesian Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Multi-Session Budget Optimization for Forward Auction-based Federated Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Multi-Stage Manipulation with Demonstration-Augmented Reward, Policy, and World Model Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Multi-Timescale Dynamics Model Bayesian Optimization for Plasma Stabilization in Tokamaks | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| Multi-Turn Code Generation Through Single-Step Rewards | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Multi-View Graph Clustering via Node-Guided Contrastive Encoding | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Multi-agent Architecture Search via Agentic Supernet | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Multi-band Frequency Reconstruction for Neural Psychoacoustic Coding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multi-objective Linear Reinforcement Learning with Lexicographic Rewards | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Multiaccuracy and Multicalibration via Proxy Groups | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| Multidimensional Adaptive Coefficient for Inference Trajectory Optimization in Flow and Diffusion | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multilayer Matrix Factorization via Dimension-Reducing Diffusion Variational Inference | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Multimodal Medical Code Tokenizer | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Multinoulli Extension: A Lossless Yet Effective Probabilistic Framework for Subset Selection over Partition Constraints | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Multiobjective distribution matching | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Multiple-policy Evaluation via Density Estimation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Multivariate Conformal Selection | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| MuseControlLite: Multifunctional Music Generation with Lightweight Conditioners | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mutual Learning for SAM Adaptation: A Dual Collaborative Network Framework for Source-Free Domain Transfer | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| N2GON: Neural Networks for Graph-of-Net with Position Awareness | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| NBDI: A Simple and Effective Termination Condition for Skill Extraction from Task-Agnostic Demonstrations | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| NEAR: Neural Electromagnetic Array Response | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| NETS: A Non-equilibrium Transport Sampler | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| NExtLong: Toward Effective Long-Context Training without Long Documents | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| NICE Data Selection for Instruction Tuning in LLMs with Non-differentiable Evaluation Metric | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| NMA-tune: Generating Highly Designable and Dynamics Aware Protein Backbones | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| NTK-DFL: Enhancing Decentralized Federated Learning in Heterogeneous Settings via Neural Tangent Kernel | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| NTPP: Generative Speech Language Modeling for Dual-Channel Spoken Dialogue via Next-Token-Pair Prediction | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Natural Perturbations for Black-box Training of Neural Networks by Zeroth-Order Optimization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Navigating Conflicting Views: Harnessing Trust for Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Navigating Semantic Drift in Task-Agnostic Class-Incremental Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Navigating the Social Welfare Frontier: Portfolios for Multi-objective Reinforcement Learning | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Near Optimal Best Arm Identification for Clustered Bandits | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Near Optimal Non-asymptotic Sample Complexity of 1-Identification | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Near-Optimal Consistency-Robustness Trade-Offs for Learning-Augmented Online Knapsack Problems | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Near-Optimal Decision Trees in a SPLIT Second | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Near-Optimal Sample Complexity for MDPs via Anchoring | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Near-optimal Regret Using Policy Optimization in Online MDPs with Aggregate Bandit Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Near-optimal Sketchy Natural Gradients for Physics-Informed Neural Networks | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Nearly Optimal Sample Complexity for Learning with Label Proportions | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| NegMerge: Sign-Consensual Weight Merging for Machine Unlearning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Neighbour-Driven Gaussian Process Variational Autoencoders for Scalable Structured Latent Modelling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Nemotron-CORTEXA: Enhancing LLM Agents for Software Engineering Tasks via Improved Localization and Solution Diversity | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| NestQuant: nested lattice quantization for matrix products and LLMs | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Nested Expectations with Kernel Quadrature | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Nesterov Method for Asynchronous Pipeline Parallel Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Neural Collapse Beyond the Unconstrained Features Model: Landscape, Dynamics, and Generalization in the Mean-Field Regime | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Neural Discovery in Mathematics: Do Machines Dream of Colored Planes? | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Neural Encoding and Decoding at Scale | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Neural Event-Triggered Control with Optimal Scheduling | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Neural Genetic Search in Discrete Spaces | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Neural Graph Matching Improves Retrieval Augmented Generation in Molecular Machine Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Neural Guided Diffusion Bridges | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Neural Interpretable PDEs: Harmonizing Fourier Insights with Attention for Scalable and Interpretable Physics Discovery | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Neural Representational Consistency Emerges from Probabilistic Neural-Behavioral Representation Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Neural Solver Selection for Combinatorial Optimization | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| NeuralCohort: Cohort-aware Neural Representation Learning for Healthcare Analytics | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| NeuroTree: Hierarchical Functional Brain Pathway Decoding for Mental Health Disorders | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| NeuronTune: Towards Self-Guided Spurious Bias Mitigation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Neurosymbolic World Models for Sequential Decision Making | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Neutral residues: revisiting adapters for model extension | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| New Bounds for Sparse Variational Gaussian Processes | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| NextCoder: Robust Adaptation of Code LMs to Diverse Code Edits | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| No Free Lunch from Random Feature Ensembles: Scaling Laws and Near-Optimality Conditions | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| No Metric to Rule Them All: Toward Principled Evaluations of Graph-Learning Datasets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| No Soundness in the Real World: On the Challenges of the Verification of Deployed Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| No-Regret is not enough! Bandits with General Constraints through Adaptive Regret Minimization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| NoLiMa: Long-Context Evaluation Beyond Literal Matching | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Noise Conditional Variational Score Distillation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Noise-Guided Predicate Representation Extraction and Diffusion-Enhanced Discretization for Scene Graph Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Noisy SIGNSGD Is More Differentially Private Than You (Might) Think | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Non-Asymptotic Length Generalization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Non-Asymptotic and Non-Lipschitzian Bounds on Optimal Values in Stochastic Optimization Under Heavy Tails | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Non-Stationary Predictions May Be More Informative: Exploring Pseudo-Labels with a Two-Phase Pattern of Training Dynamics | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Non-asymptotic Error Bounds in $\mathcal{W}_2$-Distance with Sqrt(d) Dimension Dependence and First Order Convergence for Langevin Monte Carlo beyond Log-Concavity | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Non-stationary Diffusion For Probabilistic Time Series Forecasting | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Non-stationary Online Learning for Curved Losses: Improved Dynamic Regret via Mixability | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Nonconvex Theory of $M$-estimators with Decomposable Regularizers | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Nonlinear transformers can perform inference-time feature learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Nonlinearly Preconditioned Gradient Methods under Generalized Smoothness | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Nonparametric Identification of Latent Concepts | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Nonparametric Modern Hopfield Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Nonparametric Teaching for Graph Property Learners | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Normalizing Flows are Capable Generative Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Not All Tokens Matter All The Time: Dynamic Token Aggregation Towards Efficient Detection Transformers | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Not All Wrong is Bad: Using Adversarial Examples for Unlearning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Not all solutions are created equal: An analytical dissociation of functional and representational similarity in deep linear neural networks | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Novelty Detection in Reinforcement Learning with World Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| O-MAPL: Offline Multi-agent Preference Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| OOD-Chameleon: Is Algorithm Selection for OOD Generalization Learnable? | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| OR-Bench: An Over-Refusal Benchmark for Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| OV-MER: Towards Open-Vocabulary Multimodal Emotion Recognition | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| OW-VAP: Visual Attribute Parsing for Open World Object Detection | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| OWLS: Scaling Laws for Multilingual Speech Recognition and Translation Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Objective drives the consistency of representational similarity across datasets | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Observation Interference in Partially Observable Assistance Games | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Occult: Optimizing Collaborative Communications across Experts for Accelerated Parallel MoE Training and Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Of Mice and Machines: A Comparison of Learning Between Real World Mice and RL Agents | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Off-Policy Actor-Critic for Adversarial Observation Robustness: Virtual Alternative Training via Symmetric Policy Evaluation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Off-Policy Evaluation under Nonignorable Missing Data | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Offline Learning for Combinatorial Multi-armed Bandits | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Offline Model-based Optimization for Real-World Molecular Discovery | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Offline Opponent Modeling with Truncated Q-driven Instant Policy Refinement | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Offline-to-Online Reinforcement Learning with Classifier-Free Diffusion Generation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Olica: Efficient Structured Pruning of Large Language Models without Retraining | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| OmiAD: One-Step Adaptive Masked Diffusion Model for Multi-class Anomaly Detection via Adversarial Distillation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Omni-Angle Assault: An Invisible and Powerful Physical Adversarial Attack on Face Recognition | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| OmniArch: Building Foundation Model for Scientific Computing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| OmniAudio: Generating Spatial Audio from 360-Degree Video | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On Differential Privacy for Adaptively Solving Search Problems via Sketching | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On Efficient Estimation of Distributional Treatment Effects under Covariate-Adaptive Randomization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 5 |
| On Exact Bit-level Reversible Transformers Without Changing Architecture | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On Explaining Equivariant Graph Networks via Improved Relevance Propagation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On Expressive Power of Looped Transformers: Theoretical Analysis and Enhancement via Timestep Encoding | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| On Fine-Grained Distinct Element Estimation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| On Learning Parallel Pancakes with Mostly Uniform Weights | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On Linear Convergence in Smooth Convex-Concave Bilinearly-Coupled Saddle-Point Optimization: Lower Bounds and Optimal Algorithms | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On Measuring Long-Range Interactions in Graph Neural Networks | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| On Mitigating Affinity Bias through Bandits with Evolving Biased Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | 3 |
| On Path to Multimodal Generalist: General-Level and General-Bench | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| On Teacher Hacking in Language Model Distillation | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| On Temperature Scaling and Conformal Prediction of Deep Classifiers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On The Concurrence of Layer-wise Preconditioning Methods and Provable Feature Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| On Understanding Attention-Based In-Context Learning for Categorical Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On Volume Minimization in Conformal Regression | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| On Zero-Initialized Attention: Optimal Prompt and Gating Factor Estimation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On the Adversarial Robustness of Multi-Kernel Clustering | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| On the Alignment between Fairness and Accuracy: from the Perspective of Adversarial Robustness | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| On the Benefits of Active Data Collection in Operator Learning | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| On the Clean Generalization and Robust Overfitting in Adversarial Training from Two Theoretical Views: Representation Complexity and Training Dynamics | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| On the Convergence of Continuous Single-timescale Actor-critic | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| On the Diversity of Adversarial Ensemble Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| On the Duality between Gradient Transformations and Adapters | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| On the Dynamic Regret of Following the Regularized Leader: Optimism with History Pruning | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| On the Emergence of Position Bias in Transformers | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| On the Generalization Ability of Next-Token-Prediction Pretraining | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On the Guidance of Flow Matching | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| On the Impact of Performative Risk Minimization for Binary Random Variables | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| On the Importance of Embedding Norms in Self-Supervised Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| On the Importance of Gaussianizing Representations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| On the Interplay between Graph Structure and Learning Algorithms in Graph Neural Networks | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | 2 |
| On the Learnability of Distribution Classes with Adaptive Adversaries | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| On the Local Complexity of Linear Regions in Deep ReLU Networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| On the Out-of-Distribution Generalization of Self-Supervised Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| On the Power of Context-Enhanced Learning in LLMs | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| On the Power of Learning-Augmented Search Trees | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| On the Private Estimation of Smooth Transport Maps | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| On the Provable Separation of Scales in Maximal Update Parameterization | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| On the Query Complexity of Verifier-Assisted Language Generation | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 4 |
| On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| On the Robustness of Reward Models for Language Model Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| On the Role of Label Noise in the Feature Learning Process |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| On the Similarities of Embeddings in Contrastive Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On the Statistical Mechanisms of Distributional Compositional Generalization |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| On the Tension between Byzantine Robustness and No-Attack Accuracy in Distributed Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| On the Training Convergence of Transformers for In-Context Classification of Gaussian Mixtures |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| On the Vulnerability of Applying Retrieval-Augmented Generation within Knowledge-Intensive Application Domains |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention for Long-Context LLM Serving |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| One Arrow, Two Hawks: Sharpness-aware Minimization for Federated Learning via Global Model Trajectory |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| One Diffusion Step to Real-World Super-Resolution via Flow Trajectory Distillation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| One Example Shown, Many Concepts Known! Counterexample-Driven Conceptual Reasoning in Mathematical LLMs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| One Image is Worth a Thousand Words: A Usability Preservable Text-Image Collaborative Erasing Framework |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| One Leaf Reveals the Season: Occlusion-Based Contrastive Learning with Semantic-Aware Views for Efficient Visual Representation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| One Stone, Two Birds: Enhancing Adversarial Defense Through the Lens of Distributional Discrepancy |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| One Wave To Explain Them All: A Unifying Perspective On Feature Attribution |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| One-Pass Feature Evolvable Learning with Theoretical Guarantees |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| One-Shot Heterogeneous Federated Learning with Local Model-Guided Diffusion Models |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| One-Step Generalization Ratio Guided Optimization for Domain Generalization |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| One-dimensional Path Convolution |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| OneForecast: A Universal Framework for Global and Regional Weather Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Online Clustering of Dueling Bandits |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Online Conformal Prediction via Online Optimization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Online Curvature-Aware Replay: Leveraging $\mathbf2^nd$ Order Information for Online Continual Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Online Detection of LLM-Generated Texts via Sequential Hypothesis Testing by Betting |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Online Differentially Private Conformal Prediction for Uncertainty Quantification |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Online Episodic Convex Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Online Laplacian-Based Representation Learning in Reinforcement Learning |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Online Learning in Risk Sensitive constrained MDP |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Online Learning in the Random-Order Model |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Online Learning with Unknown Constraints |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Online Linear Classification with Massart Noise |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Online Pre-Training for Offline-to-Online Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Online Robust Reinforcement Learning Through Monte-Carlo Planning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Online Sparsification of Bipartite-Like Clusters in Graphs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Open Materials Generation with Stochastic Interpolants |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Open Your Eyes: Vision Enhances Message Passing Neural Networks in Link Prediction |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Open-Det: An Efficient Learning Framework for Open-Ended Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| OpenworldAUC: Towards Unified Evaluation and Optimization for Open-world Prompt Tuning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| OptMATH: A Scalable Bidirectional Data Synthesis Framework for Optimization Modeling |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Optimal Algorithm for Max-Min Fair Bandit |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Optimal Auction Design in the Joint Advertising |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Optimal Decision Tree Pruning Revisited: Algorithms and Complexity |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Optimal Fair Learning Robust to Adversarial Distribution Shift |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Optimal Information Retention for Time-Series Explanations |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Optimal Sensor Scheduling and Selection for Continuous-Discrete Kalman Filtering with Auxiliary Dynamics |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Optimal Survey Design for Private Mean Estimation |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Optimal Task Order for Continual Learning of Multiple Tasks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Optimal Transfer Learning for Missing Not-at-Random Matrix Completion |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Optimal Transport Barycenter via Nonconvex-Concave Minimax Optimization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Optimal and Practical Batched Linear Bandit Algorithm |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Optimal transport-based conformal prediction |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Optimistic Algorithms for Adaptive Estimation of the Average Treatment Effect |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Optimization Proxies using Limited Labeled Data and Training Time – A Semi-Supervised Bayesian Neural Network Approach |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Optimization for Neural Operators can Benefit from Width |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Optimization over Sparse Support-Preserving Sets: Two-Step Projection with Global Optimality Guarantees |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Optimizing Adaptive Attacks against Watermarks for Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Optimizing Language Models for Inference Time Objectives using Reinforcement Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Optimizing Large Language Model Training Using FP4 Quantization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Optimizing Noise Distributions for Differential Privacy |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Optimizing Robustness and Accuracy in Mixture of Experts: A Dual-Model Approach |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Optimizing Social Network Interventions via Hypergradient-Based Recommender System Design |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Optimizing Temperature for Language Models with Multi-Sample Inference |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Optimizing Test-Time Compute via Meta Reinforcement Finetuning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Oracle-MoE: Locality-preserving Routing in the Oracle Space for Memory-constrained Large Language Model Inference |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| OrcaLoca: An LLM Agent Framework for Software Issue Localization |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Organize the Web: Constructing Domains Enhances Pre-Training Data Curation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Origin Identification for Text-Guided Image-to-Image Diffusion Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| OrthoRank: Token Selection via Sink Token Orthogonality for Efficient LLM inference |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Orthus: Autoregressive Interleaved Image-Text Generation with Modality-Specific Heads |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Oscillation-Reduced MXFP4 Training for Vision Transformers |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Otter: Generating Tests from Issues to Validate SWE Patches |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Outlier Gradient Analysis: Efficiently Identifying Detrimental Training Samples for Deep Learning Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Outlier-Aware Post-Training Quantization for Discrete Graph Diffusion Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Outsourced Diffusion Sampling: Efficient Posterior Inference in Latent Spaces of Generative Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Over-Tokenized Transformer: Vocabulary is Generally Worth Scaling |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Overcoming Multi-step Complexity in Multimodal Theory-of-Mind Reasoning: A Scalable Bayesian Planner |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Overcoming Non-monotonicity in Transducer-based Streaming Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Overcoming Spurious Solutions in Semi-Dual Neural Optimal Transport: A Smoothing Approach for Learning the Optimal Transport Plan |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Overcoming Vocabulary Mismatch: Vocabulary-agnostic Teacher Guided Language Modeling |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Overcoming the Curse of Dimensionality in Reinforcement Learning Through Approximate Factorization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Overestimation in LLM Evaluation: A Controlled Large-Scale Study on Data Contamination’s Impact on Machine Translation |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Overtrained Language Models Are Harder to Fine-Tune |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| P(all-atom) Is Unlocking New Path For Protein Design |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| PAC Learning with Improvements |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PAC-Bayes Analysis for Recalibration in Classification |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PAK-UCB Contextual Bandit: An Online Learning Approach to Prompt-Aware Selection of Generative Models and LLMs |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| PANDAS: Improving Many-shot Jailbreaking via Positive Affirmation, Negative Demonstration, and Adaptive Sampling |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| PARM: Multi-Objective Test-Time Alignment via Preference-Aware Autoregressive Reward Model |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PARQ: Piecewise-Affine Regularized Quantization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| PASS: Private Attributes Protection with Stochastic Data Substitution |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PCEvolve: Private Contrastive Evolution for Synthetic Dataset Generation via Few-Shot Private Data and Generative APIs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PDE-Transformer: Efficient and Versatile Transformers for Physics Simulations |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PDUDT: Provable Decentralized Unlearning under Dynamic Topologies |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| PEAKS: Selecting Key Training Examples Incrementally via Prediction Error Anchored by Kernel Similarity |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| PEINR: A Physics-enhanced Implicit Neural Representation for High-Fidelity Flow Field Reconstruction |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| PENCIL: Long Thoughts with Short Memory |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
3 |
| PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting for Novel View Synthesis |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PIGDreamer: Privileged Information Guided World Models for Safe Partially Observable Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| PILAF: Optimal Human Preference Sampling for Reward Modeling |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PINNsAgent: Automated PDE Surrogation with Large Language Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| PIPA: Preference Alignment as Prior-Informed Statistical Estimation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| POQD: Performance-Oriented Query Decomposer for Multi-vector retrieval |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| POROver: Improving Safety and Reducing Overrefusal in Large Language Models with Overgeneration and Preference Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PPDiff: Diffusing in Hybrid Sequence-Structure Space for Protein-Protein Complex Design |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PRIME: Deep Imbalanced Regression with Proxies |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PROTOCOL: Partial Optimal Transport-enhanced Contrastive Learning for Imbalanced Multi-view Clustering |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| PROXSPARSE: REGULARIZED LEARNING OF SEMI-STRUCTURED SPARSITY MASKS FOR PRETRAINED LLMS |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| PTTA: Purifying Malicious Samples for Test-Time Model Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Pairwise Maximum Likelihood For Multi-Class Logistic Regression Model With Multiple Rare Classes |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PaperBench: Evaluating AI’s Ability to Replicate AI Research |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Parallel Simulation for Log-concave Sampling and Score-based Diffusion Models |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| ParallelComp: Parallel Long-Context Compressor for Length Extrapolation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Parameter-Efficient Fine-Tuning of State Space Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Parametric Scaling Law of Tuning Bias in Conformal Prediction |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Pareto Merging: Multi-Objective Optimization for Preference-Aware Model Merging |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Pareto-Optimal Fronts for Benchmarking Symbolic Regression Algorithms |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Pareto-frontier Entropy Search with Variational Lower Bound Maximization |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Parrot: Multilingual Visual Instruction Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Partially Observable Reinforcement Learning with Memory Traces |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Partition First, Embed Later: Laplacian-Based Feature Partitioning for Refined Embedding and Visualization of High-Dimensional Data |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Patch-wise Structural Loss for Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PatchPilot: A Cost-Efficient Software Engineering Agent with Early Attempts on Formal Verification |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Penalizing Infeasible Actions and Reward Scaling in Reinforcement Learning with Offline Data |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PepTune: De Novo Generation of Therapeutic Peptides with Multi-Objective-Guided Discrete Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Perception in Reflection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Perceptual-GS: Scene-adaptive Perceptual Densification for Gaussian Splatting |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Perceptually Constrained Precipitation Nowcasting Model |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Peri-LN: Revisiting Normalization Layer in the Transformer Architecture |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Peripheral Memory for LLMs: Integration of Sequential Memory Banks with Adaptive Querying |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
2 |
| Permutation Equivariant Neural Networks for Symmetric Tensors |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Permutation-Free High-Order Interaction Tests |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Permutation-based Rank Test in the Presence of Discretization and Application in Causal Discovery with Mixed Data |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Persistent Topological Features in Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PertEval-scFM: Benchmarking Single-Cell Foundation Models for Perturbation Effect Prediction |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Pessimism Principle Can Be Effective: Towards a Framework for Zero-Shot Transfer Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Pfeife: Automatic Pipeline Parallelism for PyTorch |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Phase and Amplitude-aware Prompting for Enhancing Adversarial Robustness |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Phase transitions for the existence of unregularized M-estimators in single index models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Physics Aware Neural Networks for Unsupervised Binding Energy Prediction |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Physics-Informed DeepONets for drift-diffusion on metric graphs: simulation and parameter identification |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Physics-Informed Generative Modeling of Wireless Channels |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Physics-Informed Weakly Supervised Learning For Interatomic Potentials |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Physics-informed Temporal Alignment for Auto-regressive PDE Foundation Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PiD: Generalized AI-Generated Images Detection with Pixelwise Decomposition Residuals |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| PieClam: A Universal Graph Autoencoder Based on Overlapping Inclusive and Exclusive Communities |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Piloting Structure-Based Drug Design via Modality-Specific Optimal Schedule |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PipeOffload: Improving Scalability of Pipeline Parallelism with Memory Optimization |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Pivoting Factorization: A Compact Meta Low-Rank Representation of Sparsity for Efficient Inference in Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Pixel-level Certified Explanations via Randomized Smoothing |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Pixel2Feature Attack (P2FA): Rethinking the Perturbed Space to Enhance Adversarial Transferability |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Plan-and-Act: Improving Planning of Agents for Long-Horizon Tasks |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Plausible Token Amplification for Improving Accuracy of Differentially Private In-Context Learning Based on Implicit Bayesian Inference |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PlaySlot: Learning Inverse Latent Dynamics for Controllable Object-Centric Video Prediction and Planning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Playmate: Flexible Control of Portrait Animation via 3D-Implicit Space Guided Diffusion |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Point Cloud Dataset Distillation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Point-Level Topological Representation Learning on Point Clouds |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Pointwise Information Measures as Confidence Estimators in Deep Neural Networks: A Comparative Study |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| PoisonBench: Assessing Language Model Vulnerability to Poisoned Preference Data |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| PoisonedEye: Knowledge Poisoning Attack on Retrieval-Augmented Generation based Large Vision-Language Models |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| PokéChamp: an Expert-level Minimax Language Agent |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Policy Design for Two-sided Platforms with Participation Dynamics |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Policy Filtration for RLHF to Mitigate Noise in Reward Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Policy Gradient with Tree Expansion |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Policy Guided Tree Search for Enhanced LLM Reasoning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Policy Optimization for CMDPs with Bandit Feedback: Learning Stochastic and Adversarial Constraints |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Policy Regularization on Globally Accessible States in Cross-Dynamics Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Policy-Regret Minimization in Markov Games with Function Approximation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Policy-labeled Preference Learning: Is Preference Enough for RLHF? |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Poly2Vec: Polymorphic Fourier-Based Encoding of Geospatial Objects for GeoAI Applications |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| PolyConf: Unlocking Polymer Conformation Generation through Hierarchical Generative Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Polynomial Time Learning Augmented Algorithms for NP-hard Permutation Problems |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Polynomial-Delay MAG Listing with Novel Locally Complete Orientation Rules |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Polynomial-Time Approximability of Constrained Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Portable Reward Tuning: Towards Reusable Fine-Tuning across Different Pretrained Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Position: A Theory of Deep Learning Must Include Compositional Sparsity |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: AI Agents Need Authenticated Delegation |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: AI Competitions Provide the Gold Standard for Empirical Rigor in GenAI Evaluation |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: AI Evaluation Should Learn from How We Test Humans |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Position: AI Safety Must Embrace an Antifragile Perspective |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: AI Safety should prioritize the Future of Work |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: AI Scaling: From Up to Down and Out |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: AI Should Not Be An Imitation Game: Centaur Evaluations |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: AI’s growing due process problem |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Algebra Unveils Deep Learning - An Invitation to Neuroalgebraic Geometry |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: All Current Generative Fidelity and Diversity Metrics are Flawed |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Position: An Empirically Grounded Identifiability Theory Will Accelerate Self Supervised Learning Research |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Beyond Assistance – Reimagining LLMs as Ethical and Adaptive Co-Creators in Mental Health Care |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Build Agent Advocates, Not Platform Agents |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Causal Machine Learning Requires Rigorous Synthetic Experiments for Broader Adoption |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Position: Certified Robustness Does Not (Yet) Imply Model Security |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Challenges and Future Directions of Data-Centric AI Alignment |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: Constants are Critical in Regret Bounds for Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Position: Contextual Integrity is Inadequately Applied to Language Models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Current Model Licensing Practices are Dragging Us into a Quagmire of Legal Noncompliance |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: Deep Learning is Not So Mysterious or Different |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Position: Democratic AI is Possible. The Democracy Levels Framework Shows How It Might Work. |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Don’t Use the CLT in LLM Evals With Fewer Than a Few Hundred Datapoints |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Position: Editing Large Language Models Poses Serious Safety Risks |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Enough of Scaling LLMs! Lets Focus on Downscaling |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Explainable AI Cannot Advance Without Better User Studies |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Position: Formal Mathematical Reasoning—A New Frontier in AI |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: Future Research and Challenges Remain Towards AI for Software Engineering |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: General Intelligence Requires Reward-based Pretraining |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Position: Generative AI Regulation Can Learn from Social Media Regulation |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Position: Graph Matching Systems Deserve Better Benchmarks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Position: Human Baselines in Model Evaluations Need Rigor and Transparency (With Recommendations & Reporting Checklist) |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Position: Humanity Faces Existential Risk from Gradual Disempowerment |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: In-House Evaluation Is Not Enough. Towards Robust Third-Party Evaluation and Flaw Disclosure for General-Purpose AI |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: It Is Time We Test Neural Computation In Vitro |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Iterative Online-Offline Joint Optimization is Needed to Manage Complex LLM Copyright Risks |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: LLM Social Simulations Are a Promising Research Method |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: LLMs Need a Bayesian Meta-Reasoning Framework for More Robust and Generalizable Reasoning |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| Position: Language model developers should report train-test overlap |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Position: Lifetime tuning is incompatible with continual reinforcement learning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Position: Machine Learning Models Have a Supply Chain Problem |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
4 |
| Position: Medical Large Language Model Benchmarks Should Prioritize Construct Validity |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Position: Not All Explanations for Deep Learning Phenomena Are Equally Valuable |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Political Neutrality in AI Is Impossible — But Here Is How to Approximate It |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
4 |
| Position: Principles of Animal Cognition to Improve LLM Evaluations |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| Position: Probabilistic Modelling is Sufficient for Causal Inference |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Rethinking Explainable Machine Learning as Applied Statistics |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Rethinking LLM Bias Probing Using Lessons from the Social Sciences |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: Retrieval-augmented systems can be dangerous medical communicators |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Position: Scaling LLM Agents Requires Asymptotic Analysis with LLM Primitives |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: Societal Impacts Research Requires Benchmarks for Creative Composition Tasks |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: Solve Layerwise Linear Models First to Understand Neural Dynamical Phenomena (Neural Collapse, Emergence, Lazy/Rich Regime, and Grokking) |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Position: Spectral GNNs Rely Less on Graph Fourier Basis than Conceived |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: Stop treating ‘AGI’ as the north-star goal of AI research |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Strong Consumer Protection is an Inalienable Defense for AI Safety in the United States |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Supervised Classifiers Answer the Wrong Questions for OOD Detection |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| Position: The Artificial Intelligence and Machine Learning Community Should Adopt a More Transparent and Regulated Peer Review Process |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: The Categorization of Race in ML is a Flawed Premise |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| Position: The Future of Bayesian Prediction Is Prior-Fitted |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Position: The Most Expensive Part of an LLM *should* be its Training Data |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Position: The Right to AI |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Theory of Mind Benchmarks are Broken for Large Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Position: Truly Self-Improving Agents Require Intrinsic Metacognitive Learning |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: Trustworthy AI Agents Require the Integration of Large Language Models and Formal Methods |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Position: Uncertainty Quantification Needs Reassessment for Large Language Model Agents |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: We Can’t Understand AI Using our Existing Vocabulary |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Position: We Need An Algorithmic Understanding of Generative AI |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
✅ |
2 |
| Position: We Need Responsible, Application-Driven (RAD) AI Research |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: When Incentives Backfire, Data Stops Being Human |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Position: You Can’t Manufacture a NeRF |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Positional Attention: Expressivity and Learnability of Algorithmic Computation |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Positional Encoding meets Persistent Homology on Graphs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Positive-unlabeled AUC Maximization under Covariate Shift |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Posterior Inference with Diffusion Models for High-dimensional Black-box Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Potemkin Understanding in Large Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Power Mean Estimation in Stochastic Continuous Monte-Carlo Tree Search |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for EEG |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Pre-training Auto-regressive Robotic Models with 4D Representations |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Preconditioned Riemannian Gradient Descent Algorithm for Low-Multilinear-Rank Tensor Completion |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Predicting High-precision Depth on Low-Precision Devices Using 2D Hilbert Curves |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Predicting mutational effects on protein binding from folding energy |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Predicting the Susceptibility of Examples to Catastrophic Forgetting |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Prediction models that learn to avoid missing values |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Prediction via Shapley Value Regression |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Prediction-Aware Learning in Multi-Agent Systems |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Prediction-Powered Adaptive Shrinkage Estimation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Prediction-Powered E-Values |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Predictive Data Selection: The Data That Predicts Is the Data That Teaches |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Predictive Performance of Deep Quantum Data Re-uploading Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Preference Adaptive and Sequential Text-to-Image Generation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Preference Controllable Reinforcement Learning with Advanced Multi-Objective Optimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Preference Learning for AI Alignment: a Causal Perspective |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Preference Optimization for Combinatorial Optimization Problems |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Preference learning made easy: Everything should be understood through win rate |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Preference-CFR: Beyond Nash Equilibrium for Better Game Strategies |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Premise-Augmented Reasoning Chains Improve Error Identification in Math reasoning with LLMs |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Preserving AUC Fairness in Learning with Noisy Protected Groups |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Prices, Bids, Values: One ML-Powered Combinatorial Auction to Rule Them All |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Primal-Dual Neural Algorithmic Reasoning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Primitive Vision: Improving Diagram Understanding in MLLMs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Primphormer: Efficient Graph Transformers with Primal Representations |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Principal-Agent Bandit Games with Self-Interested and Exploratory Learning Agents |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Principled Algorithms for Optimizing Generalized Metrics in Binary Classification |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Prior Knowledge Guided Neural Architecture Generation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Privacy Amplification Through Synthetic Data: Insights from Linear Regression |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Privacy Attacks on Image AutoRegressive Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Privacy-Preserving Federated Convex Optimization: Balancing Partial-Participation and Efficiency via Noise Cancellation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Privacy-Shielded Image Compression: Defending Against Exploitation from Vision-Language Pretrained Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Private Federated Learning using Preference-Optimized Synthetic Data |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Private Lossless Multiple Release |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Private Model Personalization Revisited |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| ProDiff: Prototype-Guided Diffusion for Minimal Information Trajectory Imputation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ProSec: Fortifying Code LLMs with Proactive Security Alignment |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Probabilistic Factorial Experimental Design for Combinatorial Interventions |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
5 |
| Probabilistic Group Mask Guided Discrete Optimization for Incremental Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Probabilistic Interactive 3D Segmentation with Hierarchical Neural Processes |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Probably Approximately Global Robustness Certification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Probing Visual Language Priors in VLMs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Procurement Auctions via Approximately Optimal Submodular Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Product of Experts with LLMs: Boosting Performance on ARC Is a Matter of Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Progressive Tempering Sampler with Diffusion |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Progressively Label Enhancement for Large Language Model Alignment |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Projection Optimization: A General Framework for Multi-Objective and Multi-Group RLHF |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Projection Pursuit Density Ratio Estimation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Promoting Ensemble Diversity with Interactive Bayesian Distributional Robustness for Fine-tuning Foundation Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Prompt-based Depth Pruning of Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Prompt-to-Leaderboard: Prompt-Adaptive LLM Evaluations |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ProofAug: Efficient Neural Theorem Proving via Fine-grained Proof Structure Analysis |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Propagate and Inject: Revisiting Propagation-Based Feature Imputation for Graphs with Partially Observed Features |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Propagation of Chaos for Mean-Field Langevin Dynamics and its Application to Model Ensemble |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Proposer-Agent-Evaluator (PAE): Autonomous Skill Discovery For Foundation Model Internet Agents |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Protein Structure Tokenization: Benchmarking and New Recipe |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Proto Successor Measure: Representing the Behavior Space of an RL Agent |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Protriever: End-to-End Differentiable Protein Homology Search for Fitness Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Provable Benefit of Random Permutations over Uniform Sampling in Stochastic Coordinate Descent |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Provable Efficiency of Guidance in Diffusion Models for General Data Distribution |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Provable In-Context Vector Arithmetic via Retrieving Task Concepts |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Provable Length Generalization in Sequence Prediction via Spectral Filtering |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Provable Maximum Entropy Manifold Exploration via Diffusion Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Provable Policy Gradient for Robust Average-Reward MDPs Beyond Rectangularity |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Provable Zero-Shot Generalization in Offline Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Provably Cost-Sensitive Adversarial Defense via Randomized Smoothing |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Provably Efficient Algorithm for Best Scoring Rule Identification in Online Principal-Agent Information Acquisition |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Provably Efficient Exploration in Inverse Constrained Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Provably Efficient RL for Linear MDPs under Instantaneous Safety Constraints in Non-Convex Feature Spaces |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Provably Improving Generalization of Few-shot models with Synthetic Data |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Provably Near-Optimal Federated Ensemble Distillation with Negligible Overhead |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Proxy-FDA: Proxy-based Feature Distribution Alignment for Fine-tuning Vision Foundation Models without Forgetting |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Prune ’n Predict: Optimizing LLM Decision-making with Conformal Prediction |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Pruning for GNNs: Lower Complexity with Comparable Expressiveness |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Putnam-AXIOM: A Functional & Static Benchmark for Measuring Higher Level Mathematical Reasoning in LLMs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Puzzle: Distillation-Based NAS for Inference-Optimized LLMs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| PyTDC: A multimodal machine learning training, evaluation, and inference platform for biomedical foundation models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Q-Supervised Contrastive Representation: A State Decoupling Framework for Safe Offline Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| QEM-Bench: Benchmarking Learning-based Quantum Error Mitigation and QEMFormer as a Multi-ranged Context Learning Baseline |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| QMamba: On First Exploration of Vision Mamba for Image Quality Assessment |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| QPRL : Learning Optimal Policies with Quasi-Potential Functions for Asymmetric Traversal |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| QT-DoG: Quantization-Aware Training for Domain Generalization |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| QUTE: Quantifying Uncertainty in TinyML models with Early-exit-assisted ensembles for model-monitoring |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| QoS-Efficient Serving of Multiple Mixture-of-Expert LLMs Using Partial Runtime Reconfiguration |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| QuEST: Stable Training of LLMs with 1-Bit Weights and Activations |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| QuEst: Enhancing Estimates of Quantile-Based Distributional Measures Using Model Predictions |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| QuRe: Query-Relevant Retrieval through Hard Negative Sampling in Composed Image Retrieval |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Quadratic Upper Bound for Boosting Robustness |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Quadruple Attention in Many-body Systems for Accurate Molecular Property Predictions |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Quamba2: A Robust and Scalable Post-training Quantization Framework for Selective State Space Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| QuanONet: Quantum Neural Operator with Application to Differential Equation |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| QuantSpec: Self-Speculative Decoding with Hierarchical Quantized KV Cache |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Quantifying Memory Utilization with Effective State-Size |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Quantifying Prediction Consistency Under Fine-tuning Multiplicity in Tabular LLMs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Quantifying Treatment Effects: Estimating Risk Ratios via Observational Studies |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
2 |
| Quantum Algorithms for Finite-horizon Markov Decision Processes |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Quantum Optimization via Gradient-Based Hamiltonian Descent |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Quantum Speedup for Hypergraph Sparsification |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Quantum Speedups in Regret Analysis of Infinite Horizon Average-Reward Markov Decision Processes |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| R*: Efficient Reward Design via Reward Structure Evolution and Parameter Alignment Optimization with Large Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| R.I.P.: Better Models by Survival of the Fittest Prompts |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| R3DM: Enabling Role Discovery and Diversity Through Dynamics Models in Multi-agent Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| RAGGED: Towards Informed Design of Scalable and Stable RAG Systems |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| RAPID: Long-Context Inference with Retrieval-Augmented Speculative Decoding |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| RATE: Causal Explainability of Reward Models with Imperfect Counterfactuals |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| RBench: Graduate-level Multi-disciplinary Benchmarks for LLM & MLLM Complex Reasoning Evaluation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| RE-Bench: Evaluating Frontier AI R&D Capabilities of Language Model Agents against Human Experts |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| RE-IMAGINE: Symbolic Benchmark Synthesis for Reasoning Evaluation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| REG: Rectified Gradient Guidance for Conditional Diffusion Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| RISE: Radius of Influence based Subgraph Extraction for 3D Molecular Graph Explanation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| RLTHF: Targeted Human Feedback for LLM Alignment |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ROME is Forged in Adversity: Robust Distilled Datasets via Information Bottleneck |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ROPO: Robust Preference Optimization for Large Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ROS: A GNN-based Relax-Optimize-and-Sample Framework for Max-$k$-Cut Problems |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| RULEBREAKERS: Challenging LLMs at the Crossroads between Formal Logic and Human-like Reasoning |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| RUN: Reversible Unfolding Network for Concealed Object Segmentation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| RWKVQuant: Quantizing the RWKV Family with Proxy Guided Hybrid of Scalar and Vector Quantization |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| RZ-NAS: Enhancing LLM-guided Neural Architecture Search via Reflective Zero-Cost Strategy |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Radio: Rate–Distortion Optimization for Large Language Model Compression |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Random Feature Representation Boosting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Random Policy Evaluation Uncovers Policies of Generative Flow Networks |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Random Registers for Cross-Domain Few-Shot Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Randomized Dimensionality Reduction for Euclidean Maximization and Diversity Measures |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Rank-One Modified Value Iteration |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Ranked Entropy Minimization for Continual Test-Time Adaptation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Ranked from Within: Ranking Large Multimodal Models Without Labels |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Ranking with Multiple Oracles: From Weak to Strong Stochastic Transitivity |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Rapid Overfitting of Multi-Pass SGD in Stochastic Convex Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Raptor: Scalable Train-Free Embeddings for 3D Medical Volumes Leveraging Pretrained 2D Foundation Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Re-ranking Reasoning Context with Tree Search Makes Large Vision-Language Models Stronger |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ReFrame: Layer Caching for Accelerated Inference in Real-Time Rendering |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality Protein Backbone Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| ReVISE: Learning to Refine at Test-Time via Intrinsic Self-Verification |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Reaction Graph: Towards Reaction-Level Modeling for Chemical Reactions with 3D Structures |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| RealRAG: Retrieval-augmented Realistic Image Generation via Self-reflective Contrastive Learning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Reasoning Limitations of Multimodal Large Language Models. A case study of Bongard Problems |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Reasoning Through Execution: Unifying Process and Outcome Rewards for Code Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Reasoning-as-Logic-Units: Scaling Test-Time Reasoning in Large Language Models Through Logic Unit Alignment |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Recommendations with Sparse Comparison Data: Provably Fast Convergence for Nonconvex Matrix Factorization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Reconstructing Cell Lineage Trees from Phenotypic Features with Metric Learning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Rectifying Conformity Scores for Better Conditional Coverage |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Reducing Confounding Bias without Data Splitting for Causal Inference via Optimal Transport |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Reducing Tool Hallucination via Reliability Alignment |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Reducing Variance of Stochastic Optimization for Approximating Nash Equilibria in Normal-Form Games |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Redundancy Undermines the Trustworthiness of Self-Interpretable GNNs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ReferSplat: Referring Segmentation in 3D Gaussian Splatting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Refined generalization analysis of the Deep Ritz Method and Physics-Informed Neural Networks |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Refining Adaptive Zeroth-Order Optimization at Ease |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Reflect-then-Plan: Offline Model-Based Planning through a Doubly Bayesian Lens |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Reflection-Bench: Evaluating Epistemic Agency in Large Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Reflection-Window Decoding: Text Generation with Selective Refinement |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Regress, Don’t Guess: A Regression-like Loss on Number Tokens for Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Regression for the Mean: Auto-Evaluation and Inference with Few Labels through Post-hoc Regression |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Regret-Free Reinforcement Learning for Temporal Logic Specifications |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Regularized Langevin Dynamics for Combinatorial Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Reidentify: Context-Aware Identity Generation for Contextual Multi-Agent Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ReinboT: Amplifying Robot Visual-Language Manipulation with Reinforcement Learning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Reinforce LLM Reasoning through Multi-Agent Reflection |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Reinforced Learning Explicit Circuit Representations for Quantum State Characterization from Local Measurements |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| Reinforced Lifelong Editing for Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Reinforcement Learning Control of a Physical Robot Device for Assisted Human Walking without a Simulator |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Reinforcement Learning for Quantum Control under Physical Constraints |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| Reinforcement Learning with Adaptive Reward Modeling for Expensive-to-Evaluate Systems |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Reinforcement Learning with Random Time Horizons |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Reinforcement Learning with Segment Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Rejecting Hallucinated State Targets during Planning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RelGNN: Composite Message Passing for Relational Deep Learning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Relating Misfit to Gain in Weak-to-Strong Generalization Beyond the Squared Loss | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Relational Conformal Prediction for Correlated Time Series | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Relational Invariant Learning for Robust Solvation Free Energy Prediction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Relative Error Fair Clustering in the Weak-Strong Oracle Model | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reliable Algorithm Selection for Machine Learning-Guided Design | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Reliable and Efficient Amortized Model-based Evaluation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RepLoRA: Reparameterizing Low-rank Adaptation via the Perspective of Mixture of Experts | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| RepoAudit: An Autonomous LLM-Agent for Repository-Level Code Auditing | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Representation Preserving Multiclass Agnostic to Realizable Reduction | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Representation Shattering in Transformers: A Synthetic Study with Knowledge Editing | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Representation Surgery in Model Merging with Probabilistic Modeling | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Representations Shape Weak-to-Strong Generalization: Theoretical Insights and Empirical Predictions | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Representative Language Generation | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Representative Ranking for Deliberation in the Public Sphere | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| ResKoopNet: Learning Koopman Representations for Complex Dynamics with Spectral Residuals | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| ResQ: Mixed-Precision Quantization of Large Language Models with Low-Rank Residuals | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| ResearchTown: Simulator of Human Research Community | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Residual Matrix Transformers: Scaling the Size of the Residual Stream | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Residual TPP: A Unified Lightweight Approach for Event Stream Data Analysis | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| Resolving Lexical Bias in Model Editing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RestoreGrad: Signal Restoration Using Conditional Denoising Diffusion Models with Jointly Learned Prior | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Restoring Calibration for Aligned Large Language Models: A Calibration-Aware Fine-Tuning Approach | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethink GraphODE Generalization within Coupled Dynamical System | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Rethink the Role of Deep Learning towards Large-scale Quantum Systems | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Aleatoric and Epistemic Uncertainty | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Rethinking Benign Overfitting in Two-Layer Neural Networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Rethinking Causal Ranking: A Balanced Perspective on Uplift Model Evaluation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Chain-of-Thought from the Perspective of Self-Training | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Rethinking Confidence Scores and Thresholds in Pseudolabeling-based SSL | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethinking External Slow-Thinking: From Snowball Errors to Probability of Correct Reasoning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Rethinking Latent Redundancy in Behavior Cloning: An Information Bottleneck Approach for Robot Manipulation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Rethinking Point Cloud Data Augmentation: Topologically Consistent Deformation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethinking Score Distilling Sampling for 3D Editing and Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Rethinking Time Encoding via Learnable Transformation Functions | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Rethinking the Bias of Foundation Model under Long-tailed Distribution | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Rethinking the Stability-Plasticity Trade-off in Continual Learning from an Architectural Perspective | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Rethinking the Temperature for Federated Heterogeneous Distillation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Retraining with Predicted Hard Labels Provably Increases Model Accuracy | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Retraining-free Merging of Sparse MoE via Hierarchical Clustering | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Retrieval Augmented Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Retrieval Augmented Zero-Shot Enzyme Generation for Specified Substrate | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | 3 |
| Retrieval-Augmented Language Model for Knowledge-aware Protein Encoding | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Retrieval-Augmented Perception: High-resolution Image Perception Meets Visual RAG | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Return Capping: Sample Efficient CVaR Policy Gradient Optimisation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Return of the Latent Space COWBOYS: Re-thinking the use of VAEs for Bayesian Optimisation of Structured Spaces | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Revealing Weaknesses in Text Watermarking Through Self-Information Rewrite Attacks | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReverB-SNN: Reversing Bit of the Weight and Activation for Spiking Neural Networks | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Revisiting Chain-of-Thought in Code Generation: Do Language Models Need to Learn Reasoning before Coding? | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Revisiting Continuity of Image Tokens for Cross-domain Few-shot Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Revisiting Convergence: Shuffling Complexity Beyond Lipschitz Smoothness | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Revisiting Cooperative Off-Policy Multi-Agent Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Revisiting Differentially Private Algorithms for Decentralized Online Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Revisiting Diffusion Models: From Generative Pre-training to One-Step Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Revisiting Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Revisiting Neural Networks for Few-Shot Learning: A Zero-Cost NAS Perspective | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Revisiting Noise Resilience Strategies in Gesture Recognition: Short-Term Enhancement in sEMG Analysis | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Revisiting Non-Acyclic GFlowNets in Discrete Environments | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Revisiting Unbiased Implicit Variational Inference | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Revisiting the Predictability of Performative, Social Events | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reward Modeling with Ordinal Feedback: Wisdom of the Crowd | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Reward Translation via Reward Machine in Semi-Alignable MDPs | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Reward-Augmented Data Enhances Direct Preference Alignment of LLMs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Reward-Guided Iterative Refinement in Diffusion Models at Test-Time with Applications to Protein and DNA Design | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Reward-Guided Prompt Evolving in Reinforcement Learning for LLMs | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Reward-Guided Speculative Decoding for Efficient LLM Reasoning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reward-free World Models for Online Imitation Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rhomboid Tiling for Geometric Graph Deep Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Riemann Tensor Neural Networks: Learning Conservative Systems with Physics-Constrained Networks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Riemannian Diffusion Adaptation for Distributed Optimization on Manifolds | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Right Now, Wrong Then: Non-Stationary Direct Preference Optimization under Preference Drift | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Right Time to Learn: Promoting Generalization via Bio-inspired Spacing Effect in Knowledge Distillation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Risk and cross validation in ridge regression with correlated samples | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Risk-Sensitive Theory of Mind: Coordinating with Agents of Unknown Bias using Cumulative Prospect Theory | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | 4 |
| RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robot-Gated Interactive Imitation Learning with Adaptive Intervention Mechanism | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Robust Automatic Modulation Classification with Fuzzy Regularization | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Robust Autonomy Emerges from Self-Play | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Robust Conformal Outlier Detection under Contaminated Reference Data | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Robust Consensus Anchor Learning for Efficient Multi-view Subspace Clustering | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Robust ML Auditing using Prior Knowledge | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Robust Multi-Agent Reinforcement Learning with Stochastic Adversary | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Robust Multi-bit Text Watermark with LLM-based Paraphrasers | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robust Multimodal Large Language Models Against Modality Conflict | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Robust Noise Attenuation via Adaptive Pooling of Transformer Outputs | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Robust Offline Reinforcement Learning with Linearly Structured $f$-Divergence Regularization | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Robust Reward Alignment via Hypothesis Space Batch Cutting | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Robust Secure Swap: Responsible Face Swap With Persons of Interest Redaction and Provenance Traceability | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Robust Sparsification via Sensitivity | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Robust Spatio-Temporal Centralized Interaction for OOD Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robust and Conjugate Spatio-Temporal Gaussian Processes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RobustLight: Improving Robustness via Diffusion Reinforcement Learning for Traffic Signal Control | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RobustZero: Enhancing MuZero Reinforcement Learning Robustness to State Perturbations | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RollingQ: Reviving the Cooperation Dynamics in Multimodal Transformer | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| RuleAdapter: Dynamic Rules for training Safety Reward Models in RLHF | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Runtime Analysis of Evolutionary NAS for Multiclass Classification | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Rényi Neural Processes | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| S2-Track: A Simple yet Strong Approach for End-to-End 3D Multi-Object Tracking | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| S4S: Solving for a Fast Diffusion Model Solver | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SADA: Stability-guided Adaptive Diffusion Acceleration | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SAE-V: Interpreting Multimodal Models for Enhanced Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SAFE: Finding Sparse and Flat Minima to Improve Pruning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SAFER: A Calibrated Risk-Aware Multimodal Recommendation Model for Dynamic Treatment Regimes | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| SAH-Drive: A Scenario-Aware Hybrid Planner for Closed-Loop Vehicle Trajectory Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 5 |
| SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SAN: Hypothesizing Long-Term Synaptic Development and Neural Engram Mechanism in Scalable Model’s Parameter-Efficient Fine-Tuning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SAND: One-Shot Feature Selection with Additive Noise Distortion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SAeUron: Interpretable Concept Unlearning in Diffusion Models with Sparse Autoencoders | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SBGD: Improving Graph Diffusion Generative Model via Stochastic Block Diffusion | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SCENIR: Visual Semantic Clarity through Unsupervised Scene Graph Retrieval | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SCENT: Robust Spatiotemporal Learning for Continuous Scientific Data via Scalable Conditioned Neural Fields | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SCISSOR: Mitigating Semantic Bias through Cluster-Aware Siamese Networks for Robust Classification | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SDMG: Smoothing Your Diffusion Models for Powerful Graph Representation Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SDP-CROWN: Efficient Bound Propagation for Neural Network Verification with Tightness of Semidefinite Programming | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SE(3)-Equivariant Diffusion Policy in Spherical Fourier Space | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SEAD: Unsupervised Ensemble of Streaming Anomaly Detectors | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SECOND: Mitigating Perceptual Hallucination in Vision-Language Models via Selective and Contrastive Decoding | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SEFE: Superficial and Essential Forgetting Eliminator for Multimodal Continual Instruction Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SEMU: Singular Value Decomposition for Efficient Machine Unlearning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SENSEI: Semantic Exploration Guided by Foundation Models to Learn Versatile World Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| SERENA: A Unified Stochastic Recursive Variance Reduced Gradient Framework for Riemannian Non-Convex Optimization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SGD Jittering: A Training Strategy for Robust and Accurate Model-Based Architectures | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SHARP-Distill: A 68$\times$ Faster Recommender System with Hypergraph Neural Networks and Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SHE: Streaming-media Hashing Retrieval | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SHIELD: Multi-task Multi-distribution Vehicle Routing Solver with Sparsity and Hierarchy | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SIMPLEMIX: Frustratingly Simple Mixing of Off- and On-policy Data in Language Model Preference Learning | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| SING: Spatial Context in Large Language Model for Next-Gen Wearables | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SITCOM: Step-wise Triple-Consistent Diffusion Sampling For Inverse Problems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| SKIM: Any-bit Quantization Pushing The Limits of Post-Training Quantization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SKOLR: Structured Koopman Operator Linear RNN for Time-Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SLiM: One-shot Quantization and Sparsity with Low-rank Approximation for LLM Weight Compression | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SMART-PC: Skeletal Model Adaptation for Robust Test-Time Training in Point Clouds | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SNS-Bench: Defining, Building, and Assessing Capabilities of Large Language Models in Social Networking Services | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| SOLD: Slot Object-Centric Latent Dynamics Models for Relational Manipulation Learning from Pixels | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SPD: Sync-Point Drop for Efficient Tensor Parallelism of Large Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SPEX: Scaling Feature Interaction Explanations for LLMs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SPHINX: Structural Prediction using Hypergraph Inference Network | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| SPMC: Self-Purifying Federated Backdoor Defense via Margin Contribution | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SPRI: Aligning Large Language Models with Context-Situated Principles | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SSHR: More Secure Generative Steganography with High-Quality Revealed Secret Images | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| STAIR: Improving Safety Alignment with Introspective Reasoning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| STAMP Your Content: Proving Dataset Membership via Watermarked Rephrasings | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| STAR: Learning Diverse Robot Skill Abstractions through Rotation-Augmented Vector Quantization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| STD-FD: Spatio-Temporal Distribution Fitting Deviation for AIGC Forgery Identification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SToFM: a Multi-scale Foundation Model for Spatial Transcriptomics | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SUICA: Learning Super-high Dimensional Sparse Implicit Neural Representations for Spatial Transcriptomics | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SWAN: SGD with Normalization and Whitening Enables Stateless LLM Training | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sable: a Performant, Efficient and Scalable Sequence Model for MARL | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Safe Delta: Consistently Preserving Safety when Fine-Tuning LLMs on Diverse Datasets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Safe-EF: Error Feedback for Non-smooth Constrained Optimization | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| SafeArena: Evaluating the Safety of Autonomous Web Agents | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal Foundation Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SafeMap: Robust HD Map Construction from Incomplete Observations | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Safely Learning Optimal Auctions: A Testable Learning Framework for Mechanism Design | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Safety Alignment Can Be Not Superficial With Explicit Safety Signals | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Safety Certificate against Latent Variables with Partially Unidentifiable Dynamics | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Safety Reasoning with Guidelines | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 4 |
| Safety-Polarized and Prioritized Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SafetyAnalyst: Interpretable, Transparent, and Steerable Safety Moderation for AI Behavior | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sample Complexity of Branch-length Estimation by Maximum Likelihood | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sample Complexity of Correlation Detection in the Gaussian Wigner Model | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sample Efficient Demonstration Selection for In-Context Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling Verification | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Sample-Optimal Agnostic Boosting with Unlabeled Data | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sample-specific Noise Injection for Diffusion-based Adversarial Purification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Sampling Binary Data by Denoising through Score Functions | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Sampling from Binary Quadratic Distributions via Stochastic Localization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sanity Checking Causal Representation Learning on a Simple Real-World System | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sassha: Sharpness-aware Adaptive Second-order Optimization with Stable Hessian Approximation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Scaffold with Stochastic Gradients: New Analysis with Linear Speed-Up | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 4 |
| Scalable Approximation Algorithms for $p$-Wasserstein Distance and Its Variants | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Scalable Attribute-Missing Graph Clustering via Neighborhood Differentiation | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Scalable Equilibrium Sampling with Sequential Boltzmann Generators | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Scalable First-order Method for Certifying Optimal k-Sparse GLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Scalable Gaussian Processes with Latent Kronecker Structure | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scalable Generation of Spatial Transcriptomics from Histology Images via Whole-Slide Flow Matching | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Scalable Meta-Learning via Mixed-Mode Differentiation | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Scalable Model Merging with Progressive Layer-wise Distillation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Scalable Non-Equivariant 3D Molecule Generation via Rotational Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Scalable Private Partition Selection via Adaptive Weighting | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Scalable Sobolev IPM for Probability Measures on a Graph | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Scaling Inference-Efficient Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling Large Motion Models with Million-Level Human Motions | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Scaling Laws for Differentially Private Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Scaling Laws for Floating-Point Quantization Training | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Scaling Laws for Forgetting during Finetuning with Pretraining Data Injection | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Scaling Laws for Pre-training Agents and World Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Scaling Laws for Upcycling Mixture-of-Experts Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling Probabilistic Circuits via Monarch Matrices | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Scaling Sparse Feature Circuits For Studying In-Context Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Scaling Test-Time Compute Without Verification or RL is Suboptimal | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Scaling Trends in Language Model Robustness | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Scaling Value Iteration Networks to 5000 Layers for Extreme Long-Term Planning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Schwarz–Schur Involution: Lightspeed Differentiable Sparse Linear Solvers | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Score Matching with Missing Data | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Score as Action: Fine Tuning Diffusion Generative Models by Continuous-time Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Score-Based Diffusion Policy Compatible with Reinforcement Learning via Optimal Transport | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Score-based Pullback Riemannian Geometry: Extracting the Data Manifold Geometry using Anisotropic Flows | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Score-of-Mixture Training: One-Step Generative Model Training Made Simple via Score Estimation of Mixture Distributions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SecEmb: Sparsity-Aware Secure Federated Learning of On-Device Recommender System with Large Embedding | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Secant Line Search for Frank-Wolfe Algorithms | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Securing Equal Share: A Principled Approach for Learning Multiplayer Symmetric Games | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| SeedLoRA: A Fusion Approach to Efficient LLM Fine-Tuning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Segment Anyword: Mask Prompt Inversion for Open-Set Grounded Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Selective Preference Aggregation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Selective Prompt Anchoring for Code Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Selective Response Strategies for GenAI | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Self-Bootstrapping for Versatile Test-Time Adaptation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-Consistency Preference Optimization | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Self-Consuming Generative Models with Adversarially Curated Data | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Self-Discriminative Modeling for Anomalous Graph Detection | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Self-Disentanglement and Re-Composition for Cross-Domain Few-Shot Segmentation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Self-Improving Language Models for Evolutionary Program Synthesis: A Case Study on ARC-AGI | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | 4 |
| Self-Organizing Visual Prototypes for Non-Parametric Representation Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Self-Play $Q$-Learners Can Provably Collude in the Iterated Prisoner’s Dilemma | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Self-Supervised Learning of Intertwined Content and Positional Features for Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Self-Supervised Transformers as Iterative Solution Improvers for Constraint Satisfaction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Self-cross Feature based Spiking Neural Networks for Efficient Few-shot Learning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Self-supervised Adversarial Purification for Graph Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Self-supervised Masked Graph Autoencoder via Structure-aware Curriculum | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Semantic Shift Estimation via Dual-Projection and Classifier Reconstruction for Exemplar-Free Class-Incremental Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Semantics-aware Test-time Adaptation for 3D Human Pose Estimation | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Semi-Supervised Blind Quality Assessment with Confidence-quantifiable Pseudo-label Learning for Authentic Images | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Separating Knowledge and Perception with Procedural Data | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Set Valued Predictions For Robust Domain Generalization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Settling the Maximin Share Fairness for Scheduling among Groups of Machines | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sharp Generalization for Nonparametric Regression by Over-Parameterized Neural Networks: A Distribution-Free Analysis in Spherical Covariate | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Sharp Optimality of Simple, Plug-in Estimation of the Fisher Information of a Smoothed Density | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| ShieldAgent: Shielding Agents via Verifiable Safety Policy Reasoning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Shielded Diffusion: Generating Novel and Diverse Images using Sparse Repellency | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Shifting Time: Time-series Forecasting with Khatri-Rao Neural Operators | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Shortcut-connected Expert Parallelism for Accelerating Mixture of Experts | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Should Decision-Makers Reveal Classifiers in Online Strategic Classification? | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Sidechain conditioning and modeling for full-atom protein sequence design with FAMPNN | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Signed Laplacians for Constrained Graph Clustering | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Simple Path Structural Encoding for Graph Transformers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Simple Policy Optimization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Simple Randomized Rounding for Max-Min Eigenvalue Augmentation | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Simple and Critical Iterative Denoising: A Recasting of Discrete Diffusion in Graph Generation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Simplicity Bias and Optimization Threshold in Two-Layer ReLU Networks | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Simplifying DINO via Coding Rate Regularization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Simultaneous Multi-Robot Motion Planning with Projected Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Since Faithfulness Fails: The Performance Limits of Neural Causal Discovery | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Sketch to Adapt: Fine-Tunable Sketches for Efficient LLM Adaptation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SketchDNN: Joint Continuous-Discrete Diffusion for CAD Sketch Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Skip the Equations: Learning Behavior of Personalized Dynamical Systems Directly From Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SkipGPT: Each Token is One of a Kind | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Skrr: Skip and Re-use Text Encoder Layers for Memory Efficient Text-to-Image Generation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sleeping Reinforcement Learning | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sliding Puzzles Gym: A Scalable Benchmark for State Representation in Visual Reinforcement Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SlimLLM: Accurate Structured Pruning for Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Slimming the Fat-Tail: Morphing-Flow for Adaptive Time Series Modeling | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Smooth Interpolation for Improved Discrete Graph Generative Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Smoothed Preference Optimization via ReNoise Inversion for Aligning Diffusion Models with Varied Human Preferences | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Socialized Coevolution: Advancing a Better World through Cross-Task Collaboration | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Softmax is not Enough (for Sharp Size Generalisation) | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Solving Linear-Gaussian Bayesian Inverse Problems with Decoupled Diffusion Sequential Monte Carlo | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Solving Probabilistic Verification Problems of Neural Networks using Branch and Bound | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Solving Satisfiability Modulo Counting Exactly with Probabilistic Circuits | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Solving Zero-Sum Convex Markov Games | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sorbet: A Neuromorphic Hardware-Compatible Transformer-Based Spiking Language Model | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sort Before You Prune: Improved Worst-Case Guarantees of the DiskANN Family of Graphs | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Sortformer: A Novel Approach for Permutation-Resolved Speaker Supervision in Speech-to-Text Systems | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sounding that Object: Interactive Object-Aware Image to Audio Generation | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Soup-of-Experts: Pretraining Specialist Models via Parameters Averaging | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SpargeAttention: Accurate and Training-free Sparse Attention Accelerating Any Model Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sparse Autoencoders for Hypothesis Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Sparse Autoencoders, Again? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Sparse Causal Discovery with Generative Intervention for Unsupervised Graph Domain Adaptation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Sparse Spectral Training and Inference on Euclidean and Hyperbolic Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Sparse Video-Gen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Sparse-pivot: Dynamic correlation clustering for node insertions | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Sparsing Law: Towards Large Language Models with Greater Activation Sparsity | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Spatial Reasoning with Denoising Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| SpeCache: Speculative Key-Value Caching for Efficient Generation of LLMs | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Spectral-Aware Reservoir Computing for Fast and Accurate Time Series Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Speculative Prefill: Turbocharging TTFT with Lightweight and Training-Free Token Importance Estimation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Speeding up Policy Simulation in Supply Chain RL | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Spherical-Nested Diffusion Model for Panoramic Image Outpainting | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| SpikF: Spiking Fourier Network for Efficient Long-term Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SpikeVideoFormer: An Efficient Spike-Driven Video Transformer with Hamming Attention and $\mathcal{O}(T)$ Complexity | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Splitting & Integrating: Out-of-Distribution Detection via Adversarial Gradient Attribution | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Splitting with Importance-aware Updating for Heterogeneous Federated Learning with Large Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Spurious Correlations in High Dimensional Regression: The Roles of Regularization, Simplicity Bias and Over-Parameterization | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Square$\chi$PO: Differentially Private and Robust $\chi^2$-Preference Optimization in Offline Direct Alignment | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Stability and Generalization Analysis of Decentralized SGD: Sharper Bounds Beyond Lipschitzness and Smoothness | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Stability and Generalization Capability of Subgraph Reasoning Models for Inductive Knowledge Graph Completion | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 4 |
| Stabilizing Sample Similarity in Representation via Mitigating Random Consistency | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Stable Fair Graph Representation Learning with Lipschitz Constraint | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Stable Offline Value Function Learning with Bisimulation-based Representations | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Stacey: Promoting Stochastic Steepest Descent via Accelerated $\ell_p$-Smooth Nonconvex Optimization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Staged and Physics-Grounded Learning Framework with Hyperintensity Prior for Pre-Contrast MRI Synthesis | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Star Attention: Efficient LLM Inference over Long Sequences | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Statistical Collusion by Collectives on Learning Platforms | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 4 |
| Statistical Hypothesis Testing for Auditing Robustness in Language Models | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Statistical Query Hardness of Multiclass Linear Classification with Random Classification Noise | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Statistical Test for Feature Selection Pipelines by Selective Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Statistical and Computational Guarantees of Kernel Max-Sliced Wasserstein Distances | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Stay Hungry, Keep Learning: Sustainable Plasticity for Deep Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Stay-Positive: A Case for Ignoring Real Image Features in Fake Image Detection | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Stealing That Free Lunch: Exposing the Limits of Dyna-Style Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stealix: Model Stealing via Prompt Evolution | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| StealthInk: A Multi-bit and Stealthy Watermark for Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Steer LLM Latents for Hallucination Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Steerable Transformers for Volumetric Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Steering Protein Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Step-DAD: Semi-Amortized Policy-Based Bayesian Experimental Design | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stochastic Control for Fine-tuning Diffusion Models: Optimality, Regularity, and Convergence | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Stochastic Deep Restoration Priors for Imaging Inverse Problems | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Stochastic Encodings for Active Feature Acquisition | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Stochastic Forward–Backward Deconvolution: Training Diffusion Models with Finite Noisy Datasets | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Stochastic Layer-Wise Shuffle for Improving Vision Mamba Training | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Stochastic Online Conformal Prediction with Semi-Bandit Feedback | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Stochastic Poisson Surface Reconstruction with One Solve using Geometric Gaussian Processes | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Stochastic Smoothed Primal-Dual Algorithms for Nonconvex Optimization with Linear Inequality Constraints | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Strategic A/B testing via Maximum Probability-driven Two-armed Bandit | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Strategic Planning: A Top-Down Approach to Option Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Strategy Coopetition Explains the Emergence and Transience of In-Context Learning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Stray Intrusive Outliers-Based Feature Selection on Intra-Class Asymmetric Instance Distribution or Multiple High-Density Clusters | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Stream-level Flow Matching with Gaussian Processes | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Streamline Without Sacrifice - Squeeze out Computation Redundancy in LMM | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Strengthen Out-of-Distribution Detection Capability with Progressive Self-Knowledge Distillation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Strong and Weak Identifiability of Optimization-based Causal Discovery in Non-linear Additive Noise Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Stronger Neyman Regret Guarantees for Adaptive Experimental Design | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Structure Is All You Need: Structural Representation Learning on Hyper-Relational Knowledge Graphs | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Structure-Guided Large Language Models for Text-to-SQL Generation | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 2 |
| Structure-informed Risk Minimization for Robust Ensemble Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Structured Preconditioners in Adaptive Optimization: A Unified Analysis | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Sub-Sequential Physics-Informed Learning with State Space Model | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Subgoal-Guided Policy Heuristic Search with Learned Subgoals | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Subgroups Matter for Robust Bias Mitigation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Subobject-level Image Tokenization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Subspace Optimization for Large Language Models with Convergence Guarantees | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Sum-of-Parts: Self-Attributing Neural Networks with End-to-End Learning of Feature Groups | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sundial: A Family of Highly Capable Time Series Foundation Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Super Deep Contrastive Information Bottleneck for Multi-modal Clustering | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Supercharging Graph Transformers with Advective Diffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Supervised Contrastive Learning from Weakly-Labeled Audio Segments for Musical Version Matching | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Surrogate Prompt Learning: Towards Efficient and Diverse Prompt Learning for Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Survival Analysis via Density Estimation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Symmetric Reinforcement Learning Loss for Robust Learning on Diverse Tasks and Model Scales | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Symmetry-Aware GFlowNets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Symmetry-Driven Discovery of Dynamical Variables in Molecular Simulations | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | 2 |
| Symmetry-Robust 3D Orientation Estimation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| SynEVO: A neuro-inspired spatiotemporal evolutional framework for cross-domain adaptation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SyncMind: Measuring Agent Out-of-Sync Recovery in Collaborative Software Engineering | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Synonymous Variational Inference for Perceptual Image Compression | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Synthesizing Images on Perceptual Boundaries of ANNs for Uncovering and Manipulating Human Perceptual Variability | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Synthesizing Privacy-Preserving Text Data via Finetuning *without* Finetuning Billion-Scale LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Synthesizing Software Engineering Data in a Test-Driven Manner | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Synthetic Face Datasets Generation via Latent Space Exploration from Brownian Identity Diffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Synthetic Text Generation for Training Large Language Models via Gradient Matching | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| System-Aware Unlearning Algorithms: Use Lesser, Forget Faster | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| T1: Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TANGO: Clustering with Typicality-Aware Nonlocal Mode-Seeking and Graph-Cut Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| TAROT: Targeted Data Selection via Optimal Transport | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TCP-Diffusion: A Multi-modal Diffusion Model for Global Tropical Cyclone Precipitation Forecasting with Change Awareness | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| TGDPO: Harnessing Token-Level Reward Guidance for Enhancing Direct Preference Optimization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TIMING: Temporality-Aware Integrated Gradients for Time Series Explanation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| TINED: GNNs-to-MLPs by Teacher Injection and Dirichlet Energy Distillation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TLLC: Transfer Learning-based Label Completion for Crowdsourcing | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| TMetaNet: Topological Meta-Learning Framework for Dynamic Link Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TOPLOC: A Locality Sensitive Hashing Scheme for Trustless Verifiable Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| TRACE Back from the Future: A Probabilistic Reasoning Approach to Controllable Language Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TRUST-VLM: Thorough Red-Teaming for Uncovering Safety Threats in Vision-Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| TS-SNN: Temporal Shift Module for Spiking Neural Networks | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TSP: A Two-Sided Smoothed Primal-Dual Method for Nonconvex Bilevel Optimization | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TTFSFormer: A TTFS-based Lossless Conversion of Spiking Transformer | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| TUMTraf VideoQA: Dataset and Benchmark for Unified Spatio-Temporal Video Understanding in Traffic Scenes | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| TabFSBench: Tabular Benchmark for Feature Shifts in Open Environments | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TabFlex: Scaling Tabular Learning to Millions with Linear Attention | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TabICL: A Tabular Foundation Model for In-Context Learning on Large Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TabNAT: A Continuous-Discrete Joint Generative Framework for Tabular Data | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| TabPFN Unleashed: A Scalable and Effective Solution to Tabular Classification Problems | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TabSDS: a Lightweight, Fully Non-Parametric, and Model Free Approach for Generating Synthetic Tabular Data | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Tackling View-Dependent Semantics in 3D Language Gaussian Splatting | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Taming Diffusion for Dataset Distillation with High Representativeness | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Taming Knowledge Conflicts in Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Taming Rectified Flow for Inversion and Editing | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Target Concrete Score Matching: A Holistic Framework for Discrete Diffusion | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Targeted Low-rank Refinement: Enhancing Sparse Language Models with Precision | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Targeted Unlearning with Single Layer Unlearning Gradient | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Targeted control of fast prototyping through domain-specific interface | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Task Generalization with Autoregressive Compositional Structure: Can Learning from $D$ Tasks Generalize to $D^T$ Tasks? | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Task-Agnostic Pre-training and Task-Guided Fine-tuning for Versatile Diffusion Planner | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Task-Aware Virtual Training: Enhancing Generalization in Meta-Reinforcement Learning for Out-of-Distribution Tasks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Task-Gated Multi-Expert Collaboration Network for Degraded Multi-Modal Image Fusion | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| TeDS: Joint Learning of Diachronic and Synchronic Perspectives in Quaternion Space for Temporal Knowledge Graph Completion | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| TeLoGraF: Temporal Logic Planning via Graph-encoded Flow Matching | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Teaching Language Models to Critique via Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Teaching Physical Awareness to LLMs through Sounds | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Teaching Transformers Causal Reasoning through Axiomatic Training | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Telling Peer Direct Effects from Indirect Effects in Observational Network Data | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Temperature-Annealed Boltzmann Generators | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Temporal Difference Flows | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Temporal Distance-aware Transition Augmentation for Offline Model-based Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Temporal Misalignment in ANN-SNN Conversion and its Mitigation via Probabilistic Spiking Neurons | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Temporal Query Network for Efficient Multivariate Time Series Forecasting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Tensor Decomposition Based Memory-Efficient Incremental Learning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Tensor Product Neural Networks for Functional ANOVA Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tensor-Var: Efficient Four-Dimensional Variational Data Assimilation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tensorized Multi-View Multi-Label Classification via Laplace Tensor Rank | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Test-Time Adaptation for Online Vision-Language Navigation with Feedback-based Reinforcement Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Test-Time Adaptation with Binary Feedback | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Test-Time Canonicalization by Foundation Models for Robust Perception | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Test-Time Graph Neural Dataset Search With Generative Projection | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | 3 |
| Test-Time Learning for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Test-Time Multimodal Backdoor Detection by Contrastive Prompting | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Test-Time Selective Adaptation for Uni-Modal Distribution Shift in Multi-Modal Data | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Test-Time Training Provably Improves Transformers as In-context Learners | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Test-time Adaptation on Graphs via Adaptive Subgraph-based Selection and Regularized Prototypes | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Test-time Adapted Reinforcement Learning with Action Entropy Regularization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Test-time Correlation Alignment | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Testing Conditional Mean Independence Using Generative Neural Networks | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Testing the Limits of Fine-Tuning for Improving Visual Cognition in Vision Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Text-to-CAD Generation Through Infusing Visual Feedback in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Text-to-LoRA: Instant Transformer Adaption | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TextCenGen: Attention-Guided Text-Centric Background Adaptation for Text-to-Image Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Textual Unlearning Gives a False Sense of Unlearning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Textural or Textual: How Vision-Language Models Read Text in Images | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Batch Complexity of Bandit Pure Exploration | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Berkeley Function Calling Leaderboard (BFCL): From Tool Use to Agentic Evaluation of Large Language Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Best of Both Worlds: Bridging Quality and Diversity in Data Selection with Bipartite Graph | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Brain’s Bitter Lesson: Scaling Speech Decoding With Self-Supervised Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| The Butterfly Effect: Neural Network Training Trajectories Are Highly Sensitive to Initial Conditions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Canary’s Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Case for Learned Provenance-based System Behavior Baseline | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Complexity of Learning Sparse Superposed Features with Feedback | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| The Courage to Stop: Overcoming Sunk Cost Fallacy in Deep Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| The Devil Is in the Details: Tackling Unimodal Spurious Correlations for Generalizable Multimodal Reward Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Diffusion Duality | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Disparate Benefits of Deep Ensembles | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Double-Ellipsoid Geometry of CLIP | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 2 |
| The Elicitation Game: Evaluating Capability Elicitation Techniques | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Emperor’s New Clothes in Benchmarking? A Rigorous Examination of Mitigation Strategies for LLM Benchmark Data Contamination | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| The Empirical Mean is Minimax Optimal for Local Glivenko-Cantelli | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| The Energy Loss Phenomenon in RLHF: A New Perspective on Mitigating Reward Hacking | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| The Four Color Theorem for Cell Instance Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Generalized Skew Spectrum of Graphs | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| The Global Convergence Time of Stochastic Gradient Descent in Non-Convex Landscapes: Sharp Estimates via Large Deviations | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Harder Path: Last Iterate Convergence for Uncoupled Learning in Zero-Sum Games with Bandit Feedback | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Hidden Dimensions of LLM Alignment: A Multi-Dimensional Analysis of Orthogonal Safety Directions | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Hidden Joules: Evaluating the Energy Consumption of Vision Backbones for Progress Towards More Efficient Model Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models Via Visual Information Steering | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| The Illusion of Role Separation: Hidden Shortcuts in LLM Role Learning (and How to Fix Them) | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| The Impact of On-Policy Parallelized Data Collection on Deep Reinforcement Learning Networks | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| The Importance of Being Lazy: Scaling Limits of Continual Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| The Jailbreak Tax: How Useful are Your Jailbreak Outputs? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Limits of Predicting Agents from Behaviour | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Limits of Tractable Marginalization | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Lock-in Hypothesis: Stagnation by Algorithm | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| The Logical Implication Steering Method for Conditional Interventions on Transformer Generation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Missing Alignment Link of In-context Learning on Sequences | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| The Noisy Laplacian: a Threshold Phenomenon for Non-Linear Dimension Reduction | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Number of Trials Matters in Infinite-Horizon General-Utility Markov Decision Processes | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Panaceas for Improving Low-Rank Decomposition in Communication-Efficient Federated Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Polynomial Stein Discrepancy for Assessing Moment Convergence | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Power of Random Features and the Limits of Distribution-Free Gradient Descent | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Price of Freedom: Exploring Expressivity and Runtime Tradeoffs in Equivariant Tensor Products | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| The Price of Linear Time: Error Analysis of Structured Kernel Interpolation | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| The Relationship Between No-Regret Learning and Online Conformal Prediction | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Ripple Effect: On Unforeseen Complications of Backdoor Attacks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Role of Randomness in Stability | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Role of Sparsity for Length Generalization in LLMs | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| The Sample Complexity of Online Strategic Decision Making with Information Asymmetry and Knowledge Transportability | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Sharpness Disparity Principle in Transformers for Accelerating Language Model Pre-Training | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Sparse-Plus-Low-Rank Quasi-Newton Method for Entropic-Regularized Optimal Transport | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The Surprising Effectiveness of Test-Time Training for Few-Shot Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| The Synergy of LLMs & RL Unlocks Offline Learning of Generalizable Language-Conditioned Policies with Low-fidelity Data | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| The Underlying Universal Statistical Structure of Natural Datasets | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| The Value of Prediction in Identifying the Worst-Off | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| The dark side of the forces: assessing non-conservative force models for atomistic machine learning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| The impact of uncertainty on regularized learning in games | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Theoretical Limitations of Ensembles in the Age of Overparameterization | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Theoretical Performance Guarantees for Partial Domain Adaptation via Partial Optimal Transport | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Theoretical guarantees on the best-of-n alignment policy | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Thermalizer: Stable autoregressive neural emulation of spatiotemporal chaos | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 4 |
| Thickness-aware E(3)-Equivariant 3D Mesh Neural Networks | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | 4 |
| Think Smarter not Harder: Adaptive Reasoning with Inference Aware Optimization | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Think Twice, Act Once: A Co-Evolution Framework of LLM and RL for Large-Scale Decision Making | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Thinking LLMs: General Instruction Following with Thought Generation | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Three-Dimensional Trajectory Prediction with 3DMoTraj Dataset | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tight and Fast Bounds for Multi-Label Learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Tightening Causal Bounds via Covariate-Aware Optimal Transport | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Tilted Sharpness-Aware Minimization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Time Series Representations with Hard-Coded Invariances | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Time to Spike? Understanding the Representational Power of Spiking Neural Networks in Discrete Time | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Time-Aware World Model for Adaptive Prediction and Control | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Time-VLM: Exploring Multimodal Vision-Language Models for Augmented Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TimeBase: The Power of Minimalism in Efficient Long-term Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| TimeBridge: Non-Stationarity Matters for Long-term Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TimeDART: A Diffusion Autoregressive Transformer for Self-Supervised Time Series Representation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TimeFilter: Patch-Specific Spatial-Temporal Graph Filtration for Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TimePoint: Accelerated Time Series Alignment via Self-Supervised Keypoint and Descriptor Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TimePro: Efficient Multivariate Long-term Time Series Forecasting with Variable- and Time-Aware Hyper-state | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TimeStacker: A Novel Framework with Multilevel Observation for Capturing Nonstationary Patterns in Time Series Forecasting | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| TimeStep Master: Asymmetrical Mixture of Timestep LoRA Experts for Versatile and Efficient Diffusion Models in Vision | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| TinyMIG: Transferring Generalization from Vision Foundation Models to Single-Domain Medical Imaging | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| To Each Metric Its Decoding: Post-Hoc Optimal Decision Rules of Probabilistic Hierarchical Classifiers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| To Steer or Not to Steer? Mechanistic Error Reduction with Abstention for Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ToMA: Token Merge with Attention for Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Token Coordinated Prompt Attention is Needed for Visual Prompting | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Token Signature: Predicting Chain-of-Thought Gains with Token Decoding Feature in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Tokenized Bandit for LLM Decoding and Alignment | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Tool Unlearning for Tool-Augmented LLMs | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| TopInG: Topologically Interpretable Graph Learning via Persistent Rationale Filtration | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| TopoTune: A Framework for Generalized Combinatorial Complex Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Topological Signatures of Adversaries in Multimodal Alignments | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Topology-aware Neural Flux Prediction Guided by Physics | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Toward Data-centric Directed Graph Learning: An Entropy-driven Approach | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Toward Efficient Kernel-Based Solvers for Nonlinear PDEs | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Toward a Unified Theory of Gradient Descent under Generalized Smoothness | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | 3 |
| Towards Attributions of Input Variables in a Coalition | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| Towards Better-than-2 Approximation for Constrained Correlation Clustering | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Towards Black-Box Membership Inference Attack for Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Cost-Effective Reward Guided Text Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Towards Efficient Online Tuning of VLM Agents via Counterfactual Soft Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Escaping from Class Dependency Modeling for Multi-Dimensional Classification | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Global-level Mechanistic Interpretability: A Perspective of Modular Circuits of Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Towards Learning to Complete Anything in Lidar | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards Lifelong Model Editing via Simulating Ideal Editor | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Memorization Estimation: Fast, Formal and Free |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Towards Practical Defect-Focused Automated Code Review |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Towards Rationale-Answer Alignment of LVLMs via Self-Rationale Calibration |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Towards Robust Influence Functions with Flat Validation Minima |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Towards Robustness and Explainability of Automatic Algorithm Selection |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Towards Theoretical Understanding of Sequential Decision Making with Preference Feedback |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Towards Trustworthy Federated Learning with Untrusted Participants |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Towards Understanding Catastrophic Forgetting in Two-layer Convolutional Neural Networks |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Towards Understanding Fine-Tuning Mechanisms of LLMs via Circuit Analysis |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Towards Understanding Gradient Dynamics of the Sliced-Wasserstein Distance via Critical Point Analysis |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Towards Understanding Parametric Generalized Category Discovery on Graphs |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Towards Universal Offline Black-Box Optimization via Learning Language Model Embeddings |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
4 |
| Towards a Formal Theory of Representational Compositionality |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Towards a General Time Series Forecasting Model with Unified Representation and Adaptive Transfer |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Towards a Mechanistic Explanation of Diffusion Model Generalization |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Towards a Unified Framework of Clustering-based Anomaly Detection |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Towards an Explainable Comparison and Alignment of Feature Embeddings |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Towards characterizing the value of edge embeddings in Graph Neural Networks |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Towards flexible perception with visual memory |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Towards the Causal Complete Cause of Multi-Modal Representation Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Towards the Efficient Inference by Incorporating Automated Computational Phenotypes under Covariate Shift |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| TraceGrad: a Framework Learning Expressive SO(3)-equivariant Non-linear Representations for Electronic-Structure Hamiltonian Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Tracking Most Significant Shifts in Infinite-Armed Bandits |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Tracking The Best Expert Privately |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Tractable Transformers for Flexible Conditional Generation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Training Deep Learning Models with Norm-Constrained LMOs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Training Diffusion-based Generative Models with Limited Data |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Training Dynamics of In-Context Learning in Linear Attention |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Training Flexible Models of Genetic Variant Effects from Functional Annotations using Accelerated Linear Algebra |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Training High Performance Spiking Neural Network by Temporal Model Calibration |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Training Software Engineering Agents and Verifiers with SWE-Gym |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Training a Generally Curious Agent |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Trajectory Inference with Smooth Schrödinger Bridges |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Trajectory World Models for Heterogeneous Environments |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| TransPL: VQ-Code Transition Matrices for Pseudo-Labeling of Time Series Unsupervised Domain Adaptation |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Transfer Learning for Nonparametric Contextual Dynamic Pricing |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Transfer Q-Learning with Composite MDP Structures |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Transformative or Conservative? Conservation laws for ResNets and Transformers |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Transformer-Based Spatial-Temporal Counterfactual Outcomes Estimation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Transolver++: An Accurate Neural Solver for PDEs on Million-Scale Geometries |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Tree-Sliced Wasserstein Distance with Nonlinear Projection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Tree-Sliced Wasserstein Distance: A Geometric Perspective |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| TreeLoRA: Efficient Continual Learning via Layer-Wise LoRAs Guided by a Hierarchical Gradient-Similarity Tree |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Triple-Optimistic Learning for Stochastic Contextual Bandits with General Constraints |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Trust-Region Twisted Policy Improvement |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Trusted Multi-View Classification with Expert Knowledge Constraints |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Trustworthy Machine Learning through Data-Specific Indistinguishability |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| TruthFlow: Truthful LLM Generation via Representation Flow Correction |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| TtBA: Two-third Bridge Approach for Decision-Based Adversarial Attack |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| TuCo: Measuring the Contribution of Fine-Tuning to Individual Responses of LLMs |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Tuning LLM Judge Design Decisions for 1/1000 of the Cost |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Tuning Sequential Monte Carlo Samplers via Greedy Incremental Divergence Minimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Two Tickets are Better than One: Fair and Accurate Hiring Under Strategic LLM Manipulations |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| TypyBench: Evaluating LLM Type Inference for Untyped Python Repositories |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
5 |
| UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| UGPhysics: A Comprehensive Benchmark for Undergraduate Physics Reasoning with Large Language Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| UI-Vision: A Desktop-centric GUI Benchmark for Visual Perception and Interaction |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
2 |
| UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Ultra Lowrate Image Compression with Semantic Residual Coding and Compression-aware Diffusion |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Ultra-Resolution Adaptation with Ease |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| UltraTWD: Optimizing Ultrametric Trees for Tree-Wasserstein Distance |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| UnHiPPO: Uncertainty-aware Initialization for State Space Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unbiased Evaluation of Large Language Models from a Causal Perspective |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Unbiased Recommender Learning from Implicit Feedback via Weakly Supervised Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| UncertainSAM: Fast and Efficient Uncertainty Quantification of the Segment Anything Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Uncertainty Estimation for Heterophilic Graphs Through the Lens of Information Theory |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Uncertainty Quantification for LLM-Based Survey Simulations |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Uncertainty-Based Extensible Codebook for Discrete Federated Learning in Heterogeneous Data Silos |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Unconstrained Robust Online Convex Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Understanding Chain-of-Thought in LLMs through Information Theory |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Understanding Complexity in VideoQA via Visual Program Generation |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Understanding Fixed Predictions via Confined Regions |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Understanding Generalization in Quantum Machine Learning with Margins |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Understanding High-Dimensional Bayesian Optimization |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| Understanding Input Selectivity in Mamba: Impact on Approximation Power, Memorization, and Associative Recall Capacity |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| Understanding Mode Connectivity via Parameter Space Symmetry |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Understanding Model Ensemble in Transferable Adversarial Attack |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Understanding Model Reprogramming for CLIP via Decoupling Visual Prompts |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Understanding Multimodal LLMs Under Distribution Shifts: An Information-Theoretic Approach |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Understanding Nonlinear Implicit Bias via Region Counts in Input Space |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Understanding Overadaptation in Supervised Fine-Tuning: The Role of Ensemble Methods |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Understanding Sharpness Dynamics in NN Training with a Minimalist Example: The Effects of Dataset Difficulty, Depth, Stochasticity, and More |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Understanding Synthetic Context Extension via Retrieval Heads |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Understanding and Improving Length Generalization in Recurrent Models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Understanding and Mitigating Memorization in Diffusion Models for Tabular Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Understanding and Mitigating Memorization in Generative Models via Sharpness of Probability Landscapes |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Understanding the Emergence of Multimodal Representation Alignment |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Understanding the Forgetting of (Replay-based) Continual Learning via Feature Learning: Angle Matters |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Understanding the Kronecker Matrix-Vector Complexity of Linear Algebra |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Understanding the Limits of Deep Tabular Methods with Temporal Shift |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Understanding the Logic of Direct Preference Alignment through Logic |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Understanding the Skill Gap in Recurrent Language Models: The Role of the Gather-and-Aggregate Mechanism |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Understanding the Statistical Accuracy-Communication Trade-off in Personalized Federated Learning with Minimax Guarantees |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Understanding the Unfairness in Network Quantization |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Understanding the difficulties of posterior predictive estimation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| UniDB: A Unified Diffusion Bridge Framework via Stochastic Optimal Control |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| UniMC: Taming Diffusion Transformer for Unified Keypoint-Guided Multi-Class Image Generation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| UniMate: A Unified Model for Mechanical Metamaterial Generation, Property Prediction, and Condition Confirmation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| UniMoMo: Unified Generative Modeling of 3D Molecules for De Novo Binder Design |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| UniSim: A Unified Simulator for Time-Coarsened Dynamics of Biomolecules |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unifews: You Need Fewer Operations for Efficient Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Unified Analysis of Continuous Weak Features Learning with Applications to Learning from Missing Data |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Unified Breakdown Analysis for Byzantine Robust Gossip |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Unified K-Means Clustering with Label-Guided Manifold Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Unified Screening for Multiple Diseases |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Uniform Mean Estimation for Heavy-Tailed Distributions via Median-of-Means |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Unifying 2D and 3D Vision-Language Understanding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unifying Knowledge from Diverse Datasets to Enhance Spatial-Temporal Modeling: A Granularity-Adaptive Geographical Embedding Approach |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Unifying Specialized Visual Encoders for Video Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unisoma: A Unified Transformer-based Solver for Multi-Solid Systems |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Universal Approximation Theorem of Deep Q-Networks |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Universal Approximation of Mean-Field Models via Transformers |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Universal Biological Sequence Reranking for Improved De Novo Peptide Sequencing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Universal Length Generalization with Turing Programs |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Universal Neural Optimal Transport |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Unlocking Post-hoc Dataset Inference with Synthetic Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Unlocking the Capabilities of Large Vision-Language Models for Generalizable and Explainable Deepfake Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unlocking the Power of SAM 2 for Few-Shot Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unnatural Languages Are Not Bugs but Features for LLMs |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Unpaired Point Cloud Completion via Unbalanced Optimal Transport |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Unraveling the Interplay between Carryover Effects and Reward Autocorrelations in Switchback Experiments |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unsupervised Learning for Class Distribution Mismatch |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Unveiling AI’s Blind Spots: An Oracle for In-Domain, Out-of-Domain, and Adversarial Errors |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unveiling Markov heads in Pretrained Language Models for Offline Reinforcement Learning |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Update Your Transformer to the Latest Release: Re-Basin of Task Vectors |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Upweighting Easy Samples in Fine-Tuning Mitigates Forgetting |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| VCT: Training Consistency Models with Variational Noise Coupling |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| VIP: Vision Instructed Pre-training for Robotic Manipulation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| VTGaussian-SLAM: RGBD SLAM for Large Scale Scenes with Splatting View-Tied 3D Gaussians |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Validating Mechanistic Interpretations: An Axiomatic Approach |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
6 |
| Value-Based Deep RL Scales Predictably |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Variance as a Catalyst: Efficient and Transferable Semantic Erasure Adversarial Attack for Customized Diffusion Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Variance-Reduced Forward-Reflected-Backward Splitting Methods for Nonmonotone Generalized Equations |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Variational Control for Guidance in Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Variational Counterfactual Intervention Planning to Achieve Target Outcomes |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Variational Learning of Fractional Posteriors |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Variational Phylogenetic Inference with Products over Bipartitions |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Variational Rectified Flow Matching |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VerbalTS: Generating Time Series from Texts |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Verification Learning: Make Unsupervised Neuro-Symbolic System Feasible |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ViTally Consistent: Scaling Biological Representation Learning for Cell Microscopy |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Video-Enhanced Offline Reinforcement Learning: A Model-Based Approach |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| VideoRoPE: What Makes for Good Video Rotary Position Embedding? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| VinePPO: Refining Credit Assignment in RL Training of LLMs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Vintix: Action Model via In-Context Reinforcement Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Vision Graph Prompting via Semantic Low-Rank Decomposition |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Vision-Language Model Selection and Reuse for Downstream Adaptation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Vision-Language Models Create Cross-Modal Task Representations |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Visual Abstraction: A Plug-and-Play Approach for Text-Visual Retrieval |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Visual Attention Never Fades: Selective Progressive Attention ReCalibration for Detailed Image Captioning in Multimodal Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Visual Autoregressive Modeling for Image Super-Resolution |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Visual Generation Without Guidance |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Visual Graph Arena: Evaluating Visual Conceptualization of Vision and Multimodal Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Visual and Domain Knowledge for Professional-level Graph-of-Thought Medical Reasoning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Volume Optimality in Conformal Prediction with Structured Prediction Sets |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Volume-Aware Distance for Robust Similarity Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Voronoi-grid-based Pareto Front Learning and Its Application to Collaborative Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Vulnerability-Aware Alignment: Mitigating Uneven Forgetting in Harmful Fine-Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| WATCH: Adaptive Monitoring for AI Deployments via Weighted-Conformal Martingales |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| WAVE: Weighted Autoregressive Varying Gate for Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| WGFormer: An SE(3)-Transformer Driven by Wasserstein Gradient Flows for Molecular Ground-State Conformation Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| WILTing Trees: Interpreting the Distance Between MPNN Embeddings |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| WMAdapter: Adding WaterMark Control to Latent Diffusion Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| WMarkGPT: Watermarked Image Understanding via Multimodal Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| WOMD-Reasoning: A Large-Scale Dataset for Interaction Reasoning in Driving |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Wait-Less Offline Tuning and Re-solving for Online Decision Making |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Wasserstein Flow Matching: Generative Modeling Over Families of Distributions |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Wasserstein Policy Optimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Weak-to-Strong Generalization Even in Random Feature Networks, Provably |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Weak-to-Strong Jailbreaking on Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Weakly Supervised Anomaly Detection via Dual-Tailed Kernel |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Weakly-Supervised Contrastive Learning for Imprecise Class Labels |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Weight matrices compression based on PDB model in deep neural networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Weisfeiler and Leman Go Gambling: Why Expressive Lottery Tickets Win |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning? |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| What If We Recaption Billions of Web Images with LLaMA-3? |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| What Limits Bidirectional Model’s Generative Capabilities? A Uni-Bi-Directional Mixture-of-Expert Method For Bidirectional Fine-tuning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| What Limits Virtual Agent Application? OmniBench: A Scalable Multi-Dimensional Benchmark for Essential Virtual Agent Capabilities |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| What Makes In-context Learning Effective for Mathematical Reasoning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| What Makes a Good Feedforward Computational Graph? |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| What can large language models do for sustainable food? |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| What makes an Ensemble (Un) Interpretable? |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| When Bad Data Leads to Good Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| When Can Proxies Improve the Sample Complexity of Preference Learning? |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| When Data-Free Knowledge Distillation Meets Non-Transferable Teacher: Escaping Out-of-Distribution Trap is All You Need |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| When Diffusion Models Memorize: Inductive Biases in Probability Flow of Minimum-Norm Shallow Neural Nets |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| When Do LLMs Help With Node Classification? A Comprehensive Analysis |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| When Dynamic Data Selection Meets Data Augmentation: Achieving Enhanced Training Acceleration |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| When Every Millisecond Counts: Real-Time Anomaly Detection via the Multimodal Asynchronous Hybrid Network |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| When Maximum Entropy Misleads Policy Optimization |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| When Model Knowledge meets Diffusion Model: Diffusion-assisted Data-free Image Synthesis with Alignment of Domain and Class |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| When Will It Fail?: Anomaly to Prompt for Forecasting Future Anomalies in Time Series |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| When and How Does CLIP Enable Domain and Compositional Generalization? |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| When can in-context learning generalize out of task distribution? |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| When do neural networks learn world models? |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| When to Forget? Complexity Trade-offs in Machine Unlearning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| When to retrain a machine learning model |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| When, Where and Why to Average Weights? |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Where is the Truth? The Risk of Getting Confounded in a Continual World |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Which Agent Causes Task Failures and When? On Automated Failure Attribution of LLM Multi-Agent Systems |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Which Attention Heads Matter for In-Context Learning? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Whitened CLIP as a Likelihood Surrogate of Images and Captions |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Whoever Started the interference Should End It: Guiding Data-Free Model Merging via Task Vectors |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive? |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Widening the Network Mitigates the Impact of Data Heterogeneity on FedAvg |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| WildChat-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Winner-takes-all for Multivariate Probabilistic Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Wolfpack Adversarial Attack for Robust Multi-Agent Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| World Model Implanting for Test-time Adaptation of Embodied Agents |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| WorldSimBench: Towards Video Generation Models as World Simulators |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Wrapped Gaussian on the manifold of Symmetric Positive Definite Matrices |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Wyckoff Transformer: Generation of Symmetric Crystals |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| WyckoffDiff – A Generative Diffusion Model for Crystal Symmetry |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| X-Hacking: The Threat of Misguided AutoML |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| XAttention: Block Sparse Attention with Antidiagonal Scoring |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| XAttnMark: Learning Robust Audio Watermarking with Cross-Attention |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| You Always Recognize Me (YARM): Robust Texture Synthesis Against Multi-View Corruption |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| You Get What You Give: Reciprocally Fair Federated Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Zebra: In-Context Generative Pretraining for Solving Parametric PDEs |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Zero Shot Generalization of Vision-Based RL Without Data Augmentation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Zero-Inflated Bandits |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Zero-Shot Generalization of GNNs over Distinct Attribute Domains |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Zero-Shot Offline Imitation Learning via Optimal Transport |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ZeroFlow: Overcoming Catastrophic Forgetting is Easier than You Think |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ZipAR: Parallel Autoregressive Image Generation through Spatial Locality |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| am-ELO: A Stable Framework for Arena-based LLM Evaluation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| any4: Learned 4-bit Numeric Representation for LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| e-GAI: e-value-based Generalized $α$-Investing for Online False Discovery Rate Control |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| iDPA: Instance Decoupled Prompt Attention for Incremental Medical Object Detection |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| iN2V: Bringing Transductive Node Embeddings to Inductive Graphs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| polybasic Speculative Decoding Through a Theoretical Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| scSSL-Bench: Benchmarking Self-Supervised Learning for Single-Cell Data |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| sciLaMA: A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| unMORE: Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| xLSTM 7B: A Recurrent LLM for Fast and Efficient Inference |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |