| $F^3Set$: Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| $InterLCM$: Low-Quality Images as Intermediate States of Latent Consistency Models for Effective Blind Face Restoration |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| $R^2$-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| $\gamma-$MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| $\mathbb{X}$-Sample Contrastive Loss: Improving Contrastive Learning with Sample Similarity Graphs |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| $\phi$-Update: A Class of Policy Update Methods with Policy Convergence Guarantee |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| $\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| $\text{D}_{2}\text{O}$: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| $\text{I}^2\text{AM}$: Interpreting Image-to-Image Latent Diffusion Models via Bi-Attribution Maps |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| $q$-exponential family for policy optimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| (Mis)Fitting Scaling Laws: A Survey of Scaling Law Fitting Techniques in Deep Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| 3D StreetUnveiler with Semantic-aware 2DGS - a simple baseline |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| 3D Vision-Language Gaussian Splatting |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| 3D-AffordanceLLM: Harnessing Large Language Models for Open-Vocabulary Affordance Detection in 3D Worlds |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| 3D-MolT5: Leveraging Discrete Structural Information for Molecule-Text Modeling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| 3D-Properties: Identifying Challenges in DPO and Charting a Path Forward |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| 3D-SPATIAL MULTIMODAL MEMORY |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| 3DGS-Drag: Dragging Gaussians for Intuitive Point-Based 3D Editing |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| 3DIS: Depth-Driven Decoupled Image Synthesis for Universal Multi-Instance Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| 3DMolFormer: A Dual-channel Framework for Structure-based Drug Discovery |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| 3DTrajMaster: Mastering 3D Trajectory for Multi-Entity Motion in Video Generation |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
3 |
| 4K4DGen: Panoramic 4D Generation at 4K Resolution |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| 6D Object Pose Tracking in Internet Videos for Robotic Manipulation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
2 |
| 6DGS: Enhanced Direction-Aware Gaussian Splatting for Volumetric Rendering |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Benchmark for Semantic Sensitive Information in LLMs Outputs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| A Black Swan Hypothesis: The Role of Human Irrationality in AI Safety |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A CLIP-Powered Framework for Robust and Generalizable Data Selection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Causal Lens for Learning Long-term Fair Policies |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Closer Look at Machine Unlearning for Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Coefficient Makes SVRG Effective |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Common Pitfall of Margin-based Language Model Alignment: Gradient Entanglement |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| A Computational Framework for Modeling Emergence of Color Vision in the Human Brain |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| A Conditional Independence Test in the Presence of Discretization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Decade's Battle on Dataset Bias: Are We There Yet? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Deep Generative Learning Approach for Two-stage Adaptive Robust Optimization |
✅ |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
5 |
| A Differentiable Rank-Based Objective for Better Feature Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Distributional Approach to Uncertainty-Aware Preference Alignment Using Offline Demonstrations |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Formal Framework for Understanding Length Generalization in Transformers |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A General Framework for Off-Policy Learning with Partially-Observed Reward |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| A General Framework for Producing Interpretable Semantic Text Embeddings |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Generalist Hanabi Agent |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| A Generic Framework for Conformal Fairness |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Geometric Framework for Understanding Memorization in Generative Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| A Graph Enhanced Symbolic Discovery Framework For Efficient Logic Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Large-scale Dataset and Benchmark for Commuting Origin-Destination Flow Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Large-scale Training Paradigm for Graph Generative Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| A Meta-Learning Approach to Bayesian Causal Discovery |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Multi-Power Law for Loss Curve Prediction Across Learning Rate Schedules |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Multiscale Frequency Domain Causal Framework for Enhanced Pathological Analysis |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A New Perspective on Shampoo's Preconditioner |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| A Non-Contrastive Learning Framework for Sequential Recommendation with Preference-Preserving Profile Generation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Periodic Bayesian Flow for Material Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Policy-Gradient Approach to Solving Imperfect-Information Games with Best-Iterate Convergence |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Probabilistic Perspective on Unlearning and Alignment for Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Quantum Circuit-Based Compression Perspective for Parameter-Efficient Learning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| A Riemannian Framework for Learning Reduced-order Lagrangian Dynamics |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| A Robust Method to Discover Causal or Anticausal Relation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Sanity Check for AI-generated Image Detection |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| A Second-Order Perspective on Model Compositionality and Incremental Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| A Simple Approach to Unifying Diffusion-based Conditional Generation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| A Simple Framework for Open-Vocabulary Zero-Shot Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Simple yet Effective $\Delta\Delta G$ Predictor is An Unsupervised Antibody Optimizer and Explainer |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| A Skewness-Based Criterion for Addressing Heteroscedastic Noise in Causal Discovery |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A Solvable Attention for Neural Scaling Laws |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Statistical Approach for Controlled Training Data Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Statistical Framework for Ranking LLM-based Chatbots |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Stochastic Approach to the Subset Selection Problem via Mirror Descent |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Theoretical Analysis of Self-Supervised Learning for Vision Transformers |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Theoretical Framework for Partially-Observed Reward States in RLHF |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| A Theoretical Perspective: How to Prevent Model Collapse in Self-consuming Training Loops |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| A Theoretically-Principled Sparse, Connected, and Rigid Graph Representation of Molecules |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| A Theory for Token-Level Harmonization in Retrieval-Augmented Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| A Theory of Initialisation's Impact on Specialisation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Tight Convergence Analysis of Inexact Stochastic Proximal Point Algorithm for Stochastic Composite Optimization Problems |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A Training-Free Sub-quadratic Cost Transformer Model Serving Framework with Hierarchically Pruned Attention |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Transfer Attack to Image Watermarks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A Truncated Newton Method for Optimal Transport |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| A Unified Framework for Forward and Inverse Problems in Subsurface Imaging using Latent Space Translations |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Unified Theory of Quantum Neural Network Loss Landscapes |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| A Unifying Framework for Representation Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| A Watermark for Order-Agnostic Language Models |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| A deep inverse-mapping model for a flapping robotic wing |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A new framework for evaluating model out-of-distribution generalisation for the biochemical domain |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| A transfer learning framework for weak to strong generalization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A-Bench: Are LMMs Masters at Evaluating AI-generated Images? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| A3D: Does Diffusion Dream about 3D Alignment? |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| ACC-Collab: An Actor-Critic Approach to Multi-Agent LLM Collaboration |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| ACES: Automatic Cohort Extraction System for Event-Stream Datasets |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ACTIVE: Offline Reinforcement Learning via Adaptive Imitation and In-sample $V$-Ensemble |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| ADAM Optimization with Adaptive Batch Selection |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ADAM: An Embodied Causal Agent in Open-World Environments |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
3 |
| ADAPT: Attentive Self-Distillation and Dual-Decoder Prediction Fusion for Continual Panoptic Segmentation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ADBM: Adversarial Diffusion Bridge Model for Reliable Adversarial Purification |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| ADIFF: Explaining audio difference using natural language |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ADMM for Nonconvex Optimization under Minimal Continuity Assumption |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| ADMM for Structured Fractional Minimization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AFlow: Automating Agentic Workflow Generation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AI Sandbagging: Language Models can Strategically Underperform on Evaluations |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AI2TALE: An Innovative Information Theory-based Approach for Learning to Localize Phishing Attacks |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| AIMS.au: A Dataset for the Analysis of Modern Slavery Countermeasures in Corporate Statements |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| AIR-BENCH 2024: A Safety Benchmark based on Regulation and Policies Specified Risk Categories |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| ALBAR: Adversarial Learning approach to mitigate Biases in Action Recognition |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| ALLaM: Large Language Models for Arabic and English |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ANaGRAM: A Natural Gradient Relative to Adapted Model for efficient PINNs learning |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| APE: Faster and Longer Context-Augmented Generation via Adaptive Parallel Encoding |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| API Pack: A Massive Multi-Programming Language Dataset for API Call Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ARB-LLM: Alternating Refined Binarizations for Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ARLON: Boosting Diffusion Transformers with Autoregressive Models for Long Video Generation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| ASTrA: Adversarial Self-supervised Training with Adaptive-Attacks |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| AVHBench: A Cross-Modal Hallucination Benchmark for Audio-Visual Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Accelerated Over-Relaxation Heavy-Ball Method: Achieving Global Accelerated Convergence with Broad Generalization |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
4 |
| Accelerated training through iterative gradient propagation along the residual path |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Accelerating 3D Molecule Generation via Jointly Geometric Optimal Transport |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Accelerating Diffusion Transformers with Token-wise Feature Caching |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Accelerating Goal-Conditioned Reinforcement Learning Algorithms and Research |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Accelerating Inference of Retrieval-Augmented Generation via Sparse Context Selection |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Accelerating Neural ODEs: A Variational Formulation-based Approach |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Accelerating Task Generalisation with Multi-Level Skill Hierarchies |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Accelerating Training with Neuron Interaction and Nowcasting Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Accelerating neural network training: An analysis of the AlgoPerf competition |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Accessing Vision Foundation Models via ImageNet-1K |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Accurate and Scalable Graph Neural Networks via Message Invariance |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Action Sequence Augmentation for Action Anticipation |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Action abstractions for amortized sampling |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| Actions Speak Louder Than Words: Rate-Reward Trade-off in Markov Decision Processes |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Activation Gradient based Poisoned Sample Detection Against Backdoor Attacks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Active Learning for Continual Learning: Keeping the Past Alive in the Present |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Active Learning for Neural PDE Solvers |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Active Task Disambiguation with LLMs |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Ada-K Routing: Boosting the Efficiency of MoE-based LLMs |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| AdaFisher: Adaptive Second Order Optimization via Fisher Information |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AdaGrad under Anisotropic Smoothness |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| AdaIR: Adaptive All-in-One Image Restoration via Frequency Mining and Modulation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AdaManip: Adaptive Articulated Object Manipulation Environments and Policy Learning |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| AdaRankGrad: Adaptive Gradient Rank and Moments for Memory-Efficient LLMs Training and Fine-Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| AdaWM: Adaptive World Model based Planning for Autonomous Driving |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adam Exploits $\ell_\infty$-geometry of Loss Landscape via Coordinate-wise Adaptivity |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Adam-mini: Use Fewer Learning Rates To Gain More |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adapt-$\infty$: Scalable Continual Multimodal Instruction Tuning via Dynamic Data Selection |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Adapters for Altering LLM Vocabularies: What Languages Benefit the Most? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adapting Multi-modal Large Language Model to Concept Drift From Pre-training Onwards |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adaptive $Q$-Network: On-the-fly Target Selection for Deep Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Adaptive Batch Size for Privately Finding Second-Order Stationary Points |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Adaptive Camera Sensor for Vision Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adaptive Data Optimization: Dynamic Sample Selection with Scaling Laws |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adaptive Deployment of Untrusted LLMs Reduces Distributed Threats |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Adaptive Energy Alignment for Accelerating Test-Time Adaptation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adaptive Gradient Clipping for Robust Federated Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Adaptive Length Image Tokenization via Recurrent Allocation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adaptive Methods through the Lens of SDEs: Theoretical Insights on the Role of Noise |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Adaptive Pruning of Pretrained Transformer via Differential Inclusions |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adaptive Rank Allocation: Speeding Up Modern Transformers with RaNA Adapters |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adaptive Retention & Correction: Test-Time Training for Continual Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Adaptive Shrinkage Estimation for Personalized Deep Kernel Regression in Modeling Brain Trajectories |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Adaptive Transformer Programs: Bridging the Gap Between Performance and Interpretability in Transformers |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Adaptive backtracking for faster optimization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Adaptive teachers for amortized samplers |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adding Conditional Control to Diffusion Models with Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Addressing Label Shift in Distributed Learning via Entropy Regularization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| AdvWave: Stealthy Adversarial Jailbreak Attack against Large Audio-Language Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Advancing Graph Generation through Beta Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Advancing LLM Reasoning Generalists with Preference Trees |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Advancing Mathematical Reasoning in Language Models: The Impact of Problem-Solving Data, Data Synthesis Methods, and Training Stages |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Advancing Out-of-Distribution Detection via Local Neuroplasticity |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Advancing Prompt-Based Methods for Replay-Independent General Continual Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Advantage Alignment Algorithms |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Advantage-Guided Distillation for Preference Alignment in Small Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adversarial Attacks on Data Attribution |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Adversarial Generative Flow Network for Solving Vehicle Routing Problems |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adversarial Latent Feature Augmentation for Fairness |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adversarial Machine Unlearning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Adversarial Mixup Unlearning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Adversarial Policy Optimization for Offline Preference-based Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adversarial Search Engine Optimization for Large Language Models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Adversarial Training for Defense Against Label Poisoning Attacks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Adversarially Robust Anomaly Detection through Spurious Negative Pair Mitigation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Adversarially Robust Out-of-Distribution Detection Using Lyapunov-Stabilized Embeddings |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Adversaries With Incentives: A Strategic Alternative to Adversarial Robustness |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Affine Steerable Equivariant Layer for Canonicalization of Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Agent S: An Open Agentic Framework that Uses Computers Like a Human |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Agent Skill Acquisition for Large Language Models via CycleQD |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Agent-Oriented Planning in Multi-Agent Systems |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Agent-to-Sim: Learning Interactive Behavior Models from Casual Longitudinal Videos |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| AgentRefine: Enhancing Agent Generalization through Refinement Tuning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| AgentSquare: Automatic LLM Agent Search in Modular Design Space |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| AgentStudio: A Toolkit for Building General Virtual Agents |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Agents' Room: Narrative Generation through Multi-step Collaboration |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Agree to Disagree: Demystifying Homogeneous Deep Ensembles through Distributional Equivalence |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Aioli: A Unified Optimization Framework for Language Model Data Mixing |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Air Quality Prediction with Physics-Guided Dual Neural ODEs in Open Systems |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Alchemy: Amplifying Theorem-Proving Capability Through Symbolic Mutation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Algorithmic Stability Based Generalization Bounds for Adversarial Training |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Aligned Better, Listen Better for Audio-Visual Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Aligned Datasets Improve Detection of Latent Diffusion-Generated Images |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Aligned LLMs Are Not Aligned Browser Agents |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Aligning Generative Denoising with Discriminative Objectives Unleashes Diffusion for Visual Perception |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Aligning Human Motion Generation with Human Perceptions |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Aligning Language Models with Demonstrated Feedback |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Aligning Visual Contrastive learning models via Preference Optimization |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Almost Optimal Batch-Regret Tradeoff for Batch Linear Contextual Bandits |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Ambient Diffusion Posterior Sampling: Solving Inverse Problems with Diffusion Models Trained on Corrupted Data |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Amulet: ReAlignment During Test Time for Personalized Preference Adaptation of LLMs |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| An Asynchronous Bundle Method for Distributed Learning Problems |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| An Auditing Test to Detect Behavioral Shift in Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| An Effective Manifold-based Optimization Method for Distributionally Robust Classification |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| An Effective Theory of Bias Amplification |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| An Efficient Framework for Crediting Data Contributors of Diffusion Models |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| An Empirical Analysis of Uncertainty in Large Language Model Evaluations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| An Engorgio Prompt Makes Large Language Model Babble on |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| An Evolved Universal Transformer Memory |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| An Exploration with Entropy Constrained 3D Gaussians for 2D Video Compression |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| An Information Criterion for Controlled Disentanglement of Multimodal Data |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| An Intelligent Agentic System for Complex Image Restoration Problems |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| An Online Learning Theory of Trading-Volume Maximization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| An Optimal Discriminator Weighted Imitation Perspective for Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| An Undetectable Watermark for Generative Image Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| AnalogGenie: A Generative Engine for Automatic Discovery of Analog Circuit Topologies |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Analysis of Linear Mode Connectivity via Permutation-Based Weight Matching: With Insights into Other Permutation Search Methods |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Analytic DAG Constraints for Differentiable DAG Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Analyzing Neural Scaling Laws in Two-Layer Networks with Power-Law Data Spectra |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Animate Your Thoughts: Reconstruction of Dynamic Natural Vision from Human Brain Activity |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Animate-X: Universal Character Image Animation with Enhanced Motion Representation |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| AnoLLM: Large Language Models for Tabular Anomaly Detection |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Answer, Assemble, Ace: Understanding How LMs Answer Multiple Choice Questions |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Anti-Exposure Bias in Diffusion Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Any-step Dynamics Model Improves Future Predictions for Online and Offline Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Anyprefer: An Agentic Framework for Preference Data Synthesis |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Apollo-MILP: An Alternating Prediction-Correction Neural Solving Framework for Mixed-Integer Linear Programming |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Approximating Full Conformal Prediction for Neural Network Regression with Gauss-Newton Influence |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Approximation algorithms for combinatorial optimization with predictions |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Are Large Vision Language Models Good Game Players? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Are Transformers Able to Reason by Connecting Separated Knowledge in Training Data? |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Aria-MIDI: A Dataset of Piano MIDI Files for Symbolic Music Modeling |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Arithmetic Transformers Can Length-Generalize in Both Operand Length and Count |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Arithmetic Without Algorithms: Language Models Solve Math with a Bag of Heuristics |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Articulate-Anything: Automatic Modeling of Articulated Objects via a Vision-Language Foundation Model |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Artificial Kuramoto Oscillatory Neurons |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| As Simple as Fine-tuning: LLM Alignment via Bidirectional Negative Feedback Loss |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Ask, and it shall be given: On the Turing completeness of prompting |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| AssembleFlow: Rigid Flow Matching with Inertial Frames for Molecular Assembly |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Associative memory and dead neurons |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| AstroCompress: A benchmark dataset for multi-purpose compression of astronomical data | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Asymmetric Factorized Bilinear Operation for Vision Transformer | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Asymptotic Analysis of Two-Layer Neural Networks after One Gradient Step under Gaussian Mixtures Data with Structure | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Atlas Gaussians Diffusion for 3D Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| AtomSurf: Surface Representation for Learning on Protein Structures | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Atomas: Hierarchical Adaptive Alignment on Molecule-Text for Unified Molecule Understanding and Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Attention as a Hypernetwork | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Attention layers provably solve single-location regression | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Attention with Markov: A Curious Case of Single-layer Transformers | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| AttriBoT: A Bag of Tricks for Efficiently Approximating Leave-One-Out Context Attribution | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Attribute-based Visual Reprogramming for Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Attributing Culture-Conditioned Generations to Pretraining Corpora | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Audio Large Language Models Can Be Descriptive Speech Quality Evaluators | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| AugKD: Ingenious Augmentations Empower Knowledge Distillation for Image Super-Resolution | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval-Augmented Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| AutoBencher: Towards Declarative Benchmark Construction | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| AutoCGP: Closed-Loop Concept-Guided Policies from Unlabeled Demonstrations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| AutoG: Towards automatic graph construction from tabular data | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| AutoUAD: Hyper-parameter Optimization for Unsupervised Anomaly Detection | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Autocorrelation Matters: Understanding the Role of Initialization Schemes for State Space Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Automated Design of Agentic Systems | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Automated Filtering of Human Feedback Data for Aligning Text-to-Image Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Automated Proof Generation for Rust Code via Self-Evolution | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Automatic Curriculum Expert Iteration for Reliable LLM Reasoning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Autonomous Evaluation of LLMs for Truth Maintenance and Reasoning Tasks | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Autoregressive Pretraining with Mamba in Vision | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Autoregressive Video Generation without Vector Quantization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| BALROG: Benchmarking Agentic LLM and VLM Reasoning On Games | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| BAMDP Shaping: a Unified Framework for Intrinsic Motivation and Reward Shaping | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| BANGS: Game-theoretic Node Selection for Graph Self-Training | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| BEEM: Boosting Performance of Early Exit DNNs using Multi-Exit Classifiers as Experts | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BIRD: A Trustworthy Bayesian Inference Framework for Large Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| BLEND: Behavior-guided Neural Population Dynamics Modeling via Privileged Knowledge Distillation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| BOFormer: Learning to Solve Multi-Objective Bayesian Optimization via Non-Markovian RL | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| BOND: Aligning LLMs with Best-of-N Distillation | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| BP-Modified Local Loss for Efficient Training of Deep Neural Networks | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| BRAID: Input-driven Nonlinear Dynamical Modeling of Neural-Behavioral Data | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BTBS-LNS: Binarized-Tightening, Branch and Search on Learning LNS Policies for MIP | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| BaB-ND: Long-Horizon Motion Planning with Branch-and-Bound and Neural Dynamics | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Backdooring Vision-Language Models with Out-Of-Distribution Data | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | 3 |
| Backtracking Improves Generation Safety | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Bad-PFL: Exploiting Backdoor Attacks against Personalized Federated Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| BadJudge: Backdoor Vulnerabilities of LLM-As-A-Judge | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| BadRobot: Jailbreaking Embodied LLM Agents in the Physical World | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Balanced Neural ODEs: nonlinear model order reduction and Koopman operator approximations | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Balanced Ranking with Relative Centrality: A multi-core periphery perspective | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Balancing Act: Diversity and Consistency in Large Language Model Ensembles | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Balancing Bias in Two-sided Markets for Fair Stable Matchings | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bandit Learning in Matching Markets with Indifference | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Basis Sharing: Cross-Layer Parameter Sharing for Large Language Model Compression | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Bayesian Analysis of Combinatorial Gaussian Process Bandits | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Bayesian Experimental Design Via Contrastive Diffusions | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bayesian Image Regression with Soft-thresholded Conditional Autoregressive Prior | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Bayesian Optimization of Antibodies Informed by a Generative Model of Evolving Sequences | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bayesian Optimization via Continual Variational Last Layer Training | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Bayesian Regularization of Latent Representation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Bayesian Treatment of the Spectrum of the Empirical Kernel in (Sub)Linear-Width Neural Networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Bayesian WeakS-to-Strong from Text Classification to Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Be More Diverse than the Most Diverse: Optimal Mixtures of Generative Models via Mixture-UCB Bandit Algorithms | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Behavioral Entropy-Guided Dataset Generation for Offline Reinforcement Learning | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| BenTo: Benchmark Reduction with In-Context Transferability | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Benchmarking Agentic Workflow Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Benchmarking LLMs' Judgments with No Gold Standard | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Benchmarking Predictive Coding Networks -- Made Simple | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Benign Overfitting in Out-of-Distribution Generalization of Linear Models | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| Better Instruction-Following Through Minimum Bayes Risk | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Better autoregressive regression with LLMs via regression-aware fine-tuning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Better than Your Teacher: LLM Agents that learn from Privileged AI Feedback | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Beware of Calibration Data for Pruning Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Beyond Autoregression: Fast LLMs via Self-Distillation Through Time | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Beyond Canonicalization: How Tensorial Messages Improve Equivariant Message Passing | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Beyond Circuit Connections: A Non-Message Passing Graph Transformer Approach for Quantum Error Mitigation | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Beyond Content Relevance: Evaluating Instruction Following in Retrieval Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Beyond FVD: An Enhanced Evaluation Metrics for Video Generation Distribution Quality | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Beyond Graphs: Can Large Language Models Comprehend Hypergraphs? | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Beyond Interpretability: The Gains of Feature Monosemanticity on Model Robustness | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Beyond Mere Token Analysis: A Hypergraph Metric Space Framework for Defending Against Socially Engineered LLM Attacks | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Beyond Model Collapse: Scaling Up with Synthesized Data Requires Verification | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Beyond Next Token Prediction: Patch-Level Training for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Beyond Random Augmentations: Pretraining with Hard Views | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Beyond Random Masking: When Dropout meets Graph Convolutional Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Beyond Sequence: Impact of Geometric Context for RNA Property Prediction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Beyond Single Concept Vector: Modeling Concept Subspace in LLMs with Gaussian Distribution | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Beyond Squared Error: Exploring Loss Design for Enhanced Training of Generative Flow Networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Beyond Surface Structure: A Causal Assessment of LLMs' Comprehension ability | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Beyond Worst-Case Dimensionality Reduction for Sparse Vectors | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Beyond correlation: The impact of human uncertainty in measuring the effectiveness of automatic evaluation and LLM-as-a-judge | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Beyond single neurons: population response geometry in digital twins of mouse visual cortex | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Beyond the convexity assumption: Realistic tabular data generation under quantifier-free real linear constraints | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Beyond-Expert Performance with Limited Demonstrations: Efficient Imitation Learning with Double Exploration | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bi-Factorial Preference Optimization: Balancing Safety-Helpfulness in Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Bias Mitigation in Graph Diffusion Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Bidirectional Decoding: Improving Action Chunking via Guided Test-Time Sampling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| BigDocs: An Open Dataset for Training Multimodal Models on Document and Code Tasks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bilinear MLPs enable weight-based mechanistic interpretability | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Binary Losses for Density Ratio Estimation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| BingoGuard: LLM Content Moderation Tools with Risk Levels | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Bio-xLSTM: Generative modeling, representation and in-context learning of biological and chemical sequences | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BioDiscoveryAgent: An AI Agent for Designing Genetic Perturbation Experiments | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Biologically Constrained Barrel Cortex Model Integrates Whisker Inputs and Replicates Key Brain Network Dynamics | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Biologically Plausible Brain Graph Transformer | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| BirdSet: A Large-Scale Dataset for Audio Classification in Avian Bioacoustics | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Bisimulation Metric for Model Predictive Control | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Black Sheep in the Herd: Playing with Spuriously Correlated Attributes for Vision-Language Recognition | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Black-Box Detection of Language Model Watermarks | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| BlendRL: A Framework for Merging Symbolic and Neural Policy Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Block Verification Accelerates Speculative Decoding | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Block-Attention for Efficient Prefilling | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| BodyGen: Advancing Towards Efficient Embodiment Co-Design | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| Boltzmann Semantic Score: A Semantic Metric for Evaluating Large Vision Models Using Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Boltzmann priors for Implicit Transfer Operators | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Boltzmann-Aligned Inverse Folding Model as a Predictor of Mutational Effects on Protein-Protein Interactions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| BoneMet: An Open Large-Scale Multi-Modal Murine Dataset for Breast Cancer Bone Metastasis Diagnosis and Prognosis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bonsai: Gradient-free Graph Condensation for Node Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Boosting Latent Diffusion with Perceptual Objectives | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Boosting Methods for Interval-censored Data with Regression and Classification | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Boosting Multiple Views for pretrained-based Continual Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Boosting Neural Combinatorial Optimization for Large-Scale Vehicle Routing Problems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Boosting Perturbed Gradient Ascent for Last-Iterate Convergence in Games | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Boosting Ray Search Procedure of Hard-label Attacks with Transfer-based Priors | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Boosting the visual interpretability of CLIP via adversarial fine-tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bootstrapped Model Predictive Control | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bootstrapping Language Models with DPO Implicit Rewards | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bounds on $L_p$ Errors in Density Ratio Estimation via $f$-Divergence Loss Functions | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Brain Bandit: A Biologically Grounded Neural Network for Efficient Control of Exploration | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Brain Mapping with Dense Features: Grounding Cortical Semantic Selectivity in Natural Images With Vision Transformers | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Brain-inspired $L_p$-Convolution benefits large kernels and aligns better with visual cortex | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| BrainACTIV: Identifying visuo-semantic properties driving cortical selectivity using diffusion-based image manipulation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| BrainOOD: Out-of-distribution Generalizable Brain Network Analysis | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| BrainUICL: An Unsupervised Individual Continual Learning Framework for EEG Applications | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Breach By A Thousand Leaks: Unsafe Information Leakage in 'Safe' AI Responses | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Breaking Mental Set to Improve Reasoning through Diverse Multi-Agent Debate | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Breaking Neural Network Scaling Laws with Modularity | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Breaking the $\log(1/\Delta_2)$ Barrier: Better Batched Best Arm Identification with Adaptive Grids | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Breaking the Reclustering Barrier in Centroid-based Deep Clustering | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Bridging Compressed Image Latents and Multimodal Large Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Bridging Information Asymmetry in Text-video Retrieval: A Data-centric Approach | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bridging the Data Provenance Gap Across Text, Speech, and Video | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Bridging the Gap Between f-divergences and Bayes Hilbert Spaces | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Bridging the Gap between Database Search and \emph{De Novo} Peptide Sequencing with SearchNovo | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Bridging the Gap between Variational Inference and Stochastic Gradient MCMC in Function Space | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Bridging the Semantic Gap Between Text and Table: A Case Study on NL2SQL | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Bringing NeRFs to the Latent Space: Inverse Graphics Autoencoder | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Broaden your SCOPE! Efficient Multi-turn Conversation Planning for LLMs with Semantic Space | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Broadening Target Distributions for Accelerated Diffusion Models via a Novel Analysis Approach | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Budgeted Online Continual Learning by Adaptive Layer Freezing and Frequency-based Sampling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Build-A-Scene: Interactive 3D Layout Control for Diffusion-Based Image Generation | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Building Interactable Replicas of Complex Articulated Objects via Gaussian Splatting | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Building Math Agents with Multi-Turn Iterative Preference Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Building, Reusing, and Generalizing Abstract Representations from Concrete Sequences | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Bundle Neural Network for message diffusion on graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| C-CLIP: Multimodal Continual Learning for Vision-Language Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CAKE: Cascading and Adaptive KV Cache Eviction with Layer Preferences | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CAMEx: Curvature-aware Merging of Experts | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CARTS: Advancing Neural Theorem Proving with Diversified Tactic Calibration and Bias-Resistant Tree Search | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| CAT-3DGS: A Context-Adaptive Triplane Approach to Rate-Distortion-Optimized 3DGS Compression | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| CATCH: Channel-Aware Multivariate Time Series Anomaly Detection via Frequency Patching | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| CAX: Cellular Automata Accelerated in JAX | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CBGBench: Fill in the Blank of Protein-Molecule Complex Binding Graph | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| CBMA: Improving Conformal Prediction through Bayesian Model Averaging | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| CBQ: Cross-Block Quantization for Large Language Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| CFD: Learning Generalized Molecular Representation via Concept-Enhanced Feedback Disentanglement | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| CG-Bench: Clue-grounded Question Answering Benchmark for Long Video Understanding | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| CHAMP: Conformalized 3D Human Multi-Hypothesis Pose Estimators | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CHASE-SQL: Multi-Path Reasoning and Preference Optimized Candidate Selection in Text-to-SQL | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CL-DiffPhyCon: Closed-loop Diffusion Control of Complex Physical Systems | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| CL-MFAP: A Contrastive Learning-Based Multimodal Foundation Model for Molecular Property Prediction and Antibiotic Screening | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CLDyB: Towards Dynamic Benchmarking for Continual Learning with Pre-trained Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CLIPDrag: Combining Text-based and Drag-based Instructions for Image Editing | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| CLIPure: Purification in Latent Space via CLIP for Adversarially Robust Zero-Shot Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CO-MOT: Boosting End-to-end Transformer-based Multi-Object Tracking via Coopetition Label Assignment and Shadow Sets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| COAT: Compressing Optimizer states and Activations for Memory-Efficient FP8 Training | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| COFlowNet: Conservative Constraints on Flows Enable High-Quality Candidate Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| COMBO: Compositional World Models for Embodied Multi-Agent Cooperation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| COME: Test-time Adaption by Conservatively Minimizing Entropy | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CONDA: Adaptive Concept Bottleneck for Foundation Models Under Distribution Shifts | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CONGO: Compressive Online Gradient Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CONTRA: Conformal Prediction Region via Normalizing Flow Transformation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| COPER: Correlation-based Permutations for Multi-View Clustering | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CPSample: Classifier Protected Sampling for Guarding Training Data During Diffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CR-CTC: Consistency regularization on CTC for improved speech recognition | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CR2PQ: Continuous Relative Rotary Positional Query for Dense Visual Representation Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CREAM: Consistency Regularized Self-Rewarding Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CTSyn: A Foundation Model for Cross Tabular Data Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CURIE: Evaluating LLMs on Multitask Scientific Long-Context Understanding and Reasoning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| CViT: Continuous Vision Transformer for Operator Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CaPo: Cooperative Plan Optimization for Efficient Embodied Multi-Agent Cooperation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Cached Multi-Lora Composition for Multi-Concept Image Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Cafe-Talk: Generating 3D Talking Face Animation with Multimodal Coarse- and Fine-grained Control | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Calibrating Expressions of Certainty | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Calibrating LLMs with Information-Theoretic Evidential Deep Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CameraCtrl: Enabling Camera Control for Video Diffusion Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Can Generative AI Solve Your In-Context Learning Problem? A Martingale Perspective | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Can In-context Learning Really Generalize to Out-of-distribution Tasks? | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Can Knowledge Editing Really Correct Hallucinations? | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book? | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Can LLMs Separate Instructions From Data? And What Do We Even Mean By That? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Can LLMs Solve Longer Math Word Problems Better? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Can LLMs Understand Time Series Anomalies? | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Can Large Language Models Understand Symbolic Graphics Programs? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Can One Modality Model Synergize Training of Other Modality Models? | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Can Reinforcement Learning Solve Asymmetric Combinatorial-Continuous Zero-Sum Games? | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Can Textual Gradient Work in Federated Learning? | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Can Transformers Do Enumerative Geometry? | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | 2 |
| Can Video LLMs Refuse to Answer? Alignment for Answerability in Video Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Can Watermarked LLMs be Identified by Users via Crafted Prompts? | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Can Watermarks be Used to Detect LLM IP Infringement For Free? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Can We Ignore Labels in Out of Distribution Detection? | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Can We Talk Models Into Seeing the World Differently? | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-Based Decision-Making Systems | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Can a Large Language Model be a Gaslighter? | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Capability Localization: Capabilities Can be Localized rather than Individual Knowledge | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| CapeX: Category-Agnostic Pose Estimation from Textual Point Explanation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Captured by Captions: On Memorization and its Mitigation in CLIP Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Capturing the Temporal Dependence of Training Data Influence | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CarbonSense: A Multimodal Dataset and Baseline for Carbon Flux Modelling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Catastrophic Failure of LLM Unlearning via Quantization | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Cauchy-Schwarz Regularizers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Causal Concept Graph Models: Beyond Causal Opacity in Deep Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Causal Discovery via Bayesian Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Causal Effect Estimation with Mixed Latent Confounders and Post-treatment Variables | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Causal Graph Transformer for Treatment Effect Estimation Under Unknown Interference | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Causal Graphical Models for Vision-Language Compositional Understanding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Causal Identification for Complex Functional Longitudinal Studies | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Causal Information Prioritization for Efficient Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Causal Order: The Key to Leveraging Imperfect Experts in Causal Inference | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Causal Representation Learning from Multimodal Biomedical Observations | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| CausalRivers - Scaling up benchmarking of causal discovery for real-world time-series | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Causally Motivated Sycophancy Mitigation for Large Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Centrality-guided Pre-training for Graph | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Century: A Framework and Dataset for Evaluating Historical Contextualisation of Sensitive Images | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| CertainlyUncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Certified Robustness Under Bounded Levenshtein Distance | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Certifying Counterfactual Bias in LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Certifying Language Model Robustness with Fuzzed Randomized Smoothing: An Efficient Defense Against Backdoor Attacks | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Chain-of-Focus Prompting: Leveraging Sequential Visual Cues to Prompt Large Autoregressive Vision Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Chain-of-Thought Provably Enables Learning the (Otherwise) Unlearnable | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 4 |
| Chain-of-region: Visual Language Models Need Details for Diagram Analysis | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Charting the Design Space of Neural Graph Representations for Subgraph Matching | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| CheapNet: Cross-attention on Hierarchical representations for Efficient protein-ligand binding Affinity Prediction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| ChemAgent: Self-updating Memories in Large Language Models Improves Chemical Reasoning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Chemistry-Inspired Diffusion with Non-Differentiable Guidance | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Chunk-Distilled Language Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CipherPrune: Efficient and Scalable Private Transformer Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| CirT: Global Subseasonal-to-Seasonal Forecasting with Geometry-inspired Transformer | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Circuit Representation Learning with Masked Gate Modeling and Verilog-AIG Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Circuit Transformer: A Transformer That Preserves Logical Equivalence | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CircuitFusion: Multimodal Circuit Representation Learning for Agile Chip Design | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| CityAnchor: City-scale 3D Visual Grounding with Multi-modality LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Class Distribution-induced Attention Map for Open-vocabulary Semantic Segmentations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Classic but Everlasting: Traditional Gradient-Based Algorithms Converge Fast Even in Time-Varying Multi-Player Games | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| ClawMachine: Learning to Fetch Visual Tokens for Referential Comprehension | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ClimaQA: An Automated Evaluation Framework for Climate Question Answering Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Clique Number Estimation via Differentiable Functions of Adjacency Matrix Permutations | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Closed-Form Merging of Parameter-Efficient Modules for Federated Continual Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Co$^{\mathbf{3}}$Gesture: Towards Coherent Concurrent Co-speech 3D Gesture Generation with Interactive Diffusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CoInD: Enabling Logical Compositions in Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| CoMRes: Semi-Supervised Time Series Forecasting Utilizing Consensus Promotion of Multi-Resolution | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CoMotion: Concurrent Multi-person 3D Motion | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CoRNStack: High-Quality Contrastive Data for Better Code Retrieval and Reranking | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CoTFormer: A Chain of Thought Driven Architecture with Budget-Adaptive Computation Cost at Inference | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Cocoon: Robust Multi-Modal Perception with Uncertainty-Aware Sensor Fusion | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding & Reasoning Capabilities of CodeLLMs | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| CodePlan: Unlocking Reasoning Potential in Large Language Models by Scaling Code-form Planning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CofCA: A STEP-WISE Counterfactual Multi-hop QA benchmark | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CogCoM: A Visual Language Model with Chain-of-Manipulations Reasoning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ColPali: Efficient Document Retrieval with Vision Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Collab: Controlled Decoding using Mixture of Agents for LLM Alignment | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| CollabEdit: Towards Non-destructive Collaborative Knowledge Editing | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Collaborative Discrete-Continuous Black-Box Prompt Learning for Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Collapsed Language Models Promote Fairness | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| ComLoRA: A Competitive Learning Approach for Enhancing LoRA |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ComPC: Completing a 3D Point Cloud with 2D Diffusion Priors |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| ComaDICE: Offline Cooperative Multi-Agent Reinforcement Learning with Stationary Distribution Shift Regularization |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| Combatting Dimensional Collapse in LLM Pre-Training Data via Submodular File Selection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Combining Induction and Transduction for Abstract Reasoning |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Commit0: Library Generation from Scratch | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Comparing Targeting Strategies for Maximizing Social Welfare with Limited Resources | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Comparing noisy neural population dynamics using optimal transport distances | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Competing Large Language Models in Multi-Agent Gaming Environments | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Competition Dynamics Shape Algorithmic Phases of In-Context Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Competitive Fair Scheduling with Predictions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Complementary Label Learning with Positive Label Guessing and Negative Label Enhancement | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Complexity Lower Bounds of Adaptive Gradient Algorithms for Non-convex Stochastic Optimization under Relaxed Smoothness | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Composable Interventions for Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Composing Unbalanced Flows for Flexible Docking and Relaxation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Compositional 4D Dynamic Scenes Understanding with Physics Priors for Video Question Answering | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| Compositional Entailment Learning for Hyperbolic Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Compositional simulation-based inference for time series | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 4 |
| Computational Explorations of Total Variation Distance | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Computational Limits of Low-Rank Adaptation (LoRA) Fine-Tuning for Transformer Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Computationally Efficient RL under Linear Bellman Completeness for Deterministic Dynamics | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Compute-Constrained Data Selection | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Compute-Optimal LLMs Provably Generalize Better with Scale | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Computing Circuits Optimization via Model-Based Circuit Genetic Evolution | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ConFIG: Towards Conflict-free Training of Physics Informed Neural Networks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| ConMix: Contrastive Mixup at Representation Level for Long-tailed Deep Clustering | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Concept Bottleneck Language Models For Protein Design | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Concept Bottleneck Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Concept Pinpoint Eraser for Text-to-image Diffusion Models via Residual Attention Gate | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Concept-ROT: Poisoning Concepts in Large Language Models with Model Editing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| ConcreTizer: Model Inversion Attack via Occupancy Classification and Dispersion Control for 3D Point Cloud Restoration | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Conditional Diffusion Models are Minimax-Optimal and Manifold-Adaptive for Conditional Distribution Estimation | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Conditional Diffusion with Ordinal Regression: Longitudinal Data Generation for Neurodegenerative Disease Studies | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Conditional Testing based on Localized Conformal $p$-values | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Confidence Elicitation: A New Attack Vector for Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Conflict-Averse Gradient Aggregation for Constrained Multi-Objective Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Conformal Language Model Reasoning with Coherent Factuality | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Conformal Prediction Sets Can Cause Disparate Impact | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Conformal Structured Prediction | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Conformalized Interactive Imitation Learning: Handling Expert Shift and Intermittent Feedback | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Conformalized Survival Analysis for General Right-Censored Data | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Connecting Federated ADMM to Bayes | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Connectome Mapping: Shape-Memory Network via Interpretation of Contextual Semantic Information | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Conservative Contextual Bandits: Beyond Linear Representations | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Consistency Checks for Language Model Forecasters | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Consistency Models Made Easy | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Consistent Flow Distillation for Text-to-3D Generation | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Constraint-Conditioned Actor-Critic for Offline Safe Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Constructing Confidence Intervals for Average Treatment Effects from Multiple Datasets | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Content-Style Learning from Unaligned Domains: Identifiability under Unknown Latent Dimensions | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Context Clues: Evaluating Long Context Models for Clinical Prediction Tasks on EHR Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Context Steering: Controllable Personalization at Inference Time | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Context-Alignment: Activating and Enhancing LLMs Capabilities in Time Series | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Context-aware Dynamic Pruning for Speech Foundation Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ContextGNN: Beyond Two-Tower Recommendation Systems | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Contextual Document Embeddings | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Contextual Self-paced Learning for Weakly Supervised Spatio-Temporal Video Grounding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Contextualizing biological perturbation experiments through language | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Continual Slow-and-Fast Adaptation of Latent Neural Dynamics (CoSFan): Meta-Learning What-How & When to Adapt | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Continuity-Preserving Convolutional Autoencoders for Learning Continuous Latent Dynamical Models from Images | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Continuous Autoregressive Modeling with Stochastic Monotonic Alignment for Speech Synthesis | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Continuous Diffusion for Mixed-Type Tabular Data | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Continuous Ensemble Weather Forecasting with Diffusion models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Continuous Exposure Learning for Low-light Image Enhancement using Neural ODEs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ContraDiff: Planning Towards High Return States via Contrastive Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Contractive Dynamical Imitation Policies for Efficient Out-of-Sample Recovery | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Contrastive Learning from Synthetic Audio Doppelgängers | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Control-oriented Clustering of Visual Latent Representation | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 4 |
| ControlAR: Controllable Image Generation with Autoregressive Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Controllable Blur Data Augmentation Using 3D-Aware Motion Estimation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Controllable Context Sensitivity and the Knob Behind It | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Controllable Generation via Locally Constrained Resampling | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Controllable Satellite-to-Street-View Synthesis with Precise Pose Alignment and Zero-Shot Environmental Control | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Controllable Unlearning for Image-to-Image Generative Models via $\epsilon$-Constrained Optimization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Controlled LLM Decoding via Discrete Auto-regressive Biasing | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Controlling Language and Diffusion Models by Transporting Activations | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Controlling Space and Time with Diffusion Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ConvCodeWorld: Benchmarking Conversational Code Generation in Reproducible Feedback Environments | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Convergence and Implicit Bias of Gradient Descent on Continual Linear Classification | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Convergence of Distributed Adaptive Optimization with Local Updates | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Convergent Privacy Loss of Noisy-SGD without Convexity and Smoothness | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Convex Formulations for Training Two-Layer ReLU Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Copyright-Protected Language Generation via Adaptive Model Fusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Coreset Selection via Reducible Loss in Continual Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Coreset Spectral Clustering | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Correcting the Mythos of KL-Regularization: Direct Alignment without Overoptimization via Chi-Squared Preference Optimization | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Correlated Proxies: A New Definition and Improved Mitigation for Reward Hacking | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain) | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Correlation and Navigation in the Vocabulary Key Representation Space of Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Counterfactual Concept Bottleneck Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Counterfactual Generative Modeling with Variational Causal Inference | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Counterfactual Realizability | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| CraftRTL: High-quality Synthetic Data Generation for Verilog Code Models with Correct-by-Construction Non-Textual Representations and Targeted Code Repair | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Credal Wrapper of Model Averaging for Uncertainty Estimation in Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Credit-based self organizing maps: training deep topographic networks with minimal performance degradation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Cross-Attention Head Position Patterns Can Align with Human Visual Concepts in Text-to-Image Generative Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Cross-Domain Off-Policy Evaluation and Learning for Contextual Bandits | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Cross-Domain Offline Policy Adaptation with Optimal Transport and Dataset Constraint | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Cross-Embodiment Dexterous Grasping with Reinforcement Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Cross-Entropy Is All You Need To Invert the Data Generating Process | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Cross-Modal Safety Mechanism Transfer in Large Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| CrossMPT: Cross-attention Message-passing Transformer for Error Correcting Codes | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| CryoFM: A Flow-based Foundation Model for Cryo-EM Densities | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CryoGEN: Generative Energy-based Models for Cryogenic Electron Tomography Reconstruction | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CtD: Composition through Decomposition in Emergent Communication | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Ctrl-U: Robust Conditional Image Generation via Uncertainty-aware Reward Modeling | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| CubeDiff: Repurposing Diffusion-Based Image Models for Panorama Generation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Curriculum-aware Training for Discriminating Molecular Property Prediction Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Cut Your Losses in Large-Vocabulary Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Cut the Crap: An Economical Communication Pipeline for LLM-based Multi-Agent Systems | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| CyberHost: A One-stage Diffusion Framework for Audio-driven Talking Body Generation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| CycleResearcher: Improving Automated Research via Automated Review | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Cyclic Contrastive Knowledge Transfer for Open-Vocabulary Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| D-FINE: Redefine Regression Task of DETRs as Fine-grained Distribution Refinement | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DAMO: Decoding by Accumulating Activations Momentum for Mitigating Hallucinations in Vision-Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| DARE the Extreme: Revisiting Delta-Parameter Pruning For Fine-Tuned Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DAWN: Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking head Video Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DCT-CryptoNets: Scaling Private Inference in the Frequency Domain | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DECO: Unleashing the Potential of ConvNets for Query-based Detection and Segmentation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DEEM: Diffusion models serve as the eyes of large language models for image perception | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DELIFT: Data Efficient Language model Instruction Fine-Tuning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| DELTA: DENSE EFFICIENT LONG-RANGE 3D TRACKING FOR ANY VIDEO | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| DEPT: Decoupled Embeddings for Pre-training Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DEPfold: RNA Secondary Structure Prediction as Dependency Parsing. | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DICE: Data Influence Cascade in Decentralized Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DICE: End-to-end Deformation Capture of Hand-Face Interactions from a Single Image | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DLEFT-MKC: Dynamic Late Fusion Multiple Kernel Clustering with Robust Tensor Learning via Min-Max Optimization | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| DOCS: Quantifying Weight Similarity for Deeper Insights into Large Language Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| DON’T STOP ME NOW: EMBEDDING BASED SCHEDULING FOR LLMS | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| DOPL: Direct Online Preference Learning for Restless Bandits with Preference Feedback | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| DOTS: Learning to Reason Dynamically in LLMs via Optimal Reasoning Trajectories Search | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| DPLM-2: A Multimodal Diffusion Protein Language Model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DPaI: Differentiable Pruning at Initialization with Node-Path Balance Principle | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DRESSing Up LLM: Efficient Stylized Question-Answering via Style Subspace Editing | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| DRL: Decomposed Representation Learning for Tabular Anomaly Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DRoC: Elevating Large Language Models for Complex Vehicle Routing via Decomposed Retrieval of Constraints | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| DRoP: Distributionally Robust Data Pruning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DS-LLM: Leveraging Dynamical Systems to Enhance Both Training and Inference of Large Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DSBench: How Far Are Data Science Agents from Becoming Data Science Experts? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DSPO: Direct Score Preference Optimization for Diffusion Model Alignment | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DUALFormer: Dual Graph Transformer | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DUET: Decentralized Bilevel Optimization without Lower-Level Strong Convexity | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| DarkBench: Benchmarking Dark Patterns in Large Language Models | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| DartControl: A Diffusion-Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Data Center Cooling System Optimization Using Offline Reinforcement Learning | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Data Distillation for extrapolative protein design through exact preference optimization | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Data Mixing Laws: Optimizing Data Mixtures by Predicting Language Modeling Performance | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Data Pruning by Information Maximization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Data Scaling Laws in Imitation Learning for Robotic Manipulation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Data Selection via Optimal Control for Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Data Shapley in One Training Run | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Data Taggants: Dataset Ownership Verification Via Harmless Targeted Data Poisoning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Data Unlearning in Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Data-adaptive Differentially Private Prompt Synthesis for In-Context Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Data-centric Prediction Explanation via Kernelized Stein Discrepancy | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| DataGen: Unified Synthetic Dataset Generation via Large Language Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| DataMan: Data Manager for Pre-training Large Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-training of Deep Networks | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Dataset Ownership Verification in Contrastive Pre-trained Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DeLLMa: Decision Making Under Uncertainty with Large Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| DebGCD: Debiased Learning with Distribution Guidance for Generalized Category Discovery | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Debiasing Federated Learning with Correlated Client Participation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Debiasing Mini-Batch Quadratics for Applications in Deep Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Decentralized Optimization with Coupled Constraints | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DeciMamba: Exploring the Length Extrapolation Potential of Mamba | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Decision Information Meets Large Language Models: The Future of Explainable Operations Research | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Decision Tree Induction Through LLMs via Semantically-Aware Evolution | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Decoding Game: On Minimax Optimality of Heuristic Text Generation Strategies | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Decomposition Polyhedra of Piecewise Linear Functions | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Deconstructing Denoising Diffusion Models for Self-Supervised Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Deconstructing What Makes a Good Optimizer for Autoregressive Language Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Decoupled Finetuning for Domain Generalizable Semantic Segmentation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Decoupled Graph Energy-based Model for Node Out-of-Distribution Detection on Heterophilic Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Decoupled Subgraph Federated Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Decoupling Angles and Strength in Low-rank Adaptation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Decoupling Layout from Glyph in Online Chinese Handwriting Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Distributed Optimization for Large-Scale Quadratic Programming | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Incomplete Multi-view Learning via Cyclic Permutation of VAEs | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Deep Kernel Posterior Learning under Infinite Variance Prior Weights | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Deep Kernel Relative Test for Machine-generated Text Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Deep Learning Alternatives Of The Kolmogorov Superposition Theorem | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Deep Linear Probe Generators for Weight Space Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Deep MMD Gradient Flow without adversarial training | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Networks Learn Features From Local Discontinuities in the Label Function | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Random Features for Scalable Interpolation of Spatiotemporal Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deep Signature: Characterization of Large-Scale Molecular Dynamics | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DeepGate4: Efficient and Effective Representation Learning for Circuit Design at Scale | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DeepLTL: Learning to Efficiently Satisfy Complex LTL Specifications for Multi-Task RL | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| DeepRTL: Bridging Verilog Understanding and Generation with a Unified Representation Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DeepTAGE: Deep Temporal-Aligned Gradient Enhancement for Optimizing Spiking Neural Networks | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| DeeperForward: Enhanced Forward-Forward Training for Deeper and Better Performance | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Democratic Training Against Universal Adversarial Perturbations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Demystifying Online Clustering of Bandits: Enhanced Exploration Under Stochastic and Smoothed Adversarial Contexts | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Demystifying Topological Message-Passing with Relational Structures: A Case Study on Oversquashing in Simplicial Message-Passing | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Demystifying the Token Dynamics of Deep Selective State Space Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DenoiseVAE: Learning Molecule-Adaptive Noise Distributions for Denoising-based 3D Molecular Pre-training | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Denoising Autoregressive Transformers for Scalable Text-to-Image Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Denoising Levy Probabilistic Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Denoising Task Difficulty-based Curriculum for Training Diffusion Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Denoising with a Joint-Embedding Predictive Architecture | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Dense Video Object Captioning from Disjoint Supervision | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DenseGrounding: Improving Dense Language-Vision Semantics for Ego-centric 3D Visual Grounding | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| DenseMatcher: Learning 3D Semantic Correspondence for Category-Level Manipulation from a Single Demo | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Density estimation with LLMs: a geometric investigation of in-context learning trajectories | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Depth Any Video with Scalable Synthetic Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Depth Pro: Sharp Monocular Metric Depth in Less Than a Second | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Deriving Causal Order from Single-Variable Interventions: Guarantees & Algorithm | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Descent with Misaligned Gradients and Applications to Hidden Convexity | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Designing Concise ConvNets with Columnar Stages | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Designing Mechanical Meta-Materials by Learning Equivariant Flows | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Detecting Backdoor Samples in Contrastive Language Image Pretraining | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Determine-Then-Ensemble: Necessity of Top-k Union for Large Language Model Ensembling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| DexTrack: Towards Generalizable Neural Tracking Control for Dexterous Manipulation from Human References | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiTTo-TTS: Diffusion Transformers for Scalable Text-to-Speech without Domain-Specific Factors | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Diff-2-in-1: Bridging Generation and Dense Perception with Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Diff-PIC: Revolutionizing Particle-In-Cell Nuclear Fusion Simulation with Diffusion Models | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Diff-Prompt: Diffusion-driven Prompt Generator with Mask Supervision | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Diff3DS: Generating View-Consistent 3D Sketch via Differentiable Curve Rendering | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| DiffGAD: A Diffusion-based Unsupervised Graph Anomaly Detector | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| DiffPC: Diffusion-based High Perceptual Fidelity Image Compression with Semantic Refinement | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DiffPuter: Empowering Diffusion Models for Missing Data Imputation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian Splat Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Difference-of-submodular Bregman Divergence | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Differentiable Causal Discovery for Latent Hierarchical Causal Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Differentiable Integer Linear Programming | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Differentiable Optimization of Similarity Scores Between Models and Brains | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Differentiable Rule Induction from Raw Sequence Inputs | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Differentiable and Learnable Wireless Simulation with Geometric Transformers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Differential Transformer | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Differential learning kinetics govern the transition from memorization to generalization during in-context learning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Differentially Private Federated Learning with Time-Adaptive Privacy Spending | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Differentially Private Steering for Large Language Model Alignment | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Differentially private learners for heterogeneous treatment effects | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Differentially private optimization for non-decomposable objective functions | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Diffusing States and Matching Scores: A New Framework for Imitation Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Diffusion Actor-Critic: Formulating Constrained Policy Iteration as Diffusion Noise Regression for Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion Attribution Score: Evaluating Training Data Influence in Diffusion Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Diffusion Bridge AutoEncoders for Unsupervised Representation Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Diffusion Bridge Implicit Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Diffusion Feedback Helps CLIP See Better | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion Generative Modeling for Spatially Resolved Gene Expression Inference from Histology Images | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Diffusion Models Are Real-Time Game Engines | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | 3 |
| Diffusion Models are Evolutionary Algorithms | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Diffusion Models as Cartoonists: The Curious Case of High Density Regions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Diffusion On Syntax Trees For Program Synthesis | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Diffusion Policy Policy Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion State-Guided Projected Gradient for Inverse Problems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Diffusion Transformer Captures Spatial-Temporal Dependencies: A Theory for Gaussian Process Data | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Diffusion Transformers for Tabular Data Time Series Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Diffusion$^2$: Dynamic 3D Content Generation via Score Composition of Video and Multi-view Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Diffusion-Based Planning for Autonomous Driving with Flexible Guidance | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Diffusion-NPO: Negative Preference Optimization for Better Preference Aligned Generation of Diffusion Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Diffusion-based Decoupled Deterministic and Uncertain Framework for Probabilistic Multivariate Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Diffusion-based Neural Network Weights Generation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Digi-Q: Learning VLM Q-Value Functions for Training Device-Control Agents | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Dimension Agnostic Neural Processes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Direct Distributional Optimization for Provable Alignment of Diffusion Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Direct Post-Training Preference Alignment for Multi-Agent Motion Generation Model Using Implicit Feedback from Pre-training Demonstrations | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Directional Gradient Projection for Robust Fine-Tuning of Foundation Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| DisPose: Disentangling Pose Guidance for Controllable Human Image Animation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Discovering Clone Negatives via Adaptive Contrastive Learning for Image-Text Matching | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Discovering Group Structures via Unitary Representation Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Discovering Influential Neuron Path in Vision Transformers | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Discovering Temporally Compositional Neural Manifolds with Switching Infinite GPFA | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| DiscoveryBench: Towards Data-Driven Discovery with Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Discrete Codebook World Models for Continuous Control | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Discrete Copula Diffusion |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Discrete Diffusion Schrödinger Bridge Matching for Graph Transformation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Discrete Distribution Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Discrete GCBF Proximal Policy Optimization for Multi-agent Safe Optimal Control |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Discrete Latent Plans via Semantic Skill Abstractions |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Discretization-invariance? On the Discretization Mismatch Errors in Neural Operators |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Discriminating image representations with principal distortions |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Discriminator-Guided Embodied Planning for LLM Agent |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Disentangled Representation Learning with the Gromov-Monge Gap |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Disentangling 3D Animal Pose Dynamics with Scrubbed Conditional Latent Variables |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Disentangling Representations through Multi-task Learning |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Dissecting Adversarial Robustness of Multimodal LM Agents |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Dist Loss: Enhancing Regression in Few-Shot Region through Distribution Distance Constraint |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DistRL: An Asynchronous Distributed Reinforcement Learning Framework for On-Device Control Agent |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Distance-Based Tree-Sliced Wasserstein Distance |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| DistillHGNN: A Knowledge Distillation Approach for High-Speed Hypergraph Neural Networks |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Distilling Dataset into Neural Field |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Distilling Reinforcement Learning Algorithms for In-Context Model-Based Planning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Distilling Structural Representations into Protein Sequence Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Distributed Speculative Inference (DSI): Speculation Parallelism for Provably Faster Lossless Language Model Inference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Distribution Backtracking Builds A Faster Convergence Trajectory for Diffusion Distillation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Distribution-Free Data Uncertainty for Neural Network Regression |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Distribution-Specific Agnostic Conditional Classification With Halfspaces |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Distributional Associations vs In-Context Reasoning: A Study of Feed-forward and Attention Layers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Divergence of Neural Tangent Kernel in Classification Problems |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Divergence-Regularized Discounted Aggregation: Equilibrium Finding in Multiplayer Partially Observable Stochastic Games |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Divergence-enhanced Knowledge-guided Context Optimization for Visual-Language Prompt Tuning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Diverse Policies Recovering via Pointwise Mutual Information Weighted Imitation Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Diverse Preference Learning for Capabilities and Alignment |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Diversity-Rewarded CFG Distillation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Do Contemporary Causal Inference Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Do Deep Neural Network Solutions Form a Star Domain? |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Do Egocentric Video-Language Models Truly Understand Hand-Object Interactions? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Do LLM Agents Have Regret? A Case Study in Online Learning and Games |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Do LLMs ``know'' internally when they follow instructions? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Do LLMs estimate uncertainty well in instruction-following? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Do LLMs have Consistent Values? |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Do Large Language Models Truly Understand Geometric Structures? |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Do Mice Grok? Glimpses of Hidden Progress in Sensory Cortex |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Do Stochastic, Feel Noiseless: Stable Stochastic Optimization via a Double Momentum Mechanism |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
4 |
| Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference under Ambiguities |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| Do WGANs succeed because they minimize the Wasserstein Distance? Lessons from Discrete Generators |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Do You Keep an Eye on What I Ask? Mitigating Multimodal Hallucination via Attention-Guided Ensemble Decoding |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Do as I do (Safely): Mitigating Task-Specific Fine-tuning Risks in Large Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Do as We Do, Not as You Think: the Conformity of Large Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| DoF: A Diffusion Factorization Framework for Offline Multi-Agent Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| DocMIA: Document-Level Membership Inference Attacks against DocVQA Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Does Refusal Training in LLMs Generalize to the Past Tense? |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Does SGD really happen in tiny subspaces? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Does Safety Training of LLMs Generalize to Semantically Related Natural Prompts? |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Does Spatial Cognition Emerge in Frontier Models? |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Does Training with Synthetic Data Truly Protect Privacy? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Domain Guidance: A Simple Transfer Approach for a Pre-trained Diffusion Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Don't Take Things Out of Context: Attention Intervention for Enhancing Chain-of-Thought Reasoning in Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Don't flatten, tokenize! Unlocking the key to SoftMoE's efficacy in deep RL |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Doubly Optimal Policy Evaluation for Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Doubly robust identification of treatment effects from multiple environments |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Drama: Mamba-Enabled Model-Based Reinforcement Learning Is Sample and Parameter Efficient |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| DreamCatalyst: Fast and High-Quality 3D Editing via Controlling Editability and Identity Preservation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| DreamDistribution: Learning Prompt Distribution for Diverse In-distribution Generation |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Dreamweaver: Learning Compositional World Models from Pixels |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| DriveTransformer: Unified Transformer for Scalable End-to-End Autonomous Driving |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Dual Process Learning: Controlling Use of In-Context vs. In-Weights Strategies with Weight Forgetting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dualformer: Controllable Fast and Slow Thinking by Learning with Randomized Reasoning Traces |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Duoduo CLIP: Efficient 3D Understanding with Multi-View Images |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Durable Quantization Conditioned Misalignment Attack on Large Language Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| DyCAST: Learning Dynamic Causal Structure from Time Series |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| DynAlign: Unsupervised Dynamic Taxonomy Alignment for Cross-Domain Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| DynFrs: An Efficient Framework for Machine Unlearning in Random Forest |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| DynaMath: A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| DynaPrompt: Dynamic Test-Time Prompt Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Dynamic Assortment Selection and Pricing with Censored Preference Feedback |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Dynamic Contrastive Skill Learning with State-Transition Based Skill Clustering and Dynamic Length Adjustment |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Dynamic Diffusion Transformer |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Dynamic Scenes |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Dynamic Low-Rank Sparse Adaptation for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Dynamic Modeling of Patients, Modalities and Tasks via Multi-modal Multi-task Mixture of Experts |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Dynamic Multimodal Evaluation with Flexible Complexity by Vision-Language Bootstrapping |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Dynamic Negative Guidance of Diffusion Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Dynamic-SUPERB Phase-2: A Collaboratively Expanding Benchmark for Measuring the Capabilities of Spoken Language Models with 180 Tasks |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| DynamicCity: Large-Scale 4D Occupancy Generation from Dynamic Scenes |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Dynamical Diffusion: Learning Temporal Dynamics with Diffusion Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| E(3)-equivariant models cannot learn chirality: Field-based molecular generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| E(n) Equivariant Topological Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice Routing |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EC-Diffuser: Multi-Object Manipulation via Entity-Centric Behavior Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ECD: A Machine Learning Benchmark for Predicting Enhanced-Precision Electronic Charge Density in Crystalline Inorganic Materials |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ECHOPulse: ECG Controlled Echocardio-gram Video Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| EFFICIENT JAILBREAK ATTACK SEQUENCES ON LARGE LANGUAGE MODELS VIA MULTI-ARMED BANDIT-BASED CONTEXT SWITCHING |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EG4D: Explicit Generation of 4D Object without Score Distillation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EIA: ENVIRONMENTAL INJECTION ATTACK ON GENERALIST WEB AGENTS FOR PRIVACY LEAKAGE |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| ELBOing Stein: Variational Bayes with Stein Mixture Inference |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ELFS: Label-Free Coreset Selection with Proxy Training Dynamics |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ELICIT: LLM Augmentation Via External In-context Capability |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
4 |
| ESE: Espresso Sentence Embeddings |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ET-SEED: EFFICIENT TRAJECTORY-LEVEL SE(3) EQUIVARIANT DIFFUSION POLICY |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
3 |
| ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| EVA: Geometric Inverse Design for Fast Protein Motif-Scaffolding with Coupled Flow |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Earlier Tokens Contribute More: Learning Direct Preference Optimization From Temporal Decay Perspective |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Easing Training Process of Rectified Flow Models Via Lengthening Inter-Path Distance |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| EcoFace: Audio-Visual Emotional Co-Disentanglement Speech-Driven 3D Talking Face Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Edge Prompt Tuning for Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Edge-aware Image Smoothing with Relative Wavelet Domain Representation |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| EdgeRunner: Auto-regressive Auto-encoder for Artistic Mesh Generation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| EditRoom: LLM-parameterized Graph Diffusion for Composable 3D Room Layout Editing |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Effective Interplay between Sparsity and Quantization: From Theory to Practice |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Effective and Efficient Time-Varying Counterfactual Prediction with State-Space Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Effective post-training embedding compression via temperature control in contrastive training |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Efficient Action-Constrained Reinforcement Learning via Acceptance-Rejection Method and Augmented MDPs |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Efficient Active Imitation Learning with Random Network Distillation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Efficient Alternating Minimization with Applications to Weighted Low Rank Approximation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Biological Data Acquisition through Inference Set Design |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Efficient Causal Decision Making with One-sided Feedback |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| Efficient Cross-Episode Meta-RL |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Dictionary Learning with Switch Sparse Autoencoders |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Efficient Diffusion Transformer Policies with Mixture of Expert Denoisers for Multitask Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Discovery of Pareto Front for Multi-Objective Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed GFlowNets |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Evolutionary Search Over Chemical Space with Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Efficient Exploration and Discriminative World Model Learning with an Object-Centric Abstraction |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Imitation under Misspecification |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Efficient Inference for Large Language Model-based Generative Recommendation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Interpolation between Extragradient and Proximal Methods for Weak MVIs |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Efficient Learning with Sine-Activated Low-Rank Matrices |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Low-Bit Quantization with Adaptive Scales for Multi-Task Co-Training |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Masked AutoEncoder for Video Object Counting and A Large-Scale Benchmark |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Model Editing with Task-Localized Sparse Fine-tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Model-Based Reinforcement Learning Through Optimistic Thompson Sampling |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Efficient Multi-agent Offline Coordination via Diffusion-based Trajectory Stitching |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Neuron Segmentation in Electron Microscopy by Affinity-Guided Queries |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Off-Policy Learning for High-Dimensional Action Spaces |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Online Pruning and Abstraction for Imperfect Information Extensive-Form Games |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Efficient Online Reinforcement Learning Fine-Tuning Need Not Retain Offline Data |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Efficient Perplexity Bound and Ratio Matching in Discrete Diffusion Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Efficient Policy Evaluation with Safety Constraint for Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Efficient Reinforcement Learning with Large Language Model Priors |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Efficient Sparse PCA via Block-Diagonalization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficient Top-m Data Values Identification for Data Selection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient Training of Neural Stochastic Differential Equations by Matching Finite Dimensional Distributions |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient and Accurate Explanation Estimation with Distribution Compression |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient and Context-Aware Label Propagation for Zero-/Few-Shot Training-Free Adaptation of Vision-Language Model |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient and Robust Neural Combinatorial Optimization via Wasserstein-Based Coresets |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Efficient and Trustworthy Causal Discovery with Latent Variables and Complex Relations |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| Efficient stagewise pretraining via progressive subnetworks |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Efficiently Parameterized Neural Metriplectic Systems |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
6 |
| EffoVPR: Effective Foundation Model Utilization for Visual Place Recognition |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| EgoExo-Gen: Ego-centric Video Prediction by Watching Exo-centric Videos |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| EgoSim: Egocentric Exploration in Virtual Worlds with Multi-modal Conditioning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| ElasticTok: Adaptive Tokenization for Image and Video |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Eliciting Human Preferences with Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Eliminating Position Bias of Language Models: A Mechanistic Approach |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Elliptic Loss Regularization |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Elucidating the Preconditioning in Consistency Distillation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| EmbedLLM: Learning Compact Representations of Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| EmbodiedSAM: Online Segment Any 3D Thing in Real Time |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Emergence of a High-Dimensional Abstraction Phase in Language Transformers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Emergence of meta-stable clustering in mean-field transformer models |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Emergent Orientation Maps —— Mechanisms, Coding Efficiency and Robustness |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Empowering LLM Agents with Zero-Shot Optimal Decision-Making through Q-learning |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Empowering Users in Digital Privacy Management through Interactive LLM-Based Agents |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Enabling Realtime Reinforcement Learning at Scale with Staggered Asynchronous Inference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Encryption-Friendly LLM Architecture |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| End-to-end Learning of Gaussian Mixture Priors for Diffusion Sampler |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Endless Jailbreaks with Bijection Learning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Endowing Visual Reprogramming with Adversarial Robustness |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Energy-Based Diffusion Language Models for Text Generation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Energy-Weighted Flow Matching for Offline Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Energy-based Backdoor Defense Against Federated Graph Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Enhance Multi-View Classification Through Multi-Scale Alignment and Expanded Boundary |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhanced Diffusion Sampling via Extrapolation with Multiple ODE Solutions |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Clustered Federated Learning: Integration of Strategies and Improved Methodologies |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Compositional Text-to-Image Generation with Reliable Random Seeds |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Enhancing Document Understanding with Group Position Embedding: A Novel Approach to Incorporate Layout Information |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing End-to-End Autonomous Driving with Latent World Model |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Enhancing Federated Domain Adaptation with Multi-Domain Prototype-Based Federated Fine-Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Enhancing Graph Of Thought: Enhancing Prompts with LLM Rationales and Dynamic Temperature Control |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Enhancing Language Model Agents using Diversity of Thoughts |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Enhancing Learning with Label Differential Privacy by Vector Approximation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Enhancing Pre-trained Representation Classifiability can Boost its Interpretability |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Enhancing Prediction Performance through Influence Measure |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Enhancing Robust Fairness via Confusional Spectral Regularization |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Enhancing Uncertainty Estimation and Interpretability with Bayesian Non-negative Decision Layer |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Enhancing the Scalability and Applicability of Kohn-Sham Hamiltonians for Molecular Systems |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Ensembles of Low-Rank Expert Adapters |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Ensembling Diffusion Models via Adaptive Feature Aggregation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Entropy-based Activation Function Optimization: A Method on Searching Better Activation Functions |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Episodic Memories Generation and Evaluation Benchmark for Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Episodic Novelty Through Temporal Distance |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
5 |
| Epistemic Monte Carlo Tree Search |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| EqNIO: Subequivariant Neural Inertial Odometry |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Equivariant Denoisers Cannot Copy Graphs: Align Your Graph Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Equivariant Masked Position Prediction for Efficient Molecular Representation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Equivariant Neural Functional Networks for Transformers |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Erasing Concept Combination from Text-to-Image Diffusion Model |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Error-quantified Conformal Inference for Time Series |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Estimating the Probabilities of Rare Outputs in Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Estimation of single-cell and tissue perturbation effect in spatial transcriptomics via Spatial Causal Disentanglement |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
4 |
| EvA: Erasing Spurious Correlations with Activations |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Evaluating Large Language Models through Role-Guide and Self-Reflection: A Comparative Study |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Evaluating Semantic Variation in Text-to-Image Synthesis: A Causal Perspective |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Event-Driven Online Vertical Federated Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Everything, Everywhere, All at Once: Is Mechanistic Interpretability Identifiable? |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Evidential Learning-based Certainty Estimation for Robust Dense Feature Matching |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Exact Byte-Level Probabilities from Tokenized Language Models for FIM-Tasks and Model Ensembles |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Exact Certification of (Graph) Neural Networks Against Label Poisoning |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Exact Community Recovery under Side Information: Optimality of Spectral Algorithms |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Exact Computation of Any-Order Shapley Interactions for Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Examining Alignment of Large Language Models through Representative Heuristics: the case of political stereotypes |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Execution-guided within-prompt search for programming-by-example |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Expand and Compress: Exploring Tuning Principles for Continual Spatio-Temporal Graph Forecasting |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Expected Return Symmetries |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Expected Sliced Transport Plans |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Explain Yourself, Briefly! Self-Explaining Neural Networks with Concise Sufficient Reasons |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Explanations of GNN on Evolving Graphs via Axiomatic Layer edges |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Exploiting Distribution Constraints for Scalable and Efficient Image Retrieval |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Exploiting Hidden Symmetry to Improve Objective Perturbation for DP Linear Learners with a Nonsmooth L1-Norm |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exploiting Structure in Offline Multi-Agent RL: The Benefits of Low Interaction Rank |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Explore Theory of Mind: program-guided adversarial data generation for theory of mind reasoning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Exploring Learning Complexity for Efficient Downstream Dataset Pruning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Exploring Local Memorization in Diffusion Models via Bright Ending Attention |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Exploring The Forgetting in Adversarial Training: A Novel Method for Enhancing Robustness |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Exploring The Loss Landscape Of Regularized Neural Networks Via Convex Duality |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Exploring a Principled Framework for Deep Subspace Clustering |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Exploring channel distinguishability in local neighborhoods of the model space in quantum neural networks |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Exploring the Camera Bias of Person Re-identification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Exploring the Design Space of Visual Context Representation in Video MLLMs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Exponential Topology-enabled Scalable Communication in Multi-agent Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Exposure Bracketing Is All You Need For A High-Quality Image |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Expressivity of Neural Networks with Random Weights and Learned Biases |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Extendable and Iterative Structure Learning Strategy for Bayesian Networks |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Extending Mercer's expansion to indefinite and asymmetric kernels |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| FACTS: A Factored State-Space Framework for World Modelling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FIG: Flow with Interpolant Guidance for Linear Inverse Problems |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FIRING-Net: A filtered feature recycling network for speech enhancement |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| FLIP: Flow-Centric Generative Planning as General-Purpose Manipulation World Model |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| FLOPS: Forward Learning with OPtimal Sampling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FOSP: Fine-tuning Offline Safe Policy through World Models |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| FaceShot: Bring Any Character into Life |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Facilitating Multi-turn Function Calling for LLMs via Compositional Instruction Tuning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Factor Graph-based Interpretable Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Failures to Find Transferable Image Jailbreaks Between Vision-Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Fair Clustering in the Sliding Window Model |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fair Submodular Cover |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| FairDen: Fair Density-Based Clustering |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FairMT-Bench: Benchmarking Fairness for Multi-turn Dialogue in Conversational LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows" |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fantastic Copyrighted Beasts and How (Not) to Generate Them |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Fantastic Targets for Concept Erasure in Diffusion Models and Where To Find Them |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Fast Direct: Query-Efficient Online Black-box Guidance for Diffusion-model Target Generation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Fast Feedforward 3D Gaussian Splatting Compression |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fast Summation of Radial Kernels via QMC Slicing |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fast Training of Sinusoidal Neural Fields via Scaling Initialization |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Fast Uncovering of Protein Sequence Diversity from Structure |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fast and Accurate Blind Flexible Docking |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fast and Slow Streams for Online Time Series Forecasting Without Information Leakage |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fast training and sampling of Restricted Boltzmann Machines |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fast unsupervised ground metric learning with tree-Wasserstein distance |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Faster Algorithms for Structured Linear and Kernel Support Vector Machines |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Faster Cascades via Speculative Decoding |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Faster Diffusion Sampling with Randomized Midpoints: Sequential and Parallel |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
3 |
| Faster Inference of Flow-Based Generative Models via Improved Data-Noise Coupling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fat-to-Thin Policy Optimization: Offline Reinforcement Learning with Sparse Policies |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Feature Averaging: An Implicit Bias of Gradient Descent Leading to Non-Robustness in Neural Networks |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Feature-Based Online Bilateral Trade |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| FedLWS: Federated Learning with Adaptive Layer-wise Weight Shrinking |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| FedTMOS: Efficient One-Shot Federated Learning with Tsetlin Machine |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Federated $Q$-Learning with Reference-Advantage Decomposition: Almost Optimal Regret and Logarithmic Communication Cost |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Federated Class-Incremental Learning: A Hybrid Approach Using Latent Exemplars and Data-Free Techniques to Address Local and Global Forgetting |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Federated Continual Learning Goes Online: Uncertainty-Aware Memory Management for Vision Tasks and Beyond |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Federated Domain Generalization with Data-free On-server Matching Gradient |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Federated Few-Shot Class-Incremental Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Federated Granger Causality Learning For Interdependent Clients With State Space Representation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Federated Residual Low-Rank Adaption of Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Feedback Favors the Generalization of Neural ODEs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Feedback Schrödinger Bridge Matching |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Fengbo: a Clifford Neural Operator pipeline for 3D PDEs in Computational Fluid Dynamics |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Ferret-UI 2: Mastering Universal User Interface Understanding Across Platforms |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Few-Class Arena: A Benchmark for Efficient Selection of Vision Models and Dataset Difficulty Measurement |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fewer May Be Better: Enhancing Offline Reinforcement Learning with Reduced Dataset |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Field-DiT: Diffusion Transformer on Unified Video, 3D, and Game Field Generation |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Filtered not Mixed: Filtering-Based Online Gating for Mixture of Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Finally Rank-Breaking Conquers MNL Bandits: Optimal and Efficient Algorithms for MNL Assortment |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Find A Winning Sign: Sign Is All We Need to Win the Lottery |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Finding Shared Decodable Concepts and their Negations in the Brain |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fine-Grained Verifiers: Preference Modeling as Next-token Prediction in Vision-Language Alignment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fine-Tuning Attention Modules Only: Enhancing Weight Disentanglement in Task Arithmetic |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fine-Tuning Discrete Diffusion Models via Reward Optimization with Applications to DNA and Protein Design |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fine-tuning can Help Detect Pretraining Data from Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fine-tuning with Reserved Majority for Noise Reduction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| First-Person Fairness in Chatbots |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Fitting Networks with a Cancellation Trick |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Flash Inference: Near Linear Time Inference for Long Convolution Sequence Models and Beyond |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| FlashMask: Efficient and Rich Mask Extension of FlashAttention |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| FlashRNN: I/O-Aware Optimization of Traditional RNNs on modern hardware |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Flat Reward in Policy Parameter Space Implies Robust Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Flavors of Margin: Implicit Bias of Steepest Descent in Homogeneous Neural Networks |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| FlexCAD: Unified and Versatile Controllable CAD Generation with Fine-tuned Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FlickerFusion: Intra-trajectory Domain Generalizing Multi-agent Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Flow Distillation Sampling: Regularizing 3D Gaussians with Pre-trained Matching Priors |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Flow Matching with Gaussian Process Priors for Probabilistic Time Series Forecasting |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Flow Matching with General Discrete Paths: A Kinetic-Optimal Perspective |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Flow matching achieves almost minimax optimal convergence |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Flow-based Variational Mutual Information: Fast and Flexible Approximations |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Flow: Modularized Agentic Workflow Automation |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| FlowDec: A flow-based full-band general audio codec with high perceptual quality |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Following the Human Thread in Social Navigation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| For Better or For Worse? Learning Minimum Variance Features With Label Augmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ForecastBench: A Dynamic Benchmark of AI Forecasting Capabilities |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Forewarned is Forearmed: Harnessing LLMs for Data Synthesis via Failure-induced Exploration |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Forget the Data and Fine-Tuning! Just Fold the Network to Compress |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Forgetting Transformer: Softmax Attention with a Forget Gate |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Forking Paths in Neural Text Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| FormalAlign: Automated Alignment Evaluation for Autoformalization |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Formation of Representations in Neural Networks |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Forte : Finding Outliers with Representation Typicality Estimation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Foundation Models Secretly Understand Neural Network Weights: Enhancing Hypernetwork Architectures with Foundation Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Fourier Head: Helping Large Language Models Learn Complex Probability Distributions |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Fourier Sliced-Wasserstein Embedding for Multisets and Measures |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fragment and Geometry Aware Tokenization of Molecules for Structure-Based Drug Design Using Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Frame-Voyager: Learning to Query Frames for Video Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Framer: Interactive Frame Interpolation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| FreCaS: Efficient Higher-Resolution Image Generation via Frequency-aware Cascaded Sampling |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FreDF: Learning to Forecast in the Frequency Domain |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FreSh: Frequency Shifting for Accelerated Neural Representation Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Free Hunch: Denoiser Covariance Estimation for Diffusion Models Without Extra Costs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| FreeCG: Free the Design Space of Clebsch-Gordan Transform for Machine Learning Force Fields |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| FreeVS: Generative View Synthesis on Free Driving Trajectory |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| FreqPrior: Improving Video Diffusion Models with Frequency Filtering Gaussian Noise |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From Attention to Activation: Unraveling the Enigmas of Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| From Commands to Prompts: LLM-based Semantic File System for AIOS |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
2 |
| From Decoupling to Adaptive Transformation: a Wider Optimization Space for PTQ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| From Few to Many: Self-Improving Many-Shot Reasoners Through Iterative Optimization and Generation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From GNNs to Trees: Multi-Granular Interpretability for Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From Isolated Conversations to Hierarchical Schemas: Dynamic Tree Memory Representation for LLMs |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| From Layers to States: A State Space Model Perspective to Deep Neural Network Layer Dynamics |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| From Models to Microtheories: Distilling a Model's Topical Knowledge for Grounded Question-Answering |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| From Probability to Counterfactuals: the Increasing Complexity of Satisfiability in Pearl's Causal Hierarchy |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| From Promise to Practice: Realizing High-performance Decentralized Training |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| From Risk to Uncertainty: Generating Predictive Uncertainty Measures via Bayesian Estimation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| From Search to Sampling: Generative Models for Robust Algorithmic Recourse |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| From Sparse Dependence to Sparse Attention: Unveiling How Chain-of-Thought Enhances Transformer Sample Efficiency |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| From Tokens to Lattices: Emergent Lattice Structures in Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| From Tokens to Words: On the Inner Lexicon of LLMs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| From an LLM Swarm to a PDDL-empowered Hive: Planning Self-executed Instructions in a Multi-modal Jungle |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| Fréchet Wavelet Distance: A Domain-Agnostic Metric for Image Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Fugatto 1: Foundational Generative Audio Transformer Opus 1 |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Fully-inductive Node Classification on Arbitrary Graphs |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Fundamental Limitations on Subquadratic Alternatives to Transformers |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| GALA: Geometry-Aware Local Adaptive Grids for Detailed 3D Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GANDALF: Generative AttentioN based Data Augmentation and predictive modeLing Framework for personalized cancer treatment |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| GDrag:Towards General-Purpose Interactive Editing with Anti-ambiguity Point Diffusion |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GEVRM: Goal-Expressive Video Generation Model For Robust Visual Manipulation |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GLOMA: Global Video Text Spotting with Morphological Association |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| GLoRa: A Benchmark to Evaluate the Ability to Learn Long-Range Dependencies in Graphs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GMValuator: Similarity-based Data Valuation for Generative Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GNNs Getting ComFy: Community and Feature Similarity Guided Rewiring |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| GOAL: A Generalist Combinatorial Optimization Agent Learner |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| GOFA: A Generative One-For-All Model for Joint Graph Language Modeling |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial Latent Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| GOttack: Universal Adversarial Attacks on Graph Neural Networks via Graph Orbits Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| GPS: A Probabilistic Distributional Similarity with Gumbel Priors for Set-to-Set Matching |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GPromptShield: Elevating Resilience in Graph Prompt Tuning Against Adversarial Attacks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| GRAIN: Exact Graph Reconstruction from Gradients |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GROOT-2: Weakly Supervised Multimodal Instruction Following Agents |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| GReaTer: Gradients Over Reasoning Makes Smaller Language Models Strong Prompt Optimizers |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GS-CPR: Efficient Camera Pose Refinement via 3D Gaussian Splatting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GS-LiDAR: Generating Realistic LiDAR Point Clouds with Panoramic Gaussian Splatting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GSBA$^K$: $top$-$K$ Geometric Score-based Black-box Attack |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| GSE: Group-wise Sparse and Explainable Adversarial Attacks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| GUI-World: A Video Benchmark and Dataset for Multimodal GUI-oriented Understanding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GameArena: Evaluating LLM Reasoning through Live Computer Games |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| GameGen-X: Interactive Open-world Game Video Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Gap-Dependent Bounds for Q-Learning using Reference-Advantage Decomposition |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Gated Delta Networks: Improving Mamba2 with Delta Rule |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Gaussian Differentially Private Human Faces Under a Face Radial Curve Representation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Gaussian Ensemble Belief Propagation for Efficient Inference in High-Dimensional, Black-box Systems |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Gaussian Head & Shoulders: High Fidelity Neural Upper Body Avatars with Anchor Gaussian Guided Texture Warping |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
2 |
| Gaussian Mixture Counterfactual Generator |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Gaussian Splatting Lucas-Kanade |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Gaussian-Based Instance-Adaptive Intensity Modeling for Point-Supervised Facial Expression Spotting |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Gaussian-Det: Learning Closed-Surface Gaussians for 3D Object Detection |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| GaussianAnything: Interactive Point Cloud Flow Matching for 3D Generation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| GeSubNet: Gene Interaction Inference for Disease Subtype Network Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-Time Alignment |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| GenDataAgent: On-the-fly Dataset Augmentation with Synthetic Data |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| GenEx: Generating an Explorable World |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GenSE: Generative Speech Enhancement via Language Models using Hierarchical Modeling |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GenVP: Generating Visual Puzzles with Contrastive Hierarchical VAEs |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| GenXD: Generating Any 3D and 4D Scenes |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| General Scene Adaptation for Vision-and-Language Navigation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalizability of Neural Networks Minimizing Empirical Risk Based on Expressive Power |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Generalizable Human Gaussians from Single-View Image |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Generalizable Motion Planning via Operator Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalization Bounds and Model Complexity for Kolmogorov–Arnold Networks |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Generalization Bounds for Canonicalization: A Comparative Study with Group Averaging |
❌ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
2 |
| Generalization Guarantees for Representation Learning via Data-Dependent Gaussian Mixture Priors |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Generalization and Distributed Learning of GFlowNets |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalization in VAE and Diffusion Models: A Unified Information-Theoretic Analysis |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalization through variance: how noise shapes inductive biases in diffusion models |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Generalization v.s. Memorization: Tracing Language Models’ Capabilities Back to Pretraining Data |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generalization, Expressivity, and Universality of Graph Neural Networks on Attributed Graphs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Generalized Behavior Learning from Diverse Demonstrations |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Generalized Consistency Trajectory Models for Image Manipulation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Generalized Principal-Agent Problem with a Learning Agent |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Generalized Video Moment Retrieval |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generalizing Reasoning Problems to Longer Lengths |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Generalizing Weisfeiler-Lehman Kernels to Subgraphs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Generating Graphs via Spectral Diffusion |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generating CAD Code with Vision-Language Models for 3D Designs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Generating Freeform Endoskeletal Robots |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generating Likely Counterfactuals Using Sum-Product Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generating Physical Dynamics under Priors |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Generation and Comprehension Hand-in-Hand: Vision-guided Expression Diffusion for Boosting Referring Expression Generation and Comprehension |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generative Adapter: Contextualizing Language Models in Parameters with A Single Forward Pass |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generative Classifiers Avoid Shortcut Solutions |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generative Flows on Synthetic Pathway for Drug Design |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Generative Monoculture in Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Generative Representational Instruction Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Generative Verifiers: Reward Modeling as Next-Token Prediction |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Generator Matching: Generative modeling with arbitrary Markov processes |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| GeoILP: A Synthetic Dataset to Guide Large-Scale Rule Induction |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| GeoLoRA: Geometric integration for parameter efficient fine-tuning |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| GeoX: Geometric Problem Solving Through Unified Formalized Vision-Language Pre-training |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Geometry Image Diffusion: Fast and Data-Efficient Text-to-3D with Image-Based Surface Representation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Geometry of Lightning Self-Attention: Identifiability and Dimension |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Geometry of Long-Tailed Representation Learning: Rebalancing Features for Skewed Distributions |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Geometry of Neural Reinforcement Learning in Continuous State and Action Spaces |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Geometry-aware RL for Manipulation of Varying Shapes and Deformable Objects |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Glad: A Streaming Scene Generator for Autonomous Driving |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Glauber Generative Model: Discrete Diffusion Models via Binary Classification |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Glimpse: Enabling White-Box Methods to Use Proprietary Models for Zero-Shot LLM-Generated Text Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Global Convergence in Neural ODEs: Impact of Activation Functions |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Global Convergence of Policy Gradient in Average Reward MDPs |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Global Identifiability of Overcomplete Dictionary Learning via L1 and Volume Minimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Global Well-posedness and Convergence Analysis of Score-based Generative Models via Sharp Lipschitz Estimates |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| GlycanML: A Multi-Task and Multi-Structure Benchmark for Glycan Machine Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Transformers |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Going Beyond Feature Similarity: Effective Dataset distillation based on Class-aware Conditional Mutual Information |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Going Beyond Static: Understanding Shifts with Time-Series Attribution |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| GoodDrag: Towards Good Practices for Drag Editing with Diffusion Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| GrabS: Generative Embodied Agent for 3D Object Segmentation without Scene Supervision |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Gradient correlation is a key ingredient to accelerate SGD with momentum |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Gradient descent with generalized Newton’s method |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Gradient-Free Generation for Hard-Constrained Systems |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Gramian Multimodal Representation Learning and Alignment |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Grammar Reinforcement Learning: path and cycle counting in graphs with a Context-Free Grammar and Transformer approach |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Graph Assisted Offline-Online Deep Reinforcement Learning for Dynamic Workflow Scheduling |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Graph Neural Networks Are More Than Filters: Revisiting and Benchmarking from A Spectral Perspective |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Graph Neural Networks Can (Often) Count Substructures |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Graph Neural Networks Gone Hogwild |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Graph Neural Networks for Edge Signals: Orientation Equivariance and Invariance |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Graph Neural Preconditioners for Iterative Solutions of Sparse Linear Systems |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Graph Neural Ricci Flow: Evolving Feature from a Curvature Perspective |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Graph Sparsification via Mixture of Graphs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Graph Transformers Dream of Electric Flow |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Graph-Guided Scene Reconstruction from Images with 3D Gaussian Splatting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Graph-based Document Structure Analysis |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| GraphArena: Evaluating and Exploring Large Language Models on Graph Computation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| GraphBridge: Towards Arbitrary Transfer Learning in GNNs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| GraphEval: A Lightweight Graph-Based LLM Framework for Idea Evaluation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GraphRouter: A Graph-based Router for LLM Selections |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GravMAD: Grounded Spatial Value Maps Guided Action Diffusion for Generalized 3D Manipulation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| GridMix: Exploring Spatial Modulation for Neural Fields in PDE Modeling |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Grokking at the Edge of Numerical Stability |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Grounding Continuous Representations in Geometry: Equivariant Neural Fields |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Grounding Multimodal Large Language Model in GUI World |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Grounding Video Models to Actions through Goal Conditioned Exploration |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Group Distributionally Robust Dataset Distillation with Risk Minimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Group Downsampling with Equivariant Anti-aliasing |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Group Ligands Docking to Protein Pockets |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Group-robust Sample Reweighting for Subpopulation Shifts via Influence Functions | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Growth Inhibitors for Suppressing Inappropriate Image Concepts in Diffusion Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Guaranteed Generation from Large Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Guided Score identity Distillation for Data-Free One-Step Text-to-Image Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Gumbel Counterfactual Generation From Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Gyrogroup Batch Normalization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| HADAMRNN: BINARY AND SPARSE TERNARY ORTHOGONAL RNNS | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HALL-E: Hierarchical Neural Codec Language Model for Minute-Long Zero-Shot Text-to-Speech Synthesis | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| HAMSTER: Hierarchical Action Models for Open-World Robot Manipulation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| HARDMath: A Benchmark Dataset for Challenging Problems in Applied Mathematics | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HART: Efficient Visual Generation with Hybrid Autoregressive Transformer | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HASARD: A Benchmark for Vision-Based Safe Reinforcement Learning in Embodied Agents | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| HELM: Hierarchical Encoding for mRNA Language Modeling | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| HELMET: How to Evaluate Long-context Models Effectively and Thoroughly | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HERO: Human-Feedback Efficient Reinforcement Learning for Online Diffusion Model Finetuning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| HG-Adapter: Improving Pre-Trained Heterogeneous Graph Neural Networks with Dual Adapters | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| HGM³: Hierarchical Generative Masked Motion Modeling with Hard Token Mining | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| HMoRA: Making LLMs More Effective with Hierarchical Mixture of LoRA Experts | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HOPE for a Robust Parameterization of Long-memory State Space Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HQ-Edit: A High-Quality Dataset for Instruction-based Image Editing | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| HQGS: High-Quality Novel View Synthesis with Gaussian Splatting in Degraded Scenes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HR-Extreme: A High-Resolution Dataset for Extreme Weather Forecasting | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| HShare: Fast LLM Decoding by Hierarchical Key-Value Sharing | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| HaDeMiF: Hallucination Detection and Mitigation in Large Language Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Halton Scheduler for Masked Generative Image Transformer | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Handling Delay in Real-Time Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Harnessing Diversity for Important Data Selection in Pretraining Large Language Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Harnessing Webpage UIs for Text-Rich Visual Understanding | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 4 |
| Has the Deep Neural Network learned the Stochastic Process? An Evaluation Viewpoint | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| HeadMap: Locating and Enhancing Knowledge Circuits in LLMs | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Heavy-Tailed Diffusion Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HelpSteer2-Preference: Complementing Ratings with Preferences | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Herald: A Natural Language Annotated Lean 4 Dataset | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Hessian-Free Online Certified Unlearning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| HexGen-2: Disaggregated Generative Inference of LLMs in Heterogeneous Environment | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| HiBug2: Efficient and Interpretable Error Slice Discovery for Comprehensive Model Debugging | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| HiLo: A Learning Framework for Generalized Category Discovery Robust to Domain Shifts | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hidden in the Noise: Two-Stage Robust Watermarking for Images | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Hierarchical Autoregressive Transformers: Combining Byte- and Word-Level Processing for Robust, Adaptable Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Hierarchical Uncertainty Estimation for Learning-based Registration in Neuroimaging | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hierarchical World Models as Visual Whole-Body Humanoid Controllers | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Hierarchically Encapsulated Representation for Protocol Design in Self-Driving Labs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| High-Dimensional Bayesian Optimisation with Gaussian Process Prior Variational Autoencoders | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| High-Dynamic Radar Sequence Prediction for Weather Nowcasting Using Spatiotemporal Coherent Gaussian Representation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| High-Precision Dichotomous Image Segmentation via Probing Diffusion Capacity | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| High-Quality Joint Image and Video Tokenization with Causal VAE | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| High-dimension Prototype is a Better Incremental Object Detection Learner | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| High-quality Text-to-3D Character Generation with SparseCubes and Sparse Transformers. | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Higher-Order Graphon Neural Networks: Approximation and Cut Distance | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Highly Efficient Self-Adaptive Reward Shaping for Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Holistic Reasoning with Long-Context LMs: A Benchmark for Database Operations on Massive Textual Data | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Holistically Evaluating the Environmental Impact of Creating Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | 3 |
| Holographic Node Representations: Pre-training Task-Agnostic Node Embeddings | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Homomorphism Counts as Structural Encodings for Graph Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Homomorphism Expressivity of Spectral Invariant Graph Neural Networks | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Horizon Generalization in Reinforcement Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Hot-pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Hotspot-Driven Peptide Design via Multi-Fragment Autoregressive Extension | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| How Discrete and Continuous Diffusion Meet: Comprehensive Analysis of Discrete Diffusion Models via a Stochastic Integral Framework | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| How Does Critical Batch Size Scale in Pre-training? | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| How Does Vision-Language Adaptation Impact the Safety of Vision Language Models? | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| How Far Are We from True Unlearnability? | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| How Feature Learning Can Improve Neural Scaling Laws | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| How Gradient descent balances features: A dynamical analysis for two-layer neural networks | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| How Learnable Grids Recover Fine Detail in Low Dimensions: A Neural Tangent Kernel Analysis of Multigrid Parametric Encodings | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| How Low Can You Go? Searching for the Intrinsic Dimensionality of Complex Networks using Metric Node Embeddings | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| How Much is Unseen Depends Chiefly on Information About the Seen | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| How Much is a Noisy Image Worth? Data Scaling Laws for Ambient Diffusion. | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| How efficient is LLM-generated code? A rigorous & high-standard benchmark | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| How many samples are needed to train a deep neural network? | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| How much of my dataset did you use? Quantitative Data Usage Inference in Machine Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| How new data permeates LLM knowledge and how to dilute it | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| How to Evaluate Reward Models for RLHF | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| How to Find the Exact Pareto Front for Multi-Objective MDPs? | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| How to Probe: Simple Yet Effective Techniques for Improving Post-hoc Explanations | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| How to Verify Any (Reasonable) Distribution Property: Computationally Sound Argument Systems for Distributions | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Human Simulacra: Benchmarking the Personification of Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Human-Aligned Chess With a Bit of Search | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Human-inspired Episodic Memory for Infinite Context LLMs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Humanizing the Machine: Proxy Attacks to Mislead LLM Detectors | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| HyPoGen: Optimization-Biased Hypernetworks for Generalizable Policy Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hybrid Regularization Improves Diffusion-based Inverse Problem Solving | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Hydra-SGG: Hybrid Relation Assignment for One-stage Scene Graph Generation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Hymba: A Hybrid-head Architecture for Small Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Hyper-Connections | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| HyperDAS: Towards Automating Mechanistic Interpretability with Hypernetworks | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| HyperFace: Generating Synthetic Face Recognition Datasets by Exploring Face Embedding Hypersphere | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| HyperPLR: Hypergraph Generation through Projection, Learning, and Reconstruction | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Hyperbolic Genome Embeddings | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Hypothetical Minds: Scaffolding Theory of Mind for Multi-Agent Tasks with Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| I Can Hear You: Selective Robust Training for Deepfake Audio Detection | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| I2VControl-Camera: Precise Video Camera Control with Adjustable Motion Strength | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ICLR: In-Context Learning of Representations | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | 3 |
| IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 4 |
| IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| IDInit: A Universal and Stable Initialization Method for Neural Network Training | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| IFORMER: INTEGRATING CONVNET AND TRANSFORMER FOR MOBILE APPLICATION | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | 4 |
| IGL-Bench: Establishing the Comprehensive Benchmark for Imbalanced Graph Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| ILLUSION: Unveiling Truth with a Comprehensive Multi-Modal, Multi-Lingual Deepfake Dataset | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| IMDPrompter: Adapting SAM to Image Manipulation Detection by Cross-View Automated Prompt Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| INFER: A Neural-symbolic Model For Extrapolation Reasoning on Temporal Knowledge Graph | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| INS: Interaction-aware Synthesis to Enhance Offline Multi-agent Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| IPDreamer: Appearance-Controllable 3D Object Generation with Complex Image Prompts | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| IRIS: LLM-Assisted Static Analysis for Detecting Security Vulnerabilities | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| IV-mixed Sampler: Leveraging Image Diffusion Models for Enhanced Video Synthesis | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Identifiability for Gaussian Processes with Holomorphic Kernels | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Identifiable Exchangeable Mechanisms for Causal Structure and Representation Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Identification of Intermittent Temporal Latent Process | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Identifying latent state transitions in non-linear dynamical systems | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| IgGM: A Generative Model for Functional Antibody and Nanobody Design | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ImDy: Human Inverse Dynamics from Imitated Observations | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ImProver: Agent-Based Automated Proof Optimization | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Image Watermarks are Removable using Controllable Regeneration from Clean Noise | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Image and Video Tokenization with Binary Spherical Quantization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Image-level Memorization Detection via Inversion-based Inference Perturbation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ImageFolder: Autoregressive Image Generation with Folded Tokens | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ImagineNav: Prompting Vision-Language Models as Embodied Navigator through Scene Imagination | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Immunogenicity Prediction with Dual Attention Enables Vaccine Target Selection | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| ImpScore: A Learnable Metric For Quantifying The Implicitness Level of Sentences | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Implicit Bias of Mirror Flow for Shallow Neural Networks in Univariate Regression | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | 3 |
| Implicit In-context Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Implicit Neural Surface Deformation with Explicit Velocity Fields | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Implicit Search via Discrete Diffusion: A Study on Chess | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improved Algorithms for Kernel Matrix-Vector Multiplication Under Sparsity Assumptions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Improved Approximation Algorithms for $k$-Submodular Maximization via Multilinear Extension | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Improved Convergence Rate for Diffusion Probabilistic Models | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Improved Diffusion-based Generative Model with Better Adversarial Robustness | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improved Finite-Particle Convergence Rates for Stein Variational Gradient Descent | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Improved Sampling Algorithms for Lévy-Itô Diffusion Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Improved Sampling Of Diffusion Models In Fluid Dynamics With Tweedie's Formula | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improved Techniques for Optimization-Based Jailbreaking on Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Improved Training Technique for Latent Consistency Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Improving Complex Reasoning with Dynamic Prompt Corruption: A Soft Prompt Optimization Approach | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Improving Convergence Guarantees of Random Subspace Second-order Algorithm for Nonconvex Optimization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Improving Data Efficiency via Curating LLM-Driven Rating Systems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Deep Regression with Tightness | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Improving Equivariant Networks with Probabilistic Symmetry Breaking | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Generalization and Robustness in SNNs Through Signed Rate Encoding and Sparse Encoding Attacks | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Improving Graph Neural Networks by Learning Continuous Edge Directions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improving Instruction-Following in Language Models through Activation Steering | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Improving Language Model Distillation through Hidden State Matching | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Improving Large Language Model Planning with Action Sequence Similarity | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Improving Long-Text Alignment for Text-to-Image Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Neural Network Accuracy by Concurrently Training with a Twin Network | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Improving Neural Optimal Transport via Displacement Interpolation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Improving Pretraining Data Using Perplexity Correlations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Probabilistic Diffusion Models With Optimal Diagonal Covariance Matching | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Improving Reasoning Performance in Large Language Models via Representation Engineering | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Improving Semantic Understanding in Speech Language Models via Brain-tuning | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Improving Uncertainty Estimation through Semantically Diverse Language Generation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Improving Unsupervised Constituency Parsing via Maximizing Semantic Information | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Improving the Sparse Structure Learning of Spiking Neural Networks from the View of Compression Efficiency | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Imputation for prediction: beware of diminishing returns. | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| In Search of Forgotten Domain Generalization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| In vivo cell-type and brain region classification via multimodal contrastive learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| In-Context Editing: Learning Knowledge from Self-Induced Distributions | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| In-context Time Series Predictor | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| InCoDe: Interpretable Compressed Descriptions For Image Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Incorporating Visual Correspondence into Diffusion Model for Virtual Try-On | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Incremental Causal Effect for Time to Treatment Initialization | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Indirect Gradient Matching for Adversarial Robust Distillation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Inference Optimal VLMs Need Fewer Visual Tokens and More Parameters | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for LLM Problem-Solving | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Inference Scaling for Long-Context Retrieval Augmented Generation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Inference-Aware Fine-Tuning for Best-of-N Sampling in Large Language Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Infilling Score: A Pretraining Data Detection Algorithm for Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Infinite-Resolution Integral Noise Warping for Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Influence Functions for Scalable Data Attribution in Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Influence-Guided Diffusion for Dataset Distillation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| InfoGS: Efficient Structure-Aware 3D Gaussians via Lightweight Information Shaping | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Information Theoretic Text-to-Image Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Injecting Universal Jailbreak Backdoors into LLMs in Minutes | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Injective flows for star-like manifolds | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Inner Information Analysis Algorithm for Deep Neural Network based on Community | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Innovative Thinking, Infinite Humor: Humor Research of Large Language Models through Structured Thought Leaps | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Input Space Mode Connectivity in Deep Neural Networks | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| InsightBench: Evaluating Business Analytics Agents Through Multi-Step Insight Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Inspection and Control of Self-Generated-Text Recognition Ability in Llama3-8b-Instruct | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| InstaRevive: One-Step Image Enhancement via Dynamic Score Matching | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| InstaSHAP: Interpretable Additive Models Explain Shapley Values Instantly | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| InstaTrain: Adaptive Training via Ultra-Fast Natural Annealing within Dynamical Systems | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Instance-dependent Early Stopping | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Instant Policy: In-Context Imitation Learning via Graph Diffusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| InstantPortrait: One-Step Portrait Editing via Diffusion Multi-Objective Distillation | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| InstantSplamp: Fast and Generalizable Stenography Framework for Generative Gaussian Splatting | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| InstantSwap: Fast Customized Concept Swapping across Sharp Shape Differences | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Instructional Segment Embedding: Improving LLM Safety with Instruction Hierarchy | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Integral Performance Approximation for Continuous-Time Reinforcement Learning Control | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Integrating Protein Dynamics into Structure-Based Drug Design via Full-Atom Stochastic Flows | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Integrative Decoding: Improving Factuality via Implicit Self-consistency | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Intelligence at the Edge of Chaos | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Intent3D: 3D Object Detection in RGB-D Scans Based on Human Intention | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| InterMask: 3D Human Interaction Generation via Collaborative Masked Modeling | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Interaction Asymmetry: A General Principle for Learning Composable Abstractions | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Interactive Adjustment for Human Trajectory Prediction with Individual Feedback | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interactive Speculative Planning: Enhance Agent Efficiency through Co-design of System and User Interface | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Interference Among First-Price Pacing Equilibria: A Bias and Variance Analysis | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Interleaved Scene Graphs for Interleaved Text-and-Image Generation Assessment | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Intermediate Layer Classifiers for OOD generalization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Interpretable Causal Representation Learning for Biological Data in the Pathway Space | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Interpretable Unsupervised Joint Denoising and Enhancement for Real-World low-light Scenarios | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Interpretable Vision-Language Survival Analysis with Ordinal Inductive Bias for Computational Pathology | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Interpreting Emergent Planning in Model-Free Reinforcement Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Interpreting Language Reward Models via Contrastive Explanations | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Interpreting the Second-Order Effects of Neurons in CLIP | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| IntersectionZoo: Eco-driving for Benchmarking Multi-Agent Contextual Reinforcement Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Intervening Anchor Token: Decoding Strategy in Alleviating Hallucinations for MLLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Intrinsic Dimension Correlation: uncovering nonlinear connections in multimodal representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Intrinsic User-Centric Interpretability through Global Mixture of Experts | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Inverse Attention Agents for Multi-Agent Systems | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Inverse Constitutional AI: Compressing Preferences into Principles | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Inverse Rendering using Multi-Bounce Path Tracing and Reservoir Sampling | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Inverse decision-making using neural amortized Bayesian actors | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| InverseBench: Benchmarking Plug-and-Play Diffusion Priors for Inverse Problems in Physical Sciences | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| InversionGNN: A Dual Path Network for Multi-Property Molecular Optimization | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| InvestESG: A multi-agent reinforcement learning benchmark for studying climate investment as a social dilemma | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Investigating Pattern Neurons in Urban Time Series Forecasting | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Is Factuality Enhancement a Free Lunch For LLMs? Better Factuality Can Lead to Worse Context-Faithfulness | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Is In-Context Learning Sufficient for Instruction Following in LLMs? | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Is Large-scale Pretraining the Secret to Good Domain Generalization? | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Is Your Multimodal Language Model Oversensitive to Safe Queries? | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Is Your Video Language Model a Reliable Judge? | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| Is uniform expressivity too restrictive? Towards efficient expressivity of GNNs | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Isometric Regularization for Manifolds of Functional Data | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| It Helps to Take a Second Opinion: Teaching Smaller LLMs To Deliberate Mutually via Selective Rationale Optimisation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Iterative Label Refinement Matters More than Preference Optimization under Weak Supervision | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Iterative Substructure Extraction for Molecular Relational Learning with Interactive Graph Information Bottleneck | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| JPEG Inspired Deep Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Jailbreak Antidote: Runtime Safety-Utility Balance via Sparse Representation Adjustment in Large Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Jailbreaking as a Reward Misspecification Problem | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Jamba: Hybrid Transformer-Mamba Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| JetFormer: An autoregressive generative model of raw images and text | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Joint Fine-tuning and Conversion of Pretrained Speech and Language Models towards Linear Complexity | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Joint Gradient Balancing for Data Ordering in Finite-Sum Multi-Objective Optimization | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Joint Graph Rewiring and Feature Denoising via Spectral Resonance | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Joint Reward and Policy Learning with Demonstrations and Human Feedback Improves Alignment | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| JudgeBench: A Benchmark for Evaluating LLM-Based Judges | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| JudgeLM: Fine-tuned Large Language Models are Scalable Judges | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Jump Your Steps: Optimizing Sampling Schedule of Discrete Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| K-HALU: Multiple Answer Korean Hallucination Benchmark for Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| KAA: Kolmogorov-Arnold Attention for Enhancing Attentive Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| KAN: Kolmogorov–Arnold Networks | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| KBLaM: Knowledge Base augmented Language Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| KGARevion: An AI Agent for Knowledge-Intensive Biomedical QA | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| KLay: Accelerating Arithmetic Circuits for Neurosymbolic AI | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| KOR-Bench: Benchmarking Language Models on Knowledge-Orthogonal Reasoning Tasks | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Kernel-based Optimally Weighted Conformal Time-Series Prediction | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| KiVA: Kid-inspired Visual Analogies for Testing Large Multimodal Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| KinFormer: Generalizable Dynamical Symbolic Regression for Catalytic Organic Reaction Kinetics | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| KinPFN: Bayesian Approximation of RNA Folding Kinetics using Prior-Data Fitted Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Kinetix: Investigating the Training of General Agents through Open-Ended Physics-Based Control Tasks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Knowing Your Target: Target-Aware Transformer Makes Better Spatio-Temporal Video Grounding | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Knowledge Graph Finetuning Enhances Knowledge Manipulation in Large Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Knowledge Localization: Mission Not Accomplished? Enter Query Localization! | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Kolmogorov-Arnold Transformer | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| KooNPro: A Variance-Aware Koopman Probabilistic Model Enhanced by Neural Process for Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Kronecker Mask and Interpretive Prompts are Language-Action Video Learners | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| L-WISE: Boosting Human Visual Category Learning Through Model-Based Image Selection and Enhancement | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| L3Ms — Lagrange Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LANTERN: Accelerating Visual Autoregressive Models with Relaxed Speculative Decoding | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| LASER: A Neuro-Symbolic Framework for Learning Spatio-Temporal Scene Graphs with Weak Supervision | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LASeR: Towards Diversified and Generalizable Robot Design with Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| LDAdam: Adaptive Optimization from Low-Dimensional Gradient Statistics | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LICO: Large Language Models for In-Context Molecular Optimization | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LICORICE: Label-Efficient Concept-Based Interpretable Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| LIFe-GoM: Generalizable Human Rendering with Learned Iterative Feedback Over Multi-Resolution Gaussians-on-Mesh | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LLM Unlearning via Loss Adjustment with Only Forget Data | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LLM-SR: Scientific Equation Discovery via Programming with Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LLM-based Typed Hyperresolution for Commonsense Reasoning with Knowledge Bases | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| LLM-wrapper: Black-Box Semantic-Aware Adaptation of Vision-Language Models for Referring Expression Comprehension | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLMOPT: Learning to Define and Solve General Optimization Problems from Scratch | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLMs Can Plan Only If We Tell Them | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LLaMA-Omni: Seamless Speech Interaction with Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LLaMaFlex: Many-in-one LLMs via Generalized Pruning and Weight Sharing | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| LLaRA: Supercharging Robot Learning Data for Vision-Language Policy | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| LLaVA-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LOIRE: LifelOng learning on Incremental data via pre-trained language model gRowth Efficiently | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| LR0.FM: LOW-RESOLUTION ZERO-SHOT CLASSIFICATION BENCHMARK FOR FOUNDATION MODELS | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and Captioning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| LaMPlace: Learning to Optimize Cross-Stage Metrics in Macro Placement | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Lambda-Skip Connections: the architectural component that prevents Rank Collapse | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| LancBiO: Dynamic Lanczos-aided Bilevel Optimization via Krylov Subspace | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Langevin Soft Actor-Critic: Efficient Exploration through Uncertainty-Driven Critic Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Language Agents Meet Causality -- Bridging LLMs and Causal World Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Language Guided Skill Discovery | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Language Imbalance Driven Rewarding for Multilingual Self-improving | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Language Model Alignment in Multilingual Trolley Problems | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Language Models Are Implicitly Continuous | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Language Models Learn to Mislead Humans via RLHF | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 1 |
| Language Models Need Inductive Biases to Count Inductively | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Language Models Trained to do Arithmetic Predict Human Risky and Intertemporal Choice |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Language Models are Advanced Anonymizers |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Language Representations Can be What Recommenders Need: Findings and Potentials |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Language models scale reliably with over-training and on downstream tasks |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Language-Assisted Feature Transformation for Anomaly Detection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Language-Image Models with 3D Understanding |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Laplace Sample Information: Data Informativeness Through a Bayesian Lens |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Large (Vision) Language Models are Unsupervised In-Context Learners |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Large Convolutional Model Tuning via Filter Subspace |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Large Language Models Assume People are More Rational than We Really are |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Large Language Models Meet Symbolic Provers for Logical Reasoning Evaluation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Large Language Models Often Say One Thing and Do Another |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Large Language Models are Interpretable Learners |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Large Language Models can Become Strong Self-Detoxifiers |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Large Scale Knowledge Washing |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Large-scale and Fine-grained Vision-language Pre-training for Enhanced CT Image Understanding |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Lasso Bandit with Compatibility Condition on Optimal Arm |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Last Iterate Convergence of Incremental Methods as a Model of Forgetting |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Last-Iterate Convergence Properties of Regret-Matching Algorithms in Games |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Latent Action Pretraining from Videos |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Latent Bayesian Optimization via Autoregressive Normalizing Flows |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Latent Radiance Fields with 3D-aware 2D Representations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Latent Safety-Constrained Policy Approach for Safe Offline Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Latent-EnSF: A Latent Ensemble Score Filter for High-Dimensional Data Assimilation with Sparse Observation Data |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Law of the Weakest Link: Cross Capabilities of Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Lawma: The Power of Specialization for Legal Annotation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| LayerDAG: A Layerwise Autoregressive Diffusion Model for Directed Acyclic Graph Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
5 |
| Layerwise Recurrent Router for Mixture-of-Experts |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Layout-your-3D: Controllable and Precise 3D Generation with 2D Blueprint |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| LeFusion: Controllable Pathology Synthesis via Lesion-Focused Diffusion Models |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Lean-STaR: Learning to Interleave Thinking and Proving |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| LeanAgent: Lifelong Learning for Formal Theorem Proving |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LeanQuant: Accurate and Scalable Large Language Model Quantization with Loss-error-aware Grid |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learn Your Reference Model for Real Good Alignment |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Learn hybrid prototypes for multivariate time series anomaly detection |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Learn-by-interact: A Data-Centric Framework For Self-Adaptive Agents in Realistic Environments |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Learnable Expansion of Graph Operators for Multi-Modal Feature Fusion |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Learned Reference-based Diffusion Sampler for multi-modal distributions |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning 3D Perception from Others' Predictions |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Learning Causal Alignment for Reliable Disease Diagnosis |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Learning Chaos In A Linear Way |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Learning Clustering-based Prototypes for Compositional Zero-Shot Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Color Equivariant Representations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Continually by Spectral Regularization |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Learning Diagrams: A Graphical Language for Compositional Training Regimes |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
4 |
| Learning Distributions of Complex Fluid Simulations with Diffusion Graph Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Learning Diverse Attacks on Large Language Models for Robust Red-Teaming and Safety Tuning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Learning Dynamics of Deep Matrix Factorization Beyond the Edge of Stability |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Learning Dynamics of LLM Finetuning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning Efficient Positional Encodings with Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Learning Equivariant Non-Local Electron Density Functionals |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Evolving Tools for Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Learning Fine-Grained Representations through Textual Token Disentanglement in Composed Video Retrieval |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Gain Map for Inverse Tone Mapping |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning General-purpose Biomedical Volume Representations using Randomized Synthesis |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Learning Generalizable Skills from Offline Multi-Task Data for Multi-Agent Cooperation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Geometric Reasoning Networks For Robot Task And Motion Planning |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Graph Invariance by Harnessing Spuriosity |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Graph Quantized Tokenizers |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Harmonized Representations for Speculative Sampling |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Learning Hierarchical Polynomials of Multiple Nonlinear Features |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Learning High-Degree Parities: The Crucial Role of the Initialization |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Learning How Hard to Think: Input-Adaptive Allocation of LM Computation |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Learning Interleaved Image-Text Comprehension in Vision-Language Large Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Learning Interpretable Hierarchical Dynamical Systems Models from Time Series Data |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning LLM-as-a-Judge for Preference Alignment |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Learning Long Range Dependencies on Graphs via Random Walks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Mask Invariant Mutual Information for Masked Image Modeling |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Molecular Representation in a Cell |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics |
❌ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Learning Partial Graph Matching via Optimal Partial Transport |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Learning Randomized Algorithms with Transformers |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Learning Robust Representations with Long-Term Information for Generalization in Visual Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Shape-Independent Transformation via Spherical Representations for Category-Level Object Pose Estimation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Learning Spatial-Semantic Features for Robust Video Object Segmentation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Learning Spatiotemporal Dynamical Systems from Point Process Observations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Splitting Heuristics in Divide-and-Conquer SAT Solvers with Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Learning Structured Representations by Embedding Class Hierarchy with Fast Optimal Transport |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Structured Universe Graph with Outlier OOD Detection for Partial Matching |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning Successor Features with Distributed Hebbian Temporal Memory |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Learning Task Belief Similarity with Latent Dynamics for Meta-Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning Transformer-based World Models with Contrastive Predictive Coding |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning Video-Conditioned Policy on Unlabelled Data with Joint Embedding Predictive Transformer |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning View-invariant World Models for Visual Robotic Manipulation |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Learning a Fast Mixing Exogenous Block MDP using a Single Trajectory |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning a Neural Solver for Parametric PDEs to Enhance Physics-Informed Methods |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning and aligning single-neuron invariance manifolds in visual cortex |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Learning from End User Data with Shuffled Differential Privacy over Kernel Densities |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Learning from Imperfect Human Feedback: A Tale from Corruption-Robust Dueling |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Learning from negative feedback, or positive feedback or both |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Learning from weak labelers as constraints |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning local equivariant representations for quantum operators |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning mirror maps in policy mirror descent |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning on One Mode: Addressing Multi-modality in Offline Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Learning stochastic dynamics from snapshots through regularized unbalanced optimal transport |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Learning system dynamics without forgetting |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
5 |
| Learning the Complexity of Weakly Noisy Quantum States |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Learning the Optimal Stopping for Early Classification within Finite Horizons via Sequential Probability Ratio Test |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Learning to Adapt Frozen CLIP for Few-Shot Test-Time Domain Adaptation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning to Communicate Through Implicit Communication Channels |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Learning to Discover Regulatory Elements for Gene Expression Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning to Discretize Denoising Diffusion ODEs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning to Explore and Exploit with GNNs for Unsupervised Combinatorial Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Learning to Help in Multi-Class Settings |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Learning to Plan Before Answering: Self-Teaching LLMs to Learn Abstract Plans for Problem Solving |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning to Search from Demonstration Sequences |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning to Select Nodes in Branch and Bound with Sufficient Tree Representation |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Learning to Solve Differential Equation Constrained Optimization Problems |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning to Steer Markovian Agents under Model Uncertainty |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Learning to engineer protein flexibility |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Learning under Temporal Label Noise |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Learning vector fields of differential equations on manifolds with geometrically constrained operator-valued kernels |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Learning-Augmented Frequent Directions |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Learning-Augmented Search Data Structures |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Learning-Guided Rolling Horizon Optimization for Long-Horizon Flexible Job-Shop Scheduling |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Leave-One-Out Stable Conformal Prediction |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Less is More: Masking Elements in Image Condition Features Avoids Content Leakages in Style Transfer Diffusion Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Let Me Grok for You: Accelerating Grokking via Embedding Transfer from a Weaker Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Let SSMs be ConvNets: State-space Modeling with Optimal Tensor Contractions |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Let Your Features Tell The Differences: Understanding Graph Convolution By Feature Splitting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Let the Code LLM Edit Itself When You Edit the Code |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LevAttention: Time, Space and Streaming Efficient Algorithm for Heavy Attentions |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Leveraging Driver Field-of-View for Multimodal Ego-Trajectory Prediction |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Leveraging Flatness to Improve Information-Theoretic Generalization Bounds for SGD |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Leveraging Sub-Optimal Data for Human-in-the-Loop Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Leveraging Variable Sparsity to Refine Pareto Stationarity in Multi-Objective Optimization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| LiFT: Learning to Fine-Tune via Bayesian Parameter Efficient Meta Fine-Tuning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Lie Algebra Canonicalization: Equivariant Neural Operators under arbitrary Lie Groups |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Lightning-Fast Image Inversion and Editing for Text-to-Image Diffusion Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Lightweight Neural App Control |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Lightweight Predictive 3D Gaussian Splats |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Limits of Deep Learning: Sequence Modeling through the Lens of Complexity Theory |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Limits to scalable evaluation at the frontier: LLM as judge won’t beat twice the data |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Linear Mode Connectivity in Differentiable Tree Ensembles |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Linear Multistep Solver Distillation for Fast Sampling of Diffusion Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Linear Partial Gromov-Wasserstein Embedding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Linear Representations of Political Perspective Emerge in Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Linear SCM Identification in the Presence of Confounders and Gaussian Noise |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Linear Spherical Sliced Optimal Transport: A Fast Metric for Comparing Spherical Data |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Linear Transformer Topological Masking with Graph Random Features |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Linear combinations of latents in generative models: subspaces and beyond |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Lines of Thought in Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Lipschitz Bandits in Optimal Space |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| LiveBench: A Challenging, Contamination-Limited LLM Benchmark |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| LiveXiv - A Multi-Modal live benchmark based on Arxiv papers content |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| LoLCATs: On Low-Rank Linearizing of Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| LoR-VP: Low-Rank Visual Prompting for Efficient Vision Model Adaptation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| LoRA-Pro: Are Low-Rank Adapters Properly Optimized? |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| LoRA-X: Bridging Foundation Models with Training-Free Cross-Model Adaptation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LoRA3D: Low-Rank Self-Calibration of 3D Geometric Foundation models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Local Patterns Generalize Better for Novel Anomalies |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic Regression |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Local convergence of simultaneous min-max algorithms to differential equilibrium on Riemannian manifold |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Local-Prompt: Extensible Local Prompts for Few-Shot Out-of-Distribution Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Locality Alignment Improves Vision-Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Locality Sensitive Avatars From Video |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Locality-aware Gaussian Compression for Fast and High-quality Rendering |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Locally Connected Echo State Networks for Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Logic-Logit: A Logic-Based Approach to Choice Modeling |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Logical Consistency of Large Language Models in Fact-Checking |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Logically Consistent Language Models via Neuro-Symbolic Integration |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Long Context Compression with Activation Beacon |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Long-Context Linear System Identification |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Long-Sequence Recommendation Models Need Decoupled Embeddings |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Long-Short Decision Transformer: Bridging Global and Local Dependencies for Generalized Decision-Making |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Long-horizon Visual Instruction Generation with Logic and Attribute Self-reflection |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Long-tailed Adversarial Training with Self-Distillation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Long-time asymptotics of noisy SVGD outside the population limit |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| LongGenBench: Benchmarking Long-Form Generation in Long Context LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| LongMamba: Enhancing Mamba's Long-Context Capabilities via Training-Free Receptive Field Enlargement |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| LongVILA: Scaling Long-Context Visual Language Models for Long Videos |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Longhorn: State Space Models are Amortized Online Learners |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Look Before You Leap: Universal Emergent Mechanism for Retrieval in Language Models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Looking Backward: Retrospective Backward Synthesis for Goal-Conditioned GFlowNets |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
4 |
| Looking Backward: Streaming Video-to-Video Translation with Feature Banks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Looking Inward: Language Models Can Learn About Themselves by Introspection |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Looking into User’s Long-term Interests through the Lens of Conservative Evidential Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Looped Transformers for Length Generalization |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escape, and Network Embedding |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Lossy Compression with Pretrained Diffusion Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| LucidPPN: Unambiguous Prototypical Parts Network for User-centric Interpretable Computer Vision |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Lumina-T2X: Scalable Flow-based Large Diffusion Transformer for Flexible Resolution Generation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MA$^2$E: Addressing Partial Observability in Multi-Agent Reinforcement Learning with Masked Auto-Encoder |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MAD-TD: Model-Augmented Data stabilizes High Update Ratio RL |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| MADGEN: Mass-Spec attends to De Novo Molecular generation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| MAESTRO: Masked Encoding Set Transformer with Self-Distillation |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| MAGE: Model-Level Graph Neural Networks Explanations via Motif-based Graph Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MAGNet: Motif-Agnostic Generation of Molecules from Scaffolds |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MAI: A Multi-turn Aggregation-Iteration Model for Composed Image Retrieval |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| MANTRA: The Manifold Triangulations Assemblage |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| MAP: Multi-Human-Value Alignment Palette |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MAPS: Advancing Multi-Modal Reasoning in Expert-Level Physical Science |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| MAST: model-agnostic sparsified training |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MAVIS: Mathematical Visual Instruction Tuning with an Automatic Data Engine |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| MCNC: Manifold-Constrained Reparameterization for Neural Compression |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MDSGen: Fast and Efficient Masked Diffusion Temporal-Aware Transformers for Open-Domain Sound Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| MELODI: Exploring Memory Compression for Long Contexts |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MGCFNN: A Neural MultiGrid Solver with Novel Fourier Neural Network for High Wave Number Helmholtz Equations |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| MGDA Converges under Generalized Smoothness, Provably |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MGMapNet: Multi-Granularity Representation Learning for End-to-End Vectorized HD Map Construction |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| MIA-DPO: Multi-Image Augmented Direct Preference Optimization For Large Vision-Language Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Masked Image Modeling Representations |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MIND over Body: Adaptive Thinking using Dynamic Computation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MIND: Math Informed syNthetic Dialogues for Pretraining LLMs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MIRACLE 3D: Memory-efficient Integrated Robust Approach for Continual Learning on 3D Point Clouds via Shape Model Construction |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MIRAGE: Evaluating and Explaining Inductive Reasoning Process in Language Models |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MLLM as Retriever: Interactively Learning Multimodal Retrieval for Embodied Agents |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MLPs Learn In-Context on Regression and Classification Tasks |
❌ |
✅ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| MM-EMBED: UNIVERSAL MULTIMODAL RETRIEVAL WITH MULTIMODAL LLMS |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| MMAD: A Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| MMAU: A Massive Multi-Task Audio Understanding and Reasoning Benchmark |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| MMDisCo: Multi-Modal Discriminator-Guided Cooperative Diffusion for Joint Audio and Video Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| MMEgo: Towards Building Egocentric Multimodal LLMs for Video QA |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| MMIE: Massive Multimodal Interleaved Comprehension Benchmark for Large Vision-Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
4 |
| MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| MMQA: Evaluating LLMs with Multi-Table Multi-Hop Complex Questions |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MMR: A Large-scale Benchmark Dataset for Multi-target and Multi-granularity Reasoning Segmentation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| MMRole: A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MMSearch: Unveiling the Potential of Large Models as Multi-modal Search Engines |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| MMTEB: Massive Multilingual Text Embedding Benchmark |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| MOFFlow: Flow Matching for Structure Prediction of Metal-Organic Frameworks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| MOOSE-Chem: Large Language Models for Rediscovering Unseen Chemistry Scientific Hypotheses |
❌ |
❌ |
❌ |
❌ | ❌ | ❌ | ✅ | 1 |
| MOS: Model Synergy for Test-Time Adaptation on LiDAR-Based 3D Object Detection | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| MP-Mat: A 3D-and-Instance-Aware Human Matting and Editing Framework with Multiplane Representation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MQuAKE-Remastered: Multi-Hop Knowledge Editing Can Only Be Advanced with Reliable Evaluations | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | 5 |
| MR-GSM8K: A Meta-Reasoning Benchmark for Large Language Model Evaluation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MRAG-Bench: Vision-Centric Evaluation for Retrieval-Augmented Multimodal Models | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| MTSAM: Multi-Task Fine-Tuning for Segment Anything Model | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MTU-Bench: A Multi-granularity Tool-Use Benchmark for Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| MUSE: Machine Unlearning Six-Way Evaluation for Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| MVTokenFlow: High-quality 4D Content Generation using Multiview Token Flow | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| M^3PC: Test-time Model Predictive Control using Pretrained Masked Trajectory Model | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MaRS: A Fast Sampler for Mean Reverting Diffusion based on ODE and SDE Solvers | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| Machine Unlearning Fails to Remove Data Poisoning Attacks | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Machine Unlearning via Simulated Oracle Matching | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| MaestroMotif: Skill Design from Artificial Intelligence Feedback | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| MagicPIG: LSH Sampling for Efficient LLM Generation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Magnetic Preference Optimization: Achieving Last-iterate Convergence for Language Model Alignment | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Maintaining Structural Integrity in Parameter Spaces for Parameter Efficient Fine-tuning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Making Text Embedders Few-Shot Learners | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Making Transformer Decoders Better Differentiable Indexers | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| MallowsPO: Fine-Tune Your LLM with Preference Dispersions | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MamBEV: Enabling State Space Models to Learn Birds-Eye-View Representations | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| MamKO: Mamba-based Koopman operator for modeling and predictive control | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| MambaExtend: A Training-Free Approach to Improve Long Context Extension of Mamba | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MambaPEFT: Exploring Parameter-Efficient Fine-Tuning for Mamba | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MambaQuant: Quantizing the Mamba Family with Variance Aligned Rotation Methods | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| ManiSkill-HAB: A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Manifold Constraint Reduces Exposure Bias in Accelerated Diffusion Sampling | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Manifold Induced Biases for Zero-shot and Few-shot Detection of Generated Images | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Manifolds, Random Matrices and Spectral Gaps: The geometric phases of generative diffusion | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Many-Objective Multi-Solution Transport | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MarS: a Financial Market Simulation Engine Powered by Generative Foundation Model | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | 4 |
| Mask in the Mirror: Implicit Sparsification | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Masked Diffusion Models are Secretly Time-Agnostic Masked Models and Exploit Inaccurate Categorical Sampling | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Masked Temporal Interpolation Diffusion for Procedure Planning in Instructional Videos | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mastering Task Arithmetic: $\tau$Jp as a Key Indicator for Weight Disentanglement | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MatExpert: Decomposing Materials Discovery By Mimicking Human Experts | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Matcha: Mitigating Graph Structure Shifts with Test-Time Adaptation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MathGAP: Out-of-Distribution Evaluation on Problems with Arbitrarily Complex Proofs | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Matrix Product Sketching via Coordinated Sampling | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Matryoshka Multimodal Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MatryoshkaKV: Adaptive KV Compression via Trainable Orthogonal Projection | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Matérn Kernels for Tunable Implicit Surface Reconstruction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MaxCutPool: differentiable feature-aware Maxcut for pooling in graph neural networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MaxInfoRL: Boosting exploration in reinforcement learning through information gain maximization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Maximizing the Potential of Synthetic Data: Insights from Random Matrix Theory | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| McEval: Massively Multilingual Code Evaluation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| MeToken: Uniform Micro-environment Token Boosts Post-Translational Modification Prediction | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Measuring And Improving Engagement of Text-to-Image Generation Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Measuring And Improving Persuasiveness Of Large Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Measuring Non-Adversarial Reproduction of Training Data in Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Measuring memorization in RLHF for code completion | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Mechanism and Emergence of Stacked Attention Heads in Multi-Layer Transformers | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Mechanistic Permutability: Match Features Across Layers | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Medium-Difficulty Samples Constitute Smoothed Decision Boundary for Knowledge Distillation on Pruned Datasets | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Memory Efficient Transformer Adapter for Dense Predictions | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Memory Mosaics | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| MeshMask: Physics-Based Simulations with Masked Graph Neural Networks | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Meta Flow Matching: Integrating Vector Fields on the Wasserstein Manifold | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Meta-Continual Learning of Neural Fields | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Meta-Dynamical State Space Models for Integrative Neural Data Analysis | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| MetaDesigner: Advancing Artistic Typography through AI-Driven, User-Centric, and Multilingual WordArt Synthesis | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| MetaMetrics: Calibrating Metrics for Generation Tasks Using Human Preferences | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MetaOOD: Automatic Selection of OOD Detection Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Metalic: Meta-Learning In-Context with Protein Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Metamizer: A Versatile Neural Optimizer for Fast and Accurate Physics Simulations | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Metric-Driven Attributions for Vision Transformers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Microcanonical Langevin Ensembles: Advancing the Sampling of Bayesian Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Min-K%++: Improved Baseline for Pre-Training Data Detection from Large Language Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Mind Control through Causal Inference: Predicting Clean Images from Poisoned Data | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Mind the GAP: Glimpse-based Active Perception improves generalization and sample efficiency of visual reasoning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Mind the Gap: Examining the Self-Improvement Capabilities of Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MindSearch: Mimicking Human Minds Elicits Deep AI Searcher | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MindSimulator: Exploring Brain Concept Localization via Synthetic fMRI | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Mini-Monkey: Alleviating the Semantic Sawtooth Effect for Lightweight MLLMs via Complementary Image Pyramid | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mini-batch Coresets for Memory-efficient Language Model Training on Data Mixtures | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MiniPLM: Knowledge Distillation for Pre-training Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Minimal Impact ControlNet: Advancing Multi-ControlNet Integration | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Minimal Variance Model Aggregation: A principled, non-intrusive, and versatile integration of black box models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Minimalistic Predictions for Online Class Constraint Scheduling | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Minimax Optimal Reinforcement Learning with Quasi-Optimism | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Minimax Optimal Two-Stage Algorithm For Moment Estimation Under Covariate Shift | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Mining your own secrets: Diffusion Classifier Scores for Continual Personalization of Text-to-Image Diffusion Models | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Misspecified $Q$-Learning with Sparse Linear Function Approximation: Tight Bounds on Approximation Error | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Mitigate the Gap: Improving Cross-Modal Alignment in CLIP | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mitigating Information Loss in Tree-Based Reinforcement Learning via Direct Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Mitigating Memorization in Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Mitigating Object Hallucination in MLLMs via Data-augmented Phrase-level Alignment | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Mitigating Parameter Interference in Model Merging via Sharpness-Aware Fine-Tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mitigating Reward Over-Optimization in RLHF via Behavior-Supported Regularization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Mitigating Spurious Correlations in Zero-Shot Multimodal Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Mix-CPT: A Domain Adaptation Framework via Decoupling Knowledge Learning and Format Alignment | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MixEval-X: Any-to-any Evaluations from Real-world Data Mixture | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| MixMax: Distributional Robustness in Function Space via Optimal Data Mixtures | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mixture Compressor for Mixture-of-Experts LLMs Gains More | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mixture of Attentions For Speculative Decoding | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Mixture of In-Context Prompters for Tabular PFNs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Mixture of Parrots: Experts improve memorization more than reasoning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Mixture-of-Agents Enhances Large Language Model Capabilities | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MoDGS: Dynamic Gaussian Splatting from Casually-captured Monocular Videos with Depth Priors | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| MoDeGPT: Modular Decomposition for Large Language Model Compression | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MoLEx: Mixture of Layer Experts for Fine-tuning with Sparse Upcycling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Modality-Specialized Synergizers for Interleaved Vision-Language Generalists | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Model Editing as a Robust and Denoised variant of DPO: A Case Study on Toxicity | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Model Equality Testing: Which Model is this API Serving? | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Model Risk-sensitive Offline Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Model merging with SVD to tie the Knots | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Model-Agnostic Knowledge Guided Correction for Improved Neural Surrogate Rollout | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Model-Free Offline Reinforcement Learning with Enhanced Robustness | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Model-agnostic meta-learners for estimating heterogeneous treatment effects over time | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Model-based Offline Reinforcement Learning with Lower Expectile Q-Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Model-based RL as a Minimalist Approach to Horizon-Free and Second-Order Bounds | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Modeling Complex System Dynamics with Flow Matching Across Time and Conditions | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Modeling Fine-Grained Hand-Object Dynamics for Egocentric Video Representation Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Modeling Future Conversation Turns to Teach LLMs to Ask Clarifying Questions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Modeling Unseen Environments with Language-guided Composable Causal Components in Reinforcement Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Modeling dynamic social vision highlights gaps between deep learning and humans | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MolSpectra: Pre-training 3D Molecular Representation with Multi-modal Energy Spectra | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Moner: Motion Correction in Undersampled Radial MRI with Unsupervised Neural Representation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Monet: Mixture of Monosemantic Experts for Transformers | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Monitoring Latent World States in Language Models with Propositional Probes | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 5 |
| Monte Carlo Planning with Large Language Model for Text-Based Game Agents | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 5 |
| Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Moral Alignment for LLM Agents | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| More Experts Than Galaxies: Conditionally-Overlapping Experts with Biologically-Inspired Fixed Routing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| More RLHF, More Trust? On The Impact of Preference Alignment On Trustworthiness | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Morphing Tokens Draw Strong Masked Image Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| MorphoDiff: Cellular Morphology Painting with Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| MotherNet: Fast Training and Inference via Hyper-Network Transformers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Motion Control of High-Dimensional Musculoskeletal Systems with Hierarchical Model-Based Planning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Motion-Agent: A Conversational Framework for Human Motion Generation with LLMs | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| MotionClone: Training-Free Motion Cloning for Controllable Video Generation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MotionDreamer: One-to-Many Motion Synthesis with Localized Generative Masked Transformer | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequences | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| MrSteve: Instruction-Following Agents in Minecraft with What-Where-When Memory | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| MrT5: Dynamic Token Merging for Efficient Byte-level Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| MuHBoost: Multi-Label Boosting For Practical Longitudinal Human Behavior Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| MuPT: A Generative Symbolic Music Pretrained Transformer | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 4 |
| Multi-Dimensional Conformal Prediction | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Multi-Draft Speculative Sampling: Canonical Decomposition and Theoretical Limits | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multi-Field Adaptive Retrieval | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Multi-Label Node Classification with Label Influence Propagation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multi-Label Test-Time Adaptation with Bound Entropy Minimization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Multi-Modal and Multi-Attribute Generation of Single Cells with CFGen | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Multi-Perspective Data Augmentation for Few-shot Object Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multi-Resolution Decomposable Diffusion Model for Non-Stationary Time Series Anomaly Detection | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Multi-Reward as Condition for Instruction-based Image Editing | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Multi-Robot Motion Planning with Diffusion Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Multi-Scale Fusion for Object Representation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Multi-Task Corrupted Prediction for Learning Robust Audio-Visual Speech Representation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multi-Task Dense Predictions via Unleashing the Power of Diffusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Multi-agent cooperation through learning-aware policy gradients | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Multi-domain Distribution Learning for De Novo Drug Design | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Multi-level Certified Defense Against Poisoning Attacks in Offline Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Multi-modal Agent Tuning: Building a VLM-Driven Agent for Efficient Tool Usage | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Multi-modal brain encoding models for multi-modal stimuli | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multi-objective Differentiable Neural Architecture Search | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Multi-objective antibody design with constrained preference optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Multi-session, multi-task neural decoding from distinct cell-types and brain regions | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multilevel Generative Samplers for Investigating Critical Phenomena | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multimodal Lego: Model Merging and Fine-Tuning Across Topologies and Modalities in Biomedicine | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multimodal Quantitative Language for Generative Recommendation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multimodal Situational Safety | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Multimodal Unsupervised Domain Generalization by Retrieving Across the Modality Gap | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Multimodality Helps Few-shot 3D Point Cloud Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multiple Heads are Better than One: Mixture of Modality Knowledge Experts for Entity Representation Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Multiview Equivariance Improves 3D Correspondence Understanding with Minimal Feature Finetuning | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| MuseGNN: Forming Scalable, Convergent GNN Layers that Minimize a Sampling-Based Energy | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Mutual Effort for Efficiency: A Similarity-based Token Pruning for Vision Transformers in Self-Supervised Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solver | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| N-ForGOT: Towards Not-forgetting and Generalization of Open Temporal Graph Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| NEAR: A Training-Free Pre-Estimator of Machine Learning Model Performance | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| NExUME: Adaptive Training and Inference for DNNs under Intermittent Power Environments | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | 4 |
| NL-Eye: Abductive NLI For Images | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| NNsight and NDIF: Democratizing Access to Open-Weight Foundation Model Internals | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| NRGBoost: Energy-Based Generative Boosted Trees | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| NUDGE: Lightweight Non-Parametric Fine-Tuning of Embeddings for Retrieval | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| NarrativeBridge: Enhancing Video Captioning with Causal-Temporal Narrative | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Narrowing Information Bottleneck Theory for Multimodal Image-Text Representations Interpretability | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Natural Language Inference Improves Compositionality in Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| NatureLM-audio: an Audio-Language Foundation Model for Bioacoustics | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Navigation-Guided Sparse Scene Representation for End-to-End Autonomous Driving | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| NeRAF: 3D Scene Infused Neural Radiance and Acoustic Fields | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| NeSyC: A Neuro-symbolic Continual Learner For Complex Embodied Tasks in Open Domains | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Near, far: Patch-ordering enhances vision foundation models' scene understanding | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Near-Exact Privacy Amplification for Matrix Mechanisms | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| Near-Optimal Policy Identification in Robust Constrained Markov Decision Processes via Epigraph Form | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Near-optimal Active Regression of Single-Index Models | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Needle Threading: Can LLMs Follow Threads Through Near-Million-Scale Haystacks? | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Nesterov acceleration in benignly non-convex landscapes | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| NetFormer: An interpretable model for recovering dynamical connectivity in neuronal population dynamics | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| NetMoE: Accelerating MoE Training through Dynamic Sample Placement | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| NeurFlow: Interpreting Neural Networks through Neuron Groups and Functional Interactions | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Neural Approximate Mirror Maps for Constrained Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Neural Causal Graph for Interpretable and Intervenable Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Neural Context Flows for Meta-Learning of Dynamical Systems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Neural Dueling Bandits: Preference-Based Optimization with Human Feedback | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Neural Eulerian Scene Flow Fields | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Neural Exploratory Landscape Analysis for Meta-Black-Box-Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Neural Fluid Simulation on Geometric Surfaces | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Neural Functions for Learning Periodic Signal | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Neural Interactive Proofs | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Neural Multi-Objective Combinatorial Optimization via Graph-Image Multimodal Fusion | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive Fine-tuning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Neural Phylogeny: Fine-Tuning Relationship Detection among Neural Networks | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Neural Sampling from Boltzmann Densities: Fisher-Rao Curves in the Wasserstein Geometry | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Neural Spacetimes for DAG Representation Learning | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Neural Stochastic Differential Equations for Uncertainty-Aware Offline RL | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Neural Wave Equation for Irregularly Sampled Sequence Data | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Neural networks on Symmetric Spaces of Noncompact Type | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| NeuralPlane: Structured 3D Reconstruction in Planar Primitives with Neural Fields | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Neuralized Markov Random Field for Interaction-Aware Stochastic Human Trajectory Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Neuron Platonic Intrinsic Representation From Dynamics Using Contrastive Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Neuron based Personality Trait Induction in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Neuron-based Multifractal Analysis of Neuron Interaction Dynamics in Large Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Neuroplastic Expansion in Deep Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| New Algorithms for the Learning-Augmented k-means Problem | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| NextBestPath: Efficient 3D Mapping of Unseen Environments | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| No Equations Needed: Learning System Dynamics Without Relying on Closed-Form ODEs | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| No Free Lunch: Fundamental Limits of Learning Non-Hallucinating Generative Models | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| No Location Left Behind: Measuring and Improving the Fairness of Implicit Representations for Earth Data | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| No Need to Talk: Asynchronous Mixture of Language Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| No Preference Left Behind: Group Distributional Preference Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| No Training, No Problem: Rethinking Classifier-Free Guidance for Diffusion Models | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Node Identifiers: Compact, Discrete Representations for Efficient Graph Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Node Similarities under Random Projections: Limits and Pathological Cases | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Node-Time Conditional Prompt Learning in Dynamic Graphs | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Noise Separation guided Candidate Label Reconstruction for Noisy Partial Label Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Noise-conditioned Energy-based Annealed Rewards (NEAR): A Generative Framework for Imitation Learning from Observation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Noisy Test-Time Adaptation in Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Non-Adversarial Inverse Reinforcement Learning via Successor Feature Matching | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Non-Equilibrium Dynamics of Hybrid Continuous-Discrete Ground-State Sampling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Non-myopic Generation of Language Models for Reasoning and Planning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson–Romberg Extrapolation | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Nonconvex Stochastic Optimization under Heavy-Tailed Noises: Optimal Convergence without Gradient Clipping | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Nonlinear Sequence Embedding by Monotone Variational Inequality | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Nonlinear multiregion neural dynamics with parametric impulse response communication channels | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Not All LLM-Generated Data Are Equal: Rethinking Data Weighting in Text Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Not All Language Model Features Are One-Dimensionally Linear | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Not-So-Optimal Transport Flows for 3D Point Cloud Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| NovelQA: Benchmarking Question Answering on Documents Exceeding 200K Tokens | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Null Counterfactual Factor Interactions for Goal-Conditioned Reinforcement Learning | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Number Cookbook: Number Understanding of Language Models and How to Improve It | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| NutriBench: A Dataset for Evaluating Large Language Models in Nutrition Estimation from Meal Descriptions | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| O(d/T) Convergence Theory for Diffusion Probabilistic Models under Minimal Assumptions | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| OASIS Uncovers: High-Quality T2I Models, Same Old Stereotypes | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| OBI-Bench: Can LMMs Aid in Study of Ancient Script on Oracle Bones? | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| OCCAM: Towards Cost-Efficient and Accuracy-Aware Classification Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| OCEAN: Offline Chain-of-thought Evaluation and Alignment in Large Language Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| ODE-based Smoothing Neural Network for Reinforcement Learning Tasks | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| OGBench: Benchmarking Offline Goal-Conditioned RL | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| OLMoE: Open Mixture-of-Experts Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| OMG: Opacity Matters in Material Modeling with Gaussian Splatting | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 4 |
| ONLINE EPSILON NET & PIERCING SET FOR GEOMETRIC CONCEPTS | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| OPTAMI: Global Superlinear Convergence of High-order Methods | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| ORSO: Accelerating Reward Design via Online Reward Selection and Policy Optimization | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| OS-ATLAS: Foundation Action Model for Generalist GUI Agents | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| OSCAR: Operating System Control via State-Aware Reasoning and Re-Planning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| OSDA Agent: Leveraging Large Language Models for De Novo Design of Organic Structure Directing Agents | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitting | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| OVTR: End-to-End Open-Vocabulary Multiple Object Tracking with Transformer | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Object-Centric Pretraining via Target Encoder Bootstrapping | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| ObscuraCoder: Powering Efficient Code LM Pre-Training Via Obfuscation Grounding | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| OccProphet: Pushing the Efficiency Frontier of Camera-Only 4D Occupancy Forecasting with an Observer-Forecaster-Refiner Framework | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Occlusion-aware Non-Rigid Point Cloud Registration via Unsupervised Neural Deformation Correntropy | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Offline Hierarchical Reinforcement Learning via Inverse Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Offline Model-Based Optimization by Learning to Rank | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Offline RL in Regular Decision Processes: Sample Efficiency via Language Metrics | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Offline RL with Smooth OOD Generalization in Convex Hull and its Neighborhood | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Omni-MATH: A Universal Olympiad Level Mathematic Benchmark for Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| OmniBind: Large-scale Omni Multimodal Representation via Binding Spaces | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| OmniEdit: Building Image Editing Generalist Models Through Specialist Supervision | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| OmniKV: Dynamic Context Selection for Efficient Long-Context LLMs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| OmniPhysGS: 3D Constitutive Gaussians for General Physics-Based Dynamics Generation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| OmniRe: Omni Urban Scene Reconstruction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| OmniSep: Unified Omni-Modality Sound Separation with Query-Mixup | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| OmnixR: Evaluating Omni-modality Language Models on Reasoning across Modalities |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
3 |
| On Bits and Bandits: Quantifying the Regret-Information Trade-off |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On Calibration of LLM-based Guard Models for Reliable Content Moderation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On Conformal Isometry of Grid Cells: Learning Distance-Preserving Position Embedding |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On Designing General and Expressive Quantum Graph Neural Networks with Applications to MILP Instance Representation |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| On Disentangled Training for Nonlinear Transform in Learned Image Compression |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| On Evaluating the Durability of Safeguards for Open-Weight LLMs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On Generalization Across Environments In Multi-Objective Reinforcement Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On Large Language Model Continual Unlearning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| On Linear Representations and Pretraining Data Frequency in Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On Minimizing Adversarial Counterfactual Error in Adversarial Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On Quantizing Neural Representation for Variable-Rate Video Coding |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On Rollouts in Model-Based Reinforcement Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| On Scaling Up 3D Gaussian Splatting Training |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| On Speeding Up Language Model Evaluation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| On Statistical Rates of Conditional Diffusion Transformers: Approximation, Estimation and Minimax Optimality |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| On Stochastic Contextual Bandits with Knapsacks in Small Budget Regime |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| On a Connection Between Imitation Learning and RLHF |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| On the Adversarial Vulnerability of Label-Free Test-Time Adaptation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| On the Almost Sure Convergence of the Stochastic Three Points Algorithm |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| On the Benefits of Attribute-Driven Graph Domain Adaptation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On the Benefits of Memory for Modeling Time-Dependent PDEs |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| On the Byzantine-Resilience of Distillation-Based Federated Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| On the Completeness of Invariant Geometric Deep Learning Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| On the Convergence of No-Regret Dynamics in Information Retrieval Games with Proportional Ranking Functions |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| On the Crucial Role of Initialization for Matrix Factorization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| On the Expressive Power of Sparse Geometric MPNNs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On the Expressiveness of Rational ReLU Neural Networks With Bounded Depth |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| On the Feature Learning in Diffusion Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| On the Fourier analysis in the SO(3) space : the EquiLoPO Network |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| On the Hölder Stability of Multiset and Graph Neural Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| On the Identification of Temporal Causal Representation with Instantaneous Dependence |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| On the Importance of Language-driven Representation Learning for Heterogeneous Federated Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| On the Linear Speedup of Personalized Federated Reinforcement Learning with Shared Representations |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| On the Modeling Capabilities of Large Language Models for Sequential Decision Making |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
1 |
| On the Optimal Memorization Capacity of Transformers |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| On the Optimization Landscape of Low Rank Adaptation Methods for Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On the Optimization and Generalization of Two-layer Transformers with Sign Gradient Descent |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| On the Performance Analysis of Momentum Method: A Frequency Domain Perspective |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On the Price of Differential Privacy for Hierarchical Clustering |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On the Relation between Trainability and Dequantization of Variational Quantum Learning Models |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| On the Role of Attention Heads in Large Language Model Safety |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| On the Transfer of Object-Centric Representation Learning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| On the expressiveness and spectral bias of KANs |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
3 |
| On the self-verification limitations of large language models on reasoning and planning tasks |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| On-the-fly Preference Alignment via Principle-Guided Decoding |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaptation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| One Hundred Neural Networks and Brains Watching Videos: Lessons from Alignment |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| One Model Transfer to All: On Robust Jailbreak Prompts Generation against LLMs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| One Step Diffusion via Shortcut Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| One for all and all for one: Efficient computation of partial Wasserstein distances on the line |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| One-for-All Few-Shot Anomaly Detection via Instance-Induced Prompt Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Online Clustering with Nearly Optimal Consistency |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| Online Preference Alignment for Language Models via Count-based Exploration |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Online Reinforcement Learning in Non-Stationary Context-Driven Environments |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein Regularization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Online-to-Offline RL for Agent Alignment |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Open-CK: A Large Multi-Physics Fields Coupling benchmarks in Combustion Kinetics |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Open-Set Graph Anomaly Detection via Normal Structure Regularisation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Open-Vocabulary Customization from CLIP via Data-Free Knowledge Distillation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Open-World Reinforcement Learning over Long Short-Term Imagination |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Open-YOLO 3D: Towards Fast and Accurate Open-Vocabulary 3D Instance Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
4 |
| OpenHands: An Open Platform for AI Software Developers as Generalist Agents |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
4 |
| OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| OpenPRM: Building Open-domain Process-based Reward Models with Preference Trees |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| OpenRCA: Can Large Language Models Locate the Root Cause of Software Failures? |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Operator Deep Smoothing for Implied Volatility |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling |
❌ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
2 |
| Optimal Brain Apoptosis |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Optimal Flow Transport and its Entropic Regularization: a GPU-friendly Matrix Iterative Algorithm for Flow Balance Satisfaction |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Optimal Learning of Kernel Logistic Regression for Complex Classification Scenarios |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Optimal Non-Asymptotic Rates of Value Iteration for Average-Reward Markov Decision Processes |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Optimal Protocols for Continual Learning via Statistical Physics and Control Theory |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Optimal Strong Regret and Violation in Constrained MDPs via Policy Optimization |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Optimal Transport for Time Series Imputation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Optimality and Adaptivity of Deep Neural Features for Instrumental Variable Regression |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Optimality of Matrix Mechanism on $\ell_p^p$-metric |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Optimistic Games for Combinatorial Bayesian Optimization with Application to Protein Design |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Optimization by Parallel Quasi-Quantum Annealing with Gradient-Based Sampling |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Optimized Multi-Token Joint Decoding With Auxiliary Model for LLM Inference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Optimizing $(L_0, L_1)$-Smooth Functions by Gradient Methods |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Optimizing 4D Gaussians for Dynamic Scene Video from Single Landscape Images |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Optimizing Backward Policies in GFlowNets via Trajectory Likelihood Maximization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Optimizing Neural Network Representations of Boolean Networks |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Optimizing Posterior Samples for Bayesian Optimization via Rootfinding |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Optimizing importance weighting in the presence of sub-population shifts |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| OptionZero: Planning with Learned Options |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Oracle efficient truncated statistics |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Order-aware Interactive Segmentation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Oscillatory State-Space Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Out-of-distribution Generalization for Total Variation based Invariant Risk Minimization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Outlier Synthesis via Hamiltonian Monte Carlo for Out-of-Distribution Detection |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Overcoming Lower-Level Constraints in Bilevel Optimization: A Novel Approach with Regularized Gap Functions |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Overcoming Slow Decision Frequencies in Continuous Control: Model-Based Sequence Reinforcement Learning for Model-Free Control |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| OvercookedV2: Rethinking Overcooked for Zero-Shot Coordination |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| P-SPIKESSM: HARNESSING PROBABILISTIC SPIKING STATE SPACE MODELS FOR LONG-RANGE DEPENDENCY TASKS |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PABBO: Preferential Amortized Black-Box Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PAD: Personalized Alignment of LLMs at Decoding-time |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| PADRe: A Unifying Polynomial Attention Drop-in Replacement for Efficient Vision Transformer |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PAL: Sample-Efficient Personalized Reward Modeling for Pluralistic Alignment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PALMBENCH: A COMPREHENSIVE BENCHMARK OF COMPRESSED LARGE LANGUAGE MODELS ON MOBILE PLATFORMS |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| PEAR: Primitive Enabled Adaptive Relabeling for Boosting Hierarchical Reinforcement Learning |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| PEARL: Parallel Speculative Decoding with Adaptive Draft Length |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| PEARL: Towards Permutation-Resilient LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PETRA: Parallel End-to-end Training with Reversible Architectures |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PFDiff: Training-Free Acceleration of Diffusion Models Combining Past and Future Scores |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PFGuard: A Generative Framework with Privacy and Fairness Safeguards |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
4 |
| PICASO: Permutation-Invariant Context Composition with State Space Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PIED: Physics-Informed Experimental Design for Inverse Problems |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| PIG: Physics-Informed Gaussians as Adaptive Parametric Mesh Representations |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| PIN: Prolate Spheroidal Wave Function-based Implicit Neural Representations |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| PINP: Physics-Informed Neural Predictor with latent estimation of fluid flows |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PIORF: Physics-Informed Ollivier-Ricci Flow for Long–Range Interactions in Mesh Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PN-GAIL: Leveraging Non-optimal Information from Imperfect Demonstrations |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| POGEMA: A Benchmark Platform for Cooperative Multi-Agent Pathfinding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| POTEC: Off-Policy Contextual Bandits for Large Action Spaces via Policy Decomposition |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PPT: Patch Order Do Matters In Time Series Pretext Task |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| PQMass: Probabilistic Assessment of the Quality of Generative Models using Probability Mass Estimation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PRDP: Progressively Refined Differentiable Physics |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
✅ |
4 |
| PRISM: Privacy-Preserving Improved Stochastic Masking for Federated Generative Models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PT-T2I/V: An Efficient Proxy-Tokenized Diffusion Transformer for Text-to-Image/Video-Task |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PWM: Policy Learning with Multi-Task World Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| PaCA: Partial Connection Adaptation for Efficient Fine-Tuning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| PaLD: Detection of Text Partially Written by Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PaPaGei: Open Foundation Models for Optical Physiological Signals |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PaRa: Personalizing Text-to-Image Diffusion via Parameter Rank Reduction |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Pacmann: Efficient Private Approximate Nearest Neighbor Search |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Painting with Words: Elevating Detailed Image Captioning with Benchmark and Alignment Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Palu: KV-Cache Compression with Low-Rank Projection |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ParFam -- (Neural Guided) Symbolic Regression via Continuous Global Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ParaSolver: A Hierarchical Parallel Integral Solver for Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Param$\Delta$ for Direct Mixing: Post-Train Large Language Model At Zero Cost |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Parameter Expanded Stochastic Gradient Markov Chain Monte Carlo |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Parameter and Memory Efficient Pretraining via Low-rank Riemannian Optimization |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Pareto Prompt Optimization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ParetoFlow: Guided Flows in Multi-Objective Optimization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Partial Gromov-Wasserstein Metric |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Partially Observed Trajectory Inference using Optimal Transport and a Dynamics Prior |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| PathGen-1.6M: 1.6 Million Pathology Image-text Pairs Generation through Multi-agent Collaboration |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Pedestrian Motion Reconstruction: A Large-scale Benchmark via Mixed Reality Rendering with Multiple Perspectives and Modalities |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| PeriodWave: Multi-Period Flow Matching for High-Fidelity Waveform Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Periodic Materials Generation using Text-Guided Joint Diffusion Model |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Perm: A Parametric Representation for Multi-Style 3D Hair Modeling |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| Permute-and-Flip: An optimally stable and watermarkable decoder for LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Perplexity Trap: PLM-Based Retrievers Overrate Low Perplexity Documents |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Persistent Pre-training Poisoning of LLMs |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PersonalLLM: Tailoring LLMs to Individual Preferences |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Personality Alignment of Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Personalized Representation from Personalized Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Personalized Visual Instruction Tuning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Perturbation-Restrained Sequential Model Editing |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| PerturboLLaVA: Reducing Multimodal Hallucinations with Perturbative Visual Training |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| PharmacoMatch: Efficient 3D Pharmacophore Screening via Neural Subgraph Matching |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| PhiNets: Brain-inspired Non-contrastive Learning Based on Temporal Prediction Hypothesis |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PhyMPGN: Physics-encoded Message Passing Graph Network for spatiotemporal PDE systems |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
5 |
| PhyloLM: Inferring the Phylogeny of Large Language Models and Predicting their Performances in Benchmarks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| PhyloVAE: Unsupervised Learning of Phylogenetic Trees via Variational Autoencoders |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| PhysPDE: Rethinking PDE Discovery and a Physical HYpothesis Selection Benchmark |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
5 |
| Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Physics of Language Models: Part 2.2, How to Learn From Mistakes on Grade-School Math Problems |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Physics of Language Models: Part 3.2, Knowledge Manipulation |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Physics-Informed Deep Inverse Operator Networks for Solving PDE Inverse Problems |
❌ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Physics-Informed Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Physics-aligned field reconstruction with diffusion bridge |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Physics-informed Temporal Difference Metric Learning for Robot Motion Planning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Physiome-ODE: A Benchmark for Irregularly Sampled Multivariate Time-Series Forecasting Based on Biological ODEs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PiCO: Peer Review in LLMs based on Consistency Optimization |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| PianoMotion10M: Dataset and Benchmark for Hand Motion Generation in Piano Performance |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PivotMesh: Generic 3D Mesh Generation via Pivot Vertices Guidance |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Planning Anything with Rigor: General-Purpose Zero-Shot Planning with LLM-based Formalized Programming |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Planning in Natural Language Improves LLM Search for Code Generation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Plastic Learning with Deep Fourier Features |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PnP-Flow: Plug-and-Play Image Restoration with Flow Matching |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Point Cluster: A Compact Message Unit for Communication-Efficient Collaborative Perception |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Point-SAM: Promptable 3D Segmentation Model for Point Clouds |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Point-based Instance Completion with Scene Constraints |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| PointOBB-v2: Towards Simpler, Faster, and Stronger Single Point Supervised Oriented Object Detection |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Poison-splat: Computation Cost Attack on 3D Gaussian Splatting |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Poisson-Dirac Neural Networks for Modeling Coupled Dynamical Systems across Domains |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
5 |
| PolaFormer: Polarity-aware Linear Attention for Vision Transformers |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Policy Decorator: Model-Agnostic Online Refinement for Large Policy Model |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Policy Design in Long-run Welfare Dynamics |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| Policy Optimization under Imperfect Human Interactions with Agent-Gated Shared Autonomy |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| PolyNet: Learning Diverse Solution Strategies for Neural Combinatorial Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PolyPythias: Stability and Outliers across Fifty Language Model Pre-Training Runs |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| PolyhedronNet: Representation Learning for Polyhedra with Surface-attributed Graph |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Polyrating: A Cost-Effective and Bias-Aware Rating System for LLM Evaluation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| PooDLe🐩: Pooled and dense self-supervised learning from naturalistic videos |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Population Transformer: Learning Population-level Representations of Neural Activity |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Positive-Unlabeled Diffusion Models for Preventing Sensitive Data Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Post-hoc Reward Calibration: A Case Study on Length Bias |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| PostCast: Generalizable Postprocessing for Precipitation Nowcasting via Unsupervised Blurriness Modeling |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| PostEdit: Posterior Sampling for Efficient Zero-Shot Image Editing |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Posterior-Mean Rectified Flow: Towards Minimum MSE Photo-Realistic Image Restoration |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Preble: Efficient Distributed Prompt Scheduling for LLM Serving |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Precedence-Constrained Winter Value for Effective Graph Data Valuation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Precise Localization of Memories: A Fine-grained Neuron-level Knowledge Editing Technique for LLMs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Precise Parameter Localization for Textual Generation in Diffusion Models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Predicate Hierarchies Improve Few-Shot State Classification |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Predicting the Energy Landscape of Stochastic Dynamical System via Physics-informed Self-supervised Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Prediction Risk and Estimation Risk of the Ridgeless Least Squares Estimator under General Assumptions on Regression Errors |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Predictive Uncertainty Quantification for Bird's Eye View Segmentation: A Benchmark and Novel Loss Function |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Preference Diffusion for Recommendation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Preference Elicitation for Offline Reinforcement Learning |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Preference Optimization for Reasoning with Pseudo Feedback |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Preserving Deep Representations in One-Shot Pruning: A Hessian-Free Second-Order Optimization Framework |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Preserving Diversity in Supervised Fine-Tuning of Large Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Presto! Distilling Steps and Layers for Accelerating Music Generation |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Prevalence of Negative Transfer in Continual Reinforcement Learning: Analyses and a Simple Baseline |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Prioritized Generative Replay |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Privacy Auditing of Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Privacy-Aware Lifelong Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Privacy-Preserving Personalized Federated Prompt Learning for Multimodal Large Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Private Mechanism Design via Quantile Estimation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Privately Counting Partially Ordered Data |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
2 |
| ProAdvPrompter: A Two-Stage Journey to Effective Adversarial Prompting for LLMs |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance |
✅ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
4 |
| Proactive Privacy Amnesia for Large Language Models: Safeguarding PII with Negligible Impact on Model Utility |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Probabilistic Conformal Prediction with Approximate Conditional Validity |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Probabilistic Geometric Principal Component Analysis with application to neural data |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Probabilistic Language-Image Pre-Training |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Probabilistic Learning to Defer: Handling Missing Expert Annotations and Controlling Workload Distribution |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Probabilistic Neural Pruning via Sparsity Evolutionary Fokker-Planck-Kolmogorov Equation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Probe Pruning: Accelerating LLMs through Dynamic Pruning via Model-Probing |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Probe before You Talk: Towards Black-box Defense against Backdoor Unalignment for Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Probing the Latent Hierarchical Structure of Data via Diffusion Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Problem-Parameter-Free Federated Learning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Procedural Knowledge in Pretraining Drives Reasoning in Large Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Procedural Synthesis of Synthesizable Molecules |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Process Reward Model with Q-value Rankings |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Programming Refusal with Conditional Activation Steering |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Progress or Regress? Self-Improvement Reversal in Post-training |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Progressive Compositionality in Text-to-Image Generative Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Progressive Compression with Universally Quantized Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Progressive Mixed-Precision Decoding for Efficient LLM Inference |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Progressive Parameter Efficient Transfer Learning for Semantic Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Progressive Token Length Scaling in Transformer Encoders for Efficient Universal Segmentation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ | ✅ | 5 |
| Progressive distillation induces an implicit curriculum | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Projection Head is Secretly an Information Bottleneck | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Prompt as Knowledge Bank: Boost Vision-language model via Structural Representation for zero-shot medical detection | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Prompting Fairness: Integrating Causality to Debias Large Language Models | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 4 |
| Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| ProtComposer: Compositional Protein Structure Generation with 3D Ellipsoids | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ProtPainter: Draw or Drag Protein via Topology-guided Diffusion | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Protecting against simultaneous data poisoning attacks | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Protein Language Model Fitness is a Matter of Preference | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ProteinBench: A Holistic Evaluation of Protein Foundation Models | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Proteina: Scaling Flow-based Protein Structure Generative Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ProtoSnap: Prototype Alignment For Cuneiform Signs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Prototype antithesis for biological few-shot class-incremental learning | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Provable Benefit of Annealed Langevin Monte Carlo for Non-log-concave Sampling | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Provable Convergence Bounds for Hybrid Dynamical Sampling and Optimization | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Provable Convergence and Limitations of Geometric Tempering for Langevin Dynamics | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Provable Robust Overfitting Mitigation in Wasserstein Distributionally Robust Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Provable Uncertainty Decomposition via Higher-Order Calibration | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Provable unlearning in topic modeling and downstream tasks | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Provable weak-to-strong generalization via benign overfitting | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Provably Accurate Shapley Value Estimation via Leverage Score Sampling | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Provably Robust Explainable Graph Neural Networks against Graph Perturbation Attacks | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Provably Safeguarding a Classifier from OOD and Adversarial Samples | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Provence: efficient and robust context pruning for retrieval-augmented generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Proving Olympiad Inequalities by Synergizing LLMs and Symbolic Reasoning | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Proximal Mapping Loss: Understanding Loss Functions in Crowd Counting & Localization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Proxy Denoising for Source-Free Domain Adaptation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| PseDet: Revisiting the Power of Pseudo Label in Incremental Object Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Pursuing Better Decision Boundaries for Long-Tailed Object Detection via Category Information Amount | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Pursuing Feature Separation based on Neural Collapse for Out-of-Distribution Detection | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Pushing the Limits of All-Atom Geometric Graph Neural Networks: Pre-Training, Scaling, and Zero-Shot Transfer | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| PuzzleFusion++: Auto-agglomerative 3D Fracture Assembly by Denoise and Verify | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| PvNeXt: Rethinking Network Design and Temporal Motion for Point Cloud Video Recognition | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Pyramidal Flow Matching for Efficient Video Generative Modeling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Q-Adapter: Customizing Pre-trained LLMs to New Preferences with Forgetting Mitigation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| QA-Calibration of Language Model Confidence Scores | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| QERA: an Analytical Framework for Quantization Error Reconstruction | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| QMP: Q-switch Mixture of Policies for Multi-Task Behavior Sharing | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| QP-SNN: Quantized and Pruned Spiking Neural Networks | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| QPM: Discrete Optimization for Globally Interpretable Image Classification | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Qinco2: Vector Compression and Search with Improved Implicit Neural Codebooks | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| QuaDiM: A Conditional Diffusion Model For Quantum State Property Estimation | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Quality Measures for Dynamic Graph Generative Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Quality over Quantity in Attention Layers: When Adding More Heads Hurts | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Quamba: A Post-Training Quantization Recipe for Selective State Space Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Quantifying Generalization Complexity for Large Language Models | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Quantitative Approximation for Neural Operators in Nonlinear Parabolic Equations | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Quantized Spike-driven Transformer | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Quantum (Inspired) $D^2$-sampling with Applications | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | 3 |
| Quantum-PEFT: Ultra parameter-efficient fine-tuning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Query-based Knowledge Transfer for Heterogeneous Learning Environments | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Quest: Query-centric Data Synthesis Approach for Long-context Scaling of Large Language Model | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| R-Sparse: Rank-Aware Activation Sparsity for Efficient LLM Inference | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| R2Det: Exploring Relaxed Rotation Equivariance in 2D Object Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RA-TTA: Retrieval-Augmented Test-Time Adaptation for Vision-Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| RAG-DDR: Optimizing Retrieval-Augmented Generation Using Differentiable Data Rewards | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RAG-SR: Retrieval-Augmented Generation for Neural Symbolic Regression | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| RAPID: Retrieval Augmented Training of Differentially Private Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RB-Modulation: Training-Free Stylization using Reference-Based Modulation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| REBIND: Enhancing Ground-state Molecular Conformation Prediction via Force-Based Graph Rewiring | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RECAST: Reparameterized, Compact weight Adaptation for Sequential Tasks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| REEF: Representation Encoding Fingerprints for Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| REFINE: Inversion-Free Backdoor Defense via Model Reprogramming | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context in New Environments | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| REMEDY: Recipe Merging Dynamics in Large Vision-Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| RESfM: Robust Deep Equivariant Structure from Motion | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RESuM: A Rare Event Surrogate Model for Physics Detector Design | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | 2 |
| REVISITING MULTI-PERMUTATION EQUIVARIANCE THROUGH THE LENS OF IRREDUCIBLE REPRESENTATIONS | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| REvolve: Reward Evolution with Large Language Models using Human Feedback | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| RFMamba: Frequency-Aware State Space Model for RF-Based Human-Centric Perception | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RGB-Event ISP: The Dataset and Benchmark | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RMB: Comprehensively benchmarking reward models in LLM alignment | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RNNs are not Transformers (Yet): The Key Bottleneck on In-Context Retrieval | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ROUTE: Robust Multitask Tuning and Collaboration for Text-to-SQL | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RRM: Robust Reward Model Training Mitigates Reward Hacking | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RTDiff: Reverse Trajectory Synthesis via Diffusion for Offline Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| RTop-K: Ultra-Fast Row-Wise Top-K Selection for Neural Network Acceleration on GPUs | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| RaSA: Rank-Sharing Low-Rank Adaptation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Radar: Fast Long-Context Decoding for Any Transformer | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| RainbowPO: A Unified Framework for Combining Improvements in Preference Optimization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| RandLoRA: Full rank parameter-efficient fine-tuning of large models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Random Is All You Need: Random Noise Injection on Feature Statistics for Generalizable Deep Image Denoising | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Random-Set Neural Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Range, not Independence, Drives Modularity in Biologically Inspired Representations | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| RankSHAP: Shapley Value Based Feature Attributions for Learning to Rank | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Ranking-aware adapter for text-driven image ordering with CLIP | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rapid Selection and Ordering of In-Context Demonstrations via Prompt Embedding Clustering | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Rapidly Adapting Policies to the Real-World via Simulation-Guided Fine-Tuning | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| Rare event modeling with self-regularized normalizing flows: what can we learn from a single failure? | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Rational Decision-Making Agent with Learning Internal Utility Judgment | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Rationalizing and Augmenting Dynamic Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RazorAttention: Efficient KV Cache Compression Through Retrieval Heads | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Re-Aligning Language to Visual Objects with an Agentic Workflow | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Re-Evaluating the Impact of Unseen-Class Unlabeled Data on Semi-Supervised Learning Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Re-Imagining Multimodal Instruction Tuning: A Representation View | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Re-evaluating Open-ended Evaluation of Large Language Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| ReAttention: Training-Free Infinite Context with Finite Attention Scope | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| ReCogLab: a framework testing relational reasoning & cognitive hypotheses on LLMs | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| ReGen: Generative Robot Simulation via Inverse Design | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| ReMatching Dynamic Reconstruction Flow | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ReNovo: Retrieval-Based \emph{De Novo} Mass Spectrometry Peptide Sequencing | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ReSi: A Comprehensive Benchmark for Representational Similarity Measures | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Reading Your Heart: Learning ECG Words and Sentences via Pre-training ECG Language Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Ready-to-React: Online Reaction Policy for Two-Character Interaction Generation | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Real-Time Video Generation with Pyramid Attention Broadcast | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Real-time design of architectural structures with differentiable mechanics and neural networks | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Real2Code: Reconstruct Articulated Objects via Code Generation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Realistic Evaluation of Deep Partial-Label Learning Algorithms | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Reasoning Elicitation in Language Models via Counterfactual Feedback | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Reasoning of Large Language Models over Knowledge Graphs with Super-Relations | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reasoning with Latent Thoughts: On the Power of Looped Transformers | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Reassessing How to Compare and Improve the Calibration of Machine Learning Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RecDreamer: Consistent Text-to-3D Generation via Uniform Score Distillation | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| RecFlow: An Industrial Full Flow Recommendation Dataset | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Recognize Any Surgical Object: Unleashing the Power of Weakly-Supervised Data | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Reconciling Model Multiplicity for Downstream Decision Making | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reconsidering Faithfulness in Regular, Self-Explainable and Domain Invariant GNNs | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Reconstruction-Guided Policy: Enhancing Decision-Making through Agent-Wise State Consistency | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reconstructive Visual Instruction Tuning | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Recovering Manifold Structure Using Ollivier Ricci Curvature | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Recovery of Causal Graph Involving Latent Variables via Homologous Surrogates | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Rectified Diffusion: Straightness Is Not Your Need in Rectified Flow | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Redefining the task of Bioactivity Prediction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reducing Hallucinations in Large Vision-Language Models via Latent Space Steering | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RefactorBench: Evaluating Stateful Reasoning in Language Agents Through Code | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Refine Knowledge of Large Language Models via Adaptive Contrastive Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Refine-by-Align: Reference-Guided Artifacts Refinement through Semantic Alignment | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | 2 |
| Refining CLIP's Spatial Awareness: A Visual-Centric Perspective | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reflective Gaussian Splatting | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Reflexive Guidance: Improving OoDD in Vision-Language Models via Self-Guided Image-Adaptive Concept Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Reframing Structure-Based Drug Design Model Evaluation via Metrics Correlated to Practical Needs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RegMix: Data Mixture as Regression for Language Model Pre-training | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Regressing the Relative Future: Efficient Policy Optimization for Multi-turn RLHF | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Regret Bounds for Episodic Risk-Sensitive Linear Quadratic Regulator | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Regret-Optimal List Replicable Bandit Learning: Matching Upper and Lower Bounds | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Regretful Decisions under Label Noise | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 4 |
| Regularization by Texts for Latent Diffusion Inverse Solvers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Regularizing Energy among Training Samples for Out-of-Distribution Generalization | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Regulatory DNA Sequence Design with Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Reinforcement Learning for Control of Non-Markovian Cellular Population Dynamics | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Reinforcement Learning from Imperfect Corrective Actions and Proxy Rewards | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Reinforcement learning with combinatorial actions for coupled restless bandits | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Relation-Aware Diffusion for Heterogeneous Graphs with Partially Observed Features | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Relax and Merge: A Simple Yet Effective Framework for Solving Fair $k$-Means and $k$-sparse Wasserstein Barycenter Problems | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Release the Powers of Prompt Tuning: Cross-Modality Prompt Transfer | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Reliable and Diverse Evaluation of LLM Medical Knowledge Mastery | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RelitLRM: Generative Relightable Radiance for Large Reconstruction Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Remove Symmetries to Control Model Expressivity and Improve Optimization | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Repetition Improves Language Model Embeddings | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| RepoGraph: Enhancing AI Software Engineering with Repository-level Code Graph | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | 3 |
| Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Representational Similarity via Interpretable Visual Concepts | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Representative Guidance: Diffusion Model Sampling with Coherence | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Repulsive Latent Score Distillation for Solving Inverse Problems | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Residual Connections and Normalization Can Provably Prevent Oversmoothing in GNNs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Residual Deep Gaussian Processes on Manifolds | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Residual Kernel Policy Network: Enhancing Stability and Robustness in RKHS-Based Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Residual Stream Analysis with Multi-Layer SAEs | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Residual-MPPI: Online Policy Customization for Continuous Control | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Resolution Attack: Exploiting Image Compression to Deceive Deep Neural Networks | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Restructuring Vector Quantization with the Rotation Trick | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Restyling Unsupervised Concept Based Interpretable Networks with Generative Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Rethinking Artistic Copyright Infringements In the Era Of Text-to-Image Generative Models | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Rethinking Audio-Visual Adversarial Vulnerability from Temporal and Modality Perspectives | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Rethinking Classifier Re-Training in Long-Tailed Recognition: Label Over-Smooth Can Balance | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Diffusion Posterior Sampling: From Conditional Score Estimator to Maximizing a Posterior | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Rethinking Fair Representation Learning for Performance-Sensitive Tasks | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| Rethinking Graph Neural Networks From A Geometric Perspective Of Node Features | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Rethinking Invariance in In-context Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Rethinking Light Decoder-based Solvers for Vehicle Routing Problems | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Multiple-Instance Learning From Feature Space to Probability Space | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Neural Multi-Objective Combinatorial Optimization via Neat Weight Embedding | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree? | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Rethinking Reward Modeling in Preference-based Large Language Model Alignment | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Rethinking Self-Distillation: Label Averaging and Enhanced Soft Label Refinement with Partial Labels | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Rethinking Shapley Value for Negative Interactions in Non-convex Games | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Rethinking Spiking Neural Networks from an Ensemble Learning Perspective | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking Visual Counterfactual Explanations Through Region Constraint | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Rethinking and Improving Autoformalization: Towards a Faithful Metric and a Dependency Retrieval-based Approach | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Rethinking the role of frames for SE(3)-invariant crystal structure modeling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Retri3D: 3D Neural Graphics Representation Retrieval | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Retrieval Augmented Diffusion Model for Structure-informed Antibody Design and Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Retrieval Head Mechanistically Explains Long-Context Factuality | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| RetroInText: A Multimodal Large Language Model Enhanced Framework for Retrosynthetic Planning via In-Context Representation Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Reveal Object in Lensless Photography via Region Gaze and Amplification | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Revealing and Mitigating Over-Attention in Knowledge Editing | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Revealing the 3D Cosmic Web through Gravitationally Constrained Neural Fields | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| RevisEval: Improving LLM-as-a-Judge via Response-Adapted References | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Revisit Micro-batch Clipping: Adaptive Data Pruning via Gradient Manipulation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Revisit the Open Nature of Open Vocabulary Semantic Segmentation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Revisiting Convolution Architecture in the Realm of DNA Foundation Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Revisiting In-context Learning Inference Circuit in Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Revisiting Large-Scale Non-convex Distributionally Robust Optimization | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Revisiting Mode Connectivity in Neural Networks with Bezier Surface | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Revisiting Nearest Neighbor for Tabular Data: A Deep Tabular Baseline Two Decades Later | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Revisiting Random Walks for Learning on Graphs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Revisiting Source-Free Domain Adaptation: a New Perspective via Uncertainty Control | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Revisiting a Design Choice in Gradient Temporal Difference Learning | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Revisiting text-to-image evaluation with Gecko: on metrics, prompts, and human rating | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Revolutionizing EMCCD Denoising through a Novel Physics-Based Learning Framework for Noise Modeling | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reward Dimension Reduction for Scalable Multi-Objective Reinforcement Learning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Reward Learning from Multiple Feedback Types | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Risk-Sensitive Diffusion: Robustly Optimizing Diffusion Models with Noisy Samples | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | 3 |
| Risk-Sensitive Variational Actor-Critic: A Model-Based Approach | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robotouille: An Asynchronous Planning Benchmark for LLM Agents | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Robots Pre-train Robots: Manipulation-Centric Robotic Representation from Large-Scale Robot Datasets | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Robust Barycenter Estimation using Semi-Unbalanced Neural Optimal Transport | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robust Conformal Prediction with a Single Binary Certificate | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Robust Feature Learning for Multi-Index Models in High Dimensions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robust Function-Calling for On-Device Language Model via Function Masking | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Robust Gymnasium: A Unified Modular Benchmark for Robust Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robust LLM safeguarding via refusal feature adversarial training | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Robust Representation Consistency Model via Contrastive Denoising | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robust Root Cause Diagnosis using In-Distribution Interventions | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Robust Simulation-Based Inference under Missing Data via Neural Processes | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robust System Identification: Finite-sample Guarantees and Connection to Regularization | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| Robust Transfer of Safety-Constrained Reinforcement Learning Agents | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Robust-PIFu: Robust Pixel-aligned Implicit Function for 3D Human Digitalization from a Single Image | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robustness Auditing for Linear Regression: To Singularity and Beyond | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Robustness Inspired Graph Backdoor Defense | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Robustness Reprogramming for Representation Learning | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Robustness of Quantum Algorithms for Nonconvex Optimization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| RocketEval: Efficient automated LLM evaluation via grading checklist | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Rodimus*: Breaking the Accuracy-Efficiency Trade-Off with Efficient Attentions | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Root Cause Analysis of Anomalies in Multivariate Time Series through Granger Causal Discovery | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Round and Round We Go! What makes Rotary Positional Encodings useful? | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| RouteLLM: Learning to Route LLMs from Preference Data | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Routing Experts: Learning to Route Dynamic Experts in Existing Multi-modal Large Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| RuAG: Learned-rule-augmented Generation for Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| S4M: S4 for multivariate time series forecasting with Missing values | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image And Video Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SAGEPhos: Sage Bio-Coupled and Augmented Fusion for Phosphorylation Site Detection | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| SAM 2: Segment Anything in Images and Videos | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| SAM-CP: Marrying SAM with Composable Prompts for Versatile Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| SAMRefiner: Taming Segment Anything Model for Universal Mask Refinement | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SANA: Efficient High-Resolution Text-to-Image Synthesis with Linear Diffusion Transformers | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SANER: Annotation-free Societal Attribute Neutralizer for Debiasing CLIP | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SAVA: Scalable Learning-Agnostic Data Valuation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SBSC: Step-by-Step Coding for Improving Mathematical Olympiad Performance | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| SC-OmniGS: Self-Calibrating Omnidirectional Gaussian Splatting | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SCBench: A KV Cache-Centric Analysis of Long-Context Methods | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 5 |
| SCOPE: A Self-supervised Framework for Improving Faithfulness in Conditional Text Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SEBRA : Debiasing through Self-Guided Bias Ranking | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SELF-EVOLVED REWARD LEARNING FOR LLMS | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SEMDICE: Off-policy State Entropy Maximization via Stationary Distribution Correction Estimation | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SEPARATE: A Simple Low-rank Projection for Gradient Compression in Modern Large-scale Model Training Process | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SFESS: Score Function Estimators for $k$-Subset Sampling | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SFS: Smarter Code Space Search improves LLM Inference Scaling | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SGD with memory: fundamental properties and stochastic acceleration | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| SIM: Surface-based fMRI Analysis for Inter-Subject Multimodal Decoding from Movie-Watching Experiments | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SIMPL: Scalable and hassle-free optimisation of neural representations from behaviour | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SINGAPO: Single Image Controlled Generation of Articulated Parts in Objects | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SINGER: Stochastic Network Graph Evolving Operator for High Dimensional PDEs | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 4 |
| SLMRec: Distilling Large Language Models into Small for Sequential Recommendation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SMI-Editor: Edit-based SMILES Language Model with Fragment-level Supervision | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SMITE: Segment Me In TimE | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SMT: Fine-Tuning Large Language Models with Sparse Matrices | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SOAP: Improving and Stabilizing Shampoo using Adam for Language Modeling | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| SONICS: Synthetic Or Not - Identifying Counterfeit Songs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SOO-Bench: Benchmarks for Evaluating the Stability of Offline Black-Box Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SOREL: A Stochastic Algorithm for Spectral Risks Minimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SPA-BENCH: A COMPREHENSIVE BENCHMARK FOR SMARTPHONE AGENT EVALUATION | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SPA: 3D Spatial-Awareness Enables Effective Embodied Representation | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SPARTUN3D: Situated Spatial Understanding of 3D World in Large Language Model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SPDIM: Source-Free Unsupervised Conditional and Label Shift Adaptation in EEG | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| SPORTU: A Comprehensive Sports Understanding Benchmark for Multimodal Large Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SPaR: Self-Play with Tree-Search Refinement to Improve Instruction-Following in Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SRSA: Skill Retrieval and Adaptation for Robotic Assembly Tasks | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| SSLAM: Enhancing Self-Supervised Models with Audio Mixtures for Polyphonic Soundscapes | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SSOLE: Rethinking Orthogonal Low-rank Embedding for Self-Supervised Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| ST-GCond: Self-supervised and Transferable Graph Dataset Condensation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| STAFF: Speculative Coreset Selection for Task-Specific Fine-tuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| STAMP: Scalable Task- And Model-agnostic Collaborative Perception | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| STAR: Stability-Inducing Weight Perturbation for Continual Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| STAR: Synthesis of Tailored Architectures | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| STORM: Spatio-TempOral Reconstruction Model For Large-Scale Outdoor Scenes | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| STRAP: Robot Sub-Trajectory Retrieval for Augmented Policy Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SV-RAG: LoRA-Contextualizing Adaptation of MLLMs for Long Document Understanding | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | 4 |
| SVDQuant: Absorbing Outliers by Low-Rank Component for 4-Bit Diffusion Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| SVG: 3D Stereoscopic Video Generation via Denoising Frame Matrix | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| SWE-bench Multimodal: Do AI Systems Generalize to Visual Software Domains? | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| SWEb: A Large Web Dataset for the Scandinavian Languages | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SaMer: A Scenario-aware Multi-dimensional Evaluator for Large Language Models | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SafeDiffuser: Safe Planning with Diffusion Probabilistic Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanations |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Safety Alignment Should be Made More Than Just a Few Tokens Deep |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Safety Layers in Aligned Large Language Models: The Key to LLM Security |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Safety Representations for Safer Policy Learning |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Safety-Prioritizing Curricula for Constrained Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Sail into the Headwind: Alignment via Robust Rewards and Dynamic Labels against Reward Hacking |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Salvage: Shapley-distribution Approximation Learning Via Attribution Guided Exploration for Explainable Image Classification |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Samba: Synchronized Set-of-Sequences Modeling for Multiple Object Tracking |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Sample then Identify: A General Framework for Risk Control and Assessment in Multimodal Large Language Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Satisficing Regret Minimization in Bandits |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| ScImage: How good are multimodal large language models at scientific text-to-image generation? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Scalable Bayesian Learning with posteriors |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scalable Benchmarking and Robust Learning for Noise-Free Ego-Motion and 3D Reconstruction from Noisy Video |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Scalable Decentralized Learning with Teleportation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Scalable Decision-Making in Stochastic Environments through Learned Temporal Abstraction |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scalable Extraction of Training Data from Aligned, Production Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Scalable Influence and Fact Tracing for Large Language Model Pretraining |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Scalable Mechanistic Neural Networks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scalable Universal T-Cell Receptor Embeddings from Adaptive Immune Repertoires |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scalable and Certifiable Graph Unlearning: Overcoming the Approximation Error Barrier |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scale-Aware Contrastive Reverse Distillation for Unsupervised Medical Anomaly Detection |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scale-Free Graph-Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Scale-aware Recognition in Satellite Images under Resource Constraints |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Scaling Autonomous Agents via Automatic Reward Modeling And Planning |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Scaling Diffusion Language Models via Adaptation from Autoregressive Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scaling FP8 training to trillion-token LLMs |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Scaling In-the-Wild Training for Diffusion-based Illumination Harmonization and Editing by Imposing Consistent Light Transport |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Scaling Instruction-tuned LLMs to Million-token Contexts via Hierarchical Synthetic Data Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scaling LLM Test-Time Compute Optimally Can be More Effective than Scaling Parameters for Reasoning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Scaling Large Language Model-based Multi-Agent Collaboration |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Scaling Laws for Adversarial Attacks on Language Model Activations and Tokens |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Scaling Laws for Downstream Task Performance in Machine Translation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Scaling Laws for Precision |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Scaling Long Context Training Data by Long-Distance Referrals |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Scaling Optimal LR Across Token Horizons |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Scaling Speech-Text Pre-training with Synthetic Interleaved Data |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Scaling Stick-Breaking Attention: An Efficient Implementation and In-depth Study |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scaling Transformers for Low-Bitrate High-Quality Speech Coding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Scaling Wearable Foundation Models |
❌ |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
3 |
| Scaling and evaluating sparse autoencoders |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Scaling up Masked Diffusion Models on Text |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Scaling up the Banded Matrix Factorization Mechanism for Large Scale Differentially Private ML |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Schur's Positive-Definite Network: Deep Learning in the SPD cone with structure |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Score Forgetting Distillation: A Swift, Data-Free Method for Machine Unlearning in Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Score-based Self-supervised MRI Denoising |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Score-based free-form architectures for high-dimensional Fokker-Planck equations |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Scrutinize What We Ignore: Reining In Task Representation Shift Of Context-Based Offline Meta Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| SeCom: On Memory Construction and Retrieval for Personalized Conversational Agents |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
✅ |
3 |
| SePer: Measure Retrieval Utility Through The Lens Of Semantic Perplexity Reduction |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| SeRA: Self-Reviewing and Alignment of LLMs using Implicit Reward Margins |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Searching for Optimal Solutions with LLMs via Bayesian Optimization |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Second Order Bounds for Contextual Bandits with Function Approximation |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Second-Order Fine-Tuning without Pain for LLMs: A Hessian Informed Zeroth-Order Optimizer |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Second-Order Min-Max Optimization with Lazy Hessians |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| SecureGS: Boosting the Security and Fidelity of 3D Gaussian Splatting Steganography |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| See It from My Perspective: How Language Affects Cultural Bias in Image Understanding |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
❌ |
3 |
| See What You Are Told: Visual Attention Sink in Large Multimodal Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SeedLM: Compressing LLM Weights into Seeds of Pseudo-Random Generators |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SegLLM: Multi-round Reasoning Segmentation with Large Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Segment Any 3D Object with Language |
❌ |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
4 |
| SelKD: Selective Knowledge Distillation via Optimal Transport Perspective |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Select before Act: Spatially Decoupled Action Repetition for Continuous Control |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| SelectFormer in Data Markets: Privacy-Preserving and Efficient Data Selection for Transformers with Multi-Party Computation |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Selective Aggregation for Low-Rank Adaptation in Federated Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Selective Attention Improves Transformer |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Selective Label Enhancement Learning for Test-Time Adaptation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Selective Task Group Updates for Multi-Task Optimization |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Selective Unlearning via Representation Erasure Using Domain Adversarial Training |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Selective induction Heads: How Transformers Select Causal Structures in Context |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Self-Attention-Based Contextual Modulation Improves Neural System Identification |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Self-Boosting Large Language Models with Synthetic Preference Data |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Self-Correcting Decoding with Generative Feedback for Mitigating Hallucinations in Large Vision-Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Self-Evolving Multi-Agent Collaboration Networks for Software Development |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Self-Improvement in Language Models: The Sharpening Mechanism |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Self-Improving Robust Preference Optimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Self-Normalized Resets for Plasticity in Continual Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Self-Play Preference Optimization for Language Model Alignment |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Self-Supervised Diffusion MRI Denoising via Iterative and Stable Refinement |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Self-Supervised Diffusion Models for Electron-Aware Molecular Representation Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Self-Updatable Large Language Models by Integrating Context into Model Parameters |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Self-supervised Monocular Depth Estimation Robust to Reflective Surface Leveraged by Triplet Mining |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Self-supervised contrastive learning performs non-linear system identification |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Semantic Aware Representation Learning for Lifelong Learning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Semantic Image Inversion and Editing using Rectified Stochastic Differential Equations |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Semantic Loss Guided Data Efficient Supervised Fine Tuning for Safe Responses in LLMs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Semantic Temporal Abstraction via Vision-Language Model Guidance for Efficient Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Semantics-Adaptive Activation Intervention for LLMs via Dynamic Steering Vectors |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Semantix: An Energy-guided Sampler for Semantic Style Transfer |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Semi-Parametric Retrieval via Binary Bag-of-Tokens Index |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Semi-Supervised CLIP Adaptation by Enforcing Semantic and Trapezoidal Consistency |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Semi-Supervised Vision-Centric 3D Occupancy World Model for Autonomous Driving |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Semialgebraic Neural Networks: From roots to representations |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Sensitivity Verification for Additive Decision Tree Ensembles |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Sensitivity-Constrained Fourier Neural Operators for Forward and Inverse Problems in Parametric Differential Equations |
✅ |
✅ |
❌ |
✅ |
✅ |
❌ |
✅ |
5 |
| Sensor-Invariant Tactile Representation |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Separation Power of Equivariant Neural Networks |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Sequential Controlled Langevin Diffusions |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Sequential Stochastic Combinatorial Optimization Using Hierarchal Reinforcement Learning |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Severing Spurious Correlations with Data Pruning |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ShEPhERD: Diffusing shape, electrostatics, and pharmacophores for bioisosteric drug design |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Shallow diffusion networks provably learn hidden low-dimensional structure |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Shape as Line Segments: Accurate and Flexible Implicit Surface Representation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Shapley-Guided Utility Learning for Effective Graph Inference Data Valuation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
4 |
| Shared-AE: Automatic Identification of Shared Subspaces in High-dimensional Neural and Behavioral Activity |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Sharper Guarantees for Learning Neural Network Classifiers with Gradient Methods |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Sharpness-Aware Black-Box Optimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Sharpness-Aware Minimization Efficiently Selects Flatter Minima Late In Training |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Sharpness-Aware Minimization: General Analysis and Improved Rates |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Shedding Light on Time Series Classification using Interpretability Gated Networks |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Shh, don't say that! Domain Certification in LLMs |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Shifting the Paradigm: A Diffeomorphism Between Time Series Data Manifolds for Achieving Shift-Invariancy in Deep Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| ShortcutsBench: A Large-Scale Real-world Benchmark for API-based Agents |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Shot2Story: A New Benchmark for Comprehensive Understanding of Multi-shot Videos |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| Should VLMs be Pre-trained with Image Data? |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Show-o: One Single Transformer to Unify Multimodal Understanding and Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SiMHand: Mining Similar Hands for Large-Scale 3D Hand Pose Pre-training |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SiReRAG: Indexing Similar and Related Information for Multihop Reasoning |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| SigDiffusions: Score-Based Diffusion Models for Time Series via Log-Signature Embeddings |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Signature Kernel Conditional Independence Tests in Causal Discovery for Stochastic Processes |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| SimPER: A Minimalist Approach to Preference Alignment without Hyperparameters |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SimXRD-4M: Big Simulated X-ray Diffraction Data and Crystal Symmetry Classification Benchmark |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Simple Guidance Mechanisms for Discrete Diffusion Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Simple ReFlow: Improved Techniques for Fast Flow Models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Simple is Effective: The Roles of Graphs and Large Language Models in Knowledge-Graph-Based Retrieval-Augmented Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Simple yet Effective Incomplete Multi-view Clustering: Similarity-level Imputation and Intra-view Hybrid-group Prototype Construction |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Simple, Good, Fast: Self-Supervised World Models Free of Baggage |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| SimpleTM: A Simple Baseline for Multivariate Time Series Forecasting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Simplifying Deep Temporal Difference Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Simplifying, Stabilizing and Scaling Continuous-time Consistency Models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| SimulPL: Aligning Human Preferences in Simultaneous Machine Translation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Simulating Human-like Daily Activities with Desire-driven Autonomy |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
✅ |
3 |
| Simulating Training Dynamics to Reconstruct Training Data from Deep Neural Networks |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Single Teacher, Multiple Perspectives: Teacher Knowledge Augmentation for Enhanced Knowledge Distillation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Single-agent Poisoning Attacks Suffice to Ruin Multi-Agent Learning |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Singular Subspace Perturbation Bounds via Rectangular Random Matrix Diffusions |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
1 |
| Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Size-Generalizable RNA Structure Evaluation by Exploring Hierarchical Geometries |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Sketch2Diagram: Generating Vector Diagrams from Hand-Drawn Sketches |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Sketching for Convex and Nonconvex Regularized Least Squares with Sharp Guarantees |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Skill Expansion and Composition in Parameter Space |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| SleepSMC: Ubiquitous Sleep Staging via Supervised Multimodal Coordination |
✅ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Slot-Guided Adaptation of Pre-trained Diffusion Models for Object-Centric Learning and Compositional Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Small Models are LLM Knowledge Triggers for Medical Tabular Prediction |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Small-to-Large Generalization: Training Data Influences Models Consistently Across Scale |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| SmartPretrain: Model-Agnostic and Dataset-Agnostic Representation Learning for Motion Prediction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SmartRAG: Jointly Learn RAG-Related Tasks From the Environment Feedback |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Smoothing the Shift: Towards Stable Test-Time Adaptation under Complex Multimodal Noises |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| SoftCVI: Contrastive variational inference with self-generated soft labels |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| SoftMatcha: A Soft and Fast Pattern Matcher for Billion-Scale Corpus Searches |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Solving Differential Equations with Constrained Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Solving New Tasks by Adapting Internet Video Knowledge |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Solving Video Inverse Problems Using Image Diffusion Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Solving hidden monotone variational inequalities with surrogate losses |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| SonicSim: A customizable simulation platform for speech processing in moving sound source scenarios |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Sort-free Gaussian Splatting via Weighted Sum Rendering |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| SoundCTM: Unifying Score-based and Consistency Models for Full-band Text-to-Sound Generation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| SpaceGNN: Multi-Space Graph Neural Network for Node Anomaly Detection with Extremely Limited Labels |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Sparse Autoencoders Do Not Find Canonical Units of Analysis |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Sparse Learning for State Space Models on Mobile |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Sparse autoencoders reveal selective remapping of visual concepts during adaptation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Sparse components distinguish visual pathways & their alignment to neural networks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| SparsyFed: Sparse Adaptive Federated Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Spatial-Mamba: Effective Visual State Space Models via Structure-Aware State Fusion |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Specialized Foundation Models Struggle to Beat Supervised Baselines |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Spectral Compressive Imaging via Unmixing-driven Subspace Diffusion Refinement |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Spectral-Refiner: Accurate Fine-Tuning of Spatiotemporal Fourier Neural Operator for Turbulent Flows |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Spectro-Riemannian Graph Neural Networks |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Speech Robust Bench: A Robustness Benchmark For Speech Recognition |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Spherical Tree-Sliced Wasserstein Distance |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Spiking Vision Transformer with Saccadic Attention |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| SpinQuant: LLM Quantization with Learned Rotations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SplatFormer: Point Transformer for Robust 3D Gaussian Splatting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SplineGS: Learning Smooth Trajectories in Gaussian Splatting for Dynamic Scene Reconstruction |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Sports-Traj: A Unified Trajectory Generation Model for Multi-Agent Movement in Sports |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Spreading Out-of-Distribution Detection on Graphs |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Spurious Forgetting in Continual Learning of Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SqueezeAttention: 2D Management of KV-Cache in LLM Inference via Layer-wise Optimal Budget |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Stabilized Neural Prediction of Potential Outcomes in Continuous Time |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Stabilizing Reinforcement Learning in Differentiable Multiphysics Simulation |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Stable Segment Anything Model |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Standard Gaussian Process is All You Need for High-Dimensional Bayesian Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Standardizing Structural Causal Models |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Start Smart: Leveraging Gradients For Enhancing Mask-based XAI Methods |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| State Space Model Meets Transformer: A New Paradigm for 3D Object Detection |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| State Space Models are Provably Comparable to Transformers in Dynamic Token Selection |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Statistical Advantages of Perturbing Cosine Router in Mixture of Experts |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Statistical Tractability of Off-policy Evaluation of History-dependent Policies in POMDPs |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Stealthy Shield Defense: A Conditional Mutual Information-Based Approach against Black-Box Model Inversion Attacks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Steering Large Language Models between Code Execution and Textual Reasoning |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Steering Protein Family Design through Profile Bayesian Flow |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Stem-OB: Generalizable Visual Imitation Learning with Stem-Like Convergent Observation through Diffusion Inversion |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Step-by-Step Reasoning for Math Problems via Twisted Sequential Monte Carlo |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Stiefel Flow Matching for Moment-Constrained Structure Elucidation |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| StochSync: Stochastic Diffusion Synchronization for Image Generation in Arbitrary Spaces |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Stochastic Bandits Robust to Adversarial Attacks |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Stochastic Semi-Gradient Descent for Learning Mean Field Games with Population-Aware Function Approximation |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Stochastic variance-reduced Gaussian variational inference on the Bures-Wasserstein manifold |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
3 |
| Storybooth: Training-Free Multi-Subject Consistency for Improved Visual Storytelling |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Straight to Zero: Why Linearly Decaying the Learning Rate to Zero Works Best for LLMs |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Strategic Classification With Externalities |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Strategist: Self-improvement of LLM Decision Making via Bi-Level Tree Search |
✅ |
❌ |
❌ |
❌ |
✅ |
❌ |
✅ |
3 |
| Streaming Algorithms For $\ell_p$ Flows and $\ell_p$ Regression |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Streaming Video Question-Answering with In-context Video KV-Cache Retrieval |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Streamlining Prediction in Bayesian Deep Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Streamlining Redundant Layers to Compress Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Strength Estimation and Human-Like Strength Adjustment in Games |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| StringLLM: Understanding the String Processing Capability of Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Strong Model Collapse |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| Strong Preferences Affect the Robustness of Preference Models and Value Alignment |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Structural-Entropy-Based Sample Selection for Efficient and Effective Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Structure Language Models for Protein Conformation Generation |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
6 |
| Structuring Benchmark into Knowledge Graphs to Assist Large Language Models in Retrieving and Designing Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Student-Informed Teacher Training |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| Studying the Interplay Between the Actor and Critic Representations in Reinforcement Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Subgraph Federated Learning for Local Generalization |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Subtask-Aware Visual Reward Learning from Segmented Demonstrations |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Sufficient Context: A New Lens on Retrieval Augmented Generation Systems |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Supervised and Semi-Supervised Diffusion Maps with Label-Driven Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Support is All You Need for Certified VAE Training |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| SurFhead: Affine Rig Blending for Geometrically Accurate 2D Gaussian Surfel Head Avatars |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Surprising Effectiveness of pretraining Ternary Language Model at Scale | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Swift Hydra: Self-Reinforcing Generative Framework for Anomaly Detection with Multiple Mamba Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Swift4D: Adaptive divide-and-conquer Gaussian Splatting for compact and efficient reconstruction of dynamic scene | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Swing-by Dynamics in Concept Learning and Compositional Generalization | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Sylber: Syllabic Embedding Representation of Speech from Raw Audio | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| SyllableLM: Learning Coarse Semantic Units for Speech Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SymDiff: Equivariant Diffusion via Stochastic Symmetrisation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Symbolic regression via MDLformer-guided search: from minimizing prediction error to minimizing description length | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SymmCD: Symmetry-Preserving Crystal Generation with Diffusion Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| SymmetricDiffusers: Learning Discrete Diffusion on Finite Symmetric Groups | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| SynFlowNet: Design of Diverse and Novel Molecules with Synthesis Constraints | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| SynQ: Accurate Zero-shot Quantization by Synthesis-aware Fine-tuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Synergy Between Sufficient Changes and Sparse Mixing Procedure for Disentangled Representation Learning | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Synergy and Diversity in CLIP: Enhancing Performance Through Adaptive Backbone Ensembling | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Synthesizing Realistic fMRI: A Physiological Dynamics-Driven Hierarchical Diffusion Model for Efficient fMRI Acquisition | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Synthetic continued pretraining | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Synthio: Augmenting Small-Scale Audio Classification Datasets with Synthetic Data | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| SysBench: Can LLMs Follow System Message? | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| SysCaps: Language Interfaces for Simulation Surrogates of Complex Systems | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| System 1.x: Learning to Balance Fast and Slow Planning with Language Models | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Systematic Outliers in Large Language Models | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Systematic Relational Reasoning With Epistemic Graph Neural Networks | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Systems with Switching Causal Relations: A Meta-Causal Perspective | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 3 |
| T-JEPA: Augmentation-Free Self-Supervised Learning for Tabular Data | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| T2V-Turbo-v2: Enhancing Video Model Post-Training through Data, Reward, and Conditional Guidance Design | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| T2V2: A Unified Non-Autoregressive Model for Speech Recognition and Synthesis via Multitask Learning | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TANGO: Co-Speech Gesture Video Reenactment with Hierarchical Audio Motion Embedding and Diffusion Interpolation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TASAR: Transfer-based Attack on Skeletal Action Recognition | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TAU-106K: A New Dataset for Comprehensive Understanding of Traffic Accident | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TC-MoE: Augmenting Mixture of Experts with Ternary Expert Choice | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TD-Paint: Faster Diffusion Inpainting Through Time-Aware Pixel Conditioning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| TDDBench: A Benchmark for Training data detection | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| TEASER: Token Enhanced Spatial Modeling for Expressions Reconstruction | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TFG-Flow: Training-free Guidance in Multimodal Generative Flow | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TGB-Seq Benchmark: Challenging Temporal GNNs with Complex Sequential Dynamics | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| THE ROBUSTNESS OF DIFFERENTIABLE CAUSAL DISCOVERY IN MISSPECIFIED SCENARIOS | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| TIGeR: Unifying Text-to-Image Generation and Retrieval with Large Multimodal Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TIPS: Text-Image Pretraining with Spatial awareness | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TIS-DPO: Token-level Importance Sampling for Direct Preference Optimization With Estimated Weights | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TLDR: Token-Level Detective Reward Model for Large Vision Language Models | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| TODO: Enhancing LLM Alignment with Ternary Preferences | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TOP-ERL: Transformer-based Off-Policy Episodic Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| TPO: Aligning Large Language Models with Multi-branch & Multi-step Preference Trees | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| TRACE: Temporal Grounding Video LLM via Causal Event Modeling | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| TRENDy: Temporal Regression of Effective Nonlinear Dynamics | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| TS-LIF: A Temporal Segment Spiking Neuron Network for Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TSC-Net: Prediction of Pedestrian Trajectories by Trajectory-Scene-Cell Classification | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| TSVD: Bridging Theory and Practice in Continual Learning with Pre-trained Models | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| TTVD: Towards a Geometric Framework for Test-Time Adaptation Based on Voronoi Diagram | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TULIP: Token-length Upgraded CLIP | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TVNet: A Novel Time Series Analysis Method Based on Dynamic Convolution and 3D-Variation | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TabDiff: a Mixed-type Diffusion Model for Tabular Data Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TabM: Advancing tabular deep learning with parameter-efficient ensembling | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TabReD: Analyzing Pitfalls and Filling the Gaps in Tabular Deep Learning Benchmarks | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TabWak: A Watermark for Tabular Diffusion Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Tackling Data Corruption in Offline Reinforcement Learning via Sequence Modeling | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Tailoring Mixup to Data for Calibration | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Talking Turns: Benchmarking Audio Foundation Models on Turn-Taking Dynamics | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Taming Overconfidence in LLMs: Reward Calibration in RLHF | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Taming Transformer Without Using Learning Rate Warmup | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tamper-Resistant Safeguards for Open-Weight LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Targeted Attack Improves Protection against Unauthorized Diffusion Customization | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Task Descriptors Help Transformers Learn Linear Models In-Context | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Task-Adaptive Pretrained Language Models via Clustered-Importance Sampling | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TaskGalaxy: Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Teaching Human Behavior Improves Content Understanding Abilities Of VLMs | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| Teaching LLMs How to Learn with Contextual Fine-Tuning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| TeaserGen: Generating Teasers for Long Documentaries | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tell me about yourself: LLMs are aware of their learned behaviors | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| TempMe: Video Temporal Token Merging for Efficient Text-Video Retrieval | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Temporal Difference Learning: Why It Can Be Fast and How It Will Be Faster | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Temporal Flexibility in Spiking Neural Networks: Towards Generalization Across Time Steps and Deployment Friendliness | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Temporal Heterogeneous Graph Generation with Privacy, Utility, and Efficiency | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Temporal Reasoning Transfer from Text to Video | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Test of Time: A Benchmark for Evaluating LLMs on Temporal Reasoning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Test-Time Adaptation for Combating Missing Modalities in Egocentric Videos | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Test-Time Ensemble via Linear Mode Connectivity: A Path to Better Adaptation | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Test-time Adaptation for Cross-modal Retrieval with Query Shift | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Test-time Adaptation for Image Compression with Distribution Regularization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Test-time Adaptation for Regression by Subspace Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Test-time Alignment of Diffusion Models without Reward Over-optimization | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| TestGenEval: A Real World Unit Test Generation and Test Completion Benchmark | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TetSphere Splatting: Representing High-Quality Geometry with Lagrangian Volumetric Meshes | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TexTailor: Customized Text-aligned Texturing via Effective Resampling | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Text-to-Image Rectified Flow as Plug-and-Play Priors | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Text2PDE: Latent Diffusion Models for Accessible Physics Simulation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Text4Seg: Reimagining Image Segmentation as Text Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The "Law'' of the Unconscious Contrastive Learner: Probabilistic Alignment of Unpaired Modalities | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The 3D-PC: a benchmark for visual perspective taking in humans and machines | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The AdEMAMix Optimizer: Better, Faster, Older | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Belief State Transformer | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| The Breakdown of Gaussian Universality in Classification of High-dimensional Linear Factor Mixtures | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Case for Cleaner Biosignals: High-fidelity Neural Compressor Enables Transfer from Cleaner iEEG to Noisier EEG | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Complexity of Two-Team Polymatrix Games with Independent Adversaries | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Computational Complexity of Circuit Discovery for Inner Interpretability | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Computational Complexity of Positive Non-Clashing Teaching in Graphs | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Crucial Role of Samplers in Online Direct Preference Optimization | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Crystal Ball Hypothesis in diffusion models: Anticipating object positions from initial noise | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| The Directionality of Optimization Trajectories in Neural Networks | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| The Effectiveness of Curvature-Based Rewiring and the Role of Hyperparameters in GNNs Revisited | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| The Foundations of Tokenization: Statistical and Computational Concerns | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| The Geometry of Categorical and Hierarchical Concepts in Large Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| The Hidden Cost of Waiting for Accurate Predictions | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| The Hyperfitting Phenomenon: Sharpening and Stabilizing LLMs for Open-Ended Text Generation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| The Journey Matters: Average Parameter Count over Pre-training Unifies Sparse and Dense Scaling Laws | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| The KoLMogorov Test: Compression by Code Generation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| The OMG dataset: An Open MetaGenomic corpus for mixed-modality genomic language modeling | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | 4 |
| The Optimization Landscape of SGD Across the Feature Learning Strength | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Pitfalls of Memorization: When Memorization Hurts Generalization | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| The Power of LLM-Generated Synthetic Data for Stance Detection in Online Political Discussions | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Ramanujan Library - Automated Discovery on the Hypergraph of Integer Relations | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| The Rise and Down of Babel Tower: Investigating the Evolution Process of Multilingual Code Large Language Model | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Same but Different: Structural Similarities and Differences in Multilingual Language Modeling | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The Superposition of Diffusion Models Using the Itô Density Estimator | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| The Unreasonable Ineffectiveness of the Deeper Layers | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| The Utility and Complexity of In- and Out-of-Distribution Machine Unlearning | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The Value of Sensory Information to a Robot | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| The impact of allocation strategies in subset learning on the expressive power of neural networks | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | 3 |
| Theory on Mixture-of-Experts in Continual Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Theory, Analysis, and Best Practices for Sigmoid Self-Attention | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| ThermalGaussian: Thermal 3D Gaussian Splatting | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| ThinK: Thinner Key Cache by Query-Driven Pruning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Think Then React: Towards Unconstrained Action-to-Reaction Motion Generation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Think Thrice Before You Act: Progressive Thought Refinement in Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| Think while You Generate: Discrete Diffusion with Planned Denoising | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Think-on-Graph 2.0: Deep and Faithful Large Language Model Reasoning with Knowledge-guided Retrieval Augmented Generation | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| ThinkBot: Embodied Instruction Following with Thought Chain Reasoning | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Three Mechanisms of Feature Learning in a Linear Network | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Three-in-One: Fast and Accurate Transducer for Hybrid-Autoregressive ASR | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 5 |
| ThunderKittens: Simple, Fast, and $\textit{Adorable}$ Kernels | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | 5 |
| TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Tight Clusters Make Specialized Experts | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tight Lower Bounds under Asymmetric High-Order Hölder Smoothness and Uniform Convexity | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Tight Time Complexities in Parallel Stochastic Optimization with Arbitrary Computation Dynamics | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Time After Time: Deep-Q Effect Estimation for Interventions on When and What to do | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Time-to-Event Pretraining for 3D Medical Imaging | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| TimeInf: Time Series Data Contribution via Influence Functions | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TimeKAN: KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| TimeMixer++: A General Time Series Pattern Machine for Universal Predictive Analysis | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Timer-XL: Long-Context Transformers for Unified Time Series Forecasting | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| To Clip or not to Clip: the Dynamics of SGD with Gradient Clipping in High-Dimensions | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| To Code or Not To Code? Exploring Impact of Code in Pre-training | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| To Tackle Adversarial Transferability: A Novel Ensemble Training Method with Fourier Transformation | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| To Trust or Not to Trust? Enhancing Large Language Models' Situated Faithfulness to External Contexts | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| ToVE: Efficient Vision-Language Learning via Knowledge Transfer from Vision Experts | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| ToddlerDiffusion: Interactive Structured Image Generation with Cascaded Schrödinger Bridge | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Token Statistics Transformer: Linear-Time Attention via Variational Rate Reduction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Token-Supervised Value Models for Enhancing Mathematical Problem-Solving Capabilities of Large Language Models | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tool-Planner: Task Planning with Clusters across Multiple Tools | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| ToolACE: Winning the Points of LLM Function Calling | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| ToolDial: Multi-turn Dialogue Generation Method for Tool-Augmented Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| ToolGen: Unified Tool Retrieval and Calling via Generation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TopoDiffusionNet: A Topology-aware Diffusion Model | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| TopoGaussian: Inferring Internal Topology Structures from Visual Clues | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| TopoLM: brain-like spatio-functional organization in a topographic language model | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TopoNets: High performing vision and language models with brain-like topography | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Topograph: An Efficient Graph-Based Framework for Strictly Topology Preserving Image Segmentation | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Topological Blindspots: Understanding and Extending Topological Deep Learning Through the Lens of Expressivity | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Topological Schrödinger Bridge Matching | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Topological Zigzag Spaghetti for Diffusion-based Generation and Prediction on Graphs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| TorchTitan: One-stop PyTorch native solution for production ready LLM pretraining | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Toward Efficient Multi-Agent Exploration With Trajectory Entropy Maximization | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Toward Exploratory Inverse Constraint Inference with Generative Diffusion Verifiers | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Toward Generalizing Visual Brain Decoding to Unseen Subjects | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Toward Understanding In-context vs. In-weight Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Towards Auto-Regressive Next-Token Prediction: In-context Learning Emerges from Generalization | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Automated Knowledge Integration From Human-Interpretable Representations | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Bridging Generalization and Expressivity of Graph Neural Networks | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Towards Calibrated Deep Clustering Network | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Certification of Uncertainty Calibration under Adversarial Attacks | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Towards Continuous Reuse of Graph Models via Holistic Memory Diversification | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Towards Domain Adaptive Neural Contextual Bandits | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Effective Evaluations and Comparisons for LLM Unlearning Methods | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Towards Empowerment Gain through Causal Structure Learning in Model-Based Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards Explaining the Power of Constant-depth Graph Neural Networks for Structured Linear Programming | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 6 |
| Towards Fast, Specialized Machine Learning Force Fields: Distilling Foundation Models via Energy Hessians | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Faster Decentralized Stochastic Optimization with Communication Compression | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Towards Federated RLHF with Aggregated Client Preference for LLMs | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Foundation Models for Mixed Integer Linear Programming | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Towards General-Purpose Model-Free Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | 6 |
| Towards Generalizable Reinforcement Learning via Causality-Guided Self-Adaptive Representations | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Generalization Bounds of GCNs for Adversarially Robust Node Classification | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards Hierarchical Rectified Flow | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Homogeneous Lexical Tone Decoding from Heterogeneous Intracranial Recordings | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Towards Improving Exploration through Sibling Augmented GFlowNets | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Interpreting Visual Information Processing in Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Learning High-Precision Least Squares Algorithms with Sequence Models | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Marginal Fairness Sliced Wasserstein Barycenter | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Multiple Character Image Animation Through Enhancing Implicit Decoupling | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Neural Scaling Laws for Time Series Foundation Models | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards Optimal Multi-draft Speculative Decoding | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Towards Out-of-Modal Generalization without Instance-level Modal Correspondence | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Principled Evaluations of Sparse Autoencoders for Interpretability and Control | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Towards Realistic Data Generation for Real-World Super-Resolution | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Realistic UAV Vision-Language Navigation: Platform, Benchmark, and Methodology | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | 5 |
| Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Towards Robust Multimodal Open-set Test-time Adaptation via Adaptive Entropy-aware Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Scalable Topological Regularizers | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Self-Supervised Covariance Estimation in Deep Heteroscedastic Regression | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Towards Semantic Equivalence of Tokenization in Multimodal LLM | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Synergistic Path-based Explanations for Knowledge Graph Completion: Exploration and Evaluation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Unbiased Learning in Semi-Supervised Semantic Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Understanding Why Label Smoothing Degrades Selective Classification and How to Fix It | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards Understanding the Robustness of Diffusion-Based Purification: A Stochastic Perspective | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Towards Understanding the Universality of Transformers for Next-Token Prediction | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 2 |
| Towards Unified Human Motion-Language Understanding via Sparse Interpretable Characterization | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Towards a Complete Logical Framework for GNN Expressiveness | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| Towards a General Time Series Anomaly Detector with Adaptive Bottlenecks and Dual Adversarial Decoders | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Towards a Theoretical Understanding of Synthetic Data in LLM Post-Training: A Reverse-Bottleneck Perspective | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Towards a Unified and Verified Understanding of Group-Operation Networks | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | 4 |
| Towards a learning theory of representation alignment | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Towards counterfactual fairness through auxiliary variables | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Towards hyperparameter-free optimization with differential privacy | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Tracing Representation Progression: Analyzing and Enhancing Layer-Wise Similarity | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Track-On: Transformer-based Online Point Tracking with Memory | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Tracking objects that change in appearance with phase synchrony | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tracking the Copyright of Large Vision-Language Models through Parameter Learning Adversarial Images | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Tractable Multi-Agent Reinforcement Learning through Behavioral Economics | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Trained Transformer Classifiers Generalize and Exhibit Benign Overfitting In-Context | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 3 |
| Training Free Exponential Context Extension via Cascading KV Cache | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Training Free Guided Flow-Matching with Optimal Control | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Training Language Models on Synthetic Edit Sequences Improves Code Synthesis | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Training Language Models to Self-Correct via Reinforcement Learning | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Training Large Language Models for Retrieval-Augmented Question Answering through Backtracking Correction | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Training Neural Networks as Recognizers of Formal Languages | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| Training Nonlinear Transformers for Chain-of-Thought Inference: A Theoretical Generalization Analysis | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Training One-Dimensional Graph Neural Networks is NP-Hard | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Training Robust Ensembles Requires Rethinking Lipschitz Continuity | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Training on the Test Task Confounds Evaluation and Emergence | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Training-Free Activation Sparsity in Large Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Training-Free Dataset Pruning for Instance Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 7 |
| Training-Free Diffusion Model Alignment with Sampling Demons | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Training-Free Message Passing for Learning on Hypergraphs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Training-free Camera Control for Video Generation | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | 6 |
| Training-free LLM-generated Text Detection by Mining Token Probability Sequences | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Trajectory attention for fine-grained video motion control | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Trajectory-Class-Aware Multi-Agent Reinforcement Learning | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Trajectory-LLM: A Language-based Data Generator for Trajectory Prediction in Autonomous Driving | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Transformer Block Coupling and its Correlation with Generalization in LLMs | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Transformer Encoder Satisfiability: Complexity and Impact on Formal Reasoning | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Transformer Learns Optimal Variable Selection in Group-Sparse Classification | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Transformer Meets Twicing: Harnessing Unattended Residual Information | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Transformer-Squared: Self-adaptive LLMs | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Transformers Handle Endogeneity in In-Context Linear Regression | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Transformers Learn Low Sensitivity Functions: Investigations and Implications | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Transformers Learn to Implement Multi-step Gradient Descent with Chain of Thought | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Transformers Provably Learn Two-Mixture of Linear Classification via Gradient Flow | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Transformers Provably Solve Parity Efficiently with Chain of Thought | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 2 |
| Transformers Struggle to Learn to Search | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 4 |
| Transformers are Universal In-context Learners | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Transition Path Sampling with Improved Off-Policy Training of Diffusion Path Samplers | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 6 |
| Tree of Attributes Prompt Learning for Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Tree-Wasserstein Distance for High Dimensional Data with a Latent Feature Hierarchy | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Triples as the Key: Structuring Makes Decomposition and Verification Easier in LLM-based TableQA | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Trivialized Momentum Facilitates Diffusion Generative Modeling on Lie Groups | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Truncated Consistency Models | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Trusted Multi-View Classification via Evolutionary Multi-View Fusion | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Tuning Frequency Bias of State Space Models | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Two Effects, One Trigger: On the Modality Gap, Object Bias, and Information Imbalance in Contrastive Vision-Language Models | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Two Sparse Matrices are Better than One: Sparsifying Neural Networks with Double Sparse Factorization | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| TypedThinker: Diversify Large Language Model Reasoning with Typed Thinking | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 1 |
| U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| UGMathBench: A Diverse and Dynamic Benchmark for Undergraduate-Level Mathematical Reasoning with Large Language Models | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| UIFace: Unleashing Inherent Model Capabilities to Enhance Intra-Class Diversity in Synthetic Face Recognition | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| UNIP: Rethinking Pre-trained Attention Patterns for Infrared Semantic Segmentation | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| UNSURE: self-supervised learning with Unknown Noise level and Stein's Unbiased Risk Estimate | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 5 |
| URLOST: Unsupervised Representation Learning without Stationarity or Topology | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| UTILITY: Utilizing Explainable Reinforcement Learning to Improve Reinforcement Learning | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| UV-Attack: Physical-World Adversarial Attacks on Person Detection via Dynamic-NeRF-based UV Mapping | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Ultra-Sparse Memory Network | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Unbounded: A Generative Infinite Game of Character Life Simulation | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Uncertainty Herding: One Active Learning Method for All Label Budgets | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 3 |
| Uncertainty Modeling in Graph Neural Networks via Stochastic Differential Equations | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Uncertainty and Influence aware Reward Model Refinement for Reinforcement Learning from Human Feedback | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 5 |
| Uncertainty modeling for fine-tuned implicit functions | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Uncertainty-Aware Decoding with Minimum Bayes Risk | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Uncovering Gaps in How Humans and LLMs Interpret Subjective Language | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Uncovering Latent Memories in Large Language Models | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Uncovering Overfitting in Large Language Model Editing | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Underdamped Diffusion Bridges with Applications to Sampling | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 4 |
| Understanding Constraint Inference in Safety-Critical Inverse Reinforcement Learning | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | 4 |
| Understanding Factual Recall in Transformers via Associative Memories | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 2 |
| Understanding Long Videos with Multimodal Language Models | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |
| Understanding Optimization in Deep Learning with Central Flows | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Understanding Virtual Nodes: Oversquashing and Node Heterogeneity | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Understanding Warmup-Stable-Decay Learning Rates: A River Valley Loss Landscape View | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 3 |
| Understanding and Enhancing Safety Mechanisms of LLMs via Safety-Specific Neuron | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Understanding and Enhancing the Transferability of Jailbreaking Attacks |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Understanding and Mitigating Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Understanding and Mitigating Hallucination in Large Vision-Language Models via Modular Attribution and Intervention |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Understanding the Generalization of In-Context Learning in Transformers: An Empirical Study |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Understanding the Stability-based Generalization of Personalized Federated Learning |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Unearthing Skill-level Insights for Understanding Trade-offs of Foundation Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
3 |
| Unhackable Temporal Reward for Scalable Video MLLMs |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Uni$^2$Det: Unified and Universal Framework for Prompt-Guided Multi-dataset 3D Detection |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Uni-Sign: Toward Unified Sign Language Understanding at Scale |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| UniCBE: An Uniformity-driven Comparing Based Evaluation Framework with Unified Multi-Objective Optimization |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| UniCO: On Unified Combinatorial Optimization via Problem Reduction to Matrix-Encoded General TSP |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| UniCoTT: A Unified Framework for Structural Chain-of-Thought Distillation |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| UniCon: Unidirectional Information Flow for Effective Control of Large-Scale Diffusion Models |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| UniDetox: Universal Detoxification of Large Language Models via Dataset Distillation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| UniDrive: Towards Universal Driving Perception Across Camera Configurations |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
2 |
| UniGEM: A Unified Approach to Generation and Property Prediction for Molecules |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| UniGS: Unified Language-Image-3D Pretraining with Gaussian Splatting |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| UniMatch: Universal Matching from Atom to Task for Few-Shot Drug Discovery |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| UniRestore3D: A Scalable Framework For General Shape Restoration |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| UniWav: Towards Unified Pre-training for Speech Representation Learning and Generation |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Unified Parameter-Efficient Unlearning for LLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
5 |
| Unify ML4TSP: Drawing Methodological Principles for TSP and Beyond from Streamlined Design Space of Learning and Search |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Unifying Causal Representation Learning with the Invariance Principle |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unifying Unsupervised Graph-Level Anomaly Detection and Out-of-Distribution Detection: A Benchmark |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Union-over-Intersections: Object Detection beyond Winner-Takes-All |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Universal Image Restoration Pre-training via Degradation Classification |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Universal Sharpness Dynamics in Neural Network Training: Fixed Point Analysis, Edge of Stability, and Route to Chaos |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Universal generalization guarantees for Wasserstein distributionally robust models |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
0 |
| Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Unlearning or Obfuscating? Jogging the Memory of Unlearned LLMs via Benign Relearning |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unlearning-based Neural Interpretations |
✅ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Unleashing the Potential of Vision-Language Pre-Training for 3D Zero-Shot Lesion Segmentation via Mask-Attribute Alignment |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-tuning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| Unlocking Global Optimality in Bilevel Optimization: A Pilot Study |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Unlocking Guidance for Discrete State-Space Diffusion and Flow Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Unlocking Point Processes through Point Set Diffusion |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unlocking the Potential of Model Calibration in Federated Learning |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Unsupervised Disentanglement of Content and Style via Variance-Invariance Constraints |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unsupervised Meta-Learning via In-Context Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Unsupervised Model Tree Heritage Recovery |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Unsupervised Multiple Kernel Learning for Graphs via Ordinality Preservation |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Unsupervised Zero-Shot Reinforcement Learning via Dual-Value Forward-Backward Representation |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
6 |
| Unveiling the Secret Recipe: A Guide For Supervised Fine-Tuning Small LLMs |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Utilitarian Algorithm Configuration for Infinite Parameter Spaces |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Utility-Directed Conformal Prediction: A Decision-Aware Framework for Actionable Uncertainty Quantification |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| VAE-Var: Variational Autoencoder-Enhanced Variational Methods for Data Assimilation in Meteorology |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| VCR: A Task for Pixel-Level Complex Reasoning in Vision Language Models via Restoring Occluded Text |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VEDIT: Latent Prediction Architecture For Procedural Video Representation Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| VICtoR: Learning Hierarchical Vision-Instruction Correlation Rewards for Long-horizon Manipulation |
❌ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
4 |
| VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| VL-Cache: Sparsity and Modality-Aware KV Cache Compression for Vision-Language Model Inference Acceleration |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| VLAS: Vision-Language-Action Model with Speech Instructions for Customized Robot Manipulation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| VLMaterial: Procedural Material Generation with Large Vision-Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| VOILA: Evaluation of MLLMs For Perceptual Understanding and Analogical Reasoning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| VTDexManip: A Dataset and Benchmark for Visual-tactile Pretraining and Dexterous Manipulation with Reinforcement Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VVC-Gym: A Fixed-Wing UAV Reinforcement Learning Environment for Multi-Goal Long-Horizon Problems |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Valid Conformal Prediction for Dynamic GNNs |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
5 |
| Value-Incentivized Preference Optimization: A Unified Approach to Online and Offline RLHF |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Value-aligned Behavior Cloning for Offline Reinforcement Learning via Bi-level Optimization |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| Variance-Reducing Couplings for Random Features |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Variational Bayesian Pseudo-Coreset |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Variational Best-of-N Alignment |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Variational Diffusion Posterior Sampling with Midpoint Guidance |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Variational Search Distributions |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Varying Shades of Wrong: Aligning LLMs with Wrong Answers Only |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Vec2Face: Scaling Face Dataset Generation with Loosely Constrained Vectors |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Vector-ICL: In-context Learning with Continuous Vector Representations |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Verifying Properties of Binary Neural Networks Using Sparse Polynomial Optimization |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Vertical Federated Learning with Missing Features During Training and Inference |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
❌ |
3 |
| Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| ViSAGe: Video-to-Spatial Audio Generation |
❌ |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
4 |
| VibeCheck: Discover and Quantify Qualitative Differences in Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Video Action Differencing |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Video In-context Learning: Autoregressive Transformers are Zero-Shot Video Imitators |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |
| VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing |
❌ |
✅ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| VideoPhy: Evaluating Physical Commonsense for Video Generation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| VideoWebArena: Evaluating Long Context Multimodal Agents with Video Understanding Web Tasks |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| VisRAG: Vision-based Retrieval-augmented Generation on Multi-modality Documents |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Vision CNNs trained to estimate spatial latents learned similar ventral-stream-aligned representations |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Vision Language Models are In-Context Value Learners |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
2 |
| Vision and Language Synergy for Rehearsal Free Continual Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Vision-LSTM: xLSTM as Generic Vision Backbone |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Visual Agents as Fast and Slow Thinkers |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Visual-O1: Understanding Ambiguous Instructions via Multi-modal Multi-turn Chain-of-thoughts Reasoning |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning |
✅ |
❌ |
❌ |
✅ |
❌ |
❌ |
✅ |
3 |
| Visually Consistent Hierarchical Image Classification |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Visually Guided Decoding: Gradient-Free Hard Prompt Inversion with Language Models |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| VoxDialogue: Can Spoken Dialogue Systems Understand Information Beyond Words? |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
❌ |
3 |
| W-PCA Based Gradient-Free Proxy for Efficient Search of Lightweight Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Walk the Talk? Measuring the Faithfulness of Large Language Model Explanations |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Ward: Provable RAG Dataset Inference via LLM Watermarks |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| WardropNet: Traffic Flow Predictions via Equilibrium-Augmented Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Warm Diffusion: Recipe for Blur-Noise Mixture Diffusion Models |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Wasserstein Distances, Neuronal Entanglement, and Sparsity |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Wasserstein-Regularized Conformal Prediction under General Distribution Shift |
✅ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
4 |
| Watch Less, Do More: Implicit Skill Discovery for Video-Conditioned Policy |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Watermark Anything With Localized Messages |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| WavTokenizer: an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Wavelet Diffusion Neural Operator |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Wavelet-based Positional Representation for Long Context |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Wayward Concepts In Multimodal Models |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| Weak to Strong Generalization for Large Language Models with Multi-capabilities |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Weak-to-Strong Generalization Through the Data-Centric Lens |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
5 |
| Weak-to-Strong Preference Optimization: Stealing Reward from Weak Aligned Model |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Weakly Supervised Video Scene Graph Generation via Natural Language Supervision |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| Weakly-Supervised Affordance Grounding Guided by Part-Level Semantic Priors |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| WeatherGFM: Learning a Weather Generalist Foundation Model via In-context Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Weighted Multi-Prompt Learning with Description-free Large Language Model Distillation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Weighted-Reward Preference Optimization for Implicit Model Fusion |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| What Are Good Positional Encodings for Directed Graphs? |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| What Do You See in Common? Learning Hierarchical Prototypes over Tree-of-Life to Discover Evolutionary Traits |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis |
❌ |
✅ |
❌ |
❌ |
❌ |
❌ |
✅ |
2 |
| What Makes Large Language Models Reason in (Multi-Turn) Code Generation? |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| What Makes a Good Diffusion Planner for Decision Making? |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| What Makes a Maze Look Like a Maze? |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| What Matters When Repurposing Diffusion Models for General Dense Perception Tasks? |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| What Matters in Learning from Large-Scale Datasets for Robot Manipulation |
❌ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
3 |
| What Secrets Do Your Manifolds Hold? Understanding the Local Geometry of Generative Models |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| What is Wrong with Perplexity for Long-context Language Modeling? |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| What should a neuron aim for? Designing local objective functions based on information theory |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| What to align in multimodal contrastive learning? |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| What's New in My Data? Novelty Exploration via Contrastive Generation |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| What's the Move? Hybrid Imitation Learning via Salient Points |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| When Attention Sink Emerges in Language Models: An Empirical View |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| When GNNs meet symmetry in ILPs: an orbit-based feature augmentation approach |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| When Graph Neural Networks Meet Dynamic Mode Decomposition |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| When LLMs Play the Telephone Game: Cultural Attractors as Conceptual Tools to Evaluate LLMs in Multi-turn Settings |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| When Prompt Engineering Meets Software Engineering: CNL-P as Natural and Robust "APIs'' for Human-AI Interaction |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| When Selection Meets Intervention: Additional Complexities in Causal Discovery |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| When do GFlowNets learn the right distribution? |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| When does compositional structure yield compositional generalization? A kernel theory. |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| When narrower is better: the narrow width limit of Bayesian parallel branching neural networks |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Where Am I and What Will I See: An Auto-Regressive Model for Spatial Localization and View Prediction |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| Which Tasks Should Be Compressed Together? A Causal Discovery Approach for Efficient Multi-Task Representation Compression |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Why Does the Effective Context Length of LLMs Fall Short? |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Why In-Context Learning Models are Good Few-Shot Learners? |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse |
❌ |
❌ |
✅ |
❌ |
❌ |
❌ |
✅ |
2 |
| WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild |
❌ |
✅ |
✅ |
❌ |
❌ |
❌ |
✅ |
3 |
| WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Words in Motion: Extracting Interpretable Control Vectors for Motion Transformers |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| WorkflowLLM: Enhancing Workflow Orchestration Capability of Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| World Model on Million-Length Video And Language With Blockwise RingAttention |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| X-Drive: Cross-modality Consistent Multi-Sensor Data Synthesis for Driving Scenarios |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| X-Fi: A Modality-Invariant Foundation Model for Multimodal Human Sensing |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| X-NeMo: Expressive Neural Motion Reenactment via Disentangled Latent Attention |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| XAIguiFormer: explainable artificial intelligence guided transformer for brain disorder identification |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| YOLO-RD: Introducing Relevant and Compact Explicit Knowledge to YOLO by Retriever-Dictionary |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| You Only Sample Once: Taming One-Step Text-to-Image Synthesis by Self-Cooperative Diffusion GANs |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| YouTube-SL-25: A Large-Scale, Open-Domain Multilingual Sign Language Parallel Corpus |
❌ |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
5 |
| Youku Dense Caption: A Large-scale Chinese Video Dense Caption Dataset and Benchmarks |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| Your Mixture-of-Experts LLM Is Secretly an Embedding Model for Free |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| Your Weak LLM is Secretly a Strong Teacher for Alignment |
❌ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
6 |
| ZAPBench: A Benchmark for Whole-Brain Activity Prediction in Zebrafish |
❌ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ZETA: Leveraging $Z$-order Curves for Efficient Top-$k$ Attention |
✅ |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
4 |
| ZIP: An Efficient Zeroth-order Prompt Tuning for Black-box Vision-Language Models |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Zero-Shot Natural Language Explanations |
❌ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
4 |
| Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Zero-cost Proxy for Adversarial Robustness Evaluation |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| Zero-shot Imputation with Foundation Inference Models for Dynamical Systems |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| Zero-shot Model-based Reinforcement Learning using Large Language Models |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| Zero-shot forecasting of chaotic systems |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| ZeroDiff: Solidified Visual-semantic Correlation in Zero-Shot Learning |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| Zeroth-Order Fine-Tuning of LLMs with Transferable Static Sparsity |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
✅ |
7 |
| Zeroth-Order Policy Gradient for Reinforcement Learning from Human Feedback without Reward Inference |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| ZooProbe: A Data Engine for Evaluating, Exploring, and Evolving Large-scale Training Data for Multimodal LLMs |
❌ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
3 |
| cryoSPHERE: Single-Particle HEterogeneous REconstruction from cryo EM |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation |
✅ |
❌ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| eQMARL: Entangled Quantum Multi-Agent Reinforcement Learning for Distributed Cooperation over Quantum Channels |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| econSG: Efficient and Multi-view Consistent Open-Vocabulary 3D Semantic Gaussians |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| gRNAde: Geometric Deep Learning for 3D RNA inverse design |
✅ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
6 |
| h4rm3l: A Language for Composable Jailbreak Attack Synthesis |
✅ |
✅ |
✅ |
✅ |
❌ |
❌ |
✅ |
5 |
| kNN Attention Demystified: A Theoretical Exploration for Scalable Transformers |
✅ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
5 |
| mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| metabench - A Sparse Benchmark of Reasoning and Knowledge in Large Language Models |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| miniCTX: Neural Theorem Proving with (Long-)Contexts |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
✅ |
5 |
| nGPT: Normalized Transformer with Representation Learning on the Hypersphere |
❌ |
✅ |
✅ |
❌ |
✅ |
❌ |
✅ |
4 |
| pMoE: Prompting Diverse Experts Together Wins More in Visual Adaptation |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| qNBO: quasi-Newton Meets Bilevel Optimization |
✅ |
❌ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| u-$\mu$P: The Unit-Scaled Maximal Update Parametrization |
✅ |
❌ |
✅ |
❌ |
✅ |
✅ |
✅ |
5 |
| uniINF: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed MABs |
✅ |
❌ |
❌ |
❌ |
❌ |
❌ |
❌ |
1 |
| xFinder: Large Language Models as Automated Evaluators for Reliable Evaluation |
❌ |
✅ |
✅ |
✅ |
✅ |
❌ |
✅ |
5 |
| {$\tau$}-bench: A Benchmark for \underline{T}ool-\underline{A}gent-\underline{U}ser Interaction in Real-World Domains |
❌ |
✅ |
✅ |
❌ |
❌ |
✅ |
✅ |
4 |