HyperIV: Real-time Implied Volatility Smoothing

Authors: Yongxin Yang, Wenqi Chen, Chao Shu, Timothy Hospedales

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments across 8 index options demonstrate that HyperIV achieves superior accuracy compared to existing methods while maintaining computational efficiency."
Researcher Affiliation | Academia | "¹Queen Mary University of London, ²University of Edinburgh. Correspondence to: Yongxin Yang <EMAIL>."
Pseudocode | Yes | "The PyTorch implementation is shown below:"

    iv_network = torch.nn.Sequential(
        torch.nn.Linear(2, 16), torch.nn.Tanh(),
        torch.nn.Linear(16, 16), torch.nn.Tanh(),
        torch.nn.Linear(16, 1), torch.nn.Softplus()
    )

    class SetEmbeddingNetwork(nn.Module):
        def __init__(self, input_dim, output_dim, num_heads=2, num_layers=2, hidden_dim=128):
            super(SetEmbeddingNetwork, self).__init__()
            self.fc1 = nn.Linear(input_dim, hidden_dim)
            self.attention_layers = nn.ModuleList(
                [nn.TransformerEncoderLayer(
                    d_model=hidden_dim, nhead=num_heads,
                    dim_feedforward=hidden_dim, batch_first=True,
                    dropout=0, activation="relu")
                 for _ in range(num_layers)])
            self.fc2 = nn.Linear(hidden_dim, output_dim)

        def forward(self, x):
            x = self.fc1(x)
            for layer in self.attention_layers:
                x = layer(x)
            x = x.mean(dim=1)
            x = self.fc2(x)
            return x
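As a quick sanity check (a sketch, not from the paper), the small IV head quoted above can be rebuilt and run standalone; the Softplus output layer guarantees strictly positive predictions. The interpretation of the two inputs as hypothetical query points (e.g. moneyness and time-to-maturity) is an assumption for illustration.

```python
import torch
import torch.nn as nn

# Rebuild the IV head quoted in the paper's snippet.
iv_network = nn.Sequential(
    nn.Linear(2, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 1), nn.Softplus(),  # Softplus keeps outputs > 0
)

x = torch.randn(5, 2)   # 5 hypothetical query points, made-up values
iv = iv_network(x)
print(tuple(iv.shape), bool((iv > 0).all()))  # (5, 1) True
```

Note that because Softplus maps any real number to a positive one, the positivity of the predicted volatilities holds regardless of the (randomly initialized) weights.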
Open Source Code | Yes | "We make code available at https://github.com/qmfin/hyperiv."
Open Datasets | No | "Due to licence restrictions, we cannot redistribute the data used for model training. Academic researchers may access the end-of-day data through their institution's subscription to WRDS (which includes OptionMetrics)."
Dataset Splits | Yes | "For the one-minute interval data, we train the model using data before 2023-08-01 and test on the subsequent intervals. For the one-day data, we train the model using data before 2023-01-01 and test on the remaining intervals."
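The split described above is a simple chronological (walk-forward) cut rather than a random shuffle. A minimal sketch of the one-minute cut-off, using made-up placeholder dates (only the 2023-08-01 boundary comes from the text):

```python
from datetime import date

# Placeholder interval dates; the real data is one-minute option-chain
# snapshots, which we cannot redistribute here.
records = [date(2022, 6, 1), date(2023, 7, 31),
           date(2023, 8, 1), date(2023, 9, 1)]

cutoff = date(2023, 8, 1)  # one-minute data: train strictly before this date
train = [d for d in records if d < cutoff]
test = [d for d in records if d >= cutoff]
print(len(train), len(test))  # 2 2
```

A chronological split like this avoids look-ahead leakage, which random splits would introduce in a market-data setting.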
Hardware Specification | Yes | "All training and testing procedures are executed on an NVIDIA A100 GPU with 80GB VRAM, with the exception of SSVI, which runs on an Intel Xeon CPU."
Software Dependencies | No | "The PyTorch implementation is shown below: iv_network = torch.nn.Sequential(...)" (This indicates PyTorch is used, but no version number is given for PyTorch or any other software dependency.)
Experiment Setup | Yes | "The total training duration spans 500 epochs, with each epoch processing all intervals in batches of 128. ... We found that a simple multi-layer perceptron (MLP) with two hidden layers, each containing 16 neurons, performs well after initial experimentation. ... The architectures of gθ(·) and hω(·) are described in detail in Appendix A."
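The stated regime (500 epochs, batches of 128) can be sketched as a standard PyTorch loop. The toy data, the MSE loss, and the Adam optimizer below are placeholder assumptions for illustration, not details confirmed by the paper; only the epoch count, batch size, and the 2×16 MLP shape come from the text.

```python
import torch

# Stand-in model matching the stated MLP size (two hidden layers of 16 units).
model = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 1), torch.nn.Softplus(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer/lr assumed

# Placeholder data: random inputs and positive targets standing in for the
# real option-chain intervals.
X, y = torch.randn(1024, 2), torch.rand(1024, 1)

epochs, batch_size = 500, 128  # values stated in the paper
for epoch in range(epochs):
    perm = torch.randperm(len(X))          # reshuffle each epoch
    for i in range(0, len(X), batch_size):
        idx = perm[i:i + batch_size]
        loss = torch.nn.functional.mse_loss(model(X[idx]), y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
print(bool(torch.isfinite(loss)))  # True
```

In the actual method the per-interval batch would feed the set-embedding network to produce the hypernetwork's conditioning input, a step this placeholder loop omits.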