Revisiting Adversarial Training at Scale

Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training

FedConv: Enhancing Convolutional Neural Networks for Handling Data Heterogeneity in Federated Learning

DistillBEV: Boosting Multi-Camera 3D Object Detection with Cross-Modal Knowledge Distillation

An Inverse Scaling Law for CLIP Training

Masked Autoencoders Enable Efficient Knowledge Distillers

Can CNNs Be More Robust Than Transformers?

SiamFC++: Towards Robust and Accurate Visual Tracking with Target Estimation Guidelines