Bar-Ilan Faculty of Engineering

2548 articles

52 publishers

Dec 2025 • arXiv preprint arXiv:2212.02459

Resilient distributed optimization for multi-agent cyberphysical systems

Michal Yemini, Angelia Nedić, Andrea J Goldsmith, Stephanie Gil

Enhancing resilience in distributed networks in the face of malicious agents is an important problem for which many key theoretical results and applications require further development and characterization. This work focuses on the problem of distributed optimization in multi-agent cyberphysical systems, where a legitimate agent's dynamics are influenced both by the values it receives from potentially malicious neighboring agents and by its own self-serving target function. We develop a new algorithmic and analytical framework to achieve resilience for the class of problems where stochastic values of trust between agents exist and can be exploited. In this case, we show that convergence to the true global optimal point can be recovered, both in mean and almost surely, even in the presence of malicious agents. Furthermore, we provide expected convergence rate guarantees in the form of upper bounds on the expected squared distance to the optimal value. Finally, we present numerical results that validate our analytical convergence guarantees even when malicious agents constitute the majority of agents in the network.

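The trust-based update lends itself to a compact numerical sketch. The snippet below is only an illustration of the idea under simplifying assumptions (a fully connected network, scalar states, quadratic local costs, and hypothetical trust probabilities of 0.7/0.3), not the paper's algorithm: legitimate agents average the values of neighbors whose accumulated stochastic trust evidence is positive and then take a diminishing-step gradient step on their own cost, while malicious agents broadcast a constant outlier.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's algorithm or parameters): 6 legitimate agents
# with quadratic local costs f_i(x) = (x - a_i)^2 and 4 malicious agents that always
# broadcast a large constant. Trust observations are stochastic: Bernoulli with mean
# above 1/2 for legitimate neighbors and below 1/2 for malicious ones.
n_leg, n_mal = 6, 4
targets = rng.uniform(-1.0, 1.0, n_leg)        # a_i; the global optimum is their mean
x = np.zeros(n_leg)                            # legitimate agents' states
mal_value = 10.0                               # value the malicious agents broadcast
trust_sums = np.zeros((n_leg, n_leg + n_mal))  # accumulated trust evidence per neighbor

for t in range(1, 2001):
    obs = np.concatenate([rng.binomial(1, 0.7, (n_leg, n_leg)),
                          rng.binomial(1, 0.3, (n_leg, n_mal))], axis=1)
    trust_sums += obs - 0.5                    # positive values accumulate for trustworthy peers
    vals = np.concatenate([x, np.full(n_mal, mal_value)])   # values every agent hears
    step = 1.0 / t                             # diminishing step size
    new_x = np.empty_like(x)
    for i in range(n_leg):
        trusted = trust_sums[i] > 0            # keep neighbors with positive trust evidence
        trusted[i] = True                      # an agent always trusts its own value
        grad = 2.0 * (x[i] - targets[i])       # gradient of the local cost
        new_x[i] = vals[trusted].mean() - step * grad
    x = new_x

print("agent states:", np.round(x, 3))
print("global optimum (mean of targets):", round(float(targets.mean()), 3))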

Dec 2025 • arXiv preprint arXiv:2412.18234

Conditional Deep Canonical Time Warping

Afek Steinberg, Ran Eisenberg, Ofir Lindenbaum

Temporal alignment of sequences is a fundamental challenge in many applications, such as computer vision and bioinformatics, where local time shifting needs to be accounted for. Misalignment can lead to poor model generalization, especially in high-dimensional sequences. Existing methods often struggle with optimization when dealing with high-dimensional sparse data and settle for poor alignments. Feature selection is frequently used to enhance model performance for sparse data. However, a fixed set of selected features would not generally work for dynamically changing sequences and would need to be modified based on the state of the sequence. Therefore, modifying the selected features based on contextual input would result in better alignment. Our suggested method, Conditional Deep Canonical Time Warping (CDCTW), is designed for temporal alignment in sparse temporal data to address these challenges. CDCTW enhances alignment accuracy for high-dimensional time-dependent views by performing dynamic time warping on data embedded in a maximally correlated subspace, handling sparsity with a novel feature selection method. We validate the effectiveness of CDCTW through extensive experiments on various datasets, demonstrating superior performance over previous techniques.

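The alignment backbone referenced here is dynamic time warping; a minimal NumPy implementation is sketched below, operating on two already-embedded sequences. The random sequences stand in for the learned maximally correlated, feature-selected embedding, which is not reproduced.

import numpy as np

def dtw(a, b):
    """Classic dynamic time warping between sequences a (n, d) and b (m, d).
    Returns the cumulative alignment cost and the warping path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack to recover the path of aligned index pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Placeholder embedded views; in CDCTW these would come from the learned subspace.
rng = np.random.default_rng(0)
x = rng.normal(size=(40, 3))
y = np.vstack([x[:20], x[10:]]) + 0.05 * rng.normal(size=(50, 3))  # locally shifted copy
score, path = dtw(x, y)
print(f"alignment cost: {score:.2f}, path length: {len(path)}")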

Dec 2025 • arXiv preprint arXiv:2412.20596

Zero-Shot Image Restoration Using Few-Step Guidance of Consistency Models (and Beyond)

Tomer Garber, Tom Tirer

In recent years, it has become popular to tackle image restoration tasks with a single pretrained diffusion model (DM) and data-fidelity guidance, instead of training a dedicated deep neural network per task. However, such "zero-shot" restoration schemes currently require many Neural Function Evaluations (NFEs) for performing well, which may be attributed to the many NFEs needed in the original generative functionality of the DMs. Recently, faster variants of DMs have been explored for image generation. These include Consistency Models (CMs), which can generate samples via a couple of NFEs. However, existing works that use guided CMs for restoration still require tens of NFEs or fine-tuning of the model per task, which leads to a performance drop if the assumptions made during fine-tuning are not accurate. In this paper, we propose a zero-shot restoration scheme that uses CMs and operates well with as few as 4 NFEs. It is based on a wise combination of several ingredients: better initialization, back-projection guidance, and above all a novel noise injection mechanism. We demonstrate the advantages of our approach for image super-resolution, deblurring and inpainting. Interestingly, we show that the usefulness of our noise injection technique goes beyond CMs: it can also mitigate the performance degradation of existing guided DM methods when reducing their NFE count.

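The back-projection guidance ingredient can be written in a few lines for a linear observation model y = A x. The sketch below uses a noiseless 2x subsampling operator on a 1-D signal and a moving-average smoother as a stand-in for the pretrained consistency model, so it only illustrates the guidance step, not the initialization scheme or the noise injection mechanism.

import numpy as np

rng = np.random.default_rng(0)

# Observation model y = A x: 2x subsampling of a 1-D signal.
n = 256
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sign(np.sin(2 * np.pi * 11 * t))
A = np.eye(n)[::2]                  # keep every second sample
y = A @ x_true

def denoiser_stub(x):
    # Placeholder for a pretrained consistency/diffusion model:
    # a mild moving-average smoother (hypothetical stand-in).
    k = np.ones(5) / 5.0
    return np.convolve(x, k, mode="same")

# Few-step restoration loop with back-projection guidance: after each "denoising"
# step, project the estimate back onto the set {x : A x = y}.
x = A.T @ y                         # initialization consistent with the measurements
for _ in range(4):                  # as few as 4 steps, mirroring the low-NFE regime
    z = denoiser_stub(x)
    x = z - A.T @ np.linalg.solve(A @ A.T, A @ z - y)   # back-projection onto A x = y

print("measurement consistency:", np.max(np.abs(A @ x - y)))
print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))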

Nov 2025 • arXiv preprint arXiv:1811.12369

Small hazard-free transducers

Johannes Bund, Christoph Lenzen, Moti Medina


Oct 2025 • arXiv preprint arXiv:2410.17881

AdaRankGrad: Adaptive Gradient-Rank and Moments for Memory-Efficient LLMs Training and Fine-Tuning

Yehonathan Refael, Jonathan Svirsky, Boris Shustin, Wasim Huleihel, Ofir Lindenbaum

Training and fine-tuning large language models (LLMs) come with challenges related to memory and computational requirements due to the increasing size of the model weights and the optimizer states. Various techniques have been developed to tackle these challenges, such as low-rank adaptation (LoRA), which involves introducing a parallel trainable low-rank matrix to the fixed pre-trained weights at each layer. However, these methods often fall short compared to the full-rank weight training approach, as they restrict the parameter search to a low-rank subspace. This limitation can disrupt training dynamics and require a full-rank warm start to mitigate the impact. In this paper, we introduce a new method inspired by a phenomenon we formally prove: as training progresses, the rank of the estimated layer gradients gradually decreases and asymptotically approaches rank one. Leveraging this, our approach adaptively reduces the rank of the gradients during Adam optimization steps, using an efficient online rule for updating the low-rank projections. We further present a randomized SVD scheme for efficiently finding the projection matrix. Our technique enables full-parameter fine-tuning with adaptive low-rank gradient updates, significantly reducing overall memory requirements during training compared to state-of-the-art methods while improving model performance in both pretraining and fine-tuning. Finally, we provide a convergence analysis of our method and demonstrate its merits for training and fine-tuning language and biological foundation models.

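The memory-saving mechanics (keeping the Adam moments only in a low-rank gradient subspace obtained with a randomized range finder) can be sketched in NumPy. Everything below, including the rank, the refresh period, and the toy least-squares layer, is a hypothetical choice for illustration and does not reproduce the paper's adaptive rank schedule or its convergence guarantees.

import numpy as np

rng = np.random.default_rng(0)

def randomized_range(G, r, oversample=5):
    """Approximate an orthonormal basis of the top-r column space of G
    via a randomized range finder (a simple randomized-SVD building block)."""
    Y = G @ rng.normal(size=(G.shape[1], r + oversample))
    Q, _ = np.linalg.qr(Y)
    return Q[:, :r]                                # m x r projection basis

# Toy problem: fit W to noiseless linear measurements (a stand-in for one layer).
m, n, r = 64, 32, 4
W_true = rng.normal(size=(m, 4)) @ rng.normal(size=(4, n))    # low-rank target
X = rng.normal(size=(n, 256))
Y = W_true @ X
W = np.zeros((m, n))

# Adam moments live only in the r-dimensional projected space (r x n instead of m x n),
# which is where the memory saving comes from.
m1 = np.zeros((r, n))
m2 = np.zeros((r, n))
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8
P = None

for step in range(1, 501):
    G = (W @ X - Y) @ X.T / X.shape[1]             # gradient of 0.5 * ||W X - Y||^2 / N
    if step % 50 == 1:                             # periodically refresh the projection
        P = randomized_range(G, r)
        m1[:], m2[:] = 0.0, 0.0                    # reset moments in the new subspace
    g = P.T @ G                                    # r x n projected gradient
    m1 = beta1 * m1 + (1 - beta1) * g
    m2 = beta2 * m2 + (1 - beta2) * g**2
    m1_hat = m1 / (1 - beta1**step)
    m2_hat = m2 / (1 - beta2**step)
    W -= lr * (P @ (m1_hat / (np.sqrt(m2_hat) + eps)))   # map the update back to full size

print("relative error:", np.linalg.norm(W - W_true) / np.linalg.norm(W_true))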

Sep 2025 • Optics & Laser Technology

Cascade time-lens

Sara Meir, Hamootal Duadi, Yuval Tamir, Moti Fridman

Temporal optics arises from the equivalence between light diffraction in free space and pulse dispersion in dispersive media, paving the way for the development of temporal devices and applications, such as time-lenses. A four-wave mixing (FWM) based time-lens allows single-shot measurements of ultrashort signals with high temporal resolution by imaging signals and inducing a temporal Fourier transform. We introduce a cascade time-lens by utilizing a cascade FWM process within the time-lens. We theoretically develop and experimentally demonstrate the cascade time-lens, and confirm that different cascade orders correspond to different effective temporal systems, enabling measurements in various temporal imaging configurations simultaneously with a single optical setup. This approach can simplify experiments and provide a more comprehensive view of a signal’s phase and temporal structure. Such capabilities are …

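The space-time duality the abstract builds on can be demonstrated numerically: dispersion is a quadratic spectral phase (the temporal analogue of Fresnel diffraction), and an ideal time-lens is a quadratic temporal phase. The sketch below chains dispersion, lens, and dispersion under the temporal imaging condition, in arbitrary units and with hypothetical parameter values; it does not model the FWM process (let alone the cascade FWM process) itself.

import numpy as np

# Time and frequency grids (arbitrary units).
N, dt = 4096, 0.01
t = (np.arange(N) - N // 2) * dt
omega = 2 * np.pi * np.fft.fftfreq(N, dt)

def disperse(field, phi2):
    """Dispersion as a quadratic spectral phase, the temporal analogue of
    free-space (Fresnel) diffraction; phi2 plays the role of accumulated dispersion."""
    return np.fft.ifft(np.fft.fft(field) * np.exp(-0.5j * phi2 * omega**2))

def time_lens(field, phi2_f):
    """Ideal time-lens: a quadratic temporal phase with 'focal' dispersion phi2_f,
    the temporal analogue of a thin lens."""
    return field * np.exp(-0.5j * t**2 / phi2_f)

# Input: two short pulses as a simple test signal.
signal = np.exp(-((t + 0.8) / 0.2) ** 2) + 0.7 * np.exp(-((t - 0.5) / 0.2) ** 2)
signal = signal.astype(complex)

# Temporal imaging condition 1/D_in + 1/D_out = 1/D_f (hypothetical values);
# the output intensity is then a temporally magnified (and reversed) copy of the input.
D_in, D_f = 0.15, 0.1
D_out = 1.0 / (1.0 / D_f - 1.0 / D_in)
out = disperse(time_lens(disperse(signal, D_in), D_f), D_out)

print("expected temporal magnification:", -D_out / D_in)
print("energy preserved:", np.isclose(np.sum(np.abs(out)**2), np.sum(np.abs(signal)**2)))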

Jul 2025 • arXiv preprint arXiv:2407.01779

peerRTF: Robust MVDR Beamforming Using Graph Convolutional Network

Amit Sofer, Daniel Levi, Sharon Gannot

Accurate and reliable identification of the relative transfer function (RTF) between microphones with respect to a desired source is an essential component in the design of microphone array beamformers, specifically those based on the minimum variance distortionless response (MVDR) criterion. Since an accurate estimation of the RTF in a noisy and reverberant environment is a cumbersome task, we aim to leverage prior knowledge of the acoustic enclosure to robustify the RTF estimation by learning the RTF manifold. In this paper, we present a novel robust RTF identification method, tested and trained with real recordings, which relies on learning the RTF manifold using a graph convolutional network (GCN) to infer a robust representation of the RTF in a confined area, and consequently enhance the beamformer's performance.

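The MVDR beamformer that the estimated RTF feeds into has a closed form, w = Phi^{-1} d / (d^H Phi^{-1} d), where Phi is the noise covariance matrix and d is the RTF steering vector. A minimal NumPy version with synthetic placeholder quantities is shown below; the GCN-based RTF manifold learning itself is not reproduced.

import numpy as np

rng = np.random.default_rng(0)

def mvdr_weights(noise_cov, rtf):
    """MVDR beamformer weights w = Phi^{-1} d / (d^H Phi^{-1} d),
    which guarantee a distortionless response toward the RTF direction d."""
    sol = np.linalg.solve(noise_cov, rtf)
    return sol / (rtf.conj() @ sol)

# Synthetic example for one frequency bin: 6 microphones, a random RTF
# (reference microphone set to 1) and a noise covariance estimated from
# placeholder noise snapshots. These stand in for estimated quantities.
n_mics = 6
rtf = np.concatenate(([1.0 + 0.0j],
                      rng.normal(size=n_mics - 1) + 1j * rng.normal(size=n_mics - 1)))
noise = rng.normal(size=(n_mics, 200)) + 1j * rng.normal(size=(n_mics, 200))
noise_cov = noise @ noise.conj().T / 200 + 1e-3 * np.eye(n_mics)

w = mvdr_weights(noise_cov, rtf)
print("distortionless constraint w^H d:", np.round(w.conj() @ rtf, 6))   # should be 1
print("beamformer output noise power:", float(np.real(w.conj() @ noise_cov @ w)))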

Jun 2025 • arXiv preprint arXiv:2406.02105

Can Kernel Methods Explain How the Data Affects Neural Collapse?

Vignesh Kothapalli, Tom Tirer

Recently, a vast amount of literature has focused on the "Neural Collapse" (NC) phenomenon, which emerges when training neural network (NN) classifiers beyond the zero training error point. The core component of NC is the decrease in the within class variability of the network's deepest features, dubbed as NC1. The theoretical works that study NC are typically based on simplified unconstrained features models (UFMs) that mask any effect of the data on the extent of collapse. In this paper, we provide a kernel-based analysis that does not suffer from this limitation. First, given a kernel function, we establish expressions for the traces of the within- and between-class covariance matrices of the samples' features (and consequently an NC1 metric). Then, we turn to focus on kernels associated with shallow NNs. First, we consider the NN Gaussian Process kernel (NNGP), associated with the network at initialization, and the complement Neural Tangent Kernel (NTK), associated with its training in the "lazy regime". Interestingly, we show that the NTK does not represent more collapsed features than the NNGP for prototypical data models. As NC emerges from training, we then consider an alternative to NTK: the recently proposed adaptive kernel, which generalizes NNGP to model the feature mapping learned from the training data. Contrasting our NC1 analysis for these two kernels enables gaining insights into the effect of data distribution on the extent of collapse, which are empirically aligned with the behavior observed with practical training of NNs.

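The NC1-type quantity mentioned here, the ratio of the traces of the within- and between-class feature covariances, depends on the features only through inner products, so it can be computed from a kernel matrix alone. The sketch below does this for balanced synthetic classes with an RBF kernel as a stand-in; the NNGP, NTK, and adaptive kernels analyzed in the paper are not reproduced.

import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, gamma=0.05):
    # Stand-in kernel; the paper analyzes NNGP, NTK and adaptive kernels instead.
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def nc1_from_kernel(K, labels):
    """Tr(Sigma_W) / Tr(Sigma_B) computed only from kernel evaluations, assuming
    balanced classes (so the global mean equals the mean of the class means)."""
    classes = np.unique(labels)
    tr_w, class_block_means = 0.0, []
    for c in classes:
        idx = np.where(labels == c)[0]
        Kc = K[np.ix_(idx, idx)]
        tr_w += Kc.trace() / len(idx) - Kc.mean()   # average ||phi_i - mu_c||^2 in class c
        class_block_means.append(Kc.mean())         # ||mu_c||^2 in feature space
    tr_w /= len(classes)
    tr_b = np.mean(class_block_means) - K.mean()    # average ||mu_c - mu_G||^2 over classes
    return tr_w / tr_b

# Well-separated classes give a much smaller NC1 value than overlapping ones.
n = 200
y = np.array([0] * n + [1] * n)
X_far = np.vstack([rng.normal(loc=-2.0, size=(n, 5)), rng.normal(loc=2.0, size=(n, 5))])
X_near = np.vstack([rng.normal(loc=-0.2, size=(n, 5)), rng.normal(loc=0.2, size=(n, 5))])
print("NC1 (separated classes): ", nc1_from_kernel(rbf_kernel(X_far), y))
print("NC1 (overlapping classes):", nc1_from_kernel(rbf_kernel(X_near), y))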

Jun 2025 • SIAM Journal on Discrete Mathematics

On Bipartite Graph Realizations of a Single Degree Sequence

Amotz Bar-Noy, Toni Böhnlein, David Peleg, Dror Rawitz

We consider the problem of characterizing degree sequences that can be realized by a bipartite graph. If a partition of the sequence into the two sides of the bipartite graph is given as part of the input, then there is a complete characterization that was established more than 60 years ago. However, the general question, in which a partition and a realizing graph need to be determined, is still open. We investigate the role of an important class of special partitions, called High-Low partitions, which separate the degrees of a sequence into two groups, the high degrees and the low degrees. We show that when the High-Low partition exists and satisfies some natural properties, analyzing the High-Low partition resolves the bigraphic realization problem. For sequences that are known to be not realizable by a bipartite graph or that are undecided, we provide approximate realizations based on the High-Low partition.

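The complete characterization for the case where the bipartition is given as part of the input (established more than 60 years ago) is the Gale-Ryser theorem; a short checker is sketched below. The High-Low partition analysis for the unpartitioned problem, which is this paper's contribution, is not reproduced.

def is_bigraphic(a, b):
    """Gale-Ryser test: can degree sequences a (one side) and b (other side)
    be realized by a simple bipartite graph with this bipartition?"""
    a = sorted(a, reverse=True)
    if sum(a) != sum(b):
        return False
    for k in range(1, len(a) + 1):
        if sum(a[:k]) > sum(min(d, k) for d in b):
            return False
    return True

# Sums match and the Gale-Ryser inequalities hold, so a realization exists.
print(is_bigraphic([3, 3, 2], [2, 2, 2, 2]))   # True
# Sums match, but a vertex of degree 3 cannot exist when the other side has only 2 vertices.
print(is_bigraphic([3, 3], [3, 1, 1, 1]))      # False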

Jun 2025 • arXiv preprint arXiv:2406.03272

Multi-Microphone Speech Emotion Recognition Using the Hierarchical Token-Semantic Audio Transformer Architecture

Ohad Cohen, Gershon Hazan, Sharon Gannot

Most emotion recognition systems fail in real-life situations ("in the wild" scenarios) where the audio is contaminated by reverberation. Our study explores new methods to alleviate the performance degradation of Speech Emotion Recognition (SER) algorithms and develop a more robust system for adverse conditions. We propose processing multi-microphone signals to address these challenges and improve emotion classification accuracy. We adopt a state-of-the-art transformer model, the Hierarchical Token-semantic Audio Transformer (HTS-AT), to handle multi-channel audio inputs. We evaluate two strategies: averaging mel-spectrograms across channels and summing patch-embedded representations. Our multi-microphone model achieves superior performance compared to single-channel baselines when tested on real-world reverberant environments.

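The two channel-fusion strategies are easy to contrast in code. The sketch below uses random tensors in place of mel-spectrograms and a linear projection in place of the HTS-AT patch embedding; all shapes and the embedding itself are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

channels, n_mels, n_frames, patch, dim = 4, 64, 256, 16, 96
mel = rng.normal(size=(channels, n_mels, n_frames))   # stand-in multi-channel mel-spectrograms

def patch_embed(spec, W):
    """Split a (n_mels, n_frames) spectrogram into non-overlapping patch x patch tiles
    and project each flattened tile with W (a stand-in for the transformer's patch embedding)."""
    h, w = spec.shape[0] // patch, spec.shape[1] // patch
    tiles = spec[:h * patch, :w * patch].reshape(h, patch, w, patch).transpose(0, 2, 1, 3)
    return tiles.reshape(h * w, patch * patch) @ W    # (num_patches, dim) token sequence

W = rng.normal(size=(patch * patch, dim)) / np.sqrt(patch * patch)

# Strategy 1: average mel-spectrograms across channels, then embed once.
tokens_avg = patch_embed(mel.mean(axis=0), W)

# Strategy 2: embed each channel separately, then sum the patch-embedded representations.
tokens_sum = sum(patch_embed(mel[c], W) for c in range(channels))

print("strategy 1 tokens:", tokens_avg.shape, " strategy 2 tokens:", tokens_sum.shape)
# With a purely linear embedding the two differ only by a factor of `channels`;
# with the real (non-linear) transformer front-end they are genuinely different.
print("linear-case check:", np.allclose(tokens_sum, channels * tokens_avg))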

Jun 2025 • Journal of Computer and System Sciences 148, 103588, 2025

Approximate realizations for outerplanaric degree sequences

Amotz Bar-Noy, Toni Böhnlein, David Peleg, Yingli Ran, Dror Rawitz

We study the question of whether a sequence d of positive integers is the degree sequence of some outerplanar (a.k.a. 1-page book embeddable) graph G. If so, G is an outerplanar realization of d and d is an outerplanaric sequence. The case where the sum of d is at most 2n - 2 (with n the length of d) is easy, as d has a realization by a forest (which is trivially an outerplanar graph). In this paper, we consider the family of all sequences d of even sum between 2n and 4n - 6 - 2ω_1, where ω_x is the number of x's in d. (The second inequality is a necessary condition for a sequence with sum at least 2n to be outerplanaric.) We partition this family into two disjoint subfamilies, such that every sequence in the first is provably non-outerplanaric, and every sequence in the second is given a realizing graph G enjoying a 2-page book embedding (and moreover, one of the pages is also bipartite).

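The two bounds quoted in the abstract give a quick necessary-condition screen for a candidate degree sequence, sketched below. Passing the screen does not certify outerplanarity; deciding the remaining "candidate" cases is what the paper's partition argument addresses.

def outerplanaric_screen(d):
    """Screen a sequence d of positive integers: returns 'forest' when a forest
    realization exists (even sum at most 2n - 2), 'fails' when an easy necessary
    condition for outerplanarity is violated, and 'candidate' otherwise."""
    n, s = len(d), sum(d)
    if s % 2 == 1 or min(d) < 1:
        return "fails"
    if s <= 2 * n - 2:
        return "forest"                 # realizable by a forest, hence outerplanar
    w1 = d.count(1)                     # number of 1's in d
    if s > 4 * n - 6 - 2 * w1:          # outerplanar graphs have at most 2n - 3 edges,
        return "fails"                  # and each degree-1 vertex tightens the bound
    return "candidate"

print(outerplanaric_screen([1, 1, 2, 2]))         # forest (a path realizes it)
print(outerplanaric_screen([2, 2, 2, 2, 2, 2]))   # candidate (a 6-cycle realizes it)
print(outerplanaric_screen([5, 5, 5, 5, 5, 5]))   # fails: sum 30 exceeds 4*6 - 6 = 18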

Apr 2025 • ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and …, 2025

Conditional Deep Canonical Time Warping

Ran Eisenberg, Afek Steinberg, Ofir Lindenbaum

Temporal alignment of sequences is a fundamental challenge in many applications, such as computer vision and bioinformatics, where local time shifting needs to be accounted for. Misalignment can lead to poor model generalization, especially in high-dimensional sequences. Existing methods often struggle with optimization when dealing with high-dimensional sparse data, falling into poor alignments. Feature selection is frequently used to enhance model performance for sparse data. However, a fixed set of selected features would not generally work for dynamically changing sequences and would need to be modified based on the state of the sequence. Therefore, modifying the selected feature based on contextual input would result in better alignment. Our suggested method, Conditional Deep Canonical Temporal Time Warping (CDCTW), is designed for temporal alignment in sparse temporal data to address …


Apr 2025 • ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and …, 2025

Near Optimal Privacy Preserving Fair Multi-Agent Bandits

Amir Leshem

In this paper, we study the problem of fair multi-agent multi-arm bandit learning when agents do not communicate with each other, except for collision information provided to agents accessing the same arm simultaneously. We provide an algorithm with regret O(N^3 f(log T) log T) (assuming bounded rewards, with unknown bound), where f(t) is any function diverging to infinity with t. In contrast to optimal algorithms which share the rewards with a selected leader, our algorithm does not require a centralized collection of the arm rewards, allowing each agent to keep its rewards private. We also significantly improve on previous privacy-preserving algorithms, which attain the same upper bound on the regret of order O(f(log T) log T) but with an exponential dependence on the number of agents. Simulation results present the dependence of the regret on log T.

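The stated regret bound is only poly-logarithmic in the horizon T for any slowly diverging f. The snippet below simply evaluates the bound's envelope for an arbitrary choice f(t) = log(1 + t) and an unspecified constant, to illustrate its growth; it is not an implementation of the algorithm.

import math

def regret_envelope(N, T, f=lambda t: math.log(1 + t), c=1.0):
    """Evaluate c * N^3 * f(log T) * log T; c stands for the constant hidden
    by the O-notation and is hypothetical here."""
    return c * N**3 * f(math.log(T)) * math.log(T)

for T in (10**3, 10**6, 10**9):
    print(f"N=10, T={T:.0e}: bound envelope ~ {regret_envelope(10, T):,.0f}")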

Apr 2025

Tempo oscillations in rhythmic human networks

Elad Shniderman, Maya Wertsman, Hadar Granot, Hamootal Duadi, Moti Fridman

Understanding oscillatory behavior in human networks is essential for exploring synchronization, coordination, and collective dynamics. In this study, we investigate tempo oscillations in complex human networks using a system of coupled violin players with precisely controlled network parameters. Each player interacts via delayed auditory feedback, allowing us to explore the effects of connectivity, delay, and tempo on network oscillations. We identify two distinct types of oscillations: fast (2–3 seconds) and slow (5–15 seconds), and demonstrate that their periods are independent of network size and delay but are strongly correlated with the network's average tempo. Additionally, we show that increasing the number of coupled neighbors enhances oscillation damping, indicating the role of connectivity in stabilizing network dynamics. By varying the delay rate, we discover a critical decay rate where oscillation …


Apr 2025 • arXiv preprint arXiv:2504.04558

Roadmap for Photonics with 2D Materials

F de Abajo, DN Basov, Frank HL Koppens, Lorenzo Orsini, Matteo Ceccanti, Sebastián Castilla, Lorenzo Cavicchi, Marco Polini, PAD Gonçalves, AT Costa, NMR Peres, N Asger Mortensen, Sathwik Bharadwaj, Zubin Jacob, PJ Schuck, AN Pasupathy, Milan Delor, MK Liu, Aitor Mugarza, Pablo Merino, Marc G Cuxart, Emigdio Chávez-Angel, Martin Svec, Luiz HG Tizei, Florian Dirnberger, Hui Deng, Christian Schneider, Vinod Menon, Thorsten Deilmann, Alexey Chernikov, Kristian S Thygesen, Yohannes Abate, Mauricio Terrones, Vinod K Sangwan, Mark C Hersam, Leo Yu, Xueqi Chen, Tony F Heinz, Puneet Murthy, Martin Kroner, Tomasz Smolenski, Deepankur Thureja, Thibault Chervy, Armando Genco, Chiara Trovatello, Giulio Cerullo, Stefano Dal Conte, Daniel Timmer, Antonietta De Sio, Christoph Lienau, Nianze Shang, Hao Hong, Kaihui Liu, Zhipei Sun, Lee A Rozema, Philip Walther, Andrea Alù, Michele Cotrufo, Raquel Queiroz, X-Y Zhu, Joel D Cox, Eduardo JC Dias, Álvaro Rodríguez Echarri, Fadil Iyikanat, Andrea Marini, Paul Herrmann, Nele Tornow, Sebastian Klimmer, Jan Wilhelm, Giancarlo Soavi, Zeyuan Sun, Shiwei Wu, Ying Xiong, Oles Matsyshyn, Roshan Krishna Kumar, Justin CW Song, Tomer Bucher, Alexey Gorlach, Shai Tsesses, Ido Kaminer, Julian Schwab, Florian Mangold, Harald Giessen, M Sánchez Sánchez, DK Efetov, T Low, G Gómez-Santos, T Stauber, Gonzalo Álvarez-Pérez, Jiahua Duan, Luis Martín-Moreno, Alexander Paarmann, Joshua D Caldwell, Alexey Y Nikitin, Pablo Alonso-González, Niclas S Mueller, Valentyn Volkov, Deep Jariwala, Timur Shegai, Jorik van de Groep, Alexandra Boltasseva, Igor V Bondarev, Vladimir M Shalaev, Jeffrey Simon, Colton Fruhling, Guangzhen Shen, Dino Novko, Shijing Tan, Bing Wang, Hrvoje Petek, Vahagn Mkhitaryan, Renwen Yu, Alejandro Manjavacas, J Enrique Ortega, Xu Cheng, Ruijuan Tian, Dong Mao, Dries Van Thourhout, Xuetao Gan, Qing Dai, Aaron Sternbach, You Zhou, Mohammad Hafezi, Dmitrii Litvinov, Magdalena Grzeszczyk, Kostya S Novoselov, Maciej Koperski, Sotirios Papadopoulos, Lukas Novotny, Leonardo Viti, Miriam Serena Vitiello, Nathan D Cottam, Benjamin T Dewes, Oleg Makarovsky, Amalia Patanè, Yihao Song, Mingyang Cai, Jiazhen Chen, Doron Naveh, Houk Jang, Suji Park, Fengnian Xia, Philipp K Jenke, Josip Bajo, Benjamin Braun, Kenneth S Burch, Liuyan Zhao, Xiaodong Xu

Triggered by the development of exfoliation and the identification of a wide range of extraordinary physical properties in self-standing films consisting of one or few atomic layers, two-dimensional (2D) materials such as graphene, transition metal dichalcogenides (TMDs), and other van der Waals (vdW) crystals currently constitute a wide research field protruding in multiple directions in combination with layer stacking and twisting, nanofabrication, surface-science methods, and integration into nanostructured environments. Photonics encompasses a multidisciplinary collection of those directions, where 2D materials contribute with polaritons of unique characteristics such as strong spatial confinement, large optical-field enhancement, long lifetimes, high sensitivity to external stimuli (e.g., electric and magnetic fields, heating, and strain), a broad spectral range from the far infrared to the ultraviolet, and hybridization with spin and momentum textures of electronic band structures. The explosion of photonics with 2D materials as a vibrant research area is producing breakthroughs, including the discovery and design of new materials and metasurfaces with unprecedented properties as well as applications in integrated photonics, light emission, optical sensing, and exciting prospects for applications in quantum information, and nanoscale thermal transport. This Roadmap summarizes the state of the art in the field, identifies challenges and opportunities, and discusses future goals and how to meet them through a wide collection of topical sections prepared by leading practitioners.



Apr 2025 • IEEE Transactions on Emerging Topics in Computing

CAM4: In-Memory Viral Pathogen Genome Classification using Similarity Search Dynamic Content-Addressable Memory

Zuher Jahshan, Itay Merlin, Esteban Garzon, Leonid Yavits

We present CAM4, a novel embedded dynamic storage-based similarity search content addressable memory. CAM4 is designed for in-memory computational genomics applications, particularly the identification and classification of pathogen DNA. CAM4 employs a novel gain cell design and one-hot encoding of DNA bases to address retention time variations and mitigate potential data loss from pulldown leakage and soft errors in embedded DRAM. CAM4 features performance overhead-free refresh and data upload, allowing simultaneous search and refresh without performance degradation. CAM4 offers approximate search versatility in scenarios with a variety of industrial sequencers with different error profiles. When classifying DNA reads with a 10% error rate, it achieves, on average, a 25% higher score compared to MetaCache-GPU and Kraken2 DNA classification tools. Simulated at 1 GHz, CAM4 …

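The one-hot base encoding and approximate (error-tolerant) similarity search can be mimicked in software: score a noisy read against reference signatures by counting the positions where the bases agree. The sketch below is a functional stand-in with hypothetical sequences, not a model of the CAM hardware, its gain cells, or its refresh scheme.

import numpy as np

rng = np.random.default_rng(0)
BASES = "ACGT"
ONE_HOT = {b: np.eye(4, dtype=np.uint8)[i] for i, b in enumerate(BASES)}

def encode(seq):
    """One-hot encode a DNA string into a (length, 4) binary matrix."""
    return np.stack([ONE_HOT[b] for b in seq])

def similarity(read, reference):
    """Approximate-match score: the number of positions where the bases agree
    (an exact match in every position is not required)."""
    return int(np.sum(encode(read) * encode(reference)))

# Two reference signatures (hypothetical) and a noisy read derived from the first
# one with a 10% per-base error rate, echoing the error rate quoted above.
L = 100
refs = ["".join(rng.choice(list(BASES), L)) for _ in range(2)]
read = list(refs[0])
for i in rng.choice(L, size=L // 10, replace=False):
    read[i] = str(rng.choice([b for b in BASES if b != read[i]]))
read = "".join(read)

scores = [similarity(read, ref) for ref in refs]
print("scores per reference:", scores, "-> classified as reference", int(np.argmax(scores)))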

Apr 2025 • arXiv preprint arXiv:2404.12381

Wavelength-accurate and wafer-scale process for nonlinear frequency mixers in thin-film lithium niobate

CJ Xin, Shengyuan Lu, Jiayu Yang, Amirhassan Shams-Ansari, Boris Desiatov, Letícia S Magalhães, Soumya S Ghosh, Erin McGee, Dylan Renaud, Nicholas Achuthan, Arseniy Zvyagintsev, David Barton III, Neil Sinclair, Marko Lončar


Apr 2025 • arXiv preprint arXiv:2504.20625

DiffusionRIR: Room Impulse Response Interpolation using Diffusion Models

Sagi Della Torre, Mirco Pezzoli, Fabio Antonacci, Sharon Gannot

Room Impulse Responses (RIRs) characterize acoustic environments and are crucial in multiple audio signal processing tasks. High-quality RIR estimates drive applications such as virtual microphones, sound source localization, augmented reality, and data augmentation. However, obtaining RIR measurements with high spatial resolution is resource-intensive, making it impractical for large spaces or when dense sampling is required. This research addresses the challenge of estimating RIRs at unmeasured locations within a room using Denoising Diffusion Probabilistic Models (DDPM). Our method leverages the analogy between RIR matrices and image inpainting, transforming RIR data into a format suitable for diffusion-based reconstruction. Using simulated RIR data based on the image method, we demonstrate our approach's effectiveness on microphone arrays of different curvatures, from linear to semi-circular. Our method successfully reconstructs missing RIRs, even in large gaps between microphones. Under these conditions, it achieves accurate reconstruction, significantly outperforming baseline Spline Cubic Interpolation in terms of Normalized Mean Square Error and Cosine Distance between actual and interpolated RIRs. This research highlights the potential of using generative models for effective RIR interpolation, paving the way for generating additional data from limited real-world measurements.

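The evaluation protocol described above (hide the RIRs of some microphones, interpolate them, and compare with the cubic-spline baseline using Normalized Mean Square Error and Cosine Distance) is easy to reproduce in miniature. The sketch below uses synthetic decaying responses as stand-in RIRs and implements only the spline baseline and the two metrics, not the diffusion-based inpainting.

import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic "RIR matrix": one short decaying response per microphone position on a
# line array, with a position-dependent onset (a crude stand-in for measured RIRs).
n_mics, n_taps, fs = 32, 512, 16000
positions = np.linspace(0.0, 1.0, n_mics)
t = np.arange(n_taps) / fs

def synth_rir(p):
    delay = 0.001 * (1 + p)                     # onset shifts with microphone position
    return (t >= delay) * np.exp(-300 * (t - delay)) * np.sin(2 * np.pi * 1000 * (t - delay))

rirs = np.stack([synth_rir(p) for p in positions])

# Hide a block of microphones and interpolate each time sample across positions.
missing = np.arange(12, 20)
known = np.setdiff1d(np.arange(n_mics), missing)
estimate = CubicSpline(positions[known], rirs[known], axis=0)(positions[missing])

def nmse(est, ref):
    return float(np.sum((est - ref) ** 2) / np.sum(ref ** 2))

def cosine_distance(est, ref):
    num = np.sum(est * ref, axis=1)
    den = np.linalg.norm(est, axis=1) * np.linalg.norm(ref, axis=1)
    return float(np.mean(1.0 - num / den))

print("spline-baseline NMSE:           ", nmse(estimate, rirs[missing]))
print("spline-baseline cosine distance:", cosine_distance(estimate, rirs[missing]))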

Apr 2025 • IEEE

Zero-Shot Image Restoration via Few-Step Guidance of Consistency Models

Tomer Garber, Tom Tirer

Recently, it has become popular to tackle image restoration tasks with a single pretrained (unconditional) denoising diffusion model (DDM) and data-fidelity guidance, instead of training a dedicated deep neural network per task. However, such "zero-shot" restoration schemes require many Neural Function Evaluations (NFEs). This follows from the need for iterative schemes with many NFEs already in the original generative functionality of the DDMs. Very recently, faster variants of DDMs have been explored for image generation. A prominent alternative is Consistency Models (CMs), which can generate samples via a couple of NFEs. However, existing works that use guided CMs for restoration still require tens of NFEs or fine-tuning of the model per task. Clearly, the latter is not a zero-shot strategy and, as such, leads to a performance drop if the assumptions during the fine-tuning (e.g., the noise level) are not …


Apr 2025 • arXiv preprint arXiv:2504.02982

Inferring scattering-type Scanning Near-Field Optical Microscopy Data from Atomic Force Microscopy Images

Stefan G Stanciu, Stefan R Anton, Denis E Tranca, George A Stanciu, Bogdan Ionescu, Zeev Zalevsky, Binyamin Kusnetz, Jeremy Belhassen, Avi Karsenty, Gabriella Cincotti

Optical nanoscopy is crucial in life and materials sciences, revealing subtle cellular processes and nanomaterial properties. Scattering-type Scanning Near-field Optical Microscopy (s-SNOM) provides nanoscale resolution, relying on the interactions taking place between a laser beam, a sharp tip and the sample. The Atomic Force Microscope (AFM) is a fundamental part of an s-SNOM system, providing the necessary probe-sample feedback mechanisms for data acquisition. In this Letter, we demonstrate that s-SNOM data can be partially inferred from AFM images. We first show that a generative artificial intelligence (AI) model (pix2pix) can generate synthetic s-SNOM data from experimental AFM images. Second, we demonstrate that virtual s-SNOM data can be extrapolated from knowledge of the tip position and, consequently, from AFM signals. To this end, we introduce an analytical model that explains the mechanisms underlying AFM-to-s-SNOM image translation. These insights have the potential to be integrated into future physics-informed explainable AI models. The two proposed approaches generate pseudo s-SNOM data without direct optical measurements, significantly expanding access to optical nanoscopy through widely available AFM systems. This advancement holds great promise for reducing both time and costs associated with nanoscale imaging.

