

Reprogram patch embeddings to align the modalities of time series and natural language
Learning a form of noise allows the pre-trained source model to produce the desired target outputs
Feasible for bridging data modalities that are identical or similar
Previously, learnable transformations between the source and target data allowed direct editing of input samples
However, time series can neither be directly edited nor described losslessly in natural language
To close this gap, we propose reprogramming using pre-trained word embeddings
No prior knowledge of which source tokens are directly relevant
→ simply leveraging the full word-embedding matrix E results in a large and potentially dense reprogramming space
A simple solution is to maintain a small collection of text prototypes, obtained by linearly probing E
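A minimal sketch of this linear probing, assuming a frozen word-embedding matrix E of shape (V, D) reduced to V' prototypes (class and parameter names are illustrative, not from the source):

```python
import torch
import torch.nn as nn

class TextPrototypes(nn.Module):
    """Linearly probe the frozen word embeddings E (V x D)
    down to a small set of V' text prototypes E' (V' x D)."""

    def __init__(self, vocab_size: int, num_prototypes: int):
        super().__init__()
        # Learnable probe over the vocabulary dimension
        self.probe = nn.Linear(vocab_size, num_prototypes, bias=False)

    def forward(self, E: torch.Tensor) -> torch.Tensor:
        # E: (V, D) -> E': (V', D). Each prototype is a learned linear
        # combination of pre-trained word embeddings, so the prototypes
        # remain in the space where the language model was pre-trained.
        return self.probe(E.transpose(0, 1)).transpose(0, 1)
```

Because each prototype is a convex-like mixture of existing embeddings, the reprogramming space shrinks from V tokens to V' prototypes while staying anchored to the pre-trained vocabulary.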

Text prototypes learn connecting language cues (e.g. “short up then down steadily”) which are combined to represent the local patch information
without leaving the space in which the language model was pre-trained
This approach allows for the adaptive selection of relevant source information
Employ a multi-head cross-attention layer, with patch embeddings as queries and text prototypes as keys and values
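The cross-attention step can be sketched as follows, assuming patch embeddings of shape (B, N, D) attend over the V' prototypes (a hypothetical implementation using PyTorch's built-in attention, not the source's exact code):

```python
import torch
import torch.nn as nn

class PatchReprogramming(nn.Module):
    """Align time-series patch embeddings with the language-model space
    by cross-attending from patches (queries) to text prototypes
    (keys and values)."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, patches: torch.Tensor,
                prototypes: torch.Tensor) -> torch.Tensor:
        # patches: (B, N, D) queries; prototypes: (V', D) keys/values,
        # broadcast across the batch. Attention adaptively selects the
        # prototypes most relevant to each local patch.
        kv = prototypes.unsqueeze(0).expand(patches.size(0), -1, -1)
        out, _ = self.attn(patches, kv, kv)
        return out
```

The attention weights give the adaptive selection of relevant source information: each patch pulls in only the language cues that describe it.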


