Deep Research was recently made available to OpenAI subscribers. Plus subscribers get 10 uses per month, and Pro subscribers get 100.

Use Case 1
Once, I just typed "I want to go home" and ran Deep Research on it. (The person watching next to me laughed and called me crazy, haha.) I assumed I was throwing away one of my uses, but it asked me for more detail, so I gave it my current location and my home address. It then searched thoroughly and told me I could get home by bus or by train, and that since trains run infrequently I should take the bus, down to exactly where to board. That said, Deep Research is very expensive, so using it this way is highly inefficient.
(I'll keep the details private, since they contain my personal information.)
Use Case 2
I've recently been studying a technique called flow matching out of personal interest (as a hobby). I posted the question below and got quite a good answer (o3-mini-high), so I'm sharing it here on the blog.
A few personal observations
I was quite pleased that it doesn't launch Deep Research right away, but first asks the user for more specific information.
It also doesn't take as long as I expected (about 10 minutes), and it shows very clearly which document it consulted for each part of the answer. The key question is whether it actually reads the documents or just makes up answers from the titles; from what I can tell, it does not seem to be fabricating from titles alone. (After I study this further, I'll write a follow-up post if the material shared here turns out to contain critical errors.)
First, I enabled the Deep Research feature.
Look at more papers on applying Riemannian geometry with flow matching/diffusion models and check their applications
Could you clarify what specific applications or domains you're most interested in? For example, are you looking for applications in machine learning, physics, finance, or another field? Also, would you like a focus on theoretical developments, practical implementations, or both?
Ultimately, we want to use a group equivariant network with flow matching. Existing methods exist, but they incorporate few geometric assumptions. For example, there is a paper called Equivariant Flow Matching with Hybrid Probability Transport. I'm wondering whether Riemannian geometry can be used there.
Look at more papers on applying Riemannian geometry with flow matching/diffusion models and check their applications
I'll review recent papers that apply Riemannian geometry to flow matching and diffusion models, with a focus on their applications and theoretical foundations. Specifically, I'll check whether Riemannian geometry can enhance approaches like Equivariant Flow Matching with Hybrid Probability Transport and examine its role in group equivariant networks. I'll update you with my findings soon.
Deep Research then ran and produced the answer below. Only once it actually runs like this does it consume one of the allotted uses (10 per month).
Recent works have extended generative modeling techniques – like diffusion models and continuous normalizing flows – to Riemannian manifolds (curved or non-Euclidean spaces). These methods explicitly use Riemannian geometry (metrics, geodesics, and volume forms) to respect the intrinsic structure of manifold-valued data:
Riemannian Diffusion Models (RDM) generalize continuous-time diffusion processes to arbitrary Riemannian manifolds. Huang et al. (2022) derive a variational likelihood framework on manifolds and introduce techniques for computing the Riemannian divergence needed to train diffusion models on these spaces. Notably, they prove that maximizing the manifold ELBO is equivalent to performing Riemannian score matching (i.e. matching the score of the data distribution on the manifold). Empirically, RDMs demonstrated the expressive power of diffusion models on various curved domains – including spheres, tori, hyperbolic spaces, and even orthogonal group manifolds – achieving new state-of-the-art likelihoods on all such benchmarks. This was a clear improvement over earlier approaches that treated manifold data with Euclidean assumptions.
Riemannian Score-Based Generative Modeling (RSGM), introduced by De Bortoli et al. (2022), similarly extends score-based diffusion models to data living on Riemannian manifolds ([2202.02763] Riemannian Score-Based Generative Modelling). Traditional score-based models assume data lie in flat Euclidean space, which is unsuitable for many domains (e.g. directional data on spheres, rotations, or poses). RSGM formulates the forward noising (diffusion) and reverse denoising processes intrinsically on the manifold, ensuring that model training accounts for curvature and geodesic distances ([2202.02763] Riemannian Score-Based Generative Modelling). They demonstrated this approach on tasks like climate and geology data defined on the sphere, showing substantially better results than naive Euclidean methods ([2202.02763] Riemannian Score-Based Generative Modelling).
Flow Matching (FM) is a simulation-free approach to train continuous normalizing flows, recently proposed by Lipman and colleagues, that turns the generative modeling problem into regressing a vector field along a prescribed probability flow ([2210.02747] Flow Matching for Generative Modeling). Instead of simulating a stochastic diffusion, FM chooses a probability path from the base noise distribution to the data distribution (for example, a family of Gaussian distributions bridging noise and data). The neural ODE (continuous flow) is then trained to match the time-dependent vector field that carries out this transport ([2210.02747] Flow Matching for Generative Modeling). This framework encompasses diffusion models as a special case – in fact, using a standard diffusion (Gaussian noise) path in flow matching yields a more robust and stable training procedure for diffusion models ([2210.02747] Flow Matching for Generative Modeling). Moreover, FM allows using alternative paths such as optimal transport (OT) geodesics (also known as displacement interpolation): by choosing an OT-based probability flow, one can often train faster and sample more efficiently, with better generative performance, compared to the classical diffusion path ([2210.02747] Flow Matching for Generative Modeling).
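To make the "regress a vector field along a prescribed path" idea concrete, here is a minimal numpy sketch. It is my own toy 1D construction, not the paper's setup: for a standard Gaussian base and a Gaussian "data" distribution, the marginal vector field of the linear (OT / displacement-interpolation) path has a closed form, so we can integrate the probability-flow ODE directly and check the transport.

```python
import numpy as np

# Toy sketch of flow matching (assumed 1D Gaussian setup): transport the base
# N(0, 1) to a "data" Gaussian N(m, s^2) along the linear OT path
#   x_t = (1 - t) x0 + t x1.
m, s = 3.0, 0.5  # toy data mean / std (my choice, for illustration)

def marginal_field(x, t):
    # u(x, t) = E[x1 - x0 | x_t = x] for independent x0 ~ N(0,1), x1 ~ N(m, s^2);
    # this is the field that FM training would regress onto.
    var_t = (1 - t) ** 2 + (t * s) ** 2   # Var(x_t)
    cov = t * s ** 2 - (1 - t)            # Cov(x1 - x0, x_t)
    return m + cov / var_t * (x - t * m)

def transport(x0, steps=2000):
    # Euler-integrate dx/dt = u(x, t) from t = 0 to t = 1.
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        x = x + dt * marginal_field(x, k * dt)
    return x

# The induced flow map is x0 -> m + s * x0, the OT map between the Gaussians.
print(transport(0.0))  # ~3.0
print(transport(1.0))  # ~3.5
```

The point of the sketch is that training never simulates a stochastic process: the regression target is available in closed form along the chosen path, and sampling is a plain ODE solve.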
Riemannian Flow Matching (RFM) combines the above ideas by enabling flow matching on manifold domains. Chen and Lipman (2024) present RFM as a unified framework for training continuous flows on general geometries without relying on heavy simulation or restrictive assumptions (Flow Matching on General Geometries | OpenReview). In RFM, a Riemannian metric (or a suitably constructed premetric) defines the geometry of the probability flow (Flow Matching on General Geometries | OpenReview). This premetric generalizes the Euclidean case and is used to obtain closed-form target vector fields on “simple” manifolds (those with known geodesics) (Flow Matching on General Geometries | OpenReview). Thanks to this construction, RFM enjoys several advantages: it remains simulation-free on manifolds where geodesics are known, requires no computing of complicated log-density corrections (no divergence term) on those geometries, and still generalizes to more complex manifolds via spectral decompositions computed on the fly (Flow Matching on General Geometries | OpenReview). The authors demonstrate state-of-the-art results on a variety of non-Euclidean datasets using RFM, including high-dimensional tori (e.g. protein backbone angle distributions) and even triangular mesh surfaces with non-trivial curvature, which are challenging domains (Flow Matching on General Geometries | OpenReview). This showcases how Riemannian geometry (through metrics, geodesic flows, and spectral methods) is explicitly leveraged to model data that resides on curved spaces.
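On a "simple" manifold with known geodesics, the closed-form training pair is just a point on the geodesic and its velocity. Below is my own sketch for the sphere S^2 (an assumed special case, not RFM's general premetric machinery), using the standard exp/log maps:

```python
import numpy as np

# Sketch of a Riemannian flow matching training pair on the unit sphere S^2:
# conditional path x_t = exp_{x0}(t * log_{x0}(x1)) with its geodesic velocity
# as the regression target. Assumed toy setup for illustration only.

def sphere_log(x, y):
    # Log map: tangent vector at x pointing along the geodesic toward y.
    c = np.clip(x @ y, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    return theta * (y - c * x) / np.linalg.norm(y - c * x)

def geodesic_pair(x0, x1, t):
    # Point x_t on the geodesic and the target velocity u_t at x_t.
    v = sphere_log(x0, x1)
    theta = np.linalg.norm(v)
    u = v / theta                         # unit tangent at x0
    xt = np.cos(t * theta) * x0 + np.sin(t * theta) * u
    ut = theta * (-np.sin(t * theta) * x0 + np.cos(t * theta) * u)
    return xt, ut

x0 = np.array([1.0, 0.0, 0.0])
x1 = np.array([0.0, 1.0, 0.0])
xt, ut = geodesic_pair(x0, x1, 0.5)
print(np.linalg.norm(xt))  # ~1.0: x_t stays on the sphere
print(xt @ ut)             # ~0.0: the target velocity is tangent at x_t
```

Because both quantities are closed-form, no simulation or divergence computation is needed: a network v(x, t) constrained to the tangent space is simply regressed onto u_t.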
Riemannian generative models contrast with earlier approaches that do not incorporate strong geometric assumptions (i.e. those assuming data in a flat Euclidean space). Without a manifold-aware formulation, one typically has to ignore or approximate the true geometry of the data.
Empirical comparisons in the literature back up these theoretical advantages. Riemannian diffusion models have achieved likelihoods that outperform Euclidean diffusion baselines on manifold-structured data. Likewise, RFM reports better performance on complex geometric datasets (like molecular structures and 3D shape data) than prior Euclidean-based generative models, which often could not even be directly applied without preprocessing data into a vector form (Flow Matching on General Geometries | OpenReview). In summary, methods that embrace the true geometry tend to outperform those that ignore it, especially as the complexity or dimensionality of the manifold grows.
The ability to work directly on manifolds opens up a range of applications, particularly in scenarios with underlying group symmetries or geometric constraints:
Molecular and 3D Object Generation: A key example is generative modeling of 3D molecular structures, where the model must be E(3)-equivariant (invariant to translations and rotations of the molecule). Traditional approaches enforce equivariance via network architecture or data augmentation. In contrast, a Riemannian approach can treat the orientation as part of the data's manifold structure (the orientation of an object lives on the rotation group SO(3), which is a Riemannian manifold). By conducting diffusion or flow on SO(3) (or on the configuration space modulo rotations), one can naturally incorporate rotational invariance. Huang et al. indeed showed that RDMs can model distributions on orthogonal groups, hinting at applications in which the generator explicitly outputs a distribution over orientations. For a molecule, one could generate internal coordinates (bond lengths, angles, dihedrals) which lie on product manifolds (some Euclidean, some circular) and use a Riemannian diffusion to ensure those are modeled correctly while the overall structure is invariant to global rotation. Recent work on Equivariant Flow Matching for molecules already takes a step in this direction by building rotation equivariance into the model architecture and employing an optimal transport-driven path for stable generation ([2312.07168] Equivariant Flow Matching with Hybrid Probability Transport). A Riemannian formulation could further enhance this by, for example, defining the OT in terms of geodesic distances on the manifold of molecular configurations, or by using manifold-aware noise on orientation variables for more coherent sampling.
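One concrete ingredient of "configurations modulo rigid motions" can be sketched directly: removing the global rotation between two point clouds before building a transport path. The snippet below is my own illustration (not taken from any of the cited papers) using the classical Kabsch algorithm:

```python
import numpy as np

# Sketch (assumed setup): make a straight-line transport between point clouds
# insensitive to a global rotation by first aligning the target cloud to the
# source with the Kabsch algorithm, i.e. working modulo SO(3).

def kabsch_rotation(a, b):
    # Rotation R minimizing ||a @ R.T - b||_F for centered (n, 3) clouds.
    u, _, vt = np.linalg.svd(b.T @ a)
    d = np.sign(np.linalg.det(u @ vt))
    s = np.diag([1.0, 1.0, d])   # guard against reflections (keep det = +1)
    return u @ s @ vt

rng = np.random.default_rng(0)
x0 = rng.standard_normal((5, 3))
x0 -= x0.mean(axis=0)            # center: handles translation invariance

# Make x1 an exactly rotated copy of x0; after alignment they should coincide.
angle = 0.7
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0, 0.0, 1.0]])
x1 = x0 @ R.T

R_hat = kabsch_rotation(x1, x0)
x1_aligned = x1 @ R_hat.T
print(np.abs(x1_aligned - x0).max())  # ~0: global rotation removed
```

After such an alignment, the linear interpolation between x0 and x1_aligned no longer wastes transport cost on a spurious global rotation, which is the geometric intuition behind equivariant OT couplings.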
Geometric Deep Learning & Equivariant Networks: Group-equivariant networks often deal with data on homogeneous spaces or group manifolds (e.g. spherical images, point clouds on a sphere, rigid motions in SE(3)). Riemannian diffusion and flow models can serve as generative priors in these settings. For instance, an equivariant network for spherical data could be paired with a Riemannian diffusion model on the sphere to generate new samples of climate patterns or global seismic data, inherently respecting the spherical geometry ([2202.02763] Riemannian Score-Based Generative Modelling). Similarly, one can generate rotations or transformations by sampling from a diffusion on SO(3), which is directly applicable to problems like generating random orientations or poses in graphics and robotics. The use of manifold-aware generative models ensures that the outputs of an equivariant network lie on the valid manifold (no “chart jumping” or distortion), complementing the network’s symmetry-preserving properties.
Robotics and Control: In robotics, system states often have angular components (e.g. a robot arm’s joint angles lie on a torus). Diffusion policies have been used for visuomotor control, but they suffer from slow sampling due to iterative denoising. Riemannian Flow Matching has been applied to learn robot policies, precisely because it can incorporate the geometry of the state space (like the configuration manifold of the robot) and yield faster inference. Ding et al. (2024) introduce a Riemannian Flow Matching Policy that inherits the fast inference of flow matching and builds in the robot’s manifold constraints (e.g. orientation angles wrapping around) ([2412.10855] Fast and Robust Visuomotor Riemannian Flow Matching Policy). Their approach showed improved efficiency over standard diffusion-based policies and naturally handled both Euclidean and non-Euclidean state components ([2412.10855] Fast and Robust Visuomotor Riemannian Flow Matching Policy). This indicates that Riemannian generative methods are not limited to data generation; they are also beneficial in decision-making domains where states or actions have geometric structure.
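The "angles wrapping around" constraint is easy to show in isolation. Here is my own minimal sketch (an assumed toy, not the paper's policy) of a flow matching target on the circle S^1, where the geodesic follows the shortest wrapped angular difference rather than the naive Euclidean one:

```python
import numpy as np

# Sketch (assumed): flow matching target for one angular coordinate on S^1,
# e.g. a joint angle. The geodesic uses the shortest *wrapped* difference.

def wrap(angle):
    # Wrap an angle (radians) into (-pi, pi].
    return (angle + np.pi) % (2 * np.pi) - np.pi

def circle_target(a0, a1, t):
    # Point on the geodesic and its constant target velocity on S^1.
    u = wrap(a1 - a0)            # shortest signed angular difference
    at = wrap(a0 + t * u)
    return at, u

# From 170 deg to -150 deg the geodesic crosses pi (40 deg of arc), while the
# naive Euclidean path would sweep 320 deg the wrong way around.
a0, a1 = np.deg2rad(170.0), np.deg2rad(-150.0)
at, u = circle_target(a0, a1, 0.5)
print(np.rad2deg(u))   # ~40.0
print(np.rad2deg(at))  # ~-170.0 (midpoint, past the pi boundary)
```

A Euclidean model would treat 170° and -150° as far apart; the wrapped target makes the model's transport respect the circle's topology.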
Shape and Graphics Applications: Another domain is 3D shape generation and graphics. Surfaces and meshes can be viewed as manifolds, and modeling distributions over shapes (e.g. generating new shapes or deformations) can benefit from Riemannian methods. The RFM work demonstrated training on a triangular mesh with complex curvature (Flow Matching on General Geometries | OpenReview), suggesting the possibility of generative models that learn distributions of deformable shapes or patterns on surfaces. Traditional approaches might struggle with such data (often requiring cut-up parameterizations or linear approximations), whereas a flow matching on the mesh manifold handles it in a coordinate-invariant way.
In all these applications, Riemannian generative models provide a principled way to enforce symmetries or constraints. By operating on the true geometric domain (be it a group like SO(3) or a curved space), they ensure the output respects the invariants by design. This can simplify the integration with group-equivariant neural networks, as the generative model itself takes care of the symmetry, reducing the burden on the network architecture.
The recently proposed Equivariant Flow Matching with Hybrid Probability Transport (EFM-HPT) for 3D molecules is a prime example of combining equivariance with advanced transport paths ([2312.07168] Equivariant Flow Matching with Hybrid Probability Transport). EFM-HPT uses flow matching to define a probability path where atomic coordinates are guided by an equivariant optimal transport, while atomic types (discrete features) are aligned alongside ([2312.07168] Equivariant Flow Matching with Hybrid Probability Transport). This yields more stable probability dynamics and significantly faster sampling (nearly 4.8× speedup) compared to standard diffusion models ([2312.07168] Equivariant Flow Matching with Hybrid Probability Transport). Although EFM-HPT did not explicitly formulate the problem in Riemannian terms, there is a natural synergy: the OT regularization on coordinates can be interpreted geometrically.
Riemannian methods could enhance such approaches in a couple of ways:
Geodesic Probability Paths: EFM-HPT already utilizes an OT path in Euclidean space. If the molecular configuration space is viewed with a Riemannian metric (for instance, a metric that accounts for rotational degrees of freedom or bond length constraints), one could define a geodesic flow in that metric as the probability path. This would mean the transport of the coordinate distribution is the “straightest” or most natural path on the manifold of structures. In theory, this could further stabilize training by avoiding unnecessary distortion of intermediate distributions. It aligns with the idea from Flow Matching that OT-based paths are more efficient than diffusion ([2210.02747] Flow Matching for Generative Modeling) – here the OT would be cognizant of the molecule’s geometry, potentially improving efficiency even more.
Manifold Score Matching: Parts of the EFM-HPT model (e.g. orienting a molecule in space, or distributing points on a spherical shell for molecular conformations) could be treated with Riemannian score matching. For example, orientational components could be generated by a Riemannian diffusion on SO(3) rather than a Euclidean approximation. This would guarantee rotational equivariance analytically (since the diffusion on SO(3) has a uniform invariant measure) instead of only through equivariant networks. Integrating such manifold-specific subroutines could improve the overall model’s fidelity to symmetry and might simplify the network design (the network could focus on other aspects while geometry is handled by the generative process).
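That uniform invariant measure on SO(3) (the Haar measure) can at least be sampled directly. A common trick, shown below as my own sketch rather than anything from the cited papers, is QR decomposition of a Gaussian matrix with a sign correction:

```python
import numpy as np

# Sketch (assumed helper): draw rotations from the uniform (Haar) measure on
# SO(3), the invariant measure a diffusion on the rotation group relaxes to.
# QR of a Gaussian matrix plus a sign fix gives Haar on O(3); a determinant
# flip then restricts to proper rotations.

def random_rotation(rng):
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))     # fix column signs -> Haar on O(3)
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0             # flip one column into the SO(3) component
    return q

rng = np.random.default_rng(0)
R = random_rotation(rng)
print(np.allclose(R @ R.T, np.eye(3)))  # True: orthogonal
print(np.linalg.det(R))                 # ~1.0: a proper rotation
```

The sign correction matters: plain QR is biased, while scaling each column by the sign of the corresponding diagonal of R makes the distribution exactly Haar.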
Hybrid Modeling: A hybrid approach could be envisioned where the continuous features (atom positions in 3D space) are handled by Riemannian flow matching on the appropriate manifold (e.g. modulo rigid motions, or on internuclear distance/angle manifold), while the categorical features (atom types) are handled by a complementary mechanism. By aligning these with a shared time parameter (as done in HPT), one ensures coordination between the two. The Riemannian flow matching part would bring its fast ODE-based sampling and geometry-aware accuracy to the continuous side, potentially boosting the overall performance of EFM-HPT.
In summary, Riemannian geometric techniques can be seen as a natural extension to equivariant flow models, grounding them in the true geometry of the problem. While EFM-HPT already achieves impressive results, incorporating a Riemannian viewpoint (e.g. using manifold distances for OT or manifold diffusion for orientations) could further improve its theoretical elegance and possibly its empirical results. This remains an exciting direction for future research, as it would unite group equivariant modeling with manifold-aware generative dynamics.
Riemannian flow matching and diffusion models introduce several theoretical and practical benefits to generative modeling, especially when compared to conventional (non-geometric) approaches:
Respect for Data Geometry: Manifold-aware models naturally handle curved or periodic domains without flattening or ad-hoc fixes. This means, for example, that probability mass is correctly allocated on a sphere or torus, and transformations like rotations are treated as fundamental symmetries. By using the intrinsic metric, these methods avoid distortions that Euclidean models would inadvertently introduce ([2202.02763] Riemannian Score-Based Generative Modelling). In practice, this leads to more accurate modeling of phenomena such as directional data, spherical signals, and other inherently geometric data.
Stable and Simulation-Free Training: Methods like flow matching eliminate the need to simulate or sample from the forward process during training ([2210.02747] Flow Matching for Generative Modeling). This stability is enhanced on manifolds by avoiding intricate calculations (like computing Jacobians or Riemannian volumes) when possible. RFM, for instance, avoids computing any divergence term on simple manifolds and sidesteps stochastic simulation entirely (Flow Matching on General Geometries | OpenReview). The result is often a simpler training pipeline that is less prone to numerical issues, since one can directly regress the continuous dynamics in closed-form. Empirically, using flow matching with diffusion paths has been observed to produce a more robust training than classical diffusion model training ([2210.02747] Flow Matching for Generative Modeling).
Efficiency and Fast Sampling: Generative models grounded in ODEs or deterministic flows can leverage fast integrators and do not require thousands of tiny denoising steps. By choosing an optimal transport or otherwise geometrically short path between distributions, Riemannian flow models can generate samples with significantly fewer steps than a standard diffusion. For example, the hybrid OT approach in EFM led to a ~4.75× speedup in sampling for molecules ([2312.07168] Equivariant Flow Matching with Hybrid Probability Transport). Generally, flow-matching models allow one-shot or adaptive ODE solvers for sampling, which can be much faster than iterative diffusion decoding. This is a crucial practical advantage when deploying generative models in real-time applications.
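The step-count argument can be seen in a toy 1D comparison (my own illustration, not from the papers): a flow whose trajectories are straight, as with an OT path, is integrated exactly by a single Euler step, while a curved trajectory with the same endpoints needs many steps for comparable accuracy.

```python
# Sketch (assumed toy): straight vs. curved trajectories under Euler sampling.

def euler(field, x0, steps):
    # Integrate dx/dt = field(x, t) from t = 0 to t = 1 with Euler steps.
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        x = x + dt * field(x, k * dt)
    return x

straight = lambda x, t: 3.0        # x(t) = 3t: straight path from 0 to 3
curved = lambda x, t: 6.0 * t      # x(t) = 3t^2: curved path from 0 to 3

print(euler(straight, 0.0, 1))              # 3.0: one step is exact
print(abs(euler(curved, 0.0, 1) - 3.0))     # 3.0: one step misses entirely
print(abs(euler(curved, 0.0, 100) - 3.0))   # ~0.03: still biased at 100 steps
```

This is the practical content of the OT-path claim above: the straighter the probability path, the cheaper the ODE solve at sampling time.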
Scalability to Complex Spaces: Incorporating geometric structure often makes high-dimensional or complex-curvature spaces more tractable. RFM showed that even highly non-trivial manifolds (like a curved 3D surface) can be handled by leveraging spectral methods to compute the required geometric quantities on the fly (Flow Matching on General Geometries | OpenReview). This means that as long as one can define the manifold (possibly via a mesh or an implicit equation), the generative model can, in principle, learn a distribution on it. Traditional approaches without geometry would struggle or require splitting the space into patches. Riemannian methods thus broaden the scope of generative modeling to any data lying on a known manifold, without sacrificing scalability or accuracy.
Improved Performance: Across various benchmarks, imposing the correct geometric priors has led to better log-likelihoods and sample quality. Riemannian Diffusion Models attained state-of-the-art likelihood scores on all tested manifold datasets, and Riemannian Flow Matching likewise achieved superior results on real-world non-Euclidean datasets (Flow Matching on General Geometries | OpenReview). These improvements are not just due to model expressiveness, but also because the training objectives are more correct (unbiased and appropriate for the manifold) and the models can concentrate capacity on relevant structure rather than learning to “undo” embedding distortions.
Inherent Symmetry Handling: Perhaps one of the most attractive features is that Riemannian generative models can encode symmetries by design. If a task requires invariance under a group (like rotations, reflections, or permuting identical elements), one can often incorporate that by formulating the model on the quotient space or on the group manifold itself. This was evidenced by applications like the Riemannian flow policy in robotics, where the model naturally respects joint angle periodicity and rotational constraints of the robot’s state space ([2412.10855] Fast and Robust Visuomotor Riemannian Flow Matching Policy). In generative modeling, this means fewer spurious degrees of freedom and more consistent outputs under transformations, often leading to better generalization. It complements group-equivariant network architectures: the geometry-aware generative model handles global symmetries, while the network can focus on other features of the data.
In conclusion, the infusion of Riemannian geometry into flow matching and diffusion models represents a significant step forward for generative modeling on non-Euclidean domains. It provides the tools to faithfully model data on manifolds, yielding models that are more theoretically sound (respecting the true data geometry) and practically effective (often faster and more accurate). As research continues, we can expect these geometric approaches to enhance existing methods (like equivariant generative models) and unlock new applications in areas where geometry and symmetry play a central role. (Flow Matching on General Geometries | OpenReview)