<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Parallel Mind]]></title><description><![CDATA[Wanderings of a multi-dimensional mind around Artificial Intelligence, Simulation, Virtual and Augmented reality (XR).]]></description><link>https://www.parallelmind.xyz</link><image><url>https://substackcdn.com/image/fetch/$s_!4p1k!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F18742e41-f6fe-4164-b683-45d2691c6a75_1280x1280.png</url><title>Parallel Mind</title><link>https://www.parallelmind.xyz</link></image><generator>Substack</generator><lastBuildDate>Fri, 03 Apr 2026 21:23:22 GMT</lastBuildDate><atom:link href="https://www.parallelmind.xyz/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Michal Takáč]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[parallelmind@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[parallelmind@substack.com]]></itunes:email><itunes:name><![CDATA[Michal Takáč]]></itunes:name></itunes:owner><itunes:author><![CDATA[Michal Takáč]]></itunes:author><googleplay:owner><![CDATA[parallelmind@substack.com]]></googleplay:owner><googleplay:email><![CDATA[parallelmind@substack.com]]></googleplay:email><googleplay:author><![CDATA[Michal Takáč]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Will 2024 be the year of text-to-3D?]]></title><description><![CDATA[Latest developments in open-source 3D generative models.]]></description><link>https://www.parallelmind.xyz/p/will-2024-be-the-year-of-text-to-3d</link><guid isPermaLink="false">https://www.parallelmind.xyz/p/will-2024-be-the-year-of-text-to-3d</guid><dc:creator><![CDATA[Michal Takáč]]></dc:creator><pubDate>Sat, 29 Jun 2024 09:39:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CYPF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff270a37-4c48-40cc-adfb-c1d4765c0279_2786x860.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In recent years, we've witnessed a remarkable surge in text-to-3D and image-to-3D AI models, revolutionizing the way we create and interact with three-dimensional content. These innovative approaches have opened up new possibilities for artists, designers, developers, and creators across various industries, enabling them to bring their ideas to life in the digital realm with unprecedented ease and speed.</p><p>While closed-source apps like <a href="https://www.meshy.ai">Meshy.ai</a>, <a href="https://www.sloyd.ai">Sloyd.ai</a>, <a href="https://www.alpha3d.io">Alpha3D</a>, <a href="https://3dfy.ai">3DFY.ai</a>, and Luma Labs's <a href="https://lumalabs.ai/genie?view=create">Genie</a> have dominated the field with their impressive capabilities, a new wave of open-source alternatives is rapidly gaining ground. 
These emerging open-source models, architectures, and frameworks are not only democratizing access to 3D content creation but also pushing the boundaries of what's possible in this exciting domain.</p><p>The closed-source solutions have set a high bar for performance, quality, and speed, offering users powerful tools to generate complex 3D models from simple text descriptions or 2D images. They have found applications in diverse fields such as gaming, film production, virtual reality, augmented reality, and product visualization. However, their proprietary nature often means limited accessibility and customization options for users and researchers.</p><p><strong>Enter the world of open-source text-to-3D and image-to-3D models</strong>. These projects, driven by passionate communities of developers and researchers, are making significant strides in bridging the gap between proprietary and freely available solutions. By leveraging cutting-edge machine learning techniques, computer vision algorithms, and 3D modeling principles, these open-source initiatives are slowly but surely closing the gap with their closed-source counterparts.</p><p>The advantages of open-source models in this space are numerous. They offer transparency, allowing users to understand and modify the underlying algorithms. This openness fosters innovation, as developers can build upon existing work, experiment with new approaches, and contribute improvements back to the community. Additionally, open-source solutions often provide greater flexibility in terms of deployment, integration with other tools, and customization to specific use cases.</p><p>As these open-source projects continue to evolve, we're seeing exciting developments in areas such as:</p><ul><li><p>Improved accuracy and detail in 3D model generation</p></li><li><p>Enhanced support for diverse input formats and styles</p></li><li><p>Faster processing times and more efficient resource utilization</p></li><li><p>Better integration with popular 3D modeling and rendering software</p></li><li><p>Expanded datasets for training and fine-tuning models</p></li><li><p>Novel architectures that combine the strengths of different approaches</p></li></ul><p>The implications of these advancements are far-reaching. As open-source text-to-3D and image-to-3D models become more sophisticated, we can expect to see their adoption across a wide range of industries.
From rapid prototyping in product design to creating immersive virtual environments for education, training, and simulation, the potential applications are vast and varied.</p><p>Moreover, the democratization of 3D content creation through open-source tools has the power to level the playing field for small businesses, independent creators, and educational institutions. It enables them to compete with larger entities by providing access to powerful 3D generation capabilities without the need for substantial financial investments in proprietary software. Some hardware investment is still required, though, typically in powerful NVIDIA GPUs, which most of these models favor because of the mature CUDA ecosystem.</p><p>Without further ado, let&#8217;s explore the latest developments in open-source text-to-3D and image-to-3D models and frameworks!</p><p></p><h3>DreamFusion</h3><p><a href="https://dreamfusion3d.github.io">Page</a> | <a href="https://arxiv.org/abs/2209.14988">Paper</a></p><p>In their paper titled "DreamFusion: Text-to-3D using 2D Diffusion," authors Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall propose a novel method for text-to-3D synthesis that leverages pretrained image diffusion models as effective priors. This approach eliminates the need for specialized training data in the 3D domain and requires no modifications to existing image diffusion models.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;4af3590c-1f1b-43cf-a2f6-aa67e22a9b65&quot;,&quot;duration&quot;:null}"></div><p>Generating 3D content from text poses unique challenges compared to traditional text-to-image synthesis. One of the main obstacles is the lack of large-scale labeled datasets pairing textual descriptions with 3D assets, which makes it difficult for deep learning models to learn to generate 3D objects solely from textual input. Another challenge is finding efficient denoising architectures for 3D data: denoising plays a crucial role in generating high-quality outputs from noisy or incomplete information, but existing denoising techniques designed for two-dimensional (2D) data do not transfer directly to 3D.</p><p>To address these challenges, Poole et al. propose DreamFusion, a method that uses a pretrained 2D text-to-image diffusion model as a prior for optimizing a 3D scene representation. Rather than training anything new in 3D, they introduce a loss based on probability density distillation (Score Distillation Sampling, or SDS) and use gradient descent to refine a randomly initialized Neural Radiance Field (NeRF). The optimization minimizes the loss incurred when 2D renderings of the 3D model, taken from various angles, are scored by the frozen diffusion prior.</p><p>The resulting 3D models generated from textual input can be visualized from any perspective, illuminated by different light sources, or seamlessly integrated into diverse 3D environments. This allows for immersive and interactive experiences with virtual objects created solely from text descriptions.</p>
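<p>To make this concrete, here is a minimal sketch of one SDS update step in PyTorch. Everything here (<code>eps_model</code>, <code>render</code>, the timestep range) is a hypothetical stand-in for the pretrained diffusion model and a differentiable renderer; only the gradient logic follows the paper.</p><pre><code class="language-python"># Minimal sketch of one Score Distillation Sampling (SDS) step.
import torch

def sds_step(eps_model, render, params, text_emb, alphas_bar, w=1.0):
    x = render(params)                         # image rendered from current 3D params
    t = torch.randint(20, 980, (1,))           # random diffusion timestep
    a = alphas_bar[t]                          # cumulative noise schedule at t
    eps = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * eps  # noise the render (forward diffusion)
    with torch.no_grad():
        eps_hat = eps_model(x_t, t, text_emb)  # frozen, pretrained text-to-image prior
    # SDS skips the U-Net Jacobian and backpropagates w * (eps_hat - eps)
    # through the renderer, nudging the 3D params toward images the prior likes.
    x.backward(gradient=w * (eps_hat - eps))
</code></pre>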
<p>One potential application of DreamFusion is in the field of virtual reality (VR) and augmented reality (AR). By generating realistic 3D objects from text, it could enable users to interact with virtual objects in VR/AR environments without requiring specialized training data or complex denoising techniques.</p><p></p><h3>ProlificDreamer</h3><p><a href="https://ml.cs.tsinghua.edu.cn/prolificdreamer/">Page</a> | <a href="https://arxiv.org/abs/2305.16213">Paper</a> | <a href="https://github.com/thu-ml/prolificdreamer">Code</a></p><p>In their paper titled "ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation," authors Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu introduce a novel approach to address the limitations of score distillation sampling (SDS) in text-to-3D generation. SDS has shown promise in leveraging pretrained text-to-image diffusion models but has been plagued by issues such as over-saturation, over-smoothing, and low diversity in generated samples. The key innovation is to model the 3D scene parameters as a random variable rather than a constant, as SDS does. This leads to variational score distillation (VSD), a particle-based variational framework that aims to tackle these challenges.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;06ca3d2b-eaa3-45f3-bdde-a1c81098a587&quot;,&quot;duration&quot;:null}"></div><h6>Michelangelo style statue of dog reading news on a cellphone.</h6><p></p><p>The authors show that SDS can be viewed as a special case of VSD, yet it often produces subpar samples across different classifier-free guidance (CFG) weights. In contrast, VSD works well across a range of CFG weights by employing ancestral sampling from diffusion models, enhancing sample diversity while also improving overall sample quality.</p><p>Additionally, the authors present several enhancements in the design space for text-to-3D generation, including optimizations related to the distillation time schedule and density initialization. The proposed approach, dubbed ProlificDreamer, generates outputs at high rendering resolution (512x512) and high-fidelity Neural Radiance Fields (NeRF) with intricate structures and complex visual effects like smoke and drops. By fine-tuning meshes initialized from NeRF using VSD, the generated 3D models exhibit high detail and photorealistic qualities. This research was presented at NeurIPS 2023 as a Spotlight paper.</p>
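<p>Conceptually, VSD swaps the injected-noise term in the SDS gradient for a learned score of the distribution of the current renders, estimated by a second network fine-tuned online (a LoRA of the pretrained model in the paper). A hedged sketch of the difference, where <code>eps_pretrained</code>, <code>eps_lora</code>, and the weighting <code>w</code> are illustrative stand-ins:</p><pre><code class="language-python"># Sketch: SDS vs. VSD gradients on a noised render x_t (ProlificDreamer).
def w(t):
    return 1.0  # timestep weighting; the real schedule is model-dependent

def sds_gradient(eps_pretrained, x_t, t, text_emb, eps):
    # SDS: predicted noise minus the actual injected Gaussian noise
    return w(t) * (eps_pretrained(x_t, t, text_emb) - eps)

def vsd_gradient(eps_pretrained, eps_lora, x_t, t, text_emb, cam):
    # VSD: eps_lora is trained online with the standard denoising loss on the
    # current renders (conditioned on the camera), so it scores the evolving
    # distribution of 3D scenes rather than pure noise.
    return w(t) * (eps_pretrained(x_t, t, text_emb) - eps_lora(x_t, t, text_emb, cam))
</code></pre>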
<p></p><h3>Magic3D</h3><p><a href="https://research.nvidia.com/labs/dir/magic3d/">Page</a> | <a href="https://arxiv.org/abs/2211.10440">Paper</a></p><p>Magic3D is a text-to-3D content creation tool that creates 3D mesh models with unprecedented quality. Together with image-conditioning techniques and a prompt-based editing approach, it provides new ways to control 3D synthesis, opening up new avenues for various creative applications.</p><p>Existing techniques for generating 3D models from text prompts have several limitations that hinder their efficiency and quality. For example, DreamFusion, one of the most widely used methods, relies on a single-stage optimization process that can take up to 1.5 hours to generate a model. The slowness stems from the costly per-iteration NeRF rendering, while the low-resolution (64 &#215; 64) image supervision of the diffusion prior caps the level of detail that can be achieved in the final model. The approach also suffers from memory and compute inefficiencies because of its heavyweight NeRF scene representation.</p><p>To address these limitations, the authors of this paper (working at NVIDIA) propose Magic3D, a two-stage optimization framework that combines multiple diffusion priors with an efficient scene representation based on hash grids.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!CYPF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff270a37-4c48-40cc-adfb-c1d4765c0279_2786x860.heic" width="1456" height="449" alt=""></figure></div><p>In the first stage, Magic3D optimizes a coarse neural field representation using multiple diffusion priors while utilizing hash grids for scene representation. This allows for quick generation of view-consistent geometry while reducing memory usage and computation time compared to DreamFusion's heavier scene representation. The second stage optimizes a mesh representation with high-resolution diffusion priors (up to 512 &#215; 512) using an efficient differentiable rasterizer and camera close-ups.
This allows for the recovery of high-frequency details in geometry and texture, resulting in a more realistic and detailed final model.</p><p>On average, Magic3D can generate a high-quality 3D model in just 40 minutes, half the time required by DreamFusion.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!U_xo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc713de80-db6b-460f-83b0-7f26dd484c08_1778x1350.heic" width="1456" height="1106" alt=""></figure></div>
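<p>In outline, the two-stage loop looks something like the sketch below. Every component is a dummy stand-in (the paper uses an Instant NGP-style hash-grid NeRF, mesh extraction, and a differentiable rasterizer); only the control flow mirrors the method.</p><pre><code class="language-python"># Illustrative skeleton of Magic3D's two-stage optimization (dummy stand-ins).
import torch

class DummyScene:                    # stands in for hash-grid NeRF / textured mesh
    def __init__(self, n=16):
        self.params = torch.zeros(n, requires_grad=True)
    def render(self, res):           # pretend differentiable render: params -> image
        return self.params.sum() * torch.ones(res, res)

def prior_grad(img):                 # stands in for an SDS-style diffusion gradient
    return torch.ones_like(img)

def optimize(scene, res, steps, lr=1e-2):
    for _ in range(steps):
        img = scene.render(res)
        img.backward(gradient=prior_grad(img))
        with torch.no_grad():
            scene.params -= lr * scene.params.grad
            scene.params.grad = None

coarse = DummyScene()
optimize(coarse, res=64, steps=10)   # stage 1: coarse NeRF, low-res supervision
fine = DummyScene()                  # stage 2: mesh initialized from stage 1 result
optimize(fine, res=512, steps=10)    # refined via rasterization at 512x512
</code></pre>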
class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>One of the most significant advantages of Magic3D is its ability to provide control over the 3D synthesis process. By incorporating advancements from text-to-image editing applications, users can now manipulate various aspects such as lighting, materials, textures, and camera angles through simple text prompts. This not only makes 3D content creation more accessible for novices but also enhances the workflow for expert artists.</p><p>The efficiency and quality offered by Magic3D open up new possibilities for creative applications across various industries. In gaming and entertainment, it can be used to quickly generate realistic characters or environments based on text descriptions provided by writers or game designers. In architecture, it can assist architects in creating virtual representations of their designs with ease. For robotics simulation, it can aid engineers in generating accurate models for testing purposes.</p><p></p><h3>HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance</h3><p><a href="https://josephzhu.com/HiFA-site/">Page</a> | <a href="https://arxiv.org/abs/2305.18766">Paper</a> | <a href="https://github.com/JunzheJosephZhu/HiFA">Code</a></p><p>Recent advancements in text-to-3D generation have been remarkable, with most existing methods leveraging pre-trained text-to-image diffusion models to optimize 3D representations like Neural Radiance Fields (NeRFs) via latent-space denoising score matching. However, these methods often result in artifacts and inconsistencies across different views due to suboptimal optimization approaches and limited understanding of 3D geometry. Additionally, the inherent constraints of NeRFs in rendering crisp geometry and stable textures often require a two-stage optimization to attain high-resolution details.</p><p>To address these limitations, authors of the paper Junzhe Zhu, Peiye Zhuang,&#8217; and Sanmi Koyejo propose holistic sampling and smoothing approaches to achieve high-quality text-to-3D generation in a single-stage optimization. Their method computes denoising scores in both the text-to-image diffusion model's latent and image spaces. 
<p>To generate high-quality renderings, the authors propose regularizing the variance of z-coordinates along NeRF rays, which helps stabilize geometry and texture rendering. Furthermore, the paper introduces a kernel smoothing technique that refines importance-sampling weights coarse-to-fine, ensuring accurate and thorough sampling in high-density regions.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;06447b5e-1700-4c80-9c65-ed482f122117&quot;,&quot;duration&quot;:null}"></div><p></p><h3><strong>Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation</strong></h3><p><a href="https://fantasia3d.github.io">Page</a> | <a href="https://arxiv.org/abs/2303.13873">Paper</a> | <a href="https://github.com/Gorilla-Lab-SCUT/Fantasia3D">Code</a></p><p>Fantasia3D is a method for high-quality text-to-3D content creation. Proposed by Rui Chen et al., this approach sets itself apart from existing methods in the realm of automatic 3D content generation.</p><p>Recent advancements in this field have been fueled by the availability of pre-trained large language models and image diffusion models, and text-to-3D content creation has emerged as a prominent research topic. However, existing methods often utilize implicit scene representations that couple geometry and appearance through volume rendering. While effective to some extent, these approaches fall short in capturing finer geometries and achieving photorealistic rendering, which limits their ability to generate high-quality 3D assets.</p><p>Fantasia3D addresses this issue by disentangling the modeling and learning of geometry and appearance. For geometry learning, the method relies on a hybrid scene representation and encodes surface normals extracted from this representation as input to the image diffusion model. For appearance modeling, Fantasia3D introduces a spatially varying bidirectional reflectance distribution function (BRDF) into the text-to-3D task. By learning surface materials for photorealistic rendering of the generated surfaces, this disentangled framework is compatible with popular graphics engines and enables functionality such as relighting, editing, and physical simulation of the resulting 3D assets.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;94b31469-ee69-4cd3-974a-4eb0eb4c36a1&quot;,&quot;duration&quot;:null}"></div><p>The efficacy of Fantasia3D is demonstrated through comprehensive experiments showcasing its superiority over existing methods across various text-to-3D task settings. Presented at ICCV 2023, this approach opens up new possibilities in high-quality 3D content creation by bridging the gap between geometry and appearance modeling.</p>
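<p>For intuition, the appearance stage boils down to a small network that maps a surface point to spatially varying BRDF parameters, which a physically based shader (not shown) then consumes. A minimal sketch; the exact parameter split below (diffuse color, roughness, metallic) is an illustrative assumption in the spirit of the paper:</p><pre><code class="language-python"># Sketch: spatially varying BRDF head for appearance modeling. A tiny MLP maps
# 3D surface points to material parameters for a PBR shader to consume.
import torch
import torch.nn as nn

class BRDFField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5),            # 3 diffuse + 1 roughness + 1 metallic
        )
    def forward(self, xyz):
        out = torch.sigmoid(self.net(xyz))   # keep material parameters in [0, 1]
        return out[..., :3], out[..., 3:4], out[..., 4:5]

kd, rough, metal = BRDFField()(torch.rand(1024, 3))  # materials for 1024 points
</code></pre>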
<p></p><h3>Zero-1-to-3: Zero-shot One Image to 3D Object</h3><p><a href="https://zero123.cs.columbia.edu">Page</a> | <a href="https://arxiv.org/abs/2303.11328">Paper</a> | <a href="https://github.com/cvlab-columbia/zero123">Code</a></p><p>In recent years, there has been a growing interest in developing computer vision systems that can accurately manipulate and reconstruct 3D objects from limited visual input. This has led to significant advancements in the field of view synthesis and 3D reconstruction, which have numerous applications in areas such as virtual reality, gaming, and robotics. However, most existing approaches require multiple images or depth information to generate novel views of an object or reconstruct its 3D structure. To address this limitation, a team of researchers at Columbia University created a novel framework called "Zero-1-to-3" for manipulating the camera viewpoint of an object given just a single RGB image. Their research paper, "Zero-1-to-3: Zero-shot One Image to 3D Object," introduces an approach that leverages geometric priors learned by large-scale diffusion models from natural images to enable accurate view synthesis in an under-constrained setting.</p><p>Most existing methods rely on large datasets with multiple images or depth information for training their models, which limits their applicability to real-world scenarios where obtaining such data may not be feasible. Moreover, even if trained on synthetic data generated using computer graphics techniques, these models often fail to generalize well when presented with out-of-distribution datasets or diverse real-world images, because they lack robustness against variations in lighting conditions, textures, and object appearances.</p><p>To overcome these limitations, the authors propose a conditional diffusion model that is trained on a synthetic dataset to learn control over the relative camera viewpoint. This enables the generation of new images depicting the same object from different perspectives, following a specified camera transformation.
The authors demonstrate that their approach significantly outperforms existing state-of-the-art models for single-view 3D reconstruction and novel view synthesis by harnessing Internet-scale pre-training.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!2vk7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e805ab4-504e-4a88-9602-b37b6d559048_1966x1754.heic" width="1456" height="1299" alt=""></figure></div><p>The framework consists of two main components: a conditional diffusion model and a synthetic dataset. The diffusion model is trained on the synthetic dataset to learn control over the relative camera viewpoint, which allows accurate manipulation of object viewpoints from limited visual input. The synthetic dataset is created using computer graphics techniques and contains various objects with different textures, lighting conditions, and backgrounds; this diversity helps the model learn robust representations that generalize to real-world scenarios.</p>
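<p>The conditioning itself is compact: alongside the input image, the model sees the relative camera transformation expressed in spherical coordinates. A sketch of one plausible encoding; the paper conditions on relative polar angle, azimuth, and radius, but the exact packing below is illustrative:</p><pre><code class="language-python"># Sketch: encode a relative camera viewpoint for a view-conditioned diffusion
# model. Input and target cameras are given as (polar, azimuth, radius).
import math
import torch

def relative_view_embedding(cam_in, cam_target):
    d_polar = cam_target[0] - cam_in[0]
    d_azim = cam_target[1] - cam_in[1]
    d_radius = cam_target[2] - cam_in[2]
    # sin/cos for the azimuth keeps the encoding continuous across the wrap-around
    return torch.tensor([d_polar, math.sin(d_azim), math.cos(d_azim), d_radius])

emb = relative_view_embedding((math.pi / 3, 0.0, 1.5), (math.pi / 3, math.pi / 2, 1.5))
# emb would be combined with the image embedding that conditions the U-Net.
</code></pre>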
type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The proposed framework, Zero-1-to-3, consists of two main components - a conditional diffusion model and a synthetic dataset. The conditional diffusion model is trained on the synthetic dataset to learn parameters controlling the relative camera viewpoint. This allows for accurate manipulation of object viewpoints from limited visual input. The synthetic dataset used for training is created using computer graphics techniques and contains various objects with different textures, lighting conditions, and backgrounds. This diverse dataset ensures that the model learns robust representations that can generalize well to real-world scenarios.</p><p>The proposed framework has numerous applications in areas such as virtual reality, gaming, robotics, autonomous driving, etc., where precise control over camera transformations or accurate 3D scene reconstruction is crucial. It can also be used for generating realistic images from limited visual input or enhancing low-quality images by synthesizing new views. Moreover, the viewpoint-conditioned diffusion methodology introduced in this work can also be employed for other tasks such as image translation and style transfer by conditioning on different camera transformations.</p><p></p><h2>Conclusion</h2><p>The landscape of text-to-3D and image-to-3D technologies is rapidly evolving, with open-source solutions making significant strides. While proprietary models still hold the lead in many aspects, the gap is narrowing, and the future looks incredibly promising for open-source alternatives.</p><p>The democratization of 3D content creation through these open-source tools is not just a technological advancement; it's a paradigm shift that has the potential to revolutionize industries and empower creators worldwide. As these models continue to improve, we can expect to see an explosion of creativity and innovation across various fields, from entertainment and education to manufacturing and scientific visualization.</p><p>However, the journey is far from over. The open-source community faces challenges in terms of computational resources, data quality, and achieving consistently photorealistic results. 
Yet developers and researchers keep pushing the boundaries, refining algorithms, and expanding what can be achieved.</p><p>I encourage you to experiment with the latest tools, or simply stay informed about the latest breakthroughs. The future of 3D content creation is being shaped right now, and it's more accessible and exciting than ever before!</p>]]></content:encoded></item><item><title><![CDATA[Decentralized scientific computing on Golem Network's blockchain might lead us to the origins of life...
and the future of AI training]]></title><description><![CDATA[The groundbreaking project showcases the immense power of Golem's decentralized computing platform in advancing scientific understanding and opens up a path to decentralized AI.]]></description><link>https://www.parallelmind.xyz/p/decentralized-scientific-computing</link><guid isPermaLink="false">https://www.parallelmind.xyz/p/decentralized-scientific-computing</guid><dc:creator><![CDATA[Michal Takáč]]></dc:creator><pubDate>Sun, 28 Jan 2024 10:28:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4w8q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4f8308e-35da-49dd-9e9e-50b275c3afdd_1792x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!4w8q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4f8308e-35da-49dd-9e9e-50b275c3afdd_1792x1024.jpeg" width="1456" height="832" alt=""></figure></div><p>In a remarkable venture into the realms of prebiotic chemistry, researchers have embarked on a quest that takes us back to the very dawn of life. Their latest <a href="https://doi.org/10.1016/j.chempr.2023.12.009">study published in Chem journal</a>, which focused on the chemistry before life began, has provided a glimpse into the complex interplay of molecules that could have sparked life on our planet. This e&#8230;</p>
      <p>
          <a href="https://www.parallelmind.xyz/p/decentralized-scientific-computing">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[[SK] PhysicsML: Harnessing the power of artificial intelligence for engineering and science]]></title><description><![CDATA[A short talk from SlovakiaTech 2023 in Ko&#353;ice on physics-informed neural networks and their use in practice]]></description><link>https://www.parallelmind.xyz/p/sk-physicsml-vyuzitie-sily-umelej</link><guid isPermaLink="false">https://www.parallelmind.xyz/p/sk-physicsml-vyuzitie-sily-umelej</guid><dc:creator><![CDATA[Michal Takáč]]></dc:creator><pubDate>Tue, 02 Jan 2024 13:00:28 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/140126667/f1cda39656f1e39d567fa4820441489d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[]]></content:encoded></item><item><title><![CDATA[Intro to PhysicsML]]></title><description><![CDATA[Talk from Art&Tech Days 2023 in Ko&#353;ice, focusing on the introduction to novel approaches to modeling and simulation using physics-informed deep learning techniques.]]></description><link>https://www.parallelmind.xyz/p/intro-to-physicsml</link><guid isPermaLink="false">https://www.parallelmind.xyz/p/intro-to-physicsml</guid><dc:creator><![CDATA[Michal Takáč]]></dc:creator><pubDate>Tue, 02 Jan 2024 12:37:17 GMT</pubDate><enclosure url="https://substack-video.s3.amazonaws.com/video_upload/post/140125832/90d1881b-8c19-4b63-8cf4-256634630b4c/transcoded-00001.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p>
      <p>
          <a href="https://www.parallelmind.xyz/p/intro-to-physicsml">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Harnessing the Power of AI in Technology Development]]></title><description><![CDATA[A blend of physics and deep learning is set to transform how we approach problem-solving in numerous domains.]]></description><link>https://www.parallelmind.xyz/p/harnessing-the-power-of-ai-in-technology</link><guid isPermaLink="false">https://www.parallelmind.xyz/p/harnessing-the-power-of-ai-in-technology</guid><dc:creator><![CDATA[Michal Takáč]]></dc:creator><pubDate>Thu, 28 Dec 2023 12:37:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!d6kG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dc74ed9-4f16-4814-b8fc-6022c639fadf_1792x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!d6kG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dc74ed9-4f16-4814-b8fc-6022c639fadf_1792x1024.png" width="1456" height="832" alt="AI agents are helping us to develop new technologies"><figcaption class="image-caption">Autonomous AI agents are helping us to develop new technologies.</figcaption></figure></div>
predicting how it will evolve is one of the key challenges of humankind. A key tool for achieving these goals is numerical simulation, and next-gen simulations could strongly profit from integrating deep learning components to make even more accurate predictions about our world. To quote <a href="https://ge.in.tum.de/about/n-thuerey/">Nils Thuerey</a>:</p><div class="pullquote"><p>&#8220;Understanding our environment, and predicting how it will evolve is one of the key challenges of humankind. A key tool for achieving these goals are simulations, and next-gen simulations could strongly profit from integrating deep learning components to make even more accurate predictions about our world.&#8221; &#8212;Nils Thuerey</p></div><p>Since stuff around us is not static (even though some things might look like it), we need to understand how things change over time, and it's also very beneficial to know how we can influence these changes. Mathematics provides us with a powerful toolset to do so. Problems stemming from the interplay of the complex, interconnected systems that govern our world are often described by partial differential equations (PDEs).</p><p>As mathematics is evolving, we are able to describe more and more complex phenomena with PDEs, and we are also able to solve them with increasing accuracy. However, the complexity of the problems we are trying to solve is growing even faster, and even though throughout the years humans developed a wide range of numerical methods to solve PDEs and figure out how to tackle numerical instabilities in the process, we are reaching the limits of our current methods. This is where AI comes into play.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5DMM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5DMM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png 424w, https://substackcdn.com/image/fetch/$s_!5DMM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png 848w, https://substackcdn.com/image/fetch/$s_!5DMM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!5DMM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5DMM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png" width="1456" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png&quot;,&quot;srcNoWatermark&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/98fed36d-89d3-48dd-8dce-9972e7289bbf_1792x1024.png&quot;,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:827354,&quot;alt&quot;:&quot;Multidimensionality&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Multidimensionality" title="Multidimensionality" srcset="https://substackcdn.com/image/fetch/$s_!5DMM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png 424w, https://substackcdn.com/image/fetch/$s_!5DMM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png 848w, https://substackcdn.com/image/fetch/$s_!5DMM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!5DMM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa3ba86d0-29fe-44cb-b7f5-a79931d9577e_1792x1024.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">DALL-E 3, prompt: "An ultrarealistic octane rendered image depicting the multidimensional nature of partial differential equations. 
A central holographic display shows layers of compressible Navier-Stokes equations, with waves and particles interacting dynamically, illustrating the flow dynamics."</figcaption></figure></div><p>In the past few years, we've seen a tremendous progress in the field of AI, although the mainstream interest was mostly focused on applications from the fields of computer vision, natural language processing, and others that are more closely related to the human experience.</p><p>The new generation of AI models is able to learn complex patterns from data, but in addition to that, new architectures and approaches were recently developed for integrating a knowledge of physics into the learning process. They are often called various names, but mostly are grouped under the category of so-called "scientific machine learning", or SciML in short. These new methods opened up a whole new world of possibilities for engineering and science.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3M_t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3M_t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png 424w, https://substackcdn.com/image/fetch/$s_!3M_t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png 848w, https://substackcdn.com/image/fetch/$s_!3M_t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png 1272w, https://substackcdn.com/image/fetch/$s_!3M_t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3M_t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png" width="448" height="330.46153846153845" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1074,&quot;width&quot;:1456,&quot;resizeWidth&quot;:448,&quot;bytes&quot;:83138,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3M_t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png 424w, 
<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3M_t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!3M_t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f9af153-d021-4e06-8ecc-3fcd49b4ac3b_1976x1458.png" width="448" height="330" class="sizing-normal" alt="" loading="lazy"></picture></div></a><figcaption class="image-caption">SciML is a combination of machine learning, deep learning, and physics-based modeling.</figcaption></figure></div><p>SciML, then, is a combination of machine learning, deep learning, and physics-based modeling. Taking a step back to purely data-driven deep learning methods: these are often treated as black boxes, and their training process is seen as something beyond human understanding.</p>
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">DALL-E 3, prompt: "Rendered in ultrarealistic style using octane, a black box symbolizing a machine learning neural network is placed on an aged metal stand. The room has a vintage ambiance with worn-out walls and dim lighting. Modern elements on the box include a digital screen and futuristic ports."</figcaption></figure></div><p>I know many of you may be hesitant when you think about this, like, &#8220;How can something that we don't understand be used to help us understand nature if it's a black box that we don&#8217;t fully understand?&#8221; We still need to do more work in terms of trying to learn more about these types of architectures, going down deep into understanding and doubling down on explainable AI models. So, the problem with black boxes in terms of data-driven models has slowly been mitigated with these new types of AI models called physics-driven machine learning or PhysicsML, which can be considered part of the scientific machine learning framework. This way, we can incorporate the knowledge of some mathematical formulation into the neural network itself, which is a huge thing actually. </p><p>Another thing is that these types of models, after they are trained, can be used for simulations of various physics phenomena with <a href="https://developer.nvidia.com/blog/using-carbon-capture-and-storage-digital-twins-for-net-zero-strategies/">extreme speed-ups up to hundreds of thousands of times</a> faster than the traditional simulation software</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.parallelmind.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Parallel Mind is a reader-supported publication. 
<div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.parallelmind.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Parallel Mind is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>
      <p>
          <a href="https://www.parallelmind.xyz/p/harnessing-the-power-of-ai-in-technology">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>