Publication
Performing Incremental Bayesian Inference by Dynamic Model Counting
Abstract

The ability to update the structure of a Bayesian network when new data becomes available is crucial for building adaptive systems. Recent work by Sang, Beame, and Kautz (AAAI 2005) demonstrates that the well-known Davis-Putnam procedure, combined with a dynamic decomposition and caching technique, is an effective method for exact inference in Bayesian networks of high density and width. In this paper, we define dynamic model counting and extend the dynamic decomposition and caching technique to multiple runs on a series of problems with similar structure. This allows us to perform Bayesian inference incrementally as the structure of the network changes. Experimental results show that our approach yields significant improvements over previous model-counting approaches on multiple challenging Bayesian network instances.
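To give some intuition for the core technique the abstract refers to — a Davis-Putnam (DPLL) search that dynamically decomposes the residual formula into independent components and caches each component's count — here is a minimal unweighted #SAT sketch. This is our own simplification for illustration only, not the authors' implementation; all function and parameter names are invented, and the weighted counting needed for actual Bayesian inference (where literal weights encode CPT entries) is omitted.

```python
def simplify(clauses, lit):
    """Assert `lit` true: drop satisfied clauses, shrink the rest.

    Returns None if an empty (unsatisfiable) clause is produced."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                          # clause satisfied, drop it
        reduced = tuple(l for l in clause if l != -lit)
        if not reduced:
            return None                       # conflict
        out.append(reduced)
    return out


def components(clauses):
    """Split a clause set into variable-disjoint connected components."""
    comps, pool = [], list(clauses)
    while pool:
        comp, cvars, stack = [], set(), [pool.pop()]
        while stack:
            clause = stack.pop()
            comp.append(clause)
            cvars.update(abs(l) for l in clause)
            rest = []
            for c in pool:  # clauses sharing a variable join this component
                (stack if any(abs(l) in cvars for l in c) else rest).append(c)
            pool = rest
        comps.append((tuple(sorted(comp)), cvars))
    return comps


def count_models(clauses, variables, cache=None):
    """#SAT: number of assignments to `variables` satisfying every clause.

    Clauses are tuples of nonzero ints (DIMACS-style literals). `cache`
    maps a component's clause set to its model count; passing the same
    dict across calls lets counts be reused between runs."""
    if cache is None:
        cache = {}
    clauses = [tuple(c) for c in clauses]

    def solve(clauses, nfree):
        if clauses is None:
            return 0                          # conflict reached
        if not clauses:
            return 2 ** nfree                 # all remaining vars are free
        total, constrained = 1, 0
        for comp, cvars in components(clauses):
            constrained += len(cvars)
            if comp not in cache:             # each component counted once
                cache[comp] = branch(list(comp), len(cvars))
            total *= cache[comp]
            if total == 0:
                break
        return total * 2 ** (nfree - constrained)

    def branch(clauses, nfree):
        var = abs(clauses[0][0])              # naive branching heuristic
        return (solve(simplify(clauses, var), nfree - 1) +
                solve(simplify(clauses, -var), nfree - 1))

    return solve(clauses, len(set(variables)))
```

Sharing one `cache` dict across successive calls on structurally similar formulas is a rough analogue of the incremental setting the abstract describes: components unaffected by a structural change keep their cached counts, so only the changed portion of the network is re-counted.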