AI Research

See and Fix the Flaws: Enabling VLMs and Diffusion Models to Comprehend Visual Artifacts via Agentic Data Synthesis

2026-02-25
By AI Curator
📄 See and Fix the Flaws: Enabling VLMs and Diffusion Models to Comprehend Visual Artifacts via Agentic Data Synthesis

👥 Authors: Jaehyun Park, Minyoung Ahn, Minkyu Kim, Jonghyun Lee, Jae-Gil Lee, Dongmin Park

📅 Published: February 24, 2026

🔥 Upvotes: 9

🎯 What This Research Is About

Despite recent advances in diffusion models, AI-generated images still often contain visual artifacts that compromise realism. Although more thorough pre-training and larger models may reduce artifacts, there is no guarantee they can be eliminated entirely, which makes artifact mitigation a crucial area of study. Previous artifact-aware methods depend on human-labeled artifact datasets, which are costly and hard to scale, underscoring the need for an automated approach to reliably acquiring artifact-annotated datasets. In this paper, we propose ArtiAgent, which efficiently creates pairs of real and artifact-injected images. It comprises three agents: a perception agent that recognizes and grounds entities and subentities in real images; a synthesis agent that introduces artifacts via artifact-injection tools, using a novel patch-wise embedding manipulation within a diffusion transformer; and a curation agent that filters the synthesized artifacts and generates both local and global explanations for each instance. Using ArtiAgent, we synthesize 100K images with rich artifact annotations and demonstrate both efficacy and versatility across diverse applications. Code is available at link.

💡 Why This Matters

  • Automated Quality Control: ArtiAgent automatically generates and annotates visual artifacts, eliminating the need for expensive manual labeling of training data.
  • Three-Agent Architecture: Combines perception (entity detection), synthesis (artifact injection), and curation (quality filtering) to create high-quality training datasets.
  • Scalable Solution: Successfully generated 100K annotated images, demonstrating the approach can scale to improve how diffusion models and vision-language models detect and fix visual flaws.
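The three-agent flow above (perceive → synthesize → curate) can be sketched as a simple staged pipeline. This is a minimal illustrative sketch, not the paper's implementation: every class, function, and field name here (`Sample`, `perceive`, `synthesize`, `curate`, `run_pipeline`) is a hypothetical placeholder, and the real system calls detection models and a diffusion transformer where this sketch uses stub logic.

```python
# Hypothetical sketch of a three-agent artifact-annotation pipeline.
# All names are illustrative; the actual ArtiAgent code differs.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Sample:
    image_id: str
    entities: list = field(default_factory=list)      # filled by perception agent
    artifacts: list = field(default_factory=list)     # filled by synthesis agent
    explanations: dict = field(default_factory=dict)  # filled by curation agent

def perceive(sample: Sample) -> Sample:
    """Perception agent: recognize and ground entities/subentities in the real image."""
    sample.entities = ["hand", "hand/finger"]  # placeholder for real detections
    return sample

def synthesize(sample: Sample) -> Sample:
    """Synthesis agent: inject an artifact into each grounded region
    (the paper does this via patch-wise embedding manipulation in a DiT)."""
    sample.artifacts = [{"entity": e, "type": "distortion"} for e in sample.entities]
    return sample

def curate(sample: Sample) -> Optional[Sample]:
    """Curation agent: filter failed injections and attach local/global explanations."""
    if not sample.artifacts:
        return None  # reject samples with no usable artifact
    sample.explanations = {
        "local": [f"{a['entity']} shows a {a['type']}" for a in sample.artifacts],
        "global": "Image contains injected artifacts in the grounded regions.",
    }
    return sample

def run_pipeline(image_ids: list) -> list:
    """Chain the three agents over a batch, keeping only curated samples."""
    curated = []
    for image_id in image_ids:
        result = curate(synthesize(perceive(Sample(image_id))))
        if result is not None:
            curated.append(result)
    return curated
```

In this shape, scaling to 100K images is just a matter of running `run_pipeline` over larger batches, with the curation stage acting as the quality gate before anything enters the dataset.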

📖 Read Full Paper →

💻 View Code on GitHub →


Curated from Hugging Face daily papers
