
AI Builds Interactive Maps from Simple Sketches

A new AI framework transforms basic wireframes into fully functional geospatial dashboards, enabling scientists to create complex data visualization tools without coding expertise.

AI Research
March 26, 2026
4 min read

Creating interactive web dashboards for visualizing environmental data has long been a complex, time-consuming task requiring specialized software engineering skills. This barrier often prevents scientists and policymakers from quickly developing tools needed for risk analysis and decision-making. A new study introduces an AI-driven framework that automates this process, allowing users to generate functional geospatial applications from simple sketches and natural language descriptions. This approach could democratize access to advanced data visualization, making it easier for domain experts to build custom tools for monitoring climate, hazards, and urban systems.

The researchers developed a system called Context-Aware Visual Prompting (CAVP) that uses large language models (LLMs) to generate code for interactive web dashboards from user inputs. Users provide wireframe sketches, often created in tools like PowerPoint and exported as Scalable Vector Graphics (SVGs), along with text descriptions of their requirements. The AI interprets these visual layouts and annotations to produce React-based web applications that include maps, charts, and data visualizations. A key innovation is the integration of an ontological knowledge base, which embeds domain-specific knowledge about geospatial tools and software engineering practices into the generation process. This ensures the generated code adheres to industry standards and is maintainable and scalable.
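The wireframe-parsing step can be sketched in a few lines. The paper's actual parser is not public, so the tag names and the pairing of boxes to labels below are illustrative assumptions: a PowerPoint-exported SVG where layout boxes appear as `rect` elements and functional annotations as `text` elements.

```python
# Sketch: extracting UI elements and annotations from an SVG wireframe.
# Assumes boxes are <rect> elements and annotations are <text> elements;
# the real CAVP parser may use a different heuristic.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def extract_elements(svg_source: str):
    """Return layout boxes (position + size) and text annotations."""
    root = ET.fromstring(svg_source)
    boxes = [
        {
            "x": float(r.get("x", 0)),
            "y": float(r.get("y", 0)),
            "width": float(r.get("width", 0)),
            "height": float(r.get("height", 0)),
        }
        for r in root.iter(f"{SVG_NS}rect")
    ]
    labels = [t.text.strip() for t in root.iter(f"{SVG_NS}text") if t.text]
    return boxes, labels

wireframe = """
<svg xmlns="http://www.w3.org/2000/svg">
  <rect x="0" y="0" width="600" height="400"/>
  <text x="10" y="20">interactive map with sensor markers</text>
</svg>
"""
boxes, labels = extract_elements(wireframe)
```

The extracted boxes and annotations, rather than raw pixels, are what give the LLM a semantic description of the intended layout.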

The methodology involves a multi-stage pipeline that combines visual context extraction, knowledge retrieval, and automated code validation. First, the system parses SVG wireframes to extract UI elements, their positions, and functional annotations. It then uses a knowledge graph to map these elements to appropriate libraries and frameworks, such as React-Leaflet for maps or Highcharts for charts. The LLM generates code using retrieval-augmented generation (RAG), pulling relevant code snippets from a curated codebase to guide the output. Finally, an AI agent validates the generated code through automated testing, including Selenium-based interaction with deployed pages to simulate user behavior and detect issues. This self-validation loop allows the system to repair errors iteratively, improving reliability without human intervention.

The results demonstrate that the framework can successfully generate multi-page, interactive dashboards for environmental monitoring. In a case study, it created a meteorological data dashboard that visualizes sensor measurements, including temperature, wind speed, and humidity, on an interactive map with time-series charts. The system achieved improved performance over baselines such as ScreenShots2Code, which relies on static screenshots and lacks semantic understanding. Evaluation metrics showed that few-shot prompting with the AI repair agent yielded the best outcomes, with pass@1 rates up to 0.143 for complex geovisualization pages and BLEU scores indicating high similarity to expert-coded references. The ablation study in Table 1 highlights that strategies incorporating example-based guidance consistently produced more accurate and compilable code.
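For readers unfamiliar with the metric: pass@1 is simply the fraction of generation attempts whose output passes all automated checks on the first try, so a score of 0.143 corresponds to roughly one success in seven attempts. The outcome list below is illustrative, not the paper's benchmark data.

```python
# pass@1: fraction of single-shot generation attempts that pass all checks.
def pass_at_1(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

# Illustrative: one passing attempt out of seven for a complex
# geovisualization page yields 1/7, which rounds to 0.143.
score = pass_at_1([True, False, False, False, False, False, False])
```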

The implications of this research are significant for fields like environmental science, urban planning, and emergency response, where timely access to data visualization tools is critical. By reducing the need for coding expertise, the framework empowers domain experts to rapidly prototype and deploy dashboards for risk analysis and decision support. For instance, it could help climate scientists monitor extreme weather events or assist policymakers in assessing hazard mitigation strategies. The ontological knowledge base ensures that generated applications are not only functional but also aligned with domain-specific standards, enhancing their utility in real-world scenarios. This automation could accelerate the development of cyberGIS platforms and digital twins, fostering more collaborative and data-driven research.

However, the study acknowledges limitations, including dependence on expert-crafted prompt templates and the need for further refinement in UI design validation. The framework currently focuses on front-end code generation and requires integration with backend data services for full functionality. Future work will aim to enhance these aspects, improving the system's ability to handle complex data interpretation and backend service integration. Additionally, while the approach is demonstrated in geospatial contexts, the researchers note its potential adaptability to other scientific domains by swapping domain-specific knowledge sources, though this would require careful curation of new ontologies and codebases.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn