Research Portrait: Deepika Raghu

Dr. Deepika Raghu is a postdoctoral researcher at ETH Zurich in the Chair of Circular Engineering for Architecture, led by Prof. Dr. Catherine De Wolf. Her research involves developing AI-driven tools to accelerate the circular economy in the built environment. She focuses on digitally enabled reuse, material traceability, and creating global cadastres to help cities transition toward sustainable and regenerative construction.

Can you describe your current research project for a general audience and share what inspired your work?

I am part of the project CIRCAD: Circular Facade Design, funded by Bouygues Construction. I am currently working on building AI-powered tools that help cities identify reusable building materials from existing buildings by analysing street-level images, maps, and photogrammetry-based 3D models. The idea is simple: rather than demolishing buildings and sending everything to landfill, we can use technology to predict which materials and components, such as windows, bricks, or timber, can be saved and reused. My research sits at the intersection of computer vision, urban data, and sustainability, and it’s been shaped by my own professional experiences, where I’ve seen firsthand how much potential is wasted simply because reuse is too slow, too manual, or too invisible. That invisibility is what I’m trying to change by making reuse measurable, mappable, and visualizable using emerging digital technologies.

Growing up in India, I saw how informal systems already circulate materials in ways that are surprisingly efficient, but they lack visibility, data, and support. At the same time, in more formal construction contexts such as Switzerland, I noticed how circularity is often treated as an afterthought, disconnected from real workflows. The tools I am building are shaped by this dual vision: to serve both the top-down needs of government institutions, architects, and construction companies and the bottom-up efforts of informal circular practices. Using multimodal models that combine computer vision and language-based AI, I can interpret urban data in more flexible and accessible ways, supporting a wide range of end-users and enabling decision-making in different global contexts.

What are the potential real-world applications of your research within the AEC industry? How might it positively impact our built environment and everyday life? 

One of the key outcomes of my work has been the development of an AI system that interprets the built environment through images, spatial data, and learned material patterns. The tool predicts the material, condition, and architectural style of buildings by analyzing their visual and geometric features, and then indicates which components might be salvaged, their material type, and their reuse potential. It contributes to a growing digital cadastre of potential secondary materials: a spatial inventory of the building materials cities already have, ready to be recovered from buildings that may undergo renovation or demolition.
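To make the idea of a cadastre entry concrete, here is a minimal, illustrative sketch of what a single record in such an inventory might contain; the field names and values are hypothetical, not the project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CadastreEntry:
    """One hypothetical record in a spatial inventory of secondary building materials."""
    building_id: str                 # identifier of the source building
    location: tuple[float, float]    # (latitude, longitude)
    component: str                   # e.g. "window", "brick", "timber beam"
    material: str                    # predicted material type
    condition: str                   # predicted condition, e.g. "good" or "weathered"
    reuse_potential: float           # model confidence that the component is reusable (0-1)
    source: str = "street-level imagery"  # data the prediction was derived from

# Illustrative entry for a timber window detected in a facade photo
entry = CadastreEntry(
    building_id="CH-ZH-000123",
    location=(47.3769, 8.5417),
    component="window",
    material="timber",
    condition="good",
    reuse_potential=0.82,
)
print(entry)
```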
The technical work behind these tools combines multi-label classification, zero-shot learning, and segmentation models that can detect building components even in noisy or low-quality image data. Photogrammetry lets me capture geometry at scale from just drone or phone images, which feeds into the models to improve spatial understanding. An augmented reality (AR) tool acts as a final layer that makes these predictions tangible. Users can walk up to a building, hold up a phone or tablet, and see overlays identifying reusable windows, facade materials, and their confidence scores.
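As a rough illustration of the zero-shot idea described above (a sketch, not the project's actual pipeline), an off-the-shelf vision-language model such as CLIP can score a street-level photo against a list of candidate component labels without any building-specific training; the image path and label set below are placeholders.

```python
from PIL import Image
from transformers import pipeline

# Zero-shot classification: no task-specific training, just candidate labels.
classifier = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

candidate_labels = [
    "timber window frame",
    "aluminium window frame",
    "brick facade",
    "concrete facade",
    "natural stone facade",
]

image = Image.open("facade_photo.jpg")  # placeholder street-level image
predictions = classifier(image, candidate_labels=candidate_labels)

# Each prediction is a dict with a label and a confidence score.
for p in predictions:
    print(f"{p['label']}: {p['score']:.2f}")
```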

Could you share a surprising or unexpected finding or outcome from your research so far?

What surprised me most in the research was realizing that even imperfect data can generate meaningful reuse predictions when paired with the right models. This finding changed how I think about accessibility in data-driven research. Most cities do not have high-resolution scans or clean datasets. However, they do have messy, open data such as street view images or sparse government records. By designing workflows that can work with such inputs, my research enables the matching of supply and demand for secondary materials. Architects can create designs based on what already exists in buildings marked for demolition, rather than starting from scratch with virgin materials. Policymakers get better data to guide circular building regulations. Construction companies can lower costs and reduce carbon by reusing rather than buying new, and cities benefit from less waste, more local jobs, and stronger circular economies.
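As a toy example of what matching supply and demand could look like in its simplest form, the sketch below pairs salvageable components from a made-up inventory with a designer's made-up requirements; both the data and the logic are hypothetical, not the project's actual matching method.

```python
# Hypothetical inventory of salvageable components predicted from imagery (the "supply")
supply = [
    {"component": "window", "material": "timber", "quantity": 40, "city": "Zurich"},
    {"component": "brick", "material": "clay", "quantity": 5000, "city": "Basel"},
    {"component": "window", "material": "aluminium", "quantity": 12, "city": "Zurich"},
]

# A designer's requirements for a new project (the "demand")
demand = [
    {"component": "window", "material": "timber", "quantity": 25},
    {"component": "brick", "material": "clay", "quantity": 3000},
]

def match(demand, supply):
    """Pair each demanded component with the first offer that can cover it."""
    pairs = []
    for need in demand:
        for offer in supply:
            if (offer["component"] == need["component"]
                    and offer["material"] == need["material"]
                    and offer["quantity"] >= need["quantity"]):
                pairs.append((need, offer))
                break
    return pairs

for need, offer in match(demand, supply):
    print(f"{need['quantity']} x {need['material']} {need['component']} available in {offer['city']}")
```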

What does augmented computational design mean to you, and how do you see it evolving in your field?

To me, augmented computational design is about empowering people, including designers, builders, policymakers, and waste-pickers, with tools that enhance their decision-making. It is about embedding intelligence into the design process so that sustainability emerges naturally from the tools we use. I see this field evolving towards more real-time, data-rich, and participatory design workflows, especially in cities where resources are limited but the need for action is urgent.
I envision a future where AI, AR, and spatial computing are not just experimental add-ons but foundational parts of urban planning and design, where a planner in Nairobi, a mason in Bangalore, or an architect in Zurich can all use the same core tools, adapted to their context, to make circular decisions faster and more confidently. For this to happen, we need to design for diversity: of building materials, of geographical and cultural settings, and of users themselves.
