Research Log

Notes, demos, and deep dives

Follow along as we build — lab experiments, engineering breakdowns, and lessons from the frontier.

Inside the Lab Feb 16, 2026

Building a Sign Language Avatar That Actually Works

This avatar looked possessed. Then I stopped fighting the skeleton mismatch — and it worked immediately.

Inside the Lab Feb 10, 2026

The Hardest Part of Medical AI — 3D Reconstruction

This 3D tumor visualization looks impressive, and the system works. More importantly, it fails loudly when it should.

Inside the Lab Feb 5, 2026

Turn Any Video Into a 3D Space You Can Explore

Upload a video, get a 3D space you can explore, annotate, and measure. No scanners or technical skills required.

Inside the Lab Feb 4, 2026

Building a Medical AI That Doesn't Lie

A brain tumor detection AI that shows its work, expresses uncertainty, and fails loudly when it's not confident.

Inside the Lab Feb 1, 2026

Draw a Board Game on Paper, AI Makes It Playable

Draw a board game on paper, scan it, and watch AI turn it into a playable digital game with auto-generated rules.

Inside the Lab Jan 22, 2026

Fine-Tuning, Medical LLMs, and Clinical Alignment

Zeineb improves AutoScanAI through fine-tuning and adds a Medical LLM Explainer to deliver robust, clinically interpretable, WHO-aware AI outputs.

Inside the Lab Jan 21, 2026

Designing Medical AI Without Oversimplifying Reality

Zeineb explains the motivation and design behind AutoScanAI, a clinically grounded medical AI system built for early brain tumor detection using MRI data.

Inside the Lab Jan 20, 2026

A Deeper Look at Gaussian Splatting and Its Pipeline

Kevin breaks down how Gaussian Splatting works for 3D reconstruction and explains the design choices behind a web platform built to make it easier to use.

Inside the Lab Jan 19, 2026

How the Grid-Reading Pipeline Works for AI-Generated Board Games

Eman walks through his grid-reading pipeline, showing how hand-drawn grids are processed into structured inputs for AI-generated board games.

Inside the Lab Jan 17, 2026

Adding Face Tracking and Fingerspelling to ASL Motion Generation

Mugi updates SignMate's motion generation system by adding face tracking and fingerspelling to improve ASL accuracy, expressiveness, and realism.

Inside the Lab Jan 13, 2026

Building a Grid-Based Interface for AI-Generated Games

Eman demos a website that reads grid-based inputs and game elements, helping AI reliably understand board game layouts, mechanics, and rules.

Inside the Lab Jan 5, 2026

I Am Building an AI That Finds Brain Tumors in MRI Scans

Zeineb introduces her medical AI project focused on early brain tumor detection, explaining how AI, precision, and analysis can improve patient outcomes.

Inside the Lab Dec 23, 2025

Exploring Apple's 3D Reconstruction Pipeline

This session breaks down Apple's 3D reconstruction approach, exploring its design choices and how the tools can be applied or extended in future projects.

Inside the Lab Dec 23, 2025

Developing 3D Brain Models to Detect Tumors

Zeineb shares progress on her model, discussing recent experiments and how she's refining the system while evaluating results as the project moves forward.

Inside the Lab Dec 21, 2025

Why a Dual Camera Setup Matters for Motion Generation

Mugi updates her project by moving from a single-camera setup to a dual-camera solution, improving motion capture accuracy and overall system reliability.

Inside the Lab Dec 20, 2025

Updating the Camera and Robot Setup in Isaac Lab

Eman shares updates to his robotics project, improving the camera and robot setup in Isaac Lab to create a more stable and accurate system for future autonomy.

Inside the Lab Dec 18, 2025

Teaching a Robot to Seek Rewards Autonomously

Eman showcases a major robotics update, where the robot now navigates autonomously toward rewards with high accuracy based on its environment.

Inside the Lab Dec 17, 2025

Robots, AI robots everywhere!

Check out Muhammad Eman Aftab's work from our Budapest research lab, where he builds AI projects in public, sharing progress, challenges, and learnings.

Inside the Lab Dec 16, 2025

Comparing AI Agents for Image Analysis

Kevin compares different AI agents for image analysis, showing how each approaches visual tasks and where certain tools perform better by use case.

Inside the Lab Dec 16, 2025

From Land to Digital World: Introducing Cenarius

Kevin introduces Cenarius, a project focused on scanning real-world environments like farmlands and construction sites to create interactive 3D digital spaces.

Inside the Lab Dec 14, 2025

Camera Calibration and Isaac Setup Explained

Eman advances his vision-based system by upgrading camera calibration from 2D to 3D and setting up the Isaac environment for testing and future autonomy.

Inside the Lab Dec 13, 2025

Venato: Simplifying Regulation Monitoring with AI

David showcases Venato, an AI-powered regulation monitoring platform, explaining how it uses AI to collect, structure, and surface regulatory information.

Inside the Lab Dec 11, 2025

Exploring ASL Motion Capture with MediaPipe and RPM

Mugi shares early SignMate experiments capturing ASL motion, testing MediaPipe, RPM, and ASL Mocap datasets to evaluate gesture accuracy and complexity.

Inside the Lab Dec 11, 2025

Exploring ASL Motion Capture with Plask & FreeMoCap

Mugi tests Plask and FreeMoCap for ASL motion capture, comparing how well they track gestures and outlining strengths, limits, and animation challenges.

Inside the Lab Dec 11, 2025

Why Noise-Free Data Matters for ASL Avatars

Munkhchimeg Sergelen explains how SignMate avatars are created and why clean, precise motion data is essential for accurate ASL handshapes and movements.

Inside the Lab Dec 10, 2025

Building Motion Generation for Sign Language AI

Mugi explains how SignMate converts ASL gloss into 3D avatar motion, walking through the system that maps language inputs to realistic signing gestures.

Inside the Lab Dec 9, 2025

How ASL Structure Shapes Our SignMate Model

Munkhchimeg Sergelen explains how ASL differs from spoken English and why these differences are essential when designing SignMate.

Inside the Lab Dec 8, 2025

Boosting Tumor Detection Accuracy

Zeineb explains why she upgraded her MRI tumor-detection model from ResNet18 to ResNet50 to build a stronger system for more accurate detection.

Inside the Lab Dec 7, 2025

Eman's Big Idea at the Lightbloom Lab

Eman introduces his Lightbloom AI Lab project: a system designed to turn simple hand-drawn sketches into fully playable board games.

Inside the Lab Dec 7, 2025

Real-Time 3D Reconstruction with Gaussian Splatting

Kevin showcases a breakthrough in 3D reconstruction using Gaussian Splatting, enabling high-quality 3D scans without expensive computing or specialized hardware.

Inside the Lab Dec 6, 2025

Vision-Driven Robot Movement

Eman demos a vision-driven robot control system that reads printed signs to interpret commands and move autonomously in real time.

Inside the Lab Dec 5, 2025

Experimenting With a Real-Time Sign Language Avatar

Join us inside the Lightbloom Lab in Budapest as Mugi walks through her latest experiment: turning our teammate Kevin into a real-time signing avatar.

Inside the Lab Dec 5, 2025

How We Run Workloads on Our New Computer Cluster

Kevin demonstrates how to use the Lightbloom Lab's newly assembled computer cluster, the shared infrastructure powering all ongoing projects.

Inside the Lab Dec 4, 2025

AI That Turns Your Sketches Into Playable Board Games

Join us inside the Lightbloom AI Lab in Budapest as our researcher, Eman, demonstrates the latest updates to our Sketch-to-Game Generator.

Inside the Lab Dec 3, 2025

AI for Reliable Brain Tumor Detection

Zeineb demonstrates the latest improvements to the AutoScanAI model, advancing early brain tumor prediction at the Lightbloom AI Lab in Budapest.

Inside the Lab Nov 28, 2025

How Mugi Is Building SignMate

Meet Mugi, an AI researcher in our Budapest Lab, building SignMate to make communication more accessible through speech-to-sign avatars.