Research Log

Notes, demos, and deep dives

Follow along as we build. Lab experiments, engineering breakdowns, and lessons from the frontier.

Inside the Lab Apr 2, 2026

We Built AI Employees — Not Assistants, Actual Employees

We built AI employees with job titles, responsibilities, and performance reviews. Then we built mission control to manage them.

Yield Mar 26, 2026

40 Seconds Was Too Slow — Making Yield Faster and More Reliable

40 seconds per turn was too slow. Eman parallelized agents, fixed caching, and cut latency by 12%. Here's how.

Yield Mar 23, 2026

75% Pain Points, 10% Everything Else — Fixing Coverage Imbalance

Pain points had 75% coverage. Everything else? Under 10%. Mugi shows how we taught Yield to go wider, not just deeper.

Yield Mar 16, 2026

Our AI Stopped Responding — Fixing Silent Failures in Yield

Sometimes our AI just stopped responding. Mugi shows how we fixed silent failures and made Yield reliable.

Yield Mar 14, 2026

Power Follows Visibility — Introducing Yield

Yield interviews every employee, connects every system, and builds a live map of how your company actually works.

Yield Mar 11, 2026

"I Like Jira" Became "Has Issues With Jira" — Fixing AI Data Accuracy

"I like Jira" became "has issues with Jira." Zeineb shows how we're fixing data accuracy in Yield's AI interviews.

Yield Mar 7, 2026

Why Trust Is the Biggest Challenge in AI Interviews (And How to Fix It)

Research-backed principles that make AI interviews feel more natural and trustworthy.

Yield Mar 4, 2026

What Makes An AI Interviewer Trustworthy? (It's Not What You Think)

What makes an AI interviewer trustworthy? Not intelligence — behavior. Four research-backed principles for building Yield.

Inside the Lab Feb 26, 2026

Making Controlled AI Videos — What Actually Works

Eman breaks down how to make controlled AI-generated videos — why consistency is hard and the strategy that actually works.

Inside the Lab Feb 16, 2026

Building a Sign Language Avatar That Actually Works

This avatar looked possessed. Then I stopped fighting the skeleton mismatch — and it worked immediately.

Inside the Lab Feb 10, 2026

The Hardest Part of Medical AI — 3D Reconstruction

This 3D tumor visualization looks impressive, and the system works. More importantly, it fails loudly when it should.

Inside the Lab Feb 5, 2026

Turn Any Video Into a 3D Space You Can Explore

Upload a video, get a 3D space you can explore, annotate, and measure. No scanners or technical skills required.

Inside the Lab Feb 4, 2026

Building a Medical AI That Doesn't Lie

A brain tumor detection AI that shows its work, expresses uncertainty, and fails loudly when it's not confident.

Inside the Lab Feb 1, 2026

Draw a Board Game on Paper, AI Makes It Playable

Draw a board game on paper, scan it, and watch AI turn it into a playable digital game with auto-generated rules.

Inside the Lab Jan 22, 2026

Fine-Tuning, Medical LLMs, and Clinical Alignment

Zeineb improves AutoScanAI through fine-tuning and adds a Medical LLM Explainer to deliver robust, clinically interpretable, WHO-aware AI outputs.

Inside the Lab Jan 21, 2026

Designing Medical AI Without Oversimplifying Reality

Zeineb explains the motivation and design behind AutoScanAI, a clinically grounded medical AI system built for early brain tumor detection using MRI data.

Inside the Lab Jan 20, 2026

A Deeper Look at Gaussian Splatting and Its Pipeline

Kevin breaks down how Gaussian Splatting works for 3D reconstruction and explains the design choices behind a web platform built to make it easier to use.

Inside the Lab Jan 19, 2026

How the Grid-Reading Pipeline Works for AI Board Games

Eman explains how his grid-reading pipeline works, showing how hand-drawn grids are processed into structured inputs for AI-generated board games.

Inside the Lab Jan 17, 2026

Face Tracking and Fingerspelling for ASL Motion Generation

Mugi updates SignMate's motion generation system by adding face tracking and fingerspelling to improve ASL accuracy, expressiveness, and realism.

Inside the Lab Jan 13, 2026

Building a Grid-Based Interface for AI-Generated Games

Eman demos a website that reads grid-based inputs and game elements, helping AI reliably understand board game layouts, mechanics, and rules.

Inside the Lab Jan 5, 2026

Building an AI That Finds Brain Tumors in MRIs

Zeineb introduces her medical AI project focused on early brain tumor detection, explaining how AI, precision, and analysis can improve patient outcomes.

Inside the Lab Dec 23, 2025

Exploring Apple's 3D Reconstruction Pipeline

This session breaks down Apple's 3D reconstruction approach, exploring its design choices and how the tools can be applied or extended in future projects.

Inside the Lab Dec 23, 2025

Developing 3D Brain Models to Detect Tumors

Zeineb shares progress on her model, discussing recent experiments and how she's refining the system while evaluating results as the project moves forward.

Inside the Lab Dec 21, 2025

Why a Dual Camera Setup Matters for Motion Generation

Mugi updates her project by moving from a single-camera setup to a dual-camera solution, improving motion capture accuracy and overall system reliability.

Inside the Lab Dec 20, 2025

Updating the Camera and Robot Setup in Isaac Lab

Eman improves the camera and robot setup in Isaac Lab, building a more stable and accurate system for future autonomous movement.

Inside the Lab Dec 18, 2025

Teaching a Robot to Seek Rewards Autonomously

Eman showcases a major robotics update, where the robot now navigates autonomously toward rewards with high accuracy based on its environment.

Inside the Lab Dec 17, 2025

Robots, AI robots everywhere!

Check out Muhammad Eman Aftab's work from our Budapest research lab, where he builds AI projects in public, sharing progress, challenges, and learnings.

Inside the Lab Dec 16, 2025

Comparing AI Agents for Image Analysis

Kevin compares different AI agents for image analysis, showing how each approaches visual tasks and where certain tools perform better by use case.

Inside the Lab Dec 16, 2025

From Land to Digital World: Introducing Cenarius

Kevin introduces Cenarius, a project that scans real-world environments like farmlands and construction sites into interactive 3D spaces.

Inside the Lab Dec 14, 2025

Camera Calibration and Isaac Setup Explained

Eman advances his vision-based system by upgrading camera calibration from 2D to 3D and setting up the Isaac environment for testing and future autonomy.

Inside the Lab Dec 13, 2025

Venato, Simplifying Regulation Monitoring with AI

David showcases Venato, an AI-powered regulation monitoring platform, explaining how it uses AI to collect, structure, and surface regulatory information.

Inside the Lab Dec 11, 2025

Exploring ASL Motion Capture with MediaPipe and RPM

Mugi shares early SignMate experiments capturing ASL motion, testing MediaPipe, RPM, and ASL Mocap datasets to evaluate gesture accuracy and complexity.

Inside the Lab Dec 11, 2025

Exploring ASL Motion Capture with Plask & FreeMoCap

Mugi tests Plask and FreeMoCap for ASL motion capture, comparing how well they track gestures and outlining strengths, limits, and animation challenges.

Inside the Lab Dec 11, 2025

Why Noise-Free Data Matters for ASL Avatars

Munkhchimeg Sergelen explains how SignMate avatars are created and why clean, precise motion data is essential for accurate ASL handshapes and movements.

Inside the Lab Dec 10, 2025

Building Motion Generation for Sign Language AI

Mugi explains how SignMate converts ASL gloss into 3D avatar motion, walking through the system that maps language inputs to realistic signing gestures.

Inside the Lab Dec 9, 2025

How ASL Structure Shapes Our SignMate Model

Munkhchimeg Sergelen explains how ASL differs from spoken English and why these differences are essential when designing SignMate.

Inside the Lab Dec 8, 2025

Boosting Tumor Detection Accuracy

Zeineb explains why she upgraded her MRI tumor-detection model from ResNet18 to ResNet50 to build a stronger system for more accurate detection.

Inside the Lab Dec 7, 2025

Eman's Big Idea at the Lightbloom Lab

Eman introduces his Lightbloom AI Lab project: a system designed to turn simple hand-drawn sketches into fully playable board games.

Inside the Lab Dec 7, 2025

Real-Time 3D Reconstruction with Gaussian Splatting

Kevin demonstrates real-time 3D reconstruction with Gaussian Splatting — high-quality scans without expensive hardware or compute.

Inside the Lab Dec 6, 2025

Vision-Driven Robot Movement

Eman demos a vision-driven robot control system that reads printed signs to interpret commands and move autonomously in real time.

Inside the Lab Dec 5, 2025

Experimenting With a Real-Time Sign Language Avatar

Inside the Lightbloom Lab in Budapest, Mugi walks through her latest experiment turning teammate Kevin into a real-time signing avatar.

Inside the Lab Dec 5, 2025

How We Run Workloads on Our New Computer Cluster

Kevin demonstrates how to use the Lightbloom Lab's newly assembled computer cluster, the shared infrastructure powering all ongoing projects.

Inside the Lab Dec 4, 2025

AI That Turns Your Sketches Into Playable Board Games

Join us inside the Lightbloom AI Lab in Budapest as our researcher, Eman, demonstrates the latest updates to our Sketch-to-Game Generator.

Inside the Lab Dec 3, 2025

AI for Reliable Brain Tumor Detection

Zeineb demonstrates the latest improvements to the AutoScanAI model, advancing early brain tumor prediction at the Lightbloom AI Lab in Budapest.

Inside the Lab Nov 28, 2025

How Mugi Is Building SignMate

Meet Mugi, an AI researcher in our Budapest Lab, building SignMate to make communication more accessible through speech-to-sign avatars.