Face2Visceral
Hackathon · 6 hours · 3rd Place

A dual-encoder ResNet-18 model that estimates visceral body fat ratio from a facial image, optionally conditioned on age and sex. Trained on UTKFace and CT scan datasets using PyTorch Lightning.
AI estimation only – not medical advice.
How it works
Two ResNet-18 encoders process the face and a paired CT reference. Their features are fused and passed through a regression head to predict the visceral fat ratio.
Training
UTKFace (facial images) paired with CT scan measurements from the AATTCT dataset.
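The pairing of facial images with CT-derived targets can be expressed as a standard PyTorch Dataset; the class and field names below are hypothetical, and real labels would come from the AATTCT measurements rather than the literals shown:

```python
import torch
from torch.utils.data import Dataset

class FaceVisceralDataset(Dataset):
    """Illustrative face/CT pairing; all names and values are placeholders."""

    def __init__(self, faces, ct_slices, vfat_ratios):
        assert len(faces) == len(ct_slices) == len(vfat_ratios)
        self.faces = faces            # preprocessed face tensors
        self.ct_slices = ct_slices    # paired CT reference tensors
        self.labels = torch.tensor(vfat_ratios, dtype=torch.float32)

    def __len__(self):
        return len(self.faces)

    def __getitem__(self, i):
        # One training sample: (face, CT reference, visceral fat ratio target).
        return self.faces[i], self.ct_slices[i], self.labels[i]

ds = FaceVisceralDataset(
    faces=[torch.randn(3, 224, 224) for _ in range(4)],
    ct_slices=[torch.randn(3, 224, 224) for _ in range(4)],
    vfat_ratios=[0.12, 0.25, 0.31, 0.18],  # dummy targets
)
face, ct, y = ds[0]
print(len(ds), y.item())
```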
Backend not connected
The live demo requires the Face2Visceral FastAPI server to be running. To connect it, set the environment variable NEXT_PUBLIC_API_URL to your backend URL.
# 1. Download the model checkpoint
curl -L https://github.com/EngEmmanuel/face2visceral/releases/download/v0.1.0/last.ckpt \
-o last.ckpt
# 2. Run the FastAPI server
cd face2visceral
pip install -r requirements.txt
python -m scripts.serve.inference_api --checkpoint ../last.ckpt
# 3. Set the env var and restart this app
NEXT_PUBLIC_API_URL=http://localhost:8000 npm run dev